Social Interaction and the Development of Artificial Consciousness


Artur M. Arsenio, IHSIS - Institute for Human Studies and Intelligence Sciences, Instituto Superior Técnico / Universidade Técnica de Lisboa, Portugal
Luisa G. Caldas, IHSIS - Institute for Human Studies and Intelligence Sciences, Faculdade de Arquitectura / Universidade Técnica de Lisboa, Portugal
Manuel D. de Oliveira, IHSIS - Institute for Human Studies and Intelligence Sciences, Center of Philosophy / Universidade de Lisboa, Portugal

1 Introduction

"If in that which is organic there is nothing but mechanism, that is, bare matter, having differences of place, magnitude and figure; nothing can be deduced or explained from it, except mechanism (…). Hence we may readily conclude that in no mill or clock as such is there to be found any principle which perceives what takes place in it; and it matters not whether the things contained in the machine are solid or fluid or made up of both. Further we know that there is no essential difference between coarse and fine bodies, but only a difference of magnitude. Whence it follows that, if it is inconceivable how perception arises in any coarse machine, whether it be made up of fluids or solids, it is equally inconceivable how perception can arise from a fine machine; for if our senses were finer, it would be the same as if we were perceiving a coarse machine, as we do at present." (Leibniz, 1710)

The quest for creating artificial beings has long been pursued by scientists, artists and craftsmen. Some authors regard this tendency as yet another expression of the drives of mimesis and anthropomorphism that have been present in many domains of human action throughout history (Penny, 1995). In our days, two parallel paths of research in Artificial Intelligence pursue this drive, which can cut through disciplines and find expression in the most advanced technology available at any particular historical moment (Penny, 1995).
The field of Autonomous Robotics strives to build intelligent machines that share the physical world with us, grounded in principles of embodiment and situatedness. The field of Artificial Life (A-Life) attempts to replicate life-like mechanisms and behaviors in a computational environment; although these artificial creatures may also have a body, it exists only in a virtual domain, following solely the physical rules set by their programmer. According to Steve Grand, who has been a prominent figure in the A-Life scene since writing the groundbreaking computer game Creatures, a system will not be intelligent unless it is also alive (Grand, 2001). In his work, Grand postulates that embodied life is a necessary condition for the development of intelligence. From this point of view, a static computer can never be intelligent, for it lacks a body with its intrinsic physical and psychological needs, emotions, illnesses, and the danger of the loss of life itself. In natural evolution, life is the fundamental substrate over which intelligence came into being. Indeed, only at a later stage of evolutionary history did the brain emerge. But what might have been the role played by the body in the process of intelligent life? In this context, the working of the human mind has long puzzled and amazed humankind, drawing theorists of every persuasion, and from many disciplines, to the barely imaginable task of trying to fill in the details of such a scientific challenge. Classical Artificial Intelligence (AI) researchers adopted as an implicit and dominant hypothesis the claim of (Newell & Simon, 1961) that humans use symbolic systems to think (for a critical review see (Brooks, 1999; Caldas & Oliveira, 2005)). AI systems following this theory create explicit, internal representations and need to fully store the state of the outside world. Search strategies are used for problem-solving, applied mostly to narrow domains, such as playing chess. Similarly to early AI, robotics has been applied mostly to narrow, goal-oriented specific applications. In addition, task execution often requires artificial, off-line engineering of the robot's physical environment, creating idealized working conditions. This chapter will hence begin by discussing the early quest for humanoid intelligence in section 2. But moving towards intelligent robots will require general-purpose robots capable of learning in any real-world environment. Starting from a robot's genotype, the humanoid robot should exploit social interactions with a human caregiver to develop perceptual and motor skills. Through these interactions it is possible to develop cognitive capabilities on a humanoid robot to perceive specific actions, scenes, objects, faces, and a sense of self (Arsenio, 2004).
Figure 1: The road to intelligent, conscious robots may require strategies similar to those used by nature to evolve intelligent human beings. (Painting: The New Picture Book, by Hermann Kaulbach.)

Indeed, we do not treat children as machines, i.e., automatons. But this automaton view is still widely employed in industry to build robots. Building robots often involves the hardware setup of sensors, actuators, metal/plastic/rubber parts, cables and processing boards, as well as software development. Such engineering might be viewed as the robot genotype, and several research works have proposed evolving such a robotic genetic legacy for creating new types of robots. But equally important in a child is the developmental acquisition of information in a social and cultural context (Vygotsky, 1978). Therefore, for a humanoid robot to interact effectively in its surrounding world, it must be able to learn. This chapter therefore addresses the importance of the genotype (section 3) and the phenotype (section 4) for the development of cognitive capabilities in humanoid robots, taking inspiration from evolutionary biology and human developmental psychology. As illustrated in Figure 1, the road to intelligent, conscious robots may require strategies similar to those used by nature to evolve intelligent human beings: (i) Genetic evolution, starting from several simple living organisms (or robots such as Genghis, an insect-like robot at the MIT AI Lab) and evolving their genetic code over time in real (or simulated) environments. (ii) Learning and development from social interactions in a cultural and environmental context. This approach was followed with Cog at MIT CSAIL (Arsenio, 2007; Fitzpatrick & Arsenio, 2004), a humanoid robot that learns through social interactions with a human caregiver. Section 4 presents learning strategies based on human-robot interactions, where humans play the role of a child's caregiver, applied to a diverse set of problems. Training data for the algorithms is generated on-line, in real time, while the robot is in operation. An important milestone in child development is reached when the child recognizes itself as an individual, and identifies its mirror image as belonging to itself. Children between 12 and 18 months of age become interested in and attracted to their reflection, and such behavior requires the integration of visual cues from the mirror with proprioceptive cues from the child's body.
We demonstrate in section 5 how to extrapolate such a capability to a humanoid robot, so that the robot becomes aware of its own body. Through the experience of a set of social interactions, the humanoid robot is able to join percepts and context in a continuously changing and integrated frame. This temporal collection and integration of all cognitive percepts in the human brain is often denoted the movie in the brain; section 6 describes how it provides information (or snapshots) of what is going on in a humanoid's brain. We conclude in section 7 with a philosophical reflection on some of the questions related to the interrelation and boundaries between human and humanoid types of consciousness, with particular emphasis on their ethical and social implications. Finally, section 8 presents a discussion and concluding remarks concerning the impact of social interactions and developmental learning on the quest to build Artificial Consciousness into a humanoid robot.

2 The Historical Quest for (Intelligent) Humanoids

"We find consciousness mysterious only because we have a bad picture of matter. We have a lot of mathematical equations describing the behavior of matter, but we don't really know anything more about its intrinsic nature. The only other clue that we have about its intrinsic nature, in fact, is that when you arrange it in the way that it is arranged in things like brains, you get consciousness." (Strawson, 1999)

2.1 The Search for Humanoid Automatons

Since the Renaissance, many attempts have been made to create human-like automatons or robots. The common trait of these early experiments in building artificial beings was that they were fully programmed, with no emergent behavior or spontaneity. In general, they tried to literally copy the appearance of a human being and its movements.
As early as the first century of our era, Hero of Alexandria, a mathematician, physicist and engineer who is thought to have taught at the Alexandria Museum, is reported to have built human-like forms that were pneumatically operated to replicate some simple human movements.

In Milan, in 1495, just before starting work on the Last Supper, Leonardo da Vinci designed, and might have built, what is thought to be the first human automaton in Western culture (see Figure 2). It was an armored knight who could sit, wave its arms, and move its head via a flexible neck, opening its helmet while moving its anatomically correct jaw. The robot is said to represent a climax of Leonardo's relentless quest for automation and the consummate expression of the direct man-machine analogy, a recurrent theme in Leonardo's anatomical studies. In fact, the robot is claimed to have influenced his later anatomical studies, in which he modeled the human limbs with cords to simulate the tendons and muscles.

Figure 2: (left) Two images of a model of Leonardo da Vinci's robot, recently built and placed at the Institute and Museum of the History of Science, Florence. (right) Some of Leonardo's drawings for the humanoid robot.

Many other examples of primitive robots appeared during the following centuries, from the reign of Louis XIV to the 18th century, when Jacques de Vaucanson created intricate mechanical figures, namely three humanoid musicians, one playing the piano, another the flute, and a third playing a mandolin. Voltaire is said to have commented of him: "A rival to Prometheus, [Vaucanson] seemed to steal the heavenly fires in his search to give life". In 1815, Henri Maillardet built a humanoid (shown in Figure 3) that could write in both French and English and draw a number of landscapes and other drawings.

Figure 3: (left) Maillardet's automaton and examples of the drawings and poems it produced. (right) A wood engraving displaying The Turk.

Long before Leonardo, another tradition of an artificial humanoid being already existed, in the form of the kabbalistic figure of the Golem. The Golem was made in an artificial way by virtue of the use of holy words, evoking the creative power of speech and of the letters.
It served its creators and fulfilled the tasks laid upon it. The word emet (אמת), meaning truth, was written on its forehead to give the Golem life. When the first letter, alef, was erased from the beginning of the word, there remained the word met ("dead"), thus rendering the Golem lifeless again (Roth and Wigoder, 1972). Opinions concerning the nature of this created being varied, with some thinkers considering that man had the power to give vitality alone to the Golem, but not life, spirit or soul proper (Roth and Wigoder, 1972). The latest and best-known popular legend about the Golem is connected to Rabbi Lowe of Prague, Czechoslovakia (…), who, according to the legend, created the Golem so that it would serve him, but was forced to restore it to its dust when the Golem began to run amok and endanger people's lives (Roth and Wigoder, 1972). The Golem legend is said to have inspired Mary Shelley in her creation of Frankenstein. Coincidentally, it was also a Czech author, Karel Čapek, who created, in 1921, the word robot, which derives from the Czech robota. In the days when Czechoslovakia was a feudal society, robota referred to the two or three days of the week that peasants were obliged to leave their own fields to work without remuneration on the lands of noblemen. Afterwards, robota continued to be used to describe work that was not done voluntarily. In his play R.U.R. (Rossum's Universal Robots), Čapek envisioned a world where robots, or artificial humanoids, were first created as slaves, and then eventually rebelled and tried to destroy humans. R.U.R. also explores the different conceptual postures in the attempt to emulate a human body, by confronting the founder of the Rossum company, a scientist, and his nephew, an engineer. While the former was interested in replicating the human body in all its intricacies, using a matter similar to human organic tissues, the latter's concern was solely in creating slave artificial beings in the most efficient fashion, by simplifying what had been created by nature, discarding its redundancies and inefficiencies.
2.2 Replicating Human Intelligence

One of the most famous automata of all time was called The Turk (illustrated in Figure 3), a chess-playing machine built in 1770 by Wolfgang von Kempelen, an Austrian civil servant with an interest in physics, mechanics and hydraulics. The automaton, which wore a turban, carried a long pipe, and was dressed like a Turk, stood over a wooden desk with a number of complex mechanisms in its interior and a chessboard on top. During its lifetime, until it was destroyed in a fire at the Philadelphia Museum in 1854, the Turk traveled around Europe and the United States, winning chess games against such reputed adversaries as the European chess champion and Benjamin Franklin. The Turk was eventually uncovered as a hoax: there was actually a human chess player hidden inside the wooden box. One of the people who played against the Turk, despite being defeated in both attempts, was Charles Babbage, the computing pioneer. That happened in England in 1820, and although Babbage believed the Turk to be a hoax, the experience of playing a chess game against a machine made him consider the possibility of building a mechanism that could actually perform some mathematical operations, and possibly even get to play chess. In 1821, around eighteen months after playing with the Turk, he presented the sketches for his first mechanical computer, the Difference Engine, which was, in fact, a forerunner of the modern computer. Later in his life, Babbage even sketched out a rough algorithm for playing chess using the updated Analytical Engine, concluding that in theory there was nothing impossible about achieving such a feat, although the size and cost of the mechanism would make it impractical.
The most significant and remarkable point of this story is that Babbage did not try to emulate a human image, as an automaton does, but instead strove to capture a working principle, one that could translate into another medium some aspects of human reasoning. After the industrial revolution came the digital era, having as its main feature the manipulation of information. It is interesting to notice that the first expression of anthropomorphism in a digital context was not the attempt to replicate a living body, but human intelligence. The tone for anthropomorphism was set very early in the Artificial Intelligence quest by Alan Turing, one of the fathers of computation and the man responsible for deciphering the Enigma code of the Nazi regime. In 1950, Turing published a paper, Computing Machinery and Intelligence, where he proposed what has become known as the Turing Test. His claim was that computers would in time be programmed to acquire abilities rivaling human intelligence. As part of his argument Turing put forward the idea of an 'imitation game', in which a human being and a computer would be interrogated under conditions where the interrogator would not know which was which, the communication being entirely by textual messages. Turing argued that if the interrogator could not distinguish them by questioning, then it would be unreasonable not to call the computer intelligent (Turing, 1950). By putting forward this test, Turing was in fact proposing an anthropomorphic measure for assessing an artificial being's intelligence. It was not until 1984 that John Searle proposed a counter-argument to Turing's Test, called the Chinese Room argument:

"I don't understand Chinese. I'm hopeless at it. I can't even tell Chinese writing from Japanese writing. So I imagine that I'm locked in a room with a lot of Chinese symbols (that's the database) and I've got a rule book for shuffling the symbols (that's the program) and I get Chinese symbols put in the room through a slit, and those are questions put to me in Chinese. And then I look up in the rule book what I'm supposed to do with these symbols and then I give them back symbols and unknown to me, the stuff that comes in are questions and the stuff I give back are answers. Now, if you imagine that the programmers get good at writing the rule book and I get good at shuffling the symbols, my answers are fine. They look like answers of a native Chinese [speaker]. They ask me questions in Chinese, I answer the questions in Chinese. All the same, I don't understand a word of Chinese. And the bottom line is, if I don't understand Chinese on the basis of implementing the computer program for understanding Chinese, then neither does any other digital computer on that basis, because no computer's got anything that I don't have. That's the power of the computer, it just shuffles symbols. It just manipulates symbols. So I am a computer for understanding Chinese, but I don't understand a word of Chinese."
(Searle, 1984)

With this argument, Searle argues that even if a machine seems human-like according to Turing's anthropomorphic test, this does not mean that it understands what it is doing, i.e., consciousness is not implied.

2.3 Early (traditional) Artificial Intelligence

For many years, traditional Artificial Intelligence (AI), also known as Cognitivism, held a view of intelligence as based on thought and reason, higher-level cognitive functions that suited well the top-down approach behind the attempt to replicate human intelligence, which was seen as involving the creation of representations and the deployment of high-level cognitive skills such as problem-solving and planning. This branch of AI gave birth to advances such as Deep Blue, the chess-playing program that was eventually able to beat the human world chess champion, after several attempts. Nevertheless, researchers in the field generally agree that it is not possible to claim intelligence for a chess-playing program. Chess is a well-defined problem, despite its enormous number of possible positions, and the task of the computer consists simply of searching for the moves that have the best pay-off. Another paradigmatic example of the traditional approach to AI is the Cyc Project, led by Douglas Lenat, which has been ongoing for more than 20 years. Lenat's view was that one of the reasons computers were making much slower progress than initially expected in augmenting their intelligence was that the computer did not have common sense, that is, it did not have knowledge of the world to situate the information it was receiving. He reasoned that if basic knowledge were formalized in some way and fed into a computer, it would, at some stage, be able to use this knowledge in order to process new information and make inferences about the world.
Lenat thus created the Cyc Knowledge Base, which is a formalized representation of a vast quantity of fundamental human knowledge: facts, rules of thumb, and heuristics for reasoning about the objects and events of everyday life. The Knowledge Base consists of terms, which constitute the vocabulary of CycL, and assertions, which relate those terms (Cycorp, 2002). Reasoning then becomes the manipulation of such representations according to formal, logic-like rules. Although in the beginning the project was supposed to last a few years and require a few million inputs of facts about the world, project leaders have been faced with the need to progressively add more and more information to Cyc. It has now been running for more than 20 years, the number of inputs having climbed to tens of millions, with new rules becoming necessary all the time. It became apparent to many of the critics of this approach to AI that their claims had been proven true: knowledge about the world had to be gained through experience and interaction, using sensory information and learning based on it; it could not simply be artificially fed into a static, blind machine.

3 The Genetic Legacy of a Humanoid Robot

"As evolving creatures, human beings are largely continuous with their forebears, having inherited from them a substrate of capacities and systems for meeting their needs, and generally coping with a given environment." (Brooks, 1999)

3.1 Situatedness and Embodiment

One of the main champions of a view of Artificial Intelligence based on emulating some of the basic mechanisms of simple living beings interacting with their environment is Rodney Brooks, the Director of the MIT Artificial Intelligence Laboratory, who defends the physical grounding hypothesis for AI. In his book Cambrian Intelligence, Brooks argues that intelligence is an evolutionary inheritance, and that the basic mechanisms that couple perception and action in simple organisms like insects are the basis for intelligent behavior, being also present in complex brains like ours. Brooks is a strong supporter of behavior-based robots, which do not require high-level cognition to mediate between perception and action. In the traditional model of AI, or strong AI, researchers assumed that an intelligent system doing high-level reasoning was necessary for the perception-action coupling. This was called a top-down approach. The new approach of bottom-up AI tries to build intelligence from very simple mechanisms up, not from high-level reasoning down, a fundamental difference that still divides scientists in the field.
However, the bottom-up approach is starting to dominate not only the field of Artificial Intelligence but also that of cognitive science, where the classical theory that the mind is something akin to a digital computer processing a symbolic language is being challenged by connectionist views. Two key concepts in the behavior-based approach are embodiment (the notion that intelligence requires a body) and situatedness (that body must be able to perceive its environment and act upon it). As Mike Anderson puts it: "(…) instead of emphasizing formal operations on abstract symbols, this new approach focuses attention on the fact that most real-world thinking occurs in very particular (and often very complex) environments, is employed for very practical ends, and exploits the possibility of interaction with and manipulation of external props. It thereby foregrounds the fact that cognition is a highly embodied or situated activity (emphasis intentionally on all three) and suggests that thinking beings ought therefore to be considered first and foremost as acting beings." (Anderson, 2003) As a counterpoint, one may argue that Cyc employs "an entirely unsituated, and totally disembodied approach" (Smith, 1991).

3.2 Constructing Behavior-Based Robots

In 1943, W. Grey Walter, an American who was head of the Physiological Department at the Burden Neurological Institute in Bristol, England, started building small robots with the help of his wife. He was interested in building artificial machines that could display spontaneity and autonomy, to help him understand certain aspects of the mind. The robots were extremely simple, with a number of primitive mechanisms, some of them using second-hand material, enclosed in a transparent plastic case. He called these robots Machina Speculatrix (one such robot is shown in Figure 4), as they seemed to display the exploratory, speculative behavior that is so characteristic of most animals (Walter, 1950).
The tortoises, as Walter also called them, had both light and bump sensors, and could move around an unknown space, of which they had no previous information, overcoming unknown obstacles while moving towards the more illuminated areas of a space. Despite their great simplicity, they had two characteristics that until then no machine had ever had: spontaneity (they could display behaviors that no one had programmed) and autonomy (they required no previous knowledge of the environment, and relied only on information from their own sensors in order to direct their actions).

Figure 4: (left) Walter's Machina Speculatrix. (right) The early robot Shakey, from the Stanford Research Institute (Cart, from the Stanford AI Laboratory in the 1970s, had a similar configuration).

Although physically very simple, Walter's machines were conceptually a huge novelty, inspiring the work of Rodney Brooks as a young man, who started building small robots using all the pieces he could get access to. After graduating in mathematics in Australia, he first managed to pursue his interest in robotics and AI at Stanford, at SAIL (the Stanford Artificial Intelligence Laboratory), in 1977, where he became acquainted with two of the most sophisticated and expensive robots of the time, Shakey and Cart. Both Shakey and Cart were built at a time when computers occupied entire rooms. The robots thus communicated with their brains remotely, via radio and TV antennas. Shakey could move only inside two rooms, where some colored blocks existed. Its task was to move certain blocks from one room to the other. Its built-in camera would send an image of the current status of the blocks inside the room, and then it would stay quiet for a long time, while its remote brain computed the next move to achieve its goal. Brooks describes Shakey's operation the following way: "It clearly had no sense of the here and now of the world; if someone came and moved things around while it was thinking, it would eventually start up again, acting as though the world was still in the same state as it had been before the perturbation. Shakey used reasoning in situations where real animals have direct links from perception to action."
(Brooks, 2002)

Cart had similar limitations regarding understanding its surroundings and constructing an appropriate model of the world to act in it. Comparing these sophisticated robots, and their million-dollar brains, with the simple machines that Walter had built, able to interact with an unknown, unstructured environment for long stretches of time, based on their own senses and resorting to simple actions based on responsive control mechanisms, Brooks (Brooks, 1999, 2002) decided to pursue an alternate approach to robotics, based on sensory loops rather than cognition. This led to the development of a long series of robots produced at the MIT Artificial Intelligence Lab (early robots such as Genghis, and more recently Kismet, Cog, Macaco, Coco). Initially, these robots only had very simple behaviors programmed into them. Their actions were commanded by their senses, activated as reactions to their sensory input. Their behavior was thus emergent, unpredictable, and not explicitly programmed. But with later robots such as Cog, the robot's behavior evolves over time as new knowledge and capabilities are acquired.
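The jump from Walter's tortoises to behavior-based robotics is easy to see in code. The sketch below is a hypothetical reconstruction of a single tortoise-style control step in Python; the sensor names, speed values, and thresholds are illustrative assumptions, not details of Walter's actual electromechanical design. The point is that phototaxis, obstacle avoidance, and wandering emerge from direct sensor-to-motor rules, with no internal model of the world:

```python
import random

def tortoise_step(light_left, light_right, bumped):
    """One reactive control step of a Walter-style tortoise.

    Light readings are in [0, 1]; returns (left_wheel, right_wheel)
    speeds. All constants are illustrative, not from Walter's design.
    """
    if bumped:
        # Obstacle reflex overrides phototaxis: back up while turning.
        return (-0.5, -0.2)
    if max(light_left, light_right) < 0.1:
        # No light detected: wander ("speculate") with a small random bias.
        bias = random.uniform(-0.2, 0.2)
        return (0.5 + bias, 0.5 - bias)
    # Cross-coupled phototaxis: the wheel opposite the brighter side
    # spins faster, steering the robot toward the light.
    return (0.2 + 0.8 * light_right, 0.2 + 0.8 * light_left)
```

Nothing here stores a map or plans ahead; as with Walter's machines, any apparent purposefulness comes from the coupling between a few such rules, the body, and the environment.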

4 Learning through Human-Humanoid Interactions

"[Observable in children] He that attentively considers the state of a child, at his first coming into the world, will have little reason to think him stored with plenty of ideas, that are to be the matter of his future knowledge. It is by degrees he comes to be furnished with them. And though the ideas of obvious and familiar qualities imprint themselves before the memory begins to keep a register of time or order, yet it is often so late before some unusual qualities come in the way, that there are few men that cannot recollect the beginning of their acquaintance with them." (Locke, 1690)

"Learning is a matter of extracting meaning from our interactions for use in the future... extract valuable information and put it in use in guiding your own action." (Dennett, 1998)

Turing suggested that, instead of producing programs to simulate the adult mind, we should rather develop one which simulates the child's mind (Turing, 1950), so that an appropriate course of education would lead to the adult brain.

4.1 From Behaviorism to Scaffolding

Watson (…), the father of behaviorism, advocated that the frequency of occurrence of stimulus-response pairings, and not reinforcement signals, acts directly to cause their learning (Watson, 1913). Skinner (…) argued against stimulus-response learning, which led him to develop the basic concept of operant conditioning. His Skinner Box experimental apparatus improved considerably on the individual learning trials of Watson. Piaget (…) gives equal roles to both nature (biological innate factors) and nurture (environmental factors) in child development (Piaget, 1952). In Piaget's theory, genes are the building blocks for development. He puts special emphasis on children's active participation in their own cognitive development. Social-cultural-historical aspects are instead stressed by (Vygotsky, 1978).
He concentrates more on how adults help a child to develop coherent knowledge concerning such aspects of the environment. Vygotsky's social-cultural-historical theory of cognitive development is typically described as learning by scaffolding. Examples of scaffolding include the reduction of distractions and the description of a task's most important attributes, before the infant (or, in our case, the robot) is cognitively apt to do the task by itself.

4.2 Humanoids as Children

Indeed, evidence suggests that infants possess several preferences and capabilities shortly after birth (Bremner, 1994). Such predispositions may be innate or pre-acquired in the mother's womb. Inspired by infants' innate or pre-acquired capabilities, the robot Cog (Arsenio, 2004) was initially pre-programmed for the detection of real-world events both in time and frequency, and of correlations among these events, no matter the sensing device from which they were perceived. In addition, the robot prefers salient visual stimuli, as do newborns (Banks and Ginsburg, 1985). These preferences correspond to the robot's initial capabilities (similar to the information stored in human genes, the genotype) programmed into the robot to process these events. Starting from this set of premises, the robot should be able to incrementally build a knowledge database and extrapolate this knowledge to different problem domains (the social, emotional, cultural, developmental learning will set the basis for the phenotype). For instance, the robot learns the representation of a geometric shape from a book, and is thereafter able to identify animate gestures or world structures with such a shape. Or the robot learns from a human how to poke an object, and afterwards uses such knowledge to poke objects to extract their visual appearance.
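The innate detection of events "both in time and frequency" can be grounded in a simple periodicity test: a tracked signal, such as the image coordinate of a toy being waved, is treated as a repetitive event when a single frequency bin dominates its power spectrum. The sketch below is a hedged illustration of this idea only; the function name, frame rate, and dominance threshold are assumptions for the example, not details of Cog's actual perceptual system:

```python
import numpy as np

def dominant_period(signal, fps=30.0, power_ratio=0.2):
    """Detect repetitive motion in a 1-D tracked signal, e.g. the
    x-coordinate of a waving toy. Returns the period in seconds, or
    None if no single frequency bin clearly dominates the spectrum."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    spectrum[0] = 0.0                      # ignore residual DC energy
    total = spectrum.sum()
    if total == 0:
        return None                        # flat signal: no events
    k = int(np.argmax(spectrum))
    if spectrum[k] / total < power_ratio:  # no dominant peak: not periodic
        return None
    freq = k * fps / len(x)                # bin index -> frequency in Hz
    return 1.0 / freq

# A 2 Hz oscillation sampled at 30 fps is detected with a 0.5 s period.
t = np.arange(90) / 30.0
print(dominant_period(np.sin(2 * np.pi * 2.0 * t)))  # -> 0.5
```

A real system would run such a test over sliding windows and over every modality (vision, sound, proprioception), using coincident periodicities to correlate events across sensors.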

An autonomous robot thus needs to be able to acquire and incrementally assimilate new information, to be capable of developing and adapting to its environment. The field of machine learning offers many powerful algorithms, but these often require off-line, manually inserted training data to operate. Infant development research suggests ways to acquire such training data from simple contexts, and to use these experiences to bootstrap to more complex contexts. It is thus necessary to identify situations that enable the robot to temporarily reach beyond its current perceptual abilities, giving the opportunity for development to occur (Arsenio, 2004; Metta & Fitzpatrick, 2003). Similarly to Cog, the iCub robot, a more recent project (Metta et al., 2008), aims at creating a robot with good enough movement and cognitive capabilities to replicate the learning process of a real child, as it develops into a walking, talking being.

The implementation of learning experiments on Cog is represented in Figure 5, from left to right: (i) A human caregiver reads a book to Cog, while tapping with a finger on book drawings / objects. The robot was able to learn geometric drawings, objects, sounds of objects, and simple words describing the objects (such as the sound "cat" associated with a cat image). (ii) The caregiver plays with an infant's toy (a toy train on a circular railroad track) in front of the robot. The train movement is detected as a salient stimulus by Cog's attentional system, tracked over time, and its trajectory classified as circular (by mapping the toy movement to shapes previously learned using books). Cog is then able to recognize the train used in earlier experiments (see Figure 6).
(iii) The caregiver hammers a large nail in front of Cog, which segments the hammer's visual appearance from the hammer's repetitive motion, segments the nail's image appearance from its discontinuous motion upon contact with the hammer, and, from tracking these objects, is then able to learn the task of hammering. (iv) The caregiver draws a circle in front of Cog, which recognizes the drawing as having a circular geometry by mapping it to previously learned geometries. (v) Cog plays ball with a caregiver, extracting the ball's image from the interaction, so that it is able to recognize the ball in future social interactions in different contexts. (vi) The caregiver paints a circle in front of Cog, which successfully recognizes the geometric figure. (vii) The caregiver repetitively shows a toy cylinder to Cog, presenting its circular face through circular motions of his finger. The humanoid is then able to associate such motion to the geometric shape of the circular face.

Figure 5: Several learning experiments implemented on the humanoid robot Cog.

The goal is therefore the introduction of robots into our society, treating them as one of us, using child development as an inspiration model for the developmental learning of a humanoid robot. This strategy relies heavily on human-robot interactions, so that the humanoid robot perceives the world through the caregiver's eyes: the caregiver controls and filters the information relevant to the robot to facilitate learning.
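The trajectory classification in experiment (ii) can be sketched as follows. This is an illustrative reconstruction, not the algorithm actually used on Cog (which matched trajectories against shapes previously learned from books); the function name, the centroid-based circle fit and the tolerance value are all assumptions:

```python
import math

def classify_trajectory(points, tol=0.1):
    # Hypothetical sketch: take the centroid as the circle's centre and
    # measure how evenly the points spread around the mean radius.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    spread = max(abs(r - mean_r) for r in radii) / mean_r
    return "circular" if spread < tol else "other"

# A toy-train-like circular trajectory versus a straight push
circle = [(math.cos(2 * math.pi * k / 36), math.sin(2 * math.pi * k / 36))
          for k in range(36)]
line = [(0.1 * k, 0.05 * k) for k in range(1, 37)]
print(classify_trajectory(circle))  # -> circular
print(classify_trajectory(line))    # -> other
```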

For instance, it is essential to have a human in the loop to introduce objects from a book to the robot (as a human caregiver does with a child). This led to the creation of children-like learning scenarios for teaching the humanoid robot Cog (as shown in Figure 5), which were used for transmitting information to the humanoid so that it could learn objects' multiple visual and auditory representations from books, other learning aids, musical instruments, and educational activities such as drawing and painting. Multi-modal object properties were learned using these educational tools, and inserted into several recognition schemes, which were then applied to developmentally acquire new object representations (Arsenio, 2004).

4.3 Objects and People Recognition

Teaching a visual system information concerning the surrounding world is a difficult task, which takes several years for a child, equipped with evolutionary mechanisms stored in its genes, to accomplish. The recognition approach proposed by (Arsenio, 2007) exploits help from a human caregiver in a robot's learning loop to extract meaningful percepts from the world. Through social interactions of a robot with a caregiver, the latter facilitates the robot's perception and learning, in the same way as human caregivers facilitate a child's perception and learning during child development phases.

The visual learning process of a humanoid robot presents many difficulties. For instance, objects might have various meanings in different contexts: a rod is labeled as a pendulum if oscillating with a fixed endpoint. From a visual image, a large piece of fabric on the floor is most often labeled as a tapestry, while it is most likely a bed sheet if found on a bed. But if a person is able to feel the fabric's material or texture, or the sound that it makes (or not) when grasped with other materials, then (s)he might easily determine the fabric's true function.
Object recognition thus draws on many sensory modalities and on the object's behavior, which inspired the approach of (Arsenio, 2007):

- Objects can be recognized by their appearance: color, luminance, shape, texture.
- Objects have other physical features, such as mass, of a dynamic nature. The dynamic behavior of an object varies depending on its actuation. Temporal information is necessary for identifying functional constraints.
- The object's motion structure - the kinematics - should be taken into account.
- Objects are situated in the world, which may change an object's meaning depending on context.
- Objects have an underlying hierarchical tree structure, which contains information concerning objects that are reducible to other objects, i.e., that originate from assembling several objects.
- There is a set of actions that can be exerted on an object, and another set of actions that an object can be used for (e.g., a nail can be hammered, while a hammer is used for hammering). Therefore, a set of affordances (Fitzpatrick, 2003) also intrinsically defines (albeit not uniquely) an object.

The set of issues just stated requires the solution of problems in several fields of study. This is a new, complex, multi-modal approach to object recognition, which requires the application of a large collection of learning algorithms to solve a broad scope of problems. Several learning tools, such as Weighted-cluster modeling, Artificial Neural Networks, Nearest Neighbor, Hybrid Markov Chains, Geometric Hashing, Receptive Field Linear Networks and Principal Component Analysis, were extensively applied to acquire categorical information about actions, scenes, objects and people (Arsenio, 2004). Recognition of objects has therefore to occur over a variety of scene contexts. This led (Arsenio, 2004) to develop an object recognition scheme to recognize objects from color, luminance and shape cues, or from combinations of them.
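As a minimal illustration of the appearance cue, recognition from color alone can be sketched as a nearest-neighbour search over colour histograms. The bin count, distance measure and function names below are assumptions; the actual scheme also combined luminance and shape cues:

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    # Quantise RGB pixels and normalise the counts into a histogram.
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def histogram_distance(h1, h2):
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

def recognise(query, database):
    # Nearest-neighbour match against stored object templates.
    return min(database, key=lambda label: histogram_distance(query, database[label]))

# Toy templates standing in for segmentations gathered during interactions
db = {"ball": color_histogram([(250, 10, 10)] * 50),
      "train": color_histogram([(10, 250, 10)] * 50)}
print(recognise(color_histogram([(240, 20, 20)] * 10), db))  # -> ball
```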
When facing an object, for instance a red ball, the humanoid is then able to classify it both as a red ball object (based on color and structure), and also as a geometric circle. Figure 6 illustrates this approach

for learning and recognizing the appearance of a toy train. Learning is enabled by tracking the object over time, so that a large database of samples is built, with several viewing angles of the toy, so that it can later be recognized in different contexts (such as on a railroad track, as shown in Figure 5). The human caregiver's face is also detected during the social interaction and tracked over time, so that a database of faces stores sets of training samples for the face, which is recognized in later interactions.

[Figure 6 diagram labels: attentional system; face detection/segmentation; object detection/segmentation; multi-tracker; object recognition backed by a color-histogram training set and object storage; face recognition backed by an eigenfaces training set and face storage.]

Figure 6: Approach (and real experiment) for segmenting and recognizing faces and objects. Training data for object/face recognition is extracted by keeping objects and others' faces in memory for a while, generating in this way a collection of training samples consisting of multiple segmentations of objects and faces. (left) On-line experiment on Cog: 1) object (train) segmentation, acquired by the active segmentation algorithm; 2) face detection and segmentation; 3) multiple object tracking algorithm, shown tracking a human face and the train simultaneously; 4) object recognition window, showing samples of templates in the object database corresponding to the object recognized; 5) face recognition window, showing 12 samples of templates in the face database corresponding to the face recognized. (right) Schematic organization: 1) object segmentation; 2) face detection and segmentation; 3) multiple object tracking; 4) object recognition; 5) face recognition.

4.4 Developmental Perception and Learning

Robust perception and learning follow the Epigenetic Principle (Zlatev and Balkenius, 2001): as each stage progresses, it establishes the foundation for the next stages.
The methodology developed for human-robot interactions is inspired by infants' simple learning mechanisms, mainly during Margaret Mahler's autistic and symbiotic developmental phases (Mahler, 1979).

[Figure 7 panel titles: train's track segmentations; train's segmented trajectories.]

Figure 7: a) Collection of segmented templates from real-time human-robot interactions, showing objects' visual and acoustic profiles from a human waving objects and poking at them. b) Object appearance extracted from a scene in the robot's environment by correlating the positions of a repetitive human fingertip with the book features corresponding to such positions. c) Several different objects correlated with each other through a circular shape. (From left to right) A circular red apple is extracted from a book of fruits. A circular shape is learned, and later matched to a circular object in the visual field. Circular drawings are equally recognized as having a similar shape, as are circular paintings made by a human caregiver, or his movements around a cylindrical object. The circular trajectory of a toy train around a circular railway track is also identified.

Indeed, baby toys are often used in a repetitive manner - consider rattles, car/hammer toys, etc. This repetition can potentially aid a robot in perceiving these objects robustly. Playing with toys might also involve discontinuous motions. For instance, grabbing a rattle results in a sharp velocity discontinuity upon contact. This motivated the design of algorithms for detecting events with such characteristics. Robots at the MIT CSAIL laboratory, such as Cog and Macaco, all employed a Visual Attentional System (Arsenio, 2004) based on human cognitive models to extract salient stimuli (such as color, motion, luminance) from the visual field in order to dedicate processing resources to them. Hence, salient visual stimuli - moving image regions that change velocity either periodically, or abruptly under contact - all produce visual event candidates.
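The two kinds of visual event candidates - periodic motion and abrupt velocity discontinuities - can be sketched over a 1-D velocity signal. The thresholds and the zero-crossing regularity test below are assumptions, standing in for the multi-scale time-frequency analysis actually employed:

```python
import math

def detect_events(velocities, jump_thresh=1.0):
    # Abrupt events: large frame-to-frame velocity jumps (contact-like).
    jumps = [i for i in range(1, len(velocities))
             if abs(velocities[i] - velocities[i - 1]) > jump_thresh]
    # Periodicity: regular spacing between zero crossings of the velocity.
    crossings = [i for i in range(1, len(velocities))
                 if velocities[i - 1] * velocities[i] < 0]
    periodic = False
    if len(crossings) >= 3:
        gaps = [b - a for a, b in zip(crossings, crossings[1:])]
        mean = sum(gaps) / len(gaps)
        periodic = all(abs(g - mean) <= 1 for g in gaps)
    return jumps, periodic

waving = [math.sin(0.5 * t) for t in range(40)]   # repetitive waving
impact = [0.6] * 10 + [-0.6] * 10                 # sharp reversal on contact
print(detect_events(waving))  # -> ([], True)
print(detect_events(impact))  # -> ([10], False)
```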
A human instructor may then use such protocols to transmit information to a robot. Correlation algorithms are then used to:

Group sets of features according to the human instruction. Features (image texture, color) may:
- be moving together, either similarly as one object (object visual and acoustic segmentation from a human waving objects and poking at them is shown in Figure 7a) or as several objects linked together (for instance, a hammer banging into a nail during a hammering activity being shown by a human to the robot, as shown in Figure 5);
- be simultaneously referred to by the human actuator, by having the human point at them, as shown in Figure 5, for a robot to learn an object's appearance by correlating the positions of a repetitive human fingertip with the book features corresponding to such positions. Figure 7b illustrates several object appearance templates of chairs (sofas, tables, doors, etc. were also learned using

such a scheme) by correlating repetitive positions of the human hand with the corresponding image features for such positions.

Detect relations between moving features:
- Contact events between objects: objects may crash and move together or in opposite directions according to their dynamic properties (Arsenio, 2004);
- Kinematic constraints: kinematic structures (e.g. joints) impose correlated object motions;
- Task structure: consists of reconstructing a Markov model for a task from the intervening objects and their motion (Arsenio, 2004).

Recognized features are then employed to bootstrap other robot capabilities (such as gesture recognition, shape and trajectory learning, or cross-modal recognition), as shown in Figure 7c. For instance, the robot is initially not able to recognize any human gesture during social interactions from the motion detection of the human hand actuator. But as shapes are learned using other mechanisms, gesture motions can then be compared with such shapes for classification purposes. A circular motion of a toy train is classified as the corresponding circular shape, and the image regions (the railway track) intersected by the train trajectory are then grouped together to extract its visual appearance. All such knowledge then becomes available to be incorporated into the Markov models that are learned to describe a task demonstrated by a human. In Cog, other mechanisms also then become available to search for objects, by allowing the robot to search its captured images for probable locations of objects (such probable object locations were computed from previous learning experiments, where objects were found), as described in (Arsenio, 2004).

5 Humanoid Learning about the Self

(And) of the soul the body form doth take; For soul is form, and doth the body make.
(Aristotle, 350 BC)

Children become able to recognize their own image in a mirror during Mahler's developmental practicing subphase (Mahler, 1979), which marks an important developmental step towards the child's individuality. On a humanoid robot, proprioception from the robot's joints is a very important sensory modality for controlling the mechanical device, as well as for providing workspace information (such as the robot's gaze direction). But proprioceptive data is also very useful to infer the identity of the robotic self (Fitzpatrick and Arsenio, 2004), for instance, by having the robot recognize itself in a mirror. Large correlations of a particular robot limb's motion with data from other sensory inputs indicate a link between that sensing modality and the moving body part.

5.1 Self Recognition

Self-recognition in a mirror is the focus of extensive study in biology. Work on self-recognition in mirrors for chimpanzees (Gallup et al., 2002) suggests that animals other than humans can also achieve such competency, although the interpretation of such results requires care and remains controversial. Self-recognition is related to the notion of a theory of mind, where intents are assigned to other actors, perhaps by mapping them onto oneself, a topic of interest in robotics (Kozima & Yano, 2001; Scassellati, 2001). Cross-modal rhythm is an important cue for filtering out extraneous noise and events of lesser interest. In the busy lab Cog inhabits, people walk into view all the time, and there are frequent loud noises from the neighboring machine shop.

Turning to the robot's perception of its own body, proprioceptive feedback provides very useful reference signals to identify appearances of the robot's body in different modalities. For instance, Cog treated proprioceptive feedback from its joints as just another sensory modality in which interesting events

may occur. These events can be bound to the visual appearance of its moving body part - assuming it is visible - and the sound that the part makes (as shown in Figure 8), if any (in fact Cog's arms are quite noisy, making an audible whirr-whirr when they move back and forth).

Figure 8: (left) In this experiment, a robot caregiver grabs Cog's arm and shakes it back and forth while the robot is looking away. The sound of the arm is detected, and found to be causally related to the proprioceptive feedback from the moving joints, and so the robot's internal sense of its arm moving is bound to the external sound of that motion. (right) In this experiment, a person shakes Cog's arm in front of its face. What the robot hears and sees has the same rhythm as its own motion, so the robot's internal sense of its arm moving is bound to the sound of that motion and the appearance of the arm.

5.2 Learning about the Self

An important milestone in child development is reached when the child recognizes itself as an individual, and identifies its mirror image as belonging to itself (Rochat and Striano, 2002). This stage is also important for humanoid robots to learn about themselves. As shown in Figure 9, Cog employed a binding algorithm based on event correlation (Arsenio, 2004) not only to identify the robot's own acoustic rhythms, but also to visually identify the robot's mirror image (an important milestone in the development of a child's theory of mind (Baron-Cohen, 1995)). It is important to stress that we are dealing with the low-level perceptual challenges of a theory-of-mind approach, rather than the high-level inferences and mappings involved.
Correlations of this kind could form a grounding for a theory of mind and body-mapping, but they are not per se a theory of mind - for example, they are completely unrelated to the intent of the robot or of the people around it, and intent is fundamental for a robot to understand others in terms of the self (Kozima and Yano, 2001). The goal is that the perceptual and cognitive research will ultimately merge and yield a truly intentional robot that understands others in terms of its own goals and body image - an image which could develop incrementally using cross-modal correlations of the kind explored in this work.
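Such rhythm-based binding can be sketched as follows: estimate each stream's dominant period from the spacing of its peaks, and bind the streams when the periods agree. The period estimator, tolerance and signal names are illustrative assumptions, not the actual implementation:

```python
import math

def dominant_period(signal):
    # Average spacing between local maxima of the signal.
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] >= signal[i + 1]]
    if len(peaks) < 2:
        return None
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    return sum(gaps) / len(gaps)

def bind(stream_a, stream_b, tol=0.15):
    # Bind two sensory streams when their rhythms (periods) agree.
    pa, pb = dominant_period(stream_a), dominant_period(stream_b)
    if pa is None or pb is None:
        return False
    return abs(pa - pb) / max(pa, pb) < tol

arm_motion = [math.sin(0.8 * t) for t in range(60)]           # proprioception
arm_sound = [1 + math.sin(0.8 * t + 0.4) for t in range(60)]  # whirr envelope
lab_noise = [math.sin(0.3 * t) for t in range(60)]            # unrelated rhythm
print(bind(arm_motion, arm_sound))  # -> True
print(bind(arm_motion, lab_noise))  # -> False
```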

[Figure 9 diagram labels: visual segmentation; sound segmentation; detected correlations; multiple object tracking; object recognition; Cog's view; body parts: body, head, hand, arm.]

Figure 9: Results for mapping visual appearances of the self to one's own body. Cog can be shown different parts of its body simply by letting it see that part (in a mirror if necessary) and then shaking it, such as its hand or flipper. Notice that this works for the head, even though shaking the head also affects the cameras. The reason lies in the visual image stabilization by the vestibulo-ocular reflex in Cog's eye control (Arsenio, 2004), which significantly reduces the background's motion relative to the head motion.

6 The Movie in the Brain

...many of the details of Cog's "neural" organization will parallel what is known about their counterparts in the human brain. (Dennett, 1998)

The temporal collection and integration of all cognitive percepts in the human brain is often denoted as the movie in the brain (Damasio, 1999). The robot's brain corresponds to the computational processes implemented on Cog to make it appear with some degree of intelligence. It is worth taking several snapshots of what is going on in a humanoid robot's brain at a given moment. Namely, how are memories stored? What information is interchanged between processing modules? What gets into the robot's learning structures?

6.1 The Humanoid Brain

In order to address these questions, we should assume no single, central control structure. Such an assumption is supported by evidence from the cognitive sciences (mainly from split-brain patients, whose corpus callosum has been severed (Gazzaniga and LeDoux, 1978)). Although humans have specific brain processing areas, such as Broca's language area or the visual cortex, these areas are highly interconnected in complex patterns. A particular cognitive modality most often involves several brain structures.
Classical AI has a tendency to overlook these complex aspects of human intelligence (Minsky and Papert, 1970).

An example of such complexity on the humanoid robot Cog (Arsenio, 2004) arises when building a description of a scene, which requires spatial information (the where pathway), recognition of specific world structures (the what pathway) and visual memory. This requires a large collection of distinct, but highly interconnected, processes. A distributed cognitive architecture, employing several AI algorithms, was therefore implemented on the humanoid robot Cog, aiming at the emulation of different perceptual cognitive capabilities, as shown in Figure 10 and shortly described hereafter.

The primary and associative visual areas of the brain's occipital lobe were modeled in Cog by employing several low-level visual processing components for extracting a set of low-level features: spectral, chrominance, luminance, and oriented lines. Low-level features such as skin color and optical flow are integrated by a log-polar attentional system for inferring stimulus saliency, as in the brain's visual association area. The salient stimulus is further processed for the perceptual grouping of color and spectral regions, which also occurs chiefly in this brain area. A multi-scale approach was designed for detecting events from motion created by an agent's actions. Two parallel visual pathways in the human brain have been found to process information relative to objects'/people's identity (the what pathway in the brain's temporal lobe, for which object and people recognition algorithms were implemented on Cog) and to objects'/people's location (the where pathway in the brain's parietal lobe, for which map building and localization of objects, people and the robot itself were equally employed on Cog). The sensory information reaching the human brain is not only processed based on data from the individual sensory modalities, but is also cross-correlated with data from other sensory modalities for extracting richer meanings.
This processing in the human brain inspired the development of binding approaches and cross-modal recognition algorithms for a humanoid robot to detect and recognize cross-correlations among data from different sensors. The effects of a visual experience are stored in the human brain for later use. Visual memory is considered to be divided into three differentiated systems (Palmer, 1999): Iconic Memory, Short Term Memory, and Long Term Memory. Data representations learned by Cog were stored in large databases, in which data in one database may be cross-correlated with other databases for bootstrapping new capabilities (for instance, mapping a hand gesture to a previously learned triangular shape).

Perception on Cog involves several sensory modalities. Processing of audio signals is one such modality implemented on Cog (sound recognition is often a very important survival skill in the animal kingdom). From such information, Cog extracted simple repetitive words while correlating such information with other events, such as the visual image of an object named by a word. Another perceptual modality is proprioception (position/velocity or force sensory data from the robot's joints - or from human bone articulations and muscles). These sensory messages terminate in the parietal lobe (in the somatosensory cortex). This area has direct connections to the neighboring motor area, which is responsible for the high-level control of motor movements. Building sensory-motor maps occurs early in childhood to enable reaching movements, and such functionality was implemented as well on Cog using locally linear adaptive models. In addition, networks for the generation of oscillatory motor rhythms (networks of central pattern generators are also found in the human spinal cord) were modeled on Cog by neural oscillators. Given the crucial role of the cerebellum in the control of eye movements and gaze stabilization, Cog employed a non-linear control framework.
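The sensory-motor map can be illustrated with a kernel-weighted model that predicts hand position from joint angles; the class name, Gaussian kernel and bandwidth below are assumptions in the spirit of the locally linear adaptive models mentioned above:

```python
import math

class SensoriMotorMap:
    # Stores (joint angles, hand position) pairs observed during reaching,
    # and predicts by Gaussian-kernel weighting of the stored examples.
    def __init__(self, bandwidth=0.5):
        self.samples = []
        self.bandwidth = bandwidth

    def observe(self, joints, hand):
        self.samples.append((joints, hand))

    def predict(self, joints):
        est, total = [0.0, 0.0, 0.0], 0.0
        for q, x in self.samples:
            d2 = sum((a - b) ** 2 for a, b in zip(joints, q))
            w = math.exp(-d2 / (2 * self.bandwidth ** 2))
            total += w
            for i in range(3):
                est[i] += w * x[i]
        return [e / total for e in est]

m = SensoriMotorMap()
m.observe((0.0, 0.0), (0.0, 0.0, 0.5))   # arm at rest, hand near chest
m.observe((1.0, 0.0), (0.4, 0.0, 0.3))   # shoulder rotated, hand forward
print(m.predict((0.05, 0.0)))            # close to the first observation
```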
The frontal lobe is responsible for complex functions in the human brain. It has been shown that inputs from the limbic system play an essential role in modulating learning. Emotional systems implemented on Macaco (Arsenio, 2003) and Kismet (Breazeal & Fitzpatrick, 2000) emulate the essential role of human

emotions on learning and decision making. Other frontal lobe functions, such as task identification and functional object recognition, were also addressed on Cog.

[Figure 10 diagram, by brain area: Motor area - sliding-modes controller; learning of locally affine dynamic models; Matsuoka neural oscillators for the execution of rhythmic movements. Sensory-motor maps - locally weighted regressions mapping proprioceptive data from body joints to 3D Cartesian space. Perception of the robot's own body - binding proprioceptive data from robotic body parts (head, torso, left or right arms and hand) to the sound they generate, and to the robot's body visual appearance. Right hemisphere - holistic-based integration; left hemisphere - object-based integration. Primary visual association area - space-variant attentional system; face and head-pose detection/recognition; keeping track of multiple objects; binding multiple features. Limbic system - emotional processes: motivational drives, speech emotional content. Acoustic perception - sound recognition (PCA clusters the input space into eigensounds); recognizing sounds of objects; word recognition. Cerebellum - vestibulo-ocular reflex; smooth pursuit; saccades; own body kinematics and dynamics; body-retina spatial maps. Learning and task identification - identification of tasks using Markov models (e.g. hammering, sawing, painting, drawing); learning the kinematics and dynamics of mechanical mechanisms. Where pathway - scene recognition; spatial organization of objects: object-based mixtures of Gaussians to learn spatial distributions of objects and people relative to the spatial layout of a scene, and holistic-based mixtures of Gaussians to learn spatial distributions of objects from the spatial distribution of frequencies in an image. Frontal lobe and motor area - binding sounds to visual descriptions (e.g. a bang sound to a hammer's visual segmentation; the acoustic representation of a geometric shape, such as a circle, to its visual description; mapping a "No!" sound to a head-shaking movement). What pathway - object recognition through integration of shape features, chrominance features, luminance features and texture descriptors. Low-level visual processing - wavelet computation, short-time Fourier transforms, edge detection (Canny algorithm), edge orientation (Sobel masks), line estimation (Hough transform), topological color areas (8-connectivity labelling), skin detection, optical flow estimation (Horn & Schunck), point tracking (Lucas and Kanade pyramidal algorithm). Areas labelled: temporal lobe, parietal lobe, occipital lobe, cerebellum.]

Figure 10: Cognitive modeling of a very simple brain for the humanoid robot Cog.

Cog's computational system, for instance, comprised several dozen processes running AI algorithms on 32 processors and interchanging messages continuously. This simple artificial humanoid brain was intended to mimic the brain mechanisms of thought, and to emulate such processes in an artificial man-made creature, such as a humanoid robot, only at a higher level of abstraction. Furthermore, it is only a very simplistic approach to something as complex as the real brain of even simple creatures. Most functionality still lacks implementation, such as: language acquisition; complex reasoning and planning; tactile perception and control; hand manipulation and high-level interpretation of human gestures, just to name a few.

6.2 Snapshots of Brain Activity

In robotics, it is essential to keep track of the robot's activities, in order to debug the software and eventual errors or system failures, to inspect communications between processes, and to infer the correctness of the algorithms implemented, among other reasons. As such, the humanoid robot Cog made use of an array of 4x4 wall monitor displays for providing snapshots of the robot's activity (grey box in Figure 11). Several user interfaces were

developed in order to facilitate access to such information (such as motor status and command issuing, images acquired and processed, and results of recognition algorithms). A human analogy is provided by the biomedical sensors employed to monitor a patient's health status. At any given instant of time, one can consider the robot's state as a collection of several factors which may be monitored for inferring the robot's internal states, such as the robot's emotional states (in humans, such a state corresponds to a hormonal and visceral status of our body), its motivation, the stimuli being processed and selected, as well as the ongoing social interactions, objects or people being recognized or tracked, conversations or turn-taking interactions (see (Fitzpatrick, 2003) for work on Cog and (Breazeal & Fitzpatrick, 2000) for work on the Kismet robot), tasks being executed, sounds being heard, etc.

But in humans, some activities might be performed without awareness (as is often the case for involuntary movements). Therefore, we present hereafter a hypothesis for the definition of a robot's conscious state: a selected collection of information sources being processed by the robot at a given instant of time. It may include a visual image of an object and its categorization, information concerning tasks in which the named object is being used, as well as information concerning places where the object is expected to move. Mind 3 is the aspect of intellect and consciousness experienced as combinations of thought, perception, memory, emotion, will, and imagination, including all unconscious cognitive processes. The robot's mind representation of such information is the collection of all information at that moment. Over time, the movie in the brain is formed by the temporal evolution of such a state (see Figure 11 for a representative illustration of the different kinds of information that may be in the robot's head).
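The hypothesized conscious state - a selected collection of the information sources being processed at an instant - can be sketched as a capacity-limited selection by salience. The data structure, field names and capacity below are hypothetical illustrations, not part of Cog's implementation:

```python
def conscious_state(information_sources, capacity=3):
    # Keep only the few most salient sources at this instant of time.
    ranked = sorted(information_sources.items(),
                    key=lambda kv: kv[1]["salience"], reverse=True)
    return {name: src["content"] for name, src in ranked[:capacity]}

snapshot = {
    "vision": {"content": "red ball, circular", "salience": 0.9},
    "task":   {"content": "playing catch",      "salience": 0.7},
    "sound":  {"content": "background hum",     "salience": 0.1},
    "motive": {"content": "social drive high",  "salience": 0.5},
}
print(sorted(conscious_state(snapshot)))  # -> ['motive', 'task', 'vision']
```

The movie in the brain would then be the sequence of such selections over time.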
The robot's conscious state should be a subset of these possibilities.

6.3 The Role of Artificial Sleep and Dreams in Learning

In animals, it is thought that the synapses made by nervous cells play an essential role in learning (Horn et al., 1985). On the other hand, according to some authors (Crick & Mitchison, 1983), sleeping dreams also play a role in human learning, so that through sleep the human brain is able to remove certain undesirable modes of interaction. Patients that are sleep-deprived also have more difficulty in effectively learning new knowledge. The role of sleeping may be very important for humanoid robots as well, corresponding to idle times in which the robot is not acquiring or processing new information, nor interacting socially. As shown in Figure 11, Cog's learned knowledge was stored in databases, so that the information can be recalled later. But it is important as well to correlate information across such databases, to try new associations of data between different databases, and to discard old items learned from memory structures. The second hypothesis presented here is that it is important for the humanoid robot to Artificially Sleep (assumed here as the robot's idle time with respect to social interaction and stimulus acquisition) in order to consolidate new information into memory according to different modalities, and to explore correlations for bootstrapping and developing new capabilities. According to this hypothesis, a robot's dream can be seen as a robot's virtual state of mind (either conscious or unconscious): not the real state (the movie in the brain as previously described), but a virtual state resulting from tentative associations of data, in order to test new relations between acquired knowledge and to discard others. On the humanoid robot Cog, as well as on Macaco, such idle times were essential for the robot to apply off-line learning strategies in order to store into memory new knowledge resulting from learning new data associations.
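An idle-time consolidation pass of this kind can be sketched as follows; the database layout, the scoring callback and the forgetting rule are all hypothetical illustrations of the idea, not the actual off-line strategies used on Cog or Macaco:

```python
def consolidate(databases, associate, threshold=0.8, max_age=100, now=0):
    # Try tentative cross-database associations, keeping the strong ones,
    # and forget entries that have not been used for a long time.
    links = []
    for name_a, items_a in databases.items():
        for name_b, items_b in databases.items():
            if name_a >= name_b:          # visit each pair of databases once
                continue
            for a in items_a:
                for b in items_b:
                    score = associate(a, b)
                    if score >= threshold:
                        links.append((a["label"], b["label"], score))
    for items in databases.values():      # discard stale memories
        items[:] = [i for i in items if now - i["last_used"] <= max_age]
    return links

shapes = [{"label": "circle", "last_used": 0}]
sounds = [{"label": "ring", "last_used": -200}]   # long unused
links = consolidate({"shapes": shapes, "sounds": sounds},
                    associate=lambda a, b: 0.9)
print(links)   # -> [('circle', 'ring', 0.9)]
print(sounds)  # -> [] (stale entry forgotten)
```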

Figure 11: The movie in Cog's brain. The figure shows the robot's state of mind as a collection of internal states (emotional, motivational, perceptual) and external stimuli and interactions (tasks being executed, objects perceived, stimuli detected) at a given instant in time. The collection of such snapshots of humanoid processes over time constitutes its "movie in the brain". 7 Interrelation and Boundaries between Human and Humanoid Consciousness Although consciousness must be, in the end, a product of some gigantically complex mechanical system, it will surely be utterly beyond anybody's intellectual powers to explain how this is so. (Dennett, 2005) The original drive for self-preservation is no more accompanied by any I-consciousness than any other drive. What wants to propagate itself is not the I but the body that does not yet know of an I. Not the I but the body wants to make things, tools, toys, wants to be inventive. And even in the primitive function of cognition one cannot find any cognosco ergo sum of even the most naïve kind, nor any conception, however childlike, of an experiencing subject. Only when the primal encounters, the vital primal words I-acting-You and You-acting-I, have been split and the participle has been reified and hypostatized, does the I emerge with the force of an element. (Buber, 1988)


More information

Technology designed to empower people

Technology designed to empower people Edition July 2018 Smart Health, Wearables, Artificial intelligence Technology designed to empower people Through new interfaces - close to the body - technology can enable us to become more aware of our

More information

HUMAN-LEVEL ARTIFICIAL INTELIGENCE & COGNITIVE SCIENCE

HUMAN-LEVEL ARTIFICIAL INTELIGENCE & COGNITIVE SCIENCE HUMAN-LEVEL ARTIFICIAL INTELIGENCE & COGNITIVE SCIENCE Nils J. Nilsson Stanford AI Lab http://ai.stanford.edu/~nilsson Symbolic Systems 100, April 15, 2008 1 OUTLINE Computation and Intelligence Approaches

More information

Developing Frogger Player Intelligence Using NEAT and a Score Driven Fitness Function

Developing Frogger Player Intelligence Using NEAT and a Score Driven Fitness Function Developing Frogger Player Intelligence Using NEAT and a Score Driven Fitness Function Davis Ancona and Jake Weiner Abstract In this report, we examine the plausibility of implementing a NEAT-based solution

More information

VICs: A Modular Vision-Based HCI Framework

VICs: A Modular Vision-Based HCI Framework VICs: A Modular Vision-Based HCI Framework The Visual Interaction Cues Project Guangqi Ye, Jason Corso Darius Burschka, & Greg Hager CIRL, 1 Today, I ll be presenting work that is part of an ongoing project

More information

Announcements. HW 6: Written (not programming) assignment. Assigned today; Due Friday, Dec. 9. to me.

Announcements. HW 6: Written (not programming) assignment. Assigned today; Due Friday, Dec. 9.  to me. Announcements HW 6: Written (not programming) assignment. Assigned today; Due Friday, Dec. 9. E-mail to me. Quiz 4 : OPTIONAL: Take home quiz, open book. If you re happy with your quiz grades so far, you

More information