An Experiment in the Use of Manipulation Primitives and Tactile Perception for Reactive Grasping


Antonio Morales, Mario Prats, Pedro Sanz and Angel P. del Pobil
Robotic Intelligence Lab, Universitat Jaume I, Castellón, Spain
{morales, mprats, sanzp, pobil}@uji.es

Abstract

Uncertainty is one of the main problems that robot manipulation systems have to deal with in service environments. In this paper we propose a theoretical framework for programming manipulation actions that makes use of a priori defined manipulation primitives and sensory feedback to cope with uncertainty during execution. It also takes into account the limitations of the robot gripper and the reactive execution of such actions. We describe the role of the different sensory data in this scheme. Finally, a practical implementation of a manipulation primitive that follows the principles of the theoretical framework is presented. The task is the extraction of books from a bookshelf with the UJI service robot.

I. INTRODUCTION

Management of uncertainty is one of the big challenges in the design of robot applications able to operate autonomously in unstructured environments such as human or outdoor scenarios. Robot grasping and manipulation of objects is no exception; it is one of the fields of robotics most affected by the uncertainties of the real world. Robot manipulators have to deal with unknown objects whose physical properties are undetermined beforehand, whose location is unknown and can vary, and which can lie in cluttered environments where contacts with other objects can happen at any moment. The use of sensor data, particularly contact information, is a valuable tool to overcome these difficulties [1], [2].

The design and use of contact sensors in robot manipulation applications has a long tradition that has produced a large body of work [3]. There is a clear distinction between extrinsic sensors, which measure forces that act upon the grasping mechanism, and intrinsic sensors, which measure forces within the mechanism [4]; the most typical among the latter are force/torque sensors. Hereinafter we focus on extrinsic sensors. These are attractive to researchers because they resemble and try to imitate the role of the human skin, covering the surfaces that contact the objects. Indeed, the designs of several dexterous hands have tried to include this feature [5], [6], [7]. In other cases, commercial robot hands have been augmented with tactile devices [8], [9], [10]. The latter is our case.

Fig. 1. The UJI Service Robot.

Our purpose is to use sensor data as a feedback source for grasp control algorithms, in what has been named reactive grasping. Several approaches make use of tactile data in the feedback control loop, implementing a control law [11], a neural model based controller [12], fuzzy logic [5], a finite state machine (FSM) [10], or a biologically inspired approach [7]. Interestingly, there have also been several attempts to integrate the use of vision and tactile data [13], [8], [12], [10].

In this paper, a framework to deal with uncertain conditions during manipulation actions by using tactile information is presented. In Section II we describe our particular experimental setup, the UJI service robot. In Section III we depict the theoretical framework that models our vision of grasping and manipulation for autonomous robot applications.
Finally, Section IV describes a particular example of a controller that, following the directives of the theoretical framework, uses tactile and force/torque data to perform a complex manipulation task.

II. SYSTEM DESCRIPTION

The UJI service robot is a prototype mobile manipulator designed to assist in everyday tasks (Fig. 1). The first application scenario addressed is its employment as an assistant in a public library. Its main task is to look for books in the bookshelves, pick them up and take them to specified locations [14], [15]. Within this application we have

considered the use of tactile sensors jointly with vision and force, aiming at improving the manipulation skills of the robot.

The robot consists of a Mitsubishi PA-10 7-d.o.f. arm mounted on an ActivMedia PowerBot mobile robot. The manipulator is endowed with a three-fingered Barrett Hand and a JR3 force/torque and acceleration sensor mounted at the wrist, between the arm and the hand. This sensor not only provides six-dimensional force/torque data, but also measures the compensated linear and angular accelerations, which add six dimensions more.

Fig. 2. A detail of the tactile sensors on the fingers.

The hand has been improved by adding arrays of pressure sensors, designed and implemented by Weiss Robotics, to the fingertips. The sensors consist of three 8 x 5 cell matrices that cover the inner sides of the distal phalanges of each of the three fingers (see Fig. 2). Each cell is a square of 2.3 mm per side. The sensing principle is resistive: one side of a carbon-enriched elastomer is contacted [16]. With this method it is possible to detect a complete two-dimensional force profile by the use of a homogeneous sensor material contacted by an adequate electrode matrix [17]. These matrices are controlled, and the data collected from their cells processed, by a specifically designed controller, the DSACON32, which also implements the communication with the PC through standard RS232 and USB connections [18].
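As an aside, the sketch below shows one convenient way to represent and summarize such readings once the raw 8 x 5 pressure images have been fetched from the controller. The array values, the activation threshold and the helper function are made up for illustration; they are not the device's protocol or our actual driver code.

    import numpy as np

    # One 8 x 5 pressure image per finger, stacked in a 3 x 8 x 5 array.
    CELL_SIDE_MM = 2.3   # cell pitch given in the text
    THRESH = 10.0        # activation threshold (sensor units), assumed

    finger_pads = np.zeros((3, 8, 5))
    finger_pads[0, 2:5, 1:4] = 42.0   # fake a contact patch on finger 0

    def contact_summary(pad):
        """Mean pressure over the active cells and contact centroid in mm."""
        active = pad > THRESH
        if not active.any():
            return 0.0, None
        rows, cols = np.nonzero(active)
        centroid = (rows.mean() * CELL_SIDE_MM, cols.mean() * CELL_SIDE_MM)
        return pad[active].mean(), centroid

    print(contact_summary(finger_pads[0]))   # mean pressure 42.0, centroid (6.9, 4.6) mm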
III. MANIPULATION THEORETICAL FRAMEWORK

Traditionally, most works on grasp planning, analysis and control have assumed that a model of the object to manipulate is available. This is a reasonable assumption in industrial and controlled environments; however, in service applications it is often not the case, so most theoretical solutions are not directly applicable. One solution proposed to face this obstacle is the use of vision to reconstruct the shape of the object. This has been successful, with certain limitations, for 2D objects, but no practically satisfactory solution has been provided for the 3D case. It has also been suggested to modify grasp algorithms to take into account a certain degree of inaccuracy in the real conditions, making their solutions more robust. But the inaccuracy considered usually concerns the location of the contact points or the location and pose of the object, rather than the shape of the object itself. Yet the latter is most of the time the main source of inaccuracy: the sensor-based reconstruction of the object can be dramatically different from the real shape.

Another important drawback is that the geometry and the kinematic constraints of the robot hand are often not taken into account. As a result, many general solutions are not applicable because they do not fit the particular characteristics of the robot hand. Finally, in most works devoted to grasp synthesis, grasps are described as sets of contact points on the object surface where forces/torques are exerted. The problems considered above lead us to conclude that a representation of grasps based on a set of contact points is not appropriate for autonomous manipulation tasks with a high degree of uncertainty. Here we propose a different approach, which describes grasps in a qualitative and knowledge-based fashion. It consists in the a priori definition of a set of manipulation primitives that define the control strategy and the sensory feedback to use during execution.

A. Manipulation primitives

A manipulation primitive is an abstraction of a set of manipulation actions that follow the same control policy. In our approach, a manipulation action is an action of the robot whose goal is to produce a certain range of displacements/rotations of the target object by applying forces or torques with a robot gripper. Grasps are a special subset of manipulation actions that aim to constrain almost or completely the mobility of the object. This definition of manipulation actions allows us to include not only the traditional classification of grasps (power and precision) but also other actions, such as pulling/pushing interactions or the special case that we consider in our example in Section IV.

A manipulation primitive determines several key aspects of the execution. First, it defines the hand preshape, that is, the posture of the hand when approaching the target. Second, and more relevant to this paper, it describes the control strategy to be used for executing the action; this includes which sensor information is used and how it is interpreted. It also determines the metrics that evaluate the degree of accomplishment of the action.

Every primitive determines a complete execution control strategy. To illustrate this assertion, consider the difference between power and precision grasps. Precision grasps only imply contacts on the fingertips, while power grasps use contacts on the whole hand surface: fingertips, phalanges, and palm. This difference is relevant for the design of the execution controller. Roughly, for the execution of a power grasp the hand approaches the object until it makes contact, and then closes the fingers. In the case of precision grasps, however, the fingers have to close at a certain distance, so that only the fingertips make contact with the object. In the case of a power grasp, only contact sensor data

is necessary, while in the case of a precision grasp, vision provides the information needed to determine how far the hand is from the object.

Fig. 3. Barrett Hand preshapes for the different grasp types: (a) spherical, (b) cylindrical, (c) hook, (d) pinch.

The set of manipulation primitives to be developed depends on the particular features of the robot hand and on the different tasks (pulling objects, opening/closing doors [19], etc.) to be performed with it. In any case, a detailed study of the hand constraints, objects, and tasks is necessary. An example of such a study for an anthropomorphic five-fingered hand is presented in [20]. For the Barrett hand, Miller et al. [21] present a study of the possible hand preshapes. We use them to illustrate a subset of hand primitives for the Barrett Hand (see also Fig. 3). In particular, we only show grasp primitives, but keep in mind that other primitives can be defined too [19]. The grasp primitives are these:

Hook grasp: The hand opposes gravity. All fingers form a hook that would enclose a cylindrically shaped object. The palm might exert force opposing the fingers.

Cylindrical grasp: All fingers close around a cylindrical object. The thumb opposes the other two fingers.

Spherical grasp: All fingers close around a ball-shaped object.

Pinch grasp: The grasp is characterized by the opposition of the two mobile fingers; the thumb does not participate. It is appropriate for grasping small objects.

Several of the preshapes described above (cylindrical, spherical and pinch) have power and precision alternatives, and the hook grasp can be used for pulling or pushing [19]. Any of these alternatives defines a different grasp type, and all the possible combinations define the set of grasp types.

B. Manipulation actions

In practical terms, a manipulation action is an instantiation of a manipulation primitive described by the following features (see Fig. 4):

Manipulation primitive: A qualitative description of the manipulation action to be performed, e.g. a power grasp, pushing a button (with the index finger), pulling, and others.

Approaching direction: Once the hand is positioned in the vicinity of the object, it approaches it following this direction. The approaching line is the path followed by the robot hand when it approaches the object.

Hand orientation: The hand can rotate around the approaching direction. The rotation angle is a relevant parameter to define an action configuration.

Fig. 4. Schematics with the components of the grasp descriptor.

The generation of these actions depends on specific grasp planning modules whose principles and design are beyond the scope of this paper; for further information about the authors' work on this topic, see [22], [23], [19].
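To make the descriptor concrete, here is a minimal sketch of it as a data structure. The three fields mirror the features listed above; the enum values follow the preshapes of Fig. 3, and the example instantiation is our own illustrative assumption rather than code from the paper.

    from dataclasses import dataclass
    from enum import Enum

    class Primitive(Enum):
        HOOK = "hook"
        CYLINDRICAL = "cylindrical"
        SPHERICAL = "spherical"
        PINCH = "pinch"
        PULL_BOOK = "pull-book"   # the special pulling action of Section IV

    @dataclass
    class ManipulationAction:
        primitive: Primitive                             # qualitative action type
        approach_direction: tuple[float, float, float]   # unit vector along the approaching line
        hand_orientation: float                          # rotation (rad) around the approach direction

    # Example: the book-pulling action, approaching horizontally toward the shelf.
    action = ManipulationAction(Primitive.PULL_BOOK, (1.0, 0.0, 0.0), 0.0)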
C. The role of sensor data

The manipulation action description depicted above only defines a starting configuration. The execution of an action is reactive, relying exclusively on the control strategies determined by the manipulation primitive and on the data provided by the sensors. In our system we consider three types of sensors: vision, force/torque and tactile.

In robotics, visual data provides the richest information about the environment and the task. In the framework described here, vision plays a role at three different stages: in the grasp planning phase, for extracting the relevant features of the target object [22]; in the approaching stage, for servoing the arm [24]; and in the execution stage, as feedback for the controller together with the other types of sensor data.

Force/torque (F/T) sensors provide information about the magnitude and direction of the forces and moments that appear when the robot and the object are in contact. Depending on the placement of this type of sensor, the information provided can be interpreted in different ways. One option

that has proved particularly successful is to place small F/T sensors at the fingertips. This configuration on the Barrett Hand is used by Platt et al. [9] to control the finger contacts on an object in order to find a stable grasp through a sequence of grasping/releasing corrective actions. In our system, however, a larger F/T sensor has been placed at the wrist of the robot arm. This configuration is useful for obtaining global information about the interactions of the hand, and it has proved useful in tasks like opening a door by pulling the handle [15], [19]. Finally, the tactile sensors, which measure the pressure produced along the contact surface, are useful to characterize the local information about contacts.

IV. EXPERIMENTS ON SENSOR-GUIDED MANIPULATION

In the section above, a theoretical framework for autonomous manipulation has been described. Here we explain an example implementation of a particular manipulation task within this framework. The task consists in extracting a book from a bookshelf (see Fig. 5); it is one of the basic skills that the UJI service robot needs in a library.

Fig. 5. The UJI service robot grasping a book.

In this example the manipulation primitive is not a grasp, but a special pulling action. The goal of the primitive is to pull a book off a shelf while it stands among other books. The hand is placed approximately in front of the book, aligned with it. Only one of the fingers is used: it is placed on the top corner of the target book and pulls the book back. The force/torque sensor and the tactile sensors are used together to reach an optimum contact and to extract the book.

A. Force/tactile sensing integration

Our tactile sensors give us pressure information, which is closely related to the force being applied through the contact, according to the expression P = F/A, where P is the pressure at the contact, F is the applied force and A is the area of the contact surface. Thus, force and tactile readings are related by the contact area: for a given force, the smaller the contact surface, the bigger the pressure; conversely, for a given pressure, the bigger the contact area, the bigger the force that can be transmitted. For example, applying 5 N through a single 2.3 mm x 2.3 mm cell produces a pressure of about 945 kPa, whereas spreading the same force over the whole 8 x 5 pad (about 212 mm2) reduces it to roughly 24 kPa.

Excessive pressure on the tactile sensor must be avoided. At the same time, it is desirable that a big force can be applied through the contact; in other words, it is desired to apply a big force with a small pressure. To achieve this, the robot must always try to maximize the contact surface. In the next section, a set of strategies for maximizing the contact surface in a real application will be presented, but first we focus on the control law that allows the robot to control its motion according to the force and pressure readings.

B. Force/pressure control law

For the compliant execution of tasks, force control is needed. For the particular task, there is a force reference that the robot tries to track. When the current force is under the reference, the control law must send the appropriate commands to the robot in order to increase the force, and vice versa. When using tactile sensors along with force sensors, the important reference is still the force reference, but we must also take care of the tactile sensors in order to avoid excessive pressure. Assuming that the force is applied exclusively through the tactile sensors, we can implicitly control the force by controlling the pressure and the contact surface on the tactile sensors.

Fig. 6. Force/pressure control law.
We propose the control law of Figure 6 for integrating force and pressure control. As the figure shows, there is an internal pressure control loop: it moves the manipulator in order to keep a given pressure reference on the tactile sensors. Around the pressure control loop there is a force control law that modifies the pressure reference. The input to the whole system is a force reference: if the force reading is below or above the force reference, the pressure reference is increased or decreased accordingly. Thus, the force that the manipulator applies is implicitly controlled through the pressure. Of course, once the pressure reference has reached the maximum pressure that the tactile sensors support, it is not increased any further. In this case, the only way to increase the force is to increase the contact surface, because the pressure decreases with the contact area. It is for this reason that it is desirable to always keep the contact as extended as possible.
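As a rough illustration of this nested scheme, the following sketch runs the outer force loop around the inner pressure loop on a toy one-dimensional contact model. The gains, limits and the elastic contact stub are our own assumptions, not values or code from the paper.

    import numpy as np

    # Outer loop: force error -> pressure reference (saturated at the sensor limit).
    # Inner loop: pressure error -> motion along the contact normal.
    F_REF = 5.0      # force reference (N), as in the experiment of Section IV-D
    P_MAX = 250.0    # maximum pressure the sensor tolerates (arbitrary units)
    K_F = 0.5        # outer-loop gain (pressure-reference step per N of error)
    K_P = 1e-4       # inner-loop gain (motion per unit of pressure error)
    AREA = 0.03      # toy contact area linking pressure and force

    def read_pressure(z):
        # Toy elastic contact: pressure grows linearly with penetration z.
        return max(0.0, 5000.0 * z)

    z = 0.0          # fingertip penetration along the contact normal
    p_ref = 0.0      # pressure reference, raised/lowered by the force loop
    for _ in range(5000):
        p = read_pressure(z)
        f = p * AREA                                          # force implied by F = P * A
        p_ref = float(np.clip(p_ref + K_F * (F_REF - f), 0.0, P_MAX))
        z += K_P * (p_ref - p)                                # track the pressure reference

    print(f"final force: {read_pressure(z) * AREA:.2f} N (reference {F_REF} N)")

Once p_ref saturates at P_MAX, the only remaining way to raise the force is to enlarge the contact area, which is exactly what the second behavior of the next subsection pursues.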

C. Force/pressure control strategies for the librarian robot

We have tried these force/pressure control strategies in a real application: a robot that takes a book out of a bookshelf. We want the robot to take out the book as humans do: making contact with one finger on the top of the book and making it turn with respect to its base (see Figure 7). We had already addressed this problem on the UJI Librarian Robot [25], but using only force feedback. The main drawback of that approach is that, without tactile sensors, there is no way for the robot to detect whether the contact on the top of the book is good or not. Thus, until now, we assumed that the robot fingertip was well positioned on the top of the book before starting the task. In real life this generally does not hold, due to modelling, planning and positioning errors, among others.

Fig. 7. The experimental environment.

This problem has been solved by installing tactile sensors on the robot fingertips. With them, the robot can estimate whether the initial contact is good or not, and can make the appropriate motion to assure a good contact. The current system is able to autonomously find a good initial contact on the top of the book, even when starting from a very coarse initial estimation. In addition, the robot performs the task without relying on any model of the book's height nor on a fixed hand trajectory.

Three behaviors have been implemented. The first one controls the force that the manipulator applies on the book through the contact on the fingertip. The second one tries to maximize the contact area in order to assure that the force reference can be reached. Finally, the third behavior is in charge of performing the task, i.e. taking out the book.

1) Controlling the force: This behavior acts on the constrained translational degree of freedom (DOF), i.e. the Z axis of Figure 7. The motion along this axis is guided by the control law presented in Section IV-B. The robot has an initial estimate of the force that it needs to apply along this direction in order to avoid sliding when taking out the book. This force reference is reached with pressure control, as already explained in Section IV-B. Thus, the robot will increase or decrease the pressure exerted on the book by the fingertip depending on whether the force is below or above the force reference.

2) Maximizing the contact surface: If the contact surface is small, the force reference might not be reached even if the pressure is the maximum that the sensor supports. Therefore, it is desirable to maximize the contact area in order to increase the force upper bound. It is assumed that the first contact is made on the top edge of the book, as shown in Figure 8 (left). As the robot has an approximate estimate of the floor plane, this coarse initial position can be reached without problems. When the robot, guided by the first behavior, makes the first contact with the book, the position on the sensor where contact has been made is used to compute a movement in the X direction, so that this initial contact (i.e. the book edge) is translated to the bottom part of the sensor. With this, we assure that the part of the sensor above the initial contact can be used for making contact with the top of the book.

For maximizing the contact surface on the top of the book, two rotational DOFs are controlled around the two axes that define the tactile sensor plane, i.e. rotations around the X and Y axes of Figure 7. The sensor is split into four regions by a vertical and a horizontal line. By controlling the rotational DOFs, along with the constrained translational motion already mentioned, the robot tries to maximize the contact surface by searching for a contact in each of the four regions. As the contact surface is maximized, the pressure decreases and the force can be increased.
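A minimal sketch of this second behavior is given below: it thresholds the 8 x 5 pressure image, checks the four sensor regions, and turns the imbalance between halves into corrective rotation rates. The threshold, gain and sign conventions are illustrative assumptions.

    import numpy as np

    THRESH = 10.0   # cell activation threshold (sensor units), assumed
    K_ROT = 0.01    # gain from coverage imbalance to rotation rate (rad/s), assumed

    def rotation_correction(cells):
        """From an 8x5 pressure image, compute rotation rates (rx, ry) around
        the sensor-plane axes that push uncontacted regions toward contact."""
        contact = cells > THRESH
        top, bottom = contact[:4, :], contact[4:, :]
        left, right = contact[:, :2], contact[:, 3:]   # middle column ignored
        rx = K_ROT * (bottom.sum() - top.sum())        # tilt toward the empty half
        ry = K_ROT * (right.sum() - left.sum())
        return rx, ry

    def all_regions_active(cells):
        """True when each of the four sensor regions registers contact."""
        c = cells > THRESH
        return all(q.any() for q in (c[:4, :2], c[:4, 3:], c[4:, :2], c[4:, 3:]))

    # Example: contact only on the lower rows, as after the first touch on the
    # book edge; the correction tilts the fingertip to engage the upper half.
    cells = np.zeros((8, 5))
    cells[5:, :] = 50.0
    print(rotation_correction(cells), all_regions_active(cells))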
3) Moving the hand backwards: When the force being applied to the book matches the force reference, the robot starts taking out the book by moving along the X axis (in the negative sense). If enough force is being applied downwards, the book will start turning with respect to its base, as shown in Figure 8. If the force is not enough, the book will slide and the robot will lose the contact; if this happens, the robot tries again, applying more force. Note that, as the fingertip moves backwards and the book turns, the robot may lose contact with the lower part of the tactile sensor. In this case, the second behavior is activated in order to maximize contact, making the fingertip rotate to adapt to the top of the book.

With the combination of the three behaviors, the robot is able to take out a book in a robust way, without knowledge of any book model and without planning a particular fingertip trajectory. The task is carried out by continuously tracking force and tactile information and by defining a suitable control strategy.
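How the three behaviors might be sequenced is sketched below as a simple state machine; the states, transitions and symbolic commands are our reading of the text, not code from the paper.

    import numpy as np

    THRESH = 10.0   # cell activation threshold, assumed

    def in_contact(cells):
        return bool((cells > THRESH).any())

    def all_regions_active(cells):
        c = cells > THRESH
        return all(q.any() for q in (c[:4, :2], c[:4, 3:], c[4:, :2], c[4:, 3:]))

    def extract_book_step(force, force_ref, cells, state):
        """One control step: return the next state and a symbolic command."""
        if state == "approach":                 # behavior 1: press until touching
            return ("adjust", "press") if in_contact(cells) else ("approach", "press")
        if state == "adjust":
            if not all_regions_active(cells):   # behavior 2: spread the contact
                return "adjust", "rotate_to_spread_contact"
            if force < force_ref:               # behavior 1: reach the force reference
                return "adjust", "raise_pressure_reference"
            return "pull", "move_backwards"     # hand over to behavior 3
        if state == "pull":                     # behavior 3: extract the book
            if not in_contact(cells):           # contact lost: press and retry
                return "approach", "press"
            if not all_regions_active(cells):   # book turning: re-engage behavior 2
                return "adjust", "rotate_to_spread_contact"
            return "pull", "move_backwards"

    # Example step: full contact at the reference force while adjusting.
    pad = np.full((8, 5), 50.0)
    print(extract_book_step(5.0, 5.0, pad, "adjust"))   # -> ('pull', 'move_backwards')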

D. Results

Figure 9 shows the evolution of the force and pressure readings during the execution of the task. First, force and pressure are zero until, approximately, iteration 2000, which corresponds to the first contact. The behaviors explained in the last section are then executed in order to increase the contact area and apply the appropriate force which, in this experiment, was set to 5 N; to reach this force, the pressure reference is increased online until the force reaches the desired value of approximately 5 N. Figure 10 shows four different contact images, taken at the first contact (top-left), at the second contact (top-right), when maximizing the contact surface (bottom-left) and when turning the book (bottom-right).

Fig. 8. The robot grasping the book. Left: initial contact. Middle: maximizing contact surface. Right: doing the task.

Fig. 9. Evolution of the force (N) and pressure (mV/100) readings over the control iterations.

Fig. 10. Some significant contact images.

V. DISCUSSION

In this paper we have presented preliminary work on the use of tactile sensors for autonomous robot manipulation. The use of sensor data is a key tool for applications in unstructured scenarios. Here we have studied how this type of sensor can be integrated into a global manipulation system, not only from a practical point of view but also from a theoretical perspective. We have described a theoretical framework for the description of manipulation tasks based on the definition of a priori, knowledge-based manipulation primitives. From our past experience, we have reached the conclusion that traditional grasp planning and control schemes are not useful in situations with uncertainty. This framework provides a representation useful for describing grasps in such circumstances, in a way that can be used by upper cognitive levels. The framework relies on a reactive execution of manipulation actions guided almost completely by sensory data. In the second main part of the paper we have presented an example of how these primitive manipulation skills can be implemented for a particular task. In the future we aim at developing new grasp capabilities so that our system, the UJI service robot, is able to complete all kinds of manipulation tasks successfully. We are particularly interested in grasping objects of all kinds of shapes.

ACKNOWLEDGMENT

This paper describes research carried out at the Robotic Intelligence Laboratory of Universitat Jaume I. The authors would like to thank the Spanish Ministry of Education and Science (MEC), under project DPI2004-01920; Generalitat Valenciana, under projects CTBPRB/2005/052 and GV-2007-109; and Fundació Bancaixa, under project P1-1A2006-11, for their invaluable support of this research.

REFERENCES

[1] M. Lee and H. Nicholls, "Tactile sensing for mechatronics - a state of the art survey," Mechatronics, vol. 9, no. 1, pp. 1-31, Feb. 1999.
[2] M. Lee, "Tactile sensing: New directions, new challenges," International Journal of Robotics Research, vol. 19, no. 7, pp. 636-643, July 2000.

[3] R. D. Howe, "Tactile sensing and control of robotic manipulation," Journal of Advanced Robotics, vol. 8, no. 3, pp. 245-261, 1994.
[4] J. Tegin and J. Wikander, "Tactile sensing in intelligent robotic manipulation - a review," Industrial Robot: An International Journal, vol. 32, no. 1, pp. 64-70, 2005.
[5] L. Birglen and C. Gosselin, "Fuzzy enhanced control of an underactuated finger using tactile and position sensors," in IEEE International Conference on Robotics and Automation, Barcelona, Spain, Apr. 2005, pp. 2320-2325.
[6] H. Kawasaki, T. Komatsu, and K. Uchiyama, "Dexterous anthropomorphic robot hand with distributed tactile sensor: Gifu hand II," IEEE/ASME Transactions on Mechatronics, vol. 7, no. 2, pp. 296-303, Sept. 2002.
[7] F. Leoni, M. Guerrini, C. Laschi, D. Taddeucci, P. Dario, and A. Starita, "Implementing robotic grasping tasks using a biological approach," in IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 1998, pp. 2274-2280.
[8] P. Allen, A. T. Miller, P. Oh, and B. Leibowitz, "Using tactile and visual sensing with a robotic hand," in IEEE International Conference on Robotics and Automation, Albuquerque, New Mexico, Apr. 1997, pp. 677-681.
[9] R. Platt Jr., A. H. Fagg, and R. Grupen, "Nullspace composition of control laws for grasping," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 2002, pp. 1717-1723.
[10] D. Kragic, S. Crinier, D. Brunn, and H. Christensen, "Vision and tactile sensing for real world tasks," in IEEE International Conference on Robotics and Automation, Taipei, Taiwan, Sept. 2003, pp. 1545-1550.
[11] H. Kawasaki, T. Mouri, J. Takai, and S. Ito, "Grasping of unknown object imitating human grasping reflex," in IFAC World Congress, Barcelona, July 2002.
[12] J. López, J. Pedreño, A. Guerrero, and P. Gorce, "A neural model for visual-tactile-motor integration in robotic reaching and grasping tasks," Robotica, vol. 20, pp. 23-31, 2002.
[13] A. Namiki and M. Ishikawa, "Optimal grasping using visual and tactile feedback," in IEEE/SICE/RSJ Intl. Conf. on Multisensor Fusion and Integration for Intelligent Systems, Washington, DC, Dec. 1996, pp. 589-596.
[14] M. Prats, P. J. Sanz, and A. P. del Pobil, "Model-based tracking and hybrid force/vision control for the UJI librarian robot," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada, Aug. 2005.
[15] M. Prats, P. J. Sanz, A. P. del Pobil, E. Martínez, and R. Marín, "Towards multipurpose autonomous manipulation with the UJI service robot," Robotica, vol. 25, no. 2, pp. 245-256, 2007.
[16] K. Weiss, "Sensor zur Messung von mechanischen Kräften" ("Sensor for measuring mechanical forces"), German patent DE 197 50 671 A1.
[17] K. Weiss and H. Wörn, "Tactile sensor system for an anthropomorphic robot hand," in IEEE International Conference on Manipulation and Grasping, Genoa, Italy, July 2004.
[18] Weiss Robotics, Technical Specifications of the Sensor Controller DSACON32, Ludwigsburg, Germany, http://www.weiss.robotics.com.
[19] M. Prats, P. J. Sanz, and A. P. del Pobil, "Task-oriented grasping using hand preshapes and task frames," in IEEE International Conference on Robotics and Automation, Rome, Italy, Apr. 2007.
[20] A. Morales, P. Azad, T. Asfour, D. Kraft, S. Knoop, R. Dillmann, A. Kargov, C. Pylatiuk, and S. Schulz, "An anthropomorphic grasping approach for a humanoid robot," in International Symposium on Robotics, Munich, Germany, May 2006, on CD.
[21] A. Miller, S. Knoop, H. Christensen, and P. Allen, "Automatic grasp planning using shape primitives," in IEEE International Conference on Robotics and Automation, Taipei, Taiwan, Sept. 2003.
[22] A. Morales, P. Sanz, A. del Pobil, and A. Fagg, "Vision-based three-finger grasp synthesis constrained by hand geometry," Robotics and Autonomous Systems, vol. 54, no. 6, pp. 496-512, June 2006.
[23] A. Morales, T. Asfour, P. Azad, S. Knoop, and R. Dillmann, "Integrated grasp planning and visual object localization for a humanoid robot with five-fingered hand," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, Oct. 2006, pp. 5663-5668.
[24] G. Recatalá, P. Sanz, E. Cervera, and A. del Pobil, "Grasp-based visual servoing for gripper-to-object positioning," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, Sept. 2004, pp. 118-123.
[25] M. Prats, A. P. del Pobil, and P. J. Sanz, "A control architecture for compliant execution of manipulation tasks," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, Oct. 2006.