Research Proposal: Autonomous Mobile Robot Platform for Indoor Applications

Igal Loevsky, advisor: Ilan Shimshoni
email: igal@tx.technion.ac.il
William Davidson Faculty of Industrial Engineering and Management
Technion - Israel Institute of Technology
Technion City, Haifa 32000 Israel

March 27, 2006

1 Introduction

A long-standing vision of science is to make robots part of our everyday life. Robots can be used in a variety of fields, such as manufacturing, home assistance, transportation and medicine. Today such use is problematic, due to technological constraints and the high cost of existing solutions. The technological constraints are addressed by many scientists around the globe; examples of such efforts are [3], [1] and [4]. One way to deal with high costs is mass production: many sophisticated, high-technology products, such as automobiles, personal computers and cellular phones, became affordable to the masses as a result of being produced in huge quantities. Another aspect of economic success is the ability of one product to perform many different tasks; the personal computer is an example of such a product. We believe that a robot that is versatile, mass-produced and technologically advanced can repeat the success of the personal computer and usefully integrate into the everyday life of humans and industry.
2 Basic Abilities of a Mobile Robot for an Indoor Environment

Each task of the robot consists of collecting and processing information, moving, and manipulating objects; [2] implements all of these except manipulation. Movements should bring the robot to desired locations without colliding with obstacles on its way. The state of obstacles, or of objects intended for manipulation, can change over time, and the robot should react to those changes. Some tasks require detecting and interacting with humans. Manipulation requires accurate detection of the positions of the object and of the robot manipulator. An artificial indoor environment makes motion and detection easier. The robot should initiate some tasks, while the user can initiate others. Therefore, the robot must have the following basic abilities:

1. Detect its location in space
2. Move in an indoor environment, without colliding with obstacles
3. Detect certain kinds of objects
4. Manipulate objects
5. React to dynamic environments
6. Interface with the user

3 Research Objectives

Our aim is to implement the abilities mentioned above using off-the-shelf hardware. We want the implementation to be robust enough that the platform is suitable for a variety of indoor tasks. We decided to build two use cases for our platform:

1. Transferring Work In Process (WIP) parts between work stations in the Computer Integrated Manufacturing (CIM) laboratory (Figure 1)
2. Party Robot - a robot that interacts with people, serves snacks and dances at a party

4 Hardware

In this section I will describe each of the robot's hardware components.

1. SICK LMS200 laser scanner: a range sensor that scans the plane in front of the robot, parallel to the floor. It gives the distance to each obstacle in its field of view. The opening angle of the scanner is 180°.
2. LaserNav laser scanner: this sensor is able to detect bearings to special landmarks, called beacons.
Those beacons are made of retroreflective material, so they return the laser beam to the scanner, unlike other objects, which scatter the beam. The sensor deciphers the bearings of the landmarks and sends them to the PC.
Figure 1: Use Case 1 illustration.

3. MRV4 - Mobile Robot Research Vehicle: a high-end, high-payload, advanced research robot. Some of its characteristics: velocity up to 24 km/h, payload up to 150 kg, turn-on-the-spot maneuverability.
4. Pentium 4 personal computer.
5. Two Fire-i cameras.
6. MICROBOT MiniMover 5 arm or a self-designed pallet buffer.

See pictures of the robot in Figure 2.

Figure 2: Pictures of the robot. (a) Side view. (b) Front view, with chocolate snack.

5 Implementation of Robot Sub-Systems

In this section a short background on the implementation of each of the robot capabilities mentioned in Section 2 will be given. The relations between the software modules of the robot and their related hardware are illustrated in Figure 4.
5.1 Detect its location in space

Localization is the detection of the robot's location and orientation in space. We have chosen to implement the method presented in [5], since it is CPU-efficient, potentially reliable, and uses the equipment that is available to us. In order to make this algorithm suit our needs, we need to overcome several problems related to environmental constraints: distortion, measurement inaccuracies, and misdetection due to partial occlusion. Distortion is caused by mechanical deficiencies of the equipment - the robot and the LaserNav sensor. Measurement inaccuracies are errors in the known locations of the beacons. Misdetection of a beacon index may be caused by partial concealment of one bit of the beacon: such an event can cause the LaserNav sensor to read an incorrect value of that bit, so that it decodes an incorrect beacon index. An example of beacon index misdetection is illustrated in Figure 3.

Figure 3: Beacon index misdetection example.

After these improvements, the system will be tested comprehensively. We intend to design a software module that can be used in other systems as well.

5.2 Move in an indoor environment, without colliding with obstacles

We use the SICK LMS200 laser scanner to detect obstacles. We map the obstacles into a matrix, and use the A* algorithm to generate paths in the space of configuration-space obstacles (C-obstacles). Another aspect of the work on this subject is the calibration of the SICK LMS200 laser with the LaserNav laser.

5.3 Detect certain kinds of objects

In order to implement our use cases, the only object we must detect is a human. We intend to use the output of the SICK LMS200 laser scanner, detecting certain patterns in the scans. From those patterns we will infer the presence of a human in front of the robot, using anthropomorphic features. The detection will be confirmed by checking the corresponding area in images from the cameras on the robot. This method should provide fast and reliable detection.
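The scan-pattern idea above can be sketched as follows. This is a minimal illustration, not the proposal's actual detector: the gap threshold and the leg-width limits are assumed values chosen for the example, and a real system would tune them experimentally.

```python
import math

# Illustrative thresholds (assumptions, not calibrated values):
GAP = 0.10                       # break clusters where consecutive points jump > 10 cm
LEG_MIN, LEG_MAX = 0.05, 0.25    # plausible width range of a human leg, in metres

def scan_to_points(scan):
    """Convert (bearing_deg, range_m) laser readings to Cartesian (x, y)."""
    return [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
            for a, r in scan]

def clusters(points):
    """Group consecutive scan points separated by less than GAP."""
    groups, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) < GAP:
            current.append(q)
        else:
            groups.append(current)
            current = [q]
    groups.append(current)
    return groups

def leg_like(cluster):
    """A cluster is leg-like if its extent matches a human leg width."""
    width = math.dist(cluster[0], cluster[-1])
    return LEG_MIN <= width <= LEG_MAX

def human_candidate(scan):
    """True if the scan contains at least two leg-like clusters."""
    legs = [c for c in clusters(scan_to_points(scan)) if leg_like(c)]
    return len(legs) >= 2
```

A candidate returned by `human_candidate` would then be confirmed in the camera images, as described above.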
The calibration of the SICK LMS200 laser and the cameras will be done according to the method presented in [6].
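Once the laser-camera extrinsics are known, confirming a laser detection in an image amounts to projecting the laser points into pixel coordinates. A minimal sketch follows; the intrinsic matrix, the rotation, and the translation below are hypothetical placeholder values (the real ones would come from the calibration of [6]), and both frames are assumed to have the z axis pointing forward.

```python
import numpy as np

# Hypothetical calibration results, for illustration only:
K = np.array([[500.0,   0.0, 320.0],   # focal lengths and principal point (pixels)
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # laser-to-camera rotation (assumed aligned)
t = np.array([0.0, -0.30, 0.0])        # laser assumed 30 cm below the camera

def project_laser_point(p_laser):
    """Map a 3-D point from the laser frame to pixel coordinates (u, v)."""
    p_cam = R @ np.asarray(p_laser) + t    # rigid transform into the camera frame
    u, v, w = K @ p_cam                    # pinhole projection
    if w <= 0:
        return None                        # point is behind the camera
    return u / w, v / w
```

The image region around the projected points is then checked for visual evidence of a human.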
Figure 4: Modules of the robot and their related hardware.

5.4 Manipulate objects

This will be implemented by installing a manipulator arm on the robot, or by installing a buffer that contains pallets, which will be loaded by fixed robots at the workstations. High accuracy in measuring the location of the end effector is required in order to grab objects, and the accuracy of the robot localization system might be insufficient. In that case we intend to achieve better accuracy using the SICK LMS200 laser sensor. The laser will scan a model of a known shape; when the scan is processed, the points that belong to the shape will be separated from the rest of the scan, and the pose of the model relative to the robot will be estimated. Since the position and orientation of the model are known and fixed, such a measurement yields the position and orientation of the robot. In addition, coordination between the robots at the work stations and the mobile robot might be required.

5.5 React to dynamic environments

The high-speed sensors of the robot, together with the powerful Pentium 4 PC on board, will ensure a proper reaction to hard real-time constraints. We are considering distributing the system over two Pentium 4 computers on board the robot, if such a measure proves necessary.

5.6 Interface to the user

It is possible to connect to the robot through a TCP/IP-based console application and transfer commands. The console application provides a GUI and real-time monitoring of robot activity.

6 Conclusion

In this thesis I plan to implement the system described above and use it for the two applications. Some components of the system have already been implemented, but others still have to be developed. Moreover, the main challenge
is to integrate all the components into a working system and test it.

References

[1] J. Borenstein and Y. Koren. A mobile platform for nursing robots. IEEE Transactions on Industrial Electronics, pages 158-165, 1985.

[2] D. Rodriguez-Losada, F. Matia, R. Galan and A. Jimenez. Blacky, an interactive mobile robot at a trade fair. IEEE International Conference on Robotics and Automation (ICRA 2002), Washington, DC, USA, May 11-15, 2002.

[3] B. Salemi, J. Reis, A. Saifhashemi and F. Nikgohar. MILO: Personal robot platform. International Conference on Intelligent Robots and Systems, Edmonton, Canada, August 2005.

[4] R.D. Schraft, B. Graf, A. Traub and D. John. A mobile robot platform for assistance and entertainment. Industrial Robot Journal, 28:83-94, 2001.

[5] I. Shimshoni. On mobile robot localization from landmark bearings. IEEE Transactions on Robotics and Automation, 18(3):971-976, 2002.

[6] Q. Zhang and R. Pless. Extrinsic calibration of a camera and laser range finder. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2301-2306, 2004.