EXPERIMENTAL BILATERAL CONTROL TELEMANIPULATION USING A VIRTUAL EXOSKELETON

Josep Amat 1, Alícia Casals 2, Manel Frigola 2, Enric Martín 2
1 Robotics Institute (IRI), UPC / CSIC, Llorens Artigas 4-6, 2a pt., 08028 Barcelona, SPAIN
2 Dep. Automatic Control and Computer Engineering, Universitat Politècnica de Catalunya (UPC), Pau Gargallo n. 5, 08028 Barcelona

Abstract. The introduction of less invasive control interfaces usually carries with it new drawbacks, such as reduced perception and dexterity. This work addresses the experimentation of new means of perceptual feedback in teleoperation when the operator guides the task through a virtual exoskeleton, and consequently without any mechanical device that can be fitted with haptic actuators. The work describes the evaluation of augmented images and sound feedback as alternative means for bilateral control.

1. Introduction

When working on teleoperated tasks, the control of the slave arms can be performed by means of different elements: control devices, such as joysticks, or gesture-based systems using a suitable exoskeleton. The advantage of using a mechanical exoskeleton is that it can incorporate sensorial feedback, forces and torques, in a more natural and immersive way [1] than other devices such as joysticks [2]. Nevertheless, such mechanical structures are heavy and burdensome. With the aim of making teleoperation tasks easier through the use of exoskeletons, while avoiding the drawbacks of wearing mechanical elements, which are highly invasive, the possibility of using a virtual exoskeleton has been studied. This new interface is based on a stereoscopic vision system that detects the operator's arms and computes their spatial position [3, 4]. The first results obtained [5] were positive with regard to the operator's mobility and ease of movement. However, the system has three main drawbacks: first, movement delays when the operator moves too quickly; second, lower precision in the positioning of the slave arms; and third, and most important, the lack of force feedback. Force feedback helps significantly, in many cases, in the execution of a task, providing more safety and efficiency. The limitations of the system dynamics, which appear when the operator moves too quickly for the control, do not arise when the operator wears a mechanical exoskeleton, because its physical and ergonomic constraints restrict the operator's movements. Nevertheless, the control limitations of this more comfortable and agile system cease to bother the user after a short training period. The poor precision of the controlled movements, due to the measurement errors of the 3D vision system that detects and tracks the operator's movements, can easily be corrected by the operator by closing the loop through visual feedback during the execution of the teleoperated task. The lack of force feedback during teleoperation has motivated the study of alternative sensorial feedback. We have experimented with different systems based on the acoustic perception of the exerted efforts, as well as on the visualisation, on the screen, of synthetic images from which the effort exerted on the environment can be interpreted. This superposition does not reduce the visual capabilities. This work focuses on evaluating the improvement in efficiency achieved through these new means of sensorial feedback.
The perception of the visual and acoustic feedback is evaluated in relation to the different kinds of teleoperated tasks.

2. System description

Two stereo cameras focusing on the operator, together with a movement tracking system, act as a virtual exoskeleton that provides the teleoperation control unit with the commands produced by the operator's movements. Fig. 1 shows the structure of the system, a schema containing its main modules. The measurement of the forces and torques applied to the robot end-effector, as well as the camera that visualises the working area, provides additional information about the progress of the teleoperated task. These data are used to generate the sensorial feedback to the operator. The sensed data are converted both to sound and to synthetic images merged with the image visualised on the operator's screen, in such a way that the user can interpret the remote touch, forces, etc., both visually and acoustically. The whole process consists of two parts: first, sensor data processing to extract the required process information, and second, its transformation into different perceptive information, easier for the user to interpret through different perception means, in this case sound and augmented images, as described below.

Fig. 1. System overview

2.1 Robot guidance from gestures

Through the detection and tracking of the operator's arms, a vision system interprets simple gestures for robot guidance. The 3D perception of the operator's arms is based on fitting a rough, multicylindrical model of a person to the moving images obtained from two or more cameras. Fig. 2 shows the two camera views of the scene with the superimposed multicylindrical model adjusted to the body.

Fig. 2. Adjustment of the model to the operator's arms
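The paper does not detail how the two camera views are combined into a 3D arm position. A minimal sketch of the underlying triangulation step, assuming calibrated cameras, might look as follows; the projection matrices and pixel coordinates are hypothetical placeholders, not values from the system.

```python
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices of the two calibrated cameras;
# in a real setup these come from camera calibration.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)
P2 = np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])]).astype(np.float32)

def triangulate_keypoint(pt_cam1, pt_cam2):
    """Recover the 3D position of one tracked arm point (e.g. the wrist)
    seen at pixel coordinates pt_cam1 and pt_cam2 in the two views."""
    x1 = np.asarray(pt_cam1, dtype=np.float32).reshape(2, 1)
    x2 = np.asarray(pt_cam2, dtype=np.float32).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4x1 result
    return (X_h[:3] / X_h[3]).ravel()             # Euclidean 3D point

# Example: the same wrist detected in both images (hypothetical pixels)
wrist_3d = triangulate_keypoint((320, 240), (305, 241))
```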

The gesture-based guidance system operates on the information provided by a model of the human body, specifically designed to recognise a human body shape. Several singular points are detected from features extracted from a sequence of images. A first classification of these points assigns them to the most significant body parts, the head and the arms. The search for singular points and their classification into these relevant parts also relies on the prediction and tracking of the trajectory of each part. The tracking algorithm can compensate for occasional detection losses of the recognition system, provided they last a short enough period of time. The recognition system provides new position references for the body parts when they appear clearly in the image for the first time or after a short occlusion period.

2.2 Image generation

The purpose of the image generation module is to provide the user with visual information derived from the force and torque data, so as to increase the perception feedback and thus create a more immersive environment. The goal of the added visual information is twofold: first, to indicate the magnitude and direction of the forces and torques applied to the robot end-effector, and second, to overdraw a synthetic image over the areas of the image where visibility is poor due to the task itself (smoke, turbidity, occlusions...). The visual indication of the applied forces and torques can be displayed in two different ways according to the user's preferences: either adding arrows and reference axes to the images, or changing the colour of the area of the image corresponding to the point of application of the forces. In the latter case, the hue value of the corresponding area is rotated by an angle proportional to the measured force. The information to be drawn in the images is built by a 3D graphics engine (OpenGL based) that maintains a three-dimensional model of the robot and the scene. Since virtual images are used to enhance video data, the work can be classified as augmented reality, according to Milgram's taxonomy [6] of real-world and virtual-reality interaction.
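As an illustration of the hue-based force display, here is a minimal sketch, assuming OpenCV and a known image region where the force is applied; the constant mapping Newtons to hue degrees is an invented placeholder, not a value from the paper.

```python
import numpy as np
import cv2

DEG_PER_NEWTON = 4.0  # hypothetical scale: hue rotation per unit of force

def rotate_hue(image_bgr, roi, force_newtons):
    """Rotate the hue of the region where the force is applied, by an
    angle proportional to the measured force magnitude."""
    x, y, w, h = roi
    patch = cv2.cvtColor(image_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    # OpenCV stores hue as 0..179 (half-degrees), hence the division by 2.
    shift = int(force_newtons * DEG_PER_NEWTON / 2) % 180
    patch[..., 0] = (patch[..., 0].astype(int) + shift) % 180
    out = image_bgr.copy()
    out[y:y+h, x:x+w] = cv2.cvtColor(patch, cv2.COLOR_HSV2BGR)
    return out
```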
On the other hand, when a given area of the image loses clarity, making remote teleoperation difficult, a synthetic image corresponding to the model of the missing region is superposed in that position. With the aim of improving efficiency without losing the available image information, a system for mixing the video and the synthetic images has been developed. The system does not operate in a fixed way, switching from the real image to synthetic data, but performs a gradual mixing that superposes the synthetic image over the real one according to two criteria: at the local level in the image, making the synthetic image grow progressively over the zones with low visibility; and at the model level, solidifying the modelled object bodies in accordance with the level of visibility, from minimally invasive wire-frame models up to solid models. Achieving this progressive image superposition has required a local visibility measure function based on the local gradient histogram. The objective is to progressively superpose the synthetic video over the real images acquired from the scene in real time, based on the measurement of C(x, y), the local clarity, or image visibility, function.
The appearance of the synthetic image over the real one is therefore not binary but progressive, as a function of the clarity or medium transparency. This gradual change affects both the spatial extent, that is, the boundaries between the real and the mixed area move progressively as a function of the clarity or medium transparency, and the weight with which the synthetic image appears over the real one in the mixed area [7]. The adequate placement of the superposed synthetic data over the real image, the registration process, is affected by system delays that can produce a mismatch between the two images, so computer vision techniques are used to localise and segment the robot tool in the images, enabling the dynamic correction of the positioning error. Figure 3 shows the process followed to create images displaying the real scene with force indications taken from the sensors placed on the robot end-effector. First, the system obtains the image of the task scene from a calibrated camera. Additionally, the robot controller provides the position and orientation of the robot tool, along with the force and torque measured at the robot wrist. From the contours of the original image, and using the camera calibration data and the robot localisation, the robot tool is segmented. In parallel, the graphics engine creates a foreground image with the graphical force indications. This on-image indication can be produced in several ways: arrows showing the torque values, arrows showing the force direction and magnitude, or modification of the hue of the pixels in the virtual image. The teleoperation control module selects the desired representation at any moment through a visualisation options parameter. Some augmented images are shown in the results section.
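The paper does not give the exact form of C(x, y). A minimal sketch of the progressive mixing idea, using the local mean gradient magnitude as a stand-in for the gradient-histogram measure, could look like this:

```python
import numpy as np
import cv2

def local_clarity(gray, win=15):
    """Crude visibility map C(x, y): locally averaged gradient magnitude,
    normalised to [0, 1]. A stand-in for the gradient-histogram measure."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.blur(cv2.magnitude(gx, gy), (win, win))
    return np.clip(mag / (mag.max() + 1e-6), 0.0, 1.0)

def progressive_mix(real_bgr, synthetic_bgr):
    """Superpose the synthetic image where the real one lacks clarity:
    the weight grows continuously as visibility drops, never a hard switch."""
    gray = cv2.cvtColor(real_bgr, cv2.COLOR_BGR2GRAY)
    c = local_clarity(gray)[..., None]            # C(x, y) in [0, 1]
    mixed = (c * real_bgr.astype(np.float32)
             + (1.0 - c) * synthetic_bgr.astype(np.float32))
    return mixed.astype(np.uint8)
```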

Fig. 3. The image generation system

2.3 Sound generation

The aim of the sound generation module is to render the significant remote audio signals into new sounds, as realistic as possible, enabling the user to relate their characteristics to the effort being exerted. In order to construct an effective augmented sound, two auditory effects are considered: a 3D sound effect and effort magnification. The interaural time difference is a common effect used to create the feeling that a sound comes from a particular direction. It usually involves controlling the amount of delay between the stereo sound channels (see [8] for other possibilities). This effect is used here to provide the user with the auditory perception of the point of contact between the tool tip and the object surface. For instance, when a milling operation removes material from the surface of an object, a milling stereo sound is generated in the operator's headphones. The required data are acquired from a piezoelectric sensor placed on the robot end-effector. The virtual position of the sound source is determined considering the viewing direction, the estimated point of contact and the tool center point. The user can modify or switch off this sound effect by changing the auditory options of the teleoperation program interface.
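A minimal sketch of the interaural-time-difference effect, delaying the channel of the far ear by an amount derived from the azimuth of the virtual sound source. The maximum-delay constant and the linear azimuth-to-delay mapping are simplifying assumptions, not values from the paper.

```python
import numpy as np

FS = 44100          # sample rate (Hz)
MAX_ITD = 0.0007    # ~0.7 ms, roughly the largest interaural delay for a human head

def apply_itd(mono, azimuth_deg):
    """Pan a mono signal by delaying the far ear. azimuth_deg in [-90, 90],
    negative meaning the source is to the left. Linear mapping is a simplification."""
    delay = int(abs(azimuth_deg) / 90.0 * MAX_ITD * FS)   # delay in samples
    delayed = np.concatenate([np.zeros(delay), mono])
    padded = np.concatenate([mono, np.zeros(delay)])
    # Source on the left: the right ear hears it later, and vice versa.
    left, right = (padded, delayed) if azimuth_deg < 0 else (delayed, padded)
    return np.stack([left, right], axis=1)                # stereo frames

# Example: a 1 kHz tone perceived 45 degrees to the right of the viewing direction
t = np.arange(FS) / FS
stereo = apply_itd(np.sin(2 * np.pi * 1000 * t), azimuth_deg=45)
```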

Fig. 4. The audio generation system

To produce the force effort magnification, the system isolates the characteristic sound of the machining process from other sounds, such as those generated by the mill actuator or by the gears. Once the milling sound is detected, the system replaces the original sound with a friendlier synthetic sound. This synthetic sound slightly resembles the original milling sound but is less noisy and is shifted to lower frequencies. To make the sound more natural, the harmonics of the synthetic sound are modified by means of the main milling frequency (f_E). Then, with the aim of increasing the user's perception of the milling efforts, the synthetic sound is shifted in frequency, the magnitude of the shift being derived from the average mill actuator current (I_A). The synthetic sound is also modulated in amplitude. The modulation factor chosen is the maximum between the magnitude of the force vector F and the magnitude of the torque vector T multiplied by the length of the tool.
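The synthesis chain can be summarised as: build a cleaner harmonic tone at lower frequencies from the main milling frequency f_E, shift it by a term derived from the actuator current I_A, and scale its amplitude with max(|F|, |T| x tool length). The following sketch captures that chain; all scaling constants and the exact harmonic structure are invented placeholders.

```python
import numpy as np

FS = 44100

def synth_milling_sound(f_e, i_a, force_vec, torque_vec, tool_len,
                        dur=0.1, k_shift=20.0, n_harmonics=4):
    """Replace the raw milling noise with a cleaner synthetic tone.
    f_e: main milling frequency (Hz); i_a: average actuator current (A);
    k_shift: hypothetical Hz-per-ampere frequency-shift constant."""
    t = np.arange(int(FS * dur)) / FS
    base = f_e / 2.0 + k_shift * i_a       # lower-pitched, current-shifted carrier
    # Sum a few decaying harmonics of the shifted milling frequency.
    tone = sum((0.5 ** k) * np.sin(2 * np.pi * base * (k + 1) * t)
               for k in range(n_harmonics))
    # Amplitude follows the effort: max of |F| and |T| times the tool length.
    amp = max(np.linalg.norm(force_vec), np.linalg.norm(torque_vec) * tool_len)
    return amp * tone / n_harmonics

# Example: 800 Hz milling frequency, 1.2 A actuator current (hypothetical values)
chunk = synth_milling_sound(800.0, 1.2, [2.0, 0.0, 1.0], [0.1, 0.0, 0.0], 0.12)
```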

3. Results

The experimental system has been tested by several users in order to evaluate qualitatively the improvement in teleoperation efficiency, as well as to evaluate whether the effectiveness of the auditory and visual indications improves with the use of these new means of feedback for bilateral control. Figure 5 shows a snapshot of the robot and its environment in the working scenario. The task performed consisted in milling a hole in a plastic part, deep enough to make it possible to insert a peg of known dimensions into it.

Fig. 5. The milling teleoperated task

Figs. 6 and 7 show the two ways of representing the generated visual information of forces and torques. While in Fig. 6 the representation is symbolic, with arrows representing the torques around each axis and the magnitude and orientation of the force vector, in Fig. 7 the chroma changes according to the magnitude of the measured force. The majority of users consider the augmented sound generated by the system very valuable. The synthetic sound generated, shown in Fig. 8c, is rated as clear, noise-free and meaningful by the majority of the users. This sound is synthesised from the spectrum of the real sound, as described above. The additional help of the 3D sound effect is generally considered correct, but it annoys some of the users, because it can be perceived as suggesting a false movement of the tool. Other users understood the behaviour of the sound effect and used it efficiently to generate the circular trajectory around the hole.

Fig. 6. Torque (left) and force (right) representation, adding symbolic information to the images.

Fig. 7. Force representation changing the chroma of the image according to the measured force (from left to right, the force detected at the tool tip increases).

Fig. 8. Results of the audio generation: a) power spectrum of the mill actuator sound while rotating freely, with the main characteristic frequencies (A, B, C, D); b) power spectrum of the mill actuator sound while milling, with the main milling frequency (E); c) the generated synthetic milling sound. (All spectra span 0-6000 Hz.)

4. Conclusions

The continuous introduction of new techniques for the execution of teleoperation tasks requires the experimentation of new means of perceptual feedback. Having developed a vision-based virtual exoskeleton, new operating situations appear, since the operator does not wear any physical device and therefore haptics is not possible. The system described proposes and evaluates, as additional information to the remote video images, new feedback means consisting of augmented images together with sound generation. The video images are merged with a visual representation of the forces applied on the robot tool, while a synthetic sound is generated from the sound of the real working environment. The subjective perception of the different means of information feedback does not allow us to decide on a single best option; therefore, the choice among the different feedback representations is

left to the user according to their preferences. Further experimentation, and refinement of the feedback data based on its conclusions, is necessary. The extracted sensorial data can be used not only for the bilateral control of the teleoperation, in the sense of generating data perception for the user, but also to introduce new correction strategies for the robot arm. This aspect opens new possibilities towards more advanced teleoperated systems able to receive higher-level commands.

5. References

[1] J. Ohya, T. Miyasato, R. Nakatsu. Virtual Reality Technologies for Multimedia Communications. In Mixed Reality: Merging Real and Virtual Worlds. Eds. Y. Ohta, H. Takamura. Springer-Verlag, New York, 1999.
[2] M. Bergamasco. Force replication to the human operator: The development of arm and hand exoskeletons as haptic devices. In The Seventh Int. Symp. on Robotics Research, Germany, pp. 173-182, 1995.
[3] A. Azarbayejani, A. Pentland. Real-time self-calibrating stereo person tracking using 3-D shape estimation from blob features. In Proc. Int. Conf. on Pattern Recognition (ICPR), Vienna, 1996.
[4] J. Amat, A. Casals, M. Frigola, J. Pages. Possibilities of man-machine interaction through the perception of human gestures. Contributions to Science 1(2):159-173, Institut d'Estudis Catalans, Barcelona, 1999-2000.
[5] J. Amat, M. Frigola, A. Casals. Virtual Exoskeleton for Telemanipulation. In Experimental Robotics VII, Lecture Notes in Control and Information Sciences. Eds. D. Rus, S. Singh. Springer, pp. 21-31, 2001.
[6] P. Milgram, H. Colquhoun Jr. A Taxonomy of Real and Virtual World Display Integration. In Mixed Reality: Merging Real and Virtual Worlds. Ohmsha (Tokyo) / Springer-Verlag (Berlin), pp. 5-30, 1999. ISBN 3-540-65623-5.
[7] A. Casals, J. Fernandez, J. Amat. Augmented reality to assist teleoperation working with reduced visual conditions. In Proc. IEEE Int. Conf. on Robotics and Automation, Washington, USA, 2002.
[8] A. Mouchtaris, P. Reveliotis, C. Kyriakakis. Non-minimum phase inverse filter methods for immersive audio rendering. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Vol. 6, pp. 3077-3080, 1999.