A Virtual Framework for Semi-Autonomous Robotic Surgery using Real-Time Spatial Mapping

A Virtual Framework for Semi-Autonomous Robotic Surgery using Real-Time Spatial Mapping

A thesis submitted to the Graduate School of the University of Cincinnati in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE in the School of Electronics and Computing Systems of the College of Engineering and Applied Science

April 2013

by

Sudhesh Sudhakaran Nair
Bachelor of Technology in Electronics and Communication, Cochin University of Science and Technology, June 2009

Committee Chair: Fred Beyette, Ph.D.

A Virtual Framework for Semi-Autonomous Tele-Robotic Surgery using Real-Time Spatial Mapping

Abstract

A suite of experiments was performed to investigate the influence of time-delay on teleoperation accuracy and efficiency during a trajectory tracking task. Performance was measured using RMS error (deviation from the ideal path), actual path length, and time to complete the task. It was found that beyond about 1.5 seconds of time-delay the task difficulty increased substantially, as reflected by increasing RMS error. Furthermore, efficiency was reduced, as reflected by increased time to complete the task. Starting at around 1.5 seconds of time-delay, subjects tended to adopt a discontinuous, move-and-pause strategy to improve accuracy at the expense of task completion time. With imposed pacing, RMS error continued to increase beyond 1.5 seconds, and some subjects were not able to complete the task in the allotted 90 second timeframe. A novel system was designed to perform the same task semi-autonomously. The system was built on real-time motion capture, spatial mapping, and virtual reconstruction based on those inputs. The task was divided into multiple segments, and Bezier curves were used to calculate a smooth path within each segment. The efficiency and feasibility of this semi-autonomous system were evaluated in different environments under varying degrees of motion, and the results are presented. The results indicate that the accuracy and efficiency of a semi-autonomous robotic procedure are a substantial improvement over those of human teleoperators under varying latencies.


Acknowledgements

First, I would like to thank the University of Cincinnati for accepting me for my graduate degree and for the financial assistance provided during the course of my academic stay. I would like to express my sincere gratitude to Dr. Grant Schaffner for his invaluable advice, assistance, and encouragement during the course of my graduate degree. He has always been available with his vast knowledge and experience whenever I encountered obstacles in the advancement of my research. Without his guidance, I would not have been able to complete my work in a professional and timely manner. I would like to thank Dr. Fred Beyette for taking time out of his busy schedule to provide me advice and guidance at several stages of my research. I would also like to thank Dr. Xuefu Zhou for taking time out of his busy schedule to serve as a member of my defense committee. I would also like to thank my colleague Christopher Korte, who has been a source of great assistance with his suggestions during the course of this research. Last, but not least, I would like to thank my family and all my friends for their unwavering support and faith in me.

Table of Contents

Abstract ... ii
Acknowledgements ... iv
Table of Contents ... v
List of Figures ... viii
List of Tables ... xi
Acronyms and Symbols ... xii
1 Introduction
1.1 Background and Motivation
1.1.1 Tele-robotic abdominal surgery
1.1.2 da Vinci Surgical System
1.1.3 NASA NEEMO Mission
1.1.4 Surgical Modeling
1.2 Research Goals and Approach
2 Methods
2.1 Quantifying the effects of time delay on teleoperation
2.1.1 Relevance to Robotic Surgery
2.1.2 Results
2.1.3 Discussion
2.2 Virtual Model Development
2.2.1 OpenGL Platform
2.2.2 Virtual Model of the Test Article
2.3 Real-Time Motion Capture
2.3.1 BTS SMART DX
2.3.2 BTS SMART DX with Real-Time Capture
2.4 Simulating a Dynamic Environment
2.4.1 Model CAL 50 Vibration Exciter
2.4.2 Simulating external and internal movements
2.5 Real-Time Spatial Mapping
2.5.1 Mapping External Movements (Environmental Dynamics and External Body Movements)
2.5.2 Mapping Internal Movements (Body Internal Movements)
2.6 Motion Prediction
2.7 Smooth Path Generation
2.7.1 Bezier Curves
2.7.2 Bezier Curves applied to the Virtual Model
2.8 Telecommunication Links
2.8.1 The UDP Link
2.8.2 The TCP/IP Link
2.9 Robot Controller
2.10 Integrating all Functional Modules
3 Results
3.1 Phase 1
3.2 Phase 2
3.3 Phase 3
3.4 Phase 4
3.5 Discussion
4 Conclusion
Works Cited
Appendix A

List of Figures

Figure 1: The Da Vinci Tele-robotic Surgical System consists of three parts: A. the surgeon's console; B. the video electronics tower; and C. the robot's tower supporting three robotic arms.
Figure 2: The da Vinci Tele-robotic Surgical System permits the surgeon to perform an operation on a patient from a remote site. Currently the FDA requires the surgeon to sit physically in the same room as the patient on whom he is operating.
Figure 3: Experiment setup: a) test article showing gates and robot end-effector marker, b) ideal path through gates, c) slave location setup, including robot, test article, motion capture system, and stereo-vision cameras, d) master location including Phantom Omni 6-dof controller and stereo-vision headset.
Figure 4: Summary of RMS error, path length, and time-to-complete results for the phase 1 experiment (2-D). The robot end-effector was limited to movement only in the horizontal plane and subjects viewed a monoscopic image of the test article from directly above.
Figure 5: Time to complete the task for phase 2 (3-D path, time-delay order not randomized).
Figure 6: Comparison of average RMS error for phase 2 and phase 3. Phase 3 errors were consistently higher than phase 2 errors, and with greater variance, due to randomization of time-delay order.
Figure 7: Comparison of average path length for phases 2 and 3. Path length was consistently longer in phase 3 due to the randomization of time-delay. Variance, however, was almost identical between the two experiment phases.
Figure 8: The average time to complete the task generally increased as time delay increased. The results for phase 3 were nearly identical to those for phase 2, although the time to complete the task was consistently slightly higher for phase 3.
Figure 9: Comparison of RMS error for timed and non-timed groups. The RMS error was higher under conditions of subject pacing (timed), but the trend was more consistent as reflected by the tighter curve fit.
Figure 10: Comparison of RMS error for Experienced versus Novice teleoperators. The experienced group performed substantially better in terms of trajectory tracking accuracy, as reflected in the substantially lower RMS values.
Figure 11: Graphical model of the test article, top view.
Figure 12: Graphical model of the test article, side view.
Figure 13: Graphical model of the test article, with all gates marked.
Figure 14: Test article with markers at four corners, side view.
Figure 15: Test article with markers at four corners, side view.
Figure 16: Test article with target gates replaced with markers.
Figure 17: A linear Bezier curve.
Figure 18: A quadratic Bezier curve.
Figure 19: A cubic Bezier curve.
Figure 20: Virtual model of the test article with simulated Bezier curve paths.
Figure 21: Virtual model of the robot following the path generated by Bezier curves.
Figure 23: Phase 3: average number of errors vs. maximum peak-to-peak displacement in millimeters.
Figure 24: Phase 3: average duration vs. maximum peak-to-peak displacement.
Figure 25: Effects of frequency changes on number of errors.
Figure 26: Effects of frequency changes on time duration.
Figure 27: New path generated by the algorithm when one of the target gates is changed.

List of Tables

Table 1: Experimental phases and statistical results.
Table 2: Comparison of efficiency of semi-autonomous operation to tele-operation under varying time-delays.
Table 3: Phase 2 semi-autonomous operation results.
Table 4: Phase 3 semi-autonomous operation results.
Table 5: Phase 3 semi-autonomous operation results: effects of frequency on efficiency of operation.

Acronyms and Symbols

TCP-IP  Transmission Control Protocol / Internet Protocol
UC      University of Cincinnati
dof     Degree of Freedom
NASA    National Aeronautics and Space Administration
NEEMO   NASA Extreme Environment Mission Operations
FDA     Food and Drug Administration
3-D     Three Dimensional
FLS     Fundamentals of Laparoscopic Surgery
RMS     Root Mean Square
UDP     User Datagram Protocol


1 Introduction

Advances in the fields of robotics and telecommunications have enabled surgeons to operate robotically on patients from a distance, a practice known as telesurgery. In theory, telesurgery could be performed in remote or inaccessible locations such as rural areas, battlefields, polar regions, or even during spaceflight, in situations where a patient would not otherwise have access to timely and capable surgical care. The presence of the da Vinci robotic surgery system in over 700 medical centers worldwide has led to widespread acceptance of robotic surgery. In one of the first telesurgery demonstrations, a surgical team in New York City performed a laparoscopic cholecystectomy on a woman in Strasbourg, France under a 155 millisecond time delay (Marescaux, 2001) (Green PE, 1991). A trauma pod robotic system under development by SRI International could potentially be deployed into combat areas or remote regions by helicopter or air drop (Friedman DCW, 2007). Robotic surgery during spaceflight is of particular interest to NASA, which has identified a 90% probability of serious illness or injury during a future long range space mission (GH, 2002); such an incident might require surgical intervention before a return to Earth is possible. At the same time, NASA cannot guarantee that a surgeon would be present on all long range flights (beyond Low Earth Orbit). To address this need, the SRI International M7 robot has been used to perform telerobotic surgery on human phantoms in a weightless environment (NASA C-9 aircraft) and in the Aquarius subsea habitat (NEEMO 9 & 12) (Satava RM, 2000) (HH, 1993). While telesurgery is conceivable in these scenarios, all of them involve inherent time-delay due to telecommunication latency, which can reduce accuracy and efficiency.
The RAVEN surgical robot, developed by the University of Washington, was used to perform the Fundamentals of Laparoscopic Surgery (FLS) training tasks with simulated time delays of 0, 250 and 500 milliseconds. The experiment showed a higher error rate with increased delay [7]. Sheridan performed an experiment in which the user operated a system with two degrees of freedom and was required to grasp a small block with the slave system under simulated time delays of 0.0, 1.0, 2.1 and 3.2 seconds. The subjects had a clear view of the slave system but not of the master system. Sheridan predicted the time to complete the task using:

t(I) = t_0(I) + (t_r + t_d)·N(I) + t_d    (1)

where I is the index of task difficulty, t_0(I) is the time it took the participant without a time delay, t_r is the participant's reaction time at the start (set at 0.2 seconds), N(I) is the number of corrective movements, and t_d is the time delay. As the time delay increased, the time to complete the task increased because a move-and-wait strategy was employed (Sheridan TB, 1963). These experiments demonstrated that tele-robotic surgery is feasible, but becomes problematic with increasing time-delay. However, there is little description of the effect of latency on task accuracy. Moreover, there is no clear indication of the threshold beyond which teleoperation performance becomes unacceptably degraded. One method to reduce these effects of latency is to use semi-autonomous robots. Such robots would be able to perform various surgical procedures on their own (automated or partially automated surgical procedures) under human supervision. Semi-autonomous robotic surgery would expand human presence in space by providing prompt and comprehensive access to surgical care without a surgeon present. The goal of this research is to design such a flexible and expandable framework that uses a semi-autonomous robot to perform a simple surgery-mimicking task, and to compare its performance to that of a teleoperated robot under varying software-simulated latencies.

1.1 Background and Motivation

The first robots introduced into clinical service served as camera holders. The FDA approved the use of robots as camera holders in clinical environments in 1994, and later approved the use of a second robotic camera holder, the EndoAssist (Armstrong Healthcare Ltd., United Kingdom). Such robots are controlled directly by the surgeon, who stands at the side of the operating table. Surgical robots have since evolved into tele-robotic surgical platforms which allow surgeons to operate on patients from remote locations.
The surgeon and tele-robot work in a master-slave relationship. The robot instruments reproduce the movements of the surgeon's hands while providing the output from three-dimensional (3-D) imaging systems as visual feedback to the surgeon. This process of providing surgical care where patient and surgeon are separated by some distance is referred to as tele-surgery.

The ability to employ surgical robotics that can be manipulated remotely and perform semi-autonomous functions could greatly reduce morbidity and mortality associated with surgically treatable conditions in military and spaceflight environments by providing surgical care on a more timely basis. In addition, the ability to remotely perform medical tasks such as needle insertion, suturing, and ultrasound examination through semi-autonomous supervisory control can lead to new approaches in the delivery of care across a broad spectrum.

1.1.1 Tele-robotic abdominal surgery

Tele-robotic surgery, or tele-presence surgery, is the next step in the evolution of robotic surgery (Satava RM, 2000). In tele-robotic surgery, the surgeon sits at a computer console. The computer translates the movement of the surgeon's hands into motions of the robotic instruments. The surgical tele-robot, which is positioned near the patient, holds the camera and manipulates two or more surgical instruments. Because the surgeon does not need to be in direct physical contact with the patient, the surgeon and the computer console can be placed at a remote site. The surgeon acts as a master and the robot as a slave (HH, 1993). The feasibility of remote-surgeon tele-robotics was first demonstrated in 1991 (Green PE, 1991). Jensen (1996) reasoned that this technology would permit a surgeon at a remote site (such as an aircraft carrier) to operate on a distant patient (such as a wounded soldier on a battlefield) (Jensen JF, 1996). Several groups developed systems that were designed to replace one or more surgical assistants. The First-Assistant system, for example, was a non-electronic, pneumatically controlled robotic arm. The surgeon moved the device manually (ME, 1993). More recent robotic systems were designed to replace both the surgical assistant and the camera holder. In general, these robots were similar to camera-holding robots but were modified to hold surgical instruments.
The surgeon controlled these robots with either hand or foot (Arezzo A, 2000) (Partin AW, 1995). Innovation in surgery allows surgeons to provide better health care to their patients. The Automated Endoscopic System for Optimal Positioning (AESOP) was the first robot approved for use in surgery by the US Food and Drug Administration (FDA). After its approval in 1994, the system assisted surgeons by supporting an endoscope and repositioning it according to the

surgeons' instructions (Jacobs, 1997) (Sackier, 1997). Licensed by Computer Motion, Inc. (Goleta, CA), the AESOP was later incorporated into the Zeus robotic surgery system (Ghodoussi, 2002), which received FDA approval in September. The Zeus was used in the first transatlantic tele-surgery, performed between Manhattan, New York, USA and Strasbourg, France (Marescaux, 2001). The Zeus's major competitor was the da Vinci surgical robot, produced by Intuitive Surgical, Inc. (Mountain View, CA) and FDA approved in July 2000 (Guthart, 2000). In June 2003, the companies merged under the name Intuitive Surgical, Inc., and production of the Zeus and AESOP systems ceased (Sim, 2006). Other commercially available systems include the NeuroMate (which, along with ROBODOC, was produced by Integrated Surgical Systems, Inc. in Davis, CA, until 2005) (Cleary, 2001) (Lavalle`e, 1992) and the Naviot laparoscope manipulator (Hitachi Co., Japan) (Kobayashi, 1999). Several surgical robotic systems are currently in development around the world. The system designed at the University of Tokyo (Mitsuishi, 2003) has performed tele-surgical experiments throughout Asia. The NeuRobot (Hongo, 2002) has been used in clinical applications. Other systems include the Berkeley/UCSF laparoscopic tele-surgical workstation (Cavusoglu, 2003), the Light Endoscopic Robot (Berkelman, 2003), and the MC2E (Zemiti, 2007).

1.1.2 da Vinci Surgical System

The da Vinci surgical system consists of three parts: the surgeon's console (Fig 1A), the video electronics tower (Fig 1B), and the robot's tower supporting three arms (Fig 1C) (A, 2001). The surgeon sits in an ergonomically comfortable position at a console (Fig 2). His or her hands operate the master controls that act as the interface with the computer. The computer and the imaging systems complete the rest of the console. A tower holds the camera and an insufflator for the pneumoperitoneum. The robot has three arms.
The central arm holds the camera while the outer two arms hold the surgical instruments. The surgical instruments move with seven degrees of freedom and two degrees of axial rotation. The robot is placed near the surgical table but is not connected to it; instead it is connected to three operative trocars. The computer keeps track of the 3-D location of the trocar's tip, not the tip of the surgical instruments. The da Vinci offers a true 3-D imaging system based on stereo imaging. The primary magnifying system is 12 mm in diameter and contains two separate 5-mm magnifiers. Two three-chip video

cameras telecast the image to two separate CRT screens. A synchronizer keeps the images from the two cameras in phase. Mirrors reflect the images from the CRT screens up to the stereo viewer in the surgeon's console. In this system, the left and right images remain separated from the magnifiers to the surgeon's eyes. As with binoculars, the right eye sees the right image and the left eye sees the left image.

Figure 1: The Da Vinci Tele-robotic Surgical System consists of three parts: A. the surgeon's console; B. the video electronics tower; and C. the robot's tower supporting three robotic arms. (Ballantyne)

Figure 2: The da Vinci Tele-robotic Surgical System permits the surgeon to perform an operation on a patient from a remote site. Currently the FDA requires the surgeon to sit physically in the same room as the patient on whom he is operating. (Ballantyne)

The technology for robotic surgery is evolving. Current systems already provide some advantages over conventional laparoscopic surgery techniques through effective 3-D visualization, increased comfort for the surgeon, increased control over the camera, and a good range of motion for the surgical arms. However, these systems are also expensive and bulky, lack some desirable features, and require extensive training. If robotic surgery is to become truly effective and widespread, current features will need to be refined and additional features must be added. For this to happen, it is imperative that the limitations of tele-surgery, as well as avenues for its future improvement, are recognized and explored.

1.1.3 NASA NEEMO Mission

During three NASA Extreme Environment Mission Operations (NEEMO) missions in the undersea Aquarius research station, tele-manipulation systems were successfully used in simulations of surgical procedures. Three robotic systems were deployed in the National Oceanographic and Atmospheric Administration (NOAA) habitat for evaluation during NEEMO

7, 9 and 12. Researchers inside the habitat conducted a variety of experiments to test the efficiency, performance and feasibility of teleoperated surgical systems in this remote and extreme environment. During the three missions, components of the Automated Endoscopic System for Optimal Positioning (AESOP), the M7 Surgical System, and the RAVEN were deployed and evaluated based on a number of parameters such as communication latency and semi-autonomous functions. The M7 Surgical System was modified to give a remote surgeon the ability to insert a needle into simulated tissue with ultrasound guidance. This marked the first time that a needle was inserted into a phantom blood vessel with remote image-guidance using an ultrasound probe, and it resulted in the world's first semi-autonomous supervisory-controlled medical task (Charles Doarn, May 2009).

The promising results of these tele-surgery experiments motivate further development of telesurgical autonomous robotics as a significant tool for healthcare delivery in extreme environments, especially for future application in the medical care of soldiers, patients in remote locations, or astronauts on long range space exploration missions. As humankind ventures into harsh and extreme environments, medical care capability, including surgical care, will be vital for supporting health and survivability. Refinement in tele-surgical care, complemented with autonomous technologies, will serve as a significant adjunct in aerospace and military medicine and will naturally migrate towards civilian care to expand accessibility to high quality surgical care across the globe.

1.1.4 Surgical Modeling

Motion and video data from Intuitive Surgical's da Vinci Surgical System were used to evaluate surgical skill, provide surgical training feedback, and document essential aspects of a surgical procedure or task in experiments conducted at Johns Hopkins University (Henry C. Lin, 2005).
With the advent of robot-assisted minimally invasive surgical systems, the ability to record quantitative motions synchronized with video data has opened up the possibility of creating simple, descriptive, mathematical models to recognize and analyze surgical training and performance. The Lin experiments attempted to recognize simple elementary motions that occur in a suturing task performed on the da Vinci robot. The task was divided into functional modules

similar to those in other pattern recognition schemes, such as automatic speech recognition. The functional modules include methods such as local feature extraction, feature normalization, linear discriminant analysis, Bayes classification, and computer vision. Lin further performed a validation study using a 15-fold cross validation on the expert data, i.e., using the machine learning algorithm to evaluate the 15 different tests. The experimental results showed that basic surgical tasks can be transformed into a labeled sequence of surgical gestures. Thus, if a robot is trained to perform some segments of surgical tasks by itself, surgical tasks more complex than a relatively simple needle insertion could potentially be performed semi-autonomously.

As these technologies advance, the delay in transferring data from the surgeon to the robot, and vice versa, even in close quarters, becomes a limiting factor. During the New York-France telesurgery, this delay was less than 200 milliseconds. Swift data transfer was made possible by a high-speed transmission system linking the equipment by a transatlantic fiber-optic service running at 10 Mbits per second (Rassweiler J, 2001). While this technology is advancing rapidly, its distribution is not widespread, thus limiting the feasibility of conducting surgery in underserviced areas. The presence of time-delay also imposes restrictions on applying this technology across large distances, where the data transmission time is higher. For example, the current delay in communication between the Earth and the Moon is close to two seconds. Round trip transmission (commands sent to the robot, followed by video images or other sensory signals returned to the operator) would double the delay between control issuance and perception, and teleoperation accuracy and efficiency would likely be severely hampered by a four second delay.
1.2 Research Goals and Approach

The central hypothesis of this thesis is that a unique combination of real-time spatial and temporal mapping of a simulated three-dimensional tool-tip path and the implementation of anticipatory control algorithms will enable semi-autonomous robotic surgery which can minimize the effects of latency.

The quality and efficiency of a surgical procedure can be measured on the basis of the following parameters:

1. Total tool tip path length: the total distance travelled by the tool tip during a surgical procedure.
2. Extent of tool tip path deviation: the amount of deviation from the expected path, quantified by root-mean-square (RMS) tracking error.
3. Number of surgical errors made during the operation: the number of times the tool tip has missed specific targets during the procedure.
4. Total time: the total time required to complete a procedure.

The specific goal of this thesis is to design a virtual framework which can be used to achieve the following:

a. Perform a surgical task semi-autonomously.
b. Reduce tool path deviation and path length due to time delay in a simulated robotic surgery.
c. Reduce surgical errors due to time-delay.
d. Shorten procedure time.
e. Compare the performance of the system in different dynamic environments (under varying degrees of motion of the test article in the environment).

The following design factors were taken into consideration:

a. Flexibility: the model should not be restricted to a single task; it should be easy to integrate additional tasks into the model in the future.
b. Universality: the model should have the ability to be installed, implemented and used on any machine and in any location.
c. Expandability: the model should be expandable to include more complex procedures, simulated tissues and environments.
d. Robustness: the model should be able to accommodate uncertainty and handle significant deviations in task parameters.
e. Accuracy: the surgical procedures should be accomplished with a level of precision that matches or improves upon that of a surgeon performing the procedure by hand.

f. Efficiency: the surgical procedure should be completed in an amount of time equal to or less than that of a surgeon performing the procedure by hand.

Experiments are carried out on a test article which represents a three-dimensional path that a surgical tool would be required to follow during a surgical procedure. This test article is placed in a dynamic environment which simulates organ/tissue movement during surgery due to internal body movement (breathing, muscle contractions) or environmental dynamics (vehicle motion).

The research strategy used in this thesis can be divided into the following steps:

1. Quantify the effects of time delay on teleoperation.
2. Develop a virtual model of the test article and its immediate environment.
3. Develop and use algorithms that divide the surgical task into several small steps.
4. Using Motion Capture System outputs, match the virtual model of the test article to the actual test article, so that movements and geometry changes in the test article are reflected in the virtual model simultaneously.
5. Using Bezier curve generation methods and adaptive algorithms, determine the path to be followed by the tool tip within the virtual model. If this path satisfies the predefined requirements, the output is passed to the robot controller, and the robot is thus controlled based on the path generated in the virtual model.
6. Quantify the efficiency of the semi-autonomous procedure based on the parameters defined earlier and compare it to the efficiency of teleoperation under varying latencies.

Semi-autonomous robotic surgery will be a unique, game-changing technology that will help expand human presence in space by minimizing medical risks associated with long-range space flight. However, the speed, accuracy and adaptability of path calculations and subsequent robotic motion need to be very high for such a system to be effective in real-world applications.
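The Bezier-based path generation in step 5 rests on evaluating Bezier curves between gate positions. As a minimal sketch (the control points and helper name below are illustrative, not taken from the actual framework), a Bezier segment of any degree can be evaluated with De Casteljau's algorithm:

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    De Casteljau's algorithm. control_points is a list of (x, y, z) tuples;
    the curve starts at the first point and ends at the last."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Repeatedly interpolate between neighbouring points until one remains.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical cubic segment between two gate centers; the two interior
# control points shape the tool tip's approach and departure directions.
ctrl = [(0, 0, 0), (10, 0, 20), (20, 30, 20), (30, 30, 0)]
path = [bezier_point(ctrl, i / 50) for i in range(51)]
print(path[0], path[-1])  # curve starts and ends at the gate centers
```

Linear, quadratic and cubic curves (Figures 17 to 19) differ only in the number of control points supplied; an adaptive algorithm would choose the interior control points per segment.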

2 Methods

2.1 Quantifying the effects of time delay on teleoperation

Experiments were performed using one slave system location (a laboratory at the University of Cincinnati (UC)) and two master system locations (a second laboratory at UC or a laboratory at SRI International in Menlo Park, California). The slave location contained the test article, robot, slave computer (to drive the robot), vision system, and motion capture system. The vision system consisted of a pair of Point Grey Flea2 cameras aligned to provide a stereoscopic view of the test article. A PA-10 robotic manipulator (Mitsubishi Heavy Industries, Tokyo, Japan) was used to execute the teleoperated path-following task.

The test article (Figures 3a and 3b) consisted of eight gates arranged in a square pattern. Each Y-shaped gate consisted of a crossbar with vertical springs on its ends supporting markers, and a single post supporting the crossbar. The gates were placed upright in holes in a plywood base arranged in a square array of 25.4 mm pitch that allowed multiple configurations to be adopted. Four high gates were placed at the corners of a square with sides of mm in length. Four low gates were placed at the midpoints of the sides, resulting in a three-dimensional path as shown in Figure 3b.

A motion capture system (BTS Bioengineering, Milan, Italy) consisting of eight infrared cameras was used to capture the robot end-effector motion. The camera system was calibrated to provide 0.5 mm positional accuracy (Figure 3c). Markers were initially mounted on the gate ends and the gate positions were recorded in a static capture. The gate markers were then replaced with non-reflective red beads that were visible to the operator but would not be picked up by the infrared motion capture system. This avoided difficulties with marker occlusion and confusion as the end-effector marker passed close to the gate markers.
The master laboratory housed the control hardware, consisting of a Phantom Omni 6-dof input device (Sensable, Triangle Park, NC, USA) and the master computer (Figure 3d). Test article visualization was provided by an eMagin 3-D headset (eMagin, Bellevue, WA, USA) at UC, or by a computer screen at SRI. Control inputs and video images were transmitted via the internet using TCP-IP communication.
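The TCP-IP control link can be illustrated with a minimal socket sketch. Everything below is hypothetical and meant only to show the master-slave transport pattern: the 48-byte pose message, the helper names, and the use of loopback are assumptions, not the protocol actually used in the experiment:

```python
import socket
import struct
import threading

def serve_one_pose(srv):
    """Slave side: accept one connection and read a single 6-dof pose,
    encoded as six little-endian doubles (48 bytes)."""
    conn, _ = srv.accept()
    with conn:
        data = b""
        while len(data) < 48:            # TCP may deliver the payload in pieces
            chunk = conn.recv(48 - len(data))
            if not chunk:
                break
            data += chunk
    return struct.unpack("<6d", data)

def send_pose(pose, port):
    """Master side: connect and send one pose (x, y, z, roll, pitch, yaw)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(struct.pack("<6d", *pose))

# Demo over loopback. Binding and listening before the client connects
# avoids a startup race; the OS picks a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

result = {}
worker = threading.Thread(target=lambda: result.update(pose=serve_one_pose(srv)))
worker.start()
send_pose((1.0, 2.0, 3.0, 0.1, 0.2, 0.3), port)
worker.join()
srv.close()
print(result["pose"])  # the pose arrives bit-exactly
```

TCP guarantees ordered, reliable delivery at the cost of latency under packet loss, which is one reason the framework also considers a UDP link for time-critical streams.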

Subjects completed a questionnaire to assess their previous experience with radio controllers (RC), video controllers, and robot control operations. Based on the responses, the participants were classified into Experienced and Novice groups.

The experiments were completed in four phases. The first phase was performed between the master location at SRI International and the slave location at the University of Cincinnati. This preliminary experiment was limited to a two-dimensional path-following task. The test article was viewed from directly above and the robot vertical axis was locked. A single video image was viewed on the master computer. Artificial time-delays of 0, 0.5, 1.0 and 1.5 seconds were imposed on top of the inherent communication delay (which averaged approximately 0.25 seconds). This first phase was performed to orient the test subjects to the input device, to build familiarity with different time-delays, and to quantify inherent public network communication time delays. Five male subjects, ranging in age from 23 to 46 years, were instructed to move the robot end-effector marker through the mid-points of each gate as expediently and accurately as they could. Each subject completed 16 runs, that is, four trials at each of the four time-delay conditions. The runs were completed in order of increasing time-delay, then in order of decreasing time-delay. This sequence was then repeated after a brief rest.

Experiment phases 2 through 4 were completed using master and slave locations in two buildings on the main University of Cincinnati campus. The inherent network time-delay was less than 1 ms for control commands, but was of the order of ms for video transmission. Additional time-delays of 0, 0.5, 1.0, 1.5, 2.0 and 2.5 seconds were artificially imposed.
Twelve male subjects, ranging in age from 20 to 46 years, were instructed to maneuver the end-effector marker through the mid-points of the gates, this time along the three-dimensional path (straight-line segments between the centers of successive gates, Figure 3b) using the stereo-vision system. In phase 2, subjects completed the runs in order of increasing time-delay. In phase 3, the order of time-delays was randomized to mitigate the learning effect. During phase 4, subjects were given a 90-second time limit within which to complete the task under all time-delay conditions. Subjects were provided with both an audible cue (a master computer beep) and a visual cue (a computer-generated graphic in the top left corner of the field of view) that indicated when each gate should be traversed and thus assisted with pacing.

In all phases, the test subjects were given the opportunity to perform practice runs of the task prior to data capture to negate the short-term learning effect. The number of practice runs was the same for all subjects in each case. Subject performance was evaluated based on three parameters: total time to complete the task, total path length, and root-mean-square (RMS) error. The total time was recorded by the motion capture system as the time between passing through the initial gate, completing one lap around the test article, and then passing through the initial gate again. The total path length (L) was calculated as the sum of the lengths of all straight-line segments (L_i) between recorded end-effector marker positions (Equations 1 and 2):

$$L = \sum_{i=1}^{N-1} L_i \quad (1)$$

$$L_i = \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2 + (z_{i+1} - z_i)^2} \quad (2)$$

where $x_i$, $y_i$, $z_i$ are the spatial coordinates of the end-effector, as recorded by the motion capture system. The RMS error was used to quantify deviations from the ideal path, and was calculated using Equation 3:

$$RMS = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 + (z_i - \hat{z}_i)^2\right]} \quad (3)$$

where $\hat{x}_i$, $\hat{y}_i$, $\hat{z}_i$ are the coordinates of the closest point on the ideal path, and N is the total number of motion capture data points.
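The two computed metrics can be illustrated concretely. The following is a minimal Python sketch of the path-length and RMS-error calculations from Equations 1 through 3 (illustrative only, not the analysis code used in the study; the pairing of each recorded point with its closest ideal-path point is assumed to be precomputed):

```python
import math

def path_length(points):
    """Total path length: sum of straight-line segment lengths (Eqs. 1-2).
    `points` is a list of (x, y, z) end-effector positions."""
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(points, points[1:]):
        total += math.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)
    return total

def rms_error(points, ideal_points):
    """RMS deviation from the ideal path (Eq. 3); ideal_points[i] is
    assumed to be the closest point on the ideal path to points[i]."""
    n = len(points)
    sq = sum((x - xi)**2 + (y - yi)**2 + (z - zi)**2
             for (x, y, z), (xi, yi, zi) in zip(points, ideal_points))
    return math.sqrt(sq / n)
```

Both functions operate directly on the (x, y, z) marker positions streamed by the motion capture system.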

The three parameters were averaged across subjects. Statistical comparisons were performed using an analysis of variance (ANOVA) when comparing multiple groups, and a Student's t-test when comparing two groups. A significance level of α = 0.05 was used for all comparisons.

Relevance to Robotic Surgery

At the time that these experiments were conducted, actual surgical robots (such as the da Vinci) were not available to the investigators. Instead, a general-purpose robot (Mitsubishi PA-10) and a relatively simple input device (Phantom Omni) were used together with black-and-white video imagery. This system was not capable of motion scaling (slave-to-master ratios of up to 5:1), visual magnification (as much as 15 times), or haptic feedback, as commonly used in robotic surgery. The viewing system (headset) was also of much lower resolution than surgical viewing systems. However, since the experiment was designed to compare relative changes in path-following accuracy and efficiency, rather than absolute measures of precision, results relevant to robotic surgery, and to teleoperation in general, can be obtained.

Results

Phase 1

The results for Phase 1 are summarized in Figure 4 and Table 1. The inequalities provided in Table 1 indicate statistically significant differences between parameters according to the time-delay groups. The error bars in the chart in Figure 4, as in all subsequent charts, indicate the 95% confidence interval. The most noticeable result in Figure 4 is that the average RMS error increased as the imposed time-delay increased. Furthermore, there is a greater increase in RMS error between 1 second and 1.5 seconds of time-delay than between shorter time-delays. This is also reflected in the fact that there is a statistically significant difference in RMS error between the 1.5-second time-delay and all of the lesser time-delays.
The trend is less clear with average path length, since the path length was shortest at the 1-second time-delay, but there was nevertheless a general tendency for path length to increase, and the longest path length occurred for the 1.5-second time-delay. The average time to complete the task increased consistently as the time-delay increased, and there is a significant difference between the time to complete the task at 1.5 seconds versus the 0-second time-delay.

Phase 2

Between 0 and 1.5 seconds, the time to complete the task steadily increased (Figure 5) and, as in phase 1, the time at 1.5 seconds was greater than at 0 seconds of time-delay. However, at 2 and 2.5 seconds of time-delay the time to complete the task was less than at 1.5 seconds. There were no significant differences in RMS error and path length based on time-delay (Table 1 and Figure 6), although the path length generally increased as the time-delay increased.

Phase 3

With the order of time-delays randomized, there is no clear trend in RMS error through the different time-delay cases (Figure 6). However, it is noticeable that the RMS error was consistently higher in Phase 3 than in Phase 2. The path length generally increased with time-delay, but there was a noticeable decrease in path length at 2 seconds of time-delay (Figure 7). Furthermore, this trend was consistent for phases 2 and 3. Also, path length was consistently higher in phase 3 than in phase 2. The trends are again consistent between phases 2 and 3 when comparing the time to complete the task (Figure 8). The time to complete the task generally increased with increasing time-delay, and the time to complete the task at the 2.5-second time-delay was significantly greater than at all shorter time-delays.

Phase 4

With the order of time-delays randomized and the timing cues implemented, the time to complete the task was consistently close to 90 seconds throughout this phase, as intended. With the timing cues (Figure 9, Timed values), the RMS error consistently increased with increasing time-delay (r² value of based on a linear regression). The RMS error values from phase 3 (Figure 9, Non-timed) were consistently lower, and also showed a greater amount of variance (r² value of 0.539).

Figure 3: Experiment setup: a) test article showing gates and robot end-effector marker, b) ideal path through gates, c) slave location setup, including robot, test article, motion capture system, and stereo-vision cameras, d) master location including Phantom Omni 6-DOF controller and stereo-vision headset.

To further explore the underlying trends, the RMS error data was separated between subjects that had more experience with teleoperation under time-delay or operation of remote-control vehicles (Figure 10, Experienced group) versus those that had comparatively little experience in these areas (Figure 10, Novice group). There is clearly a substantial difference between these groups. The Experienced subjects exhibited RMS errors that ranged from around 10 to 15 mm, with a generally consistent increase in RMS error with increasing time-delay and a smaller amount of variance (as shown by the tighter confidence interval bars). The Novice subjects exhibited a less consistent increase in RMS errors that ranged from 24 to 27 mm, and a greater

amount of variance. For the Experienced group, the RMS errors for the 1.0-second time-delay and shorter were significantly less than for the 1.5-second time-delay and longer (Table 1). The change in path length was less clear, although there was generally an increase in path length as time-delay increased, and this was more noticeable for the Experienced group.

Figure 4: Summary of RMS error, path length, and time-to-complete results for the phase 1 experiment (2-D). The robot end-effector was limited to movement in the horizontal plane and subjects viewed a monoscopic image of the test article from directly above.

Figure 5: Time to complete the task for phase 2 (3-D path, time-delay order not randomized).

Figure 6: Comparison of average RMS error for phase 2 and phase 3. Phase 3 errors were consistently higher than phase 2 errors, and with greater variance, due to randomization of the time-delay order.

Figure 7: Comparison of average path length for phases 2 and 3. Path length was consistently longer in phase 3 due to the randomization of time-delay order. Variance, however, was almost identical between the two experiment phases.

Figure 8: The average time to complete the task generally increased as time-delay increased. The results for phase 3 were nearly identical to those for phase 2, although the time to complete the task was consistently slightly higher for phase 3.

Figure 9: Comparison of RMS error for timed and non-timed groups. The RMS error was higher under conditions of subject pacing (timed), but the trend was more consistent, as reflected by the tighter curve fit.

Figure 10: Comparison of RMS error for Experienced versus Novice teleoperators. The Experienced group performed substantially better in terms of trajectory-tracking accuracy, as reflected in the substantially lower RMS values.

Phase   Parameter    | (0)       (0.5)   (1.0)   (1.5)     (2.0)     (2.5)
1       RMS          | 1<4       2<4     3<4     4>1,2,3   N/A       N/A
1       Path Length  |                                     N/A       N/A
1       Time         | 1<4                       4>1       N/A       N/A
2       RMS          |
2       Path Length  |
2       Time         | 1<4                       4>1
3       RMS          |
3       Path Length  |
3       Time         | 1<6       2<6                                 6>1,2
4 a     RMS          | 1<4,5,6   2<5,6   3<5,6   4>1       5>1,2,3   6>1,2,3
4 a     Path Length  |
4 a     Time         |

Column headers give the imposed time-delay in seconds; the inequalities indicate statistically significant differences between time-delay groups (group 1 = 0 s through group 6 = 2.5 s).
a Results shown for only the Experienced subject group in phase 4. For all subjects combined, there were no significant differences in the parameters.

Table 1: Experimental Phases and Statistical Results

Discussion

The magnitude of RMS error (10 to 27 mm) was higher than what would be expected in actual surgery (a fraction of a millimeter to a few millimeters). However, when one accounts for the motion and vision scaling factors described earlier, one can argue that the measured values could be scaled down by a factor of 1/5 to 1/10, or even as much as 1/50, which would bring the experimental results well within the positional error range expected during surgery. All phases of the experiment demonstrated that as the time-delay increased, the task difficulty increased. This was reflected as an increase in RMS error (reduced manipulation accuracy), an increase in the time to complete the task (reduced task efficiency), or both. Path length also generally increased as time-delay increased, but was found to be a less sensitive indicator of manipulation accuracy than RMS error. In Phase 1, the fact that RMS error increased more substantially between the 1.0-second and 1.5-second time-delays than between the other time-delays provided some early indication that task difficulty increases greatly as the time-delay approaches 1.5 seconds. The order of time-delay was not randomized in phases 1 and 2, and it appears that this allowed a short-term learning effect to occur. In particular, there was a less noticeable increase in all of the performance parameters (increases reflect poorer performance) in phase 2 compared to phases 3 and 4, where the order of time-delays was randomized. It was reasoned that preceding trials offered the subject an opportunity to gain proficiency at the task, thus allowing better performance on succeeding trials. To further diminish the learning effect, in phases 3 and 4 the subjects completed multiple practice runs at different time-delay levels before the data-capture runs. In phases 2 and 3 another interesting phenomenon occurred. When the time-delay exceeded 1.5 seconds, the RMS error decreased, or did not increase as expected. However, the time to complete the task continued to increase with increasing time-delay. It was observed that, starting at a time-delay of 1.5 seconds, subjects generally started to use a discontinuous movement pattern, interspersing frequent pauses and controller position resets (to avoid singular configurations) between movements. These pauses allowed the observed video images to catch up with the controller inputs and robot responses, thus negating the time-delay-based impact on movement accuracy, but at the expense of an increase in the time to complete the task. To avoid this compensatory strategy and explore the impacts of time-delay on manipulation accuracy, a time limit of 90 seconds was introduced in phase 4, along with audible and visual cues to keep the subjects on pace. With this measure in place, the RMS error tended to increase more monotonically; however, the pattern was still not as clear as expected.
To further dissect the underlying causes, the subject data were divided between subjects with moderate to extensive teleoperation experience and those with minimal or no experience. A clear pattern then emerged. Subjects with teleoperation experience performed significantly better on the task in terms of accuracy, as reflected in their lower RMS error values, and were also more successful in completing the task within the allotted 90-second interval. The Novice users sometimes resorted to the move-and-pause strategy observed in the untimed tests, and thus preserved positional accuracy, but exceeded the time limit.

It was found that at around 1.5 seconds of imposed time-delay, a simple task of maneuvering a robotic manipulator end-effector along a 3-D path becomes substantially more difficult than at lesser time-delays. It was also shown that there is a noticeable learning effect between the different time-delay cases, and that experienced teleoperators performed far better than novices. It is clear, though, that even experienced teleoperators would not perform adequately to conduct surgical procedures with acceptable safety and efficiency beyond two seconds of time-delay. For spaceflight, this would limit tele-robotic surgery to that performed by an Earth-based surgeon on a patient on a spacecraft in low-Earth orbit (LEO). For exploration-class missions beyond LEO, an alternative solution would have to be found. One method to reduce the detrimental effects of latency would be to employ semi-autonomous robotic surgery procedures, that is, segmented autonomous movements with human supervisory control and decision points.

2.2 Virtual Model Development

A virtual model of the test article was developed using the OpenGL platform and incorporated the ability to modify the target gate locations and orientations, and the overall test article location (base location).

OpenGL Platform

OpenGL is a software interface to graphics hardware, which comprises around 150 distinct commands that a developer can use to specify the objects and operations needed to produce an interactive three-dimensional application. OpenGL is designed as a streamlined, hardware-independent interface which can be implemented on many hardware platforms. OpenGL also provides a sophisticated utility library which includes many modeling features, such as quadric surfaces and NURBS curves and surfaces. OpenGL, a low-level graphics library specification, makes available to the programmer a small set of geometric primitives: points, lines, polygons, images, and bitmaps.
It also provides tools to allow specification of geometric objects in two or

three dimensions, using the provided primitives, together with commands that control how these objects are rendered or drawn. OpenGL provides a powerful but primitive set of rendering commands, and all higher-level drawing must be done in terms of these commands. Since OpenGL drawing commands are limited to those that generate simple geometric primitives (points, lines and polygons), the OpenGL Utility Toolkit (GLUT) was created by Mark Kilgard to aid in the development of more complicated three-dimensional objects such as spheres or cuboids (Kilgard, 1999). GLUT is also a window-system-independent toolkit that hides the complexities of differing window Application Programming Interfaces (APIs).

Virtual Model of the Test Article

The virtual model of the test article was created by defining coordinates and matching the length, breadth and height of the base and the target gates of the test article with the internal graphical model. Eight target gates were drawn onto the base at initial locations, which matched the exact locations of the gates on the test article. The ability to reposition the gates was built into the design, which enabled the program or user to change the target locations as needed. The target locations can also be changed based on external input (such as input from a motion capture system). The size of the test article, the heights of the target gates, and the distances between the gates were scaled to fit in the coordinate axes of the graphical model. The end-effector of the robot arm was designed and placed in the graphical model as a marker attached to a short rod. The initial position of the end-effector was placed at the target gate closest to the robot, with an angle and orientation similar to the actual angle and orientation of the robot end-effector with respect to the test article (Figures 11, 12 and 13).

Figure 11: Graphical model of the test article - top view

Figure 12: Graphical model of the test article - side view

Figure 13: Graphical model of the test article

The base of the test article in the graphical model can also be moved along the X, Y and Z axes according to user input or external input from the motion capture system. Movements of the base are thus a suitable means to simulate environmental dynamics (movements due to vehicle motion, patient table movements and other unexpected movements). Movements of each individual gate, if well coordinated, are a suitable means to simulate internal body movements. The robot end-effector position can be modified by the output of the path calculation algorithms or by user-defined methods. Along with the model, functions were designed and implemented to rotate the graphical test article about any axis for a better view and perception, to zoom in and out, to move the coordinates, and to pause all action.
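The repositioning capabilities described above (movable base, movable gates) can be sketched as a minimal, data-only model. This is a hypothetical Python stand-in for the OpenGL implementation, showing only how a base translation and per-gate offsets might compose; all names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class TestArticleModel:
    """Data-only sketch of the virtual test article (not the thesis code)."""
    base: tuple = (0.0, 0.0, 0.0)              # base location (X, Y, Z)
    gates: dict = field(default_factory=dict)  # gate id -> local (X, Y, Z)

    def move_base(self, dx, dy, dz):
        """Translate the whole article (simulates environmental dynamics)."""
        x, y, z = self.base
        self.base = (x + dx, y + dy, z + dz)

    def move_gate(self, gate_id, dx, dy, dz):
        """Reposition one target gate (simulates internal body movement)."""
        x, y, z = self.gates[gate_id]
        self.gates[gate_id] = (x + dx, y + dy, z + dz)

    def gate_world_position(self, gate_id):
        """World coordinates = base location + gate's local offset."""
        return tuple(b + g for b, g in zip(self.base, self.gates[gate_id]))
```

In this arrangement, external input (e.g. from the motion capture system) would drive `move_base`, while per-gate tracking would drive `move_gate`; the path calculation would consume `gate_world_position`.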

2.3 Real-Time Motion Capture

Real-time motion capture is an extension of the BTS SMART capture system from the BTS Bioengineering group (Milan, Italy). Real-time capture and analysis capabilities were added to the existing software to assist in virtual reconstruction of the test article on a real-time basis.

BTS SMART DX

BTS SMART DX is the latest generation of high-precision optoelectronic systems manufactured and developed by the BTS Bioengineering group, designed for use in research, sport and medicine. With SMART DX, BTS extends the investigative capacity of doctors and researchers, providing them with a line of high-definition systems characterized by extensive calculation power and versatility. These features make the system able to handle a wide range of analysis needs, even under critical conditions. The system uses newly designed digital video cameras that employ highly sensitive sensors and functional illuminators whose high radiation power, combined with the high resolution of the video cameras (up to 4 megapixels), increases the working volume and allows for capturing both extremely rapid and nearly imperceptible movements. The system integrates, synchronizes and manages all kinematic, kinetic, electromyographic and video data in real time as it is obtained from connected devices such as force platforms, electromyographs and sensor-fitted treadmills. The system includes advanced software for multi-factorial motion analysis which allows protocol customization for a complete motion analysis by means of an object-based interface. All graphic and multimedia reports can be configured as needed and can be printed out, exported or shared over the internet.
The software suite also includes BTS SMART Clinic, a solution devoted to the clinical assessment of human movement. Based on protocols validated by the international scientific community, it is a powerful, advanced tool allowing for

the simultaneous analysis of the movement of the entire body and of individual body districts, thanks to the high acquisition resolution (up to 4 megapixels). All the kinematic, kinetic, electromyographic and video data are synchronized, even in the event of long-lasting acquisitions. The data acquired are sent with an associated time stamp; the system correlates all the data received and records it on the timeline. BTS also provides a Software Development Kit (SDK) for access to 3D data in real time, trigger-event management, external synchronization clock management, and a kit for isokinetic dynamometers. The system also includes BTS SMART Analyzer, a complete solution for the biomechanical analysis of movement using three-dimensional kinematic data, video, and analog data from force platforms, electromyographs or other devices. The flexibility and completeness of this instrument make it well suited to multi-factor movement analysis in various application fields, including neurophysiology, prosthetics, robotics, veterinary science, phonetics and sports. The biomechanical function of the software is the design of a computing scheme that generates all the data required by the user for a complete analysis of the motor gesture. Without an effective implementation of the Real-Time Software Development Kit (RTSDK), the BTS SMART DX can be used only for analysis after the data capture is completed. In order to use this system in a real-time operation, additional software routines had to be written to directly access the data buffers and send the data out as quickly as possible, so that the main capture threads and marker reconstruction calculation threads are not affected.

BTS SMART DX with Real-Time Capture

To achieve real-time capture and communication, the applications have to process data at the same time and have to communicate with each other. This was implemented using two main concepts: threads and dataports.
Threads are independent processes whose execution is concurrent and which share data with each other. A dataport is a mechanism for handling a region of memory that is accessible to several

applications in order to communicate and exchange data. Each dataport is identified by a unique name, and more than one dataport may exist at the same time. A dataport may be described as a technique to create a data flow between a server (source) application and one or more client (target) applications which execute the next step of processing. Several parameters, such as the frequency of data generation, time, frame number, marker identification number, the X, Y and Z positions of each marker, and the number of acquired frames, can be obtained from the dataport. For the purpose of this research, the parameters chosen were:

1. Frame number: the frame identification number.
2. Marker number: the marker identification number, which is used to identify each corresponding marker.
3. X position: the X coordinate of each identified marker, based on the reference frame established during calibration.
4. Y position: the Y coordinate of each identified marker, based on the reference frame established during calibration.
5. Z position: the Z coordinate of each identified marker, based on the reference frame established during calibration.
6. Time: the time value associated with each frame generation.

The above-mentioned values were extracted from the dataport and stored in a user-defined data structure. This data structure forms the packet which is sent out for further processing. All access to the dataport is implemented in a way that does not affect the main threads collecting marker information and updating coordinates. This requires strict synchronization among the threads, which is achieved by careful programming techniques and constant performance and optimization checks. The frames are generated at a frequency of 100 Hz.

2.4 Simulating a Dynamic Environment

A dynamic environment was simulated by mounting the test article on an MB Dynamics CAL50 Vibration Exciter (MB Dynamics, Cleveland, Ohio, USA) and driving the exciter with a low-frequency sine wave produced by a function generator (HP 33120A, Hewlett-Packard, Melrose, MA, USA).

Model CAL50 Vibration Exciter

The CAL50 exciter is a self-contained, permanent-magnet-powered electro-dynamic shaker. The electro-dynamic vibration exciter manufactured by MB Dynamics, Inc. is based on the principle that a mechanical force or motion can be produced by passing electric current through a wire placed in a magnetic field. The force generated depends upon the permanent magnetic field strength as well as the current in the moving-element coil; thus the amplitude and frequency of the response depend on the amplitude and frequency of the input current. To amplify the signal output from the function generator, a power amplifier (MB Dynamics SS250 Power Amplifier) is used: the output from the function generator is fed into the power amplifier, and the output from the power amplifier serves as the input to the exciter. Exciter performance specifications demand unidirectional motion along the exciter's principal axis. The CAL50 exciter moves along the Y axis, i.e. up and down, and the magnitude and frequency of this movement can be controlled through the input signal. The test article is firmly mounted on top of the CAL50 exciter using screws. A sine wave with frequency varying from 0.1 Hz to 0.3 Hz is used to drive the exciter movements.

Simulating External and Internal Movements

External movement of the test article is accomplished by driving the exciter along its principal axis. This movement is a suitable means to simulate environmental dynamics and abdominal movement during breathing. The normal respiratory rate in an adult is per minute, with an average thoracic movement of 5 mm to 13 cm (Lindh, Pooler, Tamparo, & Dahl, 2009). This

corresponds to a frequency of 0.23 Hz to 0.3 Hz. Thus, if the exciter is driven by a sine wave with frequency varying from 0.2 Hz to 0.3 Hz and varying amplitude, it can simulate abdominal movement during breathing. To simulate internal body movements, the target gates are moved manually while the experiment is in progress.

2.5 Real-Time Spatial Mapping

Real-time spatial mapping is a technique by which the virtual graphical model is matched to the actual test article by matching previously identified points (markers) on a real-time basis. This helps in creating an interactive virtual model in which changes are updated in the virtual model as soon as they happen in the physical world.

Mapping External Movements (Environmental Dynamics and External Body Movements)

To map the environmental dynamics, markers are attached to the test article base at its four corners (Figure 14). The real-time motion capture system was configured to capture the movement of these markers and send it to the virtual graphical simulator. The simulator reconstructs its internal model based on the values received from the real-time motion capture system. This real-time virtual reconstruction is an effective method to track all movements of the test article in the physical world. Based on the values received, the program maps each marker to a corner of the virtual model, and the virtual model is redrawn every time the values are updated. The reconstruction includes reconstruction of all target gates, updating of the path calculation algorithms, and updating of the output sent to the robot controller. To decrease the dependence on initial values produced by the real-time motion capture system, the virtual reconstruction is configured to accept changes in values rather

than original values. The change in the coordinates of each marker is calculated every time frame, and these values are used as the base values for virtual reconstruction.

Figure 14: Test article with markers at four corners, side view 1
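The delta-based reconstruction described above can be sketched as follows. This is a hypothetical Python illustration (the thesis implementation lives inside the graphical simulator); it tracks each corner marker's frame-to-frame change rather than its absolute coordinates, so that the initial capture offsets do not matter:

```python
class DeltaMapper:
    """Sketch: convert absolute marker positions into per-frame
    displacements to drive the virtual reconstruction (names illustrative)."""

    def __init__(self):
        self.prev = {}  # marker id -> last observed (x, y, z)

    def update(self, marker_id, pos):
        """Return the displacement since the previous frame.
        On the first observation the displacement is zero, which removes
        the dependence on the capture system's initial absolute values."""
        last = self.prev.get(marker_id, pos)
        self.prev[marker_id] = pos
        return tuple(c - p for c, p in zip(pos, last))
```

Each returned displacement would be applied to the corresponding corner of the virtual model before the model is redrawn.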

Figure 15: Test article with markers at four corners, side view 2

Mapping Internal Movements (Body Internal Movements)

To map internal movements, markers replace the balls on the target gates (Figure 16). The real-time motion capture system is used to track the movement of these target gates, and this information is used to update the target locations inside the graphical virtual model. The updated locations are used in the path calculations, and the updated information is sent to the robot controller on a real-time basis; the robot controller updates the end-effector location using this information. Mapping both internal and external movements into the virtual model on a real-time basis creates a real-time virtual model of the test article. As the virtual model is updated, the path calculation algorithms can use the new points to make decisions about the ideal path.

48 Figure 16: Test Article with target gates replaced with markers. 2.6 Motion Prediction In order to effectively predict where a target would be at a future time, based on its recent motion history, and to move the robot end-effector accordingly, Kalman filtering was used initially. The state of each target gate was defined as : [ ] (5) 34

where S refers to the state of the target; X, Y and Z are the three-dimensional Cartesian coordinates, while α, β and γ are the Euler angles (yaw, pitch and roll). The Kalman filter is a powerful tool that is playing an increasingly important role in computer graphics as sensing of the real world is incorporated into such systems. While the Kalman filter has been in use for over 30 years, its application in a wide variety of computer graphics applications has been a relatively recent occurrence. These applications span from simulating musical instruments in virtual reality, to head tracking, to extracting lip motion from video sequences of speakers, to fitting spline surfaces over collections of points. While the Extended Kalman filter showed good results during simulation, its integration into the final design had to be dropped due to the processing delay it caused inside the control loop. Estimation of the X, Y and Z coordinates of more than 10 markers on a constant basis in separate threads caused instability within the program and a detrimental delay in the final output being sent to the robot controller. Since the algorithms are being developed for eventual application to robotic surgery, such a delay could cause movement inaccuracies that could result in harm to the patient undergoing a procedure. Instead, a novel approach which uses minimal system resources and introduces minimal delay was used to predict the motion of the targets and move the robot end-effector accordingly. This method assumes that there is a pattern in every quasi-random motion that the system might encounter, as in breathing motion, and exploits this principle to predict the motion of the targets and update the position of the robot accordingly. When the targets are in motion, a training module constantly records the positions of the targets whose motion is to be predicted. The input data includes the vision data from the last twenty-five frames for the position of the marker.
In every frame, the difference in position along each individual axis is calculated relative to the position in the previous frame. The average of these differences is added to the normal data sent to the robot controller to help it move to the predicted state of the target. In order to reduce the effects of sudden turns and movements in the opposite direction along any axis, the algorithm flushes its previous values when a change of direction is noted in consecutive frames. During every such flush, the robot controller is notified immediately of the change and corrective measures are taken to compensate for the change in direction.
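The averaging predictor described above can be sketched per axis as follows. The 25-frame window comes from the text; the class and method names, and the exact flush behavior (clear the history without keeping the reversing difference), are illustrative assumptions:

```python
from collections import deque

WINDOW = 25  # frames of motion history used for training (from the text)

class AxisPredictor:
    """Per-axis sketch: average the frame-to-frame differences over the
    last WINDOW frames, add that average to the measured position, and
    flush the history when the direction of motion reverses."""

    def __init__(self):
        self.diffs = deque(maxlen=WINDOW)  # recent per-frame differences
        self.last = None                   # previous measured position
        self.flushes = 0                   # count of direction-change flushes

    def predict(self, pos):
        if self.last is not None:
            d = pos - self.last
            if self.diffs and d * self.diffs[-1] < 0:  # direction reversal
                self.diffs.clear()                     # flush history
                self.flushes += 1                      # notify / safety count
            else:
                self.diffs.append(d)
        self.last = pos
        avg = sum(self.diffs) / len(self.diffs) if self.diffs else 0.0
        return pos + avg  # predicted position sent to the robot controller
```

A safety supervisor, as described later in the text, would disable prediction when `flushes` exceeds a threshold within a given timeframe and fall back to the observed marker positions.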

While this simple motion prediction method proved efficient in experiments where the motion was more or less regular, the performance of the system depends upon the complexity of the motion being estimated. However, since the primary purpose of this experiment was to model abdominal breathing patterns and to manipulate the robot end-effector based on them, the algorithm proved effective in our experiments. Suitable safety precautions were coded into the system to stop the prediction process if the number of flushes exceeds a predetermined amount in a given timeframe. In this case, the robot end-effector is driven based on the path calculation algorithm and the observed values of the marker positions.

2.7 Smooth Path Generation

A smooth path is often defined as:
- a curve that does not intersect itself;
- a curve which has a tangent at each point whose direction varies continuously as the point moves along the curve.

In addition to the above, the path generated by the system should fulfill the following requirements to be effectively used in a robotic surgery:

1. Flexibility: Since a surgery involves many complex gestures and movements, the path generated should be flexible according to user needs. The curvature should not be statically set; instead, for each segment, the user should have the ability to set the size and shape of the curve.
2. Speed: The path generation process should be fast and seamless, so that it can be integrated effectively into the system.
3. Adaptability: The path generated should adapt itself to changes in target locations dynamically. For example, when a target moves, a new path to the updated location should be generated without delaying the main robot motion.

It was determined that Bezier curves are a good match for all the requirements stated above and provide an effective method for smooth path generation.

2.7.1 Bezier Curves

Bezier curves are a form of parametric curve regularly used in computer graphics for surface simulation. A path can be formed by linking several Bezier curves. Since the Bezier parameter can be mapped directly to time, the curves are easy to integrate with a real-time process. Bezier curves were originally derived from Bernstein basis polynomials and are named after Pierre Bezier, a French engineer who used them to design automobile bodies at Renault.

A Bezier curve is defined by a set of control points, P0 to Pn, where n defines the order of the equation (n = 1 defines a linear polynomial, n = 2 a quadratic polynomial, n = 3 a cubic polynomial, etc.). The first control point P0 defines the starting point of the curve, while the last control point Pn defines the end point. The other control points, P1 to Pn-1, do not lie on the curve, but help define its curvature and shape.

A linear Bezier curve is defined by:

B(t) = (1 - t) P0 + t P1    (6)

where P0 and P1 define the control points and t increases in steps from 0 to 1. A linear Bezier curve is simply a straight line between the two control points. When t = 1, the curve reaches the end point defined by the last control point, while t = 0 corresponds to the initial control point.

A quadratic Bezier curve is defined by:

B(t) = (1 - t)^2 P0 + 2(1 - t) t P1 + t^2 P2    (7)

where P0, P1, P2 are the control points and t varies from 0 to 1 in steps. As t increases from 0 to 1, the curve starts from P0, moves towards P1, and then bends to reach P2, arriving from the direction of P1.

A cubic Bezier curve has four control points, P0, P1, P2, P3. It is defined by:

B(t) = (1 - t)^3 P0 + 3(1 - t)^2 t P1 + 3(1 - t) t^2 P2 + t^3 P3    (8)

A Bezier curve of degree n can be defined as:

B(t) = sum_{i=0}^{n} C(n, i) (1 - t)^(n-i) t^i Pi    (9)

where C(n, i) are the binomial coefficients.

Figure 17: A linear Bezier curve
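Equation (9) can be evaluated directly. The following is a minimal sketch (2-D points and illustrative names), not the thesis implementation:

```python
# Evaluate a Bezier curve of any order via the Bernstein basis:
# B(t) = sum_{i=0}^{n} C(n, i) * (1 - t)^(n - i) * t^i * P_i, 0 <= t <= 1.
from math import comb

def bezier_point(control_points, t):
    """Return the (x, y) curve point at parameter t for the given control points."""
    n = len(control_points) - 1                # order of the curve
    x = y = 0.0
    for i, (px, py) in enumerate(control_points):
        b = comb(n, i) * (1 - t) ** (n - i) * t ** i   # Bernstein basis value
        x += b * px
        y += b * py
    return (x, y)
```

At t = 0 the function returns P0 and at t = 1 it returns Pn, matching the endpoint behavior described above.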

Figure 18: A quadratic Bezier curve

Figure 19: A cubic Bezier curve

The unique advantage of the Bezier curve method is that it can be reconfigured quickly by redefining the control points. This plays a very important role in surgical applications that need very quick and flexible path generation.

2.7.2 Bezier Curves applied to the Virtual Model

For the purpose of this research, cubic Bezier curves were used to generate the required paths. The entire task was divided into eight segments, with the start and end of each segment identified by successive target gates. For each segment, the Bezier curve is generated by defining control points as needed. The control points affect the shape and curvature of the path, so the user can modify the path in any segment by adjusting or redefining its control points.

Figure 20: Virtual model of the test article with simulated Bezier curve paths
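One such segment can be sketched as below, assuming 3-D gate positions and user-chosen interior control points. All names here are illustrative; the thesis does not give its implementation.

```python
# A cubic Bezier segment between two target gates. The two interior control
# points (ctrl1, ctrl2) are the ones the user may redefine to reshape the path.

def cubic_bezier(p0, p1, p2, p3, t):
    """Cubic Bezier point: (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def segment_waypoints(start_gate, end_gate, ctrl1, ctrl2, steps=40):
    """Sample one segment into the waypoints streamed to the robot controller."""
    return [cubic_bezier(start_gate, ctrl1, ctrl2, end_gate, i / steps)
            for i in range(steps + 1)]
```

Redefining `ctrl1` or `ctrl2` and resampling regenerates the segment without touching the gates, which is the reconfigurability the text relies on.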

Figure 21: Virtual model of the robot following the path generated by Bezier curves

Figures 20 and 21 show the paths generated by the Bezier curves in the virtual model. The robot end-effector position is updated every twenty-five milliseconds based on the new path position. The user can modify and define a new curve for any segment, providing a great amount of flexibility in path generation within the system.

2.8 Telecommunication Links

The telecommunication links connecting the different modules play an important role in the efficiency of the system. Two links exist in the system:
a. The Motion Capture System is connected to the Processing Station via a User Datagram Protocol (UDP) link.
b. The Processing Station is connected to the Robot Controller via a TCP/IP (Transmission Control Protocol / Internet Protocol) link.

2.8.1 The UDP Link

The UDP link provides a quick way to transfer packets over a network with minimal delay. However, the link is transaction oriented, and neither delivery nor duplicate protection is guaranteed; UDP provides a best-effort datagram service to an end host. The simplicity of UDP reduces overhead, which is why the protocol is used in numerous applications. UDP differs from connection-oriented protocols in that it does not require the user to establish an end-to-end connection between communicating end systems. UDP also does not provide any communications security; it is up to the communicating applications to protect the link from eavesdropping, tampering, or message forgery.

The Real-Time Motion Capture System acts as the UDP server, while the processing station acts as the UDP client. As soon as the client starts, it sends a request message to the UDP server. The server uses the incoming message to identify and authenticate the client. Once the client is authenticated, the server sends data at a constant rate until the client stops listening. Since the motion capture system generates packets at 100 Hz, a UDP link is best suited to connect it to the processing station. The UDP link offers minimal delay, so the virtual model in the processing station can be updated in real time. The average time delay in the link is less than twenty milliseconds.

2.8.2 The TCP/IP Link

The Transmission Control Protocol (TCP) is a connection-oriented, reliable protocol. It provides a reliable transport service between a server and a client. TCP is stream oriented and exchanges streams of data. TCP operates in a much more complex manner than UDP and involves much higher overhead to ensure reliable transmission, which causes additional delay in packet transmission.
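The request-then-stream handshake of the UDP link (Section 2.8.1) can be sketched as below. This is an illustrative loopback sketch, not the thesis implementation; the loopback addresses, the `REQUEST_STREAM` message, and the comma-separated marker frame are all assumptions.

```python
# Loopback demonstration of the UDP handshake: the client sends one request,
# the server identifies the client from it, then streams marker-coordinate
# datagrams back (here, a single sample frame).
import socket

def udp_demo():
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # motion-capture side, OS-chosen port
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(1.0)

    # Client request; the server learns the client's address from it.
    client.sendto(b"REQUEST_STREAM", server.getsockname())
    msg, client_addr = server.recvfrom(1024)
    assert msg == b"REQUEST_STREAM"

    # Server streams a marker-coordinate frame to the authenticated client.
    server.sendto(b"12.5,40.2,7.8", client_addr)
    frame, _ = client.recvfrom(1024)
    server.close()
    client.close()
    return frame.decode()
```

In the real system the server side would repeat the final `sendto` at 100 Hz for as long as the client listens.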

The processing station sends the robot control data to the Robot Controller over the TCP/IP link. A reliable link is required here because the robot end-effector is directly controlled by the values sent. The Robot Controller acts as the TCP server, while the processing station acts as the client. During initialization, the sockets on the client and server sides are connected to each other and remain connected for the duration of the experiment. The default values sent to the robot controller are configured to suspend robot movement. These values change only when the path calculation algorithms have started and the user approves the generated paths. Thus, the user can suspend or modify the robot movement at any instant during the experiment. Once the experiment is finished, the sockets are disconnected and robot control is turned off.

2.9 Robot Controller

The Robot Controller module contains the functions and methods to manipulate the position of the end-effector. This module uses native libraries provided by the manufacturer, with additional safety checks to ensure that the instantaneous velocity never exceeds 500 mm/s at any point. Safety routines are built in to withdraw, suspend, or terminate the robot movements at any instant. All communication to the robot must be sent through the robot controller. The robot controller accepts instantaneous velocities along each displacement axis (X, Y, Z) and around each rotational axis (yaw, pitch, and roll) as its input and transforms them into end-effector movement. For the purpose of this research, only X, Y, and Z velocities were sent to the robot controller.

2.10 Integrating all Functional Modules

Integrating all the functional modules discussed so far yields the framework for this system. Figure 22 shows the block diagram of the proposed system with all the modules.
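The 500 mm/s safety check can be illustrated with a minimal sketch. The function name and the proportional scaling of all three axes are assumptions; the thesis does not specify how its safety check is enforced inside the native libraries.

```python
# Clamp the commanded Cartesian velocity so its magnitude never exceeds
# 500 mm/s: if the command is too fast, scale all three components down
# equally so the direction of motion is preserved.
import math

MAX_SPEED_MM_S = 500.0

def clamp_velocity(vx, vy, vz):
    """Return a (vx, vy, vz) command whose magnitude is at most 500 mm/s."""
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    if speed <= MAX_SPEED_MM_S:
        return (vx, vy, vz)
    scale = MAX_SPEED_MM_S / speed
    return (vx * scale, vy * scale, vz * scale)
```

Applying such a check to every command before it reaches the TCP socket guarantees the limit regardless of what the path or prediction algorithms produce.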

The Real-Time Motion Capture system collects data from the cameras connected to it and uses the incoming data to calculate the Cartesian coordinates of the individual markers attached to the test article. Once the coordinates are calculated, the data is sent to the processing station over the UDP connection. The processing station uses this data to reconstruct its own internal model of the test article and to update the path calculation algorithms. The updated model and the new paths are shown to the user through a graphical interface. The calculated path is also sent to the robot controller to move the robot end-effector along the desired path. The user can suspend the process or modify the generated path at any point during the entire process. Additional data available to the user includes position updates for each target in the internal model, the previous robot motion values sent, and the estimated time remaining to complete the procedure.

Figure 22: The block diagram of the proposed system (Human Operator, Visual Simulator, Real-Time Motion Capture System with Cameras 1-4, Processing Station, Robot Controller, Robot Arm, and Test Article, linked by the UDP and TCP/IP connections)

Experiments were conducted using this framework in four phases:
a. Static Model: The test article remained static, and the robot end-effector was driven to complete the same task that users completed during the time-delay experiments. The goal of this phase was to complete the task with minimal errors and high efficiency. Results from this phase were later used to compare the efficiency of the automated system to that of teleoperation by human users under varying time-delays.
b. Dynamic Model with regular movement along one axis: The test article was mounted on the exciter, and a sine wave of constant amplitude and frequency was used to move the test article.
c. Dynamic Model with irregular movement along one axis: The test article was mounted on the exciter, and a sine wave of varying amplitude and frequency was used to move the test article.
d. Static Model with limited internal movements: Individual targets were moved manually while the experiment was in progress. The goal of this phase was to verify whether the system can adapt to internal movements or changes in path geometry.

3 Results

As discussed earlier, the main parameters used to evaluate the efficiency of the system are:
a. Number of errors made
b. Total path length
c. Total time duration
d. Total path deviations
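Two of these parameters can be computed straightforwardly. The following is a minimal sketch, assuming the executed path is sampled as 3-D points and per-sample deviations from the ideal path are already available; the function names are illustrative.

```python
# Total path length: sum of Euclidean distances between successive samples.
# RMS error: root-mean-square of per-sample deviations from the ideal path.
import math

def path_length(points):
    """Total traversed length of a sequence of 3-D points (same units as input)."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def rms_error(deviations):
    """RMS of per-sample deviations (e.g. in mm) from the ideal path."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))
```

A longer executed `path_length` than the ideal path, or a larger `rms_error`, both indicate the wandering behavior seen under increasing time-delay.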

3.1 Phase 1

In this phase, the test article remained static throughout the procedure. The robot end-effector was driven by the path calculation algorithms and its performance was evaluated. Twenty-five trials were conducted, and the average of all results, compared to the results from human teleoperation under varying time-delays, is presented in Table 2.

                                      Semi-         Human teleoperation, time-delay of:
                                      Autonomous    0.0 s    0.5 s    1.0 s    1.5 s    2.0 s    2.5 s
Average number of errors
Total path length (in mm)
Average time duration (in seconds)
RMS error (mm)

Table 2: Comparison of efficiency of semi-autonomous operation to teleoperation under various time-delays.

The Phase 1 results show a considerable improvement in accuracy for autonomous operation versus teleoperation. During autonomous operation, the average number of errors was negligible and the total path length remained close to the ideal values. The average time duration of the procedure was set to 100 seconds in the path calculation algorithms. The RMS error was noticeably high, but this is due to the use of Bezier curves as paths instead of straight lines connecting the target gates.

3.2 Phase 2

In this phase, the test article was mounted on top of the exciter, and a sine wave of constant amplitude (20 mm peak-to-peak displacement) and constant frequency (0.15 Hz) was used to drive the exciter. This resulted in a regular movement along the Y-axis. The robot controller was driven to perform the same task as before. Since the motion capture system was in use for the real-time capture, it was not used to analyze the performance. Twenty-five trials were performed and the average values were calculated. The parameters used to evaluate efficiency were the average number of errors and the average time duration to complete the procedure. The results are presented in Table 3.

                                        Semi-Autonomous Operation
Average number of errors                1.2
Average time duration (in seconds)      110

Table 3: Phase 2 semi-autonomous operation results

The semi-autonomous procedure was not completely error-free, but the number of errors was still lower than the number of errors made by human teleoperators at zero-second time-delay. The additional delay required for motion prediction and estimation also increased the average time duration (compared to the original 100-second time duration).

3.3 Phase 3

In this phase, the test article was mounted on top of the exciter, and a sine wave of varying amplitude (15 mm peak-to-peak to 35 mm peak-to-peak) and varying frequency (0.1 Hz to 0.35 Hz) was used to drive the exciter. Twenty-five trials were performed and the average values were calculated. The results are presented in Table 4.

Maximum displacement (in mm) | Average number of errors | Average time duration (in seconds)

Table 4: Phase 3 semi-autonomous operation results

Figure 23: Phase 3 - average number of errors vs maximum peak-to-peak displacement in millimeters

Figure 24: Phase 3 - average time duration vs maximum peak-to-peak displacement

Frequency (in Hz) | Average number of errors | Average time duration (in seconds)

Table 5: Phase 3 semi-autonomous operation results - effects of frequency on efficiency of operation

Tables 4 and 5 show that the magnitude of displacement has minimal effect on the efficiency of operation. Increasing frequency, however, causes the error rate to increase. Even so, the error rates remain lower than the error rates during teleoperation by human subjects on a static model.

Figure 25: Effects of frequency changes on number of errors

Figure 26: Effects of frequency changes on time duration

3.4 Phase 4

In this phase, the locations of the target gates were changed manually while the robot end-effector was in the process of completing the task. This test was strictly to evaluate whether the algorithm can adapt itself to take the new positions into consideration and change the path accordingly. The test was successful: the robot end-effector followed the new positions of the targets and completed the task with no errors (Figure 27).

Figure 27: New path generated by the algorithm when one of the target gates is changed

4 Discussion


More information

ERC: Engineering Research Center for Computer- Integrated Surgical Systems and Technology (NSF Grant # )

ERC: Engineering Research Center for Computer- Integrated Surgical Systems and Technology (NSF Grant # ) ERC: Engineering Research Center for Computer- Integrated Surgical Systems and Technology (NSF Grant #9731748) MARCIN BALICKI 1, and TIAN XIA 2 1,2 Johns Hopkins University, 3400 Charles St., Baltimore,

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information

NeuroSim - The Prototype of a Neurosurgical Training Simulator

NeuroSim - The Prototype of a Neurosurgical Training Simulator NeuroSim - The Prototype of a Neurosurgical Training Simulator Florian BEIER a,1,stephandiederich a,kirstenschmieder b and Reinhard MÄNNER a,c a Institute for Computational Medicine, University of Heidelberg

More information

AIRCRAFT CONTROL AND SIMULATION

AIRCRAFT CONTROL AND SIMULATION AIRCRAFT CONTROL AND SIMULATION AIRCRAFT CONTROL AND SIMULATION Third Edition Dynamics, Controls Design, and Autonomous Systems BRIAN L. STEVENS FRANK L. LEWIS ERIC N. JOHNSON Cover image: Space Shuttle

More information

Voice Control of da Vinci

Voice Control of da Vinci Voice Control of da Vinci Lindsey A. Dean and H. Shawn Xu Mentor: Anton Deguet 5/19/2011 I. Background The da Vinci is a tele-operated robotic surgical system. It is operated by a surgeon sitting at the

More information

HUMAN Robot Cooperation Techniques in Surgery

HUMAN Robot Cooperation Techniques in Surgery HUMAN Robot Cooperation Techniques in Surgery Alícia Casals Institute for Bioengineering of Catalonia (IBEC), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain alicia.casals@upc.edu Keywords:

More information

Aerospace Sensor Suite

Aerospace Sensor Suite Aerospace Sensor Suite ECE 1778 Creative Applications for Mobile Devices Final Report prepared for Dr. Jonathon Rose April 12 th 2011 Word count: 2351 + 490 (Apper Context) Jin Hyouk (Paul) Choi: 998495640

More information

t t t rt t s s tr t Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2

t t t rt t s s tr t Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2 t t t rt t s s Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2 1 r sr st t t 2 st t t r t r t s t s 3 Pr ÿ t3 tr 2 t 2 t r r t s 2 r t ts ss

More information

GYN / US. VITOM A Unique Visualization System for Vaginal Hysterectomy in the Operating Room

GYN / US. VITOM A Unique Visualization System for Vaginal Hysterectomy in the Operating Room GYN 1.0 03/2016-6-US VITOM A Unique Visualization System for Vaginal Hysterectomy in the Operating Room The VITOM System for your Exoscopy in the Operating Room Dear Colleagues, When feasible, the vaginal

More information

Development of a telepresence agent

Development of a telepresence agent Author: Chung-Chen Tsai, Yeh-Liang Hsu (2001-04-06); recommended: Yeh-Liang Hsu (2001-04-06); last updated: Yeh-Liang Hsu (2004-03-23). Note: This paper was first presented at. The revised paper was presented

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

Cognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many

Cognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many Preface The jubilee 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 was held in the conference centre of the Best Western Hotel M, Belgrade, Serbia, from 30 June to 2 July

More information

Designing Better Industrial Robots with Adams Multibody Simulation Software

Designing Better Industrial Robots with Adams Multibody Simulation Software Designing Better Industrial Robots with Adams Multibody Simulation Software MSC Software: Designing Better Industrial Robots with Adams Multibody Simulation Software Introduction Industrial robots are

More information

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT

More information

Force feedback interfaces & applications

Force feedback interfaces & applications Force feedback interfaces & applications Roope Raisamo Tampere Unit for Computer-Human Interaction (TAUCHI) School of Information Sciences University of Tampere, Finland Based on material by Jukka Raisamo,

More information

Transforming Surgical Robotics. 34 th Annual J.P. Morgan Healthcare Conference January 14, 2016

Transforming Surgical Robotics. 34 th Annual J.P. Morgan Healthcare Conference January 14, 2016 1 Transforming Surgical Robotics 34 th Annual J.P. Morgan Healthcare Conference January 14, 2016 Forward Looking Statements 2 This presentation includes statements relating to TransEnterix s current regulatory

More information

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE Najirah Umar 1 1 Jurusan Teknik Informatika, STMIK Handayani Makassar Email : najirah_stmikh@yahoo.com

More information

Service Robots in an Intelligent House

Service Robots in an Intelligent House Service Robots in an Intelligent House Jesus Savage Bio-Robotics Laboratory biorobotics.fi-p.unam.mx School of Engineering Autonomous National University of Mexico UNAM 2017 OUTLINE Introduction A System

More information

Background. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image

Background. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image Background Computer Vision & Digital Image Processing Introduction to Digital Image Processing Interest comes from two primary backgrounds Improvement of pictorial information for human perception How

More information

Haptic presentation of 3D objects in virtual reality for the visually disabled

Haptic presentation of 3D objects in virtual reality for the visually disabled Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,

More information

Surgical Assist Devices & Systems aka Surgical Robots

Surgical Assist Devices & Systems aka Surgical Robots Surgical Assist Devices & Systems aka Surgical Robots D. J. McMahon 150125 rev cewood 2018-01-19 Key Points Surgical Assist Devices & Systems: Understand why the popular name robot isn t accurate for Surgical

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction Robotics technology has recently found extensive use in surgical and therapeutic procedures. The purpose of this chapter is to give an overview of the robotic tools which may be

More information

R (2) Controlling System Application with hands by identifying movements through Camera

R (2) Controlling System Application with hands by identifying movements through Camera R (2) N (5) Oral (3) Total (10) Dated Sign Assignment Group: C Problem Definition: Controlling System Application with hands by identifying movements through Camera Prerequisite: 1. Web Cam Connectivity

More information

A Pilot Study: Introduction of Time-domain Segment to Intensity-based Perception Model of High-frequency Vibration

A Pilot Study: Introduction of Time-domain Segment to Intensity-based Perception Model of High-frequency Vibration A Pilot Study: Introduction of Time-domain Segment to Intensity-based Perception Model of High-frequency Vibration Nan Cao, Hikaru Nagano, Masashi Konyo, Shogo Okamoto 2 and Satoshi Tadokoro Graduate School

More information

Computational Principles of Mobile Robotics

Computational Principles of Mobile Robotics Computational Principles of Mobile Robotics Mobile robotics is a multidisciplinary field involving both computer science and engineering. Addressing the design of automated systems, it lies at the intersection

More information

FSI Machine Vision Training Programs

FSI Machine Vision Training Programs FSI Machine Vision Training Programs Table of Contents Introduction to Machine Vision (Course # MVC-101) Machine Vision and NeuroCheck overview (Seminar # MVC-102) Machine Vision, EyeVision and EyeSpector

More information

Autonomous Cooperative Robots for Space Structure Assembly and Maintenance

Autonomous Cooperative Robots for Space Structure Assembly and Maintenance Proceeding of the 7 th International Symposium on Artificial Intelligence, Robotics and Automation in Space: i-sairas 2003, NARA, Japan, May 19-23, 2003 Autonomous Cooperative Robots for Space Structure

More information

An Excavator Simulator for Determining the Principles of Operator Efficiency for Hydraulic Multi-DOF Systems Mark Elton and Dr. Wayne Book ABSTRACT

An Excavator Simulator for Determining the Principles of Operator Efficiency for Hydraulic Multi-DOF Systems Mark Elton and Dr. Wayne Book ABSTRACT An Excavator Simulator for Determining the Principles of Operator Efficiency for Hydraulic Multi-DOF Systems Mark Elton and Dr. Wayne Book Georgia Institute of Technology ABSTRACT This paper discusses

More information

Evaluation of RAVEN Surgical Telerobot during the NASA Extreme Environment Mission Operations (NEEMO) 12 Mission

Evaluation of RAVEN Surgical Telerobot during the NASA Extreme Environment Mission Operations (NEEMO) 12 Mission Evaluation of RAVEN Surgical Telerobot during the NASA Extreme Environment Mission Operations (NEEMO) 12 Mission Blake Hannaford Diana Friedman Hawkeye King Mitch Lum Jacob Rosen Ganesh Sankaranarayanan

More information

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Joan De Boeck, Karin Coninx Expertise Center for Digital Media Limburgs Universitair Centrum Wetenschapspark 2, B-3590 Diepenbeek, Belgium

More information

Canadian Activities in Intelligent Robotic Systems - An Overview

Canadian Activities in Intelligent Robotic Systems - An Overview In Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2004' ESTEC, Noordwijk, The Netherlands, November 2-4, 2004 Canadian Activities in Intelligent Robotic

More information

C. R. Weisbin, R. Easter, G. Rodriguez January 2001

C. R. Weisbin, R. Easter, G. Rodriguez January 2001 on Solar System Bodies --Abstract of a Projected Comparative Performance Evaluation Study-- C. R. Weisbin, R. Easter, G. Rodriguez January 2001 Long Range Vision of Surface Scenarios Technology Now 5 Yrs

More information

Initial setup and subsequent temporal position monitoring using implanted RF transponders

Initial setup and subsequent temporal position monitoring using implanted RF transponders Initial setup and subsequent temporal position monitoring using implanted RF transponders James Balter, Ph.D. University of Michigan Has financial interest in Calypso Medical Technologies Acknowledgements

More information

RECOMMENDATION ITU-R BT SUBJECTIVE ASSESSMENT OF STANDARD DEFINITION DIGITAL TELEVISION (SDTV) SYSTEMS. (Question ITU-R 211/11)

RECOMMENDATION ITU-R BT SUBJECTIVE ASSESSMENT OF STANDARD DEFINITION DIGITAL TELEVISION (SDTV) SYSTEMS. (Question ITU-R 211/11) Rec. ITU-R BT.1129-2 1 RECOMMENDATION ITU-R BT.1129-2 SUBJECTIVE ASSESSMENT OF STANDARD DEFINITION DIGITAL TELEVISION (SDTV) SYSTEMS (Question ITU-R 211/11) Rec. ITU-R BT.1129-2 (1994-1995-1998) The ITU

More information

Teleoperation with Sensor/Actuator Asymmetry: Task Performance with Partial Force Feedback

Teleoperation with Sensor/Actuator Asymmetry: Task Performance with Partial Force Feedback Teleoperation with Sensor/Actuator Asymmetry: Task Performance with Partial Force Wagahta Semere, Masaya Kitagawa and Allison M. Okamura Department of Mechanical Engineering The Johns Hopkins University

More information

EXPERIMENTAL BILATERAL CONTROL TELEMANIPULATION USING A VIRTUAL EXOSKELETON

EXPERIMENTAL BILATERAL CONTROL TELEMANIPULATION USING A VIRTUAL EXOSKELETON EXPERIMENTAL BILATERAL CONTROL TELEMANIPULATION USING A VIRTUAL EXOSKELETON Josep Amat 1, Alícia Casals 2, Manel Frigola 2, Enric Martín 2 1Robotics Institute. (IRI) UPC / CSIC Llorens Artigas 4-6, 2a

More information

DESIGN AND USE OF MODERN OPTIMAL RATIO COMBINERS

DESIGN AND USE OF MODERN OPTIMAL RATIO COMBINERS DESIGN AND USE OF MODERN OPTIMAL RATIO COMBINERS William M. Lennox Microdyne Corporation 491 Oak Road, Ocala, FL 34472 ABSTRACT This paper will discuss the design and use of Optimal Ratio Combiners in

More information

Evaluation of Operative Imaging Techniques in Surgical Education

Evaluation of Operative Imaging Techniques in Surgical Education SCIENTIFIC PAPER Evaluation of Operative Imaging Techniques in Surgical Education Shanu N. Kothari, MD, Timothy J. Broderick, MD, Eric J. DeMaria, MD, Ronald C. Merrell, MD ABSTRACT Background: Certain

More information

ROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION

ROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION ROBOTICS INTRODUCTION THIS COURSE IS TWO PARTS Mobile Robotics. Locomotion (analogous to manipulation) (Legged and wheeled robots). Navigation and obstacle avoidance algorithms. Robot Vision Sensors and

More information

Quartz Lock Loop (QLL) For Robust GNSS Operation in High Vibration Environments

Quartz Lock Loop (QLL) For Robust GNSS Operation in High Vibration Environments Quartz Lock Loop (QLL) For Robust GNSS Operation in High Vibration Environments A Topcon white paper written by Doug Langen Topcon Positioning Systems, Inc. 7400 National Drive Livermore, CA 94550 USA

More information

Improving Depth Perception in Medical AR

Improving Depth Perception in Medical AR Improving Depth Perception in Medical AR A Virtual Vision Panel to the Inside of the Patient Christoph Bichlmeier 1, Tobias Sielhorst 1, Sandro M. Heining 2, Nassir Navab 1 1 Chair for Computer Aided Medical

More information

VR Haptic Interfaces for Teleoperation : an Evaluation Study

VR Haptic Interfaces for Teleoperation : an Evaluation Study VR Haptic Interfaces for Teleoperation : an Evaluation Study Renaud Ott, Mario Gutiérrez, Daniel Thalmann, Frédéric Vexo Virtual Reality Laboratory Ecole Polytechnique Fédérale de Lausanne (EPFL) CH-1015

More information

ROBOTIC MANIPULATION AND HAPTIC FEEDBACK VIA HIGH SPEED MESSAGING WITH THE JOINT ARCHITECTURE FOR UNMANNED SYSTEMS (JAUS)

ROBOTIC MANIPULATION AND HAPTIC FEEDBACK VIA HIGH SPEED MESSAGING WITH THE JOINT ARCHITECTURE FOR UNMANNED SYSTEMS (JAUS) ROBOTIC MANIPULATION AND HAPTIC FEEDBACK VIA HIGH SPEED MESSAGING WITH THE JOINT ARCHITECTURE FOR UNMANNED SYSTEMS (JAUS) Dr. Daniel Kent, * Dr. Thomas Galluzzo*, Dr. Paul Bosscher and William Bowman INTRODUCTION

More information

Haptic Virtual Fixtures for Robot-Assisted Manipulation

Haptic Virtual Fixtures for Robot-Assisted Manipulation Haptic Virtual Fixtures for Robot-Assisted Manipulation Jake J. Abbott, Panadda Marayong, and Allison M. Okamura Department of Mechanical Engineering, The Johns Hopkins University {jake.abbott, pmarayong,

More information

Robotic System Simulation and Modeling Stefan Jörg Robotic and Mechatronic Center

Robotic System Simulation and Modeling Stefan Jörg Robotic and Mechatronic Center Robotic System Simulation and ing Stefan Jörg Robotic and Mechatronic Center Outline Introduction The SAFROS Robotic System Simulator Robotic System ing Conclusions Folie 2 DLR s Mirosurge: A versatile

More information

Be aware that there is no universal notation for the various quantities.

Be aware that there is no universal notation for the various quantities. Fourier Optics v2.4 Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast and

More information

ISO INTERNATIONAL STANDARD. Robots for industrial environments Safety requirements Part 1: Robot

ISO INTERNATIONAL STANDARD. Robots for industrial environments Safety requirements Part 1: Robot INTERNATIONAL STANDARD ISO 10218-1 First edition 2006-06-01 Robots for industrial environments Safety requirements Part 1: Robot Robots pour environnements industriels Exigences de sécurité Partie 1: Robot

More information

Robotics for Telesurgery

Robotics for Telesurgery Robotics for Telesurgery Divya Salian Final year MCA student from Deccan Education Society s Navinchandra Mehta Institute of Technology & Development. Abstract: We as human beings have always been dissatisfied

More information