Minimally invasive surgical skills evaluation in the field of otolaryngology

Alejandro Cuevas 1, Daniel Lorias 1, Arturo Minor 1, Jose A. Gutierrez 2, Rigoberto Martinez 3

1 CINVESTAV-IPN, México D.F., México. Department of Electrical Engineering, Section of Bioelectronics.
2 Technological Institute of Morelia, Michoacán, México. Department of Electrical Engineering, Section of Biomedic.
3 State Autonomous University of Mexico, México D.F., México. Department of Electrical Engineering.

Email: acuevas@cinvestav.mx

Abstract

This paper describes the design and development of a training system for minimally invasive surgical skills in the field of otolaryngology. The main purpose of the system is for surgical residents to gain experience and practice. In order to provide the surgeon with a practical interaction environment, training modules and an evaluation methodology are proposed. The methodology consists of a series of tasks that help develop and evaluate the surgeon's skills based on precision and time. A custom MATLAB code records and evaluates the surgeon's performance on each task, giving the resident the chance to follow a trend of progress.

Keywords: surgical target, training module, evaluation task, surgeon's skills, ENT surgery.

1. Introduction

Surgical practice and technique have evolved significantly throughout history. One of the most remarkable advances of recent decades has been the arrival of minimally invasive surgical techniques such as laparoscopic surgery, arthroscopy and interventional radiology. Such techniques enable a considerable reduction of the damage caused to the patient during an intervention, shortening the period of convalescence and decreasing postoperative pain and the risk of infection [1]. These days, minimally invasive surgery is a global concept that reaches almost every medical discipline, including otorhinolaryngology. Given that surgical training is mainly practiced through real surgery under the strict control of an expert, it requires long and expensive training sessions until the students acquire the minimum needed skills [1].

Nowadays there are hybrid surgical simulators (ProMIS, UMS, TrEndo, IOMaster7D) that combine a box trainer with an evaluation system and different accessories, so that the surgeon interacts with both virtual and physical models [2]. The most advanced systems integrate realistic force feedback and augmented reality graphics and generate objective measures of performance, similar to virtual reality simulators; their main disadvantage is their high cost [3]. Virtual trainers for teaching and training are prohibitively expensive in Latin America and have not entered the academic scene. Nevertheless, residents and surgeons need access to training systems that provide an objective, quantitative measure of their skill development in order to establish and standardize better training protocols [4]. The need for surgical training environments capable of developing skills and giving residents experience is therefore pressing. The purpose of this project is to develop a low-cost training device that helps improve surgeons' minimally invasive surgical skills in otolaryngology.
2. Materials and methods

Based on the three environments related to learning laparoscopic technique (virtual environments, direct practice in surgery, and physical trainers), three physical practice modules are proposed, together with an accessible and functional electronic-digital system that records and evaluates performance on the proposed tasks [5]. The training environment for skills development is shown in Fig. 1. It depicts the interaction of the surgeon with the training module through surgical instruments. The surgeon operates the surgical instrument and holds the visual feedback camera, which is introduced into the module to provide a perspective from the inside. The surgical target inside the module constitutes the environment for the completion of the task. Evaluation parameters for each module (one module per task) are captured, recorded, processed and evaluated on a personal computer (PC) through custom MATLAB code. This code integrates the communication process between the modules, the PC and the visual feedback. Given that each module has its own communication means, a different connection must be established for each task; a sketch of this dispatch follows below.

Figure 1. Training environment for minimally invasive skills in the specialty of otolaryngology (the diagram shows the surgeon, surgical instrument, camera, physical training module, visual feedback, communication interface and PC).
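As noted above, each module requires its own connection to the PC. The following MATLAB sketch illustrates one way to organize that dispatch; the function name, adaptor name and device indices are hypothetical, and only the choice of interface (USB webcam for the dissection module, audio card for the other two, as detailed in Section 3) comes from the paper.

```matlab
% Minimal sketch of the per-task connection setup; device names and
% indices are assumptions, not taken from the paper.
function dev = openTaskConnection(task)
    switch task
        case 'dissection'
            % Module 1: USB webcam (Image Acquisition Toolbox)
            dev = videoinput('winvideo', 1);
        case {'spatial location', 'spatial navigation'}
            % Modules 2 and 3: buzzer tones picked up through the microphone
            Fs  = 8000;                      % assumed sampling rate (Hz)
            dev = audiorecorder(Fs, 16, 1);  % mono, 16-bit audio input
        otherwise
            error('Unknown task: %s', task);
    end
end
```

The returned object would then be handed to the task-specific acquisition loop of the corresponding module.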

3. Evaluation procedures

The evaluation was based on the following aspects: spatial position of the surgical instrument tip, smoothness of motion, depth perception, response orientation, and ambidexterity [6], all of them with respect to time. Measurement of the above parameters is made through three assessment tasks, proposed as follows: a dissection task, a spatial location task and a spatial navigation task. These tasks are integrated into the training modules, one module per task. The training modules are an excellent low-cost alternative that gives the surgeon trainee a practical environment; however, the modules by themselves do not offer a system for objective evaluation [4]. The proposed modules consist, in general, of a cylindrical environment integrating a surgical target and a visual feedback camera (endoscopic camera). The camera communicates with the computer via USB and captures images at a sampling rate of 30 frames per second; it replaces the zero-degree rhinoscope used in ear, nose and throat (ENT) surgery [4]. The feedback camera provides four degrees of freedom of mobility (up-down, left-right, forward-backward, and rotation on its own axis), allowing the surgeon to see inside the module and get the perspective of the surgical target needed to perform the tasks contemplated for each module. All the modules integrate devices to process the significant information, for instance the webcam and the electronic circuit that determine the measurement parameters.

3.1 Dissection module

The surgical target is shown in Fig. 2. The target is located at a distance of 6 cm into the module, where the task is to be executed. The target is a shape with a total area of 3.1416 cm² (a circle of 1 cm radius). A commercial USB webcam is integrated into the module. An image is captured by the webcam and then analyzed and processed using an algorithm written in MATLAB that determines the percentage of sectioned area; a sketch of this measurement is given after Fig. 2. The module environment is shown in Fig. 5(a).

3.1.1 Task: Use the surgical instrument (forceps) to cut and remove the surgical target (inside the circle), as shown in Fig. 2(a). Removing 100% of the target's area is considered an ideal cut for the task; see Fig. 2(b).

3.1.2 Difficulty: To obtain an ideal cut as fast as possible.

3.1.3 Measurement parameters: The percentage of sectioned area and the time it takes the surgeon to perform the task.

Figure 2. Surgical target for the dissection task. (a) Diagram describing the surgical target; the dotted line describes an ideal (100%) cut. (b) A real perspective of the surgical target, showing a cut made by a participant.
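As a concrete illustration of the measurement described above, here is a minimal MATLAB sketch of the sectioned-area computation. It assumes the intact target appears dark against a light background and sits roughly centered in the frame; the Otsu threshold and the region-of-interest radius are illustrative assumptions, not the paper's actual algorithm.

```matlab
% Minimal sketch of the sectioned-area measurement; threshold method and
% ROI radius are illustrative, the paper does not specify them.
vid   = videoinput('winvideo', 1);       % USB webcam (Image Acquisition Toolbox)
frame = getsnapshot(vid);
gray  = rgb2gray(frame);
bw    = im2bw(gray, graythresh(gray));   % Otsu threshold: true = light = removed
[h, w] = size(bw);
[X, Y] = meshgrid(1:w, 1:h);
roi    = (X - w/2).^2 + (Y - h/2).^2 <= (0.4 * min(h, w))^2;  % assumed target ROI
cutPct = 100 * sum(bw(roi)) / sum(roi(:));   % percentage of the target removed
fprintf('Sectioned area: %.1f %%\n', cutPct);
```

In practice the threshold and region of interest would be calibrated to the module's lighting and the fixed camera position.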

3.2 Spatial location module

The surgical target is shown in Fig. 3. The target is located at a distance of 6 cm into the module, where the task is to be executed. The target integrates 6 different points which have to be touched with the surgical instrument (probe-separator). These 6 points are distributed as shown in Fig. 3(a). An electronic circuit (described below) attached to the module communicates with the PC through a microphone wire, and the signal is received on the audio card of the PC. The circuit emits a different sound depending on the contact zone. A custom MATLAB code processes the audio signal, turning it into useful information; a sketch of this decoding step is given after Section 3.3.

The electronic circuit consists of a voltage divider, a buzzer and a commercial microphone. The surgical target has various contact spots; when the surgical instrument touches any of the spots, the circuit closes and generates a different output voltage for each spot. The output voltage makes the buzzer emit a sound. The microphone, which is plugged into the PC, receives the audio signal for processing. The audio signal is useful not only for the PC algorithm but also to alert the surgeon that he has touched one of the spots. The module environment is shown in Fig. 5(b).

3.2.1 Task: Touch each point with the surgical instrument following the sequence shown in Fig. 3(b).

3.2.2 Difficulty: Achieving depth perception, as described in Fig. 3(a), and overcoming the disorientation associated with the camera motion.

3.2.3 Measurement parameters: Penalties committed by failing to follow the sequence and the time taken to complete the task.

Figure 3. Surgical target for the spatial location task. (a) Diagram describing the task and its spatial distribution. (b) Perspective the surgeon has for the travel direction.

3.3 Spatial navigation module

The target of this module is shown in Fig. 4. The target is located at a distance of 6 cm into the module and comprises two metal plates. The upper plate has a track with a fixed path, which the surgical instrument must follow. The bottom plate provides the border, or limit of depth; see Fig. 4(a). The track that the surgeon must follow has a total length of 10 cm and a width of 6 mm. The surgical instrument has a thickness of 4 mm, leaving a margin of 1 mm for tremor on each edge. An electronic circuit produces a beep whenever the instrument touches one of the edges of the plates; this contact is considered a penalty and is recorded. Communication with the PC is made as in the spatial location module. A custom MATLAB code records and processes the measured parameters. The module environment is shown in Fig. 5(c).

3.3.1 Task: The surgeon must follow the path described in Fig. 4(b) from point A to point B using the surgical instrument (probe-separator).

3.3.2 Difficulty: Avoiding touching the bottom of the surgical target described in Fig. 4(a) and avoiding the edges marked by the red line; see Fig. 4(b).

3.3.3 Measurement parameters: The total number of penalties and the time to perform the task.

Figure 4. Surgical target for the spatial navigation task. (a) Diagram describing the task and its spatial distribution. (b) Perspective the surgeon has to follow the path.
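Both circuit-based modules report contact events as buzzer tones received through the PC audio card. Since the circuit is a voltage divider, each contact zone presumably switches a different resistance R_spot into the divider, giving a distinct output V_out = V_cc * R_spot / (R_spot + R) and hence a distinct sound (the paper does not give the schematic). The MATLAB sketch below shows one plausible way to decode such a tone; the tone-per-spot frequency table is an assumption.

```matlab
% Minimal sketch of decoding a contact event from the buzzer tone; the
% spotFreqs table is hypothetical, as the paper says only that each
% contact zone produces a distinct sound.
Fs  = 8000;                                   % assumed sampling rate (Hz)
rec = audiorecorder(Fs, 16, 1);
recordblocking(rec, 0.5);                     % capture 0.5 s from the microphone
x   = getaudiodata(rec);
X   = abs(fft(x .* hann(numel(x))));          % windowed magnitude spectrum
f   = (0:numel(x) - 1) * Fs / numel(x);
[~, k] = max(X(2:floor(end/2)));              % dominant bin, ignoring DC
fPeak  = f(k + 1);
spotFreqs = [440 660 880 1100 1320 1540];     % hypothetical tone for spots 1-6
[~, spot] = min(abs(spotFreqs - fPeak));
fprintf('Touched spot %d (peak at %.0f Hz)\n', spot, fPeak);
```

For the spatial navigation module the same capture path suffices, since any detected beep simply increments the penalty counter.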

3.4 Digital Assessment System

The software presents a control screen which works as a graphic interface. This screen shows the evaluation outcome and includes an input section for the surgeon to enter his data (Fig. 6). Each task can be executed several times until it is satisfactorily completed; each execution of the task is called a repetition. At the beginning of each task, the user (surgeon or resident) enters her details to keep a personal record; when the task starts, the timer starts running. Upon completion of the task, the screen shows the following results: session number, repetition number, task duration, number of penalties and sectioned-area percentage (depending on the task). There is also an option (the EVALUATE button) for printing the data or saving it all to an Excel file.

Figure 6. Control screen for the spatial navigation task. The screen shows the following results: session number, repetition number, task duration and total penalties.

4. Results

Experimental tests were performed in the laboratory. Three participants with no experience were recruited, tagged I1, I2 and I3. After an explanation of how the system works and what each task requires, they started executing the tasks. Each participant performed one session of 5 repetitions per task. Table 1 lists their results.

Table 1. Measurement parameters recorded for the dissection, spatial navigation and spatial location tasks (rows are repetitions 1-5).

Dissection task
              Time (s)                  Cut percentage (%)
Rep     I1      I2      I3         I1       I2       I3
1     331.5   206.5   323.2     77.109   67.911   78.966
2     127.7   178.6   239.5     78.413   73.387   68.876
3     119.7   141.6   192.1     75.963   79.284   78.868
4     159.7   206.1   214.4     75.752   72.600   74.537
5     121.0   170.1   132.1     79.292   80.723   75.856

Spatial navigation task
              Time (s)                  Penalties
Rep     I1      I2      I3         I1       I2       I3
1      63.1    42.6    55.1        30       40       34
2      42.1    60.1    37.0        15       28       27
3      31.4    55.3    43.0        18       29       27
4      30.0    40.0    39.5        19       25       24
5      24.0    37.1    26.1        10       23       21

Spatial location task
              Time (s)                  Penalties
Rep     I1      I2      I3         I1       I2       I3
1      78.5   83.52   90.35         1        1        1
2      55.0   60.00   88.45         0        1        0
3      64.2   65.53   75.53         0        0        1
4      53.6   58.36   68.36         0        0        0
5      53.0   53.00   65.00         0        0        0

Figure 5. Learning environment. (a) Dissection module. (b) Spatial location module. (c) Spatial navigation module.

Data from Table 1 was plotted to observe the participants' learning trends; a sketch of such a trend analysis follows below. Fig. 7 shows the plotted data for the dissection task: Fig. 7(a) plots the cut percentage and shows an improvement in the percentage of sectioned area, while Fig. 7(b) shows a slight tendency for the time taken to perform the task to decrease. Fig. 8 shows the plotted data for the spatial navigation task: Fig. 8(a) shows a decrease in the number of penalties per repetition, and Fig. 8(b) shows a tendency for the task time to decrease. Fig. 9 shows the plotted data for the spatial location task: Fig. 9(a) shows a decrease in the number of penalties per repetition, and Fig. 9(b) shows a tendency for the task time to decrease.
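To make the trend analysis concrete, the sketch below reproduces the kind of per-participant plot behind Figs. 7-9, using the dissection-task times from Table 1; fitting a line per participant is an illustrative way to quantify the learning trend, not a method stated in the paper.

```matlab
% Sketch of the learning-trend analysis, using the dissection-task times
% from Table 1; the linear fit is illustrative, not the paper's method.
reps = (1:5)';
T = [331.5 206.5 323.2;                % time (s); columns are I1, I2, I3
     127.7 178.6 239.5;
     119.7 141.6 192.1;
     159.7 206.1 214.4;
     121.0 170.1 132.1];
plot(reps, T, '-o'); xlabel('Repetition'); ylabel('Time (s)');
legend('I1', 'I2', 'I3'); title('Dissection task: time per repetition');
for i = 1:3
    p = polyfit(reps, T(:, i), 1);     % slope in seconds per repetition
    fprintf('I%d trend: %+.1f s per repetition\n', i, p(1));
end
```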

Figure 7. Plotted data for the dissection task. (a) Graph describing a slight improvement in the approximation to an ideal cut. (b) Graph describing the decrease in time with respect to the repetitions.

Figure 8. Plotted data for the spatial navigation task. (a) Graph describing a significant improvement comparing the first repetitions with the last. (b) Graph describing the decrease in time with respect to the repetitions.

Figure 9. Plotted data for the spatial location task. (a) Graph describing a significant improvement comparing the first repetition with the last. (b) Graph describing the decrease in time with respect to the repetitions.

5. Discussion

The technological and cost gap between physical and virtual trainers is a factor that limits the standardization of academic methods for training in the ENT surgery specialty [7,8]. The cost of virtual trainers is not adequately justified by the set of abilities that can also be learned from physical trainers alone [9]. For this reason, it is a priority to integrate the best of these two technologies, to promote the development of appropriate training modules and improve physician performance and skill acquisition [6], at a cost accessible to residents and public hospitals [4]. Existing models and simulators used for instruction and training in the field of otolaryngology provide only subjective learning. Surgeons and residents need access to training systems that provide objective, quantitative measurement of their improvement and skill development in order to establish and standardize training protocols [10-12]. The novelty of this work is its capability of obtaining significant information through the recording and evaluation of measurement parameters; the proposed methodology gives a way of following the evolution of the surgeon's skills. A novel communication method between the PC and training modules 2 and 3 has been implemented: it uses the audio card of the PC as a means of acquiring data, which is doubly useful because the sound also provides audible feedback to the user.

6. Conclusions

The training method shown in this paper represents work in progress. It has proved possible to extract meaningful information about learning from the analysis of the surgical instrument's movement. The main motivation for the development is the substantial improvement it can bring to general surgical practice. Because of the realism the device creates, any resident in the specialty will be able to participate, practice and repeat a particular task that addresses a particular weakness. This kind of development can also be applied to testing new surgical techniques and instrumentation, and to the subsequent exchange of knowledge among surgeons. Future improvements to the system are currently being contemplated, as well as the prospect of extending its application to the field of neurology.

References

[1] C. Monserrat, Ó. López, M. Alcañíz, "Estado del Arte en Simulación Quirúrgica," Informática y Salud, no. 47, Jun. 2004.
[2] A. Oostema, M. P. Abdel, J. C. Gould, "Time-efficient laparoscopic skills assessment using an augmented reality simulator," Surgical Endoscopy, vol. 22, pp. 2621-2624, 2008.
[3] S. M. B. I. Botden, S. N. Buzink, M. P. Schijven, J. J. Jakimowicz, "Augmented versus virtual reality laparoscopic simulation: what is the difference?," World Journal of Surgery, vol. 31, pp. 764-772, 2007.
[4] D. Lorias, A. Minor, J. L. Ortiz, "Computer system for the evaluation of laparoscopic skills," Electronics, Robotics and Automotive Mechanics Conference (CERMA), pp. 19-22, 2010.
[5] D. Lorias, A. Minor, D. Gamino, "Integration of a system for evaluating in a box trainer: hybrid system for laparoscopic training," Pan American Health Care Exchanges (PAHCE), Miami, Florida, 2012.
[6] S. Cotin, "Metrics for laparoscopic skills training: the weakest link!," Lecture Notes in Computer Science, vol. 2488, pp. 35-43, 2002.
[7] R. Haluck, T. Krummel, "Computers and virtual reality for surgical education in the 21st century," Archives of Surgery, vol. 135, pp. 786-792, 2000.
[8] J. Torkington, S. Smith, B. Rees, A. Darzi, "The role of the Basic Surgical Skills course in the acquisition and retention of laparoscopic skill," Surgical Endoscopy, pp. 1071-1075, 2001.
[9] A. Minor, A. Chouleb, D. Lorias, "Millimetric laparoscopic surgery training on a physical trainer using rats," Surgical Endoscopy, pp. 246-249, 2008.
[10] M. Bridges, D. L. Diamond, "The financial impact of teaching surgical residents in the operating room," American Journal of Surgery, pp. 28-32, 1999.
[11] J. Korndorffer, D. Stefanidis, D. Scott, "Laparoscopic skills laboratories: current assessment and a call for resident training standards," American Journal of Surgery, pp. 17-22, 2006.
[12] L. Chang, J. Petros, D. T. Hess, C. Rotondi, T. J. Babineau, "Integrating simulation into a surgical residency program: is voluntary participation effective?," Surgical Endoscopy, pp. 418-421, 2007.