Image Interpretation System for Informed Consent to Patients by Use of a Skeletal Tracking


Naoki Kamiya 1, Hiroki Osaki 2, Jun Kondo 2, Huayue Chen 3, and Hiroshi Fujita 4

1 Department of Information and Computer Engineering, Toyota National College of Technology, Toyota, JAPAN
2 Advanced Course of Computer Science, Toyota National College of Technology, Toyota, JAPAN
3 Department of Anatomy, Graduate School of Medicine, Gifu University, Gifu, JAPAN
4 Department of Intelligent Information, Division of Regeneration and Advanced Medical Sciences, Graduate School of Medicine, Gifu University, Gifu, JAPAN

E-mail: n-kamiya@toyota-ct.ac.jp

Abstract

In recent years, a variety of medical images have been used for diagnosis. However, these images are difficult to understand for patients without anatomical knowledge. In particular, a CT examination consists of a large number of slice images, which makes it hard for patients to grasp where in their body each slice is located; this adversely affects first-person informed consent (IC). We developed an image interpretation system for patients, with the aim of encouraging patient participation at the clinical site. The system uses skeletal tracking to recognize the patient's hand movements and determines the corresponding relative position on the body. Skeletal tracking is implemented with the Microsoft Kinect sensor and the official SDK (KinectSDK-v1.0-beta2-x86). The tracked landmark positions are associated with CT slice numbers, and the selected CT images are displayed on an iPad held by the patient. We tested the effectiveness of the proposed system with volunteer subjects without anatomical knowledge. As a result, they were able to identify the positions of the presented cross-sectional images more accurately than before using the system.

1. Introduction

In the field of health care, owing to advances in medical equipment, imaging devices such as MRI, CT, and PET/CT are frequently used for diagnosis. The tomographic images are of high definition, but a very large number of images is obtained in a single examination. For example, the torso CT scans used in this study contain more than 1,000 cross-sectional images per patient. From such a series, the doctor selects a small number of images for diagnosis and then presents and explains them to the patient. Normally, all slice images, or a list of images covering the required range, are displayed for the patient. Some patients can roughly understand the spatial position in this way, but in general it is difficult for a patient without anatomical knowledge to gain an intuitive understanding from a few tomographic images. In addition, the importance of informed consent (IC) has been increasing in recent years [1, 2]. Informed consent means that the patient's medical care is based on the patient's free will, and it cannot be achieved without the patient's understanding based on accurate information.

Therefore, a system is needed through which patients without anatomical knowledge can easily understand tomographic images. Although various forms of image presentation are conceivable, we develop an image interpretation system that the patient can understand intuitively by himself or herself. In this study, we build such a system using an environment-mounted sensor: with the Microsoft Kinect, we measure the coordinates of each part of the body by skeletal tracking. Skeletal tracking with the Kinect has attracted much attention in medical research because it requires no equipment to be attached to the patient and can be introduced at low cost; examples include machine control based on body position and movement [3]. In this study, we develop a system that displays the CT images corresponding to the motion of the patient's hands in real time, based on skeletal position estimation by skeletal tracking.

Figure 1 shows the conceptual diagram of the proposed method. The system consists of three parts: the patient, the sensor, and a personal computer (PC). The patient does not need to wear a contact-type sensor, so the system is non-invasive, and the patient can understand the internal structure of his or her body through intuitive movements. In other words, the patient can understand the images presented by the doctor in association with positions on his or her own body. Evaluation is carried out by an observer study, in which we assess whether the patient's level of understanding improves with use of the system.

Fig. 1. Conceptual diagram of the proposed method.

2. Methods

In our previous study, we proposed a concept model of this image presentation system [4]. That system also used the Kinect as the environment-mounted sensor and, in addition, used a tablet device (iPad) for presenting images to the patient: the system acquired the spatial position of the iPad and presented the tomographic image corresponding to that cross-section. Figure 2 shows the interface of the previous system. It was useful for understanding cross-sectional tomographic images, but it required the patient to operate the tablet device. Patients such as the elderly therefore tended to concentrate on operating the iPad, which prevented an intuitive understanding of the images presented by the doctor.

Fig. 2. The system interface of our previous system [4].

In this paper, we develop a system that uses only the relative position of the body, without the iPad, to decide the displayed cross-sectional image, so patients do not need to learn how to operate the iPad. In addition to the axial (cross-sectional) plane, we also present two other anatomical planes, the sagittal and coronal planes, so that the patient can understand the three-dimensional positional information of the body, not only the cross-sectional view. We expect this to improve the patient's understanding of organ locations. In particular, a mass lesion has a three-dimensional extent, including the depth direction, and we consider that its spatial spread within the body can be understood from the three cross-sectional views of the proposed system.

Figure 3 shows the flow chart of the proposed method. The system consists of four parts, whose details are described in the following sections.

Fig. 3. Flowchart of our proposed method.

2.1. Development Environment

In this paper, we developed the image interpretation system under the following conditions:

- the Microsoft Kinect is used as the environment-mounted sensor;
- the computer is a generic 32-bit personal computer.

2.2. Input Images

A tomographic examination is composed of a large number of images, which is difficult for the patient to understand. In this study, CT images are used as the input; however, the algorithm of the proposed method is not tied to a particular modality and can, in principle, be applied to other images such as MRI and PET/CT. Here, we use a torso CT scan, imaged from the pubic bone to the shoulder, as the input. One case consists of about 1,000 axial cross-sectional images, and the sagittal and coronal planes each consist of about 500 images. A skeletal image is also used as input for the position synchronization described in Section 2.4; in this image, each bone is segmented using skeletal topology [5].

2.3. Skeletal Tracking

Based on skeletal tracking, we acquire distinctive positions on the patient's body. As mentioned above, skeletal tracking is achieved with the KinectSDK. The control points obtained by skeletal tracking are used both to input the position the patient wants to view and to align the images. Figure 4 shows the three images acquired from the Kinect: (a) is the normal camera image, (b) is the depth image, and (c) is the skeletal tracking image. The depth image changes its gray value according to the depth information obtained from the depth sensor, and the human region is also recognized and colored independently. The skeletal tracking image connects the landmark points with straight lines. In the KinectSDK, a total of 20 landmarks can be recognized automatically for up to two people at the same time. Figure 5 shows the correspondence between the landmarks and body positions.

Fig. 4. Images acquired from the Kinect: (a) normal, (b) depth, and (c) skeletal tracking image.

Fig. 5. Landmark positions on the human body.

The position the patient wants to observe is input by calculating the relative positions between landmarks. The relative position is calculated as a difference value between the right-hand landmark (HAND_RIGHT in Fig. 5) and three other landmarks. For the axial (cross-sectional) plane, the relative position of the right hand with respect to the umbilicus (HIP_CENTER) and the neck (SHOULDER_CENTER) is used. Similarly, for the sagittal plane the right shoulder landmark (SHOULDER_RIGHT) is used. For the coronal plane, the depth information of the right hand is used, and the displayed image is selected from the change in depth.
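As a rough illustration of this relative-position input, the following Python sketch computes the difference values from a set of landmark coordinates. The dictionary layout, the coordinate convention (x toward the patient's right, y up, z toward the sensor, in metres), and the function name are assumptions made for this sketch; the actual coordinates are provided by the KinectSDK skeletal tracking.

```python
# Hedged sketch of the relative-position input in Section 2.3.
# Assumed convention: x toward the patient's right, y up, z toward the
# sensor, all in metres (camera space); landmark names follow the KinectSDK.

def relative_hand_position(landmarks):
    """Return the offsets used to select each anatomical plane.

    axial    : (height of the hand above the hip, hip-to-shoulder span)
    sagittal : (lateral offset of the hand from the hip, hip-to-right-shoulder span)
    coronal  : depth offset of the hand in front of the hip (umbilicus)
    """
    hand = landmarks["HAND_RIGHT"]
    hip = landmarks["HIP_CENTER"]
    shoulder_c = landmarks["SHOULDER_CENTER"]
    shoulder_r = landmarks["SHOULDER_RIGHT"]

    axial = (hand[1] - hip[1], shoulder_c[1] - hip[1])
    sagittal = (hand[0] - hip[0], shoulder_r[0] - hip[0])
    coronal = hip[2] - hand[2]  # positive when the hand is held toward the sensor
    return axial, sagittal, coronal

# Example with made-up landmark coordinates (metres).
example = {
    "HAND_RIGHT": (0.09, 0.175, 1.95),
    "HIP_CENTER": (0.00, -0.10, 2.00),
    "SHOULDER_CENTER": (0.00, 0.45, 2.00),
    "SHOULDER_RIGHT": (0.18, 0.40, 2.00),
}
print(relative_hand_position(example))
```

In this example the hand is held halfway between the hip and shoulder, halfway toward the right shoulder, and about 5 cm in front of the body.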

2.4. Position Synchronization

The CT slice corresponding to the body position entered in the previous section must be selected from the large number of slices in the torso CT scan. In this paper, the umbilicus position in the CT volume, which can be recognized with relative ease, is detected automatically and synchronized with the umbilicus position (HIP_CENTER) recognized by the KinectSDK in the skeletal image. The recognition of the umbilicus cross-section in the CT volume uses the skeletal image, one of the input images: the axial position of the umbilicus lies between the top of the pelvis and the lowest end of the costal bones, so the umbilicus position is determined between these two slices. Figure 6 shows the skeletal image and the detected umbilicus slice; in the figure, each color represents a separately recognized bone [5].

Fig. 6. Umbilicus slice detection: (a) skeletal image, (b) detected umbilicus slice.

Based on this umbilicus position, the displayed image is determined from the right-hand position obtained in Section 2.3. The slice positions in the input CT volume are synchronized with the approximate spatial location of the right hand as follows. In the axial plane, the shoulder and hip landmarks define the upper and lower limits, respectively, and are mapped to the top and bottom of the torso CT volume; the displayed slice is determined by where the right hand lies between them. In the sagittal plane, the umbilicus and right shoulder landmarks define the center and the right endpoint, respectively, and are mapped to the umbilicus (midline) and the right edge of the volume. In the coronal plane, the patient's umbilicus landmark is used as the reference position, and the image is determined from the change in the relative depth of the right hand with respect to the umbilicus; empirically, the hand movement is tracked over a range of about 15 cm from the umbilicus, and the position closest to the camera is assigned to the foremost image. Figure 7 shows slice images synchronized with the patient's hand movement: (a) shows the sagittal plane and (b) the coronal plane.

Fig. 7. Synchronized slice images: (a) sagittal and (b) coronal plane.
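The following Python sketch shows one possible reading of these synchronization rules, using the relative positions from the previous sketch. The linear mappings, the clamping, and the slice-index conventions (axial indices counted from the top of the volume, sagittal from the patient's left edge, coronal from the back of the body toward the front) are assumptions made for illustration; only the roughly 15 cm coronal range is taken from the text above.

```python
# Illustrative sketch of the slice selection in Section 2.4, under the
# assumptions stated above. Inputs are the offsets and spans returned by
# relative_hand_position() in the previous sketch (metres).

def _clamp(i, n):
    return max(0, min(n - 1, i))

def axial_slice(offset, span, n_axial):
    """Hand at shoulder height -> top slice (index 0); at hip height -> bottom slice."""
    frac = offset / span if span else 0.0
    return _clamp(round((1.0 - frac) * (n_axial - 1)), n_axial)

def sagittal_slice(offset, span, n_sagittal):
    """Hand on the body midline -> midline slice; at the right shoulder -> right edge."""
    frac = offset / span if span else 0.0
    mid = (n_sagittal - 1) / 2.0  # assume the umbilicus lies on the midline
    return _clamp(round(mid + frac * (n_sagittal - 1 - mid)), n_sagittal)

def coronal_slice(depth_offset, n_coronal, reach=0.15):
    """Hand pushed about 15 cm toward the camera -> foremost (most anterior) slice."""
    frac = depth_offset / reach
    return _clamp(round(frac * (n_coronal - 1)), n_coronal)

# Example: hand halfway between hip and shoulder, halfway toward the right
# shoulder, and 5 cm in front of the umbilicus.
print(axial_slice(0.275, 0.55, 1000),
      sagittal_slice(0.09, 0.18, 500),
      coronal_slice(0.05, 500))
```

The clamping simply keeps each index inside the volume when the hand moves beyond the calibrated range.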

3. Results and Discussion

To verify whether the tomographic images obtained from this system can be grasped in association with body position, we conducted an experiment with 10 subjects without anatomical knowledge and evaluated the change in their level of understanding of body locations before and after using the proposed system. First, five cross-sectional images, selected from the shoulder, lung, xiphoid, umbilicus, and pelvis levels, were presented in random order, and the subjects marked on a number line where on the body they thought each image was located. Next, the subjects freely observed the CT images using this system. Then, the same five cross-sectional images were displayed again in random order, and the subjects answered on the number line once more. Finally, the error between the correct position and the answered position on the number line was determined.

Table 1. Response error before and after use of the system.

        Average   Standard deviation
Before   14.94    35.91
After     9.49     6.64

Table 1 shows the average response error over the five cross-sectional views. With use of the system, the average error was reduced from 14.94 to 9.49, and the standard deviation of the error decreased markedly, from 35.91 to 6.64. Figure 8 shows the same results approximated by normal distributions. The shape of the graph indicates that after using the system many subjects answered close to the correct position; however, it can also be confirmed that the distribution has shifted in the positive direction, which is considered to result from a scale error or a systematic error of the display position. Overall, we consider that the level of understanding of the presented CT images was increased.

Fig. 8. Response error before and after use of the system: the dotted line indicates before use of the system, the solid line after use.

4. Conclusion

In this study, we developed a novel image presentation system to support informed consent. The system allows the patient to understand tomographic images such as CT intuitively. Compared with our conventional method [4], it eliminates the need for the iPad as a display device and makes it possible to observe two additional sections, the sagittal and coronal planes. The usefulness of the system was shown in an experiment with people without anatomical knowledge. In the future, it will be necessary to achieve reconstruction of arbitrary cross-sections and to consider application to medical education.

Acknowledgement

We thank all the members of the Fujita laboratory of Gifu University and the Kamiya laboratory of Toyota National College of Technology for their valuable contributions to this work.

References

[1] R. Volpe, "Patients' expressed and unexpressed needs for information for informed consent," Journal of Clinical Ethics, vol. 21, no. 1, pp. 45-57, 2010.
[2] Y. Ivashkov and V. Norman, "Informed consent and the ethical management of the older patient," Anesthesiology Clinics, vol. 27, no. 3, pp. 569-580, 2009.
[3] A. P. Bo, M. Hayashibe, and P. Poignet, "Joint angle estimation in rehabilitation with inertial sensors and its integration with Kinect," Proc. IEEE Eng. Med. Biol. Soc., pp. 3479-3483, 2011.
[4] J. Kondo, N. Kamiya, H. Osaki, T. Hara, C. Muramatsu, and H. Fujita, "Interactive system for next-generation medical care system using iPad and Kinect," 97th Scientific Assembly and Annual Meeting of the Radiological Society of North America, Radiology Informatics Series: Mobile Computing Devices (Scientific Formal (Paper) Presentations), MSVR31-05, p. 91, 2011.
[5] X. Zhou, T. Hayashi, M. Han, H. Chen, T. Hara, H. Fujita, R. Yokoyama, M. Kanematsu, and H. Hoshi, "Automatic segmentation and recognition of the bone structure in non-contrast torso CT images using implicit anatomical knowledge," Proc. SPIE, vol. 7259, 72593S, 2009, doi: 10.1117/12.812945.