Electronic Travel Aid Based on Consumer Depth Devices to Avoid Moving Objects

Similar documents
Technology offer. Aerial obstacle detection software for the visually impaired

3D ULTRASONIC STICK FOR BLIND

Optimum Timing Acquisition for High Efficiency OFDM System in Wireless Communications

A Method of Measuring Distances between Cars Using Vehicle Black Box Images

Stability of Some Segmentation Methods Based on Markov Random Fields for Analysis of Aero and Space Images

Autonomous Stair Climbing Algorithm for a Small Four-Tracked Robot

Assignment Scheme for Maximizing the Network Capacity in the Massive MIMO

Azaad Kumar Bahadur 1, Nishant Tripathi 2

A Qualitative Research Proposal on Emotional Values Regarding Mobile Usability of the New Silver Generation

Automated Mobility and Orientation System for Blind

INTELLIGENT WHITE CANE TO AID VISUALLY IMPAIRED

SMART ELECTRONIC GADGET FOR VISUALLY IMPAIRED PEOPLE

SMART VIBRATING BAND TO INTIMATE OBSTACLE FOR VISUALLY IMPAIRED

Multi-Resolution Estimation of Optical Flow on Vehicle Tracking under Unpredictable Environments

Touch Your Way: Haptic Sight for Visually Impaired People to Walk with Independence

Computer Vision Based Real-Time Stairs And Door Detection For Indoor Navigation Of Visually Impaired People

Outdoor Navigation Systems to Promote Urban Mobility to Aid Visually Impaired People

AN UNIQUE METHODOLOGY ENABLING BUS BOARD NAVIGATING SYSTEM USING WSN

A RASPBERRY PI BASED ASSISTIVE AID FOR VISUALLY IMPAIRED USERS

A Survey on Assistance System for Visually Impaired People for Indoor Navigation

A Study on Developing Image Processing for Smart Traffic Supporting System Based on AR

Prototype of Low Temperature Sensor Based on Coils-Resistance Temperature Detector Enhanced with Three-Wire Configurations Bridge

Bluetooth Low Energy Sensing Technology for Proximity Construction Applications

Smart Blind Help

Leading the Agenda. Everyday technology: A focus group with children, young people and their carers

OPEN CV BASED AUTONOMOUS RC-CAR

Haptic presentation of 3D objects in virtual reality for the visually disabled

Mixed Reality technology applied research on railway sector

Maintenance of Quality of Paint and Varnish Coverings of Building Products and Designs

Substitute eyes for Blind using Android

Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback

Capacitive Face Cushion for Smartphone-Based Virtual Reality Headsets

GESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL

Learning and Using Models of Kicking Motions for Legged Robots

Prospects for the Use of Space Robots in the Neighbourhood of the Libration Points

Face Detector using Network-based Services for a Remote Robot Application

Virtual Reality Calendar Tour Guide

Eyedentify MMR SDK. Technical sheet. Version Eyedea Recognition, s.r.o.

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots

Smart Navigation System for Visually Impaired Person

Efficient Construction of SIFT Multi-Scale Image Pyramids for Embedded Robot Vision

An Optimized Direct Digital Frequency Synthesizer (DDFS)

Wheeled Mobile Robot Kuzma I

OBSTACLE DETECTION AND COLLISION AVOIDANCE USING ULTRASONIC DISTANCE SENSORS FOR AN AUTONOMOUS QUADROCOPTER

Learning and Using Models of Kicking Motions for Legged Robots

International Journal of Advance Engineering and Research Development TRAFFIC LIGHT DETECTION SYSTEM FOR VISUALLY IMPAIRED PERSON WITH VOICE SYSTEM

Blind navigation support system based on Microsoft Kinect

Team members: Christopher A. Urquhart Oluwaseyitan Joshua Durodola Nathaniel Sims

Omni-Directional Catadioptric Acquisition System

Research Article An Investigation of Structural Damage Location Based on Ultrasonic Excitation-Fiber Bragg Grating Detection

A MOBILE SOLUTION TO HELP VISUALLY IMPAIRED PEOPLE IN PUBLIC TRANSPORTS AND IN PEDESTRIAN WALKS

AudiNect: An Aid for the Autonomous Navigation of Visually Impaired People, Based On Virtual Interface

Real-Time Face Detection and Tracking for High Resolution Smart Camera System

Regular Hexagon Cover for Isoperimetric Triangles

Optimization of the Setting Parameters of a Drilling Tool Using the Reliability Study

By Pierre Olivier, Vice President, Engineering and Manufacturing, LeddarTech Inc.

Portable Monitoring and Navigation Control System for Helping Visually Impaired People

Enhanced indoor localization using GPS information

License Plate Localisation based on Morphological Operations

International Journal of Pure and Applied Mathematics

Mechanical Life: Expression of Artificial Life Shown in Kinetic Art

Blind navigation with a wearable range camera and vibrotactile helmet

Detection and Tracking of the Vanishing Point on a Horizon for Automotive Applications

Improved Method for Determining the Height of Center of Gravity of Agricultural Tractors

Environmental control by remote eye tracking

Coarse-time Positioning without Continuous GPS Signal Tracking

Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices

Study of Pattern Area of Logic Circuit. with Tunneling Field-Effect Transistors

Outline. Drawing the Graph. 1 Homework Review. 2 Introduction. 3 Histograms. 4 Histograms on the TI Assignment

ASSISTIVE TECHNOLOGY BASED NAVIGATION AID FOR THE VISUALLY IMPAIRED

A Study on Motion-Based UI for Running Games with Kinect

Directions Aids for the Visually Challenged Using Image Processing Using Image Recognition

The Autonomous Performance Improvement of Mobile Robot using Type-2 Fuzzy Self-Tuning PID Controller

Baset Adult-Size 2016 Team Description Paper

Multi-sensor Panoramic Network Camera

Video Synthesis System for Monitoring Closed Sections 1

Automatic Generation of BLE Beacon Applications Using Service Specifications

Terry Max Christy & Jeremy Borgman Dr. Gary Dempsey & Nick Schmidt November 29, 2011

Chair. Table. Robot. Laser Spot. Fiber Grating. Laser

Semi-Autonomous Parking for Enhanced Safety and Efficiency

Driver status monitoring based on Neuromorphic visual processing

Korean Wave (Hallyu) of Knowledge through Content Curation, Infographics, and Digital Storytelling

The Seamless Localization System for Interworking in Indoor and Outdoor Environments

Human-Robot Interaction. Aaron Steinfeld Robotics Institute Carnegie Mellon University

Estimation of Folding Operations Using Silhouette Model

A State-of-Charge and Capacity Estimation Algorithm for Lithium-ion Battery Pack Utilizing Filtered Terminal Voltage

Geo-Located Content in Virtual and Augmented Reality

Determining Optimal Player Position, Distance, and Scale from a Point of Interest on a Terrain

The Virtual Reality Brain-Computer Interface System for Ubiquitous Home Control

Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2

An Integrated Image Steganography System with Improved Image Quality

Rotary Wing DVE Solution Proof Of Concept Live Demonstration

Performance Analysis of SVD Based Single and Multiple Beamforming for SU-MIMO and MU-MIMO Systems with Various Modulation.

Estimation of Absolute Positioning of mobile robot using U-SAT

II. EXPERIMENTAL SETUP

An Evaluation of Automatic License Plate Recognition Vikas Kotagyale, Prof.S.D.Joshi

Moving Obstacle Avoidance for Mobile Robot Moving on Designated Path

Fingers Bending Motion Controlled Electrical Wheelchair by Using Flexible Bending Sensors with Kalman Filter Algorithm

Homeostasis Lighting Control System Using a Sensor Agent Robot

Smart traffic control with ambulance detection

Transcription:

Contemporary Engineering Sciences, Vol. 9, 2016, no. 17, 835-841
HIKARI Ltd, www.m-hikari.com
http://dx.doi.org/10.12988/ces.2016.6692

Electronic Travel Aid Based on Consumer Depth Devices to Avoid Moving Objects

Jee-Eun Lee, NCsoft Corporation, Seoul, South Korea
Yujin Kim, LOTTE.com Inc., Seoul, South Korea
Taejung Park, Visual Media Lab., Department of Digital Media, Duksung Women's University, Seoul, South Korea

Copyright 2016 Jee-Eun Lee, Yujin Kim and Taejung Park. This article is distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present a prototype of an electronic travel aid (ETA) for visually impaired persons based on consumer depth sensors. To provide more helpful information, our ETA system detects the velocity of moving objects and helps the user avoid them with appropriate audible warnings. To achieve this goal, the system captures and processes depth images of the scene in front of the user to remove noise and to acquire a bounding box for each object. It then traces the objects' bounding boxes, speeds, and directions of movement across the captured depth images. According to the recognized directions and speeds, the system warns the user not only of immediately dangerous situations but also of potential risks, via audio warnings and cautions.

Keywords: electronic travel aids, depth sensors, visually impaired, Kinect, Xtion

1 Introduction

Many people with weak vision or visual impairment find it very difficult to walk on a road with many moving objects. They usually rely on a white cane on roads.

Unfortunately, since the white cane can only detect low-profile obstacles and patterns on the road, it often fails to guide a person safely because of vertical obstructions such as traffic signs, safety mirrors, and billboards (Fig. 1). Crowded roads, with many moving and static objects of various velocities, make walking even harder for the visually impaired.

Figure 1. Vertical obstacles that are hazards for a visually impaired person walking with only a white cane

To address these issues and help visually impaired people move more conveniently, various electronic travel aids (ETAs) and robotic travel aids (RTAs) have been proposed. Most of them aim to overcome the limitations of the white cane and give the user more capability by applying the global positioning system (GPS) [1], ultrasonic sensors [2], or stereo vision technology [3]. Although ETA techniques with GPS can help the user travel long distances, they often fail to work inside buildings and are not suitable for shorter trips. ETAs based on ultrasonic or light sensors can detect the existence of obstacles, but they fail to detect smaller obstacles and do not give the user detailed information (e.g., the directions or speeds of moving obstacles). ETAs based on stereo vision require multiple (at least two) cameras and more computing power to analyze multiple two-dimensional videos and construct three-dimensional scene information, which inevitably makes the whole system heavier and reduces its battery life.

To overcome these limitations, we propose an ETA prototype that uses consumer depth sensors such as the Microsoft Kinect, ASUS Xtion, and Occipital Structure Sensor. Our system detects the velocity of moving objects and helps the user avoid them with appropriate audible warnings.

Recently, some ETA systems similar to ours have also adopted depth sensors. The Oxford Smart Sunglasses [4] use similar depth sensors to help people with weak vision recognize objects more easily. However, their purpose differs from ours: they rely on depth sensor images only to remove the details of a real view, and people who have lost their sight entirely cannot use them. Kinecthesia [5] also uses the Kinect depth sensor and shares a very similar structure with our approach, but it cannot provide detailed information as ours does and only directs the user among three zones: left, middle, and right. We introduced an overview of our device in a conference paper [6], but without full details; we discuss the details fully in this paper.

Table 1 summarizes some of the related ETA technologies and their limitations.

Table 1. Related technologies

Name: Oxford Smart Sunglasses [4]
Description: These smart sunglasses rely on depth images, as does our method. By projecting abstract depth images, they help people with weak vision recognize objects.
Limitations: The device removes the details of a scene so that people with weak vision can recognize objects more easily; it is not useful for those who have completely lost their sight.

Name: Kinecthesia [5]
Description: Using a Kinect depth sensor, this device directs the user by vibration.
Limitations: The device only indicates three options (left, middle, and right) and does not provide detailed information (e.g., the distance or velocity of objects).

Name: ETA based on stereo vision [3]
Description: With two video cameras, this device analyzes and recognizes numbers, letters, and symbols on the streets (e.g., bus stops, billboards) and reads them aloud to the user.
Limitations: The device focuses on providing visual information around the user in an audible way, without safety information about moving objects.

2 Proposed System

2.1 System Overview

Figure 2 shows the overview of our system. Our device captures depth images from consumer depth sensors (e.g., Microsoft Kinect, ASUS Xtion, or Occipital Structure Sensor). These three sensors use similar technologies, but the Structure Sensor is the lightest and smallest, which makes it the most suitable for our purpose. Based on preliminary tests, we concluded that the device should warn the user while a moving object is still 3 to 4 m away to do so safely. Table 2 summarizes the specifications of the three depth sensors. Since all three can be accessed through the OpenNI SDK [7], there is little difference between them in the development of our system.

Table 2. Depth device information

Device       Weight (g)   Size (L x W x H, mm)   Range (cm)
Kinect v1    1,360        304.8 x 76.2 x 63.5    40-450
Xtion        226.7        177.8 x 50.8 x 38.1    80-350
Structure    95           119.2 x 27.9 x 29      40-350+
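Because OpenNI is the common access layer for all three sensors, a minimal capture sketch is given below. It is our illustration, not the authors' code, and it assumes the unofficial primesense Python bindings for OpenNI 2; the reshape into a millimeter-valued image follows the usual OpenNI depth format.

# Depth-capture sketch (assumed: `primesense` bindings for OpenNI 2,
# and any OpenNI-compatible sensor such as Kinect v1, Xtion, or Structure).
import numpy as np
from primesense import openni2

openni2.initialize()                      # may need the OpenNI2 Redist path
dev = openni2.Device.open_any()           # first attached depth device
depth_stream = dev.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()
# Depth arrives as 16-bit millimeter values; reshape into an image.
depth_mm = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
depth_mm = depth_mm.reshape(frame.height, frame.width)

depth_stream.stop()
openni2.unload()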

Figure 2. System overview of the proposed ETA device [6]

2.2 Processing Depth Images

In general, the depth images captured from depth sensors contain a great deal of noise, so we must denoise them to obtain robust results. For this purpose, we apply OpenCV to the depth images in an embedded Linux environment based on the Raspberry Pi [8]. We stabilize the depth images with a Gaussian blur filter followed by a Canny edge detector [9, 10], whose output is then used to construct a separate bounding box for each object.
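As a rough illustration of this preprocessing step, the sketch below applies a Gaussian blur and the Canny detector with OpenCV. The kernel size and thresholds are placeholders we chose; the paper does not report its parameter values.

# Depth-image preprocessing sketch (kernel size and Canny thresholds
# are our assumptions, not values from the paper).
import cv2
import numpy as np

def preprocess_depth(depth_mm: np.ndarray) -> np.ndarray:
    """Stabilize a 16-bit depth frame and return a Canny edge map."""
    # Canny expects 8-bit input, so scale the millimeter values first.
    scale = 255.0 / max(int(depth_mm.max()), 1)
    depth_8u = cv2.convertScaleAbs(depth_mm, alpha=scale)
    blurred = cv2.GaussianBlur(depth_8u, (5, 5), 0)   # suppress sensor noise
    return cv2.Canny(blurred, 50, 150)                # object boundaries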

2.3 Detecting and Tracing Objects

In the next step, we construct bounding boxes from the stabilized images. Our system recognizes and traces each object through the movement of its bounding box. In the general three-dimensional case, a bounding box is constructed by acquiring the maximum point Pmax = (xmax, ymax, zmax) and the minimum point Pmin = (xmin, ymin, zmin) (see Fig. 3). In our case, however, where the depth image is often called 2.5-dimensional, we detect the boundary in the Canny-filtered image and acquire a two-dimensional rectangular boundary (upper image in Fig. 4). We take the distance from the user (i.e., the depth sensor) to an object to be the minimum depth value inside its rectangular boundary (d1, d2, and d3 in Fig. 4).

We have found that some objects produce multiple boundary rectangles (see Fig. 5). To address this problem, we apply an algorithm that aggregates smaller rectangles into larger ones when their distance values lie within a reasonable range; a sketch of this step follows the figure captions below. By considering the relative positions of the bounding rectangles across frames, we can estimate the velocity (i.e., the movement direction and speed) of each object (Fig. 6).

Figure 3. General three-dimensional bounding box
Figure 4. 2.5-dimensional bounding boxes
Figure 5. Problems with multiple bounding rectangles
Figure 6. Calculating the velocity of each object based on bounding rectangles
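A compact sketch of the detection step is given below. It derives rectangles from the edge map via OpenCV contours and merges overlapping rectangles whose depths are close; the overlap test and the 30 cm depth tolerance are our assumptions, since the paper leaves "a reasonable range" unspecified.

# Bounding-rectangle extraction and merging sketch (our illustration).
import cv2
import numpy as np

def detect_objects(edges: np.ndarray, depth_mm: np.ndarray,
                   depth_tol_mm: int = 300):
    """Return one (x, y, w, h, distance_mm) tuple per detected object."""
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rects = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        roi = depth_mm[y:y + h, x:x + w]
        valid = roi[roi > 0]          # depth 0 means "no reading"
        if valid.size:
            rects.append((x, y, w, h, int(valid.min())))

    # Greedily merge rectangles that overlap and lie at a similar depth,
    # so one physical object keeps one rectangle (cf. Fig. 5).
    merged = []
    for r in sorted(rects, key=lambda t: t[4]):
        for i, m in enumerate(merged):
            overlap = not (r[0] > m[0] + m[2] or m[0] > r[0] + r[2] or
                           r[1] > m[1] + m[3] or m[1] > r[1] + r[3])
            if overlap and abs(r[4] - m[4]) <= depth_tol_mm:
                x0, y0 = min(r[0], m[0]), min(r[1], m[1])
                x1 = max(r[0] + r[2], m[0] + m[2])
                y1 = max(r[1] + r[3], m[1] + m[3])
                merged[i] = (x0, y0, x1 - x0, y1 - y0, min(r[4], m[4]))
                break
        else:
            merged.append(r)
    return merged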

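Frame-to-frame tracking then recovers each object's velocity, as in Fig. 6. The nearest-centroid association below is a minimal stand-in for whatever matching the authors used; the paper does not describe it, so treat the matching rule and the jump threshold as assumptions.

# Velocity-estimation sketch: match each rectangle to the nearest one in
# the previous frame and difference the centroids (our illustration).
import math

def estimate_velocities(prev, curr, dt, max_jump=80.0):
    """prev/curr: lists of (x, y, w, h, distance_mm); dt: frame interval in s.
    Returns (rect, vx, vy, v_depth) with velocities in px/s and mm/s."""
    tracks = []
    for (x, y, w, h, d) in curr:
        cx, cy = x + w / 2.0, y + h / 2.0
        best, best_gap = None, max_jump          # ignore implausible jumps
        for (px, py, pw, ph, pd) in prev:
            gap = math.hypot(cx - (px + pw / 2.0), cy - (py + ph / 2.0))
            if gap < best_gap:
                best, best_gap = (px + pw / 2.0, py + ph / 2.0, pd), gap
        if best is not None:
            vx = (cx - best[0]) / dt
            vy = (cy - best[1]) / dt
            v_depth = (d - best[2]) / dt         # negative = approaching
            tracks.append(((x, y, w, h, d), vx, vy, v_depth))
    return tracks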
2.4 Audio Warning and Information

By tracing each object through its bounding box, we can estimate the angle (θ in Fig. 7) between the user and the object from the movement vectors of the object and the user. From this information, we determine the degree of risk for each object. For example, if an object is within one meter and the angle θ is between 90 and 180 degrees, we conclude that the object poses an immediate danger, because it is very close and moving toward the user. If another object is moving toward the user at a similar angle but from farther than one meter, it may still cause harm, but there is more time to avoid it. As shown in Fig. 8, our voice warning scheme provides four audio cues based on this observation. After determining the current situation, audio is provided as follows: "[Voice information listed in Table 3 according to the situation], there is an obstacle [distance] ahead on the [right or left] side."

For example, if an object is 10 cm from the user (i.e., within 1 m) on the right and is moving away from the user at 30 degrees, the device warns, "Be careful, there is an obstacle 10 centimeters ahead on the right side." Based on this voice information, the user can decide in which direction to walk safely.

Figure 7. Relative direction between the moving object and the user
Figure 8. Criteria for degree of risk [6]

Table 3. Audio information according to the situation

                 Distance ≤ 1 m   Distance > 1 m
-90 < θ < 90     Be careful       No audio (safe)
90 < θ < 180     Danger           Warning
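The decision logic of Table 3 reduces to a small lookup, sketched below. The message template follows the paper; routing the text through the espeak command-line synthesizer is our assumption, as the paper does not name its speech back end.

# Risk classification and audio cue sketch following Table 3
# (our illustration; espeak as the TTS back end is an assumption).
import subprocess
from typing import Optional

def classify(distance_m: float, theta_deg: float) -> Optional[str]:
    """Map object distance and approach angle to an audio cue (or silence)."""
    approaching = 90 < abs(theta_deg) < 180     # moving toward the user
    if distance_m <= 1.0:
        return "Danger" if approaching else "Be careful"
    return "Warning" if approaching else None   # far and receding: stay silent

def announce(distance_m: float, theta_deg: float, side: str) -> None:
    cue = classify(distance_m, theta_deg)
    if cue is None:
        return
    msg = "%s, there is an obstacle %d centimeters ahead on the %s side." % (
        cue, round(distance_m * 100), side)
    subprocess.run(["espeak", msg])

# announce(0.1, 30, "right") speaks the paper's example:
# "Be careful, there is an obstacle 10 centimeters ahead on the right side."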

3 Results and Discussion

Figure 9 illustrates the recognized objects and their bounding boxes. Figure 10 shows our proof-of-concept prototype, which the user wears on the chest. We expect that the proposed device could be made much smaller once smaller depth sensors reach the market [11]. As a next step, we plan to improve the system by performing usability and usefulness tests with people who have weak vision or visual impairment.

Figure 9. Objects recognized [6]
Figure 10. Prototype device [6]

Acknowledgements. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Education (NRF-2013R1A1A2064147).

References

[1] Sandra Mau, N. Melchior, Maxim Makatchev and Aaron Steinfeld, BlindAid: An Electronic Travel Aid for the Blind, CMU-RI-TR-07-39, The Robotics Institute, Carnegie Mellon University, 2008.

[2] Laehyun Kim, Sehyung Park, Sooyong Lee, Hyunchul Cho and Sungdo Ha, Improvement of an Electronic Aid for the Blind Using Ultrasonic and Acceleration Sensors, Journal of Korea Information Science Society: Software and Applications, 36 (2009), no. 4, 291-297.

[3] Kyoung-ho Kim and Sang-Woong Lee, Positioning System for the Blind Navigation, Journal of Korean Institute of Next Generation Computing, 8 (2012), no. 4, 6-16.

[4] Oxford smart glasses. http://www.ndcn.ox.ac.uk/research/oxford-smart-specs-research-group

[5] Kinecthesia. http://www.kinecthesia.com

[6] Jee-Eun Lee, Yujin Kim and Taejung Park, Travelling Aids for Visual Impaired Based on Depth Sensor, Advanced Science and Technology Letters, 125 (2016), 45-49. http://dx.doi.org/10.14257/astl.2016.125.09

[7] OpenNI SDK. http://openni.ru/openni-sdk

[8] Raspberry Pi. https://www.raspberrypi.org

[9] Canny edge detector. https://en.wikipedia.org/wiki/Canny_edge_detector

[10] Hairol Nizam Mohd Shah, Mohd Zamzuri Ab Rashid and Tam You Tam, Develop and Implementation of Autonomous Vision Based Mobile Robot Following Human, International Journal of Advanced Science and Technology, 51 (2013), 81-92.

[11] Intel RealSense Camera. http://www.intel.com/content/www/us/en/architecture-and-technology/realsense-depth-camera.html

Received: April 15, 2016; Published: August 11, 2016