A Vision-Based Wayfinding System for Visually Impaired People Using Situation Awareness and Activity-Based Instructions


Article

A Vision-Based Wayfinding System for Visually Impaired People Using Situation Awareness and Activity-Based Instructions

Eunjeong Ko and Eun Yi Kim *

Visual Information Processing Laboratory, Konkuk University, Seoul 05029, Korea; goejeong85@gmail.com
* Correspondence: eykim@konkuk.ac.kr

Received: 8 July 2017; Accepted: 8 August 2017; Published: 16 August 2017

Abstract: A significant challenge faced by visually impaired people is wayfinding, which is the ability to find one's way to a destination in an unfamiliar environment. This study develops a novel wayfinding system for smartphones that can automatically recognize the situation and scene objects in real time. By analyzing streaming images, the proposed system first classifies the current situation of a user in terms of their location. Next, based on the current situation, only the necessary context objects are found and interpreted using computer vision techniques. The system also estimates the motions of the user with two inertial sensors and records the trajectories of the user toward the destination, which are then used as a guide for the return route after reaching the destination. To efficiently convey the recognized results using an auditory interface, activity-based instructions are generated that guide the user in a series of movements along a route. To assess the effectiveness of the proposed system, experiments were conducted in several indoor environments, in which the situation awareness accuracy was 90%; the object detection false alarm rate was also evaluated. In addition, our field test results demonstrate that users can locate their paths with an accuracy of 97%.

Keywords: wayfinding system; visually impaired people; situation awareness; activity-based instruction; user trajectory recording

1. Introduction

There are approximately 39 million legally blind people in the world, while another 246 million people have some form of significant visual impairment [1]. Among them, the number of older people is increasing due to age-related diseases such as glaucoma and diabetic retinopathy. In their daily lives, these people experience many difficulties when traversing unfamiliar environments on the way to their destination. For this type of wayfinding, it is essential to use and organize definite sensory cues from the external environment [2-4]. In general, sighted people construct 3D maps based on visual sensory information. In contrast, visually impaired people use different cognitive and attentional resources. As discussed in [5], people who are born blind or become blind early in life encode the sequential features of a travelled route, i.e., they create a set of instructions that denote the directional changes in the route.

To reduce the difficulties of the visually impaired and help them localize a current position and find a destination, a wide range of technologies have been developed [3,6-32]. The most recent research and technologies have focused on Global Positioning System (GPS) based systems [6,7]. However, while systems using GPS sensors operate well as wayfinding aids in outdoor environments, GPS signals are often unavailable in indoor environments, which makes them inadequate for assisting people indoors. Accordingly, the goal of the present study was to develop a wayfinding system that will be effective in various indoor environments with complex illumination patterns and cluttered backgrounds, such as shopping malls, hospitals, and schools.

Thus far, various solutions for indoor wayfinding have been proposed and implemented. They can be categorized as either sensor-based approaches [8-21] or vision-based approaches [22-32]. The former use sensors, such as Wi-Fi, RFID, and UWB sensors, to estimate a user's current position, while the latter use images obtained from a camera and recognize visual clues, such as objects and scene texts, from the surrounding environment. Among these potential solutions, vision-based methods have received more attention from researchers, and, in particular, systems with color codes have been successfully investigated for use in various applications [26-32].

To help blind and visually impaired people with wayfinding, two basic functions should be supported: positioning (to localize a user's position) and path guidance (to guide a user through a route to the target destination and along the return route) [30]. To support such functions, most existing systems require indoor structural information of buildings, such as maps and building layouts. Typically, these systems obtain such information through seamless communication between a user's mobile device and server systems, or they assume that a building map has been provided previously. However, in real situations, this assumption is not always true, and access to such structural information may be limited to authorized people only and is generally not common knowledge for the public. Moreover, stable communication between the server and the mobile user is not guaranteed due to signal interruptions or traffic. Thus, it is necessary to develop wayfinding systems that can function in various environments regardless of the availability of maps.

For the design of a wayfinding system, we conducted an initial study on the wayfinding behaviors of visually impaired people and sighted people. We collected and analyzed the behaviors used to perceive environmental information, such as moving direction or location, and to determine the next actions on the way to a given destination. In general, visually impaired people depend on a white cane to understand environmental information. Using the white cane, they can understand the situation by detecting changes in walls, including corners, and in ground height. When the place type changes, they determine their next actions, such as finding braille signs next to a door, turning left or right at a corner, or going up stairs. These observations signify that recognizing the current situation is essential to enabling safe travel and determining their way. Sighted people can navigate unfamiliar indoor environments even if they do not have structural information, because they can locate the necessary information from visual clues such as objects and signs within the environment. An interesting point is that sighted people require different types of information according to their situation. For example, when they are standing in front of a door, they need information about the room number in order to know whether it is their intended destination. At a junction or hall, they need directional information about their destination.

Based on these observations, we propose a situation-based wayfinding system that first recognizes the situation and then locates the appropriate environmental information. In this study, a situation refers to the type of place where the user is standing, and it is classified as a door, corridor, hall, or junction. In order to represent different environmental information, two types of QR code were designed: one encodes location-specific information, and the other encodes directional information. These QR codes are attached according to the place type.
The proposed system was implemented on an iPhone 6, which has an embedded camera, gyroscope, and accelerometer. It consists of five processing modules: situation awareness, object detection, object recognition, user trajectory recording, and activity-based instruction. The situation awareness module is the core of the proposed system because it determines the type of scene objects to be detected according to the type of the current place. For the situation awareness module, templates that represent the respective situations are first collected. Then, a vocabulary tree is built from the templates and used for an effective image description and a fast comparison between images. A new input image is compared with the templates using an entropy-based metric, and its situation is determined based on the most similar template. Once a situation is determined, the necessary environmental information is located. In the proposed approach, this information is represented with color QR codes [26-32], which require only minor modifications to the environment,

such as posting special signs, and are widely used in real environments. Simple computer vision algorithms based on color and edges are then applied to detect the codes on a mobile smartphone quickly and reliably. While a user is moving, their motion is computed continuously, and their routes are recorded in the user trajectory recording module; these records are used to guide the return route. Finally, all processed results are conveyed to the user through activity-based instructions. These instructions guide visually impaired people to the destination using the user's movement activities, such as walking a certain number of steps and following compass directions, and users are notified via beeping or text-to-speech (TTS) information.

To assess the validity of the proposed method, it was tested in unfamiliar indoor environments with varying illuminations and building layouts. The experimental results show that the proposed system could detect scene objects with an accuracy of almost 100% at a distance of 2.5 m and a viewing angle of ±40°. Furthermore, it recognized the meaning of an object with an accuracy of more than 99%. In addition, to demonstrate its feasibility as a wayfinding aid for blind and visually impaired people, field tests were conducted with four users. They were all able to locate their path in real time with an accuracy of 97%.

The remainder of the paper is organized as follows: Section 2 reviews previous work presented in the literature. Section 3 presents an overview of the proposed system. The module details are introduced from Section 4 to Section 7. The experimental results are reported in Section 8, followed by the conclusions in Section 9.

2. Related Work

Over the past few years, several wayfinding systems have been developed to assist blind and visually impaired people in navigating their way through indoor environments. Giudice et al. [2] proposed four important factors that should be considered when developing electronic travel aids for blind and visually impaired people, and then summarized existing systems. The four factors are as follows: (1) sensory translation, which is the mapping between the input and output modality that is intuitive and requires little or no training (sensors); (2) selection of information, which is important to understand what information is provided (user interface and instruction); (3) environmental conditions, which means that the aid can be used over a wide range of environmental conditions (environments); and (4) form and function, which means that it should be minimally intrusive (devices). Referring to these factors, we summarized the features of these systems with respect to the sensors used, their user interface, their primary functions, their target user population, and so on. Table 1 presents existing systems for indoor wayfinding.

Table 1. Existing systems for indoor wayfinding.

Approach | Institute | Sensors | Function | Map Usage | Target User | User Interface | Environment
Sensor-based | Univ. of Catania [8] | RFID, inertial sensor | Positioning, path guidance | YES | The visually impaired | Speech | Indoors
Sensor-based | LIFC [20] | WIFI sensor network | Positioning, path guidance | YES | The visually impaired | Speech | Indoors
Sensor-based | Univ. of California-Santa Barbara [15] | Infrared | Positioning, path guidance | YES | The visually impaired | Spatial display | Indoors
Sensor-based | Univ. of California [16] | Ultra-Wide band | Positioning, path guidance | YES | The visually impaired | Speech | Indoors
Sensor-based | Univ. of Maine [19] | Accelerometers | Positioning, path guidance | YES | The visually impaired | Speech | Indoors
Sensor-based | UCEM [14] | RFID | Positioning | NO | The visually impaired | Speech | Indoors
Sensor-based | Universität Stuttgart [13] | RFID, GPS | Positioning, path guidance | YES | The visually impaired | Braille display | Indoors/Outdoors
Sensor-based | IEU [12] | RFID, GPS | Positioning, path guidance | YES | The visually impaired | Speech | Indoors/Outdoors
Vision-based (scene-object recognition) | LASMEA lab [22] | CCD camera | Positioning, path guidance | YES | The visually impaired | Speech, sonar sound | Indoors/Outdoors
Vision-based (scene-object recognition) | Univ. d'Orleans [25] | Mobile phone camera | Positioning, path guidance | YES | The visually impaired | Speech | Indoors
Vision-based (scene-object recognition) | Brigham Young University [23] | Stereo camera | Obstacle avoidance | NO | The visually impaired | Speech | Indoors
Vision-based (scene-object recognition) | King Saud Univ. [22] | Mobile phone camera | Positioning | NO | The visually impaired | Speech | Indoors
Vision-based (color-code recognition) | Univ. of Minnesota [32] | Infrared camera | Positioning, path guidance | YES | The visually impaired | Speech | Indoors
Vision-based (color-code recognition) | Smart camera [27] | Mobile phone camera | Positioning | NO | The visually impaired | Speech | Indoors/Outdoors
Vision-based (color-code recognition) | UC Santa Cruz [30] | Mobile phone camera | Positioning, path guidance | YES | The visually impaired | Speech, beeping | Indoors
Vision-based (color-code recognition) | Univ. of Malakand [28] | Mobile phone camera | Positioning, path guidance | YES | The visually impaired | Speech | Indoors
Vision-based (color-code recognition) | Chun Yuan Christian University [33] | Mobile phone camera | Positioning, path guidance | YES | The cognitively impaired | Graphical interface | Indoors
Vision-based (color-code recognition) | Universidad Autónoma de Madrid [31] | Mobile phone camera | Positioning, path guidance | YES | The cognitively impaired | Graphical/verbal interfaces | Indoors
Vision-based (color-code recognition) | Graz Univ. of Technology [34] | Mobile phone camera | Positioning, path guidance | YES | Pedestrians | Graphical interface | Indoors

2.1. Sensor-Based Systems vs. Vision-Based Systems

First, existing systems can be categorized into sensor-based systems and vision-based systems that use cameras. In sensor-based methods, RFIDs [8-14], infrared [15], ultra-wideband (UWB) [16,17], inertial sensors [18,19], and Wi-Fi [20,21] are commonly used to estimate the current position. Although these approaches work well, they have limitations. For example, RFIDs can only be sensed and read at short distances from the reader; thus, their locations must be estimated, thereby making it difficult for blind and visually impaired people to initially locate the RFIDs [8-14]. In addition, pedestrian dead reckoning (PDR) systems using inertial sensors [18,19] require more computational time to improve their accuracy, because accumulated errors increase rapidly with travel distance. In order to compensate for this problem, Qian et al. combined the PDR algorithm with a particle filter to correct the estimated errors and guarantee localization accuracy. However, such systems require additional technologies, such as Wi-Fi or RFID, to improve localization accuracy. Another system using UWB-based indoor positioning was developed to provide a high level of accuracy in large open places with low installation costs. In [16,17], the authors used a single set of four sensors in a room with a length of less than 100 m. Their estimation errors were up to 20 cm in most locations, which is sufficiently low for application in wayfinding systems.

As an alternative, vision-based wayfinding systems have been investigated, in which computer vision techniques are used to sense the surrounding environment using scene texts, signs, and meaningful objects [22-25]. In reference [22], a body-mounted single camera was used for wayfinding in both indoor and outdoor environments. When an input image was given, environmental landmarks were identified at the current position and compared with pre-established landmarks. Methods that use a stereo camera and bionic eyeglasses have also been developed in references [23,24], which recognize the current position using object recognition. In the offline stage, key objects are learned with a neural network (NN) and a genetic algorithm (GA). In the online stage, salient features are extracted from a given input image and classified as learned objects, which enables the environmental information in the current scene to be recognized. In [25], a mobile application provided location-specific information through key-frame matching. In the offline phase, the system generated a list of keyframes with their distinctive camera orientations. During wayfinding, it found the closest keyframe to the current frame, computed the angular deviation between the two images, and then provided the user with the current localized information.

The primary advantage of these vision-based methods is that the infrastructure or environment does not need to be modified, because objects are recognized directly as associated with specific environments. However, their disadvantages include insufficient reliability and prohibitive computational complexity. To manage this problem, a color code based system was adopted. It can operate quickly and reliably on a portable phone, requiring only minor modifications to the environment [26-32]. As shown in Figure 1, quick response (QR) codes and barcodes are used as labels to denote environmental information. Among them, QR codes can contain a significant amount of information, including numbers, alphabet characters, symbols, and control codes. In practical applications, they usually represent URLs linking to webpages that explain the current environment.
Such systems using color codes are effective for indoor environments and have already been proven to be practical for indoor wayfinding and shopping in grocery stores [29].

Figure 1. Indoor wayfinding systems using color codes: (a) Wayfinding using color barcodes and color targets [30]; (b) Using quick response (QR) codes [27].

2.2. Assistive Functions

The core elements of wayfinding assist users by letting them know where they are, identifying their destination and providing a route for them to follow, recognizing their destination upon arrival, and safely returning them to their point of origin. To support these elements, both positioning of the current location and path guidance to direct users to/from the target destination should be included in a wayfinding system. Among the methods described in Table 1, some systems [8,12,13,15,16,19,20,22,25,28,31,32] can provide both positioning and path guidance to users, whereas the others provide either positioning or path guidance.

To provide both functions, most existing systems require a map to be provided or constructed. For example, they assume that building maps have been previously provided, as in references [8,12,13,15,16,19,20,22,25,28,31,32], or that 3D building maps are constructed in real time by extracting landmarks from input images and matching them to known structures [22-25]. Based on the map information, they identify the user's current position and locate the optimal route to reach the target destination or to return to the user's original position. However, in real situations, access to such structural information is very limited and not commonly available. Even if a map is provided, the user must obtain the information through seamless communication between the mobile device and server systems; however, stable communication is not guaranteed due to signal interruptions, network traffic overload, and so on. Accordingly, it is necessary to develop wayfinding systems that can operate in various environments regardless of whether maps are available.

2.3. Target User Population and User Interface

During the past decade, many wayfinding systems have been developed that support the mobility of people with various impairments. The target population includes people with visual impairments, including low vision and blindness [6-32], people with cognitive impairments, and elderly people [33,35]. Furthermore, wayfinding systems have also been developed for people without impairments [34,36]. For practical application as mobility aids for various user groups, the user interface of wayfinding systems must efficiently convey the results. Accordingly, interfaces for several types of systems have been developed for the specific target user populations shown in Table 1. The types of interfaces can be divided into graphical interfaces using virtual reality (VR) and augmented reality (AR) techniques, and haptic and audio interfaces using Braille, text-to-speech (TTS), simple audio signals, or virtual sound. The former are commonly used in wayfinding systems designed for cognitively impaired users, whereas the latter are designed for blind and visually impaired users. In order to be effective, it is important to consider the instructions provided to the user.
The instructions should be easily understood by the user. The instructions used in existing systems can be divided into spatial language and virtual sounds. Spatial language notifies the user in the form of verbal directions (e.g., "left" or "right"), with or without degrees, which may be expressed using a clock reference system or cardinal directions (e.g., "Turn toward 2 o'clock" or "Turn east"). Spatial language has been widely used in existing systems and has been proven to effectively guide

the user over paths [8,9,12,16,19,23-25,27,28,32,37]. Virtual sounds provide spatialized sound that notifies the user using binaural sounds according to the relative distance and bearing of the waypoint with respect to the user. In [38,39], Loomis et al. performed a user study to evaluate the guidance performance of spatial language and virtual sounds. The virtual sounds exhibited better performance in terms of both guidance and user preference compared with spatial language. However, they can partially occlude external sounds such as alarms or speech, and they can require a high computational cost to continuously track the user's relative movements to the waypoint.

Among spatial language-based approaches, activity-based instructions have recently been proposed for use in wayfinding [34]. They provide the route toward the destination divided by the minimum unit of human movement activities, and they guide a user with a specific number of steps, going up or down, turning right or left, and so on. Some of the activity instructions from [34] are depicted in Figure 2, and all instructions are indicated using image icons. Through experiments, it was proven that activity-based instructions could reduce the mental and physical burden on users and could provide an easier user interface with fewer errors. In this study, activity-based instructions were redefined for the proposed system and conveyed to the user via beeping and a text-to-speech (TTS) service, because blind and visually impaired users are the target users of the proposed system.

Figure 2. Examples of activity-based instructions: (a) Go straight for seven (7) steps to the information desk; (b) Turn left/right to the information desk; (c) Stop at the destination.

3. Overview of the Proposed System

The goal of the proposed system is to guide blind and visually impaired people to and from their destination of choice in unfamiliar indoor environments in order to fulfill their accessibility and mobility needs. Here, the primary target user population is people with low vision or blindness. The proposed wayfinding system was implemented on an iPhone 6, which has an embedded camera, gyroscope, and accelerometer.
Additionally, it supports two functions: positioning and path guidance. Initially, by processing the images obtained from the iPhone camera, the proposed system first recognizes the current situation. Then, it locates scene objects and recognizes their meaning so that it can guide the user along a route to the target destination. Meanwhile, it calculates the user's motions using two inertial sensors and records the user's trajectories, which are used as a guide for the return route.

Figure 3 shows the overall architecture of the proposed wayfinding system. When a user input is given to the system by speech recognition, the proposed system is activated. The information collected from the sensors is continuously processed, and the results are delivered to the user by a text-to-speech interface. The proposed system consists of five main modules: situation awareness, object detection, object recognition, user trajectory recording, and the user interface with activity-based instructions.

In the situation awareness module, the proposed system first recognizes the user's situation, which refers to the type of place where the user is standing. This module has an essential function in the proposed system: it determines what environmental information is required and where the user should traverse by classifying the current type of place as a corridor, door, hall, or junction. This enables the proposed system to function in various environments even when environmental maps are not available. For the situation awareness module, image matching techniques based on shape descriptors are used. This is discussed in detail in Section 4.

According to the situation, the required environmental information differs, either requiring just location information (door) or requiring both location and directional information (corridor, hall,
and junctions). Here, the environmental information is represented by QR codes. The codes were designed with green or orange quiet zones (the quiet zone helps the scanner find the leading edge of the code so that reading can begin), which enable accurate discrimination from complex environments even when using simple color analysis.

Based on the current situation and environmental information, an activity-based instruction is created and conveyed to the user. Human activities such as walking a certain number of steps and turning in a certain direction are then obtained by calculating the distance between the user and the QR code and the viewing angle. While moving, the user's path is continuously recorded in the user trajectory recording module, which is used to help the user locate previously visited locations, such as his/her starting point.

Figure 3. Overall architecture of the proposed wayfinding system.

While the user is moving, the proposed system records all processing results in a log file. Thus, it can manage unexpected problems that can occur due to battery discharge or sensing failures. If beeping or speech signals are suddenly not generated by the proposed system, the user must stop walking and restart the system. Once the proposed system is restarted, it automatically loads the last log file in order to verify whether the recent recognition results match the user's destination; if they do not match, the proposed system determines that the previous wayfinding was not completed and asks the user whether they want to continue wayfinding toward the previous destination.

4. Situation Awareness

People often move through unfamiliar environments. Even when people have no prior structural information for such environments, they can easily reach their destination. The reason is that sighted people use various types of visual clues found in indoor environments to guide them. This section describes how people use the visual clues found in such environments. Based on our observations, we defined important situations and then developed an algorithm that recognizes those situations.

4.1. Initial Study
To collect the visual clues that are often observed in real environments, an initial study was conducted. We focused on determining which visual clues are necessary among the various objects in the surrounding environment and how people use this information to guide them to their destination. In this initial study, two men and three women were selected who had expert knowledge of the structure and rooms

of several buildings at our university campus. Using a digital camcorder, the users recorded the useful visual clues that they found in real environments. Figure 4 shows some of the visual clues that were observed in real environments. As seen in Figure 4, many visual clues can be observed in real environments. Although some differences exist according to the type of building, the visual clues can be divided into two groups: location-specific information that identifies the current position, and directional information for a given destination.

Figure 4. Visual clues used to guide pedestrians in real environments: (a) Place number; (b) Pictogram; (c) Guide sign.

In addition, visual clues with positioning information were primarily observed in front of doors, while clues with directional information were usually observed in halls and corridors. For example, place numbers (Figure 4a) and pictograms (Figure 4b) were found in front of doors, whereas directional signs were found in a hall or at a junction, as shown in Figure 4c.

In addition, in order to analyze the needs of blind and visually impaired people with respect to the proposed wayfinding system, we observed their behaviors in order to understand how they interpret the environmental information around them. Yang et al. studied their wayfinding behaviors through interviews and by capturing all of their actions while finding their destination in unfamiliar environments [40]. In general, visually impaired people depend on the white cane to understand environmental information.
By alternately striking the left side and the right side with the white cane, they can perceive the location of obstacles and the available directions toward their destination. Table 2 presents the user behaviors with the white cane that are used to understand the environment around them. On flat ground, visually impaired people first find the nearest wall and then move forward following the wall. As they move around, they can recognize doors or corridors through changes in the edges and shapes of the walls. On stairs or slopes, they can determine whether to go up or go down through variations in the height of the ground.

Table 2. Wayfinding behaviors of visually impaired people.

- After leaving a room, they change their direction by finding the edges and shapes of the walls.
- They find the nearest wall on either side, and then follow this wall while tapping it with the white cane.
- When reaching a corner, they consider the current place to be a junction or court.
- If the wall is recessed, they can assume that a door is near.

Table 2. Cont.

- Using the height differences of the ground, they can perceive the beginning and end of stairs.

Consequently, situation information has an important function in navigation and wayfinding for both sighted and visually impaired people. Accordingly, it was used as the foundation for developing the proposed system. To improve the mobility of blind or visually impaired people so that they can independently traverse unfamiliar indoor environments, a situation-based navigation and wayfinding system is proposed that first recognizes the type of place and then detects and recognizes the signage.

4.2. Definition of Situations

In this study, a situation refers to the type of place where the user is standing. The situation is categorized into one of four types: door, corridor, hall, and junction. While the first three situations (door, corridor, and hall) are self-contained, the last situation (junction) transposes one situation into another, making it a combination of two or more situations. Thus, the first three situations are called primitive place types, while the last is called a complex place type. These types of situations have also been considered in other systems. For example, in anti-collision systems, situations are divided into four types according to the level of difficulty in controlling a wheelchair: in front of a door, in front of an obstacle, along a wall, and other situations. Classification is then performed based on information from sensors attached to both sides of the wheelchair, and the situation is determined according to the difference between the distances measured by the sensors. This approach is unsuitable for mobile phones due to hardware issues; thus, this study proposes a novel vision-based method to recognize the various situations.

4.3. Recognition Methods

For situation awareness, specific characteristics must be identified to distinguish the four situation types; that is, the visual patterns associated with each type of situation must be recognized. However, the same type of situation can appear significantly different in images for many reasons, including cluttered backgrounds, different viewpoints, orientations, scales, lighting conditions, and so on. Thus, to manage these differences, the speeded-up robust feature (SURF) is used. SURF is known as a scale- and rotation-invariant feature detector [41]. To establish a link between the four situation types and SURFs, 200 images representing the four situation types were collected from various environments and used to extract SURF local descriptors. Figure 5 shows the characteristics of the detected SURF descriptors corresponding to the four situation types. Figure 5a,b shows the SURFs overlapping the original images, while Figure 5c-e shows the accumulated SURF descriptors over 5, 10, and 20 images for each situation type. Interestingly, in Figure 5c-e, the accumulated SURFs for corridor images revealed an X-shaped pattern, shown in the top images. A rectangular-shaped pattern was observed for door images (second image from the top).
The SURFs for hall images were crowded around the upper boundaries, with a horizontal

and thick line pattern in the center. A specific pattern was not detected in the junction image SURFs, which were complexly and sparsely distributed across the images. Thus, common patterns were revealed among images that belonged to the same situation type, except for the complex junction situation. Therefore, SURF descriptors were used to index the collected sample images and to match the input image with the indexed images in order to recognize the current situation. A vocabulary tree, which is a very popular algorithm in object recognition, was used for the indexing [42,43]. Therefore, the module has two stages: an offline phase to build the vocabulary tree and an online phase to recognize the current situation.

Figure 5. Characteristics of SURF distributions among situation types: (a,b) SURFs extracted from corridor, door, hall, and junction images; (c-e) SURF distributions accumulated with 5, 10, and 20 images, respectively.

Offline Stage

First, images were collected as template data to represent the four situation types from various environments, and 20,289 SURF descriptors were extracted from local regions in the images. Then, the extracted descriptors were quantized into visual words by hierarchical K-means [42,43]. Here, K defines the branch factor (the number of children of each internal node), not the number of clusters, and it was set to 10.

The process for hierarchical K-means quantization is as follows. First, an initial clustering is performed on the 20,289 initial descriptors, thereby defining K groups, where each group consists of the descriptor vectors closest to a particular cluster center. This process is performed recursively, and a vocabulary tree is built. Each node in the vocabulary tree is associated with an inverted file with references to the images containing a descriptor that corresponds to that node. Once the quantization is defined, an entropy-based weight w_i is assigned to each node i, as follows:

w_i = ln(N / N_i)    (1)

where N is the number of images in the template database, and N_i is the number of images in the database with at least one descriptor vector path through node i. Inspired by the TF-IDF scheme [42], this weighting is used to ignore the effects of the most frequent and most infrequent features (noise) in the template database.
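To make the offline stage concrete, the following is a minimal sketch, under stated assumptions, of how the vocabulary tree and the entropy weights of Equation (1) could be built: the SURF descriptors are assumed to have already been extracted (e.g., with OpenCV's contrib SURF module) into a NumPy array together with the index of the template image each descriptor came from, hierarchical K-means is approximated with scikit-learn's KMeans at each level, and the names Node, build_vocab_tree, and assign_weights are illustrative rather than taken from the paper.

```python
# Minimal sketch of the offline stage (vocabulary tree + entropy weights), not the authors' code.
import numpy as np
from sklearn.cluster import KMeans

class Node:
    def __init__(self):
        self.center = None     # cluster centre of the descriptors routed to this node
        self.children = []     # K child nodes (branch factor K = 10 in the paper)
        self.images = set()    # template images with a descriptor path through this node
        self.weight = 0.0      # entropy weight w_i = ln(N / N_i), Eq. (1)

def build_vocab_tree(desc, img_ids, k=10, depth=3):
    """desc: (n, d) array of SURF descriptors; img_ids: template image id per descriptor."""
    node = Node()
    node.images = set(img_ids)
    if depth == 0 or len(desc) < k:
        return node
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(desc)
    for c in range(k):
        mask = km.labels_ == c
        child = build_vocab_tree(desc[mask],
                                 [i for i, m in zip(img_ids, mask) if m],
                                 k, depth - 1)
        child.center = km.cluster_centers_[c]
        node.children.append(child)
    return node

def assign_weights(node, n_templates):
    # Eq. (1): nodes seen in many template images receive a low weight.
    node.weight = np.log(n_templates / max(len(node.images), 1))
    for child in node.children:
        assign_weights(child, n_templates)
```

The depth used here (three levels) is an illustrative choice; with the paper's branch factor K = 10 it yields up to 1000 leaf words, and the per-node image sets are exactly what Equation (1) needs to down-weight words that occur in nearly every template.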

Online Stage

The online phase determines the most relevant images in the template database in relation to the current input image, based on the similarity of the paths down the vocabulary tree taken by the descriptors from the database images and those from the input image. According to the weights assigned to each node in the vocabulary tree, a template image (t) and the input image (q) are described as vectors of weighted word frequencies:

t = {t_i = m_i w_i}    (2)

q = {q_i = n_i w_i}    (3)

where m_i and n_i are the numbers of descriptor vectors with a path through node i in the template and the input image, respectively. To compute the difference between the template and input vectors, both vectors are normalized, and the similarity is then calculated using the following dot-product form:

s(q, t) = ||q − t||_2^2 = 2 − 2 Σ_{i : q_i ≠ 0, t_i ≠ 0} q_i t_i    (4)

The template image with the highest matching score (i.e., the smallest value of s) is selected, and its situation type is assigned as the label of the current situation.
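As a companion to Equations (2)-(4), the sketch below scores a query image against the templates, assuming each image has already been quantized into visual-word counts (a mapping from node id to the number of descriptors routed through that node) and that `weights` holds the entropy weight w_i of every node; the helper names are illustrative, not the authors' implementation.

```python
# Hedged sketch of the online matching stage, Eqs. (2)-(4).
import numpy as np

def weighted_vector(word_counts, weights):
    v = {i: n * weights[i] for i, n in word_counts.items()}   # q_i = n_i * w_i (Eq. 3)
    norm = np.sqrt(sum(x * x for x in v.values()))
    return {i: x / norm for i, x in v.items()} if norm > 0 else v

def dissimilarity(q, t):
    # Eq. (4): ||q - t||^2 = 2 - 2 * sum over nodes present in both normalized vectors.
    return 2.0 - 2.0 * sum(q[i] * t[i] for i in q.keys() & t.keys())

def recognise_situation(query_counts, templates, weights):
    """templates: list of dicts with keys 'counts' and 'situation' (door/corridor/hall/junction)."""
    q = weighted_vector(query_counts, weights)
    best = min(templates,
               key=lambda t: dissimilarity(q, weighted_vector(t["counts"], weights)))
    return best["situation"]
```

Because both vectors are L2-normalized, minimizing the distance in Equation (4) is equivalent to maximizing the dot product, so the template with the highest matching score is simply the one with the smallest s.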

5. Object Detection and Recognition

It is difficult to recognize normal signage, such as numbers and pictures, due to its complex shapes, variety, and distance. Due to these difficulties, color codes have been used extensively to replace normal signage [26-30]. Although a variety of color codes have been proposed, the QR code was chosen for the proposed system for several reasons. First, it can hold a large amount of information, including numerals, alphabet characters, symbols, and control codes. Second, the reader is freely available and can be installed on all smartphones with cameras, and it runs quickly and reliably. Accordingly, QR codes were used to represent the environmental information and were then modified to increase their usability.

Unlike existing methods that use QR codes to represent URLs [44], we used QR codes to represent numerical and alphabetic characters that indicate positioning information, such as room numbers and signs indicating stairs and exits. Therefore, no network access is required in order to interpret the meaning of the QR codes. In addition, to facilitate easier discernment of the QR codes from the background, green and orange colors were used to denote the QR code quiet zones. To minimize the modification of the environment, the size of the QR codes was determined through experiments, and the codes were located 140 cm above the floor.

Based on the current situation, the proposed system detects different types of QR codes. The green QR codes represent location-specific information and are usually located next to doors; the orange QR codes indicate directional information and appear in corridors, halls, and junctions. Because the standard green color (RGB (0, 255, 0)) and orange color (RGB (128, 128, 0)) appear in real environments similar to fluorescent green and orange with diffuse reflections, this study used a darker green color (RGB (0, 128, 0)) and a darker orange color (RGB (256, 186, 0)). Figure 6 presents examples of the generated QR codes and their meanings. Figure 6a is a QR code that encodes location-specific information: this room is 1204. Figure 6b is a QR code that encodes directional information: turn left from room 1201 to room 1206.

Figure 6. Examples of the generated QR codes and their meanings: (a) Location-specific code (place number: 1204); (b) Directional code (turn left from 1201 to 1206).

5.1. Object Detection

In the proposed system, the QR codes use dark green or dark orange quiet zones, and they have a square shape. Thus, they are detected by locating a green (or orange) square in the scene. The process for detecting the QR codes is as follows:

(1) Preprocessing: Because time-varying illumination requires contrast adjustment, a histogram specification is applied.

(2) Discretization: Each pixel in the input image is classified as green, orange, or other. The color ranges are defined as follows:

(C_r < 1.7 & C_g > 1.5 & C_b > 1.7) && (H > 90 & H < 160)    (5)

(C_r > 1.0 & C_g > 1.0 & C_b < 0.7) && (H > 20 & H < 60)    (6)

(3) Labeling: Row-by-row labeling is performed on the discretized image. Then, the area and circularity are calculated for all components. These properties are used to remove noise: if the circularity of a region is larger than a predefined threshold or if its area is too small, the region is considered to be noise. Thus, only components corresponding to color codes pass through this stage.

(4) Post-processing: After noise filtering, adjacent components are merged to prevent the color codes from being split.

Figure 7 shows the process used to localize the QR codes. Figure 7a-c shows the input image, the discretized results, and the labeling results. The detected regions of the QR codes are marked in red and blue; the red box indicates a detected location-specific code, and the blue box indicates a detected directional code. As shown in Figure 7b, the discretized image includes abundant noise, which is removed using the two geometric characteristics (see Figure 7c).
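The four detection steps can be summarized in a short OpenCV sketch. It is only an approximation under assumptions: C_r, C_g, and C_b in Equations (5) and (6) are interpreted here as the R, G, and B channels divided by the pixel's mean intensity, H is taken as the hue in degrees, plain histogram equalization stands in for the histogram specification, and the area and circularity thresholds are illustrative values, not ones reported in the paper.

```python
# Hedged sketch of the QR-code region detection pipeline (Section 5.1).
import cv2
import numpy as np

def detect_code_regions(bgr, min_area=400, max_circularity=25.0):
    # (1) Preprocessing: contrast adjustment (equalization as a stand-in for specification).
    ycc = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    ycc[:, :, 0] = cv2.equalizeHist(ycc[:, :, 0])
    bgr = cv2.cvtColor(ycc, cv2.COLOR_YCrCb2BGR)

    b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
    mean = (r + g + b) / 3.0 + 1e-6
    cr, cg, cb = r / mean, g / mean, b / mean                 # assumed meaning of C_r, C_g, C_b
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32) * 2.0  # degrees

    # (2) Discretization: Eq. (5) for green quiet zones, Eq. (6) for orange ones.
    green = (cr < 1.7) & (cg > 1.5) & (cb > 1.7) & (hue > 90) & (hue < 160)
    orange = (cr > 1.0) & (cg > 1.0) & (cb < 0.7) & (hue > 20) & (hue < 60)

    boxes = []
    for mask, label in ((green, "location"), (orange, "direction")):
        # (3) Labeling plus noise filtering by area and circularity.
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask.astype(np.uint8), 8)
        for i in range(1, n):
            x, y, w, hgt, area = stats[i]
            if area < min_area:
                continue
            circularity = (2 * (w + hgt)) ** 2 / float(area)  # bounding-box perimeter proxy
            if circularity > max_circularity:
                continue
            boxes.append((label, (x, y, w, hgt)))
    return boxes  # (4) Post-processing (merging adjacent boxes) is omitted for brevity
```

The bounding-box perimeter is used here as a cheap proxy for the true contour perimeter when computing circularity; a full implementation would also merge adjacent components in step (4) as described above.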

Figure 7. Process for QR code localization: (a) Input images; (b) Discretized results; (c) Labelling results; (d) Detected QR codes.

5.2. Object Recognition

The codes were attached at the locations where real signage is placed in indoor environments. Once a code is localized by the detection module, the proposed system initiates a QR reader [45]. A standard QR reader can accurately recognize the detected codes only within a limited range: the distance from the user to the code should be within 1 m, and the code should be perpendicular to the user. Due to these limitations, after detecting the codes, the proposed system first measures the distance between the user and the detected code and the viewing angle between them. It then verifies whether the two conditions are satisfied. If not, it guides the user to approach the code and adjust his/her position so that the QR reader can read the code. The details of this process are discussed in Section 6.

6. User Interface with Activity-based Instructions

Recently, activity-based navigation has been proposed as an alternative to map-based navigation because it does not require a pre-installed map and is not dependent on absolute positioning [34]. An activity denotes a mode of human movement, such as standing, walking, climbing stairs, or riding an elevator. Thus, activity-based navigation guides a user to a destination using a sequence of human movement activities, such as walking a certain number of steps, going up or down, and so on. The efficiency of the method in reducing the mental burden on visually impaired people and reducing navigation errors has been demonstrated previously in [34]. As mentioned above, people who go blind early and those born blind encode the sequential features of a travelled route, i.e., a set of instructions that denotes the directional changes in their route. Therefore, activity-based navigation is well-suited to providing guidance information to these users.

Accordingly, to convey the recognized results to users in a more efficient manner, new activity-based instructions were defined and used in the proposed system. Here, one instruction statement consists of four parameters: action, step count, compass direction, and current place, as shown in Figure 8a.
Based on the results obtained from the situation awareness and color code recognition modules, the user action is determined. Then, the parameters necessary for the required action are calculated by analyzing the geometric characteristics of the detected QR code.

In addition, in a navigation system for blind users, the generated information can be represented using spatial language ("turn left/right", "go straight", or "stop") or virtual sounds (i.e., the perceived azimuth of a sound indicates the target waypoint). The former is spoken to the user using a text-to-speech (TTS) service, and the latter is conveyed using beeps or sonar-like sounds. At the beginning of wayfinding, spatial language is more effective for guidance toward a specific direction or waypoint. However, when a cognitive load is present or accumulates while the user is moving, virtual sounds exhibit better performance than spatial language [37-39]. Therefore, the proposed method combined these approaches: for the go-straight and stop actions, the number of steps to the destination is important, so the proposed system uses beeping sounds with different frequencies according to the remaining number of steps; for the turn action, the proposed system uses spatial language to convey the directional command with compass directions through the text-to-speech service. The effectiveness of the combination of these two methods was demonstrated in a field study, which is described in Section 8. Through the field tests, the combined method exhibited better performance than using only speech-based spatial language.
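A compact sketch of this combined feedback policy is shown below; the beep-interval mapping and the exact TTS phrasing are illustrative choices, since the paper only specifies that the beep pattern varies with the remaining step count and that turns are spoken with compass directions.

```python
# Hedged sketch of the combined auditory feedback: beeps for go-straight/stop whose
# interval shrinks as the remaining steps run out, and spoken spatial language for turns.
def feedback_for(instruction):
    action, steps, compass, place = instruction       # the four instruction parameters (Figure 8a)
    if action in ("go-straight", "stop"):
        interval_s = max(0.2, min(1.0, 0.1 * steps))  # illustrative mapping, not from the paper
        return ("beep", interval_s)
    return ("tts", f"Turn {compass} at the {place}")  # e.g., "Turn east at the junction"
```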

Figure 8. Activity-based instruction: (a) Structure of an instruction statement; (b) Three types of instructions according to the actions.

6.1. Actions

In the proposed system, the possible actions are go straight, turn, and stop, each of which is determined according to several conditions, such as the current situation, code detection, and the viewing angle and distance between the user and the color codes. This determination process is summarized in Table 3.
service. The effectiveness of combination of perpendicular? se two methods - was demonstrated - in a field - study, which All is described Go-straight in Section Through field tests, combined method exhibited better - performance Allthan using Go-straight only speech-based - spatial language. - - All Turn 6.1. Actions Hall, Corridor and Junction Door Turn Turn to direction that is orthogonal to detected QR codes to guided direction by directional QR codes to direction to return back to previous route Door Stop - When color codes are not detected, proposed system guides user to continue to go straight along ir original direction. However, if a color code is found, it verifies wher QR code is placed within a recognizable scope. Then, if two QR code-reading conditions are satisfied (as specified in Section 5.2), it selects appropriate action based on current situation. For example,
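The decision logic summarized in Table 3 can be expressed in a few lines of code. The following is a minimal sketch, not the authors' implementation; the function and parameter names (select_action, is_perpendicular, and so on) and the final fallback branch are assumptions made for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    GO_STRAIGHT = "go-straight"
    TURN = "turn"
    STOP = "stop"

@dataclass
class Instruction:
    # The four parameters of one instruction statement (Figure 8a).
    action: Action
    step_count: int            # number of steps to walk
    compass_direction: float   # degrees, 0 = north
    current_place: str         # e.g., "corridor", "Room 1204"

def select_action(code_found: bool, is_perpendicular: bool,
                  distance_m: float, situation: str,
                  is_destination: bool) -> Action:
    """Mirror of the condition-to-action mapping in Table 3."""
    if not code_found:
        return Action.GO_STRAIGHT          # keep walking along the original direction
    if not is_perpendicular:
        return Action.TURN                 # face the code orthogonally first
    if distance_m >= 1.0:
        return Action.GO_STRAIGHT          # approach the code before reading it
    if situation in ("hall", "corridor", "junction"):
        return Action.TURN                 # follow the directional QR code
    if situation == "door":
        return Action.STOP if is_destination else Action.TURN
    return Action.GO_STRAIGHT              # assumed default for unknown situations
```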

After determining a suitable action according to the user's conditions, the necessary parameters for performing the respective action should be calculated. For a go-straight action, a step count is necessary to approach the detected color code or the next location. For a turn action, a compass direction is required to guide the user's orientation toward the next direction. Then, when the destination is reached, the proposed system is terminated, and it waits until a new destination is given. Accordingly, the instructions are presented in three forms according to the action type, as shown in Figure 8b.

6.2. Current Place

Positioning information is required for all actions. In this study, the positioning information is denoted by the current situation or the current place. If explicit information is provided by decoding QR codes, places are clearly denoted, such as Room 1204, toilet, or other specific locations; otherwise, the situation, such as a junction or hall, is used to denote the current place. Such information is obtained through situation awareness and object detection and recognition.

6.3. Compass Direction

Here, the compass direction is described by a cardinal direction, and the possible direction is selected from the following eight orientations: {north (0°), northeast (45°), east (90°), southeast (135°), south (180°), southwest (225°), west (270°), and northwest (315°)}. Sometimes, the compass direction is clarified explicitly by QR codes that represent directional information (see Figure 6b). However, in many cases, the compass direction is not clarified, e.g., when the detected color codes are not placed within a recognizable scope or when a user wants to return to previous steps. To guide users in such cases, two algorithms were developed to calculate the compass direction. The first is a vision-based algorithm that calculates the compass direction based on the viewing angles between the user and the detected QR codes. The second is a sensor-based algorithm that calculates the compass direction using the difference between the user's successive motions obtained from the gyroscope and accelerometer. In this section, only the vision-based algorithm is illustrated; the sensor-based algorithm is discussed in Section 7.

To calculate the viewing angles between the user and the QR codes, a regular grid map was used, in which each cell was 5 × 5. The procedure to estimate the viewing angle between the camera and the color code is as follows: (1) The regular grid map is overlaid on the detected color code. (2) For the cells allocated on both sides, the densities of green-colored cells along the Y-axis are accumulated and denoted as DL and DR, respectively. (3) The direction (left or right side) is set by the sign of the difference between the two accumulated densities. (4) The viewing angle is determined by the difference between the two accumulated densities, i.e., DL - DR.
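A minimal sketch of steps (1)-(4) is given below. It assumes the grid map is available as a boolean array of green/non-green cells and that the resulting DL - DR value is later mapped to an angle through a calibration curve such as the one in Figure 9; the function name, the interpretation of "both sides" as the left and right halves of the grid, and the return convention are illustrative, not the authors' implementation.

```python
import numpy as np

def viewing_angle_features(grid: np.ndarray) -> tuple[int, int]:
    """Accumulate green-cell densities on the two sides of the grid map
    overlaid on a detected color code, as in steps (1)-(4) above.

    grid: 2-D boolean array where True marks a cell classified as green.
    Returns (direction_sign, dl_minus_dr); the mapping from DL - DR to an
    angle in degrees would come from a calibration curve (Figure 9).
    """
    half = grid.shape[1] // 2
    dl = int(grid[:, :half].sum())   # density of green cells on the left side
    dr = int(grid[:, half:].sum())   # density of green cells on the right side
    diff = dl - dr
    direction = int(np.sign(diff))   # sign indicates which side the user is viewing from
    return direction, diff
```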

Figure 9. Relationship between viewing angles and DL - DR values.

Figure 9 shows how the difference values change as the viewing angle increases at various distances. As can be seen in this figure, these differences are directly proportional to the viewing angle between the user and the color code; that is, the difference gradually increases with larger viewing angles.

6.4. Step Counts

A step count is obtained by dividing the distance to a specific position by a normal step distance. Here, a normal step distance is assumed to be 0.5 m. When QR codes are not detected and when a user is turning, the step count is fixed to 3 because the proposed system can detect objects at a distance of 2.5 m from the user. In other cases, the distance is calculated using image processing. The step count calculation is performed after estimating the viewing angle because the distance measured at the perpendicular line from the color codes is more accurate.

Similar to the calculation of the compass direction, the regular grid map is first overlaid on the codes. Then, the distance is obtained by counting the number of grid cells that are mapped to the green quiet zone. Its ratio over all cells is inversely proportional to the distance; that is, the ratio gradually decreases with larger distances between the user and the color code. Figure 10 shows color codes that were captured at several distances and viewing angles. The images in Figure 10a,b were captured from the same distance; however, they have different viewing angles of 20° and 50°, respectively. The images in Figure 10c,d were captured at distances of 1.25 m and 0.5 m from the user, respectively. To measure the viewing angles and distances from the user, the regular grid map was first overlaid on the detected color codes shown in Figure 10. Then, the difference between the densities of green-colored pixels at both ends was calculated, and the ratio of green-colored cells over all cells was counted. As shown in Figure 10, the difference in Figure 10a is larger than that in Figure 10b, and Figure 10c has a smaller ratio than that of Figure 10d.

Figure 10. Process used to calculate the viewing angle from the camera to the color codes and the distance between them: (a) and (b) are images captured with viewing angles of 20° and 50° at a distance of 0.5 m, respectively; (c) and (d) are images captured from distances of 1.25 m and 0.5 m at a viewing angle of 0°, respectively.
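The distance and step-count computation described above can be sketched as follows, reusing the same boolean grid map as in the previous sketch. The calibration from the green-cell ratio to metres is a placeholder, since the actual fitted curve is not given in the text; the names and constants other than the 0.5 m step length and the 3-step fallback are illustrative.

```python
import math
from typing import Optional

import numpy as np

STEP_LENGTH_M = 0.5    # normal step distance assumed in the paper
FALLBACK_STEPS = 3     # used when no QR code is detected or the user is turning

def ratio_to_distance_m(green_ratio: float) -> float:
    # Placeholder calibration: the ratio of green "quiet zone" cells is
    # inversely proportional to distance; the real curve would be fitted
    # from images such as those in Figure 10.
    return 0.25 / max(green_ratio, 1e-6)

def step_count(grid: Optional[np.ndarray]) -> int:
    """Return the number of steps to reach the detected color code."""
    if grid is None:                      # no color code detected
        return FALLBACK_STEPS
    green_ratio = float(grid.mean())      # fraction of cells mapped to the quiet zone
    distance = ratio_to_distance_m(green_ratio)
    return max(1, math.ceil(distance / STEP_LENGTH_M))
```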

7. User Trajectory Recording Module

When a user has to return to his/her start point (i.e., a lobby or elevator), it can cause a mental and physical burden on the user. To reduce these difficulties, we provide back paths using the recorded paths. To do this, this module records the user's trajectory from the starting point until he/she arrives at the destination, in order to help him/her return to a previous location such as the starting point. Using the sensors that are already integrated into mobile phones, the paths can be constructed automatically while the user is moving. Here, two inertial sensors are used: a gyroscope and an accelerometer.

Algorithm 1 describes the algorithm used to record a user's trajectory to the destination. All paths are recorded to a stack (S), and each path is formatted as one instruction statement. Thus, the action should be defined first, and then the related parameters, e.g., the step count (SC), compass direction (θ), and current position (P), should be estimated. In this module, these parameters are calculated based on the sensory information.

Algorithm 1: The proposed trajectory recording algorithm.
Input: Gyroscope sensor G, accelerometer sensor AC, destination D
Output: Stack S that contains the set of instructions I(A, SC, θ_current, P), where A, SC, θ, and P are variables for the action, step count, compass direction, and position, respectively.
Procedure:
1. Initialize A ← null, θ_previous, θ_current ← 0, SC ← 0, P(P_x, P_y) ← (0, 0)
   // Determine the action type
2. if AC < 0.03, then A ← Stop
3. else if |θ_current − θ_previous| ≥ 15°, then A ← Turn
4. else A ← Go-straight
   // Estimate the instruction parameters according to the action type
5. if A is Go-straight, then SC, P_x, and P_y are updated as follows:
   SC ← SC + 1, P_x ← SC · cos θ_current, P_y ← SC · sin θ_current
6. else if A is Turn, then θ_previous ← θ_current
7. Push I(A, SC, θ_current, P) to S
   // Check whether the current positioning information is the destination
   // (the positioning information is obtained by recognizing QR codes)
8. if the current location is the destination, then terminate
9. else go to Line 2
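A compact, runnable rendering of Algorithm 1 is sketched below. The acceleration and turning thresholds are taken from the listing; the class and method names, and the per-sample update structure, are assumptions made for illustration rather than the authors' implementation.

```python
import math
from dataclasses import dataclass

ACCEL_STILL_THRESHOLD = 0.03   # below this the user is considered stopped
TURN_THRESHOLD_DEG = 15.0      # heading change that counts as a turn

@dataclass
class Step:
    action: str                     # "Go-straight", "Turn", or "Stop"
    step_count: int
    heading_deg: float
    position: tuple[float, float]

class TrajectoryRecorder:
    def __init__(self) -> None:
        self.stack: list[Step] = []
        self.step_count = 0
        self.prev_heading = 0.0

    def update(self, accel_mag: float, heading_deg: float) -> Step:
        """One iteration of Algorithm 1 for a new inertial-sensor sample."""
        if accel_mag < ACCEL_STILL_THRESHOLD:
            action = "Stop"
        elif abs(heading_deg - self.prev_heading) >= TURN_THRESHOLD_DEG:
            action = "Turn"
            self.prev_heading = heading_deg
        else:
            action = "Go-straight"
            self.step_count += 1

        rad = math.radians(heading_deg)
        pos = (self.step_count * math.cos(rad), self.step_count * math.sin(rad))
        step = Step(action, self.step_count, heading_deg, pos)
        self.stack.append(step)     # the stack is later replayed in reverse
        return step
```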

Once the user arrives at the destination, a return route to the origin should be provided. The algorithm used to provide the reverse path is simple and is described in Algorithm 2. As shown in Algorithm 2, the instruction that is placed on top of the stack is conveyed to the user. Because the main users of the proposed system are blind or visually impaired, all instructions are provided verbally through text-to-speech functionality.

Algorithm 2: The proposed trace backward algorithm.
Input: Stack S that contains the set of instructions I(A, SC, θ, P), where A, SC, θ, and P are variables for the action, step count, compass direction, and position, respectively
Output: Instruction
Procedure:
1. Pop I(A, SC, θ, P) from S
2. if A is Turn, then θ ← (360° − θ)
   // Generate the instruction statement according to the action type
3. if A is Go-straight, pop the next I(A, SC, θ, P) from S
4.   if A is Go-straight, then SC ← SC + 1
5.   else push I(A, SC, θ, P) back to S and make the instruction "Go straight SC steps"
   else if A is Turn, then make the instruction "Turn to θ"
6. else make the instruction "Stop"
7. // Convey the instruction to the user through the Text-to-Speech (TTS) service
8. Call TTS(instruction)
9. if S is empty, then terminate
10. else go to Line 1
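Continuing the earlier sketch (and reusing its Step dataclass), the reverse playback of Algorithm 2 can be written as below; again this is an illustrative rendering, and the spoken-text formatting and the merging of consecutive go-straight entries into a single step count are assumptions.

```python
def backward_instructions(stack: list[Step]) -> list[str]:
    """Replay the recorded steps in reverse order, merging consecutive
    go-straight entries and mirroring turn angles, as in Algorithm 2."""
    instructions: list[str] = []
    remaining = list(stack)            # copy so the original recording is preserved
    straight_run = 0
    while remaining:
        step = remaining.pop()         # top of the stack = most recent step
        if step.action == "Go-straight":
            straight_run += 1
            continue
        if straight_run:
            instructions.append(f"Go straight {straight_run} steps")
            straight_run = 0
        if step.action == "Turn":
            instructions.append(f"Turn to {(360 - step.heading_deg) % 360:.0f} degrees")
        else:                          # "Stop" marks the original starting point
            instructions.append("Stop")
    if straight_run:
        instructions.append(f"Go straight {straight_run} steps")
    return instructions
```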

8. Experiments

To demonstrate the advantages of the proposed situation-based wayfinding system, two experiments were performed: an efficiency test and a field test. The efficiency test was designed to evaluate the accuracy of the five modules, and the field test was designed to demonstrate the feasibility of the proposed system in increasing the mobility of blind and visually impaired users.

8.1. Efficiency Test

For a quantitative performance evaluation of the proposed system, color codes were first attached to walls inside the buildings, and then test images were collected at different times of day and under different lighting conditions.

8.1.1. Situation Awareness Results

In the proposed system, the performance of the situation awareness module is crucial because it is used to select the type of QR code to be detected and it determines the actions that guide the user. In order to evaluate the accuracy of the proposed recognition method, it was tested with a total of 350 images that were captured from eight locations. Figure 11 presents some of the results from the situation awareness module in the proposed system. Figure 11a presents the SURF features overlapping the original input images, in which the scenes have cluttered backgrounds and varying degrees of illumination. These input images were compared with the template data for indexing using the vocabulary tree, and the most relevant images were selected. Figure 11b depicts examples of images that produced errors. For robust recognition regardless of illumination, scale, and viewpoint, the proposed situation awareness method uses SURFs to describe the images. However, this process is subject to interference from light reflections caused by surface materials. In the first two images, the situations were misclassified as corridors because the slanted lines that resulted from reflected light created a similar complexity to the SURF descriptors for corridor images, thereby leading to misclassification. In the second case, static objects were mistaken as walls; thus, the image was misclassified as a hall. However, the confusion between corridor and hall classifications did not influence the performance of the proposed system because both situations require the same environmental information, i.e., directional information.

Figure 11. Situation awareness results: (a) Correctly recognized images; (b) Error images.

Table 4 summarizes the overall performance of the situation awareness module for various indoor environments. The average accuracy was above 90%. The accuracy in recognizing the three primitives was 93% on average. The proposed system recognized all door situations and showed perfect performance, while it had a lower precision for corridor and hall situations. Because a junction combines two or more other primitive types and does not have the distinctive features shown in Figure 5, junction images could be misclassified as corridor or door, which does not influence the proposed system.

Table 4. Confusion matrix of situation awareness (%); the rows and columns correspond to the door, corridor, hall, and junction classes.

8.1.2. Object Detection Results

For practical use, the proposed method should satisfy two requirements, which are explained in detail in reference [30]: it should detect color codes at distances of up to 2 m in cluttered environments, and it should be robust to various degrees of illumination and cluttered backgrounds. Thus, the performance of object detection was measured by changing the following three environmental factors: the distance from the user (camera) to the color codes, the viewing angle between the user and the color codes, and the degree of illumination, e.g., direct sunlight and fluorescent light.

Images were collected at distances from 25 cm to 3 m, viewing angles from −90° to 90°, and under different illuminations from direct sunlight to fluorescent light. For each type of illumination, the distance was fixed at a specific value, and object detection was tested while the viewing angle was changed. Figure 12 shows examples of the object detection results, in which the red rectangles represent the localized results for the color code.

Figure 12. Examples of QR code localization results on real scenes.

Figure 13 summarizes the experimental results, with the performance analyzed in terms of the maximum detection distance (MDD) and maximum detection angle (MDA). Color codes that were perpendicularly aligned with the user were detected up to a maximum distance of 2.5 m.

Figure 13. The accuracy of QR code detection in terms of the maximum detection distance (MDD) and maximum detection angle (MDA).

However, the maximum viewing distance gradually decreased as the viewing angle moved further from the center. Thus, the proposed system can detect color codes with a viewing angle of 60° at a distance of 1.5 m. The detection accuracy was also affected by the degree of illumination, showing better performance under fluorescent light than under direct sunlight.

These results are impressive when compared with existing color code-based methods. In reference [30], a wayfinding system using color barcodes was proposed, in which color targets were designed to improve the detection distance and were used as the basis to denote the places of the color barcodes. By using the color targets, the detection range was extended to a maximum of 1.5 m. Although the detection range was extended, additional computations are involved, and the color targets, as well as the color codes, must be attached to the environment. In contrast, the proposed system has a maximum detection range of 2.5 m. Accordingly, it was proven that the proposed method has superior performance in accurately detecting color codes.

Figure 14 presents the color code localization results for images with cluttered backgrounds and color compositions similar to the proposed QR codes. In Figure 14, the images in the first row (Figure 14a) include QR codes, while those in the second row (Figure 14b) do not. There were no false positives, and the color codes in the first row were all accurately localized.

Figure 14. QR code localization results on complex scenes: (a) Images that include QR codes; (b) Images without QR codes.

The proposed detection method was also evaluated in terms of the false positive rate (FPR) and false negative rate (FNR), which are presented in Table 5. For a distance range of 0.25 m to 3 m, the FPR was 0, and the FNR gradually increased with increasing distance. In cases in which a user turns around or moves quickly, QR codes can be missed. However, they can be found in the next consecutive frames because the proposed system processes the image stream at 10 fps. After detecting a QR code, the proposed system asks the user to move toward the code so that it can be recognized more accurately. After the user moves closer, the proposed system starts the decoder, which then exhibited a recognition accuracy of 100%.

Table 5. Performance of QR code detection (%); the FPR and FNR are reported for each distance (cm).

8.1.3. User Trajectory Recording Results

In this study, human movements are defined in terms of the action type, step count, and compass direction. Thus, there are ten basic movements, including moving forward/backward and turning in the eight directions specified in Section 6.3. Furthermore, various complex movements can be produced by combining two or more basic movements. Accordingly, the accuracy of the user trajectory recording was evaluated using these types of movements.

For these experiments, four users used the proposed system for wayfinding. First, a brief explanation of the proposed system was provided to the users, and they were shown how to use it. The users were asked to perform every movement five times, and then the error rate was measured. The users' moving paths were recorded, and their trajectories were compared with the real moving paths. An experiment was then performed to evaluate the accuracy of trajectory estimation for more complex movements. In this experiment, the users were asked to return to their original starting point after reaching their destination. Figure 15 shows the results of these experiments, where each cell corresponds to a fixed distance in real space. There are three lines: the green dotted line indicates the path that the user actually took, and the two solid lines are the results from the proposed method. The solid blue line is the route from the starting point to the destination, estimated using the proposed user trajectory recording algorithm (Algorithm 1). The solid red line is the return route for the user to return to the original starting point according to the instructions generated with the proposed backward path algorithm.

To calculate the accuracy of the trajectory recording module, the errors between the recorded trajectories and the back paths generated by this module were calculated every second. On average, the proposed method exhibited error rates below 6% and 5% in estimating the distance and compass direction, respectively. These error rates would accumulate as the user's movements increase in real applications. To handle this problem, the trajectory estimation method proposed in [46] was incorporated into the current system to compensate for the accumulated errors and to provide accurate return paths to the user.

Figure 15. Estimation of users' traces for complex movements: Routes generated by (a) User 1, (b) User 2, and (c) User 3, respectively.

8.1.4. Processing Time

For practical use by blind and visually impaired users, a wayfinding system should be portable and effective in real time. Thus, the computation time of the proposed system is a critical consideration. Table 6 presents the average time required to process a frame in the respective modules on an iPhone 6. The situation awareness module was performed at 2 s intervals when simulating real environments. On average, the proposed system required 149 ms to process a frame; because the situation awareness module runs only at these 2 s intervals, the remaining modules take roughly 78 ms per frame, which corresponds to a processing rate of up to 13 frames per second. This confirmed that the proposed system could be used as an effective wayfinding aid in real-time situations.

Table 6. Processing time (ms).

| Stage | Time |
| Situation awareness | 71 |
| Object detection | 29 |
| Object recognition | 36 |
| Activity-based instruction | 7 |
| User trajectory recording | 6 |

8.2. Field Test

For practical use of the proposed system in real environments, the following three factors should be considered: real-time operation, performance, and serviceability. Thus, several test environments were constructed, and field tests with four users were performed.

8.2.1. Participants

Our goal is to provide guidance information to blind and visually impaired people. In order to demonstrate the validity of the proposed method, we recruited participants with low vision. Thus, the field test was performed with four low-vision users: three females and one male, with a median age of 26 years (range: 24-28 years). All users were able to use a smartphone in their daily lives. Table 7 provides a summary of the participants. Users 3 and 4 have significant visual impairments and cannot see objects well when they are more than an arm's length away.

Table 7. Participants.

| User (Age/Gender) | Ability (Visual Acuity, Decimal) | Experience with Mobile Phone |
| User 1 (25/Female) | Low vision (0.2) | Yes |
| User 2 (27/Female) | Low vision (0.2) | Yes |
| User 3 (28/Female) | Low vision (0.15) | Yes |
| User 4 (24/Male) | Blind (0.01) | Yes |

Each user was given a basic introduction to the proposed system and was shown how to operate it. This included entering a destination via voice recognition, practicing turning in each direction according to the given instructions, and approaching a QR code attached to a wall. At first, the users found it difficult to follow the instructions generated by the proposed system, but they adapted after several practice sessions. These processes were repeated until they felt confident using the proposed system. They were then asked to move from a starting point to a destination using the instructions provided by the proposed wayfinding system. In order to prevent dangerous situations that could arise, we asked one caregiver to walk alongside the users without commenting on wayfinding during the experiments. In addition, in order to manage hand jitter, the smartphone was fixed to the upper body of the user at a fixed height above the floor. Figure 16 shows a snapshot of one user using the proposed system during the field test. As seen in Figure 16b, the system requires entering the destination through speech or a keypad, for example, "information center" or a room number. Thereafter, it continuously recognizes the current situation, locates QR codes, and interprets them until the destination is reached. All results analyzed by the proposed system were given to the users by TTS and beeping.

Some ethical issues regarding this study should be mentioned. We complied with the principles and protocols of the Declaration of Helsinki when conducting the field test. To give the users insight into the research process, we gave them and their parents a short introduction to the research procedure and explained the informed consent for participating in the study. After the users indicated that they had examined the form and agreed to take part in the study, they signed the informed consent form.

At the beginning of and during the field test, the users were told repeatedly that they could terminate their participation in the study at any time.

Figure 16. A user performing the initial tasks: (a) A user moving according to the guidance of the proposed system; (b) Screen of the proposed wayfinding system.

8.2.2. Test Maps

The goal of this research was to develop a wayfinding system that enables blind and visually impaired users to easily navigate to a specific location in unfamiliar indoor environments. In the field test, two buildings located on the Konkuk University campus were used. These buildings have different layouts. The first is the New Millennium building, a 14-story building with a total of 96 different rooms, and the other is an environmental engineering building, a 6-story building with a total of 175 different rooms. The respective floors have almost the same structure except for the first floor, as described in Figure 17. In these buildings, many experiments were performed with several different scenarios, some of which are presented in this section. The goals of the field tests consisted of: (1) arriving at room 406 on the fourth floor of the environmental engineering building, (2) finding room 1204, and (3) finding the toilet on the twelfth floor of the New Millennium building. To achieve this, each goal was composed of two or three sub-goals. For example, the user entered the lobby on the first floor and had to find an information desk (sub-goal 1) and take an elevator (sub-goal 2). After getting out of the elevator, they had to move toward their destination (e.g., a room or toilet).

Figure 17 shows the test maps constructed for the field tests, for which the scenes contained textured and cluttered backgrounds and reflective lighting with shadows. Several QR codes were first affixed to the walls with location and directional information. On the maps, blue boxes indicate where location-specific codes were affixed next to doors, and red boxes with a white arrow indicate guide codes that provide directional information. For the test maps, it was assumed that the users started at a predefined point and were navigating an unfamiliar environment to a predefined destination. In the test scenarios, the users were standing in the hall and had to first find a guide code to obtain directional information for the destination. Then, they had to locate the following QR codes until they reached their destination.

Figure 17. Test maps constructed for the real environments: (a) Environmental engineering building; (b) New Millennium building.

When the users were on the first floor (the bottom layer of Figure 17a,b), they had to first visit the information center to get the place numbers of their destinations, and then they had to move to the target floors using the elevator. Thus, they had two goals on the first floor: finding the information center and taking an elevator. Additionally, the test map in the top layer of Figure 17a had one goal, and the test map in the top layer of Figure 17b had two goals. All of these goals could be found by recognizing the QR codes of the information center, elevator, toilet, and office. In real environments, directional information would not be provided at junctions, which increases the difficulty of wayfinding to the destination. Thus, it was assumed that guide signs were provided at every junction.

8.2.3. Results

In this section, we present the experimental results of the field tests that were performed with the four users. We evaluated the performance in terms of task time and wayfinding errors: (1) the complete time taken from the starting point until reaching the destination was evaluated, and (2) the wayfinding errors were measured by comparing the users' trajectories using the proposed system with the optimal route determined by sighted people. For this, the researchers put chalk on the shoes of the users so that their footprints and paths could be measured. In addition, in order to evaluate the effectiveness of the user interface, the users performed wayfinding twice for each route; the first test guided the user using only speech, and the second test guided them using both beeping and speech.

Figure 18 shows the paths of the four users when moving in the test maps shown in Figure 17. In Figure 18, the three patterns represent the respective situation types recognized by the proposed method. As seen in the figure, the proposed system accurately recognized the situations. The black dotted line represents the optimal route created by sighted people, and the other color lines represent the paths of each user. The red circles indicate positions where errors occurred. Despite the space complexity and lighting conditions, most users followed a near-optimal route in the unfamiliar indoor environment without reference points, as depicted in Figure 18. Furthermore, the trajectories were similar to each other regardless of the combination of speech and beeping.

However, some users made errors in their paths. In the map in the top layer of Figure 18a, User 3 made one error. As seen in Figure 17a, the width of the corridor was relatively wide, so it could be easy to miss QR codes due to the limited field of view (FOV) of the smartphone. User 1 failed to detect one QR code that denoted the target place; therefore, she continued to go straight. However, at the end of the corridor, she found a QR code that told her to turn back; thus, she could find the destination. On the map in the top layer of Figure 18b, User 4 misunderstood an instruction from the proposed system and turned in the opposite direction. However, at the end of the corridor, he found another QR code and turned back toward the destination. In addition, when the proposed system guided him using only spatial language, User 4 was confused and misunderstood the instructions, which could have caused an accumulated cognitive load during long-distance travel. Therefore, we combined beeping information in order to reduce the cognitive load and mental demands of the user. When the proposed system guided the users using both beeping and speech, User 4 arrived at his destination without error.

On average, the lateral deviation relative to the optimal path was 0.5 m. Thus, even though some errors occurred, the field test results demonstrated that the proposed system had an error decision rate of less than 3% on average. Some existing systems using building maps provide the optimal route to the destination by applying the Dijkstra algorithm to preloaded maps [35]. The proposed system may provide short paths to the destination; however, it does not guarantee optimal paths because it does not use preloaded maps.

Figure 19 shows the average travel time taken by the users to accomplish the goals in the respective buildings. In order to compare the difficulty of approaching the attached QR codes, we also measured the average completion time of two sighted students who followed the instructions generated by the proposed system. The two students needed about 67% of the completion time spent by the four participants to arrive at their destinations. This means that the proposed system requires training time for visually impaired people to follow the generated instructions when they approach the QR codes. In addition, based on these data, it is clear that significant differences do not exist between the users.

In the first building, the average completion time and standard deviation for each goal were (40.5, 5.3) and (108.8, 15.5); in the second building, they were (56, 3.6), (111, 12.3), and (37.3, 2.9). For Goal 3 in the first building, it took User 3 a long time to reach the destination because of missed QR codes and the need to return from the end of the corridor. Nonetheless, most users took a similar time to complete the goals. It is therefore assumed that the proposed system can be useful for visually impaired and blind users.

Figure 18. User trajectory results: (a) Environmental engineering building; (b) New Millennium building. The black dotted line shows the optimal route, and the other color lines represent the traces of each user. The red circles indicate the positions where errors occurred.

Figure 19. Time taken by each user to accomplish the goals (s).

8.2.4. Post-Test Interview Results

After finishing the field test using the proposed system, the users were interviewed in order to determine their satisfaction with the system. In order to obtain more details about their opinions, seven questions were designed based on the System Usability Scale [47] from the ten available questions. The users were asked to rate the following seven items using one of five responses that ranged from strongly agree (5) to strongly disagree (1):

E1: I think that I would like to use this system frequently.
E2: I thought the system was easy to use.
E3: I think that I would need the support of a technical person to be able to use this system.
E4: I found that the various functions in this system were well integrated.
E5: I think that most people would learn to use this system very quickly.
E6: I thought that there was consistency in this system.
E7: I felt very confident using the system.

Figure 20 presents the results of the post-test interviews. As seen in the figure, most users were satisfied with the proposed wayfinding system. The users responded with average satisfaction rates of 80%, 85%, and 80% for E1, E2, and E7, respectively. Regarding the proposed system's availability in real environments, the users answered with average rates of 85% and 80% for E3 and E5, respectively. In addition, the users gave ratings of 80% and 90% for E4 and E6, respectively, in the evaluation of the function and consistency of the proposed system.

Figure 20. Evaluation results.

Based on the post-test interviews, some users thought that the proposed system was easy to use and helped them with their movements. Even though they needed some learning time to use the proposed system, most of the users said that they were accustomed to interacting with it after three trials. The overall results confirmed the feasibility of the proposed system as a wayfinding aid for visually impaired and blind users. In the field tests, the results show that the users could locate the optimal path in real time with an accuracy of 97% and that they thought the proposed system was comfortable and efficient. Consequently, the proposed wayfinding system can effectively support visually impaired and blind users in navigating unfamiliar indoor environments.

8.3. Discussion

This paper has proposed a situation-based indoor wayfinding system for blind and visually impaired people. The field test results demonstrated that the proposed system can provide convenient and efficient guidance information for users. However, it requires some improvements in providing optimal path guidance to users, increasing the localization accuracy [16,17] in cluttered environments, and extensive field testing in order to verify the generalizability of the proposed system.

First, the current system guides the user along routes based on the QR code recognition results. This means that it provides limited routes according to the QR codes in the building. Thus, it cannot provide various routes such as shortcuts, detours, or multiple destinations. In order to address this limitation, map information and a searching algorithm should be integrated in order to generate possible routes and to recommend appropriate routes according to user preferences. As future work, we will integrate the GIS representation used in [16] into the current system and employ the A* algorithm [35] to generate possible routes from a given destination and the GIS representation.

Second, in large open spaces, some QR codes can be hidden by pedestrians and objects, which causes the proposed method to miss some QR codes. In order to counter this problem, we will combine the QR codes with information from nonvisual sensors such as ultra-wideband (UWB). In [17], a UWB-based system achieved accurate localization in a room (with sides shorter than 100 m) with a single set of four sensors, exhibiting positioning errors of up to 20 cm in most locations. In general, the installation cost for UWB is lower than that of RFID or Wi-Fi network-based approaches; however, it requires significant computational costs. Therefore, the proposed system could operate UWB-based localization in cluttered environments such as junctions or halls, and switch over to vision-based wayfinding in corridors and at doors.

Finally, in order to demonstrate the validity of the statistical inferences, the number of users will be increased. In the current study, only four people with visual impairments participated in our test. The field test should be performed with more users with greater variation in age, e.g., to include elderly people. In order to locate users with various profiles for a more extensive user study, we have been contacting official departments such as the Gwangjin-gu Office (Social Welfare Division), which is a public institution in Seoul, Korea.

9. Conclusions

This study developed a new situation-based wayfinding system to help blind and visually impaired users recognize their location and find their way to a given destination in an unfamiliar indoor environment. The proposed wayfinding system was implemented on an iPhone 6, and it consists of five modules: situation awareness, object detection, object recognition, activity-based instruction, and user trajectory recording. To assess the validity of the proposed codes and wayfinding system, experiments were conducted in several indoor environments. The results show that the proposed system could detect the color codes with an accuracy of almost 100% at a distance of 2.5 m and a viewing angle of ±40°, while recognizing their meaning with an accuracy of above 99%. In addition, to confirm its real-time efficacy, field tests were performed with four users who have significant visual impairments; all of the users found the optimal path in real time with an accuracy of 97%.

A significant contribution of the proposed system over existing systems is that it does not rely on prior knowledge such as maps or 3D models of buildings; instead, it automatically predicts the outline of buildings through situation awareness and scene object recognition. Another contribution is the development of a wayfinding system for mobile phones equipped with a camera and inertial sensors (i.e., a gyroscope and accelerometer), which can guide users along a route to the target destination. A third significant contribution is that the proposed system has a more efficient user interface using activity-based instructions.

The proposed system needs some improvements, including (1) the provision of optimal path guidance to users by combining map information with the proposed system [16,35], (2) increased localization accuracy by integrating the UWB technique in cluttered environments [17], and (3) verification of the generalizability of the proposed system by designing various scenarios with more varied users. In order to fully support the mobility of blind and visually impaired people, a system that can prevent collisions with obstacles should be incorporated into the current wayfinding system, and intensive formal validation tests should be performed with more users in order to generalize the system's efficiency and validity. In this area of research, previous studies by the authors developed an intelligent wheelchair [48,49] and EYECANE [50,51]. The intelligent wheelchair was developed for severely disabled people, and it provides anti-collision maneuvers as well as a convenient user interface. EYECANE is a camera-embedded white cane that detects obstacles and finds obstacle-free paths using a vision-based technique. To avoid obstacles in a more efficient manner, situation information is required, e.g., users should walk along a corridor wall, they should stop in front of a door, and so on. Thus, in future research, a technique to avoid obstacles will be developed based on situation information, and the algorithms will be integrated into EYECANE. The current wayfinding system will be combined with the extended EYECANE to support the safer mobility of blind and visually impaired people.

Acknowledgments: This research was supported by MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITP (Institute for Information & Communications Technology Promotion).

Author Contributions: Eun Yi Kim conceived of the research and participated in its design and coordination. Eunjeong Ko implemented the system and performed the experiments. Eun Yi Kim and Eunjeong Ko wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. World Health Organization. Available online: (accessed on 9 August 2017).
2. Giudice, N.A.; Legge, G.E. Blind navigation and the role of technology. In The Engineering Handbook of Smart Technology for Aging, Disability, and Independence; John Wiley & Sons: Hoboken, NJ, USA, 2008.
3. Fallah, N.; Apostolopoulos, I.; Bekris, K.; Folmer, E. Indoor Human Navigation Systems: A Survey. Interact. Comput. 2013, 25.
4. Lynch, K. The Image of the City; MIT Press: Cambridge, MA, USA.
5. Thinus-Blanc, C.; Gaunet, F. Representation of space in blind persons: Vision as a spatial sense? Psychol. Bull. 1997, 121. [CrossRef] [PubMed]
6. Gulati, R. GPS Based Voice Alert System for the Blind. Int. J. Sci. Eng. Res. 2011, 2.
7. Cecelja, F.; Garaj, V.; Hunaiti, Z.; Balachandran, W. A Navigation System for Visually Impaired. In Proceedings of the IEEE Conference on Instrumentation and Measurement Technology, Sorrento, Italy, April 2006.

8. Ando, B.; Baglio, S.; Marletta, V.; Pitrone, N. A Mixed Inertial & RFID Orientation Tool for the Visually Impaired. In Proceedings of the 6th International Multi-Conference on Systems, Signals and Devices, Djerba, Tunisia, March.
9. Liu, X.; Makino, H.; Kobayashi, S.; Maeda, Y. Design of an Indoor Self-Positioning System for the Visually Impaired-Simulation with RFID and Bluetooth in a Visible Light Communication. In Proceedings of the 29th Annual International Conference of the IEEE EMBS, Lyon, France, August.
10. Chang, Y.; Chen, C.; Chou, L.; Wang, T. A Novel Indoor Wayfinding System Based on Passive RFID for Individuals with Cognitive Impairments. In Proceedings of the 2nd International Conference on Pervasive Computing Technologies for Healthcare, Tampere, Finland, 30 January-1 February.
11. Digole, R.N.; Kulkarni, S.M. Smart navigation system for visually impaired person. Int. J. Adv. Res. Comput. Commun. Eng. 2015, 4.
12. Sahin, Y.G.; Aslan, B.; Talebi, S.; Zeray, A. A smart tactile for visually impaired people. J. Trends Dev. Mach. 2015, 19.
13. Hub, A. Combination of the Indoor and Outdoor Navigation System TANIA with RFID Technology for Initialization and Object Recognition. In Proceedings of the International Mobility Conference, Marburg, Germany, July.
14. Paredes, A.C.; Malfaz, M.; Salichs, M.A. Signage system for the navigation of autonomous robots in indoor environments. IEEE Trans. Ind. Inf. 2014, 10. [CrossRef]
15. Loomis, J.M.; Golledge, R.G.; Klatzky, R.L.; Marston, J.R. Assisting wayfinding in visually impaired travelers. In Applied Spatial Cognition: From Research to Cognitive Technology; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2006.
16. Riehle, T.H.; Lichter, P.; Giudice, N.A. An indoor navigation system to support the visually impaired. In Proceedings of the 30th Annual IEEE EMBC, Vancouver, BC, Canada, August.
17. Martinez-Sala, A.S.; Losilla, F.; Sanchez-Aarnoutse, J.C.; Garcia-Haro, J. Design, implementation and evaluation of an indoor navigation system for visually impaired people. Sensors 2015, 15. [CrossRef] [PubMed]
18. Qian, J.; Ma, J.; Ying, R.; Liu, P.; Pei, L. An improved indoor localization method using smartphone inertial sensors. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Montbeliard-Belfort, France, October.
19. Riehle, T.H.; Anderson, S.M.; Lichter, P.A.; Whalen, W.E.; Giudice, N.A. Indoor Inertial Waypoint Navigation for the Blind. In Proceedings of the 35th Annual IEEE Engineering in Medicine and Biology Conference, Osaka, Japan, 3-7 July.
20. Beydoun, K.; Felea, V.; Guyennet, H. Wireless sensor network system helping navigation of the visually impaired. In Proceedings of the IEEE International Conference on Information and Communication Technologies: From Theory to Applications, Damascus, Syria, 7-11 April 2008.
21. Chang, Y.J.; Wang, T.Y. Indoor wayfinding based on wireless sensor networks for individuals with multiple special needs. Cybern. Syst. Int. J. 2010, 41. [CrossRef]
22. Treuillet, S.; Royer, E.; Chateau, T.; Dhome, M.; Lavest, J.M. Body Mounted Vision System for Visually Impaired Outdoor and Indoor Wayfinding Assistance. In Proceedings of the Conference & Workshop on Assistive Technologies for People with Vision & Hearing Impairments, Granada, Spain, August.
23. Anderson, J.D.; Lee, D.J.; Archibald, J.K. Embedded Stereo Vision System Providing Visual Guidance to the Visually Impaired. In Proceedings of the IEEE/NIH Life Science Systems and Applications Workshop, Bethesda, MD, USA, 8-9 November.
24. Karacs, K.; Lazar, A.; Wagner, R.; Balya, D.; Roska, T.; Szuhaj, M. Bionic Eyeglass: An Audio Guide for Visually Impaired. In Proceedings of the IEEE Biomedical Circuits and Systems Conference, London, UK, 29 November-1 December.
25. Elloumi, W.; Guissous, K.; Chetouani, A.; Canals, R.; Leconge, R.; Emile, B.; Treuillet, S. Indoor navigation assistance with a smartphone camera based on vanishing points. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Montbeliard-Belfort, France, October.
26. Al-Khalifa, H.S. Utilizing QR Code and Mobile Phones for Blinds and Visually Impaired People. In Proceedings of the International Conference on Computers Helping People with Special Needs, Linz, Austria, 9-11 July 2008.

27. Smart Camera Project. Available online: (accessed on 9 August 2017).
28. Zeb, A.; Ullah, S.; Rabbi, I. Indoor vision-based auditory assistance for blind people in semi-controlled environments. In Proceedings of the 4th International Conference on Image Processing Theory, Tools and Applications, Paris, France, October.
29. Kulyukin, V.A.; Kutiyanawala, A. Demo: ShopMobile II: Eyes-Free Supermarket Grocery Shopping for Visually Impaired Mobile Phone Users. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, San Francisco, CA, USA, June.
30. Manduchi, R.; Kurniawan, S.; Bagherinia, H. Blind Guidance Using Mobile Computer Vision: A Usability Study. In Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, Orlando, FL, USA, October.
31. Torrado, J.C.; Montoro, G.; Gomez, J. Easing the integration: A feasible indoor wayfinding system for cognitive impaired people. Pervasive Mob. Comput. 2016, 31. [CrossRef]
32. Legge, G.E.; Beckmann, P.J.; Tjan, B.S.; Havey, G.; Kramer, K. Indoor Navigation by People with Visual Impairment Using a Digital Sign System. PLoS ONE 2013, 8. [CrossRef] [PubMed]
33. Chang, Y.; Tsai, S.; Wang, Y. A Context Aware Handheld Wayfinding System for Individuals with Cognitive Impairments. In Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, Halifax, NS, Canada, October.
34. Mulloni, A.; Seichter, H.; Schmalstieg, D. Handheld Augmented Reality Indoor Navigation with Activity-Based Instructions. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, Stockholm, Sweden, 30 August–2 September.
35. Montague, K. Accessible indoor navigation. In Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, Orlando, FL, USA, October.
36. Amemiya, T.; Sugiyama, H. Handheld Wayfinder with Pseudo-Attraction Force for Pedestrians with Visual Impairments. In Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, USA, October.
37. Connors, E.C.; Chrastil, E.R.; Sánchez, J.; Merabet, L.B. Action video game play and transfer of navigation and spatial cognition skills in adolescents who are blind. Front. Hum. Neurosci. 2014, 8, 133. [CrossRef] [PubMed]
38. Klatzky, R.L.; Marston, J.R.; Giudice, N.A.; Golledge, R.G.; Loomis, J.M. Cognitive load of navigating without vision when guided by virtual sound versus spatial language. J. Exp. Psychol. Appl. 2006, 12. [CrossRef] [PubMed]
39. Loomis, J.M.; Golledge, R.G.; Klatzky, R.L. Navigation system for the blind: Auditory display modes and guidance. Presence Teleoper. Virtual Environ. 1998, 7. [CrossRef]
40. Yang, S.; Song, J. Analysis on way-finding behaviors of visually impaired people: Design research for guide system development. J. Digit. Interact. Des. 2009, 8.
41. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. SURF: Speeded-up robust features. Comput. Vis. Image Underst. 2008, 110. [CrossRef]
42. Nister, D.; Stewenius, H. Scalable recognition with a vocabulary tree. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, June.
43. Peng, X.; Wang, L.; Wang, X.; Qiao, Y. Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice. Comput. Vis. Image Underst. 2016, 150. [CrossRef]
44. QR Code (2D Barcode). Available online: qrcode/default.aspx (accessed on 9 August 2017).
45. ZBar iPhone SDK. Available online: (accessed on 9 August 2017).
46. Qian, J.; Pei, L.; Ma, J.; Ying, R.; Liu, P. Vector graph assisted pedestrian dead reckoning using an unconstrained smartphone. Sensors 2015, 15. [CrossRef] [PubMed]
47. Brooke, J. SUS: A quick and dirty usability scale. Usability Evaluat. Ind. 1996, 189.
48. Ju, J.S.; Shin, Y.; Kim, E.Y. Vision based interface system for hands free control of an intelligent wheelchair. J. NeuroEng. Rehabilit. 2009, 6. [CrossRef] [PubMed]
49. Ji, Y.; Lee, M.; Kim, E.Y. An Intelligent Wheelchair to Enable Safe Mobility of Disabled People with Motor and Cognitive Impairments. In Proceedings of the European Conference on Computer Vision Workshop, Zurich, Switzerland, 6–12 September.
50. Ju, J.S.; Ko, E.; Kim, E.Y. EYE Cane: Navigating with a camera embedded white cane for visually impaired person. In Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, USA, October.
51. Hwang, J.; Ji, Y.; Kim, E.Y. Intelligent Situation Awareness on EYECANE. In Proceedings of the 12th Pacific Rim International Conference on Artificial Intelligence, Kuching, Malaysia, 3–7 September.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
