Analysis of Perceived Workload when using a PDA for Mobile Robot Teleoperation


Julie A. Adams
EECS Department, Vanderbilt University
Nashville, TN, USA
julie.a.adams@vanderbilt.edu

Hande Kaymaz-Keskinpala
EECS Department, Vanderbilt University
Nashville, TN, USA
hande.kaymaz@vanderbilt.edu

Abstract: A Personal Digital Assistant (PDA) based interface has been developed to provide teleoperation of a mobile robot. The interface provides three different screen designs, all of which employ touch (finger) based interaction rather than stylus based interaction. The interface provides general interaction capabilities for driving the robot based upon the information displayed. The Vision-only screen provides the forward facing camera image, the Sensor-only screen provides the on-board ultrasonic sonar and laser range finder information, while the Vision with sensory overlay screen integrates all three data sets. A user evaluation was conducted to assess the usability of, and the perceived workload required by, each screen. Thirty participants completed the quantitative evaluation. The focus of this paper is the obtained perceived workload results.

Keywords: Human-Robot Interaction; PDA; User Evaluation

I. INTRODUCTION

Personal Digital Assistants (PDAs) have become devices many of us use on a daily basis. They are relatively inexpensive and offer a broad spectrum of applications while also being small, portable, and fairly easy to use. PDAs therefore appear to be a good platform for interacting with robots: their small size, processing capability, and portability make them attractive human-robot interaction (HRI) devices.

Many standard PDA interfaces have been developed for a wide range of applications. In addition to standard applications such as word processing, calendar management, and calculators, more specialized systems have been developed, for example character recognition [1,2,3], educational tools [4,5,6,7], face recognition [8], and mobile surveillance [9].

Robotics has not lagged in the development of PDA-based applications. Fong [10] developed stylus-based interfaces that allow an operator to interact with a robot via his collaborative control architecture; the general idea is to enable the operator and the robot to collaborate during task execution. Perzanowski et al. [11] developed a multimodal interface incorporating a PDA, gestures, and speech interaction. This interface is used to interact with single and multiple robots; the focus is the development of multimodal human-robot interfaces. Huttenrauch and Norman [12] implemented PocketCERO to provide screens for a service robot. They focused on providing a PDA-based HRI because they felt that a mobile robot should have a mobile interface. Their interface provided three screens: two required stylus-based interaction and a third permitted touch (finger) based interaction. Skubic, Bailey, and Chronis [13,14] developed a PDA-based sketch interface that provides a path to a mobile robot. The user employs the stylus to draw landmarks as well as a path through the landmarks; this information is then translated into commands that the mobile robot can execute.

Each of these groups had a slightly different motivation for their development efforts, and that motivation directed their HRI design and implementation. This work has a different motivation. First, the domains under consideration for this interface include military operations. These domains potentially require the operator to be running or walking while operating the robot, which limits the operator's ability to focus on small buttons, menu choices, or writing in the Graffiti language. Second, the military personnel we spoke to stated that "the stylus will be lost in the first five minutes." Given these first two considerations, the design was intended to provide touch-based interaction via a finger, thus eliminating the stylus. Third, the domains in question may require the operator to wear bulky gloves; the interface must therefore provide interaction capabilities that accommodate the additional bulk. Fourth, the interface should provide sensory information rather than raw sensory data. Finally, the interface should be usable both when the robot is in view and when the operator is at a remote location.

The work that most closely resembles ours is that of Lundberg et al. [15]. Their system appears to be based upon many of the design considerations outlined above and is designed for use by military and search and rescue personnel. They too focused on a finger-touch interface, but retain interaction capabilities that require a stylus. In general, their system is more extensive, as it includes capabilities such as collision avoidance, person following, and autonomous exploration. On the other hand, their interface limits the sensory feedback presented to the operator, a limitation that requires operators to be able to directly view the robot's environment during teleoperation.

The focus of our work was to develop the system using standard user-centered design principles; we therefore concentrated on basic screens and quantitative feedback before further system development. Their system uses a PDA and robot similar to those used in this work.

The purpose of this paper is to provide a brief description of the interface design and to report the perceived workload results from the quantitative user evaluation. Section II provides the interface design background, Section III describes the experimental design, and Section IV reports the evaluation results. Finally, Section V provides the conclusions.

II. INTERFACE DESIGN

The task domain is military operations. The intention was to provide a small, lightweight, portable, and usable interface. As one military interviewee stated, "It needs to be Private proof" (referring to soldiers of the rank of Private). This implied that a very simple interface was needed. Given these constraints, the decision was made to pursue the use of a Personal Digital Assistant (PDA) for the teleoperation interface.

The domains in question require that the interaction techniques not rely on fine-grained interaction capabilities. Rather, the preferred interaction mechanism is touch-screen interaction using a finger. The interviewed military personnel indicated that a stylus would quickly be lost in the field, and they did not want to consider an interaction device that required fine-grained interactions. The designed interface therefore does not require the stylus; PDAs inherently provide the necessary touch-screen interaction capabilities.

The interface design had to accommodate touch-based interactions while operators are wearing gloves, such as chemical protection gloves. This requirement implied larger than normal touch-screen buttons for commanding the robot. This constraint conflicts with attempting to maximize the ability to view information on a PDA's small display screen. The design therefore had to maximize the screen usage while also accommodating the large command buttons; providing transparent command buttons satisfied this constraint.

An additional requirement, suggested by the interviewed military personnel, was that the interface should be attached to their person; they should not have to remove the device from a pocket. Therefore, the typical PDA interface interactions were rotated 90° counter-clockwise so that the PDA could be strapped to the operator's arm.

This work employs a Toshiba E740 PDA to control an ATRV-Jr robot. The PDA runs the Microsoft Pocket PC operating system and is equipped with 64 MB of RAM and an Intel XScale PXA-series processor. The robot has a forward facing camera, a forward facing laser range finder, seventeen ultrasonic sonar sensors, and an Intel Pentium III processor. The ultrasonic sonar sensors are located around the robot, with five sonar facing forward, five facing out from each side (a total of ten), and two mounted on the rear of the robot. Wireless Ethernet provides the communication between the PDA and the robot. Embedded Visual Basic was used to program the interface.

A. General Interaction Capabilities

Three screens were designed, each providing a different level of sensory feedback (complete design details can be found in [16,17]). The robot command buttons are consistent across all three screens. The operator can command the robot to move forward, backward, left, or right, or a combination of forward and turning or backward and turning. The move forward button is the upward facing button in Figure 1. An emergency stop button is provided in the lower right corner of the screen, as shown in Figure 1. This particular position was chosen for two reasons: it provides tactile feedback to the operator as to the button's location based upon the physical PDA structure, and it is the only position that ensures the operator cannot accidentally activate operating system functionality rather than issue the command.
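The interface itself was written in Embedded Visual Basic and the paper gives no code, but the command-button behavior just described can be sketched as a simple mapping from button presses to translation/rotation speed pairs sent to the robot over the wireless link. In the Python sketch below, all button names, speed values, and the wire format are illustrative assumptions, not the authors' implementation:

```python
# Illustrative speed constants; the real values are not given in the paper.
DRIVE_SPEED = 0.4   # m/s
TURN_SPEED = 0.6    # rad/s

# Hypothetical mapping from the drive buttons to (translate, rotate) pairs.
BUTTON_COMMANDS = {
    "forward":        (+DRIVE_SPEED, 0.0),
    "backward":       (-DRIVE_SPEED, 0.0),
    "left":           (0.0, +TURN_SPEED),
    "right":          (0.0, -TURN_SPEED),
    "forward_left":   (+DRIVE_SPEED, +TURN_SPEED),
    "forward_right":  (+DRIVE_SPEED, -TURN_SPEED),
    "backward_left":  (-DRIVE_SPEED, +TURN_SPEED),
    "backward_right": (-DRIVE_SPEED, -TURN_SPEED),
}

def command_for(button: str) -> str:
    """Translate a button press into a 'v,w' wire message (format assumed).
    The emergency stop zeroes both velocities regardless of robot state."""
    v, w = (0.0, 0.0) if button == "emergency_stop" else BUTTON_COMMANDS[button]
    return f"{v:.2f},{w:.2f}\n"

if __name__ == "__main__":
    # In the real system this string would be sent to the robot over the
    # wireless Ethernet link (e.g., a TCP socket); here we just print it.
    for press in ["forward", "forward_left", "emergency_stop"]:
        print(press, "->", command_for(press).strip())
```

Keeping the emergency stop as an unconditional zero-velocity override, rather than one entry in the button table, mirrors the design emphasis on making the stop command impossible to confuse with other interactions.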

B. Teleoperation Interface Screens

Three screens were designed to provide varying feedback levels. It is expected that military operators will use the screens both in situations where they can directly view the robot and the environment and when the robot is located in a remote environment and is not directly visible.

The Vision-only screen provides the forward facing camera image along with the general robot command buttons. The current design does not provide the capability to pan or tilt the camera. As can be seen in Figure 1, the image is visible through the buttons, permitting the operator to view the information behind them. A limitation of this screen is the operator's inability to view the area surrounding the robot when it is located in a remote environment; the operator is required to command the robot to physically rotate, which can be cumbersome and time consuming.

Figure 1. The Vision-only screen.

The Sensor-only screen provides the operator with the ultrasonic sonar and laser range finder information. The ultrasonic sensors are located around the body of the robot and therefore provide the operator with feedback from the entire area within their field of view. The laser range finder provides a 180° field of view at the front of the robot. Figure 2 provides a screen shot of the Sensor-only screen. A disadvantage is the inability to view visual feedback from the environment while viewing this screen; it is best used when the operator is able to directly view the environment and the robot.

Figure 2. The Sensor-only screen, showing the laser range finder and sonar information.

The Vision with sensory overlay screen provides the operator with the fused camera image and sensory feedback. The forward facing ultrasonic and laser range finder information is overlaid on top of the standard camera image, permitting the operator to view all available information on one screen. Figure 3 provides an example of the Vision with sensory overlay screen. While this screen attempts to combine all available sensory information, it is limited to the forward facing information; if additional information is required, the robot must be rotated to view the surrounding areas.

Figure 3. The Vision with sensory overlay interface, with the laser range finder and sonar data overlaid on the camera image.
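The paper does not describe how the forward range readings are registered with the camera image. One plausible sketch, assuming a simple pinhole camera model and evenly spaced range readings (the display and camera parameters below are assumptions, not the system's calibration), projects each reading to an image column so the range data can be drawn over the camera frame:

```python
import math

# Assumed display and camera parameters; the actual calibration is not given.
IMAGE_WIDTH = 240                    # px, landscape PDA screen
HORIZ_FOV = math.radians(90)         # camera horizontal field of view
FOCAL_PX = (IMAGE_WIDTH / 2) / math.tan(HORIZ_FOV / 2)  # pinhole focal length

def project_ranges(ranges_m, scan_fov=math.radians(180)):
    """Map a forward range scan (metres, evenly spaced over scan_fov) to
    (x_pixel, range) pairs for the readings that fall inside the camera's
    field of view; readings outside it cannot be overlaid."""
    n = len(ranges_m)
    points = []
    for i, r in enumerate(ranges_m):
        bearing = -scan_fov / 2 + i * scan_fov / (n - 1)  # 0 rad = straight ahead
        if abs(bearing) > HORIZ_FOV / 2:
            continue  # outside the camera image
        x = IMAGE_WIDTH / 2 + FOCAL_PX * math.tan(bearing)
        points.append((int(round(x)), r))
    return points

# Example: a flat wall 2 m ahead, seen by a coarse 19-beam scan.
scan = [2.0 / max(math.cos(-math.pi / 2 + i * math.pi / 18), 0.05) for i in range(19)]
for x, r in project_ranges(scan):
    print(f"image column {x:3d}: {r:5.2f} m")
```

The sketch also illustrates why the overlay screen is limited to forward facing information: any reading whose bearing falls outside the camera's field of view has no image column to be drawn at.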
III. EXPERIMENTAL DESIGN

A user evaluation was conducted to assess the perceived workload while teleoperating a robot in an indoor environment containing obstacles. The evaluation was also designed to determine which screen the participants preferred for these tasks.

A. Participants

Thirty male and female participants completed the evaluation. All participants were unpaid volunteers from the Vanderbilt University community, and a majority were pursuing either Master's or Ph.D. degrees. All participants completed the entire study; all had experience using PDAs but no experience with mobile robots.

B. Design and Procedure

The experiment was a within-subjects, repeated-measures design. Each participant completed two trials with each of the three screens. The screen presentation order was randomized across participants in order to counterbalance learning effects, and the order remained the same between trials. Three task environments were set up, one for each screen, and the environment remained constant for a particular screen across trials. The participants navigated the robot through the environment to a designated goal position while avoiding all obstacles and not running into walls, and they were to complete the tasks as quickly as possible. All obstacles were easily identifiable within the environment.

The participants received a thirty-minute training session during which they were introduced to the interface and robot capabilities and practiced driving the robot. They were not permitted to view or practice in the actual task environments.

The participants were placed in a location remote from the robot and environment and then completed trial one of each task. At the completion of each task, participants completed a questionnaire that asked them to rate their perceived workload along 100-point scales across all six factors of the NASA Task Load Index (TLX) tool [18]; the questionnaire also contained five-point Likert scale usability questions. At the completion of each trial, the participants completed another questionnaire that included the fifteen paired factor comparisons of the NASA TLX tool as well as a series of questions that ranked the three interfaces from best to worst on a series of factors. The participants then completed a second trial with tasks and questionnaires administered as in the first trial.

Additional data collection included the time required to complete each task, the number of errors, the location and number of screen touches, and the participants' ability to attain the final goal position.

There was one difference between the trials: the participants were permitted to view the environment and the robot during the second trial while completing the Sensor-only task. The Vision-only and Vision with sensory overlay tasks were completed from the remote location, as in the first trial.
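For reference, the two questionnaire components combine into a single score in the standard NASA TLX manner [18]: each factor's 0-100 rating is weighted by the number of times that factor was selected across the fifteen paired comparisons. A minimal sketch of that computation follows; the variable names and responses are made up for illustration and this is not the authors' analysis software:

```python
from collections import Counter

FACTORS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_workload(ratings: dict, pairwise_winners: list) -> float:
    """Overall NASA TLX score: each factor's 0-100 rating is weighted by the
    number of times it was chosen in the 15 paired comparisons (the weights
    sum to 15), and the weighted sum is divided by 15."""
    assert len(pairwise_winners) == 15
    weights = Counter(pairwise_winners)
    return sum(ratings[f] * weights.get(f, 0) for f in FACTORS) / 15.0

# Example with made-up responses from one participant for one task.
ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 50}
winners = (["mental"] * 5 + ["effort"] * 4 + ["temporal"] * 3 +
           ["frustration"] * 2 + ["performance"] * 1)  # 15 choices in total
print(f"overall workload: {tlx_workload(ratings, winners):.1f}")  # 61.0
```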

IV. EXPERIMENTAL RESULTS

A complete statistical analysis of the NASA TLX, usability, and interface ranking questions was conducted, along with an analysis of the task completion times, number of errors, location and number of screen touches, and accuracy of goal achievement. This paper focuses on the perceived workload results; the analysis of all evaluation data can be found in [17].

Hart and Staveland [18] define workload as representing the cost incurred by a human operator to achieve a particular level of performance. NASA TLX is a widely accepted perceived workload assessment methodology that entails two components. The first requires participants to rate their responses on a scale from 0 to 100 across six factors: mental demand, physical demand, temporal demand, frustration, effort, and performance. The second requires participants to select, in each of fifteen paired comparisons, the factor that contributed more to perceived workload. This information was employed to calculate the overall perceived workload.

Table 1. Perceived workload results by screen and trial.

                                  Trial one             Trial two
                                  Mean     St. Dev.     Mean     St. Dev.
   Vision-only
   Sensor-only
   Vision with sensory overlay

The hypothesis for the perceived workload analysis was: the Vision with sensory overlay and Sensor-only interface screens induce higher workload levels for the defined user group than the Vision-only interface screen. Table 1 provides the overall mean perceived workload values by screen and trial; Figure 4 plots the same values. The raw data indicate that the hypothesis should hold for trial one but may not hold for trial two.

Figure 4. Mean perceived workload by screen and trial.

As Figure 4 indicates, the perceived workload increased slightly for the Vision-only and the Vision with sensory overlay screens between trials, while the Sensor-only screen's perceived workload dropped dramatically. This result is due to permitting the participants to directly view the environment and robot while completing the Sensor-only task during the second trial. Based upon this result, a within-subjects, repeated-measures Analysis of Variance (ANOVA) was conducted across screens and trials. The results indicate that the difference between trials across the three screens was statistically significant, F(1, 29) = 10.52, p < 0.01 (p < 0.05 is considered significant). Further analyses comparing task pairs using the repeated-measures ANOVA were conducted.

A. Vision-only versus Sensor-only Analysis

The first analysis set compared the Vision-only and Sensor-only screens. The ANOVA across screens and trials indicated that this relationship was statistically significant, F(1, 29) = 13.74, p < 0.01. The ANOVA directly comparing the Vision-only screen to the Sensor-only screen across both trials was insignificant, F(1, 29) = 0.14. This result is primarily due to the participants' ability to directly view the environment and robot during the second trial of the Sensor-only task, which left the mean workload values of the two screens over both trials nearly identical.

t-tests were conducted to compare the individual screens within a particular trial in order to understand the differences between the data sets while accounting for the change to the Sensor-only task during the second trial. The t-test comparing the two screens during trial one found a significant result, t(29) = -2.5, p = 0.02: the workload value for the Vision-only screen was significantly lower than that of the Sensor-only screen. The same analysis for trial two also found a significant result, t(29) = 2.48, p = 0.02, but in the reverse direction: the workload value for the Vision-only screen was significantly higher than that of the Sensor-only screen.

These results indicate that when the participants were unable to directly view the environment and robot while using the Sensor-only screen, their perceived workload was significantly higher than when using the Vision-only screen. The result reverses when participants are permitted to directly view the environment and robot while using the Sensor-only screen: their perceived workload is then significantly lower with the Sensor-only screen.

An ANOVA of the combined mean workload for the Vision-only and Sensor-only screens during trial one versus trial two was statistically significant, F(1, 29) = 8.231, p < 0.01, indicating that the combined mean workload values of the two screens differed across the trials. The combined mean perceived workload for the two screens during trial one (std. dev. = 16.72) was higher than the combined mean during trial two (mean = 40.92, std. dev. = 16.94). The individual screen results (Figure 4) indicate that the value actually increased slightly for the Vision-only screen during the second trial; it was anticipated that the Vision-only screen's perceived workload would decrease between trials as the participants gained experience. The aggregate reduction between trials is therefore not related to this factor. Rather, the reduction is primarily due to the participants' ability to directly view the environment while completing the Sensor-only task, whereas during the first trial participants were located in a remote location. This difference in the task conditions created a dramatic decrease in the perceived workload values for the Sensor-only screen, resulting in a statistically significant result across trials.
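The analyses in this and the following subsections are standard within-subjects tests. Their form (not their data) can be reproduced with current statistical tooling; the sketch below runs a repeated-measures ANOVA and a paired t-test on randomly generated stand-in scores. The authors did not use these libraries, and the numbers produced here are not the study's results:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Stand-in data: 30 participants x 3 screens x 2 trials of workload scores.
rows = []
for pid in range(30):
    for screen in ["vision", "sensor", "overlay"]:
        for trial in [1, 2]:
            rows.append({"pid": pid, "screen": screen, "trial": trial,
                         "workload": rng.normal(50, 15)})
df = pd.DataFrame(rows)

# Within-subjects, repeated-measures ANOVA across screens and trials.
print(AnovaRM(df, depvar="workload", subject="pid",
              within=["screen", "trial"]).fit())

# Paired t-test comparing two screens within trial one (df = 29 here,
# matching 30 paired observations).
t1 = df[df.trial == 1]
a = t1[t1.screen == "vision"].sort_values("pid")["workload"].to_numpy()
b = t1[t1.screen == "sensor"].sort_values("pid")["workload"].to_numpy()
res = stats.ttest_rel(a, b)
print(f"t(29) = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```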

B. Sensor-only versus Vision with sensory overlay Analysis

An ANOVA comparing the perceived workload across screens and trials for the Sensor-only and Vision with sensory overlay screens found a significant relationship, F(1, 29) = 15.33, p < 0.01. Additionally, the ANOVA comparing the mean workload values for the Sensor-only screen across both trials to those for the Vision with sensory overlay screen was significant, F(1, 29) = 12.15, p < 0.01. This result does not tell the entire story, due to the change in the Sensor-only task treatment during the second trial: the mean workload values for these two screens during trial one were relatively similar, as seen in Table 1, so the significant result is attributed to the change in the Sensor-only task treatment during trial two.

A t-test was conducted on the relationship between the individual tasks within a trial. The comparison of the Sensor-only and the Vision with sensory overlay screens during trial one was insignificant, t(29) = -0.53. It was anticipated that a lower overall workload would be found with the Vision with sensory overlay screen than with the Sensor-only screen when both were used from the remote location. Not only was the result insignificant, the Sensor-only screen received the lower overall workload value. This result may be attributed to the time delay between issuing a robot command via the Vision with sensory overlay screen and the robot beginning to execute the command; the delay was about five seconds and is due to the large amount of PDA-based processing. The t-test for the same screens during trial two was statistically significant, t(29) = -5.57, p < 0.01, a direct effect of the task change during the second trial.

These results indicate that when the participants were able to directly view the environment and robot while using the Sensor-only screen, their perceived workload was significantly lower than when using the Vision with sensory overlay screen, implying that the Sensor-only screen in this condition requires much lower workload to complete the task. When participants were required to complete both tasks from a remote location, the two interfaces required relatively the same workload.

An ANOVA of the combined mean workload for the Vision with sensory overlay and Sensor-only screens across trials was statistically significant, F(1, 29) = 8.93, p < 0.01, indicating that the perceived workload values differed across the trials. The combined mean perceived workload for the two screens during trial one (std. dev. = 17.44) was higher than the combined mean for trial two (mean = 46.55, std. dev. = 16). The lower mean workload across both screens during the second trial is due to the participants' ability to directly view the environment and robot. Again, the individual task results (Figure 4) indicate that perceived workload increased slightly for the Vision with sensory overlay screen during the second trial.

C. Vision-only versus Vision with sensory overlay Analysis

An ANOVA analyzing the perceived workload across trials and screens for the Vision-only and Vision with sensory overlay screens found an insignificant relationship, F(1, 29) = 0.001. An ANOVA comparing the mean workload values for the Vision-only screen across both trials to those for the Vision with sensory overlay screen was significant, F(1, 29) = 27.4, p < 0.01. As Figure 4 shows, the change between trials for each of these tasks was minor; the figure also indicates that the Vision with sensory overlay screen had a much higher mean workload (mean = 55.16) than the Vision-only screen (mean = 43.95).

The t-tests further analyzing the screen data within a trial found significant relationships: comparing the Vision-only and Vision with sensory overlay screens yielded t(29) = -3.45, p < 0.01 for trial one and t(29) = -4.75, p < 0.01 for trial two. These results indicate that the perceived workload is significantly lower when using the Vision-only screen, independent of trial.

An analysis of the combined mean workload for the Vision-only and the Vision with sensory overlay screens during trial one versus trial two was insignificant, F(1, 29) = 0.33, indicating that the combined perceived workload values across the two screens did not change dramatically between trials. The combined mean perceived workload for both screens during trial one (std. dev. = 15.98) was lower than the combined mean for trial two (mean = 50.21, std. dev. = 17.33). It was anticipated that the workload values for these screens would decrease during trial two. It is believed that this effect existed due to the dramatic drop in workload when participants were able to directly view the environment and robot during the Sensor-only task, which may have caused participants to rate their perceived workloads for the Vision-only and Vision with sensory overlay screens slightly higher during the second trial; however, it is not clear from the data that this is the cause.

These results indicate that while there was little change in the workload values for a particular screen between trials, the difference between the overall workload values for the Vision-only and the Vision with sensory overlay screens was significant. The participants therefore felt that their perceived workload was always significantly lower with the Vision-only screen. This result may be attributed to the participants' inexperience with ultrasonic sonar and laser range finder information; it may be that with further training the difference in perceived workload would decrease. An additional confounding factor is the time delay between issuing a command via the Vision with sensory overlay screen and the robot beginning command execution. No such delay occurs with the Vision-only screen, so the participants noticed it. The delay is due to the additional processing required on the PDA to process and display all of the information, and it is feasible that it heavily influenced the perceived workload ratings.

An additional evaluation is required to better understand how operators use the interfaces when they are able to directly view the environment and robot. It is unclear from this study whether the operators focused more on the robot than on the provided interface during the second trial of the Sensor-only task.

An element of such a study would be to compare the three screens in this situation to determine whether there are any differences between them. Further analysis should also be completed with more dedicated operators, such as military personnel, in order to understand exactly how they would use such a system and which screens prove most beneficial. Such users would have longer training periods and perhaps a better understanding of the screens and their uses.

In addition to further evaluations, there are system components that require improvement and modification. The current Vision with sensory overlay implementation adversely affected the evaluation results; a revision that eliminates the delays is necessary. Further refinements are also required to better integrate the ultrasonic and laser range finder data for the Sensor-only and the Vision with sensory overlay screens.

V. CONCLUSIONS

A PDA-based teleoperation interface composed of three touch-based screens has been described. The focus of the paper was the presentation of the perceived workload results obtained during the quantitative user evaluation. A within-subjects, repeated-measures evaluation was conducted with thirty volunteers, all of whom had experience using PDAs but no experience working with mobile robots.

The original hypothesis related to perceived workload was: the Vision with sensory overlay and Sensor-only interface screens induce higher workload levels for the defined user group than the Vision-only interface screen. This hypothesis was found to be true when participants were unable to directly view the robot and environment for all three tasks. When participants were permitted to directly view the environment and robot while working with the Sensor-only screen, their perceived workload values were significantly lower than the results for the Vision-only and the Vision with sensory overlay screens. If the participants had also been able to directly view the environment and robot while working with these two screens, it is possible that the original hypothesis would stand; unfortunately, our results do not permit us to draw any firm conclusions on that particular question.

ACKNOWLEDGMENT

The authors wish to thank the Center for Intelligent Systems at Vanderbilt University for the use of the PDA and ATRV-Jr robot. The authors also thank Mary Dietrich for her statistical analysis guidance.

REFERENCES

[1] Z. Luo and C. H. Wu, "A Chinese Character Recognition Interface for Mobile Communication Devices Using Fuzzy Logic and Unit Extraction," Proceedings of the IEEE IECON 22nd International Conference on Industrial Electronics, Control, and Instrumentation, Vol. 1, August.
[2] H. Kang and H. J. Kim, "Design of an Interface on PDA for Korean," IEEE Transactions on Consumer Electronics, Vol. 46, No. 3, August.
[3] J. Zhang, X. Chen, J. Yang, and A. Waibel, "A PDA-based Sign Translator," Proceedings of the IEEE International Conference on Multimodal Interfaces, October.
[4] J. Breitbart, D. Balakrishnan, and A. Ganz, "Pocket-IMPACT Software for Delivering Online Courseware on a PDA: Challenges, Design Guidelines and Implementation," Proceedings of the IEEE Frontiers in Education Conference, Vol. 1, pp. T3F-5, November.
[5] X. Vila, A. Riera, E. Sanchez, M. Lama, and D. L. Mureno, "A PDA-based Interface for a Computer Supported Educational System," Proceedings of the 3rd IEEE International Conference on Advanced Learning Technologies, pp. 12-16, July.
[6] Y.-W. Chen, Z.-J. Yan, J.-C. Huang, I-H. Peng, and J.-W. Zhan, "Implementation of a PDA/GPS Based Development Platform and Its Application in Native Education," Proceedings of the IEEE International Conference on Communications, Circuits and Systems and West Sino Expositions, Vol. 2, June-July.
[7] R. Avanzato, "Student Use of Personal Digital Assistants in a Computer Engineering Course," Proceedings of the Frontiers in Education Conference, Vol. 2, pp. F1B-F19, October.
[8] J. Yang, X. Chen, and W. Kunz, "A PDA-based Face Recognition System," Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision, December.
[9] S.-T. Li, H.-C. Hsieh, L.-Y. Shue, and W.-S. Chen, "PDA Watch for Mobile Surveillance Services," Proceedings of the IEEE Workshop on Knowledge Media Networking, July.
[10] T. Fong, "Collaborative Control: A Robot Centric Model for Vehicle Teleoperation," Technical Report CMU-RI-TR-01-34, Ph.D. Thesis, Robotics Institute, Carnegie Mellon University, November 2001.
[11] D. Perzanowski, A. C. Schultz, W. Adams, E. Marsh, and M. Bugajska, "Building a Multimodal Human Robot Interface," IEEE Intelligent Systems, Vol. 16, No. 1, January/February 2001.
[12] H. Huttenrauch and M. Norman, "PocketCERO: Mobile Interfaces for Service Robots," Proceedings of the International Workshop on Human Computer Interaction with Mobile Devices, France, September.
[13] M. Skubic, C. Bailey, and G. Chronis, "A Sketch Interface for Mobile Robots," Proceedings of the 2003 IEEE International Conference on Systems, Man, and Cybernetics, October 2003.
[14] C. Bailey, "A Sketch Interface for Understanding Hand-Drawn Route Maps," Master's Thesis, Computational Intelligence Lab, University of Missouri-Columbia, December.
[15] C. Lundberg, C. Barck-Holst, J. Folkeson, and H. Christensen, "PDA Interface for a Field Robot," Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2003.
[16] H. Kaymaz-Keskinpala, J. A. Adams, and K. Kawamura, "PDA-Based Human-Robotic Interface," Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, October 2003.
[17] H. Kaymaz-Keskinpala, "PDA-Based Teleoperation Interface for a Mobile Robot," Master's Thesis, Vanderbilt University, May.
[18] S. Hart and L. Staveland, "Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research," in Human Mental Workload, P. A. Hancock and N. Meshkati, Eds., pp. 139-183, 1988.


Gesture Recognition with Real World Environment using Kinect: A Review Gesture Recognition with Real World Environment using Kinect: A Review Prakash S. Sawai 1, Prof. V. K. Shandilya 2 P.G. Student, Department of Computer Science & Engineering, Sipna COET, Amravati, Maharashtra,

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information

With a New Helper Comes New Tasks

With a New Helper Comes New Tasks With a New Helper Comes New Tasks Mixed-Initiative Interaction for Robot-Assisted Shopping Anders Green 1 Helge Hüttenrauch 1 Cristian Bogdan 1 Kerstin Severinson Eklundh 1 1 School of Computer Science

More information

SMART ELECTRONIC GADGET FOR VISUALLY IMPAIRED PEOPLE

SMART ELECTRONIC GADGET FOR VISUALLY IMPAIRED PEOPLE ISSN: 0976-2876 (Print) ISSN: 2250-0138 (Online) SMART ELECTRONIC GADGET FOR VISUALLY IMPAIRED PEOPLE L. SAROJINI a1, I. ANBURAJ b, R. ARAVIND c, M. KARTHIKEYAN d AND K. GAYATHRI e a Assistant professor,

More information

A Comparison Between Camera Calibration Software Toolboxes

A Comparison Between Camera Calibration Software Toolboxes 2016 International Conference on Computational Science and Computational Intelligence A Comparison Between Camera Calibration Software Toolboxes James Rothenflue, Nancy Gordillo-Herrejon, Ramazan S. Aygün

More information

CPE/CSC 580: Intelligent Agents

CPE/CSC 580: Intelligent Agents CPE/CSC 580: Intelligent Agents Franz J. Kurfess Computer Science Department California Polytechnic State University San Luis Obispo, CA, U.S.A. 1 Course Overview Introduction Intelligent Agent, Multi-Agent

More information

Prof. Emil M. Petriu 17 January 2005 CEG 4392 Computer Systems Design Project (Winter 2005)

Prof. Emil M. Petriu 17 January 2005 CEG 4392 Computer Systems Design Project (Winter 2005) Project title: Optical Path Tracking Mobile Robot with Object Picking Project number: 1 A mobile robot controlled by the Altera UP -2 board and/or the HC12 microprocessor will have to pick up and drop

More information

Autonomous Localization

Autonomous Localization Autonomous Localization Jennifer Zheng, Maya Kothare-Arora I. Abstract This paper presents an autonomous localization service for the Building-Wide Intelligence segbots at the University of Texas at Austin.

More information

Virtual Grasping Using a Data Glove

Virtual Grasping Using a Data Glove Virtual Grasping Using a Data Glove By: Rachel Smith Supervised By: Dr. Kay Robbins 3/25/2005 University of Texas at San Antonio Motivation Navigation in 3D worlds is awkward using traditional mouse Direct

More information

Wednesday, October 29, :00-04:00pm EB: 3546D. TELEOPERATION OF MOBILE MANIPULATORS By Yunyi Jia Advisor: Prof.

Wednesday, October 29, :00-04:00pm EB: 3546D. TELEOPERATION OF MOBILE MANIPULATORS By Yunyi Jia Advisor: Prof. Wednesday, October 29, 2014 02:00-04:00pm EB: 3546D TELEOPERATION OF MOBILE MANIPULATORS By Yunyi Jia Advisor: Prof. Ning Xi ABSTRACT Mobile manipulators provide larger working spaces and more flexibility

More information

High-Level Programming for Industrial Robotics: using Gestures, Speech and Force Control

High-Level Programming for Industrial Robotics: using Gestures, Speech and Force Control High-Level Programming for Industrial Robotics: using Gestures, Speech and Force Control Pedro Neto, J. Norberto Pires, Member, IEEE Abstract Today, most industrial robots are programmed using the typical

More information

t t t rt t s s tr t Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2

t t t rt t s s tr t Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2 t t t rt t s s Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2 1 r sr st t t 2 st t t r t r t s t s 3 Pr ÿ t3 tr 2 t 2 t r r t s 2 r t ts ss

More information

Touch & Gesture. HCID 520 User Interface Software & Technology

Touch & Gesture. HCID 520 User Interface Software & Technology Touch & Gesture HCID 520 User Interface Software & Technology Natural User Interfaces What was the first gestural interface? Myron Krueger There were things I resented about computers. Myron Krueger

More information

ARMY RDT&E BUDGET ITEM JUSTIFICATION (R2 Exhibit)

ARMY RDT&E BUDGET ITEM JUSTIFICATION (R2 Exhibit) Exhibit R-2 0602308A Advanced Concepts and Simulation ARMY RDT&E BUDGET ITEM JUSTIFICATION (R2 Exhibit) FY 2005 FY 2006 FY 2007 FY 2008 FY 2009 FY 2010 FY 2011 Total Program Element (PE) Cost 22710 27416

More information

Vision System for a Robot Guide System

Vision System for a Robot Guide System Vision System for a Robot Guide System Yu Wua Wong 1, Liqiong Tang 2, Donald Bailey 1 1 Institute of Information Sciences and Technology, 2 Institute of Technology and Engineering Massey University, Palmerston

More information

COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE

COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE Prof.dr.sc. Mladen Crneković, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb Prof.dr.sc. Davor Zorc, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb

More information

HMM-based Error Recovery of Dance Step Selection for Dance Partner Robot

HMM-based Error Recovery of Dance Step Selection for Dance Partner Robot 27 IEEE International Conference on Robotics and Automation Roma, Italy, 1-14 April 27 ThA4.3 HMM-based Error Recovery of Dance Step Selection for Dance Partner Robot Takahiro Takeda, Yasuhisa Hirata,

More information

GestureCommander: Continuous Touch-based Gesture Prediction

GestureCommander: Continuous Touch-based Gesture Prediction GestureCommander: Continuous Touch-based Gesture Prediction George Lucchese george lucchese@tamu.edu Jimmy Ho jimmyho@tamu.edu Tracy Hammond hammond@cs.tamu.edu Martin Field martin.field@gmail.com Ricardo

More information

Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2

Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2 Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2 by A William Evans III, Susan G Hill, Brian Wood, and Regina Pomranky ARL-TR-7240 March 2015 Approved for public

More information

This list supersedes the one published in the November 2002 issue of CR.

This list supersedes the one published in the November 2002 issue of CR. PERIODICALS RECEIVED This is the current list of periodicals received for review in Reviews. International standard serial numbers (ISSNs) are provided to facilitate obtaining copies of articles or subscriptions.

More information

BIM Awareness and Acceptance by Architecture Students in Asia

BIM Awareness and Acceptance by Architecture Students in Asia BIM Awareness and Acceptance by Architecture Students in Asia Euisoon Ahn 1 and Minseok Kim* 2 1 Ph.D. Candidate, Department of Architecture & Architectural Engineering, Seoul National University, Korea

More information

A*STAR Unveils Singapore s First Social Robots at Robocup2010

A*STAR Unveils Singapore s First Social Robots at Robocup2010 MEDIA RELEASE Singapore, 21 June 2010 Total: 6 pages A*STAR Unveils Singapore s First Social Robots at Robocup2010 Visit Suntec City to experience the first social robots - OLIVIA and LUCAS that can see,

More information

Julie L. Marble, Ph.D. Douglas A. Few David J. Bruemmer. August 24-26, 2005

Julie L. Marble, Ph.D. Douglas A. Few David J. Bruemmer. August 24-26, 2005 INEEL/CON-04-02277 PREPRINT I Want What You ve Got: Cross Platform Portability And Human-Robot Interaction Assessment Julie L. Marble, Ph.D. Douglas A. Few David J. Bruemmer August 24-26, 2005 Performance

More information

Hand Gesture Recognition System for Daily Information Retrieval Swapnil V.Ghorpade 1, Sagar A.Patil 2,Amol B.Gore 3, Govind A.

Hand Gesture Recognition System for Daily Information Retrieval Swapnil V.Ghorpade 1, Sagar A.Patil 2,Amol B.Gore 3, Govind A. Hand Gesture Recognition System for Daily Information Retrieval Swapnil V.Ghorpade 1, Sagar A.Patil 2,Amol B.Gore 3, Govind A.Pawar 4 Student, Dept. of Computer Engineering, SCS College of Engineering,

More information

Human-Robot Interaction

Human-Robot Interaction Human-Robot Interaction 91.451 Robotics II Prof. Yanco Spring 2005 Prof. Yanco 91.451 Robotics II, Spring 2005 HRI Lecture, Slide 1 What is Human-Robot Interaction (HRI)? Prof. Yanco 91.451 Robotics II,

More information