Compass Visualizations for Human-Robotic Interaction

Curtis M. Humphrey and Julie A. Adams
Department of Electrical Engineering and Computer Science
Vanderbilt University, Nashville, Tennessee, USA

ABSTRACT

Compasses have been used for centuries to express directions and are commonplace in many user interfaces; however, no work in human-robotic interaction (HRI) has ascertained how different compass visualizations affect the interaction. This paper presents an HRI evaluation comparing two representative compass visualizations: top-down and in-world world-aligned. The compass visualizations were evaluated to ascertain which one provides better metric judgment accuracy, lower workload, and better situational awareness, and which is perceived as easier to use and is preferred. Twenty-four participants completed a within-subject repeated measures experiment. The results agreed with the existing principles relating to 2D and 3D views (projections of a three-dimensional scene): a top-down (2D view) compass visualization is easier to use for metric judgment tasks, and a world-aligned (3D view) compass visualization yields faster performance for general navigation tasks. The implication for HRI is that the choice of compass visualization has a definite and nontrivial impact on operator performance (world-aligned was faster), situational awareness (top-down was better), and perceived ease of use (top-down was easier).

Categories and Subject Descriptors
H.5.2 [Information interfaces and presentation]: User Interfaces - evaluation/methodology, user-centered design.

General Terms
Measurement, Performance, Human Factors

Keywords
Compass Visualization, Human-Robotic Interaction (HRI)

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. HRI'08, March 12-15, 2008, Amsterdam, Netherlands. Copyright 2008 ACM /08/03...$5.00.

1. INTRODUCTION

Compasses have been used for centuries to express directions and aid in navigation [16]. Displaying a compass visualization to convey directional relationships is commonplace [15]. Many different compass visualizations have been employed in wearable computing and human-robotic interaction (HRI) (e.g., [1][2][4][5][9][11][16]); however, no existing evaluations assess these different compass visualizations and their impact on human-robotic interaction.

Compass visualizations separate into two basic categories: those that present a top-down, or flat to the screen, compass (e.g., [4][9][11]), as in Figure 1.a (the compass is positioned at the center right), and those that present a world-aligned compass. World-aligned compasses have two forms: a linear bar (e.g., [1][2][16]), as shown in Figure 1.c (the compass is indicating that the forward direction is West South West), or an in-world compass (e.g., [5]), as in Figure 1.b (the compass is positioned at the bottom center).

Figure 1: (a) A top-down compass visualization [11]. (b) An in-world world-aligned compass visualization [5]. (c) A bar world-aligned compass visualization [1].

The in-world compass differs from the other compass visualizations in that it appears to be three-dimensional (3D) and uses the extra dimension to convey level horizon, or pitch, information. The in-world compass is referred to as 3D, but it is really a 3D display projected onto the two-dimensional (2D) screen.

Interfaces employed in wearable computing that use the world-aligned compass visualization advocate that the mapping between the compass and the presented view is better correlated, thereby improving the operator's compass usage [1][2][16]. Interfaces in HRI that use the top-down compass visualization appear to do so without any stated reasons (e.g., [4][9][11]); however, a top-down compass visualization is similar to viewing a physical compass in the palm of one's hand and is commonplace in many domains [15].

Extensive research has compared 2D and 3D views, focusing on their respective strengths and weaknesses across a number of tasks (e.g., [13][14][18]); however, this research has not been applied to HRI with regard to compass visualizations. A view is defined as a single projection of a three-dimensional scene [18]. 2D views present two dimensions faithfully and accurately, with no distortions [13]. 3D views present a projection of three dimensions onto a two-dimensional screen, causing ambiguity [13][14]; users find judging distances across empty space to be particularly difficult in 3D views [13]. It is this difference in accuracy and ambiguity that allows 2D views to outperform 3D views across a number of tasks, including precise spatial judgments [13][18], judgments regarding the relative position between two objects or two terrain locations [14], analysis of physical details [18], and precise navigation [18]. However, the integration of all three dimensions of space into a 3D view allows the human operator to accomplish in one view a set of tasks that would require more than one 2D view, thereby reducing visual scanning and possibly reducing the human's perceptual demands [13]. It is this integration that allows 3D views to outperform 2D views across a number of tasks, including surveying a 3D space [14][18], understanding shapes [14][18], and gross navigation in 3D space [18].

The views research has not been applied to HRI compass visualizations. Compass visualizations are views; thus, part of the purpose of this work was to ascertain whether these same principles can guide the design of compass visualizations to enhance certain aspects of HRI, e.g., task performance or situational awareness. This paper presents an HRI evaluation comparing two representative compass visualizations, a top-down (2D view) and an in-world world-aligned (3D view) compass visualization, to ascertain their agreement or disagreement with the principles regarding 2D and 3D views. The evaluation focused on the participants' visualization preferences, perceived situational awareness (SA), perceived workload, task completion time, camera usage time, and interface interaction to determine which compass visualization was more usable and effective. Perceived workload and SA play key roles in determining interface effectiveness [6][12].

Section 2 of this paper summarizes the compass designs and the system apparatus, Section 3 outlines the evaluation method, and Section 4 provides the evaluation results.
Section 5 discusses the results and, finally, Section 6 provides the conclusions.

2. APPARATUS

A previously developed human-robot interface was modified for this evaluation [9]; see Figure 2. The interface contains four sections: the current robot's camera feed and halo display (upper left), the robot status bar (upper right), the environmental overview (lower right), and the control panel (lower left). The halo display was not employed for this evaluation, while the status bar and environmental overview areas required minimal usage. The current evaluation focused solely on comparing the two different compass visualizations; therefore, the camera feed and control panel were the primary interface elements employed during this evaluation.

Figure 2: The interface employed to compare the compass visualizations.

2.1 Interactions

Two areas of the interface permit interaction: the robot's camera feed display and the control panel (lower left of Figure 2). The robot control was intentionally simple in order to ensure that it did not detract from the compass visualizations. The camera moved independently of the robot's base. The camera is commanded to move when the mouse pointer hovers over the robot's camera feed; the mouse pointer becomes a visually different arrow that indicates the direction the robot's camera will move when the left mouse button is pressed and held down during movement. The camera movement mouse pointer is shown in the center, top third of Figure 6.e.

The control panel, in the lower left of Figure 2, provided the available robot motion commands. The commands included move forward or backward, spin right or left, and stop, via the five buttons on the right side of the control panel. The robot motion began when the selected motion button was released and continued until another motion command was issued. The pickup bomb command removed a bomb present in the robot's camera feed. The center camera command repositioned the robot's camera to be horizontal and aligned with the direction of the robot's forward motion. The center base command spun the robot's base so that the robot's forward motion matched the camera's orientation.

2.2 Compass Visualizations

Two compass visualizations were developed: a top-down (2D view) and an in-world world-aligned (3D view) compass visualization. The top-down visualization presents a top-down, or flat to the screen, compass. The in-world visualization appears to be three-dimensional, and the extra dimension conveys the level horizon, or pitch. Both compass visualizations were overlaid onto the robot's camera feed; the top-down compass is shown in the upper left portion of Figure 2 and the in-world compass is shown in Figure 3.

Both compass visualizations provided two consistent features. The first was the set of directional symbols N (North), S (South), E (East), and W (West), which rotated to align with the respective directions (ego-referenced). The second was an arrow indicating the direction the robot's base was facing, which is the direction of travel when the robot is commanded to move forward; this arrow was found to be useful during pilot evaluations. The compass visualizations are ego-referenced and rotate relative to the human operator's current point of view [21]; however, the visualizations differ in their relationship to the world and their perceived dimensions. The top-down compass is displayed flat on the interface, similar to a bird's-eye or top-down view, and is displayed in two dimensions; see Figure 2. The world-aligned compass visualization is displayed level with the world's horizon, as shown in Figure 3. The world-aligned compass visualization's relationship with the horizon is enhanced by a semi-transparent circle that connects the direction symbols; the direction arrow rotates inside this circle (Figure 3). This circle gives the world-aligned compass visualization a 3D appearance; however, it is displayed on a two-dimensional screen.
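To make the geometric difference concrete, the following minimal sketch computes screen offsets for the four direction markers under each style. It is an illustration under assumed conventions (orthographic projection, screen y increasing downward); the function and parameter names are hypothetical, and this is not the interface's Actionscript implementation.

    import math

    def top_down_marker(heading_deg, bearing_deg, radius):
        # Top-down (2D) compass: markers rotate in the screen plane only.
        # bearing_deg is the marker's world bearing (N = 0, E = 90, ...);
        # screen y grows downward, so North sits at -radius when facing North.
        rel = math.radians(bearing_deg - heading_deg)
        return radius * math.sin(rel), -radius * math.cos(rel)

    def world_aligned_marker(heading_deg, bearing_deg, pitch_deg, radius):
        # In-world (3D) compass: the marker circle lies level with the world
        # horizon, so the camera's pitch foreshortens the circle's depth axis
        # when projected onto the 2D screen (the circle appears as an ellipse).
        rel = math.radians(bearing_deg - heading_deg)
        x = radius * math.sin(rel)
        y = -radius * math.cos(rel) * math.sin(math.radians(pitch_deg))
        return x, y

    # Example: robot heading 30 degrees east of North, camera pitched 20 degrees.
    for name, bearing in (("N", 0), ("E", 90), ("S", 180), ("W", 270)):
        print(name, top_down_marker(30, bearing, 40),
              world_aligned_marker(30, bearing, 20, 40))

Under this sketch, at a pitch of 0 degrees the world-aligned circle projects to a line, which is consistent with the transparency and letter-hiding cues, described next, that help distinguish the forward and backward directions.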
The world-aligned compass visualization appeared to reside in the world, while the top-down compass visualization appeared to be on the camera lens. The world-aligned compass visualization differs from the compass visualization in Figure 1.b on three accounts. First, the directional arrow and circle are flat, whereas in Figure 1.b they appear as three-dimensional objects (the arrow in Figure 1.b appears below the direction word and is drawn as a 3D object). Second, the compass circle in Figure 3 changes transparency, allowing a greater distinction between the forward and backward orientations. Finally, the backward direction letter in Figure 3 is not displayed, to prevent confusion with the forward direction letter and thus avoid overlapping letters. These differences, although slight, were found useful during pilot training.

The compasses were designed to minimize occlusion (i.e., hiding) of the information contained in the camera image. Occlusion was measured as the percentage of camera image pixels covered by the compass visualization display. When the robot was level with the horizon, the world-aligned compass visualization occluded 0.46 percent of the camera image, whereas the top-down compass visualization occluded 0.56 percent. When the horizon was at a 30-degree slope relative to the robot, the world-aligned compass visualization occluded 1.30 percent (the top-down compass visualization's occlusion remained the same, as it is not referenced relative to the horizon). Over 80% of the time, the robot was positioned at between 0 and 5 degrees of slope relative to the horizon. The occlusion difference between the compass visualizations was negligible.

2.3 System

The interface was constructed using Adobe Flash and Actionscript 2.0. Flash provided a universal interface across both the hardware platforms and the underlying simulation engines or control systems for the robots. The simulated world was emulated by the Unreal Tournament 2004 (UT2004) server and provided a high-fidelity, non-planar, fully textured, noisy, communication-delayed simulated world [20]. The world was modeled in this manner in order to simulate rescue robots [10].

Figure 3: The 3D compass visualization.
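Occlusion, as defined above, can be estimated directly from the overlay's alpha mask. A minimal sketch, assuming the compass is rendered into an alpha buffer matching the camera feed's resolution (the names are hypothetical, not from the paper's Flash implementation):

    import numpy as np

    def occlusion_percent(overlay_alpha: np.ndarray) -> float:
        # Percentage of camera-image pixels covered by the compass overlay.
        # overlay_alpha: 2D array of per-pixel alpha (0 = fully transparent),
        # at the same resolution as the camera feed.
        covered = np.count_nonzero(overlay_alpha > 0)
        return 100.0 * covered / overlay_alpha.size

    # Example with a synthetic 480x640 frame and a small compass footprint.
    alpha = np.zeros((480, 640))
    alpha[220:240, 300:320] = 0.5              # hypothetical 20x20-pixel compass
    print(f"{occlusion_percent(alpha):.2f}%")  # -> 0.13%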

USARSim [20] was used to implement the two robot types: the operator's robot and the bomb. The bomb to be defused during the evaluation scenarios was represented by a different type of robot; the bomb robot did not move and was different in color and structure from the operator's robot.

2.4 Tasks

The evaluation task was to locate and defuse bomb(s). The participants followed verbal navigation commands provided by the experimenter, who was the same individual for all participants. The experiment consisted of one training task followed by two trials of two tasks. The training task involved one bomb, and the evaluation tasks incorporated three bombs each. The verbal navigation commands included a mixture of directional commands (e.g., "turn the robot north by north east") and landmark commands (e.g., "turn the robot right until you see the little rock on the horizon"). During the tasks, seven questions were asked orally, forcing the participants to use the compass (e.g., "what direction is the bomb from your robot's current location") and to move the camera independently of the robot base. The responses were oral statements (e.g., "the bomb is north by north east"). All oral statements were based on the compass points, e.g., North, North by North East, North East, etc. The compass points were defined for the participants prior to beginning the evaluation, and they were given Figure 4 as a reference.

Figure 4: The sixteen-point compass as presented to the participants.

2.5 Environments

A single environment was employed for all tasks. Three different areas of the environment were used, each with unique bomb placements. The three areas were the training area and two evaluation areas. Each evaluation area was used twice: once for a top-down compass visualization task and once for a world-aligned compass visualization task. The areas were not flat and included elevation changes throughout the course; see Figure 5. One area had an overall 1.11 meter elevation change (Figure 5.a), while the other area had a 4.59 meter elevation change (Figure 5.b). The elevation changes permitted the world-aligned compass visualization to display its relationship with the horizon, in contrast to the top-down compass visualization, which does not change based on the horizon. Figure 6 depicts the compass visualizations at different presentation angles as the robot progressed through the course of one task, moving across flat, or nearly flat, terrain as well as sloped terrain. Figure 6.a occurred near the beginning of the task, where the world-aligned compass visualization is at about a 5° slope. Figure 6.c occurs later in the task, when the slope is approximately 0°. The remaining camera images in Figure 6 depict the compass positions throughout the task at slope angles of 10° (Figure 6.b), 20° (Figure 6.d), and 30° (Figure 6.e). The slope range between Figure 6.a and Figure 6.c represents the typical robot slope and compass positions during approximately 80% of the task.

3. METHOD

3.1 Participants

Twenty-four participants were recruited from the Vanderbilt University community and were compensated for their participation. All participants were at least 18 years old with at least a high school education. The average age bracket was 21 to 30 years of age. Four female participants completed the evaluation.
The participants had normal or corrected-to-normal vision and had played a first-person perspective video game for at least an hour prior to participating in the evaluation. This video game experience was required so that all participants would be familiar with the keyhole effect present in robot video [19]. The keyhole effect is due to the limited angular view through which the remote operator views the remote environment, as if one is looking through a soda straw [19].

Figure 5: The environmental area one (a) and two (b) elevation change ranges (y-axis) across the distances traveled (x-axis). The overall elevation change in environment (a) was 1.11 meters; the overall elevation change in environment (b) was 4.49 meters.
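As a reference for the oral responses described in Section 2.4, mapping a bearing onto the nearest of the sixteen compass points (Figure 4) is a small computation. A minimal sketch with hypothetical names, not part of the evaluation software:

    # Sixteen compass points at 22.5-degree intervals, clockwise from North,
    # in the naming style the participants used (e.g., "North by North East").
    POINTS = [
        "North", "North by North East", "North East", "East by North East",
        "East", "East by South East", "South East", "South by South East",
        "South", "South by South West", "South West", "West by South West",
        "West", "West by North West", "North West", "North by North West",
    ]

    def compass_point(bearing_deg: float) -> str:
        # Map a bearing in degrees (0 = North, clockwise) to the nearest point.
        return POINTS[round((bearing_deg % 360) / 22.5) % 16]

    print(compass_point(20))    # -> North by North East
    print(compass_point(350))   # -> North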

3.2 Experimental Design

A within-subject repeated measures design for the two different compass visualizations was employed. The required interactions, the bomb dispersion throughout the environment, and the tasks were identical across the compass presentations. The evaluation consisted of two trials, where each trial included two tasks: one top-down compass task and one world-aligned compass task. The environment areas changed between tasks in order to limit environmental learning effects. The compass presentation and environmental area orders were counterbalanced over the participants. The independent variable was the compass presentation; the dependent variables included mouse clicks (interface interaction), task completion times, responses to in-task questions regarding workload and SA, and responses to subjective questionnaires regarding perceived workload, SA, and compass preferences.

3.3 Procedure

All participants received an initial orientation before completing the informed consent form and background questionnaire. The participants received ten minutes of training during which they received instructions in the same manner that would occur during the evaluation tasks; however, no compass visualization was displayed, the participants did not answer directional questions, and only one bomb existed to be defused. A compass visualization was not displayed during training in order to limit bias toward a particular compass visualization and to allow participants to become familiar with robot navigation and the interface.

Figure 6: The in-world world-aligned compass visualization at various pitch angles, in the order that they appeared during one task: (a) 5°, (b) 10°, (c) 0°, (d) 20°, and (e) 30°.

During each task, the following information was automatically recorded: task completion time, camera movement time (time spent moving the camera), and interaction mouse clicks. After each task, including the training task, the participants completed a 3-Dimensional Situational Awareness Rating Technique (3D SART) questionnaire [17]. 3D SART was chosen over other SA measurement methods because this evaluation was similar to field evaluations, and techniques such as SAGAT may alter participants' workload [7]. After completing the 3D SART, the participants completed a weighted NASA Task Load Index (NASA-TLX) questionnaire [8]. After both trials, the participants completed the post-experiment questionnaire.

3.4 Hypotheses

The objective of this evaluation was to determine whether the top-down or world-aligned compass visualization was preferred, facilitated better SA, required lower workload, and provided better performance. The hypotheses were:

Hypothesis 1: The world-aligned compass visualization will provide lower workload and be preferred.

Hypothesis 2: The top-down compass visualization will provide better SA.

4. RESULTS

The evaluation analysis included a statistical analysis of the recorded data. All statistical tests are paired two-tailed Student's t-tests with the familywise Type I error rate set to 0.05 and Cohen's d used to compute the effect size (ES), except where explicitly stated.
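As an illustration of this analysis (the paper does not name its statistics software; the data and names below are hypothetical), a paired two-tailed t-test with Cohen's d might be computed as follows; the 96 pairs mirror the t(95) degrees of freedom reported below.

    import numpy as np
    from scipy import stats

    def paired_t_with_cohens_d(a, b):
        # Paired two-tailed t-test for two repeated measures, plus Cohen's d.
        # Cohen's d here uses the pooled standard deviation of the two samples,
        # one common convention; the paper does not state which variant it used.
        t, p = stats.ttest_rel(a, b)
        pooled_sd = np.sqrt((a.std(ddof=1) ** 2 + b.std(ddof=1) ** 2) / 2)
        return t, p, (a.mean() - b.mean()) / pooled_sd

    # Hypothetical completion times in seconds for the two visualizations.
    rng = np.random.default_rng(0)
    top_down = rng.normal(500, 86, 96)
    world_aligned = rng.normal(450, 77, 96)
    print(paired_t_with_cohens_d(top_down, world_aligned))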
There were no significant differences between the two environmental areas, the presentation orders of the environmental areas or compass visualizations, or the participants' results based on their demographic responses.

4.1 Task Completion, Interaction, and Accuracy

The participants' performance with each compass visualization was assessed via the task completion time, the camera movement time, and the number of verbal directional judgment questions answered correctly. The mean time required to complete the top-down compass task was 8:20 (8 minutes, 20 seconds; Standard Deviation (SD) = 1:26), versus 7:30 (SD = 1:17) for the world-aligned compass task; see Table 1. The world-aligned compass task completion time was significantly faster, with a medium effect size (t(95) = 3.78, p < 0.01, ES(d) = 0.61). The mean camera movement times were 1:02 (SD = 0:30) for the top-down compass and 1:04 (SD = 0:35) for the world-aligned compass; this difference is not statistically significant. The amount of time dedicated to moving the camera was small relative to the task completion times: 12.4% of the total completion time for the top-down compass and 14.2% for the world-aligned compass. In summary, the participants' performance navigating the robot was significantly faster for the world-aligned compass task, and this result was independent of the camera movement time, since the camera movements were not significantly different across visualizations.

Table 1: The performance times by task type (times in minutes:seconds; paired t-tests).

                    Top-Down        World-Aligned    Comparison
    Time            M      SD       M      SD        t(95)   p       ES(d)
    To Finish       8:20   1:26     7:30   1:17      3.78    <0.01   0.61
    Moving Camera   1:02   0:30     1:04   0:35      n.s.

The participants were verbally asked metric judgment questions during each task. There were four incorrect answers out of 336 questions during the top-down compass visualization tasks and five incorrect answers out of 336 questions for the world-aligned compass visualization tasks; the difference is not significant. Notably, 19 of the 24 participants did not provide any incorrect responses with either compass visualization; the remaining five participants had one or two incorrect answers total across both visualization tasks.

4.2 Participants' Interactions

Participant interaction was ascertained by counting the overall number of mouse clicks, the number of camera actions initiated, and the number of movement commands initiated. None of these three interaction categories was significantly different across the top-down and world-aligned compass tasks; see Table 2. About one in two participants made a single mistake in misdirecting the robot because the camera alignment differed from the robot's forward direction; however, there was no significant difference in these mistakes across compass visualizations.

Table 2: The number of mouse clicks by task type (all clicks, camera actions, and robot movement commands; paired t-tests); none of the comparisons was significant.

4.3 Compass Visualization Preference

The participants completed the post-experiment questionnaire, which rated the ease of use, the ease of answering metric judgment questions, awareness of the situation, and compass visualization preference via Likert scale questions.

The participants rated how easy or difficult each compass visualization was to use on a scale from 1 (very difficult) to 7 (very easy). The mean Likert score for the top-down compass was 5.63 (SD = 1.17), while the mean for the world-aligned compass was 4.63 (SD = 1.28). The difference was significant with a large effect size, indicating that the top-down compass was perceived to be easier to use than the world-aligned compass (t(47) = 2.65, p = 0.01, ES(d) = 1.96).

The remaining questions addressing compass visualization preference were rated on a Likert scale from 1 (a complete preference for the top-down compass) to 7 (a complete preference for the world-aligned compass). The resulting means were compared against no preference, i.e., a rating of 4, and these are therefore non-paired t-tests. The participants rated which compass visualization was easier to use when answering the metric judgment questions; this question was designed to ascertain which compass was preferred for metric judgment tasks. The mean preference was 3.17 (SD = 1.90).
This result was significant with a medium effect size (t(23) = -2.15, tested against µ = 4; p = 0.04, ES(d) = 0.62), indicating that the participants believed the top-down compass was easier to use than the world-aligned compass. The participants also rated which compass visualization facilitated better awareness of the situation (a SA-related question). They generally preferred the top-down compass (M = 3.21, SD = 1.91) with a medium effect size, but the result was not significant (t(24) = -2.03, tested against µ = 4; p = 0.05, ES(d) = 0.59). Finally, the participants rated which compass visualization they preferred overall. They generally preferred the top-down compass (M = 3.21, SD = 1.98); however, the result was not significant (t(24), tested against µ = 4; p = 0.06, ES(d) = 0.57).

4.4 Situation Awareness Results

The participants' mean overall SA, as measured on the 3D SART's Likert scale of 1 (low) to 7 (high), was 5.97 for the top-down compass visualization (SD = 0.83) and 5.63 for the world-aligned compass (SD = 0.95); see Table 3. The top-down compass provided a significantly higher overall SA rating than the world-aligned compass, but with a small effect size (t(95) = 2.17, p = 0.03, ES(d) = 0.27). The 3D SART subcomponent results (demands on attentional resources, supply of attentional resources, and understanding of the situation) were not significantly different across visualizations. The overall SA result supports the trend in the post-experiment questionnaire that the top-down compass facilitated greater situational awareness; see Section 4.3.

Table 3: The 3D SART responses by task type (Likert scale of 1 (low) to 7 (high); paired t-tests). The subcomponents (demands on attentional resources, supply of attentional resources, understanding of the situation) did not differ significantly; overall SA: top-down M = 5.97 (SD = 0.83), world-aligned M = 5.63 (SD = 0.95), t(95) = 2.17, p = 0.03, ES(d) = 0.27.
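The preference comparisons in Section 4.3 test each mean rating against the no-preference midpoint of 4; a minimal sketch of such a one-sample test (hypothetical data and names):

    import numpy as np
    from scipy import stats

    def preference_vs_midpoint(ratings, midpoint=4.0):
        # One-sample two-tailed t-test of Likert ratings against the midpoint,
        # with Cohen's d computed as (mean - midpoint) / sample SD.
        t, p = stats.ttest_1samp(ratings, midpoint)
        return t, p, (ratings.mean() - midpoint) / ratings.std(ddof=1)

    # Hypothetical ratings from 24 participants on the 1 (top-down) to
    # 7 (world-aligned) preference scale.
    ratings = np.array([3, 2, 4, 1, 5, 3, 2, 4, 3, 6, 2, 3,
                        4, 2, 3, 5, 1, 3, 4, 2, 3, 3, 5, 2])
    print(preference_vs_midpoint(ratings))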

4.5 Perceived Workload Results

The weighted NASA-TLX overall workload calculation was employed to determine the participants' overall perceived workload. The top-down compass (SD = 18.58) and the world-aligned compass (SD = 16.95) produced statistically indistinguishable overall workload means (t(95) = 0.53, p = 0.59, ES(d) = 0.04); see Table 4. The individual NASA-TLX workload components had small to very small effect sizes, indicating very little workload difference across the compass visualizations. The only NASA-TLX workload component that differed significantly across visualizations was Physical Demand (top-down SD = 12.12, world-aligned SD = 16.56); however, the effect size is small and the cause of this result is unknown (t(95) = 2.38, p = 0.02, ES(d) = 0.15). These results offer no support for the post-experiment questionnaire's finding that the top-down compass was easier to use.

Table 4: The NASA-TLX workload analysis results by task type (percentages from 0 (low) to 100 (high); paired t-tests): Mental Demand, Physical Demand, Temporal Demand, Performance, Effort, Frustration, and Total Workload. Only Physical Demand differed significantly.

5. DISCUSSION

The first hypothesis predicted that the world-aligned compass would provide lower workload and be preferred. This hypothesis was not supported by the evaluation results. The world-aligned compass did not result in lower workload; rather, it resulted in virtually identical workload to the top-down compass visualization. The world-aligned compass visualization was predicted to be the preferred visualization; however, the questionnaire results were not significantly different, and the data leaned towards preferring the top-down compass over the world-aligned compass.

The world-aligned compass was found to be significantly faster than the top-down compass in completing the overall task, which was a general navigation task. The world-aligned visualization thus performed faster but did not have lower workload and was not perceived as easier to use. This finding may be a result of the task dichotomy: one part navigation and one part metric judgments. The ease of use of a particular visualization, or the associated workload, during the navigation periods may have been negated during the metric judgment activities. Since the questionnaires were administered post-task, they encompassed both the navigation and judgment task components, and the results reflect the sum of the experiences during each component. Hence, the world-aligned visualization may be faster and easier to use during navigation, while the top-down compass may be perceived as easier to use overall because of its performance on one particular task component (e.g., metric judgments).

The second hypothesis predicted that the top-down compass would provide better SA. This hypothesis was supported by the evaluation results: the top-down compass provided significantly better overall SA than the world-aligned compass. The top-down compass was also perceived as easier to use when answering metric judgment questions.

These results were based on simulation; therefore, further evaluation with real robots is required to fully validate the findings.
These findings also may not generalize beyond the ego-referenced 3D display employed, as other display styles will undoubtedly affect the operators' interactions.

6. CONCLUSIONS

Two compass visualizations for human-robotic interaction were evaluated to ascertain how a top-down (2D view) and an in-world world-aligned (3D view) compass visualization compare across a number of factors. The compass visualization presentations were chosen based upon existing literature related to standard compass visualizations and results related to 2D and 3D views. Twenty-four participants completed a within-subjects repeated measures evaluation.

The evaluation results are in agreement with existing results regarding the effects of 2D and 3D views on operators' ability to complete different tasks [13][14][18]. The existing view results indicate that if the task to be performed is a metric judgment task, a top-down (2D view) compass visualization will be easier to use. If the task is instead a general navigation task, an in-world world-aligned (3D view) compass visualization will yield faster performance.

The implication of these results for human-robotic interaction is that the choice of compass visualization has a definite and nontrivial impact. In general, the world-aligned compass resulted in faster task performance, whereas the top-down compass provided better perceived situational awareness and was perceived as easier to use. Our results imply that a top-down compass visualization is appropriate for metric judgment tasks and an in-world compass visualization is appropriate for navigational tasks. A single compass visualization may be inappropriate for all HRI tasks, specifically tasks that combine metric judgment and navigational activities into a single task; compass visualizations for these combined tasks require further evaluation.

7. ACKNOWLEDGMENTS

This work was supported by the NSF Grant IIS.

8. REFERENCES

[1] Aaltonen, A., "A Context Visualization Model for Wearable Computers," Proc. of the Sixth International Symposium on Wearable Computers, 2002.
[2] Aaltonen, A. and Lehikoinen, J., "Refining visualization reference model for context information," Personal and Ubiquitous Computing, 9(6), 2005.

[3] Baker, M., Casey, R., Keyes, B., and Yanco, H.A., "Improved Interfaces for Human-Robot Interaction in Urban Search and Rescue," Proc. of the IEEE Intl. Conf. on Systems, Man and Cybernetics, 2004.
[4] Bruemmer, D.J., Few, D.A., Boring, R.L., Marble, J.L., Walton, M.C., and Nielsen, C.W., "Shared understanding for collaborative control," IEEE Transactions on Systems, Man and Cybernetics, Part A, 35(4), 2005.
[5] Cooper, J.L. and Goodrich, M.A., "Integrating critical interface elements for intuitive single-display aviation control of UAVs," Proc. of SPIE DSS06 - Defense and Security Symposium, 62260B-1-9, 2006.
[6] Drury, J., Scholtz, J., and Yanco, H.A., "Awareness in human-robot interactions," Proc. of the 2003 IEEE Intl. Conf. on Systems, Man and Cybernetics, 2003.
[7] Endsley, M.R., Selcon, S.J., Hardiman, T.D., and Croft, D.G., "A comparative evaluation of SAGAT and SART for evaluations of situation awareness," Proc. of the Human Factors and Ergonomics Society Annual Meeting, 82-86, 1998.
[8] Hart, S. and Staveland, L., "Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research," in Hancock, P. and Meshkati, N. (Eds.), Human Mental Workload, North-Holland, Amsterdam, 1988.
[9] Humphrey, C.M., Henk, C., Sewell, G., Williams, B., and Adams, J.A., "Assessing the Scalability of a Multiple Robot Interface," Proc. of the 2nd ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 2007.
[10] Murphy, R.R. and Burke, J.L., "Up from the Rubble: Lessons Learned about HRI from Search and Rescue," Proc. of the 49th Annual Meeting of the Human Factors and Ergonomics Society, 2005.
[11] Nielsen, C.W., Goodrich, M.A., and Ricks, B., "Ecological Interfaces for Improving Mobile Robot Teleoperation," IEEE Transactions on Robotics, 23(5), 2007.
[12] Parasuraman, R., Galster, S., Squire, P., Furukawa, H., and Miller, C., "A Flexible Delegation-Type Interface Enhances System Performance in Human Supervision of Multiple Robots: Empirical Studies with RoboFlag," IEEE Trans. on Systems, Man and Cybernetics, Part A, 35(4), 2005.
[13] Smallman, H.S., St. John, M., and Oonk, H.M., "Information Availability in 2D and 3D Displays," IEEE Computer Graphics and Applications, 21(5), 51-57, 2001.
[14] St. John, M., Cowen, M.B., Smallman, H.S., and Oonk, H.M., "The use of 2D and 3D displays for shape-understanding versus relative-position tasks," Human Factors, 43(1), 79-98, 2001.
[15] Steinfeld, A., "Interface lessons for fully and semi-autonomous mobile robots," Proc. of the IEEE International Conference on Robotics and Automation, 2004.
[16] Suomela, R. and Lehikoinen, J., "Context Compass," Proc. of the 4th IEEE International Symposium on Wearable Computers, 2000.
[17] Taylor, R.M., "Situational awareness rating technique (SART): The development of a tool for aircrew systems design," Proc. of the AGARD AMP Symposium on Situational Awareness in Aerospace Operations, CP478, 1990.
[18] Tory, M., Kirkpatrick, A.E., Atkins, M.S., and Möller, T., "Visualization task performance with 2D, 3D, and combination displays," IEEE Transactions on Visualization and Computer Graphics, 12(1), 2-13, 2006.
[19] Voshell, M. and Woods, D., "Breaking the Keyhole in Human-Robot Coordination: Method and Evaluation," Proc. of the Human Factors and Ergonomics Society 49th Annual Meeting, 2005.
[20] Wang, J., Lewis, M., Hughes, S., Koes, M., and Carpin, S., "Validating USARsim for use in HRI Research," Proc. of the 49th Human Factors and Ergonomics Society Annual Meeting, 2005.
[21] Wickens, C.D. and Prevett, T.T., "Exploring the dimensions of egocentricity in aircraft navigation displays," Journal of Experimental Psychology: Applied, 1(2), 1995.
[22] Woods, D., Tittle, J., Feil, M., and Roesler, A., "Envisioning human-robot coordination in future operations," IEEE Transactions on Systems, Man & Cybernetics, Part C, 34(2), 2004.


More information

Julie L. Marble, Ph.D. Douglas A. Few David J. Bruemmer. August 24-26, 2005

Julie L. Marble, Ph.D. Douglas A. Few David J. Bruemmer. August 24-26, 2005 INEEL/CON-04-02277 PREPRINT I Want What You ve Got: Cross Platform Portability And Human-Robot Interaction Assessment Julie L. Marble, Ph.D. Douglas A. Few David J. Bruemmer August 24-26, 2005 Performance

More information

Using a Robot Proxy to Create Common Ground in Exploration Tasks

Using a Robot Proxy to Create Common Ground in Exploration Tasks Using a to Create Common Ground in Exploration Tasks Kristen Stubbs, David Wettergreen, and Illah Nourbakhsh Robotics Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 {kstubbs,

More information

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS BY SERAFIN BENTO MASTER OF SCIENCE in INFORMATION SYSTEMS Edmonton, Alberta September, 2015 ABSTRACT The popularity of software agents demands for more comprehensive HAI design processes. The outcome of

More information

VIRTUAL ASSISTIVE ROBOTS FOR PLAY, LEARNING, AND COGNITIVE DEVELOPMENT

VIRTUAL ASSISTIVE ROBOTS FOR PLAY, LEARNING, AND COGNITIVE DEVELOPMENT 3-59 Corbett Hall University of Alberta Edmonton, AB T6G 2G4 Ph: (780) 492-5422 Fx: (780) 492-1696 Email: atlab@ualberta.ca VIRTUAL ASSISTIVE ROBOTS FOR PLAY, LEARNING, AND COGNITIVE DEVELOPMENT Mengliao

More information

A Cognitive Model of Perceptual Path Planning in a Multi-Robot Control System

A Cognitive Model of Perceptual Path Planning in a Multi-Robot Control System A Cognitive Model of Perceptual Path Planning in a Multi-Robot Control System David Reitter, Christian Lebiere Department of Psychology Carnegie Mellon University Pittsburgh, PA, USA reitter@cmu.edu Michael

More information

TRAVEL IN SMILE : A STUDY OF TWO IMMERSIVE MOTION CONTROL TECHNIQUES

TRAVEL IN SMILE : A STUDY OF TWO IMMERSIVE MOTION CONTROL TECHNIQUES IADIS International Conference Computer Graphics and Visualization 27 TRAVEL IN SMILE : A STUDY OF TWO IMMERSIVE MOTION CONTROL TECHNIQUES Nicoletta Adamo-Villani Purdue University, Department of Computer

More information

Collaborating with a Mobile Robot: An Augmented Reality Multimodal Interface

Collaborating with a Mobile Robot: An Augmented Reality Multimodal Interface Collaborating with a Mobile Robot: An Augmented Reality Multimodal Interface Scott A. Green*, **, XioaQi Chen*, Mark Billinghurst** J. Geoffrey Chase* *Department of Mechanical Engineering, University

More information

Usability Evaluation of Multi- Touch-Displays for TMA Controller Working Positions

Usability Evaluation of Multi- Touch-Displays for TMA Controller Working Positions Sesar Innovation Days 2014 Usability Evaluation of Multi- Touch-Displays for TMA Controller Working Positions DLR German Aerospace Center, DFS German Air Navigation Services Maria Uebbing-Rumke, DLR Hejar

More information

A Comparison Between Camera Calibration Software Toolboxes

A Comparison Between Camera Calibration Software Toolboxes 2016 International Conference on Computational Science and Computational Intelligence A Comparison Between Camera Calibration Software Toolboxes James Rothenflue, Nancy Gordillo-Herrejon, Ramazan S. Aygün

More information

Applying CSCW and HCI Techniques to Human-Robot Interaction

Applying CSCW and HCI Techniques to Human-Robot Interaction Applying CSCW and HCI Techniques to Human-Robot Interaction Jill L. Drury Jean Scholtz Holly A. Yanco The MITRE Corporation National Institute of Standards Computer Science Dept. Mail Stop K320 and Technology

More information

Synchronous vs. Asynchronous Video in Multi-Robot Search

Synchronous vs. Asynchronous Video in Multi-Robot Search First International Conference on Advances in Computer-Human Interaction Synchronous vs. Asynchronous Video in Multi-Robot Search Prasanna Velagapudi 1, Jijun Wang 2, Huadong Wang 2, Paul Scerri 1, Michael

More information

Effective Iconography....convey ideas without words; attract attention...

Effective Iconography....convey ideas without words; attract attention... Effective Iconography...convey ideas without words; attract attention... Visual Thinking and Icons An icon is an image, picture, or symbol representing a concept Icon-specific guidelines Represent the

More information

Measurement of Texture Loss for JPEG 2000 Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates

Measurement of Texture Loss for JPEG 2000 Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates Copyright SPIE Measurement of Texture Loss for JPEG Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates ABSTRACT The capture and retention of image detail are

More information

A Study of Direction s Impact on Single-Handed Thumb Interaction with Touch-Screen Mobile Phones

A Study of Direction s Impact on Single-Handed Thumb Interaction with Touch-Screen Mobile Phones A Study of Direction s Impact on Single-Handed Thumb Interaction with Touch-Screen Mobile Phones Jianwei Lai University of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore, MD 21250 USA jianwei1@umbc.edu

More information

PROGRESS ON THE SIMULATOR AND EYE-TRACKER FOR ASSESSMENT OF PVFR ROUTES AND SNI OPERATIONS FOR ROTORCRAFT

PROGRESS ON THE SIMULATOR AND EYE-TRACKER FOR ASSESSMENT OF PVFR ROUTES AND SNI OPERATIONS FOR ROTORCRAFT PROGRESS ON THE SIMULATOR AND EYE-TRACKER FOR ASSESSMENT OF PVFR ROUTES AND SNI OPERATIONS FOR ROTORCRAFT 1 Rudolph P. Darken, 1 Joseph A. Sullivan, and 2 Jeffrey Mulligan 1 Naval Postgraduate School,

More information

Design and Evaluation of Tactile Number Reading Methods on Smartphones

Design and Evaluation of Tactile Number Reading Methods on Smartphones Design and Evaluation of Tactile Number Reading Methods on Smartphones Fan Zhang fanzhang@zjicm.edu.cn Shaowei Chu chu@zjicm.edu.cn Naye Ji jinaye@zjicm.edu.cn Ruifang Pan ruifangp@zjicm.edu.cn Abstract

More information

A Method for Quantifying the Benefits of Immersion Using the CAVE

A Method for Quantifying the Benefits of Immersion Using the CAVE A Method for Quantifying the Benefits of Immersion Using the CAVE Abstract Immersive virtual environments (VEs) have often been described as a technology looking for an application. Part of the reluctance

More information

EVALUATING VISUALIZATION MODES FOR CLOSELY-SPACED PARALLEL APPROACHES

EVALUATING VISUALIZATION MODES FOR CLOSELY-SPACED PARALLEL APPROACHES PROCEEDINGS of the HUMAN FACTORS AND ERGONOMICS SOCIETY 49th ANNUAL MEETING 2005 35 EVALUATING VISUALIZATION MODES FOR CLOSELY-SPACED PARALLEL APPROACHES Ronald Azuma, Jason Fox HRL Laboratories, LLC Malibu,

More information

Application of 3D Terrain Representation System for Highway Landscape Design

Application of 3D Terrain Representation System for Highway Landscape Design Application of 3D Terrain Representation System for Highway Landscape Design Koji Makanae Miyagi University, Japan Nashwan Dawood Teesside University, UK Abstract In recent years, mixed or/and augmented

More information

t t t rt t s s tr t Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2

t t t rt t s s tr t Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2 t t t rt t s s Manuel Martinez 1, Angela Constantinescu 2, Boris Schauerte 1, Daniel Koester 1, and Rainer Stiefelhagen 1,2 1 r sr st t t 2 st t t r t r t s t s 3 Pr ÿ t3 tr 2 t 2 t r r t s 2 r t ts ss

More information

User interface for remote control robot

User interface for remote control robot User interface for remote control robot Gi-Oh Kim*, and Jae-Wook Jeon ** * Department of Electronic and Electric Engineering, SungKyunKwan University, Suwon, Korea (Tel : +8--0-737; E-mail: gurugio@ece.skku.ac.kr)

More information

not to be republished NCERT Introduction To Aerial Photographs Chapter 6

not to be republished NCERT Introduction To Aerial Photographs Chapter 6 Chapter 6 Introduction To Aerial Photographs Figure 6.1 Terrestrial photograph of Mussorrie town of similar features, then we have to place ourselves somewhere in the air. When we do so and look down,

More information

The Effect of Display Type and Video Game Type on Visual Fatigue and Mental Workload

The Effect of Display Type and Video Game Type on Visual Fatigue and Mental Workload Proceedings of the 2010 International Conference on Industrial Engineering and Operations Management Dhaka, Bangladesh, January 9 10, 2010 The Effect of Display Type and Video Game Type on Visual Fatigue

More information

Chapter 1 Virtual World Fundamentals

Chapter 1 Virtual World Fundamentals Chapter 1 Virtual World Fundamentals 1.0 What Is A Virtual World? {Definition} Virtual: to exist in effect, though not in actual fact. You are probably familiar with arcade games such as pinball and target

More information

Perspective View Displays and User Performance

Perspective View Displays and User Performance 186 Perspective View Displays and User Performance Michael B. Cowen SSC San Diego INTRODUCTION Objects and scenes displayed on a flat screen from a 30- to 60-degree perspective viewing angle can convey

More information

Light-Field Database Creation and Depth Estimation

Light-Field Database Creation and Depth Estimation Light-Field Database Creation and Depth Estimation Abhilash Sunder Raj abhisr@stanford.edu Michael Lowney mlowney@stanford.edu Raj Shah shahraj@stanford.edu Abstract Light-field imaging research has been

More information

VEWL: A Framework for Building a Windowing Interface in a Virtual Environment Daniel Larimer and Doug A. Bowman Dept. of Computer Science, Virginia Tech, 660 McBryde, Blacksburg, VA dlarimer@vt.edu, bowman@vt.edu

More information

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Huidong Bai The HIT Lab NZ, University of Canterbury, Christchurch, 8041 New Zealand huidong.bai@pg.canterbury.ac.nz Lei

More information

Perceptual Characters of Photorealistic See-through Vision in Handheld Augmented Reality

Perceptual Characters of Photorealistic See-through Vision in Handheld Augmented Reality Perceptual Characters of Photorealistic See-through Vision in Handheld Augmented Reality Arindam Dey PhD Student Magic Vision Lab University of South Australia Supervised by: Dr Christian Sandor and Prof.

More information

Comparing the State Estimates of a Kalman Filter to a Perfect IMM Against a Maneuvering Target

Comparing the State Estimates of a Kalman Filter to a Perfect IMM Against a Maneuvering Target 14th International Conference on Information Fusion Chicago, Illinois, USA, July -8, 11 Comparing the State Estimates of a Kalman Filter to a Perfect IMM Against a Maneuvering Target Mark Silbert and Core

More information

Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications

Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications Helen McBreen, James Anderson, Mervyn Jack Centre for Communication Interface Research, University of Edinburgh, 80,

More information

NAVIGATION is an essential element of many remote

NAVIGATION is an essential element of many remote IEEE TRANSACTIONS ON ROBOTICS, VOL.??, NO.?? 1 Ecological Interfaces for Improving Mobile Robot Teleoperation Curtis Nielsen, Michael Goodrich, and Bob Ricks Abstract Navigation is an essential element

More information

Blending Human and Robot Inputs for Sliding Scale Autonomy *

Blending Human and Robot Inputs for Sliding Scale Autonomy * Blending Human and Robot Inputs for Sliding Scale Autonomy * Munjal Desai Computer Science Dept. University of Massachusetts Lowell Lowell, MA 01854, USA mdesai@cs.uml.edu Holly A. Yanco Computer Science

More information

An Integrated Expert User with End User in Technology Acceptance Model for Actual Evaluation

An Integrated Expert User with End User in Technology Acceptance Model for Actual Evaluation Computer and Information Science; Vol. 9, No. 1; 2016 ISSN 1913-8989 E-ISSN 1913-8997 Published by Canadian Center of Science and Education An Integrated Expert User with End User in Technology Acceptance

More information