
Operating in Configuration Space Significantly Improves Human Performance in Teleoperation

I. Ivanisevic and V. Lumelsky
Robotics Lab, University of Wisconsin-Madison, Madison, Wisconsin 53706, USA

Abstract

This paper discusses the use of configuration space (C-space) as a means of visualization and control in teleoperation of robot arm manipulators. The motivation is to improve operator performance in tasks involving manipulator motion in complex three-dimensional (3D) environments with obstacles. Unlike in other motion planning tasks, operators are known to make expensive mistakes in arm control, due to deficiencies of human spatial reasoning. The advantage of C-space is that in it the arm becomes a point, a case which humans are much better equipped to handle. To make such operation possible, a tool is proposed that reduces motion in 3D C-space to motion in 2D C-space. It is then shown, on results from testing 18 human subjects, that translating the problem of three-link 3D arm manipulator motion into C-space improves operator performance remarkably, by a factor of 2 to 4, compared to the usual work space control.

This work was supported by the Office of Naval Research Grant N

1 Introduction

This paper is concerned with the development of a human-computer interface that would simplify motion planning tasks encountered in teleoperation of robot arm manipulators. It is a well-known fact (the topic has been the subject of several experimental studies [1, 2]) that human operators have a hard time controlling complex jointed devices such as robot arm manipulators. In the past, examples of teleoperation applications have been reported (e.g. with the NASA shuttle arm) where human operator error resulted in costly equipment damage and failure to complete the task. This is not to say that humans should be eliminated from such tasks: the well-known (if not well-understood) ability of humans to see "the big picture", to quickly assess changes in the environment, and to modify the task as necessary still makes the human operator an invaluable part of the control loop in most teleoperation tasks. What is needed are machine intelligence tools that would compensate for the shortcomings of human reasoning. In approaching this development task, one needs to clarify first what specific intelligent skills the machine should be endowed with and how the two intelligences, human and machine, can be combined in a synergistic fashion. Studies [1, 2] connect operator difficulties with peculiarities of human spatial reasoning: humans have difficulty handling simultaneous interaction with objects at multiple points of the device's body, motion that involves mechanical joints (such as in arm manipulators), or dynamic tasks where the effects of inertia and acceleration are essential (such as in assembly or underwater applications). With this in mind, our approach has been to use machine intelligence to reduce the problem to one that humans are known to be good at: experience indicates that one such problem is moving a point in a maze. Another task that humans have difficulty with, and which can be assigned to machine intelligence, is avoiding potential collisions. This can be accomplished by continuously checking whether the operator-dictated motion of the arm will result in collision with obstacles, and disregarding or, better, intelligently modifying such commands. Such systems, based on a sensitive skin covering the arm, have already been demonstrated in hardware [3]. The focus of this work is a visual interface that allows human intelligence to become a contributor to motion control. We show, in particular, that while human operators find it nearly impossible not to bump the arm into surrounding objects, they easily achieve real-time collision-free motion using the proposed tools.
Our visual human-computer interface transforms the task of controlling a jointed arm manipulator (work space, or W-space, control) into one of controlling a point in the corresponding configuration space (C-space). It has been shown [4] that even in the simpler two-dimensional (2D) case, control in C-space improves operator performance rather remarkably, even outperforming the best known algorithms running on modern workstations. Results in 2D, however, do not translate easily to the 3D tasks of real-world applications [5, 6, 7]. Such applications can appear in a physical or virtual setting, or be presented on a (flat) screen. In order to extend the known 2D results to the more realistic 3D case, an interface tool is proposed here that reduces the 3D task at hand to a multiplicity of simple 2D tasks. We assume that the C-space representation of the work space is available to the operator. Below, details of the arm model and its C-space are discussed in Section 2, the proposed interface is described in Section 3, and the experimental setup, results and discussion are given in Section 4.

Figure 1: The 3D RRR arm manipulator.

2 The Arm

Arm Geometry. We consider a three-link three-dimensional arm manipulator with three revolute joints (an RRR arm), Figure 1. The arm's first link, l1, rotates about a vertical axis, producing joint values θ1. Each of the other two links, l2 and l3, rotates in a vertical plane; their positions are defined by the values θ2 and θ3, respectively. For computational efficiency the links are modeled as generalized cylinders. Note that link l2 is attached to the side of base l1, and link l3 is attached to the side of link l2, allowing for a bigger (or even unlimited) range of joints θ2, θ3. For a more realistic representation, joint values are assumed to be limited by mechanical stops, |θi| < 2, i = 1, 2, 3. The virtual arm used in the experiments (Section 4) is shown in Figure 2. The arm operates in an environment with stationary obstacles.

Configuration Space (C-space). The arm's configuration space (C-space) is formed by all possible combinations of the values of the triple (θ1, θ2, θ3), each of which defines a position of the arm in the work space. Those positions of the arm that are not permissible due to obstacles or due to the range limits form the C-space obstacles (see e.g. [8], [9]). Since in our scheme the operator will control the robot in C-space, the latter has to be computed first.

Figure 2: A sample task in W-space (Task 1 in Section 4).

A position in W-space in which the arm touches some obstacle maps in C-space uniquely into a point on the surface of the corresponding obstacle image. The set of all positions in W-space in which the arm touches one or more obstacles forms the surface(s) of the C-space obstacles. This mapping is not linear, and so C-space obstacles look very different from W-space obstacles. This means the operator will in general see a rather unfamiliar picture, which in principle is a disadvantage. One useful property of C-space obstacles is that, assuming the θ3 axis corresponds to the "up" direction in C-space, they tend to have the general shape of pillars (see Figure 3; there the axes qi correspond to our θi). This fact is exploited below in the design of our interface.
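For concreteness, the arm geometry just described can be sketched in code. The following forward-kinematics function is a minimal illustration, not taken from the paper: the link lengths, the joint zero positions, and the convention that θ2 and θ3 are measured from the horizontal are all assumptions.

```python
import math

def rrr_forward_kinematics(theta1, theta2, theta3, l1=1.0, l2=1.0, l3=1.0):
    """Endpoint P of a 3D RRR arm: base link l1 is vertical and rotates
    about the z axis by theta1; links l2 and l3 move in the vertical
    plane selected by theta1, at angles theta2 and theta2 + theta3
    from the horizontal (assumed conventions)."""
    # Unit vector of the vertical plane containing links l2 and l3.
    ux, uy = math.cos(theta1), math.sin(theta1)
    # Elbow position: top of the base plus link l2.
    r2 = l2 * math.cos(theta2)           # horizontal reach after l2
    z2 = l1 + l2 * math.sin(theta2)      # height after l2
    # Endpoint: add link l3 at the accumulated angle theta2 + theta3.
    r3 = r2 + l3 * math.cos(theta2 + theta3)
    z3 = z2 + l3 * math.sin(theta2 + theta3)
    return (r3 * ux, r3 * uy, z3)
```

With all joint values zero, this convention stretches the arm horizontally above the base, so `rrr_forward_kinematics(0, 0, 0)` gives (l2 + l3, 0, l1).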

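Computing the C-space ahead of time, as assumed in our scheme, can be done by brute-force sampling: discretize each joint range and test every configuration with a collision checker. The sketch below is a generic illustration, not the authors' stated procedure; `in_collision` stands for a hypothetical arm-obstacle intersection test.

```python
import numpy as np

def build_cspace_grid(in_collision, n=32, lo=0.0, hi=2.0 * np.pi):
    """Discretize the three joint ranges into an n x n x n boolean grid.
    True marks a C-space obstacle cell; `in_collision` is a hypothetical
    predicate testing one configuration (theta1, theta2, theta3)."""
    axis = np.linspace(lo, hi, n, endpoint=False)
    grid = np.zeros((n, n, n), dtype=bool)
    for i, t1 in enumerate(axis):
        for j, t2 in enumerate(axis):
            for k, t3 in enumerate(axis):
                grid[i, j, k] = in_collision((t1, t2, t3))
    return grid

# Example with a toy predicate: block all configurations with
# theta1 < pi (purely illustrative, not a real obstacle model).
toy = build_cspace_grid(lambda th: th[0] < np.pi, n=16)
```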
Figure 3: Three-dimensional C-space of the sample task in Figure 2.

3 Visual Feedback and Control

We can now compare the two types of visual feedback one can realize, in W-space or in C-space. Their use in tests with human subjects is discussed in Section 4.

3.1 W-space Feedback and Control

This scheme corresponds to the commonly used work space control. The user is presented with an image of the three-dimensional work space (that is, its projection on the flat screen), complete with the arm and any physical obstacles in the scene. See the example in Figure 2: the darker object is the arm, the lighter object a relatively complex obstacle. The starting position of the arm is shown solid, and its target position is shown as a skeleton arm. The user can manipulate the W-space image interactively, including zooming in and out and changing the position of the "camera" (i.e. the viewer's position relative to the scene). (Note that such flexibility of control is feasible in virtual scenes but is not likely to be reproducible in a physical system, because it would require either a very large number of cameras or the possibility of arbitrary positioning of the viewing camera(s).)

Figure 4: The master arm used as an input device in W-space.

In an attempt to provide a good interface, an input device has been built, Figure 4, which presents an arm resembling the one on the screen. The operator can use this master arm to guide the virtual slave arm on the screen while receiving continuous visual feedback from the "camera".¹ Given the difficulty the operators have with arm control in the vicinity of obstacles, additional visual feedback means have been provided, which in a way simulate a haptic interface. A small bright sphere appears at the point(s) of the virtual arm body that come into contact with an obstacle. Also, if after the contact the operator continues moving the arm "into the obstacle", a skeleton arm figure follows the actual motion while the solid arm stays at the point of contact. The operators then realize they should move the arm back to the point of contact and modify the motion.

¹ We considered other types of input devices, the mouse and keyboard in particular, and found them to be less effective when dealing with a 3D environment. One might also consider a haptic interface that would provide tactile feedback from collisions between the virtual arm and virtual obstacles. Accomplishing this would not be easy: today there are no haptic interfaces (and the principles on which they could be built are unclear) capable of "sensing" contacts with objects at any point of the virtual arm body and transmitting this information to the operator in a meaningful way.

Figure 5: Slicing of the 3D C-space in Figure 3. This also corresponds to the W-space in Figure 2.

3.2 C-space Feedback and Control

As explained above, transforming the problem of moving a complex jointed arm (in W-space) into that of moving a simple point (in C-space) makes the task much easier for humans. But another problem, which did not appear in the two-dimensional case [4], appears in the 3D case. Note that for non-point objects, various means (e.g. perspective projections, shading, etc.) can be used to help estimate the object's location. These are not useful for a point: when moving a point on the screen, it is very hard to see where it is located in 3D space. A no less serious problem is that the operator cannot see most of the interior of C-space obstacles, which tend to be cave-like structures with multiple entrances, branches and internal dead-ends (see Figure 3). Attempts to "go inside" those caves would likely lead to an exhaustive search of the insides of the C-space obstacles, making C-space less effective in such cases. (In fact, it may be easier to assess in C-space, rather than in W-space, which paths lead to dead-ends and which lead to the target.)

This suggests another mapping, which would transform the 3D C-space into some two-dimensional space while retaining information about C-space obstacles. The mapping introduced here is based on the concept of C-space slicing, not unlike the slicing used in earlier work on motion planning [8]. One may notice an analogy with the field of medicine, where it has been common practice to present doctors with a set of 2D pictures representing different slices of a 3D object (e.g. a human brain). The fact that doctors are able to mentally reconstruct a 3D space from this set of 2D slices provides hope that the same principle can be applied to motion planning.

Assume each side of the C-space cube is 2π long; if a joint, say θ1, has range limits (θ1min, θ1max), then the parallelepiped in C-space between θ1 = θ1min and θ1 = θ1max is a range obstacle which, from the motion planning standpoint, is like any other obstacle. [The C-obstacles shown in Figure 3 represent only physical obstacles; range obstacles are omitted.] We slice C-space bottom up, along the θ3 axis; the result is a number of squares, each representing a θ3 slice. The sides of each square represent the 2π ranges of θ1 and θ2, respectively. A square may include slices of regular C-space obstacles and of range obstacles, if any. If the range of θ3 is (θ3min, θ3max) and the slice thickness is Δθ3, then the number of slice squares is m = (θ3max − θ3min)/Δθ3. A smaller Δθ3 results in better resolution and a bigger m. The maximum m is determined by the screen size and the operator's convenience. Figure 5 shows the slice mapping of the C-space of Figure 2.
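The slicing step can be sketched as follows, assuming the C-space has already been computed as a 3D boolean occupancy grid; the grid layout and function names are illustrative, not from the paper.

```python
import numpy as np

def slice_cspace(grid, n_rows=10, n_cols=10):
    """Cut a 3D C-space occupancy grid (axes theta1, theta2, theta3)
    into m = n_rows * n_cols slices along the theta3 axis, to be laid
    out left to right, bottom row first, as in Figure 5."""
    n3 = grid.shape[2]
    m = n_rows * n_cols                 # number of slice squares
    thickness = n3 // m                 # grid cells per slice
    slices = [grid[:, :, s * thickness] for s in range(m)]
    # Resolution along theta3 in degrees: 360 / m over a full 2*pi
    # range (3.6 degrees for the m = 100 layout of Figure 5).
    resolution_deg = 360.0 / m
    return slices, resolution_deg
```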
Rectangular shaded stripes in the squares, and the fully shaded squares in the middle of the figure, correspond to the range obstacles; the more "irregular" shapes are normal C-obstacles. The 100 squares shown give a resolution of 3.6 degrees along the θ3 axis. In the figure, the bottom left corner corresponds to θ3 = 0 and the upper right corner to θ3 = 2π − Δθ3, which means these two slices are actually neighbors. In each row the increase in θ3 corresponds to the slices going from left to right; that is, the leftmost slice in row n follows the rightmost slice in row n − 1, and vice versa. S and T mark the start and target positions corresponding to those in Figure 2.

When working in C-space, the operator moves the point representing the arm's current position within a slice or between neighboring slices. One can also try to jump over a number of slices; the computer will allow such moves if the (C-space) straight line between the current and intended positions does not cross any obstacles. To increase the visibility of details (e.g., to look at narrow openings), one can zoom in on a given slice by clicking the mouse (this makes the slice occupy the whole screen) and zoom back out when needed.

As mentioned in Section 2, C-space obstacles tend to have the general shape of pillars. This explains the choice of slicing along the θ3 axis (as opposed to θ1 or θ2 or some arbitrary axis): with slicing along θ3, neighboring slices tend to be more similar in appearance; when slicing higher or lower across a pillar, one tends to see the same general shape in the cross-section. This property of similarity is important in that it allows the operator to formulate a plan of motion and mentally carry it from slice to slice along an intended path, to see if it will accomplish the task.

Figure 6: W-space experiment, Task 2 (Section 4).

In other words, similar to a bird's-eye view of a maze, the operator will not actually perform the motion until they are certain it is likely to solve the problem. If each slice were vastly different from its neighbors, the discontinuity would make it hard to keep track of the change from slice to slice, and it would be impossible to do this kind of advance planning without performing any motion. It is clear that in more or less complex tasks such planning is not possible in W-space: the objects being moved are far too complex and their imaginary motion is nearly impossible to visualize. Experimental studies [1, 2] confirm that in practice human operators usually try to solve the bigger problem in smaller stages that are easier to plan. This often results in backtracking when it becomes obvious that previous stages led to an unfavorable outcome.
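The computer's test for jumping over several slices, i.e. whether the straight C-space segment between the current and intended positions crosses an obstacle, can be approximated by sampling points along the segment against an occupancy grid. This is a sketch under assumed conventions (joint ranges, grid indexing); the paper does not describe its implementation.

```python
import numpy as np

def jump_is_free(grid, start, goal, samples=100, lo=0.0, hi=2.0 * np.pi):
    """Return True if the straight C-space segment from `start` to
    `goal` (each a (theta1, theta2, theta3) triple) stays outside the
    obstacle cells of the boolean occupancy `grid`."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    n = np.array(grid.shape)
    for t in np.linspace(0.0, 1.0, samples):
        q = start + t * (goal - start)             # point on the segment
        idx = ((q - lo) / (hi - lo) * n).astype(int)
        idx = np.clip(idx, 0, n - 1)               # stay inside the grid
        if grid[tuple(idx)]:
            return False                           # segment hits an obstacle
    return True
```

A denser `samples` value trades speed for the chance of stepping over a thin obstacle, the usual limitation of sampled segment checks.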

Figure 7: C-space experiment, Task 2. This also corresponds to the W-space in Figure 6.

On the other hand, an operator working in C-space can carry out the imaginary motion from start to target as a succession of smaller tasks (e.g. find a way out of or into a hole), never moving the actual arm until it is certain that the contemplated path will lead to the target. To realize this type of advance planning, the operator needs some training to get familiar with the use of C-space; in our experiments some attempt was made to assess the effect of more or less training on operator performance.

4 Experimental Results and Discussion

In the experiments, 18 human subjects were tested on the same motion planning tasks with a three-link RRR arm manipulator. The objective was to compare the subjects' performance when working in the work space with that in the configuration space. For W-space, the interface described in Section 3.1 was used, and for C-space, the interface presented in Section 3.2. Each subject was tested on two tasks: an easier Task 1, Figure 2, and Task 2, Figure 6, which was meant to be more of a challenge to human spatial reasoning.²

Table 1: C-space, Task 1 statistics for 18 subjects
              Mean   Min.   Max.   St.Dev.
  path len
  time, sec

Table 2: C-space, Task 2 statistics for 18 subjects
              Mean   Min.   Max.   St.Dev.
  path len
  time, sec

The experiments consisted of two parts completed on separate days, with Part 1 (planning in C-space) followed by Part 2 (planning in W-space). The reasoning behind this order is as follows. If, as expected, the subjects' performance in C-space is indeed better than in W-space, those good results would be more convincing if obtained in the very first set of experiments. That is, had the C-space results been obtained in Part 2 of the experiment, one could argue that the better performance in C-space was the result of experience and memorization of the environment acquired in a Part 1 W-space experiment. With our C-space-then-W-space order, better performance in C-space would suggest that operation in C-space is indeed more effective than in W-space.

Table 3: W-space, Task 1 statistics for 18 subjects
              Mean   Min.   Max.   St.Dev.
  path len
  time, sec

Part 1. The task is to plan arm motion from the starting to the target position in C-space, using the C-space interface. Each subject completed Task 1, shown in Figure 5, and then a more complex Task 2, shown in Figure 7. Prior to the test each subject went through a training period of about 15 min, during which they familiarized themselves with the interface and solved a simple training example. Of the whole group of 18 subjects, 3 received more extensive training; this was to be used to assess the effect of training.
² One may see a resemblance between the shapes of the obstacles in the experiments (Figure 2) and the support trusses in the NASA space station project.

Figure 8: A typical W-space path produced using the C-space interface in Task 1.

Table 4: W-space, Task 2 statistics for 18 subjects
              Mean   Min.   Max.   St.Dev.
  path len
  time, sec

Subjects' performance was measured by the length of the path produced, determined as the integral of changes in the three joint values (θ1, θ2, θ3), and by the time taken to complete the task. Tables 1 and 2 summarize the subjects' performance in C-space, in Task 1 and Task 2, respectively. Figures 8 and 9 show typical performances in Task 1 and Task 2, respectively, translated back to W-space. Dark lines show the path of the arm endpoint; also shown is the projection of the path onto the workspace floor (the xy plane).

Part 2. Here the subjects were asked to solve the problem in W-space, using the W-space interface. Once again, the subjects received 15 min of training during which they solved a simple training example (again, the same three subjects received more extensive training, see above). Each subject would then complete Task 1, shown in Figure 2, followed by Task 2, Figure 6. As before, subjects' performance was measured by path length and time to completion. Tables 3 and 4 show the subjects' performance in W-space for Task 1 and Task 2, respectively.

Figure 9: A typical W-space path produced using the C-space interface in Task 2.

Figure 10 shows a typical example of performance in W-space in Task 1, and Figure 11 shows one for Task 2.

Discussion. Note the remarkable improvement in subjects' performance in C-space compared to W-space. As seen in Tables 1-4, for Task 1 the mean length of the paths produced in C-space is roughly 2.5 times better (shorter) than in W-space; for Task 2 it is more than 3.5 times better. Furthermore, both the best and the worst subject performances are better in C-space than in W-space (see the Min. and Max. columns of Tables 1-4). Even more interestingly, in the more difficult Task 2 the best (shortest) path in W-space is still worse than the worst (longest) path in C-space. Only in one instance, in Task 1, did a subject perform (marginally) better in W-space than in C-space (producing a path of length 5.21, as compared to 5.59). In Task 2 the same subject did significantly better in C-space than in W-space.
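The path-length measure used above, the integral of changes in the three joint values, reduces for a sampled path to a sum of per-step joint increments. The following is a sketch of that metric; the paper does not state which norm it uses, so an L1 reading is assumed.

```python
def joint_path_length(path):
    """Length of a C-space path given as a list of (theta1, theta2,
    theta3) triples: the summed absolute change of each joint value
    between consecutive configurations (an assumed L1 reading of the
    'integral of changes in the three joint values')."""
    length = 0.0
    for (a1, a2, a3), (b1, b2, b3) in zip(path, path[1:]):
        length += abs(b1 - a1) + abs(b2 - a2) + abs(b3 - a3)
    return length
```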

Figure 10: A typical W-space path produced using the W-space interface in Task 1.

One parameter that was not measured consistently, but could be of great interest in applications, is the number of times the arm hit an obstacle during task execution. Based on our observations, this number was much higher when working in W-space than in C-space. Overall, the subjects found it much easier to work in C-space than in W-space, and they performed admirably there. This difference was even more evident with the three subjects who received longer training with both interfaces. For these subjects, in C-space Task 1 the mean path length was 1.52 and the mean time 86.3 sec. For the same task in W-space the mean path length was 4.36. When considering those numbers, one should take into account the length of the optimal path for Task 1: in C-space the three well-trained subjects produced near-optimal paths. On the other hand, when working in W-space, their performance was significantly worse than that of an average little-trained subject (with 15 min of training) in C-space, albeit significantly better than that of the little-trained subjects in W-space.

Figure 11: A typical W-space path produced using the W-space interface in Task 2.

In C-space Task 2, the well-trained subjects' mean path length was 1.63 and the mean time 92.7 sec. For the same task in W-space the mean path length was 7.77. Note that the path length 7.77 is only marginally better than the mean of all subjects taken together: training did not help the well-trained subjects in this case. In this more complicated task, the improvement in performance in C-space (as compared to W-space) is even more dramatic. This confirms other results [1, 2] suggesting that training in general helps little when working in W-space. On the other hand, since the performance of the well-trained subjects in C-space was consistently better than that of the little-trained subjects, one can conclude that training does in fact help when operating in C-space. Given the overhead of effort a subject needs to master the more complex rules of operation in C-space and to get used to the unfamiliar shapes of C-space obstacles, it is reasonable to suggest that training will, in general, significantly improve subjects' performance in C-space.

5 Conclusion

This paper proposes an approach to human-guided teleoperation of a robot arm manipulator based on operation in the task's configuration space rather than in the commonly used W-space. As described, the approach is applicable only if complete information about the arm manipulator and its environment is available and the corresponding C-space can be computed beforehand. To improve the human interface, a tool is proposed for reducing motion in the 3D C-space to motion in a 2D C-space. With it, a two-dimensional, sliced version of the three-dimensional C-space is offered to the operator. Instead of directly confronting the problem of collision analysis, which is known to be extremely challenging for human spatial reasoning, the operator faces the task in C-space, where one can concentrate on global navigation, leaving collision analysis to the computer. The thus-reduced task becomes a variant of the maze-searching problem. Our experiments with human subjects show that in spite of the strange appearance of this new interface, subjects quickly learn it and perform with it remarkably better than in the familiar, commonly used work space.

The authors express their gratitude to Jon Lawrence for his help in designing and especially building the master arm used in this work.

References

[1] V. Lumelsky, S. Rogers, J. Watson, F. Liu, An Experimental Study of Human Performance in Planning Object Motion. Final Report, University of Wisconsin Robotics Lab, July.

[2] F. Liu, Multivariate Analysis of Human Performance in Motion Planning. M.S. Thesis, University of Wisconsin-Madison, Mechanical Engineering, May.

[3] V. Lumelsky, E. Cheung, Real-Time Collision Avoidance in Teleoperated Whole-Sensitive Robot Arm Manipulators, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 23, No. 5.

[4] I. Ivanisevic, V. Lumelsky, A Human-Machine Interface for Teleoperation of Arm Manipulators in a Complex Environment, Proceedings of the 1998 IEEE International Conference on Intelligent Robots and Systems, October 1998.

[5] Q. Lin, C. Kuo, Virtual Teleoperation of Underwater Robots, Proceedings of the 1997 IEEE International Conference on Robotics and Automation, April 1997.

[6] P. Schenker et al., Development of a Telemanipulator for Dexterity-Enhanced Microsurgery, Proceedings of the 2nd International Symposium on Medical Robotics and Computer Assisted Surgery.

[7] I. Hunter et al., A Teleoperated Microsurgical Robot and Associated Virtual Environment for Eye Surgery, Presence, Vol. 2, Fall.

[8] T. Lozano-Perez, Spatial Planning: A Configuration Space Approach, IEEE Transactions on Computers, Vol. 32, No. 3, February.

[9] J. Schwartz, M. Sharir, On the Piano Movers' Problem. Part II: General Techniques for Computing Topological Properties of Real Algebraic Manifolds, Advances in Applied Mathematics, Vol. 4.


More information

Effective Iconography....convey ideas without words; attract attention...

Effective Iconography....convey ideas without words; attract attention... Effective Iconography...convey ideas without words; attract attention... Visual Thinking and Icons An icon is an image, picture, or symbol representing a concept Icon-specific guidelines Represent the

More information

Using Simple Force Feedback Mechanisms as Haptic Visualization Tools.

Using Simple Force Feedback Mechanisms as Haptic Visualization Tools. Using Simple Force Feedback Mechanisms as Haptic Visualization Tools. Anders J Johansson, Joakim Linde Teiresias Research Group (www.bigfoot.com/~teiresias) Abstract Force feedback (FF) is a technology

More information

IMAGE PROCESSING PAPER PRESENTATION ON IMAGE PROCESSING

IMAGE PROCESSING PAPER PRESENTATION ON IMAGE PROCESSING IMAGE PROCESSING PAPER PRESENTATION ON IMAGE PROCESSING PRESENTED BY S PRADEEP K SUNIL KUMAR III BTECH-II SEM, III BTECH-II SEM, C.S.E. C.S.E. pradeep585singana@gmail.com sunilkumar5b9@gmail.com CONTACT:

More information

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT

More information

ABSTRACT. We investigate joint source-channel coding for transmission of video over time-varying channels. We assume that the

ABSTRACT. We investigate joint source-channel coding for transmission of video over time-varying channels. We assume that the Robust Video Compression for Time-Varying Wireless Channels Shankar L. Regunathan and Kenneth Rose Dept. of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106 ABSTRACT

More information

Differences in Fitts Law Task Performance Based on Environment Scaling

Differences in Fitts Law Task Performance Based on Environment Scaling Differences in Fitts Law Task Performance Based on Environment Scaling Gregory S. Lee and Bhavani Thuraisingham Department of Computer Science University of Texas at Dallas 800 West Campbell Road Richardson,

More information

Introduction to DSP ECE-S352 Fall Quarter 2000 Matlab Project 1

Introduction to DSP ECE-S352 Fall Quarter 2000 Matlab Project 1 Objective: Introduction to DSP ECE-S352 Fall Quarter 2000 Matlab Project 1 This Matlab Project is an extension of the basic correlation theory presented in the course. It shows a practical application

More information

UNIT 5a STANDARD ORTHOGRAPHIC VIEW DRAWINGS

UNIT 5a STANDARD ORTHOGRAPHIC VIEW DRAWINGS UNIT 5a STANDARD ORTHOGRAPHIC VIEW DRAWINGS 5.1 Introduction Orthographic views are 2D images of a 3D object obtained by viewing it from different orthogonal directions. Six principal views are possible

More information

ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES

ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES Petteri PÖNTINEN Helsinki University of Technology, Institute of Photogrammetry and Remote Sensing, Finland petteri.pontinen@hut.fi KEY WORDS: Cocentricity,

More information

Ch 1. Ch 2 S 1. Haptic Display. Summary. Optimization. Dynamics. Paradox. Synthesizers. Ch 3 Ch 4. Ch 7. Ch 5. Ch 6

Ch 1. Ch 2 S 1. Haptic Display. Summary. Optimization. Dynamics. Paradox. Synthesizers. Ch 3 Ch 4. Ch 7. Ch 5. Ch 6 Chapter 1 Introduction The work of this thesis has been kindled by the desire for a certain unique product an electronic keyboard instrument which responds, both in terms of sound and feel, just like an

More information

ISMCR2004. Abstract. 2. The mechanism of the master-slave arm of Telesar II. 1. Introduction. D21-Page 1

ISMCR2004. Abstract. 2. The mechanism of the master-slave arm of Telesar II. 1. Introduction. D21-Page 1 Development of Multi-D.O.F. Master-Slave Arm with Bilateral Impedance Control for Telexistence Riichiro Tadakuma, Kiyohiro Sogen, Hiroyuki Kajimoto, Naoki Kawakami, and Susumu Tachi 7-3-1 Hongo, Bunkyo-ku,

More information

LOCAL OPERATOR INTERFACE. target alert teleop commands detection function sensor displays hardware configuration SEARCH. Search Controller MANUAL

LOCAL OPERATOR INTERFACE. target alert teleop commands detection function sensor displays hardware configuration SEARCH. Search Controller MANUAL Strategies for Searching an Area with Semi-Autonomous Mobile Robots Robin R. Murphy and J. Jake Sprouse 1 Abstract This paper describes three search strategies for the semi-autonomous robotic search of

More information

STEM Spectrum Imaging Tutorial

STEM Spectrum Imaging Tutorial STEM Spectrum Imaging Tutorial Gatan, Inc. 5933 Coronado Lane, Pleasanton, CA 94588 Tel: (925) 463-0200 Fax: (925) 463-0204 April 2001 Contents 1 Introduction 1.1 What is Spectrum Imaging? 2 Hardware 3

More information

Web-Based Mobile Robot Simulator

Web-Based Mobile Robot Simulator Web-Based Mobile Robot Simulator From: AAAI Technical Report WS-99-15. Compilation copyright 1999, AAAI (www.aaai.org). All rights reserved. Dan Stormont Utah State University 9590 Old Main Hill Logan

More information

A Kinect-based 3D hand-gesture interface for 3D databases

A Kinect-based 3D hand-gesture interface for 3D databases A Kinect-based 3D hand-gesture interface for 3D databases Abstract. The use of natural interfaces improves significantly aspects related to human-computer interaction and consequently the productivity

More information

ENGINEERING GRAPHICS ESSENTIALS

ENGINEERING GRAPHICS ESSENTIALS ENGINEERING GRAPHICS ESSENTIALS Text and Digital Learning KIRSTIE PLANTENBERG FIFTH EDITION SDC P U B L I C AT I O N S Better Textbooks. Lower Prices. www.sdcpublications.com ACCESS CODE UNIQUE CODE INSIDE

More information

Methods for Haptic Feedback in Teleoperated Robotic Surgery

Methods for Haptic Feedback in Teleoperated Robotic Surgery Young Group 5 1 Methods for Haptic Feedback in Teleoperated Robotic Surgery Paper Review Jessie Young Group 5: Haptic Interface for Surgical Manipulator System March 12, 2012 Paper Selection: A. M. Okamura.

More information

Creating a light studio

Creating a light studio Creating a light studio Chapter 5, Let there be Lights, has tried to show how the different light objects you create in Cinema 4D should be based on lighting setups and techniques that are used in real-world

More information

Observing a colour and a spectrum of light mixed by a digital projector

Observing a colour and a spectrum of light mixed by a digital projector Observing a colour and a spectrum of light mixed by a digital projector Zdeněk Navrátil Abstract In this paper an experiment studying a colour and a spectrum of light produced by a digital projector is

More information

Robot Task-Level Programming Language and Simulation

Robot Task-Level Programming Language and Simulation Robot Task-Level Programming Language and Simulation M. Samaka Abstract This paper presents the development of a software application for Off-line robot task programming and simulation. Such application

More information

Panoramic imaging. Ixyzϕθλt. 45 degrees FOV (normal view)

Panoramic imaging. Ixyzϕθλt. 45 degrees FOV (normal view) Camera projections Recall the plenoptic function: Panoramic imaging Ixyzϕθλt (,,,,,, ) At any point xyz,, in space, there is a full sphere of possible incidence directions ϕ, θ, covered by 0 ϕ 2π, 0 θ

More information

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception

More information

Chair. Table. Robot. Laser Spot. Fiber Grating. Laser

Chair. Table. Robot. Laser Spot. Fiber Grating. Laser Obstacle Avoidance Behavior of Autonomous Mobile using Fiber Grating Vision Sensor Yukio Miyazaki Akihisa Ohya Shin'ichi Yuta Intelligent Laboratory University of Tsukuba Tsukuba, Ibaraki, 305-8573, Japan

More information

Inventor-Parts-Tutorial By: Dor Ashur

Inventor-Parts-Tutorial By: Dor Ashur Inventor-Parts-Tutorial By: Dor Ashur For Assignment: http://www.maelabs.ucsd.edu/mae3/assignments/cad/inventor_parts.pdf Open Autodesk Inventor: Start-> All Programs -> Autodesk -> Autodesk Inventor 2010

More information

AN ABSTRACT OF THE THESIS OF

AN ABSTRACT OF THE THESIS OF AN ABSTRACT OF THE THESIS OF Jason Aaron Greco for the degree of Honors Baccalaureate of Science in Computer Science presented on August 19, 2010. Title: Automatically Generating Solutions for Sokoban

More information

ModaDJ. Development and evaluation of a multimodal user interface. Institute of Computer Science University of Bern

ModaDJ. Development and evaluation of a multimodal user interface. Institute of Computer Science University of Bern ModaDJ Development and evaluation of a multimodal user interface Course Master of Computer Science Professor: Denis Lalanne Renato Corti1 Alina Petrescu2 1 Institute of Computer Science University of Bern

More information

Brief summary report of novel digital capture techniques

Brief summary report of novel digital capture techniques Brief summary report of novel digital capture techniques Paul Bourke, ivec@uwa, February 2014 The following briefly summarizes and gives examples of the various forms of novel digital photography and video

More information

Be aware that there is no universal notation for the various quantities.

Be aware that there is no universal notation for the various quantities. Fourier Optics v2.4 Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast and

More information

Exploring 3D in Flash

Exploring 3D in Flash 1 Exploring 3D in Flash We live in a three-dimensional world. Objects and spaces have width, height, and depth. Various specialized immersive technologies such as special helmets, gloves, and 3D monitors

More information

An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots

An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots An Experimental Comparison of Path Planning Techniques for Teams of Mobile Robots Maren Bennewitz Wolfram Burgard Department of Computer Science, University of Freiburg, 7911 Freiburg, Germany maren,burgard

More information

Multi-Modal Robot Skins: Proximity Servoing and its Applications

Multi-Modal Robot Skins: Proximity Servoing and its Applications Multi-Modal Robot Skins: Proximity Servoing and its Applications Workshop See and Touch: 1st Workshop on multimodal sensor-based robot control for HRI and soft manipulation at IROS 2015 Stefan Escaida

More information

QUICKSTART COURSE - MODULE 1 PART 2

QUICKSTART COURSE - MODULE 1 PART 2 QUICKSTART COURSE - MODULE 1 PART 2 copyright 2011 by Eric Bobrow, all rights reserved For more information about the QuickStart Course, visit http://www.acbestpractices.com/quickstart Hello, this is Eric

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

Design and Simulation of a New Self-Learning Expert System for Mobile Robot

Design and Simulation of a New Self-Learning Expert System for Mobile Robot Design and Simulation of a New Self-Learning Expert System for Mobile Robot Rabi W. Yousif, and Mohd Asri Hj Mansor Abstract In this paper, we present a novel technique called Self-Learning Expert System

More information

In 1974, Erno Rubik created the Rubik s Cube. It is the most popular puzzle

In 1974, Erno Rubik created the Rubik s Cube. It is the most popular puzzle In 1974, Erno Rubik created the Rubik s Cube. It is the most popular puzzle worldwide. But now that it has been solved in 7.08 seconds, it seems that the world is in need of a new challenge. Melinda Green,

More information

Motion Control of Excavator with Tele-Operated System

Motion Control of Excavator with Tele-Operated System 26th International Symposium on Automation and Robotics in Construction (ISARC 2009) Motion Control of Excavator with Tele-Operated System Dongnam Kim 1, Kyeong Won Oh 2, Daehie Hong 3#, Yoon Ki Kim 4

More information

A Study on the control Method of 3-Dimensional Space Application using KINECT System Jong-wook Kang, Dong-jun Seo, and Dong-seok Jung,

A Study on the control Method of 3-Dimensional Space Application using KINECT System Jong-wook Kang, Dong-jun Seo, and Dong-seok Jung, IJCSNS International Journal of Computer Science and Network Security, VOL.11 No.9, September 2011 55 A Study on the control Method of 3-Dimensional Space Application using KINECT System Jong-wook Kang,

More information

Experiment 01 - RF Power detection

Experiment 01 - RF Power detection ECE 451 Automated Microwave Measurements Laboratory Experiment 01 - RF Power detection 1 Introduction This (and the next few) laboratory experiment explores the beginnings of microwave measurements, those

More information

Haptic presentation of 3D objects in virtual reality for the visually disabled

Haptic presentation of 3D objects in virtual reality for the visually disabled Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,

More information

Abstract. 1. Introduction

Abstract. 1. Introduction GRAPHICAL AND HAPTIC INTERACTION WITH LARGE 3D COMPRESSED OBJECTS Krasimir Kolarov Interval Research Corp., 1801-C Page Mill Road, Palo Alto, CA 94304 Kolarov@interval.com Abstract The use of force feedback

More information

Artificial Neural Network based Mobile Robot Navigation

Artificial Neural Network based Mobile Robot Navigation Artificial Neural Network based Mobile Robot Navigation István Engedy Budapest University of Technology and Economics, Department of Measurement and Information Systems, Magyar tudósok körútja 2. H-1117,

More information

ROBOT DESIGN AND DIGITAL CONTROL

ROBOT DESIGN AND DIGITAL CONTROL Revista Mecanisme şi Manipulatoare Vol. 5, Nr. 1, 2006, pp. 57-62 ARoTMM - IFToMM ROBOT DESIGN AND DIGITAL CONTROL Ovidiu ANTONESCU Lecturer dr. ing., University Politehnica of Bucharest, Mechanism and

More information

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Joan De Boeck, Karin Coninx Expertise Center for Digital Media Limburgs Universitair Centrum Wetenschapspark 2, B-3590 Diepenbeek, Belgium

More information

Proc. Int'l Conf. on Intelligent Robots and Systems (IROS '01), Maui, Hawaii, Oct. 29-Nov. 3, Comparison of 3-D Haptic Peg-in-Hole Tasks

Proc. Int'l Conf. on Intelligent Robots and Systems (IROS '01), Maui, Hawaii, Oct. 29-Nov. 3, Comparison of 3-D Haptic Peg-in-Hole Tasks Proc. Int'l Conf. on Intelligent Robots and Systems (IROS '1), Maui, Hawaii, Oct. 9-Nov. 3, 1. 1 Comparison of 3-D Haptic Peg-in-Hole Tasks in Real and Virtual Environments B. J. Unger, A. Nicolaidis,

More information

Trade of Sheet Metalwork. Module 7: Introduction to CNC Sheet Metal Manufacturing Unit 2: CNC Machines Phase 2

Trade of Sheet Metalwork. Module 7: Introduction to CNC Sheet Metal Manufacturing Unit 2: CNC Machines Phase 2 Trade of Sheet Metalwork Module 7: Introduction to CNC Sheet Metal Manufacturing Unit 2: CNC Machines Phase 2 Table of Contents List of Figures... 4 List of Tables... 5 Document Release History... 6 Module

More information

Summary of robot visual servo system

Summary of robot visual servo system Abstract Summary of robot visual servo system Xu Liu, Lingwen Tang School of Mechanical engineering, Southwest Petroleum University, Chengdu 610000, China In this paper, the survey of robot visual servoing

More information

Instruction Manual for HyperScan Spectrometer

Instruction Manual for HyperScan Spectrometer August 2006 Version 1.1 Table of Contents Section Page 1 Hardware... 1 2 Mounting Procedure... 2 3 CCD Alignment... 6 4 Software... 7 5 Wiring Diagram... 19 1 HARDWARE While it is not necessary to have

More information

CS 443: Imaging and Multimedia Cameras and Lenses

CS 443: Imaging and Multimedia Cameras and Lenses CS 443: Imaging and Multimedia Cameras and Lenses Spring 2008 Ahmed Elgammal Dept of Computer Science Rutgers University Outlines Cameras and lenses! 1 They are formed by the projection of 3D objects.

More information

A Comparison of the Multiscale Retinex With Other Image Enhancement Techniques

A Comparison of the Multiscale Retinex With Other Image Enhancement Techniques A Comparison of the Multiscale Retinex With Other Image Enhancement Techniques Zia-ur Rahman, Glenn A. Woodell and Daniel J. Jobson College of William & Mary, NASA Langley Research Center Abstract The

More information

Super resolution with Epitomes

Super resolution with Epitomes Super resolution with Epitomes Aaron Brown University of Wisconsin Madison, WI Abstract Techniques exist for aligning and stitching photos of a scene and for interpolating image data to generate higher

More information

Virtual Sculpting and Multi-axis Polyhedral Machining Planning Methodology with 5-DOF Haptic Interface

Virtual Sculpting and Multi-axis Polyhedral Machining Planning Methodology with 5-DOF Haptic Interface Virtual Sculpting and Multi-axis Polyhedral Machining Planning Methodology with 5-DOF Haptic Interface Weihang Zhu and Yuan-Shin Lee* Department of Industrial Engineering North Carolina State University,

More information

2 CHAPTER 1. INTRODUCTION The rst step in comparing two images is removing as many of these factors as possible, which is a process referred to as nor

2 CHAPTER 1. INTRODUCTION The rst step in comparing two images is removing as many of these factors as possible, which is a process referred to as nor Chapter 1 Introduction 1.1 Motivation In the last half of the 19th century people commonly went to a photographic studio for portraits. Photography was still in its infancy, resulting in blackand-white

More information

UvA Rescue Team Description Paper Infrastructure competition Rescue Simulation League RoboCup Jo~ao Pessoa - Brazil

UvA Rescue Team Description Paper Infrastructure competition Rescue Simulation League RoboCup Jo~ao Pessoa - Brazil UvA Rescue Team Description Paper Infrastructure competition Rescue Simulation League RoboCup 2014 - Jo~ao Pessoa - Brazil Arnoud Visser Universiteit van Amsterdam, Science Park 904, 1098 XH Amsterdam,

More information

On Application of Virtual Fixtures as an Aid for Telemanipulation and Training

On Application of Virtual Fixtures as an Aid for Telemanipulation and Training On Application of Virtual Fixtures as an Aid for Telemanipulation and Training Shahram Payandeh and Zoran Stanisic Experimental Robotics Laboratory (ERL) School of Engineering Science Simon Fraser University

More information

Maze Solving Algorithms for Micro Mouse

Maze Solving Algorithms for Micro Mouse Maze Solving Algorithms for Micro Mouse Surojit Guha Sonender Kumar surojitguha1989@gmail.com sonenderkumar@gmail.com Abstract The problem of micro-mouse is 30 years old but its importance in the field

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information

Robust Haptic Teleoperation of a Mobile Manipulation Platform

Robust Haptic Teleoperation of a Mobile Manipulation Platform Robust Haptic Teleoperation of a Mobile Manipulation Platform Jaeheung Park and Oussama Khatib Stanford AI Laboratory Stanford University http://robotics.stanford.edu Abstract. This paper presents a new

More information

Design and Control of the BUAA Four-Fingered Hand

Design and Control of the BUAA Four-Fingered Hand Proceedings of the 2001 IEEE International Conference on Robotics & Automation Seoul, Korea May 21-26, 2001 Design and Control of the BUAA Four-Fingered Hand Y. Zhang, Z. Han, H. Zhang, X. Shang, T. Wang,

More information

Interior Design using Augmented Reality Environment

Interior Design using Augmented Reality Environment Interior Design using Augmented Reality Environment Kalyani Pampattiwar 2, Akshay Adiyodi 1, Manasvini Agrahara 1, Pankaj Gamnani 1 Assistant Professor, Department of Computer Engineering, SIES Graduate

More information

Performance Issues in Collaborative Haptic Training

Performance Issues in Collaborative Haptic Training 27 IEEE International Conference on Robotics and Automation Roma, Italy, 1-14 April 27 FrA4.4 Performance Issues in Collaborative Haptic Training Behzad Khademian and Keyvan Hashtrudi-Zaad Abstract This

More information

On Observer-based Passive Robust Impedance Control of a Robot Manipulator

On Observer-based Passive Robust Impedance Control of a Robot Manipulator Journal of Mechanics Engineering and Automation 7 (2017) 71-78 doi: 10.17265/2159-5275/2017.02.003 D DAVID PUBLISHING On Observer-based Passive Robust Impedance Control of a Robot Manipulator CAO Sheng,

More information

Shoichi MAEYAMA Akihisa OHYA and Shin'ichi YUTA. University of Tsukuba. Tsukuba, Ibaraki, 305 JAPAN

Shoichi MAEYAMA Akihisa OHYA and Shin'ichi YUTA. University of Tsukuba. Tsukuba, Ibaraki, 305 JAPAN Long distance outdoor navigation of an autonomous mobile robot by playback of Perceived Route Map Shoichi MAEYAMA Akihisa OHYA and Shin'ichi YUTA Intelligent Robot Laboratory Institute of Information Science

More information

An intro to CNC Machining

An intro to CNC Machining An intro to CNC Machining CNC stands for Computer Numeric Control. CNC machining involves using a machine controlled by a computer to machine material. Generally the machine is either a milling machine

More information

+ - X, Y. Signum of Laplacian of Gaussian. Original X, Y X, Y DIGITIZER. Update stored image LIVE IMAGE STORED IMAGE. LoG FILTER.

+ - X, Y. Signum of Laplacian of Gaussian. Original X, Y X, Y DIGITIZER. Update stored image LIVE IMAGE STORED IMAGE. LoG FILTER. Real-Time Video Mosaicking of the Ocean Floor Richard L. Marks Stephen M. Rock y Michael J. Lee z Abstract This research proposes a method for the creation of real-time video mosaics of the ocean oor.

More information

Put Your Designs in Motion with Event-Based Simulation

Put Your Designs in Motion with Event-Based Simulation TECHNICAL PAPER Put Your Designs in Motion with Event-Based Simulation SolidWorks software helps you move through the design cycle smarter. With flexible Event-Based Simulation, your team will be able

More information

Wavelet Transform. From C. Valens article, A Really Friendly Guide to Wavelets, 1999

Wavelet Transform. From C. Valens article, A Really Friendly Guide to Wavelets, 1999 Wavelet Transform From C. Valens article, A Really Friendly Guide to Wavelets, 1999 Fourier theory: a signal can be expressed as the sum of a series of sines and cosines. The big disadvantage of a Fourier

More information

The Application of Human-Computer Interaction Idea in Computer Aided Industrial Design

The Application of Human-Computer Interaction Idea in Computer Aided Industrial Design The Application of Human-Computer Interaction Idea in Computer Aided Industrial Design Zhang Liang e-mail: 76201691@qq.com Zhao Jian e-mail: 84310626@qq.com Zheng Li-nan e-mail: 1021090387@qq.com Li Nan

More information

Isometric Drawings. Figure A 1

Isometric Drawings. Figure A 1 A Isometric Drawings ISOMETRIC BASICS Isometric drawings are a means of drawing an object in picture form for better clarifying the object s appearance. These types of drawings resemble a picture of an

More information

Descriptive Geometry Courses for Students of Architecture On the Selection of Topics

Descriptive Geometry Courses for Students of Architecture On the Selection of Topics Journal for Geometry and Graphics Volume 4 (2000), No. 2, 209 222. Descriptive Geometry Courses for Students of Architecture On the Selection of Topics Claus Pütz Institute for Geometry and Applied Mathematics

More information

PATH CLEARANCE USING MULTIPLE SCOUT ROBOTS

PATH CLEARANCE USING MULTIPLE SCOUT ROBOTS PATH CLEARANCE USING MULTIPLE SCOUT ROBOTS Maxim Likhachev* and Anthony Stentz The Robotics Institute Carnegie Mellon University Pittsburgh, PA, 15213 maxim+@cs.cmu.edu, axs@rec.ri.cmu.edu ABSTRACT This

More information

Motion Manipulation Techniques

Motion Manipulation Techniques Motion Manipulation Techniques You ve already been exposed to some advanced techniques with basic motion types (lesson six) and you seen several special motion types (lesson seven) In this lesson, we ll

More information

Experiment P02: Understanding Motion II Velocity and Time (Motion Sensor)

Experiment P02: Understanding Motion II Velocity and Time (Motion Sensor) PASCO scientific Physics Lab Manual: P02-1 Experiment P02: Understanding Motion II Velocity and Time (Motion Sensor) Concept Time SW Interface Macintosh file Windows file linear motion 30 m 500 or 700

More information

Digital images. Digital Image Processing Fundamentals. Digital images. Varieties of digital images. Dr. Edmund Lam. ELEC4245: Digital Image Processing

Digital images. Digital Image Processing Fundamentals. Digital images. Varieties of digital images. Dr. Edmund Lam. ELEC4245: Digital Image Processing Digital images Digital Image Processing Fundamentals Dr Edmund Lam Department of Electrical and Electronic Engineering The University of Hong Kong (a) Natural image (b) Document image ELEC4245: Digital

More information

Telerobotics and Virtual Reality. Swiss Federal Institute of Technology. experiments, we are still in a phase where those environments

Telerobotics and Virtual Reality. Swiss Federal Institute of Technology. experiments, we are still in a phase where those environments September 10 12, 1997 in Geneva, Switzerland. \KhepOnTheWeb" : An Experimental Demonstrator in Telerobotics and Virtual Reality Olivier Michel, Patrick Saucy and Francesco Mondada Laboratory of Microcomputing

More information

Haptic Virtual Fixtures for Robot-Assisted Manipulation

Haptic Virtual Fixtures for Robot-Assisted Manipulation Haptic Virtual Fixtures for Robot-Assisted Manipulation Jake J. Abbott, Panadda Marayong, and Allison M. Okamura Department of Mechanical Engineering, The Johns Hopkins University {jake.abbott, pmarayong,

More information