Virtual Environment for Teleoperation of Forwarder Crane


Virtual Environment for Teleoperation of Forwarder Crane

Simon Westerberg

June 5, 2007

Master's Thesis in Computing Science, 20 credits
Supervisor at CS-UmU: Niclas Börlin
Examiner: Per Lindström

Umeå University
Department of Computing Science
SE UMEÅ SWEDEN


Abstract

At the research center IFOR (Intelligenta Fordon Off-Road), one long-term goal is the development of fully autonomous forest machines. Some of the driving objectives are to increase productivity and to improve the working environment for the operator. The development of teleoperated vehicles is one step in this direction; it does, however, still face a number of challenges. This thesis examines how virtual environment technology can be used to address some of these challenges. Furthermore, it describes the development of an application prototype that adds virtual environment support to a crane teleoperation system. The software includes a scene graph based visualization of a crane model that reflects the configuration of the physical crane by responding to sensor data. The application allows for supervisory control of the crane over the Internet, by letting the user specify a target position or path for the crane tip. It also supports a dynamic environment, where an environment sensing system can be used to add objects. Although no reliable environment sensor system exists today, the introduction of such a system would give the virtual environment solution several advantages compared to streaming video feedback.


Contents

1 Introduction
  1.1 Background
  1.2 Problem description
  1.3 Outline

2 Virtual environments
  2.1 Representing geometric models
  2.2 Scene graphs

3 Teleoperation
  3.1 Controlling the robot
    3.1.1 Joint control
    3.1.2 Cartesian coordinate control
    3.1.3 Supervisory control
  3.2 Time delay

4 Environment sensors
  4.1 Time-of-flight laser scanner
  4.2 Structured light
  4.3 Camera vision
    4.3.1 Object recognition
    4.3.2 Stereo vision

5 VE-assisted teleoperation
  5.1 Using VE for supervisory control
    5.1.1 Planning
    5.1.2 Communication
    5.1.3 Action
    5.1.4 Supervision
  5.2 Challenges

6 Tools
  6.1 Crane
  6.2 OpenSceneGraph
  6.3 Libxml2
  6.4 MATLAB and Simulink

7 Implementation and results
  7.1 System description
    7.1.1 CraneVE
    7.1.2 Input processing system
    7.1.3 Environment sensor interface
  7.2 Environment visualization
    7.2.1 The scene graph
  7.3 Operating the crane
  7.4 Collision avoidance

8 Discussion
  8.1 Future work
    Environment sensors
    Full vehicle support
    User interface
  8.2 Summary and outlook

Acknowledgements

References

A CraneVE settings DTD
B Default CraneVE settings XML
C Computer vision interface API
  C.1 sendconvexhull
  C.2 sendlog
  C.3 sendman

Chapter 1 Introduction

This thesis describes the development of a virtual environment application prototype for teleoperation of a forest forwarder crane.

1.1 Background

The forest industry is continuously developing new products and technological solutions to increase productivity. Harvesters and forwarders (Figure 1.1), forest machines for cutting trees and transporting timber respectively, are constantly improved. The research center IFOR (Intelligenta Fordon Off-Road) supports the development of vehicles for off-road environments, and one of its long-term goals is fully autonomous forest machines. Apart from increasing productivity, a driving objective is to relieve the operators from a stressful working environment full of noise and vibration. Furthermore, an autonomous vehicle has no need for a cabin, so the machine can also be made smaller and lighter, resulting in higher maneuverability and less environmental damage (Figure 1.2) [1]. There is, however, still a long way to go until fully autonomous machines become a reality. Further research is needed in areas like navigation and localization, artificial intelligence, sensors and perception before a reliable autonomous system can be presented. In the shorter term, it seems more realistic to develop semi-autonomous systems [2]. One technique that fulfills many of the objectives for autonomous machines without sharing all of their limitations is teleoperation, i.e. controlling the vehicle remotely. This has the advantages of removing the human operator from the vehicle, leading both to an improved working environment and a cabin-free vehicle, without the concern of whether the computer can be fully trusted to make correct decisions. Remotely controlled forest machines are also closer to commercial use than autonomous machines.

Figure 1.1: The forwarder is used for transporting logs from the felling site to a roadside landing area.

Figure 1.2: A conceptual image of an autonomous (or remote controlled) forwarder. In an unmanned vehicle the cabin can be removed, resulting in a smaller and lighter machine, with less fuel consumption and environmental damage.

For example, at the research institute Skogforsk, a remote controlled harvester called Besten has been developed [3, 4]. The unmanned harvester is meant to be used together with two manned forwarders and is maneuvered by the forwarder operators. Preliminary studies indicate that this kind of remotely controlled vehicle can be economically viable. It has also been shown that Besten can decrease fuel consumption by 20-40% compared to ordinary forwarder and harvester systems. Remote controlled machines are furthermore cheaper to produce and can run at a lower cost while maintaining, or in some cases increasing, the productivity. Additionally, they can improve the working environment for the operators.

1.2 Problem description

The aim of this thesis project is to construct a virtual environment for remote control of a forest forwarder crane. The resulting application prototype includes the following components:

- A 3D visualization system for the crane at the Smart Crane Lab. The program should display the crane and its environment.
- Two-way interaction with the existing crane control system through network communication. The virtual crane model should reflect the configuration of the physical crane by responding to sensor data sent over the network. The virtual environment application should be able to control the physical crane by sending a target position for the crane tip to the crane control system.
- A user interface for assigning a crane tip target position or path. Possible means for controlling the target position include mouse and joystick.
- Support for a dynamic environment, where objects (like logs, rocks, etc.) can be added and removed. The interface should allow object manipulation by hand as well as automatic object generation from environment sensor data.

1.3 Outline

Chapter 1 gives a short introduction to research in autonomous machines and the use of teleoperation in the forest industry. Chapters 2, 3 and 4 contain background theory about virtual environments, teleoperation and computer vision, respectively. Chapter 5 discusses the use of virtual environments for teleoperation, and how it compares to remote control with video feedback. Chapter 6 provides an overview of the software and hardware tools used during the implementation of the virtual environment application.

Chapter 7 contains a description of the resulting prototype. Chapter 8 presents a discussion of the results and future work that can be done in order to further develop the prototype.

Chapter 2 Virtual environments

Many attempts have been made to define the term virtual environment (VE) [5, 6]. A common definition is a computer generated virtual world containing three-dimensional objects. One or more users should be able to view these objects and manipulate some of them through some kind of man-machine interface. In some definitions, virtual environment is considered to mean something more similar to virtual reality (VR), and the two terms are in some cases used interchangeably [7]. In this case, definitions often include requirements concerning the equipment that is used for viewing and/or interacting with the environment. One example of such equipment is head-mounted displays, or other kinds of stereo vision technology. Hand tracking equipment and spatial audio are also mentioned as technological requirements [7, 8]. Throughout this report, virtual environment will mean a virtual world with a three-dimensional graphical representation. No consideration is taken as to whether, or how, the physical and the virtual worlds are connected, nor to what technologies are used for sensing or manipulating the virtual world.

2.1 Representing geometric models

A virtual environment is spatial in nature. Properties in the VE, like appearance, sound and light sources, are all associated with locations in the three-dimensional virtual space. It is the task of a VE application to represent, organize, and control this spatial information, making it possible to, e.g., visualize the virtual scene from an arbitrary view [9]. Although virtual environments can contain information for other senses, visual perception is the most commonly used. This means that the focus of VE systems is to represent virtual objects as visual geometric models with spatial locations [9]. There are many ways to describe geometric models. One way is to use low-level primitives like points, lines and polygons. Another way is to represent the model with high-level primitives like cubes, cylinders and ellipsoids.

Low-level primitives have the advantage of good hardware support, which increases performance. Many of the high-level primitives, on the other hand, cannot be rendered quickly enough. Before they can be rendered, they must be converted to low-level representations, which in most cases means polygons. Most geometries can be reasonably approximated by polygons, a process known as tessellation [10]. The most widely used application programming interfaces (API) for 3D graphics are OpenGL and Microsoft DirectX. They operate on low-level primitives, and these operations are supported by most modern graphics acceleration hardware [11, 12]. Writing 3D graphics or virtual environment software directly in e.g. OpenGL can be beneficial in some situations where you want full control over all low-level commands. The drawback is that programming in a low-level graphics API tends to lead to complex and inflexible code [13]. As a clarifying example, consider a graphics application, one of whose tasks is to draw a cone geometry. The cone is a high-level primitive that must be tessellated before rendering. An application using OpenGL will thus repeatedly loop through a list of polygons that represent the cone geometry and draw each polygon on the screen. But for the programmer, it is easier to handle the cone as one object with properties like position, length, radius and rotation, than as a large number of vertices and edges. This abstraction hides the complex details and allows the application developer to focus on what to render, rather than how to render it. Furthermore, it implies a separation between the scene data and the procedures that operate on the data, resulting in code that is easier to create and maintain [13]. There are, however, few geometries that can be described in such a simple way as the cone. A majority of the three-dimensional geometry models used in graphics applications are represented by various polygon meshes. Other geometrical models can have more advanced representations like non-uniform rational B-spline surfaces (NURBS) or subdivision surfaces [10]. To abstract away such representation details and to reduce the code complexity, one intuitive solution is to define an abstract data type that encapsulates the internal representation of the geometry.
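As a rough sketch of such an abstraction (illustrative only; the class and its members are invented for this example and are not part of any particular library), a cone can be exposed through a few high-level properties while the tessellation into triangles stays hidden behind its interface:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Hypothetical high-level primitive: callers work with apex, radius and
    // height, while the conversion to low-level triangles stays internal.
    class Cone {
    public:
        Cone(Vec3 apex, float radius, float height, int segments = 32)
            : apex_(apex), radius_(radius), height_(height), segments_(segments) {}

        // Tessellate the lateral surface into a triangle list: one triangle per
        // segment, formed by the apex and two points on the base circle.
        std::vector<Vec3> tessellate() const {
            std::vector<Vec3> triangles;
            const float baseZ = apex_.z - height_;
            for (int i = 0; i < segments_; ++i) {
                float a0 = 2.0f * 3.14159265f * i / segments_;
                float a1 = 2.0f * 3.14159265f * (i + 1) / segments_;
                triangles.push_back(apex_);
                triangles.push_back({apex_.x + radius_ * std::cos(a0),
                                     apex_.y + radius_ * std::sin(a0), baseZ});
                triangles.push_back({apex_.x + radius_ * std::cos(a1),
                                     apex_.y + radius_ * std::sin(a1), baseZ});
            }
            return triangles;  // these triangles are what would be handed to OpenGL
        }

    private:
        Vec3 apex_;
        float radius_, height_;
        int segments_;
    };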

In some application areas, representing each model as an instance of an abstract data type can be enough. However, sometimes it may be necessary to consider relations between different objects. Imagine a scene containing a person and a hat as two separate models. Assume that the person is wearing the hat. When the person moves, we want the hat to follow; we have an attachment relation between the two objects [9]. In order to make the hat stick to the person, we cannot simply merge the geometries of the two models, because we want to be able to separate them again when the person takes the hat off. In order not to have to manually keep track of which objects are attached to each other, our data structure can be extended to support attachment relations between objects.

Figure 2.1: (a,b) Some geometries in local coordinate systems.

2.2 Scene graphs

One tool that allows high-level geometry representations and in addition aids in solving the problem of attachment relations is the scene graph. As the name suggests, a scene graph is a data structure that contains the current configuration of every object inside the virtual world, or scene. The scene graph is a vital part of many 3D graphics software libraries, including OpenSceneGraph, Java3D, OpenSG and X3D. Scene graphs are used in several different graphics areas including computer games, simulators and virtual reality research projects [14]. The scene graph data structure is a directed acyclic graph (DAG), which means that the relationship between two nodes is directed; for any two connected nodes, one is said to be the parent of the other. Furthermore, no cycles are allowed, i.e. no node can be a parent of any of its ancestors [15]. In its simplest form, the scene graph is a tree. The graph has one node at the top level, the root node, which is the parent of the whole tree [16]. The leaf nodes contain geometry models, each in its own local coordinate system (Figure 2.1). If the leaf nodes are directly below the root node, the local coordinates are mapped directly to world coordinates (Figure 2.2). A virtual world is not static but changes according to user input and internal rules of object behavior, e.g. physical simulation. To reflect changes of an object in the virtual world, a transform node is used. It allows for translating an object to a different location, as well as rotating and scaling the object. The transform node is inserted above geometry nodes and applies a transformation to everything in its subtree (Figure 2.3).

Figure 2.2: (a) A simple scene graph containing the two geometries found in Figure 2.1. (b) The resulting scene.

Figure 2.3: (a) Transform nodes are inserted into the scene graph to allow translation, rotation and scaling of the geometries. (b) The resulting scene.

It is possible for a transform node to have several children, in which case all geometries will be affected by the same transformation [16]. Scene graphs gain additional qualities through the introduction of multiple levels of transform nodes. As mentioned earlier, one complexity-reducing property of a virtual environment data structure is the support of attachment relations. In the scene graph, this can be achieved by using transform nodes to organize objects into groups and to treat them as groups when e.g. moving them. As demonstrated in Figure 2.4, one transform can change the properties of a group of objects, while transforms deeper down in the tree change the individual objects of the group. This results in a hierarchy of objects in which operations can be made on different abstraction levels, all depending on how deep down the tree the operations are performed. This hierarchical concept can be extended by allowing any node in the graph to have attributes like material, texture, etc. that affect the whole subtree. All scene graph examples presented so far have been trees. In a general DAG, one node can have several parents.

Figure 2.4: (a) A scene graph with multiple levels of transform nodes containing the objects from Figure 2.1. (b) Changes in the top transform will result in a transformation of each object in the subtree below the transform node. This means that if the table is rotated, the box will be affected by the rotation as if it was attached to the table. (c) The box can still be moved relative to the table by changing the transform directly above the geometry node.

If, e.g., a leaf node has two parent transform nodes, the geometry in the leaf node will appear twice in the virtual world, with locations independently specified by the two transform nodes respectively (Figure 2.5). Virtual environments contain more than geometries, and scene graphs are often extended with nodes that reflect this. Additional leaf nodes can handle sounds, animations or behavior rules [17]. Further up in the graph, nodes for concepts like level of detail (LOD) can be introduced. LOD is a collection of algorithms that speed up the rendering process by switching between different models of an object based on how far away the object is. When an object is close to the view point, a model with a high detail level is used. When it is further away and thus contributes less to the rendered image, a simpler model with less detail (e.g. fewer vertices) is used. This technique increases the rendering speed [15].

Figure 2.5: (a,b) Two scene graphs that both correspond to a virtual scene with one table and two boxes. In (b), the same box geometry will be rendered twice with different transforms, resulting in a world that will appear equal to the world constructed by (a). Since only one box geometry is needed, the method used in (b) reduces memory usage.
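To make the table-and-box hierarchy of Figures 2.3 and 2.4 concrete, a graph of this kind could be built with OpenSceneGraph roughly as follows. This is only a sketch under assumed transform values and simple stand-in shapes; exact class names may vary between OpenSceneGraph versions.

    #include <osg/Group>
    #include <osg/MatrixTransform>
    #include <osg/Geode>
    #include <osg/ShapeDrawable>
    #include <osgViewer/Viewer>

    int main()
    {
        osg::ref_ptr<osg::Group> root = new osg::Group;

        // T1: moves the whole group (the table plus anything attached to it).
        osg::ref_ptr<osg::MatrixTransform> tableTransform = new osg::MatrixTransform;
        tableTransform->setMatrix(osg::Matrix::translate(2.0, 0.0, 0.0));

        // T2: moves the box relative to the table.
        osg::ref_ptr<osg::MatrixTransform> boxTransform = new osg::MatrixTransform;
        boxTransform->setMatrix(osg::Matrix::translate(0.0, 0.0, 0.8));

        // Leaf geometry nodes; simple boxes stand in for real table and box models.
        osg::ref_ptr<osg::Geode> table = new osg::Geode;
        table->addDrawable(new osg::ShapeDrawable(
            new osg::Box(osg::Vec3(0.0f, 0.0f, 0.4f), 1.2f, 0.8f, 0.05f)));
        osg::ref_ptr<osg::Geode> box = new osg::Geode;
        box->addDrawable(new osg::ShapeDrawable(
            new osg::Box(osg::Vec3(0.0f, 0.0f, 0.1f), 0.2f, 0.2f, 0.2f)));

        // Attachment relation: the box transform is a child of the table transform,
        // so moving or rotating T1 carries the box along, while T2 still moves the
        // box relative to the table.
        root->addChild(tableTransform);
        tableTransform->addChild(table);
        tableTransform->addChild(boxTransform);
        boxTransform->addChild(box);

        osgViewer::Viewer viewer;
        viewer.setSceneData(root.get());
        return viewer.run();
    }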

Chapter 3 Teleoperation

Teleoperation means that a human operator is manipulating the physical world from a distant location [18]. Traditionally, the operations have been performed by controlling an articulated robotic arm, but the theory can also be extended to apply to remote control of vehicles or other machines [19]. Current application areas for teleoperation range from robot-assisted surgery [20] to vehicles exploring the surface of Mars [21]. This chapter will present different ways to control a remote robot and how issues with time delay can be dealt with.

3.1 Controlling the robot

3.1.1 Joint control

The first teleoperation systems were built in the late 1940s. Their task was to aid in nuclear activities. Early systems had two similar mechanical arms. One of them, called the slave arm, was located in the room where the operations were to be performed. The other, the master arm, was located outside the room, together with a human operator. Corresponding joints of the two arms were mechanically connected, so that the slave arm exactly replicated the motion of the master arm [18]. For these mechanical systems, the distance between master and slave is limited by the length of the mechanical link. Furthermore, the relative position between the two arms must be kept constant [19]. A solution to these problems is to replace the mechanical links with electronically controlled motors and sensors, making it possible to control them from any distance. The operator still controls the slave robot by specifying each joint angle, but the information is no longer implicitly transferred through the mechanical connections. Instead, sensors read the joint angles of the master arm and motors move the joints of the slave arm correspondingly (Figure 3.1). With a mechanical connection between the two arms, the master arm automatically gives feedback to the user about the current position and motion of the slave arm.

Figure 3.1: When joint control is used for teleoperation, the angles of each joint are transferred to the slave arm and directly mapped to motion.

The user also experiences a physical resistance as feedback when moving the arm. The resistance force is equal to the amount of force that is required to move the master arm. Without the mechanical connection, this feedback is lost. To compensate for this, electronically controlled systems can introduce motors on the master arm. They can position the arm and add a resistance force, thus providing a feeling of how the slave arm responds to the control actions.

3.1.2 Cartesian coordinate control

In most cases, when controlling a robot arm, the actual angles of each joint are not interesting, but only a means to move the end effector to the desired position. Hence it is natural to think of a telerobotic system where the control is handled by sending control data as Cartesian coordinates, i.e. in which direction the end effector should move. In this case, the master user has no control over the joint rotations of the slave arm. If the end effector of the master arm is moved forward, the slave arm moves forward. This means that the master arm no longer has to be an exact model of the slave arm, but can have another kinematic design [19]. An example of Cartesian coordinate control is using a computer mouse to move the tip of a jointed robot arm in a horizontal plane (Figure 3.2).

Figure 3.2: When Cartesian coordinate control is used, the coordinates for the end effector are specified by the master device. These coordinates must be translated by inverse kinematics to joint angles before the corresponding motion can be initiated.

The boom tip will always aim to be at a position directly mapped to the mouse coordinates. Depending on the robot size, there can be a scaling factor involved in the translation from the mouse coordinates to the end effector coordinates. Since the slave robot arm is physically controlled by joint rotations, the end effector coordinates have to be translated to the joint angles that put the tip of the arm into the desired position. This is known as inverse kinematics. An inverse kinematics problem for an arm with two joints can have zero, one or two solutions. For an arm with more than two joints, there is often an infinite number of solutions for a given target point. In this case the task is to find an optimal solution [22].
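To make the two-joint case concrete, the sketch below shows the standard closed-form solution for a planar arm with link lengths l1 and l2 (a generic textbook construction, not the crane control system's implementation). It returns zero, one or two joint-angle pairs depending on whether the target point is reachable.

    #include <cmath>
    #include <utility>
    #include <vector>

    // Closed-form inverse kinematics for a planar two-link arm with link lengths
    // l1 and l2. Returns the (shoulder, elbow) angle pairs that place the end
    // effector at (x, y): no pair if the target is out of reach, one pair if the
    // arm is fully stretched, otherwise two (elbow-up and elbow-down).
    std::vector<std::pair<double, double>>
    twoLinkInverseKinematics(double l1, double l2, double x, double y)
    {
        std::vector<std::pair<double, double>> solutions;
        const double r2 = x * x + y * y;
        // The law of cosines gives the cosine of the elbow angle.
        const double c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
        if (c2 < -1.0 || c2 > 1.0)
            return solutions;                      // target unreachable

        for (double sign : {1.0, -1.0}) {
            double theta2 = sign * std::acos(c2);  // elbow angle
            double k1 = l1 + l2 * std::cos(theta2);
            double k2 = l2 * std::sin(theta2);
            double theta1 = std::atan2(y, x) - std::atan2(k2, k1);  // shoulder angle
            solutions.push_back({theta1, theta2});
            if (std::fabs(theta2) < 1e-9)          // both elbow signs coincide
                break;
        }
        return solutions;
    }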

3.1.3 Supervisory control

Imagine maneuvering a vehicle on the moon from a control room on earth. The control signals are limited by the speed of light and hence the minimum time it takes before they reach the moon is about 1.5 seconds. This also applies to the feedback signals that are returned to earth. A human operator that moves the vehicle by coordinate control can only move the vehicle a small, safe distance. The operator must then wait three seconds for the feedback signals before moving again. This move-and-wait method is not very efficient [19].

Figure 3.3: Supervisory control lets the human operator control the slave robot by producing a high-level plan. This plan includes a goal state as well as the information and rules that are needed to find a solution to how the robot should act to reach the state. When the solution requires robot arm motion, it can be broken down to a sequence of end effector positions, which then in turn are translated to joint coordinates and motion.

An alternative control method is to specify a goal, e.g. a target position specifying where the vehicle should go. The goal is presented to the vehicle along with instructions that allow the vehicle to independently calculate how to get there as well as how to respond to situations that can occur on the way. With this method, the vehicle can be constantly moving, with the human operator supervising the process. If any conditions change, the operator can intervene and set a new goal. This method is part of a concept called supervisory control, which can be applied not only to motion control, but also to other tasks. For robot teleoperation, the idea is to let the robot be in control of simpler actions, while the operator provides the robot with goals, or plans, on a higher abstraction level (Figure 3.3). The operator's task is then to supervise the robot's actions instead of directly controlling them [23].

3.2 Time delay

Not all systems have as large a delay as the lunar vehicle example in Section 3.1.3. Nonetheless, as long as any move-and-wait occurs, remotely performed direct control will cause some amount of performance penalty due to communication time delays. With move-and-wait, the minimum time it takes to complete a task is

    T = N(d + a)    (3.1)

where d is the round-trip time delay and a is the amount of time that the human operator can move the slave arm without feedback, or in the case of supervisory control, the time that the slave can move without intervention from the human operator. N is the number of such moves that are needed to complete the task. This means that the total time delay for the task is Nd. By reducing N, the time delay will decrease. This can be achieved by supervisory control that works on a high abstraction level, i.e. only a few high-level commands are given and the lower-level details of the control are handled by the slave robot control system [19]. Another time delay-related problem is that the state of the robot representation (for example camera images or an object in a virtual environment) at the master location does not show the current state of the slave robot, but a previous state. This difference between the states depends on the internal delay of the sensor equipment as well as the transmission delay of the sensor data. When the transmission is delayed, the state on which the decisions of the human operator are based will differ from the actual state of the robot, leading to less exact control. In this case, using a high-level method like supervisory control can be more efficient [24].
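As a hypothetical numerical illustration of Equation 3.1 (the values are invented for the example and do not come from the thesis): with a round-trip delay of d = 3 s, a safe unsupervised move time of a = 2 s, and N = 10 moves, the task takes at least T = 10(3 + 2) = 50 s, of which Nd = 30 s is pure waiting. If higher-level supervisory commands reduce the task to N = 2 moves of a = 10 s each, the waiting share drops to Nd = 6 s even though the total commanded motion time is the same.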


Chapter 4 Environment sensors

One of the largest technological challenges in the development of a complete VE teleoperation system is the act of sensing the physical environment in a robust and exact way. Several techniques are used for acquisition of environment data. The strengths and weaknesses of each technique depend on the kind of environment it is applied to. For a known and static environment, environment sensing techniques are used to determine the location of a robot or vehicle. By detecting natural or artificial reference points, or landmarks, in the data that describes the environment, the position of the sensor device can be calculated. For an unknown environment, the demands are higher. The sensor system must detect features in the sensor data that allow the system to e.g. separate nearby objects and classify each object. This chapter presents some different techniques that can be used to acquire three-dimensional data about objects in the environment.

4.1 Time-of-flight laser scanner

One example of a technique for range measuring is the time-of-flight laser scanner. The scanner emits a pulse of laser light and measures the time it takes for the reflected light to return to a detector. This time multiplied by the speed of light gives us the distance that the light pulse has travelled. This round-trip distance divided by two equals the distance to the surface that reflected the light. Since this only gives the distance to one point in space, the laser is rotated, so that each pulse is emitted in a new direction. This is usually done by a system of rotating mirrors. When rotated around one axis, the scanner gives measurements in one plane. This can be useful for localization in indoor environments, where e.g. walls and doors can be easily detected. In other environments this technique is less useful, since it is unable to e.g. find obstacles below or above the scanned plane. In order to achieve this, the laser must be rotated in two dimensions, scanning one plane at a time.
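The range computation itself is a one-liner; a minimal sketch (not tied to any particular scanner model):

    // Range from a time-of-flight measurement: the pulse travels to the surface
    // and back, so the one-way distance is half of (speed of light * round-trip time).
    double rangeFromTimeOfFlight(double roundTripSeconds)
    {
        const double speedOfLight = 299792458.0;   // m/s
        return speedOfLight * roundTripSeconds / 2.0;
    }
    // Example: a round-trip time of about 66.7 ns corresponds to a surface roughly 10 m away.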

Figure 4.1: Principles of structured light range sensing, using a scanned line method. A plane of light is projected onto the environment, resulting in an illuminated line on intersecting objects. The reflected light can then be detected by a CCD camera. The 3D point corresponding to a pixel in the camera image can be found by drawing a line from the camera center through the pixel, and then finding the intersection point between that line and the plane of illumination.

4.2 Structured light

Another range measuring technique is structured light. It is a collection of various methods where a laser or conventional light source projects a pattern of light onto the environment. The reflected light is recognized by a sensor and used to determine the three-dimensional structure of the environment. One approach is to use a single line pattern that is scanned across the scene. A sensor, most often a CCD camera, records the resulting pattern. By using the knowledge of the position and orientation of both the light source and the sensor, it is possible to retrieve information about the shape of objects (Figure 4.1). This is often referred to as triangulation, since three points (the light source, the sensor and the 3D point on the object) form a triangle with known angles, making it possible to calculate the 3D position [25, 26]. An alternative to scanning the environment plane by plane is to scan a large part of the environment at once. This can be achieved by projecting a two-dimensional pattern over a larger area of the environment. One commonly used pattern is the fringe pattern, with alternating bright and dark stripes. With this pattern, the borders between the bright and the dark stripes form lines that can be detected in the same way as with a single line pattern.
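In the scanned line case, the geometry of Figure 4.1 reduces to intersecting the back-projected camera ray with the known plane of light. A generic sketch (assuming the light plane is given in Hessian normal form n.X = d and the ray starts at the camera center C with direction v; this is standard geometry, not code from the thesis system):

    #include <cmath>
    #include <optional>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Intersect the back-projected camera ray X(t) = C + t*v with the light plane
    // n.X = d. Returns the 3D surface point, or nothing if the ray runs parallel
    // to the plane or the intersection lies behind the camera.
    std::optional<Vec3> intersectRayWithLightPlane(const Vec3& C, const Vec3& v,
                                                   const Vec3& n, double d)
    {
        const double denom = dot(n, v);
        if (std::abs(denom) < 1e-12)
            return std::nullopt;                  // ray parallel to the plane
        const double t = (d - dot(n, C)) / denom;
        if (t < 0.0)
            return std::nullopt;                  // intersection behind the camera
        return Vec3{C.x + t * v.x, C.y + t * v.y, C.z + t * v.z};
    }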

The two-dimensional pattern approach is generally faster than the scanned line approach. On the other hand, it gives rise to another problem. Due to occlusions, it can be hard to determine which line in the captured image corresponds to which line in the projected light pattern [26]. A limitation with structured light methods is that any part of the environment that is not clearly visible from both the light source and the sensor will not be detected. Furthermore, strong ambient light or shiny materials can reduce the performance [25].

4.3 Camera vision

Camera vision deals with the use of conventional digital cameras in environment sensing tasks. Cameras are commonly used because of their low price and high availability. A camera image is a projection from 3D to 2D. During this projection, a lot of information about the 3D environment is lost. By using information from several images, some of this information can be recreated. This process is described in Section 4.3.2. Section 4.3.1 describes a simpler problem that can be solved with camera images, i.e. recognition of known objects.

4.3.1 Object recognition

Camera images can be used to gain knowledge about the environment by detecting known objects. This is known as object detection or object recognition. A reference image of an object is used to find the same object in another camera image. The object cannot be detected by a comparison of pixel values between the images. Changes in illumination can cause significant changes in pixel values between two images of the same object. Furthermore, when an image is taken from another distance or view angle than the reference image, a different projection is applied to the object, resulting in a different shape in the image. This requires that the comparison is based on object features that are invariant to (unchanged by) common transformations. The Scale Invariant Feature Transform (SIFT) [27] uses features that are invariant to translation, scaling and rotation, and partially invariant to illumination changes. The SIFT method also works for partly occluded objects. For a known environment with many known reference points, or landmarks, object recognition methods can be used for localization. When several landmarks with known positions are found in the same image, the camera position can be calculated.

4.3.2 Stereo vision

Camera images can also be used to obtain three-dimensional information about unknown objects. By using several images from different view points, the three-dimensional structure of the objects can be reconstructed.

Figure 4.2: Assume that the feature point x in the first image corresponds to an unknown point P. The point must lie on a line projected from C through x. This line is projected on the second image as the epipolar line l. The matching point in the second image must therefore lie on l.

First, interesting points in each image must be detected. These feature points are points that are distinct from their surroundings and likely to appear in other images of the same object. The Förstner interest operator [28] and the Harris corner detector [29] are two examples of methods that are used to select appropriate feature points. These two methods look for feature points that correspond to corners, by searching for local intensity changes. The next step is to match the feature points from the two images, i.e. for each point in the first image find a point that corresponds to the same point in three-dimensional space (Figure 4.2). The 3D point P that corresponds to a feature point x in the first image must lie somewhere on the line from the first camera center through x. Since P lies on this line, the projection of P on the second image, i.e. the matching feature point, must lie somewhere on the projection of the line on the second image. The projected line is known as the epipolar line [30]. When a pair of matched feature points is found, the next step is to find the 3D point that the image points are projected from (Figure 4.3). From each camera center, a line can be back-projected through each feature point. In the ideal case, the two lines intersect, giving the exact position of the 3D point.

Figure 4.3: Assume that x and x' are two matching feature points, i.e. they are projections of the same 3D point P. In order to find the location of P, imagine a line projected from each camera center C and C', through the corresponding feature point. Ideally, these two lines will intersect at some point in 3D space. The intersection will occur at the location of the point P.

In practice, however, image noise will introduce errors in the location of the feature points, resulting in non-intersecting lines. As a result, the position cannot be exactly determined but will have to be estimated.
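A common way to estimate the point from two back-projected rays that do not quite intersect is to take the midpoint of the shortest segment between them. The following is a plain linear-algebra sketch of that idea (a generic construction, not the method used in any particular system):

    struct Vec3 { double x, y, z; };

    static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 scale(const Vec3& a, double s)    { return {a.x * s, a.y * s, a.z * s}; }
    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Estimate the 3D point from two back-projected rays C1 + s*u and C2 + t*v,
    // where u and v are the ray directions through the matched feature points.
    // Because of noise the rays rarely intersect, so the midpoint of the shortest
    // segment between them is returned as the estimate. (The denominator goes to
    // zero for parallel rays, which a real implementation would have to handle.)
    Vec3 triangulateMidpoint(const Vec3& C1, const Vec3& u, const Vec3& C2, const Vec3& v)
    {
        const Vec3 w0 = sub(C1, C2);
        const double a = dot(u, u), b = dot(u, v), c = dot(v, v);
        const double d = dot(u, w0), e = dot(v, w0);
        const double denom = a * c - b * b;
        const double s = (b * e - c * d) / denom;   // parameter on the first ray
        const double t = (a * e - b * d) / denom;   // parameter on the second ray
        const Vec3 p1 = add(C1, scale(u, s));       // closest point on the first ray
        const Vec3 p2 = add(C2, scale(v, t));       // closest point on the second ray
        return scale(add(p1, p2), 0.5);             // midpoint = estimated 3D point
    }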


Chapter 5 VE-assisted teleoperation

Remote operations, especially when performed at a location beyond the operator's line of sight, demand a way of informing the human operator about what is happening at the location of the slave robot. Early systems used conventional video cameras to send visual information about the environment. This has proved to be an insufficient method, due to a number of frequently occurring problems; the user can for instance easily become disoriented, and it can be hard for the user to visually follow a subject of interest [18]. For long-distance systems and systems with low bandwidth, an even larger problem is that streaming video requires large bandwidth in order to provide a reasonable image quality. With a VE system, only the most important features are extracted from the images and sent over the network, while unnecessary information like object texture details can be omitted. Furthermore, the VE technology introduces user assistance that cannot be achieved, at least not as easily, with video camera based feedback.

5.1 Using VE for supervisory control

Supervisory control, as described in Section 3.1.3, can be said to consist of four phases: planning, communication, action and supervision. The following sections suggest how virtual environment technology can be useful in these different stages.

5.1.1 Planning

When planning what action to perform in an environment, the more you know about the environment, the easier it is to make a good plan. One obvious advantage of using VE for teleoperation, compared to video feedback, is that VE makes it possible to change the camera view without having to move any physical camera. This gives the operator a better overview of the environment than images from a static (or at least less dynamic) video camera.

Objects can also be made transparent, to allow the user to see occluded objects. The virtual environment can also present additional information that helps in the decision making. One way is to let the system generate perception information that is added on top of normal perception. If the task is to assemble mechanical parts by teleoperation, the system can project a preview of the final result onto the virtual model of the assembly area. A virtual environment can also change the modality of perceptive information that the system has recordings of but is unable to provide in its natural form. For instance, color can be used to show the temperature of an object's surface. Furthermore, the system can take part in the planning process. It can, e.g., analyze the terrain near a teleoperated vehicle and suggest a direction for the vehicle to move in, or it can highlight the virtual representation of objects that it considers appropriate to perform a certain action on.

5.1.2 Communication

When the plan has been created in the mind of the operator, the next step is to communicate the plan to the robot. Assume that the plan is to have the robot pick up a box and put it into a container. With a virtual environment, it might be possible for the operator to click on a virtual representation of the box using a conventional mouse, then on a move button and finally on the virtual container. Without a VE, this plan would likely be much harder to communicate.

5.1.3 Action

As described in Section 3.1.3, supervisory control requires that some tasks or actions are automated, such that they can be executed by the slave robot without human interaction. Tasks for a teleoperated robot might involve manipulation of the environment, or actions that in some other way are dependent on the current state of the environment. In such a case, the robot control system must have knowledge about the environment in order to perform the actions in a correct way.

5.1.4 Supervision

As in the planning phase, a virtual environment can give more information about the robot's actions than video cameras can offer. A VE also lets the user see how the robot perceives the world, or rather what information about the environment its decisions are based on. This makes it easier to make sure that the robot makes the right decision, but also to understand why a decision is made.

5.2 Challenges

A real-world VE-assisted teleoperation system demands an environment sensing system that can provide the VE with fairly correct information about the environment. None of the techniques presented in Chapter 4 meet the requirements in terms of performance and reliability. A number of challenges need to be faced before teleoperation of forest machines with VE feedback can be realized. Two of the major challenges are detection and classification. A detection system needs to be reliable. If e.g. the system reports an incorrect location or size of an obstacle, or if the obstacle remains undetected, the crane might hit the obstacle and cause material damage or even human injury. Furthermore, the system should work both at night and in full daylight. It should handle different weather conditions like snow and fog. The second challenge is classification. An environment sensor system for a forwarder must be able to separate logs, branches and mud. Additionally, it must be able to distinguish between different types of terrain. The classification must be reliable. It might be acceptable for the system to occasionally mistake a pile of branches for a log. It is, however, not acceptable to fail to detect a human inside the safety area.


Chapter 6 Tools

This chapter describes a number of hardware and software tools that were used during the implementation of the application prototype.

6.1 Crane

At the Smart Crane Lab at the Department of Applied Physics and Electronics, Umeå University, there is a physical installation of a hydraulic crane (Figure 6.1). The crane is manufactured by Cranab. The model is 370 RCR, a smaller version of the cranes that are mounted on common forwarder machines. The crane is powered by electro-hydraulic machinery and consists of an articulated boom that rotates around a vertical axis. The boom has two joints and an extendable telescope. At the end of the telescope there is a grapple (a forwarder gripping device) that can be rotated, opened and closed. There are a number of sensors attached to the crane. Angular sensors read the angles of the two joints; one between the base and the first link, the other between the first and the second link. A distance sensor measures the extension of the telescope. Currently, the crane is not equipped with any sensors for measuring the crane's rotation around the vertical axis, nor is it possible to sense the rotation of the grapple or whether it is open or closed.

6.2 OpenSceneGraph

OpenSceneGraph is a high-level 3D graphics toolkit in the form of a C++ API. It is built as an abstraction layer on top of OpenGL and contains a scene graph implementation as well as a number of other useful graphics utilities. The OpenSceneGraph library is open source software and released under the GNU Lesser General Public License (LGPL). It is highly portable and can be used on GNU/Linux, Windows, Mac OS X, Solaris and several other operating systems [14].

Figure 6.1: The crane at the Smart Crane Lab.

Apart from the data structure itself, the scene graph implementation includes support for creating and managing high-level geometries. Furthermore, each node can be associated with a state set, that is used to assign state attributes to the node. A large number of different attributes are implemented, e.g. texture, lighting and blend functions [14]. The software consists of three core libraries, listed in Table 6.1. OpenSceneGraph also contains a number of so-called node kits. A node kit is a plug-in library that adds features to the core scene graph library by adding additional nodes or state attributes [14]. Table 6.2 lists some of the available node kits.

Table 6.1: The core libraries of OpenSceneGraph.

  osg     - The scene graph implementation. Includes support for keyboard and mouse action events.
  osgdb   - A plugin library for reading from, and writing to, graphics data files. Includes plugins for 3D model file formats (.3dc, .flt, .ac, etc.) and image file formats (.jpg, .tif, .png, etc.) as well as a number of other plugins.
  osgutil - A library containing utility classes, both scene graph specific, such as tree traversers for culling, as well as general purpose utilities like polygon tessellation.

Table 6.2: OpenSceneGraph node kits.

  osgfx       - Implements a number of graphical effects, including bump mapping and anisotropic lighting.
  osgga       - A GUI abstraction library. Meant as a tool for integrating osg applications with window systems by abstracting away underlying windowing toolkits. Allows the applications to interact with different window systems like GLUT and Qt.
  osgparticle - Adds support for particle systems.

6.3 Libxml2

Libxml2, originally developed for the Gnome project, is an XML toolkit implementing functions for parsing, manipulating and writing XML data. The library supports Document Type Definition (DTD) validation and is capable of performing validations at parse time. It also includes an implementation of XML processing languages like XML Path Language (XPath) and XML Pointer Language (XPointer) [31]. The libxml2 library is written in C, but there exist language bindings and wrappers for a number of different programming languages like C++, Python, Perl, PHP, Ruby and Tcl. The software is open source and released under the MIT license [31].

6.4 MATLAB and Simulink

Simulink, developed by The MathWorks, is a platform for designing and simulating dynamic systems. The Simulink environment is integrated with MATLAB, adding an interactive graphical user interface, where systems are modeled as block diagrams [32]. A set of block libraries is provided, giving the user access to functions commonly used in dynamic system modeling like integrators, transfer functions and switches. Other block functions include logic and math operations and lookup tables. It also has different input blocks for data sources, as well as a number of output blocks that let the user monitor, log and analyze the data [32]. The models can be hierarchical, where a group of blocks and signals can be defined as a subsystem and form a single block in the higher level system [32]. By using so-called S-functions, external code written in MATLAB, C, or Fortran can be inserted into Simulink models. Simulink can also generate C code from block diagram models [32].

Chapter 7 Implementation and results

7.1 System description

This section describes the virtual environment software that has been developed in order to add virtual environment support to a crane teleoperation system. An overview of the complete teleoperation system can be seen in Figure 7.1. The virtual environment software developed during this thesis project consists of a visualization system, called CraneVE, and a system for processing user input. It also includes an interface for dynamically sending environment data to the virtual environment. This interface can either be used to send data manually, or it can be used by a future implementation of an environment sensor system to send data automatically. The VE support has been added to existing crane control and crane sensor systems. The control system has the task of controlling the crane hardware, while the sensor system reads data from the crane sensors. The following sections contain a closer description of the subsystems created during this project.

7.1.1 CraneVE

The central part of the complete teleoperation system depicted in Figure 7.1 is the CraneVE application. It organizes the data flow by handling the processed input from user and sensors and by sending the output to the crane control and graphics display. It also maintains the representations and knowledge of the crane, the environment and the operator's actions. The application is written in C++ and uses the OpenSceneGraph library for visualization, as well as for reading input from keyboard, mouse and graphics data files.

Settings

CraneVE allows the user to specify a number of settings. The settings are read from an XML file at program startup. The libxml2 software library is used for parsing the XML data.

Figure 7.1: The different parts of the complete crane teleoperation system.

The configurable parts concern the following:

- Window settings
- Network settings
- Data directory paths
- Physical crane dimensions
- Visualization details, colors, etc.

The Document Type Definition (DTD) that specifies the format of the XML settings file is available in Appendix A. An example XML file is shown in Appendix B.
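As a rough sketch of how a settings file of this kind might be read and DTD-validated with libxml2 (illustrative only; the file name, element names and error handling are assumptions, not the actual CraneVE code):

    #include <libxml/parser.h>
    #include <libxml/tree.h>
    #include <cstdio>

    int main()
    {
        // Parse the settings file and ask libxml2 to validate it at parse time
        // against the DTD referenced in its DOCTYPE declaration. Validation
        // problems are reported through libxml2's error handler.
        xmlDocPtr doc = xmlReadFile("craneve_settings.xml", nullptr,
                                    XML_PARSE_DTDVALID | XML_PARSE_NOBLANKS);
        if (!doc) {
            std::fprintf(stderr, "Could not parse settings file\n");
            return 1;
        }

        // Walk the top-level elements (e.g. window, network, crane) and hand
        // each one over to the corresponding part of the application.
        xmlNodePtr root = xmlDocGetRootElement(doc);
        for (xmlNodePtr node = root->children; node != nullptr; node = node->next) {
            if (node->type == XML_ELEMENT_NODE)
                std::printf("settings section: %s\n",
                            reinterpret_cast<const char*>(node->name));
        }

        xmlFreeDoc(doc);
        xmlCleanupParser();
        return 0;
    }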

7.1.2 Input processing system

The input processing system is built using Simulink. The system registers joystick input and converts it to a 3D position that is used in the process of generating crane motion. The crane motion control procedure is described in detail in Section 7.3.

7.1.3 Environment sensor interface

To make use of the features of a virtual environment in a teleoperation system, it must contain a representation of the physical environment, at least of the objects that are to be manipulated. To support the integration of environment sensors into the teleoperation system, a MATLAB API was created. It allows an environment sensor system to send data about objects in the physical environment to the CraneVE system. The API has methods for sending objects of the type Log, Man and Convex Hull, as well as for removing objects. The interface is specified in Appendix C.

7.2 Environment visualization

Figure 7.2: Two screen captures of the CraneVE application showing the two different static environments: indoor and outdoor.

The virtual environment supports static and dynamic objects. The static objects are created at the start of the program. They consist of the crane and two different environments that the user can switch between. The first is an indoor environment replicating the interiors of the Smart Crane Lab. The other is an outdoor environment with a forwarder model and terrain (Figure 7.2). The dynamic part of the environment consists of the objects that are sent through the environment sensor system interface. The supported object types are Log, Man and Convex hull. A log is defined by two end positions and a radius, while a man is specified by a single position. A convex hull is constructed from a set of 3D points and consists of a list of vertices and indices that together form the convex hull of the original point set (Figure 7.3). As can be seen in Figure 7.4, the application can present three simultaneous camera views. One is freely configurable by the user, while the other two have fixed positions relative to the crane.

7.2.1 The scene graph

An instance of the scene graph constructed by the system is shown in Figure 7.5. The crane structure in the scene graph illustrates the benefits of a spatial data structure with support for attachment relations. The physical crane has a hierarchical motion structure:

Figure 7.3: A screen capture showing a number of objects dynamically added to the virtual world. Three different object types are supported: Log, Man and Convex hull.

1) The telescope extension affects the grapple.
2) The outer joint affects the parts in 1), plus the outer link.
3) The inner joint affects the parts in 2), plus the inner link.
4) The base rotator affects the parts in 3), plus the crane base.

By structuring the virtual crane parts with hierarchical transform nodes, the hierarchical moving pattern is replicated. Since one transform node affects the subgraph below it, the crane base is located close to the root, while the grapple is deeper down the tree.
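A hedged sketch of how such a transform chain might be updated from sensor readings in OpenSceneGraph follows. The node layout, rotation axes and joint offsets are assumptions made for the example; this is not the actual CraneVE code.

    #include <osg/MatrixTransform>
    #include <osg/Matrix>

    // One transform node per moving part. At startup the nodes are chained as
    // base -> innerJoint -> outerJoint -> telescope -> grapple, so rotating the
    // base carries the whole arm along, while extending the telescope only
    // affects the grapple.
    struct CraneNodes {
        osg::ref_ptr<osg::MatrixTransform> base;        // rotation around the vertical axis
        osg::ref_ptr<osg::MatrixTransform> innerJoint;  // inner link angle
        osg::ref_ptr<osg::MatrixTransform> outerJoint;  // outer link angle
        osg::ref_ptr<osg::MatrixTransform> telescope;   // telescope extension
        osg::ref_ptr<osg::MatrixTransform> grapple;     // grapple rotation (no sensor today)
    };

    // Apply one set of sensor readings to the virtual crane. The joint offsets
    // would come from the physical crane dimensions given in the settings file.
    void updateCrane(CraneNodes& crane,
                     double baseAngle, double innerAngle, double outerAngle,
                     double extension,
                     const osg::Vec3d& innerJointOffset,
                     const osg::Vec3d& outerJointOffset)
    {
        crane.base->setMatrix(
            osg::Matrix::rotate(baseAngle, osg::Vec3d(0.0, 0.0, 1.0)));
        crane.innerJoint->setMatrix(
            osg::Matrix::rotate(innerAngle, osg::Vec3d(1.0, 0.0, 0.0)) *
            osg::Matrix::translate(innerJointOffset));
        crane.outerJoint->setMatrix(
            osg::Matrix::rotate(outerAngle, osg::Vec3d(1.0, 0.0, 0.0)) *
            osg::Matrix::translate(outerJointOffset));
        crane.telescope->setMatrix(
            osg::Matrix::translate(0.0, extension, 0.0));
    }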

7.3 Operating the crane

On the forest machines used today, the controls for moving the crane are rather complicated. The angle of each joint is controlled individually, mainly by moving one of two joysticks in a certain direction. So in order to move the crane tip in a straight line, the operator may have to manipulate all the links as well as change the extension of the telescope. Research has been done to find other, more intuitive ways of moving the crane, like boom tip control [2], which allows the operator to map the joystick movements to Cartesian coordinates.

Figure 7.4: The CraneVE application can show three simultaneous camera views. The left camera is the main camera, which the user is allowed to rotate and translate freely around the virtual world. The two cameras on the right have positions that cannot be changed by the user; instead their positions are relative to the crane and hence follow the crane motion. The top right camera shows the crane from the side. The camera's view point is located in a position relative to the base of the crane and rotates along with it to always view the crane from an angle perpendicular to the crane direction. The lower right camera has a constant position relative to the crane tip. It is located directly above the crane tip, looking down on the grapple.

Figure 7.5: A schematic image of the scene graph in the implemented prototype.

Figure 7.6: The physical crane is controlled by specifying a target position for the crane tip. The user can set the target position by moving a crosshair-like pointer around the virtual world with a joystick and pressing a button to activate the current position.

However, since direct control for teleoperation is likely to be impractical due to time delays when dealing with the fast movements of forest machine cranes, this application implements a kind of supervisory control. Target positions for the end effector are specified by the human operator, after which the crane tip approaches that position under supervision of the operator. The inverse kinematics operations needed for controlling the joints are implemented in the crane control system, which means that the control system accepts a 3D target position as input from CraneVE. The actual control is performed by having the operator move a crosshair, or a pointer, inside the virtual environment using a joystick (Figure 7.6). By clicking a button on the joystick, the crosshair position is activated. If single target mode is activated, this will result in CraneVE sending the target position to the control system. If multiple target mode is activated, CraneVE will instead add the new target position to a list of targets that together form a path that the crane should follow. In this case, only the next-in-line target is sent to the control system. One example of a target path can be seen in Figure 7.7. It is the task of the input system to read joystick data and convert it to a 3D position for the crosshair. There are two different conversions from joystick motion to 3D position implemented in the input system. One is for specifying X, Y and Z coordinates for a 3D position in the virtual environment. The other is for two-dimensional motion in the Y-Z plane. It assumes that the crane base is located at the origin with the crane tip in the positive Y direction.

Table 7.1: The implemented mappings between joystick motion and target position.

  Joystick motion   2D control event     3D control event
  Forward           increase Z           increase Y
  Back              decrease Z           decrease Y
  Left              increase Y           increase X
  Right             decrease Y           decrease X
  Button 6          -                    increase Z
  Button 5          -                    decrease Z
  Button 1          activate position    activate position

The virtual environment application uses a right-handed, Z-up coordinate system. The mappings from joystick motion to target coordinates for the two implementations are shown in Table 7.1.

7.4 Collision avoidance

One of the reasons for having virtual environment support in a teleoperation system is that it makes it easier to assist the operator with certain tasks. An example of an operator-assisting feature that has been implemented in CraneVE is collision avoidance. The collision avoidance feature is enabled in the multiple target mode. If the target path specified by the operator is blocked by an object, e.g. a rock, following that path would lead to a collision. With this algorithm enabled, another path is automatically calculated such that the crane tip passes above the object and hence avoids the collision. Assume that p is the last target point in the already existing path. Whenever a new target position q is activated, two things will occur. First, the position q is tested for collision against all objects. If any collision is found, q is replaced by a new position q', which is a safe position a small distance above the bounding sphere of the colliding object. Secondly, the path from p to q' is checked for collisions and will in such a case be replaced by another path between p and q' that does not collide with any object. The method that is used to construct the new path is explained in Figure 7.8.

Figure 7.7: Obstacle avoidance in multiple target mode. The yellow sphere (visible in the top right camera) shows the next-in-line target, while the yellow line from the sphere to the crosshair is the path that the crane should follow in order to reach all targets. This path is generated by the collision avoidance algorithm described in Section 7.4 in order to avoid the obstacle between the two end points of the path.
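The first step of the collision avoidance described in Section 7.4, testing a new target against the obstacles and lifting it above the offending bounding sphere, can be sketched roughly as follows. This is an illustration only; the data structures, the margin value and the omitted path re-planning between p and q' are assumptions, not the actual CraneVE implementation.

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Every obstacle is approximated by its bounding sphere.
    struct BoundingSphere {
        Vec3 center;
        double radius;
    };

    static double distance(const Vec3& a, const Vec3& b)
    {
        const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Test a new target q against all obstacles. If q lies inside some bounding
    // sphere, it is replaced by a safe position q' a small margin above that
    // sphere; otherwise q is returned unchanged. Checking and re-planning the
    // path from the previous target p to q' is a separate step, not shown here.
    Vec3 makeTargetSafe(const Vec3& q, const std::vector<BoundingSphere>& obstacles,
                        double margin = 0.1)
    {
        for (const BoundingSphere& obstacle : obstacles) {
            if (distance(q, obstacle.center) < obstacle.radius) {
                Vec3 safe = q;
                safe.z = obstacle.center.z + obstacle.radius + margin;  // lift above the sphere
                return safe;
            }
        }
        return q;
    }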


More information

USING VIRTUAL REALITY SIMULATION FOR SAFE HUMAN-ROBOT INTERACTION 1. INTRODUCTION

USING VIRTUAL REALITY SIMULATION FOR SAFE HUMAN-ROBOT INTERACTION 1. INTRODUCTION USING VIRTUAL REALITY SIMULATION FOR SAFE HUMAN-ROBOT INTERACTION Brad Armstrong 1, Dana Gronau 2, Pavel Ikonomov 3, Alamgir Choudhury 4, Betsy Aller 5 1 Western Michigan University, Kalamazoo, Michigan;

More information

R (2) Controlling System Application with hands by identifying movements through Camera

R (2) Controlling System Application with hands by identifying movements through Camera R (2) N (5) Oral (3) Total (10) Dated Sign Assignment Group: C Problem Definition: Controlling System Application with hands by identifying movements through Camera Prerequisite: 1. Web Cam Connectivity

More information

CS 354R: Computer Game Technology

CS 354R: Computer Game Technology CS 354R: Computer Game Technology http://www.cs.utexas.edu/~theshark/courses/cs354r/ Fall 2017 Instructor and TAs Instructor: Sarah Abraham theshark@cs.utexas.edu GDC 5.420 Office Hours: MW4:00-6:00pm

More information

Understanding OpenGL

Understanding OpenGL This document provides an overview of the OpenGL implementation in Boris Red. About OpenGL OpenGL is a cross-platform standard for 3D acceleration. GL stands for graphics library. Open refers to the ongoing,

More information

COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE

COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE Prof.dr.sc. Mladen Crneković, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb Prof.dr.sc. Davor Zorc, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb

More information

COPYRIGHTED MATERIAL. Overview

COPYRIGHTED MATERIAL. Overview In normal experience, our eyes are constantly in motion, roving over and around objects and through ever-changing environments. Through this constant scanning, we build up experience data, which is manipulated

More information

Sensible Chuckle SuperTuxKart Concrete Architecture Report

Sensible Chuckle SuperTuxKart Concrete Architecture Report Sensible Chuckle SuperTuxKart Concrete Architecture Report Sam Strike - 10152402 Ben Mitchell - 10151495 Alex Mersereau - 10152885 Will Gervais - 10056247 David Cho - 10056519 Michael Spiering Table of

More information

Vishnu Nath. Usage of computer vision and humanoid robotics to create autonomous robots. (Ximea Currera RL04C Camera Kit)

Vishnu Nath. Usage of computer vision and humanoid robotics to create autonomous robots. (Ximea Currera RL04C Camera Kit) Vishnu Nath Usage of computer vision and humanoid robotics to create autonomous robots (Ximea Currera RL04C Camera Kit) Acknowledgements Firstly, I would like to thank Ivan Klimkovic of Ximea Corporation,

More information

EE631 Cooperating Autonomous Mobile Robots. Lecture 1: Introduction. Prof. Yi Guo ECE Department

EE631 Cooperating Autonomous Mobile Robots. Lecture 1: Introduction. Prof. Yi Guo ECE Department EE631 Cooperating Autonomous Mobile Robots Lecture 1: Introduction Prof. Yi Guo ECE Department Plan Overview of Syllabus Introduction to Robotics Applications of Mobile Robots Ways of Operation Single

More information

4/9/2015. Simple Graphics and Image Processing. Simple Graphics. Overview of Turtle Graphics (continued) Overview of Turtle Graphics

4/9/2015. Simple Graphics and Image Processing. Simple Graphics. Overview of Turtle Graphics (continued) Overview of Turtle Graphics Simple Graphics and Image Processing The Plan For Today Website Updates Intro to Python Quiz Corrections Missing Assignments Graphics and Images Simple Graphics Turtle Graphics Image Processing Assignment

More information

Lab 7: Introduction to Webots and Sensor Modeling

Lab 7: Introduction to Webots and Sensor Modeling Lab 7: Introduction to Webots and Sensor Modeling This laboratory requires the following software: Webots simulator C development tools (gcc, make, etc.) The laboratory duration is approximately two hours.

More information

Perception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision

Perception. Read: AIMA Chapter 24 & Chapter HW#8 due today. Vision 11-25-2013 Perception Vision Read: AIMA Chapter 24 & Chapter 25.3 HW#8 due today visual aural haptic & tactile vestibular (balance: equilibrium, acceleration, and orientation wrt gravity) olfactory taste

More information

Physical Presence in Virtual Worlds using PhysX

Physical Presence in Virtual Worlds using PhysX Physical Presence in Virtual Worlds using PhysX One of the biggest problems with interactive applications is how to suck the user into the experience, suspending their sense of disbelief so that they are

More information

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE

APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE Najirah Umar 1 1 Jurusan Teknik Informatika, STMIK Handayani Makassar Email : najirah_stmikh@yahoo.com

More information

Haptics CS327A

Haptics CS327A Haptics CS327A - 217 hap tic adjective relating to the sense of touch or to the perception and manipulation of objects using the senses of touch and proprioception 1 2 Slave Master 3 Courtesy of Walischmiller

More information

Motion Control of a Three Active Wheeled Mobile Robot and Collision-Free Human Following Navigation in Outdoor Environment

Motion Control of a Three Active Wheeled Mobile Robot and Collision-Free Human Following Navigation in Outdoor Environment Proceedings of the International MultiConference of Engineers and Computer Scientists 2016 Vol I,, March 16-18, 2016, Hong Kong Motion Control of a Three Active Wheeled Mobile Robot and Collision-Free

More information

ROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION

ROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION ROBOTICS INTRODUCTION THIS COURSE IS TWO PARTS Mobile Robotics. Locomotion (analogous to manipulation) (Legged and wheeled robots). Navigation and obstacle avoidance algorithms. Robot Vision Sensors and

More information

Omni-Directional Catadioptric Acquisition System

Omni-Directional Catadioptric Acquisition System Technical Disclosure Commons Defensive Publications Series December 18, 2017 Omni-Directional Catadioptric Acquisition System Andreas Nowatzyk Andrew I. Russell Follow this and additional works at: http://www.tdcommons.org/dpubs_series

More information

Images and Graphics. 4. Images and Graphics - Copyright Denis Hamelin - Ryerson University

Images and Graphics. 4. Images and Graphics - Copyright Denis Hamelin - Ryerson University Images and Graphics Images and Graphics Graphics and images are non-textual information that can be displayed and printed. Graphics (vector graphics) are an assemblage of lines, curves or circles with

More information

COPYRIGHTED MATERIAL OVERVIEW 1

COPYRIGHTED MATERIAL OVERVIEW 1 OVERVIEW 1 In normal experience, our eyes are constantly in motion, roving over and around objects and through ever-changing environments. Through this constant scanning, we build up experiential data,

More information

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots

Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Hafid NINISS Forum8 - Robot Development Team Abstract: The purpose of this work is to develop a man-machine interface for

More information

ReVRSR: Remote Virtual Reality for Service Robots

ReVRSR: Remote Virtual Reality for Service Robots ReVRSR: Remote Virtual Reality for Service Robots Amel Hassan, Ahmed Ehab Gado, Faizan Muhammad March 17, 2018 Abstract This project aims to bring a service robot s perspective to a human user. We believe

More information

Software Computer Vision - Driver Assistance

Software Computer Vision - Driver Assistance Software Computer Vision - Driver Assistance Work @Bosch for developing desktop, web or embedded software and algorithms / computer vision / artificial intelligence for Driver Assistance Systems and Automated

More information

Investigating the Post Processing of LS-DYNA in a Fully Immersive Workflow Environment

Investigating the Post Processing of LS-DYNA in a Fully Immersive Workflow Environment Investigating the Post Processing of LS-DYNA in a Fully Immersive Workflow Environment Ed Helwig 1, Facundo Del Pin 2 1 Livermore Software Technology Corporation, Livermore CA 2 Livermore Software Technology

More information

Dipartimento di Elettronica Informazione e Bioingegneria Robotics

Dipartimento di Elettronica Informazione e Bioingegneria Robotics Dipartimento di Elettronica Informazione e Bioingegneria Robotics Behavioral robotics @ 2014 Behaviorism behave is what organisms do Behaviorism is built on this assumption, and its goal is to promote

More information

Creating a light studio

Creating a light studio Creating a light studio Chapter 5, Let there be Lights, has tried to show how the different light objects you create in Cinema 4D should be based on lighting setups and techniques that are used in real-world

More information

Microsoft Scrolling Strip Prototype: Technical Description

Microsoft Scrolling Strip Prototype: Technical Description Microsoft Scrolling Strip Prototype: Technical Description Primary features implemented in prototype Ken Hinckley 7/24/00 We have done at least some preliminary usability testing on all of the features

More information

Chapter 1 Virtual World Fundamentals

Chapter 1 Virtual World Fundamentals Chapter 1 Virtual World Fundamentals 1.0 What Is A Virtual World? {Definition} Virtual: to exist in effect, though not in actual fact. You are probably familiar with arcade games such as pinball and target

More information

Lecture 19: Depth Cameras. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011)

Lecture 19: Depth Cameras. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011) Lecture 19: Depth Cameras Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011) Continuing theme: computational photography Cheap cameras capture light, extensive processing produces

More information

SPIDERMAN VR. Adam Elgressy and Dmitry Vlasenko

SPIDERMAN VR. Adam Elgressy and Dmitry Vlasenko SPIDERMAN VR Adam Elgressy and Dmitry Vlasenko Supervisors: Boaz Sternfeld and Yaron Honen Submission Date: 09/01/2019 Contents Who We Are:... 2 Abstract:... 2 Previous Work:... 3 Tangent Systems & Development

More information

AR 2 kanoid: Augmented Reality ARkanoid

AR 2 kanoid: Augmented Reality ARkanoid AR 2 kanoid: Augmented Reality ARkanoid B. Smith and R. Gosine C-CORE and Memorial University of Newfoundland Abstract AR 2 kanoid, Augmented Reality ARkanoid, is an augmented reality version of the popular

More information

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception

More information

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov Technical Lead Machine Learning 12 Oct 2017

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov Technical Lead Machine Learning 12 Oct 2017 23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS Sergii Bykov Technical Lead Machine Learning 12 Oct 2017 Product Vision Company Introduction Apostera GmbH with headquarter in Munich, was

More information

Chapter 14. using data wires

Chapter 14. using data wires Chapter 14. using data wires In this fifth part of the book, you ll learn how to use data wires (this chapter), Data Operations blocks (Chapter 15), and variables (Chapter 16) to create more advanced programs

More information

- applications on same or different network node of the workstation - portability of application software - multiple displays - open architecture

- applications on same or different network node of the workstation - portability of application software - multiple displays - open architecture 12 Window Systems - A window system manages a computer screen. - Divides the screen into overlapping regions. - Each region displays output from a particular application. X window system is widely used

More information

Virtual Environments and Game AI

Virtual Environments and Game AI Virtual Environments and Game AI Dr Michael Papasimeon Guest Lecture Graphics and Interaction 9 August 2016 Introduction Introduction So what is this lecture all about? In general... Where Artificial Intelligence

More information

High Performance Imaging Using Large Camera Arrays

High Performance Imaging Using Large Camera Arrays High Performance Imaging Using Large Camera Arrays Presentation of the original paper by Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Eino-Ville Talvala, Emilio Antunez, Adam Barth, Andrew Adams, Mark Horowitz,

More information

By Pierre Olivier, Vice President, Engineering and Manufacturing, LeddarTech Inc.

By Pierre Olivier, Vice President, Engineering and Manufacturing, LeddarTech Inc. Leddar optical time-of-flight sensing technology, originally discovered by the National Optics Institute (INO) in Quebec City and developed and commercialized by LeddarTech, is a unique LiDAR technology

More information

Interface Design V: Beyond the Desktop

Interface Design V: Beyond the Desktop Interface Design V: Beyond the Desktop Rob Procter Further Reading Dix et al., chapter 4, p. 153-161 and chapter 15. Norman, The Invisible Computer, MIT Press, 1998, chapters 4 and 15. 11/25/01 CS4: HCI

More information

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Hiroshi Ishiguro Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp

More information

FSI Machine Vision Training Programs

FSI Machine Vision Training Programs FSI Machine Vision Training Programs Table of Contents Introduction to Machine Vision (Course # MVC-101) Machine Vision and NeuroCheck overview (Seminar # MVC-102) Machine Vision, EyeVision and EyeSpector

More information

An Implementation Review of Occlusion-Based Interaction in Augmented Reality Environment

An Implementation Review of Occlusion-Based Interaction in Augmented Reality Environment An Implementation Review of Occlusion-Based Interaction in Augmented Reality Environment Mohamad Shahrul Shahidan, Nazrita Ibrahim, Mohd Hazli Mohamed Zabil, Azlan Yusof College of Information Technology,

More information

GUIBDSS Gestural User Interface Based Digital Sixth Sense The wearable computer

GUIBDSS Gestural User Interface Based Digital Sixth Sense The wearable computer 2010 GUIBDSS Gestural User Interface Based Digital Sixth Sense The wearable computer By: Abdullah Almurayh For : Dr. Chow UCCS CS525 Spring 2010 5/4/2010 Contents Subject Page 1. Abstract 2 2. Introduction

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information

On Application of Virtual Fixtures as an Aid for Telemanipulation and Training

On Application of Virtual Fixtures as an Aid for Telemanipulation and Training On Application of Virtual Fixtures as an Aid for Telemanipulation and Training Shahram Payandeh and Zoran Stanisic Experimental Robotics Laboratory (ERL) School of Engineering Science Simon Fraser University

More information

NCCT IEEE PROJECTS ADVANCED ROBOTICS SOLUTIONS. Latest Projects, in various Domains. Promise for the Best Projects

NCCT IEEE PROJECTS ADVANCED ROBOTICS SOLUTIONS. Latest Projects, in various Domains. Promise for the Best Projects NCCT Promise for the Best Projects IEEE PROJECTS in various Domains Latest Projects, 2009-2010 ADVANCED ROBOTICS SOLUTIONS EMBEDDED SYSTEM PROJECTS Microcontrollers VLSI DSP Matlab Robotics ADVANCED ROBOTICS

More information

Oculus Rift Getting Started Guide

Oculus Rift Getting Started Guide Oculus Rift Getting Started Guide Version 1.23 2 Introduction Oculus Rift Copyrights and Trademarks 2017 Oculus VR, LLC. All Rights Reserved. OCULUS VR, OCULUS, and RIFT are trademarks of Oculus VR, LLC.

More information

MRT: Mixed-Reality Tabletop

MRT: Mixed-Reality Tabletop MRT: Mixed-Reality Tabletop Students: Dan Bekins, Jonathan Deutsch, Matthew Garrett, Scott Yost PIs: Daniel Aliaga, Dongyan Xu August 2004 Goals Create a common locus for virtual interaction without having

More information

Rendering a perspective drawing using Adobe Photoshop

Rendering a perspective drawing using Adobe Photoshop Rendering a perspective drawing using Adobe Photoshop This hand-out will take you through the steps to render a perspective line drawing using Adobe Photoshop. The first important element in this process

More information

Haptic control in a virtual environment

Haptic control in a virtual environment Haptic control in a virtual environment Gerard de Ruig (0555781) Lourens Visscher (0554498) Lydia van Well (0566644) September 10, 2010 Introduction With modern technological advancements it is entirely

More information

* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged

* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged ADVANCED ROBOTICS SOLUTIONS * Intelli Mobile Robot for Multi Specialty Operations * Advanced Robotic Pick and Place Arm and Hand System * Automatic Color Sensing Robot using PC * AI Based Image Capturing

More information

Robotics Introduction Matteo Matteucci

Robotics Introduction Matteo Matteucci Robotics Introduction About me and my lectures 2 Lectures given by Matteo Matteucci +39 02 2399 3470 matteo.matteucci@polimi.it http://www.deib.polimi.it/ Research Topics Robotics and Autonomous Systems

More information

CONTROLLING METHODS AND CHALLENGES OF ROBOTIC ARM

CONTROLLING METHODS AND CHALLENGES OF ROBOTIC ARM CONTROLLING METHODS AND CHALLENGES OF ROBOTIC ARM Aniket D. Kulkarni *1, Dr.Sayyad Ajij D. *2 *1(Student of E&C Department, MIT Aurangabad, India) *2(HOD of E&C department, MIT Aurangabad, India) aniket2212@gmail.com*1,

More information

HeroX - Untethered VR Training in Sync'ed Physical Spaces

HeroX - Untethered VR Training in Sync'ed Physical Spaces Page 1 of 6 HeroX - Untethered VR Training in Sync'ed Physical Spaces Above and Beyond - Integrating Robotics In previous research work I experimented with multiple robots remotely controlled by people

More information

Haptic presentation of 3D objects in virtual reality for the visually disabled

Haptic presentation of 3D objects in virtual reality for the visually disabled Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,

More information

Overview of current developments in haptic APIs

Overview of current developments in haptic APIs Central European Seminar on Computer Graphics for students, 2011 AUTHOR: Petr Kadleček SUPERVISOR: Petr Kmoch Overview of current developments in haptic APIs Presentation Haptics Haptic programming Haptic

More information

Service Robots in an Intelligent House

Service Robots in an Intelligent House Service Robots in an Intelligent House Jesus Savage Bio-Robotics Laboratory biorobotics.fi-p.unam.mx School of Engineering Autonomous National University of Mexico UNAM 2017 OUTLINE Introduction A System

More information

The project. General challenges and problems. Our subjects. The attachment and locomotion system

The project. General challenges and problems. Our subjects. The attachment and locomotion system The project The Ceilbot project is a study and research project organized at the Helsinki University of Technology. The aim of the project is to design and prototype a multifunctional robot which takes

More information

Time-Lapse Panoramas for the Egyptian Heritage

Time-Lapse Panoramas for the Egyptian Heritage Time-Lapse Panoramas for the Egyptian Heritage Mohammad NABIL Anas SAID CULTNAT, Bibliotheca Alexandrina While laser scanning and Photogrammetry has become commonly-used methods for recording historical

More information

Vision Ques t. Vision Quest. Use the Vision Sensor to drive your robot in Vision Quest!

Vision Ques t. Vision Quest. Use the Vision Sensor to drive your robot in Vision Quest! Vision Ques t Vision Quest Use the Vision Sensor to drive your robot in Vision Quest! Seek Discover new hands-on builds and programming opportunities to further your understanding of a subject matter.

More information

3D Scanning Guide. 0. Login. I. Startup

3D Scanning Guide. 0. Login. I. Startup 3D Scanning Guide UTSOA has a Konica Minolta Vivid 910 3D non-contact digitizing system. This scanner is located in the digital fabrication section of the technology lab in Sutton Hall 1.102. It is free

More information

MEAM 520. Haptic Rendering and Teleoperation

MEAM 520. Haptic Rendering and Teleoperation MEAM 520 Haptic Rendering and Teleoperation Katherine J. Kuchenbecker, Ph.D. General Robotics, Automation, Sensing, and Perception Lab (GRASP) MEAM Department, SEAS, University of Pennsylvania Lecture

More information

An Introduction into Virtual Reality Environments. Stefan Seipel

An Introduction into Virtual Reality Environments. Stefan Seipel An Introduction into Virtual Reality Environments Stefan Seipel stefan.seipel@hig.se What is Virtual Reality? Technically defined: VR is a medium in terms of a collection of technical hardware (similar

More information

What is Virtual Reality? What is Virtual Reality? An Introduction into Virtual Reality Environments. Stefan Seipel

What is Virtual Reality? What is Virtual Reality? An Introduction into Virtual Reality Environments. Stefan Seipel An Introduction into Virtual Reality Environments What is Virtual Reality? Technically defined: Stefan Seipel stefan.seipel@hig.se VR is a medium in terms of a collection of technical hardware (similar

More information

6 System architecture

6 System architecture 6 System architecture is an application for interactively controlling the animation of VRML avatars. It uses the pen interaction technique described in Chapter 3 - Interaction technique. It is used in

More information

Team 4. Kari Cieslak, Jakob Wulf-Eck, Austin Irvine, Alex Crane, Dylan Vondracek. Project SoundAround

Team 4. Kari Cieslak, Jakob Wulf-Eck, Austin Irvine, Alex Crane, Dylan Vondracek. Project SoundAround Team 4 Kari Cieslak, Jakob Wulf-Eck, Austin Irvine, Alex Crane, Dylan Vondracek Project SoundAround Contents 1. Contents, Figures 2. Synopsis, Description 3. Milestones 4. Budget/Materials 5. Work Plan,

More information

A Kinect-based 3D hand-gesture interface for 3D databases

A Kinect-based 3D hand-gesture interface for 3D databases A Kinect-based 3D hand-gesture interface for 3D databases Abstract. The use of natural interfaces improves significantly aspects related to human-computer interaction and consequently the productivity

More information

COMPUTER. 1. PURPOSE OF THE COURSE Refer to each sub-course.

COMPUTER. 1. PURPOSE OF THE COURSE Refer to each sub-course. COMPUTER 1. PURPOSE OF THE COURSE Refer to each sub-course. 2. TRAINING PROGRAM (1)General Orientation and Japanese Language Program The General Orientation and Japanese Program are organized at the Chubu

More information

CS277 - Experimental Haptics Lecture 2. Haptic Rendering

CS277 - Experimental Haptics Lecture 2. Haptic Rendering CS277 - Experimental Haptics Lecture 2 Haptic Rendering Outline Announcements Human haptic perception Anatomy of a visual-haptic simulation Virtual wall and potential field rendering A note on timing...

More information

House Design Tutorial

House Design Tutorial House Design Tutorial This House Design Tutorial shows you how to get started on a design project. The tutorials that follow continue with the same plan. When you are finished, you will have created a

More information

MEAM 520. Haptic Rendering and Teleoperation

MEAM 520. Haptic Rendering and Teleoperation MEAM 520 Haptic Rendering and Teleoperation Katherine J. Kuchenbecker, Ph.D. General Robotics, Automation, Sensing, and Perception Lab (GRASP) MEAM Department, SEAS, University of Pennsylvania Lecture

More information

An Open Robot Simulator Environment

An Open Robot Simulator Environment An Open Robot Simulator Environment Toshiyuki Ishimura, Takeshi Kato, Kentaro Oda, and Takeshi Ohashi Dept. of Artificial Intelligence, Kyushu Institute of Technology isshi@mickey.ai.kyutech.ac.jp Abstract.

More information

ROBOT VISION. Dr.M.Madhavi, MED, MVSREC

ROBOT VISION. Dr.M.Madhavi, MED, MVSREC ROBOT VISION Dr.M.Madhavi, MED, MVSREC Robotic vision may be defined as the process of acquiring and extracting information from images of 3-D world. Robotic vision is primarily targeted at manipulation

More information

Development of a telepresence agent

Development of a telepresence agent Author: Chung-Chen Tsai, Yeh-Liang Hsu (2001-04-06); recommended: Yeh-Liang Hsu (2001-04-06); last updated: Yeh-Liang Hsu (2004-03-23). Note: This paper was first presented at. The revised paper was presented

More information

House Design Tutorial

House Design Tutorial House Design Tutorial This House Design Tutorial shows you how to get started on a design project. The tutorials that follow continue with the same plan. When you are finished, you will have created a

More information

A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures

A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures A Robust Neural Robot Navigation Using a Combination of Deliberative and Reactive Control Architectures D.M. Rojas Castro, A. Revel and M. Ménard * Laboratory of Informatics, Image and Interaction (L3I)

More information

Topics VRML. The basic idea. What is VRML? History of VRML 97 What is in it X3D Ruth Aylett

Topics VRML. The basic idea. What is VRML? History of VRML 97 What is in it X3D Ruth Aylett Topics VRML History of VRML 97 What is in it X3D Ruth Aylett What is VRML? The basic idea VR modelling language NOT a programming language! Virtual Reality Markup Language Open standard (1997) for Internet

More information

Digital Image Processing. Lecture # 6 Corner Detection & Color Processing

Digital Image Processing. Lecture # 6 Corner Detection & Color Processing Digital Image Processing Lecture # 6 Corner Detection & Color Processing 1 Corners Corners (interest points) Unlike edges, corners (patches of pixels surrounding the corner) do not necessarily correspond

More information

Using Simulation to Design Control Strategies for Robotic No-Scar Surgery

Using Simulation to Design Control Strategies for Robotic No-Scar Surgery Using Simulation to Design Control Strategies for Robotic No-Scar Surgery Antonio DE DONNO 1, Florent NAGEOTTE, Philippe ZANNE, Laurent GOFFIN and Michel de MATHELIN LSIIT, University of Strasbourg/CNRS,

More information

Mobile Robots (Wheeled) (Take class notes)

Mobile Robots (Wheeled) (Take class notes) Mobile Robots (Wheeled) (Take class notes) Wheeled mobile robots Wheeled mobile platform controlled by a computer is called mobile robot in a broader sense Wheeled robots have a large scope of types and

More information

FLASH LiDAR KEY BENEFITS

FLASH LiDAR KEY BENEFITS In 2013, 1.2 million people died in vehicle accidents. That is one death every 25 seconds. Some of these lives could have been saved with vehicles that have a better understanding of the world around them

More information

1 Abstract and Motivation

1 Abstract and Motivation 1 Abstract and Motivation Robust robotic perception, manipulation, and interaction in domestic scenarios continues to present a hard problem: domestic environments tend to be unstructured, are constantly

More information

Interactive Design/Decision Making in a Virtual Urban World: Visual Simulation and GIS

Interactive Design/Decision Making in a Virtual Urban World: Visual Simulation and GIS Robin Liggett, Scott Friedman, and William Jepson Interactive Design/Decision Making in a Virtual Urban World: Visual Simulation and GIS Researchers at UCLA have developed an Urban Simulator which links

More information

Computer simulator for training operators of thermal cameras

Computer simulator for training operators of thermal cameras Computer simulator for training operators of thermal cameras Krzysztof Chrzanowski *, Marcin Krupski The Academy of Humanities and Economics, Department of Computer Science, Lodz, Poland ABSTRACT A PC-based

More information

ŞahinSim: A Flight Simulator for End-Game Simulations

ŞahinSim: A Flight Simulator for End-Game Simulations ŞahinSim: A Flight Simulator for End-Game Simulations Özer Özaydın, D. Turgay Altılar Department of Computer Science ITU Informatics Institute Maslak, Istanbul, 34457, Turkey ozaydinoz@itu.edu.tr altilar@cs.itu.edu.tr

More information

OpenGL Programming Guide About This Guide 1

OpenGL Programming Guide About This Guide 1 OpenGL Programming Guide About This Guide 1 About This Guide The OpenGL graphics system is a software interface to graphics hardware. (The GL stands for Graphics Library.) It allows you to create interactive

More information

Wheeled Mobile Robot Obstacle Avoidance Using Compass and Ultrasonic

Wheeled Mobile Robot Obstacle Avoidance Using Compass and Ultrasonic Universal Journal of Control and Automation 6(1): 13-18, 2018 DOI: 10.13189/ujca.2018.060102 http://www.hrpub.org Wheeled Mobile Robot Obstacle Avoidance Using Compass and Ultrasonic Yousef Moh. Abueejela

More information

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT

More information