Point Cloud-based Model-mediated Teleoperation with Dynamic and Perception-based Model Updating


Preliminary version for evaluation: Please do not circulate without the permission of the author(s)

Point Cloud-based Model-mediated Teleoperation with Dynamic and Perception-based Model Updating

Xiao Xu, Burak Cizmeci, Anas Al-Nuaimi, and Eckehard Steinbach

Abstract—In this paper, we extend the concept of model-mediated teleoperation (MMT) to complex environments and six-degree-of-freedom interaction using point cloud surface models. In our system, a time-of-flight camera captures a high-resolution point cloud model of the object surface. The point cloud model and the physical properties of the object (stiffness and surface friction coefficient) are estimated on the slave side in real time and transmitted to the master side using the modeling and updating algorithm proposed in this work. The proposed algorithm adaptively controls the updating of the point cloud model and the object properties according to the slave movements and by exploiting known limitations of human haptic perception. As a result, perceptually irrelevant transmissions are avoided, and thus the packet rate in the communication channel is substantially reduced. In addition, a simple point cloud-based haptic rendering algorithm is adopted to generate the force feedback signals directly from the point cloud model without first converting it into a 3D mesh. In the experimental evaluation, system stability and transparency are verified in the presence of a round-trip communication delay of up to 1000 ms. Furthermore, by exploiting the limits of human haptic perception, the presented system achieves a significant haptic data reduction of about 90% for teleoperation systems with time delay.

Index Terms—model-mediated teleoperation, model update, packet rate reduction, point cloud-based haptic rendering.

I. INTRODUCTION

A typical teleoperation system consists of three main parts: the human operator (OP)/master system, the teleoperator (TOP)/slave system, and the communication link/network between them [1]. The slave is typically controlled by the position or velocity commands generated by the master. The haptic and visual signals sensed during the slave's interaction with the remote environment are transmitted back to the master. The position/velocity commands as well as the visual-haptic data are exchanged over a communication network, as illustrated in Fig. 1. On the master side, the haptic and visual feedback signals are displayed to the user, which allows haptic and visual immersion into the remote environment. Many applications in entertainment, gaming, teaching/training, telerobotics, etc. can benefit from such a bilateral teleoperation system [2, 3].

For teleoperation systems with geographically separated operators and teleoperators, time delay introduced by the communication network always exists. It is well known that even a small time delay in the haptic channel jeopardizes system stability and transparency [4]. Classical passivity-based control schemes, e.g. wave-variable transformation [5-7], have been developed to address this issue. However, system stability (passivity) and transparency are conflicting objectives in passivity-based teleoperation system design [4, 5, 8]. In [9, 10], the so-called predictive display approach is developed to compensate for

Manuscript received November 24. A preliminary version of this work has been presented at the IEEE Int. Workshop on Haptic Audio-Visual Environments and Games, Istanbul, October. This work has been supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/ ) / ERC Grant agreement no. The authors are with the Institute for Media Technology, Technische Universität München, Arcisstr. 21, Munich, Germany (Tel: +49-(0), xiao.xu@tum.de, burak.cizmeci@tum.de, anas.alnuaimi@tum.de, eckehard.steinbach@tum.de).

Fig. 1. Overview of a typical teleoperation system (adopted from [1]).

the visual delay. The idea of the predictive display is to overlay a computer graphics (CG) model of the robot arm on the real video images, which allows the operator to locally view the motion of the slave robot before it actually moves and hence avoid possible collisions. An extension of the predictive display for the prediction of a 3D geometric model of the remote environment is presented in [11]. A stereo camera is employed to capture the remote environment using an offline scanning procedure. A 3D virtual environment (VE) is constructed using the captured 3D geometry in combination with texture mapping. After that, a model of the telerobot is placed in the VE and the user can thus locally interact with the VE without delay. The predictive display approach, however, cannot provide realistic haptic feedback to the users, since modeling of the physical properties of the environment (stiffness, friction coefficient, damping, etc.) is missing. Also, the reconstruction of the VE model is time-consuming, and the environment model cannot be updated while the robot is in operation.

Different from the methods above, the concept of Model-Mediated Teleoperation (MMT) has been proposed to address both stability and transparency issues in the presence of communication delays [12-14]. In the MMT approach, a simple virtual object model is built to approximate the object in the remote environment based on the slave's position/velocity and force signals. The parameters describing this virtual object model are continuously updated and transmitted back to the master whenever the slave obtains a new model.
On the master side, a copy of this virtual object model is constructed accordingly and the haptic feedback is generated locally based on the virtual object model without any delay. Thus, stable and transparent teleoperation is achieved if the model estimation and update algorithms perform well. Generally, the main challenges of MMT lie in two aspects: 1) To obtain a precise object model for complex environments (both object geometry and physical properties). 2) To update the estimated model parameters with reduced packet rate and negligible (ideally imperceptible) distortion for the users. In our preliminary work in [15], we present a point cloud-based model-mediated teleoperation (pcbmmt) method to address the issue of dealing with complex object geometry. In this paper, we extend our pcbmmt approach to additionally address the challenge 2) above. Different from previous MMT approaches, the remote environment in our system is no longer approximated by a simple geometric shape, but by a point cloud object surface model, which is captured by a time-of-flight (ToF) camera. Thus, even complex object geometry can be modeled. In order to enable real-time interaction with the point cloud object surface model, a simple haptic rendering algorithm is adopted to generate the force feedback directly from the point cloud model without first converting it into a 3D mesh, which reduces the computational complexity compared to the traditional mesh-based force rendering methods. Compared to our previous work in [15], in this paper the point cloud model as well as the physical properties of the object (stiffness and surface friction coefficient) are estimated on the slave side in real-time and transmitted to the master. The data transmission is controlled by a novel updating algorithm proposed in this paper, which adaptively adjusts the update rate of the point cloud model and its physical properties. 
Limits of human haptic perception are exploited in the design of the updating algorithm. As a result, the packet rate in the backward communication channel is significantly reduced without disturbing the user's

Fig. 2. Overview of a model-mediated teleoperation system (adopted from [12]).

perception.

In this paper, we make the following two main contributions to the field of MMT: (1) Instead of using a simple geometry to approximate the remote environment, the object surface is now described by a point cloud model. In addition, all system components, such as parameter identification, data transmission, and haptic rendering, are extended to deal with pure point cloud object models. (2) A perception-based data reduction approach is proposed to avoid unnecessary transmissions of the estimated model parameters. Only those estimates which are perceptually relevant to the user are selected and transmitted back to the master.

The remainder of the paper is organized as follows. In Sec. II, we review related work in the area of MMT. In Sec. III, we describe our point cloud-based MMT extension, including pre-filtering of depth maps, coordinate transformations, environment modeling, model update, point cloud-based haptic rendering, and data reduction. In Sec. IV, error compensation and force protection schemes for the pcbmmt are introduced. Our experimental setup and the results obtained are described in Sec. V. Sec. VI concludes the paper and outlines future work.

II. MODEL-MEDIATED TELEOPERATION

Model-mediated teleoperation, also referred to as virtual-reality-based or impedance-reflecting teleoperation, was first proposed by Hannaford [14], and later extended by Niemeyer et al. [12, 13]. The main goal of MMT is to enable stable and transparent teleoperation in the presence of arbitrary communication delays.
In MMT, a virtual model of the remote environment is estimated based on the haptic interaction with the remote objects. Instead of directly exchanging haptic (force) signals, the estimated model parameters (object geometry and physical properties) are transmitted back to the master. On the master side, a copy of the virtual model is reconstructed accordingly. Thus, the user can haptically interact with the local model without any delay, as illustrated in Fig. 2. If the estimated model captures the properties of the remote environment accurately, the teleoperation system can be both stable and transparent for arbitrary communication delays.

One of the main tasks of an MMT system is to estimate the environment parameters [16-24]. In [16], a damper-spring model is adopted to approximate the environment, and a sliding-average least-squares algorithm is proposed to estimate the dynamic parameters of the environment on the slave side for online updating of the virtual model parameters on the master side in the presence of large communication delay. In [17], a laser rangefinder is used to predict a collision even before the slave is in contact with the environment. Both estimation approaches, however, work only for one-degree-of-freedom (DoF) systems. A multi-DoF estimation method is proposed in [18, 19], where physical properties such as stiffness and damping are estimated in real time. Yet, communication delay as well as surface friction is ignored. An approach for estimating the object model in multiple DoF with communication delays is proposed in [13], where a 2D planar surface model is extracted from point clouds captured by a stereo camera. Stability and transparency issues of MMT systems and the corresponding compensation schemes are discussed in [25, 26].

Fig. 3. Overview of the point cloud-based MMT system, where x_m and x_s are the master and slave positions, x_s^d denotes the desired slave position (which can penetrate into the object surface), f_m and f_s are the master force and the measured slave contact force, and T_f and T_b are the forward and backward communication delays.

Most related work on MMT approximates the environment with a simple geometry (e.g. a plane). However, in most cases the geometric properties of the remote environment are complex. A simplified geometry approximation is not sufficient, since it leads to large deviations from the real environment and thus results in frequent model updates and incorrect haptic rendering. It is widely accepted that point clouds are a convenient and efficient way to represent complex object surfaces. In [36-39], point cloud-based haptic rendering approaches are proposed to generate force signals directly from point cloud models of object surfaces. However, these approaches are proposed only for virtual environments. In [15], we present a pcbmmt method that is able to deal with static and rigid objects with complex surface geometry. To this end, a point cloud of the object surface is captured by a 3D sensor in real time. The model mediation and force rendering are purely based on point clouds without using any geometric model or 3D meshes. This enables the system to estimate the environment geometry with high resolution.
Yet, for the pcbmmt system in [15] some issues remain unsolved: (1) estimation of the physical properties of the objects (stiffness and friction coefficient) in real time in the presence of large communication delays; (2) model parameter updates (object geometry and physical properties) at a low packet rate in the communication channel with minimal distortion perceived by the users during the updating. In this paper, we extend our pcbmmt method to address the aforementioned issues by developing an adaptive modeling and updating controller, which is presented in the next section.

III. POINT CLOUD-BASED MODEL-MEDIATED TELEOPERATION SYSTEM WITH ADAPTIVE MODELING AND UPDATING CONTROLLER

A. System overview

An overview of the proposed system is shown in Fig. 3. The depth images captured by the ToF camera, the slave position, and the measured force signals are used to estimate the environment model parameters. Once the model parameters are obtained, they are transmitted to the master side. An updating algorithm dynamically controls the updating on both the slave and the master side. On the master side, the force-feedback signals are generated based on the local copy of the environment model. Thus, delays in the haptic channel are avoided.

B. The 3D sensor and depth maps

To obtain the point cloud model, a 3D sensor is employed. For complex environments there are always areas that cannot be scanned while approaching the objects, because of occlusion and the limited field

of view. Thus, the 3D sensor needs to capture the point cloud of the object surface continuously during teleoperation. The employed 3D sensor is a ToF camera (Argos 3D-P100), which captures at a high frame rate (up to 160 fps) and operates over a more flexible working range (10 cm to 5 m) compared to other 3D sensors such as Microsoft's Kinect and the ASUS Xtion (about 50 cm to 3 m). The captured point clouds are organized and stored in matrices (depth maps) with a size of 120×160 pixels. In this paper, we treat the captured depth maps as gray-scale images, and thus image processing algorithms can be directly applied to the depth maps for noise reduction, image inpainting, and compression.

C. Pre-filtering

The raw depth maps captured by the 3D camera are normally quite noisy, sometimes even with missing parts (holes) due to an invalid working range or erroneous reflections (Fig. 4 left). Therefore, pre-filtering is necessary for noise reduction and hole filling. In this paper, in order to reduce the computational complexity of the online modeling, simple standard image filters are employed. First, a 5×5 median filter is applied to each depth image to remove outlier depth values. Then, a temporal per-pixel averaging filter over every N frames is employed to reduce the noise of the depth image (see Sec. E.1 for more details about the value of N). In addition, an image inpainting method is employed to fill the holes in the depth image. Since the purpose of using image inpainting techniques is to recover the missing parts rather than to provide good visual quality, the simple and fast image inpainting algorithm described in [27] is adopted. In this hole-filling algorithm, the missing regions in the depth image are first extracted and marked.
Then isotropic diffusion (convolution with the kernels A and B) based on the neighborhoods of the hole regions is applied inside the hole regions for several rounds. The diffusion kernels suggested by [27] are as follows:

        a b a                 c c c
    A = b 0 b     and     B = c 0 c        (1)
        a b a                 c c c

where the kernel weights a, b and c are the values given in [27]. After filtering, a low-noise depth image without holes is obtained (Fig. 4 right). Note that if there is a real hole in the object surface, the inpainting algorithm will fill it by mistake, because we do not apply any detector to distinguish between a real hole and a missing part. To address this issue, more information, such as the edge shape of the hole, would have to be collected and analyzed in order to make a correct decision; this will be addressed in our future work.

Fig. 4. A depth image before filtering (left) and after filtering (right). The holes are filled by the median, average and inpainting filters.

D. Coordinate transformation

The acquired depth maps are expressed in local pixel coordinates. In order to build the 3D point cloud model in world coordinates from the depth maps, a 3D coordinate transformation is employed. As illustrated in Fig. 5, the coordinate transformation is composed of three steps: 1) from pixel coordinates to camera
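The diffusion step above can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: since the exact kernel weights are those given in [27], the sketch assumes a uniform 8-neighborhood weight of 1/8 (the structure of kernel B, with zero center weight) for concreteness, and it only diffuses at marked hole pixels.

```python
# Sketch of isotropic-diffusion hole filling over a masked depth map.
# Assumption: uniform 8-neighborhood weight 1/8 stands in for the exact
# kernel weights of [27]; border pixels are left untouched.

def inpaint(depth, hole, rounds=8):
    """depth: 2D list of floats; hole: 2D list of bools (True = missing)."""
    h, w = len(depth), len(depth[0])
    img = [row[:] for row in depth]
    for _ in range(rounds):
        nxt = [row[:] for row in img]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if not hole[y][x]:
                    continue
                # convolve the 8-neighborhood (center weight is 0)
                s = 0.0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy or dx:
                            s += img[y + dy][x + dx] / 8.0
                nxt[y][x] = s
        img = nxt
    return img
```

Repeating the convolution for several rounds lets valid depth values propagate towards the center of larger holes, which is the behavior the paper relies on.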

Fig. 5. Coordinate transformation in the proposed pcbmmt system (ToF camera frame O_c, robot tool frame O_t, and world frame O_w, related by the rotations and translations (R_1, t_1) and (R_2, t_2)).

Fig. 6. Pin-hole camera model and the coordinate transformation from pixel to camera coordinates.

2) from camera coordinates to robot tool coordinates (R_1 and t_1), and 3) from robot tool coordinates to world coordinates (R_2 and t_2). In the first step, every pixel in the depth map, described by the vector (u, v, d)^T, is transformed into camera coordinates (x_c, y_c, z_c)^T, where u and v are the pixel coordinates in rows and columns and d is the depth value. An ideal pinhole camera model is adopted for this transformation. As illustrated in Fig. 6, the transformation can be described as follows:

x_c = (o_x - v) z_c / f_x,   y_c = (u - o_y) z_c / f_y,   z_c = d    (2)

where f_x and f_y are the camera focal lengths in the x and y directions, and o_x and o_y are the pixel shifts from the camera center. For the remaining steps from camera coordinates (x_c, y_c, z_c)^T to robot tool coordinates (x_t, y_t, z_t)^T and then to world coordinates (x_w, y_w, z_w)^T, 3D rotations and translations are applied as follows:

(x_w, y_w, z_w)^T = R_2 (x_t, y_t, z_t)^T + t_2 = R_2 (R_1 (x_c, y_c, z_c)^T + t_1) + t_2    (3)

E. Environment modeling

Next, we discuss our approach to estimating the geometry and physical properties of the remote object. According to our assumption, the object in the remote environment is a static and rigid body with friction; thus, the friction coefficient (FC) µ between the object surface and the robot end-effector is the first important parameter to be estimated. Meanwhile, due to the haptic rendering algorithm (proxy-HIP method, see Sec.
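The three-step transformation of Eqs. (2) and (3) can be sketched as follows. The intrinsics and extrinsics passed in are placeholders for a calibrated camera and robot, not the values used in the paper, and the function names are ours.

```python
# Sketch of the pixel -> camera -> tool -> world transformation chain.
# Calibration values (f_x, f_y, o_x, o_y, R_1, t_1, R_2, t_2) are
# placeholder assumptions, not the paper's calibration.

def pixel_to_camera(u, v, d, fx, fy, ox, oy):
    """Eq. (2): pinhole back-projection of one depth pixel."""
    zc = d
    xc = (ox - v) * zc / fx
    yc = (u - oy) * zc / fy
    return (xc, yc, zc)

def transform(R, t, p):
    """Apply one rigid transformation: R p + t (R as 3x3 nested list)."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

def pixel_to_world(u, v, d, intr, R1, t1, R2, t2):
    """Eq. (3): chain camera -> tool -> world coordinates."""
    pc = pixel_to_camera(u, v, d, *intr)
    pt = transform(R1, t1, pc)    # camera -> robot tool
    return transform(R2, t2, pt)  # robot tool -> world
```

Applying this per pixel to a 120×160 depth map yields the point cloud in world coordinates that the modeling stage operates on.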
G), a stiffness value k is necessary, which can be very high and represents the stiffness of the coupling between the slave robot and the environment. Due to the communication delay, we need to select initial values for k and µ in order to be able to render the force signals on the master side before the first estimated physical properties are received. In our paper, the initial values are set to k_0 = 2000 N/m and µ_0 = 0.15.

E.1 Geometry modeling

The object geometry is continuously updated and transmitted to the master side while the slave is in free space. Thus, the master system can reconstruct a stable and precise 3D point cloud model before the slave gets in contact with the remote environment. The update rate of the object geometry is adaptively changed according to the slave velocity; for accuracy reasons, a higher slave velocity results in a higher update rate. As the frame rate of the 3D sensor is set to 50 fps in our system, in order to balance estimation accuracy and computation time, the maximal and minimal update rates of the object geometry are set to 25 Hz and 2 Hz, respectively. The update rate r as a function of the slave velocity v_s is selected as follows:

r = min{2 + v_s, 25}    (4)

where v_s is the slave velocity in cm/s. According to Eq. (4), the update rate of the object geometry (point cloud) increases with increasing slave velocity. The adaptive length N of the temporal averaging filter in Sec. C is computed as N = round(50/r), which means that minimally 2 frames and maximally 25 frames are taken to compute the point cloud model (object geometry). For example, if the slave velocity is v_s = 5 cm/s, then the update rate is r = min{2 + 5, 25} = 7 Hz. Therefore, we use N = round(50/r) = 7 frames captured by the 3D sensor to compute the final depth map (pre-filtering and inpainting, see Sec. III.C), which is transmitted to the master side to reconstruct a corresponding virtual environment model.

E.2 Physical properties

Friction coefficient (FC) µ

Three assumptions are made before the estimation: 1. The static FC is assumed to be the same as the dynamic FC. 2. The estimation is activated only when the robot velocity in the tangential direction of the object surface, v_s^t, is larger than a pre-defined threshold. 3.
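The velocity-adaptive update rate of Eq. (4) and the derived averaging-filter length N can be sketched as follows (the function names are ours):

```python
# Sketch of the geometry update-rate law of Eq. (4) for a 50 fps sensor.

def update_rate(v_s):
    """v_s: slave speed in cm/s -> geometry update rate r in Hz (2..25)."""
    return min(2 + v_s, 25)

def averaging_window(v_s, sensor_fps=50):
    """Length N of the temporal averaging filter: N = round(fps / r)."""
    return round(sensor_fps / update_rate(v_s))
```

For the example in the text, a slave speed of 5 cm/s yields r = 7 Hz and N = 7 averaged frames per transmitted point cloud.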
The FC value is the ratio of the measured tangential force f_s^t and the normal force f_s^n on the slave side:

µ = ‖f_s^t‖ / |f_s^n|,   f_s^n = ⟨f_s, n⟩,   f_s^t = f_s − f_s^n n    (5)

where ⟨·, ·⟩ denotes the vector inner product, f_s is the measured slave contact force, and n is the estimated surface normal at the contact position (see Sec. G for more details about surface normal estimation). When the estimation is activated, the measured tangential and normal slave forces (sampling rate 1 kHz) are recorded as an effective force-pair sample. The FC is computed from the last 100 effective force-pairs using the least-squares method.

Stiffness k

While interacting with a rigid object, the slave end-effector stays on the surface of the object. The commanded (desired) slave position x_s^d, however, can penetrate into the object due to the limited force that can be displayed through the master device. The position difference between the slave end-effector x_s and the desired slave position x_s^d in the direction of the surface normal is regarded as the penetration depth:

d = ⟨x_s − x_s^d, n⟩    (6)

With the help of Hooke's law, the stiffness k is computed as

k = |f_s^n| / d    (7)

Similar to the FC estimation, the last 100 effective (f_s^n, d) samples (sampling rate 1 kHz) are collected to estimate the value of k. The stiffness estimation is activated if the normal force on the slave side is larger than a pre-defined threshold, which implies a stable contact between the slave end-effector and the environment.

F. Model update

During the operation, continuous updating of the model parameters (object geometry and physical properties) is necessary to ensure high system transparency. For each update, the latest estimated parameters are transmitted back to the master side and the local model on the master side is updated accordingly. We assume in this work that the objects in the remote environment are static and rigid. The model update algorithm is based on the following assumptions: 1. To avoid the model-jump effect [25], object geometry updates are only activated while the slave is in free space (‖f_s‖ < f_thres). 2. Updates of the physical properties of the object are activated according to the algorithms described in Sec. E.2 once a contact is detected on the slave side (‖f_s‖ ≥ f_thres).

          { Geometry,             ‖f_s‖ < f_thres
Trigger = {                                            (8)
          { Physical properties,  ‖f_s‖ ≥ f_thres

where f_thres is used to detect contact on the slave side. If the measured slave force ‖f_s‖ is larger than f_thres, the slave is considered to be in contact with the object. In our work, considering the sensor resolution and measurement noise, we select f_thres = 0.3 N.

F.1 Updating the object geometry

While the slave is in free space, the captured object geometry (point cloud) is directly encoded and transmitted to the master. On the master side, the object model is reconstructed according to the received geometry data (Fig. 7). The geometry updates are deactivated once the slave is in contact with the environment.
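The least-squares estimation over the most recent effective samples can be sketched as follows. This assumes a zero-intercept linear model (f_s^t = µ f_s^n from Eq. (5) and f_s^n = k d from Eq. (7)); the paper's exact windowing and filtering of "effective" samples may differ.

```python
# Sketch of windowed least-squares parameter estimation, assuming
# zero-intercept linear models f_t = mu * f_n and f_n = k * d.

def ls_slope(xs, ys):
    """Least-squares slope of y = m * x through the origin."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / sxx

def estimate_mu(f_n, f_t):
    """Friction coefficient from normal/tangential force-pair samples."""
    return ls_slope(f_n, f_t)

def estimate_k(d, f_n):
    """Stiffness from penetration-depth / normal-force samples."""
    return ls_slope(d, f_n)
```

In the running system, f_n, f_t, and d would each be the last 100 effective samples collected at 1 kHz, as described in the text.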
If the local model is accurate for the current interaction area, the proposed geometry updating scheme allows for stable exploration. However, the slave could try to touch a part of the environment for which no point cloud model is available. This happens, for instance, if the workspace of the master/slave is larger than the field of view of the 3D sensor. This leads to a model mismatch between the slave and master and results in unpredictable distortion: on the master side, as there is no valid point cloud model, the force feedback is zero, while on the slave side the end-effector could still be in contact with the object. In our proposed pcbmmt, the solution to this issue is to extrapolate the current model across the model boundaries. Thus, once the master HIP (Haptic Interaction Point) moves across the boundary of the available model, the system can still render force signals based on the extrapolated point cloud model. Once a large force/position difference is detected between the slave and master, the user is asked to stop the exploration and command the slave back into free space. Thus, the geometry updating is activated again and new point cloud data is captured according to the current sensor view. This solution, however, can lead to frequent interruptions of the user's exploration. Moreover, for environments with deformable and movable objects, the extrapolation of the point cloud model is not sufficient, and an algorithm for updating the object geometry (point cloud) during contact with the remote object is required. Such an updating scheme is beyond the scope of this paper and will be addressed in our future work.

Fig. 7. The structure of the update controller on the slave side (left) and the master side (right). On the slave side, the measured slave contact force is used to choose the transmission mode: if ‖f_s‖ < f_thres, the slave is in free space and geometry data is transmitted; otherwise, the physical properties are transmitted if their changes since the last update are larger than a JND. On the master side, the received geometry data is directly used to reconstruct the local model, while updating of the physical properties is stretched over a time period of 500 ms to avoid abrupt changes.

F.2 Updating the physical properties of the objects

The updating schemes for the estimated physical properties differ between the master and slave sides: the slave uses a model of human haptic perception to determine when to transmit a new update to the master in order to reduce the packet rate, while the master updates the local model from the current parameters to the newly received ones without disturbing the user's exploration.

Update controller on the slave side: In a typical teleoperation system, the haptic signals (packets) are transmitted back to the master side at a rate of 1 kHz. This packet rate can be significantly reduced by applying the perceptual deadband coding approach described in [28-33]. For our pcbmmt system, a similar concept can be employed to reduce the packet rate required for transmitting the estimated physical properties of the object. If the physical properties of the object were perfectly estimated by the modeling algorithm, no updates would be required, and the system would achieve zero transmission in the backward communication channel after the initial model transfer.
For real teleoperation systems, however, this is impossible. Various factors, such as noise, human behavior, and the field of view, affect the completeness and accuracy of the model. Therefore, we expect a considerable, but not extremely high, packet rate reduction when the pcbmmt method is employed. Instead of sending every new estimate of the physical properties, only those values which could result in significantly different perception during the user's exploration are transmitted back to the master.

Based on the perceptual deadband haptic data reduction approach, an update is triggered if the difference between the locally rendered master force and the measured slave force is larger than the JND (Just Noticeable Difference). However, triggering updates based on the force JND has shortcomings, since we have only two choices for obtaining the master force on the slave side: (1) transmit the master force to the slave over the forward channel, or (2) render the force from a local model on the slave side based on the physical properties of the object transmitted to the master. The former method works poorly with increasing communication delay, as the slave receives the master force only after a round-trip delay (T_d). During the round-trip period, the measured slave force and the received master force on the slave side still mismatch. This results in unnecessary updates and leads to a high packet rate in the backward communication channel. The latter method does not suffer from this effect. Yet, it is resource-consuming: the whole force rendering algorithm needs to run at 1 kHz on the slave side in order to simulate the local master force without delay. Meanwhile, the slave system has to estimate the environment parameters, which is also time-consuming if the environment is complex.

Our approach avoids the aforementioned issues. In our work, the updating is based on the change of the physical properties k and µ, not on the change of forces. If the difference between the currently estimated stiffness/FC values and the last updated values is larger than a threshold, an update is triggered (Fig. 7 left):

update = { yes, if |k_n − k_{n−1}| / k_n > Δk  or  |µ_n − µ_{n−1}| / µ_n > Δµ
         { no,  else                                                          (9)

where k_n and µ_n are the n-th updates of the stiffness and FC values, while Δk and Δµ denote the JNDs of the stiffness and FC, respectively. According to the update condition, we need to find the JND values for the stiffness k and the FC µ. As suggested in [34, 35], Δk is set to 23% for rigid contact. For Δµ, no threshold value is available in the literature. However, by assuming a constant slave normal force f_s^n, we can derive Δµ from the force JND:

Δf = df_s^t / f_s^t = d(µ f_s^n) / (µ f_s^n) = dµ / µ = Δµ    (10)

where Δf is the JND for force. As a result, if the normal force is constant, the JND for the FC is simply the same as for the force, which is typically around 10%. Under this assumption, however, we isolate the friction force from the other force components and do not consider the impact of other human factors such as exploration velocity. In fact, with a change of the normal force, the FC JND could also change. Moreover, human perception of stiffness and FC is coupled, and thus we cannot simply set their JNDs separately. Therefore, a further study of the JNDs for different physical object properties using methods from psychophysics is necessary.

Update controller on the master side: The task of the update controller on the master side is to provide a smooth transition from the currently applied physical properties to the newly received ones. A sudden change in the physical properties leads to system instabilities [25, 26].
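The slave-side update trigger of Eq. (9) can be sketched as follows, with the JND values quoted above (23% for stiffness, about 10% for the FC) as defaults:

```python
# Sketch of the perceptual deadband update trigger of Eq. (9): a new
# (k, mu) estimate is transmitted only if it differs from the last
# transmitted values by more than the corresponding JND.

def needs_update(k_new, k_last, mu_new, mu_last, jnd_k=0.23, jnd_mu=0.10):
    """Return True if either relative parameter change exceeds its JND."""
    return (abs(k_new - k_last) / k_new > jnd_k or
            abs(mu_new - mu_last) / mu_new > jnd_mu)
```

Every estimate that falls inside both deadbands is simply dropped, which is where the packet-rate reduction in the backward channel comes from.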
Thus, a time interval for updating (updating period) is required. As suggested in [25], the updating period is set to 500 ms. If a new update is received on the master side during the current updating period (we call this an updating overlap), the controller stops applying the current update and takes another 500 ms to apply the newer one (Fig. 7 right). Since changes of the physical properties can be assumed to be spatially infrequent (across the object surface), updating overlaps are infrequent as well.

G. Point cloud-based force rendering

In order to render the interaction force based on the received point cloud model on the master side, a point cloud-based haptic rendering (pcbhr) method is employed. Compared to the traditional mesh-based rendering process, the pcbhr method computes the force signals directly from the point cloud, without first converting it into meshes, which reduces the computational complexity. Previous pcbhr approaches for rigid objects are described in [36-39]. In order to obtain a low-cost pcbhr method that includes friction rendering, a combination of the approach in [36, 37] and the friction cone method in [40] is proposed in this work. As illustrated in Fig. 8, a proxy-HIP method is used to estimate the surface normal and render the force.

G.1 Proxy states

Similar to [37], the proxy has three different radii R_1, R_2 and R_3. R_1 and R_2 are used to detect collisions, while R_3 is used for surface normal estimation. The gap between the proxy radii R_1 and R_2 is

chosen to be 5 mm, which is just larger than the noise level of the point clouds captured by the ToF camera. R_3 is chosen sufficiently large to obtain a good surface normal estimate.

Fig. 8. The definition of the proxy (left) and the estimation of the surface normal as well as the master force (right).

Three different proxy states are defined as follows:
- Free space: no points within R_2.
- In contact: there are points between R_1 and R_2, but no points within R_1.
- Entrenched: points within R_1.

G.2 Surface normal estimation

As suggested in [36], the surface normal is obtained by averaging all unit vectors which start from the nearby points and point towards the center of the proxy x_p. Every time before the proxy moves, the surface normal n is computed as follows:

\[
\tilde{n} = \frac{1}{K} \sum_{i=1}^{K} \frac{x_p - x_i}{\| x_p - x_i \|}, \qquad n = \tilde{n} / \| \tilde{n} \|
\tag{11}
\]

where the x_i are the positions of the points between R_1 and R_3.

G.3 Proxy movement

A modified proxy movement algorithm based on [37] is employed to enable friction rendering. In the following, we define s as the proxy movement vector, d as the step size (for more details on d, please refer to [37]), and x_m and x_p as the master (HIP) and proxy positions, respectively. u = x_m − x_p denotes the vector pointing from the proxy center towards the HIP position, and

\[
v = \frac{u - \langle u, n \rangle n}{\| u - \langle u, n \rangle n \|}
\]

is the surface tangent vector.
- If the proxy is in free space, move it one step towards the HIP: s = d u.
- If the proxy is entrenched, move it one step in the direction of n: s = d n.
- If the proxy is in contact with the object, a friction cone is computed based on the estimated friction coefficient. Then: (1) if the HIP is inside the friction cone, the proxy stays still (s = 0); (2) if the HIP is outside the friction cone, move the proxy one step s in the direction of v.
The step size of s is computed such that, after the proxy movement, the HIP lies just on the boundary of the friction cone [40].
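The state classification, the normal estimation of Eq. (11), and the movement rules above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names are our own, and expressing the friction-cone test as tan θ ≤ µ on the tangential/normal components of u is our reading of [37, 40]:

```python
import numpy as np

def proxy_state(points, x_p, R1, R2):
    """Classify the proxy against the point cloud (Sec. G.1)."""
    d = np.linalg.norm(points - x_p, axis=1)
    if np.any(d < R1):
        return "entrenched"
    if np.any(d < R2):
        return "in_contact"
    return "free"

def surface_normal(points, x_p, R1, R3):
    """Average the unit vectors from nearby points towards the proxy
    center and normalize the result (Eq. 11)."""
    d = np.linalg.norm(points - x_p, axis=1)
    near = points[(d > R1) & (d < R3)]
    v = (x_p - near) / np.linalg.norm(x_p - near, axis=1, keepdims=True)
    n = v.mean(axis=0)
    return n / np.linalg.norm(n)

def proxy_step(state, x_p, x_m, n, mu, d_step):
    """One proxy move following Sec. G.3 (HIP assumed below the surface
    when in contact)."""
    u = x_m - x_p
    if state == "free":
        return d_step * u / np.linalg.norm(u)   # one step towards the HIP
    if state == "entrenched":
        return d_step * n                        # one step along the normal
    # In contact: decompose u into normal and tangential parts.
    u_n = np.dot(u, n)
    u_t = u - u_n * n
    if np.linalg.norm(u_t) <= mu * abs(u_n):     # HIP inside the friction cone
        return np.zeros(3)
    v = u_t / np.linalg.norm(u_t)                # surface tangent direction
    return d_step * v                            # slide along the surface
```

A sticking contact (HIP inside the cone) yields s = 0, so the spring force between proxy and HIP reproduces static friction; once the HIP leaves the cone, the proxy slides tangentially.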

G.4 Force rendering

The haptic signal is rendered at 1 kHz with a spring model between the proxy and the HIP, based on Hooke's law:

\[
f_m = k \, (x_p - x_m)
\tag{12}
\]

H. Point cloud compression

In our system, the captured 3D point clouds are transmitted to the master side to reconstruct a local virtual model. The transmission of the 3D point clouds, however, requires a large data rate in the communication channel. To reduce the data rate, we transmit the filtered depth map in the camera view (an organized 2D matrix) along with the coordinate rotation and translation parameters. Thus, the 3D model can be reconstructed losslessly on the master side from a reduced amount of data. However, directly transmitting the depth images still requires a large bit rate. Considering a depth map of size 120x160 pixels and the maximum update rate of 25 fps (see Sec. III.E.1), the maximum required bit rate is

19200 pixel/frame × 8 bit/pixel × 25 frame/s = 3.84 Mb/s.

Therefore, lossless H.264/AVC compression is employed to compress the depth map (see Sec. V.B.2).

IV. POSITION ERROR COMPENSATION AND FORCE PROTECTION

Due to the estimation error of the 3D point clouds, there will be small position differences between the real object and the estimated point cloud model. The following two cases represent this issue:
1. Under-estimation: the slave is in contact with the object while the master HIP is still in free space (Fig. 9 left): f_s > f_thres and f_m ≈ 0.
2. Over-estimation: the slave is in free space while contact occurs on the master side (Fig. 9 right): f_s ≤ f_thres and f_m > 0,
where f_m is the received master force on the slave side. A solution similar to the ones proposed in [25, 41] is employed to address this issue.
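The two detection conditions above can be sketched as follows. This is a minimal illustration with our own function name; the defaults for the contact threshold and the near-zero tolerance are assumed values, not taken from the paper:

```python
def model_mismatch(f_s, f_m, f_thres=0.5, eps=1e-3):
    """Detect the two position-error cases of Sec. IV.
    f_s: measured slave contact force magnitude,
    f_m: master force received on the slave side.
    f_thres (N) and eps are assumed example values."""
    if f_s > f_thres and f_m <= eps:
        return "under-estimation"   # slave in contact, master HIP still free
    if f_s <= f_thres and f_m > eps:
        return "over-estimation"    # master in contact, slave still free
    return "consistent"
```

On an "under-estimation" result the model is shifted against the master velocity (case 1); on an "over-estimation" result it is shifted along it (case 2), as described in the following paragraphs.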
For case 1, the point cloud model is shifted as a whole, opposite to the direction of the current master velocity, by ∆x_1, which is computed such that the shifted model lies just below the current master HIP. If the master moves upwards, the shift is applied continually until the local model reaches the correct position. Due to the communication delay, a model shift alone is not enough to keep the system stable, since the desired slave position x_s^d may already have penetrated too deeply into the object surface by the time the collision is detected on the slave side. Therefore, a force protection scheme is applied to prevent dangerous penetration. The slave movement is modified accordingly as

\[
v_s^n =
\begin{cases}
0, & \text{if } f_s^n > f_m^n + f_{thres} \\
v_m^n, & \text{else}
\end{cases}
\tag{13}
\]

where v_m^n and v_s^n denote the master and slave velocities in the direction of the object surface normal, while f_m^n and f_s^n are the master and slave forces in that direction, respectively. Eq. (13) implies that once a potentially dangerous force action is detected, the slave stops penetrating the object. For case 2, the point cloud model is shifted as a whole along the direction of the current master velocity by ∆x_2. The step size ∆x_2 needs to be chosen carefully in order to reduce the force distortion. A too

small step size leads to an excessive number of perceivable shifts, while a too large step size results in under-estimation (i.e., case 1).

Fig. 9. The position error compensation approach. For under-estimation (left), the object model is shifted up to be as close to its real position as possible without pop-through of the master HIP. For over-estimation (right), the object model is shifted down by a proper distance for the correction.

V. EXPERIMENTAL VALIDATION

A. Setup

For evaluating the proposed method, we use a real teleoperation system with a Force Dimension Omega.6 as the master device and a KUKA LightWeight arm as the slave (Fig. 10). A JR3 force sensor (6-DOF force/torque sensor) is mounted at the end-effector of the slave robot to measure the contact force. The measured force is automatically calibrated and decoupled by the sensor SDK. Gravity and inertial forces are also compensated in the force measurement. The Argos 3D-P100 ToF camera is used to capture the depth images. The measurement errors of the 3D sensor are compensated using a look-up table obtained during the camera calibration procedure. In addition, an RGB camera on the slave side captures the video signals of the slave robot. The software environment is based on ROS, the FRI library (cs.stanford.edu/people/tkr/fri/html) and the SDK of Force Dimension. As illustrated in Fig. 11, the remote environment is composed of two objects: a hardcover book with a smooth surface (object 1) and a plank with a relatively rough surface (object 2). Object 1 is placed horizontally on a hard base, while object 2 is arranged with a small slope and supported by foam boards.
Therefore, different stiffnesses and surface friction coefficients are expected for the two objects during the experiment. The exploration trajectory is illustrated by the green arrow in Fig. 11. The slave robot is first commanded to touch point A; after the contact is stable, it is controlled to move across the two object surfaces along the trajectory. At point B, the slave end-effector leaves the object surface. During the exploration, the system estimates the object stiffness k as well as the friction coefficient µ and triggers updates according to the updating algorithm described in Sec. III.F.

Fig. 10. The setup of the teleoperation system.

The packet rate in the communication

channel is also recorded.

Fig. 11. The tested remote environment, which consists of two objects: a horizontally placed book with a smooth hard cover and a declining wooden plank. The green arrow from point A to point B denotes the trajectory of the slave motion during the test.

The forward and backward communication delays are set to be constant with T_f = T_b = 500 ms. Note that the distance between the 3D sensor and the slave end-effector is larger than 20 cm. Thus, the point cloud of the object surface can be correctly captured by the 3D sensor when the slave is close to or in contact with the object.

B. Results

B.1 System evaluation

Figs. 12(a)-(f) show the measured position and force signals on both the master and the slave side in world coordinates. The master position and force signals are shifted by the forward communication delay T_f for easier comparison. Tab. I summarizes the slave status based on its motion. We observe from Figs. 12(b)(c) and Tab. I that before t_1, the slave is in free space and commanded to approach object 1. At t_1, the slave gets in contact with object 1. Between t_1 and t_2, the slave end-effector stays in contact with object 1 without moving. At t_3, the slave reaches the boundary of the two objects, which leads to a disturbed slave force (Fig. 12(f)). Between t_4 and t_5, the slave stays on the surface of object 2 without moving. After t_5, the slave is controlled to leave object 2 and returns to free space. Between t_2 and t_3, since the slave is moving on the surface of a horizontally placed object, the force in z-direction is mainly determined by the penetration depth and the environment stiffness, while the forces in x- and y-direction are caused by surface friction. Between t_3 and t_4, the slave is in contact with a declining object.
Therefore, the force signals in all three directions influence the estimation of the object stiffness and friction coefficient. The estimated stiffness and friction coefficient values are shown in Fig. 13. The time period between t_1 and t_2 serves as a time buffer to establish a stable slave contact before the effective physical properties are obtained. In the periods t_2–t_3 and t_3–t_4 the system measures the stiffnesses k_1 and k_2 and the friction coefficients µ_1 and µ_2 of the two objects. The mean and standard deviation (Std.) values of the estimated physical properties are shown in Tab. II. Since object 2 is supported by foam boards, which are softer than the base of object 1, the estimated stiffness of object 2 is lower than that of object 1. In addition, the estimated friction coefficients of the two objects also differ due to their different surface smoothness. A statistical analysis shows that significant differences exist for the estimated stiffnesses and friction coefficients (T-test p < 0.01, rank-sum test p < 0.01). Thus, our system successfully detects the different physical properties of the two objects. In addition, after time instant t_2

in Fig. 13(b), a peak of the friction coefficient is detected, which is caused by static friction before the end-effector slides stably over the surface (dynamic friction).

Fig. 12. Experimental results. (a)-(c) the master and slave position in x, y and z direction, respectively. (d)-(f) the locally rendered master force and the remotely measured slave force in x, y and z direction, respectively.

TABLE I
SLAVE STATUS DURING THE OPERATION.

Time       Contact    Motion
0–t_1      No         Yes
t_1–t_2    object 1   No
t_2–t_3    object 1   Yes
t_3–t_4    object 2   Yes
t_4–t_5    object 2   No
t_5–       No         Yes

Note that between t_1 and t_2 the measured stiffness value changes rapidly, while the master force does not change as quickly. According to the master update controller (see Fig. 7), the stiffness value on the master side is not updated immediately once a new update arrives; the master controller takes a 500 ms period to change the old value gradually to the newly received one. This mechanism can be regarded as a low-pass filter in the time domain: the change of the master force is smoothed and sudden force changes are avoided. Between t_2 and t_4, the slave is in stable contact with the objects (except around t_3), and thus the estimated model parameters (stiffness and friction coefficient) of the remote objects are stable as well. From Fig. 12(f) we observe that the differences between the locally rendered master force and the remotely measured slave force are almost always smaller than a JND (10%). According to

the limits of human haptic perception, the user can hardly distinguish the difference between the master force and the slave force, which implies that the estimated environment model is sufficiently accurate and the system is perceptually transparent. A subjective test is conducted to further evaluate the system transparency (see Sec. C).

TABLE II
MEAN AND STANDARD DEVIATION OF THE ESTIMATED STIFFNESSES AND FRICTION COEFFICIENTS FOR THE TWO OBJECTS.

        k_1        k_2        µ_1   µ_2
mean    1410 N/m   870 N/m    –     –
Std.    140 N/m    180 N/m    –     –

Fig. 13. (a) Estimated stiffness. (b) Estimated friction coefficient.

B.2 Data reduction

The achieved data reduction comprises the compression of the point cloud model (object geometry) and the reduction of the update packet rate for the physical properties of the objects. To evaluate the point cloud compression, a total of 40 filtered depth images are recorded during a 5-second slave movement in free space, which includes standing still, slow motion (< 5 cm/s) and rapid motion (up to 20 cm/s). The lossless H.264/AVC compression algorithm with an IPPP... GOP (group of pictures) structure is applied, where the I-frame period is 10. The results are shown in Tab. III. Due to GPU acceleration, the compression time of the depth images is negligible compared to the communication delay. Moreover, even in the worst case (25 Hz update rate), the maximum required bit rate in the communication channel is 770 kbps.

TABLE III
FRAME SIZE AND COMPRESSION TIME FOR THE DEPTH IMAGES OF THE POINT CLOUD MODEL.

                    mean      max.     min.
frame size          3.36 kB   3.85 kB  2.74 kB
compression time    1.7 ms    3 ms     1 ms

The update packet rate over time is shown in Fig. 14. We observe a high packet rate at t_1, t_2, t_3 and after t_4. At t_1, the slave first gets in contact with object 1. The system starts to estimate the stiffness of

object 1. Since the estimated stiffness differs significantly from the initial value, new updates are triggered. At t_2, the slave starts to slide across object 1; thus, the friction estimation is activated and the estimated friction coefficients are transmitted to the master to update the previous value. At t_3, the slave moves across the boundary between object 1 and object 2. Due to the disturbed slave force (Fig. 12(f)), the estimated physical properties vary strongly, which results in a large number of updates. After t_4, the slave starts to leave object 2. The drastic change of the estimated stiffness results in a high packet rate.

Fig. 14. Packet rate for transmitting the physical properties in the backward channel during the operation.

During the contact period (t_1–t_5), the total average packet rate is 103 packets/s, which corresponds to a packet rate reduction of about 90% in the backward communication channel compared to the uncompressed rate (1 kHz). We compare this result with previous work on haptic data reduction in delayed teleoperation systems. In [42], the author uses the wave-variable approach [5] to enable a stable teleoperation system in the presence of communication delays, and then applies a perceptual deadband coding scheme with a JND of 10% to the so-called locally computed wave variables (LCWV) to reduce the packet rate. In our work, the environment is more complex and the communication delay is higher than in [42] (arbitrary 3-dimensional object surfaces vs. a 1-dimensional planar surface, and a round-trip delay of 1000 ms vs. 30 ms). Even in this complex case, our pcbmmt method can significantly reduce the packet rate without degrading the system stability and transparency as the passivity-based haptic data reduction method described in [42] does.
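The quoted reduction follows directly from the reported numbers; a quick check, assuming the uncompressed haptic update rate of 1 kHz as the baseline:

```python
base_rate = 1000.0   # uncompressed haptic update rate (packets/s)
avg_rate = 103.0     # measured average update rate during contact (packets/s)
reduction = 1.0 - avg_rate / base_rate
print(f"{reduction:.1%}")  # 89.7%, i.e. about 90% fewer packets
```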
TABLE IV
COMPARISON OF THE PACKET RATES BETWEEN THE PROPOSED PCBMMT APPROACH AND WV-BASED PERCEPTUAL DEADBAND CODING.

Method                                   packet rate (1/s)
LCWV - Weber-inspired deadband [42]      245
pcbmmt                                   103

C. Subjective test

A subjective experiment is conducted to additionally evaluate the system transparency. The experimental procedure is similar to the one described in [43]. Before the experiment, the live video of a 10-second telemanipulation is recorded. During the telemanipulation, the slave and master force signals are also recorded once the system has estimated a stable environment model. During the experiment, the recorded

video along with the slave / master force signals is replayed to the subjects. The subjects are asked to focus on the force feedback while watching the replayed video. All subjects are trained until they feel comfortable with the experimental setup and the task. The experiment is composed of 6 trials. In each trial, two force signals (slave-slave, slave-master, or master-master force) are displayed to the subjects, and the subjects have to answer whether they feel any difference in the quality of the two force signals. In four of the six trials the slave-master force signals are displayed, while in the remaining two trials the slave-slave and master-master forces are displayed. A subject is considered to fail to distinguish between the slave and master force signals if she/he gives the answer "no difference" more than twice in the four trials where the slave-master force signals are displayed. 10 subjects participated in the experiment, ranging in age from 25 to 44, all right-handed. The experimental results for each subject are illustrated in Fig. 15.

Fig. 15. Results of the subjective experiment. We only count the number of "no difference" answers given when the slave-master force signals are displayed.

We observe that 9 out of 10 subjects give the answer "no difference" more than twice when the slave-master force signals are displayed, which means that 90% of the subjects fail to distinguish between the remotely measured slave force and the locally rendered master force. Therefore, we conclude that the slave force is perceptually identical to the master force and our pcbmmt system is thus transparent.

VI. CONCLUSION

In this paper, we propose a point cloud-based model-mediated teleoperation (pcbmmt) system to enable stable and transparent teleoperation for complex environments in the presence of communication delays.
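The pass/fail criterion above can be sketched as follows. The per-subject counts below are hypothetical example values, chosen only to be consistent with the 9-of-10 outcome reported above; they are not the measured data:

```python
def fails_to_distinguish(no_diff_count):
    """A subject fails to distinguish slave and master forces if she/he
    answers 'no difference' more than twice in the four slave-master trials."""
    return no_diff_count > 2

counts = [4, 3, 3, 4, 2, 3, 4, 3, 3, 4]  # hypothetical per-subject counts
n_fail = sum(fails_to_distinguish(c) for c in counts)
print(n_fail)  # 9, i.e. 9 of 10 subjects cannot tell the signals apart
```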
In our system, the environment model is no longer approximated by simple geometric primitives, but by point clouds. The point cloud model is built with the help of a ToF 3D sensor. During teleoperation, the environment parameters (geometry and physical properties) are estimated and transmitted back to the master side. An update controller is developed to reduce the packet rate and the force disturbances caused by updates of the object's physical properties. The system stability and transparency are verified in the experimental evaluation. In addition, by exploiting the limits of human haptic perception, the proposed pcbmmt achieves a significant haptic data reduction of about 90%. In future work, an extended updating algorithm will be studied which allows online updates of the object point cloud model, which is important for deformable and movable objects. In addition, complex environments with deformable and movable objects will be considered. Moreover, subjective experiments will be conducted to evaluate both the subjective experience and the objective task performance of the proposed pcbmmt system.

ACKNOWLEDGMENT

This work has been supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/ ) / ERC Grant agreement no. The authors would like to thank Nicolas Alt, Clemens Schuwerk and Rahul Chaudhari for their technical support.

REFERENCES

[1] W. Ferrell and T. Sheridan. Supervisory control of remote manipulation. IEEE Spectrum, vol. 4, no. 10.
[2] E. Saddik. The potential of haptics technologies. IEEE Instrumentation & Measurement Magazine, vol. 10, no. 1, April.
[3] A. Alamri, M. Eid, R. Iglesias, S. Shirmohammadi, and A. El Saddik. Haptic Virtual Rehabilitation Exercises for Poststroke Diagnosis. IEEE Transactions on Instrumentation and Measurement, vol. 57, no. 9, Sept.
[4] D. Lawrence. Stability and transparency in bilateral teleoperation. IEEE Transactions on Robotics and Automation, vol. 9, no. 5.
[5] G. Niemeyer and J.-J. Slotine. Stable Adaptive Teleoperation. IEEE Journal of Oceanic Engineering, vol. 16, no. 1, Jan.
[6] R. Anderson and M. Spong. Bilateral control of teleoperators with time delay. IEEE Transactions on Automatic Control, vol. 34, no. 5.
[7] Y. Ye and P. Liu. Improving Haptic Feedback Fidelity in Wave-Variable-Based Teleoperation Orientated to Telemedical Applications. IEEE Transactions on Instrumentation and Measurement, vol. 58, no. 8, Aug.
[8] R. Daniel and P. McAree. Fundamental limits of performance for force reflecting teleoperation. The International Journal of Robotics Research, vol. 17, no. 8.
[9] A. Bejczy, W. Kim and S. Venema. The phantom robot: predictive displays for teleoperation with time delay. In Proceedings of the IEEE International Conference on Robotics and Automation, Cincinnati, OH, May.
[10] A. Bejczy and W. Kim. Predictive displays and shared compliance control for time-delayed telemanipulation.
In Proceedings of the International Conference on IROS, Ibaraki, Japan, July.
[11] T. Burkert, J. Leupold and G. Passig. A Photo-Realistic Predictive Display. Presence: Teleoperators and Virtual Environments, vol. 13, no. 1.
[12] P. Mitra and G. Niemeyer. Model mediated telemanipulation. International Journal of Robotics Research, vol. 27, no. 2.
[13] B. Willaert, J. Bohg, H. Brussel and G. Niemeyer. Towards multi-dof model mediated teleoperation: using vision to augment feedback. IEEE International Workshop on HAVE, Munich, Germany, Oct.
[14] B. Hannaford. A design framework for teleoperators with kinesthetic feedback. IEEE Transactions on Robotics and Automation, vol. 5, no. 4, Aug.
[15] X. Xu, B. Cizmeci and E. Steinbach. Point-cloud-based Model-mediated Teleoperation. IEEE International Workshop on HAVE, Istanbul, Turkey, Oct.
[16] H. Li and A. Song. Virtual-Environment Modeling and Correction for Force-Reflecting Teleoperation With Time Delay. IEEE Transactions on Industrial Electronics, vol. 54, no. 2.
[17] F. Mobasser and K. Hashtrudi-Zaad. Predictive Teleoperation using Laser Rangefinder. Canadian Conference on Electrical and Computer Engineering, Ottawa, Canada, May.
[18] X. Xu, J. Kammerl, R. Chaudhari and E. Steinbach. Hybrid signal-based and geometry-based prediction for haptic data reduction. IEEE International Workshop on HAVE, Hebei, China, Oct.

[19] A. Achhammer, C. Weber, A. Peer and M. Buss. Improvement of model-mediated teleoperation using a new hybrid environment estimation technique. In Proceedings of the International Conference on Robotics and Automation, Anchorage, AK, May.
[20] C. Tzafestas, S. Velanas and G. Fakiridis. Adaptive impedance control in haptic teleoperation to improve transparency under time-delay. IEEE International Conference on Robotics and Automation, Pasadena, May.
[21] W. Yoon, T. Goshozono, H. Kawabe, M. Kinami, Y. Tsumaki, M. Uchiyama, M. Oda and T. Doi. Model-based space robot teleoperation of ETS-VII manipulator. IEEE Transactions on Robotics and Automation, vol. 20, no. 3, June.
[22] C. Zhao. Real Time Haptic Simulation of Deformable Bodies. Ph.D. Thesis, Technische Universität München.
[23] A. Haddadi and K. Hashtrudi-Zaad. Online contact impedance identification for robotic systems. IEEE/RSJ International Conference on IROS, Nice, Sep.
[24] D. Verscheure, J. Swevers, H. Bruyninckx, and J. Schutter. On-line identification of contact dynamics in the presence of geometric uncertainties. IEEE International Conference on Robotics and Automation, Pasadena, CA, May.
[25] B. Willaert, H. Brussel and G. Niemeyer. Stability of Model-Mediated Teleoperation: Discussion and Experiments. EuroHaptics, Tampere, Finland, June.
[26] X. Xu, G. Paggetti and E. Steinbach. Dynamic Model Displacement for Model-mediated Teleoperation. IEEE World Haptics Conference, Daejeon, Korea, April.
[27] M. M. Oliveira, B. Bowen, R. McKenna and Y. Chang. Fast Digital Image Inpainting. Proceedings of the International Conference on VIIP, Marbella, Spain.
[28] P. Hinterseer, S. Hirche, S. Chaudhuri, E. Steinbach and M. Buss. Perception-based Data Reduction and Transmission of Haptic Data in Telepresence and Teleaction Systems. IEEE Transactions on Signal Processing, vol. 56, no. 2, Feb.
[29] P.
Hinterseer, E. Steinbach, S. Hirche and M. Buss. A novel, psychophysically motivated transmission approach for haptic data streams in telepresence and teleaction systems. In IEEE International Conference on Acoustics, Speech, and Signal Processing, March.
[30] E. Steinbach, S. Hirche, J. Kammerl, I. Vittorias and R. Chaudhari. Haptic Data Compression and Communication for Telepresence and Teleaction. IEEE Signal Processing Magazine, vol. 28, no. 1, Jan.
[31] E. Steinbach, S. Hirche, M. Ernst, F. Brandi, R. Chaudhari, J. Kammerl and I. Vittorias. Haptic Communications. Proceedings of the IEEE, vol. 100, no. 4, April.
[32] J. Kammerl, R. Chaudhari and E. Steinbach. Combining Contact Models with Perceptual Data Reduction for Efficient Haptic Data Communication in Networked VEs. IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 1, January.
[33] N. Sakr, N. D. Georganas and J. Zhao. Human perception-based data reduction for haptic communication in six-dof telepresence systems. IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 11, October.
[34] L. Jones and I. Hunter. A Perceptual Analysis of Stiffness. Experimental Brain Research, vol. 79, no. 1, Jan.
[35] F. Freyberger and B. Färber. Compliance Discrimination of Deformable Objects by Squeezing with One and Two Fingers. EuroHaptics, Paris.
[36] F. Ryden, S. Kosari and H. Chizeck. Proxy Method for Fast Haptic Rendering from Time Varying Point Clouds. Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, Sept.

[37] F. Ryden and H. Chizeck. A Proxy Method for Real-Time 3-DOF Haptic Rendering of Streaming Point Cloud Data. IEEE Transactions on Haptics, vol. 6, no. 3.
[38] N. El-Far, N. Georganas and A. El Saddik. An Algorithm for Haptically Rendering Objects Described by Point Clouds. Proceedings of the Canadian Conference on Electrical and Computer Engineering (CCECE), Niagara Falls, ON, May.
[39] A. Leeper, S. Chan and K. Salisbury. Point Clouds Can be Represented as Implicit Surfaces for Constraint-based Haptic Rendering. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, May.
[40] W. Harwin and N. Melder. Improved Haptic Rendering for Multi-Finger Manipulation using Friction Cone based God-Objects. Proceedings of EuroHaptics, Edinburgh, UK, July.
[41] P. Mitra, D. Gentry and G. Niemeyer. User Perception and Preference in Model-Mediated Telemanipulation. IEEE World Haptics Conference, Tsukuba, Mar.
[42] I. Vittorias, J. Kammerl, S. Hirche and E. Steinbach. Perceptual Coding of Haptic Data in Time-delayed Teleoperation. IEEE World Haptics Conference, Salt Lake City, UT, Mar.
[43] J. Kammerl and E. Steinbach. High-fidelity recording, compression, and replay of visual-haptic telepresence sessions. Proceedings of the IEEE International Conference on Image Processing (ICIP), Hong Kong, Sept.

Xiao Xu received his B.Sc. degree in Information Engineering from Shanghai Jiaotong University (China) in 2008 and his M.Sc. degree in Information Engineering from Technische Universität München (Germany) in 2011. He then joined the Media Technology Group at the Technische Universität München in April 2011, where he is working as a member of the research staff. His current research interests are in the field of perceptual coding for haptic data communication and model-mediated telemanipulation.

Burak Cizmeci received his B.Sc.
degree in Electronics Engineering in 2007, his B.Sc. degree in Computer Engineering in 2008 and his M.Sc. degree in 2009, all from Isik University, Istanbul, Turkey. He worked as a teaching assistant at the Department of Electronics Engineering of Isik University from 2007. In September 2010, he joined the Media Technology Group at the Technische Universität München (Germany) to pursue his Ph.D. degree as a DAAD (German Academic Exchange Service) scholarship holder. His research interests include video analysis, coding, de-noising, frame rate up-conversion and super-resolution. Currently, he is working on multimodal multiplexing of audio, video and haptic signals for telepresence and teleaction (TPTA) systems.

Anas Al-Nuaimi studied Electrical and Computer Engineering at the Hashemite University in Jordan, majoring in Telecommunications. He pursued his master's degree in TUM's international M.Sc. program in Communications Engineering (MSCE), majoring in Communication Systems. He wrote his master's thesis on the topic of rapid feature matching, allowing very fast image similarity matching for the application of city-scale visual location recognition. He was awarded the E-ON future award for outstanding thesis work. He is currently a member of the research staff at the Institute for Media Technology, where his research focuses on CBIR and 3D point cloud processing.

Eckehard Steinbach (IEEE M'96, SM'08) studied Electrical Engineering at the University of Karlsruhe (Germany), the University of Essex (Great Britain), and ESIEE in Paris. He was a member of the research staff of the Image Communication Group at the University of Erlangen-Nuremberg (Germany), where he received the Engineering Doctorate. From February 2000 to December 2001 he was a Postdoctoral Fellow with the Information Systems Laboratory of Stanford University. In February 2002 he joined the Department of Electrical Engineering and Information Technology of Munich University of Technology (Germany), where he is currently a Full Professor for Media Technology. His current research interests are in the area of audio-visual-haptic information processing and communication as well as networked and interactive multimedia systems.


More information

An Improved Path Planning Method Based on Artificial Potential Field for a Mobile Robot

An Improved Path Planning Method Based on Artificial Potential Field for a Mobile Robot BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No Sofia 015 Print ISSN: 1311-970; Online ISSN: 1314-4081 DOI: 10.1515/cait-015-0037 An Improved Path Planning Method Based

More information

Open Access The Application of Digital Image Processing Method in Range Finding by Camera

Open Access The Application of Digital Image Processing Method in Range Finding by Camera Send Orders for Reprints to reprints@benthamscience.ae 60 The Open Automation and Control Systems Journal, 2015, 7, 60-66 Open Access The Application of Digital Image Processing Method in Range Finding

More information

Moving Object Detection for Intelligent Visual Surveillance

Moving Object Detection for Intelligent Visual Surveillance Moving Object Detection for Intelligent Visual Surveillance Ph.D. Candidate: Jae Kyu Suhr Advisor : Prof. Jaihie Kim April 29, 2011 Contents 1 Motivation & Contributions 2 Background Compensation for PTZ

More information

Using Simple Force Feedback Mechanisms as Haptic Visualization Tools.

Using Simple Force Feedback Mechanisms as Haptic Visualization Tools. Using Simple Force Feedback Mechanisms as Haptic Visualization Tools. Anders J Johansson, Joakim Linde Teiresias Research Group (www.bigfoot.com/~teiresias) Abstract Force feedback (FF) is a technology

More information

Lecture 19: Depth Cameras. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011)

Lecture 19: Depth Cameras. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011) Lecture 19: Depth Cameras Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011) Continuing theme: computational photography Cheap cameras capture light, extensive processing produces

More information