A Multiplexing Scheme for Multimodal Teleoperation


BURAK CIZMECI, XIAO XU, RAHUL CHAUDHARI, CHRISTOPH BACHHUBER, NICOLAS ALT and ECKEHARD STEINBACH, Chair of Media Technology, Technical University of Munich, Munich, Germany

This paper proposes an application-layer multiplexing scheme for teleoperation systems with multimodal feedback (video, audio and haptics). The available transmission resources are carefully allocated to avoid delay-jitter for the haptic signal potentially caused by the size and arrival time of the video and audio data. The multiplexing scheme gives high priority to the haptic signal and applies a preemptive-resume scheduling strategy to stream the audio and video data. The proposed approach estimates the available transmission rate in real time and adapts the video bitrate, data throughput and force buffer size accordingly. Furthermore, the proposed scheme detects sudden transmission rate drops and applies congestion control to avoid abrupt delay increases and to converge promptly to the altered transmission rate. The performance of the proposed scheme is measured objectively in terms of end-to-end signal latencies, packet rates and peak signal-to-noise ratio (PSNR) for visual quality. Moreover, peak-delay and convergence-time measurements are carried out to investigate the performance of the congestion control mode of the system.

Categories and Subject Descriptors: C.2.1.g [Communication/Networking and Information Technology]: Network communications; H.1.2 [Models and Principles]: User/Machine Systems - Human Information Processing; H.5.2 [Information Interfaces and Presentation]: User Interfaces - Evaluation/methodology

Additional Key Words and Phrases: Haptics, haptic compression and communication, teleoperation, human-robot interaction over communication networks, multiplexing, congestion control, rate control

ACM Reference Format: B. Cizmeci, X. Xu, R. Chaudhari, C. Bachhuber, N. Alt and E. Steinbach, A Multiplexing Scheme for Multimodal Teleoperation, ACM Trans. Multimedia Comput. Commun. Appl. 0, 0, Article 0 (March 2017), 29 pages.

This work has been supported by the European Research Council under the European Union's Seventh Framework Programme (FP7) / ERC Grant agreement. Corresponding author: burak.cizmeci@tum.de

1. INTRODUCTION

With a teleoperation system, it is possible to immerse ourselves into environments which are remote or inaccessible to human beings. Teleoperation systems are also referred to as telemanipulation systems, considering their manipulative ability in the remote environment.

Fig. 1: Schematic overview of a multimodal teleoperation system.

Besides auditory and visual feedback, the bidirectional exchange of haptic information enables the human operator to interact with remote objects physically. As shown in Fig. 1, a teleoperation system consists of the human system interface (HSI) as the master on the operator (OP) side, the teleoperator (TOP) as the slave system, and a communication link connecting them [Ferrell 1965; Sheridan 1993]. The HSI is composed of a haptic device for position-orientation input and force feedback output, a video display for visual feedback, and headphones for acoustic feedback. The TOP is a robot equipped with a force sensor, a video camera and a microphone. The TOP senses the remote environment and sends the multimodal information to the human OP over the communication network. The quality of service (QoS) provided by the network strongly influences the performance of the teleoperation system. In particular, communication delay, delay-jitter, packet loss and limited transmission capacity jeopardize the system stability and degrade the task performance of the OP and the system transparency. Evidently, teleoperation over long-distance wired and wireless networks challenges the design of a reliable and stable teleoperation system. Especially if the network is shared with other traffic flows, the available transmission rate may fluctuate over time due to unknown side traffic on the communication path. As seen in Fig. 1, the modalities need to be multiplexed into a single stream according to a priority-based transmission rate sharing strategy for an efficient utilization of the network resources. In this paper, we focus on the multiplexing of audio-video and force feedback signals for teleoperation sessions running over communication networks with low transmission rate. The following subsection introduces the delay effect of a low transmission rate.

1.1 Considered scenario and problem statement

If the transmission capacity of the communication path between the TOP and OP is low, the transmission rate budget should be fairly distributed among the audio, video and haptic streams. Compared to audio and video feedback from the TOP, haptic communication between the TOP and OP is much more sensitive to latency. This is because bilateral teleoperation with haptic feedback places the human OP inside a global control loop between the OP and TOP, which requires low-delay exchange of position and force feedback. Even a small communication delay jeopardizes the system stability. Also, the smaller the delay, the better the immersion into the remote environment for successful task completion.

Fig. 2: Transmission hold-up for haptic information caused by a large video segment.

Hence, the transmission of the haptic samples should be highly prioritized. In this paper, our main focus is on the resource-sharing problem for the multimodal streams over communication links with congestion and time-varying transmission rate. Such conditions exist, for instance, in earth-to-space communication for on-orbit teleservicing [Goeller et al. 2012], in wide area networks connected via satellite-internet connections [Pyke et al. 2007], in troposcatter links, which are point-to-point wireless links communicating via microwaves reflected from the troposphere [Dinc and Akan 2015], and in slow internet connections as well. In Fig. 2, we illustrate the resource-sharing and multiplexing challenge for video and haptic packets. In this example, we consider a 1 Mbps constant bitrate (CBR) link between the TOP and OP. Assuming a first-come first-serve (FCFS) scheduling discipline, the packet departure times at the TOP side and the signal arrival times at the OP side are as follows:

(1) At t_V1 = 0 ms, a packet containing the bitstream of video frame V1 is ready for transmission at the TOP and its transmission is scheduled immediately, as the channel is assumed to be idle at time instant 0 ms. The channel service time for the video packet V1 is 32 ms for a video segment size of 4000 bytes and a transmission rate of 1 Mbps.
(2) Slightly after video packet V1, the haptic packet H1 becomes ready for transmission at t_H1 = 1 ms and is scheduled to be transmitted after the video packet V1.
(3) At time t_H2 = 25 ms, another haptic packet H2 is triggered for transmission and is scheduled to be transmitted after the preceding haptic packet H1.
(4) For simple analysis, we assume in this example that the service time of a haptic packet is 1 ms.

In Fig. 2 on the left, the packet arrival times at the OP side are shown for each packet. The haptic packets encounter blocking delays due to the previously scheduled large video packet V1: H1 and H2 are delayed by 32 and 9 ms, respectively. The most critical problem from a teleoperation perspective is the varying delay (jitter) of the haptic samples. In this example, the inter-arrival distance between consecutive haptic samples is t_H2 - t_H1 = 25 ms - 1 ms = 24 ms at the TOP, and this distance shrinks to 1 ms at the OP side (see the arrivals at the OP in Fig. 2). As we expect variable video packet sizes and irregular haptic packet generation, this delay-jitter is unavoidable for the haptic samples and may lead to instability problems for teleoperation systems. Furthermore, the varying delay distorts the regular display of the force samples, which may cause misperception of the remote environment. In the following subsection, we analyze the transmission scheduling problem in more detail.

1.2 Scheduling the transmission of video and haptic signals

To avoid large blocking delays for the haptic packets as shown in Fig. 2, they need to be highly prioritized and their immediate transmission must be ensured. The most suitable queueing discipline for such a scenario is a preemptive-resume scheduling strategy, in which a high-priority packet interrupts the transmission of a lower-priority packet and the transmission of the lower-priority packet is resumed after the high-priority packet has been sent.
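As a numerical check of the Fig. 2 example, the FCFS delays above can be reproduced with the following short Python sketch (illustrative only, not code from the paper); the preemptive-resume alternative discussed next reduces the haptic delays to roughly the 1 ms service time.

    # Illustrative sketch: FCFS service of the Fig. 2 example on a 1 Mbps CBR link.
    def fcfs_delays(packets, rate_bps):
        """packets: list of (name, ready_time_s, size_bits); returns per-packet delay in ms."""
        channel_free = 0.0
        delays = {}
        for name, ready, size in packets:          # FCFS: serve in order of arrival
            start = max(ready, channel_free)       # wait until the channel is idle
            finish = start + size / rate_bps       # service time = size / rate
            channel_free = finish
            delays[name] = (finish - ready) * 1e3  # delay experienced by this packet (ms)
        return delays

    packets = [
        ("V1", 0.000, 4000 * 8),   # 4000-byte video frame, ready at t = 0 ms
        ("H1", 0.001, 125 * 8),    # haptic packet, ready at t = 1 ms (1 ms service time)
        ("H2", 0.025, 125 * 8),    # haptic packet, ready at t = 25 ms
    ]
    print(fcfs_delays(packets, 1e6))   # -> V1: 32 ms, H1: 32 ms, H2: 9 ms
    # Under preemptive-resume (Section 1.2), H1 and H2 see about 1 ms delay each,
    # while V1 finishes at 34 ms.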

To realize the preemptive-resume functionality, in [Walraevens 2005] the author divides the sender capacity into equal discrete-time serving slots, where the service time of a particular packet is determined by the number of used slots. This model is used, for instance, for analyzing the performance of ATM (Asynchronous Transfer Mode) switches, because ATM networks communicate using fixed-length (53-byte) cells and the transmission time of a cell is constant. Using the same perspective, the transmission for a teleoperation system can be discretized at 1 kHz, which is the typical sampling rate of the uncompressed haptic signal. With this approach, we have 1 ms accuracy to control the transmission service time of each packet. For the example in Fig. 2, if we consider fixed-length transmission slots of 1 ms, each slot corresponds to 125 bytes. In this case, the video packet V1 can be transmitted in 32 slots. This scheme provides us with the opportunity to prioritize the important haptic packet H1, which is ready for transmission at t_H1 = 1 ms. The transmission of the video packet is paused and the haptic packet H1 is immediately put on the transmission channel. Once the haptic packet is transmitted, video transmission is resumed from where it was interrupted. Now, the delay for the haptic samples drops to 1 ms and the delay of the video packet increases from 32 to 34 ms, since its transmission is interrupted for 2 ms. Theoretically, the preemptive-resume strategy is the best scheduling option to stream high-priority haptic samples together with video data. However, it has some drawbacks in practice. To achieve 1 ms delay accuracy, the packets are constrained to small fixed-size slots and, consequently, we reach a high packet rate of 1000 packets/second. This causes an inefficient usage of network resources due to the header overhead of the transport protocol. Additionally, the high packet rate may quickly saturate the buffers in the network and some of the packets may consequently be dropped.

1.3 Related work

In the literature, transmission schemes for teleoperation systems can be categorized into transport-layer and application-layer approaches. The transport-layer schemes were developed on top of the existing transmission control protocol (TCP) and user datagram protocol (UDP) [Uchimura et al. 2002; Ping et al. 2005] and their main focus is to optimize the network scheduling targeting a fast signal exchange between two control loops. For instance, in [Cen et al. 2005], the authors considered a scenario for control and sensing over multi-relayed communication links and developed an optimized wireless network protocol for the case in question. In [Cen* et al. 2005], the authors implemented a transport-layer QoS management scheme on top of overlay networks for bilateral teleoperation systems involving multimodal interaction. They applied task dexterity detection to identify the priority of each media stream, and the available transmission rate was allocated based on the weighted priorities. Furthermore, they applied traffic shapers to ensure the allocated rate for the corresponding stream. For the application-layer approaches, previous works focused on the efficient encoding and transmission scheduling of multimodal streams. In [Lee et al.
2006], the authors proposed a transmission scheme for haptic interactions in collaborative virtual environments, where two remote clients interact to perform a collaborative task in a virtual world located on a server. To achieve consistency between the local virtual worlds of the clients and the server, they employed dead-reckoning based haptic event error control. Furthermore, they regulated the high data rate of the haptic communication by applying priority-based haptic filtering and network-adaptive aggregated packet generation to reduce the transmission rate demand of the system. In [Osman et al. 2007], the authors proposed an application layer protocol for haptic networking (ALPHAN) built on top of UDP. Instead of using RTP, ALPHAN introduces its own headers specific to the haptic interaction, which reduces overhead, an important property since ALPHAN transmits packets at a rate of 1 kHz. Additionally, a specific object of an application can be prioritized based on a buffering scheme. In [Cha et al. 2007], instead of a new protocol design, the authors

multiplexed haptic content into MPEG-4 BIFS (binary format for scenes) and developed a multimodal broadcasting scheme for applications involving a passive sense of touch. [Eid et al. 2011] made a comprehensive study of haptic-audio-visual data communication protocols and proposed an extended version of the ALPHAN protocol [Osman et al. 2007] as an adaptive application-layer statistical multiplexing scheme (ADMUX) for a multimodal tele-immersion system involving haptic interaction. In their approach, the scheme allocates resources based on a statistical switching mechanism between modalities until the delay constraint of a signal is close to being violated. Furthermore, they reported that the QoS requirements are not always satisfied; therefore, the scheme needs to employ delay and jitter compensation modules to ensure stability if it is applied to a bilateral teleoperation system. On the other hand, they mentioned that advanced data reduction and rate control techniques need to be employed for the audio, video and haptic signals to use the available transmission resources efficiently. In [Isomura et al. 2011; Kaede et al. 2015], the authors proposed QoE enhancement schemes with intra-stream synchronization for bidirectional haptic interactions in a tele-teaching tool. In their application, the audio, video and haptic streams flow in both directions between the instructor and manipulator nodes. The application was tested over a 10 Mbps CBR link loaded with media streams having average bitrates of 5.8 to 6.2 Mbps, and they applied application-layer transmission schemes with media-adaptive playout buffering and skipping or buffering of haptic samples. According to the user experience tests, media-adaptive buffering with haptic sample skipping performed best subjectively. However, this work lacks haptic data reduction methods, which can provide a satisfactory user experience at a reduced amount of data. In [Yashiro et al. 2013; Yamamoto et al. 2014], the authors addressed the transmission capacity problem when force feedback and video frames are transmitted together over CBR links with 4-10 Mbps transmission capacity, and they employed an end-to-end flow controller to adapt the packet rate and bitrate of the visual-haptic streams. They showed that video frames generated by a JPEG encoder [ITU-T 1993] block the haptic packets and cause additional queueing delays. The authors apply adaptive selection of the bitrate and frame rate of the video stream and of the packet rate of the haptic samples, based on a transmission rate estimation scheme and a queueing observer which can be considered a congestion detector. Although this approach has similarities with the methods discussed in this paper, the scheme does not employ state-of-the-art haptic data rate reduction techniques, and the visual communication system applies primitive coding approaches such as JPEG and frame rate reduction down to 3 fps, which causes high delay and low visual quality for the observer. In this paper, we integrate a transmission rate estimation scheme called TIBET (Time Intervals based Bandwidth Estimation Technique) [Capone et al. 2004] into our multimodal teleoperation system. In the literature, several transmission rate estimation schemes [Brakmo and Peterson 1995; Stoica et al. 1998; Casetti et al. 2002] were proposed for congestion control, mainly focusing on TCP/IP based applications.
The transmission rate estimation is performed using the time-stamps and the length of the transmitted data packets. The estimation accuracy improves as the packet lengths get closer to the maximum transmission unit (MTU) size of the network and the time-stamp resolution increases to the maximum possible clock frequency. However, the data sizes can be irregular over time based on the characteristics of the media traffic and it is challenging to reach high clock frequencies due to processing limitations. Thus, the literature proposes filtering and estimation techniques to perform transmission rate estimation for noisy observations. TIBET [Capone et al. 2004] is an improved version of the TCP Westwood algorithm [Casetti et al. 2002]. Instead of using the individual packet transmission measurements, TIBET uses the average packet lengths and the average transmission times separately to estimate the transmission rate, which improves the precision of the estimation.
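Stated compactly (a paraphrase of this idea, not a formula quoted from [Capone et al. 2004], with avg[.] standing for the low-pass filtered averages):

    B_Westwood ≈ avg[ L_k / T_k ],        B_TIBET ≈ avg[ L_k ] / avg[ T_k ]

where L_k is the length of the k-th acknowledged packet and T_k is the time interval between consecutive acknowledgments. Averaging lengths and intervals separately, rather than averaging the noisy per-packet ratios, is what yields the unbiased and more precise estimate.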

1.4 Previous work and contributions of the paper

In [Cizmeci et al. 2014], we proposed a delay-constrained resource allocation solution, visual-haptic multiplexing, for CBR links with known transmission rates, and the aforementioned delay-jitter problem was addressed by applying the preemptive-resume strategy with a buffered approach for efficient usage of the available transmission rate. We briefly explain the core idea of our previous work as follows. Let d(n) be the target packet delay for packet n, as a function of the transmission rate C of the link and the sizes P_i of the backlogged packets in the link, as shown in Eq. (1):

    d(n) = ( sum_{i=0}^{n-1} P_i + P_n ) / C    (1)

where P_i is the size of the i-th backlogged packet already in the channel and P_n is the size of the current packet waiting to be injected into the channel. Assuming C to be known, we can adjust the size of the current packet P_n in order to reach the target delay constraint d(n). If this adjustment is performed for every packet prior to transmission and every packet waits for its preceding packet to be transmitted, the accumulated size of the backlogged packets can be kept close to zero, sum_{i=0}^{n-1} P_i -> 0. With this strategy, the system can guarantee the delay for every packet. Since the haptic packets are generated irregularly, the visual-haptic multiplexing scheme in [Cizmeci et al. 2014] uses a fixed-size buffer for the force samples to observe the transmit and not-transmit states of the haptic data reduction scheme (explained in Section 2.1). With the multiplexing buffer, the scheme is able to foresee the upcoming transmission slots and determines the preemption and resumption times at the application layer. Additionally, if consecutive non-transmit slots exist in the multiplexing buffer, the transmission slots can be merged to reduce the packet rate and transport header usage.

This paper extends the work in [Cizmeci et al. 2014] and proposes an application-layer multiplexing scheme for time-delayed teleoperation systems involving multimodal interaction. Contrary to the related work in the literature, we benefit from the advanced video codec H.264 [ITU-T 2005], the audio codec CELT [Valin et al. 2010] with rate-shaping capabilities, and state-of-the-art haptic data reduction and delay-compensating control methods [Hinterseer et al. 2008; Xu et al. 2015]. Furthermore, we extend the multiplexing scheme with congestion control features. In the following items, we state the specific extensions of this paper over the previous work in [Cizmeci et al. 2014]:

(1) Transmission rate adaptation: The new multiplexing scheme estimates the transmission capacity of the link using the acknowledged packets coming from the demultiplexer side. We adopt the TCP-based transmission rate estimation algorithm described in [Capone et al. 2004] for real-time UDP-based transmission. The estimation scheme in [Capone et al. 2004] improves the well-known TCP Westwood algorithm [Casetti et al. 2002] by reformulating its filter, yielding an unbiased estimate of the transmission rate. With the transmission rate estimation, the multiplexing scheme is able to automatically set the video bitrate, the multiplexing throughput rate and the force sample buffer size to provide a guaranteed end-to-end delay for the audio, video and force signals.

(2) Congestion control: The transmission rate estimation algorithm in [Capone et al. 2004] is very sensitive to network capacity changes.
The round trip time (RTT) impairs the estimation and leads to underestimation of the available transmission rate during congestion. To quickly converge to the true network capacity, the estimated transmission rate is tracked over time and the scheme detects sudden congestion events to adapt the system parameters to the current network conditions. During congestion events, the scheme switches to its compensation mode to converge smoothly to the current transmission rate. The proposed congestion control extension of the transmission rate estimation algorithm in [Capone et al. 2004] is discussed in Section 2.5.3.

(3) Channel-adaptive video encoding: Accurate frame-level bitrate control is needed at the video encoder to achieve low-delay visual feedback. In this work, we employ a video encoder which uses the ρ-domain rate control approach proposed in [Gao et al. 2015] and which is able to generate CBR video streams. The proposed multiplexer is able to communicate with the video encoder to update the video bitrate and to switch to intra or inter mode for each frame based on the changing network conditions. In [Demircin et al. 2008], the authors developed a real-time streaming system for digital video broadcasting over time-varying wireless networks. Their scheme applies both transrating and scalable video coding techniques to transmit video streams with flexible bitrates over a wireless link with time-varying capacity. They proposed two concepts: single- and multi-frame delay-constrained rate adaptation. Inspired by the single-frame delay-constrained rate adaptation solution, we adopt the same approach for low-delay streaming of teleoperation scenes.

(4) Audio modality: We add the audio modality to obtain acoustic feedback such as collision and dragging sounds from the remote environment. In the new scheme, which is detailed in Section 2, the audio shares the transmission rate with the video stream based on a constant weight determined by the bitrate settings of the video and audio encoders.

(5) Time-delay compensation control architecture: Considering teleoperation between geographically distant places, it is not possible to avoid signal propagation delay in the networks. To provide stable and safe haptic interaction, it is necessary to employ a proper time-delay compensation control architecture in the system. In our system, we employ the time-domain passivity control approach cascaded with a perceptual haptic data reduction scheme [Xu et al. 2015]. The control architecture allows us to test the system for large RTTs, which challenge the transmission rate estimation and adaptation part of the system. Since this part is related to control engineering and does not affect the output traffic of the haptic data reduction scheme [Hinterseer et al. 2008], we refer the interested reader to the corresponding paper [Xu et al. 2015] and the demo video [TDPA DEMO video] for details about the control scheme.

The following Section 2 introduces the building blocks of the multiplexing scheme and explains the proposed algorithm. In Section 3, we give a detailed description of our setup and present the performance of the system with objective measurements. Finally, Section 4 concludes the paper.

2. MULTIPLEXING SCHEME FOR MULTIMODAL TELEOPERATION

In Section 1.1, we defined the limited transmission rate problem and the challenge of resource allocation between the different modalities. This section extends the visual-haptic multiplexing scheme in [Cizmeci et al. 2014] to a multimodal version which also handles the estimation of variations in the available transmission rate. Fig. 3 illustrates the media flow from the TOP to the OP. The video, audio and force signals are captured, encoded and put into the media data queues based on the FCFS principle. The multiplexer (MUX) can directly access the queues to forward the data to the channel. In the following subsections, we first introduce the signal encoding blocks of each modality and then describe the details of the multiplexing algorithm.
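Before these blocks are described, the delay-constrained packet sizing rule of Eq. (1), on which the multiplexer relies, can be sketched as follows (a minimal Python sketch under the assumption of a known link rate C; not code from the paper):

    # Illustrative sketch of the packet-sizing rule in Eq. (1); not the paper's code.
    # Given a known link rate C and a per-packet delay budget d, the size of the next
    # packet is chosen so that the backlog plus the new packet can be served within d.
    def max_packet_size(backlog_bits, delay_budget_s, rate_bps):
        """Largest P_n (in bits) such that (backlog + P_n) / C <= d."""
        return max(0.0, delay_budget_s * rate_bps - backlog_bits)

    # Example: 1 Mbps link, 5 ms delay budget, empty backlog -> up to 5000 bits per packet.
    print(max_packet_size(backlog_bits=0.0, delay_budget_s=0.005, rate_bps=1e6))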
2.1 Haptic data reduction

The psychophysics literature has shown that human haptic perception of forces, velocities, pressures, etc. can be modeled by a mathematical relationship between the physical intensity of a stimulus and its phenomenologically perceived intensity. This relationship has become known as Weber's Law of Just Noticeable Differences (JND): ΔI = k · I [Weber 1851], where I is the reference stimulus and ΔI is the so-called Difference Threshold (or JND). It indicates the smallest change of the stimulus I which can be detected as often as it cannot be. The constant k (called the deadband parameter from now on) denotes the linear relationship between ΔI and the initial stimulus I. Inspired by this relationship, Hinterseer, Hirche et al. proposed a sample-based perceptual data reduction scheme for haptic signals [Hinterseer et al. 2005; Hirche et al. 2005; Hinterseer et al. 2008].

Fig. 3: Proposed multimodal multiplexing on the feedback channel from the TOP to the OP.

In [Kuschel et al. 2006], the authors further investigated the performance of existing frame-based and sample-based haptic data reduction schemes while ensuring passivity conditions and showed that sample-based data reduction methods achieve better immersion performance than frame-based approaches. Hence, we employ a sample-based perceptual data reduction scheme in our system. For the force feedback channel, the principle of perceptual deadband (PD) based data reduction is illustrated in Fig. 4a and 4b. According to Weber's Law, unsubstantial changes in the force feedback signal (shown as empty circles in Fig. 4a) are considered imperceptible and these haptic samples are dropped. When the difference between the most recently sent sample and the current signal value violates the human perception threshold (outside the gray zones), the current value is sent as a new update and the deadband threshold is also updated with this recent sample. At the OP side, the zero-order-hold (ZOH) data reconstruction block interpolates the irregularly received signal samples back to the original sampling rate of 1 kHz, which is the minimum rate requirement for the local control loops [Colgate and Brown 1994]. For haptic perception, the DB parameter k is constant and has been found to lie within the range of 5% to 15%, depending on the type of stimulus and the limb/joint where it is applied [Burdea 1996]. Furthermore, Hinterseer et al. performed subjective experiments over a wide range of DB parameters and reported that for k = 10%, average sample rate reductions of up to 90% are achievable with satisfactory subjective ratings [Hinterseer et al. 2008]. For stable teleoperation between geographically distant TOPs and OPs, the time-domain passivity control architecture [Ryu et al. 2010] is applied after the perceptual deadband based haptic data reduction [Xu et al. 2015]. With this extension, the teleoperation system can be tested under more realistic conditions including significant communication delay.

Latency of haptic feedback. In a real teleoperation system, a force sensor is necessary to acquire the true force signal sensed at the interaction between the robot end-effector and the remote objects. At the TOP side, we employ a JR3 6-DoF force-torque sensor [JR3], and at the OP side we have a 6-DoF haptic device, Omega 6, from Force Dimension [FORCE DIMENSION]. The overall latency of the force signal can be defined as:

    t_delay = t_DAQ + t_mux+net + t_display    (2)

where t_mux+net is the multiplexing and network delay and t_DAQ is the delay introduced by the data acquisition (DAQ) card. The raw signal from the sensor is very noisy. To reduce the noise level, the data acquisition card has on-board DSP filters. According to the JR3 documentation, the group delay of this filter is t_DAQ = 1 / f_cutoff. In our teleoperation setup, we found that a filter with a cut-off frequency of about 31 Hz is sufficient, and in that case the acquisition delay is approximately 32 ms. t_display is the delay that occurs between the computer and the haptic device; from the device API, it is measured as 1 ms. The overall delay on the force feedback can thus be written as t_delay = t_mux+net + 33 ms for our experimental setup.

2.2 CELT audio encoder

To achieve low-delay audio transmission for our teleoperation system, we employ the audio codec CELT [Valin et al. 2010], which introduces a very low algorithmic delay. The total latency encountered on the acoustic feedback can be written as follows:

    t_delay = t_env + t_render + t_encode + t_mux+net + t_decoder + t_display    (3)

t_env is the delay introduced by the sound propagation in the environment and can vary from 5 to 20 ms depending on the distance between the microphone and the source. t_render is the processing delay of the soundcard. To read and write data on the soundcard, we employ the audio I/O library PortAudio [Portaudio]; its documentation reports t_render and t_display delays of 12.7 ms each. In order to reach the lowest encoding and decoding latency of 5 ms, the frame buffer size is set to 240 samples at 48 kHz, which leads to 200 encoded frames per second. The encoder bitrate is set to 64 kbps in CBR mode, so that each frame has a fixed size of 40 bytes. If we assume that the microphone is placed close to the event, which imposes approximately 5 ms of propagation delay, summing these components (5 + 12.7 + 5 + 5 + 12.7 ms) gives an expected delay of t_delay = t_mux+net + 40.4 ms.

2.3 H.264 video encoder

In our teleoperation system, we employ an open-source implementation of the H.264/AVC video coding standard called x264 [Merritt and Vanam] and implemented the rate control (RC) approach proposed in [Gao et al. 2015] to avoid buffer overflow delays due to the limited communication transmission rate. The approach is based on the ρ-domain concept originally proposed in [He and Mitra 2002]. It exploits the linear relationship between the video bitrate and ρ, the ratio of zero coefficients in a frame after applying the discrete cosine transform (DCT). In order to apply this RC scheme in real time with good quality, in [Gao et al. 2015] we made further improvements and accelerations over [Zhang and Steinbach 2011] and developed a precise and fast frame-level RC scheme.

Fig. 4: Perceptual deadband based haptic data reduction with ZOH reconstruction (adopted from [Steinbach et al. 2011]): (a) haptic sample selection at the TOP side; (b) haptic data reconstruction (ZOH) at the OP side.
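As an illustration of the perceptual deadband principle shown in Fig. 4, the transmit decision and the ZOH reconstruction for a scalar force signal can be sketched as follows (a simplified Python sketch, not the paper's implementation, and without the time-domain passivity layer):

    # Simplified sketch of perceptual deadband (PD) data reduction with ZOH reconstruction.
    # Scalar force signal, deadband parameter k = 10% (cf. [Hinterseer et al. 2008]).
    def pd_encode(samples, k=0.10):
        """Return (index, value) pairs of the samples selected for transmission."""
        sent, last = [], None
        for i, f in enumerate(samples):
            # Transmit if this is the first sample or the change exceeds the threshold k*|last|.
            if last is None or abs(f - last) > k * abs(last):
                sent.append((i, f))
                last = f                      # the deadband is re-centered on the sent value
        return sent

    def zoh_decode(sent, n):
        """Reconstruct n samples at the original 1 kHz rate by zero-order hold."""
        updates, out, value = dict(sent), [], 0.0
        for i in range(n):
            value = updates.get(i, value)     # update only when a new sample has arrived
            out.append(value)
        return out

    force = [1.00, 1.02, 1.05, 1.20, 1.21, 1.35, 1.36, 1.36]   # example 1 kHz samples
    updates = pd_encode(force)                # -> [(0, 1.0), (3, 1.2), (5, 1.35)]
    print(updates, zoh_decode(updates, len(force)))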

Latency of visual feedback. The glass-to-glass (camera-lens-to-display) latency of the video signal can be analyzed as follows:

    t_delay = t_camera + t_encoder + t_mux+net + t_decoder + t_display    (4)

where t_camera is the capturing delay, t_encoder is the encoding delay, t_mux+net is the multiplexing and network delay, t_decoder is the decoding delay and t_display is the delay introduced by the monitor. The rendering and display delays are hardware-dependent components and can be reduced by designing custom hardware. For research purposes, we use commodity hardware, i.e., off-the-shelf computers and the available camera and display systems, which introduce a considerable amount of delay. We call the sum of the capturing and display delays the intrinsic delay, t_intrinsic = t_camera + t_display. Following the approach taken in [Bachhuber and Steinbach 2016], we measure this delay as follows: a blinking LED is placed in front of the camera and a photodiode is attached to the screen at the video display window. The time difference between the LED trigger and the photodiode reaction is recorded with a microcontroller and gives the intrinsic delay t_intrinsic described above. In our system, we use a GigE camera and a 144 Hz gaming display, and the mean intrinsic delay is measured as 60 ms for 720p high-definition (HD) video at 25 fps. Regarding the decoding delay t_decoder, current video decoders are very fast on commodity hardware and can decode a 720p HD frame in less than a millisecond. The encoding delay is the processing time of each frame during compression and RC; it is measured as t_encoder = 30 ms for a 720p video stream at 25 fps. The multiplexing and network delay t_mux+net depends on the frame size and the currently available video bitrate, and it can be controlled with a single-frame delay constraint [Demircin et al. 2008] as follows:

    FrameSize / VideoTR <= TargetDelay    (5)

where TargetDelay is the desired constraint on the delay t_mux+net and VideoTR is the currently available transmission rate for video frames. Using the constraint in Eq. (5), the frame size is computed and the bitrate of each frame is adjusted for the current video target rate. To avoid queueing delays for the video stream and to use the available transmission rate efficiently, the delay constraint TargetDelay needs to be set close to the frame period. For a 25 fps video stream, the frame period is 1/25 s = 40 ms. Since the RC might slightly deviate from the desired frame size, TargetDelay is set to 35 ms. Using this approach, we expect the system to keep the delay caused by transmission rate limitations at about 35 ms for all levels of transmission rate budget. Summing the measured delays, the overall delay of the visual feedback is around 125 ms, excluding the one-way signal propagation delay between the OP and TOP.

2.4 Multiplexing algorithm

The haptic data reduction block uses the PD-based force data reduction integrated with the time-domain passivity control scheme [Xu et al. 2015]. The constant Weber factor k determines the transmit/not-transmit states of the force signal. If the current force signal value exceeds the perceptual threshold defined by the previously sent sample, the encoder marks the current force sample for transmission and enqueues it into the force buffer. The system buffers a few force samples (force sample buffer in Fig. 3) to observe the transmit states of the force signal (see the sketch below).
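A minimal sketch of this buffer and of the free-slot counting that the multiplexer performs (cf. Algorithm 1 below); the function name and the example buffer state are illustrative assumptions, not taken from the paper's code:

    # Illustrative sketch: the force sample buffer holds (value, transmit_flag) pairs;
    # the multiplexer counts free 1 ms slots from the head up to the next sample
    # that is marked for transmission ('F').
    from collections import deque

    def count_available_slots(force_buffer):
        """Number of leading not-transmit slots before the next 'F' sample (or the buffer end)."""
        slots = 0
        for _value, transmit in force_buffer:
            if transmit:          # a sample marked 'F' must leave within its delay budget
                break
            slots += 1
        return slots

    # Example buffer state for T = 5 ms: head on the left, tail on the right.
    buf = deque([(0.0, False), (0.0, False), (0.0, False), (1.2, True), (0.0, False)])
    print(count_available_slots(buf))   # -> 3 free slots before the next forced transmission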
The force encoder and MUX blocks are triggered in synchrony with a clock rate of 1 kHz. When the clock ticks, the force encoder pushes a sample to the force buffer tail and the multiplexer dequeues a force sample from the force buffer head. Depending on the transmission flag of the force sample, it is either discarded or sent. The samples tagged as dark gray blocks (F) need to be transmitted and the ones tagged as light gray blocks (0) need not be

transmitted. According to the state of the system, the multiplexer can generate 7 types of packets: force (F), video (V), audio (A), audio-video (AV), audio-force (AF), video-force (VF) and audio-video-force (AVF) packets. The packet types are identified by the header information, MUX Header, which fits into 1 byte of storage. Additionally, the multiplexer adds time-stamps to measure the latencies, as well as packet and sample identification numbers to label the streams for correct decoding and sequential displaying. Details of the header and packet structures are given in Appendix A. During multiplexing, the maximum size of each packet is limited to the maximum transmission unit (MTU) size of the Ethernet protocol, which is 1500 bytes. If the packet size at the application layer exceeds the MTU size, the packet is fragmented at the lower layers. Fragmentation adds additional overhead to the transmission and increases the packet rate, and if one of the fragments is lost during transmission, the original packet cannot be recovered from the remaining fragments, which leads to the loss of even more data.

Algorithm 1 gives the pseudo code of the multiplexing scheme.

ALGORITHM 1: Multimodal multiplexing algorithm
 1  if Slots == 0 then
 2      Slots = CheckAvailableSlots(ForceBuffer);
 3      if Force.Transmit == 1 then
 4          PacketType = CheckMultimediaAvailability(AudioBytes, VideoBytes);   // F, AF, VF or AVF packet
            SendPacket(PacketType, &Slots);   // & indicates that the function can modify Slots
 5      end
 6      if Force.Transmit == 0 then
 7          PacketType = CheckMultimediaAvailability(AudioBytes, VideoBytes);   // A, V or AV packet
            SendPacket(PacketType, &Slots);
 8      end
 9  end
10  else
11      if Slots > 0 then
12          Slots = Slots - 1;   // wait for the previous transmission to finish
13      end
14  end

The multiplexer runs as a thread with a clock rate of 1 kHz. The first condition checks whether the channel is busy with a transmission or not: Slots == 0 indicates that the channel is ready to serve the transmission of new data and Slots > 0 indicates that the channel is still busy with the transmission of the previous packet. When the channel is busy, the multiplexer cannot push a new packet into the channel and waits until the channel is ready for a new transmission. When the channel returns to the ready state, the multiplexer goes over the queued samples in the force buffer and counts the free slots tagged as 0 until hitting a planned force transmission tagged as F or reaching the tail of the force buffer. The multiplexer then inspects the data in the audio and video queues, decides which type of packet to send, and may update the utilized number of Slots when the size of the multiplexed data is smaller than the available rate resources. In Fig. 5, the multiplexing algorithm is illustrated for the buffer state shown in Fig. 3. This example assumes a force buffer size of T = 5 ms, but the same method can be applied to different buffer sizes. For the sake of simplicity and without loss of generality, we assume that audio and video information is always available for transmission; the same strategy applies to all cases of data availability. When both audio and video data are ready for transmission, the multiplexer distributes the resources fairly based on the encoder rate settings. This decision is made with a weighting function:

    w_V = R_V / (R_A + R_V)    (6)

where R_A and R_V are the constant bitrate settings of the audio and video encoders, respectively, and w_V is the fraction of resources reserved for video when both audio and video data demand transmission resources.

Fig. 5: Step-by-step illustration of the multiplexing process using the force buffer state shown in Fig. 3.

In Fig. 5, the rectangular blocks on the right show the current state of the force buffer. As in Fig. 3, the head of the force buffer is on the left and the tail is on the right. Each square block on the right represents a channel resource bucket for a 1 ms time period. For illustration purposes, we assume a 1 Mbps CBR channel, which has 1 ms transmission slots of 1000 bits each. In Fig. 5, we demonstrate how the multiplexing works clock tick by clock tick. At clock tick (i), the channel is free to transmit data, which allows the multiplexer to push new data onto the transmission link. The multiplexer checks the force buffer starting from the head until reaching a force sample marked for transmission. In this example, the multiplexer encounters the first valid force sample at the head of the force buffer. This force sample has already been delayed by 4 ms in the buffer, which means that an additional 1 ms delay is tolerable to meet the maximum force delay constraint of 5 ms. To achieve this, we can transmit a 1000-bit packet which carries audio, video and force data, as well as the relevant header data, to the OP. At clock tick (ii), the multiplexer waits for the transmission of the previous packet and a new force transmission state arrives at the tail of the force buffer. At clock tick (iii), the channel is free to transmit a new packet and the multiplexer checks the force buffer for upcoming force samples which need to be transmitted. At this time, the next F sample is located 4 slots from the head of the force buffer, so a 4 ms transmission delay is tolerable to meet the force delay constraint of 5 ms. Hence, 4 resource buckets are available to form an audio-video-force packet fitting into 4 x 1000 = 4000 bits. The multiplexer now has to wait 4 clock ticks before the next packet transmission. During this waiting period, new transmission states arrive at the force buffer (see clock tick (iv)). The force sample is already delayed 1 ms by the force buffer and the packet transmission takes an additional 4 ms; in total, the force sample is subjected to 5 ms delay, which also meets the target delay constraint on the force signal.

2.5 Real-time transmission rate estimation and adaptation of system parameters

For teleoperation sessions running over the internet or over a wireless network shared with other users, the available transmission rate fluctuates in accordance with the side traffic in the network. Especially in case of network congestion, the transmission rate for the teleoperation session may suddenly drop and cause dangerous situations during the manipulation.

To keep the system alert for such cases, the transmission rate of the network needs to be tracked continuously, and the throughput of the system and the bitrate settings of the video encoder must be adapted to the current network conditions. To address this issue, in this section we close the loop between the multiplexer and the demultiplexer to predict the available transmission rate of the forward channel (from the TOP to the OP) and to adapt the throughput of the multiplexer, the force buffer size, and the video encoder bitrate.

2.5.1 Transmission rate estimation. In our system, we adopt the TIBET [Capone et al. 2004] transmission rate estimation algorithm, as shown in Algorithm 2. For every packet, the multiplexer adds a packet time-stamp (PTS, see the packet structures in Table III in Appendix A) and the demultiplexer immediately sends the time-stamp of the corresponding packet back to the multiplexer side over the feedback channel for transmission rate estimation (see Fig. 3). In lines 2 and 3 of Algorithm 2, the length and transmission time of the most recently acknowledged packet are computed. In lines 4 and 5, the average packet length and the average sample interval are computed using first-order IIR low-pass filters. α is the pole of the low-pass filters, with 0 <= α <= 1.0, and it has a critical effect on the estimation performance. As α gets smaller, the algorithm becomes highly sensitive to changes in the available transmission rate, but this causes oscillations in the estimation results. Conversely, as α approaches 1, the algorithm produces stable estimates but becomes less sensitive to changes in the network. In our setup, we experimentally identified a value of α that balances responsiveness to changes in the network against estimation accuracy. The average measured transmission rate TR_avg(i) is computed as follows:

    TR_avg(i) = AvgPacLen(i) / AvgSampTime(i)    (7)

The oscillations of TR_avg(i) obtained with Eq. (7) are still too high, and further processing is required to reach a smooth estimate. In lines 6 and 7, we therefore perform adaptive filtering of the instantaneous measurements TR_avg(i) to exploit the oscillation effect. The adaptive filter coefficient is an exponential function (see line 7 of Algorithm 2) which adjusts the weight of the estimate based on the time interval between adjacent estimations. If the time difference between two estimations increases, the filter relies more on the current measurement. Especially during congestion, recent measurements give reliable estimates, so they should be weighted strongly; this is provided by the adaptive filter weight together with the increasing estimation intervals T_est(i) caused by congestion. Conversely, the filter increases the estimated transmission rate only conservatively when the transmission capacity improves.

ALGORITHM 2: Transmission rate estimation algorithm
 1  if AckReceived == true then
 2      CurrSampLen(i) = BytesReceived(i) * 8;
 3      CurrSampTime(i) = CurrentTime - PacketTimestamp(i) - RTT;
 4      AvgPacLen(i) = α * AvgPacLen(i-1) + (1 - α) * CurrSampLen(i);
 5      AvgSampTime(i) = α * AvgSampTime(i-1) + (1 - α) * CurrSampTime(i);
 6      T_est(i) = CurrentTime - LastEstimationTime;
 7      TR_est(i) = (1 - exp(-T_est(i)/T_const)) * AvgPacLen(i)/AvgSampTime(i) + exp(-T_est(i)/T_const) * TR_est(i-1);
 8      LastEstimationTime = CurrentTime;
 9      AvgPacLen(i-1) = AvgPacLen(i);
10      TR_est(i-1) = TR_est(i);
11      AvgSampTime(i-1) = AvgSampTime(i);
12      AckReceived = false;
13      NewTREstimation = true;
14  end
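A compact, runnable sketch of this estimator is given below (illustrative only; the class name and the values chosen for α and T_const are assumptions, not parameters reported in the paper):

    import math

    class TibetEstimator:
        """Illustrative TIBET-style rate estimator (cf. Algorithm 2); not the paper's code."""
        def __init__(self, alpha=0.99, t_const=0.5):
            self.alpha = alpha          # pole of the IIR low-pass filters (assumed value)
            self.t_const = t_const      # adaptive filter time constant in seconds (assumed value)
            self.avg_len = 0.0          # filtered packet length (bits)
            self.avg_time = 1e-3        # filtered inter-acknowledgment time (s)
            self.tr_est = 0.0           # smoothed transmission rate estimate (bit/s)
            self.last_est_time = 0.0

        def on_ack(self, now, bytes_received, pkt_timestamp, rtt):
            sample_len = bytes_received * 8.0
            sample_time = max(now - pkt_timestamp - rtt, 1e-6)
            # First-order IIR averaging of lengths and intervals, separately (TIBET-style).
            self.avg_len = self.alpha * self.avg_len + (1 - self.alpha) * sample_len
            self.avg_time = self.alpha * self.avg_time + (1 - self.alpha) * sample_time
            # Adaptive exponential weighting: long gaps between estimates favor the new measurement.
            t_est = now - self.last_est_time
            w = 1.0 - math.exp(-t_est / self.t_const)
            self.tr_est = w * (self.avg_len / self.avg_time) + (1 - w) * self.tr_est
            self.last_est_time = now
            return self.tr_est

    # e.g. est = TibetEstimator(); est.on_ack(now=0.050, bytes_received=1500, pkt_timestamp=0.030, rtt=0.010)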

Starting from line 8 of Algorithm 2, the variables are updated for the next transmission rate estimation round.

2.5.2 Bitrate adaptation. Whenever a new transmission rate estimate is available, the bitrate adaptation algorithm updates the multiplexing throughput rate, the video bitrate, the multiplexing buffer size and the audio-video weighting factor. Algorithm 3 shows the detailed operations of this stage. In lines 3, 4 and 5, the multiplexing buffer size is updated in line with the new transmission rate estimate and the maximum transmission unit (MTU) size of the network. The multiplexer needs to limit the packet lengths to the MTU size (1500 bytes) of the transmission protocol, because packets larger than the MTU will be fragmented, which increases the packet rate and the header usage over the network. With this limitation, the maximum multiplexing buffer size is bounded analytically by the MTU size and the current transmission rate estimate, as shown in Eq. (8):

    BufferSize = MTUsize (bits) / TR (bits/s) = 12000 bits / 1200000 bits/s = 10 ms    (8)

This numerical example shows the multiplexer buffer size selection for a 1.2 Mbps link. As the multiplexing buffer size increases, the packet rate (packets/second) of the system decreases, because the multiplexer can fill more audio-video data into the free transmission slots. To achieve the minimum possible packet rate, the multiplexing buffer should therefore be set to its maximum possible size. However, it is difficult to update the buffer size instantly because it is used in the communication loop, and frequent updates may cause jitter on the force signal. For these reasons, the multiplexer updates the buffer size in multiples of 5 ms, as seen in line 4 of Algorithm 3. In line 6, the bitrate of the video encoder is updated according to the current transmission rate estimate using a linear model. The model parameters are obtained with the separate experimental setup shown in Fig. 6. In this experiment, we run the system over known CBR channels with transmission rates from 800 to 2100 kbps in steps of 100 kbps, using pre-recorded teleoperation sessions. As seen in Fig. 6a, we consider the channel and the multiplexer in the dashed box as a combined network bottleneck and the audio and force signals as the incoming side traffic. We apply transmission rate estimation at the video input of the multiplexer using Algorithm 2. For this estimation, we treat every video frame as a single packet, and the demultiplexer acknowledges a packet when the transmission of a frame is completed. The estimated transmission rate for each channel condition is recorded and averaged. In Fig. 6b, the dots show the relation between the transmission capacity of the channel and the estimated average video bitrate that can be pushed into the system. As observed in the figure, there is a linear relationship between the channel capacity and the video bitrate. A linear model is fitted and used in the bitrate adaptation algorithm (line 6 of Algorithm 3).
The model parameters β and S are found as 0.87 and 89 kbps, respectively.

ALGORITHM 3: Bitrate adaptation algorithm
 1  if NewTREstimation == true then
 2      MultiplexerThroughput = TR_est(i);
 3      OriginalBufferSize = round(MTUsize * 8 / MultiplexerThroughput);
 4      NewBufferSize = ceil(OriginalBufferSize / 5) * 5;
 5      ApplyNewMuxBuffer(NewBufferSize);
 6      VideoBitrate = (β * TR_est(i) - S) * VideoDelayConstraint * framerate;
 7      w_v = VideoBitrate / (VideoBitrate + AudioBitrate);
 8      NewTREstimation = false;
 9  end
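A minimal sketch of this adaptation step (illustrative only; the unit handling and helper structure are assumptions, and ApplyNewMuxBuffer is replaced by a simple return value):

    import math

    # Illustrative sketch of the bitrate adaptation step (cf. Algorithm 3); not the paper's code.
    MTU_BITS = 1500 * 8              # Ethernet MTU in bits
    BETA, S_KBPS = 0.87, 89.0        # linear video bitrate model fitted in Fig. 6b
    VIDEO_DELAY_CONSTRAINT = 0.035   # single-frame delay constraint in seconds (Section 2.3)
    FRAME_RATE = 25.0                # video frame rate (fps)
    AUDIO_BITRATE_KBPS = 64.0        # CELT CBR setting (Section 2.2)

    def adapt(tr_est_kbps):
        """Return (buffer size in ms, video bitrate in kbps, video weight w_v)."""
        # Buffer size: time to serve one MTU at the estimated rate, rounded up to a multiple of 5 ms.
        buffer_ms = MTU_BITS / (tr_est_kbps * 1000.0) * 1000.0
        buffer_ms = math.ceil(buffer_ms / 5.0) * 5.0
        # Video bitrate from the linear channel model, scaled by the per-frame delay budget.
        video_kbps = (BETA * tr_est_kbps - S_KBPS) * VIDEO_DELAY_CONSTRAINT * FRAME_RATE
        w_v = video_kbps / (video_kbps + AUDIO_BITRATE_KBPS)
        return buffer_ms, video_kbps, w_v

    print(adapt(1200.0))   # -> (10.0, about 836 kbps, about 0.93) for a 1.2 Mbps estimate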

Fig. 6: The test-bed used to derive the video bitrate update model for varying transmission rates: (a) the setup to determine the video bitrate; (b) the video bitrate model for CBR links (kbps).

Furthermore, a single-frame delay constraint as described in Section 2.3 is applied to avoid queueing delays at the multiplexer input. Thus, the channel and the multiplexer together finish serving each frame before the subsequent frame arrives. Finally, in line 7 of Algorithm 3, the resource allocation weighting factor between audio and video from Eq. (6) is updated.

2.5.3 Congestion control. Sudden congestion events are unexpected transmission rate drops due to increased side traffic in the network, which may have dangerous effects during telemanipulation. When congestion happens, the system enters an uncertain state in which the transmission rate is assumed to be unknown. Consequently, until the transmission rate estimation converges to the current bitrate of the link, the system parameters remain ambiguous while the signal delays suddenly increase. A trivial solution to the problem would be to stop immediately and restart the system from the lowest possible bitrate parameters. However, this transition would interrupt the user's interaction and it would take time to converge to a reliable transmission rate estimate. Also, if congestion happens frequently, the stop-and-adapt cycles can be annoying to the OP. For a smooth transition to the new network conditions, an intelligent video frame dropping and bitrate adaptation strategy is applied instead. Algorithm 4 shows our proposed congestion control method for the transmission rate estimation in Algorithm 2. In line 1, the derivative of the estimated transmission rate is computed using a 4th-order FIR filter; since it is computed as older minus newer estimates, a large positive value indicates a congestion event with a sudden rate drop, whereas a negative value relates to an increasing transmission rate. Congestion control is activated when either the delay constraint for the force samples or the one for the video frames is violated and the derivative of the transmission rate is higher than a threshold (line 2 in Algorithm 4). When the congestion starts, the current video transmission buffer is discarded and the video transmission is paused until all unacknowledged frames are acknowledged (lines 3-6 in Algorithm 4). Then the system continues probing the available transmission rate by restarting the video transmission with half the frame rate (temporal scalability) and in intra-only mode. With this approach, we can safely sample the transmission rate, and the interruptions of the video caused by the cleared buffers are quickly recovered with intra frames (lines 7-12 in Algorithm 4). When the delay constraints are back to normal and TRderivative reaches a value

lower than the threshold, the system enters the recovery state and keeps its low-rate settings (lines 8-9) until a waiting-time threshold is reached (line 15). Thereupon, the system increases the video frame rate back to normal and switches back to inter-intra coding mode (lines 16-18).

ALGORITHM 4: Congestion control algorithm
 1  TRderivative(i) = ((TR_est(i-3) - TR_est(i)) + 2*(TR_est(i-2) - TR_est(i-1))) / 8;
 2  if TRderivative(i) >= Thr && (ForceDelay > ForceDelayConst || VideoDelay > VideoDelayConst) then
 3      if UnackedVideoNumber > 0 && SignalStop == true then
 4          EmptyVideoBuffer();
 5          VideoTransFlag = false;
 6      end
 7      else
 8          FPSscale = 2;
 9          VideoMode = IntraOnly;
10          VideoTransFlag = true;
11          SignalStop = false;
12      end
13  end
14  else
15      if WaitingTime > WaitThr then
16          FPSscale = 1;
17          VideoMode = InterIntra;
18          SignalStop = true;
19      end
20  end

3. EXPERIMENTAL SETUP AND RESULTS

The multiplexing scheme is evaluated using a teleoperation system consisting of a KUKA Light Weight Robot arm [KUKA], a JR3 force/torque sensor [JR3] and a Force Dimension Omega 6 haptic device [FORCE DIMENSION]. The TOP and OP are separated by a hardware network emulator [APPOSITE-TECH] which can impose precise channel rate limitations and network delay. The tested RTT delays are set symmetrically in the network emulator, and the feedback channel that carries the velocity and acknowledgment packets is assumed to have sufficient transmission rate in our experiments. Real-time Xenomai Linux-based machines are used to measure signal latencies and estimate the transmission rate. The GigE camera [MAKO G-223] is attached to the robotic arm (eye-on-the-hand), focusing on the end-effector and its operating region. The TOP robot is equipped with a single-point contact tool to interact with wooden toys in the remote environment. Using the tool, the OP can move the toys and peg them into the corresponding holes, which can be considered a representative task for teleoperation applications. Appendix B gives detailed information about the design of the teleoperation testbed and the experimental procedure.

3.1 Experiment 1: Teleoperation over constant bitrate (CBR) links

In this experiment, we demonstrate the performance of the system over CBR links. The CBR link from the TOP to the OP is set to 1, 2 and 3 Mbps with a symmetric RTT delay of 100 ms. The OP performs the following fixed task: the OP moves the tool tip, holds the object by its hole, moves it 10 cm, releases it and moves back to the initial position (see Appendix B for details). The target points are marked to avoid deviation from the desired manipulation. During the teleoperation, the estimated transmission rate, the number of transmitted packets, the PSNR for visual quality and the signal delays are continuously recorded for the evaluation of the teleoperation system together with the multiplexing scheme.

TRest performance: In Table I, we observe that the mean of the transmission rate estimate converges to the target bitrate. Approximately 97-99% of the available transmission rate can be detected. In the latter columns, we report the root mean squared error (TRest rmse) and the standard deviation (TRest σ) from the original transmission rate as the precision loss of the transmission rate estimation. We observe a slight precision loss in the estimate as the transmission rate increases because of the numerical computation error issues mentioned earlier.

Packet rate: As the available transmission rate increases, the average packet rate rises due to the increased video bitrate for better visual quality. The packet rate results are very promising compared to haptic communication systems without data reduction schemes, which need to transmit 1000 packets/sec. With the help of the haptic data reduction and the multiplexing scheme, it is still possible to achieve a packet rate reduction of around 75-87% for audio, video and haptics combined if we take 1000 packets/sec as the reference. If we compare this result with systems transmitting only haptic signals with a data reduction of 90% [Hinterseer et al. 2008], we can conclude that the multiplexed audio and video data do not cause a large increase in the packet rate.

Visual quality (PSNR): The visual quality of the teleoperation scenes is measured in terms of the mean, standard deviation, minimum and maximum of the PSNR in dB. We observe that the system adapts the video bitrate, and the quality of the video stream improves as the available transmission rate increases. It is important to note that the quality of the video is highly dependent on the motion in the teleoperation scenes. Because the camera is mounted on the robotic arm, the PSNR reaches its minimum value when the robot is in high motion. Conversely, the PSNR reaches its maximum value when the robot is steady.

Force delay: The statistics of the force delay demonstrate that the multiplexing scheme successfully controls the delay of the force signal. In the lower part of Table I, the maximum force delay column shows that the force samples are not delayed by more than the multiplexing buffer size, which is determined by Algorithm 3. The mean and jitter values refer to the average and standard deviation of the original transmission delay of the force samples. To avoid signal distortion for early force sample arrivals, the demultiplexer applies play-out buffering by checking the timestamps, so that each force sample is displayed at the correct time. Hence, the mean force delay can be shifted to the target delay with 0 ms jitter.

Video delay: We observe that the linear video bitrate model determined above, together with the single-frame delay constraint, successfully controls the video delay. The delay constraint is set to 35 ms, and we observe that the system converges to this constraint with very low jitter. If we check the minimum and maximum delays, there are occasional outliers due to rate-control deviation on some frames. However, the low jitter value indicates that these outliers occur very rarely.

Table I: The transmission rate estimation performance for CBR links, the packet rate of the system and the visual quality of the teleoperation scenes are illustrated. Additionally, we present the delay performance of the system with respect to each signal. The one-way 50 ms delay is subtracted from the results to clearly illustrate the delay effect of the transmission bottleneck.
Upper part, per CBR rate TR (kbps): TRest mean (kbps), TRest σ (kbps), TRest rmse (kbps); packet rate mean and stddev (packets/sec); visual quality PSNR mean, stddev, min and max (dB).
Lower part, per CBR rate TR (kbps): force delay mean, jitter, min, max (ms); video delay mean, jitter, min, max (ms); audio delay mean, jitter, min, max (ms).
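The play-out buffering described above for the force samples can be sketched as follows. This is an illustrative assumption about one possible realization (sender and receiver clocks are assumed to be synchronized, as provided by the clock server in Appendix B), not the authors' code.

import heapq
from itertools import count

class ForcePlayoutBuffer:
    """Holds early force samples and releases each one a fixed target delay after
    its sampling timestamp, so the displayed delay is constant (zero jitter)."""
    def __init__(self, target_delay_s):
        self.target = target_delay_s
        self.heap = []                # (release_time, tie_breaker, sample)
        self._tie = count()

    def push(self, sample_timestamp_s, sample):
        release = sample_timestamp_s + self.target
        heapq.heappush(self.heap, (release, next(self._tie), sample))

    def pop_due(self, now_s):
        """Return all samples whose display time has been reached."""
        due = []
        while self.heap and self.heap[0][0] <= now_s:
            due.append(heapq.heappop(self.heap)[2])
        return due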

Audio delay: The audio frames reach the demultiplexer side very fast. We observe a slight decreasing trend in the audio delay as the network capacity increases, because the audio bitrate is kept constant at 64 kbps in all transmission rate conditions, which gives sufficient sound quality for the interaction.

3.2 Experiment 2: Congestion control results

In this experiment, the available transmission rate of the communication link suddenly drops from 3 to 2 Mbps while the OP is in contact with a remote object and drags it over the surface. We test the performance of Algorithm 4, in which the system detects congestion and converges quickly back to the current transmission rate. As the RTT increases, the transmission rate estimation is delayed. Consequently, the adaptation of the system to the transmission rate change is also delayed. Thus, the system keeps pushing data at a high rate until it estimates the current transmission rate, and during this period the signal delays are higher than the desired delay constraints. In Table II, we show the system response to the congestion event when the RTT is 100, 200 and 300 ms and compare the signal delays with the congestion control (Algorithm 4) enabled and disabled. We show the peak delay for the haptic and video signals, compute the transmission rate estimation drop with respect to the target bitrate of 2 Mbps, and measure the convergence time, i.e., the time between the onset of congestion and the moment the estimate converges to 2 Mbps. From the table, we can see that the congestion control reduces the system latency by controlling the video throughput of the system. On the other hand, we see that the RTT plays an important role, and it becomes challenging for the system to adapt its parameters as the RTT increases. In Figures 7, 8 and 9, we illustrate the corresponding force and video delay plots for the test conditions RTT 100, 200 and 300 ms. In the following, we comment on the results shown in the delay plots. The demo video of the setup and the results for RTT = 100 ms can be watched via the referenced link [Paper DEMO video].

RTT 100 ms: Fig. 7 shows the estimated transmission rate results and the delay profiles of the video and force signals when the RTT delay is set to 100 ms. If the congestion control is not enabled, we observe that the transmission rate estimation of Algorithm 2 conservatively drops the throughput below 1 Mbps although the current transmission rate is 2 Mbps. However, with the help of Algorithm 4, the system detects the congestion event and probes the communication link capacity to converge to the true transmission rate. The red line in the figures illustrates the estimated transmission rate, and we observe that the congestion control mode helps the estimator to quickly converge to the current transmission rate of the link. Moreover, the force and video delay profiles are plotted aligned with the transmission rate estimation result. We observe more delay constraint violations if the congestion control mode is off. On the other hand, the peak delays do not cause critical lags on either the video or the force signal, as already shown in Table II.
Table II: Congestion control results when the link capacity suddenly drops from 3 to 2 Mbps. For each tested RTT (ms), the table reports, without and with congestion control: the peak haptic delay DHaptic max (ms), the peak video delay DVideo max (ms), the transmission rate estimation drop TR drop (kbps) and the convergence time T conv (ms).
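For reference, the quantities reported in Table II can be extracted from logged traces along the following lines. This is an assumed post-processing sketch (including the 50 kbps convergence tolerance), not the authors' evaluation scripts.

def peak_delay_ms(delay_trace_ms):
    # Peak haptic or video delay during the congestion event.
    return max(delay_trace_ms)

def tr_drop_kbps(tr_est_trace_kbps, target_kbps=2000):
    # Deepest dip of the rate estimate below the new 2 Mbps link rate.
    return target_kbps - min(tr_est_trace_kbps)

def convergence_time_s(time_s, tr_est_kbps, congestion_onset_s,
                       target_kbps=2000, tol_kbps=50):
    # Time from congestion onset until the estimate first comes within
    # tol_kbps of the new link rate (a simplification of "converges to 2 Mbps").
    for t, tr in zip(time_s, tr_est_kbps):
        if t >= congestion_onset_s and abs(tr - target_kbps) <= tol_kbps:
            return t - congestion_onset_s
    return None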

Fig. 7: Delay improvements for congestion control when a sudden transmission rate drop from 3 Mbps to 2 Mbps occurs at an RTT of 100 ms. (a) Force delay without congestion control. (b) Force delay with congestion control. (c) Video delay without congestion control. (d) Video delay with congestion control. To illustrate the delays caused by the transmission bottleneck, the 50 ms one-way delay is subtracted from the signal delays.

RTT 200 ms: Fig. 8 illustrates the estimated transmission rate results and the delay profiles of the video and force signals when the RTT delay is set to 200 ms. If we compare the transmission rate estimation results in Fig. 8 with Fig. 7, we observe that the transmission rate estimate diverges very quickly from the current transmission rate as the RTT delay increases from 100 to 200 ms. On the other hand, the congestion control mode helps the estimator to converge to the current transmission rate of 2 Mbps by keeping the turning point above 1.5 Mbps. When we compare the force delay plots in Fig. 8a and Fig. 8b, the congestion compensation mode quickly brings the force delay back below the desired constraint. However, as we observe in Fig. 8a, the force delay constraint is violated for approximately 5 seconds if the congestion control scheme is switched off.

Fig. 8: Delay improvements for congestion control when a sudden transmission rate drop from 3 Mbps to 2 Mbps occurs at an RTT of 200 ms. (a) Force delay without congestion control. (b) Force delay with congestion control. (c) Video delay without congestion control. (d) Video delay with congestion control. To illustrate the delays caused by the transmission bottleneck, the 100 ms one-way delay is subtracted from the signal delays.

When we compare the video delay profiles in Fig. 8c and Fig. 8d, there is a significant improvement in the video delay when the congestion detection and compensation scheme is enabled.

RTT 300 ms: Fig. 9 illustrates the estimated transmission rate results and the delay profiles of the video and force signals when the RTT delay is set to 300 ms. When we compare the transmission rate estimation results in Fig. 9 with Figs. 8 and 7, the increasing RTT delay significantly impairs the estimation.

Fig. 9: Delay improvements for congestion control when a sudden transmission rate drop from 3 Mbps to 2 Mbps occurs at an RTT of 300 ms. (a) Force delay without congestion control. (b) Force delay with congestion control. (c) Video delay without congestion control. (d) Video delay with congestion control. To illustrate the delays caused by the transmission bottleneck, the 150 ms one-way delay is subtracted from the signal delays.

In particular, if the congestion control scheme is off, the estimate drops close to 500 kbps, and it takes more than 10 seconds to converge to the current transmission rate of the link. Similar to the force delay results for the 100 ms RTT case, the congestion control mode quickly recovers the force delay back to its desired constraint. On the other hand, when the congestion control mode is switched off, the video delay diverges to nearly 900 ms because of the late adaptation of the video bitrate in Fig. 9c.

However, as seen in Fig. 9d, enabling the congestion control mode helps the system to recover the video delay within a very short time and without any overshoot.

4. CONCLUSION

In this paper, we studied a haptic teleoperation scenario under low-bitrate and delayed network conditions. We introduced an application-layer communication protocol which transmits the multimodal signals with low latency while efficiently utilizing the network resources. To achieve this, we employed recent data reduction and bitrate control techniques in the audio-visual-haptic communication context. More specifically, we focused on the estimation of the available transmission rate and on the effects of sudden congestion on the signal latencies. The capacity of the communication path is estimated continuously, and the system parameters, namely the video bitrate, the multiplexing throughput and the force buffer length, are adapted to the available network resources. Moreover, we developed a congestion control scheme which becomes active when the link capacity drops suddenly as a result of increased side traffic. The results show that the congestion-induced latency can be recovered quickly by lowering the video framerate and bitrate. Conversely, we illustrated that increasing the RTT delay challenges the system, and recovering back to the true transmission rate takes longer.

REFERENCES

APPOSITE-TECH. Netropy N60 hardware network emulator, (visited on ).
FORCE DIMENSION. Omega 6, 6-DoF haptic device, (visited on ).
TDPA DEMO video: Combining haptic data reduction with stabilizing control approaches, (visited on ).
Paper DEMO video: A multiplexing scheme for multimodal teleoperation, (visited on ).
JR3. 6-DoF force-torque sensor, (visited on ).
KUKA. Light weight robot arm, (visited on ).
L. MERRITT AND R. VANAM. x264: A high performance H.264/AVC encoder, VideoLAN Project (visited on ).
MAKO G-223. Gigabit Ethernet industrial camera, (visited on ).
PORTAUDIO. Portable cross-platform audio I/O, (visited on ).
BACHHUBER, C. AND STEINBACH, E. A system for high precision glass-to-glass delay measurements in video communication. In IEEE Int. Conf. on Image Processing (ICIP).
BRAKMO, L. S. AND PETERSON, L. L. TCP Vegas: End to end congestion avoidance on a global internet. IEEE Journal on Selected Areas in Communications v. 13.
BURDEA, G. C. Force and touch feedback for virtual reality. John Wiley & Sons, New York, NY, USA.
CAPONE, A., FRATTA, L., AND MARTIGNON, F. Bandwidth estimation schemes for TCP over wireless networks. IEEE Trans. on Mobile Computing v. 3, n. 2.
CASETTI, C., GERLA, M., MASCOLO, S., SANADIDI, M. Y., AND WANG, R. TCP Westwood: End-to-end congestion control for wired/wireless networks. Springer-Verlag Wireless Networks v. 8, n. 5.
CEN, Z., MUTKA, M., LIU, Y., GORADIA, A., AND XI, N. QoS management of supermedia enhanced teleoperation via overlay networks. In IEEE Int. Conf. on Intelligent Robots and Systems.
CEN, Z., MUTKA, M. W., ZHU, D., AND XI, N. Improved transport service for remote sensing and control over wireless networks. In IEEE Int. Conf. on Mobile Adhoc and Sensor Systems.
CHA, J., SEO, Y., KIM, Y., AND RYU, J. An authoring/editing framework for haptic broadcasting: Passive haptic interactions using MPEG-4 BIFS. In IEEE Proc. of EuroHaptics Conf.
CIZMECI, B., CHAUDHARI, R., XU, X., ALT, N., AND STEINBACH, E. A visual-haptic multiplexing scheme for teleoperation over constant-bitrate communication links. In Haptics: Neuroscience, Devices, Modeling, and Applications, Springer-Verlag LNCS v. 8619.
COLGATE, J. AND BROWN, J. Factors affecting the z-width of a haptic display. In IEEE Int. Conf. on Rob. & Aut.
DEMIRCIN, M., VAN BEEK, P., AND ALTUNBASAK, Y. Delay-constrained and R-D optimized transrating for high-definition video streaming over WLANs. IEEE Trans. on Multimedia v. 10, n. 6.
DINC, E. AND AKAN, O. More than the eye can see: Coherence time and coherence bandwidth of troposcatter links for mobile receivers. IEEE Vehicular Technology Magazine v. 10, n. 2.
EID, M. A., CHA, J., AND EL-SADDIK, A. 2011. Admux: An adaptive multiplexer for haptic-audio-visual data communication. IEEE Trans. on Instrumentation and Measurement v. 60, n. 1.

FERRELL, W. R. Remote manipulation with transmission delay. IEEE Trans. on Hum. Fact. in Elec. v. 6, n. 1.
GAO, M., CIZMECI, B., EILER, M., STEINBACH, E., ZHAO, D., AND GAO, W. Macroblock level rate control for low delay H.264/AVC based video communication. In IEEE Picture Coding Symposium.
GOELLER, M., OBERLAENDER, J., UHL, K., ROENNAU, A., AND DILLMANN, R. Modular robots for on-orbit satellite servicing. In IEEE Int. Conf. on Robotics and Biomimetics (ROBIO).
HE, Z. AND MITRA, S. Optimum bit allocation and accurate rate control for video coding via ρ-domain source modeling. IEEE Trans. on Circuits and Systems for Video Technology v. 12, n. 10.
HINTERSEER, P., HIRCHE, S., CHAUDHURI, S., STEINBACH, E., AND BUSS, M. 2008. Perception-based data reduction and transmission of haptic data in telepresence and teleaction systems. IEEE Trans. on Signal Processing v. 56, n. 2.
HINTERSEER, P., STEINBACH, E., HIRCHE, S., AND BUSS, M. A novel psychophysically motivated transmission approach for haptic data streams in telepresence and teleaction systems. In IEEE Int. Conf. on Acous., Sp., and Sig. Proc.
HIRCHE, S., HINTERSEER, P., STEINBACH, E., AND BUSS, M. Network traffic reduction in haptic telepresence systems by deadband control. In 16th IFAC World Congress.
ISOMURA, E., TASAKA, S., AND NUNOME, T. QoE enhancement in audiovisual and haptic interactive IP communications by media adaptive intra-stream synchronization. In TENCON IEEE Region 10 Conf.
ITU-T. ITU-T JPEG standard, digital compression and coding of continuous-tone still images.
ITU-T. ITU-T H.264, advanced video coding for generic audiovisual services.
KAEDE, S., NUNOME, T., AND TASAKA, S. QoE enhancement of audiovisual and haptic interactive IP communications by user-assistance. In IEEE 18th Int. Conf. on Computational Science and Engineering.
KUSCHEL, M., KREMER, P., HIRCHE, S., AND BUSS, M. Lossy data reduction methods for haptic telepresence systems. In Proc. IEEE Int. Conf. on Rob. & Aut.
LEE, S., MOON, S., AND KIM, J. A network-adaptive transport scheme for haptic-based collaborative virtual environments. In Proc. of 5th ACM SIGCOMM Workshop on Network and System Support for Games, article no. 13.
MARSHALL, A., YAP, K. M., AND YU, W. 2008. Providing QoS for networked peers in distributed haptic virtual environments. Hindawi, Journal of Advances in Multimedia v. 2008.
OSMAN, H. A., EID, M., IGLESIAS, R., AND SADDIK, A. E. ALPHAN: Application Layer Protocol for HAptic Networking. In IEEE Int. Workshop on Haptic, Audio and Visual Environments and Games.
PING, L., WENJUAN, L., AND ZENGQI, S. Transport layer protocol reconfiguration for network-based robot control system. In IEEE Proc. of Networking, Sensing and Control.
PYKE, J., HART, M., POPOV, V., HARRIS, R., AND MCGRATH, S. A tele-ultrasound system for real-time medical imaging in resource-limited settings. In IEEE Int. Conf. of Eng. in Medicine and Biology Society.
RANK, M., SHI, Z., MÜLLER, H., AND HIRCHE, S. 2010. Perception of delay in haptic telepresence systems. MIT Presence v. 19, n. 5.
RYU, J.-H., ARTIGAS, J., AND PREUSCHE, C. A passive bilateral control scheme for a teleoperator with time-varying communication delay. Mechatronics v. 20, n. 7.
SHERIDAN, T. Space teleoperation through time delay: review and prognosis. IEEE Trans. on Robotics and Automation v. 9, n. 5.
SILVA, J. M., OROZCO, M., CHA, J., SADDIK, A. E., AND PETRIU, E. M. Human perception of haptic-to-video and haptic-to-audio skew in multimedia applications. ACM Trans. on Multimedia Comp., Comm. and App. (TOMM) v. 9, n. 2, p. 9:1-9:16.
STEINBACH, E., HIRCHE, S., KAMMERL, J., VITTORIAS, I., AND CHAUDHARI, R. Haptic data compression and communication for telepresence and teleaction. IEEE Signal Processing Magazine v. 28, n. 1.
STOICA, I., SHENKER, S., AND ZHANG, H. Core-stateless fair queueing: Achieving approximately fair bandwidth allocations in high speed networks. ACM SIGCOMM Comput. Comm. Rev. v. 28, n. 4.
UCHIMURA, Y., OHNISHI, K., AND YAKOH, T. Bilateral robot system on the real time network structure. In IEEE Int. Workshop on Advanced Motion Control.
VALIN, J., TERRIBERRY, T., MONTGOMERY, C., AND MAXWELL, G. A high-quality speech and audio codec with less than 10-ms delay. IEEE Trans. on Audio, Speech, and Language Processing v. 18, n. 1.
VOGELS, I. 2004. Detection of temporal delays in visual-haptic interfaces. Human Factors v. 46, n. 1.
WALRAEVENS, J. Discrete-time queueing models with priorities. Ph.D. Dissertation, (TELIN) Ghent University.
WEBER, E. Die Lehre vom Tastsinn und Gemeingefühl, auf Versuche gegründet.
XU, X., CIZMECI, B., SCHUWERK, C., AND STEINBACH, E. 2015. Haptic data reduction for time-delayed teleoperation using the time domain passivity approach. In IEEE World Haptics Conf. (WHC).

YAMAMOTO, S., YASHIRO, D., YUBAI, K., AND KOMADA, S. Rate control based on queuing state observer for visual-haptic communication. In IEEE Int. Workshop on Advanced Motion Control.
YASHIRO, D., TIAN, D., AND YAKOH, T. End-to-end flow control for visual-haptic communication in the presence of bandwidth change. Wiley Electronics and Communications in Japan v. 96, n. 11.
ZHANG, F. AND STEINBACH, E. Improved ρ-domain rate control with accurate header size estimation. In IEEE Int. Conf. on Acous., Speech and Signal Proc.

Fig. 10: MUX header (1 byte) structure in bits.

A. MULTIPLEXING HEADER STRUCTURE

In Fig. 10, we show the structure of the MUX header. The first 3 bits, marked with M in Fig. 10, represent the packet type. With 3 bits, 8 different packet types can be signalled, which is sufficient for the current multiplexing scheme. Currently, bits 3, 4, 5 and 6 (N) are reserved for future modalities and control signalling. If the packet type includes video data, bit 7, tagged as L, is used to signal video frame completion, which means that the frame is ready for decoding. The multiplexing scheme divides the encoded video stream into fragments of different sizes. When the transmission of the current video frame is completed, the demultiplexer is triggered to pass the bitstream to the video decoder.

In Table III, the multiplexing header is shown as H(1), where (1) represents the size of the information in bytes. Depending on the packet type, additional information such as timestamps, data indexes and payload lengths is signalled after the multiplexing header. Every packet contains a timestamp, shown as PTS(2), which is the clock time at which the packet is pushed into the channel. Using the packet timestamp, the demultiplexer can measure the transmission time of each packet. The multimedia information follows these headers, and depending on the modality, the following information is added:

Force: If the packet contains a force sample, the sample id SID(2), sample timestamp STS(2), sample payload SPL(6) and energy payload EPL(12), which is used for the control architecture, are added to the packet. Each force sample is represented by a 2-byte floating point number in the sample payload, and each energy sample is represented by a 4-byte floating point number in the energy payload.

Video: If the packet contains a video frame fragment, the corresponding frame number FN(2), fragment number FGN(2), frame timestamp FTS(2) and video payload length VPLL(2) are written into the packet. The payload size (Y bytes) of the fragment is written into VPLL(2). After the multiplexing information, the payload data is written; it occupies Y bytes, which is decided by the multiplexing algorithm.

Audio: Unlike video frames, many audio frames can fit into one packet due to their small frame size. First, we indicate the number of audio frames NAF(1) = a, and then we write the timestamp ATS(2), frame number AFN(2) and payload length APLL(2) for each audio frame. After all the side information, the audio payloads (in total X bytes, determined by the multiplexer) are written into the packet.

Table III: Packet structures.
F:   H(1) PTS(2) SID(2) STS(2) SPL(6) EPL(12) | size: 25 bytes
A:   H(1) PTS(2) NAF(1) a x (ATS(2) AFN(2) APLL(2)) APL(X) | size: 4 + a*6 + X bytes
V:   H(1) PTS(2) FN(2) FGN(2) FTS(2) VPLL(2) VPL(Y) | size: 11 + Y bytes
AV:  H(1) PTS(2) NAF(1) a x (ATS(2) AFN(2) APLL(2)) APL(X) FN(2) FGN(2) FTS(2) VPLL(2) VPL(Y) | size: 12 + a*6 + X + Y bytes
AF:  H(1) PTS(2) NAF(1) a x (ATS(2) AFN(2) APLL(2)) APL(X) STS(2) SID(2) SPL(6) EPL(12) | size: 26 + a*6 + X bytes
VF:  H(1) PTS(2) SID(2) STS(2) SPL(6) EPL(12) FN(2) FGN(2) FTS(2) VPLL(2) VPL(Y) | size: 33 + Y bytes
AVF: H(1) PTS(2) NAF(1) a x (ATS(2) AFN(2) APLL(2)) APL(X) FN(2) FGN(2) FTS(2) VPLL(2) VPL(Y) SID(2) STS(2) SPL(6) EPL(12) | size: 34 + a*6 + X + Y bytes
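As an illustration of the header and packet layout, the following sketch assembles the 1-byte MUX header and a 25-byte force (F) packet from Table III in Python. The packet-type code, the use of 2-byte floats for the three force components (SPL, 6 bytes) and of 4-byte floats for the three energy values (EPL, 12 bytes) are assumptions made for the example; the field names follow Table III.

import struct

PT_FORCE = 0b001   # hypothetical 3-bit code for packet type "F"

def mux_header(pkt_type, frame_complete=False):
    """1-byte MUX header: bits 0-2 packet type (M), bits 3-6 reserved (N), bit 7 frame-complete flag (L)."""
    return bytes([(pkt_type & 0b111) | (int(frame_complete) << 7)])

def pack_force_packet(pts, sid, sts, force_xyz, energy_xyz):
    body = struct.pack('<HHH', pts, sid, sts)    # PTS(2) SID(2) STS(2)
    body += struct.pack('<3e', *force_xyz)       # SPL(6): 3 x 2-byte floats
    body += struct.pack('<3f', *energy_xyz)      # EPL(12): 3 x 4-byte floats
    return mux_header(PT_FORCE) + body           # 1 + 24 = 25 bytes, as in Table III

pkt = pack_force_packet(pts=1234, sid=42, sts=1200,
                        force_xyz=(0.1, -0.5, 2.0), energy_xyz=(0.01, 0.02, 0.03))
assert len(pkt) == 25
# On the wire, the 42 bytes of UDP/IPv4 protocol overhead mentioned below are added:
print(len(pkt) + 42, "bytes per force packet including protocol overhead")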

26 0:26 Burak Cizmeci et al. frame number AF N(2) and payload length AP LL(2) for each audio frame. After all the side information, the audio payloads (in total X bytes, determined by the multiplexer) are written into the packet. In addition to the multiplexing information, the transport protocol adds its own header information to every packet. The used protocol(s) should be known to the multiplexer because the protocol header size needs to be taken into account while allocating the channel rate to the packets. In our set-up, the UDP/IPv4 protocol is used and 42 bytes are reserved for the UDP/IPv4 header information. The overall overhead size is calculated by adding up the 42 bytes UDP/IPv4 protocol header to the estimated size of each packet type given in Table III. B. TELEOPERATION TESTBED DESIGN Fig. 11 illustrates the implemented testbed. The OP and TOP computers, running Real-Time (RT) Xenomai Linux kernel, run the local control loops at 1 khz and are physically separated via ethernet with a hardware network emulator (Apposite Netropy N60). The TOP computer communicates with the KUKA LWR control unit and transmits the computed (X, Y, Z) position of the robot end-effector. The KUKA control unit has its own closed kinematics computation loop for moving the robot joints to the desired end-effector position. The video encoding and decoding are performed on separate machines because the computation load of the video processing interferes with the local control loops at the OP and TOP. It is challenging to synchronize the control loops at 1 ms accuracy. Therefore, we employ a separate computer as a clock server machine that sends timestamps to synchronize the control loops at both sides. These timestamps are used to measure the end-to-end latencies for the evaluation of the system performance. The dimensions of the manipulation tool are given in Fig. 12a. Its total length from bottom to tipend is 165 mm, and the tip diameter is semi-conical from 8.8 to 14 mm to fit into the object holes (13 mm, given in Fig. 12c) for manipulation tasks involving holding and dragging. Fig. 12b illustrates the Clock Machine RT Linux Operator Network Emulator Teleoperator KUKA LWR Control Unit RT Linux FPGA Apposite Netropy N60 RT Linux VxWorks Video Decoder ffmpeg Video Encoder x264 Linux Linux Fig. 11: Teleoperation system testbed: The physical structure of the teleoperation system is given, and the computers and hardware are interconnected through ethernet-based interfaces.

The dimensions of the manipulation tool are given in Fig. 12a. Its total length from bottom to tip end is 165 mm, and the tip is semi-conical with a diameter from 8.8 to 14 mm to fit into the 13 mm object holes (given in Fig. 12c) for manipulation tasks involving holding and dragging. Fig. 12b illustrates the construction of the manipulation platform. As observed from Fig. 12b, the triangle- and square-shaped holes are drilled slightly larger (+5 mm) so that the pegging operation can be performed smoothly. The 3D object files are provided in the Supplemental Files section; the objects and the manipulation tool can therefore either be reproduced in a workshop or 3D printed.

Fig. 12: Dimensions of the single-point tool, manipulation platform and objects. (a) Single-point metal end-effector, dimensions in mm. (b) Manipulation platform dimensions. (c) Dimensions of the objects for manipulation.

Fig. 13a shows the experimental setup at the TOP side. The camera is mounted and fixed on the robot hand (eye-on-the-hand), monitoring the single-point metal end-effector, the manipulation platform and the objects. Fig. 13b illustrates how experiments 1 and 2 were performed. For consistency between different experimental sessions and reproducible results, the OP is forced to make a controlled movement during the manipulation. The procedure is as follows:
(1) At the beginning, the end-effector is in free space above the object. The OP slightly moves towards the object and gets in contact with it by holding it by its hole.
(2) The OP drags the object to the target location, which is placed 10 cm apart as shown in Fig. 13b.
(3) When the target is reached, the OP releases the object and moves upwards.
(4) The OP approaches the object once again to drag it back to the initial position.
(5) Similarly, the OP holds the object, brings it back to the initial position and releases it.

Fig. 13: The teleoperator-side manipulation environment. (a) Teleoperator KUKA LWR arm with the single-point metal end-effector performing the peg-in-hole task. (b) The procedure of experiments 1 and 2: (1) move 10 cm, (2) bring back.

C. DISCUSSION ON DELAY REQUIREMENTS AND INTER-MEDIA SYNCHRONIZATION

In Table IV, we give the latency performance of our teleoperation system excluding the one-way propagation delay. From Table IV, we can conclude that our teleoperation system does not violate the reported maximum tolerable delay and jitter constraints, shown as bold numbers [Marshall et al. 2008; Eid et al. 2011]. Furthermore, the one-way propagation delay does not violate the delay requirements for the video and audio streams for propagation delays of up to 275 ms and 100 ms, respectively. The delay requirement on the haptic signals in Table IV is very tight, but it applies to systems without delay-robust control architectures; in our system, one-way delays of up to 250 ms can be tolerated [Xu et al. 2015]. It is important to note that the QoS constraints given in Table IV were determined for teleconferencing systems (audio-video) and shared haptic virtual environments. Therefore, we also need to consider delay perception research for teleoperation systems and cross-modality effects. Due to the bidirectional communication of velocity/position and force signals and the varying environment conditions, studying human delay perception in haptic teleoperation systems is challenging compared to audio-visual delay perception [Rank et al. 2010]. In [Vogels 2004], the author reported that time differences of up to 45 ms between the visual and force signals are acceptable when the subject hits a stiff wall with the haptic device. Moreover, the author reported that the perceptible delay between force and visual stimuli can be larger in some telemanipulation situations. In [Rank et al. 2010], the authors showed that the operator's movement dynamics, the local haptic device features, and the penetration

Table IV: The provided service without the one-way delay and the delay requirements for the haptic, video and audio streams. The bold numbers refer to the maximum tolerable delay and jitter for the haptic, video and audio streams given in [Marshall et al. 2008; Eid et al. 2011]. Rows: delay (ms) and jitter (ms); columns: haptic, video, audio. Among the recoverable entries, the audio delay requirement is < 150 ms, the measured haptic jitter is 0 ms against a < 2 ms requirement, the measured video jitter is 3 ms, and the audio jitter requirement is < 30 ms.


More information

Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks

Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks Recently, consensus based distributed estimation has attracted considerable attention from various fields to estimate deterministic

More information

Syed Obaid Amin. Date: February 11 th, Networking Lab Kyung Hee University

Syed Obaid Amin. Date: February 11 th, Networking Lab Kyung Hee University Detecting Jamming Attacks in Ubiquitous Sensor Networks Networking Lab Kyung Hee University Date: February 11 th, 2008 Syed Obaid Amin obaid@networking.khu.ac.kr Contents Background Introduction USN (Ubiquitous

More information

DIGITAL Radio Mondiale (DRM) is a new

DIGITAL Radio Mondiale (DRM) is a new Synchronization Strategy for a PC-based DRM Receiver Volker Fischer and Alexander Kurpiers Institute for Communication Technology Darmstadt University of Technology Germany v.fischer, a.kurpiers @nt.tu-darmstadt.de

More information

Contents. IEEE family of standards Protocol layering TDD frame structure MAC PDU structure

Contents. IEEE family of standards Protocol layering TDD frame structure MAC PDU structure Contents Part 1: Part 2: IEEE 802.16 family of standards Protocol layering TDD frame structure MAC PDU structure Dynamic QoS management OFDM PHY layer S-72.3240 Wireless Personal, Local, Metropolitan,

More information

Assistant Lecturer Sama S. Samaan

Assistant Lecturer Sama S. Samaan MP3 Not only does MPEG define how video is compressed, but it also defines a standard for compressing audio. This standard can be used to compress the audio portion of a movie (in which case the MPEG standard

More information

Robust Haptic Teleoperation of a Mobile Manipulation Platform

Robust Haptic Teleoperation of a Mobile Manipulation Platform Robust Haptic Teleoperation of a Mobile Manipulation Platform Jaeheung Park and Oussama Khatib Stanford AI Laboratory Stanford University http://robotics.stanford.edu Abstract. This paper presents a new

More information

Adoption of this document as basis for broadband wireless access PHY

Adoption of this document as basis for broadband wireless access PHY Project Title Date Submitted IEEE 802.16 Broadband Wireless Access Working Group Proposal on modulation methods for PHY of FWA 1999-10-29 Source Jay Bao and Partha De Mitsubishi Electric ITA 571 Central

More information

Implementation of a Visible Watermarking in a Secure Still Digital Camera Using VLSI Design

Implementation of a Visible Watermarking in a Secure Still Digital Camera Using VLSI Design 2009 nternational Symposium on Computing, Communication, and Control (SCCC 2009) Proc.of CST vol.1 (2011) (2011) ACST Press, Singapore mplementation of a Visible Watermarking in a Secure Still Digital

More information

Introduction to Real-Time Systems

Introduction to Real-Time Systems Introduction to Real-Time Systems Real-Time Systems, Lecture 1 Martina Maggio and Karl-Erik Årzén 16 January 2018 Lund University, Department of Automatic Control Content [Real-Time Control System: Chapter

More information

Design of Pipeline Analog to Digital Converter

Design of Pipeline Analog to Digital Converter Design of Pipeline Analog to Digital Converter Vivek Tripathi, Chandrajit Debnath, Rakesh Malik STMicroelectronics The pipeline analog-to-digital converter (ADC) architecture is the most popular topology

More information

Energy Efficient Scheduling Techniques For Real-Time Embedded Systems

Energy Efficient Scheduling Techniques For Real-Time Embedded Systems Energy Efficient Scheduling Techniques For Real-Time Embedded Systems Rabi Mahapatra & Wei Zhao This work was done by Rajesh Prathipati as part of his MS Thesis here. The work has been update by Subrata

More information

XOR Coding Scheme for Data Retransmissions with Different Benefits in DVB-IPDC Networks

XOR Coding Scheme for Data Retransmissions with Different Benefits in DVB-IPDC Networks XOR Coding Scheme for Data Retransmissions with Different Benefits in DVB-IPDC Networks You-Chiun Wang Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, 80424,

More information

Energy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks

Energy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks Energy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks Alvaro Pinto, Zhe Zhang, Xin Dong, Senem Velipasalar, M. Can Vuran, M. Cenk Gursoy Electrical Engineering Department, University

More information

GENERIC CODE DESIGN ALGORITHMS FOR REVERSIBLE VARIABLE-LENGTH CODES FROM THE HUFFMAN CODE

GENERIC CODE DESIGN ALGORITHMS FOR REVERSIBLE VARIABLE-LENGTH CODES FROM THE HUFFMAN CODE GENERIC CODE DESIGN ALGORITHMS FOR REVERSIBLE VARIABLE-LENGTH CODES FROM THE HUFFMAN CODE Wook-Hyun Jeong and Yo-Sung Ho Kwangju Institute of Science and Technology (K-JIST) Oryong-dong, Buk-gu, Kwangju,

More information

Research Article Implementing Statistical Multiplexing in DVB-H

Research Article Implementing Statistical Multiplexing in DVB-H Hindawi Publishing Corporation International Journal of Digital Multimedia Broadcasting Volume 29, Article ID 261231, 15 pages doi:1.1155/29/261231 Research Article Implementing Statistical Multiplexing

More information

Digital Audio Broadcasting Eureka-147. Minimum Requirements for Terrestrial DAB Transmitters

Digital Audio Broadcasting Eureka-147. Minimum Requirements for Terrestrial DAB Transmitters Digital Audio Broadcasting Eureka-147 Minimum Requirements for Terrestrial DAB Transmitters Prepared by WorldDAB September 2001 - 2 - TABLE OF CONTENTS 1 Scope...3 2 Minimum Functionality...3 2.1 Digital

More information

Configuring the maximum number of external LSAs in LSDB 27 Configuring OSPF exit overflow interval 28 Enabling compatibility with RFC Logging

Configuring the maximum number of external LSAs in LSDB 27 Configuring OSPF exit overflow interval 28 Enabling compatibility with RFC Logging Contents Configuring OSPF 1 Overview 1 OSPF packets 1 LSA types 1 OSPF areas 2 Router types 4 Route types 5 Route calculation 6 OSPF network types 6 DR and BDR 6 Protocols and standards 8 OSPF configuration

More information

The Physical Layer Outline

The Physical Layer Outline The Physical Layer Outline Theoretical Basis for Data Communications Digital Modulation and Multiplexing Guided Transmission Media (copper and fiber) Public Switched Telephone Network and DSLbased Broadband

More information

Distributed Virtual Environments!

Distributed Virtual Environments! Distributed Virtual Environments! Introduction! Richard M. Fujimoto! Professor!! Computational Science and Engineering Division! College of Computing! Georgia Institute of Technology! Atlanta, GA 30332-0765,

More information

Microwave Engineering Project Use Cases

Microwave Engineering Project Use Cases Microwave Engineering Project Use Cases Version 1 By KB5MU, W5NYV 18 March 2008 Version 2 By KB5MU, W5NYV 27 July 2008 Comments to W5NYV@yahoo.com Voice and Text Applications Under Study 2m repeater operation

More information