Interactive Simulation: UCF EIN5255. VR Software. Audio Output. Page 4-1


VR Software, Class 4
Dr. Nabil Rami
http://www.simulationfirst.com/ein5255/

Audio Output
Audio output can be divided into two elements:
- Audio Generation
- Audio Presentation
Page 4-1

Audio Generation
A variety of audio generation approaches exist:
- Some use dedicated hardware to accelerate computations
- Some use software to perform the calculations
Audio generation can be classified as either tightly coupled or loosely coupled.

Tightly coupled with the simulator means that audio generation is performed on the same host computer. In this case the audio generation subsystem is dedicated to a particular simulator, and the software that controls it is built into the simulation.
Page 4-2

Loosely coupled with a protocol means that the audio generation system operates on a host computer different from the one running the simulation. These audio subsystems can provide services to a variety of simulators; however, they are associated with one particular simulator during operation.

Communications between the simulator and the audio subsystem take place via message-passing protocols:
- Musical Instrument Digital Interface (MIDI) messages
- SIMNET or DIS Protocol Data Units (PDUs)
- High Level Architecture (HLA) transactions
- Custom message formats
Physical connectivity between the subsystems is commonly Ethernet or a serial line.
Page 4-3
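As a minimal sketch of the "custom message format" option above, a loosely coupled simulator might serialize each audio trigger into a fixed binary layout before sending it over Ethernet. The field layout here (event ID, gain, listener-relative position) is a hypothetical example, not a format defined in the course material.

```python
import struct

# Hypothetical custom message: event id (uint16), gain 0..1 (float32),
# then listener-relative x, y, z in meters (three float32s), big-endian.
AUDIO_MSG = struct.Struct("!Hffff")

def pack_audio_event(event_id, gain, x, y, z):
    """Serialize one audio trigger for transmission (e.g. over UDP)."""
    return AUDIO_MSG.pack(event_id, gain, x, y, z)

def unpack_audio_event(payload):
    """Decode a trigger on the audio subsystem's host."""
    return AUDIO_MSG.unpack(payload)

msg = pack_audio_event(42, 0.8, 10.0, 0.0, -3.5)
print(len(msg))                 # 18 bytes: 2 + 4 * 4
print(unpack_audio_event(msg))
```

A real deployment would more likely use MIDI or DIS PDUs as listed above; the point of the sketch is only that the two hosts must agree on a wire format.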

Audio Presentation
Audio can be presented either through speakers or headphones. The method used depends on the design of the physical simulator environment along with the objectives of the simulation.

Speakers
- Open-field audio presentation
- Unencumbering
- Present audio to a group of individuals
- Can also be disruptive to other participants or observers
- Provide strong bass presentation and high energy output
Page 4-4

Speaker installations may consist of:
- Single-channel (monaural)
- Dual-channel (stereophonic)
- Multi-channel configurations
Speakers can be self-amplified or powered by an external amplifier/mixer.

Headphones
- Close-field presentation
- Well suited for environments where the audio is not meant to be heard by anyone other than the participant
- Spatialized audio is generally perceived best when presented over headphones
- Unlike speakers, headphones are encumbering; this encumbrance can be minimized with wireless transmission
Page 4-5

Headphones come in two configurations:
- Circumaural (around the ear): effectively eliminates all audio other than that generated by the system
- Supra-aural (on the ear): allows the participant to hear sounds in addition to those of the audio subsystem
In-ear headphones (inside the ear) can be considered supra-aural.

Content Representation
The audio content consists of sounds generated by:
- The local entity
- Remote entities
- Ambient environmental sounds
- Other objects
Page 4-6

Local Entity Sounds
The local entity, representing the local participant in an exercise, is a source of sounds that can be simulated in the virtual environment. Sounds associated with the local entity include walking, running, or moving. Sounds from the local entity's operated devices, such as an ammunition clip release or weapon fire, are also part of the local entity sounds.

Remote Entity Sounds
Remote entity sounds can include engines, tracks, missiles, or rotor blades. The representation of remote entity sounds provides important cues that enhance the participant's situational awareness.
Page 4-7

Spatialization of such sounds in either two or three dimensions further enhances this awareness. The sound of a remote entity moving in the environment provides crucial cues, especially when it originates behind the participant, outside the visual field of view.

Environmental Sounds
Environmental sounds can work with the visual subsystem to provide a more realistic virtual environment. Sounds from wind, rain, birds, crickets, or crashing surf can add cues about the terrain and the time of day.
Page 4-8

Other Sounds
Other sounds may be included in a simulation:
- Radio voices
- Natural voices

Physically Based Simulation
Depending on the objectives of the simulation, we may need to generate audio that behaves as it does in the real world. This is the case for high-fidelity systems that require the representation of sounds in three dimensions (four if you include time).
Page 4-9

Attenuation for Distance
Distance attenuation is the decrease in energy of an audio effect based on the distance from the listener. There is also a drop-off in the upper frequencies. All systems that generate sounds for remote entities and events perform some level of distance attenuation.

Spatialization
The spatialization of an audio effect can be classified as:
- Diotic: monaural, with no spatialization
- Directional: two-dimensional stereo panning
- Spatialized: three-dimensional placement of the sound source
The additional dimension of time can be applied to all of the above, simulating the propagation delay due to the speed of sound.
Page 4-10
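The two time/distance effects above can be sketched numerically. The inverse-distance gain law used here is one common model (the slides do not prescribe a specific attenuation curve), and 343 m/s is the speed of sound in air at roughly 20 °C.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def distance_gain(distance_m, ref_distance_m=1.0):
    """Inverse-distance attenuation: gain halves each time distance doubles.
    Clamped so sources inside the reference distance are not amplified."""
    return ref_distance_m / max(distance_m, ref_distance_m)

def propagation_delay(distance_m):
    """Seconds to delay the effect, simulating sound propagation speed."""
    return distance_m / SPEED_OF_SOUND

print(distance_gain(2.0))                   # 0.5
print(round(propagation_delay(343.0), 3))   # 1.0 second at 343 m
```

A high-frequency roll-off with distance, mentioned above, would typically be layered on top of this as a distance-dependent low-pass filter.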

Other Effects
- Doppler shift: the relative velocity between a sound source and a listener causes the frequency of the sound waves to compress or expand.
- Reflection/Echo: the material properties of a surface, as well as the geometric properties of a structure, have direct effects on the perception of sound. These effects include echoes, reverberation, and absorption.
- Environmental effects: wind, temperature, and humidity may affect how sound propagates in the environment. Hills and valleys of the simulated terrain may mask sounds or cause loss of radio communication.
Depending on the needs of the simulation, it may be valuable to simulate these effects.
Page 4-11
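The Doppler shift described above follows the standard moving-source relation for a stationary listener; this snippet is a generic physics sketch, not code from the course.

```python
SPEED_OF_SOUND = 343.0  # m/s

def doppler_shift(freq_hz, source_speed_mps, approaching=True):
    """Observed frequency for a moving source and a stationary listener:
    f_obs = f * c / (c - v), with v positive when the source approaches."""
    v = source_speed_mps if approaching else -source_speed_mps
    return freq_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - v)

# A 440 Hz source approaching at 34.3 m/s (10% of c_sound) is heard
# about 11% higher; receding, correspondingly lower.
print(round(doppler_shift(440.0, 34.3), 1))         # 488.9
print(round(doppler_shift(440.0, 34.3, False), 1))  # 400.0
```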

Haptic/Tactile Output
Haptic displays provide force feedback (e.g., a joystick). Tactile displays simulate the sense of touch (e.g., a glove). They can be divided into three types:
- Movement regulators
- Object interactors
- Event stimulators

Movement Regulators
This type of device is used to restrict or enhance movement in some way based on conditions in the virtual environment. A device with a variable incline can be used to simulate changes in terrain slope, which in turn affect mobility.
Page 4-12

Object Interactors
This type of display presents the feel of objects to the touch and may provide some degree of force feedback associated with the resistance of objects such as buttons. Object interactors can be actual physical objects appropriately positioned in the real world to correspond to a virtual environment counterpart.
Page 4-13

Event Stimulators
This type of device generates a discrete event. An event stimulator might simulate the recoil from firing a weapon or the impact associated with being shot.

Delivery
Haptic and tactile feedback can be delivered using direct or indirect techniques. Direct haptic and tactile techniques utilize pneumatic, hydraulic, electromechanical, or other direct mechanisms to actuate a force or sensation.
Page 4-14

Delivery
- Pneumatic devices use compressed air to apply a force to an object or a surface in direct contact with the user.
- Hydraulic devices use fluidic pressure to generate a force that is then delivered directly to the user.
- Electromechanical displays utilize motors and/or gears to apply pushing, pulling, and resistance forces to the user.

Input Subsystem
The locomotion subsystem translates the motion of the user from the physical environment to the virtual environment. The two essential components of locomotion that must be expressed are direction and velocity.
Page 4-15
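Direction and velocity are the whole interface: given both, the simulation can advance the user's virtual position each frame. A minimal 2D sketch (function names are illustrative, not from the course):

```python
import math

def step(position, heading_rad, speed_mps, dt_s):
    """Advance the user's virtual position from the two essential
    locomotion components: direction (heading) and velocity (speed)."""
    x, y = position
    return (x + speed_mps * dt_s * math.cos(heading_rad),
            y + speed_mps * dt_s * math.sin(heading_rad))

pos = (0.0, 0.0)
pos = step(pos, 0.0, 2.0, 0.5)   # walk along +x at 2 m/s for half a second
print(pos)  # (1.0, 0.0)
```

Every locomotion device discussed below, from keyboard to data glove, ultimately feeds values like `heading_rad` and `speed_mps` into an update of this kind.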

Locomotion Subsystem
- Keyboard/Mouse: the most basic devices for controlling locomotion in a virtual environment. The user interface with these devices is not very intuitive.
- GUI and Touch Screens: a more intuitive approach is to use touch-screen input to a graphical user interface. Intuitive, but still unnatural and abstract.
Page 4-16

- Joystick: designed specifically for controlling locomotion. With the addition of a throttle control, the user can also control velocity in the environment. A joystick is most intuitive when used to control the motion of a vehicle.
- Data Glove: a glove-like device that tracks the position of the hand and fingers. It has been used as a locomotion input device by allowing the user to move in a given direction by pointing.
Page 4-17

- Motion Platforms come in multiple configurations:
  - Uniport
  - Treadport
  - Omni-directional treadmill
  - The Cybersphere
Page 4-18

Motion Capture/Body Tracking
The simulation system must detect the user's actions in order to react with appropriate feedback. This is called tracking the user's motion.
Page 4-19

Tracking
The tracking subsystem should be unencumbering, so as not to influence the user's actions. It should provide reliable, accurate, real-time measurements of the user's position. There are multiple categories: mechanical, electromagnetic, acoustic, optical, and inertial.

Mechanical
Uses the relative positioning of physical components with respect to each other, or to a fixed point, to determine the position of body parts or objects.
- High degree of accuracy, low latency, and high update rate
- Encumbering
Page 4-20

Electromagnetic
The most widely used. It employs an emitter to generate an electromagnetic field; sensors are attached to the tracked objects.
- Both position and orientation can be derived
- Inexpensive, good accuracy, can track numerous objects at a time
- Sensitive to distortion from metallic objects

Acoustic
Uses ultrasonic sound waves to measure the distances between emitters and receivers.
- Some systems offer a high data rate
- Requires a clear line of sight between emitters and receivers
- Not affected by interference from electromagnetic fields or ferromagnetic objects
Page 4-21
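The distance measurement behind acoustic tracking is just time of flight: an ultrasonic pulse travels at the speed of sound, so range follows directly from the measured delay. A sketch (a real tracker combines several such ranges to triangulate a 3D position):

```python
SPEED_OF_SOUND = 343.0  # m/s; in practice calibrated, since it varies with temperature

def emitter_distance(time_of_flight_s):
    """Emitter-to-receiver range from an ultrasonic pulse's time of flight."""
    return SPEED_OF_SOUND * time_of_flight_s

# A 10 ms delay corresponds to roughly 3.43 m of range.
print(round(emitter_distance(0.01), 2))
```

The temperature dependence of the speed of sound is one reason acoustic trackers need calibration, alongside the line-of-sight requirement noted above.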

Optical and Image-Based
The common feature is the use of light to determine position. These systems usually use cameras to track either active (light-emitting) or passive (reflective) markers.
- Only three degrees of freedom per marker (position or orientation)
- Requires a clear line of sight
- No interference problems
Page 4-22

Inertial
Uses small accelerometers on the tracked subject to determine changes in position and orientation.
- Can be unencumbering
- Measures only position and orientation changes, rather than absolute values
- Tends to accumulate error over time

Gesture Recognition
Motion capture and body tracking can be used as a means of communicating commands. Gesture recognition can be used to interact with other entities in the virtual environment.
Page 4-23
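The drift problem noted above follows directly from the math: an inertial tracker must integrate acceleration twice to recover position, so even a tiny constant sensor bias grows quadratically with time. A minimal sketch:

```python
def integrate(accel_samples, dt, v0=0.0, x0=0.0):
    """Double-integrate acceleration samples to position, as an
    inertial tracker must do to recover a change in position."""
    v, x = v0, x0
    for a in accel_samples:
        v += a * dt  # acceleration -> velocity
        x += v * dt  # velocity -> position
    return x

dt = 0.01
n = 1000                                  # 10 seconds of samples
true_pos = integrate([0.0] * n, dt)       # truly stationary user: stays at 0
biased = integrate([0.01] * n, dt)        # tiny 0.01 m/s^2 accelerometer bias
print(true_pos, round(biased, 2))         # bias alone drifts ~0.5 m in 10 s
```

This is why inertial tracking is often fused with an absolute reference (optical or electromagnetic) rather than used alone.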

Voice
Voice can also be used as an input, for example, to command other objects or participants in the simulation. To use voice in the simulation, the system must be able to capture it, transmit it, and interpret it.
Page 4-24