M.Gesture: An Acceleration-Based Gesture Authoring System on Multiple Handheld and Wearable Devices


Ju-Whan Kim, Han-Jong Kim, Tek-Jin Nam
Department of Industrial Design, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea

ABSTRACT
Gesture-based interaction is still underutilized in the mobile context despite the large amount of attention it has received. Using the accelerometers that are widely available in mobile devices, we developed M.Gesture, a software system that supports accelerometer-based gesture authoring on single or multiple mobile devices. The development was based on a formative study that showed users' preferences for subtle, simple motions and synchronized, multi-device gestures. M.Gesture adopts an acceleration data space and interface components based on a mass-spring analogy, and it combines the strengths of both demonstration-based and declarative approaches. Gesture declaration is done by specifying a mass-spring trajectory with planes in the acceleration space. For iterative gesture modification, multi-level feedback is provided as well. The results of evaluative studies show good usability and higher recognition performance than dynamic time warping for simple gesture authoring. Later, we discuss the benefits of applying a physical metaphor and a hybrid approach.

Author Keywords
Gesture authoring; Multi-device gesture; Acceleration space; Mass-spring visualization; Hybrid approach

ACM Classification Keywords
H.5.2. Information interfaces and presentation (e.g., HCI): User Interfaces

INTRODUCTION
Gesture-based interaction using mobile (including handheld and wearable) devices has become popular in recent years [4-6, 8, 10]. Because it relates to the user's body parts and in-use contexts, it offers personal and intuitive interactions. Thus, it is considered a promising interaction technique for mobile devices with small or no display.

However, there are several limitations to existing gesture-based interaction on mobile devices. First, it is difficult to define one standard set of gestures. Considerations should be given to individual users and their usage contexts. For example, the same device is often worn and used on different body parts by different users. Thus, the data from the same sensor may not be consistent across various users. Second, appropriate or preferred gesture interaction sets can change with the context of use. People have diverse combinations of devices. A preferred gesture for a function can vary depending on the user's task, pose, or situation. Another issue is that the memorability of predefined gestures is lower than that of user-defined ones [26].

[Figure 1. The user interface of M.Gesture: hardware buttons (up/down), toolbar (trackball, starting orientation, hurdle, passing sequence), workspace, and body parts (left hand, head, right hand).]
One way to overcome these limitations is to adopt user customization. This approach has many benefits. Users can create their own gesture commands for their own situations, and they will be better able to recall self-defined gestures than predefined ones [26]. Nevertheless, gesture authoring is difficult to support for end users, as the authoring interface most appropriate for the mobile context has not been fully investigated. Current systems that support cross-device applications mainly focus on development or prototyping [7, 13] rather than end-user customization.

Among the recognition techniques for gesture authoring, we focused on acceleration-based sensing. Accelerometers are embedded in most mobile devices since they are cheap and power-efficient. Thus, using accelerometer-based gestures does not require an additional sensor device. Although acceleration-based sensing shows potential for gesture authoring on mobile devices, many interface issues need to be resolved to support easy and efficient gesture authoring.

As an accelerometer does not measure exact displacement [3], an appropriate set of gesture vocabularies is difficult to define and to reuse. Additionally, the concept of acceleration is unfamiliar to ordinary people. To be usable by end users, the gap between the average user's mental model of gestures and acceleration-based gestures should be bridged.

In this paper, we present M.Gesture (Figure 1), an accelerometer-based gesture authoring system that allows end users to compose gestures for use among various combinations of mobile devices. From the literature review, we identified the limitations of existing gesture authoring systems as well as opportunities for an alternative approach. We conducted a formative study to derive design implications and a general understanding of users' gesture authoring with mobile devices. Then, we developed the M.Gesture system based on the findings. The proposed system is characterized by spatial authoring and interface components based on a physical metaphor to visualize acceleration data on single or multiple mobile devices. It offers multi-level feedback for gesture recognition. An evaluation was conducted to examine usability and recognition accuracy.

This paper makes three main contributions. First, a novel gesture authoring approach based on both demonstration and declaration is proposed for end users. Second, a physical-metaphor-based sensor visualization (mass-spring) is proposed to exhibit the behavior and constraints of an accelerometer. Finally, our evaluations demonstrate the usability and the accuracy of the M.Gesture system.

RELATED WORK
Related work on gesture authoring ranges from development tools to end-user customization. We review the three main approaches: programming by demonstration, declarative approaches, and hybrid ones. In addition, we review the literature on the spatial visualization of gestures.

Programming by Demonstration
The Programming-by-Demonstration (PBD) [9] approach defines a set of gestures from a user's demonstrations and classifies the input gesture. The only thing a user has to do with PBD is demonstrate the gesture. Thus, the gulf of execution [14, 27] is small. Some algorithms [21, 31, 34] require only one demonstration, so a user can easily define a gesture. However, the PBD approach is an under-the-hood system [11]. It has a large gulf of evaluation [14, 27] because it is hard to find where an error occurs. A user may have little idea about the sensor characteristics and the gesture design space. This can cause frustration for a user who tries to define a gesture outside the design space or tries to use unrecognizable features.

PBD-based gesture definition is often difficult with an accelerometer because it has no means to reconstruct the 3-D geometry of a gesture [3]. Gestures with the same geometry may produce sensor data that is inconsistent with an accelerometer. Thus, it is difficult for a user to know where an error occurs, e.g., among sensor limitations, unsupported features, the algorithm, the training, or the gesture performance.

Declarative Approach
With the declarative approach, a user composes a desired gesture using a high-level language [15-19, 32]. Because the gesture is explicitly declared, a user can easily analyze where an error occurs. At the same time, the user can understand the gesture design space and explore alternatives. However, the declarative approach requires much effort to define a gesture, and the gulf of execution is large.
A user needs to translate her gesture design into declaration logic. The entry barrier is higher than with PBD due to the learning burden. Because of these limitations, the declarative approach is mostly seen in developers' tools [15, 16, 32]. To the best of our knowledge, there has been no gesture declaration scheme tailored for accelerometers. Because ordinary users are not familiar with acceleration, difficulties exist in translating a gesture to acceleration, remembering a set of gestures, and correctly performing it later.

Hybrid Approach
In order to minimize the limitations of both of these approaches, some researchers have proposed a hybrid approach [12, 24]. For example, Gesture Studio [24] supports multi-touch gesture authoring based on both declaration and demonstration. A basic gesture is defined by a demonstration, and a user can combine basic gestures to create a compound gesture. Since each approach is used independently for defining different features, limitations still exist. For example, the geometry of a finger is only defined by demonstration and cannot be modified by declaration. There can be a way to synergize both approaches in defining a single feature.

Visualization of Gesture Interaction
One way to support user-friendly gesture authoring is to visualize gestures. MAGIC [1] and Exemplar [11] visualize sensor value streams on a time-series graph indicating value changes over time. Often this method does not fully explain the association between the axes. EventHurdle [17] and Hotspotizer [2] allow a user to define a gesture or trajectory directly in a space. As the gesture trajectory becomes visible in the gesture space, the gesture definition can be modified in a WYSIWYG manner. Visual gesture authoring thus improves understandability and modifiability. However, EventHurdle recognizes a gesture only by a series of intersections with line segments, which may produce a high rate of false positives. The Hotspotizer approach cannot be directly applied to acceleration-based gesture sensing because it is based on a depth-sensing camera.

Our work is differentiated from the previous works in several aspects, although it shares common ground with some of the visual and hybrid approaches. Firstly, we use a unique, body-centric
gesture space for the visualization and definition of acceleration data. This space is expected to guide an ordinary user to understand the behavior and constraints of an accelerometer. Secondly, we try to minimize the weaknesses of both approaches. PBD is hard to understand and makes it difficult to specify gestures. The declarative approach requires considerable time and effort to define gestures. Our mixed approach attempts to combine only the strengths of both. Lastly, M.Gesture can be used on various mobile devices and in contexts with multiple devices.

FORMATIVE STUDY
To design our system, we needed to understand how users perceive and define accelerometer-based gestures with multiple mobile devices. The study's goal was to elicit gesture instances in a multi-device environment and to identify frequent and preferred gesture patterns. We chose the key features of our gesture authoring system based on the lessons from the formative study. We use the term gesture instance for each gesture motion that users perform. A gesture means a collection of gesture instances that share the same combination of devices and similar trajectories per device. A gesture pattern refers to a common feature across multiple gestures.

Study Setup
Our study design was informed by previous gesture elicitation studies [30, 33] and focused on multi-device environments. We collected gesture instances from 19 potential users (undergraduate and graduate students at the researchers' institute, years old, six female). Participants were asked to design and perform gestures for general gesture commands. They were instructed to design gestures that they would be likely to use in their everyday lives. To exclude gestures that would be unrecognizable to an accelerometer (e.g., those based on two devices' relative position or the relative position of a device to the user), we explained the characteristics of an accelerometer to the participants. The participants were asked to assume three postures: standing, sitting, and leaning back on a chair. We gave a set of 22 basic commands, including general computing commands (yes, no, cancel, quit, left, right, up, and down), media control commands (play, pause, stop, previous, next, volume up, volume down, and mute), and browsing commands (back, forward, previous tab, next tab, scroll up, and scroll down). The basic commands were meant to help the participants think about various usage contexts and situations. The participants were encouraged to design extra gestures beyond the 22 basic commands. For this, they were presented with a sheet containing a matrix of three postures and 22 commands plus three blank commands.

Three accelerometer-embedded devices were used: a headband for the head, a wrist device for the left arm, and a smartphone for the right hand. Only the smartphone had a display. We chose this combination because the head and hands are suitable body parts for gesture interfaces and worked well with currently available mobile devices. Each device recorded tri-axial accelerometer values at 30 Hz during the study. Throughout, we video-recorded the entire session to capture the process of the participants' gesture designs and demonstrations. Analyzing both results, we found gesture attributes that the participants used for differentiating instances (e.g., combination of devices, gesture directions, and starting orientation).

Findings
We collected a total of 590 gesture instances (an average of 31.1 per person).
Besides the basic commands, 14 additional commands were defined (go to home screen, call a person, reject a call, take a picture, close all apps, go to the previous page, go to the next page, turn off the screen, play a random track, fast forward, reverse, mute a call, put on vibrate mode, and execute a favorite app). A total of 101 gestures were rendered. Of these gestures, 27 used the headband device, 49 used the wrist device, and 47 used the smartphone. There were 22 gestures that involved a combination of two devices, but none used all three devices at the same time (Table 1).

[Table 1. Gesture patterns from the formative study: counts of primitive movements (translate, rotate, halt, shake) and composite movements (circular trajectory, rotate-and-shake) per device (phone, wrist device, head device).]

We found several tendencies in the motion geometry of a device. Most of the collected gestures were subtle and simple. Participants preferred single-axis motions. The majority of the motions were linear (e.g., swipe, flick, or shake) or circular motions. Participants tried to minimize the length of motions and hardly changed the direction of a motion. Our findings verify that the results of Ruiz et al.'s work [30] are also applicable to mobile devices and multi-device environments.

There were several findings in terms of gesture design with combinations of devices. First, the participants were flexible regarding the choice of device or body part depending on the context. For example, in a private space, P2 defined a rotate to the right with a smartphone for the media control command next track. He thought that the smartphone was the media device and that it was more intuitive to move the media device directly. But the same participant mapped the same command in a public space to a subtle swipe to the right with a wrist-worn device. The gesture was designed not to draw people's attention in a public space. For P2, the selection of device was less important than the context of use.

We also found a technical issue with accelerometer-based gesture sensing and authoring. Depending on the device and
the location where it is worn, the sensor data from identical motions could vary between devices. Acceleration on the palm is generally greater than on the wrist during the same motion. Even with the same device, accelerometer data may vary based on the orientation at which the device is held or attached to the body. This implies that a gesture definition cannot simply be transferred to other devices. Another issue was that, in order to perform a gesture properly, a user must remember not only the gesture motion but also the worn location and orientation of each device.

When the participants made gestures using more than one device, they used simultaneous motions or motions that ended at the same time (20 out of 22). In most cases of two-handed gestures, the trajectories of both motions were identical or symmetrical to each other. Participants avoided unsynchronized movements. This could be because unsynchronized motions require more cognitive attention to perform and memorize. This implies that unsynchronized motions would not be natural for multi-device gesture authoring.

M.GESTURE SYSTEM
We developed the M.Gesture system based on the findings from the formative study and the review of related work. M.Gesture is a software system supporting accelerometer-based gesture authoring on single or multiple mobile devices. It is characterized by a body-centric workspace and acceleration spaces based on a mass-spring behavior analogy designed to visualize acceleration data. The M.Gesture system supports the handy authoring of multi-body gestures using accelerometer data from multiple mobile devices.

Authoring Process
The authoring process with M.Gesture can be divided into three stages: planning, defining, and testing. In the planning stage, a user explores and analyzes the desired gesture. The behavior of the accelerometer sensor value is presented while the user is performing a gesture. Because an accelerometer cannot capture the full 3-D geometry of a gesture, and because its behavior is unfamiliar to an ordinary user, the user cannot easily imagine the sensor data a desired gesture will produce. In order to help a user understand the behavior and constraints of an accelerometer as well as how to utilize the sensor's capability, the sensor data is visualized with a physical mass-spring metaphor, which behaves similarly to an accelerometer.

In the defining stage, the gesture is defined graphically. We adopted the concept of the hurdle [17] because the hurdle-based visual definition of a gesture trajectory allows for drawing-like, easy-to-read authoring. The hurdle scheme divides a gesture trajectory into small segments so that the user can control each segment in detail. Moreover, it is highly modifiable, like other declaration-based systems. M.Gesture also incorporates features of demonstration-based gesture definition, which has benefits in terms of speed and cost. The user can quickly define a gesture with a demonstration and partially specify it if necessary.

In the testing stage, the system provides feedback about whether the performed gesture is well recognized. The system gives multi-level recognition feedback in real time, from the component level to the full-gesture level. The user can easily check whether the performed gesture matches the defined gesture and make changes where necessary.

We designed the sensor data, authoring information, and recognition feedback to be displayed in the same space without any mode change.
This is because those pieces of information are closely related to each other. A user defines a gesture trajectory based on sensor behavior, and recognition feedback depends on the defined gesture design and the performed sensor data. Another rationale is that the stages of planning, defining, and testing often overlap when a user iteratively modifies and tests a gesture design.

Components of the M.Gesture System
The M.Gesture system runs on a host device with a display. The host device offers the gesture-authoring interface, stores gesture definitions, and recognizes gestures. Sensor devices sample accelerometer data and send it to the host device. In the following, we explain the components of M.Gesture that a user uses on the host device.

Acceleration Space
The acceleration space is where the user observes and defines gestures, visualized in a body-centric workspace. An acceleration space is created by selecting the involved body part and the corresponding device. For example, to create a gesture using a smartphone in the right hand, the user selects the right hand and the smartphone, and the system creates an acceleration space for it. Currently, M.Gesture supports handheld, wrist-worn, and head-worn devices because they are the most popular mobile devices in the current market. Our system can easily be expanded to include other devices worn on other body parts.

Figure 2 shows an example 3-D acceleration space that visualizes the sensor behavior of a device. The tri-axial accelerometer value is mapped to and visualized at a 3-D coordinate in the acceleration space. Interface components for gesture definition are placed and organized directly in the acceleration space to define a trajectory. Acceleration spaces are placed in a body-centric workspace inspired by body-centric interaction [6]. Body parts represent the worn locations of devices, and they are placed in relation to the user's body (Figure 1, workspace and body part).

Gestures with two or more devices can be defined in multiple acceleration spaces. Each space defines the gesture trajectory for the corresponding device. The one-to-one coupling of an acceleration space to a device is intended to allow users to explicitly specify any combination they want. Also, we only considered simultaneous gestures with multiple devices. This decision follows from our formative study, which demonstrated that most multi-device gestures were synchronized.
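As an illustration of this one-to-one coupling, the sketch below models the body-centric workspace as a collection of acceleration spaces, one per (body part, device) pair. The class and field names are our own assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Sample = Tuple[float, float, float]  # one tri-axial accelerometer reading (x, y, z)


@dataclass
class AccelerationSpace:
    """A 3-D space that visualizes and stores the acceleration data of one device."""
    body_part: str   # e.g., "right hand", "left wrist", "head"
    device_id: str   # e.g., an identifier of the paired sensor device
    trajectory: List[Sample] = field(default_factory=list)  # samples streamed at ~30 Hz

    def append(self, sample: Sample) -> None:
        self.trajectory.append(sample)


@dataclass
class Workspace:
    """Body-centric workspace on the host device; one space per involved device."""
    spaces: List[AccelerationSpace] = field(default_factory=list)

    def add_space(self, body_part: str, device_id: str) -> AccelerationSpace:
        space = AccelerationSpace(body_part, device_id)
        self.spaces.append(space)
        return space


# Example: a two-device gesture uses two acceleration spaces.
ws = Workspace()
ws.add_space("right hand", "smartphone")
ws.add_space("left wrist", "wrist-device")
```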

[Figure 2. Acceleration space and mass-spring visualization. The position of the mass indicates the acceleration of a device.]
[Figure 3. Visual components: starting orientation (O), hurdle (H), and passing sequence (S).]

Mass-Spring Visualization
An accelerometer gives information about various aspects of device motion, such as translation, rotation, and orientation. However, it is hard to reconstruct the 3-D geometry in physical space because the acceleration is only a derivative of the device's velocity. Also, acceleration or inertia is an unfamiliar concept to an ordinary user, who may find it hard to determine which gestures can be recognized with an accelerometer. To address this issue, M.Gesture helps users understand the behavior and the constraints of an accelerometer before authoring a gesture.

As a physical metaphor for visualizing the behavior of an accelerometer, we use a mass-spring: a mass on top of a spring that is attached to the device but does not oscillate. A mass-spring's position relative to a device changes according to the device's inertial force. M.Gesture visualizes a virtual mass-spring attached to a device that conveys the intensity and direction of the inertial force (Figure 2.1). The inertial force is affected by gravity (Figure 2.2) and acceleration (Figure 2.3). Since the acceleration data is converted to a mass-spring coordinate as the gesture is performed, additional computation is not necessary.

The early design of our system directly plotted the sensor value without the metaphor. The preliminary study revealed that many of the participants confused this visualization with the direct trajectory of the device. Many of them could not perceive the meaning of the representation and could not learn the sensor's behavior even after they were informed about the concept of acceleration. The mass-spring metaphor was introduced to represent invisible acceleration at a visible coordinate in a space. We chose the mass-spring metaphor because it shows physical behavior similar to that of an accelerometer, which helps ordinary users to imagine the object without difficulty. The association helps users understand the sensor characteristics first, and then the design space of accelerometer-based gestures. Though the removal of oscillation makes it differ from the physical phenomenon, it is still acceptable to users.

Hybrid Gesture Definition
A device's motion is visualized as a trajectory in an acceleration space. Defining a gesture in M.Gesture requires setting the tolerance of a trajectory in an acceleration space. In order to take advantage of both the PBD and declarative approaches, we adopted hurdle-based declaration [17] and user demonstration for gesture definition. PBD offers quick and easy gesture authoring, while a declarative approach allows for the specification of the gesture trajectory in detail.

M.Gesture uses a user's demonstration as a reference trajectory. The distance between a performed gesture and the reference gesture is calculated by dynamic time warping (DTW) [31]. The reference trajectory is visualized as a 3-D curve in an acceleration space. To modify the reference gesture, a user simply re-demonstrates it. M.Gesture also allows a user to specify the tolerance of a gesture manually. The components for this include a starting orientation component, a hurdle component, and a passing sequence. Starting orientation and hurdle components are geometrically located in acceleration spaces. They work as necessary conditions in gesture recognition.

The starting orientation component (Figure 3.O) defines the starting orientation of a device for a desired gesture. In the formative study, we found that many users distinguish gestures by the starting orientation of devices. This component is a means for a user to declare the starting orientation explicitly. It is determined by an acceleration coordinate and a radius, and is visualized as an outline of a mass-spring located at that coordinate together with a circle of that radius (Figure 3.O). M.Gesture's recognition module tests whether the very first acceleration of an input trajectory is located within the radius. A large radius allows for a wide range of starting orientations, while a small radius only allows for precise ones.

The hurdle component (Figure 3.H) is used to define a plane in an acceleration space through which an input gesture must travel. The area of a hurdle corresponds to the tolerance of a gesture; to pass a narrow hurdle, the user must perform the exact gesture. The hurdle is visualized as a line segment in an acceleration space, and hurdle-crossing by an input gesture is visually apparent as a line-segment crossing. A user can modify a hurdle by moving the positions of its two end points. M.Gesture assumes that a hurdle has infinite depth along the z-axis of the current viewing angle of an acceleration space, so it is shown as a line segment in 2-D space. This simplification allows for the handy authoring and modification of hurdle planes. Because the participants in the formative study preferred axial or linear motions, we used the simplified hurdle.

The passing sequence (Figure 3.S) is the order of hurdles through which an input trajectory should pass for a gesture to be recognized. The passing sequence is visualized as a series of arrows (Figure 3.S). The gesture recognizer of M.Gesture tests whether an input gesture trajectory crosses every hurdle in the right sequence.

Gesture modification in M.Gesture is as easy as manipulating a 2-D object on a touch screen. A user can explicitly include or exclude a specific gesture trajectory by choosing whether or not to place hurdles that cross the input gesture. A demonstration-based definition makes up for unspecified parts. It is the user's choice either to put in more effort to specify the trajectory in detail or to define a few parts briefly and leave the rest unspecified.

Recognition Feedback
M.Gesture offers multi-level recognition feedback, including the component level, the trajectory level, and the full level. Without any mode changes, M.Gesture generates real-time feedback whenever a defined gesture is recognized. At the component level, when an accelerometer value stays within a starting orientation component's radius, the starting orientation component indicates that the condition is satisfied by highlighting it for a short time. For hurdle components, the relevant hurdle component and passing sequence component are highlighted when an input gesture goes through the hurdle plane. The component-level feedback lets a user know which component needs modification. The trajectory-level feedback is given by highlighting the acceleration space when an input gesture satisfies a trajectory definition all the way to the end. It lets a user know which device's trajectory definition needs editing. The full-level feedback is given when an input gesture satisfies the trajectory definitions of all devices. It rings a short ding-dong sound; the sound feedback is for situations where the user is unable to see the screen. Recognition feedback is offered together with the sensor visualization. It allows users to judge both the gesture design and the performance. Depending on the feedback, a user can edit the gesture definition or revise his or her gesture performance.

Implementation
This section illustrates how the gesture-recognition algorithm works when a user makes a gesture motion. First, the recognizer cuts the incoming sensor data stream into a short acceleration trajectory. The trimmed input trajectory is examined by the orientation test and the hurdle test, which remove candidate gestures that fail them. Then, the recognizer calculates the distances from the input trajectory to each remaining candidate gesture. The input gesture is regarded as the candidate gesture with the shortest distance from the input. Finally, if the shortest distance is greater than the threshold, the input gesture is regarded as noise.

The recognizer trims the incoming sensor stream between pauses. If a sensor value remains virtually the same for more than 100 ms, the system regards it as a pause. If the trimmed motion trajectory (which we call the input trajectory) is longer than 8,000 ms, it is regarded as a passive motion. The time parameters were informed by the formative study, where the longest gesture was 6,044 ms except for one outlier (μ = 2,394 ms, σ = 999 ms).
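The stream-trimming step just described can be sketched as follows. The 100 ms pause and 8,000 ms limits come from the text above, while the stillness threshold, the sampling layout, and the function names are our own assumptions.

```python
from typing import List, Sequence, Tuple

Sample = Tuple[float, float, float]

SAMPLE_PERIOD_MS = 1000 / 30   # sensor devices stream at 30 Hz
PAUSE_MS = 100                 # a pause: value virtually unchanged for > 100 ms
MAX_GESTURE_MS = 8000          # longer trajectories are treated as passive motion
STILL_EPS = 0.2                # assumed threshold (m/s^2) for "virtually the same"


def is_still(window: Sequence[Sample]) -> bool:
    """True if every axis stays within STILL_EPS of its first value in the window."""
    x0, y0, z0 = window[0]
    return all(abs(x - x0) < STILL_EPS and abs(y - y0) < STILL_EPS and abs(z - z0) < STILL_EPS
               for x, y, z in window)


def trim_input_trajectories(stream: Sequence[Sample]) -> List[List[Sample]]:
    """Cut the incoming sample stream into input trajectories bounded by pauses."""
    pause_len = max(1, int(PAUSE_MS / SAMPLE_PERIOD_MS))
    max_len = int(MAX_GESTURE_MS / SAMPLE_PERIOD_MS)
    trajectories: List[List[Sample]] = []
    current: List[Sample] = []
    for i, sample in enumerate(stream):
        window = stream[max(0, i - pause_len + 1): i + 1]
        if len(window) == pause_len and is_still(window):
            # A pause: flush the accumulated motion segment, discarding over-long ones.
            if pause_len < len(current) <= max_len:
                trajectories.append(current)
            current = []
        else:
            current.append(sample)
    if pause_len < len(current) <= max_len:
        trajectories.append(current)
    return trajectories
```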
Next, the input trajectory goes to the orientation test, which examines whether the starting orientation of the trimmed trajectory matches each candidate gesture definition. If the distance between the starting orientations of the input trajectory and a candidate gesture is longer than the candidate's radius, then that gesture is removed from the list of candidates. The hurdle test examines whether the input trajectory matches each candidate gesture's hurdle design. It only tests geometric intersections of the input trajectory with the hurdles in the right sequence. If the input trajectory does not pass the hurdles of a candidate gesture in the right sequence, then that gesture is removed from the list of candidates.

After the two tests, the recognizer calculates the distance between the input trajectory and each remaining candidate gesture using a modified DTW. Our algorithm segments the two trajectories by hurdle crossings and sums the sub-distances calculated by DTW [31] for corresponding segments. Because the segmentation can be done in more than one way, our recognizer finds the shortest sum. The sum is normalized by the geometric average of the two gesture lengths. For example, Figure 4 shows two possible trajectory segmentations: the reference R is split by its hurdle crossing into R1 and R2, while the input I crosses the hurdle twice, yielding segments Ia, Ib, and Ic. The final distance between R and I is then the smaller of DTW(R1, Ia) + DTW(R2, Ib+Ic) and DTW(R1, Ia+Ib) + DTW(R2, Ic), normalized by the geometric average of the lengths of R and I.

[Figure 4. Trajectories R and I that cross a hurdle.]

The candidate gesture with the shortest distance is the recognized gesture. Any input trajectory whose shortest distance is bigger than 10 is considered a passive motion or is not recognized. The threshold of 10 was empirically determined. For multi-device gestures, the declaration-based tests (i.e., the orientation test and the hurdle test) work on AND logic: all of the devices are required to satisfy each test for the gesture to be recognized. The demonstration-based distance takes the geometric mean of the distances of all devices.
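The sketch below illustrates the recognition path just described (orientation test, hurdle test, and the modified DTW distance), with simplifications we introduce ourselves: hurdles are treated as 2-D line segments in the projected view, only the first hurdle-crossing segmentation is used rather than searching all segmentations for the shortest sum, and all names are illustrative.

```python
import math
from typing import List, Optional, Sequence, Tuple

Vec3 = Tuple[float, float, float]
Vec2 = Tuple[float, float]


def dtw_distance(a: Sequence[Vec3], b: Sequence[Vec3]) -> float:
    """Plain dynamic time warping over 3-D acceleration samples (Euclidean cost)."""
    n, m = len(a), len(b)
    inf = float("inf")
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]


def orientation_test(trajectory: Sequence[Vec3], center: Vec3, radius: float) -> bool:
    """Starting orientation test: the very first sample must lie within the radius."""
    return math.dist(trajectory[0], center) <= radius


def segments_cross(p1: Vec2, p2: Vec2, q1: Vec2, q2: Vec2) -> bool:
    """2-D segment intersection used for hurdle crossing (hurdles have infinite depth)."""
    def orient(a: Vec2, b: Vec2, c: Vec2) -> float:
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0


def hurdle_crossings(traj2d: Sequence[Vec2],
                     hurdles: Sequence[Tuple[Vec2, Vec2]]) -> Optional[List[int]]:
    """Indices at which the projected trajectory crosses each hurdle in order,
    or None if the passing sequence is violated (the hurdle test fails)."""
    indices: List[int] = []
    start = 0
    for h1, h2 in hurdles:
        for i in range(start, len(traj2d) - 1):
            if segments_cross(traj2d[i], traj2d[i + 1], h1, h2):
                indices.append(i + 1)
                start = i + 1
                break
        else:
            return None
    return indices


def modified_dtw(reference: Sequence[Vec3], ref_cuts: List[int],
                 inp: Sequence[Vec3], inp_cuts: List[int]) -> float:
    """Sum DTW distances of hurdle-delimited segments, normalized by the geometric
    mean of the trajectory lengths (first-crossing segmentation only in this sketch)."""
    def segments(traj, cuts):
        bounds = [0] + list(cuts) + [len(traj)]
        return [traj[bounds[k]:bounds[k + 1]] for k in range(len(bounds) - 1)]
    total = sum(dtw_distance(r, s) for r, s in zip(segments(reference, ref_cuts),
                                                   segments(inp, inp_cuts)))
    return total / math.sqrt(len(reference) * len(inp))


def multi_device_distance(per_device_distances: Sequence[float]) -> float:
    """Demonstration-based distance for multi-device gestures: geometric mean."""
    return math.prod(per_device_distances) ** (1.0 / len(per_device_distances))
```

An input trajectory would then be classified as the remaining candidate with the smallest normalized distance, or treated as a passive motion if that distance exceeds the empirically determined threshold of 10.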

USAGE SCENARIO
Swipe-Gesture Authoring Using a Smartphone
We illustrate in detail how a user can define and test a swipe gesture using the M.Gesture system. The first task is to choose the combination of devices. After running the M.Gesture application on a smartphone, the user selects the right-hand icon (Figure 1.1) and chooses her smartphone (Figure 5.1). Then, an acceleration space is created.

The next task is to record the gesture. The user tries the desired gesture and observes how it appears as the acceleration trajectory. The user pushes the down button to start recording (Figure 5.2). She performs a right-handed swipe gesture (Figure 5.3) and pushes the down button again to stop recording (Figure 5.4). The recorded gesture trajectory becomes the reference gesture for recognition.

The following phase is when the user defines a gesture trajectory using a starting orientation component and hurdle components. The user chooses the appropriate viewing angle after observing the trajectory from various angles. She can pan and zoom the acceleration space with the two-finger touch interface (Figure 5.5). With a trackball tool (Figure 5.6), she can rotate its viewing angle (Figure 5.7). After the user determines the viewing angle, she selects the starting orientation tool on the toolbar (Figure 5.8) and pushes the down button to set the starting orientation component while holding the device in the desired orientation (Figure 5.9). She then switches to the hurdle tool (Figure 5.10) and places two hurdles (Figure 5.11 & 5.12). The user switches to the passing sequence tool (Figure 5.13) and defines the sequence (Figure 5.14). The sequence is defined as the order through which the user's touch stroke passes.

Finally, in the test phase, the user verifies the gesture definition by performing the swipe gesture several times. The acceleration space becomes highlighted when the gesture is correctly recognized (Figure 5.15). If the user needs to modify the gesture, he or she can come back to the recording or the declaring phases for further modifications.

[Figure 5. Swipe-gesture authoring procedure.]

Two-Handed, Swipe-Gesture Authoring
M.Gesture can be used with more than one device. In the setup phase, the user can create as many acceleration spaces as the number of devices she wants. Figure 6 shows an example in which the user defines trajectories for both watch-type and smartphone-type devices. In the testing phase, the user performs two linked gestures while checking the multi-level recognition feedback from the two devices.

[Figure 6. M.Gesture declaration of a two-handed swipe gesture: left hand with a wrist device, right hand with a smartphone.]

USABILITY EVALUATION
We conducted a lab-based experiment to evaluate how easy it is to use M.Gesture and to understand the concepts behind the system. We presented three gesture-authoring tasks to participants and measured their success rates and completion times. Afterward, surveys and interviews about usability and understandability were carried out.

Study Setting
Twenty university students participated in the study (undergraduate and graduate students, years old; three were female; six participants had prior experience with wearable devices). A smartphone and a wrist device were given to the participants. We wanted to compare the learnability of the mass-spring metaphor with and without a real mass-spring (Figure 2, right). The experiment had a between-subject design. The participants were divided into two groups of ten. Physical mass-springs were attached only to the devices that were given to the treatment group.

The participants were asked to compose the three gestures in Figure 7. Each task was designed to examine whether the participant could understand and apply the main declaration components: starting orientation, hurdle, and multi-acceleration-space.

[Figure 7. Three gesture-authoring tasks: task 1, face down; task 2, shake; task 3, translate both devices outward.]

Procedure
The whole experiment took about 40 minutes. We took 5 minutes to introduce the M.Gesture system, the mass-spring metaphor, and the basic interfaces. Then the participants carried out the three tasks for 25 minutes. Before each task, we described the concept of the key declaration component for that task. We explained the interface and demonstrated an example gesture-creation process. The participants were asked to compose another example gesture and were free to practice the interface by themselves. After practicing, the participants conducted the gesture-authoring tasks.

We measured the time to complete the initial gesture definition (t1) and the time until the performed gesture was recognized (t2). t1 is the approximate time for one cycle from the setup phase to the declaration phase. t2 indicates the time expected to complete authoring one gesture, including iterative testing
and modification. We also measured the success rate. Success was determined only when t2 was less than 300 sec. If a participant missed an essential step and could not make any progress for over one minute, he or she was informed of the missed step. However, we did not give direct solutions.

After the task session, we conducted a survey and an interview for 15 minutes. The survey covered the five aspects listed in Figure 9, including the understandability of the basic concept as well as usability. During the interview, we asked for the reasons for their ratings. We also asked about general usability issues and solicited detailed feedback about the system and the experience of gesture authoring.

Apparatus
A smartphone and a wearable device were given to the participants during the experiment. The smartphone was a Nexus 5 (LGE), and the wearable device was a custom-made wrist-worn device in which an accelerometer was embedded. The mass was a hollow ball (4 cm in diameter and weighing 3 g), which was designed to be light and easily seen. The gesture-authoring software application of M.Gesture was developed on the Android platform. The two devices communicated via the Bluetooth Serial Port Profile (SPP), and the sampling rate of the sensor data was 30 Hz.

Results
Gesture-Authoring Performance
The participants accomplished their tasks in a total of 59 out of 60 task trials. The success rates for tasks 1 and 2 were 100%, and for task 3 the success rate was 95%. Task 1, task 2, and task 3 took 12, 78, and 96 sec on average, respectively (Figure 8). Tasks 2 and 3 took longer than task 1 because they required additional time-consuming processes such as recording the gesture and placing hurdles and passing sequences, while task 1 only required the participants to declare the starting orientation. There was no significant difference in performance between the treatment group and the control group. Most participants could successfully understand the core functions of M.Gesture. The participants could create gestures with various combinations of devices. They could compose gestures using demonstrative and declarative components in the system.

[Figure 8. Average task-completion time per task: time to define the first gesture and time until the gesture was recognized.]

Understandability
[Figure 9. User assessment of the easiness of M.Gesture (5-point Likert scale from 1 = very hard to 5 = very easy, averages in parentheses): understanding the concept (4.50), learning (4.30), using the interface (4.00), understanding the visual representation (3.80), overall easiness (3.85).]

The ease of understanding the concept (μ = 4.50) and the ease of learning (4.30) were positively rated (Figure 9). The participants noted that the concept of M.Gesture's mass-spring metaphor and its visualization was easy to understand and learn. Between the participant groups with and without a real mass-spring, a significant difference existed only in the ease of understanding (an average of 4.80 with the real mass-spring and 4.20 without one, t(18) = -3.18, p < 0.05). The real mass-spring seems to help in understanding the concept of the acceleration space. Because it did not make a difference in performance, the real mass-spring can be removed afterwards. The participants with the real one mentioned that it was intuitive to understand the trajectory in 3-D space. P1 and P15 commented that the real mass-spring made it intuitive to understand the starting orientation component.
P15 mentioned that, although the mass-spring moves differently with the sensor value, it is still helpful.

Usability
The ease of using the interface (μ = 4.00), the ease of understanding the visual representation (3.80), and the overall easiness (3.85) were also positively rated (Figure 9). There was no significant difference between the treatment group and the control group. P3 mentioned that visualizing the acceleration trajectory was obvious. She also mentioned that using the hardware button and toolbar was clear and easy to learn. P7 said that the acceleration spaces were clearly distinguished when composing a multi-device gesture. P15 liked the hurdle scheme for specifying the trajectory range.

During the interviews, the participants mentioned several usability issues with M.Gesture's interfaces and visualization. One participant complained that the visual feedback can become invisible when the gesture motion involves rotating the screen. Another issue was that the participants were often confused when the viewing angle on the host device did not match the physical orientation. For example, when a user is looking at the right-side view of a smartphone's acceleration space, the smartphone on the display faces left while the real smartphone faces the user. The discrepancy between the real orientation and the displayed orientation annoyed some participants. The participants generally agreed that the graphics of the devices and their body parts inside the acceleration spaces were helpful in realizing the context. For example, the right-hand graphic in Figure 6 let users realize that the acceleration space is about a
smartphone in the right hand and that the acceleration trajectory moves along the xy-plane of the device. On the other hand, we observed that some participants were confused by the left-side and right-side views because they looked similar.

PERFORMANCE EVALUATION
We compared the recognition accuracy and processing speed of M.Gesture to those of DTW [31]. We believe that the comparison to DTW is appropriate because it has recently been used for gesture recognition in HCI [11, 22] and is also the base algorithm of M.Gesture. We defined 12 gestures from Ruiz et al.'s research [30], excluding duplicates (Table 2). This gesture set was chosen because it was verified in that study and because our formative study found that users preferred single-device gestures. We set the starting orientation of the first eight gestures in Table 2 to be degrees along the x-axis from the ground and that of the last four gestures in Table 2 to be parallel to the ground. We assumed general orientations while interacting with a phone. We defined the gestures as inclusively as possible: we made the starting orientation component's radius large and placed long hurdles. We also demonstrated the reference trajectory for training. DTW and M.Gesture shared the same reference.

We collected the sensor data stream while 20 participants performed the gesture set five times each. We tested the accuracy and processing time of each instance using M.Gesture and DTW with raw sensor values. Twenty undergraduate and graduate students (22-33 years old; ten female) entered gestures. Participants were given a Nexus 5 with a gesture-collection app. We instructed the participants to pay attention to the starting orientations. The participants pushed hardware buttons to start and stop recording. We demonstrated a sample gesture recording and let the participants follow it. We observed the gesture performances and asked a participant to demonstrate a gesture again if the starting orientation strayed too much.

Both algorithms calculate the distances between the input gesture and the defined gestures. Then, the input gesture is classified as the gesture with the smallest distance among the 12 gestures. The processing time was measured on a desktop computer (Intel quad-core Core i at 3.30 GHz, single thread).

Result
M.Gesture yielded 88.3% accuracy and took 299 μs on average (Table 2). It was 4.3 percentage points more accurate and 26% faster than DTW. To summarize, the accuracy and speed of M.Gesture were similar to or better than those of DTW. In particular, the accuracy on the gesture "rotate phone so screen is away" was far improved over DTW. DTW regarded many of those inputs as the gesture "place phone to ear" or "bring phone to mouth". Because the gestures "rotate phone so screen is away", "place phone to ear", and "bring phone to mouth" produced sensor data that was inconsistent with an accelerometer, the DTW algorithm could not recognize them. However, M.Gesture effectively distinguished between the gestures "rotate phone so screen is away" and "place phone to ear" during the hurdle test.

[Table 2. Recognition accuracy and processing time (µs) of DTW and M.Gesture for the 12 gestures: place phone to ear; rotate phone so screen is away; bring phone to mouth; shake; flick along z-axis towards face; flick along z-axis away from face; a flick to the right; a flick to the left; rotate flick along x-axis away; rotate flick along x-axis toward face; rotate flick along y-axis to the left; rotate flick along y-axis to the right.]
The processing time of M.Gesture was faster than that of DTW except for the gestures "shake", "flick along z-axis towards face", and "flick along z-axis away from face". The orientation and hurdle tests reduced computation time by pruning the list of candidate gestures. For those three gestures, many candidate gestures passed the pre-tests, leading to longer processing times. Because the modified DTW calculates the distance for every possible path, its processing time tends to grow as the input trajectory becomes longer and more complex.

DISCUSSION
Appropriateness of Mass-Spring Metaphor
Several metaphors were considered for M.Gesture. A damped mass on a spring is the exact physical model of an accelerometer. However, this concept was unfamiliar to end users. By contrast, the direct trajectory of the device would be easy to understand, but computers cannot calculate it precisely. As a compromise between these two models, we devised the mass-spring metaphor. Unlike the previous designs, it is designed to be understood by both humans and machines. Our study showed that users could easily understand the concept and its visual representation (Figure 9). At the same time, the computer can correctly recognize authored gestures with 88.3% accuracy.

The mass-spring metaphor seems to make a difference in terms of users' perceptions of their gestures. The metaphor may require initial instruction and practice before use, but, as users use our system, they learn the relationship between a gesture motion and its corresponding mass-spring movement. The relationship explains which attributes are more relevant to accelerometers and why some similar gestures appear different with the mass-spring, and vice versa. We argue that the mass-spring visualization may not be intuitive at first, but our system allows users to build their own intuition about the acceleration-based gesture space.

Some conventional gestures that rely on geometry or position features can be inaccurate with an accelerometer. At the same time, identical gestures performed with different accelerations are distinguishable for an accelerometer. This means that new opportunities exist when accelerations are used selectively.

Who Will Use M.Gesture System?
We assumed that the potential users of M.Gesture would be technologically oriented people who are interested in wearable technology and are willing to put effort into interface customization. We thought that technologically oriented users would be more interested in learning and using the hurdle-based gesture-authoring logic. We recruited young university students as participants based on this rationale. It is not fully known whether the M.Gesture system will be easily accepted by all users. One way to support broader use is to provide template designs that are ready to use and can quickly be modified. Novice users can be guided by common gesture templates such as the gestures in Table 2. Further study is needed to see whether the M.Gesture system will be accepted by novice users and incorporated as a standard interface technique for mobile devices.

The M.Gesture system can also be used by developers or interaction designers. For these advanced users, the declaration scheme can be made more sophisticated and specific. For example, timing adjustments between devices can be added to support a broader range of gestures. Desktop-based applications may be a more apt environment than the current one for software development and interaction design.

Advantages of the Hybrid Approach
M.Gesture adopts a hybrid approach to define a gesture trajectory. We discuss three points about its synergy.

The hybrid approach allows a user to choose the ratio of PBD to declaration. For example, she may define a gesture by demonstration alone or with a number of tightly placed hurdles. A user can flexibly compose a trajectory in either a quick and brief way or a specific and effortful way.

Graphical declaration enhances the modifiability of a gesture design. With under-the-hood algorithms, error correction can be puzzling. A user must choose either to redesign the gesture, to retrain an example, or to correct her gesture motion. An under-the-hood system often constrains a user within the capability of a given algorithm and sensor. By contrast, hurdle-based declaration provides a hint. If the input trajectories are repeatedly inconsistent, the user can redesign the gesture. If the reference trajectory is too different from the input trajectories, she can re-input a training example. Finally, if the last input deviates too much, she can adjust her gesture motion. Refining the gesture design and the gesture performance improves recognition accuracy. Our system encourages the user to fully explore and utilize the capability of the given algorithm and sensors.

The declaration scheme is simplified with PBD. In general, a gesture-declaration scheme should offer various components for specifying complex gesture logic. But because the demonstration provides detailed sensor data, we could simplify the gesture-declaration scheme to three components. The starting orientation and hurdle components are sufficient to define various gestures. This decreases the learning burden for novice users. It also allows for gesture authoring with the interface of a mobile device.

Limitations and Future Work
Some future work remains to improve the usability and solidity of this study.
A field study could examine the usability of M.Gesture in long-term use. More realistic recognition accuracy could be acquired if each gesture were designed and performed by the same participant. Although the performance evaluation did not test multi-device gestures, our experience tells us that a multi-device gesture is less prone to false positives. We speculate that this is because the starting orientation and sequential hurdle-passing require a very specific and precise gesture performance. Lastly, the current M.Gesture system has no means to handle passive motions. We may benchmark the Everyday Gesture Library [1] to prevent gestures from being triggered by passive motions. An accelerometer is the only gesture sensor in M.Gesture. The next version could incorporate multiple sensors, e.g., a gyroscope for sensing rotations.

CONCLUSION
We present M.Gesture, an accelerometer-based gesture-authoring system using mobile devices. The M.Gesture system allows end users to freely choose combinations of devices and to compose accelerometer-based gestures. M.Gesture is a visual and spatial gesture-authoring system using the concept of an acceleration space. A virtual mass-spring metaphor visualizes accelerometer data, allowing users to understand the behavior and constraints of an accelerometer. A demonstration-based approach and a declarative approach are complementarily merged in M.Gesture. Users can quickly compose a gesture by demonstration or specify a detailed gesture trajectory using declarative hurdle components. Multi-level recognition feedback allows users to analyze errors and plan improvements to the gesture design.

M.Gesture allows users to compose and apply their own custom gestures using mobile devices in different use contexts. Rich, gesture-based interactions can be created as more wearable devices come into use. HCI researchers and interaction designers can quickly implement multi-device gesture interactions using M.Gesture. The concept behind the M.Gesture system has implications for other interface systems. Physical metaphors can be applied to unfamiliar technical concepts so that they can be understood by non-expert users. The combination of declaration and demonstration can be applied to other gesture-authoring systems.

ACKNOWLEDGEMENT
This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. , UX-oriented Mobile SW Platform).


A Study of Direction s Impact on Single-Handed Thumb Interaction with Touch-Screen Mobile Phones A Study of Direction s Impact on Single-Handed Thumb Interaction with Touch-Screen Mobile Phones Jianwei Lai University of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore, MD 21250 USA jianwei1@umbc.edu

More information

Principles and Applications of Microfluidic Devices AutoCAD Design Lab - COMSOL import ready

Principles and Applications of Microfluidic Devices AutoCAD Design Lab - COMSOL import ready Principles and Applications of Microfluidic Devices AutoCAD Design Lab - COMSOL import ready Part I. Introduction AutoCAD is a computer drawing package that can allow you to define physical structures

More information

Autonomous Stair Climbing Algorithm for a Small Four-Tracked Robot

Autonomous Stair Climbing Algorithm for a Small Four-Tracked Robot Autonomous Stair Climbing Algorithm for a Small Four-Tracked Robot Quy-Hung Vu, Byeong-Sang Kim, Jae-Bok Song Korea University 1 Anam-dong, Seongbuk-gu, Seoul, Korea vuquyhungbk@yahoo.com, lovidia@korea.ac.kr,

More information

Objectives. Abstract. This PRO Lesson will examine the Fast Fourier Transformation (FFT) as follows:

Objectives. Abstract. This PRO Lesson will examine the Fast Fourier Transformation (FFT) as follows: : FFT Fast Fourier Transform This PRO Lesson details hardware and software setup of the BSL PRO software to examine the Fast Fourier Transform. All data collection and analysis is done via the BIOPAC MP35

More information

Chapter 2 Understanding and Conceptualizing Interaction. Anna Loparev Intro HCI University of Rochester 01/29/2013. Problem space

Chapter 2 Understanding and Conceptualizing Interaction. Anna Loparev Intro HCI University of Rochester 01/29/2013. Problem space Chapter 2 Understanding and Conceptualizing Interaction Anna Loparev Intro HCI University of Rochester 01/29/2013 1 Problem space Concepts and facts relevant to the problem Users Current UX Technology

More information

Wi-Fi Fingerprinting through Active Learning using Smartphones

Wi-Fi Fingerprinting through Active Learning using Smartphones Wi-Fi Fingerprinting through Active Learning using Smartphones Le T. Nguyen Carnegie Mellon University Moffet Field, CA, USA le.nguyen@sv.cmu.edu Joy Zhang Carnegie Mellon University Moffet Field, CA,

More information

Honors Drawing/Design for Production (DDP)

Honors Drawing/Design for Production (DDP) Honors Drawing/Design for Production (DDP) Unit 1: Design Process Time Days: 49 days Lesson 1.1: Introduction to a Design Process (11 days): 1. There are many design processes that guide professionals

More information

Gesture Identification Using Sensors Future of Interaction with Smart Phones Mr. Pratik Parmar 1 1 Department of Computer engineering, CTIDS

Gesture Identification Using Sensors Future of Interaction with Smart Phones Mr. Pratik Parmar 1 1 Department of Computer engineering, CTIDS Gesture Identification Using Sensors Future of Interaction with Smart Phones Mr. Pratik Parmar 1 1 Department of Computer engineering, CTIDS Abstract Over the years from entertainment to gaming market,

More information

Designing in the context of an assembly

Designing in the context of an assembly SIEMENS Designing in the context of an assembly spse01670 Proprietary and restricted rights notice This software and related documentation are proprietary to Siemens Product Lifecycle Management Software

More information

Tangible interaction : A new approach to customer participatory design

Tangible interaction : A new approach to customer participatory design Tangible interaction : A new approach to customer participatory design Focused on development of the Interactive Design Tool Jae-Hyung Byun*, Myung-Suk Kim** * Division of Design, Dong-A University, 1

More information

Functions: Transformations and Graphs

Functions: Transformations and Graphs Paper Reference(s) 6663/01 Edexcel GCE Core Mathematics C1 Advanced Subsidiary Functions: Transformations and Graphs Calculators may NOT be used for these questions. Information for Candidates A booklet

More information

Initial Project and Group Identification Document September 15, Sense Glove. Now you really do have the power in your hands!

Initial Project and Group Identification Document September 15, Sense Glove. Now you really do have the power in your hands! Initial Project and Group Identification Document September 15, 2015 Sense Glove Now you really do have the power in your hands! Department of Electrical Engineering and Computer Science University of

More information

Toward an Augmented Reality System for Violin Learning Support

Toward an Augmented Reality System for Violin Learning Support Toward an Augmented Reality System for Violin Learning Support Hiroyuki Shiino, François de Sorbier, and Hideo Saito Graduate School of Science and Technology, Keio University, Yokohama, Japan {shiino,fdesorbi,saito}@hvrl.ics.keio.ac.jp

More information

Designing in Context. In this lesson, you will learn how to create contextual parts driven by the skeleton method.

Designing in Context. In this lesson, you will learn how to create contextual parts driven by the skeleton method. Designing in Context In this lesson, you will learn how to create contextual parts driven by the skeleton method. Lesson Contents: Case Study: Designing in context Design Intent Stages in the Process Clarify

More information

COPRA 2002 is coming with 69 new features

COPRA 2002 is coming with 69 new features COPRA is coming with 69 new features are marked with COPRA is available for AutoCAD 14 / Mechanical Desktop 3 AutoCAD 2000 / Mechanical Desktop 4 AutoCAD 2000i / Mechanical Desktop 5 AutoCAD / Mechanical

More information

Laboratory 1: Uncertainty Analysis

Laboratory 1: Uncertainty Analysis University of Alabama Department of Physics and Astronomy PH101 / LeClair May 26, 2014 Laboratory 1: Uncertainty Analysis Hypothesis: A statistical analysis including both mean and standard deviation can

More information

Stitching MetroPro Application

Stitching MetroPro Application OMP-0375F Stitching MetroPro Application Stitch.app This booklet is a quick reference; it assumes that you are familiar with MetroPro and the instrument. Information on MetroPro is provided in Getting

More information

A Gestural Interaction Design Model for Multi-touch Displays

A Gestural Interaction Design Model for Multi-touch Displays Songyang Lao laosongyang@ vip.sina.com A Gestural Interaction Design Model for Multi-touch Displays Xiangan Heng xianganh@ hotmail ABSTRACT Media platforms and devices that allow an input from a user s

More information

More Info at Open Access Database by S. Dutta and T. Schmidt

More Info at Open Access Database  by S. Dutta and T. Schmidt More Info at Open Access Database www.ndt.net/?id=17657 New concept for higher Robot position accuracy during thermography measurement to be implemented with the existing prototype automated thermography

More information

Prismatic Machining Preparation Assistant

Prismatic Machining Preparation Assistant Prismatic Machining Preparation Assistant Overview Conventions What's New Getting Started Open the Design Part and Start the Workbench Automatically Create All Machinable Features Open the Manufacturing

More information

Using Dynamic Views. Module Overview. Module Prerequisites. Module Objectives

Using Dynamic Views. Module Overview. Module Prerequisites. Module Objectives Using Dynamic Views Module Overview The term dynamic views refers to a method of composing drawings that is a new approach to managing projects. Dynamic views can help you to: automate sheet creation;

More information

Introduction to Autodesk Inventor for F1 in Schools (Australian Version)

Introduction to Autodesk Inventor for F1 in Schools (Australian Version) Introduction to Autodesk Inventor for F1 in Schools (Australian Version) F1 in Schools race car In this course you will be introduced to Autodesk Inventor, which is the centerpiece of Autodesk s Digital

More information

Table of Contents. Lesson 1 Getting Started

Table of Contents. Lesson 1 Getting Started NX Lesson 1 Getting Started Pre-reqs/Technical Skills Basic computer use Expectations Read lesson material Implement steps in software while reading through lesson material Complete quiz on Blackboard

More information

ithrow : A NEW GESTURE-BASED WEARABLE INPUT DEVICE WITH TARGET SELECTION ALGORITHM

ithrow : A NEW GESTURE-BASED WEARABLE INPUT DEVICE WITH TARGET SELECTION ALGORITHM ithrow : A NEW GESTURE-BASED WEARABLE INPUT DEVICE WITH TARGET SELECTION ALGORITHM JONG-WOON YOO, YO-WON JEONG, YONG SONG, JUPYUNG LEE, SEUNG-HO LIM, KI-WOONG PARK, AND KYU HO PARK Computer Engineering

More information

Learning and Using Models of Kicking Motions for Legged Robots

Learning and Using Models of Kicking Motions for Legged Robots Learning and Using Models of Kicking Motions for Legged Robots Sonia Chernova and Manuela Veloso Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {soniac, mmv}@cs.cmu.edu Abstract

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

D8.1 PROJECT PRESENTATION

D8.1 PROJECT PRESENTATION D8.1 PROJECT PRESENTATION Approval Status AUTHOR(S) NAME AND SURNAME ROLE IN THE PROJECT PARTNER Daniela De Lucia, Gaetano Cascini PoliMI APPROVED BY Gaetano Cascini Project Coordinator PoliMI History

More information

HELPING THE DESIGN OF MIXED SYSTEMS

HELPING THE DESIGN OF MIXED SYSTEMS HELPING THE DESIGN OF MIXED SYSTEMS Céline Coutrix Grenoble Informatics Laboratory (LIG) University of Grenoble 1, France Abstract Several interaction paradigms are considered in pervasive computing environments.

More information

preface Motivation Figure 1. Reality-virtuality continuum (Milgram & Kishino, 1994) Mixed.Reality Augmented. Virtuality Real...

preface Motivation Figure 1. Reality-virtuality continuum (Milgram & Kishino, 1994) Mixed.Reality Augmented. Virtuality Real... v preface Motivation Augmented reality (AR) research aims to develop technologies that allow the real-time fusion of computer-generated digital content with the real world. Unlike virtual reality (VR)

More information

Android User manual. Intel Education Lab Camera by Intellisense CONTENTS

Android User manual. Intel Education Lab Camera by Intellisense CONTENTS Intel Education Lab Camera by Intellisense Android User manual CONTENTS Introduction General Information Common Features Time Lapse Kinematics Motion Cam Microscope Universal Logger Pathfinder Graph Challenge

More information

Laboratory 1: Motion in One Dimension

Laboratory 1: Motion in One Dimension Phys 131L Spring 2018 Laboratory 1: Motion in One Dimension Classical physics describes the motion of objects with the fundamental goal of tracking the position of an object as time passes. The simplest

More information

Learning Guide. ASR Automated Systems Research Inc. # Douglas Crescent, Langley, BC. V3A 4B6. Fax:

Learning Guide. ASR Automated Systems Research Inc. # Douglas Crescent, Langley, BC. V3A 4B6. Fax: Learning Guide ASR Automated Systems Research Inc. #1 20461 Douglas Crescent, Langley, BC. V3A 4B6 Toll free: 1-800-818-2051 e-mail: support@asrsoft.com Fax: 604-539-1334 www.asrsoft.com Copyright 1991-2013

More information

M TE S Y S LT U A S S A

M TE S Y S LT U A S S A Dress-Up Features In this lesson you will learn how to place dress-up features on parts. Lesson Contents: Case Study: Timing Chain Cover Design Intent Stages in the Process Apply a Draft Create a Stiffener

More information

TapBoard: Making a Touch Screen Keyboard

TapBoard: Making a Touch Screen Keyboard TapBoard: Making a Touch Screen Keyboard Sunjun Kim, Jeongmin Son, and Geehyuk Lee @ KAIST HCI Laboratory Hwan Kim, and Woohun Lee @ KAIST Design Media Laboratory CHI 2013 @ Paris, France 1 TapBoard: Making

More information

Servo Tuning Tutorial

Servo Tuning Tutorial Servo Tuning Tutorial 1 Presentation Outline Introduction Servo system defined Why does a servo system need to be tuned Trajectory generator and velocity profiles The PID Filter Proportional gain Derivative

More information

Lesson 4 Extrusions OBJECTIVES. Extrusions

Lesson 4 Extrusions OBJECTIVES. Extrusions Lesson 4 Extrusions Figure 4.1 Clamp OBJECTIVES Create a feature using an Extruded protrusion Understand Setup and Environment settings Define and set a Material type Create and use Datum features Sketch

More information

Xdigit: An Arithmetic Kinect Game to Enhance Math Learning Experiences

Xdigit: An Arithmetic Kinect Game to Enhance Math Learning Experiences Xdigit: An Arithmetic Kinect Game to Enhance Math Learning Experiences Elwin Lee, Xiyuan Liu, Xun Zhang Entertainment Technology Center Carnegie Mellon University Pittsburgh, PA 15219 {elwinl, xiyuanl,

More information

SITUATED CREATIVITY INSPIRED IN PARAMETRIC DESIGN ENVIRONMENTS

SITUATED CREATIVITY INSPIRED IN PARAMETRIC DESIGN ENVIRONMENTS The 2nd International Conference on Design Creativity (ICDC2012) Glasgow, UK, 18th-20th September 2012 SITUATED CREATIVITY INSPIRED IN PARAMETRIC DESIGN ENVIRONMENTS R. Yu, N. Gu and M. Ostwald School

More information

GESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL

GESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL GESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL Darko Martinovikj Nevena Ackovska Faculty of Computer Science and Engineering Skopje, R. Macedonia ABSTRACT Despite the fact that there are different

More information

Image Recognition for PCB Soldering Platform Controlled by Embedded Microchip Based on Hopfield Neural Network

Image Recognition for PCB Soldering Platform Controlled by Embedded Microchip Based on Hopfield Neural Network 436 JOURNAL OF COMPUTERS, VOL. 5, NO. 9, SEPTEMBER Image Recognition for PCB Soldering Platform Controlled by Embedded Microchip Based on Hopfield Neural Network Chung-Chi Wu Department of Electrical Engineering,

More information

The Control of Avatar Motion Using Hand Gesture

The Control of Avatar Motion Using Hand Gesture The Control of Avatar Motion Using Hand Gesture ChanSu Lee, SangWon Ghyme, ChanJong Park Human Computing Dept. VR Team Electronics and Telecommunications Research Institute 305-350, 161 Kajang-dong, Yusong-gu,

More information

Real-Time Face Detection and Tracking for High Resolution Smart Camera System

Real-Time Face Detection and Tracking for High Resolution Smart Camera System Digital Image Computing Techniques and Applications Real-Time Face Detection and Tracking for High Resolution Smart Camera System Y. M. Mustafah a,b, T. Shan a, A. W. Azman a,b, A. Bigdeli a, B. C. Lovell

More information

Virtual Engineering: Challenges and Solutions for Intuitive Offline Programming for Industrial Robot

Virtual Engineering: Challenges and Solutions for Intuitive Offline Programming for Industrial Robot Virtual Engineering: Challenges and Solutions for Intuitive Offline Programming for Industrial Robot Liwei Qi, Xingguo Yin, Haipeng Wang, Li Tao ABB Corporate Research China No. 31 Fu Te Dong San Rd.,

More information

Abstract. Keywords: Multi Touch, Collaboration, Gestures, Accelerometer, Virtual Prototyping. 1. Introduction

Abstract. Keywords: Multi Touch, Collaboration, Gestures, Accelerometer, Virtual Prototyping. 1. Introduction Creating a Collaborative Multi Touch Computer Aided Design Program Cole Anagnost, Thomas Niedzielski, Desirée Velázquez, Prasad Ramanahally, Stephen Gilbert Iowa State University { someguy tomn deveri

More information

Interactive System for Origami Creation

Interactive System for Origami Creation Interactive System for Origami Creation Takashi Terashima, Hiroshi Shimanuki, Jien Kato, and Toyohide Watanabe Graduate School of Information Science, Nagoya University Furo-cho, Chikusa-ku, Nagoya 464-8601,

More information

Application of 3D Terrain Representation System for Highway Landscape Design

Application of 3D Terrain Representation System for Highway Landscape Design Application of 3D Terrain Representation System for Highway Landscape Design Koji Makanae Miyagi University, Japan Nashwan Dawood Teesside University, UK Abstract In recent years, mixed or/and augmented

More information

A STUDY ON DESIGN SUPPORT FOR CONSTRUCTING MACHINE-MAINTENANCE TRAINING SYSTEM BY USING VIRTUAL REALITY TECHNOLOGY

A STUDY ON DESIGN SUPPORT FOR CONSTRUCTING MACHINE-MAINTENANCE TRAINING SYSTEM BY USING VIRTUAL REALITY TECHNOLOGY A STUDY ON DESIGN SUPPORT FOR CONSTRUCTING MACHINE-MAINTENANCE TRAINING SYSTEM BY USING VIRTUAL REALITY TECHNOLOGY H. ISHII, T. TEZUKA and H. YOSHIKAWA Graduate School of Energy Science, Kyoto University,

More information

Estimation of Folding Operations Using Silhouette Model

Estimation of Folding Operations Using Silhouette Model Estimation of Folding Operations Using Silhouette Model Yasuhiro Kinoshita Toyohide Watanabe Abstract In order to recognize the state of origami, there are only techniques which use special devices or

More information

SolidWorks Part I - Basic Tools SDC. Includes. Parts, Assemblies and Drawings. Paul Tran CSWE, CSWI

SolidWorks Part I - Basic Tools SDC. Includes. Parts, Assemblies and Drawings. Paul Tran CSWE, CSWI SolidWorks 2015 Part I - Basic Tools Includes CSWA Preparation Material Parts, Assemblies and Drawings Paul Tran CSWE, CSWI SDC PUBLICATIONS Better Textbooks. Lower Prices. www.sdcpublications.com Powered

More information

METBD 110 Hands-On 17 Dimensioning Sketches

METBD 110 Hands-On 17 Dimensioning Sketches METBD 110 Hands-On 17 Dimensioning Sketches Why: Recall, Pro/E can capture design intent through the use of geometric constraints, dimensional constraints, and parametric relations. Dimensional constraints

More information

ECE 497 Introduction to Mobile Robotics Spring 09-10

ECE 497 Introduction to Mobile Robotics Spring 09-10 Lab 1 Getting to Know Your Robot: Locomotion and Odometry (Demonstration due in class on Thursday) (Code and Memo due in Angel drop box by midnight on Thursday) Read this entire lab procedure and complete

More information

ROBOT VISION. Dr.M.Madhavi, MED, MVSREC

ROBOT VISION. Dr.M.Madhavi, MED, MVSREC ROBOT VISION Dr.M.Madhavi, MED, MVSREC Robotic vision may be defined as the process of acquiring and extracting information from images of 3-D world. Robotic vision is primarily targeted at manipulation

More information

Cracking the Sudoku: A Deterministic Approach

Cracking the Sudoku: A Deterministic Approach Cracking the Sudoku: A Deterministic Approach David Martin Erica Cross Matt Alexander Youngstown State University Youngstown, OH Advisor: George T. Yates Summary Cracking the Sodoku 381 We formulate a

More information

Impeding Forgers at Photo Inception

Impeding Forgers at Photo Inception Impeding Forgers at Photo Inception Matthias Kirchner a, Peter Winkler b and Hany Farid c a International Computer Science Institute Berkeley, Berkeley, CA 97, USA b Department of Mathematics, Dartmouth

More information

2. Publishable summary

2. Publishable summary 2. Publishable summary CogLaboration (Successful real World Human-Robot Collaboration: from the cognition of human-human collaboration to fluent human-robot collaboration) is a specific targeted research

More information

Development of a Finger Mounted Type Haptic Device Using a Plane Approximated to Tangent Plane

Development of a Finger Mounted Type Haptic Device Using a Plane Approximated to Tangent Plane Journal of Communication and Computer 13 (2016) 329-337 doi:10.17265/1548-7709/2016.07.002 D DAVID PUBLISHING Development of a Finger Mounted Type Haptic Device Using a Plane Approximated to Tangent Plane

More information

Haptic messaging. Katariina Tiitinen

Haptic messaging. Katariina Tiitinen Haptic messaging Katariina Tiitinen 13.12.2012 Contents Introduction User expectations for haptic mobile communication Hapticons Example: CheekTouch Introduction Multiple senses are used in face-to-face

More information

Evaluation of Visuo-haptic Feedback in a 3D Touch Panel Interface

Evaluation of Visuo-haptic Feedback in a 3D Touch Panel Interface Evaluation of Visuo-haptic Feedback in a 3D Touch Panel Interface Xu Zhao Saitama University 255 Shimo-Okubo, Sakura-ku, Saitama City, Japan sheldonzhaox@is.ics.saitamau.ac.jp Takehiro Niikura The University

More information

AutoCAD Tutorial First Level. 2D Fundamentals. Randy H. Shih SDC. Better Textbooks. Lower Prices.

AutoCAD Tutorial First Level. 2D Fundamentals. Randy H. Shih SDC. Better Textbooks. Lower Prices. AutoCAD 2018 Tutorial First Level 2D Fundamentals Randy H. Shih SDC PUBLICATIONS Better Textbooks. Lower Prices. www.sdcpublications.com Powered by TCPDF (www.tcpdf.org) Visit the following websites to

More information

T&E Express SCSU Mobile Lab Program

T&E Express SCSU Mobile Lab Program T&E Express SCSU Mobile Lab Program Course : Industrial Technology 8 Science Strand and Substrand being addressed Develop a model to generate data for iterative testing and modification of a proposed object,

More information

Quartz Lock Loop (QLL) For Robust GNSS Operation in High Vibration Environments

Quartz Lock Loop (QLL) For Robust GNSS Operation in High Vibration Environments Quartz Lock Loop (QLL) For Robust GNSS Operation in High Vibration Environments A Topcon white paper written by Doug Langen Topcon Positioning Systems, Inc. 7400 National Drive Livermore, CA 94550 USA

More information

Up to Cruising Speed with Autodesk Inventor (Part 1)

Up to Cruising Speed with Autodesk Inventor (Part 1) 11/29/2005-8:00 am - 11:30 am Room:Swan 1 (Swan) Walt Disney World Swan and Dolphin Resort Orlando, Florida Up to Cruising Speed with Autodesk Inventor (Part 1) Neil Munro - C-Cubed Technologies Ltd. and

More information

Microsoft Scrolling Strip Prototype: Technical Description

Microsoft Scrolling Strip Prototype: Technical Description Microsoft Scrolling Strip Prototype: Technical Description Primary features implemented in prototype Ken Hinckley 7/24/00 We have done at least some preliminary usability testing on all of the features

More information

GestureCommander: Continuous Touch-based Gesture Prediction

GestureCommander: Continuous Touch-based Gesture Prediction GestureCommander: Continuous Touch-based Gesture Prediction George Lucchese george lucchese@tamu.edu Jimmy Ho jimmyho@tamu.edu Tracy Hammond hammond@cs.tamu.edu Martin Field martin.field@gmail.com Ricardo

More information

Creo Parametric 2.0: Introduction to Solid Modeling. Creo Parametric 2.0: Introduction to Solid Modeling

Creo Parametric 2.0: Introduction to Solid Modeling. Creo Parametric 2.0: Introduction to Solid Modeling Creo Parametric 2.0: Introduction to Solid Modeling 1 2 Part 1 Class Files... xiii Chapter 1 Introduction to Creo Parametric... 1-1 1.1 Solid Modeling... 1-4 1.2 Creo Parametric Fundamentals... 1-6 Feature-Based...

More information

Relationship to theory: This activity involves the motion of bodies under constant velocity.

Relationship to theory: This activity involves the motion of bodies under constant velocity. UNIFORM MOTION Lab format: this lab is a remote lab activity Relationship to theory: This activity involves the motion of bodies under constant velocity. LEARNING OBJECTIVES Read and understand these instructions

More information

What was the first gestural interface?

What was the first gestural interface? stanford hci group / cs247 Human-Computer Interaction Design Studio What was the first gestural interface? 15 January 2013 http://cs247.stanford.edu Theremin Myron Krueger 1 Myron Krueger There were things

More information

A Gesture-Based Interface for Seamless Communication between Real and Virtual Worlds

A Gesture-Based Interface for Seamless Communication between Real and Virtual Worlds 6th ERCIM Workshop "User Interfaces for All" Long Paper A Gesture-Based Interface for Seamless Communication between Real and Virtual Worlds Masaki Omata, Kentaro Go, Atsumi Imamiya Department of Computer

More information

SRV02-Series Rotary Experiment # 3. Ball & Beam. Student Handout

SRV02-Series Rotary Experiment # 3. Ball & Beam. Student Handout SRV02-Series Rotary Experiment # 3 Ball & Beam Student Handout SRV02-Series Rotary Experiment # 3 Ball & Beam Student Handout 1. Objectives The objective in this experiment is to design a controller for

More information

Lab 7: Introduction to Webots and Sensor Modeling

Lab 7: Introduction to Webots and Sensor Modeling Lab 7: Introduction to Webots and Sensor Modeling This laboratory requires the following software: Webots simulator C development tools (gcc, make, etc.) The laboratory duration is approximately two hours.

More information

Lesson 6 2D Sketch Panel Tools

Lesson 6 2D Sketch Panel Tools Lesson 6 2D Sketch Panel Tools Inventor s Sketch Tool Bar contains tools for creating the basic geometry to create features and parts. On the surface, the Geometry tools look fairly standard: line, circle,

More information

Modern Control Theoretic Approach for Gait and Behavior Recognition. Charles J. Cohen, Ph.D. Session 1A 05-BRIMS-023

Modern Control Theoretic Approach for Gait and Behavior Recognition. Charles J. Cohen, Ph.D. Session 1A 05-BRIMS-023 Modern Control Theoretic Approach for Gait and Behavior Recognition Charles J. Cohen, Ph.D. ccohen@cybernet.com Session 1A 05-BRIMS-023 Outline Introduction - Behaviors as Connected Gestures Gesture Recognition

More information

VICs: A Modular Vision-Based HCI Framework

VICs: A Modular Vision-Based HCI Framework VICs: A Modular Vision-Based HCI Framework The Visual Interaction Cues Project Guangqi Ye, Jason Corso Darius Burschka, & Greg Hager CIRL, 1 Today, I ll be presenting work that is part of an ongoing project

More information

Module 1: Introduction to Experimental Techniques Lecture 2: Sources of error. The Lecture Contains: Sources of Error in Measurement

Module 1: Introduction to Experimental Techniques Lecture 2: Sources of error. The Lecture Contains: Sources of Error in Measurement The Lecture Contains: Sources of Error in Measurement Signal-To-Noise Ratio Analog-to-Digital Conversion of Measurement Data A/D Conversion Digitalization Errors due to A/D Conversion file:///g /optical_measurement/lecture2/2_1.htm[5/7/2012

More information

Direct Manipulation. and Instrumental Interaction. CS Direct Manipulation

Direct Manipulation. and Instrumental Interaction. CS Direct Manipulation Direct Manipulation and Instrumental Interaction 1 Review: Interaction vs. Interface What s the difference between user interaction and user interface? Interface refers to what the system presents to the

More information

Application of Gestalt psychology in product human-machine Interface design

Application of Gestalt psychology in product human-machine Interface design IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Application of Gestalt psychology in product human-machine Interface design To cite this article: Yanxia Liang 2018 IOP Conf.

More information

TEMPERATURE MAPPING SOFTWARE FOR SINGLE-CELL CAVITIES*

TEMPERATURE MAPPING SOFTWARE FOR SINGLE-CELL CAVITIES* TEMPERATURE MAPPING SOFTWARE FOR SINGLE-CELL CAVITIES* Matthew Zotta, CLASSE, Cornell University, Ithaca, NY, 14853 Abstract Cornell University routinely manufactures single-cell Niobium cavities on campus.

More information

Frequency Hopping Pattern Recognition Algorithms for Wireless Sensor Networks

Frequency Hopping Pattern Recognition Algorithms for Wireless Sensor Networks Frequency Hopping Pattern Recognition Algorithms for Wireless Sensor Networks Min Song, Trent Allison Department of Electrical and Computer Engineering Old Dominion University Norfolk, VA 23529, USA Abstract

More information

Introduction to HCI. CS4HC3 / SE4HC3/ SE6DO3 Fall Instructor: Kevin Browne

Introduction to HCI. CS4HC3 / SE4HC3/ SE6DO3 Fall Instructor: Kevin Browne Introduction to HCI CS4HC3 / SE4HC3/ SE6DO3 Fall 2011 Instructor: Kevin Browne brownek@mcmaster.ca Slide content is based heavily on Chapter 1 of the textbook: Designing the User Interface: Strategies

More information

ISO 1101 Geometrical product specifications (GPS) Geometrical tolerancing Tolerances of form, orientation, location and run-out

ISO 1101 Geometrical product specifications (GPS) Geometrical tolerancing Tolerances of form, orientation, location and run-out INTERNATIONAL STANDARD ISO 1101 Third edition 2012-04-15 Geometrical product specifications (GPS) Geometrical tolerancing Tolerances of form, orientation, location and run-out Spécification géométrique

More information

SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY

SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY Sidhesh Badrinarayan 1, Saurabh Abhale 2 1,2 Department of Information Technology, Pune Institute of Computer Technology, Pune, India ABSTRACT: Gestures

More information