Design of Virtual Sense Technology for System Interface Mr. Chetan Dhule, Prof. T. H. Nagrare Computer Science & Engineering Department, G. H. Raisoni College of Engineering. ABSTRACT Gesture-based human-computer interaction can make computers and devices easier to use, for example by allowing people to control applications on Windows by moving their hands through the air. Existing solutions have relied on gesture recognition algorithms that need exotic hardware, often involving elaborate setups limited to the research lab. The gesture recognition algorithms used so far are not practical or responsive enough for real-world use, partly because of the inadequate data on which the image processing is applied. Since existing methods are based on gesture recognition algorithms, they need ANN training, which makes the whole process slow and reduces accuracy. The method we propose controls the motion of the mouse in Windows in real time according to the motion of the hand and fingers, by calculating the change in pixel values of RGB colors from a video, without any ANN training, to get the exact sequence of motion of the hands and fingers. Keywords computer vision, gesture recognition, speech, human-computer interaction 1. INTRODUCTION Existing solutions have relied on gesture recognition algorithms that need exotic hardware, often involving elaborate setups limited to the research lab. The gesture recognition algorithms used so far are not practical or responsive enough for real-world use, partly because of the inadequate data on which the image processing is applied. Since existing methods are based on gesture recognition algorithms, they need ANN training, which makes the whole process slow and reduces accuracy. 
The main objective of the method we propose is real-time control of the motion of the mouse in Windows according to the motion of the hand and fingers, by calculating the change in pixel values of RGB colors from a video, without any ANN training, to get the exact sequence of motion of the hands and fingers. Email: chetandhule123@gmail.com

2. PROBLEM DEFINITION Unfortunately, most existing solutions suffer from several shortcomings. Some of the hardware that has been used for processing gestures has required users to wear obtrusive sensors and stand near multiple carefully calibrated cameras. Most cameras used so far rely on color data and are therefore sensitive to environmental factors such as dynamic backgrounds and lighting conditions. The algorithms used to determine gestures from the data returned by the hardware have been unreliable when tested on a wide variety of users, and gestures have generally been limited to basic hand-tracking. Existing solutions have relied on gesture recognition algorithms; since the time needed for the computer to recognize a gesture is usually longer than the time needed to display its result, there is always a lag affecting the practical application of such interfaces. Finally, there have not been any collaborative workspaces or environments that allow users to freely use gestures to complete tasks such as controlling the motion and events of the mouse.

3. OBJECTIVES Existing solutions have relied on gesture recognition algorithms that need exotic hardware, often involving elaborate setups limited to the research lab. The gesture recognition algorithms used so far are not practical or responsive enough for real-world use, partly because of the inadequate data on which the image processing is applied, and because they need ANN training, which makes the whole process slow and reduces accuracy. The main objective of the method we propose is therefore real-time control of the motion of the mouse in Windows according to the motion of the hand and fingers, by calculating the change in pixel values of RGB colors from a video, without any ANN training, to get the exact sequence of motion of the hands and fingers. www.ijrcct.org Page 1454

4. LITERATURE REVIEW The processing of hand gestures has been explored extensively in the existing literature. Some of the earlier work by Freeman and Weissman [1] used a video camera and computer-vision template-matching algorithms to detect a user's hand from across a room and allow the user to control a television set. A user could show an open hand, and an on-screen hand icon would appear that could be used to adjust various graphical controls, such as a volume slider. The slider was activated when the user covered the control for a fixed amount of time. The authors discovered that users enjoyed this alternative to the physical remote control and that the feedback of the on-screen hand was effective in assisting the user. However, users found it tiring to hold their hand up for long amounts of time to activate the different controls. This user fatigue, common to gesture-based interfaces, has been called "gorilla arm". Other approaches have relied on using multiple cameras to produce a 3D image which can be used to detect and track hand motion [2][4]. These systems required an elaborate installation process which had to be completed carefully, as calibration parameters such as the distance between the cameras were important in the triangulation algorithms used. These algorithms were also computationally expensive, since a large amount of video data needed to be processed in real time, and stereo matching typically fails on scenes with little or no texture. Ultimately, such systems would not be usable outside of their special lab environments. 
In [3], Mistry presented the SixthSense wearable gestural interface, which used a camera and projector worn on the user's chest to allow the user to zoom in on projected maps (among other activities) through two-handed gestures. In order for the camera to detect the user's hand, the user had to wear brightly colored markers on their index fingers and thumbs. The regular webcam worn by the user would also be sensitive to environmental conditions such as bright sunlight or darkness, which would make distinguishing the colored markers much more difficult, if not impossible. Wilson and Oliver [5] aimed to create a Minority Report-like environment that they called GWindows. The user was able to move an on-screen cursor on a Microsoft Windows desktop by pointing with their hand, and to use voice commands such as "close" and "scroll" to trigger actions on the underlying application windows. They concluded that users preferred interacting with hand gestures over voice commands and that desktop workspaces designed for gesture interactions were worth pursuing further. When considering collaborative online workspaces, several commercial and academic web-based collaboration solutions have existed for some time. However, interaction with other users in these environments is usually limited to basic sharing of media files, rather than allowing full real-time collaboration on entire web-based applications and their data between users on distinctly deployed domains, as this paper proposes. Cristian Gadea and Bogdan Ionescu [6] created finger-based gesture control of a collaborative online workspace, but their system needs continuous Internet connectivity, which is not always available in India. It requires an online workspace called UC-IC, with the application running within a web browser to determine the latest hand gesture, but it is not always possible to provide high-speed connectivity everywhere at all times. 
Besides this, it needs training to recognize gestures, which slows down the system. The methods in [7], [8] and [9] are based on gesture recognition algorithms and need ANN training, which makes the whole process slow and reduces accuracy: each time a gesture is to be recognized, ANN training is needed and takes considerable time, so the system cannot match its output speed to the exact motion of the mouse pointer.

5. SYSTEM ARCHITECTURE In this system we use different preprocessing techniques and feature extraction as a tool for recognizing the pixel-based values (coordinates) of the RGB colors, tracking the change in pixel position of the different color stickers attached to the user's fingers in real time. The updated values are then sent to the PC to track the motion of the mouse.
Video Capturing: A continuous video stream is given as input to the system by the laptop camera.

Image Processing: Image segmentation is done in two phases:
1. Skin Detection Model: detects the hand and fingers from the image. Capturing user input virtually is the main aim of this module: the user moves a finger in front of the camera's capture area, the camera captures this motion, and the system processes it frame by frame. Once the finger coordinates have been calculated, the system operates the cursor position.
2. Approximate Median Model: used for background subtraction. It was observed that using both methods together gave much better segmentation for further processing.

Pixel Extraction: In this phase the pixel sequence is obtained from the image without any ANN training, so that the exact sequence of motion of the hands and fingers is preserved.

Color Detection: In this phase the positions of the RGB colors are extracted from the pixel sequence to detect the motion of the hand and fingers by calculating the change in pixel values of the RGB colors.

Controlling Position of Mouse Pointer: Signals are sent to the system to control mouse pointer motion and mouse events, giving an appropriate command to the PC to move the mouse pointer according to the motion of the user's fingers or hand.

Fig. 1: Block diagram of the different phases of the system (Video Capturing → Image Processing → Pixel Extraction → Color Detection → Controlling Position of Mouse Pointer).

6. METHODOLOGY
i. Hand position tracking and mouse control
ii. Laser pointer detection
iii. Hand-gesture-based auto image grabbing (virtual zoom in/out)
iv. Camera processing and image capturing
v. Object-based bricks game
vi. Virtual playing of drums by holding drum sticks in the hand
vii. Virtual sense for file handling: the system uses virtual sense technology to copy a file from one system to another within a local area network (LAN) or over Wi-Fi. The user makes a picking-up action on the file that needs to be copied, moves it toward the destination system, and releases it over that system.

7. RESULTS AND DISCUSSION
The software provides control of all mouse clicking events using a color marker. After several experiments, it was observed that a red color marker is more effective than markers of other colors.

Fig. 2: Graphical user interface of the application.
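Controlling the pointer amounts to mapping the marker position from camera coordinates to screen coordinates and handing the result to the operating system. The sketch below is our own illustration, not the authors' code: `map_to_screen` and its mirroring convention are assumptions, while `SetCursorPos` is the real Win32 call that would move the pointer on Windows.

```python
import ctypes
import sys

def map_to_screen(cx, cy, cam_w, cam_h, scr_w, scr_h):
    """Map a marker position in camera coordinates to screen coordinates.
    The x axis is mirrored so that moving the hand right moves the pointer
    right (a webcam sees the user's motion reversed)."""
    sx = int((cam_w - 1 - cx) * scr_w / cam_w)
    sy = int(cy * scr_h / cam_h)
    # Clamp so jitter at the frame border cannot leave the screen.
    return max(0, min(scr_w - 1, sx)), max(0, min(scr_h - 1, sy))

x, y = map_to_screen(320, 240, 640, 480, 1920, 1080)  # frame centre
if sys.platform == "win32":
    ctypes.windll.user32.SetCursorPos(x, y)  # Win32 API: move the pointer
print((x, y))
```

On other operating systems a cross-platform library such as pynput could replace the Win32 call; the coordinate mapping itself stays the same.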
Fig. 3: Start camera.

Fig: Set the marker color.

Fig: Control motion and clicking events of the mouse with the color marker set earlier.

This application can be very useful for people who want to control a computer without actually touching the system, or without using a wireless mouse, which always needs a surface to operate on. As part of the future scope, the application can be improved to work with mobile phones and play stations. Other modes of human-computer interaction, such as voice recognition, facial expression and eye gaze, can also be combined to make the system more robust and flexible.

8. CONCLUSION
With the use of a red color marker there is a significant increase in accuracy and user-friendliness; the accuracy of the red-marker system is higher than in the cases where markers of other colors were used individually. The problem of changing lighting conditions in color-based recognition has been addressed in this work by providing a button to set the marker color at the starting phase of the application. There are still some problems with recognition speed: the speed of controlling the motion of the mouse is not 100%, and needs to be improved for some of the gestures. All mouse movements and key actions have already been mapped and work well under the given circumstances.

Acknowledgment
We thank the subjects who participated in our experiments.

9. REFERENCES
[1] W. T. Freeman and C. D. Weissman, "Television Control by Hand Gestures", in Proc. of Int. Workshop on Automatic Face and Gesture Recognition. IEEE Computer Society, 1995, pp. 179-183.
[2] Z. Jun, Z. Fangwen, W. Jiaqi, Y. Zhengpeng, and C. Jinbo, "3D Hand Gesture Analysis Based on Multi-Criterion in Multi-Camera System", in ICAL 2008: IEEE Int. Conf. on Automation and Logistics. IEEE Computer Society, September 2008, pp. 2342-2346.
[3] P. Mistry and P. Maes, "SixthSense: A Wearable Gestural Interface", in ACM SIGGRAPH ASIA 2009 Sketches. New York, NY, USA: ACM, 2009.
[4] A. Utsumi, T. Miyasato, and F. Kishino, "Multi-Camera Hand Pose Recognition System Using Skeleton Image", in RO-MAN '95: Proc. of 4th IEEE Int. Workshop on Robot and Human Communication. IEEE Computer Society, July 1995, pp. 219-224.
[5] A. Wilson and N. Oliver, "GWindows: Robust Stereo Vision for Gesture-Based Control of Windows", in ICMI '03: Proc. of 5th Int. Conf. on Multimodal Interfaces. New York, NY, USA: ACM, 2003, pp. 211-218.
[6] C. Gadea, B. Ionescu, D. Ionescu, S. Islam, and B. Solomon (University of Ottawa, Mgestyk Technologies), "Finger-Based Gesture Control of a Collaborative Online Workspace", in 7th IEEE International Symposium on Applied Computational Intelligence and Informatics, May 24-26, 2012, Timisoara, Romania.
[7] M. Ganasekera, "Computer Vision Based Hand Movement Capturing System", in The 8th International Conference on Computer Science & Education (ICCSE 2013), April 26-28, 2013, Colombo, Sri Lanka.
[8] F. Lamberti, "Endowing Existing Desktop Applications with Customizable Body Gesture-Based Interfaces", in IEEE Int'l Conference on Consumer Electronics (ICCE), ISBN 978-1-4673-1363-6, 2013.
[9] A. Agrawal, R. Raj, and S. Porwal, "Vision-based Multimodal Human-Computer Interaction using Hand and Head Gestures", in Proceedings of 2013 IEEE Conference on Information and Communication Technologies (ICT 2013).