A camera controlling method for lecture archive
NISHIGUCHI Satoshi
Graduate School of Law, Kyoto University

MINOH Michihiko
Center for Information and Multimedia Studies, Kyoto University

Abstract

Archiving lectures is important not only to students but also to lecturers. Lecture archives become intellectual property of universities, material for multimedia courseware and for teaching evaluation, and knowledge sources. We present a method to control multiple cameras for lecture archiving. A lecture archive consists of media information, such as video, audio, and text, and event information, such as the position of the lecturer and the activities of the students. We consider that the video for a lecture archive should include various video clips, because users' demands on the lecture archive differ from person to person. In this study, we propose a method for shooting various video clips using multiple video cameras, in which we introduce a probabilistic method for camera control.

1. Introduction

In this study, we discuss a camera controlling method for obtaining various kinds of video clips for a lecture archive. The purpose of recording a lecture is to provide information about the lecture to users without spatial or temporal restrictions. To achieve this purpose, we record the information sources in a lecture room into a lecture archive. A lecture archive is defined as a set of media information, such as video, audio, and slides, together with event information generated as a result of the interaction between the lecturer and the students in the lecture room. Hence the facial expressions and gestures of the lecturer and of the students attending a lecture should be projected onto the video for the lecture archive. A camera controlling method for a distance learning system has been proposed. Such a system controls the cameras according to the situation of the lecture room, selects the most suitable video at each moment, and transmits the selected video to the remote lecture room in real time.
The shooting cameras that are not selected are controlled so as to shoot objects for the next transmission. In other words, the video obtained by a distance learning system is a sequence of shots suited to the situation of the lecture room. Video for a lecture archive, on the other hand, should include more various kinds of video clips than that of a distance learning system, because the lecture archive is used for various purposes by various users. The various video clips in this study should have the following two characteristics: one is that the important objects in the lecture room are shot by the cameras with as many camera works as possible at any given time; the other is that different kinds of camera works can be assigned to a camera under the same situation in the lecture room. In order to characterize each shooting camera for the lecture archive, we introduce a probabilistic model into the design of our camera controlling method. A probability density function (PDF) for selecting a camera work, designed to reflect this policy, is assigned to each camera. Since the slides on the screen and the drawings on the white board are recorded electronically, we focus on shooting the students based on their activity. There are many reasons why students fidget during a lecture: in one case, they may be bored with the lecture; in another, they may bend forward to see details on the screen. On the other hand, when students remain still, they may be sleepy or may be thinking deeply. Hence the students' activity is very important information for the lecture archive and for the camera controlling method. The rest of the paper is organized as follows. In Section 2 we describe our lecture archive. In Section 3 we explain our camera controlling method using probability density functions. Implementation and experimental results are
presented in Section 4 and Section 5, respectively.

2. Lecture archive

2.1. Lecture

In a face-to-face lecture, the lecturer stands at the front of the lecture room and teaches the students a subject. He explains the subject with his voice and gestures. He can use slides about the subject, and he writes drawings on the white board. In order to point at the screen or to engage the students' interest, he may walk around at the front of the lecture room during the lecture. The students sit at their seats, listen to the lecturer, and watch him, the slides, and the white board. They remain seated, but move their heads, hands, and upper bodies in order to listen and see in detail. We express the degree of such behavior as their activity. The lecturer can get information from the students' facial expressions and fidgety behavior, and may adjust his explanation accordingly. The students, on the other hand, can get information from the lecturer's facial expressions and behavior, the slides projected on the screen, and the drawings on the white board.

2.2. Lecture archive

The purpose of recording a lecture is to provide information about the lecture to users without spatial or temporal restrictions. However, users' demands differ from person to person. Hence we record the information sources in the lecture room as our lecture archive, and each user can extract information from the archived sources. The information sources in a lecture room are the following:

- Lecturer
- Students
- Slides
- Drawings on the white board

These information sources have two aspects: information expressed by media and information expressed by events. We call these media information and event information, respectively. Media information is represented by media data, such as video, audio, and strokes of drawings, with their capture times. Event information expresses the status of the information sources.
Position, activity, and the presence or absence of status changes of the information sources are examples of event information. Event information is represented by event data with its occurrence time. The lecturer and the students as information sources are characterized by their facial and physical behavior and their voices. Hence we record their behavior as video data and their voices as audio data; in addition, their positions and activities are recorded as event information. The slides as information sources are characterized by their images and by slide-switching events. Hence we record each slide image as media information and the switching events as event information. The white board is characterized by the drawings on its surface. The lecturer writes drawings on the white board stroke by stroke, and also erases them, so it is difficult to segment the white board by its status changes. The size of the area of a drawing is defined by its minimum bounding rectangle (MBR). We record the erasing timing as event information of the white board. The drawings themselves can be reconstructed completely from the stroke information recorded as event information.

2.3. Variety of video clips for lecture archive

Over the past few years, several studies have focused on camera control for shooting the lecturer in the lecture room. In these studies, the students' seats are divided into several fixed regions, and the students are modeled by those regions. Therefore, the students are recorded with only a few kinds of shots. On the other hand, users' demands for video clips of the students, where a video clip is a set of serial frames projecting an object, differ from person to person. Hence we treat one or more students as a shooting object in this paper, and we define various video clips of the students as follows:

- Video clips shot by one camera should include various objects.
- Different objects should be shot at a given time when multiple cameras can be used.

2.4. Approach for shooting various video clips

In order to shoot various video clips, many candidate objects to shoot are needed. Hence we define the seat region, which is a set of seats occupied by students; the positions of the occupied seats are used to detect the seat regions. We propose the following steps as the camera controlling rule for shooting students:

1. Calculating seat regions. The seat regions are determined by the positions of the seats occupied by the students. Our method detects the occupied seats and calculates the seat regions from them.

2. Selecting a seat region with a probability density function. In order to assign a camera work to each shooting camera, we introduce a probability density function for selecting the seat region to shoot.
3. Assigning a camera work based on the selected seat region. Based on the selected seat region, camera controlling commands are sent to each shooting camera.

More details of these steps are described in the next section.

3. Probabilistic method for camera control

3.1. Shooting cameras for shooting students

The structure of the lecture room affects where multiple cameras can be installed. Students usually sit from the middle to the back of the lecture room. Therefore, the cameras for shooting the students should be installed at the front, facing the back of the room, so as to capture the students' facial expressions and behavior. Each shooting camera can be remotely controlled in pan, tilt, and zoom. Generally, the number of cameras installed in a lecture room is restricted by the space of the room, cost, and so on. It is therefore necessary to select seat regions according to the number of shooting cameras: the number of selected seat regions is equal to the number of cameras shooting the students.

3.2. Definition of seat region

A seat region is a subset of the seats occupied by students. In this definition, a next seat of a seat is a seat that exists within a predefined unit length of it, and the n-th seats of a primary seat are the seats that exist at n times the unit length from it. A seat region is defined as follows:

- Each primary seat by itself makes a seat region.
- Each primary seat together with one of its next seats makes a seat region that includes the two seats.
- For 1 ≤ n ≤ n_max, the seats that are the 1st to n-th seats of a primary seat and that can be traced through next seats from the primary seat make a seat region.
- Each primary seat together with one of its second seats that cannot be traced through the next seats of the primary seat makes a seat region that includes the two seats.
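As a sketch of this definition, one can take the occupied seats as (row, col) cells of the seat grid and use 8-neighbor adjacency as the next-seat relation (the implementation in Section 4 does the same). The enumeration below is one reading of the rules above, with n_max left unbounded; the function names are illustrative:

```python
from collections import deque

# The eight grid offsets that realize the "next seat" (unit length) relation.
STEPS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def tree_depths(occupied, primary):
    """Breadth-first depth of every occupied seat reachable from the primary
    seat through chains of next seats (8-neighbor adjacency)."""
    depths = {primary: 0}
    queue = deque([primary])
    while queue:
        r, c = queue.popleft()
        for dr, dc in STEPS:
            nxt = (r + dr, c + dc)
            if nxt in occupied and nxt not in depths:
                depths[nxt] = depths[(r, c)] + 1
                queue.append(nxt)
    return depths

def seat_regions(occupied):
    """Enumerate seat regions: for each primary seat, the single-seat region,
    each (primary, next seat) pair, the traceable seats from depth 0 to d,
    and each (primary, extra seat) pair, where an extra seat lies two grid
    steps away but cannot be traced through next seats."""
    occupied = set(occupied)
    regions = set()
    for primary in occupied:
        depths = tree_depths(occupied, primary)
        # Traceable seats from depth 0 to d (d = 0 gives the single seat).
        for d in range(max(depths.values()) + 1):
            regions.add(frozenset(s for s, k in depths.items() if k <= d))
        # Each pair of the primary seat and one of its next seats.
        for s, k in depths.items():
            if k == 1:
                regions.add(frozenset({primary, s}))
        # Each pair of the primary seat and an untraceable second seat.
        pr, pc = primary
        for s in occupied:
            if s not in depths and max(abs(s[0] - pr), abs(s[1] - pc)) == 2:
                regions.add(frozenset({primary, s}))
    return regions
```

Representing each region as a frozenset makes the deletion of duplicate regions a simple set insertion.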
An example case in which 6 students (A to F) are detected in the lecture room is shown in Figure 1. The seats for students are aligned on a grid and drawn as circles: the filled circles are the seats occupied by students, and the yarded (encircled) ones are the seat regions whose primary seats are A to F, respectively. After deleting duplicate seat regions, there are 17 seat regions in this example.

Figure 1. Example of seat regions: circles are seats for students, filled circles are seats occupied by students, and yarded circles are seat regions.

3.3. Selecting a seat region by probability density function

In this section, we consider the activity of a seat region. From the definition, a seat region consists of one or more occupied seats. Hence we define the activity of a seat region as the average activity of the students sitting on the seats included in the region. In order to apply a probability density function for selecting a seat region from the set of seat regions, we rank all the seat regions by their activities. We then define the probability density function (PDF) of a camera work as follows. When the number of cameras is m and the number of seat regions is n, the seat regions are divided into m parts in order of their ranking, so that each part has n/m seat regions. The following PDF p_i(x), where x is the rank of a seat region, is assigned to each shooting camera i (0 ≤ i < m).

When m = 1:

    p_0(x) = 1/n

When m ≥ 2:

    p_i(x) = 2m / (n(m+1))   for (n/m) * i ≤ x < (n/m) * (i+1)
    p_i(x) = m / (n(m+1))    for other x

(Each PDF sums to 1 over the n ranks: the n/m ranks of camera i's own part contribute (n/m) * 2m/(n(m+1)) = 2/(m+1), and the remaining n(m-1)/m ranks contribute (m-1)/(m+1).)
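A sketch of how this piecewise PDF could be implemented and used to draw a seat region for each camera, assuming n is divisible by m as in the construction above (function names are illustrative):

```python
import random

def camera_pdf(i, m, n):
    """PDF over seat-region ranks x = 0..n-1 for shooting camera i of m.
    Ranks in camera i's own part get probability 2m/(n(m+1)); all other
    ranks get m/(n(m+1)). For m = 1 the density is uniform, 1/n."""
    if m == 1:
        return [1.0 / n] * n
    lo, hi = n / m * i, n / m * (i + 1)
    return [2.0 * m / (n * (m + 1)) if lo <= x < hi else m / (n * (m + 1))
            for x in range(n)]

def select_region(regions_by_rank, i, m):
    """Draw one seat region for camera i; regions_by_rank is the list of
    seat regions sorted by activity (rank 0 = highest activity)."""
    weights = camera_pdf(i, m, len(regions_by_rank))
    return random.choices(regions_by_rank, weights=weights, k=1)[0]
```

Each camera thus favors its own activity band but can still pick any region, which is what allows different camera works under the same situation in the lecture room.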
Examples of the probability density functions are shown in Figure 2. Each shooting camera selects one seat region according to its assigned PDF.

Figure 2. Examples of the probability density functions: m = 1 (camera No. 0, uniform density 1/n), m = 2 (cameras No. 0 and No. 1), and m = 3 (cameras No. 0, No. 1, and No. 2).

4. Implementation

4.1. Estimation of students' activities

We estimate the students' activities by inter-frame subtraction: when the students move a lot, the difference in pixel values between frames becomes large. In order to avoid occlusion caused by the students themselves, an observation camera with a fish-eye lens is installed on the ceiling of the lecture room. The camera captures images (I_f) of the students from the ceiling; the size of the captured images is 640x480 pixels (Figure 3).

Figure 3. Fish-eye image of students (I_f).

The color image captured at time t_n is converted to a gray image, and each pixel value is subtracted from the pixel value at the same position in the gray image captured at time t_{n-1}. Finally, each pixel is binarized with a threshold T_a into the image I_a. We use a mask image (Figure 4), which expresses the seats for students in the lecture room, in order to estimate the activity at each occupied seat. Each region in the mask image is a rectangle, defined by hand based on the seats for students. The binarized image I_a is masked by the mask image I_m into the image I_a'. The binarized pixels in I_a' reflect the activities of the students. Hence the number of pixels in each rectangle region of I_a' is divided by the number of pixels included in the corresponding rectangle region of the mask image I_m. As a result, we obtain an estimate of the activity of each student, normalized between 0.0 and 1.0.

Figure 4. Mask image for the fish-eye image (I_m).

4.2.
Detection of the positions of seats occupied by students

In our method, the positions of the seats occupied by students are needed in order to calculate the seat regions. To detect such seats we use background subtraction; however, background subtraction alone also extracts other objects, such as bags. So, in addition to background subtraction, we use the inter-frame subtraction already used for estimating the students' activities, as follows:

- A captured fish-eye image I_f is subtracted from the background image and binarized by a threshold T_p into the image I_p.
- I_p is masked by the mask image I_m into the image I_p'.
- With respect to each rectangle region of I_m, the mean of the number of binarized pixels in I_a' and the number of binarized pixels in I_p' is calculated.
- When this mean value exceeds a threshold T_e, we conclude that the seat represented by the rectangle region is occupied by a student.

4.3. Calculation of seat regions

In our environment, the seats of the students are aligned on a grid. The following procedure is applied to each seat in order to calculate the set of seat regions. First, we construct a tree whose nodes are seats occupied by students:

1. One occupied seat (the primary seat) is selected as the root node of the tree.
2. If there is an occupied seat among the 8-neighbors of the primary seat, it is added to the tree as a child node of the root node.
3. If there is an occupied seat among the 8-neighbors of a seat already in the tree, and it has not yet been added, it is added to the tree as a child node of the already-added node.
4. Step 3 is repeated until no more seats can be added.
5. If there is an occupied seat among the 8-neighbors of the 8-neighbors of the primary seat that is not yet in the tree, it is added as a child node of the root node. We call such a seat an extra seat.
6. The depth from the root node is assigned to every node of the tree.
7. Steps 1 to 6 are repeated for all primary seats.

Second, we calculate the seat regions of each primary seat using its tree:

- Seat region consisting of one seat: the primary seat (root node) by itself makes a seat region.
- Seat region consisting of two seats: a root node together with a child node at depth 1 makes a seat region; a root node together with a child node representing an extra seat at depth 2 also makes a seat region.
- Seat region consisting of three or more seats: the nodes from depth 0 to d (1 ≤ d) together make a seat region.
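The occupied seats that this construction starts from come out of the image measurements of Sections 4.1 and 4.2. A minimal NumPy sketch of that front end, assuming gray frames as 2-D arrays, mask rectangles given as (row, col, height, width) tuples, and illustrative threshold values (normalized fractions stand in for the raw per-rectangle pixel counts of the text):

```python
import numpy as np

def seat_activities(prev_gray, curr_gray, seat_rects, t_a=25):
    """Section 4.1: difference two consecutive gray frames, binarize with
    threshold T_a (image I_a), then divide the count of binarized pixels in
    each mask rectangle of I_m by the rectangle's area, giving per-seat
    activities normalized to [0.0, 1.0]."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    binary = diff > t_a  # I_a
    return [binary[r:r + h, c:c + w].sum() / float(h * w)
            for (r, c, h, w) in seat_rects]

def occupied_seats(prev_gray, curr_gray, background_gray, seat_rects,
                   t_a=25, t_p=30, t_e=0.2):
    """Section 4.2: combine background subtraction (threshold T_p, image I_p)
    with the inter-frame measure; a seat is judged occupied when the mean of
    the two per-rectangle measures exceeds T_e."""
    acts = seat_activities(prev_gray, curr_gray, seat_rects, t_a)
    pres_diff = np.abs(curr_gray.astype(np.int16) -
                       background_gray.astype(np.int16))
    pres_bin = pres_diff > t_p  # I_p
    pres = [pres_bin[r:r + h, c:c + w].sum() / float(h * w)
            for (r, c, h, w) in seat_rects]
    return [(a + p) / 2.0 > t_e for a, p in zip(acts, pres)]
```

In practice the rectangles would be the hand-defined regions of the mask image I_m.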
All of the seat regions calculated in this way make up the set of seat regions. An example tree for seat B in Figure 1 is shown in Figure 5.

Figure 5. Example tree whose root node is seat B in Figure 1, with nodes at depths 0 to 3.

5. Experimental results

The students were in the center block of the lecture room, which has 6 x 7 = 42 seats. In our experiment, the lecturer talked about a subject and 20 students listened to him. We recorded about 5 minutes of video from the fish-eye camera onto a video tape, and applied our method in order to check how many kinds of seat regions are selected, using 2 shooting cameras. Table 1 shows the accuracy of detecting the seats occupied by students; in total, the status of the seats is judged correctly with about 74.9% accuracy.

Table 1. Accuracy of detecting seats occupied by students: actual status (student exists / does not exist) against the judged status, with per-case averages and the total accuracy in %.

We obtained about 93 seat regions on average at each time. If a seat region were defined as a single seat, we could select from only 20 seat regions; with our method, we can select from about 93 seat regions at each time. Table 2 shows the result of selecting seat regions with our method, and Table 3 shows the result of selecting seat regions at random; Figure 6 compares the two. In Figure 6, the lines with the box and star icons show the results of random selection of seat regions: similar seat regions are selected for the two shooting cameras. The lines with the plus and cross icons show the results of our method. The line with the plus icon shows that seat regions including many students are selected more often, and seat regions including few students less often, than with the random method. And the line with the cross
icon has the opposite character. These results show that, using PDFs based on the activities of the seat regions, the two cameras can shoot more various objects with our method than with the random method.

Table 2. Seat regions selected with the PDFs: selection frequency (%) and average activity of the selected seat regions, for camera No. 0 and camera No. 1.

Table 3. Seat regions selected with the random method: selection frequency (%) and average activity of the selected seat regions, for camera No. 0 and camera No. 1.

Figure 6. Comparison between the method with PDFs and the random method: frequency in selecting (%) against the number of seats included in the selected seat region.

6. Conclusion

In this paper, we proposed a camera controlling method for lecture archiving. We defined the seat region for obtaining a variety of video clips, and introduced the activities of the seat regions and the probability density functions. We showed that a different trend in selecting seat regions can be assigned to each shooting camera with a PDF based on the activity of the students, and that we can obtain various video clips as a result of selecting various seat regions.

References

[1] S. Goodridge. Multimedia Sensor Fusion for Intelligent Camera Control and Human-Computer-Interaction. PhD thesis, North Carolina State University.
[2] Y. Kameda, K. Ishizuka, and M. Minoh. A live video imaging method for capturing presentation information in distance learning. In IEEE International Conference on Multimedia and Expo, volume 3.
[3] K. Yagi, Y. Kameda, M. Nakamura, M. Minoh, and M. Ashour-Abdalla. A novel distance learning system for the TIDE project. In Proceedings of ICCE/ICCAI 2000, volume 2, 2000.
More informationKeyword: Morphological operation, template matching, license plate localization, character recognition.
Volume 4, Issue 11, November 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Automatic
More informationPackshotCreator 3D User guide
PackshotCreator 3D User guide 2011 PackshotCreator - Sysnext All rights reserved. Table of contents 4 4 7 8 11 15 18 19 20 20 23 23 24 25 26 27 27 28 28 34 35 36 36 36 39 42 43 44 46 47 Chapter 1 : Getting
More informationIMAGE PROCESSING PAPER PRESENTATION ON IMAGE PROCESSING
IMAGE PROCESSING PAPER PRESENTATION ON IMAGE PROCESSING PRESENTED BY S PRADEEP K SUNIL KUMAR III BTECH-II SEM, III BTECH-II SEM, C.S.E. C.S.E. pradeep585singana@gmail.com sunilkumar5b9@gmail.com CONTACT:
More informationA Novel Transform for Ultra-Wideband Multi-Static Imaging Radar
6th European Conference on Antennas and Propagation (EUCAP) A Novel Transform for Ultra-Wideband Multi-Static Imaging Radar Takuya Sakamoto Graduate School of Informatics Kyoto University Yoshida-Honmachi,
More informationFunctions added in CLIP STUDIO PAINT Ver are marked with an *.
Preface > Changes in Ver.1.7.1 Preface Changes in Ver.1.7.1 The functions added/changed in CLIP STUDIO PAINT Ver.1.7.1 are as follows. Functions added in CLIP STUDIO PAINT Ver.1.7.1 are marked with an
More informationAssignment: Light, Cameras, and Image Formation
Assignment: Light, Cameras, and Image Formation Erik G. Learned-Miller February 11, 2014 1 Problem 1. Linearity. (10 points) Alice has a chandelier with 5 light bulbs sockets. Currently, she has 5 100-watt
More informationForest Inventory System. User manual v.1.2
Forest Inventory System User manual v.1.2 Table of contents 1. How TRESTIMA works... 3 1.2 How TRESTIMA calculates basal area... 3 2. Usage in the forest... 5 2.1. Measuring basal area by shooting pictures...
More informationCameras. Steve Rotenberg CSE168: Rendering Algorithms UCSD, Spring 2017
Cameras Steve Rotenberg CSE168: Rendering Algorithms UCSD, Spring 2017 Camera Focus Camera Focus So far, we have been simulating pinhole cameras with perfect focus Often times, we want to simulate more
More informationMulti-modal Human-Computer Interaction. Attila Fazekas.
Multi-modal Human-Computer Interaction Attila Fazekas Attila.Fazekas@inf.unideb.hu Szeged, 12 July 2007 Hungary and Debrecen Multi-modal Human-Computer Interaction - 2 Debrecen Big Church Multi-modal Human-Computer
More informationWhat will be on the midterm?
What will be on the midterm? CS 178, Spring 2014 Marc Levoy Computer Science Department Stanford University General information 2 Monday, 7-9pm, Cubberly Auditorium (School of Edu) closed book, no notes
More informationInteractive Simulation: UCF EIN5255. VR Software. Audio Output. Page 4-1
VR Software Class 4 Dr. Nabil Rami http://www.simulationfirst.com/ein5255/ Audio Output Can be divided into two elements: Audio Generation Audio Presentation Page 4-1 Audio Generation A variety of audio
More informationCOMP 558 lecture 5 Sept. 22, 2010
Up to now, we have taken the projection plane to be in ront o the center o projection. O course, the physical projection planes that are ound in cameras (and eyes) are behind the center o the projection.
More informationCOMPACT GUIDE. Camera-Integrated Motion Analysis
EN 06/13 COMPACT GUIDE Camera-Integrated Motion Analysis Detect the movement of people and objects Filter according to directions of movement Fast, simple configuration Reliable results, even in the event
More informationInserting and Creating ImagesChapter1:
Inserting and Creating ImagesChapter1: Chapter 1 In this chapter, you learn to work with raster images, including inserting and managing existing images and creating new ones. By scanning paper drawings
More informationFinal Report. Project Title: E-Scope Team Name: Awesome
EEL 4924 Electrical Engineering Design (Senior Design) Final Report 04 August 2009 Team Members: Charlie Lamantia Scott Lee Project Abstract: Project Title: E-Scope Team Name: Awesome In match shooting
More informationToile la Joie: Toile Jardin Software Lesson. By Tamara Evans. Floriani...The Name That Means Beautiful Embroidery!
Toile la Joie: Toile Jardin Software Lesson By Tamara Evans Software Lesson: Toile Jardin By Tamara Evans While this quilt may look intricate and difficult, the embroidery does all the work in this garden.
More informationDistributed Vision System: A Perceptual Information Infrastructure for Robot Navigation
Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Hiroshi Ishiguro Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp
More informationFabrication of the kinect remote-controlled cars and planning of the motion interaction courses
Available online at www.sciencedirect.com ScienceDirect Procedia - Social and Behavioral Sciences 174 ( 2015 ) 3102 3107 INTE 2014 Fabrication of the kinect remote-controlled cars and planning of the motion
More informationFace Detection System on Ada boost Algorithm Using Haar Classifiers
Vol.2, Issue.6, Nov-Dec. 2012 pp-3996-4000 ISSN: 2249-6645 Face Detection System on Ada boost Algorithm Using Haar Classifiers M. Gopi Krishna, A. Srinivasulu, Prof (Dr.) T.K.Basak 1, 2 Department of Electronics
More informationAndroid User manual. Intel Education Lab Camera by Intellisense CONTENTS
Intel Education Lab Camera by Intellisense Android User manual CONTENTS Introduction General Information Common Features Time Lapse Kinematics Motion Cam Microscope Universal Logger Pathfinder Graph Challenge
More informationA Vehicular Visual Tracking System Incorporating Global Positioning System
A Vehicular Visual Tracking System Incorporating Global Positioning System Hsien-Chou Liao and Yu-Shiang Wang Abstract Surveillance system is widely used in the traffic monitoring. The deployment of cameras
More informationNikon. King s College London. Imaging Centre. N-SIM guide NIKON IMAGING KING S COLLEGE LONDON
N-SIM guide NIKON IMAGING CENTRE @ KING S COLLEGE LONDON Starting-up / Shut-down The NSIM hardware is calibrated after system warm-up occurs. It is recommended that you turn-on the system for at least
More informationA study of the ionospheric effect on GBAS (Ground-Based Augmentation System) using the nation-wide GPS network data in Japan
A study of the ionospheric effect on GBAS (Ground-Based Augmentation System) using the nation-wide GPS network data in Japan Takayuki Yoshihara, Electronic Navigation Research Institute (ENRI) Naoki Fujii,
More informationTelling What-Is-What in Video. Gerard Medioni
Telling What-Is-What in Video Gerard Medioni medioni@usc.edu 1 Tracking Essential problem Establishes correspondences between elements in successive frames Basic problem easy 2 Many issues One target (pursuit)
More informationStudy and Analysis of various preprocessing approaches to enhance Offline Handwritten Gujarati Numerals for feature extraction
International Journal of Scientific and Research Publications, Volume 4, Issue 7, July 2014 1 Study and Analysis of various preprocessing approaches to enhance Offline Handwritten Gujarati Numerals for
More informationEnergy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks
Energy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks Alvaro Pinto, Zhe Zhang, Xin Dong, Senem Velipasalar, M. Can Vuran, M. Cenk Gursoy Electrical Engineering Department, University
More informationEvaluation of a Tricycle-style Teleoperational Interface for Children: a Comparative Experiment with a Video Game Controller
2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. September 9-13, 2012. Paris, France. Evaluation of a Tricycle-style Teleoperational Interface for Children:
More informationThe Fundamental Problem
The What, Why & How WHAT IS IT? Technique of blending multiple different exposures of the same scene to create a single image with a greater dynamic range than can be achieved with a single exposure. Can
More informationPASS Sample Size Software. These options specify the characteristics of the lines, labels, and tick marks along the X and Y axes.
Chapter 940 Introduction This section describes the options that are available for the appearance of a scatter plot. A set of all these options can be stored as a template file which can be retrieved later.
More informationUNIVERSIDAD CARLOS III DE MADRID ESCUELA POLITÉCNICA SUPERIOR
UNIVERSIDAD CARLOS III DE MADRID ESCUELA POLITÉCNICA SUPERIOR TRABAJO DE FIN DE GRADO GRADO EN INGENIERÍA DE SISTEMAS DE COMUNICACIONES CONTROL CENTRALIZADO DE FLOTAS DE ROBOTS CENTRALIZED CONTROL FOR
More informationInterframe Coding of Global Image Signatures for Mobile Augmented Reality
Interframe Coding of Global Image Signatures for Mobile Augmented Reality David Chen 1, Mina Makar 1,2, Andre Araujo 1, Bernd Girod 1 1 Department of Electrical Engineering, Stanford University 2 Qualcomm
More informationDigital Image Processing. Lecture 5 (Enhancement) Bu-Ali Sina University Computer Engineering Dep. Fall 2009
Digital Image Processing Lecture 5 (Enhancement) Bu-Ali Sina University Computer Engineering Dep. Fall 2009 Outline Image Enhancement in Spatial Domain Histogram based methods Histogram Equalization Local
More informationImage acquisition. In both cases, the digital sensing element is one of the following: Line array Area array. Single sensor
Image acquisition Digital images are acquired by direct digital acquisition (digital still/video cameras), or scanning material acquired as analog signals (slides, photographs, etc.). In both cases, the
More informationExcel / Education. GCSE Mathematics. Paper 5B (Calculator) Higher Tier. Time: 2 hours. Turn over
Excel / Education GCSE Mathematics Paper 5B (Calculator) Higher Tier Time: 2 hours 5B Materials required for examination Ruler graduated in centimetres and millimetres, protractor, compasses, pen, HB pencil,
More informationCELL PHONE PHOTOGRAPHY
CELL PHONE PHOTOGRAPHY Understanding of how current phone cameras are different due to advanced technology What this presentation will provide What features are available for control of your phone photography
More informationCompression Method for High Dynamic Range Intensity to Improve SAR Image Visibility
Compression Method for High Dynamic Range Intensity to Improve SAR Image Visibility Satoshi Hisanaga, Koji Wakimoto and Koji Okamura Abstract It is possible to interpret the shape of buildings based on
More informationTexts and Resources: Assessments: Freefoto.com Group Photo Projects
Effective Date: 2009-10 Name of Course: Digital Photography Grade Level: 9-12 Department: Industrial Technology and Engineering Length of Course: 30 cycles Instructional Time: 180 days Period Per Cycle:
More informationCamera Raw software is included as a plug-in with Adobe Photoshop and also adds some functions to Adobe Bridge.
Editing Images in Camera RAW Camera Raw software is included as a plug-in with Adobe Photoshop and also adds some functions to Adobe Bridge. Camera Raw gives each of these applications the ability to import
More informationIntelligent Traffic Sign Detector: Adaptive Learning Based on Online Gathering of Training Samples
2011 IEEE Intelligent Vehicles Symposium (IV) Baden-Baden, Germany, June 5-9, 2011 Intelligent Traffic Sign Detector: Adaptive Learning Based on Online Gathering of Training Samples Daisuke Deguchi, Mitsunori
More informationDifferential Image Compression for Telemedicine: A Novel Approach
PJETS Volume 1, No 1, 2011, 14-20 ISSN: 2222-9930 print Differential Image Compression for Telemedicine: A Novel Approach Adnan Alam Khan *, Asadullah Shah **, Saghir Muhammad *** ABSTRACT Telemedicine
More informationConsumer Behavior when Zooming and Cropping Personal Photographs and its Implications for Digital Image Resolution
Consumer Behavior when Zooming and Cropping Personal Photographs and its Implications for Digital Image Michael E. Miller and Jerry Muszak Eastman Kodak Company Rochester, New York USA Abstract This paper
More informationPreprocessing and Segregating Offline Gujarati Handwritten Datasheet for Character Recognition
Preprocessing and Segregating Offline Gujarati Handwritten Datasheet for Character Recognition Hetal R. Thaker Atmiya Institute of Technology & science, Kalawad Road, Rajkot Gujarat, India C. K. Kumbharana,
More informationOBJECTIVE OF THE BOOK ORGANIZATION OF THE BOOK
xv Preface Advancement in technology leads to wide spread use of mounting cameras to capture video imagery. Such surveillance cameras are predominant in commercial institutions through recording the cameras
More informationDigital Image Processing. Digital Image Fundamentals II 12 th June, 2017
Digital Image Processing Digital Image Fundamentals II 12 th June, 2017 Image Enhancement Image Enhancement Types of Image Enhancement Operations Neighborhood Operations on Images Spatial Filtering Filtering
More informationIntroduction to DSP ECE-S352 Fall Quarter 2000 Matlab Project 1
Objective: Introduction to DSP ECE-S352 Fall Quarter 2000 Matlab Project 1 This Matlab Project is an extension of the basic correlation theory presented in the course. It shows a practical application
More informationDigital images. Digital Image Processing Fundamentals. Digital images. Varieties of digital images. Dr. Edmund Lam. ELEC4245: Digital Image Processing
Digital images Digital Image Processing Fundamentals Dr Edmund Lam Department of Electrical and Electronic Engineering The University of Hong Kong (a) Natural image (b) Document image ELEC4245: Digital
More informationFinger print Recognization. By M R Rahul Raj K Muralidhar A Papi Reddy
Finger print Recognization By M R Rahul Raj K Muralidhar A Papi Reddy Introduction Finger print recognization system is under biometric application used to increase the user security. Generally the biometric
More informationGetting Started Guide
SOLIDWORKS Getting Started Guide SOLIDWORKS Electrical FIRST Robotics Edition Alexander Ouellet 1/2/2015 Table of Contents INTRODUCTION... 1 What is SOLIDWORKS Electrical?... Error! Bookmark not defined.
More informationMeasuring in Centimeters
MD2-3 Measuring in Centimeters Pages 179 181 Standards: 2.MD.A.1 Goals: Students will measure pictures of objects in centimeters using centimeter cubes and then a centimeter ruler. Prior Knowledge Required:
More informationTravel Photo Album Summarization based on Aesthetic quality, Interestingness, and Memorableness
Travel Photo Album Summarization based on Aesthetic quality, Interestingness, and Memorableness Jun-Hyuk Kim and Jong-Seok Lee School of Integrated Technology and Yonsei Institute of Convergence Technology
More informationAdvanced Techniques for Mobile Robotics Location-Based Activity Recognition
Advanced Techniques for Mobile Robotics Location-Based Activity Recognition Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Activity Recognition Based on L. Liao, D. J. Patterson, D. Fox,
More information