Interactive Whiteboard


Slovak University of Technology in Bratislava
Faculty of Informatics and Information Technologies (FIIT)

Interactive Whiteboard
Bachelor thesis

Degree Course: Informatics
Field of study: Informatics
Place of development: Institute of Computer Engineering and Applied Informatics
Supervisor: Ing. Andrej Fogelton

May 2018


ANOTÁCIA

Slovak University of Technology in Bratislava
FACULTY OF INFORMATICS AND INFORMATION TECHNOLOGIES

Field of study: Informatics
Bachelor thesis: Interaktívna tabuľa (Interactive Whiteboard)
Supervisor: Ing. Andrej Fogelton
May 2018

This bachelor thesis deals with creating a software version of an interactive whiteboard. Light Emitting Diode (LED) pens (pens with an LED at the tip) are often used to interact with interactive whiteboards. The problem is that such pens are not common hardware, so many people assemble them on their own. To avoid this problem, we decided to replace the pen with a laser pointer, which is common and often used during presentations as well. In this thesis, we analyze solutions for the detection of a light spot (a laser spot or an LED) in real time and under real-world conditions. The laser spot is captured by a webcam. We designed two algorithms for laser spot detection: an algorithm for default camera settings and an algorithm for adjusted camera settings. The algorithms are based on thresholding in the Red Green Blue (RGB) and Hue Saturation Value (HSV) color models. Homography is used to transfer between the camera and screen coordinate systems. We also deal with the interaction of the user with the presentation; we designed a mode for writing with the laser pointer. The algorithm for adjusted camera settings can be used for real-time interaction.


ANNOTATION

Slovak University of Technology in Bratislava
FACULTY OF INFORMATICS AND INFORMATION TECHNOLOGIES

Degree Course: INFORMATICS
Bachelor thesis: Interactive Whiteboard
Supervisor: Ing. Andrej Fogelton
May 2018

This bachelor thesis deals with a software interactive whiteboard. LED pens (pens with an LED diode at the tip) are often used to interact with interactive whiteboards. The problem is that such pens are not common hardware, so many people assemble them on their own. To overcome this obstacle, we decided to replace the pen with a laser pointer, which is common hardware and often used in presentations. In this thesis, we analyze related work describing the detection of a light source (laser spot or LED), focusing on methods working in real time and in real-world environments. The laser spot is captured by a webcam. We designed two algorithms for laser spot detection: an algorithm for default camera settings and an algorithm for adjusted camera settings. The algorithms are based on thresholding in the Red Green Blue (RGB) and Hue Saturation Value (HSV) color models. The transfer between the camera and screen coordinate systems is done using homography. We also focus on the interaction of the user with the presentation and designed an interactive mode for writing text with the laser pointer. The algorithm for adjusted camera settings can be used for real-time interaction.


Declaration of Honor

I honestly declare that I wrote this thesis independently, under the professional supervision of Ing. Andrej Fogelton, using the cited bibliography.

May 2018, Bratislava. Signature


Acknowledgement

First and foremost, I have to thank my supervisor Ing. Andrej Fogelton for his professional guidance and valuable advice during the work on this thesis. I also thank my family and friends, who supported me throughout the writing of this thesis.


Contents

1 Introduction
  1.1 Requirements
2 Related Work
  2.1 Camera-to-display mapping
  2.2 Calibration process
  2.3 Noise reduction
  2.4 Thresholding
  2.5 Laser spot detection
    2.5.1 Methods enhancing the detection
  2.6 Interaction
3 Design
  3.1 Calibration process
  3.2 The laser spot detection
    3.2.1 The detection in default camera settings
    3.2.2 The detection in adjusted camera settings
  3.3 Interaction
4 Evaluation
  4.1 The dataset with the default camera settings
  4.2 The dataset with the adjusted camera settings
  4.3 Annotations
  4.4 Evaluation
  4.5 Discussion

5 Conclusion
A Technical Documentation
B User Guide
C Resumé
  C.1 Úvod (Introduction)
    C.1.1 Požiadavky (Requirements)
  C.2 Súvisiace práce (Related Work)
    C.2.1 Kalibračný proces (Calibration process)
    C.2.2 Redukovanie šumu (Noise reduction)
    C.2.3 Detekcia laseru (Laser detection)
  C.3 Návrh (Design)
    C.3.1 Algoritmus pre predvolené nastavenia kamery (Algorithm for default camera settings)
    C.3.2 Algoritmus pre upravené nastavenia kamery (Algorithm for adjusted camera settings)
    C.3.3 Interakcia (Interaction)
  C.4 Záver (Conclusion)
D DVD Contents

List of Acronyms

LED  Light Emitting Diode
MSV  Mean Sample Value
ND   Neutral Density
SVD  Singular Value Decomposition
HSV  Hue Saturation Value
RGB  Red Green Blue
CA   center area
SA   surrounding area
TP   true positive
TN   true negative
FP   false positive
FN   false negative
F1   F1 score
GUI  Graphical User Interface


Chapter 1

Introduction

Nowadays, interactive whiteboards are assistive devices for teaching. In order to make teaching more efficient, they combine the benefits of a touchscreen, a computer, and video projection. They give people more options and better comfort than classical whiteboards and are replacing them in schools. Teachers can write notes or draw objects in presentations naturally from the board environment, and they can also present illustrative videos, images, and graphs, and search for information. There are only low variable costs, such as electricity; there is no need to buy chalk or whiteboard markers, or to care about cleaning anymore. Interactive whiteboards are also time-saving, because it is possible to save the progress and reuse it later. Some teachers free students from making notes, because they can make the teaching materials available to them.

An interactive whiteboard is an input/output device with a large display surface, which allows interaction with the computer. This display surface can be realized as:

1. Large screen: The main advantage is that the whole screen is visible, without the user overshadowing it. There are screens with touch technology, but other input methods can also be used to interact with the screen (for example, vision-based methods).

2. Projector: There are several types of projection: rear projection, projection from the top, and the most common, frontal projection. Rear projection has the same advantage as the screen, but the disadvantages are the need for more space behind the screen and the semitransparent canvas, which has lower contrast. The main disadvantage of frontal projection is that the user needs to be careful not to overshadow the interacting objects with his own body. With projection from the top, the user needs to mount the projector properly, but it can be quite convenient, because the user overshadows just a small surface below the hand while interacting.

There are many ways to detect a touch on a display surface. Most of the products use special hardware components built into the whiteboard screen:

Infrared touch technology: The whiteboard has light emitting diodes on two sides (top and left) and light detectors on the other two. When the user touches the surface, the light is not detected by some parts of the detectors, and the touch position can be estimated. This technology is similar to the one used in the Kindle ebook reader.

Capacitive screens: This technology is used in most smartphones and other touchscreen devices because of its sensitivity and multitouch support. The screen is made from one or more layers of tiny conductors. This surface allows another conductor (such as a finger) to complete the circuit during the touch, so that its position can be detected.

Electromagnetic technology: The whiteboard is equipped with an arrangement of grid wires behind the solid screen. The wires use electricity to create a magnetic field and determine the horizontal and vertical position of the tip of the pen, which is passive and contains a coil.

Touch glass: It uses the same technology as capacitive screens and can be mounted as an overlay on any screen.

Figure 1.1: Touchscreen display as interactive whiteboard (top left), rear projection (top right), projection from the top (bottom left), front projection (bottom right).

Unfortunately, the main disadvantage of all these interactive whiteboards is their high price (from 700 € up to 2000 €), caused by the special hardware components. Moreover, it is difficult to manipulate them.

There are also affordable solutions, which try to replace the expensive touchscreens with vision-based detection methods. Most of these solutions are based on infrared light detection, because it reduces the noise from other light spectrums. For example, the companies Ipevo and Hivista use an infrared camera to detect a specific wavelength of infrared light emitted by an LED placed at the tip of an interactive pen (Figure 1.2). A problem may occur when the user overshadows the light from the LED. Therefore, the camera is mostly mounted on the projector. With these products, you can turn any wall into an interactive surface for under 200 €.

Figure 1.2: The infrared camera transmits only the infrared light spectrum (top). An interactive pen with top and side buttons (bottom).

All these products use special hardware components (a touchscreen or an infrared camera). Our aim is to develop software which will use only common hardware and image processing techniques. There are solutions which use a webcam and an LED pen in the visible light spectrum. The problem is that the LED pen is not common hardware and is often assembled by the user, which can be difficult for a non-technical person. To overcome the obstacle of assembling a pen, we decided to replace it with a laser pointer. Laser pointers are often used in presentations with large or highly placed screens to point to important parts of the presentation, images, or text. A laser pointer is also a common part of presenters or remote controllers for projectors. Because of the hand shaking of the user, it is difficult to keep the laser spot at one place. For users standing near the display surface, it can be used as a pen for writing on the screen.

1.1 Requirements

Our aim is to create an application which will detect the laser spot on a display surface using a webcam. We want to add another kind of interaction with the presentation using the laser spot.

For example, a cursor will be set up to follow the laser spot. Mouse clicks could be realized by quickly turning the laser off and on while pointing near the place the user wishes to click on. This interaction has to work in real time. All the hardware necessary for our solution is: a webcam, a digital projector, and a laser pointer.

Figure 1.3: Illustration of the interaction with the laser spot and the computer.

Chapter 2

Related Work

In this chapter, we focus on laser spot and LED detection, and on the user experience: how the user interacts with a presentation.

2.1 Camera-to-display mapping

A common workflow of all the related work methods is to first set up the camera-to-display mapping. Every time the camera captures the display surface, the shape of the computer screen is perspectively transformed into a trapezoid [1] (Figure 2.1) with different scale and rotation. Camera-to-display mapping can be done using homography, a transformation from one coordinate system into another. The estimation of the homography matrix H requires determining the coordinates of at least 4 points in both coordinate systems. If x, y are the coordinates in the camera plane (Figure 2.1) and $\hat{x}$, $\hat{y}$ are the coordinates of the corresponding point on the computer screen, then the relationship between these two points is represented by a transformation with the matrix H, which needs to be estimated. This transformation can be expressed using homogeneous coordinates:

$$\begin{pmatrix} \hat{x}' \\ \hat{y}' \\ m \end{pmatrix} = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (2.1)$$

$$\begin{pmatrix} \hat{x} \\ \hat{y} \\ 1 \end{pmatrix} = \begin{pmatrix} \hat{x}'/m \\ \hat{y}'/m \\ m/m \end{pmatrix} = \frac{1}{h_{31}x + h_{32}y + h_{33}} \begin{pmatrix} h_{11}x + h_{12}y + h_{13} \\ h_{21}x + h_{22}y + h_{23} \\ h_{31}x + h_{32}y + h_{33} \end{pmatrix} \qquad (2.2)$$

$$\hat{x} = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \qquad \hat{y} = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \qquad (2.3)$$

$$h_{11}x + h_{12}y + h_{13} - h_{31}x\hat{x} - h_{32}y\hat{x} - h_{33}\hat{x} = 0 \qquad (2.4)$$

$$h_{21}x + h_{22}y + h_{23} - h_{31}x\hat{y} - h_{32}y\hat{y} - h_{33}\hat{y} = 0 \qquad (2.5)$$

Figure 2.1: Perspective transformation of the computer screen on different planes (top): (1) source image, (2) projected source image, (3) projected image on the camera plane, (4) camera-to-display mapping. Camera-to-display mapping using homography (bottom).

Equations 2.4 and 2.5 can be written in matrix form as $Ah^{T} = 0$:

$$Ah^{T} = \begin{pmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1\hat{x}_1 & -y_1\hat{x}_1 & -\hat{x}_1 \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -x_1\hat{y}_1 & -y_1\hat{y}_1 & -\hat{y}_1 \\ & & & & \vdots & & & & \\ x_n & y_n & 1 & 0 & 0 & 0 & -x_n\hat{x}_n & -y_n\hat{x}_n & -\hat{x}_n \\ 0 & 0 & 0 & x_n & y_n & 1 & -x_n\hat{y}_n & -y_n\hat{y}_n & -\hat{y}_n \end{pmatrix} \begin{pmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \\ h_{31} \\ h_{32} \\ h_{33} \end{pmatrix} = 0 \qquad (2.6)$$

Because the system $Ah^{T} = 0$ is homogeneous, Gaussian elimination cannot be used to solve it; in order to calculate the vector h, a Singular Value Decomposition (SVD) has to be used. The matrix A is determined using the coordinates $x_i, y_i$ and $\hat{x}_i, \hat{y}_i$ ($i \in \{1...n\}$) from both coordinate systems. The accuracy of the camera-to-display mapping increases with the number of points obtained from both coordinate systems.

2.2 Calibration process

The calibration process obtains coordinates in both coordinate systems to create the camera-to-display mapping. It can be automatic or manual. An automatic calibration is often used when the detection is in the visible light spectrum (the camera captures visible light and the projector also projects only in the visible light spectrum): a calibration pattern can then be projected and detected using the camera to obtain the coordinates in both planes. The most common calibration patterns are a chessboard and an asymmetrical circle pattern (Figure 2.3). The advantages of automatic calibration are speed and accuracy.

Manual calibration (Figure 2.2) suffers from lower accuracy due to the difficulty of locating the light source at a given point. The light source can be the laser spot or an LED. Several calibration points are projected on the display surface.

The user needs to go near the display surface and point to each of them with the infrared LED to determine the coordinates in the display surface coordinate system. A problem may occur because the camera detects not only the light at the tip of the LED pen but also its reflection on the display surface. The user can also point inaccurately at the center of the calibration points.

Figure 2.2: Manual calibration process.

Figure 2.3: Chessboard pattern (left). Asymmetrical circle pattern (right).

Inayatullah Khan et al. [2] use a camera working in the visible light spectrum, thanks to which an automatic calibration using the projector can be done. The authors use a chessboard pattern with 88 calibration points and report the accuracy of this system as a zero screen pixel difference. This metric measures the precision of the system and is defined as the distance between the center of the spotlight and the corresponding coordinates on the display surface.

Another type of calibration is a green and pink pattern (Figure 2.4), introduced by Ali Khalid et al. [3]. The disadvantage of this calibration is that no movement in the background is allowed, because it is based on changes in the camera view: the green image is replaced with the pink one. Only 4 points are detected in this calibration process (the corners of the computer screen); therefore, it achieves lower accuracy (the reported offset is 1 screen pixel).

Figure 2.4: Green and pink patterns [3]. The green colored image is replaced with the pink one. Assuming that the background remains unchanged, the two images are subtracted and binarized, which results in the extracted area (right).

LED detection is a problem similar to laser spot detection. It can be in the visible or infrared light spectrum. Johnny Lee introduced a low-cost interactive whiteboard solution [4], which uses a Nintendo Wii Remote as an infrared camera to detect an infrared LED. It is necessary to use a manual calibration process, because the camera detects only the infrared light spectrum, which cannot be projected by the video projector.
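To make the automatic calibration concrete, the following sketch shows how a projected chessboard can be detected and the homography estimated with OpenCV. It is a minimal illustration under our own assumptions (the function name, parameters, border width, and corner ordering are ours), not the implementation of any cited work.

#include <opencv2/opencv.hpp>
#include <vector>

// Detect the projected chessboard in one camera frame and estimate the
// camera-to-display homography. Returns an empty matrix on failure.
// patternSize is the number of inner corners (e.g. 9 x 6).
cv::Mat calibrateHomography(const cv::Mat &cameraFrame,
                            cv::Size patternSize,
                            int squareSizePx, int borderPx) {
    cv::Mat gray;
    cv::cvtColor(cameraFrame, gray, cv::COLOR_BGR2GRAY);

    // Inner corner coordinates in the camera plane (x, y).
    std::vector<cv::Point2f> cameraCorners;
    if (!cv::findChessboardCorners(gray, patternSize, cameraCorners))
        return cv::Mat();

    // Corresponding corner coordinates on the computer screen (x^, y^),
    // assuming the same row-major corner ordering as the detector uses.
    std::vector<cv::Point2f> screenCorners;
    for (int row = 0; row < patternSize.height; row++)
        for (int col = 0; col < patternSize.width; col++)
            screenCorners.push_back(cv::Point2f(
                borderPx + (col + 1) * squareSizePx,
                borderPx + (row + 1) * squareSizePx));

    // findHomography solves the homogeneous system of Equation 2.6 from
    // all point correspondences (internally via SVD / least squares).
    return cv::findHomography(cameraCorners, screenCorners);
}

A detected laser spot at camera coordinates can then be transferred to screen coordinates with cv::perspectiveTransform using the returned matrix.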

2.3 Noise reduction

The display surface is mostly affected by the light conditions of the room. For video cameras, exposure is defined as the amount of light which reaches the CCD sensor. It depends on three factors and can be controlled manually or automatically by the camera driver:

ISO: The sensitivity of the sensor. With increased sensitivity, the noise of the image also increases.

Aperture: The amount of light which enters the camera (the size of the opening into the camera).

Shutter speed: The amount of time for which the light reaches the sensor.

Areas of pixels with the maximum brightness value are called over-exposed (vice versa, under-exposed) areas (Figures 2.5, 2.6). In over-exposed areas, the real brightness of objects cannot be recognized, because it is beyond the sensor sensitivity. This means that it is impossible to distinguish the laser spot from the surrounding pixels; therefore, the over-exposed areas need to be reduced.

Inayatullah Khan et al. [2] introduce a Self Camera Exposure Control. This method calculates a Mean Sample Value (MSV) from the histogram of each camera image. In order to make the laser spot well visible, the authors use the MSV to control the exposure of the camera. The laser spot is seen clearly on under-exposed images with MSV = 1.12. The histogram of the image luminance is divided into 5 bins and the MSV is calculated by the formula:

$$\mathrm{MSV} = \frac{\sum_{i=0}^{n-1} (i+1)\, h_i}{\sum_{i=0}^{n-1} h_i}, \qquad (2.7)$$

where n is the number of histogram bins and $h_i$ is the value of bin i. In this solution, a laser with an output power of 500 mW is used, thanks to which the under-exposure did not affect its illumination.
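A minimal sketch of the MSV computation from Equation 2.7, assuming a grayscale input image and the 5 luminance bins used by Khan et al. [2]; the function name is ours.

#include <opencv2/opencv.hpp>

// Mean Sample Value (Equation 2.7) of a grayscale image with n = 5
// luminance bins; the result ranges from 1 (dark) to 5 (bright).
double meanSampleValue(const cv::Mat &gray) {
    const int bins = 5;
    double h[bins] = {0};
    for (int y = 0; y < gray.rows; y++)
        for (int x = 0; x < gray.cols; x++)
            h[gray.at<uchar>(y, x) * bins / 256]++; // bin index 0..4

    double num = 0, den = 0;
    for (int i = 0; i < bins; i++) {
        num += (i + 1) * h[i];
        den += h[i];
    }
    return num / den;
}

An exposure controller in the spirit of [2] would then lower the camera exposure while the MSV stays above the desired value (such as the 1.12 reported above) and raise it otherwise.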

Figure 2.5: Example of the influence of camera exposure on image brightness, introduced by Inayatullah Khan et al. [2]. The system calculates a Mean Sample Value (MSV) of the image brightness, which is used to maintain a convenient camera exposure. An over-exposed image with MSV = 3.54 (right), a median exposed image with MSV = 2.5 (middle), and an under-exposed image with MSV = 1.12 (left), where the laser spot can be seen clearly.

Figure 2.6: Reduction of over-exposed areas with a Neutral Density (ND) filter.

Optical filters can also be used to reduce the over-exposed areas. Neutral Density (ND) filters reduce the amount of light which enters the lens while keeping the hue unchanged (Figure 2.6). The result of an ND filter placed in front of the camera is an image of lower light intensity.

The noise in images can be reduced with convolution filters [5]. In this method, the new value of a pixel is calculated from the values of the surrounding pixels. An often used technique is Gaussian smoothing [5]. This technique calculates the pixel value by averaging the values of the surrounding pixels according to a convolution mask, assuming that the surrounding pixels have relatively similar values. With the weighted average, the pixels near the center have a greater weight than those further away. The median filter [5] is also often used to reduce the noise in images. Unlike Gaussian smoothing, the value of each pixel is replaced by the median of the block of surrounding pixels. Figure 2.7 shows that Gaussian smoothing reduces the noise, but the edges are blurred due to the averaging of the surrounding pixels. The median filter also induces blurring, but the edges remain sharper.
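Both filters are available directly in OpenCV; a short sketch (the 5x5 kernel sizes are our choice):

#include <opencv2/opencv.hpp>

// Two common noise reduction filters applied to one camera frame.
void denoiseExamples(const cv::Mat &frame, cv::Mat &gaussian, cv::Mat &median) {
    // Gaussian smoothing: weighted average over a 5x5 neighborhood;
    // sigma is derived automatically from the kernel size.
    cv::GaussianBlur(frame, gaussian, cv::Size(5, 5), 0);
    // Median filter: each pixel is replaced by the median of a 5x5
    // block, which keeps the edges sharper.
    cv::medianBlur(frame, median, 5);
}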

Figure 2.7: Noisy image (left), noise reduction with Gaussian smoothing (middle), noise reduction with the median filter (right).

2.4 Thresholding

All the related work is based on the fact that the laser spot is brighter than the light from the projector. The pixels that could form the laser spot are segmented by thresholding. Lapointe and Godin [6] and Wang et al. [7] mention that the center of the laser spot appears as the brightest white point in the image, regardless of the laser color. The authors detect the laser spot by processing the grayscale image. Lapointe and Godin estimate the threshold after the camera exposure settings within the initialization process. During the initialization, the display surface is captured for a predetermined amount of time without the appearance of the laser spot. The threshold is set to the maximum brightness value measured over all of these captured images. The disadvantage of this method is that the threshold is set up just once (during the initialization process). Changes in the light conditions of the room can cause a false positive or false negative detection of the laser spot.

Khan et al. [2] and Meško and Toth [8] use an adaptive threshold algorithm. The authors use the HSV color model, where the color is separated from its saturation and brightness. The algorithms find the maximum value of the Value channel of the pixels in each image, which is used to calculate the threshold. The neighboring pixels that pass the threshold are grouped and create blobs. The blob which represents the laser spot is later recognized among the other created blobs by its Hue value. Meško and Toth [8] mention that the middle of the laser spot does not contain information about the color, because this area is too bright and appears white even if the color of the laser is red (Figure 2.8). The pixels outside the bright area in the middle are used to recognize the color of the laser spot.

Ahlborn et al. [9] point out that the laser spot has a varying intensity on different parts of the display surface. Therefore, they use a threshold image instead of just one threshold value for the whole image. This threshold image contains the maximum measured intensity for each pixel on different parts of the display surface. It is obtained after adjusting the camera settings, when a full white screen is projected onto the display surface for a predetermined amount of time. When some pixel has a greater intensity than the intensity of the corresponding pixel in the threshold image, the pixel is classified as a part of the laser spot. The final coordinates of the laser spot are calculated by weighted averaging of the coordinates of the satisfying pixels by the formula:

$$x = \frac{\sum_{i=0}^{n} p_i.x \,(p_i.intensity - \hat{p}_i.intensity)}{\sum_{i=0}^{n} (p_i.intensity - \hat{p}_i.intensity)} \qquad (2.8)$$

$$y = \frac{\sum_{i=0}^{n} p_i.y \,(p_i.intensity - \hat{p}_i.intensity)}{\sum_{i=0}^{n} (p_i.intensity - \hat{p}_i.intensity)}, \qquad (2.9)$$

where n is the number of pixels that fit the threshold, $p_i$ is a pixel in the camera image, and $\hat{p}_i$ is the corresponding pixel in the threshold image.

Figure 2.8: The appearance of a red laser spot in the project of Meško and Toth [8]. Incorrect exposure settings (left), correct exposure settings (middle), an illustration of the middle and surrounding area of the laser spot (right). The middle area of the laser spot is too bright (the white area on the right). The pixels near the bright area in the middle are used to recognize the color of the laser spot (the gray area on the right).

2.5 Laser spot detection

After thresholding, the most probable laser spot pixels should be separated from the background. Because of the varying light conditions, false positive blob detections have to be taken into consideration. Therefore, all of the obtained blobs should be tested according to other specific features as well. Here we present several methods to obtain the blob of the final laser spot.

Ahlborn et al. [9] calculate the final coordinates of the laser spot by the weighted averaging of all the obtained pixel coordinates. The pixels are weighted by their intensity minus the intensity of the threshold image at the corresponding position.
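The weighted centroid of Equations 2.8 and 2.9 can be sketched as follows, assuming single-channel camera and threshold images of equal size; this is our paraphrase of the approach of Ahlborn et al. [9], not their code.

#include <opencv2/opencv.hpp>

// Weighted centroid of all pixels brighter than the per-pixel threshold
// image (Equations 2.8 and 2.9). Returns (-1, -1) if no pixel qualifies.
cv::Point2f laserCentroid(const cv::Mat &frame, const cv::Mat &thresholdImage) {
    double sumX = 0, sumY = 0, sumW = 0;
    for (int y = 0; y < frame.rows; y++) {
        for (int x = 0; x < frame.cols; x++) {
            double w = (double)frame.at<uchar>(y, x)
                     - (double)thresholdImage.at<uchar>(y, x);
            if (w > 0) { // the pixel exceeds its threshold
                sumX += x * w;
                sumY += y * w;
                sumW += w;
            }
        }
    }
    if (sumW == 0)
        return cv::Point2f(-1, -1);
    return cv::Point2f((float)(sumX / sumW), (float)(sumY / sumW));
}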

The detection algorithm introduced by Lapointe and Godin [6] consists of several steps. If only one blob is detected, it is straightforwardly considered the laser spot. When multiple blobs are detected within a nearby distance, they are merged and averaged to a single position, which is considered the laser spot. In the case when multiple blobs are detected all over the display surface, it cannot be decided which one is the laser spot, and the situation is treated as if no laser spot were detected.

Khan et al. [2] analyze the blobs by their geometry. A blob that could be considered the laser spot needs to have a predetermined number of pixels and an elliptical shape. Finally, the laser spot is recognized by the mean Hue value of the created blob.

The algorithm introduced by Meško and Toth [8] first calculates the count of the pixels obtained by thresholding, which needs to be lower than 100. The algorithm groups all the neighboring pixels to be considered as one blob. For each blob, the centroid and diameter are calculated. To determine whether the laser spot blob is distinctive from its surroundings, the algorithm scans the area around the centroid (the diameter of the scanned area is two times larger than the diameter of the blob) and checks the proportion of bright pixels: a new threshold is set to 90% of the maximum brightness value in the scanned area, and the proportion of pixels that fit this threshold needs to be lower than 0.3. The last step is color recognition by the mean Hue value. The resulting color of the blob is the one with the highest number of pixels.

2.5.1 Methods enhancing the detection

Benjamin A. Ahlborn et al. [9] designed a laser pointer interaction for large displays. In order to maintain the accuracy of the detection, the system uses 4 cameras which record different small parts of the display surface. Unfortunately, the large number of cameras can increase the latency of the system because of high memory and CPU requirements, and the laser spot is often visible to only one camera. The authors speed up the detection by searching for the laser spot in a small sub-image at the predicted location. If the laser spot is detected in a part of the display surface, the detection history is updated and the laser spot is then searched for only in the small predicted sub-image. If the laser spot is not found, the detection history is cleared and the system restarts the detection with all cameras.

Motion detection can also be used to detect the laser spot. Kirstein and Müller [10] start the detection in regions that have changed. Each frame is compared to a reference image, which is obtained by averaging a certain number of previous frames. The differences between the current frame and the reference image describe the movements in regions. Motion detection can be used for presentations with a mostly static background.

2.6 Interaction

This section focuses on producing mouse events with the laser pointer. Commercial interactive whiteboards basically activate the mouse events right after the sensor detects a touch or a specific light on the display surface. Interaction with the laser pointer from a distance needs to be realized differently, because it is difficult to predict the position where the laser spot will be displayed. Another problem which needs to be considered is the difficulty of keeping the pointer at the same place due to small unconscious hand movements.

Aizeboje and Peng [11] emulate mouse events like the single left click, double left click, and right click by quickly turning the laser off and on. When the laser spot is detected on the display surface, the cursor follows its position. After the laser is turned off, the coordinates of its last appearance are stored. If the laser spot is then detected within a predetermined amount of time, a specific mouse event is activated at these coordinates. The single left click is realized by turning the laser off and on within one second. To realize the double left click or the right click, the laser spot needs to be detected twice (double left click) or three times (right click) within one second.

A similar interaction is introduced by Nirav A. Vasa. Unlike the previous method, the mouse event is realized only when the laser spot appears within a predetermined range of its last appearance.

The small shaking movements of the cursor are reduced by measuring the distance between the laser spot positions in the current and previous frames. The author also takes into consideration the false negative detection of the laser, which could cause undesired mouse events. To avoid this problem, upper and lower limits for true positive detections are created for what is considered a click.

Wang et al. [12] designed three interactive modes: a mode for the single and double left click, a cursor moving mode, and a drag and drop mode. In one interactive mode, only one specific mouse event can be performed. To select the interactive mode, the user needs to point the laser at the icon of the specific mode. In cursor moving mode, the cursor just follows the position of the laser spot. When the single or double left click mode is selected, the user first needs to point the laser at the position to be clicked; the specific click is performed after the laser is turned off. The drag and drop mode allows the user to write notes, highlight text, or drag icons. The user first needs to point at the position or icon to be dragged and turn the laser off. When the laser is detected again, the selected icon is dragged.

Chowdhary et al. [13] divide the interaction into a presentation mode and a mouse controlling mode. The presentation mode allows the user to switch the slides of the presentation using gestures. The display surface is divided into a lower, middle, and upper region. The transition to the next slide is realized by moving the laser spot from the left side of the lower region through the middle to the right side of the upper region. The transition to the previous slide is realized similarly, but it starts from the right side of the lower region and ends at the left side of the upper region. For the gesture to be considered a trigger of the transition, the laser spot needs to be detected in each region on 5 consecutive frames. The mouse controlling mode allows performing the single and double left click, the right click, and the drag and drop event. The single and double left clicks are activated when the laser spot is detected on 5 frames (single left click) or 10 frames (double left click) within a predetermined range. The right click and the drag and drop event are controlled by gestures. To determine which mouse event will be activated, the laser spot first needs to be moved back and forth in a vertical direction (right click) or a horizontal direction (drag and drop). The specific event is activated after the laser spot is detected on 5 frames within a predetermined range.
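To make the off/on click conventions concrete, the following sketch counts laser reappearances within a one-second window, in the spirit of Aizeboje and Peng [11]; the class, its names, and the logic are our illustration, not code from the cited papers.

#include <chrono>

// Turns quick off/on toggles of the laser into mouse events:
// 1 reappearance within one second = single left click, 2 = double
// left click, 3 = right click, following Aizeboje and Peng [11].
enum class MouseEvent { None, LeftClick, DoubleClick, RightClick };

class ClickDetector {
    using Clock = std::chrono::steady_clock;
    Clock::time_point windowStart;
    int reappearances = 0;
    bool laserVisible = false;
    bool seenBefore = false; // the laser must have been on at least once
public:
    // Call once per processed frame with the detection result.
    // A real implementation would also check that the laser reappears
    // near its last position; this sketch only handles the timing.
    MouseEvent update(bool laserDetected) {
        auto now = Clock::now();
        if (laserDetected && !laserVisible && seenBefore) { // turned back on
            if (reappearances == 0)
                windowStart = now;
            reappearances++;
        }
        if (laserDetected)
            seenBefore = true;
        laserVisible = laserDetected;

        // Classify once the one-second window has elapsed.
        if (reappearances > 0 && now - windowStart > std::chrono::seconds(1)) {
            int n = reappearances;
            reappearances = 0;
            if (n == 1) return MouseEvent::LeftClick;
            if (n == 2) return MouseEvent::DoubleClick;
            return MouseEvent::RightClick;
        }
        return MouseEvent::None;
    }
};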


Chapter 3

Design

The initialization process of our solution consists of setting up the camera-to-display mapping. The coordinate systems of the camera and the computer screen are different: every time the camera captures the display surface, the shape of the computer screen is perspectively transformed into a trapezoid. We use a homography to transfer the coordinates between the camera and screen coordinate systems. The homography matrix is calculated from calibration points in the screen and camera planes during the calibration process. During the interaction, when the laser spot is detected, its coordinates are perspectively transformed into the coordinates of the computer screen according to the homography matrix.

3.1 Calibration process

The related work introduces manual and automatic calibration processes. We suggest using an automatic calibration process, because the camera captures the visible light spectrum. An automatic calibration is more efficient and precise because it does not require human interaction; human interaction causes lower precision because of the possibility of incorrectly locating the light source at the given point. We use automatic calibration with the chessboard pattern, which is the most common. The chessboard (Figure 3.1) is dynamically generated according to the resolution of the screen. The dimension of a chessboard square is calculated by the formula:

$$d = \frac{width - 80}{9}, \qquad (3.1)$$

where width represents the width of the computer screen. The chessboard needs to be surrounded by a white border for the recognition of the black quadrangles on its edges; therefore, we subtract 80 pixels from the screen width. For our screen resolution, a chessboard with dimensions 9 x 6 is displayed.

The chessboard is generated and captured with the default camera settings. We binarize the images with the adaptive threshold function to make the chessboard recognizable under the varying light conditions in different areas of the image. The two main input parameters of the adaptive threshold are a blocksize (in pixels) of the square area around the pixel and a constant.

Figure 3.1: Chessboard pattern captured by the camera (top left). Image after binarization with the adaptive threshold (top right). The detected chessboard corners (bottom).

The function calculates a threshold value for each pixel of a grayscale image by the formula:

$$T(x, y) = I(x, y, blocksize) - C, \qquad (3.2)$$

where T(x, y) is the threshold value for the pixel (x, y), I(x, y, blocksize) is the mean or Gaussian mean intensity of the square area around the pixel (x, y), blocksize is the dimension of the square area, and C is a constant to be subtracted from the mean or Gaussian mean. The binarization of a pixel then follows:

$$p(x, y).I = \begin{cases} max & p(x, y).I > T(x, y) \\ min & \text{otherwise,} \end{cases} \qquad (3.3)$$

where p(x, y).I is the intensity of the pixel (x, y). We use the normal mean intensity of the square area. The blocksize of the square area is 49 pixels and the constant value is 2. The algorithm detects the black quadrangles on a white surface; the points of contact of the black quadrangles are considered the chessboard corners. From experience, we found 250 ms sufficient to detect the chessboard corners. The coordinates of the detected chessboard corners are the input to the homography matrix, together with the corresponding coordinates on the computer screen.
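A sketch of the dynamic chessboard generation following Equation 3.1, assuming a 40-pixel white border on each side; the function name and drawing details are our own.

#include <opencv2/opencv.hpp>

// Generate the calibration chessboard for the given screen size
// (Equation 3.1: square dimension d = (width - 80) / 9).
cv::Mat generateChessboard(int screenWidth, int screenHeight) {
    const int border = 40; // white border on each side (assumed)
    int d = (screenWidth - 2 * border) / 9;
    // White canvas: the border is needed so that the black quadrangles
    // on the edges of the chessboard can be recognized.
    cv::Mat board(screenHeight, screenWidth, CV_8UC3, cv::Scalar(255, 255, 255));
    for (int row = 0; border + (row + 1) * d <= screenHeight - border; row++)
        for (int col = 0; col < 9; col++)
            if ((row + col) % 2 == 0)
                cv::rectangle(board,
                              cv::Rect(border + col * d, border + row * d, d, d),
                              cv::Scalar(0, 0, 0), cv::FILLED);
    return board;
}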

Figure 3.2: The laser spot on a black background captured with the default camera settings (left). An illustration of the center area (CA) (the red area in the middle image) and the surrounding area (SA) (the gray area in the middle image) of the laser spot; the SA is used to recognize the color of the laser spot (middle). The highlighted pixels of the CA and the SA (right).

3.2 The laser spot detection

Laser spot detection in all the related work is characterized by adjusting the camera settings (the exposure, brightness, contrast, ...) to reduce the amount of background light. Unfortunately, a lot of webcams do not allow the user to adjust these settings. Therefore, we decided to design detection algorithms for both default and adjusted camera settings.

3.2.1 The detection in default camera settings

The algorithm is inspired by the project of Meško and Toth [8]. Although the authors adjust the exposure of the camera, the laser spot still appears as a bright white blob with red surroundings, which is similar to its appearance with the default camera settings. We suggest recognizing the laser spot by the color of the surrounding area (SA) and the center area (CA) of the bright blobs (Figure 3.2). Each camera image is first filtered using the adaptive threshold function with the following input parameters: blocksize = 9, constant = 9, using the Gaussian mean intensity of the square area. The filtered areas are scanned for the maximum brightness value in the red channel of the RGB color model, which is used to calculate the threshold (97% of the measured maximum). All the filtered areas are binarized with this threshold, because the center of the laser spot usually appears as one of the brightest points in the image. Neighboring pixels are grouped to be considered as one blob. We set the maximum limit for the area of one blob to 100 pixels. For each blob, the centroid is calculated.

Color recognition is the last step of the detection. The laser spot appears differently on different background colors. By analyzing the images, we found 5 types of background colors on which the laser spot can be recognized (Figure 3.3). Each blob is scanned for the mean Hue, Saturation, and Value of the CA and the SA (Figure 3.2). For the recognition of the laser spot, we set different threshold values for each individual background color.

Figure 3.3: The approximate background colors on which the laser spot can be detected: black (a), brown/orange (b), white (c), blue (d), and green (e).

The threshold values are listed in Table 3.1. A blob that passes the color recognition is considered the laser spot.

Table 3.1: The threshold values for the recognition of the laser spot on the different background colors. We set the threshold values for the mean Hue (H), Saturation (S), and Value (V) in the surrounding area (s) and the center area (c) of the laser spot (Figure 3.2).

color        black/brown       white         blue              green
Hs range     <-20,25>          <-30,25>      <-120,0>          <20,70>
Hc range     <-20,25>          not defined   <-90,25>          <-20,25>
Ss range     <30,75>           <0,20>        not defined       not defined
Sc range     not defined       <0,20>        <30,100>          not defined
S condition  Sc <= (Ss + 3)    not defined   Sc <= (Ss + 3)    not defined
Vs range     not defined       <55,70>       not defined       not defined
Vc range     <70,100>          <70,100>      <60,100>          <65,100>
V condition  Vs <= (Vc - 20)   not defined   Vs <= (Vc - 20)   Vs <= (Vc - 20)

In the case when multiple blobs pass the color recognition, the blob with the highest Value in the CA is considered the laser spot. When the laser spot was detected in the previous frame and multiple blobs pass the color recognition in the current frame, the algorithm selects the one with the shortest distance to the previous location of the laser spot.

3.2.2 The detection in adjusted camera settings

In order to increase the detection rate, we adjust the exposure, gain, brightness, contrast, color intensity, and white balance of the camera (Figure B.1). With the adjusted camera settings, the laser spot should appear as the brightest red spot in the image. The algorithm is based on thresholding implemented in the RGB color model. By analyzing the images, we found a suitable lower threshold for the red channel of the center of the laser spot (the threshold value is 135). To reduce false positive detections caused by a white background (e.g., sunlight), we set the upper threshold for the blue and green channels to 230 (this threshold was also found during the analysis of the images). The algorithm scans the whole image and searches for a suitable pixel according to the threshold values in the red, green, and blue channels. When such a pixel is found, the algorithm scans its CA and SA, which are shown in Figure 3.4. This time, we consider the SA as belonging approximately to the edge of the laser spot. The recognition of the laser spot is based on the fact that the center of the laser spot appears as a bright red spot, while the intensity of the red color at its edge is significantly smaller. The intensity of the laser spot depends on the intensity of the red light in the area at which the laser is pointing. According to the image analysis, we divide the mean intensity of the red channel in the SA of the laser spot into 7 intervals. For each interval, we set a threshold value for the red channel of the pixel in the center of the laser spot and a threshold value for the mean intensity of the red channel in the CA of the laser spot. The threshold values are listed in Table 3.2.

The algorithm continues to scan the image until the pixel with the maximum intensity in the red channel is found (it is considered the center of the laser spot). When the laser was detected in the previous frame, the algorithm first searches for the laser spot in a small sub-image around its previous location. We found a square sub-image with dimensions of 80x80 pixels sufficient for the average distance between the positions of the laser spot in 2 consecutive frames. Because the refresh rate of the projector and the shutter speed of the camera are only approximately equal, the intensity of the laser spot can differ between captured frames. Therefore, in the sub-image we lower the threshold for the red channel of the center of the laser spot to 80. When no laser spot is detected in the sub-image, the algorithm scans the whole image (with the lower threshold for the red channel set back to its original value).
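The sub-image search described above can be sketched as a clamped region of interest; the helper name is ours.

#include <opencv2/opencv.hpp>

// Build the 80x80 pixel search window around the previous laser
// position, clamped to the image borders. The detector scans this
// region first (with the lowered red threshold) and falls back to
// the whole image when the spot is not found there.
cv::Rect searchWindow(cv::Point previous, const cv::Mat &frame) {
    cv::Rect roi(previous.x - 40, previous.y - 40, 80, 80);
    return roi & cv::Rect(0, 0, frame.cols, frame.rows); // intersection
}

The full-image scan (Listing A.1) can then be reused on frame(searchWindow(previousSpot, frame)).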

Figure 3.4: The laser spot on a black background captured with the adjusted camera settings (left). An illustration of the center area (CA) (the red area in the middle image) and the surrounding area (SA) (the gray area in the middle image) of the laser spot; the SA represents approximately the edge of the laser spot (middle). The highlighted pixels of the CA and the SA (right).

3.3 Interaction

Our solution is designed to allow the user to write text or draw objects using the laser pointer. The interaction is performed by emulating mouse functions. While the laser spot is detected, the cursor follows its position and we emulate the left mouse button down event. The interaction also allows performing the single left click and the drag and drop event. The double left click can be performed by using the single click twice at the same location (within the predetermined amount of time for the double left click), but it is difficult to perform because of small unconscious hand movements.
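The cursor-follow part of the writing mode can be sketched with Qt, which the implementation uses (Appendix A). The mapping uses the homography from the calibration; synthesizing the left-button-down event is platform specific, so it is only indicated in a comment. The helper name is ours.

#include <QCursor>
#include <opencv2/opencv.hpp>
#include <vector>

// Move the cursor to the detected laser spot. The camera coordinates
// are first transferred to screen coordinates with the homography H.
void followLaser(const cv::Mat &H, cv::Point2f cameraPoint) {
    std::vector<cv::Point2f> in{cameraPoint}, out;
    cv::perspectiveTransform(in, out, H);
    QCursor::setPos((int)out[0].x, (int)out[0].y);
    // While the spot stays detected, the left mouse button down event
    // is emulated through the platform API (e.g. XTest on Linux or
    // SendInput on Windows) so that strokes are drawn on the screen.
}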

Figure 3.5: The appearance of the laser spot with default (top) and adjusted (bottom) camera settings.

Table 3.2: The intervals of the intensity in the surrounding area (SA) of the laser spot. For each interval, the lower threshold for the mean intensity in the center area (CA) and the lower threshold for the center pixel (P) are determined. All values are for the red channel of the RGB color model. When no laser was detected in the previous frame, the algorithm uses only intervals 4 to 7; otherwise, it uses all intervals.

Interval number   SA range    CA lower threshold   P lower threshold
1                 <0,14>      …                    …
2                 <15,29>     …                    …
3                 <30,38>     …                    …
4                 <0,61>      …                    …
5                 <62,75>     …                    …
6                 <76,95>     …                    …
7                 <96,230>    …                    …


Chapter 4

Evaluation

To evaluate our algorithms, we use datasets recorded with the default and with the adjusted camera settings.

4.1 The dataset with the default camera settings

The dataset with the default camera settings consists of 6 videos captured in 2 different rooms, all at the same resolution. Five videos were captured in the first room during the afternoon (April, 19:00). In the other room, only 1 video was captured, in common daylight conditions: except for the projector, there is no other light source in the room, and the sunlight does not shine directly into the room. The default camera settings are shown in Figure 4.1. The total number of frames is …, of which … contain the laser spot. The training dataset consists of 1 video, the validation dataset of 2 videos, and the test dataset of 3 videos.

4.2 The dataset with the adjusted camera settings

The dataset with the adjusted camera settings consists of 24 videos captured in 2 different rooms, all at the same resolution. We recorded the videos in 3 sessions, each with different camera settings (Figure 4.2). The dataset is divided into subsets with common daylight conditions, bright light conditions, and changing light conditions.

The subset with common daylight conditions consists of 8 videos: except for the projector, there is no other light source in the room, and the sunlight does not shine directly into the room. The total number of frames is …, of which … contain the laser spot.

The subset with bright light conditions consists of 6 videos: the light in the room is turned on, or the sun shines through the window into the room. The total number of frames is …, of which … contain the laser spot.

The subset with changing light conditions consists of 10 videos, in which the common daylight and the bright light conditions alternate. The total number of frames is …; the number of frames with the laser spot is 7734.

The training dataset consists of 3 videos with common daylight conditions, 3 videos with bright light conditions, and 5 videos with changing light conditions. The validation dataset consists of 3 videos with common daylight conditions, 2 videos with bright light conditions, and 2 videos with changing light conditions. The test dataset consists of 2 videos with common daylight conditions, 1 video with bright light conditions, and 3 videos with changing light conditions.

4.3 Annotations

We annotated the datasets manually. The ground truth annotations contain the following information:

Frame ID: the id of the frame the annotation belongs to,
X coordinate: the vertical coordinate of the laser spot; when no laser spot is visible, the value is -1,
Y coordinate: the horizontal coordinate of the laser spot; when no laser spot is visible, the value is -1.

4.4 Evaluation

We evaluate the algorithms according to the following metrics:

True positive (TP): the laser spot is detected in accordance with the ground truth annotation (the maximum allowed deviation in each coordinate is an offset of 7 pixels),
True negative (TN): the ground truth coordinates are -1 and no laser spot is detected,
False positive (FP): the ground truth coordinates are -1 but a laser spot is detected,
False negative (FN): the ground truth coordinates are greater than -1 and no laser spot is detected, or the offset is greater than 7 pixels,

$$reliability = \frac{TP + TN}{number\ of\ frames}, \qquad precision = \frac{TP}{TP + FP}, \qquad recall = \frac{TP}{TP + FN},$$

$$F1 = \frac{2 \cdot precision \cdot recall}{precision + recall}.$$

Table 4.1: Results of the detection algorithm for the default camera settings. True positive (TP), true negative (TN), false positive (FP), false negative (FN), F1 score (F1).

Video     Frames   TP   TN   FP   FN   Reliability   Precision   Recall   F1
…         …        …    …    …    …    …             71%         28%      40%
90 test   …        …    …    …    …    …             72%         46%      56%
94        …        …    …    …    …    …             55%         25%      35%

Table 4.1 shows the results of the detection algorithm for the default camera settings; Table 4.2 shows the results of the detection algorithm for the adjusted camera settings.
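A short sketch of the metrics defined above, computed from per-video counts; the struct and function names are ours.

#include <cstdio>

// Frame-level counts for one video (Section 4.4).
struct Counts { double tp, tn, fp, fn; };

void printMetrics(const Counts &c) {
    double frames      = c.tp + c.tn + c.fp + c.fn;
    double reliability = (c.tp + c.tn) / frames;
    double precision   = c.tp / (c.tp + c.fp);
    double recall      = c.tp / (c.tp + c.fn);
    double f1          = 2 * precision * recall / (precision + recall);
    std::printf("reliability %.0f%%  precision %.0f%%  recall %.0f%%  F1 %.0f%%\n",
                100 * reliability, 100 * precision, 100 * recall, 100 * f1);
}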

Figure 4.1: Approximate default camera settings.

Figure 4.2: Different adjusted camera settings. The camera settings of videos 49, 53, 54, 55, 58, and 65 are in the top left image. The camera settings of videos 70, 71, 72, 74, and 76 are in the top right image (the best camera settings). The camera settings of videos 80 and 81 are in the bottom image.

Figure 4.3: A false positive (FP) detection of the laser spot caused by the cursor on a red colored background.

Figure 4.4: The appearance of the laser spot on a red colored background (the black circle in the top image). The green circle shows the false positive (FP) detection. The laser spot appears just as a small white dot, which we do not consider the laser spot (the black circle in the bottom image).

Table 4.2: Results of the detection algorithm for the adjusted camera settings. True positive (TP), true negative (TN), false positive (FP), false negative (FN), F1 score (F1). Common daylight conditions are at indexes 1 to 5, bright light conditions at indexes 6 to 8, and changing light conditions at indexes 9 to 13.

Index   Video     Frames   TP   TN   FP   FN   Reliability   Precision   Recall   F1
1       65 test   …        …    …    …    …    …             100%        85%      92%
2       70 test   …        …    …    …    …    …             100%        100%     100%
3       …         …        …    …    …    …    …             100%        100%     100%
4       …         …        …    …    …    …    …             95%         100%     97%
5       …         …        …    …    …    …    …             99%         99%      99%
6       …         …        …    …    …    …    …             98%         99%      98%
7       …         …        …    …    …    …    …             96%         99%      98%
8       49 test   …        …    …    …    …    …             99%         99%      99%
9       54 test   …        …    …    …    …    …             99%         92%      95%
10      … test    …        …    …    …    …    …             97%         96%      96%
11      … test    …        …    …    …    …    …             100%        99%      99%
12      …         …        …    …    …    …    …             99%         92%      95%
13      …         …        …    …    …    …    …             99%         99%      99%

4.5 Discussion

We evaluated the detection algorithms for the default and the adjusted camera settings. During the improvement phase of the algorithms, we considered the F1 score the main metric. With the default camera settings, many white objects on a red background appear similar to the laser spot (Figure 4.3), which causes FP detections. On a fully red colored background (Figure 4.4), the laser spot appears just as a small white dot, which we do not consider the laser spot. We found a black or brown colored background (Figure 3.3) to be the best background color for the detection; there, FN detections occurred only due to fast movements of the laser spot during the exposure time of the camera (Figure 4.5). Video 90 achieves the best result (F1: 56%) because most of its frames have a black or brown colored background; vice versa, video 94 achieves the lowest result (F1: 35%) because most of its frames have a white and red colored background. Despite the minimal FN and FP detections on a fully black or brown colored background, we do not consider the algorithm appropriate for real-time interaction.

The algorithm for the adjusted camera settings achieves the best results in common daylight conditions. With appropriately adjusted camera settings, the algorithm can be used for real-time interaction.

Figure 4.5: The continual appearance of the laser spot caused by its fast movement during the camera exposure time.


Chapter 5

Conclusion

In this thesis, we designed a software interactive whiteboard using a projector and a webcam. Commercial interactive whiteboards mostly use LED pens for the interaction. The problem is that such pens are not common hardware; therefore, we decided to replace the pen with a laser pointer. We analyzed the related work methods for detecting the laser spot. Laser spot detection in the related work is characterized by adjusting the camera settings (exposure, contrast, brightness, ...) to reduce the amount of background light. We designed an algorithm for laser spot detection with adjusted camera settings and also one for default camera settings, because a lot of webcams do not allow the user to adjust these settings.

The algorithm for default camera settings is based on thresholding in the HSV color model. We recognize the laser spot by the bright white color in its center area and the red color in its surrounding area. On different background colors, the laser spot appears differently; we determined the threshold values for the center and surrounding areas of the laser spot for its recognition on 5 background colors. The detection algorithm for adjusted camera settings is based on thresholding in the RGB color model. The recognition of the laser spot is based on the fact that the center area appears with a high red channel intensity, while the intensity of the red channel at its edge (the surrounding area) is significantly lower. We divided the intensity of the surrounding area into 7 intervals and determined the threshold values of the center area for each interval. We designed a writing mode for the interaction, which allows the user to draw objects or write text.

We evaluated each algorithm on a specific dataset. The algorithm for default camera settings achieves a maximum F1 score of 56%, which makes it inappropriate for real-time interaction. The algorithm for the adjusted camera settings achieves a minimum F1 score of 92%, and with appropriately adjusted camera settings it can be used for real-time interaction.

The interaction could be extended by a mouse controlling mode, which would allow the user to perform the double left click and the right click. The clicks would be realized by quickly turning the laser off and on while pointing it at approximately the same place. The detection could be improved by using a laser with greater output power.


Bibliography

[1] Gary Bradski and Adrian Kaehler. Learning OpenCV: Computer Vision in C++ with the OpenCV Library. O'Reilly Media, Inc., 2nd edition.

[2] I. Khan, A. Bais, P. M. Roth, M. Akhtar, and M. Asif. Self-calibrating laser-pointer's spotlight detection system for projection screen interaction. In International Bhurban Conference on Applied Sciences and Technology (IBCAST), Jan.

[3] M. H. Mahmood, M. A. Khalid, A. G. Malhi, and A. A. Khan. A novel robust laser tracking system with automatic environment adaptation and keystone correction. In 2011 Sixth International Conference on Image and Graphics, Aug 2011.

[4] J. C. Lee. Hacking the Nintendo Wii Remote. IEEE Pervasive Computing, 7(3):39-45, July.

[5] Front matter. In E. R. Davies, editor, Computer and Machine Vision (Fourth Edition), pages i-iii. Academic Press, Boston, fourth edition.

[6] J. F. Lapointe and G. Godin. On-screen laser spot detection for large display interaction. In IEEE International Workshop on Haptic Audio Visual Environments and their Applications, 5 pp., Oct.

[7] Ching-Sheng Wang and Sheng-Yu Peng. A laser point interaction system integrating mouse functions.

[8] Matej Meško and Štefan Toth. Laser spot detection. Pages 35-40.

[9] Benjamin A. Ahlborn, David Thompson, Oliver Kreylos, Bernd Hamann, and Oliver G. Staadt. A practical system for laser pointer interaction on large displays. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST '05, New York, NY, USA. ACM.

[10] Carsten Kirstein and Heinrich Müller. Interaction with a projection screen using a camera-tracked laser pointer. In MMM.

[11] J. Aizeboje and Taoxin Peng. An approach to using a laser pointer as a mouse. 2.


Appendix A

Technical Documentation

The solution is implemented in the C++ language using the OpenCV library and the Qt framework. The important OpenCV functions which we use are the following:

findChessboardCorners(...): finds the chessboard corners for the calibration process,
findHomography(...): calculates the homography matrix,
adaptiveThreshold(...): thresholds the laser spot in the algorithm for default camera settings,
blobDetector->detect(...): groups the neighbouring pixels after thresholding.

Listing A.1: The detection algorithm for the adjusted camera settings.

int g, b, r, maxr;
r = g = b = maxr = 0;
uchar *p = original.data;
for (int y = 0; y < original.rows; y++) {
    for (int x = 0; x < original.cols; x++) {
        b = *p; p++; // blue channel (BGR order)
        g = *p; p++; // green channel
        r = *p;      // red channel
        if ((b < 230) && (g < 230) && (r > 135) && (r > maxr)) {
            if (islaser(original.rows, original.cols, cv::Point(x, y), p)) {
                // Remember the strongest candidate so far.
                maxr = r;
                xr = x;
                yr = y;
            }
        }
        p++;
    }
}

The function islaser() scans the CA and the SA of the pixel. The threshold values used by the function are shown in Table 3.1 and Table 3.2.

Listing A.2: The detection algorithm for the default camera settings.

uchar maxr = 0;
// Convert to grayscale and filter the image with the adaptive threshold.
cvtColor(original, gray, CV_BGR2GRAY);
adaptiveThreshold(gray, gray, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 9, -9);
// Grow the filtered areas so that whole blobs are preserved.
dilate(gray, gray, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
dilate(gray, gray, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
original.copyTo(gray2, gray); // keep only the filtered areas
p = gray2.data;
... // search for the maximum red channel value: maxr
float lowerthresh = (float)maxr * (float)0.97;
p = gray2.data;
... // binarize gray2 by lowerthresh

SimpleBlobDetector::Params params;
// Filter the blobs by area.
params.filterByArea = true;
params.minArea = 1;
params.maxArea = 100;
...
blobDetector->detect(gray2, keypoints);

// Check the CA and SA of each blob in the HSV color model.
cv::Mat hsv;
cvtColor(original, hsv, CV_BGR2HSV);
for (int i = 0; i < keypoints.size(); i++) {
    Point2f point = keypoints.at(i).pt;
    int x = point.x;
    int y = point.y;
    uchar *p = hsv.data;
    value = islaser(hsv.rows, hsv.cols, cv::Point(x, y), &p[hsv.step * y + 3 * x]);
    if (value > -1 && value >= maxvalue) {
        // Keep the blob with the highest Value in the CA.
        maxvalue = value;
        xr = x;
        yr = y;
        laserseen = 1;
    }
}

Appendix B

User Guide

After running the application, the following window is displayed (Figure B.1). The stream from the camera is displayed in the black rectangle. At the beginning, the button "start interaction" is not clickable; the user first needs to calibrate the camera with the screen by clicking the button "calibrate". The application then displays a chessboard pattern on the whole screen for 250 ms. When the chessboard corners are detected, the application enables the button "start interaction". When the button "start interaction" is clicked, the cursor follows the position of the laser spot while a left mouse button down event is emulated.

Figure B.1: Illustration of the Graphical User Interface (GUI) of the application.
