
Charles University in Prague
Faculty of Mathematics and Physics

BACHELOR THESIS

Jaroslav Fibichr

Creating Panoramic Images from Photographs Acquired with Different Camera Settings

Department of Software Engineering
Supervisor: Mgr. Jiří Sedlář
Specialization: Computer Science, Programming

2009

First of all, I would like to thank my supervisor Mgr. Jiří Sedlář for his patience and valuable advice. I also want to thank Miroslav Tamáš for his advice on issues concerning Java programming, and Matúš Gažo for being very helpful with editing the text and testing. Finally, I want to thank my parents for lending me the digital camera used to take the sample photographs.

I hereby certify that I wrote the thesis myself, using only the referenced sources. I give consent to the lending of the thesis.

In Prague, May 29, 2009
Jaroslav Fibichr

Contents

1 Introduction
2 Basic Concepts
  2.1 Creating Panoramas
  2.2 Exposure
  2.3 Image Analysis and Enhancement
    2.3.1 Transformation Function
    2.3.2 Histogram Statistics
3 Used Algorithms
  3.1 Image Ordering
  3.2 Image Exposure Calibration
    3.2.1 Direct Level Mapping
    3.2.2 Using Histogram Statistics
  3.3 Image Stitching
    3.3.1 Searching the End Points of a Seam
    3.3.2 Choosing the Image for Seam Searching
    3.3.3 Searching the Seams
    3.3.4 Blending Images Together
4 Programmer's Reference
  4.1 Main Decisions
  4.2 Overview
  4.3 Data Structures
  4.4 Implementation
5 User's Guide
  5.1 System Requirements
  5.2 Input Images Preparation
  5.3 Usage
    5.3.1 Application Window
    5.3.2 Selection of Images
    5.3.3 Running the Process
    5.3.4 Preferences
6 Results and Related Work
  6.1 Software Overview
    6.1.1 Adobe Photoshop
    6.1.2 Hugin
    6.1.3 Autopano Pro
  6.2 Evaluation Criteria
  6.3 Results and Comparison
7 Conclusions
Bibliography
A CD Contents

Název práce: Tvorba panoramatických snímků z fotografií pořízených s rozdílným nastavením fotoaparátu
Autor: Jaroslav Fibichr
Katedra (ústav): Katedra softwarového inženýrství
Vedoucí bakalářské práce: Mgr. Jiří Sedlář, Ústav teorie informace a automatizace AV ČR
E-mail vedoucího: sedlar@utia.cas.cz
Abstrakt: Při fotografování scény za účelem vytváření panoramatických snímků bývá vhodnější upravit nastavení fotoaparátu pro každou fotografii zvlášť, aby zůstalo zachováno správné rozložení jasů. V takovém souboru obrázků pro panorama však bývají mezi spojovanými snímky velké jasové rozdíly. Cílem této práce je navrhnout řešení, které by rozdíly mezi snímky potlačilo a implementovat jej v programu Panomedic. První metoda je založena na přímém mapování jasových hodnot v překrývajících se částech snímků. Druhá metoda využívá statistických hodnot získaných z rozdělení jasů ve snímcích. Dále je popsána metoda pro hledání hranice mezi jednotlivými snímky použitá pro spojování. Výsledkem je panoramatická fotografie bez viditelných přechodů mezi zdrojovými snímky.
Klíčová slova: zpracování obrazu, korekce expozice, panorama, dynamický rozsah

Title: Creating Panoramic Images from Photographs Acquired with Different Camera Settings
Author: Jaroslav Fibichr
Department: Department of Software Engineering
Supervisor: Mgr. Jiří Sedlář, Institute of Information Theory and Automation of the ASCR
Supervisor's address: sedlar@utia.cas.cz
Abstract: When taking pictures for the purpose of creating panoramic images, it is often more convenient to adjust the camera settings for each image separately in order to keep a suitable brightness distribution. This, however, leads to a set of images with differences in brightness that must be corrected before stitching. The goal of this thesis is to propose methods that suppress these differences and to implement them in the application Panomedic. The first method is based on direct brightness level mapping in the overlapping area of the images. The second method utilizes statistical values obtained from the image brightness distribution. A method for finding the seam along which the images are stitched is also described. The resulting panoramic image contains no visible transitions between the input images.
Keywords: image processing, exposure correction, panorama, dynamic range

Chapter 1

Introduction

In the last few years, digital photography seems to have finally caught up with classic photography in many aspects: resolution has become sufficient, image quality and colour reproduction are good, and noise levels have been reduced. Moreover, digital photography gives us the possibility to process the captured images in various ways: to picture the real world more faithfully, to emphasize some aspect of the image, or simply to impress the viewer. This includes joining pictures together to create larger images called panoramas, which can also be utilized for scientific purposes.

When a photographer takes images intended as a source for a panorama, he should abide by certain rules. There are many options for how a digital camera can be set up before shooting, but some of them should be fixed for all photographs in the sequence in order to ensure convenient conditions for the stitching. However, this is sometimes not possible, as the camera may not allow the desired settings to be locked. These days, for example, many people take photos with their mobile phones, which have very simple, mostly fully automatic cameras.

The objective of this thesis is to find a solution to the problem of different camera exposure settings. Two or more geometrically aligned photographs are assumed as the input; there may be differences in brightness and contrast among them. The task is to adjust these input images and to stitch them into one panoramic image in which the differences are rectified.

The methods proposed in this text cover several topics related to creating panoramas. Before the images are adjusted, they need to be ordered into a sequence: the set of input images is represented by a graph and the sequence is created by a minimum spanning tree algorithm.

The second task is to align the exposures of the input images. We propose two methods based on analysis of the area in the intersection of the images. One method finds the transformation function directly from the corresponding pixel values; the image adjustments are then given by the values in the created lookup table. The other method utilizes statistical values obtained from the image histograms, which are matched to each other. The last topic concerns the stitching process itself. The proposed method finds the seam along which the neighbouring images are stitched together; the main goal of the algorithm is to prevent duplicated edges in the panorama. The seam is found by a graph searching algorithm.

The developed algorithms were implemented in a Java application called Panomedic. Panomedic allows the user to set the parameters of the processing; the processing itself runs automatically and does not require any user intervention. The application is written in the Java programming language, which brings the advantage of platform independence: it can be run under any version of Windows, Linux or Mac OS. It has a simple graphical interface which allows the user to load the input, change the processing preferences and view the input images as well as the result. It is intuitive and easy to use.

This thesis is organized as follows. The chapter Basic Concepts introduces the reader to the fundamentals of creating panoramas, e.g. exposure, image analysis and enhancement. The proposed methods are described in the chapter Used Algorithms. The chapter Programmer's Reference contains information about the implementation of Panomedic, and the chapter User's Guide includes instructions for using the developed application. In the chapter Results and Related Work we compare the achieved results with existing solutions. Finally, we summarize all the aspects of the work and suggest possible future development.

Chapter 2

Basic Concepts

In this chapter, we describe topics related to the work. This includes mainly techniques of capturing photographs for the purposes of creating panoramas, image registration, facts about exposure and its evaluation by histograms. The graph algorithms used in the implementation are also mentioned.

2.1 Creating Panoramas

As mentioned before, there are many ways to create panoramas. Many specialized devices have been invented and are sold for easier creation of great looking panoramic pictures; however, these special cameras are quite expensive, their usability is often limited and they can be used only for this specific purpose. The main options are described in [2]. Creating panoramas from images acquired by an ordinary digital camera, on the contrary, needs more preparation, and its output is usually a set of images. In general, panoramas may consist of an arbitrary number of photographs, and the actual realization of such a task can be very complex.

There are several rules that the photographer should observe to create more suitable conditions for further image processing. When photographing a landscape, one end of the panorama is often illuminated differently than the other; typically, one area is in the sun and the other is in the shade. If the exposure is metered locally, picture by picture, the differences of illumination in the scene will show up as brightness differences in the overlapping areas. This can be avoided by fixing the exposure in the same way for all images in the sequence.

The exposure values for all the images should lie between these two extremes, preferably closer to the darker tones, because in digital photography (as opposed to negative film) it is easier to fix the shadows. However, if the difference in brightness between the two outermost images is too big, the size of the overexposed and underexposed areas may significantly impair the panoramic image. If the camera settings are not locked between acquisitions, the only way to fix these divergences is to adjust the brightness digitally. For similar reasons, the white balance should also be fixed.

Another problem emerges when photographing an interior together with an exterior, for example through windows or doors. The exterior parts would inevitably be overexposed if the interior is exposed correctly, and vice versa. This might be solved by taking two sets of pictures, one for the interior and the other for the exterior, which are then blended together into one seamless image.

There are several problems that have to be solved in order to achieve a good panorama. First of all, it is the registration of the input images: analysing the overlapping areas of the photographs, finding the control points and applying the appropriate transformations to the images. This procedure is complicated, and applications written for automatic panorama stitching differ mainly in the methods and quality of the registration; more on this topic can be found in [9]. The next step, which is not necessary but advisable, is preprocessing of the input photographs. To obtain convenient conditions for stitching, the input images should be rectified: because they might differ in exposure, white balance or depth of field, the result could be poor without adjustments made before blending. After that, the stitching itself is performed. Within this task, the main attention should be paid to keeping the stitching seams as invisible as possible, since viewers are very sensitive to any unnaturally looking edges in the picture. The last step consists of post-processing adjustments. Common improvements can be made, just like with any other photograph; however, at this point, defects originating from differences between the input images are very difficult to fix.

2.2 Exposure

Dynamic range (or light sensitivity range) indicates the maximum contrast that can be effectively captured by a sensor, i.e. the greatest feasible amplitude between light and dark details an image sensor is able to measure. In digital imaging, it refers to the span of brightness across the captured scene.

The dynamic range of the light values visible to the human eye exceeds the range that can be handled by any device.

An image histogram is a graphical representation of the brightness distribution in a digital image. It is a column graph where each column represents the number of pixels of the corresponding brightness value in the image. The left side of the horizontal axis usually represents the shadows, or dark areas, while the right side represents the bright areas. The optimal shape of a histogram is similar to the one shown in Figure 2.1.

Figure 2.1: Histogram (frequency plotted against brightness, from shadows to highlights).

The distribution of the histogram values is given by the exposure and the light characteristics of the captured scene. For example, when the dynamic range of the scene is narrower than the dynamic range of the capturing device, the values in the histogram are greater in the middle and smaller on the sides; this often happens when photographing from an aircraft. On the other hand, when the range of the device is narrower than that of the scene, the histogram is flatter and the values outside the device range are distorted, as shown in Figure 2.2. This causes a loss of image information, as the original light levels of the scene cannot be recognized and they merge into a short interval of artificial values.

Exposure is defined as the total amount of light received by the film or sensor. It is determined by:

- shutter speed - the effective length of time the shutter is open, which also influences the presence of motion blur,
- aperture - the size of the opening in the lens, which influences the depth of field, and
- sensitivity of the sensor, which has an impact on the undesired noise in the image.

Figure 2.2: Dynamic range. Histograms typical of a scene range narrower (left) and wider (right) than the dynamic range of the sensor.

A change of any of these parameters affects the exposure and, with it, the histogram shape. In general, as the exposure grows, the values on the right side of the horizontal axis grow as well. Further information about exposure and taking photographs can be found in [14].

2.3 Image Analysis and Enhancement

The principal objective of image enhancement is to process an image so that the result is more suitable for a specific application than the original image. We will use only methods working in the spatial domain (as opposed to the frequency domain); these methods are based on the manipulation of pixels in the image plane itself.

2.3.1 Transformation Function

A spatial domain transformation can be denoted by the expression

g(x, y) = T[f(x, y)],

where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f defined over some neighbourhood of (x, y). When the neighbourhood is of size 1 x 1 (a single pixel), g depends solely on the value of f at (x, y), and T becomes the so-called transformation (mapping) function.

Such a function can be illustrated as shown in Fig. 2.3. This particular function would produce an image with higher contrast than the original one by darkening the levels below m and brightening the levels above m; this technique is known as contrast stretching. A mapping function, while easy to implement, can be very powerful in some cases.

The transformation T maps a pixel value f to a pixel value g. These values are usually stored in a one-dimensional array and the mapping is implemented as a simple lookup table. For example, if an 8-bit colour format is used, such a lookup table consists of 256 values. In this way, the histogram can be adjusted in many ways, depending on the shape of the lookup table.

Figure 2.3: Transformation function T(f) mapping dark and light input levels f to output levels g, with the threshold level m.

2.3.2 Histogram Statistics

In order to use the image histogram for enhancement, we can also utilize the statistical parameters of the brightness distribution in the image estimated from the histogram. Let r denote a discrete random variable representing discrete grey levels in the range [0, L-1], and let p(r_i) denote the histogram component corresponding to the i-th value of r. Thus, p(r_i) is an estimate of the probability of occurrence of the i-th grey level in the image. Then

m = \sum_{i=0}^{L-1} r_i p(r_i)

is the mean value of r (its average grey level), and

\mu_2(r) = \sum_{i=0}^{L-1} (r_i - m)^2 p(r_i)

is the second moment of r, also known as the variance of r, conventionally denoted σ²(r). It represents the average contrast in the image. The standard deviation σ(r) is defined simply as the square root of the variance. These values can be used for the adjustment of intensity and contrast.
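To make the preceding definitions concrete, the following sketch shows how the mean and standard deviation can be estimated from an 8-bit greyscale histogram. It is an illustrative fragment written for this text, not code taken from Panomedic, and the class and method names are hypothetical.

/** Illustrative sketch: histogram, mean and standard deviation of an 8-bit image. */
public final class HistogramStats {

    /** Builds the histogram of an array of grey levels in [0, 255]. */
    static int[] histogram(int[] greyLevels) {
        int[] hist = new int[256];
        for (int v : greyLevels) {
            hist[v]++;
        }
        return hist;
    }

    /** Mean grey level m = sum over i of r_i * p(r_i). */
    static double mean(int[] hist) {
        long total = 0;
        long weighted = 0;
        for (int i = 0; i < hist.length; i++) {
            total += hist[i];
            weighted += (long) i * hist[i];
        }
        return (double) weighted / total;
    }

    /** Standard deviation sigma(r), the square root of the variance mu_2(r). */
    static double stdDev(int[] hist) {
        double m = mean(hist);
        long total = 0;
        double sum = 0;
        for (int i = 0; i < hist.length; i++) {
            total += hist[i];
            sum += (i - m) * (i - m) * hist[i];
        }
        return Math.sqrt(sum / total);
    }
}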

Chapter 3

Used Algorithms

In this chapter, we describe the methods used in the panorama-creating program. The first part focuses on compensating for the differences in exposure of the input images. Two different techniques are proposed: one is based on statistics computed from the image histograms, the other creates the mapping function directly from the overlapping area. Solutions to other problems concerning the task of panorama creation are also described.

Process Flow

The input of the application implemented in this work is a set of registered images. Both methods process just two images at a time; the result is then processed with the third image, and so on. In the end, the result is one panoramic image that contains all the input images.

The first task is to decide in which order the input images should be processed. This is followed by exposure corrections between the first two images. After that, the adjusted images are blended into one; to avoid artificial edges and make the transition smooth, the boundary where the images should be connected is found first. This process is repeated until there is no input image left. The process flow is shown in Figure 3.1.

3.1 Image Ordering

Figure 3.1: Process flow diagram. The set of input images is ordered; the first two images, sharing an overlapping area, are calibrated by one of the main exposure calibration methods, a boundary is found and the images are blended; the result is then taken together with the next unprocessed image in the sequence, if any.

As mentioned above, the input consists of an arbitrary number of images. After registration, these images appear in different parts of the image plane (see Fig. 3.2(a)). The incremental approach, where the images are added to the resulting panoramic image one by one, requires an order in which the images should be processed. The procedure then processes the images in this particular order.

To determine a convenient order, a graph of image neighbourhood is created. The vertices represent the input images, whereas the edges and their weights are defined by the spatial intersections between the images (see Fig. 3.2(b)). The weight of each edge is computed as the number of pixels common to both images. The larger the intersection area is, the better it expresses the differences between the images: if the area is large enough, we can assume that the brightness distribution inside the intersection roughly represents the distribution in the whole image. However, if the neighbouring images share only a small area, the estimated difference in exposure could be very inaccurate and the calibration would yield poor results.

As each image is processed just once, the resulting graph should be connected, otherwise not all images would be interconnected by common areas and therefore they could not be calibrated. The graph also should not contain cycles, because edges between already processed images are redundant.

Gradually removing the edges with the lowest weight, as long as the graph stays connected, leaves a spanning tree that keeps the largest overlaps (equivalently, a minimum spanning tree computed with inverted weights). There are many algorithms that solve this problem efficiently, e.g. the Jarník algorithm [6], which with a binary heap runs in time O(m log n), where n denotes the number of vertices and m the number of edges. The spanning tree of the graph in (b) is shown in Fig. 3.2(c).

The last step is to order the vertices into a linear sequence. This is achieved by a standard breadth-first search: every explored vertex is given an ordering number in an increasing sequence. It does not matter which vertex is processed first; all images are assigned a number and the condition that the graph is traversed consecutively via its edges is fulfilled.

Figure 3.2: Image ordering. (a) Spatial location of the images. (b) Graph of image neighbourhood. (c) Processing order determined by a minimum spanning tree algorithm.

3.2 Image Exposure Calibration

One of the most important quality aspects of panoramas created iteratively is the homogeneity of the brightness distribution over the image and the absence of artificial edges. In a good looking panoramic image, the boundaries between parts coming from different photographs should not be visible. This can be achieved by various methods; a common approach is based on compensation for the differences in exposure among the processed images.

Both techniques presented in this work compute the parameters of the histogram transformation from the regions of the input images that lie in their intersection.

These regions correspond to the same part of the photographed scene. The spatial correspondence is provided by the image registration. In the case of real data, however, the corresponding pixels may appear displaced. This is usually caused by errors in the estimation of the parameters of the geometric transformation and/or in the assumption of its type. As a result, two spatially corresponding pixels in the aligned images may represent different parts of the scene with different brightness properties, which could cause a disturbing double-exposure effect near poorly aligned edges in the fused image. A straightforward solution is to blur the images before the analysis: the differences around the edges decrease while the overall brightness levels remain unchanged. This method can be found in [3].

3.2.1 Direct Level Mapping

The first approach to the task employs the pixels in the overlapping area directly. Corresponding colour values from both pictures are mapped to levels between them; thus, the transformation function depends on the brightness values of corresponding pixels. The created mapping function is then used for mapping the levels of the whole images.

We denote the first image as f and the second image as g; the brightness values at pixel (x, y) are then f(x, y) and g(x, y), respectively. The transformation function has the form

s = T_f(f, g) = T_g(f, g),

where T_f and T_g are the mapping functions from image f and g, respectively. These mapping functions are equal and they are given by the formula

s = f + (g - f) d,

where d is a variable determining to which image the resulting image is closer. It is a number from the interval [0, 1]: for values near 0, the resulting image will be almost the same as image f, and analogously, values approaching 1 imply similarity to image g. Its final form can be based on various characteristics. In this case, there are two main desired effects. The first follows from the fact that mapping from the image with greater contrast to the one with lower contrast causes a loss of image information. The second is avoiding mapping to an image with a contrast so high that it contains many pixels with values close to the extremes.
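As an illustration, the following sketch collects corresponding brightness values from the overlap, combines them with the weight d according to the formula above, averages the target values per input level, and fills missing entries by linear interpolation. The names are hypothetical and the fragment does not reproduce the Panomedic implementation; the choice of d and the subsequent smoothing and monotonization are discussed next.

/** Illustrative sketch of building a direct level mapping lookup table (8-bit levels). */
public final class DirectLevelMapping {

    /**
     * Builds a lookup table mapping levels of image f to the common target levels
     * s = f + (g - f) * d, averaged over all corresponding pixel pairs in the overlap.
     *
     * @param fOverlap brightness values of image f inside the intersection
     * @param gOverlap corresponding brightness values of image g
     * @param d        weight in [0, 1] determining which image the result is closer to
     */
    static int[] buildLut(int[] fOverlap, int[] gOverlap, double d) {
        double[] sum = new double[256];
        int[] count = new int[256];
        for (int i = 0; i < fOverlap.length; i++) {
            double s = fOverlap[i] + (gOverlap[i] - fOverlap[i]) * d;
            sum[fOverlap[i]] += s;
            count[fOverlap[i]]++;
        }
        int[] lut = new int[256];
        java.util.Arrays.fill(lut, -1);           // -1 marks levels with no observation
        for (int level = 0; level < 256; level++) {
            if (count[level] > 0) {
                lut[level] = (int) Math.round(sum[level] / count[level]);
            }
        }
        interpolateMissing(lut);
        return lut;
    }

    /** Fills unobserved levels by simple linear interpolation between known neighbours. */
    private static void interpolateMissing(int[] lut) {
        int prev = -1;
        for (int i = 0; i < lut.length; i++) {
            if (lut[i] < 0) {
                continue;
            }
            if (prev < 0) {                       // no known value before the first one
                for (int j = 0; j < i; j++) lut[j] = lut[i];
            } else {
                for (int j = prev + 1; j < i; j++) {
                    double t = (double) (j - prev) / (i - prev);
                    lut[j] = (int) Math.round(lut[prev] + t * (lut[i] - lut[prev]));
                }
            }
            prev = i;
        }
        if (prev < 0) return;                     // empty overlap: nothing to interpolate
        for (int j = prev + 1; j < lut.length; j++) lut[j] = lut[prev];
    }
}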

The suggested solution uses the standard deviations σ_f, σ_g calculated from the image histograms. Depending on their comparison, we set

d = E_f / (2n)              if σ_f > σ_g,
d = 1 - E_g / (2n)          if σ_f < σ_g,
d = E_f / (E_f + E_g)       otherwise,

where n is the total number of pixels in the intersected area and E_f (or E_g) is the number of pixels in the image with values in some neighbourhood of the limits of the brightness range. This neighbourhood is defined proportionally to the length of the input range: for an 8-bit range, i.e. [0, 255], the values in this neighbourhood belong to the interval [0, 255p] ∪ [255(1 - p), 255], where p denotes the proportion; 5% or 10% seem to be suitable values.

Each of the input levels may be mapped to more than one value and, vice versa, there may be brightness values that are not mapped to any value. The former is solved by averaging the values from the input; the missing values of the mapping function are filled in by simple linear interpolation.

After all of the values from the overlapping areas of the images have been mapped, the resulting transformation function may appear distorted. This can be caused by several factors, e.g. inaccurate registration, significant image noise or larger differences in the brightness distributions. To suppress this, the transformation function is smoothed by a simple one-dimensional convolution or median filter; the size of the mask depends on the range. Better results are achieved if the mapping function is monotonic, because a non-monotonic function would disrupt the original order of grey levels in the input images and cause artefacts in the processed image. Figure 3.3 shows the visual effects of smoothing and monotonization. Apart from the suggested operations, the function could also be approximated by a spline function; more about splines can be found in [1].

3.2.2 Using Histogram Statistics

The second approach to equalizing the exposures of neighbouring images is based on statistics acquired from the image histograms. As mentioned before, it is possible to represent some of the image characteristics by values computed from the brightness distribution, i.e. the mean and the standard deviation. The goal is to make these values equal in both images, so that the brightness distribution and the overall visual appearance of the images become similar.

Figure 3.3: Transformation function adjustments. (a) Curves after interpolation. (b) Functions obtained from (a) by median filtering. (c) Non-decreasing functions created from (b).

To accomplish this, it is convenient to allow a wider range of brightness levels than the standard (e.g. 8-bit) range. Hence, for the needs of the algorithm, the x-axis of the histogram is extended to admit any positive value. After the histogram calibration, all values on the extended range are compressed back to the original range; otherwise the image could not be displayed or stored in a standard format.

First of all, let us assume a linear model of the transformation function from image X to image Y, given by

Y = a + bX,    (3.1)

where a denotes the shifting value and b denotes the extending factor. Figure 3.4 shows the impact of shifting and extension on the histogram.

Figure 3.4: Histogram operations. (a) Extension. (b) Shifting.

These parameters can be estimated from the means and standard deviations of both images.

From

stdev(Y) = stdev(a + bX) = b stdev(X),

where stdev denotes the standard deviation, the extending factor is

b = stdev(Y) / stdev(X).

Then, from

E(Y) = E(a + bX) = a + b E(X),

where E denotes the mean, we compute the shifting value

a = E(Y) - b E(X).

Applying such a transformation to one of the images makes the mean values and the standard deviations of both images equal. However, the first method showed that the linear model does not correspond to the real transformation function, because the shape of the lookup table is never linear. For a better estimate of the real transformation, we suggest a power-law transformation:

Y = b X^c.    (3.2)

Its parameters are obtained from

log(Y) = log(b X^c) = log(b) + c log(X),    (3.3)

where X, Y >= 1 without loss of generality. This equation resembles Eq. 3.1, and since we can compute log(X) and log(Y), we also obtain log(b), and thus b, as well as c. However, to approximate the transformation between the images better, we need the function in the form Y = a + b X^c. Obtaining the parameters of this model is more complicated, but we use a trick: we get b and c as in Eq. 3.3 and estimate a afterwards. Let us denote X' = b X^c; then from

E(Y) = E(a + X') = a + E(X')

we obtain

a = E(Y) - E(X').
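A possible implementation of this estimation is sketched below: b and c are obtained from the log-domain means and standard deviations of the two overlap regions, following the analogy with Eq. 3.1, and a is estimated afterwards from the means. The helper names are hypothetical and the fragment only illustrates the calculation; it is not the Panomedic implementation.

/** Illustrative sketch of estimating Y = a + b * X^c from the overlap statistics. */
public final class PowerLawCalibration {

    final double a, b, c;

    /**
     * @param x brightness values of the image to be adjusted, taken from the overlap
     * @param y corresponding brightness values of the reference image
     */
    PowerLawCalibration(double[] x, double[] y) {
        // Work with log values; shift to >= 1 so the logarithm is defined.
        double[] logX = new double[x.length];
        double[] logY = new double[y.length];
        for (int i = 0; i < x.length; i++) {
            logX[i] = Math.log(Math.max(x[i], 1.0));
            logY[i] = Math.log(Math.max(y[i], 1.0));
        }
        // Estimate c and log(b) as in the linear model (Eq. 3.1) applied to the log domain.
        c = stdev(logY) / stdev(logX);
        double logB = mean(logY) - c * mean(logX);
        b = Math.exp(logB);
        // Estimate a from the means of Y and X' = b * X^c.
        double meanXPrime = 0;
        for (double v : x) meanXPrime += b * Math.pow(Math.max(v, 1.0), c);
        meanXPrime /= x.length;
        a = mean(y) - meanXPrime;
    }

    /** Applies the estimated transformation to a single brightness value. */
    double apply(double value) {
        return a + b * Math.pow(Math.max(value, 1.0), c);
    }

    private static double mean(double[] v) {
        double s = 0;
        for (double x : v) s += x;
        return s / v.length;
    }

    private static double stdev(double[] v) {
        double m = mean(v), s = 0;
        for (double x : v) s += (x - m) * (x - m);
        return Math.sqrt(s / v.length);
    }
}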

The brightness distributions now match better than after applying the simpler linear transformation function. The last step is to map the extended range back to the standard (8-bit) range. The shape of the histogram should remain the same, it only needs to be compressed: the extended range is divided into 256 intervals of constant width, every interval represents an equivalence class, and all pixels belonging to one class are mapped to the same brightness level in the resulting image (see Figure 3.5).

Figure 3.5: Histogram compression. The extended range [fmin = 0, fmax] is mapped back to [0, 255].

3.3 Image Stitching

The main purpose of this work is to make the resulting panorama look seamless. The human eye is very sensitive to edges in an image, especially when the edge does not correspond to a real object, and such edges would surely appear in the resulting image if the exposures were not precisely aligned.

One approach is to obtain the pixel values in the intersection as an average of both blended images; instead of a simple average, a weighted combination can be used. However, this does not solve the problem, it only reduces the effects of the exposure differences between the neighbouring images.

The proposed solution finds a seam between the two images that determines which pixels from the intersection area are taken from the first image and which from the second one. An example of the input and output is shown in Fig. 3.6. Finding such a boundary, and the stitching itself, run up against certain problems.

Figure 3.6: Stitching two input images into one output image via a seam.

For example, the shapes of the input images may lose their alignment after registration, since the registration process deforms the image shapes; sometimes the registered images are even concave, and it is more difficult to define a seam in a non-convex image than in a rectangular one. Another problem originates in registration shortcomings: if we choose the seam to run just along the edges in one image, those edges could appear once more in the resulting image, because the same edge appears in a slightly different position in the other image. Also, when the input images contain large extremely overexposed or underexposed areas, a new problem arises: how to set the seam if it is supposed to go through such an area. Finally, when taking the pictures, the photographer, as well as the scene, is moving over time. A photographed object sometimes cannot be frozen in one position, e.g. branches of trees, cars or people in the street. This can cause the moving object, or a part of it, to appear in the resulting image more than once; in general, this effect is difficult to remove.

Our method of image stitching is designed to deal with these complications. It consists of the following partial tasks:

- find the end points of the seams,
- choose the image in which the seam will be searched,
- find all seams in the image,
- stitch the images together.

3.3.1 Searching the End Points of a Seam

The first task is to find the seam end points. It is easy, but several special cases must be considered. We start by creating two sets of pixels representing the points on the borders of the two input images. By intersecting these sets we obtain a set of pixels which can potentially be searched for end points. The cases that may arise are shown in Figure 3.7. Two borders usually intersect in only one pixel (see Figure 3.7(A)). However, if the borders of the images form an acute angle, the intersection consists of a group of pixels, as in Figure 3.7(C). To get rid of such a group, we consider the pixels of the group equivalent, so that a single pixel represents the group as the end point.

Figure 3.7: Potential end points. A - regular case, B - point touches the image from inside, C - more points in a group, considered as one end point.

Now we have a set of potential end points without groups. Some of them must be eliminated because they do not actually form any intersection, for example when the border of one image only touches the second image (see Figure 3.7(B)). The decision which end points are retained and which are eliminated depends on the position of the images in the areas around the point.

For an end point to be considered valid, there have to be four different areas around the point (separated by the image borders):

- the intersected area, where both images are present,
- an area where only the first image is present,
- an area where only the second image is present,
- an area where no image is present.

If even one of them is missing, the end point is eliminated. The possibilities are shown in Fig. 3.8. Once all points have been examined, we have the complete set of seam end points.

Figure 3.8: Potential end point elimination. (a) Two intersected images - end point valid. (b) Touch from inside - end point eliminated. (c) Touch from outside - end point eliminated.

3.3.2 Choosing the Image for Seam Searching

Searching the seams for stitching utilizes mainly the brightness values of the pixels. When an image contains a large number of overexposed and underexposed pixels and/or they form large continuous areas, it might be difficult for the algorithm to find a proper seam there. To suppress this, the image with the smaller number of brightness values close to the extremes is used for the seam searching. The brightness neighbourhood of the extreme values, which is considered undesirable, is given proportionally as in Section 3.2.1. Only the area common to both images (the intersection) is used as the input for the seam searching.
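As an illustration of this selection rule, the following sketch counts, for each candidate image, the overlap pixels whose brightness lies within the proportional neighbourhood of the extremes and picks the image with the smaller count. The names and the 8-bit range are assumptions made for the example.

/** Illustrative sketch: pick the overlap region with fewer near-extreme brightness values. */
public final class SeamImageChooser {

    /**
     * @param overlapF   brightness values of image f inside the intersection
     * @param overlapG   brightness values of image g inside the intersection
     * @param proportion fraction of the 8-bit range treated as "close to the extremes", e.g. 0.05
     * @return 0 if image f should be used for seam searching, 1 for image g
     */
    static int choose(int[] overlapF, int[] overlapG, double proportion) {
        return countExtremes(overlapF, proportion) <= countExtremes(overlapG, proportion) ? 0 : 1;
    }

    private static int countExtremes(int[] values, double proportion) {
        int low = (int) Math.round(255 * proportion);        // [0, low] is "too dark"
        int high = (int) Math.round(255 * (1 - proportion)); // [high, 255] is "too bright"
        int count = 0;
        for (int v : values) {
            if (v <= low || v >= high) count++;
        }
        return count;
    }
}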

3.3.3 Searching the Seams

Our goal is to find a continuous path in the image between the two end points. The path should avoid lines and edges in the images. The problem could be solved by active contours [4], a method based on minimizing energies in the image. Our task is simpler: we only want to avoid placing the seam too close to the edges, and unlike active contours there is no input from the user.

The proposed solution uses the magnitudes of the gradients as the energy. The gradient magnitude is given by the differences in brightness between neighbouring pixels, and these values define the weights of the edges of a graph. The vertices are taken only from the pixels in the intersecting area of the two input images, as mentioned before. In this graph, we search for the shortest path between the given end points. For this purpose, a modified Dijkstra's algorithm [6] that can deal with possible zero-weighted graph edges is used. Such edges, if present in groups, could cause the seam to have a random shape; we want it to be direct when going through such an area, so we add a second criterion to the minimization - the distance counted as the number of vertices from the start. An example of a resulting seam can be seen in Figure 3.9.

Figure 3.9: A seam found in the image. The resulting seam is drawn in two colours (white or black) for visualization purposes. The algorithm evidently fulfilled the defined task: the seam follows areas with as few edges as possible, such as the car bodies or the signpost.
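A minimal sketch of such a search is given below. The graph is an implicit 4-connected grid over the intersection mask, edge weights are absolute brightness differences between neighbouring pixels, and ties in the accumulated cost are broken by the number of traversed vertices so that the path stays direct in zero-cost regions. The data layout and names are assumptions of this example; the actual Panomedic implementation may differ.

import java.util.PriorityQueue;

/** Illustrative sketch: Dijkstra-like seam search on the intersection of two images. */
public final class SeamSearch {

    /**
     * Finds a low-cost path between two pixels of the intersection area.
     *
     * @param brightness brightness of the image chosen for seam searching, [height][width]
     * @param inside     true for pixels belonging to the intersection of the two images
     * @return predecessor index (y * width + x) for each visited pixel, -1 elsewhere;
     *         the seam is reconstructed by walking back from the end point
     */
    static int[] search(int[][] brightness, boolean[][] inside,
                        int startX, int startY, int endX, int endY) {
        int h = brightness.length, w = brightness[0].length;
        double[] cost = new double[w * h];
        int[] hops = new int[w * h];
        int[] prev = new int[w * h];
        java.util.Arrays.fill(cost, Double.POSITIVE_INFINITY);
        java.util.Arrays.fill(prev, -1);

        // Queue entries: {cost, hops, pixelIndex}; hops break ties between equal costs.
        PriorityQueue<double[]> queue = new PriorityQueue<>((p, q) -> {
            int byCost = Double.compare(p[0], q[0]);
            return byCost != 0 ? byCost : Double.compare(p[1], q[1]);
        });
        int start = startY * w + startX;
        cost[start] = 0;
        queue.add(new double[] {0, 0, start});

        int[] dx = {1, -1, 0, 0};
        int[] dy = {0, 0, 1, -1};
        while (!queue.isEmpty()) {
            double[] top = queue.poll();
            int idx = (int) top[2];
            if (top[0] > cost[idx]) continue;          // stale queue entry
            if (idx == endY * w + endX) break;         // reached the second end point
            int x = idx % w, y = idx / w;
            for (int k = 0; k < 4; k++) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || !inside[ny][nx]) continue;
                // Edge weight: gradient magnitude between the two neighbouring pixels.
                double weight = Math.abs(brightness[ny][nx] - brightness[y][x]);
                int nIdx = ny * w + nx;
                double newCost = cost[idx] + weight;
                int newHops = hops[idx] + 1;
                if (newCost < cost[nIdx] || (newCost == cost[nIdx] && newHops < hops[nIdx])) {
                    cost[nIdx] = newCost;
                    hops[nIdx] = newHops;
                    prev[nIdx] = idx;
                    queue.add(new double[] {newCost, newHops, nIdx});
                }
            }
        }
        return prev;
    }
}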

3.3.4 Blending Images Together

The output of the previous step is a set of seam points. The two input images are blended together so that the boundary between them is formed by these points. For each pixel of the intersection it must be decided from which image to take the brightness value. This can be done in various ways, for example with the polygon fill algorithms from [10], section 3.3; either a row fill or a seed fill can be used.
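For illustration, a seed fill variant of this decision might look as follows: starting from a pixel known to lie on the side of the first image, a flood fill bounded by the seam and by the border of the intersection marks every pixel that should be taken from the first image, and the remaining intersection pixels are taken from the second one. The parameter names and the boolean-mask representation are assumptions of this sketch.

import java.util.ArrayDeque;

/** Illustrative sketch: seed fill deciding which intersection pixels come from the first image. */
public final class SeamFill {

    /**
     * @param inside true for pixels of the intersection area, [height][width]
     * @param seam   true for pixels lying on the seam
     * @param seedX  x coordinate of a pixel known to belong to the first image's side
     * @param seedY  y coordinate of that pixel
     * @return mask that is true where the blended image should take the first image's value
     */
    static boolean[][] fillFirstImageSide(boolean[][] inside, boolean[][] seam, int seedX, int seedY) {
        int h = inside.length, w = inside[0].length;
        boolean[][] fromFirst = new boolean[h][w];
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[] {seedX, seedY});
        int[] dx = {1, -1, 0, 0};
        int[] dy = {0, 0, 1, -1};
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int x = p[0], y = p[1];
            if (x < 0 || y < 0 || x >= w || y >= h) continue;
            if (!inside[y][x] || seam[y][x] || fromFirst[y][x]) continue;  // stop at seam and border
            fromFirst[y][x] = true;
            for (int k = 0; k < 4; k++) {
                stack.push(new int[] {x + dx[k], y + dy[k]});
            }
        }
        return fromFirst;
    }
}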

Chapter 4

Programmer's Reference

This reference introduces the implementation details of Panomedic. The choice of programming language and other development decisions are discussed, and the architecture model and the internal data structures are described.

4.1 Main Decisions

The application is written in the Java programming language. Java was chosen especially for its platform independence. Thanks to the various Java libraries available, it is easy to write applications which work with images and have a friendly graphical interface. Panomedic utilizes the language, libraries and toolkits of the Java Platform, Standard Edition (version 6) [18], with the only exception being the logging library log4j [12].

4.2 Overview

Panomedic is composed of several packages:

- com.panomedic.core: main classes responsible for image manipulation and processing, creating histograms, statistics or lookup tables,
- com.panomedic.gui: definitions of frames, dialogs and other components from Java Swing and the AWT Toolkit used in the application to form the graphical interface,
- com.panomedic.kernel: application logic implementation - start, termination, listener assignments and event handling,
- com.panomedic.log4j: customization of the logging library,
- com.panomedic.utils: functions for general use in the whole application, such as string or array manipulation,
- com.panomedic.colors: methods providing storage of colours in various colour spaces and conversion among them (colour manipulation is only partially implemented in Panomedic, because the application was originally intended to be capable of processing colour images).

4.3 Data Structures

As Panomedic is image processing software, the image itself is the most important object to be concerned about. The representation of a pixel and its colour is also worth mentioning, as well as the histogram and lookup table representations.

Image

Panomedic does not use a special representation for an image. For the internal representation of image data, the standard java.awt.image package and its BufferedImage class are utilized. The Java AWT Toolkit provides all the methods needed for elementary operations with a BufferedImage and its Raster; various ways of handling images are described in [5]. Since the processing runs pixel by pixel and at most three images are processed at a time (two inputs and one result), no more than three images need to be kept in memory. An image is loaded into memory by the method static BufferedImage javax.imageio.ImageIO.read(File input) every time it is required.

The BufferedImage object is a part of the class Photo, which also stores all the attributes related to the image, for example a reference to the original image file or the image thumbnails. When the images are represented by vertices in the graph algorithms, their attributes are also stored in Photo.

The only supported format is PNG (Portable Network Graphics [20]). This image format is capable of handling the alpha channel, which is important for separating the data of the aligned image from the rest of the image canvas.
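For illustration, the bounds of the non-transparent data of an aligned PNG could be computed roughly as follows; the class and method names are hypothetical and only sketch the idea, they are not part of Panomedic.

import java.awt.Point;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

/** Illustrative sketch: load a PNG and find the bounds of its non-transparent data. */
public final class AlignedImageBounds {

    static Point[] nonTransparentBounds(File pngFile) throws IOException {
        BufferedImage image = ImageIO.read(pngFile);
        int minX = image.getWidth(), minY = image.getHeight(), maxX = -1, maxY = -1;
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int alpha = (image.getRGB(x, y) >>> 24) & 0xFF;
                if (alpha != 0) {                 // pixel carries real image data
                    minX = Math.min(minX, x);
                    minY = Math.min(minY, y);
                    maxX = Math.max(maxX, x);
                    maxY = Math.max(maxY, y);
                }
            }
        }
        if (maxX < 0) {                           // fully transparent image
            return new Point[] {new Point(0, 0), new Point(0, 0)};
        }
        return new Point[] {new Point(minX, minY), new Point(maxX, maxY)};
    }
}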

The class Photo contains two attributes of the class java.awt.Point which form the bounds of the real, non-transparent data. All methods processing the image data go only through the rectangle determined by these points. However, as the areas of the geometrically aligned images are not always rectangular, pixels have to be checked for transparency before processing.

Pixel

By default, pixel values in a BufferedImage are stored in the RGB colour space. During processing, the brightness value of a pixel is obtained from the RGB components and a new value is calculated depending on the method. This value is converted back to RGB and stored in the resulting image. The conversions are made by the classes in the package com.panomedic.colors.
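The kind of per-pixel processing described here might look roughly like the following sketch, which reads a pixel from a BufferedImage, derives a brightness value, transforms it, and writes the result back. The brightness formula (a simple average of the RGB components) and the method names are assumptions of this example; the conversion classes of com.panomedic.colors are not reproduced here.

import java.awt.image.BufferedImage;

/** Illustrative sketch: per-pixel brightness processing on a BufferedImage. */
public final class PixelProcessing {

    /** Applies a brightness lookup table to every non-transparent pixel of the image. */
    static BufferedImage applyLut(BufferedImage source, int[] lut) {
        BufferedImage result = new BufferedImage(
                source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < source.getHeight(); y++) {
            for (int x = 0; x < source.getWidth(); x++) {
                int argb = source.getRGB(x, y);
                int alpha = (argb >>> 24) & 0xFF;
                if (alpha == 0) {                        // skip transparent canvas pixels
                    result.setRGB(x, y, argb);
                    continue;
                }
                int r = (argb >> 16) & 0xFF;
                int g = (argb >> 8) & 0xFF;
                int b = argb & 0xFF;
                int brightness = (r + g + b) / 3;        // assumed brightness model
                int delta = lut[brightness] - brightness; // shift all components by the change
                r = clamp(r + delta);
                g = clamp(g + delta);
                b = clamp(b + delta);
                result.setRGB(x, y, (alpha << 24) | (r << 16) | (g << 8) | b);
            }
        }
        return result;
    }

    private static int clamp(int v) {
        return Math.max(0, Math.min(255, v));
    }
}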

4.4 Implementation

The core of the whole image processing is formed by the class Photos. It extends javax.swing.DefaultListModel, which loosely wraps a Vector. The images loaded into Panomedic are stored in it; this was originally done for the purposes of the GUI. All processes that relate to all of the input images are handled by the methods of this class, e.g. creating the intersection, determining the order of the images for processing, and the two main methods which control the creation of the resulting image. In a cycle, two images are always joined until all the input images are blended into the one output image. Finding the boundary and the supporting functions are common to both methods. The two main functions differ according to the method they implement:

- processLevelMapping - implements the Direct Level Mapping method (see Section 3.2.1). For each image pair a LUT object is created. The mappings are then found by the function boolean LUT.create(Photo, Photo, Point, Point). The obtained values are averaged for each brightness level, interpolated and smoothed by the LUT class. The images are then blended while the values are mapped through the corresponding lookup tables.
- processHistCalibration - implements the method Using Histogram Statistics (see Section 3.2.2). First, the function determineTransFnc from the Intersection class computes the histograms of the images and obtains the parameters of the non-linear transformation function as described in Section 3.2.2. During the processing, these parameters are applied to the individual pixels.

Logging

Mainly for debugging purposes, the log4j library was customized and added to the project. Logging is set up from the log4j.properties configuration file, which must be located in the same folder as the application JAR.
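A minimal log4j.properties file of the kind referred to here could look like the following; the appender choice, log file name and pattern are assumptions for illustration, not the configuration shipped with Panomedic.

# Root logger: INFO level, output to a single file appender.
log4j.rootLogger=INFO, file

log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=panomedic.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n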

Chapter 5

User's Guide

Panomedic is an application which allows the user to balance the exposure of images acquired for the purpose of creating a panoramic image. It also performs the final blending, so the output is one panoramic image. The application works automatically; no special knowledge is required. This manual is a guideline on how to use Panomedic and how to prepare its input images. The guide also contains information about the prerequisites required for running the application.

5.1 System Requirements

Panomedic was developed as a platform independent application. It is executable on all operating systems running the Java Virtual Machine, including Microsoft Windows, Linux and Mac OS. It is necessary to have the Java SE Runtime Environment installed; the current version (6) is available on the CD attached to this work (see Appendix A). No other tools, libraries or protocols are required.

As the application analyses and processes a large amount of image data, Panomedic should not be run on a system with less than 256 MB of RAM; with a growing number and size of the input images, more operating memory is needed. During processing, the program uses a significant amount of processor time. Especially on low performance systems, the processing can last up to several minutes.

5.2 Input Images Preparation

Because Panomedic is not capable of image registration, it has to be done by another application. The input images accepted by Panomedic have to meet these specific requirements:

- PNG image format (for further information, see [20]),
- same size of all the images,
- the images are geometrically aligned - the data of the images are located in the same coordinate system,
- the image data of an aligned image cover a subset of all pixels in the input image,
- the remaining area (with no image information) is transparent (the alpha channel equals zero).

These requirements can be satisfied by the registration function of several applications. We show how to create the input set of images in Adobe Photoshop [11]. Hugin [16] is also capable of such a task, but it is rather difficult to use.

Aligning in Adobe Photoshop

To access the function capable of registration, go to File > Automate > Photomerge... The Photomerge dialog appears (see Fig. 5.1). Next, choose the images to be aligned by clicking the Load button (1), selecting the files and pressing OK. Ensure the check box Blend images together (2) is not checked. Continue by clicking OK. The images are then processed; it may take a while, depending on the size and number of the images and on the system performance. A separate layer is created for each individual image.

Each layer must be saved separately as a PNG file. To do this, simply hide the other layers by clicking the eye icon in the corresponding row of the Layers panel. If the image is saved now, it contains only the currently visible layer.

5.3 Usage

Panomedic can be launched like any other Java application. When launched, the main application window is shown (see Figure 5.2).
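For instance, assuming the application is distributed as an executable JAR (the file name here is only illustrative), it can be started from the command line:

java -jar Panomedic.jar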

Figure 5.1: Adobe Photoshop - Photomerge dialog.

5.3.1 Application Window

Most of the functionality is accessible from the menu at the top of the window. Under the menu bar, there is a toolbar which contains several control buttons. On the left side of the window is the list of the input images; thumbnails of the loaded images are shown here. The image corresponding to the selected thumbnail is shown in the preview panel on the right. At the bottom of the window is a status bar which informs the user about the currently running operations.

5.3.2 Selection of Images

Loading input images into Panomedic is done through the menu. There are two options, both in the File > Open submenu:

- Files - multiple image files can be selected,
- Directory - all images in the directory (not including subdirectories) are selected.

Clicking OK starts the loading. Thumbnails of all loaded images appear in the left panel. Any of the loaded images can be removed from the list by selecting File > Remove; all images can be removed at once by the menu item File > Remove All.

5.3.3 Running the Process

Before the process is started, the method can be chosen in the preferences dialog (Tools > Preferences, see below).

Figure 5.2: Panomedic - Main window, with the menu bar, toolbar, status bar, image input selection and preview panel.

Processing is started either by clicking the Run button or from the menu Process > Run. During the processing, it is recommended not to launch other applications or perform other performance-demanding tasks. After the processing is done, the resulting image is immediately shown in the main preview panel. If the direct level mapping method is used, the lookup table is shown as well (see Fig. 5.3). The resulting image can be saved: go to File > Save result and choose the name and the location of the file. The image is saved in the PNG format.

5.3.4 Preferences

The Preferences dialog is accessible from the Tools menu. It allows the user to change several settings related to the processing and the application behaviour.

Figure 5.3: Lookup table.

Chapter 6

Results and Related Work

With the increasing popularity of digital photography, the desire to create panoramic images is becoming more common, and the demands of users are growing; several software developers concentrate on this area of image processing. In this chapter, we discuss the blending capabilities of several applications written for this purpose and compare them with the two exposure-adjusting methods of Panomedic.

6.1 Software Overview

A short description of the applications compared with Panomedic follows.

6.1.1 Adobe Photoshop

Adobe Photoshop [11] CS4 is a well-known program for image manipulation. Its abilities are vast; one of them, available since version CS, is Photomerge. With this function, the user can create panoramic images: Photoshop aligns the pictures and also blends them together if needed. The process of creating a panorama in Photoshop is easy. A big disadvantage of Photoshop is its high selling price ($699).

To access the Photomerge function, go to File > Automate > Photomerge... In the Photomerge dialog, it is possible to load an input set of pictures and choose whether the images are blended together or not. As Photomerge does not allow changing any settings related to the exposure of the images, we kept the default settings before starting the processing.

6.1.2 Hugin

Hugin [16] is a powerful tool for creating panoramas. It is free and it allows various settings related to every step of the processing to be adjusted. Hugin also includes a wizard which helps users who are not familiar with panorama creation; still, Hugin produces better results if the user has previous experience with creating panoramic images. The results from Hugin were acquired with the exposure optimizations enabled; in the Stitcher tab, Normal Blended Panorama was selected in the Output panel (see Fig. 6.1).

Figure 6.1: Hugin - Stitcher setup.

6.1.3 Autopano Pro

Autopano Pro [13] 2.0 is an application capable of creating great-looking panoramas almost automatically. In comparison to Hugin, its interface is much friendlier and the quality of its features is on a similar level; an example of its abilities can be seen at [15]. However, it costs $99 in the basic version.

6.2 Evaluation Criteria

In order to evaluate the various software packages, they were run on the same set of images so that the quality of their outputs could be compared. Their price, time consumption and difficulty of use are considered as well.

Input

The input images were taken with different camera settings. The captured scene contains areas that are illuminated well in one image but overexposed or underexposed in the other; if the method of exposure adjustment is not designed well, such areas are often a source of additional image quality degradation. The scene also includes edges and gradients which can cause problems in image stitching. The input images are shown together with their histograms and camera settings in Figure 6.2.

6.3 Results and Comparison

Each compared application successfully blended the two images together. Except for Panomedic, all of them are capable of image registration; already aligned images were used as the input for Panomedic.

The time consumed by joining the images was acceptable for all of the methods; only the direct level mapping runs somewhat longer than the other processes. However, the time required for the processing grows with the number of input pictures and their size.

When comparing the applications, there are several aspects which are minor but not negligible. Panomedic is designed solely for adjusting and stitching the photos, so its usage is simple and does not require any extra skills. Its second advantage is the price: Panomedic is free software, in contrast to some of the compared solutions.

Probably the most important aspect of image blending is the quality of the resulting image. From this point of view, Panomedic succeeded in the comparison with the other applications. The outputs of all compared applications are shown in Figure 6.3.

Figure 6.2: Input images, their histograms and camera settings. Left: F4, 1/800 s, ISO 50; right: F4, 1/100 s, ISO 50.

(a) The direct level mapping method created a very good resulting image. The seam is not visible at all and the input images are not distinguishable from each other. The details in the darkest areas are more plentiful than in both inputs, although some image information from the brighter areas is lost; this effect is visible mostly on the buildings on the right side of the tree. However, the general impression of the image is good.

(b) The statistical method. The output seems flatter than that of the first method. Although the images were adjusted quite well, they lack brighter tones; especially the sky on the right side is unnaturally dark. On the other hand, the window above the balcony in the top left corner is detailed enough. The location of the seam is obvious on the road and lowers the impression of the image.

(c) Adobe Photoshop. Regarding the resulting image, the Photomerge function obviously lacks a convenient exposure correction performed before stitching. The sides of the output image differ and resemble the input images. On the other hand, Photoshop applied a suitable transition which partly disguises the differences between the image brightness distributions.

(d) Autopano Pro created the output with the greatest contrast of all the images. The image is seamless. Although there is a slight difference between the brightness values of the pedestrian crossing stripes on the left and on the right side of the road, the illumination seems evenly distributed. However, the image is too dark and the whole tree is almost black.

(e) Hugin proved that, as far as creating panoramic images is concerned, it is a very sophisticated tool. The output image has optimal contrast and brightness and the seam is not visible. The pedestrian crossing and even the road look as if the image had been acquired by a single shot. Neither the shadows nor the highlights in the image are too close to the extreme brightness values.

The proposed methods compared favourably. The experiment showed that the method based on statistical values approximates the real data too inaccurately to produce results as good as the other solutions. However, possible improvements of the model exist and may enhance the transformation function in order to achieve better results; applying the estimation step used in the algorithm recursively could lead to better approximation functions.

The rectification of the image exposure by direct level mapping proved to be a simple but powerful method. It returns worse results only when the intersection area is too small and the small sample therefore does not correspond to the whole image; however, all methods based on the analysis of the intersecting area share this problem.

Figure 6.3: Output images. (a) Panomedic - Direct Level Mapping, (b) Panomedic - Histogram Statistics Method, (c) Adobe Photoshop - Photomerge, (d) Autopano Pro, (e) Hugin.


More information

Movie Merchandising. Movie Poster. Open the Poster Background.psd file. Open the Cloud.jpg file.

Movie Merchandising. Movie Poster. Open the Poster Background.psd file. Open the Cloud.jpg file. Movie Poster Open the Poster Background.psd file. Open the Cloud.jpg file. Movie Merchandising Choose Image>Adjustments>Desaturate to make it a grayscale image. Select the Move tool in the Toolbar and

More information

HISTOGRAMS. These notes are a basic introduction to using histograms to guide image capture and image processing.

HISTOGRAMS. These notes are a basic introduction to using histograms to guide image capture and image processing. HISTOGRAMS Roy Killen, APSEM, EFIAP, GMPSA These notes are a basic introduction to using histograms to guide image capture and image processing. What are histograms? Histograms are graphs that show what

More information

Machinery HDR Effects 3

Machinery HDR Effects 3 1 Machinery HDR Effects 3 MACHINERY HDR is a photo editor that utilizes HDR technology. You do not need to be an expert to achieve dazzling effects even from a single image saved in JPG format! MACHINERY

More information

Photography Help Sheets

Photography Help Sheets Photography Help Sheets Phone: 01233 771915 Web: www.bigcatsanctuary.org Using your Digital SLR What is Exposure? Exposure is basically the process of recording light onto your digital sensor (or film).

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Lecture # 5 Image Enhancement in Spatial Domain- I ALI JAVED Lecturer SOFTWARE ENGINEERING DEPARTMENT U.E.T TAXILA Email:: ali.javed@uettaxila.edu.pk Office Room #:: 7 Presentation

More information

Color Correction and Enhancement

Color Correction and Enhancement 10 Approach to Color Correction 151 Color Correction and Enhancement The primary purpose of Photoshop is to act as a digital darkroom where images can be corrected, enhanced, and refined. How do you know

More information

Photoshop Elements 3 Panoramas

Photoshop Elements 3 Panoramas Photoshop Elements 3 Panoramas One of the good things about digital photographs and image editing programs is that they allow us to stitch two or three photographs together to create one long panoramic

More information

DodgeCmd Image Dodging Algorithm A Technical White Paper

DodgeCmd Image Dodging Algorithm A Technical White Paper DodgeCmd Image Dodging Algorithm A Technical White Paper July 2008 Intergraph ZI Imaging 170 Graphics Drive Madison, AL 35758 USA www.intergraph.com Table of Contents ABSTRACT...1 1. INTRODUCTION...2 2.

More information

Image Enhancement using Histogram Equalization and Spatial Filtering

Image Enhancement using Histogram Equalization and Spatial Filtering Image Enhancement using Histogram Equalization and Spatial Filtering Fari Muhammad Abubakar 1 1 Department of Electronics Engineering Tianjin University of Technology and Education (TUTE) Tianjin, P.R.

More information

Organizing artwork on layers

Organizing artwork on layers 3 Layer Basics Both Adobe Photoshop and Adobe ImageReady let you isolate different parts of an image on layers. Each layer can then be edited as discrete artwork, allowing unlimited flexibility in composing

More information

[Use Element Selection tool to move raster towards green block.]

[Use Element Selection tool to move raster towards green block.] Demo.dgn 01 High Performance Display Bentley Descartes has been designed to seamlessly integrate into the Raster Manager and all tool boxes, menus, dialog boxes, and other interface operations are consistent

More information

Figure 1 HDR image fusion example

Figure 1 HDR image fusion example TN-0903 Date: 10/06/09 Using image fusion to capture high-dynamic range (hdr) scenes High dynamic range (HDR) refers to the ability to distinguish details in scenes containing both very bright and relatively

More information

Digital Imaging - Photoshop

Digital Imaging - Photoshop Digital Imaging - Photoshop A digital image is a computer representation of a photograph. It is composed of a grid of tiny squares called pixels (picture elements). Each pixel has a position on the grid

More information

Introduction. Let s get started...

Introduction. Let s get started... Introduction Welcome to PanoramaPlus 2, Serif s fully-automatic 2D image stitcher. If you re looking for panorama-creating software that s quick and easy to use, but doesn t compromise on image quality,

More information

Movie 10 (Chapter 17 extract) Photomerge

Movie 10 (Chapter 17 extract) Photomerge Movie 10 (Chapter 17 extract) Adobe Photoshop CS for Photographers by Martin Evening, ISBN: 0 240 51942 6 is published by Focal Press, an imprint of Elsevier. The title will be available from early February

More information

Photoshop Elements. Lecturer: Ivan Renesto. Course description and objectives. Audience. Prerequisites. Duration

Photoshop Elements. Lecturer: Ivan Renesto. Course description and objectives. Audience. Prerequisites. Duration Photoshop Elements Lecturer: Ivan Renesto Course description and objectives Course objective is to provide the basic knowledge to use a selection of the most advanced tools for editing and managing image

More information

SUGAR fx. LightPack 3 User Manual

SUGAR fx. LightPack 3 User Manual SUGAR fx LightPack 3 User Manual Contents Installation 4 Installing SUGARfx 4 What is LightPack? 5 Using LightPack 6 Lens Flare 7 Filter Parameters 7 Main Setup 8 Glow 11 Custom Flares 13 Random Flares

More information

Adobe Photoshop CC 2018 Tutorial

Adobe Photoshop CC 2018 Tutorial Adobe Photoshop CC 2018 Tutorial GETTING STARTED Adobe Photoshop CC 2018 is a popular image editing software that provides a work environment consistent with Adobe Illustrator, Adobe InDesign, Adobe Photoshop,

More information

Module All You Ever Need to Know About The Displace Filter

Module All You Ever Need to Know About The Displace Filter Module 02-05 All You Ever Need to Know About The Displace Filter 02-05 All You Ever Need to Know About The Displace Filter [00:00:00] In this video, we're going to talk about the Displace Filter in Photoshop.

More information

Digital Imaging and Image Editing

Digital Imaging and Image Editing Digital Imaging and Image Editing A digital image is a representation of a twodimensional image as a finite set of digital values, called picture elements or pixels. The digital image contains a fixed

More information

Multi Viewpoint Panoramas

Multi Viewpoint Panoramas 27. November 2007 1 Motivation 2 Methods Slit-Scan "The System" 3 "The System" Approach Preprocessing Surface Selection Panorama Creation Interactive Renement 4 Sources Motivation image showing long continous

More information

Grid Assembly. User guide. A plugin developed for microscopy non-overlapping images stitching, for the public-domain image analysis package ImageJ

Grid Assembly. User guide. A plugin developed for microscopy non-overlapping images stitching, for the public-domain image analysis package ImageJ BIOIMAGING AND OPTIC PLATFORM Grid Assembly A plugin developed for microscopy non-overlapping images stitching, for the public-domain image analysis package ImageJ User guide March 2008 Introduction In

More information

1.1 Current Situation about GIMP Plugin Registry

1.1 Current Situation about GIMP Plugin Registry 1.0 Introduction One of the nicest things about GIMP is how easily its functionality can be extended, by using plugins. GIMP plugins are external programs that run under the control of the main GIMP application

More information

Photographing Long Scenes with Multiviewpoint

Photographing Long Scenes with Multiviewpoint Photographing Long Scenes with Multiviewpoint Panoramas A. Agarwala, M. Agrawala, M. Cohen, D. Salesin, R. Szeliski Presenter: Stacy Hsueh Discussant: VasilyVolkov Motivation Want an image that shows an

More information

Computer Graphics Fundamentals

Computer Graphics Fundamentals Computer Graphics Fundamentals Jacek Kęsik, PhD Simple converts Rotations Translations Flips Resizing Geometry Rotation n * 90 degrees other Geometry Rotation n * 90 degrees other Geometry Translations

More information

A Short History of Using Cameras for Weld Monitoring

A Short History of Using Cameras for Weld Monitoring A Short History of Using Cameras for Weld Monitoring 2 Background Ever since the development of automated welding, operators have needed to be able to monitor the process to ensure that all parameters

More information

Capturing Realistic HDR Images. Dave Curtin Nassau County Camera Club February 24 th, 2016

Capturing Realistic HDR Images. Dave Curtin Nassau County Camera Club February 24 th, 2016 Capturing Realistic HDR Images Dave Curtin Nassau County Camera Club February 24 th, 2016 Capturing Realistic HDR Images Topics: What is HDR? In Camera. Post-Processing. Sample Workflow. Q & A. Capturing

More information

Fast and High-Quality Image Blending on Mobile Phones

Fast and High-Quality Image Blending on Mobile Phones Fast and High-Quality Image Blending on Mobile Phones Yingen Xiong and Kari Pulli Nokia Research Center 955 Page Mill Road Palo Alto, CA 94304 USA Email: {yingenxiong, karipulli}@nokiacom Abstract We present

More information

CHAPTER1: QUICK START...3 CAMERA INSTALLATION... 3 SOFTWARE AND DRIVER INSTALLATION... 3 START TCAPTURE...4 TCAPTURE PARAMETER SETTINGS... 5 CHAPTER2:

CHAPTER1: QUICK START...3 CAMERA INSTALLATION... 3 SOFTWARE AND DRIVER INSTALLATION... 3 START TCAPTURE...4 TCAPTURE PARAMETER SETTINGS... 5 CHAPTER2: Image acquisition, managing and processing software TCapture Instruction Manual Key to the Instruction Manual TC is shortened name used for TCapture. Help Refer to [Help] >> [About TCapture] menu for software

More information

Be aware that there is no universal notation for the various quantities.

Be aware that there is no universal notation for the various quantities. Fourier Optics v2.4 Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast and

More information

11 Advanced Layer Techniques

11 Advanced Layer Techniques 11 Advanced Layer Techniques After you ve learned basic layer techniques, you can create more complex effects in your artwork using layer masks, path groups, filters, adjustment layers, and more style

More information

FOCUS, EXPOSURE (& METERING) BVCC May 2018

FOCUS, EXPOSURE (& METERING) BVCC May 2018 FOCUS, EXPOSURE (& METERING) BVCC May 2018 SUMMARY Metering in digital cameras. Metering modes. Exposure, quick recap. Exposure settings and modes. Focus system(s) and camera controls. Challenges & Experiments.

More information

Using Curves and Histograms

Using Curves and Histograms Written by Jonathan Sachs Copyright 1996-2003 Digital Light & Color Introduction Although many of the operations, tools, and terms used in digital image manipulation have direct equivalents in conventional

More information

Adobe Photoshop CS5 Tutorial

Adobe Photoshop CS5 Tutorial Adobe Photoshop CS5 Tutorial GETTING STARTED Adobe Photoshop CS5 is a popular image editing software that provides a work environment consistent with Adobe Illustrator, Adobe InDesign, Adobe Photoshop

More information

User s Guide. Windows Lucis Pro Plug-in for Photoshop and Photoshop Elements

User s Guide. Windows Lucis Pro Plug-in for Photoshop and Photoshop Elements User s Guide Windows Lucis Pro 6.1.1 Plug-in for Photoshop and Photoshop Elements The information contained in this manual is subject to change without notice. Microtechnics shall not be liable for errors

More information

Digital Image Processing. Lecture # 3 Image Enhancement

Digital Image Processing. Lecture # 3 Image Enhancement Digital Image Processing Lecture # 3 Image Enhancement 1 Image Enhancement Image Enhancement 3 Image Enhancement 4 Image Enhancement Process an image so that the result is more suitable than the original

More information

MY ASTROPHOTOGRAPHY WORKFLOW Scott J. Davis June 21, 2012

MY ASTROPHOTOGRAPHY WORKFLOW Scott J. Davis June 21, 2012 Table of Contents Image Acquisition Types 2 Image Acquisition Exposure 3 Image Acquisition Some Extra Notes 4 Stacking Setup 5 Stacking 7 Preparing for Post Processing 8 Preparing your Photoshop File 9

More information

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT

More information

Topaz Labs DeNoise 3 Review By Dennis Goulet. The Problem

Topaz Labs DeNoise 3 Review By Dennis Goulet. The Problem Topaz Labs DeNoise 3 Review By Dennis Goulet The Problem As grain was the nemesis of clean images in film photography, electronic noise in digitally captured images can be a problem in making photographs

More information

Using Dynamic Views. Module Overview. Module Prerequisites. Module Objectives

Using Dynamic Views. Module Overview. Module Prerequisites. Module Objectives Using Dynamic Views Module Overview The term dynamic views refers to a method of composing drawings that is a new approach to managing projects. Dynamic views can help you to: automate sheet creation;

More information

Improving digital images with the GNU Image Manipulation Program PHOTO FIX

Improving digital images with the GNU Image Manipulation Program PHOTO FIX Improving digital images with the GNU Image Manipulation Program PHOTO FIX is great for fixing digital images. We ll show you how to correct washed-out or underexposed images and white balance. BY GAURAV

More information

One Week to Better Photography

One Week to Better Photography One Week to Better Photography Glossary Adobe Bridge Useful application packaged with Adobe Photoshop that previews, organizes and renames digital image files and creates digital contact sheets Adobe Photoshop

More information

the RAW FILE CONVERTER EX powered by SILKYPIX

the RAW FILE CONVERTER EX powered by SILKYPIX How to use the RAW FILE CONVERTER EX powered by SILKYPIX The X-Pro1 comes with RAW FILE CONVERTER EX powered by SILKYPIX software for processing RAW images. This software lets users make precise adjustments

More information

Visible Light Communication-based Indoor Positioning with Mobile Devices

Visible Light Communication-based Indoor Positioning with Mobile Devices Visible Light Communication-based Indoor Positioning with Mobile Devices Author: Zsolczai Viktor Introduction With the spreading of high power LED lighting fixtures, there is a growing interest in communication

More information

INTRODUCTION TO COMPUTER GRAPHICS

INTRODUCTION TO COMPUTER GRAPHICS INTRODUCTION TO COMPUTER GRAPHICS ITC 31012: GRAPHICAL DESIGN APPLICATIONS AJM HASMY hasmie@gmail.com WHAT CAN PS DO? - PHOTOSHOPPING CREATING IMAGE Custom icons, buttons, lines, balls or text art web

More information

Image Processing by Bilateral Filtering Method

Image Processing by Bilateral Filtering Method ABHIYANTRIKI An International Journal of Engineering & Technology (A Peer Reviewed & Indexed Journal) Vol. 3, No. 4 (April, 2016) http://www.aijet.in/ eissn: 2394-627X Image Processing by Bilateral Image

More information

CHAPTER 7 - HISTOGRAMS

CHAPTER 7 - HISTOGRAMS CHAPTER 7 - HISTOGRAMS In the field, the histogram is the single most important tool you use to evaluate image exposure. With the histogram, you can be certain that your image has no important areas that

More information

DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 2002

DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 2002 DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 22 Topics: Human eye Visual phenomena Simple image model Image enhancement Point processes Histogram Lookup tables Contrast compression and stretching

More information

Nova Full-Screen Calibration System

Nova Full-Screen Calibration System Nova Full-Screen Calibration System Version: 5.0 1 Preparation Before the Calibration 1 Preparation Before the Calibration 1.1 Description of Operating Environments Full-screen calibration, which is used

More information

How to blend, feather, and smooth

How to blend, feather, and smooth How to blend, feather, and smooth Quite often, you need to select part of an image to modify it. When you select uniform geometric areas squares, circles, ovals, rectangles you don t need to worry too

More information

Communication Graphics Basic Vocabulary

Communication Graphics Basic Vocabulary Communication Graphics Basic Vocabulary Aperture: The size of the lens opening through which light passes, commonly known as f-stop. The aperture controls the volume of light that is allowed to reach the

More information

Adobe Photoshop. Levels

Adobe Photoshop. Levels How to correct color Once you ve opened an image in Photoshop, you may want to adjust color quality or light levels, convert it to black and white, or correct color or lens distortions. This can improve

More information

The Use of Non-Local Means to Reduce Image Noise

The Use of Non-Local Means to Reduce Image Noise The Use of Non-Local Means to Reduce Image Noise By Chimba Chundu, Danny Bin, and Jackelyn Ferman ABSTRACT Digital images, such as those produced from digital cameras, suffer from random noise that is

More information

NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT:

NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT: IJCE January-June 2012, Volume 4, Number 1 pp. 59 67 NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT: A COMPARATIVE STUDY Prabhdeep Singh1 & A. K. Garg2

More information

Adobe Studio on Adobe Photoshop CS2 Enhance scientific and medical images. 2 Hide the original layer.

Adobe Studio on Adobe Photoshop CS2 Enhance scientific and medical images. 2 Hide the original layer. 1 Adobe Studio on Adobe Photoshop CS2 Light, shadow and detail interact in wild and mysterious ways in microscopic photography, posing special challenges for the researcher and educator. With Adobe Photoshop

More information

Photomatix Pro 3.1 User Manual

Photomatix Pro 3.1 User Manual Introduction Photomatix Pro 3.1 User Manual Photomatix Pro User Manual Introduction Table of Contents Section 1: Taking photos for HDR... 1 1.1 Camera set up... 1 1.2 Selecting the exposures... 3 1.3 Taking

More information

PHOTOTUTOR.com.au Share the Knowledge

PHOTOTUTOR.com.au Share the Knowledge THE DIGITAL WORKFLOW BY MICHAEL SMYTH This tutorial is designed to outline the necessary steps from digital capture, image editing and creating a final print. FIRSTLY, BE AWARE OF WHAT CAN AND CAN T BE

More information

Lecture 4: Spatial Domain Processing and Image Enhancement

Lecture 4: Spatial Domain Processing and Image Enhancement I2200: Digital Image processing Lecture 4: Spatial Domain Processing and Image Enhancement Prof. YingLi Tian Sept. 27, 2017 Department of Electrical Engineering The City College of New York The City University

More information

Photography Basics. Exposure

Photography Basics. Exposure Photography Basics Exposure Impact Voice Transformation Creativity Narrative Composition Use of colour / tonality Depth of Field Use of Light Basics Focus Technical Exposure Courtesy of Bob Ryan Depth

More information

5. SilverFast Tools Tools SilverFast Manual. 5. SilverFast Tools Image Auto-Adjust (Auto-Gradation) 114

5. SilverFast Tools Tools SilverFast Manual. 5. SilverFast Tools Image Auto-Adjust (Auto-Gradation) 114 Chapter 5 Tools 5. SilverFast Tools 5. SilverFast Tools 106 5.1 Image Auto-Adjust (Auto-Gradation) 114 5.2 Highlight / Shadow Tool 123 5.3 The Histogram 133 5.4 Gradation Dialogue 147 5.5 Global Colour

More information

By Washan Najat Nawi

By Washan Najat Nawi By Washan Najat Nawi how to get started how to use the interface how to modify images with basic editing skills Adobe Photoshop: is a popular image-editing software. Two general usage of Photoshop Creating

More information

Histograms& Light Meters HOW THEY WORK TOGETHER

Histograms& Light Meters HOW THEY WORK TOGETHER Histograms& Light Meters HOW THEY WORK TOGETHER WHAT IS A HISTOGRAM? Frequency* 0 Darker to Lighter Steps 255 Shadow Midtones Highlights Figure 1 Anatomy of a Photographic Histogram *Frequency indicates

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 2: Image Enhancement Digital Image Processing Course Introduction in the Spatial Domain Lecture AASS Learning Systems Lab, Teknik Room T26 achim.lilienthal@tech.oru.se Course

More information

Remote Sensing. The following figure is grey scale display of SPOT Panchromatic without stretching.

Remote Sensing. The following figure is grey scale display of SPOT Panchromatic without stretching. Remote Sensing Objectives This unit will briefly explain display of remote sensing image, geometric correction, spatial enhancement, spectral enhancement and classification of remote sensing image. At

More information

ISCapture User Guide. advanced CCD imaging. Opticstar

ISCapture User Guide. advanced CCD imaging. Opticstar advanced CCD imaging Opticstar I We always check the accuracy of the information in our promotional material. However, due to the continuous process of product development and improvement it is possible

More information

Histograms and Tone Curves

Histograms and Tone Curves Histograms and Tone Curves We present an overview to explain Digital photography essentials behind Histograms, Tone Curves, and a powerful new slider feature called the TAT tool (Targeted Assessment Tool)

More information

Reveal the mystery of the mask

Reveal the mystery of the mask Reveal the mystery of the mask Imagine you're participating in a group brainstorming session to generate new ideas for the design phase of a new project. The facilitator starts the brainstorming session

More information

TOON BOOM HARMONY Advanced Edition - Compositing and Effects Guide (Server)

TOON BOOM HARMONY Advanced Edition - Compositing and Effects Guide (Server) TOON BOOM HARMONY 12.1 - Advanced Edition - Compositing and Effects Guide (Server) Legal Notices Toon Boom Animation Inc. 4200 Saint-Laurent, Suite 1020 Montreal, Quebec, Canada H2W 2R2 Tel: +1 514 278

More information

Extending the Dynamic Range of Film

Extending the Dynamic Range of Film Written by Jonathan Sachs Copyright 1999-2003 Digital Light & Color Introduction Limited dynamic range is a common problem, especially with today s fine-grained slide films. When photographing contrasty

More information

Working with the BCC Jitter Filter

Working with the BCC Jitter Filter Working with the BCC Jitter Filter Jitter allows you to vary one or more attributes of a source layer over time, such as size, position, opacity, brightness, or contrast. Additional controls choose the

More information

Transforming Your Photographs with Photoshop

Transforming Your Photographs with Photoshop Transforming Your Photographs with Photoshop Jesús Ramirez PhotoshopTrainingChannel.com Contents Introduction 2 About the Instructor 2 Lab Project Files 2 Lab Objectives 2 Lab Description 2 Removing Distracting

More information

Non Linear Image Enhancement

Non Linear Image Enhancement Non Linear Image Enhancement SAIYAM TAKKAR Jaypee University of information technology, 2013 SIMANDEEP SINGH Jaypee University of information technology, 2013 Abstract An image enhancement algorithm based

More information

High Dynamic Range (HDR) photography is a combination of a specialized image capture technique and image processing.

High Dynamic Range (HDR) photography is a combination of a specialized image capture technique and image processing. Introduction High Dynamic Range (HDR) photography is a combination of a specialized image capture technique and image processing. Photomatix Pro's HDR imaging processes combine several Low Dynamic Range

More information

Part 2: Spot Color Lessons

Part 2: Spot Color Lessons Why White? The importance of white in color printing is often overlooked. The foundation of color printing is based on applying Cyan, Magenta, Yellow and Black (CMYK) onto white paper. The paper s white

More information

Image preprocessing in spatial domain

Image preprocessing in spatial domain Image preprocessing in spatial domain convolution, convolution theorem, cross-correlation Revision:.3, dated: December 7, 5 Tomáš Svoboda Czech Technical University, Faculty of Electrical Engineering Center

More information

OUTDOOR PORTRAITURE WORKSHOP

OUTDOOR PORTRAITURE WORKSHOP OUTDOOR PORTRAITURE WORKSHOP SECOND EDITION Copyright Bryan A. Thompson, 2012 bryan@rollaphoto.com Goals The goals of this workshop are to present various techniques for creating portraits in an outdoor

More information

Image Enhancement contd. An example of low pass filters is:

Image Enhancement contd. An example of low pass filters is: Image Enhancement contd. An example of low pass filters is: We saw: unsharp masking is just a method to emphasize high spatial frequencies. We get a similar effect using high pass filters (for instance,

More information