Conventional Interpolation Methods

Mrs. Amruta A. Savagave
Electronics & Communication Department, Jinesha Residency, Near Bank of Maharashtra, Ambegaon (BK), Katraj, Dist.-Pune
Email: amrutapep@gmail.com

Prof. A. P. Patil
Electronics Engg. Dept., Dr. J. J. Magdum College, Shirol-Wadi Road, Jaysingpur, Tal.-Shirol, Dist.-Kolhapur
Email: appatil2013@gmail.com

ABSTRACT- In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate (i.e. estimate) the value of that function for an intermediate value of the independent variable. A closely related problem is the approximation of a complicated function by a simple one. Suppose the formula for some given function is known, but is too complex to evaluate efficiently. A few known data points from the original function can be used to create an interpolation based on a simpler function. Of course, when a simple function is used to estimate data points from the original, interpolation errors are usually present; however, depending on the problem domain and the interpolation method used, the gain in simplicity may be of greater value than the resultant loss in accuracy. Interpolation methods can be divided into two main categories: (1) conventional interpolation methods, which use constant convolution kernels for the entire image, and (2) adaptive interpolation methods, which use edge information for the interpolation. This paper presents various conventional interpolation techniques for obtaining a high-quality image. Conventional interpolation methods include the nearest neighbor, bilinear and bicubic interpolation algorithms.

Keywords- Interpolation, conventional interpolation, Nearest Neighbor interpolation, Bilinear Interpolation, Bicubic Interpolation.

1. INTRODUCTION-
Interpolation is the estimation of an unknown quantity between two known quantities (historical data), or drawing conclusions about missing information from the available information.
Interpolation is useful where the data surrounding the missing data is available and its trend, seasonality, and longer-term cycles are known. Time series analysis and regression analysis are two statistical techniques employing the concept of interpolation. Image interpolation addresses the problem of generating a high-resolution image from its low-resolution version, and is nowadays available in many image processing tools such as Photoshop. The model employed to describe the relationship between high-resolution pixels and low-resolution pixels plays the critical role in the performance of an interpolation algorithm. Conventional linear interpolation schemes (e.g., bilinear and bicubic) based on space-invariant models fail to capture the fast-evolving statistics around edges and consequently produce interpolated images with blurred edges and annoying artifacts. Linear interpolation is generally preferred not for its performance but for its computational simplicity. Many algorithms have been proposed to improve the subjective quality of the interpolated images by imposing more accurate models. Approximating a continuous function's value using discrete samples is called interpolation. Applications of image interpolation include image enlargement, image reduction, subpixel image registration, image decomposition, correction of spatial distortions, and many more. Figure 1 shows the basic concept of enlarging an image using interpolation. Image interpolation occurs in all digital photos at some stage, whether in Bayer demosaicing or in photo enlargement. It happens any time you resize or remap (distort) your image from one pixel grid to another. Image resizing is necessary when you need to increase or decrease the total number of pixels, whereas remapping can occur under a wider variety of scenarios: correcting for lens distortion, changing perspective, and rotating an image.
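The basic idea, estimating an unknown value between two known samples, can be illustrated with a short Python sketch (the function name `lerp` is illustrative, not from the paper):

```python
def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate y at x, given known points (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)          # fractional position between the two samples
    return y0 + t * (y1 - y0)

# Known samples at x = 2 and x = 4; estimate the value midway, at x = 3.
print(lerp(2, 10.0, 4, 20.0, 3))      # -> 15.0
```

The same idea, applied along rows and columns of a pixel grid, underlies the bilinear method discussed later.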
Even if the same image resize or remap is performed, the results can vary significantly depending on the interpolation algorithm. Interpolation is only an approximation, so an image will always lose some quality each time interpolation is performed.
International Journal of Emerging Technology and Innovative Engineering

Fig 1: Basic Interpolation Concept

Interpolation is the model-based recovery of continuous data from discrete data within a known range of abscissa. The reason for this restriction is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministically recovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is "good" near the data samples, and possibly less good elsewhere. The three most important hypotheses for interpolation are:
1. The underlying data is continuously defined.
2. Given the data samples, it is possible to compute a data value of the underlying continuous function at any abscissa.
3. The evaluation of the underlying continuous function at the sample points yields the same values as the data themselves.
Image interpolation algorithms fall into two main categories, adaptive and non-adaptive. In a non-adaptive method the same procedure is applied to all pixels without considering image features, while in an adaptive method, image quality and features are considered before applying the algorithm.

2. NON-ADAPTIVE INTERPOLATION ALGORITHMS
Conventional interpolation methods include the nearest neighbor, bilinear and bicubic interpolation algorithms. Bilinear and bicubic interpolation smooth the data, keeping the low-frequency content of the source image. Because they are not able to enhance the high frequencies or preserve the
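Hypothesis 3 above can be made concrete with a small Python sketch: a piecewise-linear interpolant built from the samples returns exactly the sample values at the sample abscissas (the helper `linear_interp` is illustrative, not part of the paper):

```python
xs = [0.0, 1.0, 2.0, 3.0]   # sample abscissas
ys = [5.0, 7.0, 6.0, 9.0]   # sample values

def linear_interp(x, xs, ys):
    """Piecewise-linear interpolant through the samples (xs, ys)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for k in range(len(xs) - 1):
        if xs[k] <= x <= xs[k + 1]:
            t = (x - xs[k]) / (xs[k + 1] - xs[k])
            return ys[k] + t * (ys[k + 1] - ys[k])

# At every sample point the interpolant reproduces the data exactly.
assert all(linear_interp(x, xs, ys) == y for x, y in zip(xs, ys))
```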
edges equally well, they may produce some annoying visual problems, such as aliasing, blurring or other artifact effects. This category also includes spline, sinc and Lanczos interpolation. Depending on their complexity, these methods use anywhere from 0 to 256 (or more) adjacent pixels when interpolating. The more adjacent pixels they include, the more accurate they can become, but this comes at the expense of much longer processing time. These algorithms can be used to both distort and resize a photo. Various adaptive interpolation algorithms have been developed to reduce the artifact effects. Non-adaptive methods simply apply a fixed computation without considering the content of the image. Commercial products like Adobe Photoshop normally use this kind of interpolation method.

2.1 Nearest Neighbor Interpolation
Nearest neighbor interpolation provides the easiest way to upsample or zoom an image. Image enlargement requires two steps:
1) Creation of new pixel locations.
2) Assignment of pixel values to those locations.
This can be done by treating the image as a matrix and creating new rows and columns having only zero values. The next step is to assign the pixel value of the nearest neighbor to each newly generated pixel. That is why this method of grey-level assignment is called nearest neighbor interpolation.

a = X1; scale = 2;
[r,c,d] = size(a);
new_r = r*scale; new_c = c*scale;
newim = imresize(a, [new_r new_c], 'nearest');
figure; imshow(newim); title('Nearest Neighbor Output');
a = imresize(a, [256 256]);
X = imresize(newim, [256 256]);
X = X(:,:,1); a = a(:,:,1);
X = double(X); a = double(a);
[M,N] = size(a);
for i = 1:M
    MSE(i) = sum(sum((a(i,:) - X(i,:)).^2))/(M*N);
    PSNR(i) = 10*log10(256*256/MSE(i));
end
MSE = mean(MSE);
PSNR = mean(PSNR);
disp('Nearest Neighbor Scaling:');
disp('MSE :'); disp(MSE);
disp('PSNR :'); disp(PSNR);
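As an illustration of the index mapping that `imresize(..., 'nearest')` performs, the following Python sketch upscales an image by an integer factor (the name `nearest_neighbor_resize` is an assumption for this sketch, not from the paper):

```python
def nearest_neighbor_resize(img, scale):
    """Upscale a 2-D list-of-lists image by an integer factor."""
    rows, cols = len(img), len(img[0])
    out = [[0] * (cols * scale) for _ in range(rows * scale)]
    for i in range(rows * scale):
        for j in range(cols * scale):
            # Each new pixel simply copies its nearest source pixel.
            out[i][j] = img[i // scale][j // scale]
    return out

small = [[1, 2],
         [3, 4]]
print(nearest_neighbor_resize(small, 2))
# Each source pixel becomes a 2x2 block of identical values.
```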
Advantages-
1. Nearest neighbor is the most basic method and requires the least processing time.
2. It has the effect of simply making each pixel bigger.
3. It is very simple and requires little computation.
4. It just copies available values; it does not interpolate or change values.
Disadvantage- Nearest neighbor interpolation cannot be used for high-resolution zooming.

2.2 Bilinear Interpolation
An interpolated point is filled with the weighted average of the four closest pixels. This method requires two linear interpolations, and four interpolation-function evaluations per grid point. Figure 2 shows the case when all known pixel distances are equal, so the interpolated value is simply their sum divided by four.

Fig 2: Bilinear Interpolation

Bilinear interpolation is performed in one direction first (row-wise) and then in the other direction (column-wise). It uses the four nearest neighbors of the pixel whose value is to be determined. An image is selected and converted into matrix form. Another matrix of size 2m x 2n containing zero elements is taken. This matrix is padded with the image matrix so that the resultant matrix contains zero elements in every alternate row and column. The weighted average of four pixels is calculated and the result is stored in the newly generated pixel.
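The weighted-average computation described above can be sketched in Python; `bilinear_sample` is an illustrative helper, not code from the paper:

```python
def bilinear_sample(img, x, y):
    """Sample img at fractional coordinates (x = row, y = col) by bilinear
    interpolation of the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img) - 1)
    y1 = min(y0 + 1, len(img[0]) - 1)
    dx, dy = x - x0, y - y0
    # First interpolate along each row, then between the two rows.
    top = img[x0][y0] * (1 - dy) + img[x0][y1] * dy
    bot = img[x1][y0] * (1 - dy) + img[x1][y1] * dy
    return top * (1 - dx) + bot * dx

img = [[10.0, 20.0],
       [30.0, 40.0]]
# At the centre of the 2x2 grid all four distances are equal,
# so the result is the plain average (10+20+30+40)/4.
print(bilinear_sample(img, 0.5, 0.5))  # -> 25.0
```

This matches the equal-distance case of Figure 2, where the interpolated value is simply the sum of the four neighbors divided by four.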
Img = X1; factor = 2;
[r c d] = size(Img);
rn = floor(factor*r); cn = floor(factor*c);
s = factor;
output = zeros(rn,cn,d);
for ij = 1:d
    for i = 1:rn
        x1 = cast(floor(i/s),'uint16');
        x2 = cast(ceil(i/s),'uint16');
        if x1 == 0
            x1 = 1;
        end
        x = rem(i/s,1);
        for j = 1:cn
            y1 = cast(floor(j/s),'uint16');
            y2 = cast(ceil(j/s),'uint16');
            if y1 == 0
                y1 = 1;
            end
            ctl = Img(x1,y1,ij); cbl = Img(x2,y1,ij);
            ctr = Img(x1,y2,ij); cbr = Img(x2,y2,ij);
            y = rem(j/s,1);
            tr = (ctr*y) + (ctl*(1-y));
            br = (cbr*y) + (cbl*(1-y));
            im_out(i,j) = (br*x) + (tr*(1-x));
        end
    end
    output(:,:,ij) = im_out;
end
figure; imshow(uint8(output)); title('Bilinear Output');
a = imresize(Img, [256 256]);
X = imresize(output, [256 256]);
X = X(:,:,1); a = a(:,:,1);
X = double(X); a = double(a);
[M,N] = size(a);
for i = 1:M
    MSE(i) = sum(sum((a(i,:) - X(i,:)).^2))/(M*N);
    PSNR(i) = 10*log10(256*256/MSE(i));
end
MSE = mean(MSE);
PSNR = mean(PSNR);

Advantages-
1. Much smoother-looking images than nearest neighbor interpolation.
2. Bilinear interpolation reduces visual distortion.

2.3 Bicubic Interpolation
This method considers the closest 4x4 neighborhood of known pixels, for a total of 16 pixels. Since these are at various distances from the unknown pixel, closer pixels are given a higher weighting in the calculation. Compared to bilinear interpolation, which takes only 4 pixels (2x2) into account, bicubic interpolation considers 16 pixels (4x4).
Fig 3: Bicubic Interpolation

Advantages-
1. This method produces sharper images than the previous two methods.
2. It offers an ideal combination of processing time and output quality.
3. Pixels at various distances from the unknown pixel are weighted according to their distance.
4. It is a standard in many image editing programs (including Adobe Photoshop), printer drivers and in-camera interpolation.
5. When speed is not an issue, bicubic interpolation is often chosen.
6. Blur is not formed even when the image is interpolated many times.
7. This technique is very effective and produces images that are very close to the original image.

scale = 2; a = X1;
newim = bicubic(a, scale);   % bicubic is a user-defined scaling function
figure; imshow(newim); title('BiCubic Output');
a = imresize(a, [256 256]);
X = imresize(newim, [256 256]);
X = X(:,:,1); a = a(:,:,1);
X = double(X); a = double(a);
[M,N] = size(a);
for i = 1:M
    MSE(i) = sum(sum((a(i,:) - X(i,:)).^2))/(M*N);
    PSNR(i) = 10*log10(256*256/MSE(i));
end
MSE = mean(MSE);
PSNR = mean(PSNR);
disp('Bicubic Scaling:');
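The listing above calls a user-defined `bicubic` function whose body is not given in the paper. As one plausible sketch (an assumption, not the authors' implementation), the Keys cubic convolution kernel with a = -0.5 can be used to weight the 4x4 neighborhood:

```python
def cubic_kernel(t, a=-0.5):
    """Keys cubic convolution kernel; a = -0.5 is the common choice."""
    t = abs(t)
    if t <= 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def bicubic_sample(img, x, y):
    """Sample img at fractional (row, col) using the 4x4 neighborhood."""
    x0, y0 = int(x), int(y)
    val = 0.0
    for m in range(-1, 3):
        for n in range(-1, 3):
            # Clamp indices at the image border.
            i = min(max(x0 + m, 0), len(img) - 1)
            j = min(max(y0 + n, 0), len(img[0]) - 1)
            # Closer pixels receive larger kernel weights.
            val += img[i][j] * cubic_kernel(x - (x0 + m)) * cubic_kernel(y - (y0 + n))
    return val

flat = [[7.0] * 4 for _ in range(4)]
# The kernel weights sum to 1, so a flat image stays flat.
print(bicubic_sample(flat, 1.5, 1.5))  # -> 7.0
```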
3. PERFORMANCE ANALYSIS
The performance analysis is carried out using two measures:
1) Peak Signal-to-Noise Ratio (PSNR)
2) Mean Square Error (MSE)

3.1 Peak Signal-to-Noise Ratio (PSNR)-
Peak signal-to-noise ratio (PSNR) is the ratio between the maximum possible value (power) of a signal and the power of the distorting noise that affects the quality of its representation. This ratio is often used as a quality measurement between the original and a compressed or reconstructed image. The higher the PSNR, the better the quality of the compressed or reconstructed image.

3.2 Mean Square Error (MSE)-
MSE is the cumulative squared error between the compressed and the original image. The lower the value of MSE, the lower the error.

4. EXPERIMENTAL RESULTS
Bilinear and bicubic interpolation smooth the data, keeping the low-frequency content of the source image. Because they are not able to enhance the high frequencies or preserve the edges equally well, they produce some annoying visual problems, such as aliasing, blurring or other artifact effects. We compared the three conventional scaling algorithms in terms of their PSNR and MSE values. The following tables show the MSE and PSNR values for the Flower, Lena, Nature and New images.

Fig 4: Comparison of interpolation methods
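The two measures can be sketched in Python. Note that this sketch uses the conventional peak value of 255 in the PSNR numerator, whereas the paper's MATLAB listings use 256*256 instead of peak^2:

```python
import math

def mse_psnr(a, b, peak=255.0):
    """Mean square error and PSNR between two equal-size grayscale images
    (given as lists of rows)."""
    rows, cols = len(a), len(a[0])
    mse = sum((a[i][j] - b[i][j]) ** 2
              for i in range(rows) for j in range(cols)) / (rows * cols)
    # Identical images give MSE = 0, for which PSNR is infinite.
    psnr = float('inf') if mse == 0 else 10 * math.log10(peak**2 / mse)
    return mse, psnr

orig = [[100, 100], [100, 100]]
interp = [[100, 102], [98, 100]]
m, p = mse_psnr(orig, interp)
print(m)  # (0 + 4 + 4 + 0) / 4 = 2.0
```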
Fig 5: Test Images a) Flower b) Lena c) Nature d) New
Table 1. MSE values for the test images using the three interpolation methods

Image     Nearest neighbor   Bilinear   Bicubic
Flower    0.0038             0.0420     0.9842
Lena      0.0074             0.1087     1.7274
Nature    0.0215             0.1785     1.6100
New       0.0322             0.2682     1.3904

Table 2. PSNR values for the test images using the three interpolation methods

Image     Nearest neighbor   Bilinear   Bicubic
Flower    75.0082            63.4870    48.8556
Lena      71.1336            58.6881    45.8335
Nature    70.8544            59.2054    47.1118
New       65.2427            56.1201    46.8804

5. CONCLUSION-
From Table 1 it is seen that the MSE values for nearest neighbor interpolation are lower than for the other two methods, and as mentioned above, the lower the MSE, the lower the error. Likewise, in Table 2 the PSNR values for nearest neighbor are higher than for the other two methods, and the higher the PSNR, the better the quality of the reconstructed image. By these measures, nearest neighbor interpolation is a convenient method for obtaining a good quality image.