Learning Pixel-Distribution Prior with Wider Convolution for Image Denoising

Peng Liu, University of Florida, pliu1@ufl.edu
Ruogu Fang, University of Florida, ruogu.fang@bme.ufl.edu
arXiv:1707.09135v1 [cs.CV] 28 Jul 2017

Abstract

In this work, we explore an innovative strategy for image denoising that uses convolutional neural networks (CNNs) to learn the pixel distribution of noisy data. By increasing a CNN's width with larger receptive fields and more channels in each layer, CNNs can reveal an ability to learn the pixel distribution, a prior that exists in many different types of noise. The key to our approach is the discovery that wider CNNs tend to learn pixel-distribution features, which suggests that the inference mapping primarily relies on such priors rather than on deeper CNNs with more stacked non-linear layers. We evaluate our Wide Inference Networks (WIN) on additive white Gaussian noise (AWGN) and demonstrate that, by learning the pixel distribution in images, WIN-based networks consistently achieve significantly better performance than current state-of-the-art deep CNN-based methods in both quantitative and visual evaluations. Code and models are available at https://github.com/cswin/win.

1 Prior: pixel-distribution features

In low-level vision problems, pixel-level features are the most important features. We compare the histograms of different images at various noise levels to investigate the consistency of these pixel-level features. As we can see from Fig. 1 and Fig. 2, the pixel distributions of noisy images are more similar at the higher noise level σ = 50 than at the lower noise level σ = 10. WIN infers noise-free images based on the learned pixel-distribution features. The higher the noise level, the more similar the pixel-distribution features become. Thus, WIN can learn more pixel-distribution features from noisy images with higher-level noise. This is the reason that WIN performs even better at higher noise levels, which is verified in Section 3.

Figure 1: Comparison of the pixel distributions (histograms) of two different images corrupted by additive white Gaussian noise (AWGN) at the same noise level σ = 10. Panels: (a) Ground-truth-I, (b) Noisy-I, (c) Ground-truth-II, (d) Noisy-II.
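The histogram comparison in Figs. 1 and 2 can be reproduced with a small sketch. This is a minimal illustration with synthetic gradient images (not the paper's data): two images with very different clean pixel distributions become more alike in distribution as the AWGN level grows.

```python
import numpy as np

def awgn(img, sigma, rng):
    """Add white Gaussian noise and clip to the 8-bit range."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def hist(img):
    """Normalized 256-bin intensity histogram."""
    h, _ = np.histogram(img, bins=256, range=(0, 255))
    return h / h.sum()

def l1_dist(a, b):
    """L1 distance between normalized histograms (0 = identical, 2 = disjoint)."""
    return np.abs(hist(a) - hist(b)).sum()

rng = np.random.default_rng(0)
# Two images with clearly different clean distributions: a dark and a bright gradient.
img1 = np.tile(np.linspace(0, 127, 256), (256, 1))
img2 = np.tile(np.linspace(128, 255, 256), (256, 1))

d_low = l1_dist(awgn(img1, 10, rng), awgn(img2, 10, rng))   # sigma = 10
d_high = l1_dist(awgn(img1, 50, rng), awgn(img2, 50, rng))  # sigma = 50

# Consistent with Figs. 1-2: heavier noise makes the two distributions more alike.
print(d_low, d_high)
```

With σ = 10 the two histograms barely overlap, while at σ = 50 the noise dominates both, shrinking the distance between them, which is the consistency the prior exploits.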

Figure 2: Comparison of the pixel distributions (histograms) of two different images corrupted by additive white Gaussian noise (AWGN) at the same noise level σ = 50. Panels: (a) Ground-truth-I, (b) Noisy-I, (c) Ground-truth-II, (d) Noisy-II.

2 Wider Convolution Inference Strategy

In Fig. 3, we illustrate the architectures of WIN5, WIN5-R, and WIN5-RB.

2.1 Architectures

The three proposed models share an identical basic structure: L = 5 layers and K = 128 filters of size F = 7 × 7 in most convolution layers, except for the last one, which has K = 1 filter of size F = 7 × 7. The differences among them are whether batch normalization (BN) and an input-to-output skip connection are involved. WIN5-RB has two types of layers, shown in two different colors: (a) Conv+BN+ReLU [19]: for layers 1 to L − 1, BN is added between Conv and ReLU [19]. (b) Conv+BN: for the last layer, K = 1 filter of size F = 7 × 7 is used to reconstruct the residual output R(y). In addition, a shortcut connecting the input (data layer) with the output (last layer) is added to merge the input data with R(y) to form the final recovered image.

2.2 Building a Knowledge Base with Batch Normalization

In this work, we employ batch normalization (BN) to extract pixel-distribution statistical features and to preserve the training data's means and variances in the network for denoising inference, rather than for its regularizing effect of improving the generalization of a learned model. BN as a regularizer keeps the data distribution the same as the input: a Gaussian distribution. This distributional consistency between the input and the regularizer ensures that more pixel-distribution statistical features can be extracted accurately. Integrating BN [9] with more filters further preserves the prior information of the training set. Indeed, a number of state-of-the-art studies [5, 11, 24] have adopted image priors (e.g., distributional statistics) to achieve impressive performance.
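As a sanity check on the Section 2.1 description (L = 5 layers, K = 128 channels, 7 × 7 kernels, a single output channel), the layer shapes and parameter count of the WIN5 trunk can be sketched as follows. This mirrors the text's description, not the released code:

```python
# WIN5 trunk from Section 2.1: five 7x7 conv layers with 128 channels,
# except the last layer, which reconstructs a single-channel output.
LAYERS = [
    # (in_channels, out_channels, kernel_size)
    (1, 128, 7),
    (128, 128, 7),
    (128, 128, 7),
    (128, 128, 7),
    (128, 1, 7),
]

def conv_params(cin, cout, k):
    """Weights plus one bias per output channel."""
    return cout * cin * k * k + cout

total = sum(conv_params(*layer) for layer in LAYERS)
print(total)  # about 2.4M parameters for the five-layer wide network
```

Most of the capacity sits in the three middle 128-to-128 layers (about 0.8M parameters each), which is where the wide channels that store distribution statistics live.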
Figure 3: Architectures of (a) WIN5, (b) WIN5-R, and (c) WIN5-RB.

Can batch normalization work without a skip connection? In WINs, BN [9] cannot work without the input-to-output skip connection and always overfits. In WIN5-RB's training, BN keeps the distribution of the input data consistent, and the skip connection not only introduces residual learning but also guides the network to extract the features the inputs have in common: the pixel distribution. Without the input data as a reference, BN can have a negative effect by forcing each input distribution to be the same, especially when the task is to output a pixel-level feature map. In DnCNN, two BN layers are removed from the first and last layers, which reduces BN's negative effects to a certain degree. Meanwhile, DnCNN also emphasizes that a network's generalization ability largely relies on its depth. In Fig. 4, the learned priors (means and variances) are preserved in WINs as a knowledge base for denoising inference. When WIN has more channels to preserve more data means and variances, various combinations of these feature maps can cooperate with residual learning to infer noise-free images more accurately.
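Section 2.2's claim that BN "preserves the training data's means and variances" refers to BN's running statistics, which are accumulated during training and frozen at inference. A minimal single-channel sketch of that mechanism (plain NumPy; class and variable names are illustrative, not from the paper's code):

```python
import numpy as np

class BatchNorm1ch:
    """Per-batch normalization that accumulates running mean/variance,
    the 'knowledge base' retained from the training data."""
    def __init__(self, momentum=0.1, eps=1e-5):
        self.running_mean, self.running_var = 0.0, 1.0
        self.momentum, self.eps = momentum, eps

    def train_step(self, x):
        m, v = x.mean(), x.var()
        # Exponential moving average of the training statistics.
        self.running_mean += self.momentum * (m - self.running_mean)
        self.running_var += self.momentum * (v - self.running_var)
        return (x - m) / np.sqrt(v + self.eps)

    def infer(self, x):
        # At test time the stored training statistics act as a prior.
        return (x - self.running_mean) / np.sqrt(self.running_var + self.eps)

rng = np.random.default_rng(0)
bn = BatchNorm1ch()
for _ in range(500):                    # "training" on N(5, 2^2) activations
    bn.train_step(rng.normal(5.0, 2.0, 1024))
print(bn.running_mean, bn.running_var)  # converges toward 5 and 4
```

After training, `infer` normalizes new data using the stored statistics rather than the batch's own, which is how the network carries training-set distribution information into denoising inference.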

Figure 4: The process of denoising inference via sparse distribution-statistics features (INPUT → Layer-1 → … → Layer-5 → OUTPUT). Learned priors (means and variances) are preserved in WINs as a knowledge base for denoising inference. When WIN has more channels to preserve more data means and variances, various combinations of these feature maps can cooperate with residual learning to infer noise-free images more accurately.

3 Experimental Results

Table 1: Average PSNR (dB) / SSIM / run time (seconds) of different methods on BSD200-test [18] (200 images). Note: WIN5-RB-B (blind denoising) is trained on a larger number of patches because data augmentation is adopted; this is why WIN5-RB-B (trained on σ = [0, 70]) can outperform WIN5-RB-S (trained on single σ = 10, 30, 50, 70 separately) in some cases.

PSNR (dB) / SSIM
σ    BM3D [3]     RED-Net [16]  DnCNN [26]   WIN5         WIN5-R       WIN5-RB-S    WIN5-RB-B
10   34.2/.9182   32.96/.8963   34.6/.9283   34.1/.925    34.43/.9243  35.83/.9494  35.43/.9461
30   28.57/.7823  29.5/.849     29.13/.86    28.93/.7987  30.94/.8644  33.62/.9193  33.27/.9263
50   26.44/.728   26.88/.723    26.99/.7289  28.57/.7979  29.38/.8251  31.79/.8831  32.18/.9136
70   25.23/.6522  26.66/.718    25.65/.679   27.98/.7875  28.16/.7784  30.34/.8362  31.7/.8962

Run time (s)
30   1.67         69.25         13.61        15.36        15.78        20.39        15.82
50   2.87         70.34         13.76        16.7         22.72        21.79        13.79
70   2.93         69.99         12.88        16.1         19.28        20.86        13.17

3.1 Quantitative Results

The quantitative results on the BSD200 test set are shown in Table 1, covering noise levels σ = 10, 30, 50, 70. Moreover, we compare the behavior of WIN5-RB-B, DnCNN, and BM3D in terms of average PSNR on BSD200-test at different noise levels. As we can see from Fig. 5, WIN5-RB-B (blind denoising), trained for σ = [0, 70], outperforms BM3D [3] and DnCNN [26] at all noise levels and is significantly more stable even at higher noise levels.
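The noisy-input PSNR values reported in the visual comparisons are consistent with the closed form PSNR = 20·log10(255/σ) for AWGN when clipping is negligible (e.g. ≈ 28.13 dB at σ = 10). A quick check on synthetic mid-gray data:

```python
import numpy as np

def psnr(clean, noisy, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((clean - noisy) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = np.full((512, 512), 128.0)  # mid-gray, so clipping never triggers
for sigma in (10, 30, 50, 70):
    noisy = clean + rng.normal(0.0, sigma, clean.shape)
    # Expected: 20*log10(255/sigma) -> 28.13, 18.59, 14.15, 11.23 dB
    print(sigma, round(psnr(clean, noisy), 2))
```

On real images the clipping at 0 and 255 removes part of the noise energy, which is why per-image noisy PSNRs at high σ sit slightly above the closed-form value.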
In addition, in Fig. 5, as the noise level increases, the performance gain of WIN5-RB-B grows larger, while the performance gain of DnCNN over BM3D does not change much as the noise level changes. Compared with WINs, DnCNN is composed of even more layers embedded with BN. This observation indicates that the performance gain achieved by WIN5-RB does not come mostly from BN's regularization effect, but from the pixel-distribution features learned and the relevant priors, such as means and variances, preserved in WINs. Both larger kernels and more channels make CNNs more likely to learn pixel-distribution features.

Figure 5: Behavior at different noise levels in terms of average PSNR on BSD200-test. WIN5-RB-B (blind denoising) is trained for σ = [0, 70]; it outperforms BM3D [3] and DnCNN [26] at all noise levels and is significantly more stable even at higher noise levels.

3.2 Visual Results

For the visual results, we show images from two datasets, BSD200-test and Set12, with noise levels σ = 10, 30, 50, 70 applied separately.

One image from BSD200-test with noise level σ = 10:

(b) Noise=10 / 28.13dB / .712  (c) BM3D / 33.42dB / .931  (d) RED-Net / 32.49dB / .8951  (e) DnCNN / 34.31dB / .9186  (f) WIN5 / 33.82dB / .911  (g) WIN5-R / 34.14dB / .9142  (h) WIN5-RB / 36.1dB / .9589  (i) WIN5-RB-B / 35.23dB / .9542

Figure 6: Visual results for one image from BSD200-test with noise level σ = 10, along with PSNR (dB) / SSIM. As we can see, our proposed methods yield more natural and accurate details in the texture as well as visually pleasant results.

One image from BSD200-test with noise level σ = 30:

(b) Noise=30 / 18.78dB / .2496  (c) BM3D / 30.11dB / .8481  (d) RED-Net / 30.43dB / .8597  (e) DnCNN / 30.7dB / .8661  (f) WIN5 / 30.34dB / .8556  (g) WIN5-R / 31.66dB / .878  (h) WIN5-RB / 33.65dB / .91

Figure 7: Visual results for one image from BSD200-test with noise level σ = 30, along with PSNR (dB) / SSIM. As we can see, our proposed methods yield more natural and accurate details in the texture as well as visually pleasant results.

One image from BSD200-test with noise level σ = 50:

(b) Noise=50 / 14.78dB / .2652  (c) BM3D / 23.1dB / .5163  (d) RED-Net / 23.48dB / .561  (e) DnCNN / 23.7dB / .5872  (f) WIN5 / 23.88dB / .5858  (g) WIN5-R / 24.7dB / .664  (h) WIN5-RB / 26.95dB / .8254

Figure 8: Visual results for one image from BSD200-test with noise level σ = 50, along with PSNR (dB) / SSIM. As we can see, our proposed methods yield more natural and accurate details in the texture as well as visually pleasant results.

One image from BSD200-test with noise level σ = 70:

(b) Noise=70 / 12.35dB / .159  (c) BM3D / 27.91dB / .8172  (d) RED-Net / 29.93dB / .8534  (e) DnCNN / 28.38dB / .8287  (f) WIN5 / 31.9dB / .8865  (g) WIN5-R / 32.17dB / .8912  (h) WIN5-RB / 33.82dB / .8459  (i) WIN5-RB-B / 34.55dB / .967

Figure 9: Visual results for one image from BSD200-test with noise level σ = 70, along with PSNR (dB) / SSIM. As we can see, our proposed methods yield more natural and accurate details in the texture as well as visually pleasant results.

One image from Set12 with noise level σ = 10:

(b) Noise=10 / 28.13dB / .722  (c) BM3D / 34.18dB / .9199  (d) RED-Net / 32.95dB / .8932  (e) DnCNN / 34.67dB / .9262  (f) WIN5 / 34.12dB / .9188  (g) WIN5-R / 34.53dB / .9235  (h) WIN5-RB / 37.dB / .9553  (i) WIN5-RB-B / 36.32dB / .9535

Figure 10: Visual results for one image from Set12 with noise level σ = 10, along with PSNR (dB) / SSIM. As we can see, our proposed methods yield more natural and accurate details in the texture as well as visually pleasant results.

One image from Set12 with noise level σ = 30:

(b) Noise=30 / 18.71dB / .3263  (c) BM3D / 28.74dB / .895  (d) RED-Net / 28.99dB / .818  (e) DnCNN / 29.13dB / .8219  (f) WIN5 / 28.92dB / .8143  (g) WIN5-R / 31.5dB / .8919  (h) WIN5-RB / 35.65dB / .9518  (i) WIN5-RB-B / 34.78dB / .9512

Figure 11: Visual results for one image from Set12 with noise level σ = 30, along with PSNR (dB) / SSIM. As we can see, our proposed methods yield more natural and accurate details in the texture as well as visually pleasant results.

One image from Set12 with noise level σ = 50:

(b) Noise=50 / 14.59dB / .1797  (c) BM3D / 26.23dB / .7164  (d) RED-Net / 26.77dB / .7379  (e) DnCNN / 26.83dB / .7393  (f) WIN5 / 27.99dB / .7796  (g) WIN5-R / 30.dB / .8573  (h) WIN5-RB / 33.6dB / .989  (i) WIN5-RB-B / 32.96dB / .9285

Figure 12: Visual results for one image from Set12 with noise level σ = 50, along with PSNR (dB) / SSIM. As we can see, our proposed methods yield more natural and accurate details in the texture as well as visually pleasant results.

References

[1] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society, Series B (Methodological), pages 99–102, 1974.
[2] D. Ciregan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3642–3649. IEEE, 2012.
[3] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. BM3D image denoising with shape-adaptive principal component analysis. In SPARS'09 - Signal Processing with Adaptive Sparse Structured Representations, 2009.
[4] C. Dong, Y. Deng, C. Change Loy, and X. Tang. Compression artifacts reduction by a deep convolutional network. In Proceedings of the IEEE International Conference on Computer Vision, pages 576–584, 2015.
[5] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736–3745, 2006.
[6] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[8] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
[9] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[10] V. Jain and S. Seung. Natural image denoising with convolutional networks. In Advances in Neural Information Processing Systems, pages 769–776, 2009.
[11] N. Joshi, C. L. Zitnick, R. Szeliski, and D. J. Kriegman. Image deblurring and denoising using color priors. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1550–1557. IEEE, 2009.
[12] D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi. Residual interpolation for color image demosaicking. In 2013 IEEE International Conference on Image Processing, pages 2304–2308. IEEE, 2013.

[13] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR Oral), June 2016.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[15] S. Z. Li. Markov random field modeling in image analysis. Springer Science & Business Media, 2009.
[16] X.-J. Mao, C. Shen, and Y.-B. Yang. Image restoration using convolutional auto-encoders with symmetric skip connections. arXiv preprint arXiv:1606.08921, 2016.
[17] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 416–423. IEEE, 2001.
[18] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. 8th Int'l Conf. Computer Vision, volume 2, pages 416–423, July 2001.
[19] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
[20] Y. A. Rozanov. Markov random fields. In Markov Random Fields, pages 55–102. Springer, 1982.
[21] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[22] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
[23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[24] J. Xu, L. Zhang, W. Zuo, D. Zhang, and X. Feng. Patch group based nonlocal self-similarity prior learning for image denoising. In Proceedings of the IEEE International Conference on Computer Vision, pages 244–252, 2015.
[25] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.
[26] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. arXiv preprint arXiv:1608.03981, 2016.