Hi!

Day in, day out, people share images containing sensitive information online, and choose to synthetically "blur out" the parts they do not wish to share in Photoshop or some other software. This is a bad idea, as I'm going to demonstrate.

Here we have an excerpt from a bank statement, which has been artificially blurred (using a random blurring function that I don't know the parameters to, no less). Can you read it? No, me neither. Yet...

Firstly, we need to consider what blurring is, and how it can be represented mathematically. All blurs, whether artificial/synthetic or "natural" (motion blur, depth of field from a camera, etc.) can be thought of as a 2D convolution operation. The convolution of a source image, **s(x,y)**, with a "blur kernel" **h(x,y)** will give us our blurry output image, **g(x,y)**:

**g(x,y) = h(x,y) * s(x,y)**

Where * is the convolution operator, not to be confused with multiplication. In reality, it is likely that noise has been added to the image, especially if the blurring is not artificial and is caused by effects from using a camera. For synthetic blurs the noise may be less of an issue, however compression artifacts and other problems do still arise. We're going to assume the noise has a Gaussian distribution, which means it can be defined by a mean and a variance. The noise is additive (which is important later), and does not depend on position or on the source image. This changes our equation slightly, as we need an extra noise term, **n(x,y)**:

**g(x,y) = h(x,y) * s(x,y) + n(x,y)**

It is not entirely obvious at this point how we would go about solving this equation for **s(x,y)**, the original un-blurred image, as convolution is not a particularly straightforward operator and does not have a simple inverse. Thankfully, we can use the Fourier Transform to consider our problem in the frequency domain, which gives us a solution to the problem through the use of the convolution theorem:

**G(u,v) = H(u,v) · S(u,v) + N(u,v)**

Or in plain English, convolution in the spatial domain is equivalent to multiplication in the frequency domain - much simpler!
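This is easy to check numerically. Here's a minimal sketch in Python/NumPy (note that the FFT gives *circular* convolution, so the direct sum below wraps around the image edges):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.random((8, 8))  # a tiny "source image"
h = rng.random((8, 8))  # a "blur kernel", same size for simplicity

# Circular convolution computed directly in the spatial domain
g_spatial = np.zeros_like(s)
for x in range(8):
    for y in range(8):
        for i in range(8):
            for j in range(8):
                g_spatial[x, y] += h[i, j] * s[(x - i) % 8, (y - j) % 8]

# The same thing via the frequency domain: multiply the FFTs
g_freq = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(s)))

print(np.allclose(g_spatial, g_freq))  # True
```

Four nested loops versus one multiplication - the frequency-domain route is also far cheaper computationally, which is why real deblurring code works there.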

Now, if we had a blurred image **G(u,v)** with no additional noise, we would just need to know the blur kernel, **H(u,v)**, and divide through by it to get our original, unblurred image. Unfortunately, it is pretty much impossible to avoid the addition of noise. If you assume the noise is so small it can be ignored, it quickly becomes apparent that the assumption does not hold up and your "unblurred" image will be a noisy mess. So instead we need a method to reverse the convolution whilst suppressing noise, and we need to know, or at least estimate, the blur kernel.

For the kind of intentionally added artificial blurs we are concerned with, estimating the blur kernel (also known as a point spread function [**PSF**]) is not too difficult. There are a few different categories of blur kernel in regular use, and the visual effect of each is fairly distinctive. Once you've established the type of blur kernel used, it's just a case of estimating the parameters (such as the radius for a disc kernel), which can be done using clever iterative gradient-descent methods, or more simply through trial and error.
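For illustration, here is a quick way to build a disc kernel in Python/NumPy. It's the same idea as MATLAB's `fspecial('disk', r)`, though without the edge anti-aliasing MATLAB applies, and the radius is just an example value:

```python
import numpy as np

def disc_kernel(radius):
    """Flat circular averaging kernel: 1 inside the disc,
    0 outside, normalised so the whole kernel sums to 1."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(float)
    return kernel / kernel.sum()

psf = disc_kernel(10)
print(psf.shape)             # (21, 21)
print(round(psf.sum(), 6))   # 1.0
```

Convolving an image with this kernel replaces each pixel with the average of a circular neighbourhood, which is exactly the "frosted glass" look of a typical synthetic blur.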

We have our kernel now, so how do we perform the deconvolution whilst suppressing the noise? One (of many) methods is to re-purpose the Wiener filter (invented by Norbert Wiener) to give us an estimate of our source image, **s(x,y)**. The Wiener filter is effectively an algorithm that constructs a filter aiming to recover an optimal target signal from a noisy input signal, given an estimate of the signal-to-noise ratio, by minimizing the mean square error. The equation we will use to implement this Wiener filter is as follows:

**Ŝ(u,v) = [1 / H(u,v)] · [|H(u,v)|² / (|H(u,v)|² + |N(u,v)|² / |S(u,v)|²)] · G(u,v)**

Where **Ŝ(u,v)** is the estimate of our original, unblurred image and **H(u,v)** is our blur kernel.
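As a sketch of how the Wiener formula looks in code, here is a Python/NumPy version (the function name is my own; the PSF is assumed to be zero-padded to the image size, and `nsr` stands in for the noise-to-signal term, either a constant or a per-frequency array):

```python
import numpy as np

def wiener_deconvolve(blurred, psf_padded, nsr):
    """Wiener deconvolution in the frequency domain.

    blurred    : 2D blurred image, g(x, y)
    psf_padded : blur kernel zero-padded to blurred.shape, h(x, y)
    nsr        : noise-to-signal ratio |N|^2 / |S|^2 (scalar or array)
    """
    G = np.fft.fft2(blurred)
    H = np.fft.fft2(psf_padded)
    H_mag2 = np.abs(H) ** 2
    # (1/H) * |H|^2 / (|H|^2 + NSR) rewritten as conj(H) / (|H|^2 + NSR),
    # which avoids dividing by tiny values of H directly
    S_hat = (np.conj(H) / (H_mag2 + nsr)) * G
    return np.real(np.fft.ifft2(S_hat))

# Quick round trip: blur synthetically, then restore
rng = np.random.default_rng(1)
img = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[:3, :3] = 1 / 9  # a 3x3 box blur, padded to image size
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-12)
print(np.allclose(restored, img, atol=1e-4))  # True
```

Note how the noise-to-signal term acts as regularisation: where **|H|** is large the filter behaves like straight division by **H**, and where **|H|** is tiny (frequencies the blur destroyed) the output is damped towards zero instead of exploding.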

**|N(u,v)|²** is defined here as the energy spectrum of the noise, and **|S(u,v)|²** as the energy spectrum of the source image. Dividing one by the other gives the reciprocal of the signal-to-noise ratio of the image. In practice, we do not have the source image (as that's what we're aiming to find), nor do we know the exact noise in the image, so we have to work through another step here. There are two choices, both of which work fairly well:

- Use a similar image as a reference for **|S(u,v)|²** (i.e. an image of the same size with similar features, such as some kind of text on a white background in our case), and then generate a noise image with a variance that is estimated, guessed, or found through trial and error.
- Replace the ratio with some constant SNR value, which can then be estimated/tweaked through trial and error. A good starting place is to take the mean of all pixel values, and divide by the standard deviation of all of the "background" pixel values.
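That second option's starting estimate could be sketched like this in Python/NumPy (the brightness threshold and the synthetic "scan" are my own illustrative choices; "background" here just means the near-white page pixels):

```python
import numpy as np

def estimate_snr(img, background_thresh=0.8):
    """Rough constant-SNR starting point: mean of all pixel values
    divided by the standard deviation of the near-white "background"
    pixels (anything brighter than background_thresh)."""
    background = img[img > background_thresh]
    return img.mean() / background.std()

# Synthetic "scan": a noisy white page with a small block of dark text
rng = np.random.default_rng(2)
page = np.clip(0.9 + 0.01 * rng.standard_normal((50, 50)), 0.0, 1.0)
page[20:25, 20:25] = 0.1
print(estimate_snr(page))
```

Whatever number this produces is only a seed value; you would still tweak it by eye while iterating on the deconvolution.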

The method I usually choose is number one. The choice of reference image is a little arbitrary - it works well for text-based images like we have here, because any image with text on a similar background makes a good reference. For photos and the like, it's not always quite so successful. However, it does have a benefit over the second method: it provides a value for every (u,v) point. That is, it is frequency dependent rather than assuming a constant signal-to-noise ratio at every frequency, which I have found generally gives better results.

So, let's take the blurred bank statement image above and try out the Wiener deconvolution. The MATLAB code used is given at the end of the post. The steps are as follows:

1. Look at the blurred image and determine what type of kernel was used. In the example image above, it's pretty clear a "disc" kernel was used (I wrote a little script to randomly choose the type and parameters and add noise with a random variance - you've just got to trust me on that one!). Failing that, there are only about four or five to test out if you want to go by trial and error.
2. Decide on a method to estimate the SNR. We're using the first method, so we'll take this as our reference image:

And take an initial educated guess at our noise variance and use this to create a "noise image".

3. Finally, use the Wiener filter method to produce our output image, then iterate to find the best parameters for the blur kernel size (radius, in this case) and the noise variance.
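The whole pipeline above can be sketched end-to-end in Python/NumPy. This is a synthetic stand-in, not my MATLAB script: I blur a fake "document" with a disc kernel, add a little Gaussian noise, and restore it with the constant-NSR variant of the Wiener filter (all parameter values here are illustrative):

```python
import numpy as np

def disc_psf(radius, shape):
    """Disc kernel of the given radius laid out on a full-size grid,
    centred at the origin with wrap-around so it adds no phase shift."""
    y, x = np.mgrid[:shape[0], :shape[1]]
    yy = np.minimum(y, shape[0] - y)  # wrap-around distance from origin
    xx = np.minimum(x, shape[1] - x)
    k = (xx**2 + yy**2 <= radius**2).astype(float)
    return k / k.sum()

rng = np.random.default_rng(0)

# A fake "document": white page with a dark block standing in for text
src = np.ones((64, 64))
src[24:40, 16:48] = 0.0

# Step 1: the blur we're pretending someone applied - a disc kernel,
# plus a little additive Gaussian noise
psf = disc_psf(4, src.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(src) * np.fft.fft2(psf)))
blurred = blurred + rng.normal(0.0, 1e-4, src.shape)

# Steps 2-3: Wiener deconvolution with a constant noise-to-signal ratio
# (this value suits the synthetic example; on a real image you would
# iterate on it, and on the kernel radius, by eye)
H = np.fft.fft2(psf)
nsr = 1e-6
restored = np.real(np.fft.ifft2(
    np.conj(H) / (np.abs(H) ** 2 + nsr) * np.fft.fft2(blurred)))

# The restored image sits much closer to the source than the blurred one
print(np.abs(restored - src).mean() < np.abs(blurred - src).mean())
```

On a real image you would of course start from the loaded file rather than a synthetic source, and the "iterate to find the best parameters" step is exactly the loop of adjusting `radius` and `nsr` until the text snaps into focus.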

And voilà, your credit card details plain for everyone to see.

There are currently a few manual steps to this process as I have not yet spent the time implementing them automatically; however, there is much room for expansion here. A simple function giving a metric of how blurry an image is (perhaps the mean pixel value of the Laplacian of the image) could be fed into a gradient-descent optimization algorithm and used to automatically choose the best parameters. Furthermore, machine learning techniques could no doubt be applied to identify the blur kernel shape and size, and I imagine I will be writing up a post on exactly that soon.
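That blur metric could be sketched as follows in Python/NumPy (the mean absolute Laplacian rather than its mean raw value, and a crude box blur for the test - both my own choices for illustration):

```python
import numpy as np

def sharpness(img):
    """Mean absolute value of the discrete Laplacian - higher means sharper."""
    # 4-neighbour Laplacian, interior pixels only
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4 * img[1:-1, 1:-1])
    return np.abs(lap).mean()

rng = np.random.default_rng(0)
sharp = (rng.random((32, 32)) > 0.5).astype(float)  # high-contrast pattern
# crude 3x3 box blur (valid region only)
blurry = sum(sharp[i:i + 30, j:j + 30] for i in range(3) for j in range(3)) / 9.0
print(sharpness(sharp) > sharpness(blurry))  # True
```

An optimizer would simply search the kernel parameters and NSR for the deconvolved output that maximises this score.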

But for now - please just crop out anything you don't want people to see. It's for the best!

MATLAB has its own deconvolution functions, but in case you want to implement it yourself, here is my implementation. It takes four parameters, all of which are images: the blurred input, the reference, an image of Gaussian noise (created using the estimated variance), and an estimate of the blur kernel:

```matlab
%Load blurred and reference image
image = im2double(imread('blurred.png'));
whole_ref = imread('fakeaccountref.png');
reference = im2double(whole_ref(:,:,1));

%Blur kernel (parameters from trial and error)
PSF = fspecial('disk', 10);

%Noise
noise_var = 0.0001;

%Create noise image using variance
noise = zeros(size(reference));
noise = imnoise(noise, 'gaussian', 0, noise_var);

%Deconvolution with reference
i = deconvolute(image, reference, noise, PSF);
imshow(i)
```

```matlab
function [output_image, final_blur] = deconvolute(image, source_estimate, noise_estimate, PSF_estimate)
    signal_var = var(image(:));          %Variance of the blurred image
    noise_var = var(noise_estimate(:));  %Variance of the noise estimate (also an image, only noise)

    %Fourier transform of the point spread function, zero-padded to the image size
    PSF_fft = fft2(padarray(PSF_estimate, (size(image) - size(PSF_estimate)), 'post'));

    %G, FFT of our blurred image
    G = fft2(image);

    %FFT of our noise estimate image
    dftNoise = fft2(noise_estimate);

    %FFT of the "reference" image
    dftReference = fft2(source_estimate);

    psNoise = abs(dftNoise).^2;          %Energy spectrum of noise
    psReference = abs(dftReference).^2;  %Energy spectrum of reference

    %Wiener formula
    temp1 = abs(PSF_fft).^2;
    temp2 = psNoise ./ psReference;
    S = (1 ./ PSF_fft) .* (temp1 ./ (temp1 + temp2));
    G_inv = S .* G;

    %Our output image
    output_image = abs(ifft2(G_inv));
    %classify_blur is a separate helper function (not shown here)
    final_blur = classify_blur(output_image);
end
```