Computational Aspects of Digital Photography

When resampling images or performing neighborhood operations, we may try to access a pixel at x, y coordinates that are outside the image. To handle such cases gracefully, it is good practice to write a function that takes x, y as input and checks them against the bounds of the image before looking up a value.

Multiple options are possible when a pixel is requested outside the bounds, but the two most common are to return a black pixel or the closest available pixel (that of the nearest edge). We recommend you implement both. For the latter, simply clamp x to `[0..width-1]` and y to `[0..height-1]` and perform the lookup there.

Black padding is more appropriate for applications such as scale and rotation whereas edge-pixel padding looks better for warping and for convolution.

In `floatimage.cpp` implement the function:

`float FloatImage::smartAccessor(int x, int y, int z, bool clampToEdge) const`

: that will return a black pixel (`clampToEdge = false`) or the nearest pixel value (`clampToEdge = true`) when indexing out of the bounds of the image. You may find the template function `clamp(a, low, hi)` in `utils.h` useful.

In the following problems, you will implement several functions in which you will convolve an image with a kernel. This will require indexing out of the bounds of the image. Handle these boundary effects by using the `smartAccessor` from the previous section. Also, process each of the three color channels in an image independently.

We have provided you with a function `FloatImage impulseImg(const int &k)` that generates a \( k \times k \times 1 \) grayscale image that is black everywhere except for one pixel in the center, which is completely white. If you convolve a kernel with this image, you should get a copy of the kernel in the center of the image. An example of this can be seen in `testGradient()` (in `a4_main.cpp`).

In `filtering.cpp` implement the function:

`FloatImage boxBlur(const FloatImage &im, const int &k, bool clamp)`

: This function takes in an image and an odd integer \( k \) as input and outputs, for each pixel, the average of its \( k \times k \) neighborhood. Make sure the average is centered. We will only test you on odd \( k \).

Now, you will implement a more general convolution function that uses an arbitrary kernel. The kernel is an instance of the `Filter` class. To create a kernel, use the constructor `Filter(const vector<float> &kernel, const int &fWidth, const int &fHeight)`. This takes in a row-major vector containing the values of the kernel (just like in the `FloatImage` class), and the width and height of the kernel respectively (`fWidth` and `fHeight` must be odd). See `a4_main.cpp` for example kernels.

In `filtering.cpp` implement the functions:

`FloatImage Filter::Convolve(const FloatImage &im, bool clamp)`

in the `Filter` class: This function should compute the convolution of an input image with its kernel. Make sure the convolution is centered. Remember that in convolution you must flip the kernel. **Hint:** To access the `(x, y)` location in a kernel from within the class, use `Filter::operator()(x, y)`.

`FloatImage boxBlur_filterClass(const FloatImage &im, const int &k, bool clamp)`

: Using `Convolve`, implement the box filter. Check that you get the same answer as before with `boxBlur`.

Pay attention to indexing: `(0, 0)` denotes the upper left corner of the image, but for our kernels we want the center to be in the middle. This means you might need to shift indices by half the kernel size. Test your function with `impulseImg()`, a constant image, and real images.

In `filtering.cpp` implement the function:

`FloatImage gradientMagnitude(const FloatImage &im, bool clamp)`

: that uses the provided Sobel kernels in `testGradient()` (in `a4_main.cpp`) to compute the gradient magnitude from the horizontal and vertical components of the gradient of an image. The gradient magnitude is defined as the square root of the sum of the squares of the two components. The Sobel kernels for the horizontal and vertical components of the gradient are, respectively, \[ \left[ \begin{array}{ccc} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{array} \right] \quad \text{and} \quad \left[ \begin{array}{ccc} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{array} \right] \]

In `filtering.cpp` implement the functions:

`vector<float> gauss1DFilterValues(float sigma, float truncate)`

: that returns the kernel values of a 1-dimensional Gaussian of standard deviation `sigma`. Gaussians have infinite support, but their energy falls off so rapidly that you can truncate them at `truncate` times the standard deviation `sigma`. Make sure that your truncated kernel is normalized to sum to 1. Your kernel's output length should be `1+2*ceil(sigma * truncate)`.

`FloatImage gaussianBlur_horizontal(const FloatImage &im, float sigma, float truncate, bool clamp)`

: Use the vector returned by `gauss1DFilterValues` to generate a 1D horizontal Gaussian kernel using the `Filter` class. Create and use this Filter to blur an image horizontally.

In `filtering.cpp` implement the functions:

`vector<float> gauss2DFilterValues(float sigma, float truncate)`

: that returns a full 2D rotationally symmetric Gaussian kernel with standard deviation `sigma`, corresponding to a size of `1+2*ceil(sigma * truncate)` \(\times\) `1+2*ceil(sigma * truncate)` pixels.

`FloatImage gaussianBlur_2D(const FloatImage &im, float sigma, float truncate, bool clamp)`

: that uses the kernel from `gauss2DFilterValues` to filter an image.

In `filtering.cpp` implement the function:

`FloatImage gaussianBlur_seperable(const FloatImage &im, float sigma, float truncate, bool clamp)`

: implement separable Gaussian filtering using a 1D horizontal Gaussian filter followed by a 1D vertical one.

Verify that you get the same result with the full 2D filtering as with the separable Gaussian filtering. Measure the running times of the separable filtering vs. 2D filtering using `testGaussianFilters()` in `a4_main.cpp`.

In `filtering.cpp` implement the function:

`FloatImage unsharpMask(const FloatImage &im, float sigma, float truncate, float strength, bool clamp)`

: that sharpens an image. Use a Gaussian of standard deviation `sigma` to extract a lowpassed version of the image. Subtract the lowpassed version from the original image to produce a highpassed version, then add the highpassed version back to the original, scaled by `strength`.

In `filtering.cpp` implement the function:

`FloatImage bilateral(const FloatImage &im, float sigmaRange, float sigmaDomain, float truncateDomain, bool clamp)`

: that filters an image using the bilateral filter. The filter is defined as \[ I_{out}(x,y) = \frac{1}{k(x,y)} \sum_{x',y'} G(x-x', y-y', \sigma_{Domain})\, G(I_{in}(x, y) - I_{in}(x',y'), \sigma_{Range})\, I_{in}(x',y') \\ k(x,y) = \sum_{x',y'} G(x-x', y-y', \sigma_{Domain})\, G(I_{in}(x,y) - I_{in}(x',y'), \sigma_{Range}) \] where \(I_{in}\) is the input image, the \(G\) are Gaussian kernels, \(k(x,y)\) is a per-pixel normalization factor, and the sums run over the neighboring pixels \((x', y')\) within the truncated domain window. The bilateral filter is very similar to convolution, but the kernel varies spatially and depends on the color difference between a pixel and its neighbors.

The range Gaussian on \(I_{in}(x, y) - I_{in}(x', y')\) should be computed using the Euclidean distance in RGB. Try your filter on the provided noisy `lens` image as well as on simple test cases.

We want to avoid chromatic artifacts by filtering chrominance more than luminance. This is because the human visual system is sensitive mainly to low frequencies in the chrominance components, so stronger blurring there is far less noticeable.

In `filtering.cpp` implement the function:

`FloatImage bilaYUV(const FloatImage &im, float sigmaRange, float sigmaY, float sigmaUV, float truncateDomain, bool clamp)`

: that performs bilateral denoising in YUV, where the Y channel is denoised with a different domain sigma than the U and V channels. You can use your own implementation of the YUV conversion from the previous assignment, or use the ones we provide in `basicImageManipulation.h`.

In all cases, make sure you compute the range Gaussian with respect to the full YUV coordinates, not just the channel you are filtering. We recommend a spatial sigma four times as big for U and V as for Y.

Here are ideas for extensions you could attempt, for up to 5% extra credit each:

- Median filter
- Create fancier kernels for your `Filter` class that mimic different types of camera apertures (e.g. circular, pentagonal, hexagonal). Create these either parametrically/programmatically, or create the kernels in an image editing program and extend your `Filter` class to load kernels from a PNG file.
- Implement a fake tilt-shift effect by blurring an image with a spatially-varying kernel size.
- Fast incremental and separable box filter. Evaluate the speed improvements.
- Fast Gaussian filter approximation using repeated fast box blurs (your solution should be asymptotically independent of kernel size, and should automatically determine the box blur sizes from the desired Gaussian sigma). Evaluate the performance and accuracy relative to your separable Gaussian implementation.
- Use a look-up table to accelerate the computation of Gaussians for bilateral filtering. Evaluate the performance difference.
- Denoising using NL means http://www.math.ens.fr/culturemath/maths/mathapli/imagerie-Morel/Buades-Coll-Morel-movies.pdf

Turn in your files using Canvas and make sure all your files are in the `a4` directory under the root of the zip file. Include all sources (.cpp and .h) files, any of **your** images, and the output of your program. Don't include your actual executable (we don't need your _build directory), and remove any superfluous files before submission.

In your readme.txt file, you should also answer the following questions:

- How long did the assignment take?
- Potential issues with your solution and explanation of partial completion (for partial credit)
- Any extra credit you may have implemented
- Collaboration acknowledgment (you must write your own code)
- What was most unclear