Denoising from a sequence

First, grab the new basecode and input images from Canvas.

Basic sequence denoising

Try it on the sequence in the directory aligned-ISO3200 using the first part of testDenoiseSeq() in a3_main.cpp. We suggest testing with at least 16 images, and experimenting with other images to see how well the method converges.
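To make the sketches in this handout self-contained, the block below pairs a minimal stand-in image type with what sequence averaging might look like. The basecode ships its own Image class, so treat the struct, member, and function names here as illustrative rather than as the required interface; the key point is that averaging N aligned noisy frames reduces the noise variance by roughly a factor of N.

    #include <cassert>
    #include <vector>

    // Minimal stand-in for the basecode's Image class (illustrative only).
    struct Image {
        int width, height, channels;
        std::vector<float> data;
        Image(int w, int h, int c = 1)
            : width(w), height(h), channels(c), data(w * h * c, 0.0f) {}
        float &operator()(int x, int y, int c = 0) {
            return data[(y * width + x) * channels + c];
        }
        float operator()(int x, int y, int c = 0) const {
            return data[(y * width + x) * channels + c];
        }
    };

    // Denoise by averaging a sequence of aligned images.
    Image denoiseSeq(const std::vector<Image> &seq) {
        assert(!seq.empty());
        Image out(seq[0].width, seq[0].height, seq[0].channels);
        const float n = static_cast<float>(seq.size());
        for (const Image &im : seq)
            for (int c = 0; c < out.channels; ++c)
                for (int y = 0; y < out.height; ++y)
                    for (int x = 0; x < out.width; ++x)
                        out(x, y, c) += im(x, y, c) / n;
        return out;
    }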

Variance

Compare the signal-to-noise ratio (SNR) of the ISO 3200 and ISO 400 sequences. Which ISO has the better SNR? Answer this question in the README.txt file. As in the previous section, use at least 16 images; more will give you better estimates. Visualize the variance of the images in aligned-ISO3200 using the second half of testDenoiseSeq() in a3_main.cpp.
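Below is a sketch of how per-pixel variance and a simple SNR estimate could be computed, reusing the stand-in Image type and denoiseSeq from the previous sketch. The exact SNR definition expected by the basecode may differ; squared mean over variance (on a log scale) is just one common choice.

    #include <cmath>

    // Per-pixel variance across the sequence (two passes: mean, then squared
    // deviations from the mean). Assumes the images are already aligned.
    Image denoiseSeqVariance(const std::vector<Image> &seq) {
        Image mean = denoiseSeq(seq);
        Image var(mean.width, mean.height, mean.channels);
        const float n = static_cast<float>(seq.size());
        for (const Image &im : seq)
            for (int c = 0; c < var.channels; ++c)
                for (int y = 0; y < var.height; ++y)
                    for (int x = 0; x < var.width; ++x) {
                        float d = im(x, y, c) - mean(x, y, c);
                        var(x, y, c) += d * d / n;
                    }
        return var;
    }

    // One common SNR estimate: squared mean over variance, averaged over all
    // pixels and reported in decibels (10 * log10 of a power ratio).
    float logSNR(const std::vector<Image> &seq, float eps = 1e-6f) {
        Image mean = denoiseSeq(seq);
        Image var = denoiseSeqVariance(seq);
        double acc = 0.0;
        long count = 0;
        for (int c = 0; c < mean.channels; ++c)
            for (int y = 0; y < mean.height; ++y)
                for (int x = 0; x < mean.width; ++x) {
                    float m = mean(x, y, c), v = var(x, y, c);
                    acc += 10.0 * std::log10((m * m + eps) / (v + eps));
                    ++count;
                }
        return static_cast<float>(acc / count);
    }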

Alignment

The image sequences you have looked at so far have been perfectly aligned. In practice, the camera may move between exposures, so you need to align the images before denoising.

Use testDenoiseShiftSeq() and testOffset() in a3_main.cpp to help test your functions on the images from the green sequence in Input/green.
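One straightforward way to find the offset of each image is an exhaustive search over integer shifts, scoring each candidate by the sum of squared differences on the region where the two images overlap. The sketch below uses the illustrative Image type again; the signature that testOffset() actually expects may be different. Once the offsets are known, each image is shifted back to the reference frame and the shifted sequence is averaged as before.

    #include <limits>

    // Brute-force search for the integer shift (dx, dy) that best aligns im2
    // to im1, minimizing the sum of squared differences. maxOffset bounds the
    // search window; restricting the comparison to an inner window keeps
    // x + dx and y + dy in bounds for every candidate shift.
    void align(const Image &im1, const Image &im2, int maxOffset,
               int &bestDx, int &bestDy) {
        float bestCost = std::numeric_limits<float>::max();
        bestDx = bestDy = 0;
        for (int dy = -maxOffset; dy <= maxOffset; ++dy)
            for (int dx = -maxOffset; dx <= maxOffset; ++dx) {
                float cost = 0.0f;
                for (int y = maxOffset; y < im1.height - maxOffset; ++y)
                    for (int x = maxOffset; x < im1.width - maxOffset; ++x)
                        for (int c = 0; c < im1.channels; ++c) {
                            float d = im1(x, y, c) - im2(x + dx, y + dy, c);
                            cost += d * d;
                        }
                if (cost < bestCost) {
                    bestCost = cost;
                    bestDx = dx;
                    bestDy = dy;
                }
            }
    }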

Figure: (a) Averaging. (b) Aligned averaging.

Basic Demosaicing

Most digital sensors record color images through a Bayer mosaic, where each pixel captures only one of the three color channels; software interpolation is then needed to reconstruct all three channels at each pixel. The green channel is recorded twice as densely as red and blue, as shown below.

Figure: the Bayer mosaic pattern.

We represent raw images as grayscale images (the red, green, and blue channels are all the same). The images are encoded linearly, so you do not have to account for gamma. You can open these images in your favorite image viewer and zoom in to see the pattern of the Bayer mosaic.

We provide you with a number of raw images and your task is to write functions that demosaic them. We encourage you to debug your code using raw/signs-small.png because it is not too big and exhibits the interesting challenges of demosaicing.

For simplicity, we ignore pixels near the boundary of the image. That is, the first and last two rows and columns of pixels don't need to be reconstructed. This lets you focus on the general case without worrying about missing neighbors at the boundary. It's actually not uncommon for cameras and software to return a slightly-cropped image for similar reasons. For the border pixels that you do not calculate, copy the pixel values from the same location in the original raw image to your output image. See http://www.luminous-landscape.com/contents/DNG-Recover-Edges.shtml

Basic green channel

Try your code on the included images in Input/raw and verify that you get a nice smooth interpolation. You can also try your own raw images by converting them with the program dcraw.
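For reference, here is a sketch of what bilinear interpolation of the green channel might look like on the grayscale raw input: known samples are kept, and each missing green value is the average of its four direct neighbors. The Bayer parity (which checkerboard holds the green samples) is an assumption in this sketch; check it against your images.

    // Reconstruct the green channel of a Bayer raw image. Green is assumed to
    // live where (x + y) % 2 == offset; the actual parity depends on the sensor.
    // Border pixels are simply copied from the raw image, per the handout.
    Image basicGreen(const Image &raw, int offset = 1) {
        Image out(raw.width, raw.height, 1);
        for (int y = 0; y < raw.height; ++y)
            for (int x = 0; x < raw.width; ++x)
                out(x, y) = raw(x, y);  // copy everything, including the border
        for (int y = 1; y < raw.height - 1; ++y)
            for (int x = 1; x < raw.width - 1; ++x)
                if ((x + y) % 2 != offset)  // no green sample here: interpolate
                    out(x, y) = 0.25f * (raw(x - 1, y) + raw(x + 1, y) +
                                         raw(x, y - 1) + raw(x, y + 1));
        return out;
    }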

Basic red and blue

Use testBasicDemosaic() in a3_main.cpp to help test your basic demosaicing functions.
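Red and blue are sparser, with one sample per 2x2 Bayer block, so a missing value is interpolated from two horizontal, two vertical, or four diagonal neighbors depending on its position relative to the samples. The sketch below assumes particular channel offsets (an RGGB-style layout); the offsets for the provided raw images may differ.

    // Reconstruct the red or blue channel. Samples sit at
    // (x % 2 == offsetX, y % 2 == offsetY).
    Image basicRorB(const Image &raw, int offsetX, int offsetY) {
        Image out(raw.width, raw.height, 1);
        for (int y = 0; y < raw.height; ++y)
            for (int x = 0; x < raw.width; ++x)
                out(x, y) = raw(x, y);  // copy everything, including the border
        for (int y = 1; y < raw.height - 1; ++y)
            for (int x = 1; x < raw.width - 1; ++x) {
                bool sampleX = (x % 2 == offsetX), sampleY = (y % 2 == offsetY);
                if (sampleX && sampleY) continue;  // actual sample: keep it
                if (sampleY)                       // sample row: left/right
                    out(x, y) = 0.5f * (raw(x - 1, y) + raw(x + 1, y));
                else if (sampleX)                  // sample column: up/down
                    out(x, y) = 0.5f * (raw(x, y - 1) + raw(x, y + 1));
                else                               // neither: four diagonals
                    out(x, y) = 0.25f * (raw(x - 1, y - 1) + raw(x + 1, y - 1) +
                                         raw(x - 1, y + 1) + raw(x + 1, y + 1));
            }
        return out;
    }

    // Assemble a 3-channel image. The offsets below are a guess for an RGGB
    // layout (red at even/even, blue at odd/odd); adjust them to your data.
    Image basicDemosaic(const Image &raw) {
        Image g = basicGreen(raw, 1);
        Image r = basicRorB(raw, 0, 0);
        Image b = basicRorB(raw, 1, 1);
        Image out(raw.width, raw.height, 3);
        for (int y = 0; y < raw.height; ++y)
            for (int x = 0; x < raw.width; ++x) {
                out(x, y, 0) = r(x, y);
                out(x, y, 1) = g(x, y);
                out(x, y, 2) = b(x, y);
            }
        return out;
    }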

Edge-based green

One central idea to improve demosaicing is to exploit structures and patterns in natural images. In particular, 1D structures like edges can be exploited to gain more resolution. We will implement the simplest version of this principle to improve the interpolation of the green channel. We focus on green because it has a denser sampling rate and usually a better SNR.

For each pixel, we will adaptively decide to interpolate in either the vertical or the horizontal direction. That is, the final value will be the average of only two pixels, either up and down or left and right. We will base the decision on a comparison of the up-down and left-right variations. It is up to you to think or experiment and decide whether you should interpolate along the direction of largest or smallest difference.
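The sketch below shows one reasonable choice, averaging the pair of neighbors along the direction of smaller variation (i.e., along an edge rather than across it); whether that is the right choice is still for you to justify by experiment.

    #include <cmath>

    // Edge-aware green interpolation: at each missing green pixel, compare the
    // vertical and horizontal differences and average the two neighbors along
    // the direction with the smaller difference.
    Image edgeBasedGreen(const Image &raw, int offset = 1) {
        Image out(raw.width, raw.height, 1);
        for (int y = 0; y < raw.height; ++y)
            for (int x = 0; x < raw.width; ++x)
                out(x, y) = raw(x, y);
        for (int y = 1; y < raw.height - 1; ++y)
            for (int x = 1; x < raw.width - 1; ++x)
                if ((x + y) % 2 != offset) {
                    float dv = std::fabs(raw(x, y - 1) - raw(x, y + 1));
                    float dh = std::fabs(raw(x - 1, y) - raw(x + 1, y));
                    out(x, y) = (dv < dh)
                        ? 0.5f * (raw(x, y - 1) + raw(x, y + 1))
                        : 0.5f * (raw(x - 1, y) + raw(x + 1, y));
                }
        return out;
    }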

Red and blue based on green

A number of demosaicing techniques work in two steps and focus on first getting a high-resolution interpolation of the green channel using a technique such as edgeBasedGreen(), and then using this high-quality green channel to guide the interpolation of red and blue.

One simple approach of this kind is to interpolate the difference between red and green (resp. blue and green). Adapt your code above to interpolate the red or blue channel based not only on the raw input image, but also on a reconstructed green channel.
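A sketch of the green-guided interpolation is shown below, reusing basicRorB from the earlier sketch on the red-minus-green (or blue-minus-green) difference image. Note that adding the green back at the border reproduces the raw values there, which is consistent with the border-copy convention.

    // Interpolate red (or blue) as green plus a smoothly interpolated
    // difference: subtract the reconstructed green, interpolate the difference
    // with the basic scheme, then add the green back.
    Image greenBasedRorB(const Image &raw, const Image &green,
                         int offsetX, int offsetY) {
        Image diff(raw.width, raw.height, 1);
        for (int y = 0; y < raw.height; ++y)
            for (int x = 0; x < raw.width; ++x)
                diff(x, y) = raw(x, y) - green(x, y);
        // Only the values at the R (or B) sample sites are meaningful, and
        // those are the only interior values basicRorB reads.
        Image interp = basicRorB(diff, offsetX, offsetY);
        Image out(raw.width, raw.height, 1);
        for (int y = 0; y < raw.height; ++y)
            for (int x = 0; x < raw.width; ++x)
                out(x, y) = interp(x, y) + green(x, y);
        return out;
    }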

Try this new improved demosaicing pipeline on raw/signs-small.png using testGreenEdgeDemosaic() in a3_main.cpp and notice that most (but not all) artifacts are gone.
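Combining the last two sketches gives the improved pipeline; the Bayer offsets are the same assumed values as before.

    // Improved pipeline: edge-aware green, then green-guided red and blue.
    Image improvedDemosaic(const Image &raw) {
        Image g = edgeBasedGreen(raw, 1);
        Image r = greenBasedRorB(raw, g, 0, 0);
        Image b = greenBasedRorB(raw, g, 1, 1);
        Image out(raw.width, raw.height, 3);
        for (int y = 0; y < raw.height; ++y)
            for (int x = 0; x < raw.width; ++x) {
                out(x, y, 0) = r(x, y);
                out(x, y, 1) = g(x, y);
                out(x, y, 2) = b(x, y);
            }
        return out;
    }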

Figure: (a) basicDemosaic. (b) edgeBasedGreenDemosaic. (c) improvedDemosaic.

Sergey Prokudin-Gorsky

The Russian photographer Sergey Prokudin-Gorsky took beautiful color photographs in the early 1900s by sequentially exposing three plates with three different filters.

Figure: a Sergey Prokudin-Gorsky plate triplet.

We include a number of these triplets of images in Input/Sergey (courtesy of Alyosha Efros). Your task is to reconstruct RGB images given these inputs.

Cropping and splitting

Alignment

The image that you get out of your split function will have its 3 channels misaligned.

Figure: (a) Naive RGB. (b) Aligned RGB.

Use testSergey() in a3_main.cpp to help test your functions.
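Putting the two steps together, here is a sketch of how splitting and alignment might be combined. It assumes the scanned plate stacks the blue, green, and red exposures from top to bottom (verify the order on your data) and reuses the brute-force align from the denoising sketch.

    #include <algorithm>

    // Split a Prokudin-Gorsky plate into three channels and align them to the
    // blue channel, then stack the shifted channels into an RGB image.
    Image splitAndAlignSergey(const Image &plate, int maxOffset = 15) {
        int h = plate.height / 3;  // integer division drops any leftover rows
        Image b(plate.width, h), g(plate.width, h), r(plate.width, h);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < plate.width; ++x) {
                b(x, y) = plate(x, y);          // top third (assumed blue)
                g(x, y) = plate(x, y + h);      // middle third (assumed green)
                r(x, y) = plate(x, y + 2 * h);  // bottom third (assumed red)
            }
        int dxg, dyg, dxr, dyr;
        align(b, g, maxOffset, dxg, dyg);  // shift that maps g onto b
        align(b, r, maxOffset, dxr, dyr);  // shift that maps r onto b
        Image out(plate.width, h, 3);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < plate.width; ++x) {
                int xg = std::min(std::max(x + dxg, 0), plate.width - 1);
                int yg = std::min(std::max(y + dyg, 0), h - 1);
                int xr = std::min(std::max(x + dxr, 0), plate.width - 1);
                int yr = std::min(std::max(y + dyr, 0), h - 1);
                out(x, y, 0) = r(xr, yr);
                out(x, y, 1) = g(xg, yg);
                out(x, y, 2) = b(x, y);
            }
        return out;
    }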

Extra credit (up to 10%)

Here are some ideas for extra credit for both graduate and undergraduate students:

  • Implement a coarse-to-fine alignment. Compare against your brute-force one.
  • Take potential rotations into account for alignment. This could be slow!
  • Implement smarter demosaicing. Make sure you describe what you did. For example, you can use all three channels and a bigger neighborhood to decide the interpolation direction.
  • Implement an automatic method to intelligently crop and/or correct the tonality and colors of the Sergey Prokudin-Gorsky images. Describe the logic of your approach in the readme.

Submission

Turn in your files using Canvas and make sure all your files are in the a3 directory under the root of the zip file. Include all source (.cpp and .h) files, any of your images, and the output of your program. Don't include your actual executable (we don't need your _build directory), and remove any superfluous files before submission.

For this assignment, due to size, please do not include the original Input image directory in the zip file that you upload to Canvas. You can run make zip from the terminal to zip the contents of your folder in this way.

In your readme.txt file, you should also answer the following questions:

Acknowledgments: This assignment is based on one designed by Frédo Durand, Katherine L. Bouman, Gaurav Chaurasia, Adrian Vasile Dalca, and Neal Wadhwa for MIT's Digital & Computational Photography class.