In this assignment, we will only work with images that are linear, static, and where the camera hasn't moved. The image files are provided with gamma 2.2 encoding, and we have inserted lines of code in makeHDR and toneMap to decode and re-encode, respectively, the images. Keep these lines in your implementations of makeHDR and toneMap.
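For reference, that decode/encode step just undoes and re-applies the gamma curve. Below is a minimal sketch of what such lines might look like; the flat operator()(int) and size() accessors on FloatImage are assumptions for illustration, not necessarily the exact interface of the provided class:

    #include <cmath>

    // Hedged sketch: undo the gamma 2.2 encoding so pixel values are linear.
    void gammaDecode(FloatImage &im, float gamma = 2.2f) {
        for (int i = 0; i < im.size(); ++i)
            im(i) = std::pow(im(i), gamma);        // encoded -> linear
    }

    // Re-apply the gamma curve before writing the output image to disk.
    void gammaEncode(FloatImage &im, float gamma = 2.2f) {
        for (int i = 0; i < im.size(); ++i)
            im(i) = std::pow(im(i), 1.0f / gamma); // linear -> encoded
    }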
Consult the course slides for the overall merging approach. We will first compute weights for each pixel and each channel to eliminate values that are too high or too low. We will then compute the scale factor between the values of two images. Finally, we will merge a sequence of images using a weighted combination of the individual images according to the weights and scale factors.
FloatImage computeWeight(const FloatImage &im, float epsilonMini, float epsilonMaxi): returns an image with pixel value 1.0 wherever the corresponding pixel value of im is between epsilonMini and epsilonMaxi, and 0.0 otherwise. Your output weight image should have the same dimensions as your input image im; accordingly, this operation should be done on a per-pixel and per-channel basis. Use testComputeWeight in a5_main.cpp to help test this function.
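A minimal sketch of one way to write this, assuming FloatImage can be constructed from (width, height, channels) and supports flat operator()(int) access and size() (adapt to the actual class interface):

    FloatImage computeWeight(const FloatImage &im, float epsilonMini, float epsilonMaxi) {
        // Same dimensions as the input; each pixel/channel is judged independently.
        FloatImage weight(im.width(), im.height(), im.channels());
        for (int i = 0; i < im.size(); ++i)
            weight(i) = (im(i) >= epsilonMini && im(i) <= epsilonMaxi) ? 1.0f : 0.0f;
        return weight;
    }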
float computeFactor(const FloatImage &im1, const FloatImage &w1, const FloatImage &im2, const FloatImage &w2): takes two images and their weights computed according to the above method, and returns a single scalar corresponding to the median value of im2/im1 over the pixels that are usable in both im1 and im2. Add an epsilon of \(10^{-10}\) to the pixels of im1 to avoid divide-by-zero errors. Use testComputeFactor in a5_main.cpp to help test this function.
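A sketch under the same assumed FloatImage interface; it simply collects the usable ratios and takes their median:

    #include <algorithm>
    #include <vector>

    float computeFactor(const FloatImage &im1, const FloatImage &w1,
                        const FloatImage &im2, const FloatImage &w2) {
        std::vector<float> ratios;
        for (int i = 0; i < im1.size(); ++i) {
            // Only pixels marked usable in both images contribute.
            if (w1(i) > 0.0f && w2(i) > 0.0f)
                ratios.push_back(im2(i) / (im1(i) + 1e-10f)); // epsilon avoids divide by zero
        }
        if (ratios.empty()) return 1.0f; // defensive choice, not specified by the handout
        // Median via nth_element (no need to fully sort).
        std::nth_element(ratios.begin(), ratios.begin() + ratios.size() / 2, ratios.end());
        return ratios[ratios.size() / 2];
    }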
FloatImage makeHDR(const vector<FloatImage> imageList, float epsilonMini, float epsilonMaxi): takes a sequence of images as input and returns a single HDR image. Do not forget the special case for the computation of the weights of the darkest and brightest images: you should only threshold in one direction for these two images. If a pixel is not assigned to any of the weight images, assign it the corresponding value from the first image in the sequence (by doing this you should also avoid any divide-by-zero errors). To test your method, you can write out the output image scaled by different factors. Use testMakeHDR in a5_main.cpp to help test this function.
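The merging loop might look roughly like the following. It reuses the computeWeight and computeFactor sketches above, assumes the same FloatImage interface, assumes imageList is ordered from darkest to brightest exposure (swap the endpoint handling if your ordering differs), and omits the gamma-decode lines we provide:

    #include <cfloat>
    #include <vector>

    FloatImage makeHDR(const std::vector<FloatImage> imageList, float epsilonMini, float epsilonMaxi) {
        int n = (int)imageList.size();

        // Per-image weights, with one-sided thresholds at the endpoints:
        // the darkest image keeps its bright pixels, the brightest keeps its dark ones.
        std::vector<FloatImage> w;
        for (int k = 0; k < n; ++k) {
            float lo = (k == n - 1) ? -FLT_MAX : epsilonMini; // brightest: no low cutoff
            float hi = (k == 0)     ?  FLT_MAX : epsilonMaxi; // darkest: no high cutoff
            w.push_back(computeWeight(imageList[k], lo, hi));
        }

        // Chain exposure factors so every image is expressed relative to the first one.
        std::vector<float> factor(n, 1.0f);
        for (int k = 1; k < n; ++k)
            factor[k] = factor[k - 1] *
                        computeFactor(imageList[k - 1], w[k - 1], imageList[k], w[k]);

        // Weighted average of the exposure-aligned images.
        FloatImage hdr(imageList[0].width(), imageList[0].height(), imageList[0].channels());
        for (int i = 0; i < hdr.size(); ++i) {
            float sumVal = 0.0f, sumW = 0.0f;
            for (int k = 0; k < n; ++k) {
                sumVal += w[k](i) * imageList[k](i) / factor[k];
                sumW   += w[k](i);
            }
            // Pixels rejected by every weight image fall back to the first image.
            hdr(i) = (sumW > 0.0f) ? sumVal / sumW : imageList[0](i);
        }
        return hdr;
    }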
Now that you have assembled HDR images, you are going to implement tone mapping. Your tone mapper will follow the method studied in class. The function will be called FloatImage toneMap(const FloatImage &im, float targetBase, float detailAmp, bool useBila, float sigmaRange). It takes as input an HDR image, a target contrast for the base, an amplification factor for the detail, a Boolean selecting the bilateral filter vs. Gaussian blur, and the range standard deviation used by the bilateral filter.
As described in the lecture slides, our goal is to reduce the contrast of the HDR image (say, 1:10000) to what we can show on a display (say, 1:100). Although gamma correction would be the first thing we might think of, it results in washed-out images, as you might have seen in class. The colors are actually okay (they're all there), but the high frequencies are washed out. We therefore want to work on the luminance and boost its high frequencies, and we want to work in the log domain, since we are very sensitive to multiplicative contrast.
First, decompose the image into luminance and chrominance using vector<FloatImage> lumiChromi(const FloatImage &im). We then want the log10 of the luminance, but the log of zero is undefined, so implement the float image_minnonzero(const FloatImage &im) function, followed by FloatImage log10Image(const FloatImage &im), for which we have written function signatures and comments for you. Then, call log10Image with the luminance image as the argument.
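A sketch of the log conversion under the assumed FloatImage interface; the assumption that lumiChromi returns the luminance first is a guess worth checking against the provided comments:

    #include <algorithm>
    #include <cmath>

    FloatImage log10Image(const FloatImage &im) {
        // Clamp zero pixels to the smallest nonzero value so log10 is always defined.
        float minf = image_minnonzero(im);
        FloatImage out(im.width(), im.height(), im.channels());
        for (int i = 0; i < im.size(); ++i)
            out(i) = std::log10(std::max(im(i), minf));
        return out;
    }

    // Usage inside toneMap (sketch):
    //   std::vector<FloatImage> lc = lumiChromi(im); // lc[0] luminance, lc[1] chrominance (assumed order)
    //   FloatImage logLumi = log10Image(lc[0]);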
Next, we want to extract the detail of the (log) luminance channel. We do this by blurring the log luminance and subtracting the blurred result from the original; convince yourself that this gives you the details (high frequencies)! Blur the log luminance with either a Gaussian or a bilateral filter, depending on the Boolean useBila. In both cases, we will use a standard deviation for the spatial Gaussian equal to the biggest dimension of the image divided by 50. The parameter truncateDomain should be set to the default value of 3. You are welcome to use your own implementation of filtering methods, but you can also use our versions in filtering.cpp.
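Continuing the sketch, the base/detail split could look like the following; the blur helpers below (gaussianBlur_2D, bilateral) and the elementwise FloatImage subtraction are assumptions standing in for whatever filtering.cpp actually provides:

    #include <algorithm>

    // Split the log luminance into a low-frequency base and a high-frequency detail layer.
    void baseDetailSplit(const FloatImage &logLumi, bool useBila, float sigmaRange,
                         FloatImage &base, FloatImage &detail) {
        // Spatial sigma: biggest image dimension divided by 50, truncated at 3 sigma.
        float sigmaDomain = std::max(logLumi.width(), logLumi.height()) / 50.0f;
        float truncateDomain = 3.0f;
        base = useBila ? bilateral(logLumi, sigmaRange, sigmaDomain, truncateDomain)
                       : gaussianBlur_2D(logLumi, sigmaDomain, truncateDomain);
        detail = logLumi - base; // subtracting the blur leaves the high frequencies
    }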
Great, you are finally ready to put everything together! Compute the scale factor k in the log domain that brings the dynamic range of the base to the given target (that is, the range in the log domain should be log10(targetBase) after applying k). We have provided two new functions that you might find useful: float min() const and float max() const in the FloatImage class. Get a new base luminance using the factor, multiply the details (in the log domain) by detailAmp, and then add the resulting base and detail. Make sure to add an offset that ensures that the largest base value will be mapped to 1 once the image is put back into the linear domain.
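One way to write this step, assuming FloatImage supports elementwise arithmetic with scalars and other images, and using the min()/max() members mentioned above:

    #include <cmath>

    // Compress the base range to log10(targetBase) and boost the detail (all in log10).
    FloatImage compressAndRecombine(const FloatImage &base, const FloatImage &detail,
                                    float targetBase, float detailAmp) {
        float maxB = base.max();
        float minB = base.min();
        // k maps the base range (maxB - minB) onto log10(targetBase) in the log domain.
        float k = std::log10(targetBase) / (maxB - minB);
        // Subtracting maxB pins the largest base value at 0 in log10,
        // i.e. at 1.0 once we return to the linear domain.
        return (base - maxB) * k + detail * detailAmp;
    }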
Finally, convert the result back from the log domain (we have provided FloatImage exp10Image(const FloatImage &im) for this). Then, reintegrate the chrominance into the resulting image. We've provided the function FloatImage lumiChromi2rgb(const FloatImage &lumi, const FloatImage &chromi) in basicImageManipulation.cpp, which might be helpful.
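Putting the tail end of toneMap together might then look like the fragment below, where outLog is the compressed log luminance from the previous step and chromi is the chrominance from lumiChromi (both are assumed local variables; the gamma re-encoding lines we provide are omitted):

    // Sketch of the final lines of toneMap.
    FloatImage outLumi = exp10Image(outLog);              // leave the log domain
    FloatImage result  = lumiChromi2rgb(outLumi, chromi); // reattach the chrominance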
Enjoy your results and compare the bilateral version with the Gaussian one. Use the functions testToneMapping_ante2, testToneMapping_ante3, testToneMapping_design, and testToneMapping_boston in a5_main.cpp to help test your tone mapping function.
Take your own exposure-bracketed sequence of images of a scene with high dynamic range. Think about what types of scenes can benefit from HDR imaging and tone mapping, and try to be creative. Stabilize your camera either on a tripod or by resting it firmly against a rigid object so that it moves minimally between shots (make sure to disable any fancy Instagram-like post-processing filters if you are doing this on a smartphone). At a bare minimum, your sequence should include two images. Ensure that the brightest exposure reveals details in the dark portions of the scene even if its whites clip, and that your darkest exposure has no clipping in the brightest regions. Convert your images (most likely JPEGs) into PNGs, and include them in your submission.
Use your code to merge your bracketed sequence of LDR images into an HDR image, and then tone map it for display. Explore different parameters for tone mapping, and choose the ones you feel work best for your image. You can assume that your LDR images are gamma encoded with a gamma value of 2.2, and linearize them before merging to HDR. Include your tone-mapped result(s) in your submission.
Turn in your files using Canvas and make sure all your files are in the a5 directory under the root of the zip file. Include all sources (.cpp and .h) files, any of your images, and the output of your program. Don't include your actual executable (we don't need your _build directory), and remove any superfluous files before submission.
In your readme.txt file, you should also answer the following questions: