First, grab the new basecode from Canvas. Among other additions, the FloatImage class now includes operators that let you add, subtract, multiply, and divide images element-wise. For example:
FloatImage im1(640, 480, 3), im2(640, 480, 3);
float a = 2.0f, b = -1.0f, c = 0.0f;
FloatImage out1 = im1 + im2;
FloatImage out2 = im1 - b;
FloatImage out3 = a * im2;
FloatImage out4 = im1 / c; // this will throw a DivideByZeroException()
As you can see, these operators can be used the same way as any built-in type like float. For more information, please read floatimage.cpp.
Note: These operators must be used with two images of the same size. Otherwise, a MismatchedDimensionsException()
will be thrown.
Typically, 8-bit images are gamma-encoded: to reduce the effect of quantization on the darker tones, the scene's radiance \(x\) is not stored directly; \(x^\gamma\) is stored instead (where \(\gamma \approx 1/2.2\)).
changeGamma(im, old_gamma, new_gamma)
: The image's intensities are stored as \(x^{\gamma_\text{old}}\) and they should be converted to \(x^{\gamma_\text{new}}\). You can use pow() to do this.

exposure(im, factor)
: This function should simulate changing the camera's exposure. In other words, the intensity values should be multiplied by factor after the image has been gamma-decoded (\(\gamma = 1\)). Assume the input image is encoded with \(\gamma = 1/2.2\) and return an image that is also encoded with \(\gamma = 1/2.2\).

In this section, you will implement several functions that convert an image from the RGB colorspace to other colorspaces.
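The two gamma functions above can be sketched as follows. To keep the example self-contained, it operates on a flat std::vector<float> of intensities rather than a FloatImage (the element-wise loop is the same either way):

```cpp
#include <cmath>
#include <vector>

// Re-encode intensities from x^old_gamma to x^new_gamma.
std::vector<float> changeGamma(std::vector<float> im, float old_gamma, float new_gamma) {
    for (float &v : im)
        v = std::pow(v, new_gamma / old_gamma); // (x^old)^(new/old) = x^new
    return im;
}

// Simulate an exposure change: decode to linear (gamma = 1),
// scale by factor, then re-encode with gamma = 1/2.2.
std::vector<float> exposure(std::vector<float> im, float factor) {
    im = changeGamma(im, 1.0f / 2.2f, 1.0f); // decode to linear radiance
    for (float &v : im) v *= factor;         // scale the linear values
    return changeGamma(im, 1.0f, 1.0f / 2.2f); // re-encode
}
```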
color2gray(im, weights)
: This function should output a grayscale image by performing a weighted average across the color channels of an input image im. The weights are stored in the length-3 vector std::vector<float> &weights. The image returned should be a three-dimensional (not two-dimensional) one-channel image.
When we convert a color image to grayscale using the color2gray function, we get the luminance of the image but lose the color information, or chrominance (kr, kg, kb). You can compute this chrominance by dividing the input image by the luminance. Once the luminance and color information have been separated, you can modify them separately to produce interesting effects.
lumiChromi
: This function should return a vector of two images: a luminance image and a chrominance image. The luminance image should be the first element in the vector, and it can be computed using color2gray with the default weights.

brightnessContrastLumi
: This function should modify the brightness and contrast of only the luminance of the image. Decompose the image into luminance and chrominance, then modify the luminance. Recombine the modified luminance with the chrominance by multiplying them to produce the output image.

Another representation of an image that separates overall brightness from color is the YUV colorspace. An RGB image can be converted to and from a YUV image using the matrix multiplications \[ \left[ \begin{array}{c} Y\\ U\\ V \end{array} \right]= \left[ \begin{array}{ccc} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \\ \end{array} \right] \left[ \begin{array}{c} R\\ G\\ B \end{array} \right], \\ \left[ \begin{array}{c} R\\ G\\ B \end{array} \right] = \left[ \begin{array}{ccc} 1 & 0 & 1.14 \\ 1 & -0.395 & -0.581 \\ 1 & 2.032 & 0 \end{array} \right] \left[ \begin{array}{c} Y\\ U\\ V \end{array} \right]. \]
rgb2yuv & yuv2rgb
: convert images from one colorspace to the other.

saturate(im, factor)
: multiplies the U and V channels by factor. The input and returned images should be in RGB colorspace.
Note: In YUV space, the values in the image will not necessarily be in the range 0 to 1. If you try to write the image using FloatImage::write(), it will assume the image is in RGB and clamp its values to the range 0 to 1 prior to writing.
The chrominance-luminance and YUV conversions perform similar operations: they decouple an image's intensity from its color. There are, however, important differences. YUV is obtained by a purely linear transformation, whereas our chrominance-luminance decomposition requires a division. Furthermore, the latter is overcomplete (we now need 4 numbers per pixel), while YUV only requires 3.

YUV does a better job of organizing color along opponents, and the notion of a negative is more perceptually meaningful. On the other hand, the separation between intensity and color is not as good as with the ratio computation used for luminance-chrominance. As a result, modifying Y without updating U and V changes not only the luminance but also the apparent saturation of the color. In contrast, because the luminance-chrominance decomposition relies on ratios, it preserves colors better when luminance is modified. This is because the human visual system tends to be sensitive to ratios of color channels and discounts constant multiplicative factors: the color elicited by (r, g, b) is the same as the color impression due to (kr, kg, kb); only the brightness/luminance changes. This makes sense because we want objects to appear to have the same color regardless of the amount of light that falls upon them.
You can use the colorspace functions you implemented to implement the Spanish castle illusion, which you can read more about here.
Given an input image, you should create two images. The first image has constant luminance (Y) and negated chrominance (U and V); the second is a black-and-white version of the original, i.e., both U and V should be zero. In the first image, set the luminance to 0.5. To help viewers focus on the same location, add a black dot in the middle of both images. If the image has dimensions \(w \times h\), make sure that the black dot is at the 0-indexed location (floor(w/2), floor(h/2)).
spanish
: This function should take an input image and return a vector of two images that can be used in the Spanish castle illusion. Make sure the second element of the returned vector contains the grayscale image.

You can try out the Spanish castle illusion using the images castle_small.png and wheat.png provided in the Input directory located in this assignment's skeleton code.
You will implement a function to automatically white balance an image using the gray world assumption, under which the mean color of a scene is assumed to be gray. You do this by multiplying each image channel by a factor; the goal is to make the means of the three channels of the output image equal.
grayworld
: this function will use the gray world assumption to white balance the input image. It should output an image in which the average value of each channel equals the average value of the green channel of the input image.

You can try out white balance using the image flower.png provided with your skeleton code.
Implement the function autoLevels
: this function should linearly remap the pixel values in the c\(^\text{th}\) channel of the image so that the minimum pixel value is mapped to 0 and the maximum pixel value is mapped to 1.
Hint: You may find the functions FloatImage::min(c)
and FloatImage::max(c)
useful!
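A sketch of the remapping on a flat interleaved buffer (stand-in for FloatImage; the min/max scan mirrors FloatImage::min(c) and FloatImage::max(c), and the guard against a constant channel is an assumption):

```cpp
#include <algorithm>
#include <vector>

// Linearly remap channel c of a flat interleaved image so that its
// minimum maps to 0 and its maximum maps to 1.
std::vector<float> autoLevels(std::vector<float> im, int c, int channels = 3) {
    float lo = im[c], hi = im[c];
    for (size_t p = c; p < im.size(); p += channels) {
        lo = std::min(lo, im[p]);
        hi = std::max(hi, im[p]);
    }
    float range = hi - lo;
    if (range > 0.0f) // skip constant channels to avoid dividing by zero
        for (size_t p = c; p < im.size(); p += channels)
            im[p] = (im[p] - lo) / range;
    return im;
}
```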
Implement the Histogram constructor Histogram::Histogram(const FloatImage & im, int c, int numBins)
: Initialize the histogram with numBins bins by populating m_pdf with the PDF, as discussed in lecture. You should normalize your PDF so that its values sum to 1.0.
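The PDF computation can be sketched as a free function on a flat interleaved buffer (a stand-in for the Histogram constructor; assuming pixel values lie in [0, 1], and clamping values that land exactly at 1.0 into the last bin is an assumption):

```cpp
#include <algorithm>
#include <vector>

// Normalized histogram (PDF) of channel c of a flat interleaved image.
std::vector<float> histogramPDF(const std::vector<float> &im, int c,
                                int numBins, int channels = 3) {
    std::vector<float> pdf(numBins, 0.0f);
    int count = 0;
    for (size_t p = c; p < im.size(); p += channels) {
        // map a value in [0, 1] to a bin index, clamping 1.0 into the last bin
        int bin = std::min(int(im[p] * numBins), numBins - 1);
        bin = std::max(bin, 0);
        pdf[bin] += 1.0f;
        ++count;
    }
    for (float &v : pdf) v /= float(count); // normalize so the PDF sums to 1
    return pdf;
}
```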
Implement visualizeRGBHistogram
: given the Red, Green and Blue PDF histograms, this function returns an image of size numBins() x 100 x 3 containing a visualization of the Red, Green and Blue histogram PDFs.
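One possible sketch, returning a flat interleaved numBins x 100 x 3 buffer; drawing each bar bottom-up and normalizing by each channel's maximum bin are assumptions about the intended look:

```cpp
#include <algorithm>
#include <vector>

// Render three PDFs (assumed to have the same length) into a flat
// interleaved w x 100 x 3 buffer, one bar per bin per channel.
std::vector<float> visualizeRGBHistogram(const std::vector<float> &r,
                                         const std::vector<float> &g,
                                         const std::vector<float> &b) {
    int w = int(r.size()), h = 100;
    std::vector<float> im(size_t(w) * h * 3, 0.0f);
    const std::vector<float> *pdfs[3] = {&r, &g, &b};
    for (int c = 0; c < 3; ++c) {
        float maxv = *std::max_element(pdfs[c]->begin(), pdfs[c]->end());
        if (maxv <= 0.0f) continue; // empty channel: leave it black
        for (int x = 0; x < w; ++x) {
            // bar height proportional to pdf[x] / max, filled bottom-up
            int bar = int((*pdfs[c])[x] / maxv * h + 0.5f);
            for (int y = h - bar; y < h; ++y)
                im[(size_t(y) * w + x) * 3 + c] = 1.0f;
        }
    }
    return im;
}
```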
Take a photo that is under-exposed and one that is over-exposed. Include the images and their histograms in your submission.
m_cdf with the running sum of m_pdf.

equalizeRGBHistograms
: Return an image created by applying histogram equalization to the input image.

matchRGBHistograms
: Transfer the histogram of input image im2 onto the image data in im1 and return the resulting image. (It might be useful to implement the helper function inverseCDF first!)

Turn in your files using Canvas and make sure all your files are in the a2 directory under the root of the zip file. Include all source (.cpp and .h) files, any required input, and the output of your program. Don't include your actual executable (we don't need your _build directory), and remove any superfluous files before submission.
In your readme.txt file, you should also answer the following questions: