Final Project - Liane Makatura

Motivational Image

Drawing inspiration from this year's theme "Something Old and Something New," I wanted to base my final rendering on the budding tulip models below:

Tulip bulbs

These tulips relate to the theme in and of themselves, as the bulbs represent the source of and potential for new life, while the bloomed flowers represent old growth. I wanted to fill in the remaining portion of the scene with other traditional Dutch items like those pictured below:

Shoes Spoons Tulips

Beyond the bulb-bloom metaphor for new and old, my Dutch scene conforms to the theme in a particularly personal way: as a memorial for my recently deceased grandfather, who came to the US from the Netherlands when he was 11 years old. In particular, I wanted to feature the wooden shoes pictured above, as they belonged to him for over 65 years. With his passing a few weeks ago, these aged shoes found new life as a conversation piece, sparking many laughs at the thought of this soft-spoken man clacking noisily down school hallways after his arrival.

My proposed scene also motivates the use of some interesting graphics and rendering techniques:

Spoons Shoes Tulips

Feature 1: Node-Based Texture Mapping

For our first feature, we have implemented an extensive texture mapping interface. As described below, this includes the following:

- a constant texture for uniformly-colored surfaces,
- image textures with UV coordinates and three boundary conditions (clamping, tiling, and mirroring),
- nested combinations of textures and procedural textures (scaling, checkerboard, Perlin noise), and
- a generic, XML-configurable Texture interface.

For this reason, we have considered texture mapping to be an Advanced feature.

Constant Texture

As a first step toward implementing the texture mapping capabilities, we created a "constant texture" to represent the uniformly-colored surface albedos we had been using before. This class is overkill in practice (since we could previously represent the solid-color property with a single Color3f), but it unifies all patterns under the concept of "texture" while also providing a way for us to directly compare our Texture implementation against renderings we had previously produced.
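As an illustration, such a node reduces to returning a stored color regardless of the query point. A minimal sketch, assuming Nori's Color3f/Point2f types and a Texture base class with a virtual eval method (the exact names in our constTexture may differ):

```cpp
// Sketch of a constant texture node (illustrative names); assumes a
// Texture base class exposing: virtual Color3f eval(const Point2f&) const.
class ConstantTexture : public Texture {
public:
    ConstantTexture(const Color3f &value) : m_value(value) {}

    // Ignores the UV coordinate entirely: every point on the
    // surface receives the same albedo.
    Color3f eval(const Point2f &uv) const override {
        return m_value;
    }

private:
    Color3f m_value; // the uniform surface color
};
```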

In the example below, we modify the cbox_path_mats scene from Assignment 4, representing the color of all diffuse surfaces (walls) with constTexture objects. The comparison shows our result alongside the cbox_path_mats reference image provided in Assignment 4.

Albedo Color3f Reference Constant Texture

Image Texture

After ensuring that the constant texture (and thus the Texture interface) was working properly, we moved on to image texturing with UV coordinates. We used the header-only library stb_image.h to load images like the PNG below (left). We then replaced the constTexture on the right wall of the cbox_path_mats scene with this gradient texture (right).
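For reference, the loading step itself is only a few lines with stb_image.h. A sketch under our assumptions (fixed 3-channel RGB storage; the actual file handling and gamma treatment in our code may differ):

```cpp
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

#include <cstddef>
#include <stdexcept>
#include <vector>

struct ImageData {
    int width = 0, height = 0;
    std::vector<unsigned char> rgb; // 3 bytes per pixel, row-major
};

ImageData loadImage(const char *filename) {
    int w, h, channels;
    // Request 3 channels so every image is stored as plain RGB.
    unsigned char *data = stbi_load(filename, &w, &h, &channels, 3);
    if (!data)
        throw std::runtime_error("Failed to load texture image");
    ImageData img;
    img.width = w;
    img.height = h;
    img.rgb.assign(data, data + static_cast<std::size_t>(w) * h * 3);
    stbi_image_free(data); // buffer copied above, safe to release
    return img;
}
```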

Gradient Texture Gradient Texture

In this instance, we expect the UV coordinates to range between 0 and 1. However, since UV texture coordinates are specified explicitly by the user and are not strictly standardized, we implemented several methods to handle UV values outside of [0,1]. We implemented 3 boundary conditions:

Gradient Texture Gradient Texture Gradient Texture
Clamping Tiling Mirroring
- Clamping: Any UV texture coordinate below 0.0 gets clamped to this minimum value, while any coordinate above 1.0 gets clamped to the maximum.
- Tiling: We index into the original image using the fractional component of the given coordinate, removing any non-zero integer value by subtracting the floor of each coordinate value.
- Mirroring: Similar to tiling, except that alternating tiles of the image get mirrored in both the x- and y-directions.
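A minimal sketch of the three modes applied to a single UV component (free functions with illustrative names; our implementation may fold these into the image lookup):

```cpp
#include <cmath>

// Clamping: restrict the coordinate to [0, 1].
float wrapClamp(float u) {
    return std::fmin(std::fmax(u, 0.0f), 1.0f);
}

// Tiling: keep only the fractional part, which also handles
// negative coordinates correctly (floor rounds toward -infinity).
float wrapTile(float u) {
    return u - std::floor(u);
}

// Mirroring: like tiling, but every other tile is flipped so
// adjacent copies of the image mirror each other.
float wrapMirror(float u) {
    float t = u - std::floor(u);                 // position within the tile
    int tile = static_cast<int>(std::floor(u));  // which tile we are in
    return (tile % 2 == 0) ? t : 1.0f - t;       // odd tiles are reversed
}
```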

The table entries above show each texturing method tested on UV coordinates ranging from 0 to 2 in each direction. Below, we demonstrate that the texturing methods also work for other ranges.

Gradient Texture
This image is tiled from coordinates -2 to 2, and -3 to 3.

Nested Combination & Procedural Textures

Our node-based texturing interface also allows for combinations of textures, such as those implemented in the following classes:

Gradient Texture Gradient Texture Gradient Texture
ScalingTexture Checkerboard Perlin Noise
- ScalingTexture: This texture holds two child textures, and its eval method returns the product of the children's eval results. The image above shows the multiplicative blending of the gradient texture with a constant texture of color (0.5, 1.0, 1.0) -- i.e., it halves the amount of red in the final image.
- Checkerboard: This image demonstrates our ability to nest multiple texture types, with a checker texture that contains (1) a scaleTexture combining a constant with Perlin noise, and (2) a scaleTexture combining a constant with an image.
- Perlin Noise: This is one of our procedural textures, created with Perlin noise and a constant scale.
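The combination node itself is tiny; the expressive power comes from the children being arbitrary texture nodes. A sketch, assuming the same Texture base class as above (names illustrative):

```cpp
// Sketch of a multiplicative combination node; the children may
// themselves be constants, images, noise, or further combinations.
class ScaleTexture : public Texture {
public:
    ScaleTexture(Texture *a, Texture *b) : m_a(a), m_b(b) {}

    // Component-wise product of the two children. A constant child
    // of (0.5, 1.0, 1.0), for example, halves the red channel of
    // whatever the other child evaluates to.
    Color3f eval(const Point2f &uv) const override {
        return m_a->eval(uv) * m_b->eval(uv);
    }

private:
    Texture *m_a, *m_b; // child texture nodes
};
```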

Texture Interface

The textures presented above all extend a generic Texture interface [texture.h], whose subclasses need only implement an "eval" method. Their relevant parameters can all be specified directly in the XML, which makes our system user-friendly. To emphasize this flexibility, we have provided a sample XML, which was placed on the right wall of the Cornell box to generate the "Checkerboard" image above.

XML
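At the code level, the interface boils down to very little. A sketch, assuming Nori's NoriObject plumbing (ETexture stands in for a class-type enum value that would have to be added to Nori's EClassType):

```cpp
// Sketch of the generic texture interface from texture.h; deriving
// from NoriObject is what allows textures to be declared in the
// scene XML and attached as children of BSDFs or other textures.
class Texture : public NoriObject {
public:
    // The single method every texture node must implement.
    virtual Color3f eval(const Point2f &uv) const = 0;

    // ETexture is an assumed addition to Nori's EClassType enum.
    EClassType getClassType() const override { return ETexture; }
};
```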

Though we have only exhibited 2 layers of nesting, and multiplication with constant textures, the possibilities extend well beyond this. Our generalized implementation of this interface allows for very intricate texture combinations, as the user has several interesting simple textures at their disposal, as well as a convenient way to nest multiple layers together.

Our implementation of texture mapping also supports any mesh with proper UV coordinates attached. For simple meshes, like the right wall of the Cornell box above, the UV coordinates can be added to or edited in the OBJ file manually. This feature is supported in all integrators, provided that the integrator properly assigns bRec.uv = its.uv before sampling or evaluating the BSDF.
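Concretely, the integrator-side change is a single extra assignment before the BSDF query. A sketch following Nori's conventions (bRec.uv is the field we added to BSDFQueryRecord):

```cpp
// Inside an integrator's Li(), once an intersection `its` is found:
BSDFQueryRecord bRec(its.shFrame.toLocal(-ray.d));
bRec.uv = its.uv; // forward the surface UVs so textures can look them up
Color3f bsdfWeight = its.mesh->getBSDF()->sample(bRec, sampler->next2D());
```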

Feature 2: BTDF

To achieve some amount of translucency in the flower petals, we implemented a diffuse transmissive BSDF. This BSDF samples directions in the lower hemisphere, effectively allowing light to pass through the surface. In my simple implementation, I consider pure transmission. This code is found in difftrans.cpp. It functions in a manner similar to the basic diffuse BSDF, except that it samples outgoing directions on the opposite side (lower hemisphere) of the object, and all evaluation and pdf methods check for directions with opposite local z coordinates.
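A minimal sketch of the core sampling and evaluation logic (cosine-weighted sampling mirrored through the surface; names follow Nori's BSDF interface, and the details of our difftrans.cpp may differ):

```cpp
// Sample an outgoing direction on the side *opposite* the incident
// direction (pure transmission, no reflection component).
Color3f sample(BSDFQueryRecord &bRec, const Point2f &sample) const {
    bRec.measure = ESolidAngle;
    // Cosine-weighted direction in the upper hemisphere...
    bRec.wo = Warp::squareToCosineHemisphere(sample);
    // ...flipped so the path exits through the other side.
    if (Frame::cosTheta(bRec.wi) > 0)
        bRec.wo.z() = -bRec.wo.z();
    // As for the standard diffuse BSDF, the cosine and pdf cancel.
    return m_albedo;
}

Color3f eval(const BSDFQueryRecord &bRec) const {
    // Pure transmission: wi and wo must have opposite local z signs.
    if (Frame::cosTheta(bRec.wi) * Frame::cosTheta(bRec.wo) >= 0)
        return Color3f(0.0f);
    return m_albedo * INV_PI;
}
```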

In the validation test below, I have a simple scene with only a single diffuse ground plane. In the left image, the plane has a typical diffuse BRDF (over the upper hemisphere) and an area light source directly above; in the right image, the plane has a diffuse transmissive BSDF (lower hemisphere) and a light directly below the plane. The light has been reflected over the xz (lateral) plane using a scale transform of (1,-1,1) to preserve the relative positioning. We expect, and observe, both of these images to be the same. If we switch the pairings (BRDF with the light below, or BTDF with the light above), we expect and observe completely black images; these have been omitted here.

Diffuse with Light Above DiffTrans with Light Below

Progressive Photon Mapping

In this section, we describe our second Advanced feature: Progressive Photon Mapping. We describe it in stages, beginning with the basic photon mapping implementation, then moving to improved photon mapping, and finally to progressive photon mapping.

Photon Mapping

We approached photon mapping by creating a new integrator, photon.cpp, and taking advantage of the preprocess method available in the Integrator header. In this preprocessing step, we generate a single photon map, which we use for density estimation over the entire render. Initially, we compute all illumination from this single global photon map, where photons are stored at every diffuse bounce; we do not store photons on specular surfaces. We guarantee the first 5 bounces and bound the path length at 100 bounces, but more often than not paths are terminated by Russian Roulette well before this maximum is reached.
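A simplified sketch of the tracing loop behind this step (emitter sampling, power normalization, and the kd-tree details are omitted; m_photonMap stands in for our photon storage):

```cpp
// Trace one photon path and store hits on diffuse surfaces.
void tracePhoton(const Scene *scene, Sampler *sampler,
                 Ray3f ray, Color3f power) {
    for (int depth = 0; depth < 100; ++depth) {
        Intersection its;
        if (!scene->rayIntersect(ray, its))
            return;

        // Store photons on diffuse surfaces only.
        if (its.mesh->getBSDF()->isDiffuse())
            m_photonMap.push_back(Photon(its.p, -ray.d, power));

        // Russian roulette, after the first 5 guaranteed bounces.
        if (depth >= 5) {
            float q = std::min(power.maxCoeff(), 0.99f);
            if (sampler->next1D() > q)
                return;
            power /= q;
        }

        // Continue the path by sampling the BSDF.
        BSDFQueryRecord bRec(its.shFrame.toLocal(-ray.d));
        bRec.uv = its.uv;
        Color3f f = its.mesh->getBSDF()->sample(bRec, sampler->next2D());
        if (f.isZero())
            return;
        power *= f;
        ray = Ray3f(its.p, its.shFrame.toWorld(bRec.wo));
    }
}
```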

As a first validation step, we visualize the photon bounces directly. On the left, we return the power values of the photons, and on the right we return an intensity corresponding to the hit point's distance from the photon in question.

Photon Power Visualization Photon Distance Visualization

As a next validation step, we visualize the direct illumination computed by the photon map. To do this, we terminate our tracePhoton function after the first bounce. As you can see, the images match quite well, except for the artifacts induced by sparse density estimation.

Direct EMS Reference Fixed Radius Direct Only

We implemented functionality for density estimation via either a fixed number of photons or a fixed radius. Each of the example renders below uses 100,000 photons. Note that there is a seemingly constant difference in intensity between the MIS reference and the photon-mapped images; this was a consistent issue that has not yet been solved.

Path MIS Reference Fixed Radius Fixed Number=50 Fixed Number=100
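The fixed-radius estimate underlying these renders is a standard kernel sum over nearby photons. A sketch with a constant kernel over a disc of radius r (photonsWithinRadius is a hypothetical kd-tree range query; m_emittedPhotonCount is the number of photons shot during preprocessing):

```cpp
// Fixed-radius density estimation at the intersection `its`,
// viewed from direction `toCamera`.
Color3f estimateRadiance(const Intersection &its,
                         const Vector3f &toCamera, float radius) const {
    Color3f sum(0.0f);
    for (const Photon &photon : photonsWithinRadius(its.p, radius)) {
        // Weight each photon's power by the BSDF at this point.
        BSDFQueryRecord bRec(its.shFrame.toLocal(photon.getDirection()),
                             its.shFrame.toLocal(toCamera), ESolidAngle);
        bRec.uv = its.uv;
        sum += its.mesh->getBSDF()->eval(bRec) * photon.getPower();
    }
    // Constant kernel: divide by the disc area and photon count.
    return sum / (M_PI * radius * radius * m_emittedPhotonCount);
}
```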

Setting aside the illumination difference, a larger number of photons (1 million in the case below) allows us to compute a much smoother image:

Path MIS Reference Fixed Radius 1mil

Photon Mapping with Direct Illumination

We optimize our code by computing the direct illumination separately, using standard next-event estimation. To avoid double-counting the direct illumination, we do not store photons on the first bounce when this feature is enabled. The feature can be specified in the XML using the boolean tag "directEnabled" on the photon integrator.
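In the tracePhoton sketch above, this amounts to one extra condition on the storage step (m_directEnabled mirrors the XML tag):

```cpp
// Skip first-bounce storage when next-event estimation already
// accounts for direct illumination, to avoid double counting.
if (its.mesh->getBSDF()->isDiffuse() && !(m_directEnabled && depth == 0))
    m_photonMap.push_back(Photon(its.p, -ray.d, power));
```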

Direct not enabled Direct Enabled

Progressive Photon Mapping

After studying the structure (and, in particular, the concurrency) of Nori's render.cpp, we were able to devise a strategy for implementing progressive photon mapping. In effect, this strategy computes each pass of the image using fixed-radius photon mapping. Progressive photon mapping improves upon our previous implementation by reducing structured artifacts in two ways: each pass uses a freshly generated, independent photon map, so density-estimation artifacts are decorrelated and average out across passes; and the estimation radius shrinks after every pass, so the bias of the estimate decreases as more passes accumulate.

For our implementation, we leverage the samples-per-pixel (spp) quantity that is already specified by the user in the XML. Since Nori loops over all samples, and computes a full "image" in each pass, we found this to be a natural place to insert our progressive photon mapping logic. We did this by adding an optional virtual method "startIteration" to the Integrator type (integrator.h, line 43). This method remains empty in most integrators, but for photon mapping it is responsible for generating a new photon map and updating the radius. As noted in the lecture slides, the radius update is \(r_{i+1} = r_i \sqrt{\frac{i + \alpha}{i + 1}}\).
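A sketch of what this hook looks like in the photon integrator (generatePhotonMap is a hypothetical helper wrapping the photon tracing above; m_alpha is the \(\alpha\) parameter of the update rule):

```cpp
// Called once before each sample pass; rebuilds the photon map and
// shrinks the density-estimation radius.
void startIteration(const Scene *scene, Sampler *sampler) override {
    m_photonMap.clear();
    generatePhotonMap(scene, sampler);
    if (m_iteration > 0) // r_{i+1} = r_i * sqrt((i + alpha) / (i + 1))
        m_radius *= std::sqrt((m_iteration + m_alpha) / (m_iteration + 1.0f));
    ++m_iteration;
}
```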

This method is called before each image pass is computed, in the loop over samples in render.cpp (line 155). Due to this construction, Nori trivially computes and displays the running average of all the images computed so far. As shown below, our results are drastically improved using progressive photon mapping.

Non-Progressive Progressive

Homogeneous Absorbing Media

Lastly, we implemented a simple Medium interface, which can be attached to a mesh object. Currently we only support homogeneous absorbing media, in the path_mis_medium and photon_medium integrators.

In the verification below, we present a simple scene with an emissive unit cube and a sphere of red absorbing medium with \(\sigma_a = (0, 10, 10)\). The sphere is placed directly between the cube and the camera. In this setup, we expect to, and do, observe less absorption around the edges of the sphere, since those rays do not have to travel as far through the medium.
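For a homogeneous absorbing medium, the attenuation along a ray segment of length \(t\) follows Beer's law, \(T_r(t) = e^{-\sigma_a t}\) per channel, which is essentially all the integrators need. A self-contained sketch (assuming Nori's Color3f with r()/g()/b() accessors):

```cpp
#include <cmath>

// Beer's law transmittance through a homogeneous absorbing medium:
// each color channel decays exponentially with distance traveled.
Color3f transmittance(const Color3f &sigmaA, float t) {
    return Color3f(std::exp(-sigmaA.r() * t),
                   std::exp(-sigmaA.g() * t),
                   std::exp(-sigmaA.b() * t));
}
```

With \(\sigma_a = (0, 10, 10)\), red passes through unattenuated while green and blue decay rapidly, which is why the sphere appears red and why its thin edges absorb visibly less.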

Absorbing Media: Path_MIS Absorbing Media: Photon

Final Result: Rendering Competition Submission

Final Submission: Dutch