Multidimensional Adaptive Sampling and Reconstruction for Ray Tracing
Below you can find image comparisons for the three different scenes used in the paper "Multidimensional Adaptive Sampling and Reconstruction for Ray Tracing". We compare our multidimensional adaptive sampling (MDAS) technique (with and without anisotropic reconstruction) to Mitchell's image-space adaptive sampling (at equal render time) and to ground truth.
Tip: Move the mouse over the images to compare the algorithms on specific parts of the scene.
Warning: This page includes many full-resolution images, so it may take some time to load fully.
The Pool scene showing motion blur. Given equal time, our sampler generates an image with significantly less noise and an MSE that is 9 times lower than Mitchell's adaptive sampler. Our sampler is able to find and sample the regions with strong motion, which are problematic for the Mitchell sampler.
The Chess scene showing depth of field. Our technique is able to sample and reconstruct the out-of-focus regions, while the Mitchell sampler is noisy there. Even though the out-of-focus areas are a small part of the image, our sampler produces an image with an MSE that is two times lower than the equal-time rendering with Mitchell's adaptive sampler. Thanks to our anisotropic reconstruction technique, we can successfully reconstruct both blurry, out-of-focus regions and sharp, in-focus regions. Without anisotropic reconstruction, the MSE of our method more than doubles.
The Car scene demonstrates a 5D sampling domain that includes both motion blur and depth of field. We use 10K initial samples and 32 samples per pixel in the final distribution. This scene renders in 2884 seconds, and reconstruction takes 22% of the total render time.
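The MSE figures quoted above are per-pixel mean squared errors against the ground-truth renders. As a minimal sketch of how such a comparison could be computed (assuming the images are available as NumPy arrays of identical shape; the function name and float conversion are our own, not part of the paper's code):

```python
import numpy as np

def mse(image, reference):
    """Per-pixel mean squared error between an image and a reference.

    Both inputs are arrays of identical shape (e.g. H x W x 3). Values are
    converted to float64 first so that differences of unsigned integer
    pixel values do not wrap around.
    """
    a = np.asarray(image, dtype=np.float64)
    b = np.asarray(reference, dtype=np.float64)
    if a.shape != b.shape:
        raise ValueError("image and reference must have the same shape")
    return float(np.mean((a - b) ** 2))
```

A statement such as "an MSE that is 9 times lower" then corresponds to the ratio `mse(mitchell_img, ground_truth) / mse(mdas_img, ground_truth)` being about 9 for the same render time.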