Virtual ray lights (VRLs) are a powerful representation for multiply scattered light transport in volumetric participating media. While efficient Monte Carlo estimators can importance sample the contribution of a VRL along an entire sensor subpath, render time still scales linearly with the number of VRLs. We present a new scalable hierarchical VRL method that preferentially samples VRLs according to their image contribution. Similar to Lightcuts-based approaches, we derive a tight upper bound on the potential contribution of a VRL that is efficient to compute. Our bound takes into account the sampling probability densities used when estimating VRL contribution. Ours is the first such upper bound formulation, leading to an efficient and scalable rendering technique with only a few intuitive user parameters. We benchmark our approach in scenes with many VRLs, demonstrating improved scalability compared to existing state-of-the-art techniques.
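At a high level, the approach is in the spirit of Lightcuts: VRLs are organized into a tree, each node stores a conservative upper bound on the contribution of the VRLs beneath it, and VRLs are drawn preferentially where that bound is large. The sketch below is a minimal, hypothetical illustration of bound-driven hierarchical selection; the node layout, the per-node upperBound, and evalVRLContribution are placeholders for illustration only and do not reproduce the paper's actual bound derivation or sampling densities.

// Minimal, hypothetical sketch of bound-driven hierarchical VRL selection.
// The tree layout, per-node `upperBound`, and `evalVRLContribution` are
// illustrative assumptions, not the paper's actual formulation.
#include <cstdio>
#include <random>
#include <vector>

struct VRLNode {
    int   left  = -1;        // index of left child, -1 for a leaf
    int   right = -1;        // index of right child
    int   vrl   = -1;        // leaf only: index of the VRL it holds
    float upperBound = 0.f;  // conservative bound on this subtree's contribution
};

// Placeholder for the (expensive) unbiased estimate of a single VRL's
// contribution to the current sensor subpath.
float evalVRLContribution(int vrlIndex) {
    return 0.1f * float(vrlIndex + 1);  // dummy value for illustration
}

// Walk the tree, descending into a child with probability proportional to
// its bound; dividing by the accumulated probability keeps the leaf
// estimate unbiased.
float sampleOneVRL(const std::vector<VRLNode>& tree, int node, std::mt19937& rng) {
    float pdf = 1.f;
    std::uniform_real_distribution<float> u(0.f, 1.f);
    while (tree[node].vrl < 0) {
        const VRLNode& l = tree[tree[node].left];
        const VRLNode& r = tree[tree[node].right];
        float pLeft = l.upperBound / (l.upperBound + r.upperBound);
        if (u(rng) < pLeft) { node = tree[node].left;  pdf *= pLeft; }
        else                { node = tree[node].right; pdf *= 1.f - pLeft; }
    }
    return evalVRLContribution(tree[node].vrl) / pdf;
}

int main() {
    // Tiny hard-coded tree over two VRLs: root (node 0) with two leaves.
    std::vector<VRLNode> tree = {
        {1, 2, -1, 3.f},   // root, bound = sum of children's bounds
        {-1, -1, 0, 1.f},  // leaf holding VRL 0
        {-1, -1, 1, 2.f},  // leaf holding VRL 1
    };
    std::mt19937 rng(42);
    float sum = 0.f;
    const int N = 10000;
    for (int i = 0; i < N; ++i) sum += sampleOneVRL(tree, 0, rng);
    std::printf("estimated total contribution: %f\n", sum / N);
}

In this toy setup the estimator averages to the true sum of both VRL contributions; the tighter the per-node bounds track the true contributions, the lower the variance of the traversal.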
We thank the reviewers for their comments, which helped improve the exposition. We thank Joey Litalien and Damien Rioux-Lavoie for fruitful discussions and proofreading. This research was supported by an NSERC Discovery Grant (RGPIN-2018-05669) and NSF Grant IIS-181279.
@article{vibert19scalable,
  author    = {Vibert, Nicolas and Gruson, Adrien and Stokholm, Heine and Mortensen, Troels and Jarosz, Wojciech and Hachisuka, Toshiya and Nowrouzezahrai, Derek},
  title     = {Scalable virtual ray lights rendering for participating media},
  journal   = {Computer Graphics Forum (Proceedings of EGSR)},
  year      = {2019},
  volume    = {38},
  number    = {4},
  month     = jul,
  pages     = {57--65},
  doi       = {10/gf6rx7},
  publisher = {The Eurographics Association}
}