# A quantized-diffusion model for rendering translucent materials

Published in ACM Transactions on Graphics on July 01, 2011

##### Abstract

We present a new BSSRDF for rendering images of translucent materials. Previous diffusion BSSRDFs are limited by the accuracy of classical diffusion theory. We introduce a modified diffusion theory that is more accurate for highly absorbing materials and near the point of illumination. The new diffusion solution accurately decouples single and multiple scattering. We then derive a novel, analytic, extended-source solution to the multilayer search-light problem by quantizing the diffusion Green's function. This allows the application of the diffusion multipole model to material layers several orders of magnitude thinner than previously possible and creates accurate results under high-frequency illumination. Quantized diffusion provides both a new physical foundation and a variable-accuracy construction method for sum-of-Gaussians BSSRDFs, which have many useful properties for efficient rendering and appearance capture. Our BSSRDF maps directly to previous real-time rendering algorithms. For film production rendering, we propose several improvements to previous hierarchical point cloud algorithms by introducing a new radial-binning data structure and a doubly-adaptive traversal strategy.

##### Review badges

- 0 pre-publication reviews
- 4 post-publication reviews

##### Authors

Eugene D'Eon; Geoffrey Irving


##### Contributors on Publons

- 1 author
- 2 reviewers

##### Post-publication review (Jun 2014)
Following the presentation of the paper at SIGGRAPH 2011, I can recall only one question, from Wenzel Jakob: "How were the parameters of the BSSRDF derived when rendering a textured surface, like the face images?" Response: A pre-computed lookup table is used to map a requested diffuse albedo to a single-scattering albedo such that normally incident illumination of a semi-infinite material produces the desired diffuse albedo. For the skin renders, this was done for a fixed index of refraction (1.4). How this is applied during rendering using the radial-binning acceleration method is explained further in the paper.
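The lookup table described in this response can be sketched as follows. This is a minimal illustration, not the paper's code: all function names are hypothetical, and it inverts the classical dipole diffuse reflectance $R_d$ (Jensen et al. 2001) by bisection as a stand-in forward model, where the paper's table would be built from the quantized-diffusion model itself.

```python
import math

def R_d(alpha_p, A):
    # Classical dipole diffuse reflectance for single-scattering albedo
    # alpha_p (stand-in forward model; the paper would use its QD model here).
    e = math.sqrt(3.0 * (1.0 - alpha_p))
    return 0.5 * alpha_p * (1.0 + math.exp(-4.0 / 3.0 * A * e)) * math.exp(-e)

def fresnel_A(eta):
    # Boundary-mismatch constant from the diffuse Fresnel reflectance fit.
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    return (1.0 + F_dr) / (1.0 - F_dr)

def invert_albedo(rd_target, A, iters=60):
    # Bisection works because R_d increases monotonically in alpha_p on [0, 1].
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if R_d(mid, A) < rd_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Pre-computed table: requested diffuse albedo -> single-scattering albedo,
# for the fixed index of refraction 1.4 mentioned in the response.
A = fresnel_A(1.4)
table = [invert_albedo(rd / 256.0, A) for rd in range(256)]
```

At render time a textured diffuse-albedo value indexes this table to retrieve the single-scattering albedo driving the BSSRDF.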

##### Post-publication review (Jun 2014)
On page 4 the footnote incorrectly states that equation (5) is not absolutely convergent. Equation (5) provides the convergent form that separates the ballistic fluence from the scattered fluence, and it does converge. It is the alternative form $$ \phi(r) = \frac{\mu_t^2}{2 \pi^2 r} \int_0^\infty \frac{\arctan u}{\mu_t-\mu_s \, \frac{\arctan u}{u}} \, \sin (r \, \mu_t \, u) \, du $$ that does not separate the ballistic fluence explicitly and converges only in the sense of Cesàro summability.

##### Post-publication review (Nov 2013)
Let $\tau_0:=\frac{1}{s}\tau_1$; then $\tau_i=s^{i-1}\tau_1$ and $v_i=D(\tau_{i+1}+\tau_i)=s^i v_0$. Applying the midpoint rule on each interval, $$\int\limits_{0}^{\infty} G_{3D}(2D\tau,r)e^{-\mu_a\tau}d\tau \approx \sum\limits_{i=0}^{k-1}\int\limits_{\tau_i}^{\tau_{i+1}} G_{3D}(2D\tau,r)e^{-\mu_a\tau}d\tau \approx \sum\limits_{i=0}^{k-1}(\tau_{i+1}-\tau_i) G_{3D}(D(\tau_{i+1}+\tau_i),r)e^{-\mu_a\frac{\tau_{i+1}+\tau_i}{2}} = \frac{s-1}{s+1}\sum\limits_{i=0}^{k-1}(\tau_{i+1}+\tau_i) G_{3D}(D(\tau_{i+1}+\tau_i),r)e^{-\mu_a\frac{\tau_{i+1}+\tau_i}{2}} = \frac{s-1}{s+1}\sum\limits_{i=0}^{k-1}\frac{s^i v_0}{D} G_{3D}(s^i v_0,r)e^{-\mu_a\frac{s^i v_0}{2D}}$$

Given $s=\frac{1+\sqrt{5}}{2}$, $\frac{s-1}{s+1} \approx 0.236068$ rather than the $0.240606$ given in the paper. Can someone tell me how to obtain that magic number? And why is $v_0$ specified in terms of $\mu_a$? Thanks for your kindness.
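As a numeric sanity check on the derivation above, the sketch below (assumed parameters; `phi_quantized` and `phi_exact` are illustrative names, not from the paper) compares the golden-ratio quantized sum against the closed-form time-integrated diffusion Green's function $e^{-r\sqrt{\mu_a/D}}/(4\pi D r)$, taking $G_{3D}(v,r)=(2\pi v)^{-3/2}e^{-r^2/(2v)}$ so that $G_{3D}(2D\tau,r)$ is the standard diffusion kernel:

```python
import math

def gauss3d(v, r):
    # Isotropic 3D Gaussian with variance v per axis.
    return (2.0 * math.pi * v) ** -1.5 * math.exp(-r * r / (2.0 * v))

def phi_exact(r, D, mu_a):
    # Closed-form time-integrated diffusion Green's function (monopole fluence).
    return math.exp(-r * math.sqrt(mu_a / D)) / (4.0 * math.pi * D * r)

def phi_quantized(r, D, mu_a, tau1=1e-3, k=40):
    # Golden-ratio quantization from the derivation above:
    # tau_i = s^(i-1) * tau_1, v_i = D*(tau_{i+1} + tau_i) = s^i * v_0.
    s = (1.0 + math.sqrt(5.0)) / 2.0
    v0 = D * tau1 * (s + 1.0) / s
    total = 0.0
    for i in range(k):
        v = s ** i * v0
        total += (v / D) * gauss3d(v, r) * math.exp(-mu_a * v / (2.0 * D))
    return (s - 1.0) / (s + 1.0) * total
```

With $D=\mu_a=r=1$ the midpoint-rule sum lands within a few percent of the exact value, which is consistent with the $\frac{s-1}{s+1}$ coefficient being only slightly different from the paper's constant.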

##### Post-publication review (Aug 2013)
The equation between (19) and (20) has incorrect indices on the $\tau$s and should read:

$$ w_i = \int_{\tau_i}^{\tau_{i+1}} e^{-\tau \mu_a} d\tau = \frac{e^{-\tau_i \mu_a} - e^{-\tau_{i+1} \mu_a }}{\mu_a} $$

(thanks to Toshiya Hachisuka for bringing this to my attention)
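With the corrected indices, the weights telescope: summing over all bands recovers $(e^{-\mu_a\tau_0}-e^{-\mu_a\tau_k})/\mu_a$, the total exponentially absorbed weight over $[\tau_0,\tau_k]$. A minimal stdlib sketch (the helper name `weights` is illustrative, not from the paper):

```python
import math

def weights(mu_a, taus):
    # w_i = (exp(-mu_a*tau_i) - exp(-mu_a*tau_{i+1})) / mu_a
    # for consecutive band boundaries tau_0 < tau_1 < ... < tau_k.
    return [(math.exp(-mu_a * taus[i]) - math.exp(-mu_a * taus[i + 1])) / mu_a
            for i in range(len(taus) - 1)]
```

Because each term cancels against its neighbor, the partial sums depend only on the two endpoint boundaries, which makes the quantization consistent however finely the bands are subdivided.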


**All peer review content displayed here is covered by a Creative Commons CC BY 4.0 license.**