## Cycles Open Questions

#### Ray Intersection

• Ray intersection precision with instances: to avoid self-intersection in ray tracing, we use the system of refined intersection and offsets from the motion blur thesis on http://gruenschloss.org/. However, this breaks down when we use instancing: the extra forward/inverse matrix transformations introduce precision loss again. Is there a way to make this work, or do we need to keep using bigger epsilons to work around the issue?
• Ray-hair intersection: how do we write an efficient ray-hair intersection routine? All the code I've seen is very slow; it would need to be 10x-100x faster to be usable in practice, but I have no idea how to get there besides tessellating the hair into triangles. Arnold somehow manages to trace hair reasonably fast, but I have no idea how they do it.
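On the precision question above: one commonly used workaround, independent of the refined-intersection scheme, is to scale the self-intersection offset with the magnitude of the hit point, so the epsilon tracks the available floating-point precision at that distance from the origin. A minimal sketch of the idea (the function name and `scale` constant are illustrative, not Cycles code; robust production schemes operate directly on the float bit patterns):

```python
def offset_ray_origin(p, n, scale=1e-4):
    """Offset hit point p along the geometric normal n by an epsilon
    proportional to the magnitude of p, so the offset grows with
    distance from the world origin, where float spacing is coarser.
    This only illustrates the scale-adaptive idea."""
    eps = scale * max(1.0, abs(p[0]), abs(p[1]), abs(p[2]))
    return tuple(pi + eps * ni for pi, ni in zip(p, n))
```

Under instancing the offset would have to be applied in the space where the intersection test actually runs; otherwise the forward/inverse transforms can swallow or distort it.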

#### Sampling

• Retry samples: a randomly picked BSDF, BSDF direction, or light may turn out to have no influence and contribute only black. In such a case it might help to retry sampling with a different random number. Is it worth it? How do we do such rejection sampling with QMC sampling, where it seems that there is only one number available? Is there some way to generate a suitable random number for this case, and would a pseudorandom number be appropriate?
• Adaptive sampling: is there a reliable algorithm for this? There are various papers and implementations, but I don't see much evidence that this is actually used in movie production; my impression is that a fixed number of samples is generally used. Is there a method sufficiently reliable that you could send many frames out to a render farm and get back consistent results with a good speedup?
• Light sampling: in path tracing, is there a more efficient way to pick lights for sampling than picking each light with the same probability? Is there a particular weighting that works better than uniform weighting as a default? Light intensity, distance, and size may all work in some cases, but can also perform significantly worse due to more samples being wasted on occluded lights. Can we do better than just giving the user control over the weights?
• Path termination: what is the best way to determine when to terminate a path? Probably Russian roulette based on the path throughput, but I have the impression this often undersamples dark areas, especially as linear => display color space transformations tend to make these areas brighter. Should there be some sort of adjustment for the estimated pixel intensity?
• Multi-BSDF sampling: when you need to sample a path direction from two or more BSDFs, you pick one of the BSDFs and then sample a direction. You then have two ways to adjust the path throughput: use only the picked BSDF, or take all of them into account by evaluating every BSDF. It's not clear to me which method is better: sometimes evaluating all of them reduces noise, but sometimes it increases it. Is this a bug or expected behavior, and can we do better?
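On the adaptive sampling bullet: the textbook stopping rule is to track the per-pixel mean and variance online and stop once the standard error of the mean falls below a tolerance relative to the mean. A minimal sketch (function and parameter names are illustrative; this is not a claim about what any production renderer actually does, and the bullet's skepticism about reliability stands):

```python
import math

def adaptive_sample(estimator, rng, tol=0.01, min_samples=16, max_samples=4096):
    """Draw samples for one pixel until the standard error of the mean
    drops below tol * |mean|, or the sample budget runs out."""
    n, mean, m2 = 0, 0.0, 0.0
    while n < max_samples:
        x = estimator(rng)
        n += 1
        delta = x - mean
        mean += delta / n           # Welford's online mean/variance update
        m2 += delta * (x - mean)
        if n >= min_samples:
            stderr = math.sqrt(m2 / (n - 1) / n)
            if stderr <= tol * abs(mean) + 1e-12:
                break
    return mean, n
```

The render-farm consistency concern maps directly onto the `tol` and budget parameters: a loose tolerance gives inconsistent frames, a tight one erases the speedup.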
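On the light sampling bullet: whichever weighting is chosen (uniform, intensity, distance, size), the estimator stays unbiased as long as the selection probability is divided out of the contribution. A minimal sketch of intensity-proportional picking via a CDF, with hypothetical helper names:

```python
import bisect

def build_light_cdf(intensities):
    """Cumulative distribution over lights, weighted by intensity."""
    total = float(sum(intensities))
    cdf, acc = [], 0.0
    for w in intensities:
        acc += w / total
        cdf.append(acc)
    return cdf, total

def pick_light(cdf, total, intensities, u):
    """Map a random number u in [0,1) to a light index and return the
    probability it was picked with, which the integrator must divide by."""
    i = min(bisect.bisect_left(cdf, u), len(cdf) - 1)
    return i, intensities[i] / total
```

The failure mode from the bullet shows up here directly: a bright but fully occluded light gets a large share of the CDF and wastes exactly that share of the samples.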
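On the path termination bullet: a common formulation of throughput-based Russian roulette, with a floor on the survival probability as one simple hedge against killing off dark paths that the display transform will later brighten. The luminance weights are the Rec. 709 ones; `min_prob` is an illustrative knob, not Cycles' actual scheme:

```python
def russian_roulette(throughput, u, min_prob=0.05):
    """Probabilistically terminate a path based on throughput luminance.
    Returns None on termination, otherwise the throughput divided by
    the survival probability to keep the estimator unbiased."""
    lum = 0.2126 * throughput[0] + 0.7152 * throughput[1] + 0.0722 * throughput[2]
    p_survive = min(1.0, max(min_prob, lum))
    if u >= p_survive:
        return None  # path terminated
    return tuple(c / p_survive for c in throughput)
```

An intensity-aware adjustment, as the bullet suggests, would amount to computing `p_survive` from the throughput after the linear-to-display transform rather than before it.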
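On the multi-BSDF bullet: the "take all of them into account" variant corresponds to a one-sample estimator with the balance heuristic, where the sampled direction is weighted by the sum of all BSDF values divided by the mixture pdf. A sketch assuming a hypothetical per-BSDF `eval`/`pdf` interface (the toy constant BSDF exists only to make the example self-contained):

```python
class ConstBSDF:
    """Toy BSDF with constant value and pdf, just to exercise the
    estimator below; real BSDFs depend on wo and wi."""
    def __init__(self, value, density):
        self.value, self.density = value, density
    def eval(self, wo, wi):
        return self.value
    def pdf(self, wo, wi):
        return self.density

def eval_all_bsdfs(bsdfs, select_probs, wo, wi):
    """Weight a direction wi (sampled from one of the BSDFs) by the sum
    of all BSDF values over the mixture pdf (balance heuristic)."""
    f = sum(b.eval(wo, wi) for b in bsdfs)
    mix_pdf = sum(p * b.pdf(wo, wi) for p, b in zip(select_probs, bsdfs))
    if mix_pdf == 0.0:
        return 0.0
    return f / mix_pdf
```

The picked-only variant instead uses the picked BSDF's value over its own pdf times its selection probability; both are unbiased, which is consistent with the observation that neither uniformly wins on noise.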