Roadmap

This is a roadmap created at the Cycles developer meeting at the 2016 Blender Conference. It is based on the priorities of the developers who were present, and should not be considered a complete or accurate plan. Plans may change over time; the purpose here is to make it easier for developers to cooperate.

The names in parentheses indicate who has already done some work on a feature and who you might want to contact if you want to help. They are not task assignments; on many of these topics we could use help from more developers in reviewing code or finishing patches.

Here is information on getting in contact with developers.

2016 Targets

Features likely to make it into master in the next few months:

Cycles will also be affected by the ongoing viewport project, and there will likely be a proposal on how to integrate with that in the near future.

2017 Targets

Somewhat longer term, these are projects that have already been started or that developers are planning to work on.

  • Split kernel: the main target is to get AMD GPU rendering up to the same level as NVidia GPU and CPU rendering, supporting the same features and performance. Once the split kernel works for CPU and CUDA, it will also open up possibilities for interesting algorithms, like wavefront path tracing, ray reordering and more efficient caching. (Mai)
  • Microdisplacement: this is currently an experimental feature and we'd like to make it fully supported. Issues to fix include: smooth UV subdivision, excessive subdivision outside the camera view, panorama camera support, viewport updates for displacement shader changes, and new (vector) displacement nodes. A memory cache may be added as well. Not all of these are required to make the feature non-experimental; that would happen earlier. (Mai)
  • Denoising for animation. There is already a Cycles-side implementation of this, but Blender integration will require more work and design changes, for example to integrate with compositing or render farms. (Lukas)
  • Blender AOV render API. For Cycles and other renderers integrated into Blender, the current API with hardcoded passes is limiting. We'd like to make this more flexible so renderers and users can register their own AOVs / passes. (Lukas)
  • Light groups to render separate AOVs / passes for different light groups with minimal overhead, which then lets you tweak the light intensity and color in compositing. (Lukas)
  • Mipmaps and texture cache to render more textures with less memory usage. This requires some fairly deep changes to SVM to pass ray differentials through the nodes, while for OSL this is already automatic. The first implementation would likely use OpenImageIO, which means it would be CPU only to start (see the texture lookup sketch after this list). On the Blender side this would also require changes to support .tx files and to (auto)generate them. (Stefan)
  • Faster motion blur: rather than handling motion blur only per primitive, BVH inner nodes should take motion blur into account as well by interpolating node bounds during traversal (see the bounds interpolation sketch after this list). (Sergey)
  • Light linking to specify which lights affect which objects. (Tangent Animation)
  • AO Environment map (Tangent Animation)
  • AO, Samples and Alpha Overrides (Tangent Animation)
  • Cube map rendering for VR and panoramas. (Sergey)
  • Blue-noise dithered sampling for lower noise in viewport renders. (Lukas)
  • Micro-jittered sampling for better performance on GPUs. (Lukas)
  • Configurable working color space (Lukas)
  • Network rendering to have multiple computers in a local network cooperate on the same frame. We already have a partial implementation of this but it is disabled. The first implementation of this might only support F12 renders, with viewport renders coming later. (Lukas)
  • CUDA asynchronous rendering: rendering with CUDA currently uses more CPU time than necessary, performance is not optimal with multiple GPUs, and getting the best performance requires too much manual tuning of tile sizes. (Martijn)
  • CPU work stealing: rendering on the CPU requires small tiles to get a good work distribution, and may still not utilize all cores for the last few tiles. We would like a system where multiple cores can cooperate on the same tile (see the tile cooperation sketch after this list).
  • VR / stereo / panorama viewport render (Dalai)
  • Resumable rendering (for render farms) or render pause option. (Blender Institute)
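
To make the mipmaps and texture cache item more concrete, here is a minimal sketch of a filtered lookup through the OpenImageIO texture system. The UV derivatives passed to the lookup are what the ray differentials would provide, and they are what selects the mipmap level; this is why SVM needs to propagate them. The file name and derivative values are made up, this is not Cycles code, and the exact create/destroy signatures vary between OpenImageIO releases, so treat it as an outline only.

  #include <OpenImageIO/texture.h>

  using namespace OIIO;

  int main()
  {
      // Texture system backed by a tile cache, so only the needed mipmap
      // tiles of each .tx file are kept in memory.
      TextureSystem *ts = TextureSystem::create();

      TextureOpt opt;  // wrap mode, filter settings, etc.

      // s/t are the UV coordinates; the derivatives are the ray
      // differentials projected into texture space and determine which
      // mipmap level is read.
      float s = 0.5f, t = 0.5f;
      float dsdx = 0.001f, dtdx = 0.0f;
      float dsdy = 0.0f, dtdy = 0.001f;

      float rgb[3];
      ts->texture(ustring("texture.tx"), opt, s, t,
                  dsdx, dtdx, dsdy, dtdy, 3, rgb);

      TextureSystem::destroy(ts);
      return 0;
  }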
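
For the faster motion blur item, interpolating BVH node bounds during traversal could look roughly like the following. The types and names are illustrative and not the actual Cycles BVH code; the real traversal is more involved.

  // Illustrative types only; not the actual Cycles BVH code.
  struct BoundBox {
      float min[3], max[3];
  };

  struct MotionNode {
      BoundBox bounds_t0;  // node bounds at shutter open
      BoundBox bounds_t1;  // node bounds at shutter close
  };

  // Instead of testing one box that covers the whole shutter interval,
  // interpolate the node bounds to the ray's time before the ray/box
  // test, which gives much tighter bounds for fast-moving geometry.
  inline BoundBox interpolate_bounds(const MotionNode &node, float ray_time)
  {
      BoundBox box;
      for (int i = 0; i < 3; i++) {
          box.min[i] = (1.0f - ray_time) * node.bounds_t0.min[i] +
                       ray_time * node.bounds_t1.min[i];
          box.max[i] = (1.0f - ray_time) * node.bounds_t0.max[i] +
                       ray_time * node.bounds_t1.max[i];
      }
      return box;
  }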
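
For the CPU work stealing item, one simple way for several threads to cooperate on a single tile is a shared atomic scanline counter, sketched below. This only illustrates the idea under assumed names; a real scheduler would also balance work across tiles.

  #include <atomic>
  #include <thread>
  #include <vector>

  // Illustrative types only; not actual Cycles code.
  struct Tile {
      int width = 0, height = 0;
      std::atomic<int> next_row{0};  // next unrendered scanline in this tile
  };

  void render_pixel(Tile &tile, int x, int y)
  {
      // Placeholder for path tracing one pixel of the tile.
      (void)tile; (void)x; (void)y;
  }

  // Each worker grabs the next scanline until the tile is finished, so the
  // last tile of a frame keeps all cores busy instead of just one.
  void render_tile_worker(Tile &tile)
  {
      for (;;) {
          int y = tile.next_row.fetch_add(1);
          if (y >= tile.height)
              break;
          for (int x = 0; x < tile.width; x++)
              render_pixel(tile, x, y);
      }
  }

  void render_tile(Tile &tile, int num_threads)
  {
      std::vector<std::thread> workers;
      for (int i = 0; i < num_threads; i++)
          workers.emplace_back(render_tile_worker, std::ref(tile));
      for (std::thread &w : workers)
          w.join();
  }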

Other Targets

Here are a few things we think are very important, but that no one at the meeting planned to work on within a specific time frame.

  • Adaptive sampling to focus samples on the parts of the image that need them most (see the sketch after this list). Ideally this should integrate closely with denoising. (Lukas)
  • OpenVDB rendering. This would likely include empty space skipping for better performance. It may be CPU only to begin with if we reuse the OpenVDB code for ray marching and sampling. Ideally this should be coupled with a new volume object datablock on the Blender side. (Kévin)
  • Volume rendering optimizations and sampling improvements
  • UDIM textures for mapping multiple high resolution textures to one model. This is implemented in the OpenImageIO texture cache in the latest version, so if we use that for mipmaps and texture caching we get UDIMs almost for free as well, at least in Cycles. However, on the Blender side more work would be required to support it in the UV editor and viewport. (Kévin)
  • Statistics: for power users to investigate why rendering is slow, why memory usage is high, which objects or materials to optimize, etc. This would be a generated log that could be shown in Blender or as an HTML report, and possibly also debugging AOVs. (Thomas)
  • Combined Hair and Volume shaders similar to what we have in the Disney BSDF, to make it easier to set up these types of materials as well.
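
As a rough illustration of the adaptive sampling item above, a per-pixel convergence test can be built from a running mean and variance (Welford's algorithm), stopping once the estimated error falls below a threshold relative to the pixel value. The structure and threshold heuristic below are assumptions for illustration, not the planned Cycles implementation.

  #include <cmath>

  // Running per-pixel statistics; illustrative only.
  struct PixelStats {
      int samples = 0;
      float mean = 0.0f;
      float m2 = 0.0f;  // sum of squared deviations (Welford's algorithm)

      void add_sample(float value)
      {
          samples++;
          float delta = value - mean;
          mean += delta / samples;
          m2 += delta * (value - mean);
      }

      // Standard error of the per-pixel mean.
      float error() const
      {
          if (samples < 2)
              return INFINITY;
          float variance = m2 / (samples - 1);
          return std::sqrt(variance / samples);
      }

      // Stop spending samples on this pixel once the relative error is
      // small enough; unconverged pixels keep receiving samples.
      bool converged(float threshold) const
      {
          return error() < threshold * (std::fabs(mean) + 1e-4f);
      }
  };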

More Ideas

Some places to look for more work or ideas: