This document specifies the software requirements for Blender Light Paint. It is intended both for artists who will use the light paint tool and for the developers involved in its design and implementation over the GSoC 2009 timeline (and possibly beyond).
This document guides the initial design of the program. It will also be used during the development and testing of the system to ensure that all requirements are satisfied in the final implementation.
The scope of the program is initially defined in the GSoC proposal. Light paint is a tool that is mainly designed for static scenes. The end results of such a tool, however, can be applied to animated scenes.
Light paint is targeted at Blender 2.5.
Light paint will no longer need a light probe image to start. A default synthetic light source would be created for initial lighting, and would be replaced by the lighting effects specified by the user. Yukishiro 00:26, 24 April 2009 (UTC)
Definitions, Acronyms, Abbreviations, Notational Conventions
- SH: Spherical Harmonics
- L: degree of the spherical harmonics expansion
The key functionality of light paint is to allow artists to modify the lighting environment (specified by a light probe image) by painting the desired colours on the model. This step is intended to help artists refine lighting before the scene is sent to final rendering. The result of such modification forms the other key component of the tool: light node.
- Approximating the final rendering effect using SH
- Choosing the desired light colour and intensity
- Applying selected colours to different parts of the scene and generating a new lighting environment
- Mixing (adding, subtracting, etc.) different lighting environments
- Using the modified lighting environment in final rendering
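The first bullet works because, once both the environment and the per-vertex transfer are projected into SH, approximating the shaded result reduces to a dot product of coefficient vectors. A minimal sketch (variable names are hypothetical, not the tool's internals):

```python
import numpy as np

def shade_vertex(env_coeffs, transfer_coeffs):
    """Approximate outgoing radiance at a vertex as the dot product of
    the environment's SH coefficients and the vertex's precomputed
    transfer coefficients (one pair of vectors per colour channel)."""
    return float(np.dot(env_coeffs, transfer_coeffs))

# L = 3 gives (3 + 1)^2 = 16 coefficients per channel.
env = np.ones(16) * 0.1   # hypothetical environment coefficients
transfer = np.zeros(16)
transfer[0] = 0.5         # only the DC (ambient) term responds
result = shade_vertex(env, transfer)  # 0.1 * 0.5
```

This linearity is also what makes interactive repainting feasible: changing the environment only changes one side of the dot product.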
Assumptions and Dependencies
Light paint needs to solve several optimization problems that are constrained quadratic programming problems; therefore, external GPL-compatible libraries would be used.
I tried LAPACK least square solver but was unhappy with it. I also tried a non-linear solver but it doesn't solve constrained problems. I still need to investigate a couple of libraries before we make a final decision. Most of the candidates are listed on this page.
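To make the shape of these problems concrete, here is a toy projected-gradient solver for the simplest constrained case, non-negative least squares. This is only an illustration of the problem class; the real implementation would use one of the candidate libraries, not this sketch:

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    """Toy projected-gradient solver for min ||A x - b||^2 s.t. x >= 0.
    Illustrative only -- a dedicated QP library would be used instead."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    # Step size from the spectral norm of the Hessian A^T A.
    lr = 1.0 / np.linalg.norm(A.T @ A, 2)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on 0.5 * ||A x - b||^2, then clamp to x >= 0.
        x = np.maximum(0.0, x - lr * (A.T @ (A @ x - b)))
    return x

A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [1.0, -0.5, 0.5]
x = nnls_pg(A, b)  # unconstrained optimum is (1, -0.5); constrained: (0.75, 0)
```

A plain least-squares solver (such as LAPACK's) ignores the non-negativity constraint, which is consistent with the note above about it being unsatisfactory here.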
When the user switches to the light paint mode for the first time, the program would generate SH coefficients for all the mesh models in the scene. The coefficients are per vertex, so the generation would take a long time if a model is complex. The computation would be threaded so that it does not affect regular UI operations.
A default of L=3 would be used for the computation. In general, L=3 is sufficient to give a decent approximation of the lighting environment (in fact, if the degree is too high we get ripple-like artifacts). However, the user can change the L value from 1 to 5 and generate coefficients for other degrees as well. The coefficients would be stored as part of the .blend file, and there is no need to regenerate them once they have been generated.
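The per-vertex cost of changing L comes from the basis size: bands 0..L give (L + 1)^2 basis functions, so the selectable range L = 1..5 spans a 9x difference in per-vertex storage and projection work:

```python
def sh_coeff_count(L):
    """Number of SH basis functions for bands 0..L: (L + 1)^2."""
    return (L + 1) ** 2

# The user-selectable degrees and their per-vertex coefficient counts:
for L in range(1, 6):
    print("L = %d -> %d coefficients per vertex" % (L, sh_coeff_count(L)))
```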
To adjust lighting for certain poses in an animated scene, the user needs to change to the right frame and make the changes there.
It is also possible to specify how much the light environment rotates between frames, but I don't know how to integrate this with timeline space. Yukishiro 08:04, 23 April 2009 (UTC)
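Whatever the timeline integration ends up being, rotating an SH environment about a single (z) axis is cheap: within each band l, the coefficient pair of orders m and -m rotates by m times the angle, and zonal (m = 0) terms are unchanged. A sketch, assuming the usual flat index l^2 + l + m; the signs depend on the chosen real-SH basis convention, so treat this as the idea rather than Blender's implementation:

```python
import numpy as np

def rotate_sh_z(coeffs, alpha):
    """Rotate a real-SH coefficient vector about the z axis by alpha
    radians. Each (c_{l,m}, c_{l,-m}) pair rotates by m * alpha."""
    coeffs = np.asarray(coeffs, float)
    out = coeffs.copy()
    L = int(np.sqrt(len(coeffs))) - 1
    for l in range(L + 1):
        base = l * l + l  # flat index of the (l, 0) coefficient
        for m in range(1, l + 1):
            c, s = np.cos(m * alpha), np.sin(m * alpha)
            a, b = coeffs[base + m], coeffs[base - m]
            out[base + m] = c * a - s * b
            out[base - m] = s * a + c * b
    return out

v = np.arange(16.0)                # a hypothetical L = 3 environment
r = rotate_sh_z(v, 0.4)
```

Because each band's rotation is orthogonal, per-band energy is preserved and the rotation composes and inverts exactly.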
After the SH coefficients are computed, the scene would be rendered using the SH coefficients of both the lighting environment and the mesh objects. The default lighting environment is a synthetic light if no light probe image is provided. Here is the synthetic light function, specified in spherical coordinates:
f(t, θ, φ) = 1   if (t - θ) > 0
             0   otherwise
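Read this way, the default light is a spherical cap of angular radius t around the pole (θ = 0), independent of φ. A small sketch evaluating the cap and its band-0 (DC) SH coefficient, which follows from the cap's solid angle 2π(1 - cos t) times Y00 = 1 / (2√π); the function names are ours, not the tool's:

```python
import numpy as np

def synthetic_cap(t, theta):
    """Default synthetic light: radiance 1 inside the spherical cap of
    angular radius t around the pole (theta = 0), 0 outside."""
    return np.where(theta < t, 1.0, 0.0)

def cap_dc_coefficient(t):
    """Band-0 SH coefficient of the cap: solid angle * Y00."""
    return 2.0 * np.pi * (1.0 - np.cos(t)) / (2.0 * np.sqrt(np.pi))

inside_out = synthetic_cap(0.5, np.array([0.2, 0.8]))  # [1.0, 0.0]
```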
The property panel can be turned on and off by pressing NKEY. In the property panel users can choose the light brush's colour from the colour picker. Users can also change the size of the brush. A small preview of the current lighting environment is also displayed. The light paint brush works very similarly to other paint brushes: users can brush the surface of the models and specify the desired colours on different parts of the surface.
Users can directly modify the current lighting environment, or divide the desired modification into multiple light nodes. For example, one can start with a light probe image. He or she then creates a new light node to store the new changes to the light environment. The two nodes can then be mixed (added, subtracted, or combined in other ways) to create an output light environment. When creating a new light node, users can turn off the visibility of other light nodes and focus only on the new modification. After the creation is completed, the visibility can be turned on again. The changes made in the view3d space would always be applied to the active light node in the node space.
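Because SH projection is linear, mixing light nodes can operate directly on their coefficient vectors: adding or subtracting coefficients is equivalent to adding or subtracting the lighting environments themselves. A sketch, with hypothetical mode names rather than the tool's actual node types:

```python
import numpy as np

def mix_light_nodes(a, b, mode="add", factor=1.0):
    """Mix two light nodes' SH coefficient vectors coefficient-wise.
    'factor' scales the second node's contribution."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    if mode == "add":
        return a + factor * b
    if mode == "subtract":
        return a - factor * b
    raise ValueError("unknown mix mode: %s" % mode)

probe = np.array([1.0, 2.0])   # hypothetical light probe node
paint = np.array([3.0, 4.0])   # hypothetical painted node
mixed = mix_light_nodes(probe, paint)  # [4.0, 6.0]
```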
User creates a new light environment for a static scene
- Complete models in the scene
- Switch to light paint mode (no environment map specified, so the default lighting is used)
- Paint models with desired light colours (default lighting replaced by new lighting effects)
- Output the resulting lighting to an HDR file.
User modifies an existing light environment for a static scene
- Complete models
- Select world texture
- Switch to light paint mode
- Open node space and use light nodes
- Create a new light node for new changes
- Paint light in view3d space using different colours
- Create a mix node
- Mix the new light node with the initial light node for world texture
- Save the mixed light environment as an HDR image
Users can create multiple light nodes, each of which holds certain changes, and mix them in a desired way.
User modifies the lighting environment for the active (selected) object
The workflow for this case is the same as the previous one. The user can only view the active object in view3d space and paint on that object.
theeth suggested this scenario since all the other modes (edit, vertex paint, texture paint, etc.) only act on the active object. In my opinion, it doesn't make sense to change lighting for only one object without putting it in the entire scene, since a change of lighting can affect adjacent objects in an undesired way. --Yukishiro 02:11, 25 April 2009 (UTC)
- I agree. I considered this a "what if" scenario only. "If that was something we wanted to do, would the system support it?" No need to spend time on this too much. --Theeth 17:35, 25 April 2009 (UTC)
User modifies the mesh objects
If the user modifies the vertices, the diffuse SH coefficients for those vertices will be updated automatically. However, the shadow SH coefficients cannot be recomputed as easily, since the occlusion may have changed. The user needs to manually trigger an update for the shadow coefficients.
There will not be two separate variables for the diffuse and shadow coefficients. Diffuse coefficients are cheap to compute, but they do not account for shadows, which may be important for the lighting modification. The diffuse values will be overwritten by the shadowed ones after the user triggers a new computation.
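The diffuse-versus-shadowed distinction can be sketched as one Monte Carlo routine with an optional occlusion test: without it, the result is the cheap diffuse value updated automatically on edits; with it, the more expensive shadowed value that the user triggers manually. The helper, its signature, and the band-0-only scope are all hypothetical simplifications, not the tool's API:

```python
import numpy as np

def transfer_dc(normal, occluded=None, n_samples=2048):
    """Band-0 transfer coefficient for one vertex: the clamped cosine
    max(n . w, 0), optionally masked by an occlusion test
    (direction -> bool), projected onto Y00 = 1 / (2 sqrt(pi))."""
    rng = np.random.default_rng(1)
    # Uniform directions on the unit sphere.
    w = rng.normal(size=(n_samples, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    cos = np.maximum(0.0, w @ np.asarray(normal, float))
    if occluded is not None:
        vis = np.array([0.0 if occluded(d) else 1.0 for d in w])
        cos = cos * vis
    y00 = 1.0 / (2.0 * np.sqrt(np.pi))
    # Monte Carlo integral: sample mean * sphere area (4 pi).
    return float(cos.mean() * 4.0 * np.pi * y00)

diffuse = transfer_dc((0.0, 0.0, 1.0))  # ~ sqrt(pi) / 2, no occlusion
shadowed = transfer_dc((0.0, 0.0, 1.0), occluded=lambda d: d[2] > 0)
```

An occluder blocking the whole upper hemisphere drives the shadowed value to zero, which is exactly the information the automatic diffuse update cannot capture.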