From BlenderWiki


Gooseberry Simulation and Caching Development


Hair Solver Stability and Cleanup

Sintel's Bug

A long-standing problem in the Blender hair simulation system was the seemingly random instability of the cloth solver for certain hair configurations. This problem already occurred during the Durian project (a.k.a. Sintel) around 2010 and was still untouched at the beginning of the Gooseberry project.

It turned out that the instability was caused by very short hair segments. The solver uses a stiffness parameter to define how quickly hair strands react to changes in length and orientation (a standard model for describing hair geometry and dynamics). For very short segments the elongation of such springs easily becomes much larger than the rest length, so the solver has to deal with large numerical ratios. This creates a badly conditioned linear system which converges very slowly (or not at all), manifesting as instability.

The most common source of such short hairs seems to be the cut tool in hair edit mode. Often the tool will cut hairs just above the mesh surface, creating these tiny segments. It would help if the cut tool avoided creating them, but short strands could still emerge by other means. A more elegant solution would be to precondition the solver in a way that avoids large length ratios. This was not implemented due to lack of time, but a quick solution was found by simply disabling hair simulation for any hair strands below a certain size (compared to the longest hairs).
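The workaround can be sketched as a simple length filter (a hedged illustration; the function and parameter names are not from the actual code):

```python
def simulation_mask(strand_lengths, min_ratio=0.1):
    """Return a per-strand flag: False disables simulation for strands
    shorter than min_ratio times the longest strand. Illustrative
    reimplementation of the workaround described above."""
    longest = max(strand_lengths, default=0.0)
    threshold = longest * min_ratio
    return [length >= threshold for length in strand_lengths]
```

Strands flagged `False` would simply keep their groomed shape instead of entering the solver.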

Restructuring Cloth Solver Code

One of the oddities of hair simulation code in Blender is the use of the pre-existing cloth solver. This solver was implemented specifically for cloth simulation, so hair sim code has to jump through hoops to make it work for strands. Despite this, the internal algorithm of the solver is not specific to surface geometry: It's a generic mass-spring model consisting of vertices representing mass, and linear springs between them to describe mechanical material properties.
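The mass-spring building block described here can be sketched in a few lines (an illustrative sketch, not Blender's actual implementation; damping is omitted and a non-zero current length is assumed):

```python
import math

def spring_force(xa, xb, rest_length, stiffness):
    """Hooke force on vertex a from a linear spring to vertex b:
        f = k * (|xb - xa| - L0) * dir(a -> b)
    The generic mass-spring building block the cloth solver is built on.
    Assumes the two endpoints do not coincide (length > 0)."""
    d = [b - a for a, b in zip(xa, xb)]
    length = math.sqrt(sum(di * di for di in d))
    scale = stiffness * (length - rest_length) / length
    return [scale * di for di in d]
```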

To make this solver easier to use for non-cloth geometry, the cloth solver code has been moved into a dedicated library (keeping all the code in blenkernel was becoming messy). The new `bf_physics` library exposes a set of internal API functions to build and modify the motion state of simulated vertices and springs.

At first the only implementation using this API was the cloth solver (changing particles to use direct hair solver setup proved difficult). After the hair simulation was reimplemented on top of dupli overrides (see below) the API was extended to a true strands data conversion, which avoids a great deal of hacks and ugly workarounds.

Matrix Solver Alternatives

At the core of virtually all iterative solvers is a large, sparse, linear system of equations. How exactly a solution vector to such a system is found depends on the shape of the problem. In the case of Blender cloth simulation the chosen method is the Conjugate Gradient method.

The code for implementing a sparse matrix structure and performing the associated mathematical operations on it was written as a self-contained system in C. The sparse matrix representation is essentially an array of 3x3 blocks (since every interaction is based on 3D vectors and thus creates non-zero 3x3 blocks in the matrix). To map back from some vertex to the index in the block matrix array, each spring stores the matrix index for its interaction.
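The array-of-3x3-blocks idea can be illustrated with a minimal block-sparse matrix-vector product (names and layout are illustrative stand-ins for the actual C structures):

```python
def block_matvec(blocks, x, n):
    """Multiply a block-sparse matrix by a vector of 3D points.
    `blocks` maps a (row, col) vertex pair to a 3x3 block (list of rows);
    `x` is a list of n 3-vectors. Mirrors the array-of-3x3-blocks layout
    described above, with explicit (row, col) indexing per interaction."""
    y = [[0.0, 0.0, 0.0] for _ in range(n)]
    for (i, j), b in blocks.items():
        for r in range(3):
            for c in range(3):
                y[i][r] += b[r][c] * x[j][c]
    return y
```

In the real code each spring remembers the array index of its block, which plays the role of the `(i, j)` key here.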

The design is not flexible, efficient nor elegant. Much better representations of sparse linear algebraic structures exist in external libraries. In the case of Blender the Eigen3 library is already linked by default, so it could readily be used as the basis for the conjugate gradient solver. In fact, a basic conjugate gradient solver already exists in the Eigen library, with the limitation that it does not support constraints. A modified CG method was therefore implemented in the Eigen3 namespace.
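For reference, the basic unconstrained CG iteration that such a solver builds on looks roughly like this (a plain-Python sketch; the constraint filtering the cloth solver needs is deliberately left out):

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive (semi)definite A, where
    `matvec` computes the product A @ v. Textbook CG iteration; the
    modified solver mentioned above additionally filters the search
    direction to respect constraints."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A x, with x = 0
    p = list(r)                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```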

In the end, the Eigen solver implementation was not finished due to limited time, but it could still replace the old implementation in the future. Another issue to consider is the need for handling arbitrary linear complementarity problems (LCPs), for which the CG method is not suitable. A wholly new solver, such as a Gauss-Seidel solver, may therefore be a better option for future simulation systems.

Collision Detection and Response

Collisions are central to any realistic physical simulation for 3D graphics purposes. A bad collision algorithm can easily destroy a shot even when animation and other motion effects are perfect.

Collision handling can typically be separated into a contact detection step and a collision response step, which can for the most part be examined independently of each other.

Cloth Collision Methods

Contact points for the current cloth solver are generated from intersection points on the collider mesh for every face of the cloth mesh. These points are then used to calculate penetration. The actual collision response is formed by performing multiple solver steps, trying to find a relaxed position solution, which then gets transformed back into a velocity solution. The old code makes no explicit mention of academic work, so identifying the underlying method comes down to guesswork.

For surface meshes this method works reasonably well. For strand-like geometry, however, the contact point generation mechanism does not work. Furthermore using a BVH-based contact point lookup has severe pitfalls: Allowing a margin between cloth and collider increases the number of potential collision pairs, which can cause excessive memory use and extremely bad performance.

Introducing Hair Collision

An attempt was made at extending the collision method for cloth to support hair geometry as well.

Contact points are supposed to be a subset of all actual contacts in a collision, sufficient to reproduce the collision response. Accurately calculating physically correct contact points is a difficult challenge. Blender uses a naive (and incorrect) approach, calculating the closest point on a collider for every face of a cloth mesh. Calculating reliable contact points is much more involved than one might think, but fortunately such algorithms are standard fare in the Bullet physics engine [1]. During Gooseberry some experimental code for using Bullet's contact calculations was created, but postponed for other priorities.
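To make the notion of a contact point concrete, here is a minimal example against an analytic sphere collider (purely illustrative; the real code works against triangle meshes, where the closest-point query is much harder):

```python
import math

def sphere_contact(p, center, radius):
    """Generate a contact for a vertex `p` against a sphere collider.
    Returns (point_on_surface, outward_normal, penetration_depth), or
    None when the vertex is outside the sphere (or exactly at the
    center, where the normal is undefined). Illustrates the data a
    contact point carries for the response step."""
    d = [pi - ci for pi, ci in zip(p, center)]
    dist = math.sqrt(sum(di * di for di in d))
    if dist >= radius or dist == 0.0:
        return None
    n = [di / dist for di in d]
    surface = [ci + radius * ni for ci, ni in zip(center, n)]
    return surface, n, radius - dist
```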

For implementing hair collision the contact point method for cloth meshes does not work at all (which is why hair collision was never supported before). Since hair collision was expected to be important, we tried generalizing the cloth contact code. The results turned out to be lacking in quality: generating contacts between faces and line segments does not yield sufficiently good results. The response is poor on concave surfaces and tends to miss many cases where more than one contact is required.

The dynamic response for cloth simulation also relies heavily on the surface normals of both the cloth and the collider in order to find a collision-free state. This, again, does not work well for strand geometry, which has no surface normal of its own. The resulting collision method gives a reasonable response for short-lived contacts, but leads to sliding and incremental penetration for hair resting on static colliders.

[1] "Contact Generation"

Research on Modern Collision Approaches

A great volume of academic research on soft-body collision has been produced in recent years. While mathematically much more advanced than current Blender code, this provides a range of possible solutions to the collision response problem for hair. None of these methods were implemented during the Gooseberry project, both due to lack of time and resources, and because a better contact-point algorithm as described above would be required.

All soft-body dynamics calculation ultimately has to solve large, sparse, linear equation systems of the form `Ax = b`. There is a plethora of different iterative solver methods in existence, although they can be categorized into just a few base types. For an overview of common methods see [2]. The Conjugate Gradient (CG) solver is currently used for Blender cloth and hair dynamics. The CG solver has the advantage of being fast, but it puts restrictions on the form of the matrix in the equation system. Extended CG variants (BiCG, BiCGSTAB) have been developed to handle other types of systems, at the expense of performance. A Gauss-Seidel solver may be a better choice for handling collision constraints though, due to its widespread use, generality, and potential for parallelization [3].
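For comparison with CG, a basic Gauss-Seidel sweep looks like this (a dense-matrix sketch; per-row constraint projection, the property that makes the method attractive for contacts, would hook into the inner loop):

```python
def gauss_seidel(a, b, iterations=50):
    """Solve A x = b by Gauss-Seidel sweeps, with A given as dense row
    lists. Each unknown is updated immediately using the latest values,
    which is also where a per-contact constraint projection could be
    applied (clamping x[i] after its update). Assumes A is e.g.
    diagonally dominant so the sweeps converge."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
    return x
```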

The form of the linear equation system we have to solve (and therefore the solver method we can use) is defined by the physical differential equations. The derivation of equations is out of scope here, for in-depth descriptions of cloth and hair physics see [8], [9], [11].

With the basic dynamics equations for a softbody mass-spring system the resulting matrix is positive semidefinite. This form of a matrix makes the CG solver the ideal method. However, the situation becomes more complicated once constraints are imposed on the movement of vertices. These can be simple constraints from boundaries or the like, but of particular interest is the formulation of collisions as motion constraints (a vertex can not move through a collider surface). Defining collisions in such a way promises to solve even complex penetration issues by calculating a collision-free state.

There is no room here for going into detail on all the solver methods. Following is a selection of papers that should give a general overview and describe some of the more advanced techniques.

[TODO: explain LCP, MLCP, SOCP?]

[2] "A Comparison of Some Iterative Methods in Scientific Computing"

[8] "Large Steps in Cloth Simulation"

[9] "Stable but Responsive Cloth"

[5] "Discrete Elastic Rods"

[11] "A Mass Spring Model for Hair Simulation"

[7] "A Hybrid Iterative Solver for Robustly Capturing Coulomb Friction in Hair Dynamics"

[3] "Parallel Dense Gauss-Seidel Algorithm on Many-Core Processors"

Volumetric Methods for Simulating Hair Interaction

Because of the tremendous amount of geometric detail in hair, self-collision of hair geometry is quantitatively different from collisions with the outside world (including the scalp). In any natural hair configuration the number of potential hair-hair contacts far exceeds the number of contacts with simple external geometry. Especially for curly hair styles the orientation of hair segments is essentially random, so the number of potential contact pairs grows quadratically with the number of hair strands and/or subdivisions.

Classic, unmodified collision algorithms quickly run into their limits when trying to brute-force solve collisions. Two very different approaches for handling the self-collision problem have been investigated during the Gooseberry production. Unfortunately neither of them was ultimately implemented, but in the future one of these may be feasible to implement.

  • Aggressive pruning of contact pairs and grouping of uncoupled contacts for efficient threading has been used for the production of "Brave" [12]
  • Volumetric representation of hair can augment the Lagrangian geometry with a voxel grid, then use standard volumetric solvers for interpolating velocities [6], [10]

The contact pruning approach was initially investigated, but soon considered too complex to implement in the available time span. The method has its merits, since it treats hairs in their original Lagrangian (point mass) description, but with the limited resources available it was not a feasible target.

Volumetric hair dynamics is based on the fact that for any large amount of noisy geometry the dynamic behavior of the system as a whole can be described statistically over large enough volumes. The result is a cumulative effect on every vertex, simulating the effect of hair-hair collisions. Many natural phenomena can be described through this system, e.g. the tendency of hair to trap large amounts of air and become "puffy".

[TODO add red fluff ball example]

The volumetric approach avoids the quadratic blow-up of segment-segment pairs by using a regular fixed-size grid with a controllable number of cells. The tradeoff is a loss of accuracy for individual hairs, which can become apparent when a hair strand is influenced by the grid without visually colliding.
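The core of the grid approach can be sketched as a scatter-gather velocity filter (a crude nearest-cell version; the cited methods [6], [10] use trilinear weights and additional solves on the grid):

```python
def grid_filter_velocities(positions, velocities, cell_size):
    """Average hair vertex velocities over a uniform grid: scatter each
    velocity into its cell, then read back the cell average for every
    vertex. Vertices sharing a cell end up with a common velocity,
    which mimics the cumulative effect of hair-hair interaction."""
    cells = {}
    for p, v in zip(positions, velocities):
        key = tuple(int(c // cell_size) for c in p)
        total, count = cells.get(key, ([0.0, 0.0, 0.0], 0))
        cells[key] = ([t + vi for t, vi in zip(total, v)], count + 1)
    out = []
    for p in positions:
        key = tuple(int(c // cell_size) for c in p)
        total, count = cells[key]
        out.append([t / count for t in total])
    return out
```

In practice the filtered velocity would be blended with the per-vertex velocity, so that individual strand motion is damped rather than replaced.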

An experimental implementation of this feature has been created during the Gooseberry production. However, configuring an abstract volumetric grid in addition to the existing hair simulation turned out to be too difficult. Many settings of the volumetric solver don't have natural visual representations, so getting the effect just right takes a lot of iterations and experience.

Eventually the movie was finished without any actual hair-hair interactions, which was possible only due to the restricted motion the script called for. Future hair simulation improvements, however, certainly have to provide at least one of these methods in a reliable way.

[12] "Artistic Simulation of Curly Hair"

[6] "Detail Preserving Continuum Simulation of Straight Hair"

[10] "Volumetric Methods for Simulation and Rendering of Hair"

Inverse Dynamics for Compensating Gravity

Hair simulation is typically run on a per-shot basis, starting on the first frame and advancing the simulation frame-by-frame. The initial hair deformation ("hair style") is typically created through grooming tools (i.e. hair editing).

A common problem then is that, as soon as the simulation starts, the hair reacts to the ever-present gravity force and starts "sagging". After a second or two the internal bending forces and gravity reach an equilibrium and the hair comes to rest. Typically artists would therefore add a "pre-roll" interval before every shot, which gives the hair time to relax and start the shot from a resting position.

Even with pre-roll though the problem remains that the relaxed hair deformation after pre-roll can be quite different from what the artist intended (giving a "wet hair" look). The reason is that artists will always model a hair style in a natural environment. The effect of gravity on hair is anticipated during grooming and a character will be designed with their hair resting on the head and shoulders. Ideally when starting a simulation on a character in rest pose the hair should not move at all, because the hair is already under the (imagined) influence of gravity.

What happens instead is that the simulation assumes the initial hair pose to be force-free, i.e. the natural shape of the hair in zero gravity. This discrepancy means that making long hair appear natural is difficult - either the hair will deviate from the intended look during simulation, or the artist would have to model a very unintuitive hair style of a character in zero gravity.

A solution to this problem is the use of an 'Inverse Dynamics' preparation step. Inverse Dynamics is a technique for finding a set of parameters that lead to a force-free configuration of a dynamic system. In the case of hair dynamics this means finding the bending angles of each hair segment such that the resulting equilibrium under gravity matches the groomed initial pose. After this calculation the bending angles can be used to construct the actual force-free rest shape of hairs without gravity, theoretically removing the need for pre-roll and sagging compensation.
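For a single segment the idea reduces to a torque balance, which the sketch below solves for one angular spring (illustrative names and a point-mass pendulum model; the real problem couples all segments and may include friction and contacts):

```python
import math

def rest_angle(theta_obs, mass, length, stiffness, g=9.81):
    """One-segment inverse-dynamics sketch: given the groomed angle
    `theta_obs` (radians from the downward vertical) that should be the
    equilibrium under gravity, return the force-free rest angle theta0
    of the angular spring, so that spring and gravity torques cancel:
        stiffness * (theta0 - theta_obs) = mass * g * length * sin(theta_obs)
    With gravity turned on, a segment built with rest angle theta0 then
    sags exactly back to the groomed angle theta_obs."""
    return theta_obs + mass * g * length * math.sin(theta_obs) / stiffness
```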

Mathematically the method of Inverse Dynamics is very similar to that of Inverse Kinematics (IK), well known from bone chains in Blender rigging tools. The goal of an IK solver is to produce a set of parameters minimizing the deformation out of the rest pose (energy minimization). Inverse Dynamics uses the same techniques but on the level of dynamic states, i.e. where an IK chain produces a purely positional solution, the Inverse Dynamics solver gives a positional as well as a velocity solution.

This feature, too, remained on the drawing board during the Gooseberry project; it should be tackled in future development.

[4] "Inverse Dynamic Hair Modeling with Frictional Contact"

[13] "Introduction to Inverse Kinematics with Jacobian Transpose, Pseudoinverse and Damped Least Squares methods"

[14] "Discrete bending forces and their Jacobians"

Child Hair Features

Spiral Child Hair Mode

Most visual detail in the particle hair systems is created by using child hairs. These extra hairs are interpolated from the "actual" parent hairs and don't follow their own simulation.

The basic shape of child hairs is further modified by an assortment of features collectively named "child clumping". This includes various random noise displacements as well as shaping hair "bundles" around their parent, e.g. using curls, braids or waves.

For the Cosmos Laundromat movie the character design of Victor called for a peculiar hair shape that was not possible to create with the existing child modes. A new "Spiral" mode was then added, which creates an aligned logarithmic spiral at the tip of each hair.
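A logarithmic spiral of the kind described here can be sampled as follows (parameter names are illustrative, not the actual particle settings; the real feature additionally aligns the spiral plane to the hair tip):

```python
import math

def spiral_points(radius, growth, turns, steps):
    """Sample a logarithmic spiral r(t) = radius * exp(growth * t) in
    the XY plane, over `turns` full revolutions with `steps` segments.
    Returns a list of (x, y) points from the spiral start outwards."""
    pts = []
    for i in range(steps + 1):
        t = 2.0 * math.pi * turns * i / steps
        r = radius * math.exp(growth * t)
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts
```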

Roughness Curves

Controlling the amount of turbulent displacement along the strand was crucial for achieving the final character look. The existing methods were based on a largely fixed "roughness" function based on 3 independent parameters (overall roughness and roughness at the root and tip).

To make controlling roughness and clumping more intuitive and flexible, curve widgets were added as alternative methods in the particle settings.

Clumping Noise

Further improvements were needed to refine the effect of "clumping": the bundling together of child hairs around their parent. This is a natural phenomenon, caused by the tendency of strands to stick together and align, forming thicker strands. The way this is done with particle hairs is to move each child vertex closer to its parent. This means that the distribution of clumps is tied to the distribution of parents. Usually artists try to keep the number of parents as low as possible, since it means better performance and less grooming work. With more extreme clumping (such as we needed for a filthy sheep) the clumps become too sparse to achieve the desired look.

A new feature "clump noise" was added to address this issue. If used, it will group child hairs around a number of clumping centers, rather than just around the parent. The number and density of clump centers can be controlled with the clump noise parameters.
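The underlying clumping operation is a simple per-vertex interpolation toward a center; with clump noise that center is a noise-chosen point near the child rather than the parent strand (a sketch, not the actual code):

```python
def clump_vertex(child, center, factor):
    """Pull a child hair vertex toward a clump center by `factor`
    (0 = untouched, 1 = fully on the center). Classic clumping uses
    the corresponding parent hair vertex as `center`; clump noise
    substitutes one of several noise-distributed clump centers."""
    return [c + factor * (t - c) for c, t in zip(child, center)]
```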

Strands Data in Dupli Overrides

In the later stages of the Dupli Overrides development (see below) the need for having a new hair data system became inevitable. The way hair data is stored in particle systems at this point is not maintainable any more, and copying this data storage is out of the question. Particle code has become too unwieldy over time to make even simple fixes possible, let alone adapting the outdated data structures for an entirely new purpose.

Instead, a new straightforward data structure with the single purpose of describing hair strands was introduced. This data stores vertices and strands alone, which keeps it simple and predictable (particle data suffers from ambiguity and multi-purpose elements all the time). The simplified hair data can be generated from existing particle system data easily, which makes it possible to port over existing hair simulations.

'Strands' data exists in two flavors for "parent" and "child" strands. This distinction is inherited from particle systems for compatibility, but should probably be dropped in the future for simplicity. Parent strands are usually invisible and only serve as simulation state data. Child strands are the actual renderable component and they get deformed by parent strands to inherit their dynamic motion.
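A minimal version of such a strands layout, and the conversion from per-particle hair keys, could look like this (an illustrative Python stand-in for the actual C structures; all names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class StrandVertex:
    co: tuple            # vertex position
    weight: float = 1.0  # e.g. root-to-tip simulation weight

@dataclass
class Strand:
    verts: list = field(default_factory=list)

def strands_from_particles(hair_keys_per_particle):
    """Build the simplified strands layout from per-particle hair key
    arrays: one Strand per particle, one StrandVertex per hair key.
    This mirrors the particle-to-strands porting path described above."""
    return [Strand([StrandVertex(tuple(co)) for co in keys])
            for keys in hair_keys_per_particle]
```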

A great advantage of the new system is that it can utilize the mass-spring physics solver API much more easily than particle systems. Particles have to

  • encode all the hair data first into a mesh (with clumsy extra data)
  • then convert the mesh into solver data
  • then simulate it as if it were a cloth surface mesh
  • then store the result back into a mesh (which also functions as a kind of cache)
  • then reconstruct the strand geometry on top of the mesh

The new strands data, on the other hand, integrates seamlessly with the solver API: it can be converted to and from the internal solver data without hackish workarounds, and does not have to disguise itself as a cloth mesh.

Hair Edit Mode Rewrite

Grooming could be described as the essential artistic component in hair simulation. It forms a bridge between the artists' need for control and expression and the largely automatic process of calculating movement. Other tools can be used to influence the behavior of the simulation, but the grooming tool is the most important for achieving a desired look of characters.

The current particle edit mode has served as the grooming tool set so far, but it shows major shortcomings. By design it is tightly coupled to the particle system data as well as point caches, which makes maintaining the code a constant challenge. Extending the system is virtually impossible due to code quality. Performance is a big problem for complex hair systems due to the way particle and hair data is accessed (each hair has its own allocated array of hair keys). Drawing of hair strands in the OpenGL fixed function pipeline is seriously limited and has a hard time giving an accurate preview for rendering (a problem that can only be solved with a future viewport rewrite).

All these reasons made it advisable to modernize the hair edit mode, which means rewriting it. This has been completed, apart from some minor rough edges.

The new edit mode is based on the BMesh system, rather than directly working on the respective particle or strands data. Any hair system can be converted to and from the edit mode data with a few simple functions. Such decoupling of persistent data from the internal representation is a case of Separation of Concerns: It prevents changes to the hair data structure from affecting functionality throughout the whole edit mode and vice versa (all that needs to be touched are the conversion functions).

Using BMesh for hair edit mode rather than a customized system has a number of advantages:

  • BMesh is well-established and tested from its primary use for mesh edit mode
  • Performance of adding and removing elements is far better than in the particle edit mode, due to consistent use of hashes and memory pool allocation.
  • Since generic mesh data is a superset of the strand geometry we need, it is possible to encode strands cleanly in a bmesh
  • Many existing BMesh tools and operators could be used in the hair edit mode as well to reduce code duplication.

Hair Shape Keys

Shape Keys for hair were targeted at various times to solve a range of problems that cropped up during animation. Simulation results are often not satisfactory and need corrections. Even if the simulation results are physically correct, the outcome may not be to the director's specifications and need to be retouched.

Correcting results after integrating the simulation requires random access to all the calculated motion states, at any given frame in the relevant interval. Since we only store the immediate current state of the system as a DNA data block, such random access requires a functional caching system. The particle system edit mode uses the current Point Cache system for this purpose (or at least pretends to do so). For the new strands system a different caching system would be needed, which does not exist at this point.

Even when the basis for manipulating simulation results on a cache exists, the workflow for such corrections is far from easy. The sheer amount of data (one point per vertex per frame) makes tools impossible to use on anything but the simplest simulations. Furthermore, corrections made on the results of a simulation are not easily transferable: if the simulation changes, all the corrections have to be redone as well. There may be intelligent ways to deal with these problems, but during Gooseberry there was no time for further experimentation.

Shape Keys promise to be a correction mechanism for hair simulation that is repeatable and requires only a limited set of data. The basic concept is the same as with meshes: A shape key is a target shape, which gets applied on top of the base data with a relative weight, fading in and out over a frame interval. Hair shape keys can be stored as part of the simulation settings (in our case in Cache Libraries, see below).

There are major differences in how mesh and hair shape keys work:

  • Mesh shape keys are applied on top of basic mesh shapes, but before applying animation keyframes (they are treated as a "virtual" modifier at the stack start). This has important consequences for how hair shape keys are evaluated, because the hair sim is by default animated and there exists no static base shape that a shape key could relate to. This means that hair shape keys would have to be stored as relative shapes rather than absolute mesh shapes. Then a shape key could apply on top of a cached simulation without always leading to the same absolute shape. A problematic side effect though is that such relative keys also cannot change the basic motion, so using shapes for correction has limitations.
  • Hair shapes should be independent from deformation of the scalp mesh as much as possible. Ideally a relative hair shape key would keep working even when rotating the scalp face the hair is attached to. In addition to being a relative offset, the shape key data also must be in local "hair space", which is based on the local surface geometry and rotates with deformation. There is no guarantee that the resulting deformation will still work when the mesh animation changes, but a rotation-independent shape key should be much more robust for such constrained geometry.
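The evaluation rule for a relative, hair-space shape key could look like this (an illustrative sketch; `frame` stands for the 3x3 local surface frame of the scalp, shared across vertices here for simplicity, and all names are assumptions):

```python
def apply_relative_shape(base, offsets, frame, weight):
    """Apply a relative hair shape key: per-vertex `offsets` stored in
    local 'hair space' are rotated into world space by the 3x3 `frame`
    of the scalp surface, scaled by the key's `weight`, and added on
    top of the cached simulation result `base`. Because the offsets
    rotate with the frame, the key keeps working when the scalp face
    is deformed."""
    out = []
    for b, o in zip(base, offsets):
        world = [sum(frame[r][c] * o[c] for c in range(3)) for r in range(3)]
        out.append([bi + weight * wi for bi, wi in zip(b, world)])
    return out
```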

The shapekey system was ultimately not used in the production, although a tentative implementation exists.

Production Solutions for faking collisions

  • Forcefields with sharp falloff, stability concerns and possible countermeasures
  • Artistic approach to faking air flow using Turbulence, jittering issues and their causes

Cache Libraries


  • Qualities of Alembic that are needed in Blender (a general-purpose caching system with good compression)
  • Plans for replacement of point caches, difficulties of implementation

Cache Library Concept

  • Unifying cache handling in Blender
  • Abstraction of file paths and folders
  • ID datablock for reusability, sharing between multiple cache users

Dupli Overrides

Linking and Objects

(note: a larger document about these issues is in preparation; here is a more compact overview)

  • Ownership and mutability of datablocks in the studio pipeline
  • Existing approaches (and their limitations): Animation Proxies, driver toggles, Python overrides
  • DupliGroup problems apart from linking: how to access group content when it is not linked to the scene

Overrides as a Workflow Concept

  • Partial data replacement through localized objects
  • Storage problems: where do overrides live? where is resulting data stored?

Caches as primary means of storage

  • Caches avoid the problem of storing data outside of objects
  • Creating variants through multiple caches

Limitations and Problems

  • Caches are no longer optional, overrides rely on them
  • Limitations of viewport, depsgraph and rendering that have to be worked around (to make Blender use overridden objects instead of the originals)
  • Only one cache at a time can be created from originals
  • Overridden objects cannot easily be instanced themselves
  • Large-scale duplication of already existing structures and functionality (everything now has to work on caches instead of DNA data)
  • Complexity and blind spots

Grass Simulation

  • Principal requirements: lots of almost-redundant detail, physical simulation, large scale
  • Basic idea:
 * create small localized patches of geometry, animated on their own (hair sim for moving grass)
 * instantiate patches on a large scale using duplis
  • Remaining problem: uniform, repetitive movement at large scale (no swaying fields of grass possible this way)
  • The 'Meadows' addon tries to manage direct object copies for localizing the physics simulation (not used because of performance and viewport limitations)