User:Sebbas/Reports/2021

Reports 2021

January 04 - 08

  • General:
    • Working on project proposals for the new year.
  • Bugs:
    • Fix T83777: Mantaflow - Crash when enabling guides
    • Fix T84103: Smoke simulation doesn't show up after baking Noise
  • Next week:
    • There are still several bugs in the tracker that I would like to have fixed in 2.92. So more bug fixing next week.

January 11 - 15

  • Bugs:
    • Fix T84280: Mantaflow viscosity: Repeating emission from inflow causes initial emission to be twice as fast compared to subsequent emissions.
    • Fix T84103: Smoke simulation doesn't show up after baking Noise
    • Work on fix for T84649: Quick liquid causing crash on scale operation
    • Looked into macOS bug T81169, but could not figure out the issue
  • Next week:
    • Same as last week: Work on more bugs in the tracker.

January 18 - 22

  • General:
    • Continued working on bugs from last week (especially T84280 and T84649). Fixes turned out to be a bit more complex and are yet to be finalized.
    • General brainstorming for upcoming projects.
  • Next week:
    • Bug sprint week.
    • Extra efforts to fix T81169.

January 25 - 29

  • Bugs:
    • Investigated T81017 and identified the problem. The fix will be a bigger task; tagged as "Known Issue" for now.
    • Finalized the fix for T84649. Commit will be up this week in the release branch.
    • More tests for T84280, but so far no updates for this bug.
    • Helped with the fix of T81169.
  • Next week:
    • Help with the "Overrides" project, get familiar with the project requirements.
    • Formulate idea for a GSoC project (the idea is already there :)

February 01 - 05

  • General:
    • LibOverride: Only show relevant operators in outliner menu (bd973dbc44)
    • LibOverride: Added log statements in liboverride operator functions (07f7483296)
    • Miscellaneous fixes for fluids (critical for 2.92) (a563775649)
  • Bugs:
    • Fix T84649: Quick liquid causing crash on scale operation
    • Fix T85311: Mantaflow Wavelet Noise Crash
    • Closed T85381: Fluid and rigid bodies
  • Next week:
    • Look more into macOS library tasks
    • Formulate idea for a GSoC project (carry over from last week)

February 08 - 12

  • Bugs:
    • Submitted D10360: Animation: Prevent keyframe manipulation in linked data
  • Next week:
    • Library Overrides project: Diffing code, start working on T82160

February 15 - 19

  • Next week:
    • Finalize macOS lib updates

February 22 - 26

  • Next week:
    • Focus on library overrides (T82160)

March 01 - 05

  • Next week:
    • Catch up on physics module work
    • Investigate why unit tests are failing on macOS arm64

March 08 - 12

  • General:
    • Fix for some duplicate users in credits (b66c22e1fb97)
    • Enabled scale options for fluid particles in UI (b01e9ad4f0)
    • Began catching up on open issues / reports in physics module
  • Next week:
    • Similar to last week, more module work.

March 15 - 19

  • Bugs:
    • Investigated T86053: 2.9x - Crash while baking particle and/or smoke simulations
  • Next week:
    • Similar to last week, more module work

March 22 - 26

  • General:
    • Preparations for new project on better real-time physics / fluids!
  • Next week:
    • Solve issue from T86053 without updating the Blosc library

March 29 - April 02

  • Holiday week

April 05 - 09

  • Bugs:
    • Investigated T86053 and the feasibility of a Blosc library upgrade for OpenVDB
  • Next week:
    • Bug fixing for 2.93

April 12 - 16

  • General:
    • macOS: Recompiled Python libs on 10.13 (rBL62615)
    • CMake/deps: Remove CPP11 option for OpenImageIO (2cc3a89cf6)
  • Next week:
    • Bug fixing for 2.93

April 19 - 23

  • General:
    • No bug fixing this week (as planned before). Focused on real-time fluids improvements instead.
    • In particular, I am exploring options for offloading Mantaflow's (computationally expensive) simulation loops to the GPU.
  • Next week:
    • Bug sprint week for 2.93

April 26 - 30

  • General:
    • A mix of building a new workstation, reading CUDA developer docs, and working on some potential fixes for 2.93 fluids
    • GSoC proposal reviews
  • Next week:
    • Finalize + commit fluid bug fixes for 2.93
    • Continue work on real-time fluid optimizations

May 03 - 07

  • General:
    • Tests and fluid experiments with CUDA on new workstation
  • Next week:
    • Continue CUDA development

May 10 - 14

  • General:
    • Still working on the first prototype for fast(er) fluids
    • The current idea is to expand the Mantaflow preprocessor with an option for GPU offloading (OpenMP directives); a rough sketch of the idea follows below.
  • Next week:
    • Check in on bcon3 status and continue with the prototype
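
Below is a minimal sketch of the offloading idea, assuming a flat float array standing in for a fluid grid (names, sizes, and build command are illustrative, not the actual preprocessor output): a single OpenMP target directive placed before a for-loop moves that loop onto the GPU.

```cpp
// Minimal OpenMP GPU-offload sketch (illustrative, not Mantaflow code).
// Build with an offload-capable compiler, e.g.:
//   clang++ -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda offload_demo.cpp
#include <cstdio>
#include <vector>

int main() {
  const int n = 1 << 20;                 // one flattened grid's worth of cells
  std::vector<float> src(n, 2.0f), dst(n, 0.0f);
  float *s = src.data(), *d = dst.data();

  // The preprocessor would emit a directive like this before a grid loop:
  // copy the input to the device, run the loop there, copy the result back.
  #pragma omp target teams distribute parallel for map(to: s[0:n]) map(from: d[0:n])
  for (int i = 0; i < n; ++i)
    d[i] = 0.5f * s[i];

  std::printf("d[0] = %.1f\n", d[0]);    // expected: 1.0
  return 0;
}
```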

May 17 - 21

  • General:
    • Updated fluid Mantaflow source files for 2.93: The update includes a workaround for an issue with Blosc OpenVDB compression (crash in 2.92) (8dd43ac23e). Once OpenVDB updates their recommended Blosc version, this fix can be reverted.
    • GPU fluids - Mantaflow side:
      • Added a -DOFFLOAD_OPENMP option to CMake. With it enabled, all Mantaflow KERNEL functions carrying an offload argument will run on the GPU. This is achieved by making the code preprocessor place OpenMP GPU directives before for-loops (i.e. #pragma omp target teams ...); see the sketch below.
      • With the option above, KERNEL functions that don't require memory transfer to/from the GPU can already run on the GPU
    • GPU fluids - Blender side:
      • Clang compilation: Blender's clang will need to be built twice and with GPU offloading capabilities. Started adjusting the deps build, but made no tangible progress yet (linker still complaining ...).
  • Next week:
    • Continue (and ideally finish) work on the OpenMP map() directives, i.e. the transfer of Mantaflow memory blocks (grids, particle systems) between CPU and GPU
    • GSoC: Planning for the first weeks of coding (Soumya, "Simulation visualisation" project)
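
For intuition, here is a hand-written sketch of the kind of code the extended preprocessor could generate for a simple per-cell KERNEL (the function name, signature, and flat grid layout are invented for this example; the real generated code looks different):

```cpp
// Illustrative sketch of what the preprocessor could emit for a per-cell
// KERNEL with an offload argument. The plain CPU version is the same loop
// nest without the pragma.
#include <cstdio>
#include <vector>

// dst += scale * src over an sx * sy * sz grid, offloaded to the GPU.
// collapse(3) flattens the i/j/k nest into one parallel iteration space.
void addScaledGPU(float *dst, const float *src, float scale,
                  int sx, int sy, int sz) {
  const long n = (long)sx * sy * sz;
  #pragma omp target teams distribute parallel for collapse(3) \
      map(tofrom: dst[0:n]) map(to: src[0:n])
  for (int k = 0; k < sz; ++k)
    for (int j = 0; j < sy; ++j)
      for (int i = 0; i < sx; ++i)
        dst[i + sx * (j + sy * k)] += scale * src[i + sx * (j + sy * k)];
}

int main() {
  const int sx = 32, sy = 32, sz = 32;
  std::vector<float> dst((size_t)sx * sy * sz, 1.0f), src(dst.size(), 2.0f);
  addScaledGPU(dst.data(), src.data(), 0.5f, sx, sy, sz);
  std::printf("dst[0] = %.1f\n", dst[0]);   // expected: 2.0
  return 0;
}
```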

May 24 - 28

  • General:
    • Development of CPU <-> GPU mapping of Mantaflow data blocks (using the OpenMP map() directive) continued:
      • In the current state, grids (specified via Python) can be mapped to the GPU, modified there in parallel, and then read back.
      • This makes it possible to run simple operations on Manta data structures on the GPU (e.g. multiply two grids cell-by-cell; see the sketch below)
      • Caveat: There is still a lot of manual work involved (e.g. grid attributes need to be mapped explicitly in the code)
      • While the code is not ready yet, I can recommend that anyone interested in OpenMP GPU offloading watch some of OpenMP's conference videos (e.g. "Best Practices for OpenMP on NVIDIA GPUs")
    • macOS platform: Updated ffmpeg to version 4.4 (rBL62631)
  • Next week:
    • Continue with OpenMP mapping work. Mantaflow grid and particle system attribute mapping needs to be fully automatic.
    • Small evaluation: How big is the overhead generated by copying grid data to the GPU? How expensive does a function call need to be for the transfer to pay off?
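
The cell-by-cell multiply mentioned above, written out as a self-contained sketch with explicit map() clauses (illustrative names; in Mantaflow the grid data sits inside classes, which is why attributes currently have to be mapped by hand in much the same way):

```cpp
// Sketch of explicit CPU <-> GPU mapping with OpenMP map() clauses:
// multiply two grids cell-by-cell on the device and read the result back.
#include <cstdio>
#include <vector>

int main() {
  const int n = 64 * 64 * 64;            // flattened 64^3 grid
  std::vector<float> a(n, 3.0f), b(n, 2.0f);
  float *pa = a.data(), *pb = b.data();

  // 'to' copies b to the device; 'tofrom' copies a over and back again,
  // so the device result ends up in host memory after the region.
  #pragma omp target teams distribute parallel for \
      map(tofrom: pa[0:n]) map(to: pb[0:n])
  for (int i = 0; i < n; ++i)
    pa[i] *= pb[i];

  std::printf("a[0] = %.1f\n", pa[0]);   // expected: 6.0
  return 0;
}
```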

May 31 - June 04

  • GPU fluid development:
    • Continued with grid to GPU mapping development. Managed to get a first smoke plume simulation running where some of the grid loops ran on the GPU.
    • Performance evaluation: The bigger the parallel loop over grid cells, the bigger the gains on the GPU (obvious ...). The more interesting finding is that grid mapping from/to the GPU should be kept to a minimum. It's not a super expensive operation, but it should definitely not happen per function call (my first idea).
    • My GPU simulation tests turned out to be slower overall because of excessive mapping calls (the bottleneck). The actual loops over cells were much faster though.
    • Improvement for the test: Frequently used simulation grids (e.g. density, velocity) can be mapped globally, only once when they are created (#pragma omp target enter/exit data). This already works nicely via the Python API; see the sketch below.
  • Next week:
    • In addition to GPU work: GSoC coordination and getting patches into the Mantaflow standalone repository
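
A sketch of that "map once, use many times" pattern, assuming a single flat array as the grid (illustrative; in the actual code the enter/exit pair is tied to grid creation and destruction):

```cpp
// Sketch of persistent device mapping with target enter/exit data:
// map a grid once, run many device loops against it, copy back on demand.
#include <cstdio>
#include <vector>

int main() {
  const int n = 1 << 20;
  std::vector<float> dens(n, 1.0f);
  float *d = dens.data();

  // Map once: allocate on the device and copy the host data over. The
  // mapping stays alive across all target regions below.
  #pragma omp target enter data map(to: d[0:n])

  for (int step = 0; step < 100; ++step) {
    // No map() clause needed here: the runtime finds the existing device
    // copy by host address, so there is no per-call transfer.
    #pragma omp target teams distribute parallel for
    for (int i = 0; i < n; ++i)
      d[i] *= 0.99f;                     // e.g. a density decay step
  }

  // Copy the current device values back only when the host needs them ...
  #pragma omp target update from(d[0:n])
  std::printf("d[0] = %f\n", d[0]);

  // ... and release the device copy when the grid is destroyed.
  #pragma omp target exit data map(delete: d[0:n])
  return 0;
}
```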

June 07 - 11

  • GPU fluid development:
    • The first smoke simulation (running partly on the GPU) has been finished!
    • "Partly" as in: only the pressure solve runs on the GPU (the most expensive step in any simulation); see the sketch below
    • Some rough numbers (400x400x100 domain simulated with AMD 3700X + Nvidia 1050Ti):
      • CPU: 50 sec, GPU: 30 secs
      • (Times for 1 pressure step with 600 iterations)
    • As mentioned last week, bigger domains result in even greater speed-ups. A more thorough analysis with a more capable GPU (and memory!) should be done in the future. Would be nice to get someone from the community involved here (more info on this will come soon)
  • Next week:
    • Code cleanup, bring the current diff into "committable" shape
    • Port more functions to GPU (e.g. advection)
    • Get a liquid simulation running on the GPU (until now everything was smoke only)
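
To make the pressure numbers above concrete, here is a minimal sketch of what an offloaded pressure iteration can look like, using a Jacobi-style relaxation (Mantaflow's actual solver and grid classes differ; the domain size and names are illustrative). The grids stay resident on the GPU for the whole iteration loop, which is exactly where the speedup comes from.

```cpp
// Sketch of an offloaded Jacobi-style pressure iteration (illustrative,
// not Mantaflow's solver): relax laplace(p) = div on the grid interior.
#include <cstdio>
#include <utility>
#include <vector>

int main() {
  const int N = 64;                            // cubic domain, N^3 cells
  const long n = (long)N * N * N;
  std::vector<double> p(n, 0.0), pNew(n, 0.0), div(n, 1.0);
  double *pp = p.data(), *pn = pNew.data(), *pd = div.data();

  // Keep all grids resident on the GPU for the whole iteration loop.
  #pragma omp target enter data map(to: pp[0:n], pn[0:n], pd[0:n])

  for (int iter = 0; iter < 600; ++iter) {     // iteration count as in the test
    #pragma omp target teams distribute parallel for collapse(3)
    for (int k = 1; k < N - 1; ++k)
      for (int j = 1; j < N - 1; ++j)
        for (int i = 1; i < N - 1; ++i) {
          const long c = (long)i + N * ((long)j + N * (long)k);
          pn[c] = (pp[c - 1] + pp[c + 1] +                      // x neighbors
                   pp[c - N] + pp[c + N] +                      // y neighbors
                   pp[c - (long)N * N] + pp[c + (long)N * N] -  // z neighbors
                   pd[c]) / 6.0;
        }
    // Swap the host pointers; both buffers stay mapped on the device, so
    // the next target region transparently picks up the right one.
    std::swap(pp, pn);
  }

  #pragma omp target update from(pp[0:n])
  std::printf("p(center) = %f\n", pp[N / 2 + N * (N / 2 + (long)N * (N / 2))]);
  #pragma omp target exit data map(delete: pp[0:n], pn[0:n], pd[0:n])
  return 0;
}
```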

June 14 - 18

  • GPU fluid development:
    • Managed to get the GPU pressure solver working with liquid simulations. As expected and hoped, the performance boost is similar to the one for smoke simulations (see last week's report)
    • Worked mainly on the advection functions and the code preprocessor that generates their GPU versions (see the sketch below). The workflow is similar to porting the pressure solver: adjust a single function, confirm that the GPU kicks in, repeat.
    • Working on a simplification of the memory mapping to the GPU. An ideal outcome would require no explicit calls on the Python side - right now I still rely on them.
    • Got excited about new OpenMP 5.0 directives (loop, unified_shared_memory) - only to realize later that LLVM does not fully support OpenMP 5.0 yet ...
  • Next week:
    • Continue adjusting functions to run on the GPU (working my way down from the most to the least computationally expensive functions). The goal is to be able to run a full simulation step on the GPU.
    • In order to run the first GPU simulations in Blender itself, the LLVM build from the deps needs to be adjusted. Planning to look into that next week.
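
As a flavor of the advection work, here is a semi-Lagrangian advection step with the offload directive, deliberately simplified to 1D (Mantaflow's advection kernels are 3D and MAC-grid aware; all names here are illustrative): each cell traces back along the local velocity and linearly interpolates the advected field.

```cpp
// Sketch of an offloaded semi-Lagrangian advection step (1D for brevity;
// illustrative only). Each cell traces back along the local velocity and
// interpolates the advected quantity from the previous field.
#include <cstdio>
#include <vector>

int main() {
  const int n = 1024;
  const float dt = 0.5f;
  std::vector<float> q(n, 0.0f), qNew(n, 0.0f), vel(n, 1.0f);
  for (int i = 100; i < 200; ++i) q[i] = 1.0f;   // a block of "smoke"
  float *pq = q.data(), *pn = qNew.data(), *pv = vel.data();

  #pragma omp target teams distribute parallel for \
      map(to: pq[0:n], pv[0:n]) map(from: pn[0:n])
  for (int i = 0; i < n; ++i) {
    float x = (float)i - dt * pv[i];             // backtrace from cell i
    if (x < 0.0f) x = 0.0f;                      // clamp to the domain
    if (x > (float)(n - 1)) x = (float)(n - 1);
    const int i0 = (int)x;
    const int i1 = i0 + 1 < n ? i0 + 1 : n - 1;
    const float t = x - (float)i0;
    pn[i] = (1.0f - t) * pq[i0] + t * pq[i1];    // linear interpolation
  }

  std::printf("qNew[150] = %.2f\n", pn[150]);    // expected: 1.00
  return 0;
}
```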

June 21 - 25

  • GPU fluid development:
    • Continued and finished porting all fluid advection functions to the GPU.
    • As expected, the speedup gained from this optimization is less noticeable than the one from the pressure functions (fluid advection usually takes up 10%-15% of a simulation step). But still, it's a speedup.
    • Started with the LLVM adjustments in the deps builder - it's a WIP.
  • Next week:
    • Wrap up the pressure and advection GPU ports (i.e. bring them into a committable state). These two optimizations should become the central part of a v1 release.
    • More LLVM deps adjustments (i.e. build clang with GPU offload capabilities).

June 28 - July 02

  • GPU fluid development:
    • Spent most of my time running tests and comparing CPU / GPU performance.
    • Created some slides to document the findings and review what's working / not working so far (will be published later here).
  • Next week:
    • Same goals as last week, as I didn't spend a lot of time on the code.
    • So again, LLVM deps adjustments are high on my to-do list.

July 05 - 09

  • GPU fluid development:
    • Deps adjustments for GPU offloading: Worked on the builder in general, adding the same offloading options that I used when working on the Mantaflow repository.
    • I am able to build with the new options; however, the GPU does not kick in yet, and right now it's unclear why.
  • General:
    • The deps upgrades from D11748 fit in very well with my GPU tweaks. Started review for that.
  • Next week:
    • Get the GPU solver working inside Blender (can be prototype level). That's the highest priority for next week.
    • Catch up on topics in the tracker, especially finish reviewing D11748.

July 12 - 16

  • GPU fluid development:
    • To get OpenMP offloading running inside Blender, code with offload regions must not sit in a statically linked library (per the LLVM FAQ).
    • Therefore the Mantaflow linking had to be changed - it is now built as a shared library.
    • This change finally made it possible to run OpenMP offloading code in Blender (the GPU kicks in).
    • So far everything else works fine with fluid code in a shared library. I'll have to watch out for side effects though.
  • Next week:
    • GPU grid memory management: While running my first GPU tests in Blender, I encountered some crashes.
    • Will investigate what is wrong with GPU memory deallocation.

July 19 - 23

  • GPU fluid development:
    • Solved the memory deallocation problem from last week. In the end, it was the velocity grid that wasn't freed correctly and caused the hang-up (the advection step had silently swapped a pointer ...); see the sketch below.
    • The GPU solver code now works both inside and outside of Blender, which is good.
    • The deps integration still needs to become a lot nicer, but for now it works as is.
  • Next week:
    • Cleanup GPU code for a v1 release.
    • Help with studio-related issues.
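
For illustration, here is an assumed reconstruction of that kind of pitfall (not the actual Mantaflow code): OpenMP keys device mappings by host address, so a silent host-side pointer swap between mapping and unmapping leaves the cleanup pointing at the wrong buffer.

```cpp
// Assumed reconstruction of the deallocation pitfall (illustrative, not
// the actual Mantaflow code). Device mappings are keyed by host address,
// so swapping pointers after mapping makes the cleanup miss its target.
#include <utility>
#include <vector>

int main() {
  const int n = 1 << 20;
  std::vector<float> vel(n, 0.0f), tmp(n, 0.0f);
  float *v = vel.data(), *t = tmp.data();

  // Map only the velocity buffer to the device.
  #pragma omp target enter data map(to: v[0:n])

  std::swap(v, t);  // e.g. an advection step silently swapping in its result

  // BUG: 'v' now points at the never-mapped tmp buffer, so this exit does
  // not release the original device allocation - that mapping lingers and
  // leaks, and the stale bookkeeping can cause crashes or hangs later on.
  #pragma omp target exit data map(delete: v[0:n])

  // Correct: release by the address that was actually mapped ('t' holds
  // the originally mapped pointer after the swap).
  #pragma omp target exit data map(delete: t[0:n])
  return 0;
}
```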

July 26 - 30

  • Upcoming