

Core Support of Virtual Reality Headsets through OpenXR

Final Report

On this page, the results from the GSoC project Core Support of Virtual Reality Headsets through OpenXR are listed and explained, with references to the work done.

Overview

[Image: Spring vr viewport.png]
[Image: Blender OpenXR Oculus Runtime.jpg]

All deliverables formulated in my project proposal have been completed. Note that the proposal also lists a stretch goal, which I postponed in favor of other work.
In short, we now have stable, well-performing VR viewport rendering based on the OpenXR specification. A little demo video is available here. Upon this foundation, we can now start building richer immersive experiences and uncovering the new workflows these technologies offer for 3D content creation.

In brief, the core features added are:

  • Well-performing VR rendering.
  • Support to dynamically connect to OpenXR runtimes.
  • VR session management as per OpenXR specification.
  • Add-on to hide VR features by default from the UI.
  • Basic OpenXR event management.
  • Carefully designed error handling strategy, cancelling the VR session with a useful user error message and no side-effects to the rest of Blender.
  • Compatibility with DirectX-only runtimes.
  • Debugging utilities.
  • Internal abstractions and APIs to support maintenance and further work.

All this work is submitted for review in D5537. If you want to test it, read the How to Test page.

The following section goes into a bit more detail. It is followed by a list of unfinished and reverted features, a section on further work to be done and, lastly, a collection of related links.



Features Added

The following lists the relevant features added, with a brief explanation of each. Unless explicitly mentioned otherwise, each feature listed here is included in patch D5537.


Well-Performing VR Rendering

The main deliverable of the project was getting well-performing VR viewport rendering to work with OpenXR-compatible devices (or rather, runtimes). There was no quantified goal, but I'd definitely consider this one reached. In scenes with a few hundred thousand vertices, we easily achieve frame rates of more than 100 FPS with solid shading on mid-range hardware. Note that this is the theoretically achievable frame rate; in practice, frame rates are clamped to suit the HMD refresh rate (typically 60 or 90 Hz).
Even scenes with a few million vertices have worked reasonably well in tests, that is, frame rates are high enough to avoid noticeable stuttering and the experience remains smooth.
Profiling has shown that the overhead of our VR-specific code (not the viewport drawing itself, nor external OpenXR code) is less than 1.5 ms, also measured on a mid-range machine.
While there are certainly things that could be improved further, especially in the viewport drawing, this gives a quite decent experience to build upon.


OpenXR loader

Layered interfacing with OpenXR runtimes

The OpenXR specification splits the workload between the application (in our case Blender) and an OpenXR runtime. This runtime implements or manages the device drivers and higher-level functionality. In the future, common VR platforms will likely include an OpenXR runtime; the Oculus platform already does. Windows Mixed Reality still requires additional software, but the plan is to integrate that into the standard Mixed Reality Portal app soon [1].

On the Blender side, we have to connect to these runtimes, i.e. we have to dynamically link to them. While we could write that linking code ourselves, the OpenXR-SDK contains an OpenXR loader, which links against the runtime currently set as active on the user's system. That way users can have multiple runtimes installed (e.g. for multiple HMDs from different vendors) and decide which one to use in an OS-wide manner.
I've experimented with different ways to use the OpenXR loader. The solution I went with adds the loader as an external dependency for compiling Blender. My code should feature all the bits needed to get the build system ready for that.
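To illustrate how this fits together (a minimal sketch, not the code under review), the application only creates an XrInstance; the loader then dispatches every xr* call to whichever runtime the user has set active:

    #include <string.h>
    #include <openxr/openxr.h>

    /* Minimal sketch: create an OpenXR instance through the loader.
     * The loader resolves all further xr* calls to the active runtime.
     * Error handling omitted for brevity. */
    XrInstance create_instance(void)
    {
      XrInstanceCreateInfo create_info = {XR_TYPE_INSTANCE_CREATE_INFO};
      strcpy(create_info.applicationInfo.applicationName, "Blender");
      create_info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

      XrInstance instance = XR_NULL_HANDLE;
      xrCreateInstance(&create_info, &instance);
      return instance;
    }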


OpenXR Extension and API-layer Management

The OpenXR specification can be extended with further functionality using extensions. These are provided by the OpenXR runtime. Further, OpenXR supports API-layers, which are layers injected in-between the application and the runtime. They may perform tasks like logging, validation or benchmarking. Layers themselves don't extend the OpenXR API, but they may include their own extensions that do. Note that layers can be installed in an OS-wide manner; the OpenXR loader has the functionality to detect and use them if wanted.

I've implemented the needed bits to manage extensions and API-layers:

  • Query available extensions/layers.
  • Suggest extensions/layers to enable.
  • Actually enable the suggested extensions/layers that are available.

With these in place, it should be simple to enable and use new extensions and API-layers in Blender.
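For illustration, here is a rough sketch (not the patch code) of the query step, using OpenXR's usual two-call idiom; API-layers are enumerated the same way through xrEnumerateApiLayerProperties():

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <openxr/openxr.h>

    /* Sketch: list the extensions the active runtime offers. The first call
     * queries the count, the second call fills the array. */
    void print_available_extensions(void)
    {
      uint32_t count = 0;
      xrEnumerateInstanceExtensionProperties(NULL, 0, &count, NULL);

      XrExtensionProperties *props = calloc(count, sizeof(*props));
      for (uint32_t i = 0; i < count; i++) {
        props[i].type = XR_TYPE_EXTENSION_PROPERTIES;
      }
      xrEnumerateInstanceExtensionProperties(NULL, count, &count, props);

      for (uint32_t i = 0; i < count; i++) {
        printf("Extension: %s\n", props[i].extensionName);
      }
      free(props);
    }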


Failure Safety

It was a high priority of the project to make the VR foundations rock solid. That includes failing gracefully when errors occur. All OpenXR calls should be checked for success, and our code has to support error reporting as well. I've designed and implemented an error handling strategy based on the following requirements:

  • If an error occurs, cleanly exit the VR session (or destroy the entire context), causing no resource leaks or side effects to the rest of Blender.
  • Show a useful error message to the user.
  • Don't impair readability of code too much with error handling.

As you can see, we don't hide errors from the user; we make sure they get feedback on what happened. For example, when the OpenXR runtime could not find an HMD, Blender reports: "Failed to get device information. Is a device plugged in?".

Unfortunately, there are rare crashes in runtimes or graphics drivers which we cannot handle.
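To sketch the pattern (hypothetical helper names, not the actual implementation in D5537), every OpenXR call goes through a check that turns a failing XrResult into a user-readable message plus a clean session teardown:

    #include <stdbool.h>
    #include <openxr/openxr.h>

    /* Hypothetical helpers, for illustration only. */
    void report_error(const char *user_message, XrResult result);
    void xr_session_exit_clean(void);

    /* Every OpenXR call is wrapped so a failure is never silently ignored:
     * the user gets a readable message and all XR resources are released. */
    #define CHECK_XR(call, error_msg) \
      do { \
        XrResult _res = (call); \
        if (XR_FAILED(_res)) { \
          report_error(error_msg, _res); \
          xr_session_exit_clean(); \
          return false; \
        } \
      } while (0)

A call site then reads e.g. CHECK_XR(xrCreateSession(instance, &info, &session), "Failed to create VR session.").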


DirectX Compatibility

Early experiments prior to the GSoC coding phase revealed an issue for the project: the Windows Mixed Reality OpenXR runtime, which drives common devices and was planned to be my main testing platform, only provides support for DirectX, not OpenGL. After some research, I concluded that the only reasonable way to still support the platform was to use an OpenGL extension for DirectX-OpenGL resource sharing. So early in the project I implemented support for DirectX 11 contexts and some OpenGL compatibility using the mentioned extension. While this took quite some effort, the outcome seems to work just fine on tested hardware and there is close to no performance penalty.
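As a simplified sketch of the idea (error handling, the wglGetProcAddress() loading of the function pointers and all setup/teardown omitted), the WGL_NV_DX_interop2 extension lets us register a DirectX texture as an OpenGL object and lock it while OpenGL writes into it:

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h> /* WGL_NV_DX_interop2 typedefs */

    /* Extension entry points, loaded once via wglGetProcAddress(). */
    static PFNWGLDXOPENDEVICENVPROC p_wglDXOpenDeviceNV;
    static PFNWGLDXREGISTEROBJECTNVPROC p_wglDXRegisterObjectNV;
    static PFNWGLDXLOCKOBJECTSNVPROC p_wglDXLockObjectsNV;
    static PFNWGLDXUNLOCKOBJECTSNVPROC p_wglDXUnlockObjectsNV;

    /* Sketch: register a D3D11 texture as an OpenGL texture (done once). */
    void setup_shared_texture(void *d3d11_device, void *d3d11_texture, GLuint gl_texture,
                              HANDLE *r_gl_dx_device, HANDLE *r_shared)
    {
      *r_gl_dx_device = p_wglDXOpenDeviceNV(d3d11_device);
      *r_shared = p_wglDXRegisterObjectNV(*r_gl_dx_device, d3d11_texture, gl_texture,
                                          GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);
    }

    /* Sketch: lock the shared object while OpenGL draws into it, then hand
     * the D3D11 side of the texture to the DirectX-only runtime. */
    void draw_into_shared_texture(HANDLE gl_dx_device, HANDLE shared)
    {
      p_wglDXLockObjectsNV(gl_dx_device, 1, &shared);
      /* ...blit the offscreen viewport render into the shared texture... */
      p_wglDXUnlockObjectsNV(gl_dx_device, 1, &shared);
    }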


Basic VR Viewer Add-on

The outcome of this project was supposed to be basic, as in limited and focused VR viewport rendering support. When users see a Toggle VR Session button, that might fool them into thinking there is full-blown VR support in Blender. We (my mentor and I) would still like to see my project merged into main Blender soon. To avoid making false promises, I decided to hide this button from the UI by default. The best way I saw for this was to wrap it into an Add-on that has to be enabled by the user first. The Add-on can explicitly state that support is limited and basically an early preview of ongoing work. Hence, I added a Basic VR Viewer Add-on to do just this.


VR Session Management

In the Blender UI, creating and closing a VR session is simple: users just press Window » Toggle VR Session.

[Image: Blender Toggle VR Session.png]

Under the hood, this session has a lifecycle with multiple states, defined by the OpenXR specification. Based on this, the session has to be managed carefully so that it either functions properly, or fails gracefully in the case of errors. It also has to function properly when a session exit is requested from Blender's side, when the runtime is closed, or when the runtime expects that it's about to lose the session. The specification gives quite clear guidance on how the application should handle these cases; I carefully implemented the suggestions as suited for our needs.
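As a rough sketch of the state handling the specification suggests (simplified, not the patch code), the application reacts to the runtime-driven state changes roughly like this:

    #include <openxr/openxr.h>

    /* Sketch: react to session state changes as recommended by the spec.
     * Error handling omitted for brevity. */
    void handle_session_state_change(XrSession session,
                                     const XrEventDataSessionStateChanged *event)
    {
      switch (event->state) {
        case XR_SESSION_STATE_READY: {
          /* Runtime is ready: begin the session and start the frame loop. */
          XrSessionBeginInfo begin_info = {XR_TYPE_SESSION_BEGIN_INFO};
          begin_info.primaryViewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
          xrBeginSession(session, &begin_info);
          break;
        }
        case XR_SESSION_STATE_STOPPING:
          /* Runtime asks us to stop rendering: end the session cleanly. */
          xrEndSession(session);
          break;
        case XR_SESSION_STATE_EXITING:
        case XR_SESSION_STATE_LOSS_PENDING:
          /* Session is going away for good: destroy it and free our resources. */
          xrDestroySession(session);
          break;
        default:
          break;
      }
    }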


Basic OpenXR Event Management

This is not a big feature, but the bits needed to query OpenXR events and to allow handling them were added. Note that these events control the flow of the session state as per the OpenXR specification.
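A minimal sketch of that event query (reusing the hypothetical handle_session_state_change() from the previous section) could look as follows; the event queue is drained once per main-loop iteration:

    #include <stddef.h>
    #include <openxr/openxr.h>

    void handle_session_state_change(XrSession session,
                                     const XrEventDataSessionStateChanged *event);

    /* Sketch: drain all pending OpenXR events and dispatch the ones we handle. */
    void poll_xr_events(XrInstance instance, XrSession session)
    {
      XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};

      while (xrPollEvent(instance, &event) == XR_SUCCESS) {
        if (event.type == XR_TYPE_EVENT_DATA_SESSION_STATE_CHANGED) {
          handle_session_state_change(session, (const XrEventDataSessionStateChanged *)&event);
        }
        /* xrPollEvent() requires the type to be reset before the next call. */
        event.type = XR_TYPE_EVENT_DATA_BUFFER;
        event.next = NULL;
      }
    }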


Debugging utilities

The field of XR is known to be difficult to debug. OpenXR tries to address that issue and includes several debugging features. Based on them, I enabled a couple of debugging utilities:

  • --debug-xr command line option enabling our own debug/information prints, OpenXR debug prints and the OpenXR core validation layer.
    The core validation layer is injected in-between the OpenXR loader we use and the runtime and it ensures usage follows the rules as defined by the specification.
  • --debug-xr-time command line option to print frame render times and FPS information.

I further made good use of asserts, tried to add useful comments and generally aimed to write rather clean code (although not too dogmatically).
All these things should really help debugging during further development.
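To give an idea of what --debug-xr hooks into (a simplified sketch, not the actual implementation), the XR_EXT_debug_utils extension lets us register a callback that receives messages from the runtime and from layers such as XR_APILAYER_LUNARG_core_validation:

    #include <stdio.h>
    #include <openxr/openxr.h>

    /* Sketch: print every debug message the runtime or a layer emits. */
    static XRAPI_ATTR XrBool32 XRAPI_CALL debug_print_cb(
        XrDebugUtilsMessageSeverityFlagsEXT severity,
        XrDebugUtilsMessageTypeFlagsEXT types,
        const XrDebugUtilsMessengerCallbackDataEXT *data,
        void *user_data)
    {
      printf("XR debug: %s: %s\n", data->functionName, data->message);
      return XR_FALSE; /* Don't abort the call that triggered the message. */
    }

    void setup_xr_debug_messenger(XrInstance instance)
    {
      /* Extension functions must be fetched through xrGetInstanceProcAddr(). */
      PFN_xrCreateDebugUtilsMessengerEXT create_messenger = NULL;
      xrGetInstanceProcAddr(instance, "xrCreateDebugUtilsMessengerEXT",
                            (PFN_xrVoidFunction *)&create_messenger);

      XrDebugUtilsMessengerCreateInfoEXT info = {XR_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT};
      info.messageSeverities = XR_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT |
                               XR_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT;
      info.messageTypes = XR_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT |
                          XR_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT;
      info.userCallback = debug_print_cb;

      XrDebugUtilsMessengerEXT messenger = XR_NULL_HANDLE;
      create_messenger(instance, &info, &messenger);
    }

This requires the XR_EXT_debug_utils extension to be enabled at instance creation.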


wmSurface API

In current Blender, all continuous drawing happens through the main loop and the wmWindow data structure. In this call-chain, unnecessary redraws are avoided.
For VR we need something different, however: continuous drawing that is not tied to a window, but happens entirely off-screen. The render results are then sent to the HMD via OpenXR. Further, we want to refresh this rendering all the time; there are no unnecessary redraws to avoid.
To achieve this, I've implemented a wmSurface data-structure and an interface for it. You could describe it as a non-window drawable container. It has its own OpenGL offscreen context and manages related graphics resources.
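As a hypothetical illustration of the concept (field names invented here, not the actual struct in the patch), a wmSurface boils down to an offscreen context plus callbacks that the main loop invokes every iteration:

    struct bContext;

    /* Hypothetical sketch of a non-window drawable container. */
    typedef struct wmSurface {
      struct wmSurface *next, *prev;

      void *ghost_context; /* Offscreen OpenGL (GHOST) context. */
      void *gpu_context;   /* GPU-module context bound while drawing. */

      /* Called unconditionally from the main loop; unlike windows,
       * there are no unnecessary redraws to skip. */
      void (*draw)(struct bContext *C);
      /* Frees surface-specific data (e.g. the VR session's resources). */
      void (*free_data)(struct wmSurface *surface);
    } wmSurface;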

Should we at some point decide to perform all rendering on a separate thread, this abstraction will probably be unnecessary. The drawing thread would have its own main loop then, through which drawing can be managed.


Abstraction for all OpenXR-specific Code

As the OpenXR specification evolves, we'll have to keep our usage of it updated. Parts of it may be deprecated, removed or changed. Or maybe there will be a different specification in the future that we want to support; OpenGL is an example of such things happening. Therefore, it is generally a good idea to keep such API usage behind higher-level interfaces. For this project, all OpenXR-specific code was put behind abstractions with a simple, high-level interface (see GHOST_IXrContext). The abstractions were carefully designed to be helpful rather than over-engineered.

The main reason to have these abstractions on the Ghost level (Blender's OS abstraction for window-management, events, devices, etc.) was the low-level, OS-specific graphics data access OpenXR requires. Reevaluating that decision, however, I think our OpenXR usage is too high-level to fit nicely into Ghost. We could port the code into a new module just for XR, but that will have to happen after GSoC (if at all - feedback from other developers is pending).

Note that code tying the Ghost-XR calls to Blender specific concepts (operators, viewport rendering, error reporting, etc.) is localized in a single window-manager file, wm_xr.c.
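Just to illustrate the shape of such a high-level interface (all names below are invented for this sketch and intentionally don't mirror the actual GHOST_IXrContext methods), the window-manager side only needs a handful of calls and never touches OpenXR types directly:

    #include <stdbool.h>

    struct XrContextCreateInfo;
    struct XrDrawViewInfo;

    /* Hypothetical, C-style sketch of a high-level XR interface. */
    typedef struct XrContextHandle XrContextHandle; /* Opaque; hides all OpenXR state. */

    XrContextHandle *xr_context_create(const struct XrContextCreateInfo *info);
    void xr_context_destroy(XrContextHandle *context);

    void xr_session_start(XrContextHandle *context);
    void xr_session_end(XrContextHandle *context);
    bool xr_session_is_running(const XrContextHandle *context);

    /* The drawing callback comes from the window-manager, so the XR code
     * stays unaware of Blender's viewport internals. */
    void xr_session_draw_views(XrContextHandle *context,
                               void (*draw_view_fn)(const struct XrDrawViewInfo *info,
                                                    void *customdata),
                               void *customdata);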


Make GPU_matrix usage thread safe

Previously, we had one global GPU_matrix stack, so the API was not thread-safe. My changes made the stack per GPUContext, effectively making it thread-local (the GPUContext is stored in thread-local storage). This feature was already committed to master (e6425aa2bf).
I added this while working on threaded drawing of the VR view; see the next section.
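A simplified illustration of the change (structure and names condensed for this sketch, not the actual GPU module code): the matrix stack lives in the GPUContext, and the active GPUContext is looked up through thread-local storage, so each thread works on its own stack:

    /* Sketch: per-context matrix state instead of one global stack. */
    typedef struct GPUMatrixState {
      float model_view_stack[32][4][4];
      float projection[4][4];
      int top;
    } GPUMatrixState;

    typedef struct GPUContext {
      GPUMatrixState matrix_state;
      /* ...other per-context GPU state... */
    } GPUContext;

    /* One active context per thread (C11 thread-local storage), so GPU_matrix
     * calls from the VR drawing thread and the main thread no longer race. */
    static _Thread_local GPUContext *active_context = NULL;

    static GPUMatrixState *gpu_matrix_state_get(void)
    {
      return &active_context->matrix_state;
    }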


Unfinished or Reverted Features

There are a few things that I started working on, especially advanced performance improvements, but decided to postpone until after GSoC. Then there were features I added during the project, but didn't end up needing, so I removed them again.

Reverted:

  • Drawing of non-OpenGL Windows (see 873223f38d).
  • Blitting OpenGL framebuffer contents into another framebuffer in a different OpenGL context (see 3441314e40, 2c77f2deac).
  • Resizing of the default framebuffer (see 61014e1dd9).
  • Bundling relevant OpenXR-SDK sources with the Blender source code (under extern/). This one took quite some time to get and keep working and I figured this would be way too hard to maintain in master (see 0dd3f3925b).

Unfinished:

  • Threaded drawing of the VR view on a dedicated thread (see the temp-vr-draw-thread branch).

The Way Forward

Preparation for Further Work

This project was never supposed to bring full-blown VR support. It aimed at paving the way for further, regular development of XR experiences. So some time was spent on making sure related work will continue after GSoC.
With basic VR rendering in place, the next step is to figure out the 'big picture' for XR in Blender. That is, finding use-cases, developing interaction paradigms, designing tools, thinking about different levels of customization, and so on.

To get work going, I've written a kickoff document. Blender design development should usually happen on developer.blender.org, so I've put most of the document's contents into task descriptions there:

  • Main/Parent Task (T68998)
  • VR design/usability (T68994)
  • VR input integration and mapping (T68995)


Further Work on the Code Side

One of the first things to work on for richer VR experiences is controller input and haptics support. It is the one big part of OpenXR that was not in scope of this project.
It would further be nice to not only have VR support, but to move towards AR and MR too. Thanks to OpenXR, this should not be too difficult to enable in principle. We may want to design advanced experiences for AR and MR though, for which the project complexity cannot be estimated at this point.

Besides that, many XR specific performance optimizations are possible. For example:

  • Finishing my work on a dedicated VR drawing thread (see temp-vr-draw-thread).
  • Single pass drawing
    Rather than calling the viewport draw-loop twice (for each eye), its code could be altered to support drawing for multiple eyes in one pass. While full single pass drawing, where each OpenGL draw call actually draws the geometry twice, is a bit ambitious, we can use the existing DRWView abstraction to automate executing these draw calls multiple times.
  • Threaded viewport drawing
    The draw-manager, which manages all viewport drawing, could be made CPU thread-safe, so we can reduce the time the GPU spends waiting on the CPU. While this would probably not give huge speedups, it is relatively low-hanging fruit.

I've written down some more related information here.

There is also lots of research and development on topics like foveated rendering, advanced audio playback for XR, multi-GPU rendering support etc. All of these would be very interesting to look at for Blender. These things are very specialized and may be difficult to implement though.

Lastly, I should add that VR rendering with material previews, or even Eevee, will need some more work. It's unlikely that they will ever be as fast as the solid shading mode, but we should at least be able to get smooth experiences with reasonably complex scenes. Unfortunately, I wasn't able to spend much time testing this; my efforts focused on removing the overall bottlenecks first. There is also the issue of reflective or other view-dependent materials: for ground truth rendering, they would have to be calculated twice, once for each eye.


A few "Thank you"s

There were a few people who've put time and effort into supporting the project. First of all I'd like to thank LazyDodo, our Windows maintainer, for his help! Not only did he provide regular test builds, he's also been available for testing, feedback, fixing build-system issues and anything else I needed from him, really. Then there's my mentor Dalai Felinto, who has always been available to talk things over, but at the same time trusted my expertise on the engineering side. I am grateful for that; it felt like just the right balance.
On a few occasions I've also bugged other developers with a question or two. And there are the developers who accepted my proposal. Thank you all! Lastly, there are a couple of users who gave feedback and/or tested the branch. I get that this project wasn't the most exciting one for users, as it was another one that merely laid foundations for further work, but you were there nonetheless. Your input was valuable, thank you too!

Related Links