Note: This is an archived version of the Blender Developer Wiki (archived 2024). The current developer documentation is available on developer.blender.org/docs.


Core Support of Virtual Reality Headsets through OpenXR


'Updated Proposal Version'
This is the latest proposal, updated July 1, 2019. The initial one can be found here.


Synopsis

While virtual reality has reached mainstream adoption, it is still not possible to use this technology directly within Blender. However, there is great potential in the new workflows that virtual, augmented and mixed reality (VR, AR, MR - collectively XR) allow.
The recently released provisional OpenXR specification by the Khronos Group opens the door to cross-platform, feature-rich VR/AR experiences in Blender.
This project aims to introduce the core features necessary to build immersive experiences in the Blender viewport through OpenXR. Namely, this concerns rendering to and interaction with head-mounted displays (HMDs). A stretch goal would be introducing basic support for handheld controllers too.

Benefits

Reduced iteration time for XR content creation

Artists working on XR content will be able to get an immediate preview of their work within Blender. Such direct feedback removes or reduces the need to export content to external engines for testing, greatly reducing iteration times.

New content creation workflows

Immersive technologies open new ways to interact with tools and content: Handheld controllers remove the restriction to a 2D plane that mice have; HMDs allow inspecting content as if it stood in front of you, with the depth of the real world.

While there are certainly (many) workflows where traditional mouse and keyboard interaction is superior, some may greatly benefit from immersive interaction. Drawing in 3D space (Grease Pencil), sculpting and painting come to mind, but there are use-cases for animation, lighting or set design as well.

Excellent device feature support through official runtimes

OpenXR allows Blender to use the officially supported runtimes (think of those as drivers for now), even if they are non-free software. These runtimes usually provide the best implementations of device features: they know the physical device properties, can access NDA-protected technology (there is plenty of this in XR technology), usually support all features the device supports, perform device-specific optimizations, and so on.
Despite the different runtimes, OpenXR provides a common, cross-platform interface.
Of course, there are very valid reasons to stick to fully FOSS runtimes as well (e.g. Monado). The point is that both work and that users are given both options.

Future proof, advanced and performance oriented device interfacing

OpenXR development was supported by a huge number of organizations and a lot of care was put into the design by very experienced people. The architecture was designed to be highly extensible. Features were implemented with latest advances in technology and research in mind (for example pose prediction and reprojection to compensate drawing latencies). As OpenXR and runtimes evolve, Blender will be able to benefit from these developments, often without much work on its end.

Support for multiple runtimes

There are reasons for needing more than one driver or runtime, e.g. because they support different features or different devices. Without OpenXR, we’d have to add support for these different drivers/runtimes in the source code ourselves and add our own abstractions for common access. If a new runtime comes up, a Blender developer has to add support for it.
With OpenXR, runtimes become more like plug-ins. If a runtime implements the specification, it can be used by Blender. Likely without changes on the Blender side (although this depends a lot on the success of OpenXR). It should even be possible to change the OpenXR runtime while Blender is running.

Base to implement more rich immersive experiences

With the core features in place, further features for immersive experiences can be implemented as smaller, more concise projects. So more active development of features for immersive technologies can be expected.
Given that OpenXR also incorporates mixed reality, even technologies like the Microsoft HoloLens should be within reach.

Enriching the Free and Open-Source XR ecosystem

As most HMD runtimes/drivers with high-profile feature support are non-free, the free and open-source (FOSS) ecosystem has issues with immersive technology support. Many runtimes aren’t cross-platform, causing a lack of support on the Linux (and macOS) platform. With Blender, an important player would provide access to the immersive world.

Deliverables

OpenXR runtime support through OpenXR loader

Getting Blender hooked up with an OpenXR loader, which in turn manages the runtime, is by itself a useful achievement. The loader allows dynamically choosing an OpenXR runtime (e.g. in an OS-wide manner, based on available extensions, or explicitly via a preference setting). It may further provide additional API layers which sit in-between the application (Blender) and the OpenXR runtime. That way, debugging tools like validation or logging layers can be injected - highly useful for any further development.
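
For illustration, here is a minimal sketch (using the OpenXR C API from the OpenXR SDK headers; error handling omitted) of how the API layers exposed through the loader could be enumerated before creating an instance:

    #include <stdio.h>
    #include <stdlib.h>
    #include <openxr/openxr.h>

    /* Sketch: list the API layers the OpenXR loader exposes, so that e.g. a
     * validation or logging layer can be requested at instance creation time. */
    static void print_available_api_layers(void)
    {
      uint32_t count = 0;
      /* First call only queries the number of layers. */
      xrEnumerateApiLayerProperties(0, &count, NULL);

      XrApiLayerProperties *layers = calloc(count, sizeof(XrApiLayerProperties));
      for (uint32_t i = 0; i < count; i++) {
        layers[i].type = XR_TYPE_API_LAYER_PROPERTIES;
      }
      xrEnumerateApiLayerProperties(count, &count, layers);

      for (uint32_t i = 0; i < count; i++) {
        printf("OpenXR API layer: %s (%s)\n", layers[i].layerName, layers[i].description);
      }
      free(layers);
    }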

Minimum viable product: Rendering based on HMD input

The minimum viable product (MVP) is going to be a version of Blender in which users can render the viewport to an HMD and use rotational and positional tracking of it - powered by an OpenXR runtime.

Stable, well performing XR experience

Building upon the MVP, effort should be put into making XR sessions stable and well performing. OpenXR seeks to move performance-critical parts into the runtime, so for us the most important performance factor will be viewport drawing speed. Improving this is out of scope of this proposal, so there may not be much to optimize in this project.

Debugging Utilities

XR functionality is known to be tricky to debug. OpenXR provides a couple of ways to help with this. Especially the additional API layers that may be injected by the loader or runtime seem useful. So with the help of OpenXR, a number of debug utilities should be added. These may be activatable by a command line argument, e.g. '--debug-xr'.
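
For instance, messages from the runtime and validation layers could be forwarded to the console through the XR_EXT_debug_utils extension. The sketch below assumes the extension was enabled on the instance and that such a messenger would only be created when Blender runs with the proposed '--debug-xr' flag; error handling is mostly omitted:

    #include <stdio.h>
    #include <openxr/openxr.h>

    /* Sketch: forward OpenXR runtime/validation messages to the console. */
    static XrBool32 XRAPI_PTR xr_debug_print_cb(XrDebugUtilsMessageSeverityFlagsEXT severity,
                                                XrDebugUtilsMessageTypeFlagsEXT types,
                                                const XrDebugUtilsMessengerCallbackDataEXT *data,
                                                void *user_data)
    {
      (void)severity; (void)types; (void)user_data;
      fprintf(stderr, "[xr-debug] %s: %s\n", data->functionName, data->message);
      return XR_FALSE; /* Do not abort the OpenXR call that triggered the message. */
    }

    static XrResult create_xr_debug_messenger(XrInstance instance,
                                              XrDebugUtilsMessengerEXT *r_messenger)
    {
      /* Extension functions have to be fetched through the loader. */
      PFN_xrCreateDebugUtilsMessengerEXT create_fn = NULL;
      xrGetInstanceProcAddr(instance, "xrCreateDebugUtilsMessengerEXT",
                            (PFN_xrVoidFunction *)&create_fn);
      if (create_fn == NULL) {
        return XR_ERROR_EXTENSION_NOT_PRESENT;
      }

      XrDebugUtilsMessengerCreateInfoEXT info = {XR_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT};
      info.messageSeverities = XR_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT |
                               XR_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT;
      info.messageTypes = XR_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT |
                          XR_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT;
      info.userCallback = xr_debug_print_cb;

      return create_fn(instance, &info, r_messenger);
    }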

Stretch goal: Initial support for handheld controllers

Handheld controllers are almost vital for XR experiences with good usability, so having them working would add a lot of value to this project. It would be too much of a promise to include this as a project deliverable though, as adding support for them may be a rather complex task. Still, if time remains once the other deliverables are done, it is well spent on adding (basic) controller support.
OpenXR further provides a way to receive geometry of controllers, so that they can be rendered in the HMD session. This would be nice to have supported too.

End-user documentation on HMD usage

As the project should introduce basic HMD support, end-user documentation on how to use this should be created. This would include guides for choosing an OpenXR runtime, managing XR sessions and the settings available in Blender for it. However, since most settings are handled by the OpenXR runtime and the default UI for launching XR sessions will be small, not much end-user documentation will be needed.

Project Details

About OpenXR Runtimes

Layered interfacing with OpenXR runtimes

Previous sections often referred to OpenXR runtimes; however, for many readers it may not be clear what they are. Just like the OpenGL specification requires GPU drivers to implement the API, the OpenXR specification requires a runtime to implement its API.
One crucial job of the runtime is managing devices, i.e. they contain or at least interface with the device driver(s). They also handle compositing, which is an important step for XR imaging. For example the compositor applies lens distortion shaders and is responsible for direct mode rendering (directly sending rendered frames to the XR display). There are many more things the runtime does, but this should give an idea of what its role is.

Interfacing with OpenXR runtimes

An OpenXR loader may be used by OpenXR applications to connect to runtimes. It handles runtime symbol linking and is required for user specified runtimes. The OpenXR SDK project[1] by the Khronos Group created an OpenXR loader that is Apache 2 licensed. Blender can link to this as a separate library.
OpenXR also allows inserting custom layers in-between the application and the OpenXR runtime. This is useful for various purposes, like debugging, logging and benchmarking, but it’s not certain how helpful they are for the core support. So using these is not planned for this project.
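
To give an idea of what interfacing through the loader looks like in code, here is a minimal sketch of creating the XrInstance (the object all further OpenXR calls go through), requesting the OpenGL graphics binding extension. A real implementation would check extension availability first; error handling is omitted:

    #include <string.h>
    #include <openxr/openxr.h>

    /* Sketch: create the OpenXR instance through the loader, requesting the
     * OpenGL graphics binding extension ("XR_KHR_opengl_enable"). */
    static XrResult create_xr_instance(XrInstance *r_instance)
    {
      const char *extensions[] = {"XR_KHR_opengl_enable"};

      XrInstanceCreateInfo info = {XR_TYPE_INSTANCE_CREATE_INFO};
      strncpy(info.applicationInfo.applicationName, "Blender",
              XR_MAX_APPLICATION_NAME_SIZE - 1);
      info.applicationInfo.applicationVersion = 1;
      info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;
      info.enabledExtensionCount = 1;
      info.enabledExtensionNames = extensions;

      return xrCreateInstance(&info, r_instance);
    }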


Rendering based on HMD input with OpenXR

Once users want to use an HMD, they have to start a (global) session. Likely through a simple button in the UI, probably in the Window Top-bar menu. This will make the viewport render in the HMD if direct mode is supported. Direct mode is a way to render directly to the HMD screen, but that is handled by the OpenXR runtime. If no direct mode is available, the HMD has to be used as a normal computer screen. Starting the session will then open a separate window which users can put onto the HMD in fullscreen.
To render the viewport to HMDs via OpenXR, it has to draw into a so-called swapchain. This is basically an abstraction to manage texture buffers which will be sent to the devices via the runtime. The runtime will also apply compositing effects, such as lens correction shaders that are required on some HMDs.
For each eye, the draw-loop needs to request some information from the OpenXR runtime (most importantly view and projection matrices) and draw the viewport with them. Although this hasn’t been investigated much, the new draw-manager design of 2.8 should be helpful here.
One issue is that some tools use these matrices for projection. We don’t want those to become messed up by HMD rendering, so special care has to be taken for that.

Note that there are optimizations possible to avoid having to draw the entire viewport twice. These are not in scope of this project though.
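
To make the draw-loop description above more concrete, here is a condensed sketch of one iteration of an OpenXR frame loop. It assumes the session, a reference space and one swapchain per eye were created beforehand; draw_viewport_for_eye() is a hypothetical stand-in for the actual draw-manager call, and error handling is omitted:

    #include <openxr/openxr.h>

    /* Hypothetical: draw the viewport into the acquired swapchain image,
     * using the given eye pose and field of view. */
    extern void draw_viewport_for_eye(const XrPosef *pose, const XrFovf *fov,
                                      uint32_t swapchain_image_index);

    static void xr_draw_frame(XrSession session, XrSpace space,
                              XrSwapchain swapchains[2],
                              int32_t eye_width, int32_t eye_height)
    {
      XrFrameWaitInfo wait_info = {XR_TYPE_FRAME_WAIT_INFO};
      XrFrameState frame_state = {XR_TYPE_FRAME_STATE};
      xrWaitFrame(session, &wait_info, &frame_state);
      /* A complete implementation would also check frame_state.shouldRender. */

      XrFrameBeginInfo begin_info = {XR_TYPE_FRAME_BEGIN_INFO};
      xrBeginFrame(session, &begin_info);

      /* Query the predicted pose and field of view for both eyes. */
      XrView views[2] = {{XR_TYPE_VIEW}, {XR_TYPE_VIEW}};
      XrViewState view_state = {XR_TYPE_VIEW_STATE};
      XrViewLocateInfo locate_info = {XR_TYPE_VIEW_LOCATE_INFO};
      locate_info.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
      locate_info.displayTime = frame_state.predictedDisplayTime;
      locate_info.space = space;
      uint32_t view_count = 0;
      xrLocateViews(session, &locate_info, &view_state, 2, &view_count, views);

      XrCompositionLayerProjectionView proj_views[2];

      for (uint32_t eye = 0; eye < view_count; eye++) {
        uint32_t image_index;
        XrSwapchainImageAcquireInfo acquire_info = {XR_TYPE_SWAPCHAIN_IMAGE_ACQUIRE_INFO};
        xrAcquireSwapchainImage(swapchains[eye], &acquire_info, &image_index);

        XrSwapchainImageWaitInfo image_wait_info = {XR_TYPE_SWAPCHAIN_IMAGE_WAIT_INFO};
        image_wait_info.timeout = XR_INFINITE_DURATION;
        xrWaitSwapchainImage(swapchains[eye], &image_wait_info);

        /* Draw the viewport for this eye into the acquired swapchain image. */
        draw_viewport_for_eye(&views[eye].pose, &views[eye].fov, image_index);

        XrSwapchainImageReleaseInfo release_info = {XR_TYPE_SWAPCHAIN_IMAGE_RELEASE_INFO};
        xrReleaseSwapchainImage(swapchains[eye], &release_info);

        proj_views[eye] = (XrCompositionLayerProjectionView){
            XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW};
        proj_views[eye].pose = views[eye].pose;
        proj_views[eye].fov = views[eye].fov;
        proj_views[eye].subImage.swapchain = swapchains[eye];
        proj_views[eye].subImage.imageRect.extent.width = eye_width;
        proj_views[eye].subImage.imageRect.extent.height = eye_height;
      }

      /* Hand the rendered images to the runtime's compositor. */
      XrCompositionLayerProjection layer = {XR_TYPE_COMPOSITION_LAYER_PROJECTION};
      layer.space = space;
      layer.viewCount = view_count;
      layer.views = proj_views;

      const XrCompositionLayerBaseHeader *layers[] = {(const XrCompositionLayerBaseHeader *)&layer};
      XrFrameEndInfo end_info = {XR_TYPE_FRAME_END_INFO};
      end_info.displayTime = frame_state.predictedDisplayTime;
      end_info.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
      end_info.layerCount = 1;
      end_info.layers = layers;
      xrEndFrame(session, &end_info);
    }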

OpenXR Event forwarding

Events are an important way of communication in OpenXR. The runtime may send these to the application, which needs to queue and process them one by one. The events sent through OpenXR are more abstract than the ones Blender has. The design tries to be future-proof, customizable and general enough even for vastly different devices. Note that this especially matters for controllers: e.g. the OpenXR runtime may send a “teleport” event, which may be triggered differently for different runtimes, devices and user settings.
This approach is well reasoned and should be respected by our implementation. For basic event input, we could just make our event system convert the OpenXR events into multiple event types of our own. But maybe a single generic XR event type with special treatment is more suited. It would store additional event information, similar to OpenXR. It may make sense to have a little API to manage these special events as part of the XR API (see section #XR API).
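
A minimal sketch of the polling side of such event forwarding could look as follows; wm_xr_handle_session_state_change() is a hypothetical placeholder for whatever Blender-side handling ends up being implemented:

    #include <stdbool.h>
    #include <openxr/openxr.h>

    /* Hypothetical Blender-side handler for session state changes. */
    extern void wm_xr_handle_session_state_change(XrSessionState new_state);

    /* Sketch: drain the OpenXR event queue once per main-loop iteration. */
    static void xr_poll_events(XrInstance instance)
    {
      while (true) {
        XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
        if (xrPollEvent(instance, &event) != XR_SUCCESS) {
          break; /* XR_EVENT_UNAVAILABLE: the queue is empty (or an error occurred). */
        }

        switch (event.type) {
          case XR_TYPE_EVENT_DATA_SESSION_STATE_CHANGED: {
            const XrEventDataSessionStateChanged *state_event =
                (const XrEventDataSessionStateChanged *)&event;
            wm_xr_handle_session_state_change(state_event->state);
            break;
          }
          case XR_TYPE_EVENT_DATA_INSTANCE_LOSS_PENDING:
            /* The runtime is about to go away - the session would have to be ended. */
            break;
          default:
            break;
        }
      }
    }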

Handheld Controllers

Controllers bring a few special challenges:

  • Picking (finding what controllers point at)
    We need picking support for controllers, e.g. for selecting objects. One way to do this is via ray-casting, which we already do for some selection methods in Blender. For objects with no real geometry (e.g. lights, cameras, empties), OpenGL-based picking may work a bit better. Ray-casting should be sufficient for what we need though; if needed it can be improved.

  • Event input
    Controller support seems to have been an important use-case for the design of OpenXR event management. It’s important that Blender’s way of forwarding OpenXR events is similarly flexible (see the action sketch after this list).

  • Visualizing controllers in XR views
    For a good interaction experience, users will want to see a visualization of their controllers in the HMD view. If controllers are going to be supported, this is something that should be added too.

    OpenXR provides geometry of controllers via glTF. Blender 2.80 supports this via an add-on[2], so we may be able to use that.
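
To illustrate the abstraction level OpenXR works at for controllers (the sketch referred to in the event input item above), here is how a semantic “teleport” action could be created and queried with the OpenXR action API. The action names and the suggested binding to the generic Khronos simple controller profile are only examples, not a planned implementation; error handling is omitted:

    #include <stdbool.h>
    #include <string.h>
    #include <openxr/openxr.h>

    /* Sketch: create a "teleport" action, suggest a binding and attach it to the session. */
    static void xr_setup_teleport_action(XrInstance instance, XrSession session,
                                         XrActionSet *r_action_set, XrAction *r_action)
    {
      XrActionSetCreateInfo set_info = {XR_TYPE_ACTION_SET_CREATE_INFO};
      strcpy(set_info.actionSetName, "blender_xr");
      strcpy(set_info.localizedActionSetName, "Blender XR");
      xrCreateActionSet(instance, &set_info, r_action_set);

      XrActionCreateInfo action_info = {XR_TYPE_ACTION_CREATE_INFO};
      action_info.actionType = XR_ACTION_TYPE_BOOLEAN_INPUT;
      strcpy(action_info.actionName, "teleport");
      strcpy(action_info.localizedActionName, "Teleport");
      xrCreateAction(*r_action_set, &action_info, r_action);

      /* Suggest a binding for the generic "simple controller" interaction profile;
       * runtimes and users may remap this freely. */
      XrPath profile_path, select_path;
      xrStringToPath(instance, "/interaction_profiles/khr/simple_controller", &profile_path);
      xrStringToPath(instance, "/user/hand/right/input/select/click", &select_path);

      XrActionSuggestedBinding binding = {*r_action, select_path};
      XrInteractionProfileSuggestedBinding suggested = {XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
      suggested.interactionProfile = profile_path;
      suggested.countSuggestedBindings = 1;
      suggested.suggestedBindings = &binding;
      xrSuggestInteractionProfileBindings(instance, &suggested);

      XrSessionActionSetsAttachInfo attach_info = {XR_TYPE_SESSION_ACTION_SETS_ATTACH_INFO};
      attach_info.countActionSets = 1;
      attach_info.actionSets = r_action_set;
      xrAttachSessionActionSets(session, &attach_info);
    }

    /* Per frame: sync the action set and check whether "teleport" was triggered,
     * independently of which physical button the runtime mapped it to. */
    static bool xr_teleport_was_pressed(XrSession session, XrActionSet action_set, XrAction action)
    {
      XrActiveActionSet active_set = {action_set, XR_NULL_PATH};
      XrActionsSyncInfo sync_info = {XR_TYPE_ACTIONS_SYNC_INFO};
      sync_info.countActiveActionSets = 1;
      sync_info.activeActionSets = &active_set;
      xrSyncActions(session, &sync_info);

      XrActionStateGetInfo get_info = {XR_TYPE_ACTION_STATE_GET_INFO};
      get_info.action = action;
      XrActionStateBoolean state = {XR_TYPE_ACTION_STATE_BOOLEAN};
      xrGetActionStateBoolean(session, &get_info, &state);

      return state.changedSinceLastSync && state.currentState;
    }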

XR API

Much of the XR related logic should most likely be moved into a separate internal API; one that abstracts away details and ensures consistent (i.e. invariant preserving) states. For example, logic to manage sessions, convert coordinate spaces, query settings from OpenXR, etc. Given that sessions should be global, that we interface with hardware and that an OpenXR runtime is a system configuration, the window-manager module seems like the right place for this.
Besides that, some utility functions in the draw-manager, editor and RNA modules may be useful.

It’s worth considering putting all OpenXR calls behind an abstraction right away. That way we don’t end up where we’ve been with OpenGL recently: with calls all over the place that have to be refactored to suit a new version of the API (or an alternative library even).
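
Purely as a hypothetical illustration of such an abstraction boundary, an internal window-manager header could look roughly like the following. None of these names exist in Blender; they only show the kind of API that is meant:

    /* wm_xr.h - hypothetical sketch of a minimal internal XR API in the
     * window-manager module. These names are illustrative only. */

    #ifndef __WM_XR_H__
    #define __WM_XR_H__

    #include <stdbool.h>

    struct wmWindowManager;
    struct wmXrRuntimeData;

    /* Session management - keeps all OpenXR calls behind one boundary. */
    bool WM_xr_session_exists(const struct wmWindowManager *wm);
    bool WM_xr_session_start(struct wmWindowManager *wm);
    void WM_xr_session_end(struct wmWindowManager *wm);

    /* Per-frame hook called from the main loop: poll OpenXR events, sync state,
     * trigger drawing of the XR views. */
    void WM_xr_session_update(struct wmWindowManager *wm);

    /* Conversion between OpenXR reference-space coordinates and Blender's view space. */
    void WM_xr_pose_to_viewmat(const struct wmXrRuntimeData *data, float r_viewmat[4][4]);

    #endif /* __WM_XR_H__ */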

Related Projects

There are a few related projects known, so possible conflicts were investigated:

BlenderXR

BlenderXR is a temporary solution for the time until an OpenXR implementation is done[3]. It would actually benefit from a project introducing OpenXR support.

Monado

Collabora is working on a FOSS OpenXR runtime, based on OpenHMD. Such a project does not conflict with what’s proposed here, it actually fits together nicely. This has been confirmed by a developer working for Collabora.

Viewport HMD support through OpenHMD [4]

I was heavily involved in a project that intended to bring viewport HMD rendering capabilities to Blender. A number of issues caused the project to stall. With the 2.8 project, bigger changes to the branch would be required. Further, at this point it should probably be refactored to use OpenXR anyway. So rather than simply updating the branch, it makes more sense to start (mostly) from scratch with updated project goals - which is what this proposal is about.

Viewport HMD support through OpenVR [5]

Based on the above work, an OpenVR driven implementation of viewport HMD support was done too. Thus, it also has the issue of 2.8 and OpenXR incompatibility.

All of these projects may provide useful reference for this project. Also, developers from BlenderXR and Collabora have offered collaboration.

Testing Platforms

For a project like this, it’s important to have some decent hardware for testing. I own an Acer AH101 Windows Mixed Reality HMD which, if powered by the Windows 10 Mixed Reality OpenXR Developer Preview runtime, should be a fine platform for testing. Obviously more devices would be helpful, but since most device dependent work is done by the OpenXR runtime, this should be totally sufficient for core feature development. Further, this device has support in OpenHMD (rotational tracking only), making it suited for testing on Linux as well.

Project Schedule

NOTE: A decision was made to alter the proposal and drop the parts related to XR-UIs. The new schedule hasn't been discussed yet, so the version here is incomplete.

If things go reasonably well, the following schedule should be realistic - even with implementation time for controllers:

Week 1: Set up OpenXR loader library

Get Blender to work with the OpenXR loader provided by the OpenXR SDK. At this point, querying simple device and configuration information should work.
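
As a sketch of the kind of information that should be queryable at this point (assuming a created XrInstance; error handling omitted):

    #include <stdio.h>
    #include <openxr/openxr.h>

    /* Sketch: ask the runtime for the HMD system and print some of its properties. */
    static void print_xr_system_info(XrInstance instance)
    {
      XrSystemGetInfo get_info = {XR_TYPE_SYSTEM_GET_INFO};
      get_info.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;

      XrSystemId system_id;
      xrGetSystem(instance, &get_info, &system_id);

      XrSystemProperties props = {XR_TYPE_SYSTEM_PROPERTIES};
      xrGetSystemProperties(instance, system_id, &props);

      printf("XR system: %s (vendor 0x%x)\n", props.systemName, (unsigned int)props.vendorId);
      printf("  orientation tracking: %d, position tracking: %d\n",
             (int)props.trackingProperties.orientationTracking,
             (int)props.trackingProperties.positionTracking);
    }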

Week 2: Session management

Implement routines to manage OpenXR instances and sessions. This includes the ability to open XR views.
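
A rough sketch of what session creation and start could look like. The graphics binding struct is platform- and graphics-API-specific (e.g. XrGraphicsBindingOpenGLWin32KHR from openxr_platform.h on Windows), so it is passed in opaquely here; error handling is minimal:

    #include <openxr/openxr.h>

    /* Sketch: create and begin an OpenXR session for the queried system. */
    static XrResult start_xr_session(XrInstance instance, XrSystemId system_id,
                                     const void *graphics_binding, XrSession *r_session)
    {
      XrSessionCreateInfo create_info = {XR_TYPE_SESSION_CREATE_INFO};
      create_info.next = graphics_binding; /* Chained, platform-specific graphics binding struct. */
      create_info.systemId = system_id;

      XrResult result = xrCreateSession(instance, &create_info, r_session);
      if (XR_FAILED(result)) {
        return result;
      }

      /* In a complete implementation, xrBeginSession() would only be called once the
       * runtime signals XR_SESSION_STATE_READY through the event queue. */
      XrSessionBeginInfo begin_info = {XR_TYPE_SESSION_BEGIN_INFO};
      begin_info.primaryViewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
      return xrBeginSession(*r_session, &begin_info);
    }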

Week 3: DirectX Compatibility

While testing the Windows Mixed Reality OpenXR runtime, it was found that it only supports DirectX drawing. Blender on the other hand only supports OpenGL. Not supporting this runtime would be a big limitation for Blender's XR capabilities. Luckily, there is a graphics driver extension to share resources between OpenGL and DirectX. Using this, the Windows MR runtime should be accessible with very small overhead. It's unknown how stable the extension is nowadays, so additional workarounds may have to be added.
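
A rough sketch of how the WGL_NV_DX_interop2 extension could be used to let the OpenGL-based viewport draw into a D3D11 texture handed out by the Windows MR runtime. Only the core calls are shown; draw_gl() is a hypothetical callback, and cleanup (unregistering/closing the interop handles) as well as error handling are left out:

    #include <windows.h>
    #include <d3d11.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>

    /* Sketch: expose a D3D11 swapchain texture to OpenGL via WGL_NV_DX_interop2. */
    static PFNWGLDXOPENDEVICENVPROC wglDXOpenDeviceNV;
    static PFNWGLDXREGISTEROBJECTNVPROC wglDXRegisterObjectNV;
    static PFNWGLDXLOCKOBJECTSNVPROC wglDXLockObjectsNV;
    static PFNWGLDXUNLOCKOBJECTSNVPROC wglDXUnlockObjectsNV;

    static void dx_interop_load_functions(void)
    {
      wglDXOpenDeviceNV = (PFNWGLDXOPENDEVICENVPROC)wglGetProcAddress("wglDXOpenDeviceNV");
      wglDXRegisterObjectNV = (PFNWGLDXREGISTEROBJECTNVPROC)wglGetProcAddress("wglDXRegisterObjectNV");
      wglDXLockObjectsNV = (PFNWGLDXLOCKOBJECTSNVPROC)wglGetProcAddress("wglDXLockObjectsNV");
      wglDXUnlockObjectsNV = (PFNWGLDXUNLOCKOBJECTSNVPROC)wglGetProcAddress("wglDXUnlockObjectsNV");
    }

    /* Draw into a D3D11 texture through OpenGL; draw_gl() is a hypothetical callback. */
    static void draw_into_dx_texture(ID3D11Device *d3d_device, ID3D11Texture2D *dx_texture,
                                     GLuint gl_texture, void (*draw_gl)(GLuint target_texture))
    {
      HANDLE device_handle = wglDXOpenDeviceNV(d3d_device);
      HANDLE texture_handle = wglDXRegisterObjectNV(device_handle, dx_texture, gl_texture,
                                                    GL_TEXTURE_2D, WGL_ACCESS_WRITE_DISCARD_NV);

      /* While locked, the shared texture may be used like a regular GL texture. */
      wglDXLockObjectsNV(device_handle, 1, &texture_handle);
      draw_gl(gl_texture);
      wglDXUnlockObjectsNV(device_handle, 1, &texture_handle);
    }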

Weeks 4-5: Rendering based on HMD input

Get the MVP ready by implementing HMD input and rendering to the device, both via OpenXR. This is also the first milestone.

Week 6-7: Debugging tools

Add various debug capabilities, e.g. for validation, logging and benchmarking.

Week 8-9: Stability improvements

Ask the community for testing to ensure XR sessions run stably and perform well. Especially the DirectX compatibility needs to be tested on various hardware/drivers.

Week 10-11: Implement basic OpenXR event forwarding

Implement event reception, queuing and initial handling support. Python API changes and the suggested API for high level XR-events are probably not needed at this point.

Week 12: Buffer

Some buffer for the case of unforeseen problems or additionally needed iterations. If not needed, controller support can be improved here.

Week 13: Documentation

Last but not least, the documentation mentioned above needs to be written. The end of this period marks the end of the GSoC period and the third milestone.

Note: I’ll have a handful of exams and project deadlines during July. Hence, I’m planning to start work earlier than May 27 to compensate. I don’t expect too much distraction though. Besides that, I organized things so I’m able to dedicate enough time to GSoC.

Bio

I am currently on the final stretch of my Bachelor of Science in applied informatics at the University of Applied Sciences Kaiserslautern, Germany. On the topic of computer science, I like to think about different perspectives on “Clean Code”: architecture, languages and styles, programming paradigms, agile processes, etc. As a personal project, I’ve been working on a core UI widget toolkit for Blender, named bWidgets[6]. To some degree it is a playground for me; but I also think Blender will greatly benefit from it in the not too distant future.
Having worked in the Blender code for almost five years now, I know my way around the source code. My focus has always been on usability. If you have used 2.8, you have definitely seen and likely even used features I implemented.
It may also be worth repeating that I was one of the main developers of the OpenHMD based viewport HMD support branch. So I’ve already worked on a project with many similarities to the one proposed here.
I further…

  • …successfully participated in Google Summer of Code before[7],
  • …am co-owner of the Window-Manager module[8],
  • …am developer-member of the Blender UI-team[9],
  • …received multiple development grants from the Blender Foundation to work on various projects.

After having had a longer break from active Blender development, I’d be more than happy to do my comeback with this project.