

Core Support of Virtual Reality Headsets through OpenXR (Initial Proposal)


Synopsis

While virtual reality has reached mainstream adoption, it is still not possible to use this technology directly within Blender. However, there is great potential in the new workflows that virtual, augmented and mixed reality (VR, AR, MR - or combined: XR) allow.
The recently released provisional OpenXR specification by the Khronos Group opens the door to cross-platform, feature-rich VR/AR experiences in Blender.
This project aims to introduce the core features necessary to build immersive experiences in the Blender viewport through OpenXR. Namely, this concerns rendering to and interaction with head-mounted displays (HMDs). A stretch goal would be introducing basic support for handheld controllers too.
By exposing some of the added XR internals in the Python API, it should be possible to create user interfaces for immersive usage - with the help of the new gizmo system. So while support for this technology is implemented in the source code, XR UIs can be defined by add-ons.

Benefits

Reduced iteration time for XR content creation

Artists working on XR content will be able to get an immediate preview of their work within Blender. Such direct feedback removes or reduces the need to export content to external engines for testing, greatly shortening iteration times.

New content creation workflows

Immersive technologies open new ways to interact with tools and content: Handheld controllers remove the restriction of a 2D space that mice have; HMDs allow inspecting content as if it stood in front of you, with the depth of the real world.

While there are certainly (many) workflows where traditional mouse and keyboard interaction is superior, some may greatly benefit from immersive interaction. Drawing in 3D space (Grease Pencil), sculpting and painting come to mind, but there are use-cases for animation, lighting or set design as well.

Excellent device feature support through official runtimes

OpenXR allows Blender to use the officially supported runtimes (think of these as drivers), even if they are non-free software. These runtimes usually provide the best implementations of device features: they know the physical device properties, can access NDA-protected technology (of which there is plenty in XR), usually support all features the device supports, perform device-specific optimizations, ...
Despite the different runtimes, OpenXR provides a common, cross-platform interface.
Of course, there are very valid reasons to stick to fully FOSS runtimes as well (e.g. Monado). The point is that both work and that users are given both options.

Future proof, advanced and performance oriented device interfacing

OpenXR development was supported by a huge number of organizations and a lot of care was put into the design by very experienced people. The architecture was designed to be highly extensible. Features were implemented with the latest advances in technology and research in mind (for example pose prediction and reprojection to compensate for drawing latencies). As OpenXR and runtimes evolve, Blender will be able to benefit from these developments, often without much work on its end.

Support for multiple runtimes

There are reasons for needing more than one driver or runtime, e.g. because one supports features or devices that another does not. Without OpenXR, we’d have to add support for these different drivers/runtimes in the source code ourselves and add our own abstractions for common access. If a new runtime comes up, a Blender developer has to add support for it.
With OpenXR, runtimes become more like plug-ins. If a runtime implements the specification, it can be used by Blender. Likely without changes on the Blender side (although this depends a lot on the success of OpenXR). It should even be possible to change the OpenXR runtime while Blender is running.

XR UIs defined in add-ons, not source code

Interaction models for immersive worlds are a rather recent topic with lots of room for innovation. Also, having different UIs for different users, workflows and contexts definitely seems like a good thing to experiment with. So rather than implementing one static UI approach in the source code, it would be nice to let add-ons take over this part. With the new gizmo system, there is a way to draw potentially interactive 3D UI elements in the viewport. By extending the Python API to give access to some session information, implementing rich XR UIs with add-ons seems very much doable.

Base to implement more rich immersive experiences

With the core features in place, further features for immersive experiences can be implemented as smaller, more concise projects. So more active development of features for immersive technologies can be expected.
Given that OpenXR also incorporates mixed reality, even technologies like the Microsoft HoloLens should be within reach.

Enriching the Free and Open-Source XR ecosystem

As most HMD runtimes/drivers with high-profile feature support are non-free, the free and open-source (FOSS) ecosystem has issues with immersive technology support. Many runtimes aren’t cross-platform, causing a lack of support on the Linux (and macOS) platform. With Blender, an important player would provide access to the immersive world.

Deliverables

OpenXR runtime support through OpenXR loader

Getting Blender hooked up with an OpenXR loader, which in turn manages the runtime, is by itself a useful achievement - even if only a technical one. Because once that works, features like dynamic runtime linking, custom OpenXR layers and property querying become available. These features are useful for debugging, testing or, more broadly speaking, further development.

Minimum viable product: Rendering based on HMD input

The minimum viable product (MVP) is going to be a version of Blender in which users can render the viewport to an HMD and use rotational and positional tracking of it - powered by an OpenXR runtime.

Stretch goal: Initial support for handheld controllers

Handheld controllers are almost vital for XR experiences with good usability. Having them working would add a lot of value to this project. It would be too much of a promise to include this as a project deliverable though, since adding support for them may be a rather complex task. Still, if there is time left once the other deliverables are done, it is well spent on adding (basic) controller support.
OpenXR further provides a way to receive the geometry of controllers, so that they can be rendered in the HMD session. This would be nice to have supported too.

Python API additions for XR UI add-ons

The Python API will need some additions to support XR UIs: querying session information, operators to manage sessions, new event types to trigger custom operators, etc. A new API for events from controllers and HMDs might be needed, as OpenXR follows a rather atypical, but in this case well suited, approach to these.

Proof of concept XR UI add-on

To test how doable an XR UI is with the added features and the current Python API, a proof of concept VR UI should be implemented via an add-on. With this, issues and weak points can be found, documented and/or ideally addressed.
If support for handheld controllers is added, methods to trigger actions from them will especially need to be covered.

End-user documentation on HMD usage

As the project should introduce basic HMD support, end-user documentation on how to use this should be created. This would include guides for choosing an OpenXR runtime, managing XR sessions and the settings available in Blender for it. However, since most settings are handled by the OpenXR runtime and the default UI for launching XR sessions will be small, not much end-user documentation will be needed.

Technical documentation for XR add-on authors

For developers who want to create XR UIs, a basic suggested way to approach this should be described. A Python template may also be useful here.

Project Details

There are multiple aspects to look at for usable OpenXR driven headset support.

Interfacing with OpenXR runtimes

A loader layer may be used for OpenXR applications to interface with runtimes. It handles runtime symbol linking and is required for user-specified runtimes. The OpenXR SDK project[1] by the Khronos Group provides an OpenXR loader that is Apache 2 licensed. Blender can link to this as a separate library.
OpenXR also allows inserting custom layers in-between the application and the OpenXR runtime. This is useful for various purposes, like debugging, logging and benchmarking, but it’s not certain how helpful they are for the core support. So using these is not planned for this project.
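
To give an idea of what interfacing through the loader looks like in practice, below is a minimal C sketch of creating an OpenXR instance and querying runtime properties. It is a sketch only: error handling and graphics-extension setup are omitted, and exact call names may differ slightly between the provisional and final specification.

  #include <stdio.h>
  #include <string.h>
  #include <openxr/openxr.h>

  /* Minimal sketch: create an OpenXR instance through the loader, which
   * resolves the active runtime and forwards all API calls to it. */
  static XrInstance create_xr_instance(void)
  {
    XrInstanceCreateInfo create_info = {XR_TYPE_INSTANCE_CREATE_INFO};
    strncpy(create_info.applicationInfo.applicationName, "Blender",
            XR_MAX_APPLICATION_NAME_SIZE - 1);
    create_info.applicationInfo.applicationVersion = 1;
    create_info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&create_info, &instance))) {
      return XR_NULL_HANDLE;
    }

    /* Query some runtime properties - mainly useful for debugging output. */
    XrInstanceProperties properties = {XR_TYPE_INSTANCE_PROPERTIES};
    xrGetInstanceProperties(instance, &properties);
    printf("Connected to OpenXR runtime: %s\n", properties.runtimeName);

    return instance;
  }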

Rendering based on HMD input with OpenXR

Once users want to use an HMD, they have to start a (global) session. Likely through a simple button in the UI, probably in the Window Top-bar menu. This will make the viewport render in the HMD if direct mode is supported. Direct mode is a way to render directly to the HMD screen, but that is handled by the OpenXR runtime. If no direct mode is available, the HMD has to be used as a normal computer screen. Starting the session will then open a separate window which users can put onto the HMD in fullscreen.
To render the viewport to HMDs via OpenXR, it has to draw into a so-called swapchain. This is basically an abstraction to manage texture buffers which will be sent to the devices via the runtime. The runtime will also apply compositing effects, such as lens correction shaders that are required on some HMDs.
For each eye, the draw-loop needs to request some information from the OpenXR runtime (most importantly view and projection matrices) and draw the viewport with them. Although this hasn’t been investigated much, the new draw-manager design of 2.8 should be helpful here.
One issue is that some tools use these matrices for projection. We don’t want those to become messed up by HMD rendering, so special care has to be taken for that.
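
As a rough illustration of this per-eye loop, here is a hedged C sketch built on the OpenXR frame API. It assumes a running session with a stereo view configuration; swapchain image handling and the actual viewport drawing are only indicated in comments.

  #include <stdint.h>
  #include <openxr/openxr.h>

  /* Sketch of one frame of HMD rendering: wait for the runtime's display
   * timing, query per-eye poses/projections, draw, and submit the frame. */
  static void draw_frame_to_hmd(XrSession session, XrSpace reference_space)
  {
    XrFrameState frame_state = {XR_TYPE_FRAME_STATE};
    xrWaitFrame(session, NULL, &frame_state);
    xrBeginFrame(session, NULL);

    /* Ask the runtime for the predicted pose and field of view of each eye. */
    XrViewLocateInfo locate_info = {XR_TYPE_VIEW_LOCATE_INFO};
    locate_info.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
    locate_info.displayTime = frame_state.predictedDisplayTime;
    locate_info.space = reference_space;

    XrView views[2] = {{XR_TYPE_VIEW}, {XR_TYPE_VIEW}};
    XrViewState view_state = {XR_TYPE_VIEW_STATE};
    uint32_t view_count = 0;
    xrLocateViews(session, &locate_info, &view_state, 2, &view_count, views);

    for (uint32_t i = 0; i < view_count; i++) {
      /* views[i].pose and views[i].fov would be converted into Blender view
       * and projection matrices here, and the viewport drawn into a texture
       * acquired from the view's swapchain (xrAcquireSwapchainImage /
       * xrWaitSwapchainImage / xrReleaseSwapchainImage - omitted). */
    }

    /* Hand the frame back to the runtime for compositing (lens distortion
     * correction, reprojection, ...). Projection layers are omitted here. */
    XrFrameEndInfo end_info = {XR_TYPE_FRAME_END_INFO};
    end_info.displayTime = frame_state.predictedDisplayTime;
    end_info.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
    end_info.layerCount = 0;
    xrEndFrame(session, &end_info);
  }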

Note that there are optimizations possible to avoid having to draw the entire viewport twice. These are not in scope of this project though.

OpenXR Event forwarding

Events are an important way of communication in OpenXR. The runtime may send these to the application, which needs to queue and process them one by one. The events sent through OpenXR are more abstract than the ones Blender has. The design tries to be future-proof, customizable and general enough even for vastly different devices. Note that this especially matters for controllers: e.g. the OpenXR runtime may send a “teleport” event, which may be triggered differently for different runtimes, devices and user settings.
This approach is well reasoned and should be respected by our implementation. For basic event input we could just make our event system convert the OpenXR events into multiple event types of our own. But maybe a single generic XR event type with special treatment is more suited. It would store additional event information, similar to OpenXR. It may make sense to have a little API to manage these special events as part of the XR API (see section #XR API).
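
As a small illustration, an event pump along these lines could run once per main-loop iteration. This is a minimal C sketch assuming the instance created through the loader; the translation into Blender’s own event types is only hinted at in comments.

  #include <openxr/openxr.h>

  /* Sketch: drain the OpenXR event queue and decide what to forward to
   * Blender's window-manager event system. */
  static void poll_xr_events(XrInstance instance)
  {
    XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};

    /* xrPollEvent returns XR_EVENT_UNAVAILABLE once the queue is empty. */
    while (xrPollEvent(instance, &event) == XR_SUCCESS) {
      switch (event.type) {
        case XR_TYPE_EVENT_DATA_SESSION_STATE_CHANGED: {
          const XrEventDataSessionStateChanged *state_event =
              (const XrEventDataSessionStateChanged *)&event;
          /* E.g. start/stop drawing when the session becomes (in)visible. */
          (void)state_event;
          break;
        }
        case XR_TYPE_EVENT_DATA_INSTANCE_LOSS_PENDING:
          /* The runtime is going away - the session has to be shut down. */
          break;
        default:
          /* Other events could be wrapped into a generic XR event type. */
          break;
      }
      /* The buffer has to be re-initialized before polling again. */
      event.type = XR_TYPE_EVENT_DATA_BUFFER;
      event.next = NULL;
    }
  }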

Handheld Controllers

Controllers bring a few special challenges:

  • Picking (finding what controllers point at)
    We need picking support for controllers, e.g. for selecting objects. One way to do this is via raycasting, which we already do for some selection methods in Blender. For objects with no real geometry (e.g. lights, cameras, empties), OpenGL-based selection may work a bit better. Raycasting should be sufficient for what we need though; if needed it can be improved.

  • Event input
    Controller support seems to have been an important use-case for the design of OpenXR’s input and event management. It’s important that Blender’s way of forwarding OpenXR events is similarly flexible (a rough sketch of the action-based input model follows after this list).

  • Visualizing controllers in XR views
    For a good interaction experience, users will want to see a visualization of their controllers in the HMD view. If controllers are going to be supported, this is something that should be added too.

    OpenXR provides geometry of controllers via glTF. Blender 2.80 supports this via an add-on[2], so we may be able to use that.
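
To illustrate the action-based input model referred to under “Event input” above: instead of raw button events, an application declares abstract actions and lets the runtime (and possibly the user) decide how they are bound to physical controls. The following is a hedged C sketch, assuming the action API roughly as published in the OpenXR specification (which may differ in detail from the provisional version); names such as "blender_xr" and "select" are made up for illustration.

  #include <string.h>
  #include <openxr/openxr.h>

  /* Sketch: declare an abstract "select" action inside an action set. How it
   * is bound to a physical button is up to the runtime and its user settings.
   * Suggested bindings, attaching the set to the session and the per-frame
   * xrSyncActions / xrGetActionStateBoolean calls are omitted here. */
  static XrAction create_select_action(XrInstance instance, XrActionSet *r_action_set)
  {
    /* Actions are grouped into sets that can be enabled per context. */
    XrActionSetCreateInfo set_info = {XR_TYPE_ACTION_SET_CREATE_INFO};
    strcpy(set_info.actionSetName, "blender_xr");
    strcpy(set_info.localizedActionSetName, "Blender XR");
    xrCreateActionSet(instance, &set_info, r_action_set);

    XrActionCreateInfo action_info = {XR_TYPE_ACTION_CREATE_INFO};
    action_info.actionType = XR_ACTION_TYPE_BOOLEAN_INPUT;
    strcpy(action_info.actionName, "select");
    strcpy(action_info.localizedActionName, "Select");

    XrAction action = XR_NULL_HANDLE;
    xrCreateAction(*r_action_set, &action_info, &action);
    return action;
  }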

XR API

Much of the XR related logic should most likely be moved into a separate internal API; one that abstracts away details and ensures consistent (i.e. invariant preserving) states. For example logic to manage sessions, convert coordinate spaces, query settings from OpenXR, etc. Given that sessions should be global, that we interface with hardware and that an OpenXR runtime is a system configuration, the window-manager module seems like the right place for this.
Besides that, some utility functions in the draw-manager, editor and RNA modules may be useful.

It’s worth considering putting all OpenXR calls behind an abstraction right away. That way we don’t end up where we’ve been with OpenGL recently: with calls all over the place that have to be refactored to suit a new version of the API (or an alternative library even).
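
Purely as an illustration of such an abstraction, the window-manager level interface could boil down to something like the sketch below. All names here are hypothetical and made up for this proposal; the point is simply that the rest of Blender would never include openxr.h directly.

  #include <stdbool.h>

  /* Hypothetical internal XR API (window-manager module). The struct hides
   * all OpenXR handles; direct OpenXR calls live only in its implementation. */
  typedef struct wmXrContext wmXrContext;

  /* Session management. */
  bool WM_xr_session_start(wmXrContext *xr_context);
  void WM_xr_session_end(wmXrContext *xr_context);
  bool WM_xr_session_is_running(const wmXrContext *xr_context);

  /* Per-eye drawing data, already converted to Blender's conventions. */
  void WM_xr_view_matrices_get(const wmXrContext *xr_context, int view_index,
                               float r_viewmat[4][4], float r_winmat[4][4]);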

Related Projects

A few related projects are known; possible conflicts with them were investigated:

BlenderXR

BlenderXR is a temporary project, meant to be used until an OpenXR implementation is done[3]. It would actually benefit from a project introducing OpenXR support.

Monado

Collabora is working on a FOSS OpenXR runtime, based on OpenHMD. Such a project does not conflict with what’s proposed here, it actually fits together nicely. This has been confirmed by a developer working for Collabora.

Viewport HMD support through OpenHMD [4]

I was heavily involved with a project that intended to bring viewport HMD rendering capabilities to Blender. A number of issues caused the project to stall. With the 2.8 project, bigger changes would be required in the branch. Further, at this point it should probably be refactored to use OpenXR anyway. So rather than simply updating the branch, it makes more sense to start (mostly) from scratch with updated project goals - which is what this proposal is about.

Viewport HMD support through OpenVR [5]

Based on the work above, an OpenVR driven implementation of viewport HMD support was done too. Thus, it also has the issue of 2.8 and OpenXR incompatibility.

All these projects may provide some useful reference for this project. Also, developers from BlenderXR and Collabora have offered collaboration.

Testing Platforms

For a project like this, it’s important to have some decent hardware for testing. I own an Acer AH101 Windows Mixed Reality HMD, which, powered by the Windows 10 Mixed Reality OpenXR Developer Preview runtime, should be a fine platform for testing. Obviously more devices would be helpful, but since most device dependent work is done by the OpenXR runtime, this should be totally sufficient for core feature development. Further, this device has (rotation-only) support in OpenHMD, making it suited for testing on Linux as well.

Project Schedule

If things go reasonably well, the following schedule should be realistic - even including implementation time for controllers:

Week 1: Setup OpenXR loader library

Get Blender to work with the OpenXR loader provided by the OpenXR SDK. At this point, querying simple device and configuration information should work.

Week 2: Session management

Implement routines to manage OpenXR instances and sessions. This includes the ability to open XR views.

Weeks 3-4: Rendering based on HMD input

Get the MVP ready by implementing HMD input and rendering to the device, both via OpenXR. This is also the first milestone, right in time for phase 1 evaluation.

Week 5: Implement basic OpenXR event forwarding

Implement support for receiving, queuing and initially handling events. Python API changes and the suggested API for high level XR events are probably not needed at this point.

Week 6: Python API support

Extend the Python API to support foreseeable requirements for add-on defined XR UIs.

Week 7 - 8: Proof of concept XR UI through add-on

A proof of concept XR UI is a way to put previous deliverables to a test. Part of this work will likely involve fixing issues found.
A working proof of concept is the second milestone.

Week 9 - 11: Initial controller event input and drawing

If the previous deliverables can be completed as scheduled, around this time it should be possible to start working on controller support. Again, this is a stretch goal, not a hard requirement for the project's success.

Week 12: Buffer

Some buffer for the case of unforeseen problems or additionally needed iterations. If not needed, controller support can be improved here.

Week 13: Documentation

Last but not least, the documentation mentioned above needs to be written. The end of this period marks the end of the GSoC period and the third milestone.

Note: I’ll have a handful of exams and project deadlines during July. Hence, I’m planning to start work earlier than May 28 to compensate. I don’t expect too much distraction though. Besides that, I organized things so I’m able to dedicate enough time to GSoC.

Bio

I am currently on the final stretch of my Bachelor of Science in applied informatics at the University of Applied Sciences Kaiserslautern, Germany. On the topic of computer science, I like to think about different perspectives on “Clean Code”: architecture, languages and styles, programming paradigms, agile processes, etc. As a personal project, I’ve been working on a core UI widget toolkit for Blender, named bWidgets[6]. To some degree it is a playground for me; but I also think Blender will greatly benefit from it in the not too distant future.
Having worked on Blender for almost five years now, I know my way around the source code. My focus has thereby always been usability. If you have used 2.8, you have definitely seen and likely even used features I implemented.
It may also be worth repeating that I was one of the main developers of the OpenHMD based viewport HMD support branch. So I’ve already worked on a project with many similarities to the one proposed here.
I further…

  • …successfully participated in Google Summer of Code before[7],
  • …am co-owner of the Window-Manager module[8],
  • …am developer-member of the Blender UI-team[9],
  • …received multiple development grants from the Blender Foundation to work on various projects.

After having had a longer break from active Blender development, I’d be more than happy to do my comeback with this project.
