Everything Nodes

As Blender is growing, the amount of functionality is increasing quickly. It is very hard to find a place for all tools and their combinations in an ordinary user interface. The main goals of this project are:

  • Make it possible to add new tools without making the user interface less usable every time.
  • Allow more functionality to be used together to increase flexibility.
  • Introduce proceduralism in more areas of Blender.

Note: the goal is not to implement more node systems per se. However, node systems are a great way to achieve these goals. Therefore, they will play an important role in this project.

Since there is no way to design the whole system upfront, I decided to move in the right direction step by step. My plan is to start by integrating a generic function system. A function will probably be similar to a node group in Cycles, just more generic (i.e. it will be able to handle many more types of data). Additionally, different parts of Blender can become function users. A function user is able to use all functions that have a specific signature. A signature is the set of inputs and outputs of a function. This way, every tool in Blender that can use functions automatically becomes much more flexible.

Some examples:
  • A function that has a vector as input and another vector as output can be used as deform modifier.
  • A function that outputs a float can be used in a driver.
  • A function that outputs a mesh can be used for procedural modelling.
  • A function that outputs many matrices can be used to control instancing or an armature.
  • A function that takes in a 2D vector and outputs a color can be used for image generation.
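To illustrate how a function user could check signatures, here is a minimal Python sketch. The names `Signature`, `Function` and `DeformModifier` are invented for this illustration and are not Blender API:

```python
# Hypothetical sketch: Signature, Function and DeformModifier are invented
# names for this illustration, not Blender API.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Signature:
    inputs: tuple    # input type names
    outputs: tuple   # output type names

@dataclass
class Function:
    signature: Signature
    call: Callable   # pure: no side effects

# A Vector -> Vector function, e.g. built from a node tree.
stretch = Function(
    Signature(inputs=("Vector",), outputs=("Vector",)),
    lambda v: (v[0], v[1], v[2] * 2.0),
)

class DeformModifier:
    """A function user: it accepts any function with a Vector -> Vector signature."""
    required = Signature(inputs=("Vector",), outputs=("Vector",))

    def __init__(self, function):
        if function.signature != self.required:
            raise TypeError("function signature does not match")
        self.function = function

    def apply(self, vertices):
        # The function user performs the side effect: it produces the
        # new vertex positions that replace the old ones.
        return [self.function.call(v) for v in vertices]

deformed = DeformModifier(stretch).apply([(1.0, 2.0, 3.0)])  # [(1.0, 2.0, 6.0)]
```

A driver or a procedural modelling tool would work the same way, just with a different required signature.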

A function has to fulfill two main requirements to be widely useful:

  • It must not have side effects. That is, it must not modify any data that other systems could observe.
  • It must be possible to get all dependencies of a function without executing it.

The first requirement in particular feels very restrictive at first. However, it is the key ingredient that makes many tools able to accept functions, because they don't have to fear breaking something else. Not having side effects also makes composing many existing functions into bigger functions much easier. Obviously, at some point you do want side effects ("Software exists for its side effects"). This is where function users come into play. They execute functions and use the results to change other data, like the location of an object.
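The composition point can be sketched in a few lines of Python; all names here are illustrative:

```python
# Hypothetical sketch: composing pure functions is safe because none of
# them can affect anything outside of its own return value.

def compose(*functions):
    """Chain pure single-argument functions left to right into one function."""
    def composed(value):
        for function in functions:
            value = function(value)
        return value
    return composed

scale = lambda v: tuple(c * 2.0 for c in v)    # Vector -> Vector
offset = lambda v: tuple(c + 1.0 for c in v)   # Vector -> Vector

# The composition is itself a pure Vector -> Vector function, so any
# function user that accepts that signature can use it unchanged.
bigger = compose(scale, offset)
result = bigger((1.0, 2.0, 3.0))  # (3.0, 5.0, 7.0)
```

With side effects allowed, such composition would require knowing what each inner function touches; without them, chaining is always valid.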

Personally, I believe that just having this concept opens up a whole lot of new possibilities in Blender. My goal is to fit as much as possible into this simple concept. It will probably take some trial and error. At some point we might notice that certain more complex operations (e.g. some kinds of simulation) don't fit in very well. If that's the case, we can carefully add higher level concepts later on. Before we make that decision, we should have tried though.

There are many possible approaches to creating functions. Usually, functions are written in code. However, we want artists to be able to create their own functions without having to code. It should also be easy to share them like any other Blender asset. This is where node systems become important.

Over time I'll implement one or more new node tree types, whose goal is to provide a good user interface for creating custom functions. In theory, a single node system is enough. However, it might turn out to be more user friendly to provide multiple frontends, each specialised for a specific task. Note: not every frontend one can imagine is a node system. Other types of frontends could exist as well.

Executing Node Trees

A node tree itself does nothing. There has to be a system that executes the function described by a node tree. Since we don't want to restrict ourselves too much yet, multiple execution backends can be implemented. An execution backend takes a node tree and turns it into something that can be executed on some device (CPU/GPU/...?).

Multiple types of backends can be imagined:

  • Transpile node graph into LLVM IR and compile it down to machine code.
  • Transpile node graph into GLSL/... and compile it.
  • Interpret node graph without compiling it.
  • Interpret node graph step by step, driven by user interaction.

All backends have advantages and disadvantages. For example, using LLVM might give very good CPU performance, but compiling a large node graph might take too long. In that case, the node tree currently being edited could be interpreted instead of compiled.
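An interpreting backend is simple enough to sketch. The `Node` type and the tuple-of-outputs convention below are invented for this example and are not Blender's data model:

```python
# Hypothetical sketch of the "interpret node graph without compiling it"
# backend; Node and the graph layout are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: object                                   # the pure function this node represents
    inputs: list = field(default_factory=list)   # list of (source node, output index)

def evaluate(node, cache=None):
    """Interpret the graph by recursively evaluating upstream nodes.
    Results are cached so shared subgraphs are only computed once."""
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    args = [evaluate(source, cache)[index] for source, index in node.inputs]
    outputs = node.op(*args)   # every node returns a tuple of outputs
    cache[id(node)] = outputs
    return outputs

# A tiny graph computing (2 + 3) * (2 + 3); the Add node is a shared subgraph.
two = Node(op=lambda: (2.0,))
three = Node(op=lambda: (3.0,))
add = Node(op=lambda a, b: (a + b,), inputs=[(two, 0), (three, 0)])
multiply = Node(op=lambda a, b: (a * b,), inputs=[(add, 0), (add, 0)])

result = evaluate(multiply)  # (25.0,)
```

A compiling backend would instead walk the same graph once and emit LLVM IR or GLSL, trading startup time for execution speed.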

To avoid having to write the same backend for multiple frontends, an intermediate node representation should be used. I call it the data flow graph. The data flow graph looks just like a node tree but has some more restrictions:

  • Every input socket has to be connected to some output socket (i.e. there are exactly as many links as there are input sockets).
  • Every node represents a function (user level node trees can have special nodes like Input and Output).

A data flow graph does not store which inputs/outputs the final function will have. Instead, multiple functions could be generated based on the same data flow graph with different inputs and outputs. Note: not every node might support every backend. Having an intermediate representation makes implementing new frontends and backends easier.
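The first restriction (every input socket must be linked) can be checked directly on the graph. A minimal sketch, assuming a dict-based node encoding invented for this example:

```python
# Hypothetical sketch: verify that every input socket in a data flow graph
# is linked. The dict-based node encoding is invented for this example;
# None marks an unconnected input socket.

def validate(nodes):
    total_inputs = 0
    total_links = 0
    for node in nodes:
        for link in node["inputs"]:
            total_inputs += 1
            if link is None:
                raise ValueError(f"unlinked input on node {node['name']}")
            total_links += 1
    # Since every input must be linked, the link count equals the input count.
    assert total_links == total_inputs
    return True

graph = [
    {"name": "Value", "inputs": []},
    {"name": "Add", "inputs": ["Value.out", "Value.out"]},
]
ok = validate(graph)  # True
```

A frontend node tree with special Input/Output nodes would be lowered into this stricter form before any backend sees it.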

Some operations can work on the data flow graph directly:

  • Inserting nodes that do implicit conversions between types. Otherwise, every backend and frontend would have to do it itself.
  • Extracting the external dependencies of a function.
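The implicit conversion pass can be sketched like this. The conversion table and the encoding of a link as a (from type, to type) pair are simplifications invented for the example:

```python
# Hypothetical sketch of inserting implicit-conversion nodes into a data
# flow graph, so individual backends and frontends need not handle it.

CONVERSIONS = {
    ("Integer", "Float"): float,
    ("Float", "Integer"): int,
}

def insert_conversions(links):
    """links: list of (from_type, to_type) pairs. When the types differ and
    a known conversion exists, splice a conversion node into the link."""
    new_links = []
    for from_type, to_type in links:
        if from_type == to_type:
            new_links.append((from_type, to_type))
        elif (from_type, to_type) in CONVERSIONS:
            # Split the link: original output -> conversion node -> input.
            converter = f"Convert {from_type} to {to_type}"
            new_links.append((from_type, converter))
            new_links.append((converter, to_type))
        else:
            raise TypeError(f"no implicit conversion from {from_type} to {to_type}")
    return new_links

links = insert_conversions([("Float", "Float"), ("Integer", "Float")])
# The Integer -> Float link is split in two, with a conversion node in between.
```

Running this once on the data flow graph means every backend can assume that all linked sockets already have matching types.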