Transform

Principles

Blender's transformation engine is based on the principle of generality.

Instead of operating on specific data types, transformations are applied to transform-specific abstract data structures that hold all the information each transformation needs to do its job on every Blender data type that supports it.

The transform constraint system is also generic: axis selection and basic spatial adjustment (correction for viewport and perspective) are defined once; each transformation function only has to define how to apply the constraint from the axis definitions.

The same applies to numerical input (which supports N-axis input).
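
As a minimal illustration of this "define once, apply everywhere" idea, the sketch below strips the disallowed components from a displacement vector; every mode would apply its motion through such a helper instead of re-implementing axis handling. The flag names loosely echo the CON_AXIS* constants in transform.h, but the helper itself is hypothetical and ignores the reorientation into constraint space that the real system also performs.

  enum { CON_AXIS0 = 1, CON_AXIS1 = 2, CON_AXIS2 = 4 };

  /* Defined once: zero out the components that the active constraint
   * does not allow. Each transform mode applies its displacement
   * through this instead of handling axes itself. */
  static void applyAxisConstraint(int flags, float vec[3])
  {
      if (!(flags & CON_AXIS0)) vec[0] = 0.0f;
      if (!(flags & CON_AXIS1)) vec[1] = 0.0f;
      if (!(flags & CON_AXIS2)) vec[2] = 0.0f;
  }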

Structure

Important structures are all defined in source/blender/include/transform.h.

Here's a quick overview (the structure definitions are usually well documented in the code, refer to that for full information).

  • TransInfo : Represents the transform engine. All global transformation flags and settings are stored in here. This structure is passed to each transform function call.
  • TransData : Represents a single transformation unit (a vertex, a CV, an object, ...)
  • TransDataExtension : Holds additional information tied to TransData units (in the case of objects, this holds rotation and size information)
  • TransData2D : Represents 2D transformation units. These are used when flushing the transformation back to the 2D data; the actual transformations are done on TransData.
  • NumInput : Contains data needed by the Numerical input system (pretty much standalone)
  • TransSnap : Contains data needed by the snapping system
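
As a rough illustration of how these pieces fit together, here is a much-simplified sketch of the core structures (TransData2D and TransSnap are omitted for brevity). The authoritative definitions, with many more fields, live in transform.h; the members shown here are an approximation, not the real layout.

  typedef struct NumInput {
      short idx_max;      /* how many axes the current mode accepts */
      short flag;         /* input restrictions (no zero, no negative, ...) */
      float val[3];       /* values entered by the user */
  } NumInput;

  typedef struct TransDataExtension {
      float rot[3];       /* initial rotation, kept for cancel/rollback */
      float size[3];      /* initial size, kept for cancel/rollback */
  } TransDataExtension;

  typedef struct TransData {
      float *loc;         /* pointer to the real location being transformed */
      float iloc[3];      /* initial location, used to cancel or recompute */
      float factor;       /* proportional editing (PET) falloff weight */
      struct TransDataExtension *ext; /* extra data (objects: rotation, size) */
  } TransData;

  typedef struct TransInfo {
      int mode;           /* current transform mode (grab, rotate, ...) */
      int state;          /* running / confirm / cancel */
      int redraw;         /* set when the mode function must run again */
      int total;          /* number of TransData units */
      TransData *data;    /* array of transformation units */
      float center[3];    /* transformation center */
      float snap[3];      /* "gears" step values for Ctrl / Shift+Ctrl */
      NumInput num;       /* numerical input state */
      int (*transform)(struct TransInfo *, short mval[2]); /* mode callback */
  } TransInfo;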

Call Flow

Public Calls

The public interface to the transformation engine is defined in source/blender/include/BIF_transform.h. It is kept purposefully very simple.

The principal calls are initTransform(mode, context), which sets up the engine (data conversion, center and proportional editing (PET) calculations, transformation-specific setup, ...), and Transform(), which runs the actual transformation.

For the initialization call, mode corresponds to the transformation to execute and context refers to specific restrictions that can be applied to the transformation, or that specify more precisely which data it needs to act on when the global Blender context isn't clear enough. (Constants for context flags and modes are also defined in this header.)

The same pair of init/action functions exists for the manipulator (initManipulator and ManipulatorTransform), with the difference that the init function doesn't have a context argument (for lack of use; it might be added later if needed).

Drawing callbacks are defined for the manipulators, the PET circle of influence, the constraint guidelines and the snapping target.

Setup calls for the constraint system exist and must be called after the transform initialization call and, obviously, before the action call.
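
Put together, a caller-side sequence looks roughly like the sketch below. The mode and context constants (TFM_TRANSLATION, CTX_NONE) come from BIF_transform.h; the constraint setup is only indicated as a comment because the exact call depends on the constraint being configured, and the surrounding function is purely illustrative.

  #include "BIF_transform.h"

  void start_grab(void)
  {
      /* conversion to TransData, center and PET setup, per-mode init */
      initTransform(TFM_TRANSLATION, CTX_NONE);

      /* optional: constraint setup calls go here, after init and
       * before the action call */

      /* interactive loop: runs until the user confirms or cancels */
      Transform();
  }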

Inner Workings

Here's a quick overview of the inner call flow when public calls are received.

Initialization

  1. Global initialization: save the initial mouse position and the 3D view orientation, clear global flags
  2. Per transform mode flag setup: set up restrictions on numerical input and constraints if needed
  3. TransData creation: depending on the Blender context, specific data is extracted from selection and converted to TransData for later manipulation.
    1. Select what data type to convert
    2. Extract all or only selected data (depending on PET tool)
    3. Convert to TransData: this involves saving the initial values for location and other specific properties (e.g. tilt for curves), and creating a TransDataExtension for data types that need it
  4. Initialize Snapping Engine
  5. Calculate PET factors per TransData unit if needed
  6. Calculate the transformation center depending on the center mode selected by the user
  7. Per transform mode initialization: this involves setting the function pointer corresponding to the actual transformation mode that will be applied, setting up "gears" step values (Ctrl / Shift), numerical input restrictions and other specific values
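
As an example of step 7, a per-mode init function typically just installs the mode callback and its step values. The sketch below follows that pattern using the simplified TransInfo from the Structure section (field names are approximate); ToSphere itself is sketched at the end of this page.

  extern int ToSphere(TransInfo *t, short mval[2]);

  static void initToSphere(TransInfo *t)
  {
      t->transform = ToSphere;          /* called from the main action loop */

      t->snap[0] = 0.0f;                /* free motion */
      t->snap[1] = 0.1f;                /* Ctrl "gears" step */
      t->snap[2] = t->snap[1] * 0.1f;   /* Shift+Ctrl fine step */

      t->num.idx_max = 0;               /* single-value numerical input */
  }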

Main Action Loop

This is basically a loop that polls for UI events, calls the transform mode function and cleans up once everything is done.

  1. Transformation loop
    1. Check mouse position, if it moved, raise the redraw flag
    2. If redraw flag is raised, call the transform mode function
    3. Poll for events and call the event treatment function for each: that function first tries to act on the event itself, then dispatches it to the numerical input and snapping systems for further treatment
  2. If state is CANCEL, roll back transformation using saved information in TransData and TransDataExtension (when present)
  3. Free transformation data structures
  4. Special post-transform Blender updates (base flags, keyframe inserts, Action inserts, ...)
  5. Push undo if needed
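
The loop itself boils down to the outline below. The helper names loosely follow the window-manager and transform code of the time (getmouseco_areawin, qtest, extern_qread, transformEvent, restoreTransObjects, postTrans), but they are only declared here as prototypes and the simplified TransInfo from the Structure section is assumed, so read this as an outline rather than working code.

  enum { TRANS_RUNNING = 1, TRANS_CONFIRM, TRANS_CANCEL };

  extern void getmouseco_areawin(short mval[2]);
  extern int  qtest(void);
  extern unsigned short extern_qread(short *val);
  extern void transformEvent(TransInfo *t, unsigned short event, short val);
  extern void restoreTransObjects(TransInfo *t);
  extern void postTrans(TransInfo *t);

  void transformLoop(TransInfo *t)
  {
      short mval[2], pval[2] = {-1, -1};

      while (t->state == TRANS_RUNNING) {
          getmouseco_areawin(mval);
          if (mval[0] != pval[0] || mval[1] != pval[1]) {
              t->redraw = 1;                  /* mouse moved */
              pval[0] = mval[0];
              pval[1] = mval[1];
          }
          if (t->redraw) {
              t->transform(t, mval);          /* run the mode function */
              t->redraw = 0;
          }
          while (qtest()) {                   /* drain pending UI events */
              short val;
              unsigned short event = extern_qread(&val);
              transformEvent(t, event, val);  /* also feeds numinput/snapping */
          }
      }

      if (t->state == TRANS_CANCEL)
          restoreTransObjects(t);             /* roll back from iloc / ext */

      postTrans(t);                           /* free TransData structures */
  }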

Transform Mode function

These are usually pretty simple: from the saved mouse pointer data, derive a transformation vector/factor/angle, then loop over all TransData units and apply it. Here's a typical implementation (ToSphere); others may be more or less complex. Some of the steps below are essentially mandatory for every mode.

  1. Use a generic input method to derive motion (InputHorizontalRatio)
  2. Snap value to the "grid/gears" steps
  3. Apply numerical input
  4. Create output string for header
  5. Apply transformation to all TransData units
  6. recalcData : Flush updates to Blender data when needed
  7. headerPrint : Send text to the header
  8. viewRedrawForce : Send redraw events to the proper screen area
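
Condensed into code, the ToSphere pattern looks roughly like the sketch below, again using the simplified structures from the Structure section. The helper prototypes mirror the step names above but are approximations; the real function in transform.c handles more cases (axis-limited variants, edge cases, proper string formatting).

  #include <math.h>
  #include <stdio.h>

  extern float InputHorizontalRatio(TransInfo *t, short mval[2]);
  extern void snapGrid(TransInfo *t, float *val);
  extern void applyNumInput(NumInput *n, float *val);
  extern void recalcData(TransInfo *t);
  extern void headerPrint(char *str);
  extern void viewRedrawForce(TransInfo *t);

  int ToSphere(TransInfo *t, short mval[2])
  {
      float ratio, radius = 0.0f;
      char str[64];
      int i;

      ratio = InputHorizontalRatio(t, mval);      /* 1. motion from mouse */
      snapGrid(t, &ratio);                        /* 2. Ctrl "gears" steps */
      applyNumInput(&t->num, &ratio);             /* 3. numerical input */
      sprintf(str, "To Sphere: %.4f", ratio);     /* 4. header string */

      /* average distance from the center: the target sphere radius */
      for (i = 0; i < t->total; i++) {
          TransData *td = &t->data[i];
          radius += sqrtf((td->iloc[0] - t->center[0]) * (td->iloc[0] - t->center[0]) +
                          (td->iloc[1] - t->center[1]) * (td->iloc[1] - t->center[1]) +
                          (td->iloc[2] - t->center[2]) * (td->iloc[2] - t->center[2]));
      }
      radius /= (float)t->total;

      /* 5. apply: blend each point toward the sphere, weighted by PET factor */
      for (i = 0; i < t->total; i++) {
          TransData *td = &t->data[i];
          float vec[3], len, fac, scale;
          int j;

          for (j = 0; j < 3; j++) vec[j] = td->iloc[j] - t->center[j];
          len = sqrtf(vec[0]*vec[0] + vec[1]*vec[1] + vec[2]*vec[2]);
          if (len == 0.0f) continue;

          fac = ratio * td->factor;
          scale = (1.0f - fac) + fac * (radius / len);
          for (j = 0; j < 3; j++)
              td->loc[j] = t->center[j] + vec[j] * scale;
      }

      recalcData(t);                              /* 6. flush to Blender data */
      headerPrint(str);                           /* 7. header text */
      viewRedrawForce(t);                         /* 8. redraw the view */
      return 1;
  }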