User:Jon Denning/Projects/Retopology Mode

Overall
This will serve as the proposal design document for a Retopology Mode in Blender.

Note: as of the last edit, this document is still only a loosely organized dump of ideas and thoughts.

Note: this design focuses on (mostly) manual retopology work of organic and hard surface objects. Other types of objects have been considered, even though they might not be mentioned. This document will not focus on the more automated "remeshing" algorithms.

General Design
There has been much discussion, and many opinions, about how closely retopology relates to the current Edit Mode in Blender. This document proposes creating a separate mode for retopology. Below are a few reasons in support.


 * A separate mode will allow for a clean retopology workflow, similar to how Sculpt Mode (sculpting) is separate from Edit Mode (modeling), even though both modes involve manipulating the topology and shape of a mesh.


 * Some Edit Mode tools do not make sense in the context of retopology. The Edit Mode tools that do fit both contexts will have a different behavior (ex: vertices are projected or snapped to another surface during/after transforming).  The artist can switch into Edit Mode to perform Edit Mode tool operations.


 * Overlays can be customized for retopology work, so that the new low-res mesh can appear "correctly" over the high-res mesh. What counts as "correct" could adjust based on the retopology task (depending on what detail needs to be exposed).



Several Edit Mode tools (and perhaps some Sculpt Mode tools) work in the context of retopology if the operation is followed with a project or snap, but they tend to be very low-level operations. While we will expose as many Edit Mode tools as make sense, we propose implementing a set of problem-based tools that solve the problems typical of retopology work.

Short Term

 * Basic retopology mode. Done!
 * Improved overlays
 * Render edit mesh(es) over the original mesh(es). See Retopology Overlay.
 * Pre-visualized edits? These visualizations would allow the artist to see the effect of an edit before performing the edit.  However this does go against the current edit paradigm of most of Blender (knife and loop cut are notable exceptions), where edits are typically performed, tweaked with the operator parameter panel (which reapplies operator), then committed (implicitly by moving on to next operation) or cancelled (by undoing).
 * New gizmos for tools.
 * New snap/project transformation settings
 * Option to snap to non-edit objects (original meshes). Done!
 * Snap to surface in 3D. Done!

Long Term

 * Basic retopology tools
 * Geometry Pen. Quick creation of geometry.  Uses projection, works on visible surface and with what is selected (or within proximity?)
 * Loop Cut. Can either insert a loop cut into existing quad loop (undefined for other polygons) or add a new loop at intersection of a plane and surface with option to extend / extrude from a boundary edge loop.  Both insertions require topo traversal, where the former traverses the source mesh and latter traverses the target mesh.
 * Loop Slide.


 * Advanced retopology tools (reqs research)
 * Surface Slide. Rather than moving points in 3D space then snapping/projecting back to the surface, some problems might require 2D traversal along the surface for good behavior.  Surface Sliding is not easily solvable (if it is solvable at all) in the general case, as there are many tricky cases (even when the topology is manifold).  I have ideas on using projected lines (hack), geodesic distances (slow?), topological labeling, and others as possible solutions, but this will require some testing.

Terminology
A mesh can have either "good" or "poor" topology. The following items indicate poor topology:


 * Layout of geometry does not correlate / correspond well with features of surface or intended motion of surface. ex: face spans two disconnected objects; face crosses a bend line when animated.


 * Geometry density is much too high compared to size and number of features. ex: many planar vertices that aren't needed to capture the surface.


 * Geometry density is much too low compared to size and number of features. ex: large face crudely covers area with large number of features.


 * Faces with inappropriate number of edges. ex: 8-sided polygons on a game-ready mesh; 3-sided polygons (triangles) on a quad-only mesh.


 * Faces that are poorly shaped. ex: thin, very oblique triangles; concave quads; non-planar faces.


 * Two distinct verts / edges / faces that are co-located or overlapping.

Retopo Implementation Options
There are four main ways to approach retopology in Blender. They are listed below along with some pros and cons to each.

Edit Mode Retopology Tools
The first (and probably easiest to implement) option is to integrate retopology tools directly into Edit Mode. This option allows for faster switching between edit tools and retopo tools, as it involves only changing tools. With access to edit tools, any non-retopo-like edits (ex: moving vertices away from the surface) are automatically allowed. Also, any Edit Mode add-ons will still be available.

However, this option has several issues. The first two deal with UX. The tool set (along with keymaps) will grow even larger, which increases the effort of choosing the correct tool for the task. The retopo tools would need some indication of their different behavior (ex: color). Another issue deals with rendering: visualizing the retopo mesh correctly "over" the original mesh. Additional settings could be added to the visualization settings (see Retopology Overlay), but this complicates switching between "normal" Edit Mode visualization settings and "retopo" visualization settings.

Edit Mode Retopology Setting
The second option is to add settings to Edit Mode, similar to Sculpt Mode's dyntopo option. This would allow for quick switching between visualization settings, and it would let the artist know that tool behaviors will change. However, not all edit tools would change behavior, so the artist would need to memorize these differences. As an alternative, the non-retopo tools could be hidden and any retopo-only tools shown, but such a change in interface is quite different from the current Blender interface language. Also, indicating such a change in behavior by a small checkbox is not artist-friendly, and arguments for/against keeping the setting state when toggling out of Edit Mode make either decision unclear (note: similar arguments could be made for dyntopo in Sculpt Mode, but that's beyond the scope of this document).

Retopology Modifier
A third option is to create a retopology modifier that acts similarly to the shrinkwrap modifier, although there could be differences between the two. A few of these differences are listed below.


 * The retopo modifier would be destructive, meaning that the snap / projection would alter the stored position of the vertices, either during the modal operation or at confirmation of the operation. In contrast, the shrinkwrap modifier does not change the stored position of the vertices, but only changes the visualized position, which leads to some awkward snapping behaviors and poor performance when the vertices are far from the surface to which they snap.


 * The shrinkwrap modifier allows for only one snapping target. There are many cases where the artist may wish to snap to the closest point of many targets.  So the retopo modifier would need a way to do this, perhaps using collections instead.


 * The shrinkwrap modifier has a Vertex Group option to indicate whether a vertex should be snapped. This is roughly a Boolean option (related to the single snapping target; weighted by vertex group value).  Unfortunately, there isn't a way for the artist to quickly change this value, as doing so is an entirely separate tool-settings operation (it requires selecting and adding to / removing from the vertex group), which is separate from any vertex-positioning operation.  The tools could be modified to allow the artist to toggle snapping using a keymap, but this further convolutes the transformation operation.


 * As an alternative or in addition to toggling vertex group assignment, a threshold setting could be implemented, where only vertices within a certain distance are snapped. However, some operations could move a group of vertices in such a way that some, but not all, are within the threshold, causing the overall shape of the group to change significantly and requiring additional clean-up operations.  A keymap could be used to override the threshold value during transformation, but this further convolutes the transformation operation.
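The partial-snap problem above can be sketched in a few lines. This is a hypothetical illustration only (the target is simplified to the z = 0 plane, and `threshold_snap_z0` is a made-up name, not an existing Blender function): a group moved by one transform can land with some vertices inside the threshold and some outside, so only part of the group snaps.

```python
# Hypothetical sketch of the threshold behavior described above: only vertices
# within `threshold` of the target are snapped. The target surface is
# simplified to the z = 0 plane for illustration.

def threshold_snap_z0(verts, threshold):
    """Snap verts to the z = 0 plane only when they are within threshold."""
    return [(x, y, 0.0) if abs(z) <= threshold else (x, y, z)
            for (x, y, z) in verts]

moved = [(0.0, 0.0, 0.2), (1.0, 0.0, 0.8)]     # a group after one transform
print(threshold_snap_z0(moved, 0.5))
# [(0.0, 0.0, 0.0), (1.0, 0.0, 0.8)] -- only part of the group snapped,
# distorting its overall shape
```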

The biggest issue with this option, though, is that it fundamentally changes the definition of a modifier, which is "automatic operations that affect an object's geometry in a non-destructive way".

Separate Retopology Mode
The fourth option is to create a separate / dedicated (sub)mode or workspace for retopology work. This separate mode makes it clear to the artist which mode they are in, so tool behavior is clear. Visualization settings can be stored by the mode, so switching into Edit Mode or Object Mode is quick (ex: to see the results of a subdivision modifier). Also, a separate mode would be in line with other related-but-separate modes (ex: Sculpt Mode).

Cons for a separate mode are mostly on the developer side. One possible artist-side con would be tool settings.

Types of Edits + Clean Up
There are several approaches to editing topology in a 3D environment.

Screen-Space Projection
In this approach, edits are done in screen space (ex: transformations are proportional to mouse movements): the geometry is first projected to the screen, the edited geometry is manipulated by the artist, and finally the edited geometry is projected back into the scene by ray casting onto the target geometry.

Screen-space editing is very intuitive for the artist, but it is limited. Only visible geometry should be edited (no clear way to project to the occluded / backside of target). Also, it is not clear what to do if the edited geometry falls off the target geometry in screen space (should geometry be snapped to target in 2D screen space or 3D space? should geometry move parallel to view plane and be allowed to be off target surface?).

Another potentially awkward issue is: while screen-space editing keeps the proportions, ratios, and relative positions of the edited geometry exactly the same in screen space, the proportions, ratios, and relative positions of the edited geometry can change significantly when projected onto the target geometry. This issue is compounded by the fact that this warping due to projection is unseen until the artist changes the viewing position.

This approach is already implemented with Face Snapping.

Edits that would use this approach include: grab (visible geometry only), screen-space rotation (visible geometry only), creating new vertex.
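The "project back into the scene" step above boils down to ray/triangle intersection. Below is a minimal pure-Python sketch (the Möller-Trumbore test; function names are illustrative, not Blender API): each edited vertex would be cast from the camera through its screen position onto the target triangles. A real implementation would use `mathutils` and a BVH for acceleration.

```python
# Hypothetical sketch of the screen-space "project back" step: a vertex is
# ray-cast from the camera through its screen position onto a target triangle.
# Moller-Trumbore ray/triangle intersection, brute-force and unoptimized.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:      # outside barycentric range
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None   # only hits in front of the origin

# A vertex at screen position (0.25, 0.25) is projected straight down (-Z)
# onto a ground triangle; the hit distance tells us where it lands in 3D.
t = ray_triangle((0.25, 0.25, 5.0), (0.0, 0.0, -1.0),
                 (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(t)  # 5.0
```

Note that a `None` result is exactly the "edited geometry falls off the target in screen space" case discussed above: the tool must then decide whether to leave the vertex floating or snap it some other way.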

World-Space Snap
In this approach, edited geometry is manipulated in 3D world space as in Edit Mode. But prior to committing the change, or while visualizing the change in action, the edited geometry is snapped to the nearest surface in 3D world space.

A difficult-to-solve limitation of this approach involves nearby surfaces. For example, if there are many intersecting or overlapping surfaces (could be separate objects, disjoint surfaces, geodesically distant surfaces, etc.), a vertex could snap to a "wrong" surface. While the vertex's position would be approximately the same when snapping to the "correct" or "wrong" surface, the vertex's normal (or any other vertex data, such as color or index) could be significantly incorrect.

A very awkward issue with world-space snapping happens when the edited geometry moves away from the target surface. For example, when an edited vertex is in the middle of a concavity of U-shaped targets, the vertex can snap to far away surfaces with only a small edit (ex: mouse move). Also, the proportions, ratios, and relative positions of edited geometry can change drastically if the nearby target surface changes with respect to each edited vertex.

This approach is somewhat already implemented as a hack: standard Edit Mode but with a Shrinkwrap Modifier. Note that this hack is limited, because the modifier is non-destructive. In other words, a vertex's (hidden) position could be very far from the surface, so making further changes to its position could result in unpredictable behavior. A correct implementation would need to be destructive and have additional snapping options.

Edits that would use this approach include: edge loop slide, grab (visible or occluded geometry).
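The core of the world-space snap is a closest-point query against the target surface. The sketch below uses the standard closest-point-on-triangle region test over a brute-force list of triangles; it is illustrative only (names like `snap_to_surface` are made up, and a real implementation would query a BVH over the target mesh). It also makes the "wrong surface" problem concrete: whichever triangle happens to be nearest wins, regardless of which surface it belongs to.

```python
# Hypothetical sketch of the world-space snap step: after a 3D transform, each
# edited vertex moves to the closest point on the target surface. The
# closest-point-on-triangle routine follows the usual Voronoi-region test.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def add_scaled(p, s, v): return (p[0]+s*v[0], p[1]+s*v[1], p[2]+s*v[2])

def closest_point_on_triangle(p, a, b, c):
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0: return a               # vertex region A
    bp = sub(p, b); d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3: return b              # vertex region B
    vc = d1*d4 - d3*d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:            # edge region AB
        return add_scaled(a, d1/(d1-d3), ab)
    cp = sub(p, c); d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6: return c              # vertex region C
    vb = d5*d2 - d1*d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:            # edge region AC
        return add_scaled(a, d2/(d2-d6), ac)
    va = d3*d6 - d5*d4
    if va <= 0 and (d4-d3) >= 0 and (d5-d6) >= 0:  # edge region BC
        w = (d4-d3) / ((d4-d3) + (d5-d6))
        return add_scaled(b, w, sub(c, b))
    denom = 1.0 / (va + vb + vc)                   # interior: project onto face
    return add_scaled(add_scaled(a, vb*denom, ab), vc*denom, ac)

def snap_to_surface(p, triangles):
    """Brute-force snap: closest point over all target triangles."""
    return min((closest_point_on_triangle(p, *tri) for tri in triangles),
               key=lambda q: dot(sub(q, p), sub(q, p)))

tri = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0))
print(snap_to_surface((0.5, 0.5, 3.0), [tri]))  # (0.5, 0.5, 0.0)
```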

Target Topology Walking
In this approach, the target's topology is walked to determine how to position created / edited geometry. The walking could take into account the position of the target surface to determine how to traverse. For example, the artist could create an edge loop where a plane intersects the target (perhaps starting at a given point).

The issues with this approach involve situations where the target is non-manifold (which can happen with 3D reconstruction, for example) or the walking crosses a mirror plane of the target or the source.

The "clean up" of this edit would use world-space snapping.
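The plane-intersection loop described above can be sketched per triangle: each target triangle that straddles the cutting plane contributes one segment, found by interpolating along the edges whose endpoints lie on opposite sides. This is an illustrative fragment only (`plane_tri_segment` is a hypothetical name); a full tool would walk target adjacency to chain the segments into an ordered loop, which is where non-manifold targets cause trouble.

```python
# Hypothetical sketch of the plane/surface intersection behind the
# target-mesh loop cut: one crossing segment per straddling triangle.

def signed_dist(p, plane_co, plane_no):
    """Signed distance of p from the plane (plane_no need not be unit for the sign test)."""
    return sum((p[i] - plane_co[i]) * plane_no[i] for i in range(3))

def plane_tri_segment(tri, plane_co, plane_no, eps=1e-9):
    """Return the (start, end) segment where the plane crosses the triangle,
    or None if the triangle lies entirely on one side."""
    d = [signed_dist(v, plane_co, plane_no) for v in tri]
    pts = []
    for i in range(3):
        j = (i + 1) % 3
        if (d[i] > eps) != (d[j] > eps):       # edge crosses the plane
            t = d[i] / (d[i] - d[j])
            pts.append(tuple(tri[i][k] + t * (tri[j][k] - tri[i][k])
                             for k in range(3)))
    return tuple(pts) if len(pts) == 2 else None

tri = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0))
seg = plane_tri_segment(tri, (1.0, 0.0, 0.0), (1.0, 0.0, 0.0))  # plane x = 1
print(seg)  # ((1.0, 0.0, 0.0), (1.0, 1.0, 0.0))
```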

Source Topology Walking
In this approach, the source's topology is walked to determine how to position created / edited geometry. The walking could take into account the position of the source surface to determine how to traverse. For example, the artist could insert an edge loop in a quad strip.

The issues with this approach involve situations where the source topology is non-manifold, the topology becomes complicated (triangles, n-gons, etc.), or the walking crosses a mirror plane of the source.

The "clean up" of this edit would use world-space snapping.

This approach is already implemented in Edit Mode tools.

Selection and Masking
This type of edit does not change the positions or normals of the source geometry, but instead changes the meta information (selection, masking). This can be done in screen space, world space, or by source topology walking. All of these methods are already implemented in Edit Mode.

Tool Workflows and Interactions
This section is a raw, roughly unformatted dump of ideas. I need to come back and clean this up.

single, specific, precise operation

 * using mouse with zooming to specify precise modifications. click / drag location is very important
 * ex: inserting a vertex
 * ex workflow: select, act (single action performed), select, act (single action performed), act (single action)
 * issues
 * not great for tablet
 * accessibility is always an issue, but it drops drastically as modifiers are added to operation (ex: Ctrl, Alt, Shift, Double Click, Triple Click)

stroke based

 * using a stroke to guide a series of modifications
 * usually, the stroke is smoothed
 * how much smoothing?
 * ex workflow: select, stroke, stop stroke (several of the same actions performed along stroke)
 * selection is not always necessary, as proximity to geometry can be used to inform modification
 * how to determine proximity?

brush based

 * use a brush with falloff to influence continuous modifications
 * usually, brush has radius (either in surface space or 3d), strength, falloff parameters
 * selection is usually not needed (modifications apply to all geometry under brush)
 * modifications are applied either continuously temporally (every x seconds / n times per second) or spatially (whenever the mouse has moved d pixels)
 * if modification is computationally heavy, continuous application can be problematic on slower machines. can take into account time / spatial delta, but calculations involving deltas can become difficult to do well
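The radius / strength / falloff parameters above can be sketched as a single weight function. This is only an illustration (smoothstep is one common falloff choice, not necessarily what Blender's brushes use, and the helper names are hypothetical): every vertex under the brush gets a weight from the falloff, and the modification is scaled by that weight.

```python
# Hypothetical sketch of a brush falloff: weight is 1 at the brush center,
# falls off smoothly (smoothstep), and is 0 at and beyond the rim.

def brush_weight(dist, radius, strength=1.0):
    """Smoothstep falloff over [0, radius], scaled by strength."""
    if dist >= radius:
        return 0.0
    t = 1.0 - dist / radius            # 1 at center -> 0 at rim
    return strength * t * t * (3.0 - 2.0 * t)

def apply_brush(verts, center, radius, offset):
    """Move 2D verts by `offset`, scaled by falloff from `center` (a sketch)."""
    out = []
    for (x, y) in verts:
        d = ((x - center[0])**2 + (y - center[1])**2) ** 0.5
        w = brush_weight(d, radius)
        out.append((x + w * offset[0], y + w * offset[1]))
    return out

print(brush_weight(0.0, 1.0))   # 1.0 (center)
print(brush_weight(1.0, 1.0))   # 0.0 (rim)
```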

3D widget based

 * after selecting some geometry, a widget representing a particular operation will appear as a 3d object in the scene near the selected geometry. The widget maps its parameters to different visual features of the widget.
 * while the artist adjusts the operation's parameters by interacting with the widget features, the operation is continuously (re)applied with new params.

2D / UI widget

 * some operations can have parameters that may need adjusting before finalizing the modification, but that are not visualized or adjustable before initializing the operation.
 * typically, these parameters are displayed and adjusted through a basic UI
 * ex: specifying the number of (perpendicular) cuts along the extrusion of an edge strip
 * these UI widgets can show up after initially applying the operation; then, when a parameter is adjusted, the original operation is undone and reapplied with the newly adjusted params.

2D / 3D visualization of parameters

 * some parameters are visualized as 2D / 3D elements over / in the view
 * while the parameters are adjustable, they are typically adjusted through keyboard actions or through another UI widget somewhere else (not directly on the visualization)

2D / 3D visualization of context

 * details about the context are sometimes reported to the artist as 2D / 3D visualizations over / in the view.
 * ex: number of selected edges
 * this information is useful for the artist to know how much geometry will be affected by the operation

2D / 3D preview visualization of operation

 * sometimes an operation may be too complex to communicate well with simple visualizations. in these cases, some operations can construct a preview of what an operation will do if committed.
 * this preview should be visually different from the rest of the geometry to make it distinct and obvious.
 * the preview and final result should be the same. often, the generated preview is stored in a way so that it can be converted into geometry when the operation is committed (no additional computation is required)
 * this can be an issue when modifiers are involved (ex: displacement, subdiv) as an accurate preview will need to have the modifiers applied as well.

Non-Tool Workflows and Interactions
This section is a raw, roughly unformatted dump of ideas. I need to come back and clean this up.

Selection

 * selection can be screen space (single, rect, circle), 3D (all geometry within radius), or topological (shortest path, connected and within radius, increase selection)
 * selection is binary (either selected or not)

Masking

 * masking can be screen space (brush), 3D, or topological
 * mask values are floats in the range from 0 (fully unmasked) to 1 (fully masked)
 * the mask controls the strength of the applied operation (does not work on all operations)
 * masking can be implicit
 * ex: a moving brush grabs all verts within its radius, with verts farther from the brush center moved less (masking was implied by the brush radius and falloff when the artist clicked)
 * masking can be temporary
 * ex: a smoothing brush will affect all verts within its radius from the mouse's current position. in other words, as the artist moves the mouse, different vertices are affected (without needing to re-press the mouse button)
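The "mask controls strength" idea above reduces to scaling each vertex's delta by (1 - mask). A minimal sketch, with a made-up helper name and a simple translate as the operation:

```python
# Hypothetical sketch of float masking: a fully masked vertex (mask = 1) is
# untouched, an unmasked vertex (mask = 0) receives the full delta, and
# in-between values blend.

def apply_masked_translate(positions, masks, delta):
    """Translate each position by delta scaled by (1 - mask)."""
    return [
        tuple(p[i] + (1.0 - m) * delta[i] for i in range(3))
        for p, m in zip(positions, masks)
    ]

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
masks = [0.0, 1.0]                       # second vertex fully masked
print(apply_masked_translate(verts, masks, (0.0, 0.0, 2.0)))
# [(0.0, 0.0, 2.0), (1.0, 0.0, 0.0)]
```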

Tool Design
This section is a raw, roughly unformatted dump of ideas. I need to come back and clean this up.

Problem-Centered

 * tools are designed or categorized around the types of problems they solve
 * these tools tend to be more intuitive, but their design requires domain knowledge
 * these tools typically don't work well in contexts different from their design

Tool-Centered

 * tools are designed to perform a single operation
 * could have many tools that perform similar operations, but slightly different outcomes or work in different contexts
 * tools can be used in contexts different from those for which they were originally designed
 * artist is required to understand tool through experimentation or through education
 * analogy: physical clay sculpting tools