Depth Of Field (DOF) Explained
Real-world camera lenses and your eyeball transmit light through a lens (in the eye, the cornea) that bends the light, and an iris that limits the amount of light, to focus the image onto the film, CCD/CMOS sensor, or retina. Because of the interaction of the lens and iris, objects at a certain distance from the camera are in focus; objects in the foreground and background are out of focus. We call this distance their depth, or “Z” distance, from the camera or eye.
In the real world, light reaches the lens at an angle, from some direction. What you see depends on your perspective; if you move closer, different angles of the scene are revealed. To make “flat” pictures, like an architectural drawing or plot, Blender can also make an orthographic rendering. So, there are two kinds of renderings, Perspective and Orthographic. Perspective simulates light coming in at an angle to the lens from the field of view, and Orthographic (disabled by default) simulates light coming straight into an infinitely large backplane or flat retina.
Depending on the diameter of the iris, there is a range of distances within which objects are in focus. In cameras, the diameter of the iris is controlled by an “f-stop”. Said another way, there is a field of view that you see left to right, up and down; your “picture”, if you will. At a certain range, or depth, away from your eye, things are in focus. For example, at night, you may be able to focus your eye on objects that are 10 to 15 feet (3 to 5 meters) away. Anything closer than 10 or farther away than 15 is blurry. Your depth of field is thus 5 feet (about 1.5 m).
The larger the iris, the smaller the depth of field. This is why, during the day, when bright light makes your iris contract, you can focus on a range of things stretching out far from you. In film, there is a person whose job is to measure the distance from the camera to the actor’s nose, to ensure that the focus is set perfectly.
The farther an object is out of its depth (the depth of perfect focus is called the focal plane), the blurrier it is. In fact, the depth of field is the range on both sides of the focal plane in which the blurriness of objects is low enough to be imperceptible. In Blender, this distance is called the Dof Dist, or “Depth of Field Distance”, and is set in the Editing context (F9) for the camera. Alternatively, you can have the camera automatically stay focused on an object by entering the name of the object in the Dof Ob field.
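The relationship can be sketched in a few lines of Python. This is not Blender code, just an illustration of the idea: blurriness is zero inside the depth of field and grows the farther an object sits from the focal plane. The names focal_distance and dof_range are illustrative.

```python
def blur_amount(z, focal_distance, dof_range):
    """Return 0.0 inside the depth of field, growing linearly outside it."""
    distance_from_plane = abs(z - focal_distance)
    half_range = dof_range / 2.0
    if distance_from_plane <= half_range:
        return 0.0                               # within the depth of field: sharp
    return distance_from_plane - half_range      # farther out of depth -> blurrier

print(blur_amount(10.0, 10.0, 1.0))  # at the focal plane -> 0.0
print(blur_amount(14.0, 10.0, 1.0))  # 3.5 units past the far edge -> 3.5
```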
Field of View and Lens Size
The field of view varies with the size of the lens. With cameras, a 35mm lens is something of a standard, because the picture it takes mimics what the eye sees, and pictures can be taken fairly close. In Blender, use the Camera settings to change the size of the lens (35mm is the default). A longer lens taking a picture farther away has the same field of view, but a different perspective that many directors love, because it “condenses” the scene and smooths a camera sweep, since the camera is farther from the action:
Zooming in Blender
Zoom is the ability to expand a subset of the picture; we humans have no such ability. Well, I take that back; we do: we just get up off the couch and walk up closer to what we want to see (however, this is more like “traveling” than “zooming”). Blender allows you both actions: you can move the camera closer to or farther away from an object for a track (or “truck”) in/out, and/or change its lens size. You can automate these by assigning an Interpolated (Ipo) curve to the object or to the camera, respectively.
Depth of Field in Computer Graphics
In computer graphics (CG), there is no physical lens or iris, so the depth-of-field (DOF) is infinite and all objects are always in focus. However, for artistic reasons, we want our main characters to be in focus, and everything else a little blurry, so that our audience does not focus on distracting things in the background. Also, it is easier to discern the main actors when they are in focus, and everything else isn’t. So, we have to create an effect, or Depth of Field Effect, to composite our images and post-process them to achieve realistic-looking results.
How to Achieve DOF in 3D
The concept is to take information about our scene, specifically the Z values, and use it to blur objects that are out of depth, both behind the depth of field and in front of it. The more out of depth they are, the more they are blurred. We then combine those two pictures.
Tools in Blender
Blender 2.43 will have a Defocus node, which will do away with the need for the noodle described in this section. However, the information on selective blurring may prove interesting. See the linked section for examples on using the Defocus node.
Old School (Version 2.42)
Ultimately we want to route our Z information into a foreground Map Value node and a background Map Value node. Each node's output is a gray-scale map (if you viewed it as an image) that ranges from black (0.00) to white (1.00), growing whiter the more an object is out of depth. We thread that output to a Blur node's factor input to blur our original image.
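The Map Value node's arithmetic, as used here, can be sketched as follows. This is a hedged Python illustration, not Blender code: the result is (Z + Offset) * Size, clamped to [0, 1] when the node's Min/Max toggles are enabled.

```python
def map_value(z, offset, size, lo=0.0, hi=1.0):
    """Sketch of Map Value: scale and offset a Z value, then clamp it."""
    return max(lo, min(hi, (z + offset) * size))

# Background noodle: start blurring 10 units out, full blur by 20 units.
print(map_value(10.0, -10.0, 0.1))  # at the focal plane -> 0.0 (no blur)
print(map_value(15.0, -10.0, 0.1))  # 5 units behind -> 0.5 (half blur)
print(map_value(25.0, -10.0, 0.1))  # far behind -> clamped to 1.0 (full blur)
```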
The color Z-Combine node combines two images based on which is in front of the other, using the Z-values supplied by two renderlayer nodes.
Blur the Foreground
You may recall the previous topic told you how to blur the background of your image. In that topic, we saw that, to start blurring objects a certain distance away from the camera, for example 10 units, we used the Offset to subtract 10, giving a blur factor of zero (no blur). We used the size value as a multiplier to scale the Z-depth values from that zero out to 1.00 (maximum blur).
We now want to blur objects that are closer to the camera, starting with objects, for example, that are 10 units away. The Offset subtracts a value from the Z-depth, and the Size factor multiplies it by some value. So, if we subtract 10 from an object that is 5 away, we get -5.0. Therefore, use a negative Size value to turn that negative into a positive. When we route those values to a Blur node, it will blur objects in the foreground.
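The negative-Size trick can be sketched numerically. In this hedged illustration (not Blender code), the same Offset is used for both passes; flipping the sign of Size turns foreground objects' negative results into positive blur factors. The values mirror the example in the text (focal plane at 10 units).

```python
def map_value(z, offset, size):
    """Sketch of the Map Value arithmetic, without clamping."""
    return (z + offset) * size

# Foreground pass: negative Size makes near objects map to positive blur.
print(map_value(5.0, -10.0, -0.2))   # (5 - 10) * -0.2 = 1.0: foreground blurred
print(map_value(15.0, -10.0, -0.2))  # (15 - 10) * -0.2 = -1.0: background goes negative
```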
Combine with a Blurred Background
Border Select and ⇧ Shift D (Duplicate) the previous topic's node map (Blur the Background), and plug it in as shown:
Why Does It Work?
Notice that the mapped values from the foreground noodle feed the background image's Z input, and the mapped values from the background noodle feed the foreground image's Z input. We do this because, in the blur-foreground noodle, the Map Value node calculated negative numbers for objects in the background, while the blur-background noodle calculates positive numbers for objects in the background. So, while the positive values tell the Blur node what to keep and blur, the negative values tell the Z-Combine node which pixels to use from each image; namely, use the blur-background pixels when compositing the background (since a negative number is less than a positive number). Z-Combine thus uses the blurry foreground part of one picture, and the blurry background of the other.
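The selection rule Z-Combine applies per pixel can be sketched simply: keep whichever input has the smaller (nearer) Z value. The (color, z) tuples below are illustrative, not Blender data structures.

```python
def z_combine(pixel_a, pixel_b):
    """Sketch of Z-Combine: per pixel, the smaller Z value wins."""
    color_a, z_a = pixel_a
    color_b, z_b = pixel_b
    return color_a if z_a <= z_b else color_b

# A negative mapped value always beats a positive one, so the input fed
# negative Z values supplies the pixel.
print(z_combine(("pixel-from-input-A", -1.0), ("pixel-from-input-B", 0.5)))
```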
Depth of Field
In the example picture above, the depth of field is -9.50 minus -10.50, or 1 unit deep. Yes, My Dear Aunt Sally, a negative minus a bigger negative is a positive.
Working with the Map
You can vary the size, blur factors, and mixing methods of the foreground and background independently to enhance the impact of the image. Changing the offset changes the focal plane. Spreading apart the offsets between the foreground and background Map Value nodes increases the depth of field. Using a larger Size value increases the rate at which objects blur, and increasing the X & Y values of the blur simulates the f-stop on a real camera.
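These relationships can be sketched with the example values from the picture above (offsets of -9.50 and -10.50). This is a hedged illustration; the function names are our own, not Blender's.

```python
def focal_plane(fg_offset, bg_offset):
    """The plane of perfect focus sits midway between the negated offsets."""
    return -(fg_offset + bg_offset) / 2.0

def depth_of_field(fg_offset, bg_offset):
    """Spreading the two offsets apart widens the in-focus range."""
    return fg_offset - bg_offset

print(focal_plane(-9.5, -10.5))     # 10.0 units from the camera
print(depth_of_field(-9.5, -10.5))  # 1.0 unit deep
```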
If you move the camera and/or objects in the scene, you will have to calculate new offset and size values.
Lighting plays an important part
Keep in mind that lighting also plays an important part, and a spotlight should be trained on the actors in focus. In the real world, adding light to a scene allows the cameraman to stop down the lens, resulting in a larger DOF without overexposure. You can simulate this simply by increasing the difference between the offsets.
Music videos in particular lag the stopping-down behind the increased lighting, resulting in a sort of 'fade-in-from-white while increasing DOF' effect which is very catchy. You can simulate this by animating your lamps to reduce energy while increasing your DOF using the offsets.
Adding on to the Effect
It is also possible to add on other nodes to sharpen, enhance, highlight, and/or colorize the foreground or background. Different settings and node maps modifications will have better results depending on the scene setup, the shape of the objects being blurred, and what you want to show in focus. The output Viewer node can show you mapped values as an image; thread the Map Value output socket to the Image socket on a Viewer node, and you will see a gray-scale representation of the mapped values, with black being zero or less, white being 1.00 or more, and shades of gray in between.
Keeping your Desktop Uncluttered
To save window and desktop space, remember that all nodes can be collapsed and moved closer together. The window may be zoomed and panned as well. The UV Image Editor window can show the Viewer node output a little larger for your inspection by selecting Viewer Node as you browse IM: choices.
Excluding Objects from the DOF Effect
To exclude some objects in your scene from the DOF compositing action, you must move them to a different RenderLayer, and then mix the results of this node map with the other RenderLayer input. You can use this really wild effect to have a foreground actor in focus in the middle of a blurred crowd, with a single (ominous) actor far in the background but still clearly in focus. Of course, you can have multiple focal planes within the same image by simply dividing your scene up into renderlayers and applying the DOF effect to each RenderLayer.
Getting Exact Focal Plane Measurements
There is a Caliper script that measures absolute distances between two objects. Use this script to measure the distance from the camera to the object you want in focus. Use this distance as the average of your Map Value offsets between foreground and background.
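Turning a measured camera-to-subject distance into the two offsets can be sketched as below. This is a hedged illustration: the 'spread' parameter (the desired depth of field) and the function name are our own. The measured distance ends up as the average of the two offsets, as described above.

```python
def offsets_from_distance(distance, spread):
    """Center the foreground/background Map Value offsets on the measured distance."""
    fg_offset = -(distance - spread / 2.0)
    bg_offset = -(distance + spread / 2.0)
    return fg_offset, bg_offset

print(offsets_from_distance(10.0, 1.0))  # (-9.5, -10.5), as in the earlier example
```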
Applying DOF to Animations
The noodle presented above takes an un-blurred Z-buffered input and blurs it. You can then save your image with F3. However, you may want to perform the DOF compositing later. If so, render your image frames in a format that captures the Z-buffer information, but do not enable Do Composite; instead, simply save your individual images. (Well, you could enable Do Composite and use other composite nodes to do wonderful things.)
Blender outputs the result of a render according to the format specified in the Format panel of the Render buttons. To apply DOF later, you simply must save your images in a format that supports a Z-buffer:
- OpenEXR (be sure to click Zbuf and RGBA)
Save EXR space
You may click Half (16-bit) format to save disk space.
You cannot use any motion picture codec, because (as of this writing) no motion codecs (AVI or QuickTime) capture alpha or Z-buffer information. 'Flat' image formats (JPG, PNG, GIF, BMP, Targa, TIFF) also don't capture Z. And while some image formats purport to support a Z-buffer, they don't work:
- Radiance HDR
You CAN use the HDR format by saving two images for every frame: one color (RGBA) and another for Z. To do so, you must use a composite noodle that threads the renderlayer's Z output to the Composite node's image socket. Render one pass of 'white' images, which contain the Z-buffer information. Render another pass threading the renderlayer's image output to the Composite image socket. Then, when ready to apply DOF, use an Image input node for the Z pass, connecting its Image output socket to the Map Value node's input socket, and use another Image input node, set to the RGBA pass, as the image source for the Blur node's image input socket.