Preparing your work for video
Once you have mastered the art of animation you will surely start to produce wonderful animations, encoded with your favourite codecs, and you may well share them on the Internet with the rest of the community.
Sooner or later you may be struck by the desire to build an animation for television, or to burn your own DVDs. To spare you some disappointment, here are some tips specifically targeted at video preparation. The first and most important is to remember the double dashed white lines in the camera view! If you render for a PC, the whole rendered image, which lies within the outer dashed rectangle, will be shown. For television, some lines and some parts of lines will be lost due to the mechanics of the electron beam scanning in your TV's cathode ray tube. You are guaranteed that whatever lies within the inner dashed rectangle in the camera view will be visible on the screen. Anything between the two rectangles may or may not be visible, depending on the particular TV set your audience watches the video on.
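Exactly how much of the border region is lost varies from set to set, so treat safe margins as approximate. As a minimal Python sketch, assuming a hypothetical 10% total margin (the helper name and the figure are ours, not a Blender setting), the inner safe rectangle can be computed like this:

```python
# Illustrative sketch only, not a Blender API.
def safe_area(width, height, margin=0.1):
    """Return (x, y, w, h) of the inner 'safe' rectangle, trimming
    `margin` of each dimension in total (half on each side)."""
    inset_x = int(width * margin / 2)
    inset_y = int(height * margin / 2)
    return (inset_x, inset_y, width - 2 * inset_x, height - 2 * inset_y)

# A PAL frame with an assumed 10% margin:
print(safe_area(720, 576))  # (36, 28, 648, 520)
```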
The rendering size is strictly dictated by the TV standard. Blender has three pre-set settings for your convenience:
- PAL 720x576 pixels at 54:51 aspect ratio.
- NTSC 720x480 pixels at 10:11 aspect ratio.
- PAL 16:9 720x576 at 64:45 aspect ratio, for 16:9 widescreen TV renderings.
- HD 1920x1080 pixels at 1:1 aspect ratio, for full high-definition renderings.
TV screens do not have the square pixels that computer monitors have: their pixels are somewhat rectangular, so it is necessary to generate pre-distorted images that will look bad on a computer but will display nicely on a TV set.
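To see how this works out, here is a quick Python sketch (the helper name is ours) that multiplies the rendered width-to-height ratio by the pixel aspect ratio of each preset above to get the shape actually shown on the TV:

```python
# Illustrative sketch only, not a Blender API.
def display_aspect(width, height, pixel_aspect):
    """Display aspect ratio = (width / height) * pixel aspect ratio."""
    return width / height * pixel_aspect

print(display_aspect(720, 576, 54 / 51))   # ~1.32, roughly 4:3 (PAL)
print(display_aspect(720, 480, 10 / 11))   # ~1.36, roughly 4:3 (NTSC)
print(display_aspect(720, 576, 64 / 45))   # ~1.78, exactly 16:9 (PAL 16:9)
print(display_aspect(1920, 1080, 1.0))     # ~1.78, 16:9 with square pixels (HD)
```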
If you render your animation at 1600x1200 resolution and then burn a DVD, your image will not be any clearer or crisper on the TV; in fact, the DVD burning software will have to downsize your images to fit the resolutions shown above, and you will have wasted roughly four times the disk space and render time.
Most video tapes and video signals are not based on the RGB model but on the YCrCb model: more precisely, YUV in Europe (PAL) and YIQ in the USA (NTSC), the latter being quite similar to the former. Hence some knowledge of this model is necessary too.
The YCrCb model sends information as 'luminance', or intensity (Y), and two 'chrominance' signals, red and blue (Cr and Cb). A black-and-white TV set shows only the luminance, while colour TV sets reconstruct colour from the chrominances (and the luminance). The construction of the YCrCb values from the RGB ones takes two steps (the constants in italics depend on the system: PAL or NTSC):
- First, the Gamma correction (g varies: 2.2 for NTSC, 2.8 for PAL):
- R' = R^(1/g)
- G' = G^(1/g)
- B' = B^(1/g)
- Then, the conversion itself:
- Y = 0.299R' + 0.587G' + 0.114B'
- Cr = a1(R' - Y) + b1(B' - Y)
- Cb = a2(R' - Y) + b2(B' - Y)
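To make the two steps concrete, here is a small Python sketch of the conversion, using the commonly quoted PAL (YUV) scale factors 0.492 and 0.877 for the two colour-difference signals; NTSC (YIQ) uses different constants, and the function name is ours:

```python
# Illustrative sketch only, not a Blender API.
def rgb_to_yuv(r, g, b, gamma=2.8):
    # Step 1: gamma correction, R' = R^(1/g) (g = 2.8 for PAL, 2.2 for NTSC).
    rp, gp, bp = r ** (1 / gamma), g ** (1 / gamma), b ** (1 / gamma)
    # Step 2: luminance plus the two colour-difference (chrominance) signals.
    y = 0.299 * rp + 0.587 * gp + 0.114 * bp
    u = 0.492 * (bp - y)  # blue colour difference
    v = 0.877 * (rp - y)  # red colour difference
    return y, u, v

print(rgb_to_yuv(1.0, 0.5, 0.25))  # RGB inputs in the [0-1] range
```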
Whereas a standard 24-bit RGB picture has 8 bits for each channel, to keep bandwidth down, and considering that the human eye is more sensitive to luminance than to chrominance, the luminance signal is sent with more bits than the two chrominance signals. This reduced chrominance bandwidth results in a smaller dynamic range of colours in video than the one you are used to on monitors. You hence have to keep in mind that not all colours can be correctly displayed. A rule of thumb is to keep the colours as 'greyish' or 'unsaturated' as possible; this roughly translates to keeping the dynamics of your colours within 80% of one another. In other words, the difference between the highest RGB value and the lowest RGB value should not exceed 0.8 (in the [0-1] range) or 200 (in the [0-255] range). This is not strict: somewhat more than 0.8 is acceptable, but an RGB display with a colour contrast that ranges from 0.0 to 1.0 will appear very ugly (over-saturated) on video, while appearing bright and dynamic on a computer monitor.
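The rule of thumb above is easy to turn into a check. This Python sketch (the function names are ours) flags colours whose RGB spread exceeds 0.8 and pulls an offending colour toward its own average until it fits:

```python
# Illustrative sketch only, not a Blender API.
def is_video_safe(r, g, b, max_spread=0.8):
    """True if the spread between the highest and lowest channel is <= 0.8."""
    return max(r, g, b) - min(r, g, b) <= max_spread

def desaturate_to_spread(r, g, b, max_spread=0.8):
    """Scale each channel's deviation from the colour's average down
    until the spread fits within max_spread."""
    spread = max(r, g, b) - min(r, g, b)
    if spread <= max_spread:
        return (r, g, b)
    avg = (r + g + b) / 3
    k = max_spread / spread
    return tuple(avg + (c - avg) * k for c in (r, g, b))

print(is_video_safe(1.0, 0.1, 0.0))         # False: the spread is 1.0
print(desaturate_to_spread(1.0, 0.1, 0.0))  # same hue direction, spread now 0.8
```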
Rendering to fields
Mode: All Modes
Panel: Render Context → Render
The TV standards prescribe that there should be 25 frames per second (PAL) or 30 frames per second (NTSC). Since the phosphors of the screen do not maintain luminosity for very long, this could produce noticeable flickering. To minimize this, TVs do not display frames as a computer does ('progressive' mode), but rather display half-frames, or fields, at double the refresh rate: 50 half-frames per second on PAL and 60 on NTSC. This was originally tied to the frequency of the power lines in Europe (50Hz) and the US (60Hz). The fields are "interlaced", in the sense that one field presents all the even lines of the complete frame and the subsequent field the odd ones. Since there is a non-negligible time difference between the two fields (1/50 or 1/60 of a second), merely rendering a frame the usual way and splitting it into two half-frames does not work: a noticeable jitter of the edges of moving objects would be present.
- Fields: Enables field rendering. When the Fields button in the Render Panel is pressed (Field Rendering setup), Blender prepares each frame in two passes. On the first it renders only the even lines; it then advances in time by half a frame step and renders all the odd lines. This produces odd results on a PC screen (Field Rendering result) but will display correctly on a TV set.
- Odd: Forces the rendering of odd fields first.
- X: Disables the half-frame time step between fields.
Setting up the correct field order
Blender's default setting is to produce even fields before odd fields; this complies with the European PAL standard. Odd fields are scanned first in NTSC.
Fields and Composite Nodes
Nodes are currently not field-aware. This is partly due to the fact that in fields, too much information is missing to do good neighborhood operations (blur, vector blur etc.). The solution is to render your animation at double frame rate without fields and do the interlacing of the footage afterwards.
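As an illustration of that workaround, here is a Python sketch (using numpy; the function name is ours) that weaves each pair of double-rate progressive frames into one interlaced frame:

```python
# Illustrative sketch only, not a Blender API.
import numpy as np

def weave(frame_a, frame_b, odd_first=False):
    """Build an interlaced frame from two progressive frames captured
    half a frame time apart; frames are (height, width, channels) arrays."""
    if odd_first:  # NTSC-style field order
        frame_a, frame_b = frame_b, frame_a
    out = np.empty_like(frame_a)
    out[0::2] = frame_a[0::2]  # one field: every other line from the earlier frame
    out[1::2] = frame_b[1::2]  # other field: remaining lines, half a frame later
    return out

# 50 fps progressive footage -> 25 fps interlaced frames:
frames = [np.random.rand(576, 720, 3) for _ in range(4)]
interlaced = [weave(a, b) for a, b in zip(frames[0::2], frames[1::2])]
```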
For an animation, set the frame rate (AVI Codec settings), which by default is 25 frames per second, the standard for PAL (European) television. Use 30 frames per second for US television.
A codec is a little routine that compresses the video so that it will fit on a DVD, be able to be streamed over the Internet or over a cable, or simply be a reasonable file size. Codecs compress the channels of a video to save space and enable continuous playback. Lossy codecs make smaller files at the expense of image quality. Some codecs, like H.264, are great for larger images. Codecs are used to encode and decode the movie, and so must be present on both the encoding machine (Blender) and the target machine. The results of the encoding are stored in a container file.
Blender knows two kinds of container files:
- Audio Video Interleave (a .avi extension) and
- QuickTime (a .mov extension).
When AVI Codec is selected, Blender will pop up a little codec selector window, listing the codecs that are registered on your machine. Each codec has unique configuration settings. Consult the documentation for the codec (supplied by the company that wrote it) for more information.
When QuickTime is selected, the codecs on your machine will pop up and allow you to pick the one you want to use. You may need QuickTime Pro to use this.
There are dozens, if not hundreds, of codecs, including XviD, H.264, DivX, Microsoft, and so on. Each has advantages and disadvantages and different compatibility with players on different operating systems.
Most codecs can only compress the RGB or YUV color space, but some support the Alpha channel as well. Codecs that support RGBA include:
- Animation (QuickTime)
- PNG
- TIFF
- Pixlet (not lossless, and may only be available on Apple Macs)
- Lagarith Lossless Video Codec
More information on image formats can be found at:
Interlacing is a way of providing a sort of motion blur along with compression. Instead of capturing the full-resolution image so many times a second, half of the horizontal scan lines are captured twice as many times a second. So, instead of displaying 1280x720 images 25 times a second (called HD 720p in the EU), you could display 1280x360 images 50 times a second, where the first frame consists of the even scan lines (horizontal rows 2, 4, 6, 8, ...) and the second frame, captured 1/50th of a second later, consists of the odd scan lines (rows 1, 3, 5, 7, ...). The net result is that the same number of pixels is displayed every second, but the interlaced variety will appear smoother, since the odd lines catch any movement that happened in between the even frames, and vice versa. Blender supports even interlacing (described above, used for EU TV) and odd interlacing, for US TV, where the first frame is the odd scan lines (1, 3, 5, 7, ...) and the frame after is the even lines (2, 4, 6, 8, ...). Use the Even/Odd buttons for this purpose.
Additionally, Blender supports 50, 60, 24, and 30 frames per second. 50 and 25 fps are used for EU TV, 60 and 30 for US TV, and 24 for film. For technical reasons dating back to the introduction of colour NTSC, the actual US broadcast frame rate is 29.97 fps. To accommodate this, Blender has a divider for the frame rate field; enter 30 fps and a divider of 1.001 to get exactly 29.97 fps.
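A quick check of that arithmetic in Python:

```python
from fractions import Fraction

# 30 fps with a divider of 1.001 gives the exact NTSC rate of 30000/1001 fps.
rate = Fraction(30) / Fraction("1.001")
print(rate, float(rate))  # 30000/1001 29.97002997002997
```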
FFMPEG, short for Fast Forward Moving Picture Experts Group, is a collection of free and open-source software libraries that can record, convert, and stream digital audio and video in numerous formats. It includes libavcodec, an audio/video codec library used by several other projects, and libavformat, an audio/video container muxing and demuxing library.
When you select FFMPEG as your output format, two more tabs appear that allow you to select the video codec and the audio codec.
Here you choose which video codec you want to use, along with its compression settings. With all of these compression choices there is a tradeoff between file size, compatibility across platforms, and playback quality. You can use the presets (DV, SVCD, DVD, etc.), which choose optimum settings for you for that type of output, or you can manually select the format (MPEG-1, MPEG-2, MPEG-4, AVI, QuickTime (if installed), DV, H.264, or XviD (if installed)). You must have the proper codec installed on your computer for Blender to be able to call it and use it to compress the video stream.
If your video is huge and exceeds 2 GB, enable Autosplit Output. The main control over output file size is the GOP, or keyframe interval. A higher number generally leads to a smaller file, but needs a more powerful device to play it back.
Codecs cannot encode off-the-wall video sizes, so stick to the XY sizes used in the presets for standard TV sizes.
Audio is encoded using the codec you choose, as long as you enable Multiplex Audio. For each codec, you may be able to control the bitrate (quality) of the sound in the movie. This example shows MP3 encoding at 128kbps. Higher bitrates are bigger files that stream worse but sound better. Stick to powers of 2 for compatibility.
Choosing which format to use depends on what you are going to do with the image. If you are going to
- email it to your friends, use JPG
- combine it with other images in post processing and simple color/alpha composition, use PNG
- use nodes to simulate depth of field and blurring, use EXR
- composite using Render Passes, such as the Vector pass, use Multilayer.
If you are animating a movie and are not going to do any post-processing or special effects on it, use either AVI-JPEG or AVI Codec and choose the XviD open codec. If you want to output your movie with sound that you have loaded into the VSE, use FFMPEG.
If you are going to do post-processing on your movie, it is best to use a frame set rendered as PNG images; if you only want one file, then choose AVI Raw. While AVI Raw is huge, it preserves the exact quality of output for the post-processing. After post-processing (compositing and/or sequencing), you can compress it down. You don't want to post-process a compressed file, because the compression artifacts might throw off what you are trying to accomplish with the post-processing.
Note that rendering a long animation to a single file (AVI or QuickTime) is riskier than rendering a set of static images: if a problem occurs while rendering, you have to re-render everything from the beginning, whereas with static images you can restart the rendering from the frame where the problem occurred!