The purpose of this tutorial is to guide the user through the process of mapping a matte image to a 3D object so that camera moves can be made and yet appear realistic. Three examples are worked through: one for a near-field object, one for a background backdrop, and one using multiple background mattes.

Near Field Mapping

This example uses an image of an object in the foreground, an altar. We have a greenscreen person lying down, and we want to match-move the camera to this image, so that it appears that the person was lying on the altar. You can get a starting blend file here (open with Blender v2.46+): Media:Tutorial-Camera_Mapping.blend

Workflow

The process for mapping a near-field object in Blender is:

1. Set up a workspace for camera mapping
2. Model the near-field object
3. Unwrap the object into a set of UV Coordinates called a UV Texture
4. Project the matte image onto the object by one of three means:
4a. UV Projection
4b. UV Texturing
4c. UV Mapping
5. Finalize camera moves
6. Composite the rest of the scene.

Set up

We are going to assume the worst case, because nothing in CG is ever easy. Specifically, we have only one single matte image of an object, in this case, an altar. We do not know what camera lens was used to take the image, what height or angle the image was taken from, nor how big the actual altar is. Further, we do not even have any reference objects in the image, such as a ruler or yardstick, from which we could guess how big the object is. We do not know how tall the photographer is, or how high the camera was relative to the base of the object, when the picture was taken. In short, we just have to eyeball it.

Open Blender and set up the standard 4-up window as shown. In the 3D View, define a plane at {0,0,0} with dimensions large enough to cover the field of view; in this case 20 BU square. Create a camera that matches the approximate height of the camera in the image. A camera can be put on a tripod, held over one's head, up to one's eye, or put on a crane or stepladder. The image could even have been shot with a long lens from far away, at some height above the base of the object.

In this case, assume the camera was held at eye height by an average person. Looking at the image, it looks like the person stood at about a 45-degree angle to the center of the object and tilted the camera slightly down, but held it level. So, create a 35mm camera at {5,-5,1.8} (we're going to use meters as our measurement unit). Angle the camera {80,0,45}. The 80 is 10 degrees down from horizontal, because a RotX of 90 degrees looks out at the horizon and 0 degrees looks straight down. Note that there is a script that gives precise angle distortion based on known camera lenses, but in this case we are flying blind.
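
If you prefer to script this setup, here is a minimal sketch assuming the Blender 2.4x Python API (run it from a Text Editor window with Alt+P; the values are the guesses above):

    # A minimal sketch of the camera setup above, assuming the Blender 2.4x
    # Python API. setEuler takes radians, while the N panel shows degrees.
    import math
    import Blender
    from Blender import Camera, Scene

    scn = Scene.GetCurrent()
    cam_data = Camera.New('persp')        # a perspective camera
    cam_data.lens = 35.0                  # our guessed 35mm lens
    cam_obj = scn.objects.new(cam_data)   # wrap the camera data in an object
    cam_obj.setLocation(5.0, -5.0, 1.8)   # eye height, 45 degrees off-axis
    cam_obj.setEuler(math.radians(80.0), 0.0, math.radians(45.0))
    scn.setCurrentCamera(cam_obj)         # make it the active render camera
    Blender.Redraw()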

Tutorial-Camera Mapping-Setup.jpg

Configure your four 3D View windows as Overhead, Camera, Side and Front. In the camera view, load the Background Image. Your workspace should look like that above. The first issue is that the image perspective does not match our camera render perspective. So, in the Render settings, set the render size to match the image; in this case 2000x1410. Since we don't want to waste a lot of time rendering, set the render size to 25%. This will produce a test render that is 500x352, which should render pretty fast.
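
The 25% setting is just a straight scale of the full render size; a quick sanity check of the numbers (plain Python, nothing Blender-specific):

    # 25% preview of a 2000x1410 render; the odd half-pixel is truncated.
    full_x, full_y = 2000, 1410
    print("%dx%d" % (int(full_x * 0.25), int(full_y * 0.25)))  # -> 500x352
    print(round(full_x / float(full_y), 3))  # aspect ~1.418, same as the image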

Model Base

Tutorial-Camera Mapping Initial Model.jpg

Now we model the altar in our 3D space. This object is rectangular, so we can use cube modeling as our approach. Place a cube at {0,0,0}, and adjust the vertices up one unit (G, Z, Ctrl, move your mouse up until it snaps) so that the object center aligns with the bottom face of the cube, and the whole cube sits on the plane.

Based on the altar's apparent proportions, scale the cube in the Y direction so that the cube has scale {1,2,1}. We will start with the base of the altar, since that appears to be a nice rectangular shape with easily discerned lines. Our anchor point, based on this image, is the bottom near corner of the altar base; adjust the object center to be at that vertex by moving the cube's vertices, and then position the cube at {1,-2,0}. Your camera view should now look like the image to the right, Initial Model.
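
The same cube setup can be scripted; a sketch assuming the Blender 2.4x Python API, shifting the vertices to anchor the object center just as we did by hand:

    # Sketch of the base cube, assuming the Blender 2.4x Python API: the
    # vertices are slid up so the object center sits on the bottom face.
    import Blender
    from Blender import Mesh, Scene

    scn = Scene.GetCurrent()
    me = Mesh.Primitives.Cube(2.0)     # a 2 BU cube centered on its origin
    for v in me.verts:
        v.co.z += 1.0                  # base now at the object center
        # (shift x/y the same way to anchor the center at a corner instead)
    ob = scn.objects.new(me)
    ob.setLocation(1.0, -2.0, 0.0)     # sitting on the ground plane
    ob.setSize(1.0, 2.0, 1.0)          # scale {1,2,1}, per the proportions
    Blender.Redraw()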

Now, I can see some issues with our reference image. First, our background image has been doctored. I see that the top of the image is not square, but concave. This means that some perspective distortion was done to eliminate lens distortion and make the image more orthographic. Second, parts of the image have been erased, increasing the challenge.

The main problem is that the cube's lines don't match up with the image. We know the cube is square, and our alignment system is set up for the cube to align to the axes of our 3D space, so it must be that the camera is not in the right place and angle. We therefore have seven variables to play with: camera location XYZ, rotation XYZ, and Lens. We have a nice clean line at the base of the altar, however, so let's align that first.

Mapping a Reference

There are two ways to align the cube to the image. One way is to keep the cube fixed and move the camera. The other way is to leave the camera where it is and move the cube. I like moving the camera, because the cube stays put in the 3D space and lines up nicely with the ground plane and other parts of the set. Effectively, I try to match my virtual camera to the angle and position of the photographer's camera when they took the picture.

First, align the near corner of the cube with the near corner of the altar by adjusting the camera to location {4.85, -4.85, 1.8}. You should now see that the near vertical edge lines up nicely with the altar base. Now we can see that the image was taken more from the side of the altar, since the near-narrow end of the cube does not line up with the image. Rotate the Z of the camera and adjust the XY location to keep that corner aligned, and we can see that the perspective does not match. Keep moving and rotating the camera until the bottom/base of the cube aligns with the image. Do not adjust the height (Z) of the camera.

Tutorial-Camera Mapping-Base Aligned.jpg

Recall your 3-point perspective training; to make the object appear flatter, angle the camera up more (closer to 90) and move the camera away in the XY to re-align. Work mostly in top view, adjusting the angle of the camera, and link in your mind how camera rotations and moves affect the apparent location of the base cube in the camera window. This is the frustrating part of the mapping process; mouse moves almost seem to work opposite to the intended effect. I find it easier to align one edge parallel to the image edge using the rotation angles, then adjust the camera location, and then iterate.

Eventually, you should get the image shown to the right, where the edges of the cube align with the base of the altar. I find it helpful to keep one edge aligned; the other edge then tells me whether I need to angle the camera down to make the cube appear more square, or to move the camera so that the edges align. Also, I switched my top and side views, since they were not being used, to camera view, and zoomed in on the three points of the altar that I was aligning with the cube as shown. The camera location is {3.484, -3.272, 1.8} and the rotation is {72.5, 0, 53.818}.

Correct for Lens Distortion

Now let's work on the upper edge of the base. Adjust the dimensions of the cube to match the image, {1.2,2.35,0.285}. Since our object center is that near bottom corner and it is aligned with the image, you can just click the arrows in the N Transform Properties panel to do this.

Now we can see an issue with lens distortion. Looking at the far right edge of the base, our cube is "taller" than the image. Either the actual slab is not square, or the camera's lens distorts the image more, as objects recede into the distance, than the 35mm lens we assumed. Strictly speaking, the Blender Lens number is not a true focal length in millimeters, but it is close enough that I treat the two as the same.

Recall that a smaller Lens size distorts more, and a longer lens distorts less. Decrease the lens size until your far edge lines up with the image, both the top and the bottom edge; in this case, 27. Then re-align the camera by changing the XY location and the RotX/RotZ. I found that a RotY of -1 was needed because the camera was not held level.
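
Why a smaller Lens value distorts more: in Blender 2.4x the Lens number behaves roughly like a focal length on a 32mm-wide virtual sensor, so the horizontal field of view is about 2·atan(16/lens). A quick check in plain Python:

    # Field of view for our two lens guesses; a wider FOV means stronger
    # perspective convergence (more apparent "distortion").
    import math

    def fov_degrees(lens):
        return math.degrees(2.0 * math.atan(16.0 / lens))

    print(round(fov_degrees(35.0), 1))   # ~49.1 degrees, the original guess
    print(round(fov_degrees(27.0), 1))   # ~61.3 degrees, the lens that fits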

Tutorial-Camera Mapping-Lens Aligned.jpg

Eventually, a camera Lens of 27, location {3.554,-3.325,1.8} and rotation {77.5, -1.0, 51.788} seems to line up pretty well. The cube needs dimensions of {1.7,3.55,0.34}. Double-check that the cube has not moved or rotated by accident.

Model Other Features

Tutorial-Camera Mapping-Final Model.jpg

Now we have a good base to build from! Model the rest of the altar in as much detail as you wish. Use box modeling for this particular feature. Keep in mind that your reference image may not be of a perfectly square object, due to wear or human error in making the object. For the image to the right, I modeled the top slab first, trying to keep it very rectangular, since I suspected the real one was as well. The better your model, the less noticeable any distortion will be when the camera moves.

For the ground plane, add a material to receive shadows. I used a brown dirt color sampled from the image.

UV Unwrapping

We need to tell Blender how to apply the image to the object through a set of UV Coordinates, called a UV Texture.

Change one window to a UV/Image Editor. In the header, use the image selector to select the altar image. With your cursor in a 3D View window, change to camera view (keypad 0) and textured draw mode, tab into edit mode, switch to face select mode, limit your selection to visible faces, and box-select all visible faces. You should select half of the faces.

While in Edit mode, press U to begin the unwrap process and select Project from View. This takes the view of the 3D View and puts those UVs in the UV/Image Editor in the same proportion and relation they are in the 3D View, which you have so carefully aligned.
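
Conceptually, Project from View runs each selected vertex through the camera's perspective projection and uses the resulting screen position as its UV. A simplified sketch in plain Python, with the camera at the origin looking down -Z (the real operator uses the full view matrix, which this ignores):

    # Perspective projection to UVs, the idea behind "Project from View".
    import math

    def project_from_view(verts, fov_deg, aspect):
        """verts are (x,y,z) in camera space, z < 0 in front of the lens."""
        half_w = math.tan(math.radians(fov_deg) / 2.0)  # image plane half-width
        half_h = half_w / aspect                        # ...and half-height
        uvs = []
        for x, y, z in verts:
            uvs.append((0.5 + 0.5 * x / (-z * half_w),  # map -1..1 to 0..1
                        0.5 + 0.5 * y / (-z * half_h)))
        return uvs

    # Two vertices 5 BU in front of a ~49-degree camera:
    print(project_from_view([(0.0, 0.0, -5.0), (1.0, 1.0, -5.0)], 49.1, 1.0))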

Tutorial-Camera Mapping-UV Aligned.jpg

You will now be working in the UV/Image Editor window, so you can maximize it with Ctrl↑. The perspective of the camera won't match exactly, so align the UVs of each corner to line up with the image. I find it easier in this example to select all the UVs of a corner, align one UV, de-select it once it is in place, and work my way up. Keep in mind that UV commands like scale and grab work the same as they do with vertices.

The UV Texture is named "UVTex" and is found in the Editing Mesh panel.

Projecting the Image onto the Object

Now it is time to project this image onto our model. We can do this through any one of three processes:

  • UV Projection - Cast the image onto the mesh like a projector
  • UV Texturing - Assign a texture channel to the material
  • UV Mapping - Use the compositor to change the texture in post-production

Your choice depends on your situation and the degree of control you want over how the image appears when used to color the object. UV Projection is the simplest, whereas UV Texturing gives you much more control over the image and how it is applied, and allows you to mix the image with other textures as well. UV Mapping is done when the matte is not available during rendering, or if the director changes their mind after the principal rendering is done. Each of these techniques is described below.

UV Projection

Tutorials-Camera Mapping-UVProject.jpg

With UV Projection, we project the image out onto the object through a projector, just like a slide projector projects a slide image out onto a screen. Since the image falls onto a 3D object, it colors the object even when viewed from the side.

First, we need a projector. Since we modeled the object in camera view, we want a projector where the camera is. We cannot use the camera, because if we move the camera, it would be like moving the slide projector as well. Any object can be a projector; commonly an empty or another camera is used. Select the camera and duplicate it via ⇧ ShiftD. In the Editing panel, rename the OB: object from Camera.001 to something meaningful like "Projector".

Tutorials-Camera Mapping-UVProjectMaterial.jpg

With the altar selected, add a new material in the buttons window, shading context. For UV Projection, in the Material panel, enable TexFace, Alpha, and Shadeless (aka self-illuminating). This makes sure that the object uses the image exactly as it was taken. In the Shaders panel, set the Lambert Diffuse slider to 1.0 and Ambient to 0.

Tutorials-Camera Mapping-Modifier.jpg

Now change to the Editing F9 context. In the Modifiers panel, add a UVProject modifier to the object. The only available UV Texture for the altar, called "UVTex", will automatically be selected in the list box. In the Ob: field, enter "Projector", and set the image to the one loaded in the UV/Image Editor - in this case stone_alter1.jpg.

If you render now, F12, you will see your altar textured with the image. The image will be rendered exactly as it was in the original, since Shadeless is enabled. This is the only way to get a texture to render without using a texture channel - by using TexFace and the UVProject modifier together.

Tutorials-Camera Mapping-UVProject-Shader.jpg

When this original projection is locked in, the Image is projected onto the altar and used to color the object. These colors become the base colors. You can move the projector now, and as long as you do not re-project, the colors will stay put. To re-project and override this initial projection, enable Override Image and the current orientation (location and rotation) of the projector will be used to project the indicated image onto the object, wiping out any previous image/texture/color that was there.

When you project an image as a texture, you cannot mix it with the base material or any other texture channels. A projected image overrides any base material/texture color settings. If Shadeless is disabled, Blender does respect Shaders panel settings (Diffuse, Specular, Alpha, Emit, etc.) and it can receive shadows.


UV Texturing

As an alternative to UV Projection, let's use this image as a texture by mapping the image onto the object through our UV layout, using a Material Texture channel. Using a texture channel gives us much finer control over how the image appears in our scene, allowing it to "fit in" better. We can change the base color to green or red, and have the image only partially affect the color of the altar object, for example. However, with this flexibility come many more settings.

Tutorials-Camera Mapping-UVMapMaterial

First, select our altar object in object mode. Give it a new material. Set the diffuse Lambert shader to 1, the specularity to 0 and hardness to 1 to mimic the kind of surface reflection from aged stone. Make the Ambient effect 0.1, since the altar would be affected by and reflect the color of the ambient light.

Tutorials-Camera Mapping-UVMapTexChannel

Add a new Texture channel for the material. Map Input to UV and enter the name "UVTex" in the UV: field. The texture will automatically map 100% to Color. Since the image is essentially a gray-scale image, with dark areas where there are crevices, go ahead and map the image also to Nor (Normals) with a strength of about 10. Now, when the light hits the surface, it will appear to be pitted and aged. In this image, the stone is very old. Dark areas are covered with dirt, which makes the surface very flat. White areas, though, reflect clean stone and possibly a smoother surface (since the dirt could not stick), and would thus have a higher degree of specularity. We can thus use the image to map slightly to specularity - enable Spec but set the Var slider to 0.1. The UVs you defined will now map as you had them in the UV/Image Editor window.

Tutorials-Camera Mapping-UVMapTexImage

In the Texture sub-context, specify the altar image. We do not want any mip-mapping or interpolation, and want to use the alpha channel if present. We are using a still image, not a movie or image sequence (but we could).

Now we need to light the altar. With UV Projection, we used a shadeless material that appeared exactly as the original image. With UV Texturing, we don't have to, and so we can control how the object responds to light.

Tutorial-Camera Mapping-Lighting.jpg

The image is well lit, but we can change that as we wish. We could make the material shadeless, which would match the lighting in the original image. However, it would not allow us to match the lighting used in the filming of our green-screen actor, for example with red or gold colors. In antiquity, this kind of altar was used to worship nature by placing flowers and fruits and grains atop the altar. To mimic the even lighting, you could add a hemi lamp positioned above the camera and pointing at the altar.

But let's make it dramatic and add a spot lamp, positioned above and just to the right of the camera, lighting the altar as you would in a play on a stage, and casting a visible shadow on the back of the model. Add a soft hemi left side light for fill, and you have the test image shown to the right.

UV Mapping

UV mapping, or re-mapping, is done in post-production, as an image adjustment technique. It requires some anticipation beforehand, or knowledge that some areas of the animation are not quite "settled" and therefore subject to change. For example, suppose that we know we want to animate the runes on the altar to glow, but we just have not gotten around to it yet. We have the model, but not the image texture.

Knowing this, we set up the altar with an Object index of 1 by setting the PassIndex: to 1 in its Objects and Links panel (Object F7 context, Object buttons subcontext). This creates a mask that outlines the solid areas of the altar in the resulting image.

We create a neutral gray (50%) shadeless material for the altar. Ugly, but effective.

We then set the Render Layer to put out a UV pass, and save our renders in the MultiLayer format. The Multilayer format can store multiple layers; in this case the three layers: colors, ObjectIndex, and the UV channel for the image.

Tutorials-Camera Mapping-UVNodeNoodle.jpg

Then, just in the nick of time, the matte artist delivers the image of the altar. This image is read in by the top Image Input node. It is drawn in the same perspective as the UV Layout; in fact, we could have used the UV Layout script to output an outline of what the UV Layout looked like, and the artist could color it in.

The Render Layers node is used here for clarity in this example; if you had already rendered and were post-processing a multilayer image sequence, you would use another Image Input node set to process the multilayer sequence. In either case, you would have the Image, UV, and IndexOB sockets available to you.

The UV socket is an image layer that uses the Red and Green channels to store the U and V values of the object as it appears in that frame. The example above shows a render in progress for frame 61, when the camera is looking at the side of the altar.

The MapUV node maps the matte of the altar (drawn in the UV Layout perspective) onto the current perspective of the altar object for the current frame. This mapping process produces a textured image of just the altar, as it appears in that frame, as if it had been painted from that perspective for that frame.

We then use the ID Mask node to direct the Mix node (Factor input socket) in integrating the textured image of the altar into the background. Where the mask is 0 (black), the top image socket pixel is used; where it is white, the bottom socket pixel (the textured altar image) is used for the Composite output.
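
Per pixel, the noodle boils down to: look up (u,v) from the UV pass, sample the new matte there, and let the ID mask decide whether that sample or the original render wins. A plain-Python sketch of the idea (real images would be whole buffers, and the compositor does this for us):

    # One pixel of the MapUV + ID Mask + Mix logic.
    def composite_pixel(bg_rgb, uv, index, sample_matte):
        """bg_rgb: rendered pixel; uv: (u,v) from the UV pass, or None off
        the object; index: IndexOB value; sample_matte: fn(u,v) -> rgb."""
        mask = 1.0 if index == 1 else 0.0         # ID Mask node, index 1
        if uv is None or mask == 0.0:
            return bg_rgb                         # top Mix socket: keep render
        textured = sample_matte(uv[0], uv[1])     # MapUV node: re-texture
        return tuple((1.0 - mask) * b + mask * t  # Mix node, Factor = mask
                     for b, t in zip(bg_rgb, textured))

    flat_stone = lambda u, v: (0.4, 0.35, 0.3)    # stand-in for the matte
    print(composite_pixel((0.1, 0.2, 0.5), (0.25, 0.6), 1, flat_stone))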

Camera Moves

Tutoria-Camera Mapping-Moved.jpg

Now comes the real test. First ensure you are on frame 1 and key the camera by selecting it, pressing I in the 3D View, and selecting LocRot. Go to frame 31 (up arrow three times), move the camera to location {4,-2,3}, rotate it to {65,-2,65}, and insert another LocRot key. This is an acid test to see how well our texturing and modeling were done. Look for any gaps where a face was not mapped, or for stray background elements - for example, a piece of grass from the image that was captured in the texture.
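
The same two-key move can be scripted; a sketch assuming the Blender 2.4x Python API and a camera object named "Camera":

    # Key the aligned pose at frame 1, then the test move at frame 31.
    import math
    import Blender
    from Blender import Object

    cam = Object.Get('Camera')
    Blender.Set('curframe', 1)
    cam.insertIpoKey(Object.IpoKeyTypes.LOCROT)   # lock in the aligned pose

    Blender.Set('curframe', 31)
    cam.setLocation(4.0, -2.0, 3.0)
    cam.setEuler(math.radians(65), math.radians(-2), math.radians(65))
    cam.insertIpoKey(Object.IpoKeyTypes.LOCROT)   # key the moved pose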

Congratulations! You have mapped a matte image onto a near-field object which retains its image integrity through a camera move!

If the green-screen footage was shot with a camera rig that records its moves digitally, you can use the data-to-IPO script to assign that data to your camera, and thus match-move the camera to the altar perfectly.

To correct the stretching of the scrollwork, simply use the knife tool to split the faces top to bottom in two; Blender will automatically add and remap the UVs. In the UV/Image Editor, adjust that middle set of UVs to better align with the midpoint of the altar. As you move the UVs, keep an eye on that textured 3D View, so you can see how moving the UVs affects the rendered result.

Background Mapping

A common background, or backdrop, is a building or skyscraper behind the actors. Not quite the mountains in the distance, but possibly behind the helicopter, or in back of superman as he flies through the town. These backdrops are very close, so there is a lot of movement and close-up perspective change that we want to achieve. For this example, we will be falling away from a skyscraper. You can get the blend here: Media:Tutorials-Camera_Mapping-Building.blend

Workflow

The process for obtaining a background matte is:

  1. Draw/paint/render the digital matte
  2. Construct the set for the shot, consisting of simple backdrops
  3. Texture the backdrops
  4. Render the camera moves

Backdrop Matte

Tutorials-Camera Mapping-Matte.png

In this example, we went ahead and constructed a model of a skyscraper. Because it consists of a large number of polys and takes a long time to render, we cannot use it in the main shot. So, we have to make a matte that will then be used in filming and compositing the final shot. In the sample file, the Building scene has the model of the skyscraper, and it is lit with two suns to simulate a bright and sunny day under blue skies. Alternatively, we could have just painted this picture in Gimp or Photoshop.

We render the matte by using an orthographic camera, taking a picture of the model from the same basic perspective that will be used in the final shot. We use an orthographic camera because we do not want any lens distortion introduced in the matte. The matte is a 2k x 3k resolution image, shot with an alpha channel (RGBA, in PNG format). This is very important so that the solid parts and edges of the building can blend in with the final background.

Set Construction

Tutorials-Camera Mapping-Set.png

In plays and movies, and in the digital world, we want very simple sets that have detailed textures, so that they are easy to move around and change, and do not detract from the detail (and rendering CPU cycles) of the main objects. In this case, you can see that our set consists of one object that is just two sides of a box. Since the shot is of the sides of the building, we don't even need the top or the bottom.

To create this set, start with the default cube and camera. Edit the cube to remove the 4 faces that do not face the camera. Then Scale the cube in the Z direction.

Texturing the Backdrops

Tutorials-Camera Mapping-UV-Texture Set.jpg

Here we have taken those two planes and unwrapped them. With your cursor in the camera view, tab into Edit mode and press U. From the popup Unwrap menu, select "Project from View (Bounds)". In your UV/Image Editor, you will see the two faces.

Load the matte image into the UV/Image Editor while still in Edit mode via Image->Open and finding the matte image. If your 3D View is in Textured display mode, you will now see the building image coloring the sides of the box.

Now let's use that texture so we can render. This is a two-step process.

Assign Material

Tutorials-Camera Mapping-Material.jpg

In the Buttons window, add a new material for the box. If one exists, that's fine; just make sure it is Shadeless (Material panel) and transparent (the A: slider at 0.0 in the Material panel). In the Textures panel, add a new Texture, or if one is there, set it to affect the Color and the Alpha (transparency) of the box sides, and Map it To the UV coordinates you just established.

Load Texture

Tutorials-Camera Mapping-Texture.jpg

Now we load up the texture by switching to the Shading Texture buttons. Choose the Image texture type, and in the Image panel, load the matte. Since the matte has an alpha channel, ensure that Premul is on to indicate that the alpha channel has been pre-multiplied. Also ensure that UseAlpha is enabled, so that the transparency of the image will be used when applied to the box sides.

Render Camera Moves

Tutorials-Camera Mapping-Camera.jpg

Shorter lenses provide more drama for close-in shots. In this example we use a camera with a Lens of 30. We want to simulate falling away from the building. At frame 1, position the camera slightly below the top of the building, looking straight at it. At frame 91, position the camera at the base of the backdrop (building), looking up at it. Remember to keyframe your camera positions by pressing I at each frame, locking in both location and rotation (LocRot) to create an IPO curve.

Animate your three-second drop shot and enjoy!

Background Multi-Matte

Building on your knowledge, let's now step it up a notch and use multiple mattes - actual digital camera pictures - to texture a 3D object so that the camera can fly around. In this example, we will use the following two images: Tutorials-Camera Mapping-BrickSide.jpg Tutorials-Camera Mapping-BrickEnd.jpg


We are also going to fake some sunlight, add some motion blur, and do some math. Go ahead and download these images at full resolution, and fire up Blender. I won't go into excruciating detail in this section, and will assume that you have completed the previous two examples.

Setup

First, add a ground plane at {0,0,0}. Looking at these pictures, you will notice that we do not have the lower near corner as a reference point. However, we do see the top edge of the building in the pictures. So, add a cube and re-align the vertices so that the object center is at the upper near corner of the cube. The center of the default cube should then be at {1,-1,2}, with the bottom of the cube "resting" on the ground plane.

Once again, we have no clue as to actual measurements, so we will have to eyeball it. We can go ahead and define a gray world sky, maybe with a stretched cloud texture mapped to white, to simulate the basic sky in the image.

Position First Reference

Tutorials-Camera Mapping-Multi-01.jpg

When you are using an image of a real thing as your texture, you have to get a sense of the general proportions of the real-world object, or the image will be stretched or condensed, and not look right. In this example, we have a brick building. If the bricks are not uniform in size - for example, looking like pebbles on one side and fat blobs on the other - it will not look real. If there is a car next to it, and the car is only twice the size of a brick, the viewer will notice that and will, possibly subconsciously, know that something is not right. The same is true for lighting; if the lighting in the scene does not match the lighting that was in effect when the picture was taken, the viewer may notice, and perhaps not even be able to articulate why it looks fake, but will be able to tell nonetheless.

The image BrickSide.jpg is 2k x 3k, or a 2:3 aspect ratio. Set it as the camera view background image, and set your render to the same aspect ratio, say 400x600. This picture was taken from the side at ground level, not from a helicopter, so roughly put the camera just about on the ground: location {7,2,1}, looking at the cube but slightly up and crooked, rotation {110,-5,110}. The image to the right shows what your camera window should look like; the 3D cursor shows the center of the cube.

Let's use a combination of positioning techniques and a parenting trick this time to align and size the cube and camera, and take some shortcuts.

In camera view, move the cube so that its center aligns with the upper left corner of the building in the background image. As you lift it, notice how your perspective of the cube changes and you can see the bottom of the cube. Press N and click on DimZ to stretch the cube back down to where it appears to meet the ground, about 4.5. Click DimY to stretch the cube to the right, and then DimX to the rear. Your cube Dimensions should be {2.3, 2.5, 4.5}. These match the proportions of the actual building.

UV Texture North/South Sides

Now I am going to trip you up, to see if you truly understand UV mapping. Select the cube and set its location to {1,-1,4.5}. The cube is now not even close to being correctly projected onto the background image. It does not have to be, because we are going to use UV mapping.

Tutorials-Camera Mapping-Multi-04.jpg

Change one of your windows to the UV/Image Editor. With the cube selected, tab into edit mode with the 3D View set to textured draw mode. Enable face selection and select only the north side of the cube. (To me, the north side is the top side in overhead view.) Unwrap using Project from View. Your UV/Image Editor should now have 4 UVs, corresponding to the 4 vertices of the selected face, but no image.

In the UV/Image Editor window, select the BrickSide image, and scale/position your UVs to match the edges of the sides of the building. In the image to the left, the four UVs are shown as bright orange dots because they are pinned. Do the same with yours by selecting them (e.g. box-select them all) and pressing P to pin.

In the 3D Window, swing your view around to the other side and select it. Repeat the process you just performed for this side. If you turn on Shadow mesh, you will see the outline for how you mapped the other face. Technically, the other side of the building could look completely different, but in this case, it doesn't and we really don't care. However, you now can see how you can change a billboard in the background of a scene.

Tutorials-Camera Mapping-Mult02.jpg

Tab out of edit mode and you will notice that the texture on your cube looks bulged. This is a tessellation artifact - the texture is interpolated across only a few large triangles - and it is easily remedied by adding a Subsurf modifier set to Simple Subdiv with a render Level of 2 or more.

In the Mesh Panel, rename the UV Texture "Side" as shown.

Tutorials-Camera Mapping-Multi-Mat-01.jpg

Assign a Material as before, mapping the image to the UV Texture "Side". This time, set the Emit to 0.1 to enhance the contrast of the image. If you make it shadeless, the image renders exactly as shot; otherwise you can mimic the building in sunlight, or at night, by lighting it. Rename the Material "Side".

Position the camera at {7,0,2} and angle {90,0,90}, looking dead on to the side of the building. Render. You have basically an orthographic image of the building. But wait, you say, the image of the building that I used was not straight at all!

Tutorials-Camera Mapping-Mult03.jpg

So long as the proportions of the object match the proportions of the real-world object, either through perspective matching or even blueprints, a picture of the real-world object will map, through UV Texture, onto the surface without distortion, regardless of object or camera position.

Even if the image itself is distorted, like this brick side picture, we can compensate by adjusting the UV coordinates, and Blender automatically stretches the image across the face to correct for the perspective distortion. Then, it takes a picture of that projected/mapped texture and renders it.
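
That stretch is just UV interpolation at work: move a corner UV onto the photo's skewed corner, and every interior sample is remapped along with it. A plain-Python sketch of sampling one quad face through four hand-placed corner UVs (the corner values here are made up):

    # Bilinear UV interpolation across a quad face.
    def face_uv(corner_uvs, s, t):
        """corner_uvs in order TL, TR, BR, BL; (s,t) in 0..1 across the face."""
        (u0, v0), (u1, v1), (u2, v2), (u3, v3) = corner_uvs
        top = ((1 - s) * u0 + s * u1, (1 - s) * v0 + s * v1)
        bot = ((1 - s) * u3 + s * u2, (1 - s) * v3 + s * v2)
        return ((1 - t) * top[0] + t * bot[0], (1 - t) * top[1] + t * bot[1])

    # Corner UVs traced over a skewed photo of a rectangular wall:
    corners = [(0.08, 0.95), (0.90, 0.88), (0.97, 0.10), (0.02, 0.05)]
    print(face_uv(corners, 0.5, 0.5))   # the face center still lands mid-wall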

UV Texture East/West Sides

Tutorials-Camera Mapping-Multi05.jpg

Now we are going to map the other ends, but to a different image. Warning: we are going to map them first in a way that is not 100% correct, and then change it. So do not be discouraged; this is a tutorial where we learn how, and then improve.

We are going to step up our efficiency and map both faces at once. Tab into edit mode and, in the 3D View, select the East face and shift-select the West face (the unmapped sides of the building). Your UV/Image Editor window appears to show only four UVs. Actually there are eight UVs, but they are overlapped on top of one another. A UV face is like a piece of onion-skin paper.

Notice that you have not unwrapped anything, and thus you have not created another UV Texture. You are adding these sides to the UV Texture called "Side".

A UV Texture is only created when you specifically unwrap a set of faces. Simply selecting faces adds them to the active UV Texture.

A UV Texture can contain the mapping for many faces across many images. So far, we have mapped two faces to one image; next we will map two more faces to another image, and the remaining two faces (the top and bottom) will stay unmapped.

In the UV/Image Editor window, select Image->Load and load in the BrickEnd.jpg image. This image does not show the entire building, only the sides we want to map. Look at your 3D view to see if the UVs need to be rotated.

Working in the UV/Image Editor, box-select the UVs at each corner of a face and move them into position. Repeat for each corner. You may have to tab out of edit mode to see your changes reflected in the 3D View.

If an image is on its side, you can Rotate the UVs. You can also Scale them (hold Ctrl down for exactness). If you enable Draw Faces, the selected face will be colored pink. Pin them to keep them from moving if you do any mesh work.

Multiple Materials

Tutorials-Camera Mapping-Multi-Mat-03.jpg

When you tab out of edit mode, everything may look great in the 3D View. Now render an image showing two sides. Uh-oh! The issue is that a texture channel can only map one image to a set of UVs, and we are actually using two. So, we need some way to tell Blender to use one image texture for the East/West faces, and another texture for the North/South faces. We do this using multiple materials.

You already have the first material defined for the North/South sides. With the East/West sides selected, in the Editing (F9) button context, click New and Assign in the Link and Materials panel, material index section. These fields are highlighted in yellow in the image to the right.

Go ahead and review which faces are assigned to which material by using the Select and Deselect buttons. Selecting a face in 3D View will change the panel to show its material.

With 2Mat2 showing, switch to the Shading context. Notice that you have a direct copy of the original Material, but it is named Side.001. It is now quite easy to change the texture. Switch to the Texture subcontext and select BrickEnd.jpg as the image, not BrickSide.jpg.

Multiple Material Test Render

Now the North/South faces have material Side, which maps the Side UV Texture to image BrickSide.jpg, and the East/West faces have material Side.001, which maps the same UV Texture to image BrickEnd.jpg. For both materials, the texture channel settings are the same; they simply map different images to the faces through the UV Texture.

Correct Geometry

Tutorials-Camera Mapping-Multi-07.jpg

As long as the camera stays at ground level, and matches the perspective of the original shot, all is well in CG Land. But what happens when the camera moves? Now UV texturing works against us, since the camera perspective changes but the image does not. Thus, you could end up with something like the shot shown to the right. We used a single face for the East/West end, but on the real building that image actually spans two faces. The building, apparently, is shaped (in overhead view) like an H, where two columns are connected by a middle.

Tutorials-Camera Mapping-Multi-08.jpg

The only way to really correct this is to use a more physically accurate model of the building, and texture it according to the faces shown in the image. Fortunately, it's pretty easy. Scale the DimX to 1.0, and re-map the narrow side now to only one leg of the building. Duplicate the building, rotate it 180 degrees, and position it next to the first. Then make another duplicate, and scale it down in the Y direction and place it in between the two legs. Slightly offset it so that the little windows line up and are not cut off.

Tutorials-Camera Mapping-Multi-09.jpg

Now, you have three objects, all sharing the same two materials, each of which map an image through the same UV Texture to color the mesh. Lights add a sense of realism. This geometrically accurate model looks real from any camera angle and position, giving us much more flexibility in filming/compositing with the live action.

I want to point out that projecting an image texture on a real-world object can only go so far. At some point, you have to start modeling the details of the object in order to get ultra-realistic renders. For example, on this building, there are some antennas and satellite dishes on top; those really need to be modeled. In addition, there is a slight indent at each floor level; either we need to clean up the images by erasing alpha (see Texture Paint in the User Manual) or model each floor.

This holds true especially for shadows. A straight edge will cast a simple straight shadow. If the edge is supposed to be knurled, you have to model that in order to get an accurate shadow.

Hints and Tricks

Oversampling

If you download these images, you will see that they are 2k x 3k. If we were to use these in an HD presentation, we really could not zoom in on them much, since the HD resolution itself is 2k. If we zoomed in, say 2:1, two pixels of the HD screen would be filled with 1 pixel of the matte, and the image would start to appear blocky.

Fortunately, we have a trick: oversampling. By setting Oversampling to 5, and/or interpolating the image texture, we can get really close without getting the jaggies. The tradeoff is memory needs, resolution and rendering time, versus fuzziness.
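
The break-even point is easy to estimate; roughly, blockiness appears once one texel covers more than one screen pixel. Plain Python, using the numbers above:

    # Screen pixels per matte texel at a given zoom factor.
    matte_px, screen_px = 2000.0, 1920.0   # matte width vs HD frame width

    def px_per_texel(zoom):
        return zoom * screen_px / matte_px

    print(round(px_per_texel(1.0), 2))   # ~0.96: about one texel per pixel
    print(round(px_per_texel(2.0), 2))   # ~1.92: each texel spans ~2 pixels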

Image Sampling

Use the eyedropper to sample actual pixels from the image when choosing colors. To get the eyedropper, click on the swatch. For example, in the Multiple Material Test Render image, I sampled the image to get the exact sky color.

Aligning Images

Tutorials-Camera Mapping-Multi-10.jpg

You may notice that the floor levels from the North side do not align with the floor levels from the East side, for example. The overall number of floors is the same, but the individual floors do not match up. This is because the two pictures were taken from very different approach angles, so a different number of pixels covers, for example, the first floor on each side. Even though the images are the same size, the rate at which the floors shrink as the building recedes into the sky differs between them.

To correct this, simply use your knife tool and do a multi-cut horizontally across one of the buildings, cutting the faces into fourths (3 cuts). Since you duplicated the building, they all share the same mesh, and changes that you make to one will automatically be shared by the other clones.

Your UVs will be cut as well. In 3D View, select the top two faces. In the UV/Image editor window, box select the UVs that map the edge between the top two faces. If you slide that edge down, the same number of image pixels will be stretched across a broader area.

Move the UVs up or down to align the floors of one side with the other, checking with a test render. Work your way down the side of the building in this fashion, until all floors align. This is another subtlety that viewers pick up subconsciously, and it affects the realism of your textures.