
Blender For Dummies (2015)

Part IV

Sharing Your Work with the World

Chapter 15

Compositing and Editing

In This Chapter

Taking a look at editing and compositing

Editing video and animations with Blender’s Video Sequence Editor

Compositing images and video

In live-action film and video, the term post-production usually includes nearly anything associated with animation. Nearly every animator or effects specialist has groaned upon hearing a director or producer say the line, “We’ll fix it in post.” Fortunately, in animation, post-production is more specific, focusing on editing and compositing.

This chapter is a quick guide to editing and compositing, using Blender’s Video Sequence Editor and Node Compositor. Understand that these topics are large enough for a book of their own, so the content of this chapter isn’t comprehensive. That said, you can find enough solid information here and in Blender’s online documentation (http://blender.org/manual) to figure it out. I explain Blender’s interface for these tools, as well as some fundamental concepts, including nonlinear editing and node systems. With this understanding, these tools can help turn your work from “Hey, that’s cool” to “Whoa!”

Comparing Editing to Compositing

Editing is the process of taking rendered footage — animation, film, or video — and adjusting how various shots appear in sequence. You typically edit using a nonlinear editor (NLE). An NLE, like Apple’s Final Cut Pro or Adobe Premiere, differs from older linear tape-to-tape editing systems that required editors to work sequentially. With an NLE, you can easily edit the beginning of a sequence without worrying too much about it messing up the timing at the end. Blender has basic NLE functionality with its Video Sequence Editor.

Compositing is the process of mixing animations, videos, and still images into a single image or video. It’s the way that credits show up over footage at the beginning of a movie, or how an animated character is made to appear like she is interacting with a real-world environment. Blender has an integrated compositor that you can use to do these sorts of effects, as well as simply enhance your scene with effects such as blur, glow, and color correction.

Working with the Video Sequence Editor

The default Video Editing screen layout in Blender is accessible through the Screens datablock in the header or by pressing Ctrl+← four times from the Default screen layout. The large editor across the middle of the layout is a Video Sequence Editor (VSE) in Sequencer view. In this view, you can add and modify sequences, called strips, in time. The numbers across the bottom of the Sequencer correspond to time in the VSE in seconds. The numbers to the left are labels for each of the tracks, or channels, that the strips live in. The upper left area is a Graph Editor, used in this case for tweaking the influence or timing of individual strips’ properties. To the right of the Graph Editor is a VSE in Preview view. When you’re editing, the footage under the time cursor appears here. At the bottom is a Timeline, which, at first, may seem odd. However, as when animating, the benefit of having the Timeline around all the time is that you can use the Sequencer to edit a specific portion of your production, while still having the ability to scrub the full piece. The playback controls are also handy to have on-screen.

tip As with the Animation layout (see Chapter 10), I like to make a few tweaks to the Video Editing layout. When I first start editing in Blender’s VSE, I’m usually more concerned with importing image sequences and movie clips than I am with tweaking the timing of edits. So one of the first changes I make is swapping the Graph Editor for a File Browser with files set to display thumbnails. This way, you can easily drag and drop footage from the File Browser directly onto the Sequencer. You may also note that you’re missing the Properties editor. While you’re in the process of editing, the Properties editor isn’t critical to have open, but it’s useful when you’re doing your initial setup. For that reason, I often split the Sequencer and make a narrow strip on the right a Properties editor (Shift+F7). Figure 15-1 shows my modified Video Editing layout.

image

Figure 15-1: A customized Video Editing layout for when you start a project.

The settings in the Dimensions panel of Render Properties are important for editing in Blender because that’s where you set the frame rate, measured in frames per second (fps), and the resolution for the project. If you’re editing footage that runs at a different frame rate or resolution than the one that is set here, that footage is adjusted to fit. So if your project is at standard HD settings (24 fps and 1920 x 1080 pixels in size), but you import an animation rendered at 50 fps and at a size of 640 x 480 pixels, the footage appears stretched and in slow motion.
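
If you prefer to set these project settings from a script, the following minimal Python (bpy) sketch does the same thing as clicking through the Dimensions panel. The specific values are just examples.

import bpy

scene = bpy.context.scene
scene.render.fps = 24                     # frame rate for the project
scene.render.resolution_x = 1920          # output width in pixels
scene.render.resolution_y = 1080          # output height in pixels
scene.render.resolution_percentage = 100  # render at full size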

Besides your Render Properties, the Properties region of the Sequencer (press N to toggle its visibility) is relevant to your editing process. Because the default layout doesn’t have any strips loaded, this region appears as a blank section on the right side of the Sequencer. However, if you have a strip in the Sequencer and you have it selected, this region is populated and appears like the image in Figure 15-2. (The exact panels in the Properties region change depending on the type of strip you have selected. Figure 15-2 shows panels for a movie strip.)

image

Figure 15-2: The Sequencer’s Properties region gives you controls on a selected strip.

As you may guess, the Properties region has the most relevant options for working in the VSE. Six panels are available: Edit Strip, Strip Input, Effect Strip, Filter, Proxy/Timecode, and Modifiers. For most strip types, the Edit Strip, Strip Input, and Filter panels are available. The Effect Strip and Proxy panels are available only for certain types of strips. For example, audio strips can’t have proxies, so that panel doesn’t show up when you select a strip of that type.

Following are descriptions for the most commonly used panels:

· Edit Strip: The buttons in this panel pertain to where a selected strip appears in the VSE and how it interacts with other strips. You can name individual strips, control how a strip blends with lower channels, mute or lock a strip, set the strip’s start frame, and change the strip’s channel.

· Strip Input: The buttons in this panel allow you to crop and move the strip around the frame, as well as control which portion of the strip shows up in the Sequencer. When you have an audio strip selected, this panel has a few different controls and is labeled Sound.

· Effect Strip: This panel appears only for certain effect strips that have editable attributes. I give more detail on some effects that use this panel in the section “Adding effects,” later in this chapter.

The Timeline at the bottom of the screen controls how Blender plays back your sequence. Its most relevant setting for the VSE is the Sync drop-down menu. To ensure that your audio plays back in sync with your video while editing, make sure that this drop-down menu is set to either AV-sync or Frame Dropping. Of the two, I tend to get better results when I choose AV-sync. Nothing is worse than doing a ton of work to get something edited, only to find out that none of the audio lines up with the visuals after you render. Figure 15-3 shows the options in the Sync drop-down menu of the Timeline.

image

Figure 15-3: Choose Frame Dropping or AV-sync to ensure that your audio plays back in sync with your video.
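
The sync setting is also exposed to Python if you’re scripting your editing setup. Here’s a tiny sketch; the frame range values are just examples.

import bpy

scene = bpy.context.scene
scene.sync_mode = 'AUDIO_SYNC'   # AV-sync; use 'FRAME_DROP' for Frame Dropping
scene.frame_start = 1            # example frame range for the edit
scene.frame_end = 250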

warning Before I get heavily into using the VSE, let me first say that Blender’s VSE is not a complete replacement for a traditional NLE. Although it is a very powerful tool, the VSE is best suited for animators who want to create a quick edit of their work. Professional video editors may have trouble because VSE is missing a number of expected features, such as a listing of available footage, sometimes called a clip library or bin. You can use the File Browser in thumbnail view to partially emulate the behavior of a bin, but it’s still not quite the same. That said, all of the open movie projects (Elephants Dream, Big Buck Bunny, Sintel, Tears of Steel, and the upcoming Cosmos Laundromat) were successfully edited using Blender's VSE. I find the VSE more than sufficient for quite a few of my own projects, so you ultimately have to decide for yourself.

Adding and editing strips

If you want to use the VSE to edit footage, you have to bring that footage into Blender. If you’re using the modified Video Editing screen layout I describe in the preceding section in this chapter, you can use the File Browser and navigate to where your footage is. Then you can just drag and drop that file from the File Browser directly into the Sequencer. The ability to drag and drop from the File Browser is an extremely handy feature that even many veteran Blender users don't know about. In fact, you can even drop media files from your operating system's file manager, too. Alternatively, you can add a strip by hovering your mouse cursor in the Sequencer and pressing Shift+A (just like adding objects in the 3D View). Figure 15-4 shows the menu of options that appears.

image

Figure 15-4: The Add Sequence Strip menu.

You can import a variety of strips: scenes, clips, masks, movies, still images, audio, and effects. These strips are represented by the following options in the menu:

· Scene: Scene strips are an extremely cool feature unique to Blender. When you select this option, a secondary menu pops up that allows you to select a scene from the .blend file you’re working in. If you use a single .blend file with multiple scenes in it, you can edit those scenes together to a final output sequence without ever rendering out those scenes first! This handy feature allows you to create a complete and complex animation entirely within a single file. (This feature also works with scenes linked from external .blend files, but that's an advanced use.) Scene strips are also a great way to use Blender for overlaying graphics, like titles, on video.

· Clip: Clips are newer strips that you can add in Blender. They relate to Blender's built-in motion tracking feature. A Clip is similar to a Movie strip (covered next), except that it's already loaded in Blender in the Movie Clip editor and has its own datablock. In contrast, Movie strips by themselves don't have any associated datablocks; they only exist on the VSE's timeline.

· Mask: To add a Mask strip, you must first create a Mask datablock in the UV/Image Editor. To create a mask, left-click the Editing Context drop-down menu in the UV/Image Editor's header and choose Mask. Masks are pretty heavily used in advanced motion tracking and compositing.

· Movie: When you select this option, the File Browser that comes up allows you to select a video file in one of the many formats Blender supports. On files with both audio and video, Blender loads the audio along with the video file as separate strips in the sequencer.

· Image: Selecting this option brings up a File Browser that allows you to select one or more images in any of the formats that Blender recognizes. If you select just one image, the VSE displays a strip for that one image that you can arbitrarily resize in time. If you select multiple images, the VSE interprets them as a sequence and places them all in the same strip with a fixed length that matches the number of images you selected.

· Sound: This option gives you a File Browser for loading an audio file into the VSE. When importing audio, you definitely want to import sound files in WAV or FLAC (Free Lossless Audio Codec) formats, which give you the best quality sound. Although Blender supports other audio formats like MP3, they’re often compressed and sometimes sound bad when played.

· Effect Strip: This option pops out a secondary, somewhat lengthy, menu of options. These strips are used mostly for effects and transitions. I cover them in more depth in the next section.
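
If you’d rather script your imports than use the Shift+A menu or drag and drop, Blender’s Python API can build strips for you. The following is a minimal sketch; the file paths, names, channels, and durations are placeholders that you’d replace with your own.

import bpy

scene = bpy.context.scene
scene.sequence_editor_create()                 # make sure the scene has a sequence editor
seqs = scene.sequence_editor.sequences

movie = seqs.new_movie(name="shot_01", filepath="//footage/shot_01.mov",
                       channel=1, frame_start=1)
audio = seqs.new_sound(name="shot_01_audio", filepath="//footage/shot_01.wav",
                       channel=2, frame_start=1)
title = seqs.new_image(name="title_card", filepath="//stills/title.png",
                       channel=3, frame_start=1)
title.frame_final_duration = 72                # hold the still for 3 seconds at 24 fps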

When you load a strip, it’s brought into the VSE under your mouse cursor. Table 15-1 shows helpful mouse actions for working efficiently in the VSE.

Table 15-1 Helpful Mouse Actions in the VSE

· Right-click: Select a strip to modify. Right-clicking the arrow at either end of the strip selects that end of the strip and allows you to trim or extend the strip from that point.

· Shift+right-click: Select multiple strips.

· Middle-click: Pan the VSE workspace.

· Ctrl+middle-click: Zoom the height and width of the VSE workspace.

· Scroll wheel: Zoom the width of the VSE workspace in and out.

· Left-click: Move the time cursor in the VSE. Left-clicking and dragging scrubs the time cursor, allowing you to view and hear the contents of the Sequencer as fast or slow as you move your mouse.

One thing you may notice is that quite a few of the controls are very similar to those present in other parts of Blender, such as the 3D View and Graph Editor. This similarity is also true when it comes to the hotkeys that the VSE recognizes, although a few differences are worth mentioning. Table 15-2 lists some of the most common hotkeys used for editing.

Table 15-2 Common Features/Hotkeys in the VSE

· G (Strip⇒Grab/Move): Grabs a selection to move elsewhere in the VSE.

· E (Strip⇒Grab/Extend from frame): Grabs a selection and extends one end of it relative to the position of the time cursor.

· B: Border select, for selecting multiple strips.

· Shift+D (Strip⇒Duplicate Strips): Duplicates the selected strip(s).

· X (Strip⇒Erase Strips): Deletes the selected strip(s).

· K (Strip⇒Cut (soft) at Current Frame): Splits a strip at the location of the time cursor, similar to the razor tool in other NLEs.

· M (Strip⇒Make Meta Strip): Combines selected strips into a single “meta” strip.

· Alt+M (Strip⇒UnMeta Strip): Splits a selected meta strip back into its original individual strips.

· Tab: Tabs into a meta strip to allow modification of the strips within it.

· H (Strip⇒Mute Strips): Hides a strip so that it isn’t played.

· Alt+H (Strip⇒Un-Mute Strips): Unhides a strip.

· Shift+L (Strip⇒Lock Strips): Prevents selected strips from being moved or edited.

· Shift+Alt+L (Strip⇒Unlock Strips): Allows editing on selected strips.

· Alt+A: Plays the animation starting from the location of the time cursor.

Editing in the VSE is pretty straightforward. If you have two strips stacked in two channels, one above the other, when the timeline cursor gets to them, the strip that’s in the higher channel takes priority. By default, that strip simply overrides, or replaces, any of the strips below it. You can, however, change this behavior in the Edit Strip panel of the VSE’s Properties region. The drop-down menu labeled Blend controls the blend mode of the selected strip. You can see that the default setting is Replace, but if you left-click this button, you get a short list of modes similar to the layer blending options you see in a program like Photoshop or GIMP. Besides Replace, the ones I use the most are Alpha Over and Add.

The Graph Editor is useful for animating all kinds of values in Blender, and it’s quite useful for strips in the VSE. One of the primary animated values for strips is the Opacity slider in the Edit Strip panel. This slider controls the influence factor that the strip has on the rest of the sequence. For example, on an image strip — say, of a solid black image — you can use the Graph Editor to animate the overall opacity of that strip. Values less than 1.0 make the image more transparent, thereby giving you a nice way to create a controlled fade to black. The same principle works for sound strips, using the Volume slider in the Sound panel of the VSE’s Properties region. A value of 1.0 is the sound clip’s original loudness; lower values gradually fade the sound away, and values greater than 1.0 amplify the sound to a level greater than the original volume.

By combining the Graph Editor with Blending modes, you can create some very cool results. Say that you have a logo graphic with an alpha channel defining the background as transparent, and you want to make the logo flicker as if it’s being seen through poor television reception. To make the logo flicker, follow these steps:

1. Add a logo image to the VSE (Shift+A⇒Image).

2. Make sure that the logo’s strip is selected (right-click) and, in the Edit Strip panel, change the strip’s blend mode to Alpha Over.

3. Insert a keyframe for the strip’s opacity (right-click Opacity in the Edit Strip panel and choose Insert Keyframe or press I with your mouse hovered over the Opacity value field).

4. In the Graph Editor, tweak the Opacity f-curve so that it randomly bounces many times between 1.0 and 0.0 (Ctrl+left-click).

After tweaking the curve to your taste (see Chapter 10 for more on working in the Graph Editor), you should now have your flickering logo effect.
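
If you want to rough in that flicker without hand-placing every key, you can script the same steps with Python. This minimal sketch assumes a placeholder image path and frame range; it keys the Opacity value (blend_alpha in the API) to a random on/off value every few frames.

import bpy
import random

scene = bpy.context.scene
scene.sequence_editor_create()
seqs = scene.sequence_editor.sequences

logo = seqs.new_image(name="logo", filepath="//stills/logo.png",
                      channel=2, frame_start=1)   # placeholder path
logo.frame_final_duration = 96
logo.blend_type = 'ALPHA_OVER'                    # same as the Blend drop-down menu

# Insert an Opacity keyframe every 4 frames, randomly full on or full off.
for frame in range(1, 97, 4):
    logo.blend_alpha = random.choice([0.0, 1.0])
    logo.keyframe_insert(data_path="blend_alpha", frame=frame)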

Adding effects

Pressing Shift+A in the VSE provides you with quite a few options other than importing audio and video elements. A large portion of these options are effects, and many typically require that you select two strips that are stacked on top of each other in the VSE. When necessary, I point out which effects these are.

tip Pay close attention to the order in which you select your strips because it often has a dramatic influence on how the effect is applied. The second strip you select is the active strip and the primary controller of the effect.

Here’s a list of the available options:

· Add/Subtract/Multiply: These effects are the same as the blend mode settings in the Edit Strip panel of the Properties region. Unless you really need some special control, I recommend using those blend modes rather than adding these as effects sequences. It works just as well and keeps your VSE timeline from getting too cluttered. Using these effects requires that you select two strips before pressing Shift+A and adding any of them. The following steps give a quick example of how to use them:

1. Select the strip you want to start with.

2. Shift+right-click the strip you want to transition to.

3. Press Shift+A⇒Effect Strip⇒Add.

A new red strip is created that’s the length of the overlap between your two selected strips. On playback (Alt+A), the bright parts of the upper strip amplify the bright parts of the lower strip. (For a scripted take on adding effect strips, see the sketch at the end of this list.)

· Alpha Over/Alpha Under/Over Drop: These effect strips control how one strip’s alpha channel relates to another. They’re also available as Blending modes, and I suggest that you apply these effects that way in most cases. One example of a time where it makes sense to use these as strips is if you need to apply another effect (such as a Gaussian Blur) to a collection of strips that have been alpha'd over one another. Otherwise, stick with the blend mode.

· Cross/Gamma Cross: These effects are crossfades or dissolves between overlapping strips. The Cross effect also works in audio to smoothly transition from one sound to another. For images and video, Gamma Cross works the same as Cross, but takes the additional step of correcting the color in the transition for a smoother dissolve.

· Gaussian Blur: This effect works like a simplified version of the Blur node in the Node Compositor (covered later in this chapter). Using the Size X and Size Y values in the Effect Strip panel of the Properties region, the Gaussian Blur effect lets you blur the images or video in a strip.

· Wipe: Wipe is a transition effect like Cross and Gamma Cross. It transitions from one strip to another like a sliding door, à la the Star Wars movies. This effect also uses the Effect Strip panel in the Properties region to let you choose the type of wipe you want, including single, double, iris, and clock wipe. Also, you can adjust the blurriness of the wiping edge and the direction the wipe moves.

· Glow: The Glow effect works on a single strip. It takes a given image and makes the bright points in it glow a bit brighter. Ever wonder how some 3D renders get that glowing, almost ethereal quality? Glow is one way to do it. The Effect Strip panel in the Properties region lets you adjust the amount of glow you have and the quality of that glow.

· Transform: This effect provides very basic controls for the location, scale, and rotation of a strip. The effect works on a single strip, and you can find its controls on the Effect Strip panel of the Properties region. You can use f-curves on this effect strip to animate the transform values.

· Color: This handy little option creates an infinitely sized color strip. You can use this effect to do fades or set a solid background color for scenes.

· Speed Control: With the Speed Control effect, you can adjust the playback speed of individual strips. In the Effect Strip panel of the Properties region, you can choose to influence the Global Speed (1.0 is regular speed; setting it to 0.50 makes the strip play half as fast; setting it to 2.0 makes it play twice as fast). You can also have more custom control using the Graph Editor.

· Multicam Selector: If you’re using Scene strips in the Sequencer and you have multiple cameras in your scene, you can use this effect strip to dictate which camera you’re using from that scene. As with most things in Blender, you can animate that camera choice, allowing you to easily do camera switching in your scene.

· Adjustment Layer: Adjustment Layer strips are a bit unique. Like Color strips, you don't need to have any other strips selected when you add an Adjustment Layer strip. The cool thing is that Adjustment Layer strips can make simple modifications (adjustments) to the look of all strips below them on the VSE's timeline. If you add an Adjustment Layer strip and look at the mostly empty Properties region, you may wonder exactly how you add those adjustments. This is where you can apply VSE modifiers. In the Modifiers panel of the Properties region, you can left-click the Add Strip Modifier drop-down menu. From here, you can make more modifications, including color correction, brightness and contrast adjustments, and even masks.
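
As promised earlier in this list, here’s a scripted take on adding an effect strip. This minimal Python sketch assumes two overlapping strips named shot_a and shot_b already exist in the sequencer; it adds a Cross transition over the frames where they overlap.

import bpy

scene = bpy.context.scene
se = scene.sequence_editor

a = se.sequences_all["shot_a"]   # placeholder strip names
b = se.sequences_all["shot_b"]

cross = se.sequences.new_effect(
    name="Cross_ab", type='CROSS', channel=3,
    frame_start=b.frame_final_start, frame_end=a.frame_final_end,
    seq1=a, seq2=b)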

Rendering from the Video Sequence Editor

To render your complete edited sequence from the VSE, the steps are largely identical to the ones outlined for creating a finished animation in Chapter 14. Actually, you must do only one additional thing.

In Render Properties, have a look in the Post Processing panel. Make sure that the Sequencer check box is enabled. Activating this check box lets Blender know that you want to use the strips in the VSE for your final output rather than anything that’s in front of the 3D camera. If you don’t enable this check box, Blender just renders whatever the camera sees, which may be just the default cube that starts with Blender, or whatever else you might place in front of the 3D camera.
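
In script form, rendering your edit boils down to enabling that same check box and kicking off an animation render. The output path and frame range below are placeholders.

import bpy

scene = bpy.context.scene
scene.render.use_sequencer = True          # the Sequencer check box in Post Processing
scene.frame_start = 1
scene.frame_end = 250
scene.render.filepath = "//render/edit_"   # placeholder output path
bpy.ops.render.render(animation=True)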

Working with the Node-Based Compositor

Compositing is the process of mixing multiple visual assets to create a single image or sequence of images. By this definition, you may notice that technically Blender’s VSE qualifies as a sort of compositor because you can stack strips over each other in channels and blend them together with effects and transitions. Although this statement is true, the VSE is nowhere near as powerful as the Node Compositor is for mixing videos, images, and other graphics together.

tip As designed, the VSE is intended for working with multiple shots, scenes, images, or clips of video. It's also meant to play back in real time (or as near to that as possible). In contrast, the Compositor is intended for working with a single shot, and it's most certainly not meant for working in real time. There is a little bit of overlap in the functionality of these two parts of Blender, but depending on the task at hand, one is more suitable than the other.

What makes the Node Compositor so powerful? Well, it’s in the name: nodes. One of the best ways to understand the power of nodes is to imagine an assembly line. In an assembly line, each step in the process depends on the step immediately preceding it and feeds directly into the following step. This methodology is similar to the layer-based approach used in many image manipulation programs like Photoshop and GIMP. Each layer has exactly one input from the previous layer and one output to the following one. Figure 15-5 illustrates this idea.

image

Figure 15-5: An assembly line approach, like the layers in GIMP or Photoshop.

That approach works well, but you can enhance the assembly line a bit. Say that some steps produce parts that can go to more than one subsequent step, and that other steps can take parts from two or more earlier steps and make something new. And take it a bit farther by saying that you can duplicate groups of these steps and integrate them easily into other parts of the line. You then have an assembly network like the one depicted in Figure 15-6. This network-like approach is what you can do with nodes.

image

Figure 15-6: Turning a simple assembly line into a complex assembly network.

Understanding the benefits of rendering in passes and layers

Before taking full advantage of nodes, it’s worthwhile to take a quick moment and understand what it means to render in layers. Assume for a moment that you animated a character walking into a room and falling down. The room is pretty detailed, so it takes a fairly long time for your computer to render each frame. However, because the camera doesn’t move, you can really render the room just once. So if you render your character with a transparent background, you can superimpose the character on just the still image of the room, effectively cutting your render time in half (or less)!

That’s the basics of rendering in layers. The preceding example had two layers, one for the room and one for the character. In addition to rendering in layers, each layer can contain multiple passes, each with more detailed content relevant to that layer. For example, if you want to, you can have a render pass that consists of just the shadows in the layer. You can take that pass and adjust it to make all the shadows slightly blue. Or you can isolate a character while she’s walking through a gray, blurry scene.

Another thing to understand for compositing 3D scenes is the concept of Z-depth. Basically, Z-depth is the representation of the distance that an object is from the camera, along the camera’s local Z-axis. The compositor can use this Z-depth to make an object look like it fits in a scene even though it was never rendered with it.

In Blender, all of this functionality starts with render layers. It’s important to make a distinction here between Blender’s regular layer system and render layers. Although render layers do use Blender’s layer system, they are separate things. Basically, you can decide arbitrarily which Blender layers you’d like to include or exclude from any of the render layers you create. All the controls for render layers are in Render Layers Properties, the second context button in the Properties editor’s header. Figure 15-7 shows the panels in Render Layers Properties.

image

Figure 15-7: The panels in Render Layers Properties. On the left is what you see if rendering with Cycles; on the right is the Blender Internal (BI) version.

At the top of Render Layers Properties is a list box containing all the render layers in your scene. By default, there’s only one, named RenderLayer. Double-click items in the list box to change their names. Below the list box, the Layer panel features three blocks of Blender layer buttons. The first block shows the scene layers: the ones that are actively sent to the renderer.

The next set of Blender layer buttons in the Layer panel determine which Blender layers actually belong in the selected render layer. For example, if you’re creating a render layer for background characters and you have all your background characters on layers 3 and 4, you Shift+left-click those layers in this block.

tip If you Shift+left-click and keep the mouse button held down, you can enable or disable multiple layer buttons by dragging your mouse cursor over them.

The third set of Blender layer buttons are mask layers. The objects in these layers explicitly block what’s rendered in this render layer, effectively masking them out. You typically use these layer buttons for more advanced compositing scenarios.

The series of check boxes that fill the lower half of the Layer panel, along with those in the Passes panel beneath it, are where the real magic and power of render layers lie. The first set of check boxes specifies which pipeline products to deliver, or include (hence the label), to the renderer as input. These pipeline products refer to major renderable elements of the selected render layer that are seen by the render engine. If you disable Halo, for example, no halo materials are sent to the renderer. Basically, they’re omitted. You can use these check boxes in complex scenes to turn off pipeline features you don’t need in an effort to shorten your render times.

remember The passes and pipeline products that you have available in your render layers vary a bit, depending on which render engine you're using. For instance, the example from the preceding paragraph is only relevant if you're rendering with BI; Cycles doesn't support halo materials.
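
If you like setting up render layers from a script, here’s a minimal sketch using the Python API that matches the Blender version this book covers (2.7x). The layer name, layer indices, and pass choices are just examples.

import bpy

scene = bpy.context.scene

# A render layer for background characters living on Blender layers 3 and 4.
rl = scene.render.layers.new("BG_Characters")
rl.layers = [i in (2, 3) for i in range(20)]   # layer indices are zero-based

# Enable a few extra passes for compositing.
rl.use_pass_z = True
rl.use_pass_vector = True
rl.use_pass_object_index = True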

Here is a brief description of some of the more useful pipeline features if you're using BI as your renderer (if there's a Cycles equivalent, I mention it in the description):

· Solid: This feature is for solid faces. Basically, if you disable this option, the only things that render are lights, halo materials, and particles. Any solid-surfaced object doesn’t appear.

If you're rendering with Cycles, the equivalent to this pipeline product is the Use Surfaces check box.

· ZTransp: This name is short for Z-transparency. If you have an object that has a Z-transparent material, enabling this button ensures that the material gets rendered.

· Strand: Strands are static particles rendered in place. They’re often used to approximate the look of hair or grass. Keeping this option enabled ensures that your characters aren’t bald and that their front lawns aren’t lifeless deserts.

If you're rendering with Cycles, the rough counterpart to this is the Use Hair check box.

· All Z: The simplest way to explain this option is with an example. Say that you have a scene with a wall between your character and the camera. The character is on one render layer, and the wall is on another. If you’re on the character’s render layer and you enable this option, the character is masked from the render layer and doesn’t appear. With All Z off, the character shows up on its render layer.

Underneath the Include check boxes is the Passes panel. This panel contains a set of options that control which passes the selected render layer delivers to the compositor. These passes are most useful when used in the Node Compositor. They’re really what make compositing so interesting and fun. Here are some of the most useful passes:

· Combined: The Combined pass is the fully mixed, rendered image as it comes from the renderer before getting any processing.

· Z: This pass is a mapping of the Z-depth information for each object in front of the camera. It is useful for masking as well as effects like depth of field, where a short range of the viewable range is in focus and everything else is blurry.

· Mist: This pass is only available if you render with Cycles. In a way, it's similar to the Z pass because it's based on z-depth information. There are three big differences:

· Values in the Mist pass already are normalized between 0 and 1.

· The Mist pass takes the transparency of your materials into account; the Z pass doesn't.

· Unlike the Z pass, the Mist pass is nicely anti-aliased and doesn’t have some of the nasty stair-stepped jaggies you may see in the Z pass.

If you're rendering in Cycles and you want to re-create the effect of the Mist panel in BI, using the Mist pass in the compositor is how you do it.

· Vector: This pass includes speed information for objects moving before the camera (meaning that either the objects or the camera is animated). This data is particularly useful for the Vector Blur node, which gives animations a decent motion blur effect.

· Normal: The information in this pass relates to the angle that the geometry in the scene has, relative to the camera. You can use the Normal pass for additional bump mapping as well as completely altering the lighting in the scene without re-rendering.

· UV: The UV pass is pretty clever because it sends the UV mapping data from the 3D objects to the compositor. Basically, this pass gives you the ability to completely change the textures on an object or character without the need to re-render any part of the scene. Often, you want to use this pass along with the Object Index pass to specify on which object you want to change the texture.

· Object Index: This pass carries index numbers for individual objects, if you set them. The Object Index pass allows very fine control over which nodes get applied to which objects. This pass is similar to plain masking, but somewhat more powerful because it makes isolating a single object or group of objects so much easier.

· Color: The color pass delivers the colors from the render, completely shadeless, without highlights or shadows. This pass can be helpful for amplifying or subduing colors in a shot. If you're rendering with Cycles, you can actually have separate Color passes for diffuse, glossy, transmission, and subsurface shaders.

· Specular: The specularity pass delivers an image with the specular highlights that appear in the render.

Cycles doesn't use specularity, so this pass isn't available when rendering with Cycles.

· Shadow: This pass contains all the cast shadows in the render, both from ray traced shadows as well as buffered shadows. In my example from earlier in this section about taking the shadows from the render and adjusting them (such as giving them a bluish hue), you’d use this pass.

· AO: This pass includes any ambient occlusion data generated by the renderer, if you have AO enabled. If you use this pass and you're rendering with BI, it’s a good idea to double-check to see whether you’re using approximate or ray traced AO in the Ambient Occlusion panel of World Properties. If you’re using ray traced AO, verify that ray tracing is enabled in Render Properties.

Working with nodes

After you set up your render layers the way you like, you’re ready to work in the Node Compositor. As with the VSE, Blender ships with a default screen layout for compositing, appropriately named Compositing. You can access the Compositing layout from the Screens datablock at the top of the window or by pressing Ctrl+← once from the Default screen layout. By default, Blender puts you in the Node Editor for compositing, which is exactly where you want to be. The other node editors are for materials and textures and are for more advanced work. See Chapters 7, 8, and 9 for other uses of the Node Editor.

By itself, the Node Editor looks pretty stark and boring, like a lame 2D version of the 3D View. However, left-click the Use Nodes check box in the header, and you see a screen layout that looks similar to the one shown in Figure 15-8.

image

Figure 15-8: Starting with nodes in the Composite Node Editor.

Blender starts by presenting you with two nodes, one input and one output. You can quickly tell which is which by looking at the location of the connection points on each node. The left node labeled Render Layers has connection points on the right side of it. The location of these connection points means that it can serve only as an input to other nodes and can’t receive any additional data, so it’s an input node. It adds information to the node network. In contrast, the node on the right, labeled Composite, is an output node because it has no connection points on its right edge, meaning it can’t feed information to other nodes. Essentially, it’s the end of the line, the result. In fact, when you render by using the Node Compositor, the Composite node is the final output that gets displayed when Blender renders.

Setting up a backdrop

I personally prefer to see the progress of my node network as I’m working, without having to constantly refer back to another editor for the results of my work. Fortunately, Blender can facilitate this workflow with another sort of output node: the Viewer node.

To add a new node, position your mouse cursor in the Node Editor and press Shift+A. You see a variety of options. For now, navigate to Output⇒Viewer to create a new output node labeled Viewer.

If the Render Layer input node was selected when you added the Viewer node, Blender automatically creates a connection, also called a noodle, between the two nodes. Noodles are drawn between the circular connection points, or sockets, on each node. If the noodle wasn’t created for you, you can add it by left-clicking the yellow Image socket on the Render Layer node and dragging your mouse cursor to the corresponding yellow socket on the Viewer node.

However, making this connection doesn’t seem to do much. You need to take three more steps:

1. Left-click the Use Backdrop check box in the Node Editor’s header.

A black box loads in the background of the compositor. (Don’t worry; it’s supposed to happen, I promise.)

2. Go to the Post Processing panel in Render Properties and ensure that the Compositing check box is enabled.

3. Render the scene by left-clicking the Image button in the Render panel or pressing F12.

When the render is complete and you return to the Blender interface by pressing F11, the empty black box has been magically replaced with the results of your render. Now anything that feeds into this Viewer node is instantly displayed in the background of the compositor.
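
The same setup can be built from Python. This minimal sketch assumes the default node names (Render Layers and Composite) that Blender creates when you first enable Use Nodes.

import bpy

scene = bpy.context.scene
scene.use_nodes = True                      # same as the Use Nodes check box
scene.render.use_compositing = True         # Compositing in the Post Processing panel
tree = scene.node_tree

rlayers = tree.nodes["Render Layers"]       # default input node
viewer = tree.nodes.new('CompositorNodeViewer')
viewer.location = (300, -200)               # purely cosmetic placement
tree.links.new(rlayers.outputs['Image'], viewer.inputs['Image'])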

This setup is the way I typically like to work when compositing. In fact, I often take it one step farther and press Shift+spacebar to maximize the Node Editor to the full window size. This way, you can take full advantage of your entire screen while compositing.

If you find that the backdrop gets in your way, you can disable it by left-clicking the Backdrop check box in the header, or you can move the backdrop image around in the background by Alt+middle-clicking in the editor and dragging the image around.

tip The VSE also can show a backdrop. You enable it the same way (enabling the Use Backdrop check box in the VSE timeline's header); you can scale and move the backdrop image around the same way, too. With the backdrop enabled in either case, you can composite or edit your work with a maximized area (Shift+Spacebar) and still see the results of your modifications.

Also, you can get more space by middle-clicking in the compositor and dragging the entire node network around, or by using your scroll wheel to zoom in and out on the nodes. You also have the ability to scale the backdrop image by using the V (zoom out) and Alt+V (zoom in) hotkeys. Table 15-3 shows most of the frequently used mouse actions in the Node Editor.

Table 15-3 Commonly Used Mouse Actions in the Node Editor

· Right-click: Select a node.

· Shift+right-click: Select multiple nodes.

· Middle-click: Pan the compositor work area.

· Alt+middle-click: Move the backdrop image.

· Ctrl+middle-click: Zoom the compositor work area.

· Scroll wheel: Zoom the compositor work area.

· Left-click (on a node): Select a node. Click and drag to move the node around.

· Left-click (on a socket): Attach or detach a noodle to/from the socket you click on. Click and drag to the socket you want to connect to.

· Left-click+drag the left or right side of a node: Resize the node.

· Ctrl+left-click+drag in the compositor workspace: Knife cut through noodles.

· Ctrl+Alt+left-click+drag in the compositor workspace: Lasso select.

· Shift+Ctrl+left-click a node: Connect the active Viewer node to the output of the clicked node. If there is no Viewer node, one is created automatically. Successive Shift+Ctrl+left-clicks cycle through the node’s multiple outputs.

Identifying parts of a node

At the top of each node are a pair of icons: the triangle on the left and a circular icon on the right. Following is a description of what each button (shown in Figure 15-9) does:

· Triangle: Expands and collapses the node, essentially hiding the information in it from view.

· Sphere: View window expand/collapse. This icon is available only on nodes that have an image window, such as a render layer node, any output node, or texture node.

image

Figure 15-9: Each node has icons at the top that control how you see it in the compositor.

Navigating the node compositor

For the most part, editing nodes in Blender conforms to the same user interface behavior that’s in the rest of the program. You select nodes by right-clicking, you can grab and move nodes by pressing G, and you can border select multiple nodes by pressing B. Of course, a few differences pertain specifically to the Node Editor. Table 15-4 shows the most common hotkeys used in the Node Editor.

Table 15-4 Commonly Used Hotkeys in the Node Editor

· Shift+A (Add): Opens the toolbox menu.

· G (Node⇒Translate): Grabs a node and moves it.

· B (Select⇒Border Select): Border select.

· X (Node⇒Delete): Deletes node(s).

· Shift+D (Node⇒Duplicate): Duplicates node(s).

· Ctrl+G (Node⇒Make Group): Creates a group out of the selected nodes.

· Alt+G (Node⇒Ungroup): Ungroups the selected group.

· Tab (Node⇒Edit Group): Expands the node group so you can edit individual nodes within it.

· H (Node⇒Hide/Unhide): Toggles the selected nodes between expanded and collapsed views.

· V (View⇒Backdrop Zoom Out): Zooms out (scales down) the backdrop image.

· Alt+V (View⇒Backdrop Zoom In): Zooms in (scales up) the backdrop image.

When connecting nodes, pay attention to the colors of the sockets. The sockets on each node come in one of three different colors, and each one has a specific meaning for the type of information that is either sent or expected from the node. The colors and their meanings are as follows:

· Yellow: Color information. Specifically, this socket relates to color in the output image, across the entire red/green/blue/alpha (RGBA) scale. Color information is the primary type of data that should go to output nodes.

· Gray: Numeric values. Whereas the yellow sockets technically get four values for each pixel in the image — one each for red, green, blue, and alpha — this socket sends or receives a single value for each pixel. You can visualize these values as a grayscale image. These sockets are used mostly for masks. For example, the level of transparency in an image, or its alpha channel, can be represented by a grayscale image, with white for opaque and black for transparent (and gray for semitransparent).

· Blue: Geometry data. These sockets are pretty special. They send and receive information that pertains to the 3D data in the scene, such as speed, UV coordinates, and normals. Visualizing these values in a two-dimensional image is pretty difficult; it usually ends up looking like something seen through the eyes of the alien in Predator.

Grouping nodes together

As Table 15-4 shows, you can also group nodes together. The ability to make a group of nodes is actually one of the really powerful features of the Node Editor. You can border select a complex section of your node network and press Ctrl+G to quickly make a group out of it. Grouping nodes has a few really nice benefits. First of all, grouping can simplify the look of your node network so that it’s not a huge mess of noodles (spaghetti!). More than simplification, though, node groups are a great organizational tool. Because you can name groups like any other node, you can group sections of your network that serve a specific purpose. For example, you can have a blurry background group and a color-corrected character group.

But wait, there’s more! (Do I sound like a car salesman yet?) When you create a group, it’s added automatically to the Group menu when you go to add a new node (Shift+A⇒Group). To understand the benefit of being able to add groups, imagine that you created a really cool network that gives foreground elements a totally sweet glow effect. If you make a group out of that network, you can now instantly apply that glow to other parts of your scene or scenes in other .blend files. Go ahead: Try it and tell me that’s not cool — you can’t do it!
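
Node groups can also be built and reused from Python. The following sketch uses the 2.7x API (group interfaces are exposed differently in later versions); the group name and its contents are just an example of a simple glow group.

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Build a reusable group: Image in -> Glare -> Image out.
group = bpy.data.node_groups.new("SweetGlow", 'CompositorNodeTree')
group.inputs.new('NodeSocketColor', 'Image')
group.outputs.new('NodeSocketColor', 'Image')
grp_in = group.nodes.new('NodeGroupInput')
grp_out = group.nodes.new('NodeGroupOutput')

glare = group.nodes.new('CompositorNodeGlare')
glare.glare_type = 'FOG_GLOW'
group.links.new(grp_in.outputs['Image'], glare.inputs['Image'])
group.links.new(glare.outputs['Image'], grp_out.inputs['Image'])

# Drop an instance of the group into the scene's compositing network.
instance = tree.nodes.new('CompositorNodeGroup')
instance.node_tree = group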

tip When working with nodes, it’s a good idea to have the network flow from the left to the right. Wherever possible, you want to avoid creating situations where you feed a node’s output back to one of the nodes that gives it input. This feedback loop is called a cyclic connection. If you’ve ever heard the painfully loud feedback noise that happens when you place a microphone too close to a speaker, you have an idea of why a cyclic connection is a bad idea.

Discovering the nodes available to you

Blender has quite an extensive list of nodes that you can add to your network. In fact, it seems like with every release of Blender, more and more incredible node types are added to the compositor. Many nodes have a Fac, or factor, value that you can usually either set with a value from another node or explicitly set by typing. Values less than 1 make the node have less influence, while values greater than 1 make the node have more influence than usual over the image.

Input

The input nodes are one of the two most important node types in the Node Compositor. If your node network doesn’t have any inputs, you don’t have anything to composite. Figure 15-10 shows each of these nodes side by side.

image

Figure 15-10: Input nodes: Render Layer, Image, Movie Clip, Mask, RGB, Value, Texture, Bokeh Image, Time, and Track Position.

· Render Layer: This node feeds input from a scene into the compositor. The drop-down menu at the bottom of the node allows you to pick any of the render layers you created in any scene. Notice also the button with the camera icon that is to the right of this menu. Left-click this button to render just this layer. This handy feature allows you to update a portion of your node network without needing to re-render all the layers in the network.

· Image: The name for this node is a bit over-simplistic because it can actually load more than just a single still image. The Image node allows you to bring any sort of image data into the compositor, including sequences of images and movie files, and allows you to control when the sequence starts, how long it is, and whether to loop it continuously.

· Movie Clip: Movie Clips are datablocks that get created in the Movie Clip Editor. They're most frequently used for Blender's motion tracking features. These are the same kinds of Clips that can be loaded in the VSE.

· Mask: Masks are created in the UV/Image Editor from the Mask editing context. They're primarily used for hiding and showing parts of images or videos when compositing.

· RGB: With the RGB node, you can feed any solid color to any other node in the compositor. This node is a good, quick way to adjust the hue of an image or provide a simple colored background.

· Value: This fairly simple input node allows you to feed any scalar (numerical) value to other nodes.

· Texture: The Texture node is unique as an input node, because it’s the only one that can actually receive input data as well. Through this node, you can take any texture that you built in Blender and add it to your node network. This node is particularly useful with UV data, because it can let you change the textures on objects in your image without re-rendering.

· Bokeh Image: The word bokeh comes from a Japanese word meaning blur. In photography, bokeh is used to describe the quality of background blur in an image (typically from lenses with a shallow depth of field, or focus area). The form of that blur is entirely dependent on how the lens is shaped. Most modern lenses have a circular bokeh, but many photographers and visual artists like the harsher geometric look (typically hexagons) of older lenses. The Bokeh Image node can procedurally create the bokeh shape. This is particularly useful for the Bokeh Blur node, covered later in this chapter.

· Time: This node is probably one of the most powerful, yet misunderstood, nodes in Blender. In the past, the Node Compositor was not tied to the Graph Editor, making it difficult to animate attributes of individual nodes. The Time node was a way around this obstacle. Fortunately, as of Blender 2.5 and the ability to animate nearly everything, the Time node is less of an absolute necessity in the compositor. You can key node values just like any other value in Blender. However, the Time node is still useful.

· Track Position: The Track Position node outputs the X and Y coordinates of a marker in a movie clip. It's used by Blender's motion tracking feature. It's a bit on the advanced side, but you can use the Track Position node to make one compositor element follow something moving in tracked footage. For example, if you're tracking the position of a car's license plate in a movie clip, you can use the Track Position node as part of a noodle network to blur out that license plate.

Output

Input nodes are one of the two most important node types in Blender. As you may have guessed, the Output nodes are the other important node types, for a similar reason. If you don’t have an output node, Blender doesn’t know what to save when it renders. The following are the two most-used Output nodes:

· Composite: Blender recognizes the Composite node as the final output from the Node Compositor. When you set up your output files for animation in Render Properties, or when you press F3 to save a render, it’s the information from this node that Blender saves out.

· Viewer: The Viewer node is similar to the Composite node, but it’s not necessarily for final output. Viewer nodes are great for spot-checking sections of your node network and making sure that things are going the way you want them to. Also, output from these nodes is seen in the compositor’s backdrop, if you enable it.

Color

The Color nodes have an enormous influence over the look of the final output. These nodes directly affect how colors appear, mix, and balance in an image. And because an image is basically just a bunch of colors arranged in a specific pattern, you can understand why these nodes have so much control. Figure 15-11 shows some of the most commonly used Color nodes. Following are descriptions of each node type:

· RGB Curves: Arguably one of the most powerful color nodes, the RGB Curves node takes image data and allows you to use curves to adjust the combined color, or any of the red, green, or blue channels in the image individually. Left-clicking the C, R, G, and B buttons on the upper left changes between combined, red, green, and blue color channels, respectively. With this node, you can do anything that the Hue Saturation Value, Bright/Contrast, and Invert nodes can do, but with more control.

· Mix: I personally use this node quite a bit. The Mix node has 18 different blending modes to allow you to control how to combine two input images. If you’ve used image editing software like GIMP or Photoshop, these blending modes should be pretty familiar to you. One thing to remember in this node — and it’s something I used to constantly get backwards — is that the upper image input socket is the background image, whereas the lower image input socket is the foreground image.

· Alpha Over: This node is very similar to the Mix node, except it deals exclusively with combining images using their alpha channels. The lower socket is the foreground, and the upper socket is the background image. The other thing to note with this node is the Convert Premul check box. Basically, if you see weird white or black edges around parts of your foreground elements, left-clicking this button usually fixes them. (A minimal scripted example of this background/foreground socket ordering follows Figure 15-11.)

· Z Combine: Like the Mix and Alpha Over nodes, the Z Combine node mixes two sets of image data together. However, instead of using color information or alpha channels, this node can use Z-depth information.

image

Figure 15-11: Color nodes: RGB Curves, Mix, Alpha Over, and Z Combine.
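
To make that background/foreground ordering concrete, here’s a minimal Python sketch that alpha-overs a foreground image on top of the rendered scene. The default node names and the image path are assumptions you’d adapt to your own project.

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

rlayers = tree.nodes["Render Layers"]
composite = tree.nodes["Composite"]

fg = tree.nodes.new('CompositorNodeImage')
fg.image = bpy.data.images.load("//stills/logo.png")   # placeholder path

alpha_over = tree.nodes.new('CompositorNodeAlphaOver')
tree.links.new(rlayers.outputs['Image'], alpha_over.inputs[1])   # upper socket: background
tree.links.new(fg.outputs['Image'], alpha_over.inputs[2])        # lower socket: foreground
tree.links.new(alpha_over.outputs['Image'], composite.inputs['Image'])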

Converter

These handy little utility nodes have a variety of purposes, including converting one set of data to another and ripping apart or recombining elements from a rendered image. The ColorRamp and ID Mask nodes in particular get used quite a bit. The ColorRamp node is great for visualizing or re-visualizing numerical values on a scale. For example, the only way to get a good sense of what the Z-depth of an image looks like is to map Z values along a manageable scale and then feed that to a white-to-black color ramp, as shown in Figure 15-12.

image

Figure 15-12: Visualizing a scene’s Z-depth using a ColorRamp.
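
Wiring that Z visualization by hand is quick, but here’s what it looks like as a Python sketch for reference. It assumes the default Render Layers node and an enabled Z pass; the Normalize node squashes the raw Z values into a manageable 0-to-1 range before the ColorRamp.

import bpy

scene = bpy.context.scene
scene.use_nodes = True
scene.render.layers["RenderLayer"].use_pass_z = True
tree = scene.node_tree

rlayers = tree.nodes["Render Layers"]
normalize = tree.nodes.new('CompositorNodeNormalize')
ramp = tree.nodes.new('CompositorNodeValToRGB')     # the ColorRamp node
viewer = tree.nodes.new('CompositorNodeViewer')

tree.links.new(rlayers.outputs['Z'], normalize.inputs[0])
tree.links.new(normalize.outputs[0], ramp.inputs['Fac'])
tree.links.new(ramp.outputs['Image'], viewer.inputs['Image'])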

The ID Mask node is handy because it allows you to isolate an object even more specifically than with layers and render layers. Assume that you want to apply the Glare node to a ball that your character is holding. If the scene is complex enough, it doesn’t really make a lot of sense to give that ball a layer all by itself. So you can give the object a Pass Index value in the Relations panel of Object Properties. Then by using the ID Mask node, you can isolate just that ball and make it all shiny.
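
Here’s a minimal scripted version of that ID Mask trick. The object name is a placeholder, and the default Render Layers node name is assumed; the mask’s Alpha output is what you’d feed into whatever effect you want to isolate.

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Tag the object and make sure the Object Index pass is enabled.
bpy.data.objects["Ball"].pass_index = 1                          # placeholder object name
scene.render.layers["RenderLayer"].use_pass_object_index = True

rlayers = tree.nodes["Render Layers"]
id_mask = tree.nodes.new('CompositorNodeIDMask')
id_mask.index = 1
tree.links.new(rlayers.outputs['IndexOB'], id_mask.inputs['ID value'])
# id_mask.outputs['Alpha'] now isolates just the tagged object.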

Filter

Filter nodes can drastically change the look of an image and are probably the No. 1 way to fake any effect in an image. These nodes actually process the pixels in an image and can do things like put thick black lines around an object, give the image a variety of customized blurs, or make bright parts of the image glow. Figure 15-13 shows some of the most useful Filter nodes:

· Blur: As its name implies, this node applies a blur across the entire input image. The first button gives you a drop-down menu to select the type of blur you want to use. I typically like to use Gaussian for most of my effects. When you first apply this node, you may not think that anything is happening. Change the values in the X and Y buttons to adjust the blur size on a scale from 0 to 256, or 0.0 to 1.0, depending on whether you enable the Relative check box. This node is like an advanced version of the Gaussian Blur effect strip in the VSE.

· Vector Blur: This node is the fastest way to get motion blur out of Blender if you're rendering with BI. The Vector Blur node takes speed information from the Vector pass (enable Vector in the Passes panel of Render Layers Properties) and uses it to fake the motion blur effect. One check box I recommend you enable in this node, especially if you’re doing character animation, is the Curved check box. This option gives objects that are moving in an arc a more natural, curved motion blur. This node is specifically for use with 3D data coming from Blender. It can’t add motion blur to just any footage. If you're rendering with Cycles, usually you don't need to use the Vector Blur node; you can get more accurate motion blur from the Motion Blur panel in Render Properties.

· Defocus: Blender’s Defocus node is the way to fake the depth of field, or DOF, effect you get with a real camera. If you’ve seen a photo where something in the middle of the picture is in focus, but objects in the foreground and background are blurry, it’s called a shallow DOF, and it looks pretty sweet. You can get an idea where the camera’s focal point is by selecting the camera in your scene and turning on Limits for the camera in Camera Properties. Then when you adjust the DOF Distance value, you can see the focal point as a yellow cross. Generally speaking, you don't need to use the Defocus node if you're rendering with Cycles; depth of field is built into the render engine.

· Glare: This node is a really quick way to give the bright parts in your render just a little extra bit of kick. The Fog Glow and Streaks options in the first drop-down menu are what I tend to use the most. Of all the values this node gives you to play with, probably the most influential is Threshold. Setting the threshold to values between 0.0 and 1.0 tends to work best for me, but results vary from one image to the next. (A minimal scripted example of dropping a Glare node into a compositing network follows Figure 15-13.)

image

Figure 15-13: Blur, Vector Blur, Defocus, and Glare nodes.
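
As mentioned in the Glare description above, here’s a minimal scripted example of dropping a Glare node between the Render Layers and Composite nodes. It assumes the default node names and uses example settings.

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

rlayers = tree.nodes["Render Layers"]
composite = tree.nodes["Composite"]

glare = tree.nodes.new('CompositorNodeGlare')
glare.glare_type = 'FOG_GLOW'    # or 'STREAKS'
glare.threshold = 0.8            # example value; tune per image

tree.links.new(rlayers.outputs['Image'], glare.inputs['Image'])
tree.links.new(glare.outputs['Image'], composite.inputs['Image'])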

Vector

Vector nodes use 3D data from your scene to influence the look of your final 2D image. The usage of these nodes tends to be a bit advanced, but they allow you to do things like change the lighting in a scene or even change the speed that objects move through the scene … all without re-rendering! If you render to an image format that understands render passes, like the very cool OpenEXR format (more on this topic in the next section), and you include vector and normal passes, these nodes can be a huge time-saver.

Matte

The Matte nodes are specifically tailored for using color information from an image as a way of isolating certain parts of it. This kind of matting is often referred to as keying because you pick a main color, or key color, to represent transparency. Keying is the fundamental basis for those cool bluescreen or greenscreen effects used in movies. The filmmaker shoots the action over a blue or green screen (blue is used for analog film, whereas green is typically used for digital footage), and a compositor removes the blue or green and replaces it with other footage or something built in 3D.
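Blender 2.7x ships with several Matte nodes (Chroma Key, Color Key, Luminance Key, and the all-in-one Keying node, among others). A convincing key takes real tuning, but the rough sketch below shows the moving parts: it loads a hypothetical piece of greenscreen footage, runs it through the Keying node, and sends the result to the Composite output. As with the other sketches in this chapter, the node and socket names come from the 2.7x Python API and may vary:

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Load the greenscreen footage; the file path here is made up for the example
footage = tree.nodes.new('CompositorNodeImage')
footage.image = bpy.data.images.load('//greenscreen_plate.png')

keying = tree.nodes.new('CompositorNodeKeying')
composite = tree.nodes.new('CompositorNodeComposite')

# Tell the Keying node which color should become transparent (a saturated green)
keying.inputs['Key Color'].default_value = (0.0, 1.0, 0.0, 1.0)

tree.links.new(footage.outputs['Image'], keying.inputs['Image'])
tree.links.new(keying.outputs['Image'], composite.inputs['Image'])

From there, you'd typically layer the keyed footage over your 3D render or background plate with an Alpha Over node.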

Distort

The Distort nodes typically do general-purpose image manipulation operations like Translate, Rotate, Scale, Flip, or Crop.

tip Want to do that spinning newspaper effect you see in old movies? Wire an image of a newspaper and the Time node to the Rotate and Flip nodes, and you’ve got it! However, three special Distort nodes are worth talking more about: Displace, Map UV, and Lens Distortion. Figure 15-14 shows each of these nodes.

image

Figure 15-14: Distort nodes: Displace, Map UV, and Lens Distortion.

· Displace: This node is great for doing quick-and-dirty image distortions, such as generating heat waves in the desert, faking refraction, or making an object appear to push through the image on the screen. The key is the Vector input socket. If you feed a grayscale image to this socket, it uses those values to shift pixels in the image. Connecting a color image, normals, or vectors shifts the image around with a more three-dimensional effect, thereby giving you things like the heat wave effect.

· Map UV: Throughout this chapter, I mention that one cool thing the Node Compositor can do is change textures on objects after you've already rendered them. Well, the Map UV node gets you that awesome functionality. To use this node, you need to enable the UV pass on your render layer. Feed that pass to this node, along with the new texture you want to use, and BAM! Your new texture is ready to be mixed back with the image. To make sure that you're changing the texture on the right object, combine the Map UV node with the ID Mask node before mixing (the scripting sketch after this list shows the whole combination).

· Lens Distortion: Sometimes you want to introduce the effects that some special (or, in some cases, poor) lenses have on the final image. The Lens Distortion node produces those effects for you. You can get everything from a wide fisheye lens look to that strange effect when an old projector isn’t calibrated properly and the colors are misaligned.
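Here's the scripting sketch promised in the Map UV bullet. It assumes the default render layer, a hypothetical replacement texture file, and a target object that already has a Pass Index of 1; as elsewhere, the names come from the 2.7x Python API and may differ in other versions:

import bpy

scene = bpy.context.scene
render_layer = scene.render.layers["RenderLayer"]

# Map UV needs the UV pass; the ID Mask node needs the Object Index pass
render_layer.use_pass_uv = True
render_layer.use_pass_object_index = True

scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rlayers = tree.nodes.new('CompositorNodeRLayers')
new_tex = tree.nodes.new('CompositorNodeImage')
new_tex.image = bpy.data.images.load('//new_texture.png')  # made-up file name

map_uv = tree.nodes.new('CompositorNodeMapUV')
id_mask = tree.nodes.new('CompositorNodeIDMask')
mix = tree.nodes.new('CompositorNodeMixRGB')
composite = tree.nodes.new('CompositorNodeComposite')

id_mask.index = 1  # Pass Index of the object getting the new texture

# Re-wrap the replacement texture using the UV pass from the render
links = tree.links
links.new(rlayers.outputs['UV'], map_uv.inputs['UV'])
links.new(new_tex.outputs['Image'], map_uv.inputs['Image'])

# The ID Mask drives the mix factor so only that one object changes
links.new(rlayers.outputs['IndexOB'], id_mask.inputs['ID value'])
links.new(id_mask.outputs['Alpha'], mix.inputs['Fac'])
links.new(rlayers.outputs['Image'], mix.inputs[1])    # original render
links.new(map_uv.outputs['Image'], mix.inputs[2])     # re-textured version
links.new(mix.outputs['Image'], composite.inputs['Image'])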

Group

When you press Ctrl+G to create a node group, that group is placed in this menu. When you group a set of nodes, you instantly have the ability to apply that network to other parts of your compositing node network. Also, grouping gives you the ability to share node networks between .blend files. When you append or link a node group from another file, it shows up in this menu. There's more on grouping earlier in this chapter in the section “Grouping nodes together.”

remember Whenever you have the opportunity, name everything you create. Unique names are especially important for groups because they’re automatically added to the Group menu. Using names that make sense makes choosing the right node group a lot easier. You can always rename node groups from the Properties region of the Node Editor (N) or directly from the datablock selector within the node itself.

Rendering from the Node Compositor

If you've worked through this chapter, you already know the one basic requirement for getting a rendered image out of the Node Compositor. Of course, if you skipped straight to this section, here's the quick version: Make sure that the Compositing check box in the Post Processing panel of Render Properties is enabled.

That said, you need to know one other thing about rendering from the Compositor. Say that you’re working on a larger production and want to save your render passes to an external file format so that either you or another compositor can work on it later without re-rendering the whole scene. You’d have to save your renders to a file format that understands render layers and render passes. That format is the venerable OpenEXR file format, developed and gifted to the world by the cool people at Industrial Light & Magic.

Now I know what you’re thinking, “Using this format is as easy as setting up my render layers and then choosing OpenEXR from the menu in the Output panel of Render Properties.” You’re actually two-thirds correct. You do set up your render layers and you do go to the Output panel. However, choosing OpenEXR saves only the final composite output (not the layers or passes) in an OpenEXR file (extension .exr). In order to get layers and passes, you should instead choose OpenEXR MultiLayer. With this format, you get an OpenEXR file that has all the layer and pass information stored with it.
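If you prefer to set up your output from a script, the equivalent settings look something like the following minimal sketch (2.7x Python API; the output path is made up for the example):

import bpy

scene = bpy.context.scene

# The Compositing check box in the Post Processing panel
scene.render.use_compositing = True
scene.use_nodes = True

# Write every render layer and pass into one .exr file per frame
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.color_depth = '32'   # full-float precision
scene.render.filepath = '//renders/shot010_'     # hypothetical output path

bpy.ops.render.render(animation=True)            # render the full frame range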

warning Pay close attention to your hard drive space when you choose to render to OpenEXR with all your layers and passes embedded. Keeping all your render layers and passes is a great way to tweak and make adjustments after rendering. However, the file size for each individual .exr file can be huge. Whereas an HD frame in PNG format may be only a couple hundred kilobytes, an OpenEXR file on the same single frame with all the passes enabled may be well over 100 megabytes — yes, megabytes. And if your animation has a length in minutes, that 100 megabytes per frame starts taking up space quickly. So make sure that you do test saves to get a good benchmark for the file size and see that you have enough hard drive space to store all those frames.

newversion Motion Tracking

One of the most exciting new additions to Blender in recent years is an integrated motion tracking system. In motion tracking, software analyzes video footage and tracks various features in the footage in either two-dimensional or three-dimensional space. With properly tracked footage, an artist can add all kinds of exciting visual effects. Say that you have some video footage of a car driving away from you. With a good motion track (that you can do from right within Blender), you could do almost anything with that footage. It could be as simple as blurring out that car's license plate or as wild and complex as adding rocket boosters to the car and having it cruise into the sunset through a dystopian wasteland populated by ravenous man-eating cacti. Anything you can do in Blender can be added to that footage!

Blender has a screen layout specifically tailored for motion tracking; conveniently, it's called Motion Tracking. From the Default layout, you can get there by pressing Ctrl+→ twice. The following figure shows this screen layout.

image

The workhorse of Blender's motion tracking system (and the largest editor in the Motion Tracking screen layout) is the Movie Clip Editor. Complete books could be written on the topic of the Movie Clip Editor and motion tracking. Fortunately, there's an incredible video tutorial series produced by Sebastian Koenig called Track, Match, Blend. You can find it in the Blender e-shop (www.blender.org/e-shop), as well as an updated version on the Blender Cloud (http://cloud.blender.org), the Blender Foundation's resource for Blender assets and training. I highly recommend these tutorials from Sebastian; there's really no better place to discover how to take full advantage of Blender's motion tracking features.