
Unity in Action: Multiplatform game development in C# with Unity 5 (2015)

Part 2. Getting comfortable

Chapter 7. Creating a third-person 3D game: player movement and animation

This chapter covers

· Adding real-time shadows to the scene

· Making the camera orbit around its target

· Changing rotation smoothly using the Lerp algorithm

· Handling ground detection for jumping, ledges, and slopes

· Applying and controlling animation for a lifelike character

In this chapter you’ll create another 3D game, but this time you’ll be working in a new game genre. If you think back to chapter 2, you built a movement demo for a first-person game. Now you’re going to write another movement demo, but this time it’ll involve third-person movement. The most important difference is the placement of the camera relative to the player: a player sees through their character’s eyes in first-person view, and the camera is placed outside the character in third-person view. This view is probably familiar to you from adventure games, like the long-lived Legend of Zelda series, or the more recent Uncharted series of games (refer ahead to figure 7.3 if you want to see a comparison of first-person and third-person views).

The project in this chapter is one of the more visually exciting prototypes we’ll build in this book. Figure 7.1 shows how the scene will be constructed. Compare this with the diagram (figure 2.2) of the first-person scene we created in chapter 2.

Figure 7.1. Roadmap for the third-person movement demo

Figure 7.2. Wireframe view of the model we’ll use in this chapter

You can see that the room construction is the same, and the use of scripts is much the same. But the look of the player and the placement of the camera are different in each case. Again, what defines this as a “third-person” view is that the camera is outside the player’s character, looking inward at that character. We’ll use a model that looks like a humanoid character (rather than a primitive capsule) because now players can actually see themselves.

Recall that two of the types of art assets discussed in chapter 4 were 3D models and animations. The term 3D model is almost a synonym for mesh object; the 3D model is the static shape defined by vertices and polygons (that is, mesh geometry). For a humanoid character, this mesh geometry is shaped into a head, arms, legs, and so forth (see figure 7.2).

As usual, we’ll focus on the last step in the roadmap: programming objects in the scene. Here’s a recap of our plan of action:

1. Import a character model into the scene.

2. Implement camera controls to look at the character.

3. Write a script that enables the player to run around on the ground.

4. Add the ability to jump to the movement script.

5. Play animations on the model based on its movements.

Copy the project from chapter 2 to modify it, or create a new Unity project (be sure it’s set to 3D, not the 2D project from chapter 5) and copy over the scene file from chapter 2’s project; either way, also grab the scratch folder from this chapter’s download to get the character model we’ll use.


We’re going to build this chapter’s project in the walled area from chapter 2. We’ll keep the walls and lights but replace the player and all scripts. If you need them, download the sample files from that chapter.

Assuming you’re starting with the completed project from chapter 2 (the movement demo, not later projects), let’s delete everything we don’t need for this chapter. First disconnect the camera from the player in the Hierarchy list (drag the camera object off the player object). Now delete the player object; if you hadn’t disconnected the camera first, it would have been deleted too, and we want to delete only the player capsule and leave the camera. If you did delete the camera by accident, create a new one by selecting GameObject > Camera.

Delete all the scripts as well (which involves removing the script component from the camera as well as deleting the files in the Project view), leaving only the walls, floor, and lights.

7.1. Adjusting the camera view for third-person

Before we can write code to make the player move around, we need to put a character in the scene and set up the camera to look at that character. We’ll import a faceless humanoid model to use as the player character, and then place the camera above at an angle to look down at the player obliquely. Figure 7.3 compares what the scene looks like in first-person view with what the scene will look like in third-person view (shown with a few large blocks that we’ll add in this chapter).

Figure 7.3. Side-by-side comparison of first-person and third-person views

We prepared the scene already, so now let’s put a character model into the scene.

7.1.1. Importing a character to look at

The scratch folder for this chapter’s download includes both the model and the texture; as you’ll recall from chapter 4, FBX is the model and TGA is the texture. Import the FBX file into the project; either drag the file into the Project view, or right-click in the Project view and select Import New Asset. Then look in the Inspector to adjust import settings for the model. Later in the chapter you’ll adjust imported animations, but for now you need to make only a couple of adjustments in the Model tab. First change the Scale Factor value to 10 (to partially counteract the File Scale value of .01) so that the model will be the correct size.

A bit farther down you’ll find the Normals option (see figure 7.4). This setting controls how lighting and shading appear on the model, using a 3D math concept known as, well, normals.

Figure 7.4. Import settings for the character model


Normals are direction vectors sticking out of polygons that tell the computer which direction the polygon is facing. This facing direction is used for lighting calculations.

The default setting for Normals is Import, which will use the normals defined in the imported mesh geometry. But this particular model doesn’t have correctly defined normals and will react in odd ways to lights. Instead, change the setting to Calculate so that Unity will calculate a vector for the facing direction of every polygon.

Once you’ve adjusted these two settings, click the Apply button in the Inspector. Next import the TGA file into the project and then assign this image as the texture in a material. Select the player material in the Materials folder. Drag the texture image onto the empty texture slot in the Inspector. Once the texture is applied you won’t see a dramatic change in the model’s color (this texture image is mostly white), but there are shadows painted into the texture that’ll improve the look of the model.

With the texture applied, drag the player model from the Project view up into the scene. Position the character at 0, 1.1, 0 so that it’ll be in the center of the room and raised up to stand on the floor. Great, we have a third-person character in the scene!


The imported character has his arms stuck straight out to the sides, rather than the more natural arms-down pose. That’s because animations haven’t been applied yet; that arms-out position is referred to as the T-pose and the standard is for animated characters to default to a T-pose before they’re animated.

7.1.2. Adding shadows to the scene

Before we move on, I want to explain a bit about the shadow being cast by the character. We take shadows for granted in the real world, but shadows aren’t guaranteed in the game’s virtual world. Fortunately Unity can handle this detail, and shadows are turned on for the default light that comes with new scenes. Select the directional light in your scene and then look in the Inspector for the Shadow Type option. That setting (shown in figure 7.5) is already on Soft Shadows for the default light, but notice the menu also has a No Shadows option.

Figure 7.5. Before and after casting shadows from the directional light

That’s all you need to do to set up shadows in this project, but there’s a lot more you should know about shadows in games. Calculating the shadows in a scene is a particularly time-consuming part of computer graphics, so games often cut corners and fake things in various ways to achieve the desired visual look. The kind of shadow cast from the character is referred to as a real-time shadow, because the shadow is calculated while the game is running and moves around with moving objects. A perfectly realistic lighting setup would have all objects casting and receiving shadows in real time, but in order for the shadow calculations to run fast enough, real-time shadows are limited in how they look or which lights can even cast them. Note that only the directional light is casting shadows in this scene.

Another common way of handling shadows in games is with a technique called lightmapping.


Lightmaps are textures applied to the level geometry, with pictures of the shadows baked into the texture image.


Drawing shadows onto a model’s texture is referred to as baking the shadows.

Because these images are generated ahead of time (rather than while the game is running), they can be very elaborate and realistic. On the downside, because the shadows are generated ahead of time, they won’t move. Thus, lightmaps are great to use for static level geometry, but they aren’t useful for dynamic objects like characters. Lightmaps are generated automatically rather than being painted by hand. The computer calculates how the lights in the scene will illuminate the level while subtle darkness builds up in corners. In Unity, the system for rendering lightmaps is called Enlighten, so you can look up that keyword in Unity’s manual.

Whether or not to use real-time shadows or lightmaps isn’t an all-or-nothing choice. You can set the Culling Mask property on a light so that real-time shadows are used only for certain objects, allowing you to use the higher-quality lightmaps for other objects in the scene. Similarly, though you almost always want the main character to cast shadows, sometimes you don’t want the character to receive shadows; all mesh objects have settings to cast and receive shadows (see figure 7.6).

Figure 7.6. The Cast Shadows and Receive Shadows settings in the Inspector


Culling is a general term for removing unwanted things. The word comes up a lot in computer graphics in many different contexts, but in this case culling mask is the set of objects you want to remove from shadow casting.

All right, now you understand the basics of how to apply shadows to your scenes. Lighting and shading a level can be a big topic unto itself (books about level editing will often spend multiple chapters on lightmapping), but here we restrict ourselves to turning on real-time shadows on one light. And with that, let’s turn our attention to the camera.

7.1.3. Orbiting the camera around the player character

In the first-person demo, the camera was linked to the player object in Hierarchy view so that they’d rotate together. In third-person movement, though, the player character will be facing different directions independently of the camera. Therefore, you don’t want to drag the camera onto the player character in the Hierarchy view this time. Instead, the camera’s code will move its position along with the character but will rotate independently of the character.

First, place the camera where you want it to be relative to the player; I went with position 0, 3.5, -3.75 to put the camera above and behind the character (reset rotation to 0, 0, 0 if needed). Then create a script called OrbitCamera (see the next listing). Attach the script component to the camera and then drag the player character into the Target slot of the script. Now you can play the scene to see the camera code in action.

Listing 7.1. Camera script for rotating around a target while looking at it
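A sketch of the OrbitCamera script, consistent with the walkthrough that follows (the specific speed values here are assumptions):

```csharp
using UnityEngine;

public class OrbitCamera : MonoBehaviour {
    [SerializeField] private Transform target;   // object to orbit around; linked in the Inspector

    public float rotSpeed = 1.5f;                // keyboard rotation speed (assumed value)
    public float mouseSpeed = 4.5f;              // mouse rotation speed (assumed value)

    private float _rotY;
    private Vector3 _offset;

    void Start() {
        _rotY = transform.eulerAngles.y;
        // store the starting position difference so the camera keeps its distance
        _offset = target.position - transform.position;
    }

    void LateUpdate() {
        // prefer the horizontal arrow keys; fall back to horizontal mouse movement
        float horInput = Input.GetAxis("Horizontal");
        if (horInput != 0) {
            _rotY += horInput * rotSpeed;
        } else {
            _rotY += Input.GetAxis("Mouse X") * mouseSpeed;
        }

        // rotate the stored offset, then position the camera relative to the target
        Quaternion rotation = Quaternion.Euler(0, _rotY, 0);
        transform.position = target.position - (rotation * _offset);

        transform.LookAt(target);
    }
}
```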

As you’re reading through the listing, note the serialized variable for target. The code needs to know what object to orbit the camera around, so this variable is serialized in order to appear within Unity’s editor and have the player character linked to it. The next couple of variables are rotation values that are used in the same way as in the camera control code from chapter 2. And there’s an _offset value declared; _offset is set within Start() to store the position difference between the camera and target. This way, the relative position of the camera can be maintained while the script runs. In other words, the camera will stay at the initial distance from the character regardless of which way it rotates. The remainder of the code is inside the LateUpdate() function.


LateUpdate() is another method provided by MonoBehaviour and it’s very similar to Update(); it’s a method run every frame. The difference, as the name implies, is that LateUpdate() is called on all objects after Update() has run on all objects. This way, we can ensure that the camera updates after the target has moved.

First, the code increments the rotation value based on input controls. This code looks at two different input controls—horizontal arrow keys and horizontal mouse movement—so a conditional is used to switch between them. The code checks if horizontal arrow keys are being pressed; if they are, then it uses that input, but if not, it checks the mouse. By checking the two inputs separately, the code can rotate at different speeds for each type of input.

Next, the code positions the camera based on the position of the target and the rotation value. The transform.position line is probably the biggest “aha!” in this code, because it provides some crucial math that you haven’t seen in previous chapters. Multiplying a position vector by a quaternion (note that the rotation angle was converted to a quaternion using Quaternion.Euler) results in a position that’s shifted over according to that rotation. This rotated position vector is then added as the offset from the character’s position in order to calculate the position for the camera. Figure 7.7 illustrates the steps of the calculation and provides a detailed breakdown of this rather conceptually dense line of code.
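In isolation, the quaternion-times-vector operation looks like this (the numbers are illustrative, not taken from the listing):

```csharp
using UnityEngine;

public class RotationExample : MonoBehaviour {
    void Start() {
        Quaternion rotation = Quaternion.Euler(0, 90, 0);   // 90 degrees around the Y-axis
        Vector3 offset = new Vector3(0, 3.5f, -3.75f);      // a camera-style offset
        Vector3 rotated = rotation * offset;                // offset swung around the Y-axis
        Debug.Log(rotated);                                 // roughly (-3.75, 3.5, 0)
    }
}
```

The Y component is unchanged because the rotation is purely around the Y-axis; only the horizontal part of the offset swings around.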

Figure 7.7. The steps for calculating the camera’s position


The more mathematically astute among you may be thinking “Hmm, that transforming-between-coordinate-systems thing in chapter 2...can’t we do that here, too?” The answer is, yes, we could transform the offset position using a rotated coordinate system to get the rotated offset. But that’d require setting up the rotated coordinate system first, and it’s more straightforward not to need that step.

Finally, the code uses the LookAt() method to point the camera at the target; this function points one object (not just cameras) at another object. The rotation value calculated before was used to position the camera at the correct angle around the target, but in that step the camera was only positioned and not rotated. Thus without the final LookAt line, the camera position would orbit around the character but wouldn’t necessarily be looking at it. Go ahead and comment out that line to see what happens.

The camera has its script for orbiting around the player character; next up is code that moves the character around.

7.2. Programming camera-relative movement controls

Now that the character model is imported into Unity and we’ve written code to control the camera view, it’s time to program controls for moving around the scene. Let’s program camera-relative controls that’ll move the character in various directions when arrow keys are pressed, as well as rotate the character to face those different directions.

What does “camera-relative” mean?

The whole notion of “camera-relative” is a bit nonobvious but very crucial to understand. This is similar to the local versus global distinction mentioned in previous chapters: “left” points in different directions when you mean “left of the local object” or “left of the entire world.” In a similar way, when you “move the character to the left,” do you mean toward the character’s left, or the left side of the screen?

The camera in a first-person game is placed inside the character and moves with it, so no distinction exists between the character’s left and the camera’s left. A third-person view places the camera outside the character, though, and thus the camera’s left may be pointed in a different direction from the character’s left. For example, they’re literally opposite directions if the camera is looking at the front of the character. Thus we have to decide what we want to have happen in our specific game and controls setup.

Although occasionally games do it the other way, most third-person games make their controls camera-relative. When the player presses the left button, the character moves to the left of the screen, not the character’s left. Over time and through experiments with trying out different control schemes, game designers have figured out that players find the controls more intuitive and easier to understand when “left” means “left side of the screen” (which, not coincidentally, is also the player’s left).

Implementing camera-relative controls involves two primary steps: first rotate the player character to face the direction of the controls, and then move the character forward. Let’s write the code for these two steps next.

7.2.1. Rotating the character to face movement direction

First we’ll write code to make the character face in the direction of the arrow keys. Create a C# script called RelativeMovement (see listing 7.2). Drag that script onto the player character, and then link the camera to the target property of the script component (just like you’d linked the character to the target of the camera script). Now the character will face different directions when you press the controls, facing directions relative to the camera, or stand still when you’re not pressing any arrow keys (that is, when rotating using the mouse).

Listing 7.2. Rotating the character relative to the camera
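A sketch of the RelativeMovement script described below (variable names follow the surrounding text; the structure is an assumption wherever the text is silent):

```csharp
using UnityEngine;

public class RelativeMovement : MonoBehaviour {
    [SerializeField] private Transform target;   // the camera to move relative to

    void Update() {
        // start with a zeroed vector and fill in movement values in separate steps
        Vector3 movement = Vector3.zero;

        float horInput = Input.GetAxis("Horizontal");
        float vertInput = Input.GetAxis("Vertical");
        if (horInput != 0 || vertInput != 0) {
            movement.x = horInput;
            movement.z = vertInput;

            // store the camera's rotation, flatten it to rotation around the Y-axis only,
            // transform the movement direction into global coordinates, then restore
            Quaternion tmp = target.rotation;
            target.eulerAngles = new Vector3(0, target.eulerAngles.y, 0);
            movement = target.TransformDirection(movement);
            target.rotation = tmp;

            // face the movement direction
            transform.rotation = Quaternion.LookRotation(movement);
        }
    }
}
```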

The code in this listing starts the same way as listing 7.1 did, with a serialized variable for target. Just as the previous script needed a reference to the object it’d orbit around, this script needs a reference to the object it’ll move relative to. Then we get to the Update() function. The first line of the function declares a Vector3 value of 0, 0, 0. It’s important to create a zeroed vector and fill in the values later rather than simply create a vector later with the movement values calculated, because the vertical and horizontal movement values will be calculated in different steps and yet they all need to be part of the same vector.

Next we check the input controls, just as we have in previous scripts. Here’s where X and Z values are set in the movement vector, for horizontal movement around the scene. Remember that Input.GetAxis() returns 0 if no button is pressed, and it varies between 1 and –1 when those keys are being pressed; putting that value in the movement vector sets the movement to the positive or negative direction of that axis (the X-axis is left-right, and the Z-axis is forward-backward).

The next several lines are where the movement vector is adjusted to be camera-relative. Specifically, TransformDirection() is used to transform from Local to Global coordinates. This is the same thing we did with TransformDirection() in chapter 2, except this time we’re transforming from the target’s coordinate system instead of from the player’s coordinate system. Meanwhile, the code just before and after the TransformDirection() line is aligning the coordinate system for our needs: first store the target’s rotation to restore later, and then adjust the rotation so that it’s only around the Y-axis and not all three axes. Finally perform the transformation and restore the target’s rotation.

All of that code was for calculating the movement direction as a vector. The final line of code applies that movement direction to the character by converting the Vector3 into a Quaternion using Quaternion.LookRotation() and assigning that value. Try running the game now to see what happens!

Smoothly rotating (interpolating) by using Lerp

Currently, the character’s rotation snaps instantly to different facings, but it’d look better if the character smoothly rotated to different facings. We can do so using a mathematical operation called Lerp. First add this variable to the script:

public float rotSpeed = 15.0f;

Then replace the existing transform.rotation... line at the end of listing 7.2 with the following code:


Quaternion direction = Quaternion.LookRotation(movement);
transform.rotation = Quaternion.Lerp(transform.rotation,
    direction, rotSpeed * Time.deltaTime);




Now instead of snapping directly to the LookRotation() value, that value is used indirectly as the target direction to rotate toward. The Quaternion.Lerp() method smoothly rotates between the current and target rotations (with the third parameter controlling how quickly to rotate).

Incidentally, the term for smoothly changing between values is interpolate; you can interpolate between two of any kind of value, not just rotation values. Lerp is a quasi-acronym for “linear interpolation,” and Unity provides Lerp methods for vectors and float values, too (to interpolate positions, colors, or anything). Quaternions also have a closely related alternative method for interpolation called Slerp (for spherical linear interpolation). For slower turns, Slerp rotations may look better than Lerp.
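For example, the same idea applied to floats, vectors, and colors (the values here are illustrative):

```csharp
using UnityEngine;

public class LerpExamples : MonoBehaviour {
    void Start() {
        // halfway between 0 and 10
        float half = Mathf.Lerp(0f, 10f, 0.5f);                                 // 5
        // a quarter of the way along a position change
        Vector3 pos = Vector3.Lerp(Vector3.zero, new Vector3(4, 0, 0), 0.25f);  // (1, 0, 0)
        // colors interpolate too
        Color mid = Color.Lerp(Color.black, Color.white, 0.5f);                 // gray
        Debug.Log(half + " " + pos + " " + mid);
    }
}
```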

Currently the character is rotating in place without moving; in the next section we’ll add code for moving the character around.


Because sideways facing uses the same keyboard controls as orbiting the camera, the character will slowly rotate while the movement direction points sideways. This doubling up of the controls is desired behavior in this project.

7.2.2. Moving forward in that direction

As you’ll recall from chapter 2, in order to move the player around the scene, we need to add a character controller component to the player object. Select the character and then choose Component > Physics > Character Controller. In the Inspector you should slightly reduce the controller’s radius to .4, but otherwise the default settings are all fine for this character model.

The next listing shows what you need to add in the RelativeMovement script.

Listing 7.3. Adding code to change the player’s position
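The additions amount to a movement speed, a CharacterController reference, and the Move() call; a sketch of the full script (new parts marked in comments, speed value assumed):

```csharp
using UnityEngine;

[RequireComponent(typeof(CharacterController))]   // new: ensure the component exists
public class RelativeMovement : MonoBehaviour {
    [SerializeField] private Transform target;

    public float moveSpeed = 6.0f;                // new (assumed value)
    public float rotSpeed = 15.0f;

    private CharacterController _charController;

    void Start() {
        _charController = GetComponent<CharacterController>();   // new
    }

    void Update() {
        Vector3 movement = Vector3.zero;

        float horInput = Input.GetAxis("Horizontal");
        float vertInput = Input.GetAxis("Vertical");
        if (horInput != 0 || vertInput != 0) {
            // new: scale by speed, then clamp so diagonal movement isn't faster
            movement.x = horInput * moveSpeed;
            movement.z = vertInput * moveSpeed;
            movement = Vector3.ClampMagnitude(movement, moveSpeed);

            Quaternion tmp = target.rotation;
            target.eulerAngles = new Vector3(0, target.eulerAngles.y, 0);
            movement = target.TransformDirection(movement);
            target.rotation = tmp;

            Quaternion direction = Quaternion.LookRotation(movement);
            transform.rotation = Quaternion.Lerp(transform.rotation,
                direction, rotSpeed * Time.deltaTime);
        }

        // new: frame rate-independent movement through the character controller
        movement *= Time.deltaTime;
        _charController.Move(movement);
    }
}
```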

If you play the game now, you can see the character (stuck in a T-pose) moving around in the scene. Pretty much the entirety of this listing is code you’ve already seen before, so I’ll just review everything briefly.

First, there’s a RequireComponent attribute at the top of the code. As explained in chapter 2, RequireComponent forces Unity to make sure the GameObject has a component of the type passed in. This line is optional; you don’t have to require the component, but without it the script will have errors.

Next there’s a movement value declared, followed by getting this script a reference to the character controller. As you’ll recall from previous chapters, GetComponent() returns other components attached to the given object, and if the object to search on isn’t explicitly defined, then it’s assumed to be this.GetComponent() (that is, the same object as this script).

Movement values are assigned based on the input controls. This was in the previous listing, too; the change here is that we also account for the movement speed. Multiply both movement axes by the movement speed, and then use Vector3.ClampMagnitude() to limit the vector’s magnitude to the movement speed; the clamp is needed because otherwise diagonal movement would have a greater magnitude than movement directly along an axis (picture the sides and hypotenuse of a right triangle).

Finally, at the end we multiply the movement values by deltaTime in order to get frame rate–independent movement (recall that frame rate–independent means the character moves at the same speed on different computers with different frame rates). Pass the movement values to CharacterController.Move() to make the movement.

This handles all the horizontal movement; next let’s take care of vertical movement.

7.3. Implementing the jump action

In the previous section we wrote code to make the character run around on the ground. In the chapter introduction, though, I also mentioned making the character jump, so let’s do that now. Most third-person games do have a control for jumping. And even if they don’t, they almost always have vertical movement from the character falling off ledges. Our code will handle both jumping and falling. Specifically, this code will have gravity pulling the player down at all times, but occasionally an upward jolt will be applied when the player jumps.

Before we write this code, let’s add a few raised platforms to the scene. There’s currently nothing to jump on or fall off of! Create a couple more cube objects, and then modify their positions and scale to give the player platforms to jump on. In the sample project, I added two cubes and used these settings: Position 5, .75, 5 and Scale 4, 1.5, 4; Position 1, 1.5, 5.5 and Scale 4, 3, 4. Figure 7.8 shows the raised platforms.

Figure 7.8. A couple of raised platforms added to the sparse scene

7.3.1. Applying vertical speed and acceleration

As mentioned when we first started writing the RelativeMovement script in listing 7.2, the movement values are calculated in separate steps and added to the movement vector progressively. The next listing adds vertical movement to the existing vector.

Listing 7.4. Adding vertical movement to the RelativeMovement script
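A sketch of the vertical-movement additions (the specific speed and gravity values, and the acceleration multiplier, are assumptions; the rest of Update() is unchanged):

```csharp
// new variables at the top of the RelativeMovement script (values are assumptions)
public float jumpSpeed = 15.0f;
public float gravity = -9.8f;
public float terminalVelocity = -10.0f;
public float minFall = -1.5f;

private float _vertSpeed;

// ...inside Update(), after the if statement for horizontal movement:
if (_charController.isGrounded) {
    if (Input.GetButtonDown("Jump")) {
        _vertSpeed = jumpSpeed;          // upward jolt when jumping
    } else {
        _vertSpeed = minFall;            // slight downward push while grounded
    }
} else {
    // not a constant speed but a downward acceleration
    _vertSpeed += gravity * 5 * Time.deltaTime;
    if (_vertSpeed < terminalVelocity) {
        _vertSpeed = terminalVelocity;   // don't exceed terminal velocity
    }
}
movement.y = _vertSpeed;

movement *= Time.deltaTime;
_charController.Move(movement);
```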

As usual we start by adding a few new variables to the top of the script for various movement values, and initialize the values correctly. Then we skip down to just after the big if statement for horizontal movement, where we’ll add another big if statement for vertical movement. Specifically, the code will check if the character is on the ground, because the vertical speed will be adjusted differently depending on whether the character is on the ground. CharacterController includes isGrounded for checking whether the character is on the ground; this value is true if the bottom of the character controller collided with anything in the last frame.

If the character is on the ground, then the vertical speed value (the private variable _vertSpeed) should be reset to essentially nothing. The character isn’t falling while on the ground, so obviously its vertical speed is 0; if the character then steps off a ledge, we’re going to get a nice, natural-looking motion because the falling speed will accelerate from nothing.


Well, not exactly 0; we’re actually setting the vertical speed to minFall, a slight downward movement, so that the character will always be pressing down against the ground while running around horizontally. There needs to be some downward force in order to run up and down on uneven terrain.

The exception to this grounded speed value is when the jump button is pressed. In that case, the vertical speed should be set to a high number. The if statement checks GetButtonDown(), a new input function that works much like GetAxis() does, returning the state of the indicated input control. And much like the Horizontal and Vertical input axes, the exact key assigned to Jump is defined in the Input settings under Edit > Project Settings (the default assignment is the spacebar).

Getting back to the larger if condition, if the character is not on the ground, then the vertical speed should be constantly reduced by gravity. Note that this code doesn’t simply set the speed value but rather decrements it; this way, it’s not a constant speed but rather a downward acceleration, resulting in a realistic falling movement. Jumping will happen in a natural arc, as the character’s upward speed gradually reduces to 0 and it starts falling instead.

Finally, the code makes sure the downward speed doesn’t exceed terminal velocity. Note that the operator is “less than” and not “greater than,” because downward is a negative speed value. Then after the big if statement, assign the calculated vertical speed to the Y-axis of the movement vector.

And that’s all you need for realistic vertical movement! By applying a constant downward acceleration when the character isn’t on the ground, and adjusting the speed appropriately when the character is on the ground, the code creates nice falling behavior. But this all depends on detecting the ground correctly, and there’s a subtle glitch we need to fix.

7.3.2. Modifying the ground detection to handle edges and slopes

As explained in the previous section, the isGrounded property of CharacterController indicates whether the bottom of the character controller collided with anything in the last frame. Although this approach to detecting the ground works the majority of the time, you’ll probably notice that the character seems to float in the air while stepping off edges. That’s because the collision area of the character is a surrounding capsule (you can see it when you select the character object) and the bottom of this capsule will still be in contact with the ground when the player steps off the edge of the platform. Figure 7.9 illustrates the problem. This won’t do at all!

Figure 7.9. Diagram showing the character controller capsule touching the platform edge

Similarly, if the character stands on a slope, the current ground detection will cause problematic behavior. Try it now by creating a sloped block against the raised platforms: create a new cube object and set its transform values to Position -1.5, 1.5, 5; Rotation 0, 0, -25; Scale 1, 4, 4.

If you jump onto the slope from the ground, you’ll find that you can jump again from midway up the slope and thereby ascend to the top. That’s because the slope does touch the bottom of the capsule obliquely and the code currently considers any collision on the bottom to be solid footing. Again, this won’t do; the character should slide back down, not have solid footing to jump from.


Sliding back down is only desired on steep slopes. On shallow slopes, such as uneven ground, we want the player to run around unaffected. If you want one to test on, make a shallow ramp by creating a cube and setting it to Position 5.25, .25, .25; Rotation 0, 90, 75; Scale 1, 6, 3.

All these problems have the same root cause: checking for collisions on the bottom of the character isn’t a great way of determining if the character is on the ground. Instead, let’s use raycasting to detect the ground. In chapter 3 the AI used raycasting to detect obstacles in front of it; let’s use the same approach to detect surfaces below the character. Cast a ray straight down from the player’s position. If it registers a hit just below the character’s feet, that means the player is standing on the ground.

This does introduce a new situation to handle: when the raycast doesn’t detect ground below the character but the character controller is colliding with the ground. As in figure 7.9, the capsule still collides with the platform while the character is walking off the edge. Figure 7.10 adds raycasting to the diagram in order to show what will happen now: the ray doesn’t hit the platform, but the capsule does touch the edge. The code needs to handle this special situation.

Figure 7.10. Diagram of raycasting downward while stepping off a ledge

In this case, the code should make the character slide off the ledge. The character will still fall (because it’s not standing on the ground), but it’ll also push away from the point of collision (because it needs to move the capsule away from the platform it’s hitting). Thus the code will detect collisions with the character controller and respond to those collisions by nudging away.

The following listing adjusts the vertical movement with everything we just discussed.

Listing 7.5. Using raycasting to detect the ground

This listing contains much of the same code as the previous listing; the new code is interspersed throughout the existing movement script, and the listing needs the existing code for context. The first line adds a new variable to the top of the RelativeMovement script; this variable stores the collision data between function calls.

The next several lines do the raycasting. This code goes below the horizontal movement but before the if statement for vertical movement. The actual Physics.Raycast() call should be familiar from previous chapters, but the specific parameters are different this time. Although the position to cast the ray from is the same (the character’s position), the direction is down this time instead of forward. Then check how far away the raycast hit was; if the hit is at the distance of the character’s feet, the character is standing on the ground, so set hitGround to true.
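As a rough sketch of what that raycasting looks like (variable names such as _charController and vertSpeed follow this chapter’s conventions, but check them against the actual listing):

```csharp
// Sketch of the ground check inside Update(), after horizontal movement.
// Assumes _charController was cached in Start() with GetComponent,
// and vertSpeed is the current vertical speed (negative while falling).
bool hitGround = false;
RaycastHit hit;
if (vertSpeed < 0 && Physics.Raycast(transform.position, Vector3.down, out hit)) {
    // Distance from the capsule's center to just past its feet.
    float check = (_charController.height + _charController.radius) / 1.9f;
    hitGround = hit.distance <= check;
}
```

Note that the ray is only cast while the character is falling; when jumping upward there’s no need to check for ground underfoot.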


It’s a little nonobvious how the check distance is calculated, so let’s go over that in detail. First take the height of the character controller (which is the height without the rounded ends) and then add the rounded ends. Divide this value in half because the ray was cast from the middle of the character (that is, already halfway down) to get the distance to the bottom of the character. But we really want to check a little beyond the bottom of the character to account for tiny inaccuracies in the raycasting, so divide by 1.9 instead of 2 to get a distance that’s slightly too far.
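To make the arithmetic concrete, here’s the same calculation as plain C#, with hypothetical capsule dimensions (a height of 2 and a radius of 0.4, chosen purely for illustration):

```csharp
using System;

// The check distance from the chapter: (height + rounded ends), halved
// because the ray starts at the capsule's center -- but divided by 1.9
// instead of 2 so the ray reaches slightly past the feet.
float CheckDistance(float height, float radius) => (height + radius) / 1.9f;

// Hypothetical capsule: height 2, radius 0.4 (illustrative numbers only).
float exactFeet = (2.0f + 0.4f) / 2.0f;   // 1.2: exactly at the feet
float check = CheckDistance(2.0f, 0.4f);  // about 1.263: slightly past them

Console.WriteLine($"feet at {exactFeet}, checking down to {check:F3}");
```

The small extra margin is what absorbs tiny inaccuracies in the raycast distance.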

Having done this raycasting, use hitGround instead of isGrounded in the if statement for vertical movement. Most of the vertical movement code will remain the same, but add code to handle when the character controller collides with the ground even though the player isn’t over the ground (that is, when the player walks off the edge of the platform). There’s a new isGrounded conditional added, but note that it’s nested inside the hitGround conditional so that isGrounded is only checked when hitGround doesn’t detect the ground.

The collision data includes a normal property (again, a normal vector says which way something is facing) that tells us the direction to move away from the point of collision. But one tricky thing is that we want the nudge away from the contact point to be handled differently depending on which direction the player is already moving: when the previous horizontal movement is toward the platform, we want to replace that movement so that the character won’t keep moving in the wrong direction; but when facing away from the edge, we want to add to the previous horizontal movement in order to keep the forward momentum away from the edge. The movement vector’s facing relative to the point of collision can be determined using the dot product.


The dot product is one kind of mathematical operation that can be done on two vectors. Long story short, the dot product of two normalized vectors ranges between -1 and 1, with 1 meaning they point in exactly the same direction and -1 meaning they point in exactly opposite directions. Don’t confuse “dot product” with “cross product”; the cross product is a different, but also commonly seen, vector math operation.

Vector3 includes a Dot() function to calculate the dot product of two given vectors. If we calculate the dot product between the movement vector and the collision normal, that will return a negative number when the two directions face away from each other and a positive number when the movement and the collision face the same direction.
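Here’s a minimal plain-C# sketch of that sign check (no Unity types; the Dot() helper just spells out the arithmetic Vector3.Dot() performs, restricted to the horizontal components):

```csharp
using System;

// The dot product on the horizontal (x, z) components.
float Dot(float ax, float az, float bx, float bz) => ax * bx + az * bz;

// Suppose the collision normal points away from the ledge, along +x.
float nx = 1f, nz = 0f;

// Moving toward the ledge (along -x): negative dot product,
// so the code replaces the movement with the nudge away.
float towardLedge = Dot(-1f, 0f, nx, nz);

// Moving away from the ledge (along +x): positive dot product,
// so the code adds the nudge to the existing momentum.
float awayFromLedge = Dot(1f, 0f, nx, nz);

Console.WriteLine($"toward: {towardLedge}, away: {awayFromLedge}");
```

A negative result means “replace the movement,” a positive result means “add to the movement,” which is exactly the branching described above.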

Finally, the very end of listing 7.5 adds a new method to the script. In the previous code we were checking the collision normal, but where did that information come from? It turns out that collisions with the character controller are reported through a callback function called OnControllerColliderHit() that MonoBehaviour provides; in order to respond to the collision data anywhere else in the script, that data must be stored in a variable outside the function. That’s all this method does: it stores the collision data in _contact so that the data can be used within the Update() method.
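A minimal sketch of that callback (assuming _contact is a ControllerColliderHit field declared at the top of the script):

```csharp
// Unity calls this automatically whenever the character controller
// collides with something while moving.
void OnControllerColliderHit(ControllerColliderHit hit) {
    _contact = hit;  // stash the collision data for Update() to read
}
```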

Now the errors are corrected around platform edges and on slopes. Go ahead and play to test it out by stepping over edges and jumping onto the steep slope. This movement demo is almost complete. The character is moving around the scene correctly, so only one thing remains: animating the character out of the T-pose.

7.4. Setting up animations on the player character

Besides the more complex shape defined by mesh geometry, a humanoid character needs animations. In chapter 4 you learned that an animation is a packet of information that defines movement of the associated 3D object. The concrete example I gave was of a character walking around, and that situation is exactly what you’re going to be doing now! The character is going to run around the scene, so you’ll assign animations that make the arms and legs swing back and forth. Figure 7.11 shows what it’ll look like when the character has an animation playing while it moves around the scene.

Figure 7.11. Character moving around with a run animation playing

A good analogy with which to understand 3D animation is to think about puppeteering: 3D models are the puppets, the animator is the puppeteer, and an animation is a recording of the puppet’s movements. Animations can be created with a few different approaches; most character animation in modern games (certainly all the animations on this chapter’s character) uses a technique called skeletal animation.


Skeletal animation is a kind of animation where a series of bones are set up inside the model, and then the bones are moved around during the animation. When a bone moves, the model’s surface linked to that bone moves along with it.

As the name implies, skeletal animation makes the most intuitive sense when simulating the skeleton inside a character (figure 7.12 illustrates this), but the “skeleton” is an abstraction that’s useful any time you want a model to bend and flex while still having a definite structure to how it moves (for example, a tentacle that waves around). Although the bones move rigidly, the model surface around the bones can bend and flex.

Figure 7.12. Skeletal animation of a humanoid character

Achieving the result illustrated in figure 7.11 involves several steps: first define animation clips in the imported file, then set up the controller to play those animation clips, and finally incorporate that animation controller in your code. The animations on the character model will be played back according to the movement scripts you’ll write.

Of course the very first thing you need to do, before any of those steps, is turn on the animation system. Select the player model in the Project view to see its Import settings in the Inspector. Select the Animations tab and make sure Import Animation is checked. Then go to the Rig tab and switch Animation Type from Generic to Humanoid (this is a humanoid character, naturally). Note that this last menu also has a Legacy setting; Generic and Humanoid are both settings within the umbrella term Mecanim.

Explaining Unity’s Mecanim animation system

Unity has a sophisticated system for managing animations on models, called Mecanim. Mecanim is based on skeletal animation, the style of animation defined in this chapter. The special name Mecanim identifies the newer, more advanced animation system that was recently added to Unity as a replacement for the older animation system. The older system is still around, identified as Legacy animation, but it may be phased out in a future version of Unity, at which point Mecanim will simply be the animation system.

Although the animations we’re going to use are all included in the same FBX file as our character model, one of the major advantages of Mecanim’s approach is that you can apply animations from other FBX files to a character. For example, all of the human enemies can share a single set of animations. This has a number of advantages, including keeping all your data organized (models can go in one folder, whereas animations go in another folder) as well as saving time spent animating each separate character.

Click the Apply button at the bottom of the Inspector in order to lock these settings onto the imported model and then continue defining animation clips.


You may notice a warning (not an error) in the console that says “conversion warning: spine3 is between humanoid transforms.” That specific warning isn’t a cause for worry; it indicates that the skeleton in the imported model has extra bones beyond the skeleton that Mecanim expects.

7.4.1. Defining animation clips in the imported model

The first step in setting up animations for our character is defining the various animation clips that’ll be played. If you think about a lifelike character, different movements can happen at different times: sometimes the player is running around, sometimes the player is jumping on platforms, and sometimes the character is just standing there with its arms down. Each of these movements is a separate “clip” that can play individually.

Often imported animations come as a single long clip that can be cut up into shorter individual animations. To split up the animation clips, first select the Animations tab in the Inspector. You’ll see a Clips panel, shown in figure 7.13; this lists all the defined animation clips, which initially are one imported clip. You’ll notice + and – buttons at the bottom of the list; you use these buttons to add and remove clips on the list. Ultimately we need four clips for this character, so add and remove clips as necessary while you work.

Figure 7.13. The Clips list in Animation settings

When you select a clip, information about that clip (shown in figure 7.14) will appear in the area below the list. The top of this information area shows the name of this clip, and you can type in a new name. Name our first clip idle. Define Start and End frames for this animation clip; this allows you to slice a chunk out of the longer imported animation. For the idle animation enter Start 3 and End 141. Next up are the Loop settings.

Figure 7.14. Information about the selected animation clip


Loop refers to a recording that plays over and over repeatedly. A looping animation clip is one that plays again from the start as soon as playback reaches the end.

The idle animation loops, so select both Loop Time and Loop Pose. Incidentally, the green indicator dot tells you when the pose at the beginning of the clip matches the pose at the end for correct looping; this indicator turns yellow when the poses are somewhat off, and it turns red when the start and end poses are completely different.

Below the Loop settings are a series of settings related to the root transform. The word root means the same thing for skeletal animation as it does for a hierarchy connected within Unity: the root object is the base object that everything else is connected to. Thus the animation root can be thought of as the base of the character, and everything else moves relative to that base. There are a few different settings here for setting up that base, and you may want to experiment here when working with your own animations. For our purposes, though, the settings should be Body Orientation, Center Of Mass, and Center Of Mass, in that order.

Now click Apply and you’ve added an idle animation clip to your character. Do the same for two more clips: walk starts at frame 144 and ends at 169, and run starts at 171 and ends at 190. All the other settings should be the same as for idle because they’re also animation loops.

The fourth animation clip is jump, and the settings for that clip differ a bit. First, this isn’t a loop but rather a still pose, so don’t select Loop Time. Set the Start and End to 190.5 and 191; this is a single-frame pose, but Unity requires that Start and End be different. The animation preview below won’t look quite right because of these tricky numbers, but this pose will look fine in the game.

Click Apply to confirm the new animation clips, and then move on to the next step: creating the animation controller.

7.4.2. Creating the animator controller for these animations

The next step is to create the animator controller for this character. This step allows us to set up animation states and create transitions between those states. Various animation clips are played during different animation states, and then our scripts will cause the controller to shift between animation states.

This might seem like an odd bit of indirection—putting the abstraction of a controller between our code and the actual playing of animations. You may be familiar with systems where you directly play animations from your code; indeed, the old Legacy animation system worked in exactly that way, with calls like Play("idle"). But this indirection enables us to share animations between models, rather than only being able to play animations that are internal to this model. In this chapter we won’t take advantage of this ability, but keep in mind that it can be helpful when you’re working on a larger project. You can obtain your animations from several sources, including multiple animators, or you can buy individual animations from stores online (such as Unity’s Asset Store).

Begin by creating a new animator controller asset (Assets > Create > Animator Controller; note that this isn’t Animation, which is a different sort of asset). In the Project view you’ll see an icon with a funny-looking network of lines on it (see figure 7.15); rename this asset to player. Select the character in the scene and you’ll notice this object has a component called Animator; any model that can be animated has this component, in addition to the Transform component and whatever else you’ve added. The Animator component has a Controller slot for linking a specific animator controller, so drag and drop your new controller asset (and be sure to uncheck Root Motion).

Figure 7.15. Animator controller and Animator component

The animator controller is a tree of connected nodes (hence the icon on that asset) that you can see and manipulate by opening the Animator view. This is another view just like Scene or Project (shown in figure 7.16) except this view isn’t open by default. Select Animator from the Window menu (be careful not to get confused with the Animation window; that’s a separate selection from Animator). The node network displayed here is whichever animator controller is currently selected (or the animator controller on the selected character).

Figure 7.16. The Animator view with our completed animator controller


Remember that you can move tabs around in Unity and dock them wherever you like in order to organize the interface. I like to dock the Animator right next to the Scene and Game windows.

Initially there are only two default nodes, for Entry and Any State. You’re not going to use the Any State node. Instead, you’ll drag in animation clips to create new nodes. In the Project view, click the arrow on the side of the model asset to expand that asset and see what it contains. Among the contents of this asset are the animation clips you defined (see figure 7.17), so drag those clips into the Animator view. Don’t bother with the walking animation (that could be useful for other projects) and drag in idle, run, and jump.

Figure 7.17. Expanded model asset in Project view

Right-click on the Idle node and select Set As Layer Default State. That node will turn orange while the other nodes stay gray; the default animation state is where the network of nodes starts before the game has made any changes. You’ll need to link the nodes together with lines indicating transitions between animation states; right-click on a node and select Make Transition in order to start dragging out an arrow that you can click on another node to connect. Connect nodes in the pattern shown in figure 7.16 (be sure to make transitions in both directions for most nodes, but not from jump to run). These transition lines determine how the animation states connect to each other, and control the changes from one state to another during the game.


While working in the Animator view, you may see an error about AnimationStateMachine.TransitionEditionContext.BuildNames. Simply restart Unity; this seems to be a harmless bug.

The transitions rely on a set of controlling values, so let’s create those parameters. In the top left of figure 7.16 is a tab called Parameters; click that to see a panel with a + button for adding parameters. Add a float called Speed and a Boolean called Jumping. Those values will be adjusted by our code, and they’ll trigger transitions between animation states.

Click on the transition lines to see their settings in the Inspector (see figure 7.18). Here’s where we’ll adjust how the animation states change when the parameters change. For example, click on the Idle-to-Run transition to adjust the conditions of that transition. Under Conditions, choose Speed, Greater, and 0.1. Turn off Has Exit Time (that would force playing the animation all the way through, as opposed to cutting short immediately when the transition happens). Then click the arrow next to the Settings label in order to see that entire menu; other transitions should be able to interrupt this one, so change the Interruption Source menu from None to Current State. Repeat this for all the transitions in table 7.1.

Figure 7.18. Transition settings in the Inspector

Table 7.1. Conditions for all transitions in this animation controller





Idle-to-Run: Speed greater than .1

Run-to-Idle: Speed less than .1

Idle-to-Jump: Jumping is true

Run-to-Jump: Jumping is true

Jump-to-Idle: Jumping is false


In addition to these menu-based settings, there’s a complex visual interface shown in figure 7.18 just above the Condition setting. This graph allows you to visually adjust the length in time of a transition. The default transition time looks fine for both transitions between Idle and Run, but all of the transitions to and from Jump should be shorter so that the character snaps more quickly into and out of the jump animation. The shaded area of the graph indicates how long the transition takes; to see more detail, use Alt+left-click to pan across the graph and Alt+right-click to scale it (these are the same controls as navigating in the Scene view). Use the arrows on top of the shaded area to shrink it to under 4 milliseconds for all three Jump transitions.

Finally, you can perfect the animation network by selecting the animation nodes one at a time and adjusting the ordering of transitions. The Inspector will show a list of all transitions to and from that node; you can drag items in the list (their drag handles are the icon on the left side) to reorder them. Make sure the Jump transition is on top for both the Idle and Run nodes so that the Jump transition has priority over the other transitions. While you’re looking at these settings you can also change the playback speed if the animation looks too slow (Run looks better at 1.5 speed).

The animation controller is set up, so now we can operate the animations from the movement script.

7.4.3. Writing code that operates the animator

Finally, you’ll add methods to the RelativeMovement script. As explained earlier, most of the work of setting up animation states is done in the animation controller; only a small amount of code is needed to operate a rich and fluid animation system (see the following listing).

Listing 7.6. Code for setting values in the Animator component

Again, much of this listing is repeated from previous listings; the animation code is a handful of lines interspersed throughout the existing movement script. Pick out the _animator lines in order to find additions to make in your code.

The script needs a reference to the Animator component, and then the code sets values (either floats or Booleans) on the animator. The only somewhat nonobvious bit of code is the condition (_contact != null) before setting the Jumping Boolean. That condition prevents the animator from playing the jump animation right from the start. Even though the character is technically falling for a split second, there won’t be any collision data until the character touches the ground for the first time.
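Out of context, the animator additions amount to a handful of calls like the following (a sketch only; movement, hitGround, and _contact come from the earlier listings, and the parameter strings must match the names defined in the animator controller):

```csharp
private Animator _animator;  // cached in Start() with GetComponent<Animator>()

// inside Update(), after the movement vector is computed:
_animator.SetFloat("Speed", movement.sqrMagnitude);
if (hitGround) {
    _animator.SetBool("Jumping", false);
} else if (_contact != null) {
    // skip the jump animation during the initial fall, before
    // the character has touched anything
    _animator.SetBool("Jumping", true);
}
```

Using sqrMagnitude rather than magnitude is a common shortcut here: it avoids a square root, and all the Speed conditions care about is whether the value is above or below the .1 threshold.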

And there you have it! Now we have a nice third-person movement demo, with camera-relative controls and character animation playing.

7.5. Summary

In this chapter you’ve learned that

· Third-person view means the camera moves around the character instead of inside the character.

· Simulated shadows, like real-time shadows and lightmaps, improve the graphics.

· Controls can be relative to the camera instead of relative to the character.

· You can improve on Unity’s ground detection by casting a ray downward.

· Sophisticated animation set up with Unity’s animator controller results in lifelike characters.