iOS App Development For Dummies (2014)

Part III. Getting Your Feet Wet: Basic Functionality

Chapter 10. Adding Animation and Sound to Your App

In This Chapter

· Understanding the iOS coordinate system

· Animating the car (view)

· Working with the block object pattern

· Detecting and responding to touches

· Animating in place

Although it may take some time before you go on your road trip, as well as complete the building of the app I’m showing you in this book, the least I can do is show you how to take a test drive in your ’59 pink Cadillac Eldorado Biarritz convertible.

In this chapter, you find out how to make the car move up the screen, turn around, and move back to its original position — with the appropriate sound effects.

I also show you how to drag the car on the screen to position the ride from wherever you’d like. And to add just a little more pizzazz, I show you how to make the Test Drive button blink.

This chapter provides you with a very good base for understanding animation, sound, and how to manage touches on the screen. They’re particularly useful in games, but they also find a very comfortable home in advanced interfaces for all types of apps.

Understanding iOS Animation

Fortunately, most of what you need to do as far as iOS animation is concerned is already built into the framework. Some view properties can be animated (the center point, for example), which means that you just need to tell the view where to start and where to end its move, and a few other optional parameters, and you’re done. The view itself (in the UIView base class) has the functionality to animate the move. To give you some context in which to understand how animation on the iPhone and iPad works, however, I need to explain what goes on under the hood when a framework takes care of the animation chores for you. More specifically, I need to delve a bit deeper into views, their properties, and the coordinate systems on the iPad.

View geometry and coordinate systems

The default coordinate system in UIKit places its origin in the top-left corner and has axes that extend down and to the right from the origin point. Coordinate values are represented using floating-point numbers, and you don’t have to worry about the screen resolution; the frameworks take care of that automatically. In addition to the screen coordinate system, views define their own local coordinate systems that allow you to specify coordinates relative to the view instead of relative to the screen. In practice, you often do both depending on what you’re trying to do.

Because every view and window defines its own local coordinate system, whenever you're drawing or dealing with coordinates, you’ll need to pay attention to which coordinate system you're using. I know that sounds ominous, but it’s really not that big a deal after you get into the rhythm of working with the coordinate systems.
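
If you need to convert between coordinate systems, UIView provides conversion methods. The following is a minimal sketch (using the car outlet from the RoadTrip project) that expresses the car's center, which is given in its superview's coordinates, in the window's coordinate system instead:

CGPoint centerInSuperview = self.car.center;
// convertPoint:toView: translates a point from the receiver's coordinate
// system into the coordinate system of the view you pass in.
CGPoint centerInWindow = [self.car.superview convertPoint:centerInSuperview
toView:self.view.window];
NSLog(@"In superview: %@ In window: %@",
NSStringFromCGPoint(centerInSuperview),
NSStringFromCGPoint(centerInWindow));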

Points versus pixels

Okay, so where does the high-resolution Retina display come in?

In iOS, all coordinate values and distances are specified using floating-point values in units referred to as points. The main thing to understand about points is that they provide a fixed frame of reference for drawing. That fixed frame of reference is derived from the fact that a point is 1/72 of an inch. (This was set for the original Macintosh and LaserWriter and remains a milestone — perhaps the cornerstone — of desktop publishing. In our context, a point is a commonly accepted unit of length.)

The original Macintosh had a screen resolution of 72 pixels per inch (PPI). This meant that points and pixels were identical. However, over time, technology has advanced and now the pixel size and density (PPI) have changed. No longer do most devices actually have 72 PPI, but because pixels and points have been used interchangeably, the arrival of high-density displays such as the Retina display has caused confusion.

When you are talking about size or location, you are probably talking in points. If you are talking about the resolution of the image you will place on an object with a certain size or location, you are probably talking in pixels. On a Retina display, your image will have twice the pixels that you have on a non-Retina display, and you probably use a separate .png file. (Don’t worry: The asset manager makes it easy to have two files for a single image.)

Keep this distinction in mind, particularly when you are looking at old (pre-2013) documentation. You have to sort out when “pixel” means pixel and when it means point. Some developers use as a basic rule of thumb, “Xcode=points and Photoshop=pixels.” That’s a generalization and it’s not true in all cases, but as generalizations go, it’s generally right.
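
If you're ever unsure which kind of display your code is running on, you can ask UIKit for the screen's scale factor, which is the number of pixels per point. A minimal sketch:

CGFloat scale = [UIScreen mainScreen].scale; // 1.0 on non-Retina, 2.0 on Retina
NSLog(@"1 point = %.0f pixel(s) on this device", scale);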

A view’s size and position

A view object’s location in a coordinate system is determined using either its frame or its center property:

· The frame property contains the frame rectangle, which specifies the size and location of the view in its superview’s coordinate system. (If you’re still hazy about the whole superview/subview thing, check out Chapter 4.)

· The center property contains the known center point of the view in its superview’s coordinate system.

In your wanderings, you may someday encounter the bounds property. It’s tied up with the bounds rectangle, which specifies the size of the view (and its content origin) in the view’s own local coordinate system. I mention it here for completeness, but I don't use it in this book.

The view coordinates you set for your view’s location in Interface Builder are in points. The coordinates start from 0,0 in the top left and increase as you go down and to the right. You usually place your objects below the 20-point status bar, but with iOS 7, content can extend under the translucent status, navigation, and tool bars, so you may place your objects even lower if you don’t want them to show through.

Working with data structures

In addition to knowing what goes where, you’ll need to understand how data structures impact how you work with views.

The frame is a CGRect — a struct (a C language type that aggregates conceptually related variables into a single type) with an origin (a CGPoint) and a size (a CGSize). CG here stands for Core Graphics, one of the frameworks included by Xcode when you selected the Single View Application template. (See Chapter 4 to remind yourself about frameworks.) The following code shows the CGRect struct:

struct CGRect {
CGPoint origin;
CGSize size;
};

An origin is a CGPoint with an x and a y value, and a CGSize is a similar struct with a width and a height value. The following code shows the CGPoint and CGSize structs:

struct CGPoint {
CGFloat x;
CGFloat y;
};

struct CGSize {
CGFloat width;
CGFloat height;
};

Similarly, the center property is a CGPoint. And that’s all you need to know about the data structures you’ll be using.
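
To see those structs in action, here's a small sketch that builds a frame with the Core Graphics helper functions and logs its pieces (the numbers are arbitrary):

CGRect frame = CGRectMake(20.0, 50.0, 100.0, 80.0); // x, y, width, height, in points
CGPoint origin = frame.origin; // a CGPoint: {20, 50}
CGSize size = frame.size; // a CGSize: {100, 80}
NSLog(@"frame = %@, origin.x = %.0f, size.height = %.0f",
NSStringFromCGRect(frame), origin.x, size.height);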

Coordinating Auto Layout, Frames, and Constraints

If you are using Auto Layout (and you should be), you need to know at least the basics of how it interacts with your view's frame. Whereas the Size inspector lets you specify the exact size and location of a view's frame in points, the constraints-based Auto Layout system lets you prioritize constraints. At runtime, the constraints are resolved together, with their priorities affecting the whole layout. The size of the device, its orientation, and the sizes of views whose dimensions depend on their content are all taken into account, so you can't always know the exact result in advance.

Before Auto Layout came into the picture, the Size inspector let you pin edges of objects to their container view. That provided a certain amount of dynamism, but Auto Layout brings much more to the table, and it lets you deal with changing device and view sizes easily.

In Apple's documentation as well as in this book, most of the discussion of Auto Layout assumes that the things that change at runtime are the orientation of the device as well as the size and position of views that respond to orientation and content changes.

With this chapter, however, another variable comes into play. You are going to be moving the view containing the car image. Unless you are careful, the results may be other than what you expect to see. Here is what you should keep in mind.

If you will be transforming a view (and you will be doing that when you rotate the car image), make certain that its constraints don't undo what you are trying to do. Positional constraints work with the center point of a view. Sizing constraints (pinning to the frame of another view or container view) generally work with frames.

Before moving ahead to animate the car image, check what its constraints are by opening the Constraints section in the Document Outline for the Main_iPad.storyboard file. It should be pinned vertically to Bottom Layout Guide. It should also be horizontally centered (that uses the center point and not the frame). Any other constraints for the car image view that may have accumulated in your experiments should be removed. Just delete them from the Constraints section of the Document Outline using the Delete key.

If necessary, use Editor⇒Align⇒Horizontal Center in Container to add the centering constraint. Select the car image and control-drag from it to Bottom Layout Guide in the Document Outline to add the vertical constraint (choose the vertical spacing option).
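
You add both constraints in Interface Builder, so the following is strictly for reference: a hedged sketch of what the same two constraints would look like if you created them programmatically with NSLayoutConstraint (the car and bottomLayoutGuide names come from the view controller; everything else is illustrative):

// Horizontally center the car in its superview (uses the center, not the frame).
NSLayoutConstraint *centerX =
[NSLayoutConstraint constraintWithItem:self.car
attribute:NSLayoutAttributeCenterX
relatedBy:NSLayoutRelationEqual
toItem:self.view
attribute:NSLayoutAttributeCenterX
multiplier:1.0
constant:0.0];

// Pin the car's bottom edge to the top of the bottom layout guide.
NSLayoutConstraint *pinBottom =
[NSLayoutConstraint constraintWithItem:self.car
attribute:NSLayoutAttributeBottom
relatedBy:NSLayoutRelationEqual
toItem:self.bottomLayoutGuide
attribute:NSLayoutAttributeTop
multiplier:1.0
constant:0.0];

[self.view addConstraints:@[centerX, pinBottom]];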

Animating a View

Whenever you assign a new value to certain view properties (such as the frame and center properties, as explained in the previous section), the view is immediately redrawn and the change is immediately visible on the screen.

In addition, changes to several view properties (such as those just mentioned) can be animated. This means that changing the property creates an animation that conveys the change to the user over a short period of time — and it’s all handled for you by the UIView class. What’s more, it takes only one method call to specify the animations to be performed and the options for the animation.

You can animate the following properties of the UIView class (the first three are explained previously); a short sketch follows the list:

· frame: This property contains the frame rectangle, which specifies the size and location of the view in its superview’s coordinate system.

· bounds: This property contains the bounds rectangle, which specifies the size of the view (and its content origin) in the view’s own local coordinate system.

· center: This property contains the known center point of the view in its superview’s coordinate system.

· transform: I get to this one a bit later in the chapter.

· alpha: This property controls the degree of transparency. If you animate it, you can get views to fade in and fade out.

· backgroundColor: This property allows you to transition from one color to another.
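
As a quick illustration of how little code these animations take, here's a sketch that fades the car out over half a second; any of the properties in the preceding list could be animated the same way:

[UIView animateWithDuration:0.5 animations:^{
self.car.alpha = 0.0; // fade out; animate back to 1.0 to fade in again
}];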

Finally, More Code

In this section, you add the code to animate your ’59 pink Cadillac Eldorado Biarritz convertible and have it travel up the screen, turn around, and travel back down the screen.

Implementing the testDrive Method

In Chapter 9, you learned how to create an action for the Test Drive button using Interface Builder, which generated a method stub for you. Now it’s time to fill that stub with code.

Add the bolded code in Listing 10-1 to the testDrive: method in TestDriveController.m. I’m also having you add the stubs for code you’ll be adding later so that you can run your program before you're completely finished with the back and forth of the animation.

Listing 10-1: Updating testDrive: to Move the Car up the Screen

- (IBAction)testDrive:(id)sender {

CGPoint center = CGPointMake(self.car.center.x,
self.view.frame.origin.y + self.car.frame.size.height/2);
[UIView animateWithDuration:3 animations:^ {
self.car.center = center;
}
completion:^(BOOL finished){
[self rotate];
}];
}

- (void)rotate {
}

- (void)returnCar {
}

- (void)continueRotation {
}

Now, run your program and click or touch the Test Drive button. You’ll see your car move up the screen. You’re on your way!

Looking more closely at Listing 10-1, you see that you start by creating the coordinate (CGPoint) of where you would like the car to end up.

A car is just another view. The following code shows how to move the car on-screen by simply moving the center of the view that holds the image of the car.

CGPoint center = CGPointMake(self.car.center.x,
self.view.frame.origin.y + 
self.car.frame.size.height/2);

You use the center and frame properties primarily for manipulating the view. If you're changing only the position of the view (and not its size), the center property is the preferred way to do so.

CGPointMake is a function that creates a point for you when you specify the x and y coordinates as parameters. (You’ll be setting the car’s new center point.)

You can leave the x coordinate as is. Doing so makes the car drive right up the center of the screen.

self.car.center.x

Here’s the y coordinate:

self.view.frame.origin.y + self.car.frame.size.height/2

self.view.frame.origin.y is the top of the view, but if you have the center there, half the car is off the screen. To keep it all on the screen, you add back half the car’s height by including car.frame.size.height/2.

Notice I'm adding to the y coordinate because y increases as you move down the screen from the origin.

So, how do you get the sucker to actually move? Listing 10-1 uses the following code:

[UIView animateWithDuration:3 animations:^ {
self.car.center = center;
}
completion:^(BOOL finished){
[self rotate];
}];

animateWithDuration:animations:completion: is a UIView class method that allows you to set an animation duration and specify what you want animated as well as a completion handler that's called when the animation is complete.

First you specify that you want the animation to take three seconds:

animateWithDuration:3

and then you pass in an animation block with what you want animated:

animations:^ {
self.car.center = center;
}

This sets the new center you just computed, taking three seconds to move it from start to finish.

If the preceding syntax seems mysterious (what’s the ^ doing there and what’s up with the code as part of the message?), don’t worry: I explain blocks in the next section.

So although that’s all there is to get the car to move across the screen, you’re not done. You want it to rotate and then drive back across the screen and then rotate again. That’s where the completion handler comes in.

Although you can use a completion handler to simply let you know that an animation is finished, you can also use a completion handler to link multiple animations. (In fact, it’s the primary way to take care of that task.)

The completion handler that you specify

completion:^(BOOL finished){
[self rotate];
}

causes the rotate message to be sent when the animation is complete. You do the actual rotation in the rotate method.

Of course, right now, the rotate method does nothing. I have you add it so that the app will compile and run. I have you add returnCar and continueRotation to prevent an Incomplete Implementation compiler warning in TestDriveController.m.

animateWithDuration:animations:completion: is only one of a number of block-based methods that offer different levels of configuration for the animation block. Other methods include

animateWithDuration:animations:

and

animateWithDuration:delay:options:animations:completion:

animateWithDuration:animations: has no completion block, as you can see.

Both animateWithDuration:animations:completion: and animateWithDuration:animations: run only once, using an ease-in, ease-out animation curve (the default for most animations), which begins slowly, accelerates through the middle of the animation, and then slows again before completing. If you want to change the default animation parameters, you must use the animateWithDuration:delay:options:animations:completion: method (see the sketch after this list), which lets you customize the following:

· The delay to use before starting the animation

· The type of timing curve to use during the animation

· The number of times the animation should repeat

· Whether the animation should reverse itself automatically when it reaches the end

· Whether touch events are delivered to views while the animations are in progress

· Whether the animation should interrupt any in-progress animations or wait until those are complete before starting
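
Here's a hedged sketch of that fuller method in use; the particular duration, delay, and options are illustrative and aren't part of the RoadTrip code:

[UIView animateWithDuration:1.0
delay:0.5
options:UIViewAnimationOptionCurveLinear |
UIViewAnimationOptionAutoreverse |
UIViewAnimationOptionAllowUserInteraction
animations:^{
self.car.alpha = 0.25; // dim the car, then auto-reverse back
}
completion:^(BOOL finished){
self.car.alpha = 1.0; // make sure the final state is fully opaque
}];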

As you probably noticed (and I even admitted to), one of the things I tiptoed around was an explanation of the animation syntax:

[UIView animateWithDuration:3 animations:^ {
self.car.center = center;
}

Animations use blocks, a primary design pattern in iOS that is becoming increasingly important. So before I get to the rotate completion handler, I want to explain blocks.

Understanding Block Objects

Objective-C blocks are like traditional C functions in that blocks are small, self-contained units of code. They can be passed in as arguments of methods and functions and then used when they’re needed to do some work. (Like many programming topics, understanding block objects is easier when you use them, as you do in the previous section.)

With iOS 4 and newer versions, a number of methods and functions of the system frameworks are starting to take blocks as parameters, including the following:

· Completion handlers

· Notification handlers

· Error handlers

· Enumeration

· View animation and transitions

· Sorting

In the code listings in this chapter, you get to use a block-based method to animate the car, but block objects also have a number of other uses, especially in Grand Central Dispatch and the NSOperationQueue class, the two recommended technologies for concurrent processing. But because concurrent processing is beyond the scope of this book (way beyond the scope, in fact), I leave you to explore that use on your own.

One of the values of using blocks is that you can access local variables (as well as instance variables), which you can’t do in a function or a callback. You also don’t have to pass data around — a block can modify variables to pass data back. In addition, if you need to change something, there's no API to change, with its concomitant ripple effect.
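
Here's a tiny sketch of that behavior (the variable names are made up for illustration): a block can read the local variables in scope when it's created, and with the __block storage qualifier it can modify them, too:

__block int milesDriven = 0; // __block lets the block change this local
int tripLength = 5; // captured read-only by the block

void (^drive)(void) = ^{
milesDriven += tripLength; // reads tripLength, writes milesDriven
};

drive();
NSLog(@"Miles driven so far: %d", milesDriven); // logs 5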

In the animation explained in the previous section, you passed a block as the argument to a method. You created the block inline (it’s there in the message you are sending to the UIView to do the animation) because there wasn’t that much code, and that's often the way it's done. But sometimes it's easier to follow what's happening by declaring a block variable and passing that as the argument to the method. The declaration syntax, however, is similar to the standard syntax for function pointers, except that you use a caret (^) instead of an asterisk pointer (*).

If you look at animateWithDuration:animations:completion: in the UIView class reference, you’ll see

+ (void)animateWithDuration:(NSTimeInterval)duration
animations:(void (^)(void))animations
completion:(void (^)(BOOL finished))completion;

I know this looks a bit advanced for a For Dummies book, but I cover it here because Apple is now treating blocks as a primary design pattern, up there with inheritance and delegation — so don’t be surprised to find blocks being used more and more.

Nevertheless, because it's a tad advanced, I’ll go through the code slowly, and by the end — I promise — you’ll be comfortable with blocks, despite the really weird syntax.

To start, this is the syntax that defines animations as a block that has no parameters and no return value:

(void (^)(void))animations

completion is defined as a block that has no return value and takes a single Boolean argument:

(void (^)(BOOL finished))completion

When you create a block inline, you just use the caret (^) operator to indicate the beginning of a block and then follow with the code enclosed within the normal braces. That’s what was going on in Listing 10-1, with

animations:^ {
self.car.center = center;
}

and

completion:^(BOOL finished){
[self rotate];
}

Although in this example you use blocks inline, you could also declare them like any other local variable, as you can see in Listing 10-2. Add the code in bold in Listing 10-2 to your testDrive: method, replacing what you already have in that spot.

Listing 10-2: Using Declared Blocks

- (IBAction)testDrive:(id)sender {

CGPoint center = CGPointMake(self.car.center.x,
self.view.frame.origin.y + self.car.frame.size.height/2);

void (^animation)() = ^() {
self.car.center = center;
};

void (^completion)(BOOL) = ^(BOOL finished){
[self rotate];
};

[UIView animateWithDuration:3 animations:animation
completion:completion];
}

When you declare a block, you use the caret (^) operator to indicate the beginning of a block with the code enclosed within the normal braces, and a semicolon to indicate the end of a block expression.

The declaration in Listing 10-2 is pretty much the same as you see in the following animateWithDuration:animations:completion: method declaration, except that the identifiers have been moved around a little. I have bolded both to make that a little easier to see:

+ (void)animateWithDuration:(NSTimeInterval)duration
animations:(void (^)(void))animations
completion:(void (^)(BOOL finished))completion;

Here, you're declaring two block variables by using the ^ operator: one with the name of animation that has no return value and takes no parameters, and one with the name of completion that has no return value and takes BOOL as its single argument:

void (^animation)()
void (^completion)(BOOL)

This is like any other variable declaration (int i = 1, for example), in which you follow the equal sign with its definition.

You use the ^ operator again to indicate the beginning of the block literal — the definition assigned to the block variable. The block literal includes argument names (finished) as well as the body (code) of the block and is terminated with a semicolon:

void (^animation)() = ^() {
self.car.center = center;
};

void (^completion)(BOOL) = ^(BOOL finished){
[self rotate];
};

You’ll be using blocks a few more times in this book, so at some point (despite the weird syntax), you’ll become comfortable with them. (Frankly it took me a while to get used to them myself.) After you do get the hang of them, though, you’ll find all sorts of opportunities to use them to simplify your code, as you discover in Chapter 19.

Rotating the Object

In this section, I show you how to rotate a view (in this case, turn the car around). To do so, you update the rotate code stub you started out with back in Listing 10-1 with the bolded code in Listing 10-3.

Listing 10-3: Updating rotate

- (void)rotate {

CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI);

void (^animation)() = ^() {
self.car.transform = transform;
};

void (^completion)(BOOL) = ^(BOOL finished){
[self returnCar];
};

[UIView animateWithDuration:3 animations:animation completion:completion];
}

This method uses the block declarations I explain in the previous section.

The CGAffineTransform data structure represents a matrix used for affine transformations — a blueprint for how points in one coordinate system map to points in another coordinate system. Although CGAffineTransform has a number of uses (such as scaling and translating a coordinate system), the only one covered here is the rotation method you use in Listing 10-3:

CGAffineTransformMakeRotation(M_PI)

To rotate a view, you specify the angle (in radians) to rotate the coordinate system axes. Whereas degrees range from 0 to 360, radians range from 0 to 2π. So when you create a rotation that turns an object around one half-circle, that rotation in radians is π. (M_PI is a system constant that represents π.)
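
If you'd rather think in degrees, you can convert on the fly. A minimal sketch:

CGFloat degrees = 180.0;
CGFloat radians = degrees * M_PI / 180.0; // 180 degrees is pi radians
CGAffineTransform halfTurn = CGAffineTransformMakeRotation(radians);
// Equivalent to CGAffineTransformMakeRotation(M_PI)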

Just to make your life interesting, you should note that in iOS, positive is counterclockwise, but on Mac OS X, positive is clockwise.

The end result of Listing 10-3 is that the car will rotate 180 degrees in three seconds, and when it's done, you send the returnCar message in the completion handler.

To return the car to its original position, add the bolded code in Listing 10-4 to the returnCar method stub in TestDriveController.m.

Listing 10-4: Updating returnCar

- (void)returnCar {

CGPoint center = CGPointMake(self.view.center.x,
self.view.frame.size.height - self.car.frame.size.height);


void (^animation)() = ^() {
self.car.center = center;
};

void (^completion)(BOOL) = ^(BOOL finished){
[self continueRotation];
};

[UIView animateWithDuration:3 animations:animation
completion:completion];
}

This approach is pretty much the same as that of the testDrive method. You put the center back by computing the bottom of the view:

self.view.frame.size.height - self.car.frame.size.height

You can experiment with these formulas to see how to move the car around the view.

But you’re not done yet. You need to rotate the car back to its original position (unless you want to drive in reverse from California to New York). Add the bolded code in Listing 10-5 to the continueRotation method stub in TestDriveController.m.

Listing 10-5: Updating continueRotation

- (void)continueRotation {

CGAffineTransform transform =
CGAffineTransformMakeRotation(0);

void (^animation)() = ^() {
self.car.transform = transform;
};

[UIView animateWithDuration:3 animations:animation
completion:NULL];
}

You need to understand that the transform (in this case, a view rotation) is still there; that is, you created a transform to rotate the car 180 degrees. If you want to get the car back to its original orientation, you need to set the rotation back to 0.
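
Passing 0 to CGAffineTransformMakeRotation produces the identity transform, so an equivalent (not required) alternative is to assign the identity constant directly:

self.car.transform = CGAffineTransformIdentity; // same effect as a 0-radian rotation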

You could extend this action by having the car drive around the perimeter of the screen — but I’ll leave that up to you.

Working with Audio

Cars make noise, and a ’59 Cadillac certainly doesn't disappoint in that respect. So in this section, I show you how to add some sound to the RoadTrip app so that everyone can hear your car coming down the road.

More specifically, I discuss using two different ways iOS has for implementing audio. One is an instance of the AVAudioPlayer class — called, appropriately enough, an audio player — which provides playback of audio data from a file or memory. You use this class unless you're playing audio captured from a network stream or in need of very low I/O latency (lag time). The AVAudioPlayer class offers quite a lot of functionality, including playing sounds of any duration, looping sounds, playing multiple sounds simultaneously, and having one sound per audio player with precise synchronization among all the players in use. It also controls relative playback level, stereo positioning, and playback rate for each sound you're playing.

The AVAudioPlayer class lets you play sound in any audio format available in iOS. You implement a delegate to handle interruptions (such as an incoming SMS message) and to update the user interface when a sound has finished playing. The delegate methods to use are described in the AVAudioPlayerDelegate Protocol Reference (which you can access in the Organizer window as I explain in Chapter 7).
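
For example, if you made the view controller the player's delegate, you could update the interface when playback ends. A hedged sketch, assuming the class extension adopts AVAudioPlayerDelegate and you set backgroundAudioPlayer.delegate = self after creating the player (RoadTrip doesn't need this):

- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player
successfully:(BOOL)flag {
// Called when the sound finishes on its own (not when you send stop).
NSLog(@"Background audio finished (successfully = %d)", flag);
}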

The second way to play sound is by using System Sound Services, which provides a way to play short sounds and make the device vibrate. You can use System Sound Services to play short (30 seconds or shorter) sounds. The interface doesn't provide level, positioning, looping, or timing control and doesn't support simultaneous playback: You can play only one sound at a time. You can use System Sound Services to provide audible alerts; on some iOS devices, alerts can even include vibration.
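
The same AudioServicesPlaySystemSound function you use later for the BurnRubber sound can also trigger vibration by passing a system-defined constant. A minimal sketch (it does nothing on devices without a vibration motor, such as the iPad):

AudioServicesPlaySystemSound(kSystemSoundID_Vibrate); // declared in AudioToolbox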

You have seen how to add frameworks to your app: In Chapter 8, in the section on network availability, you added a framework to Linked Frameworks and Libraries in the project's General tab. The code in this chapter relies on two more frameworks, AVFoundation.framework and AudioToolbox.framework.

I showed you how to do that because I wanted you to understand that you often need to add new frameworks to support your code. Starting with Xcode 5, the process is easier (that is to say, it's automated), so I won't be asking you to add the new frameworks.

Later in this chapter, you will import the two framework header files using this code:

#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

The libraries will be linked automatically for you.

The sound files you need for RoadTrip (the aptly named BurnRubber.aif and CarRunning.aif) are already in the Resources folder that you added to your project. (See Chapter 3 if you haven’t already done this.)

You can use Audacity, a free, open source application for recording and editing sounds, to create your own sound files. It's available for Mac OS X, Microsoft Windows, GNU/Linux, and other operating systems.

With the added frameworks in place, you now need to import the necessary audio player and system sound services headers and then add the instance variables you’ll be using. To accomplish all this, add the bolded code in Listing 10-6 to TestDriveController.m.

Listing 10-6: Updating the TestDriveController Class Extension

#import "TestDriveController.h"
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface TestDriveController () {
AVAudioPlayer *backgroundAudioPlayer;
SystemSoundID burnRubberSoundID;
}
@property (weak, nonatomic) IBOutlet UIButton
*testDriveButton;
@property (strong, nonatomic) IBOutlet UIImageView *car;
@property (weak, nonatomic) IBOutlet UIToolbar *toolbar;
- (IBAction)testDrive:(id)sender;
- (void)rotate;
- (void)returnCar;


@end

@implementation TestDriveController

As you can see, I'm having you take advantage of being able to put instance variables in the implementation file to keep them hidden. In fact, the file template had already added the class extension for you, and you have already placed two properties and an action in it.

@interface TestDriveController ()

@end

Next, you need to set up the audio player and system sound services. Add the bolded code in Listing 10-7 to viewDidLoad in TestDriveController.m.

Listing 10-7: Updating viewDidLoad

- (void)viewDidLoad
{
[super viewDidLoad];

NSURL* backgroundURL = [NSURL fileURLWithPath:
[[NSBundle mainBundle]pathForResource:
@"CarRunning" ofType:@"aif"]];
backgroundAudioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:backgroundURL error:nil];
backgroundAudioPlayer.numberOfLoops = -1;
[backgroundAudioPlayer prepareToPlay];

NSURL* burnRubberURL = [NSURL fileURLWithPath:
[[NSBundle mainBundle] pathForResource:
@"BurnRubber" ofType:@"aif"]];
AudioServicesCreateSystemSoundID(
(__bridge CFURLRef)burnRubberURL, &burnRubberSoundID);
}

In Listing 10-7, the first thing you do is load the sound file from the resources in your bundle:

NSURL* backgroundURL = [NSURL fileURLWithPath:
[[NSBundle mainBundle]pathForResource:
@"CarRunning" ofType:@"aif"]];

“What bundle?” you say? Well, when you build your iOS application, Xcode packages it as a bundle — one containing the following:

· The application’s executable code

· Any resources that the app has to use (for instance, the application icon, other images, and localized content — in this case, the plist, .html files, and .png files)

· The RoadTrip-Info.plist file, also known as the information property list, which defines key values for the application, such as bundle ID, version number, and display name

Pretty easy, huh?

Coming back to Listing 10-7, fileURLWithPath: is an NSURL class method that initializes and returns an NSURL object as a file URL with a specified path. The NSURL class includes the utilities necessary for downloading files or other resources from web and FTP servers and from the file system.

The sound file you’ll be using is a resource, and pathForResource: is an NSBundle method that creates the path needed by the fileURLWithPath: method to construct the NSURL. Just give pathForResource: the name and the file type, and it returns the path that gets packed in to the NSURL and loaded.

Be sure that you provide the correct file type; otherwise, this technique won’t work.

Next, you create an instance of the audio player

backgroundAudioPlayer = [[AVAudioPlayer alloc] 
initWithContentsOfURL:backgroundURL error:nil];

and initialize it with the audio file location (NSURL). You’ll ignore any errors.

Set the number of loops to -1 (which will cause the audio file to continue to play until you stop it) and tell the player to get ready to play:

backgroundAudioPlayer.numberOfLoops = -1;
[backgroundAudioPlayer prepareToPlay];

prepareToPlay prepares the audio player for playback by preloading its buffers; it also acquires the audio hardware needed for playback. This preloading minimizes the lag between calling the play method and the start of sound output. Without this preloading, although the player would still play when you send the play message (later) in viewDidLoad, you’ll likely notice a lag as it sets up its buffers.

Similarly, you set up the NSURL for the BurnRubber sound:

NSURL* burnRubberURL = [NSURL fileURLWithPath:
[[NSBundle mainBundle] pathForResource:
@"BurnRubber" ofType:@"aif"]];

You then call a core foundation method to create a system sound object that you later use to play the sound:

AudioServicesCreateSystemSoundID((__bridge
CFURLRef)burnRubberURL, &burnRubberSoundID);

CFURLRef is a CoreFoundation object, and ARC doesn't automatically manage the lifetimes of CoreFoundation types. And although you can use certain CoreFoundation memory management rules and functions, you don’t need to do that here. That’s because all you’re doing is casting an Objective-C object to a CoreFoundation type object, and you won’t need to use any CoreFoundation memory management in your code. You have to let the compiler know about any memory management implications, however, so you need to use the __bridge cast.

In testDrive, you’ll play both the BurnRubber and CarRunning sounds. To do so, add the bolded code in Listing 10-8 to testDrive: in TestDriveController.m.

Listing 10-8: Updating testDrive

- (IBAction)testDrive:(id)sender {


AudioServicesPlaySystemSound(burnRubberSoundID);
[self performSelector:@selector(playCarSound)
withObject:self afterDelay:.2];

CGPoint center = CGPointMake(_car.center.x,
self.view.frame.origin.y + self.car.frame.size.height/2 );

void (^animation)() = ^() {

self.car.center = center;
};

void (^completion)(BOOL) = ^(BOOL finished){
[self rotate];
};

[UIView animateWithDuration:3 animations:animation completion:completion];
}

You also need to add the code in Listing 10-9 to play the CarRunning sound.

Listing 10-9: Adding playCarSound

- (void)playCarSound {

[backgroundAudioPlayer play];
}

I’m having you play the BurnRubber sound first, followed by the CarRunning sound. If you don’t wait until the BurnRubber sound is complete before you play the CarRunning sound, the BurnRubber sound is drowned out by the CarRunning sound.

To play the BurnRubber sound, you use a function call to System Sound Services:

AudioServicesPlaySystemSound(burnRubberSoundID);

After this sound is done, you start the CarRunning sound by using a very useful method that will enable you to send the message to start the audio player after a delay. That method is performSelector:withObject:afterDelay:, and it looks like this:

[self performSelector:@selector(playCarSound)
withObject:self afterDelay:.2];

performSelector:withObject:afterDelay: sends a message that you specify to an object after a delay. The method you want invoked should have no return value, and should have zero or one argument.

In Listing 10-9, this method meets these rules:

- (void)playCarSound {

[backgroundAudioPlayer play];
}

@selector(playCarSound) is a compiler directive that returns a selector for a method name. A selector is the name used to select a method to execute for an object; it becomes a unique identifier when the source code is compiled.

Selectors really don’t do anything. What makes the selector method name different from a plain string is that the compiler makes sure that selectors are unique. Selectors are useful because at runtime they act like a dynamic function pointer that, for a given name, automatically points to the implementation of a method appropriate for whichever class they’re used with.

withObject: is the argument to pass to the method when it's invoked. afterDelay: is the minimum amount of time to wait before sending the message. Specifying a delay of 0 doesn't necessarily cause the selector to be performed immediately. When you send the performSelector:withObject:afterDelay: message here, you specify 0.2 seconds because that's the duration of the BurnRubber sound.

Sometimes you may need to cancel a selector. cancelPerformSelectorsWithTarget: cancels all outstanding selectors scheduled to be performed with a given target.

Several other variations exist on the performSelector:withObject:afterDelay: method. Those variations are part of the NSObject class, which is the root class of most Objective-C class hierarchies. It provides the basic interface to the runtime system as well as the capability to behave as Objective-C objects.

Finally, to play the sound in the playCarSound method, you send the audio player the play message:

[backgroundAudioPlayer play];

The play message plays a sound asynchronously. If you haven’t already sent the prepareToPlay message, play will send that for you as well (although you should expect a lag before the sound is played).

Next, you need to stop playing the sound in the continueRotation animation’s completion block (or it gets really annoying). To stop playing the sound, add the bolded code in Listing 10-10 to continueRotation in TestDriveController.m. (completion replaces the previous value, which was NULL.)

Listing 10-10: Updating continueRotation to Stop the Sound

- (void)continueRotation {

CGAffineTransform transform =
CGAffineTransformMakeRotation(0);

void (^animation)() = ^() {
_car.transform = transform;
};

void (^completion)(BOOL) = ^(BOOL finished){
[backgroundAudioPlayer stop];
[backgroundAudioPlayer prepareToPlay];
};

[UIView animateWithDuration:3 animations:animation
completion:completion];
}

In the code in Listing 10-10, you also set up the audio player to play again.

And there you have it. Run your project and you’ll notice some very realistic sound effects when you tap the Test Drive button.

Tracking Touches

It would be nice to be able to drag the car and place it anywhere on the screen. In this section, I explain how to code for dragging an object, as well as how touches work on an iOS device.

The touch of a finger (or lifting it from the screen) adds a touch event to the application’s event queue, where it’s encapsulated (contained) in a UIEvent object. A UITouch object exists for each finger touching the screen, which enables you to track individual touches.

The touchesBegan:withEvent: message is sent when one or more fingers touch down in a view. This method is declared by UIResponder, an ancestor of TestDriveController (UIViewController itself derives from UIResponder).

As the user continues to touch the screen with his or her fingers, the system reports the changes for each finger in the corresponding UITouch object, thereby sending the touchesMoved:withEvent: message. The touchesEnded:withEvent: message is sent when one or more fingers lift from the associated view. The touchesCancelled:withEvent: message, on the other hand, is sent when a system event (such as a low-memory warning) cancels a touch event.

In this app, you need to be concerned only with the first two methods just described.

To begin the process of responding to a touch event, add a new instance variable (bolded in Listing 10-11) to the TestDriveController.m implementation file.

Listing 10-11: Updating the TestDriveController Implementation

@interface TestDriveController () {

AVAudioPlayer *backgroundAudioPlayer;
SystemSoundID burnRubberSoundID;
BOOL touchInCar;
}
@end

Next, add the touchesBegan:withEvent: method in Listing 10-12 to TestDriveController.m to start tracking touches. (You're actually overriding this method because UIViewController inherits it from the UIResponder base class.)

Listing 10-12: Overriding touchesBegan:

- (void)touchesBegan:(NSSet *)touches withEvent:
(UIEvent *)event
{
UITouch *touch = [touches anyObject];
if (CGRectContainsPoint(self.car.frame,
[touch locationInView:self.view]))
touchInCar = YES;
else {
touchInCar = NO;
[super touchesBegan:touches withEvent:event];
}
}

As mentioned previously, the touchesBegan:withEvent: message is sent when one or more fingers touch down in a view. The touches themselves are passed to the method in an NSSet object — an unordered collection of distinct elements.

To access an object in NSSet, use the anyObject method — it returns one of the objects in the set. For our purposes here, you're assuming just one object — but you might want to explore this issue further on your own so that you can understand how to handle additional possibilities.

The following code shows how you use the anyObject method:

UITouch *touch = [touches anyObject];

Next, have the code determine whether the user’s touch event is in the Car (UIImageView) view:

if (CGRectContainsPoint(self.car.frame,
[touch locationInView:self.view]))

CGRectContainsPoint is a function that returns YES when a rectangle (view coordinates) contains a point. You specify the car’s frame as the rectangle:

self.car.frame

and you specify the point by sending the locationInView: message to the touch:

locationInView:self.view

locationInView: returns the current location of the receiver in the coordinate system of the given view. In this case, you're using the Main view, but you might want to change the view if you’re trying to determine the location within another view, for example. Maybe the user is touching an itty-bitty gas pedal. (Just to be clear, in our RoadTrip app, the car does not have an itty-bitty gas pedal.)

If it's determined that the touch is in the car, you assign YES to the touchInCar instance variable; if it’s not, you assign NO and forward the message up the responder chain. You use touchInCar later to determine whether the user is dragging the car around or just running his finger over the screen.

The default implementation of touchesBegan: does nothing. However, subclasses derived directly from UIResponder, particularly UIView, forward the message up the responder chain. To forward the message to the next responder, send the message to super (the superclass implementation).

If you override touchesBegan:withEvent: without calling super (a common use pattern), you must also override the other methods for handling touch events, if only as stub (empty) implementations.

Multiple touches are disabled by default. To allow your app to receive multiple touch events, you must set the multipleTouchEnabled property of the corresponding view instance to YES.
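
Here's a minimal sketch of both points; RoadTrip needs neither, so treat it as illustration only:

// Somewhere during setup, such as viewDidLoad:
self.view.multipleTouchEnabled = YES;

// Stub overrides to pair with a touchesBegan:withEvent: that doesn't call super:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
// nothing to do in this app
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
// nothing to do in this app
}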

As users merrily move the car around the screen (perhaps saying zoom zoom to themselves), your app is constantly being sent the touchesMoved: message. Add the code in Listing 10-13 to TestDriveController.m to override that method, which will enable you to move the car to where the user’s finger is.

Listing 10-13: Overriding touchesMoved:withEvent:

- (void)touchesMoved:(NSSet *)touches withEvent:
(UIEvent *)event {

if (touchInCar) {
UITouch* touch = [touches anyObject];
self.car.center = [touch locationInView:self.view];
}
else
[super touchesMoved:touches withEvent:event];
}

If the first touch was in the Car view (touchInCar is YES), you assign car’s center property to the touch coordinate. As I explain in the “Animating a View” section, earlier in this chapter, when you assign a new value to the center property, the view’s location is immediately changed. Otherwise, you ignore the touch and forward the message up the responder chain.

It's interesting to observe that when you position the car next to a button, it will travel under that button when you touch the Test Drive button. This feature illustrates the subview structure that I explain in Chapter 4 in the section about the view hierarchy. Because I had you add the buttons last (they're subviews of the Main view), they're displayed on top of the subviews (car) that you added earlier.

Experiment with moving the car around and then using the Test Drive button. If there's anything wrong with your formulas for positioning the car during the Test Drive, you'll see it when the car starts from a different place.

Animating a Series of Images “In Place”

Although I explain animation using the UIView methods earlier in this chapter, this section shows you a way to animate a series of images “in place” — you are not moving the image around as you did earlier with the car; instead you are changing the image where it is to make it appear as if it were animated.

To make the Test Drive button blink, for example, add the bolded code in Listing 10-14 to TestDriveController.m. As you can see in the listing, only a single line of code is needed to animate the button.

Listing 10-14: Creating a Blinking Button

- (void)viewDidLoad
{
[super viewDidLoad];

NSURL* backgroundURL = [NSURL 
fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"CarRunning" ofType:@"aif"]];
backgroundAudioPlayer = [[AVAudioPlayer alloc] 
initWithContentsOfURL:backgroundURL error:nil];
backgroundAudioPlayer.numberOfLoops = -1;
[backgroundAudioPlayer prepareToPlay];

NSURL* burnRubberURL = [NSURL 
fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"BurnRubber" ofType:@"aif"]];
AudioServicesCreateSystemSoundID((__bridge 
CFURLRef)burnRubberURL, &burnRubberSoundID);
[self.testDriveButton setBackgroundImage:[UIImage animatedImageNamed:@"Button" duration:1.0 ] forState:UIControlStateNormal];
}

This blinking button is designed to show you how to animate changing images. Blinking objects on the screen are generally avoided in good interfaces. Remember the famous saying, “Less is more.”

In Chapter 5, I show you how to add a custom button with a Button background image. You could have also programmatically added the background image by sending the button the setBackgroundImage:forState: message. (Chapter 5 explains the control state as well.) Normally, you might think of making the background image a single image. However, animatedImageNamed:duration: and some similar methods use instead a series of files, each displayed for a duration you specify. This type of method enables you to animate (this time, in place) not only a button but also any image by simply supplying a series of images:

[self.testDriveButton setBackgroundImage:
[UIImage animatedImageNamed:@"Button" duration:1.0]
forState:UIControlStateNormal];

In the animatedImageNamed: method, you supply a base name of an image to animate. The method appends a 0 to the base name and loads that image (in this case, Button0). After the time that you specify in duration has elapsed, the animatedImageNamed: method appends the next number (in this case, 1) to the base image name and attempts to load it and the remainder of images (up to 1,024 images) until it runs out of images, and then it starts over.

In the Project navigator, open the disclosure triangle for the RoadTrip Resources group that you created in Chapter 3. If you look in the RoadTrip Resources group, you see two images, Button0 and Button1 — with Button being the base name you specified. This is an “in place” animation, so all images included in the animated image should share the same size and scale.

If you select each image in the Project navigator, you can see that they're slightly different colors, and each will display for 1.0 second (duration:1.0). This makes the button blink and certainly adds some life to the Main view.
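
The same call works for any image view, not just a button background. A hedged sketch, assuming you had added frame images named Gauge0.png, Gauge1.png, and so on to the project (they aren't part of RoadTrip):

UIImageView *gaugeView = [[UIImageView alloc]
initWithFrame:CGRectMake(20.0, 20.0, 64.0, 64.0)];
gaugeView.image = [UIImage animatedImageNamed:@"Gauge" duration:0.5];
[self.view addSubview:gaugeView];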

iPhone versus iPad

The iOS 7 animation and sound libraries and frameworks are the same for the iPhone and iPad, so the code shown in this chapter works properly on both the iPhone and iPad apps. The differences are confined mostly to the separate storyboards and the support for a navigation view interface as opposed to a split view controller interface.