iOS 6 Application Development For Dummies (2013)

Part III. Getting Your Feet Wet - Basic Functionality

Chapter 10. Adding Animation and Sound to Your App

In This Chapter

Understanding the iOS coordinate system

Animating the car (view)

Working with the block object pattern

Detecting and responding to touches

Animating in place

Although it may take some time before you go on your road trip, as well as complete the building of the app I’m showing you in this book, the least I can do is show you how to take a test drive in your ’59 pink Cadillac Eldorado Biarritz convertible.

In this chapter, you find out how to make the car move up the screen, turn around, and move back to its original position — with the appropriate sound effects.

I also show you how to drag the car on the screen to position the ride from wherever you’d like. And to add just a little more pizzazz, I show you how to make the Test Drive button blink.

This chapter provides you with a very good base for understanding animation, sound, and how to manage touches on the screen.

Understanding iOS Animation

Fortunately, most of what you need to do as far as iOS animation is concerned is already built into the framework. Some view properties can be animated (the center point, for example), which means that you just need to tell the view where to start and where to end, and a few other optional parameters, and you’re done. The view itself (in the UIView base class) has the functionality to animate the move. To give you some context in which to understand how animation on the iPhone and iPad works, however, I need to explain what goes on under the hood when a framework takes care of the animation chores for you. More specifically, I need to delve a bit deeper into views, their properties, and the coordinate systems on the iPad.

View geometry and coordinate systems

The default coordinate system in UIKit places its origin in the top-left corner and has axes that extend down and to the right from the origin point. Coordinate values are represented using floating-point numbers, and you don’t have to worry about the screen resolution; the frameworks take care of that automatically. Figure 10-1 shows this coordinate system relative to the iPad screen. In addition to the screen coordinate system, views define their own local coordinate systems that allow you to specify coordinates relative to the view instead of relative to the screen.

Figure 10-1: The coordinate system on an iPad screen (with a split view).

Because every view and window defines its own local coordinate system, whenever you’re drawing or dealing with coordinates, you’ll need to pay attention to which coordinate system you’re using. I know that sounds ominous, but it’s really not that big a deal after you get into the rhythm of working with the coordinate systems.

Points versus pixels

Okay, so where does the high-resolution display come in?

In iOS, all coordinate values and distances are specified using floating-point values in units referred to as points. The measurable size of a point varies from device to device and is largely irrelevant. The main thing to understand about points is that they provide a fixed frame of reference for drawing.

For example, the screen dimensions (width x height) for the iPhone 4s are 480 x 320 points, and for the iPad they are 1024 x 768 points.

So although an iPhone 4s with Retina display has a 960-by-640-pixel resolution (a pixel density of 326 pixels per inch [ppi]) and a non-Retina display has a 480-by-320-pixel resolution (163 ppi), as long as you design your interface to fit the screen sizes in points, your views will display correctly on the corresponding type of device. The same principles apply with non-Retina and Retina display on the iPad.

The takeaway here is, “Don’t worry about the resolution; concentrate on points and you’ll be fine.”

A view’s size and position

A view object’s location in a coordinate system is determined using either its frame or its center property:

The frame property contains the frame rectangle, which specifies the size and location of the view in its super view’s coordinate system. (If you’re still hazy about the whole super view/sub view thing, check out Chapter 4.)

The center property contains the center point of the view in its super view’s coordinate system.

In your wanderings, you may someday encounter the bounds property. It’s tied up with the bounds rectangle, which specifies the size of the view (and its content origin) in the view’s own local coordinate system. I mention it here for completeness, but you won’t be using it in this book.

Figure 10-1 shows the frame of the iPad’s Main view (not the Image view I have you add in Chapter 5) with an origin of x = 0 and y = 20. Its size is shown as width = 320 and height = 460. The reason that its origin is at y = 20 is that its frame is in its window coordinates (its super view), and it has to share the window with the status bar, which is, as you might deduce, 20 points high.

Working with data structures

In addition to knowing what goes where, you’ll need to understand how data structures impact how you work with views.

The frame is a CGRect — a struct (a C language type that aggregates conceptually related variables into a single type) with an origin (a CGPoint) and a size (a CGSize). CG here stands for Core Graphics, one of the frameworks Xcode included when you selected the Single-View Application template. (See Chapter 4 to remind yourself about frameworks.) The following code shows the CGRect struct:

struct CGRect {
    CGPoint origin;
    CGSize size;
};

An origin is a CGPoint with an x and a y value, and a size is a CGSize with a width and a height value. The following code shows the CGPoint and CGSize structs:

struct CGPoint {
    CGFloat x;
    CGFloat y;
};

struct CGSize {
    CGFloat width;
    CGFloat height;
};

Similarly, the center property is a CGPoint. And that’s all you need to know about the data structures you’ll be using.

Animating a View

Whenever you assign a new value to certain view properties (such as the frame and center properties, as explained in the previous section), the view is immediately redrawn and the change is immediately visible on the screen.

In addition, changes to several view properties (such as those just mentioned) can be animated. This means that changing the property creates an animation that conveys the change to the user over a short period of time — and it’s all handled for you by the UIView class. What’s more, it takes only one method call to specify the animations to be performed and the options for the animation.

You can animate the following properties of the UIView class (the first three are explained previously):

frame: This property contains the frame rectangle, which specifies the size and location of the view in its super view’s coordinate system.

bounds: This property contains the bounds rectangle, which specifies the size of the view (and its content origin) in the view’s own local coordinate system.

center: This property contains the center point of the view in its super view’s coordinate system.

transform: I get to this one a bit later in the chapter.

alpha: This property controls the degree of transparency. If you animate it, you can get views to fade in and fade out.

backgroundColor: This property allows you to transition from one color to another.

contentStretch: This property controls how a view’s content is stretched to fill its bounds when the view is resized and is often used to animate the resizing of buttons and controls.

Finally, More Code

In this section, you add the code to animate your ’59 pink Cadillac Eldorado Biarritz convertible and have it travel up the screen, turn around, travel back down the screen, and then turn around again so that it’s back to its original position.

Implementing the testDrive Method

In Chapter 9, you learned how to create an action for the Test Drive button using Interface Builder, which generated a method stub for you. Now it’s time to fill that stub with code.

Add the bolded code in Listing 10-1 to the testDrive: method in TestDriveController.m. I’m also having you add the stubs for code you’ll be adding later so that you can run your program before you’re completely finished with the back and forth of the animation.

Listing 10-1: Updating testDrive: to Move the Car Up the Screen

- (IBAction)testDrive:(id)sender {
    CGPoint center = CGPointMake(self.car.center.x,
        self.view.frame.origin.y + self.car.frame.size.height/2);
    [UIView animateWithDuration:3 animations:^{
        self.car.center = center;
    }
    completion:^(BOOL finished){
        [self rotate];
    }];
}

- (void)rotate {
}

- (void)returnCar {
}

- (void)continueRotation {
}

Now, run your program and click or touch the Test Drive button. You’ll see your car move up the screen. You’re on your way!

Looking more closely at Listing 10-1, you see that you start by creating the coordinate (CGPoint) of where you would like the car to end up.

A car is just another view. The following code shows how to move the car on-screen by simply moving the center of the view that holds the image of the car:

CGPoint center = CGPointMake(self.car.center.x,
    self.view.frame.origin.y + self.car.frame.size.height/2);

You use the center and frame properties primarily for manipulating the view. If you’re changing only the position of the view (and not its size), the center property is the preferred way to do so.

CGPointMake is a function that creates a point for you when you specify the x and y coordinates as parameters. (You’ll be setting the car’s new center point.)

You can leave the x coordinate as is. Doing so makes the car drive right up the center of the screen.

self.car.center.x

Here’s the y coordinate:

self.view.frame.origin.y + self.car.frame.size.height/2

self.view.frame.origin.y is the top of the view, but if you put the center there, half the car is off the screen. To keep it all on the screen, you add back half the car’s height by including self.car.frame.size.height/2.

Notice I’m adding to the y coordinate because y increases as you move down the screen from the origin.

So, how do you get the sucker to actually move? Listing 10-1 uses the following code:

[UIView animateWithDuration:3 animations:^{
    self.car.center = center;
}
completion:^(BOOL finished){
    [self rotate];
}];

animateWithDuration:animations:completion: is a UIView class method that allows you to set an animation duration and specify what you want animated as well as a completion handler that’s called when the animation is complete.

First you specify that you want the animation to take three seconds:

animateWithDuration:3

and then you pass in an animation block with what you want animated:

animations:^{
    self.car.center = center;
}

This sets the new center you just computed, taking three seconds to move it from start to finish.

If the preceding syntax seems mysterious (what’s the ^ doing there, and what’s up with the code as part of the message?), don’t worry: I explain blocks in the next section.

So while that’s all there is to get the car to move across the screen, you’re not done. You want it to rotate and then drive back across the screen and then rotate again. That’s where the completion handler comes in.

Although you can use a completion handler to simply let you know that an animation is finished, you can also use a completion handler to link multiple animations. (In fact, it’s the primary way to take care of that task.)

The completion handler that you specify

completion:^(BOOL finished){
    [self rotate];
}

causes the rotate message to be sent when the animation is complete. You do the actual rotation in the rotate method.

Of course, right now, the rotate method does nothing. I have you add it so that the app will compile and run. I have you add returnCar and continueRotation for the same reason — to eliminate the Incomplete Implementation compiler warning in TestDriveController.m.

animateWithDuration:animations:completion: is only one of a number of block-based methods that offer different levels of configuration for the animation block. Other methods include

animateWithDuration:animations:

and

animateWithDuration:delay:options:animations:completion:

animateWithDuration:animations: has no completion block, as you can see.

Both animateWithDuration:animations:completion: and animateWithDuration:animations: run only once, using an ease-in, ease-out animation curve — the default for most animations, which begins slowly, accelerates through the middle of the animation, and then slows again before completing. If you want to change the default animation parameters, you must use the animateWithDuration:delay:options:animations:completion: method, which lets you customize the following:

The delay to use before starting the animation

The type of timing curve to use during the animation

The number of times the animation should repeat

Whether the animation should reverse itself automatically when it reaches the end

Whether touch events are delivered to views while the animations are in progress

Whether the animation should interrupt any in-progress animations or wait until those are complete before starting

As you probably noticed (and I even admitted to), one of the things I tiptoed around was an explanation of the animation syntax:

[UIView animateWithDuration:3 animations:^{
    self.car.center = center;
}

Animations use blocks, a primary design pattern in iOS that is becoming increasingly important. So before I get to the rotate completion handler, I want to explain blocks.

Understanding Block Objects

Objective-C blocks are like traditional C functions in that blocks are small, self-contained units of code. They can be passed in as arguments of methods and functions and then used when they’re needed to do some work. (Like many programming topics, understanding block objects is easier when you use them, as you do in the previous section.)

With iOS 4 and newer versions, a number of methods and functions of the system frameworks are starting to take blocks as parameters, including the following:

Completion handlers

Notification handlers

Error handlers

Enumeration

View animation and transitions

Sorting

In the code listings in this chapter, you get to use a block-based method to animate the car, but block objects also have a number of other uses, especially in Grand Central Dispatch and the NSOperationQueue class, the two recommended technologies for concurrent processing. But because concurrent processing is beyond the scope of this book (way beyond the scope, in fact), I leave you to explore that use on your own.

One of the values of using blocks is that you can access local variables (as well as instance variables), which you can’t do in a function or a callback. You also don’t have to pass data around — a block can modify variables to pass data back. In addition, if you need to change something, there’s no API to change, with its concomitant ripple effect.

In the animation explained in the previous section, you passed a block as the argument to a method. You created the block inline (it’s there in the message you are sending to the UIView to do the animation) because there wasn’t that much code, and that’s often the way it’s done. But sometimes it’s easier to follow what’s happening by declaring a block variable and passing that as the argument to the method. The declaration syntax, however, is similar to the standard syntax for function pointers, except that you use a caret (^) instead of an asterisk pointer (*).

If you look at animateWithDuration:animations:completion: in the UIView class reference, you’ll see

+ (void)animateWithDuration:(NSTimeInterval)duration
     animations:(void (^)(void))animations
     completion:(void (^)(BOOL finished))completion;

I know this looks a bit advanced for a For Dummies book, but I cover it here because Apple is now treating blocks as a primary design pattern, up there with inheritance and delegation — so don’t be surprised to find blocks being used more and more.

Nevertheless, because it’s a tad advanced, I’ll go through the code slowly, and by the end — I promise — you’ll be comfortable with blocks, despite the really weird syntax.

To start, this is the syntax that defines animations as a block that has no parameters and no return value:

(void (^)(void))animations

completion is defined as a block that has no return value and takes a single Boolean argument parameter:

(void (^)(BOOL finished))completion

When you create a block inline, you just use the caret (^) operator to indicate the beginning of a block and then follow with the code enclosed within the normal braces. That’s what was going on back in Listing 10-1, with

animations:^{
    self.car.center = center;
}

and

completion:^(BOOL finished){
    [self rotate];
}

Although in this example you use blocks inline, you could also declare them like any other local variable, as you can see in Listing 10-2.

Listing 10-2: Using Declared Blocks

- (IBAction)testDrive:(id)sender {
    CGPoint center = CGPointMake(self.car.center.x,
        self.view.frame.origin.y + self.car.frame.size.height/2);
    void (^animation)() = ^() {
        self.car.center = center;
    };
    void (^completion)(BOOL) = ^(BOOL finished){
        [self rotate];
    };
    [UIView animateWithDuration:3 animations:animation
        completion:completion];
}

When you declare a block, you use the caret (^) operator to indicate the beginning of a block with the code enclosed within the normal braces, and a semicolon to indicate the end of a block expression.

The declaration in Listing 10-2 is pretty much the same as you see in the following animateWithDuration:animations:completion: method declaration, except that the identifiers have been moved around a little. I have bolded both to make that a little easier to see:

+ (void)animateWithDuration:(NSTimeInterval)duration
     animations:(void (^)(void))animations
     completion:(void (^)(BOOL finished))completion;

Here, you’re declaring two block variables by using the ^ operator: one named animation that has no return value and takes no arguments, and one named completion that has no return value and takes a BOOL as its single argument:

void (^animation)()

void (^completion)(BOOL)

This is like any other variable declaration (int i = 1, for example), in which you follow the equal sign with its definition.

You use the ^ operator again to indicate the beginning of the block literal — the definition assigned to the block variable. The block literal includes argument names (finished) as well as the body (code) of the block and is terminated with a semicolon:

void (^animation)() = ^() {
    self.car.center = center;
};

void (^completion)(BOOL) = ^(BOOL finished){
    [self rotate];
};

Although the code you added in Listing 10-1 creates the blocks inline, you should replace that testDrive: method with the code in Listing 10-2 because that’s the way I have you use blocks for the rest of this book. Inline blocks work fine for small blocks; for large blocks, however, I find that declaring the block makes the code easier to follow.

You’ll be using blocks a few more times in this book, so at some point (despite the weird syntax), you’ll become comfortable with them. (Frankly it took me a while to get used to them myself.) After you do get the hang of them, though, you’ll find all sorts of opportunities to use them to simplify your code, as you discover in Chapter 19.

Rotating the Object

In this section, I show you how to rotate a view (in this case, turn the car around). To do so, you update the rotate code stub you started out with back in Listing 10-1 with the bolded code in Listing 10-3.

Listing 10-3: Updating rotate

- (void)rotate {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI);
    void (^animation)() = ^() {
        self.car.transform = transform;
    };
    void (^completion)(BOOL) = ^(BOOL finished){
        [self returnCar];
    };
    [UIView animateWithDuration:3 animations:animation completion:completion];
}

This method uses the block declarations I explain in the previous section.

The CGAffineTransform data structure represents a matrix used for affine transformations — a blueprint for how points in one coordinate system map to points in another coordinate system. Although CGAffineTransform has a number of uses (such as scaling and translating a coordinate system), the only one covered here is the rotation method you use in Listing 10-3:

CGAffineTransformMakeRotation(M_PI)

To rotate a view, you specify the angle (in radians) to rotate the coordinate system axes. Whereas degrees run from 0 to 360, radians run from 0 to 2π. So when you create a rotation that turns an object through a half-circle, that rotation in radians is π. (M_PI is a system constant that represents π.)

Just to make your life interesting, you should note that in iOS, positive is counterclockwise, but on Mac OS X, positive is clockwise.

The end result of Listing 10-3 is that the car will rotate 180 degrees in three seconds, and when it’s done, you send the returnCar message in the completion handler.

To return the car to its original position, add the bolded code in Listing 10-4 to the returnCar method stub in TestDriveController.m.

Listing 10-4: Updating returnCar

- (void)returnCar {
    CGPoint center = CGPointMake(self.car.center.x,
        self.view.frame.origin.y + self.view.frame.size.height - 7
        - self.car.frame.size.height/2);
    void (^animation)() = ^() {
        self.car.center = center;
    };
    void (^completion)(BOOL) = ^(BOOL finished){
        [self continueRotation];
    };
    [UIView animateWithDuration:3 animations:animation
        completion:completion];
}

This approach is pretty much the same as that of the testDrive: method, except that the new center is back where the car started. You compute it by starting from the bottom of the view

self.view.frame.origin.y + self.view.frame.size.height

then subtracting 7 — because the car doesn’t start quite at the bottom of the view — and then subtracting half the car’s height (self.car.frame.size.height/2).

But you’re not done yet. You need to rotate the car back to its original position (unless you want to drive in reverse from California to New York). Add the bolded code in Listing 10-5 to the continueRotation method stub in TestDriveController.m.

Listing 10-5: Updating continueRotation

- (void)continueRotation {
    CGAffineTransform transform =
        CGAffineTransformMakeRotation(0);
    void (^animation)() = ^() {
        self.car.transform = transform;
    };
    [UIView animateWithDuration:3 animations:animation
        completion:NULL];
}

You need to understand that the transform (in this case, a view rotation) is still there; that is, the transform you created in rotate is still turning the car 180 degrees. If you want to get the car facing its original direction, you need to return the rotation to 0.

You could extend this action by having the car drive around the perimeter of the screen — but I’ll leave that up to you.

Working with Audio

Cars make noise, and a ’59 Cadillac certainly doesn’t disappoint in that respect. So in this section, I show you how to add some sound to the RoadTrip app so that everyone can hear your car coming down the road.

More specifically, I discuss the two different ways iOS has for implementing audio. One is an instance of the AVAudioPlayer class — called, appropriately enough, an audio player — which provides playback of audio data from a file or memory. You use this class unless you’re playing audio captured from a network stream or you need very low I/O latency (lag time). The AVAudioPlayer class offers quite a lot of functionality, including playing sounds of any duration, looping sounds, and playing multiple sounds simultaneously (one sound per audio player) with precise synchronization among all the players in use. It also controls relative playback level, stereo positioning, and playback rate for each sound you’re playing.

The AVAudioPlayer class lets you play sound in any audio format available in iOS. You implement a delegate to handle interruptions (such as an incoming SMS message) and to update the user interface when a sound has finished playing. The delegate methods to use are described in the AVAudioPlayerDelegate Protocol Reference (which you can access in the Organizer window, as I explain in Chapter 7).

The second way to play sound is by using System Sound Services, which provides a way to play short sounds and make the device vibrate. You can use System Sound Services to play short (30 seconds or shorter) sounds. The interface doesn’t provide level, positioning, looping, or timing control and doesn’t support simultaneous playback: You can play only one sound at a time. You can use System Sound Services to provide audible alerts; on some iOS devices, alerts can even include vibration.

To add sound to your app, you start by adding the frameworks. If you need a refresher on how to do that in more detail than I give here, see the section in Chapter 8 about network availability. Or, just follow these steps:

1. In the Project navigator, select the project icon at the top of the Project Navigator area (RoadTrip) to display the Project editor.

2. In the Targets section, select RoadTrip.

3. On the Summary tab, scroll down to the Linked Frameworks and Libraries section.

4. Expand the Linked Frameworks and Libraries section — if it isn’t already expanded — by clicking the disclosure triangle.

5. Click the + (plus sign) button underneath the list of current Project frameworks.

A list of frameworks appears.

6. Scroll down and select both AVFoundation.framework and AudioToolbox.framework from the displayed list of frameworks.

7. Click the Add button.

You’ll see the frameworks added to the Linked Frameworks and Libraries section.

8. Close the Linked Frameworks and Libraries section.

9. In the Project navigator, drag the AVFoundation.framework and AudioToolbox.framework files to the Frameworks group.

Be sure to do your dragging in the Project navigator, not from the Linked Frameworks and Libraries section!

The sound files you need for RoadTrip (the aptly named BurnRubber.aif and CarRunning.aif) are already in the Resources folder that you added to your project. (See Chapter 3 if you haven’t already done this.)

You can use Audacity — free, open source software for recording and editing sounds — to create your own sound files. It’s available for Mac OS X, Microsoft Windows, GNU/Linux, and other operating systems.

With the added frameworks in place, you now need to import the necessary audio player and system sound services headers and then add the instance variables you’ll be using. To accomplish all this, add the bolded code in Listing 10-6 to TestDriveController.m.

Listing 10-6: Updating the TestDriveController Private Interface

#import "TestDriveController.h"
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface TestDriveController () {
    AVAudioPlayer *backgroundAudioPlayer;
    SystemSoundID burnRubberSoundID;
}
@end

@implementation TestDriveController

As you can see, I’m having you take advantage of being able to put instance variables in the implementation file to keep them hidden (as I explain in Chapter 6). In fact, the file template had already added

@interface TestDriveController ()

@end

for you.

Next, you need to set up the audio player and System Sound Services. Uncomment viewDidLoad and add the bolded code in Listing 10-7 to viewDidLoad in TestDriveController.m.

Listing 10-7: Updating viewDidLoad

- (void)viewDidLoad
{
    [super viewDidLoad];
    NSURL *backgroundURL = [NSURL fileURLWithPath:
        [[NSBundle mainBundle] pathForResource:
        @"CarRunning" ofType:@"aif"]];
    backgroundAudioPlayer = [[AVAudioPlayer alloc]
        initWithContentsOfURL:backgroundURL error:nil];
    backgroundAudioPlayer.numberOfLoops = -1;
    [backgroundAudioPlayer prepareToPlay];
    NSURL *burnRubberURL = [NSURL fileURLWithPath:
        [[NSBundle mainBundle] pathForResource:
        @"BurnRubber" ofType:@"aif"]];
    AudioServicesCreateSystemSoundID(
        (__bridge CFURLRef)burnRubberURL, &burnRubberSoundID);
}

In Listing 10-7, the first thing you do is load the sound file from the resources in your bundle:

NSURL *backgroundURL = [NSURL fileURLWithPath:
    [[NSBundle mainBundle] pathForResource:
    @"CarRunning" ofType:@"aif"]];

“What bundle?” you say? Well, when you build your iOS application, Xcode packages it as a bundle — one containing the following:

The application’s executable code

Any resources that the app has to use (for instance, the application icon, other images, and localized content — in this case, the plist, .html files, and .png files)

The RoadTrip-Info.plist file, also known as the information property list, which defines key values for the application, such as bundle ID, version number, and display name

Pretty easy, huh?

Coming back to Listing 10-7, fileURLWithPath: is an NSURL class method that initializes and returns an NSURL object as a file URL with a specified path. The NSURL class includes the utilities necessary for downloading files or other resources from web and FTP servers and from the file system.

The sound file you’ll be using is a resource, and pathForResource: is an NSBundle method that creates the path needed by the fileURLWithPath: method to construct the NSURL. Just give pathForResource: the name and the file type, and it returns the path that gets packed in to the NSURL and loaded.

Be sure that you provide the correct file type; otherwise, this technique won’t work.

Next, you create an instance of the audio player

backgroundAudioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:backgroundURL error:nil];

and initialize it with the audio file location (NSURL). You’ll ignore any errors.

Set the number of loops to -1 (which will cause the audio file to continue to play until you stop it) and tell the player to get ready to play:

backgroundAudioPlayer.numberOfLoops = -1;

[backgroundAudioPlayer prepareToPlay];

prepareToPlay prepares the audio player for playback by preloading its buffers; it also acquires the audio hardware needed for playback. This preloading minimizes the lag between calling the play method and the start of sound output. Without this preloading, although the player would still play when you send the play message (later) in viewDidLoad, you’ll likely notice a lag as it sets up its buffers.

Similarly, you set up the NSURL for the BurnRubber sound:

NSURL *burnRubberURL = [NSURL fileURLWithPath:
    [[NSBundle mainBundle] pathForResource:
    @"BurnRubber" ofType:@"aif"]];

You then call a core foundation method to create a system sound object that you later use to play the sound:

AudioServicesCreateSystemSoundID(
    (__bridge CFURLRef)burnRubberURL, &burnRubberSoundID);

CFURLRef (as I explain in Chapter 6 in the section about automatic reference counting [ARC]) is a CoreFoundation object, and ARC doesn’t automatically manage the lifetimes of CoreFoundation types. And although you can use certain CoreFoundation memory management rules and functions, you don’t need to do that here. That’s because all you’re doing is casting an Objective-C object to a CoreFoundation type object, and you won’t need to use any CoreFoundation memory management in your code. You have to let the compiler know about any memory management implications, however, so you need to use the __bridge cast.

In testDrive, you’ll play both the BurnRubber and CarRunning sounds. To do so, add the bolded code in Listing 10-8 to testDrive: in TestDriveController.m.

Listing 10-8: Updating testDrive

- (IBAction)testDrive:(id)sender {
    AudioServicesPlaySystemSound(burnRubberSoundID);
    [self performSelector:@selector(playCarSound)
        withObject:self afterDelay:.2];
    CGPoint center = CGPointMake(self.car.center.x,
        self.view.frame.origin.y + self.car.frame.size.height/2);
    void (^animation)() = ^() {
        self.car.center = center;
    };
    void (^completion)(BOOL) = ^(BOOL finished){
        [self rotate];
    };
    [UIView animateWithDuration:3 animations:animation completion:completion];
}

You also need to add the code in Listing 10-9 to play the CarRunning sound.

Listing 10-9: Adding playCarSound

- (void)playCarSound {

[backgroundAudioPlayer play];

}

I’m having you play the BurnRubber sound first, followed by the CarRunning sound. If you don’t wait until the BurnRubber sound is complete before you play the CarRunning sound, the BurnRubber sound is drowned out by the CarRunning sound.

To play the BurnRubber sound, you use a function call to System Sound Services:

AudioServicesPlaySystemSound(burnRubberSoundID);

After this sound is done, you start the CarRunning sound by using a very useful method that will enable you to send the message to start the audio player after a delay. That method is performSelector:withObject:afterDelay:, and it looks like this:

[self performSelector:@selector(playCarSound)

withObject:self afterDelay:.2];

performSelector:withObject:afterDelay: sends a message that you specify to an object after a delay. The method you want invoked should have no return value, and should have zero or one argument.

In Listing 10-9, this method meets these rules:

- (void)playCarSound {

[backgroundAudioPlayer play];

}

@selector(playCarSound) is a compiler directive that returns a selector for a method name. A selector is the name used to select a method to execute for an object; it becomes a unique identifier when the source code is compiled.

Selectors really don’t do anything. What makes the selector method name different from a plain string is that the compiler makes sure that selectors are unique. Selectors are useful because at runtime they act like a dynamic function pointer that, for a given name, automatically points to the implementation of a method appropriate for whichever class they’re used with.

withObject: is the argument to pass to the method when it’s invoked. Because playCarSound takes no arguments, whatever you pass here is simply ignored; Listing 10-8 happens to pass self, but nil would work just as well.

afterDelay: is the minimum time before which the message is sent. Specifying a delay of 0 doesn’t necessarily cause the selector to be performed immediately. When you send the performSelector:withObject:afterDelay: message, you specify 0.2 seconds because that’s the duration of the BurnRubber sound.

Sometimes you may need to cancel a pending selector. The NSObject class method cancelPreviousPerformRequestsWithTarget: cancels all outstanding performSelector:withObject:afterDelay: requests scheduled with a given target.

Several other variations exist on the performSelector:withObject:afterDelay: method. Those variations are part of the NSObject class, which is the root class of most Objective-C class hierarchies. It provides the basic interface to the runtime system as well as the capability to behave as Objective-C objects.
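If, for example, the user could leave this screen during the 0.2-second delay, you might cancel the pending request before it fires. A sketch using NSObject's cancelPreviousPerformRequestsWithTarget:selector:object: class method; note that the object: argument must match what you passed as withObject: (self, in Listing 10-8):

```objc
// Cancel the pending playCarSound request -- say, in
// viewWillDisappear: if the user navigates away.
[NSObject cancelPreviousPerformRequestsWithTarget:self
                                         selector:@selector(playCarSound)
                                           object:self];
```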

Finally, to play the sound in the playCarSound method, you send the audio player the play message:

[backgroundAudioPlayer play];

The play message plays a sound asynchronously. If you haven’t already sent the prepareToPlay message, play will send that for you as well (although you should expect a lag before the sound is played).

Next, you need to stop playing the sound in the continueRotation animation’s completion block (or it gets really annoying). To stop playing the sound, add the bolded code in Listing 10-10 to continueRotation in TestDriveController.m. (The completion block replaces the previous value, which was NULL.)

Listing 10-10: Updating continueRotation to Stop the Sound

- (void)continueRotation {

CGAffineTransform transform =

CGAffineTransformMakeRotation(-0);

void (^animation)() = ^() {

self.car.transform = transform;

};

void (^completion)(BOOL) = ^(BOOL finished){

[backgroundAudioPlayer stop];

[backgroundAudioPlayer prepareToPlay];

};

[UIView animateWithDuration:3 animations:animation

completion:completion];

}

In the code in Listing 10-10, you also set up the audio player to play again.

And there you have it. Run your project and you’ll notice some very realistic sound effects when you touch the Test Drive button.

Tracking Touches

While Skippy (the inspiration for the RoadTrip app) was pretending to drive across country, he said that it would be nice to be able to drag the car and place it anywhere on the screen. And because his wish is my command, in this section, I explain how to code for dragging an object, as well as how touches work on the iPad.

The touch of a finger (or lifting it from the screen) adds a touch event to the application’s event queue, where it’s encapsulated (contained) in a UIEvent object. A UITouch object exists for each finger touching the screen, which enables you to track individual touches.

The touchesBegan:withEvent: message is sent when one or more fingers touch down in a view. This message is a method of the TestDriveController’s superclass, UIResponder, from which the view controller is derived. (I explain this in Chapter 6, in the section about UIApplicationMain.)

As the user manipulates the screen with his or her fingers, the system reports the changes for each finger in the corresponding UITouch object, thereby sending the touchesMoved:withEvent: message. The touchesEnded:withEvent: message is sent when one or more fingers lift from the associated view. The touchesCancelled:withEvent: message, on the other hand, is sent when a system event (such as a low-memory warning) cancels a touch event.

In this app, you need to be concerned only with the first two methods just described.

To begin the process of responding to a touch event, add a new instance variable (bolded in Listing 10-11) to the TestDriveController.m implementation file.

Listing 10-11: Updating the TestDriveController Implementation

@interface TestDriveController () {

AVAudioPlayer *backgroundAudioPlayer;

SystemSoundID burnRubberSoundID;

BOOL touchInCar;

}

@end

Next, add the touchesBegan: method in Listing 10-12 to TestDriveController.m to start tracking touches. (You’re actually overriding this method because you have inherited it from the UIResponder base class.)

Listing 10-12: Overriding touchesBegan:

- (void)touchesBegan:(NSSet *)touches withEvent:

(UIEvent *)event

{

UITouch *touch = [touches anyObject];

if (CGRectContainsPoint(self.car.frame,

[touch locationInView:self.view]))

touchInCar = YES;

else {

touchInCar = NO;

[super touchesBegan:touches withEvent:event];

}

}

As mentioned previously, the touchesBegan:withEvent: message is sent when one or more fingers touch down in a view. The touches themselves are passed to the method in an NSSet object — an unordered collection of distinct elements.

To access an object in NSSet, use the anyObject method — it returns one of the objects in the set. For our purposes here, you’re assuming just one object — but you might want to explore this issue further on your own so that you can understand how to handle additional possibilities.

The following code shows how to set up the anyObject method:

UITouch *touch = [touches anyObject];

Next, have the code determine whether the user’s touch event is in the Car (UIImage) view:

if (CGRectContainsPoint(self.car.frame,

[touch locationInView:self.view]))

CGRectContainsPoint is a function that returns YES when a rectangle (view coordinates) contains a point. You specify the car’s frame as the rectangle:

self.car.frame

and you specify the point by sending the locationInView: message to the touch:

locationInView:self.view

locationInView: returns the current location of the receiver in the coordinate system of the given view. In this case, you’re using the Main view, but you might want to change the view if you’re trying to determine the location within another view, for example. Maybe the user is touching an itty-bitty gas pedal. (Just to be clear, in our RoadTrip app, the car does not have an itty-bitty gas pedal.)
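If you did want the touch in another view's coordinate system (to hit-test that hypothetical gas pedal, say), you would pass that view instead. A sketch:

```objc
// Location of the touch relative to the car image view's
// own coordinate system, rather than the Main view's.
CGPoint pointInCar = [touch locationInView:self.car];
```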

If the touch is in the car, you assign YES to the touchInCar instance variable; if it’s not, you assign NO and forward the message up the responder chain. You use touchInCar later to determine whether the user is dragging the car around or just running his or her finger over the screen.

The default implementation of touchesBegan: does nothing. However, subclasses derived directly from UIResponder, particularly UIView, forward the message up the responder chain. To forward the message to the next responder, send the message to super (the superclass implementation).

If you override touchesBegan:withEvent: without calling super (a common use pattern), you must also override the other methods for handling touch events, if only as stub (empty) implementations.

Multiple touches are disabled by default. To allow your app to receive multiple touch events, you must set the multipleTouchEnabled property of the corresponding view instance to YES.
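For example, with self.view.multipleTouchEnabled set to YES (say, in viewDidLoad), a touches override could walk every finger in the set rather than picking one arbitrarily with anyObject. A sketch (this isn't something the RoadTrip app needs):

```objc
// Handle every finger in the set, not just an arbitrary one.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint point = [touch locationInView:self.view];
        // Respond to each finger's location here.
        NSLog(@"Finger at (%.0f, %.0f)", point.x, point.y);
    }
}
```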

As users merrily move the car around the screen (perhaps saying zoom zoom to themselves), your app is constantly being sent the touchesMoved:withEvent: message. Add the code in Listing 10-13 to TestDriveController.m to override that method, which will enable you to move the car to where the user’s finger is.

Listing 10-13: Overriding touchesMoved:withEvent:

- (void)touchesMoved:(NSSet *)touches withEvent:

(UIEvent *)event {

if (touchInCar) {

UITouch* touch = [touches anyObject];

self.car.center = [touch locationInView:self.view];

}

else

[super touchesMoved:touches withEvent:event];

}

If the first touch was in the Car view (touchInCar is YES), you assign car’s center property to the touch coordinate. As I explain in the “Animating a View” section, earlier in this chapter, when you assign a new value to the center property, the view’s location is immediately changed. Otherwise, you ignore the touch and forward the message up the responder chain.

It’s interesting to observe that when you position the car next to a button, it will travel under that button when you touch the Test Drive button. This feature illustrates the subview structure that I explain in Chapter 4 in the section about the view hierarchy. Because I had you add the buttons last (they’re subviews of the Main view), they’re displayed on top of the subviews (car) that you added earlier.

Animating a Series of Images “In Place”

Although I explain animation using the UIView methods earlier in this chapter, this section shows you a way to animate a series of images “in place” — you are not moving the image around as you did earlier with the car; instead you are changing the image where it is to make it appear as if it were animated.

To make the Test Drive button blink, for example, add the bolded code in Listing 10-14 to TestDriveController.m. As you can see in the listing, only a single line of code is needed to animate the button.

Listing 10-14: Creating a Blinking Button

- (void)viewDidLoad

{

[super viewDidLoad];

NSURL* backgroundURL = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"CarRunning" ofType:@"aif"]];

backgroundAudioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:backgroundURL error:nil];

backgroundAudioPlayer.numberOfLoops = -1;

[backgroundAudioPlayer prepareToPlay];

NSURL* burnRubberURL = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"BurnRubber" ofType:@"aif"]];

AudioServicesCreateSystemSoundID((__bridge CFURLRef)burnRubberURL, &burnRubberSoundID);

[self.testDriveButton setBackgroundImage:[UIImage animatedImageNamed:@"Button" duration:1.0] forState:UIControlStateNormal];

}

In Chapter 5, I show you how to add a custom button with a Button background image. You could also have added the background image programmatically by sending the button the setBackgroundImage:forState: message. (Chapter 5 explains the control state as well.) Normally, you might think of the background image as a single image. However, animatedImageNamed:duration: and some similar methods instead use a series of files, played back over a duration you specify. This type of method enables you to animate (this time, in place) not only a button but also any image by simply supplying a series of images:

[self.testDriveButton setBackgroundImage:

[UIImage animatedImageNamed:@"Button" duration:1.0]

forState:UIControlStateNormal];

In the animatedImageNamed:duration: method, you supply the base name of an image to animate. The method appends a 0 to the base name and loads that image (in this case, Button0), and then keeps incrementing the number (Button1, and so on, up to 1,024 images) until it runs out of files. The loaded images then play in sequence over the total time you specify in duration, starting over when the sequence finishes.

In the Project navigator, open the disclosure triangle for the RoadTrip Resources group that you created in Chapter 3. If you look in the RoadTrip Resources group, you see two images, Button0 and Button1 — with Button being the base name you specified. This is an “in place” animation, so all images included in the animated image should share the same size and scale.

If you select each image in the Project navigator, you can see that they’re slightly different colors. The two images cycle over the 1.0 second you specify in duration, so each displays for half a second. This makes the button blink and certainly adds some life to the Main view.
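The same technique isn't limited to buttons. Assuming you had a numbered series of car images (CarFrame0, CarFrame1, and so on; these are hypothetical files, not part of the RoadTrip project), you could animate the car image view in place as well. A sketch:

```objc
// Hypothetical: animate the car's UIImageView in place with a
// series of images named CarFrame0, CarFrame1, and so on.
// (These files aren't part of the RoadTrip project.)
self.car.image = [UIImage animatedImageNamed:@"CarFrame"
                                    duration:0.5];
```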

iPhone versus iPad

The iOS 6 animation and sound libraries and frameworks are the same for the iPhone and iPad, so the code shown in this chapter works properly on both the iPhone and iPad apps. Very nice.