Coming Events - Learn iOS 8 App Development, Second Edition (2014)

Learn iOS 8 App Development, Second Edition (2014)

Chapter 4. Coming Events

Now that you’ve seen an iOS app in action, you might be wondering what keeps your app “alive,” so to speak. In the Shorty app, you created action functions that were called when the user tapped a button or pressed the Go key on the keyboard. You created delegate objects that received messages when certain milestones were reached, such as when a web page had problems loading or the URL shortening service responded. You never wrote any code to see whether the user had touched something or checked to see whether the web page had finished loading. In other words, you didn’t go out and get this information; your app waited for this information to come to it.

iOS apps are event-driven applications. An event-driven application doesn’t (and shouldn’t!) spin in a loop checking to see whether something has happened. Event-driven applications set up the conditions they want to respond to (such as a user’s touch, a change in the device’s orientation, or the completion of a network transaction). The app then sits quietly, doing nothing, until one of those things happens. All of those things are collectively referred to as events and are what this chapter is all about.

In this chapter, you’ll learn about the following:

· Events

· Run loops

· Event delivery

· Event handling

· The first responder and the responder chain

· Running your app on a real iOS device

I’ll start with some basic theory about how events get from the device’s hardware into your application. You’ll learn about the different kinds of events and how they navigate the objects in your app. Finally, you’ll create two apps: one that handles high-level events and one that handles low-level events.

Run Loop

iOS apps sit perfectly still, waiting for something to happen. This is an important feature of app design because it keeps your app efficient; the code in your app runs only when there’s something important to do.

This seemingly innocuous arrangement is critical to keeping your users happy. Running computer code requires electricity, and electricity in mobile devices is a precious commodity. Keeping your code from running at unnecessary times allows iOS to conserve power. It does this by turning off or minimizing the amount of power the CPU and other hardware accessories use when they are not needed. This power management happens hundreds of times a second, but it’s crucial to the battery life of mobile devices, and users love mobile devices with long battery life.

The code in your app is at the receiving end of two mechanisms: a run loop and an event queue. The run loop is what executes code in your app when something happens and stops your app from running when there’s nothing to do. The event queue is a data structure containing the list of events waiting to be processed. As long as there are events in the queue, the run loop sends them—one at a time—to your app. As soon as all of the events have been processed and the event queue is empty, your app stops executing code.

Conceptually, your app’s run loop looks like this:

while true {
    let event: UIEvent = iOS.waitForNextEvent()
    yourApp.processEvent(event)
}

The magic is in the waitForNextEvent() function (which doesn’t exist; I made it up). If there’s an event waiting to be processed, that event is removed from the queue and returned. The run loop passes it to your app for processing. If there’s no event, the function simply doesn’t return; your app is suspended until there’s something to do. Now let’s look at what those events are and where they come from.

Event Queue

Events waiting to be processed are added to a first in, first out (FIFO) buffer called the event queue. There are different kinds of events, and events come from different sources, as shown in Figure 4-1.

Figure 4-1. The event queue

Let’s follow one event through your app. When you touch your finger to the surface of an iOS device, here’s what happens:

1. Hardware in the screen detects the location of the touch.

2. This information is used to create a touch event object, which records the position of the touch, what time it occurred, and other information.

3. The touch event object is placed in the event queue of your app.

4. The run loop pulls the touch event object from the queue and passes it to your application object.

5. Your application object uses the geometry of the active views in your app to determine which view your finger “touched.”

6. An event message containing the touch event is sent to that view object.

7. The view object decides what the touch event means and what it will do. It might highlight a button or send an action message.

When you touched the “shorten URL” button in the Shorty app from Chapter 3, that’s how the physical act of touching the screen turned into the shortenURL(_:) call your view controller received.

Different event types take different paths. The next few sections will describe the different delivery methods, along with the types of events that each delivers.

Event Delivery

Event delivery is how an event gets from the event queue to an object in your app. Different types of events take different paths, befitting their purpose. The actual delivery mechanism is a combination of logic in the Cocoa Touch framework, your application object, and various functions defined by your app objects.

Broadly speaking, there are three delivery methods.

· Direct delivery

· Hit testing

· First responder

The next few sections will describe each of these three methods and the events that get delivered that way.

Direct Delivery

Direct delivery is the simplest form of event delivery. A number of event types target specific objects. These events know which objects will receive them, so there’s not much to know about how these events are delivered, beyond that they’re dispatched by the run loop.

For example, a Swift function call can be placed in the event queue. When that event is pulled from the queue, the call is performed on its target object. That’s how the web view told your Shorty app when the web page had loaded. When the network communications code (running in its own thread) determined the page had finished loading, it pushed a webViewDidFinishLoad() call onto the main thread’s event queue. As your main thread pulled events from its event queue, one of those events made that call on your web view delegate object, telling it that the page had loaded.

Note That isn’t exactly how asynchronous delegate messages are delivered. But from an app developer’s perspective—which is you—it’s conceptually accurate; the details aren’t important.

Other events that are sent to specific objects, or groups of objects, are notifications, timer events, and user interface updates. All of these events know, either directly or indirectly, which objects they will be sent to. As an app developer, all you need to know is that when those events work their way to the end of the event queue, the run loop will call a Swift function on one or more objects.
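
To make this concrete, here’s a minimal sketch of a timer whose events are delivered directly to a specific object. Only NSTimer and the run loop behavior come from iOS; the CountdownController class and the tick(_:) function are names I made up for illustration.

import UIKit

class CountdownController: UIViewController {
    var timer: NSTimer?

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        // Ask iOS to put a timer event in this thread's event queue every second.
        // The run loop delivers each one directly to this object by calling tick(_:).
        timer = NSTimer.scheduledTimerWithTimeInterval(1.0,
            target: self, selector: "tick:", userInfo: nil, repeats: true)
    }

    func tick(timer: NSTimer) {
        println("another second has passed")    // runs on the main thread
    }

    override func viewWillDisappear(animated: Bool) {
        super.viewWillDisappear(animated)
        timer?.invalidate()     // stop adding timer events to the queue
        timer = nil
    }
}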

Hit Testing

Hit testing delivers events based on the geometry of your user interface, and it applies only to touch events. When a touch event occurs, the UIWindow and UIView objects work together to determine which view object corresponds to the location of the touch. Messages are then sent to that view object, which interprets those events however it chooses; it may flip a switch, scroll a shopping list, or blow up a spaceship. Let’s take a quick look at how hit testing works.

When a touch event is pulled from the event queue, it contains the absolute hardware coordinates where the touch occurred, as shown on the left in Figure 4-2. This example will use a stylized representation of the Shorty app from the previous chapter.

Figure 4-2. Hit testing a touch event

Your UIApplication object uses the event coordinates to determine the UIWindow object that’s responsible for that portion of the screen. That UIWindow object receives a sendEvent(_:) call containing the touch event object to process.

The UIWindow object then performs hit testing. Starting at the top of its view hierarchy, it calls the hitTest(_:,withEvent:) function on its top-level view object, as shown in the second panel of Figure 4-2.

The top-level view first determines whether the event is within its bounds. It is, so it starts to look for any subviews that contain the touch coordinate. The top-level view contains three subviews: the navigation toolbar at the top, the web view in the middle, and the toolbar at the bottom. The touch is within the bounds of the toolbar, so it passes on the event to the hitTest(_:,withEvent:) function of the toolbar.

The toolbar repeats the process, looking for a subview that contains the location, as shown in the third frame of Figure 4-2. The toolbar object discovers that the touch occurred inside the bounds of the leftmost bar button item. The bar button item is returned as the “hit” object, which causes UIWindow to begin sending it low-level touch event messages.

Being a “button,” the bar button item object examines the events to determine whether the user tapped the button (as opposed to swiping it or some other irrelevant gesture). If they did, the button sends its action, in this case shortenURL(_:), to the object it’s connected to.

Tip Hit testing is highly customizable, should you ever need to modify it. By overriding the pointInside(_:,withEvent:) and hitTest(_:,withEvent:) functions of your view objects, you can literally rewrite the rules that determine how touch events find the view object they will be sent to. See the Event Handling Guide for iOS, which can be found in Xcode’s Documentation and API Reference, for the details.
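
For instance, here’s a sketch of a view that accepts touches slightly outside its bounds, which can be handy for small controls. The ExpandedHitView name and the 20-point margin are arbitrary, and the exact optionality of the event parameter varies a little between SDK releases.

import UIKit

class ExpandedHitView: UIView {
    override func pointInside(point: CGPoint, withEvent event: UIEvent?) -> Bool {
        // Grow the touchable rectangle by 20 points on every side
        let touchableRect = CGRectInset(bounds, -20.0, -20.0)
        return CGRectContainsPoint(touchableRect, point)
    }
}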

The First Responder

The first responder is a view, view controller, or window object in your active interface (a visible window). Think of it as the designated receiver for events that aren’t determined by hit testing. I’ll talk about how an object becomes the first responder later in this chapter. For now, just know that every active interface has a first responder.

The following are the events that get delivered to the first responder:

· Shake motion events

· Remote control events

· Key events

The shake motion event tells your app that the user is shaking their device (moving it rapidly back and forth). This information comes from the accelerometer hardware.

So-called remote control events are generated when the user presses any of the multimedia controls, which include the following:

· Play

· Pause

· Stop

· Skip to Next Track

· Skip to Previous Track

· Fast Forward

· Fast Backward

These are called remote events because they could originate from external accessories, such as the play/pause button on the cord of many headphones. In reality, they most often come from the play/pause buttons you see on the screen.
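
A first responder that wanted the play/pause event might look something like the following sketch. The PlayerViewController and togglePlayback() names are made up; beginReceivingRemoteControlEvents() and remoteControlReceivedWithEvent(_:) are the real hooks, though the optionality of the event parameter depends on the SDK version.

import UIKit

class PlayerViewController: UIViewController {
    override func canBecomeFirstResponder() -> Bool {
        return true     // volunteer to be the first responder
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        // Ask iOS to deliver remote control events to this app, then claim them
        UIApplication.sharedApplication().beginReceivingRemoteControlEvents()
        becomeFirstResponder()
    }

    override func remoteControlReceivedWithEvent(event: UIEvent) {
        if event.type == .RemoteControl && event.subtype == .RemoteControlTogglePlayPause {
            togglePlayback()    // hypothetical function that starts or pauses playback
        }
    }

    func togglePlayback() {
        // playback code would go here
    }
}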

Key events come from tapping the virtual keyboard or from a hardware keyboard connected via Bluetooth.

To review, direct delivery sends event objects or Swift calls directly to their target objects. Touch events use hit testing to determine which view object will receive them, and all other events are sent to the first responder. Now it’s time to do something with those events.

Event Handling

You’ve arrived at the second half of your event processing journey: event handling. In simple terms, an object handles or responds to an event if it contains code to interpret that event and decide what it wants to do about it.

I’ll get the direct delivery events out of the way first. An object receiving a direct delivery event must have a function to process that event, call, or notification. This is not optional. If the object doesn’t implement the expected function, your application will malfunction and could crash. That’s all you need to know about directly delivered events.

Caution When requesting timer and notification events, make sure the object receiving them has the correct functions implemented.

Other event types are much more forgiving. Much like optional delegate functions, if your object overrides the function for handling an event, it will receive those events. If it isn’t interested in handling that type of event, you simply omit those functions from its implementation, and iOS will go looking for another object that wants to handle them.

To handle touch events, for example, you override the following functions in your class:

override func touchesBegan(touches: NSSet, withEvent event: UIEvent)
override func touchesMoved(touches: NSSet, withEvent event: UIEvent)
override func touchesEnded(touches: NSSet, withEvent event: UIEvent)
override func touchesCancelled(touches: NSSet!, withEvent event: UIEvent)

If hit testing determines that your object should receive touch events, it will receive a touchesBegan(_:,withEvent:) call when the hardware detects a physical touch in your view, a touchesMoved(_:,withEvent:) call whenever the position changes (a dragging gesture), and a touchesEnded(_:,withEvent:) call when contact with the screen is removed. As the user taps and drags their fingers across the screen, your object may receive many of these calls, often in rapid succession.

Note The touchesCancelled(_:,withEvent:) function is the oddball of the group. This function is called if something interrupts the sequence of touch events, such as your app changing to another screen in the middle of a drag gesture. You need to handle the cancel call only if an incomplete sequence of touch events (such as receiving a “began” but no “ended” message) would confuse your object.

If you omit all of these functions from your class, your object will not handle any touch events.

All of the methods for handling events are inherited from the UIResponder class, and each type of event has one or more functions that you must implement if you want to handle that event. The UIResponder class documentation has a complete list of event handling functions.

So, what happens if the hit test or first responder object ignores the event? That’s a good question, and the answer is found in the responder chain.

The Responder Chain

The responder chain is a string of objects that represent the focus of your user interface. What I mean by focus is those objects controlling the currently visible interface and those view objects that are most relevant to what the user is doing. Does that sound all vague and confusing? A picture and an explanation of how iOS creates and uses the responder chain will make things clear.

The responder chain starts with the initial responder (see Figure 4-3). When delivering motion, key, and remote events, the first responder is the initial responder object. For touch events, the initial responder is the view object determined by hit testing.

Figure 4-3. First responder chain

Note All objects in the responder chain are subclasses of UIResponder. So, technically, the responder chain consists of UIResponder objects. UIApplication, UIWindow, UIView, and UIViewController are all subclasses of UIResponder. By extension, the initial responder (first responder or hit test result) is always a UIResponder object.

iOS begins by trying to deliver that event to the initial responder. Trying is the key word here. If the object provides functions to handle that event, it does so. If not, iOS moves on to the next object in the chain until either it finds an object that wants to process the event or it gives up and throws the event away.

Figure 4-3 shows the conceptual organization of view objects in an app with two screens. The second screen is currently being shown to the user. It consists of a view controller object, a number of subviews, some nested inside other subviews, and even a subview controller. In this example, a sub-subview has been designated the initial responder, which would be appropriate after a hit test determined that the user touched that view.

iOS will try to deliver the touch event to the initial responder (the sub-subview). If that object doesn’t handle touch events, iOS sees whether that view has a view controller object (it doesn’t) and tries to send the event to its controller. If neither the view nor its controller handles touch events, iOS finds the view that contains that view (its superview) and repeats the entire process until it runs out of views and view controllers to try.

After all the view and view controller objects have been given a chance to handle the event, delivery moves to the window object for that screen and finally to the single application object.
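
You can actually watch this search order at runtime. The following sketch walks the chain from any responder and prints each object it visits; dumpResponderChain(_:) is a name I invented for illustration.

import UIKit

// Print every object in the responder chain, starting from the given responder
func dumpResponderChain(startingAt responder: UIResponder) {
    var next: UIResponder? = responder
    while let current = next {
        println(current)                // a view, then its view controller, the window, the application...
        next = current.nextResponder()
    }
}

// Usage, perhaps from a view controller's viewDidAppear():
//     dumpResponderChain(startingAt: self)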

What makes the responder chain so elegant is its dynamic nature and ordered processing of events. The responder chain is created automatically, so your object doesn’t have to do anything to be a part of the responder chain, except to make sure that either it or a subsidiary view is the initial responder. Your object will receive event messages while that portion of your interface is active and won’t receive events when it’s not.

The other aspect is the specific-to-general nature of responder chain event handling. The chain always starts at the view that’s most relevant to the user: the button they touched, an active text input field, or a row in a list. That object always receives the events first. If the event has specific meaning to those views, it’s processed accordingly. At the same time, your view controller or UIApplication object could also respond to those events, but if one of the subviews handles it first, those objects won’t receive it.

If the user moves to another screen, as shown in Figure 4-4, and presses the pause button on their headphones, a new responder chain is established. This chain starts at the first responder, which in this case is a view controller. The chain doesn’t include any view objects at all because the top-level view controller object is the first responder.

Figure 4-4. Second responder chain

If the view controller handles the “pause” event, then it goes ahead and does so. The view controllers in other interfaces never see the event. By implementing different “pause” event handling code in the various controllers, your app’s response to a “pause” event will be different, depending on which screen is active.

Your application object could also handle the “pause” event. If none of the view controllers handled the “pause” event, then all “pause” events would trickle down to the application object. This would be the arrangement you’d use if you wanted all “pause” events to be handled the same way, regardless of what screen the user was looking at.

Finally, you can mix these solutions. A “pause” event handler in the application could handle the event in a generic way, and then specific view controllers could intercept the event if pressing the pause button has special meaning in that screen.

Tip It’s rare to create a custom subclass of UIApplication and even rarer to subclass UIWindow. In a typical app, all of your event handling code will be in your custom view and view controller objects.

CONDITIONALLY HANDLING EVENTS

In practical terms, you override an event handling function (such as touchesBegan(_:,withEvent:)) to handle that event type, or you omit the function to ignore it. In reality, it’s a little more nuanced.

Events are handled by receiving specific Swift function calls (such as touchesBegan(_:,withEvent:)). Your object inherits these functions from the UIResponder base class. So, every UIResponder object has a touchesBegan(_:,withEvent:) function and will receive the touch event object via a function call. So, how does the object ignore the event?

The secret is in UIResponder’s implementation of these functions. The inherited base class implementation for all event calls simply passes the event up the responder chain. So, a more precise description is this: To handle events, you override UIResponder’s event handler function and process the event. To ignore it, you let the event go to UIResponder’s function, which ignores the event and passes it to the next object in the responder chain.

That brings up an interesting feature: conditionally handling events. It’s possible to write an event handler function that decides whether it wants to handle an event. It can arbitrarily choose to process the event itself or pass it along to the next object in the responder chain. Passing it on is accomplished by forwarding the event to the base class’s implementation, like this:

override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    if iWantToHandleTheseTouches(touches) {
        // handle event
        doSomethingWithTheseTouches(touches)
    } else {
        // ignore event and pass it up the responder chain
        super.touchesBegan(touches, withEvent: event)
    }
}

Using this technique, your object can dynamically decide which events it wants to handle and which events it will pass along to other objects in the responder chain.

Now that you know how events are delivered and handled, you’re ready to build an app that uses events directly. To do that, you’ll need to consider what kind of events you want to handle and why.

High-Level vs. Low-Level Events

Programmers are perpetually labeling things as high level or low level. Objects in your app form a kind of pyramid. A few complex objects at the top are constructed from more primitive objects in the middle, which are themselves constructed from even more primitive objects. The complex objects on top are called the high-level objects (UIApplication, UIWebView). The simple objects at the bottom are called the low-level objects (NSNumber, String). Similarly, programmers will talk about high- and low-level frameworks, interfaces, communications paths, and so on.

Events, too, come in different levels. Low-level events are the nitty-gritty, moment-by-moment details of what’s happening right now. The touch events are examples of low-level events. Another example is the instantaneous force vector values that you can request from the accelerometer and gyroscope hardware.

At the other end of the scale are high-level events, such as the shake motion event. Another example is the UIGestureRecognizer objects that interpret complex touch event patterns and turn those into a single high-level action, such as “pinched” or “swiped.”

When you design your app, you must decide what level of events you want to process. In the next app, you’re going to use the shake motion event to trigger actions in your app.

To do that, you could request and handle the low-level accelerometer events. You would have to create variables to track the force vectors for each of the three movement axes (x, y, and z). When you detected that the device was accelerating in a particular direction, you would record that direction and start a timer. If the direction of travel reversed, within a reasonable angle of trajectory and within a short period of time, and then reversed two or three more times, you could conclude that the user was shaking the device.

Or, you could let iOS do all of those calculations for you and simply handle the shake motion events generated by the Cocoa Touch framework. When the user starts to shake their device, your first responder receives a motionBegan(_:,withEvent:) call. When the user stops shaking it, your object receives a motionEnded(_:,withEvent:) call. It’s that simple.

That doesn’t mean you’ll never need low-level events. If you were writing a game app where your user directed a star-nosed mole through the soil of a magical garden by tilting the device from side to side, then interpreting the low-level accelerometer events would be the correct solution. You’ll use the low-level accelerometer events in Chapter 16.

Decide what information you need from events and then handle the highest-level events that give you that information. Now you’re ready to start designing your app.

Eight Ball

The app you’ll create mimics the famous Magic Eight Ball toy from the 1950s (http://en.wikipedia.org/wiki/Magic_Eight_Ball). The app works by displaying an eerily prescient message whenever you shake your iOS device. Start by sketching out a quick design for your app.

Design

The design for this app is the simplest so far: a screen containing a message is displayed in the center of a “ball,” as shown in Figure 4-5. When you shake the device, the current message disappears. When you stop shaking it, a new message appears.

Figure 4-5. EightBall app design

Create the Project

Launch Xcode and choose File ➤ New ➤ Project. Select the Single View iOS app template. In the next sheet, name the app EightBall, set the language to Swift, and choose iPhone for the device, as shown in Figure 4-6.

Figure 4-6. Creating the EightBall project

Choose a location to save the new project and create it. In the project navigator, select the project, select the EightBall target from the pop-up menu (if needed), select the General tab, and then turn off the two landscape orientations in the Supported Interface Orientation section so only the portrait orientation is enabled.

Create the Interface

Select the Main.storyboard Interface Builder file and select the single view object. Using the attributes inspector, set the background color to Black, as shown in Figure 4-7.

Figure 4-7. Setting the main view background color

From the library, drag a new image view object into the interface. With the new image object selected, click the pin constraints control (the second button in the lower-right corner of the canvas). Check the Width and Height constraints, as shown on the left in Figure 4-8, and set both of their values to 320. Click the Add 2 Constraints button.

Figure 4-8. Setting the image view constraints

Click the align constraints control (leftmost button). Check the Horizontal Center in Container and Vertical Center in Container constraints, as shown in the middle of Figure 4-8. Make sure both of their values are set to 0. Click the Add 2 Constraints button. Finally, click the Resolve Auto Layout Issues control (third button) and choose the Update Frames command, as shown on the right in Figure 4-8. The image view object will now have a fixed size (320x320 points) and will always be centered in the view controller’s root view.

Tip When adding new constraints via the pin constraints or align constraints control, choose an option from the Update Frames pop-up menu before clicking the add button. Xcode will apply the constraints and then update the frames in a single step.
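
If you’re curious what those four constraints look like as code rather than Interface Builder settings, a rough equivalent follows. It assumes an imageView outlet connected to the image view; depending on the SDK, the first line may instead be the property assignment translatesAutoresizingMaskIntoConstraints = false.

// Assumption: imageView is an outlet connected to the image view in the storyboard
imageView.setTranslatesAutoresizingMaskIntoConstraints(false)

view.addConstraints([
    NSLayoutConstraint(item: imageView, attribute: .Width, relatedBy: .Equal,
        toItem: nil, attribute: .NotAnAttribute, multiplier: 1.0, constant: 320.0),
    NSLayoutConstraint(item: imageView, attribute: .Height, relatedBy: .Equal,
        toItem: nil, attribute: .NotAnAttribute, multiplier: 1.0, constant: 320.0),
    NSLayoutConstraint(item: imageView, attribute: .CenterX, relatedBy: .Equal,
        toItem: view, attribute: .CenterX, multiplier: 1.0, constant: 0.0),
    NSLayoutConstraint(item: imageView, attribute: .CenterY, relatedBy: .Equal,
        toItem: view, attribute: .CenterY, multiplier: 1.0, constant: 0.0)
    ])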

Just as you did in Chapter 2, you’re going to add some resource image files to your project. In the project navigator, select the Images.xcassets assets catalog. In the Finder, locate the Learn iOS Development Projects folder you downloaded in Chapter 1. Inside the Ch 4 folder you’ll find the EightBall (Resources) folder, which contains five image files. Select the files eight-ball.png and eight-ball@2x.png. With these files and your workspace window visible, drag the two image files into the assets catalog, as shown in Figure 4-9.

Figure 4-9. Adding eight-ball images to the assets catalog

Returning to your project, select Main.storyboard and select the image view object. Using the attributes inspector, set the image property to eight-ball, as shown in Figure 4-10.

Figure 4-10. Setting the image

Now you need to add a text view to display the magic message. From the object library, drag in a new text view (not a text field) object, placing it over the “window” in the middle of the eight ball. Almost exactly as you did for the eight-ball image view (see Figure 4-8), add the following constraints:

1. Pin Width to 160 and Height to 112.

2. Align Horizontal Center in Container and Vertical Center in Container.

The text view now has a fixed size and is centered right on top of the eight-ball image view. With the text view still selected, use the attributes inspector to set the following properties:

1. Set the text to SHAKE FOR ANSWER, on three lines (see Figure 4-11). Hold down the Option key when pressing the Return key to insert a literal “return” character into the text property field.

Figure 4-11. Finished EightBall interface

2. Make the text color white.

3. Click the up arrow in the Font property until it reads System 24.0.

4. Choose the centered (middle) alignment.

5. Uncheck the Editable behavior property.

6. Further down, find the Background property and set it to default (no background).

7. Uncheck the Opaque property.

Your interface design is finished and should look like the one in Figure 4-11. Now it’s time to move on to the code.

Writing the Code

Your ViewController object will need a connection to the text view object. Select your ViewController.swift file and add the following property:

@IBOutlet var answerView: UITextView!

You’ll also need a set of answers, so add this immediately after the new property:

let answers = [ "\rYES", "\rNO", "\rMAYBE",
"I\rDON'T\rKNOW", "TRY\rAGAIN\rSOON", "READ\rTHE\rMANUAL" ]

This statement defines an immutable array of String objects. Each object is one possible answer to appear in the eight ball. The \r characters are escape sequences. Each consists of a backslash (left-leaning slash) character followed by a code that tells the compiler to replace the sequence with a special character. In this case, the \r is replaced with a literal “carriage return” character—something you can’t type into your source without starting a new line.

Now you’re going to add two functions to update the message display: fadeFortune() and newFortune(). Add this code after the array:

func fadeFortune() {
    UIView.animateWithDuration(0.75) {
        self.answerView.alpha = 0.0
    }
}

func newFortune() {
    let randomIndex = Int(arc4random_uniform(UInt32(answers.count)))
    answerView.text = answers[randomIndex]
    UIView.animateWithDuration(2.0) {
        self.answerView.alpha = 1.0
    }
}

The fadeFortune() function uses iOS animation to change the alpha property of the answerView text view object to 0.0. The alpha property of a view is how opaque the view appears. A value of 1.0 is completely opaque, 0.5 makes it 50 percent transparent, and a value of 0.0 makes it completely invisible. fadeFortune() makes the text view object fade away to nothing, over a period of ¾ of a second.

Note Animation is covered in more detail in Chapter 11.

The newFortune() function is where all the fun is. The first statement does these three things:

1. The arc4random_uniform(_:) function is called to pick a random number between 0 and one less than the number of answers. So if answers.count is 6, the function will return a random number between 0 and 5 (inclusive).

2. The random number is used as an index into the answers array to pick one of the constant string objects.

3. The random answer is used to set the text property of the text view object. Once set, the text view object will display that text in your interface.

Finally, iOS animation is used again to change the alpha property slowly back to 1.0, going from invisible to opaque over a period of 2 seconds, causing the new message to gradually appear.
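
As an aside, animateWithDuration(_:animations:completion:) also takes a completion closure, so the fade-out and fade-in could be chained into a single function. This is just a hypothetical alternative, not something EightBall needs:

func fadeToNewFortune() {
    UIView.animateWithDuration(0.75, animations: {
        self.answerView.alpha = 0.0                 // fade the old answer out
    }, completion: { finished in
        let randomIndex = Int(arc4random_uniform(UInt32(self.answers.count)))
        self.answerView.text = self.answers[randomIndex]
        UIView.animateWithDuration(2.0) {
            self.answerView.alpha = 1.0             // fade the new answer in
        }
    })
}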

There’s one minor detail remaining: connecting the answerView outlet to the text view object in the interface. Switch to the Main.storyboard Interface Builder file. Select the view controller object and then use the connections inspector to connect the answerView outlet, as shown in Figure 4-12.

Figure 4-12. Connecting the answerView outlet

Handling Shake Events

Your app now has everything it needs to work, except the event handling that will make it happen. In the Xcode documentation (Help ➤ Documentation and API Reference), take a look at the documentation for UIResponder. In it, you’ll find documentation for three functions.

func motionBegan(motion: UIEventSubtype, withEvent event: UIEvent)
func motionEnded(motion: UIEventSubtype, withEvent event: UIEvent)
func motionCancelled(motion: UIEventSubtype, withEvent event: UIEvent)

Each function is called during a different phase of a motion event. Motion events are simple—remember, these are “high-level” events. Motion events begin, and they end. If the motion is interrupted or never finishes, your object receives a motion canceled message.

To handle motion events in your view controller, add these three event handler functions to your ViewController:

override func motionBegan(motion: UIEventSubtype, withEvent event: UIEvent) {
    if motion == .MotionShake {
        fadeFortune()
    }
}

override func motionEnded(motion: UIEventSubtype, withEvent event: UIEvent) {
    if motion == .MotionShake {
        newFortune()
    }
}

override func motionCancelled(motion: UIEventSubtype, withEvent event: UIEvent) {
    if motion == .MotionShake {
        newFortune()
    }
}

Each function begins by examining the motion parameter to see whether the motion event received describes the one you’re interested in (the shake motion). If not, you ignore the event. This is important. Future versions of iOS may add new motion events; your object should pay attention only to the ones it’s designed to work with.

The motionBegan(_:,withEvent:) function calls fadeFortune(). When the user starts to shake the device, the current message fades away.

The motionEnded(_:,withEvent:) function calls newFortune(). When the shaking stops, a new fortune appears.

Finally, the motionCancelled(_:,withEvent:) function makes sure a message is visible if the motion was interrupted or interpreted to be some other gesture.

Testing Your EightBall App

Make sure you have an iPhone simulator selected in the scheme and run your app. It will appear in the simulator, as shown in Figure 4-13.

Figure 4-13. Testing EightBall

Choose the Hardware ➤ Shake Gesture command in the simulator, as shown in the middle of Figure 4-13. This command simulates the user shaking their device, which will cause shake motion events to be sent to your app.

Congratulations, you’ve successfully created a shake-motion event handler! Each time you shake your simulated device, a new message appears, as shown on the right in Figure 4-13.

FIRST RESPONDER AND THE RESPONDER CHAIN

Technically, it isn’t necessary that your view controller be the first responder. What’s required is that your view controller be in the responder chain. If any view, or subview, in your interface is the first responder, your view controller will be in the responder chain and will receive motion events—unless one of those other views intercepts and handles the event first.

By default, your view controller isn’t the first responder and can’t become the first responder. An object that wants to be a first responder must return true from its canBecomeFirstResponder() function. The base class implementation of UIResponder returns false. Therefore, any subclass of UIResponder is ineligible to be the first responder unless it overrides its canBecomeFirstResponder() function.

After making your object eligible to be the first responder, the next step is to explicitly request to be the first responder. This is often done in your viewDidAppear() function, using code like this:

becomeFirstResponder()

Specific Cocoa Touch classes—most notably the text view and text field classes—are designed to be first responders, and they return true when canBecomeFirstResponder() is called. These objects establish themselves as the first responder when touched or activated. As the first responder, they handle keyboard events, copy and paste requests, and so on.

At this point you might be wondering why your view controller is getting motion events, if it’s not the first responder and it’s not in the responder chain. You can thank iOS 7 for that. Recent changes in iOS deliver motion events to the active view controller if there is no first responder or the window is the first responder. If you want your app to work with earlier versions of iOS too, you’d need to make sure your view controller can become the first responder (by overriding canBecomeFirstResponder()) and then request that it is (becomeFirstResponder()) when the view loads.
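
If you did want to support those earlier iOS versions, the additions to ViewController might look like this sketch:

override func canBecomeFirstResponder() -> Bool {
    return true                 // make this view controller eligible to be the first responder
}

override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)
    becomeFirstResponder()      // ask to become the first responder once the view is visible
}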

Here’s an experiment that demonstrates the responder chain in action. In the Main.storyboard file, select the text view object and use the attributes inspector to check the Editable behavior. Run the app, tap and hold the text view, and when the keyboard pops up, edit the fortune text. Now choose the simulator’s Hardware ➤ Shake Gesture command. What happens? The text in the field changes, just as you programmed it to.

Return to the ViewController.swift file and comment out all three of your motion event handling functions. Do this by selecting the text of all three functions and choosing Editor ➤ Structure ➤ Comment Selection (Command+/). Now run your app again, select the text, change it, and shake the simulator. What happens? This time you see an Undo dialog, asking if you want to undo the changes you made to the text.

Motion events are initially sent to the first responder (the text field), eventually pass through the view controller, and ultimately land in the UIApplication object. The UIApplication object interprets a shake event to mean “undo typing.” By intercepting the motion events in your view controller, you overrode the default behavior supplied by the UIApplication object.

Put your app back the way it was by returning to ViewController.swift and choosing Edit ➤ Undo. Do the same to Main.storyboard.

Finishing Touches

Put a little spit and polish on your app with a nice icon—well, at least with the icon you’ll find in the EightBall (Resources) folder. In your project navigator, select the Images.xcassets file and then select the AppIcon group. With the EightBall (Resources) folder visible, drag the three icon image files into the AppIcon preview area, as shown in Figure 4-14. Xcode will automatically assign the appropriate image file to each icon resource, based on its size.

Figure 4-14. Importing app icons

With that detail taken care of, let’s shake things up—literally—by running your app on a real iOS device.

Testing on a Physical iOS Device

You can test a lot of your app using Xcode’s iPhone and iPad simulators, but there are a few things the simulator can’t emulate. Two of those things are multiple (more than two) touches and real accelerometer events. To test these features, you need a real iOS device, with real accelerometer hardware, that you can touch with real fingers.

The first step is to connect Xcode to your iOS Developer account. Choose Xcode ➤ Preferences and switch to the Accounts tab. Choose Add Apple ID from the + button at the bottom of the window, as shown in Figure 4-15.

Figure 4-15. Adding a new Apple ID to Xcode

Supply your Apple ID and password and then click the Add button. If you’re not a member of the iOS Developer Program yet, there’s a convenient Join Program button that will take you to Apple’s website.

Note Before you can run your app on a device, you must first become a member of the iOS Developer Program. See http://developer.apple.com/programs/ios to learn how to become a member. Once you are a member, Xcode will use your Apple ID to download and install the necessary security certificates required to provision a device.

Plug an iPhone, iPad, or iPod Touch into your computer’s USB port. Open the Xcode devices window (Window ➤ Devices). The iOS device you plugged in will appear on the left, as shown in Figure 4-16. If a “trust” dialog appears on your device, as shown on the right in Figure 4-16, you’ll need to grant Xcode access to your device.

Figure 4-16. Device management

Select your iOS device in the sidebar. If a Use for Development button is displayed, click it, and Xcode will prepare your device for development, a process known as provisioning. This will allow you to build, install, and run most iOS projects directly through Xcode.

Once your device is provisioned, return to your project workspace window. Change the scheme setting from one of the simulators to your actual device. I provisioned an iPad named iPad 4, so iPad 4 now appears as one of the run destinations in Figure 4-17.

Figure 4-17. Selecting an iOS device to test

Run the EightBall app again. This time, your app will be built, it will be copied onto your iOS device, and the app will start running there. Pretty neat, isn’t it?

The amazing thing is that Xcode is still in control—so don’t unplug your USB connection just yet! You can set breakpoints, freeze your app, examine variables, and generally do anything you could do in the simulator.

With the EightBall app running, shake your device and see what happens. When you’re done, click the Stop button in the Xcode toolbar. You’ll notice that your EightBall app is now installed on your device. You’re free to unplug your USB connection and take it with you; it is, after all, your app.

Other Uses for the Responder Chain

While the responder chain concept is still fresh in your mind, I want to mention a couple of other uses for the responder chain, before you move on to low-level events. The responder chain isn’t used solely to handle events. It also plays an important role in actions, editing, and other services.

In earlier projects, you connected the actions of buttons and text fields to specific objects. Connecting an action in Interface Builder sets two pieces of information.

· The object that will receive the action (ViewController)

· The action function to call (shortenURL(_:))

It’s also possible to send an action to the responder chain, rather than a specific object. In Interface Builder you do this by connecting the action to the first responder placeholder object, as shown in Figure 4-18.

Figure 4-18. Connecting an action to the responder chain

When the button’s action is sent, it goes initially to the first responder object—whatever that object is. For actions, iOS tests to see whether the object implements the expected function (loadLocation(_:), in this example). If it does, the object receives that message. If not, iOS starts working its way through the responder chain until it finds an object that does.

This is particularly useful in more complex apps where the recipient of the action message is outside the scope of the Interface Builder file or storyboard scene. You can make connections between objects only in the same scene. If you need a button to send an action to another view controller or the application object itself, you can’t make that connection in Interface Builder. But you can connect your button to the first responder. As long as the intended recipient is in the responder chain when the button fires its action, your object will receive it.
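
The same nil-targeted dispatch is available from code via UIApplication’s sendAction(_:to:from:forEvent:). Here’s a sketch, reusing the hypothetical loadLocation(_:) action name from Figure 4-18:

@IBAction func forwardLoadLocation(sender: AnyObject) {
    // to: nil means "start at the first responder and walk the responder chain
    // until some object implements loadLocation(_:)"
    UIApplication.sharedApplication().sendAction("loadLocation:", to: nil,
        from: sender, forEvent: nil)
}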

Editing also depends heavily on the responder chain. When you begin editing text in iOS, like the URL field in the Shorty app, that object becomes the first responder. When the user types using the keyboard—virtual or otherwise—those key events are sent to the first responder. You can have several text fields in the same screen, but only one is the first responder. All key events, copy and paste commands, and so on, go to the active text field.

Touchy

You’ve learned a lot about the so-called high-level events, the initial responder, and the responder chain. Now it’s time to dig into low-level event handling, and you’re going to start with the most commonly used low-level events: touch events.

The Touchy app is a demonstration app. It does nothing more than show you where you’re touching the screen. It’s useful both to see this in action and to explore some of the subtleties of touch event handling. You’ll also learn a new and really important Interface Builder skill: creating custom objects in your interface.

Design

The Touchy app also has a super-simple interface, as depicted in Figure 4-19. Touchy will display the location, or locations, where you’re touching your view object. So that the app isn’t too boring, you’ll jazz it up a little with some extra graphics, but that’s not the focus of this outing.

Figure 4-19. Sketch of Touchy app

The app will work by intercepting the touch events using a custom view object. Your custom view object will extract the coordinates of each active touch point and use that to draw their positions.

Creating the Project

As you’ve done several times already, start by creating a new Xcode project based on the Single View iOS Application template. Name the project Touchy, set the language to Swift, and choose Universal for the device.

Select a location to save the new project and create it. In the project navigator, select the project, select the Touchy target, select the General tab, and change the device orientations so that only the portrait orientation is enabled.

Creating a Custom View

You’re going to depart from the development pattern you’ve used in previous apps. Instead of adding your code to the ViewController class, you’ll create a new custom subclass of UIView. “Why” is explained in Chapter 11. “How” will be explained right now.

Select the Touchy group (not the project) in the project navigator. From the file template library, drag in a new Swift file and drop it into your project, just below the ViewController.swift file. See Figure 4-20. Name the file TouchyView and add it to your project.

Figure 4-20. Adding a new Swift source file

Replace the single import Foundation statement with an import UIKit statement, followed by a class TouchyView: UIView { } declaration, as shown in Figure 4-21.

Figure 4-21. Defining a new Swift class

You’ve successfully created a new Swift class! Your class is a subclass of UIView, so it inherits all of the behavior and features of a UIView object and can be used anywhere a UIView object can.

Handling Touch Events

Now you’re going to customize your UIView object to handle touch events. Remember that the base classes UIResponder and UIView don’t handle touch events. Instead, they just pass them up the responder chain. By implementing your own touch event handling functions, you’re going to change that so your view responds directly to touches.

As you already know, touch events will be delivered to the view object they occurred in. If you didn’t know that, go back and read the section “Hit Testing.” All you have to do is add the appropriate event handling functions to your class. Add the following code to your TouchyView class:

override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    updateTouches(event.allTouches())
}

override func touchesMoved(touches: NSSet, withEvent event: UIEvent) {
    updateTouches(event.allTouches())
}

override func touchesEnded(touches: NSSet, withEvent event: UIEvent) {
    updateTouches(event.allTouches())
}

override func touchesCancelled(touches: NSSet!, withEvent event: UIEvent) {
    updateTouches(event.allTouches())
}

Note Xcode will be showing some errors in your source code. Ignore them for now; you’ll fix that when you add the updateTouches(_:) function.

Each touch event message includes two objects: an NSSet object, containing the touch objects of interest, and a UIEvent object that summarizes the event that caused the function to be called.

In a typical app, your function would be interested in the touches set. This set, or unordered collection, of objects contains one UITouch object for every touch relevant to the event. Each UITouch object describes one touch position: its coordinates, its phase, the time it occurred, its tap count, and so on.

For a “began” event, the touches set will contain the UITouch objects for the touches that just began. For a “moved” event, it will contain only those touch points that moved. For an “ended” event, it will contain only those touch objects that were removed from the screen. This is convenient from a programming perspective because most view objects are interested only in the UITouch objects that are relevant to that event.

The Touchy app, however, is a little different. Touchy wants to track all of the active touches all of the time. You’re not actually interested in what just happened. Instead, you want “the big picture”: the list of all touch points currently in contact with the screen. For that, turn to the event object.

The UIEvent object’s main purpose is to describe the single event that just occurred or, more precisely, the single event that was just pulled from the event queue. But UIEvent has some other interesting information that it carries around. One of those is allTouches(), which returns the current state of all touch points on the device, regardless of what view they are associated with.

Now I can explain what all of your event handling functions are doing. They are waiting for any change to the touch state of the device. They ignore the specific change and dig into the event object to find the state of all active touch objects, which they pass to your updateTouches(_:) function. This function will record the position of all active touches and use that information to draw those positions on the screen.

So, I guess you need to write that function! Immediately after the touch event handler functions you just added in TouchyView.swift, add this function:

var touchPoints = [CGPoint]()

func updateTouches(touches: NSSet?) {
    touchPoints = []
    touches?.enumerateObjectsUsingBlock() { (element, stop) in
        if let touch = element as? UITouch {
            switch touch.phase {
            case .Began, .Moved, .Stationary:
                self.touchPoints.append(touch.locationInView(self))
            default:
                break
            }
        }
    }
    setNeedsDisplay()
}

The updateTouches(_:) function starts by setting the touchPoints array object to an empty array. This is where you’ll store the information you’re interested in. updateTouches(_:) then loops through each of the UITouch objects in the set and examines its phase. The phase of a touch is its current state: “began,” “moved,” “stationary,” “ended,” or “canceled.” Touchy is interested only in the states that represent a finger that is still touching the glass (“began,” “moved,” and “stationary”). The switch statement matches these three states and obtains the coordinates of the touch relative to this view object. That CGPoint value is then added to the touchPoints array.

Once all of the active touch coordinates have been gathered, your view object calls its setNeedsDisplay() function. This function tells your view object that it needs to redraw itself.

Drawing Your View

So far, you haven’t written code to draw anything. You’ve just intercepted the touch events sent to your view and extracted the information you want about the device’s touch state. In iOS, you don’t draw things when they happen. You make note of when something needs to be drawn and wait for iOS to tell your object when to draw it. Drawing is initiated by the user interface update events I mentioned at the beginning of this chapter.

How drawing works is described in Chapter 11, so I won’t go into any of those details now. Just know that when iOS wants your view to draw itself, your object’s drawRect(_:) function will be called. Add this drawRect(_:) function to your class:

override func drawRect(rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()
    UIColor.blackColor().set()
    CGContextFillRect(context, rect)

    var connectionPath: UIBezierPath?
    if touchPoints.count > 1 {
        for location in touchPoints {
            if let path = connectionPath {
                path.addLineToPoint(location)
            } else {
                connectionPath = UIBezierPath()
                connectionPath!.moveToPoint(location)
            }
        }
        if touchPoints.count > 2 {
            connectionPath!.closePath()
        }
    }

    if let path = connectionPath {
        UIColor.lightGrayColor().set()
        path.lineWidth = 6
        path.lineCapStyle = kCGLineCapRound
        path.lineJoinStyle = kCGLineJoinRound
        path.stroke()
    }

    var touchNumber = 0
    let fontAttributes = [
        NSFontAttributeName: UIFont.boldSystemFontOfSize(180),
        NSForegroundColorAttributeName: UIColor.yellowColor()
    ]
    for location in touchPoints {
        let text: NSString = "\(++touchNumber)"
        let size = text.sizeWithAttributes(fontAttributes)
        let textCorner = CGPoint(x: location.x - size.width/2,
                                 y: location.y - size.height/2)
        text.drawAtPoint(textCorner, withAttributes: fontAttributes)
    }
}

Wow, that’s a lot of new code. Again, the details aren’t important, but feel free to study this code to get a feel for what it’s doing. I’ll merely summarize what it does.

The first part fills the entire view with the color black.

The middle section is a big loop that creates a Bezier path, named after the French engineer Pierre Bézier. A Bezier path can represent practically any line, polygon, curve, ellipse, or arbitrary combination of those things. Basically, if it’s a shape, a Bezier path can draw it. You’ll learn all about Bezier paths in Chapter 11. Here, it’s used to draw light gray lines between the touch points, when there are two or more. It’s pure eye candy, and this part of the drawRect(_:) function could be left out and the app would still work just fine.

The last part is the interesting bit. It loops through the touch coordinates and draws a big “1,” “2,” or “3” centered underneath each finger that’s touching the screen, in yellow.

Now you have a custom view class that collects touch events, tracks them, and draws them on the screen. The last piece of this puzzle is how to get your custom object into your interface.

Adding Custom Objects in Interface Builder

Select your Main.storyboard Interface Builder file and select the one and only view object in the view controller scene. Switch to the identity inspector. The identity inspector shows you the class of the object selected. In this case, it’s the plain-vanilla UIView object created by the project template.

Here’s the cool trick: you can use the identity inspector to change the class of a generic view object to any subclass of UIView that you’ve created. Change the class of this view object from UIView to TouchyView, as shown in Figure 4-22. You can do this either by using the pull-down menu or by just typing in the name of the class.

Figure 4-22. Changing the class of an Interface Builder object

Now instead of creating a UIView object as the root view, your app will create a TouchyView object, complete with all the functions, properties, outlets, and actions you defined. You can do this to any existing object in your interface. If you want to create a new custom object, find the base class object in the library (NSObject, UIView, and so on), add that object, and then change its class to your custom one.

There are still a few properties of your new TouchyView object that need to be customized for it to work correctly. With your TouchyView object still selected, switch to the attributes inspector and check the Multiple Touch option under Interaction. By default, view objects don’t receive multitouch events. In other words, the touchesSomePhase(_:,withEvent:) function will never contain more than one UITouch object, even if multiple touches exist. To allow your view to receive all the touches, you must turn on the multiple touch option.

Note If you want, give Touchy an icon too. Open the Touchy (Resources) folder, locate the five TouchyIcon....png files, and drag them into the AppIcon group of the Images.xcassets asset catalog, just as you did for the EightBall app.

Testing Touchy

Set your scheme to the iPhone or iPad simulator and run your project. The interface is completely—and ominously—black, as shown on the left in Figure 4-23.

Figure 4-23. Running Touchy in the simulator

Click the interface and the number “1” appears, as shown in the middle of Figure 4-23. Try dragging it around. Touchy is tracking all changes to the touch interface, updating it, and then drawing a number under the exact location of each touch.

Hold down the Option key and click again. Two positions appear, as shown on the right in Figure 4-23. The simulator will let you test simple two-finger gestures when you hold down the Option key. With just the Option key, you can test pinch and zoom gestures. Hold down both the Option and Shift keys to test two-finger swipes.

But that’s as far as the simulator will go. To test any other combination of touch events, you have to run your app on a real iOS device. Back in Xcode, stop your app, and change the scheme to iOS Device (iPhone, iPad, or iPod Touch—whatever you have plugged in). Run your app again.

Now try Touchy on your iOS device. Try touching two, three, four, or even five fingers. Try moving them around, picking one up, and putting it down again. It’s surprisingly entertaining.

Advanced Event Handling

There are a couple of advanced event handling topics I’d like to mention, along with some good advice. I’ll start with the advice.

Keep your event handling timely. As you now know, your app is “kept alive” by your main thread’s run loop. That run loop delivers everything to your app: touch events, notifications, user interface updates, and so much more. Every event, action, and message that your app handles must execute and return before the next event can be processed. That means if any code you write takes too long, your app will appear to have died. And if code you write takes a really, really long time to finish, your app will die—iOS will terminate your app because it’s stopped responding to events.

I’m sure you’ve had an app “lock up” on you; the display is frozen, it doesn’t respond to touches or shaking or anything. This is what happens when an app’s run loop is off doing something other than processing events. It’s not pleasant. Most iOS features that can take a long time to complete have asynchronous functions (like the ones you used in Shorty), so those time-consuming tasks won’t tie up your main thread. Use these asynchronous functions, pay attention to how long your program takes to do things, and be prepared to reorganize your app to avoid “locking up” your run loop. I’ll demonstrate all of these techniques in later chapters.
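
The usual pattern is to push slow work onto a background queue and hop back to the main queue to update the interface. Here’s a minimal sketch; the FortuneLoader class and its pretend workload are invented for illustration:

import UIKit

class FortuneLoader {
    func reloadFortunes(completion: ([String]) -> Void) {
        // Do the slow part off the main thread so the run loop keeps processing events
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
            let fortunes = ["YES", "NO", "MAYBE"]   // pretend this took seconds to read and parse
            dispatch_async(dispatch_get_main_queue()) {
                completion(fortunes)                // deliver the result back on the main thread
            }
        }
    }
}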

Second, handling multiple touch events can be tricky, even confusing. iOS does its best to untangle the complexity of touch events and present them to your object in a rational and digestible form. iOS provides five features that will make your touch event handling simpler.

· Gesture recognizers

· Filtering out touch events for other views

· Prohibiting multitouch events

· Providing exclusive touch event handling

· Suspending touch events

Gesture recognizers are special objects that intercept touch events on behalf of a view object. Each recognizer detects a specific touch gesture, from a simple tap to a complex multifinger swipe. If it detects the gesture it’s programmed to recognize, it sends an action—exactly like the button objects you’ve used in earlier projects. All you need to do is connect that action to an object in Interface Builder, and you’re done. This feature alone has saved iOS developers tens of thousands of lines of touch event handling code. I’ll show you how to use gesture recognizer objects in later chapters.
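
As a sketch of how little code that takes, here’s a recognizer created programmatically rather than in Interface Builder; handleSwipe(_:) is an action name I made up:

override func viewDidLoad() {
    super.viewDidLoad()
    // Let a recognizer watch the touch events and report only completed left swipes
    let swipe = UISwipeGestureRecognizer(target: self, action: "handleSwipe:")
    swipe.direction = .Left
    view.addGestureRecognizer(swipe)
}

func handleSwipe(recognizer: UISwipeGestureRecognizer) {
    println("the user swiped left")
}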

As I described earlier, the touch events (such as touchesMoved(_:,withEvent:)) include only the relevant touch objects—those touches that originated in your view object—in the touches parameter. Your code doesn’t have to worry about other touches in other views that might be happening at the same time. In Touchy, this was actually a disadvantage, and you had to dig up the global set of touch objects from the UIEvent object. But normally, you pay attention only to the touches in your view.

You’ve also seen how iOS will prohibit multitouch events using UIView’s multipleTouchEnabled property. If this property is false, iOS will send your view object events associated only with the first touch—even if the user is actually touching your view with more than one finger. For the Touchy app to get events about all of the touches, you had to set this property to true. Set this property to false if your view interprets single touch events only and you won’t have to write any code that worries about more than one touch at a time.

If you don’t want iOS to be sending touch events to two view objects simultaneously, you can set UIView’s exclusiveTouch property to true. If set, iOS will block touch events from being sent to any other views once a touch sequence has begun in yours (and vice versa).

Finally, if your app needs to, you can temporarily suspend all touch events from being sent to a specific view or even your entire app. If you want to make an individual view “deaf” to touch events, set its userInteractionEnabled property to false. You can also call your application object’s beginIgnoringInteractionEvents() function, and all touch events for your app will be silenced. Turn them back on again by calling endIgnoringInteractionEvents(). This is useful for preventing touch events from interfering with something else that’s going on (say, a short animation), but don’t leave them turned off for long.
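
In code, those switches look roughly like the following sketch (touchyView here is just an assumed reference to some view in your interface):

// Deliver all of the touches to this view, not just the first one
touchyView.multipleTouchEnabled = true

// Once a touch sequence starts in this view, don't send touches to other views
touchyView.exclusiveTouch = true

// Make a single view temporarily "deaf" to touch events
touchyView.userInteractionEnabled = false

// Silence touch events for the whole app (say, during a short animation)...
UIApplication.sharedApplication().beginIgnoringInteractionEvents()
// ...and be sure to turn them back on again
UIApplication.sharedApplication().endIgnoringInteractionEvents()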

Summary

By now you have a pretty firm grasp on how messages and events get into your app and how they are handled. You know about the event queue and the run loop. You know that events in the queue are dispatched to the objects in your app. You know that some of them go directly to your objects, touch events use hit testing, and the rest get sent to the first responder.

You’ve learned a lot about the responder chain. The responder chain performs a number of important tasks in iOS, beyond delivering events.

You know how to configure an object to handle or ignore specific types of events. You’ve written two apps, one that handled high-level events and a second that tracked low-level touch events.

Possibly even more astounding, you built and ran your app on a real iOS device! Feel free to run any other projects on your device too. Creating your own iOS app that you can carry around with you is an impressive feat!

In the next chapter, you’re going to learn a little about data models and how complex sets of data get turned into scrolling lists on the screen.

EXERCISES

According to the instructions that come with the Magic Eight Ball, you should not shake the ball; it causes bubbles to form in the liquid. Of course, this never stopped my brother and me from shaking the daylights out of it. Instead, you were supposed to place the ball with the “8” up on a table, ask a question, gently turn it over, and read the answer.

For extra credit, rewrite the EightBall app so it uses the device orientation events, instead of shake motion events, to make the message disappear and appear. A device’s orientation will be one of portrait, landscape left, landscape right, upside down, face up, or face down.

Changes in device orientation are delivered via notifications. You haven’t used notifications yet, but think of them as just another kind of event (at least in this context). Unlike events, your object must explicitly request the notifications it wants to receive. Whenever the device changes orientation, such as when the user turns their iPhone over, your object will receive a notification message.
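
As a starting point (not the complete solution), requesting orientation notifications looks something like this sketch; orientationChanged(_:) is a function name you’d choose yourself:

override func viewDidLoad() {
    super.viewDidLoad()
    // Ask the device object to start generating orientation notifications,
    // then register this object to receive them
    UIDevice.currentDevice().beginGeneratingDeviceOrientationNotifications()
    NSNotificationCenter.defaultCenter().addObserver(self,
        selector: "orientationChanged:",
        name: UIDeviceOrientationDidChangeNotification,
        object: nil)
}

func orientationChanged(notification: NSNotification) {
    // Examine UIDevice.currentDevice().orientation here and
    // show or hide the message accordingly
}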

All of the code you need to request and handle device orientation change notifications is shown in the Event Handling Guide for iOS, in the section “Getting the Current Device Orientation with UIDevice.” In Xcode, choose Help ➤ Documentation and API Reference and search for Event Handling Guide.

Change EightBall so it requests device orientation notifications instead of handling shake motion events. When your app receives an orientation change notification, examine the current orientation of the current UIDevice object. If the orientation property is UIDeviceOrientationFaceUp, make a new message appear. If it’s anything else, make the message disappear. Now you have a more “authentic” Magic Eight Ball simulator! You can find my solution to this exercise in the EightBall E1 folder.