
Chapter 18. Taps, Touches, and Gestures

The screens of the iPhone, iPod touch, and iPad—with their crisp, bright, touch-sensitive displays—are truly things of beauty and masterpieces of engineering. The multitouch screen common to all iOS devices is one of the key factors in the platform’s tremendous usability. Because the screen can detect multiple touches at the same time and track them independently, applications are able to detect a wide range of gestures, giving the user a degree of control that goes well beyond what a conventional pointer-based interface can offer.

Suppose you are in the Mail application staring at a long list of junk e-mail that you want to delete. You can tap each one individually, tap the trash icon to delete it, and then wait for the next message to download, deleting each one in turn. This method is best if you want to read each message before you delete it.

Alternatively, from the list of messages, you can tap the Edit button in the upper-right corner, tap each e-mail row to mark it, and then hit the Trash button to delete all marked messages. This method is best if you don’t need to read each message before deleting it. Another alternative is to swipe across a message in the list from right to left. That gesture produces a More button and a Trash button for that message. Tap the Trash button, and the message is deleted.

This example is just one of the countless gestures that are made possible by the multitouch display. You can pinch your fingers together to zoom out while viewing a picture or reverse-pinch to zoom in. On the home screen, you can long-press an icon to turn on “jiggly mode,” which allows you to delete applications from your iOS device.

In this chapter, we’re going to look at the underlying architecture that lets you detect gestures. You’ll learn how to detect the most common gestures, as well as how to create and detect a completely new gesture.

Multitouch Terminology

Before we dive into the architecture, let’s go over some basic vocabulary. First, a gesture is any sequence of events that happens from the time you touch the screen with one or more fingers until you lift your fingers off the screen. No matter how long it takes, as long as one or more fingers remain against the screen, you are still within a gesture (unless a system event, such as an incoming phone call, interrupts it). Note that Cocoa Touch doesn’t expose any class or structure that represents a gesture. In some sense, a gesture is a verb, and a running app can watch the user input stream to see if one is happening.

A gesture is passed through the system inside a series of events. Events are generated when you interact with the device’s multitouch screen. They contain information about the touch or touches that occurred.

The term touch refers to a finger being placed on the screen, dragging across the screen, or being lifted from the screen. The number of touches involved in a gesture is equal to the number of fingers on the screen at the same time. You can actually put all five fingers on the screen, and as long as they aren’t too close to each other, iOS can recognize and track them all. Now, there aren’t many useful five-finger gestures, but it’s nice to know that iOS can handle one if necessary. In fact, experimentation has shown that the iPad can handle up to 11 simultaneous touches! This may seem excessive, but it could be useful if you’re working on a multiplayer game, in which several players are interacting with the screen at the same time.

A tap happens when you touch the screen with a finger and then immediately lift your finger off the screen without moving it around. The iOS device keeps track of the number of taps and can tell you if the user double-tapped, triple-tapped, or even 20-tapped. It handles all the timing and other work necessary to differentiate between two single-taps and a double-tap, for example.

A gesture recognizer is an object that knows how to watch the stream of events generated by a user and recognize when the user is touching and dragging in a way that matches a predefined gesture. The UIGestureRecognizer class and its various subclasses can help take a lot of work off your hands when you want to watch for common gestures. This class nicely encapsulates the work of looking for a gesture and can be easily applied to any view in your application.
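To give a sense of how little code this takes, here’s a minimal sketch of attaching a tap recognizer to a view controller’s view. MyViewController and handleTap: are our own invented names, and we’ll cover recognizers in detail later in this chapter:

import UIKit

class MyViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Watch for single taps anywhere in this controller's view.
        let tap = UITapGestureRecognizer(target: self, action: "handleTap:")
        view.addGestureRecognizer(tap)
    }

    func handleTap(recognizer: UITapGestureRecognizer) {
        // The recognizer reports where the tap landed.
        println("Tapped at \(recognizer.locationInView(view))")
    }
}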

In the first part of this chapter, you’ll see the events that are reported when the user touches the screen with one or more fingers, and how to track the movement of fingers on the screen. You can use these events to handle gestures in a custom view or in your application delegate. Next, we’ll look at some of the gesture recognizers that come with the iOS SDK, and finally, you’ll see how to build your own gesture recognizer.

The Responder Chain

Since gestures are passed through the system inside events, and events are passed through the responder chain, you need to have an understanding of how the responder chain works in order to handle gestures properly. If you’ve worked with Cocoa for Mac OS X, you’re probably familiar with the concept of a responder chain, as the same basic mechanism is used in both Cocoa and Cocoa Touch. If this is new material, don’t worry; we’ll explain how it works.

Responding to Events

Several times in this book, we’ve mentioned the first responder, which is usually the object with which the user is currently interacting. The first responder is the start of the responder chain, but it’s not alone. There are always other responders in the chain as well. In a running application, the responder chain is a changing set of objects that are able to respond to user events. Any class that has UIResponder as one of its superclasses is a responder. UIView is a subclass of UIResponder, and UIControl is a subclass of UIView, so all views and all controls are responders. UIViewController is also a subclass of UIResponder, meaning that it is a responder, as are all of its subclasses, such as UINavigationController and UITabBarController. Responders, then, are so named because they respond to system-generated events, such as screen touches.

If a responder doesn’t handle a particular event, such as a gesture, it usually passes that event up the responder chain. If the next object in the chain responds to that particular event, it will usually consume the event, which stops the event’s progression through the responder chain. In some cases, if a responder only partially handles an event, that responder will take an action and forward the event to the next responder in the chain. That’s not usually what happens, though. Normally, when an object responds to an event, that’s the end of the line for the event. If the event goes through the entire responder chain and no object handles the event, the event is then discarded.

Let’s take a more specific look at the responder chain. An event first gets delivered to the UIApplication object, which in turn passes it to the application’s UIWindow. The UIWindow handles the event by selecting an initial responder. The initial responder is chosen as follows:

· In the case of a touch event, the UIWindow object determines the view that the user touched, and then offers the event to any gesture recognizers that are registered for that view or any view higher up the view hierarchy. If any gesture recognizer handles the event, it goes no further. If not, the initial responder is the touched view and the event will be delivered to it.

· In the case of an event generated by the user shaking the device (which we’ll say more about in Chapter 20) or by a remote control device, the event is delivered directly to the first responder.

If the initial responder doesn’t handle the event, it passes the event to its parent view, if there is one, or to the view controller if the view is the view controller’s view. If the view controller doesn’t handle the event, it continues up the responder chain through the view hierarchy of its parent view controller, if it has one.

If the event makes it all the way up through the view hierarchy without being handled by a view or a controller, the event is passed to the application’s window. If the window doesn’t handle the event, the UIApplication object will pass it to the application delegate, if the delegate is a subclass of UIResponder (which it normally is if you create your project from one of Apple’s application templates). Finally, if the app delegate isn’t a subclass of UIResponder or doesn’t handle the event, then the event goes gently into the good night.
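If you ever want to see this chain for a particular object while debugging, you can walk it yourself. Here’s a small sketch; dumpResponderChain is our own helper function, not part of UIKit:

func dumpResponderChain(responder: UIResponder) {
    // Print each link from the given responder up to
    // the end of the chain.
    var current: UIResponder? = responder
    while current != nil {
        println(NSStringFromClass(current!.dynamicType))
        current = current!.nextResponder()
    }
}

Called with a view from a typical app, this prints something like the view itself, its view controller, any ancestor views and controllers, the window, and finally the application object.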

This process is important for a number of reasons. First, it controls the way gestures can be handled. Let’s say a user is looking at a table and swipes a finger across a row of that table. What object handles that gesture?

If the swipe is within a view or control that’s a subview of the table view cell, that view or control will get a chance to respond. If it doesn’t respond, the table view cell gets a chance. In an application like Mail, in which a swipe can be used to delete a message, the table view cell probably needs to look at that event to see if it contains a swipe gesture. Most table view cells don’t respond to gestures, however. If they don’t respond, the event proceeds up to the table view, and then up the rest of the responder chain until something responds to that event or it reaches the end of the line.

Forwarding an Event: Keeping the Responder Chain Alive

Let’s take a step back to that table view cell in the Mail application. We don’t know the internal details of the Apple Mail application; however, let’s assume that the table view cell handles the delete swipe and only the delete swipe. That table view cell must implement the methods related to receiving touch events (discussed shortly) so that it can check to see whether an event could be interpreted as part of a swipe gesture. If the event matches the swipe gesture that the table view cell is looking for, then the cell takes an action, and that’s that; the event goes no further.

If the event doesn’t match the table view cell’s swipe gesture, the table view cell is responsible for forwarding that event manually to the next object in the responder chain. If it doesn’t do its forwarding job, the table and other objects up the chain will never get a chance to respond, and the application may not function as the user expects. That table view cell could prevent other views from recognizing a gesture.

Whenever you respond to a touch event, you need to keep in mind that your code doesn’t work in a vacuum. If an object intercepts an event that it doesn’t handle, it needs to pass it along manually. One way to do this is to call the same method on the next responder. Here’s a bit of fictional code:

func respondToFictionalEvent(event: UIEvent) {
    if shouldHandleEvent(event) {
        // This object knows what to do with the event,
        // so it consumes the event right here.
        handleEvent(event)
    } else {
        // Not our event: pass it along so the next
        // responder in the chain gets its chance.
        nextResponder().respondToFictionalEvent(event)
    }
}

Notice that we call the same method on the next responder. That’s how to be a good responder-chain citizen. Fortunately, most of the time, methods that respond to an event also consume the event. However, it’s important to know that if that’s not the case, you need to make sure the event is passed along to the next link in the responder chain.
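With the actual touch notification methods described later in this chapter, the conventional way to forward an event you don’t handle is to call the superclass’s implementation, which keeps the event moving up the chain. Here’s a sketch, assuming a custom view (TwoFingerView is our own invented class) that cares only about two-finger touches:

import UIKit

class TwoFingerView: UIView {
    override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
        if touches.count == 2 {
            // A two-finger touch is ours; handle it here.
        } else {
            // Anything else is not our business; let the
            // superclass pass it up the responder chain.
            super.touchesBegan(touches, withEvent: event)
        }
    }
}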

The Multitouch Architecture

Now that you know a little about the responder chain, let’s look at the process of handling gestures. As we’ve indicated, gestures are passed along the responder chain, embedded in events. This means that the code to handle any kind of interaction with the multitouch screen needs to be contained in an object in the responder chain. Generally, that means we can choose to either embed that code in a subclass of UIView or embed the code in a UIViewController.

So, does this code belong in the view or in the view controller?

If the view needs to do something to itself based on the user’s touches, the code probably belongs in the class that defines that view. For example, many control classes, such as UISwitch and UISlider, respond to touch-related events. A UISwitch might want to turn itself on or off based on a touch. The folks who created the UISwitch class embedded gesture-handling code in the class so the UISwitch can respond to a touch.

Often, however, when the gesture being processed affects more than the object being touched, the gesture code really belongs in the relevant view controller class. For example, if the user makes a gesture touching one row that indicates that all rows should be deleted, the gesture should be handled by code in the view controller. The way you respond to touches and gestures in both situations is exactly the same, regardless of the class to which the code belongs.
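As a trivial illustration of touch-handling code that belongs in the view itself, here’s a sketch of a custom view (ToggleView is our own invented class) that uses the touchesBegan(_:withEvent:) method, introduced in the next section, to toggle its own appearance:

import UIKit

class ToggleView: UIView {
    var isOn = false

    override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
        // The touch affects only this view, so the view
        // itself is the right place to handle it.
        isOn = !isOn
        backgroundColor = isOn ? UIColor.greenColor() : UIColor.redColor()
    }
}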

The Four Touch Notification Methods

Four methods are used to notify a responder about touches. When the user first touches the screen, the system looks for a responder that has a method called touchesBegan(_:withEvent:). To find out when the user first begins a gesture or taps the screen, implement this method in your view or your view controller. Here’s an example of what that method might look like:

override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    // One of the touches that just began; fine when we
    // don't care which finger it belongs to.
    let touch = touches.anyObject() as UITouch
    let numTaps = touch.tapCount
    // allTouches() covers every finger currently on the screen.
    let numTouches = event.allTouches().count

    // Do something here
}

This method (and each of the touch-related methods) is passed an NSSet instance called touches and an instance of UIEvent, whose allTouches() method returns another set of touches. Here’s a simple description of what these two sets of touches contain:

· The set returned by allTouches() contains one UITouch object for each finger that is currently pressed against the screen, whether or not that finger is currently moving.

· The NSSet passed as the touches argument contains one UITouch object for each finger that has just been added to or removed from the screen, or that has just moved or stopped moving. In other words, it tells you what changed between this call and the last time one of your touch notification methods was called.
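Here’s a quick sketch of the difference, using the touchesMoved(_:withEvent:) notification method (described shortly), which receives the same two sets:

override func touchesMoved(touches: NSSet, withEvent event: UIEvent) {
    // touches: only the fingers whose state just changed.
    let changedCount = touches.count
    // allTouches(): every finger currently on the screen.
    let totalCount = event.allTouches().count
    println("\(changedCount) of \(totalCount) touches moved")
}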

Each time a finger touches the screen for the first time, a new UITouch object is allocated to represent that finger; it is added to the set delivered with each UIEvent, which you can retrieve by calling the event’s allTouches() method. All future events that report activity for that same finger will contain the same UITouch instance, both in the allTouches() set and in the touches argument (although in the latter case, only when there is activity to report for that finger), until that finger is removed from the screen. Thus, to track the activity of any given finger, you need to monitor its UITouch object.

You can determine the number of fingers currently pressed against the screen by getting a count of the objects returned by allTouches(). If the event reports a touch that is part of a series of taps by any given finger, you can get the tap count from the tapCount property of the UITouch object for that finger. If there’s only one finger touching the screen, or if you don’t care which finger you ask about, you can quickly get a UITouch object to query by using the anyObject() method of NSSet.
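Putting these pieces together, here’s a sketch that detects a double-tap as the finger lifts, using touchesEnded(_:withEvent:), another of the four notification methods:

override func touchesEnded(touches: NSSet, withEvent event: UIEvent) {
    let touch = touches.anyObject() as UITouch
    if touch.tapCount == 2 {
        // The finger that just lifted completed a double-tap.
        println("Double-tap detected")
    }
}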