Smile - Learn iOS 8 App Development, Second Edition (2014)


Chapter 7. Smile!

Pictures and video are a big part of mobile apps. This is made possible by the amazing array of audio/video hardware built into most iOS devices. Your apps can take advantage of this hardware—and it’s not that difficult. Apple has made it exceptionally easy to present an interface where your user can take a picture, or choose an existing picture from their photo library, and use that image in your app.

In this chapter you’ll add pictures to MyStuff. You’ll allow a user to choose, or take, a picture for each item they own and display that image in both the detail view and the master list. In doing that, you’ll learn how to do the following:

· Create a camera or image picker controller and display it

· Retrieve the image the user took or chose

· Use Core Graphics to crop and resize the image

· Save the image to the user’s camera roll

· Show image thumbnails in the rows of a table view

Along the way, you’ll learn a few other useful skills:

· Add a tap gesture recognizer to a view object

· Present a view controller in a popover

· Dismiss the keyboard

This chapter will extend the MyStuff app you wrote in Chapter 5. You can continue working on the version you wrote in Chapter 5 or locate the finished version in the Ch 7 folder of the Learn iOS Developer Projects folder. If you’re adding to the project in Chapter 5—which I highly recommend—you will need the image resource file in the MyStuff (Resources) subfolder of the Ch 7 folder.


Expanding your MyStuff app won’t be difficult. You’ve already created the master-detail interface, and you have the table views and editing working. All of the hard work is done; you just need to embellish it a little. In the detail view you’ll add a UIImageView object to display an image of the item, and in the table view you’ll add icons to show a smaller version in the list, as shown in Figure 7-1.


Figure 7-1. Updated MyStuff design

When the user taps the image in the detail view, your app will present either a camera or a photo picker interface. The camera interface will allow the user to take a picture with the device’s built-in camera. The photo picker interface lets the user choose an existing image from their photo library. The new image will appear in both the detail view and the master list. Let’s get started!

Extending Your Design

To extend your design, you’ll need to make small alterations to a number of existing classes and interface files. Whether you realize it or not, your MyStuff app uses a model-view-controller design pattern. I describe the model-view-controller design in the next chapter, but for now just know that some of the objects in your app are “data model” objects, some are “view” objects, and others are “controller” objects. Adding pictures to MyStuff will require the following steps:

1. Extending your data model to include image objects

2. Adding view objects to display those images

3. Expanding your controller objects to take a picture and update the data model

Revising the Data Model

The first step is to extend your data model. Locate your MyWhatsit.swift interface file and add two new properties.

var image: UIImage? {
    didSet {
        postDidChangeNotification()  // the same notification your other stored properties post
    }
}

var viewImage: UIImage {
    return image ?? UIImage(named: "camera")
}

The first adds an optional stored UIImage property to each MyWhatsit object. It includes a didSet observer that posts a “did change” notification whenever it’s modified, just like your other stored properties. Now every MyWhatsit object has an image, and changing that image will notify its observers. Gee, that was easy!

The second property requires a little more explanation. In all of the view objects (both in the details view and in the table view) you want to display the image of the item. If there is no image, however, you want to display a placeholder image—an image that says “there’s no image.” The computed viewImage property will return either the item’s image or a placeholder image. It’s an immutable property, which means that clients of this object can’t change it; in other words, the statement something.viewImage = newImage is not allowed.

This property also uses some Swift syntax you might not recognize. The ?? is the weirdly named nil coalescing operator. It’s used with optional values to return a substitute value when the value is nil. If the value on the left has a value, the expression evaluates to that value. If the value on the left doesn’t have a value, it evaluates to the expression on the right.
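As a standalone illustration (the names here are invented, not part of MyStuff), the ?? operator behaves like this:

```swift
let setLocation: String? = "Garage"
let noLocation: String? = nil

let a = setLocation ?? "Unknown"   // left side has a value, so a is "Garage"
let b = noLocation ?? "Unknown"    // left side is nil, so b is "Unknown"
```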


Adding the viewImage property to the MyWhatsit class is actually poor software design. The problem is that the MyWhatsit class is a data model class and the viewImage property is in the domain of the view classes. In plain English, it solves a problem with displaying the image, not with storing the image. You’re adding view-specific functionality to a data model object, which is something you should avoid.

In a well-organized model-view-controller (MVC) design, the domain of each class should be pure: the data model classes should have only data model–related properties and functions—nothing else. The problem here is that it’s so darned convenient to add a viewImage property to the MyWhatsit class: it encapsulates the logic of providing a consistent and predictable display image for that item, which simplifies the code elsewhere. Code that encapsulates logic and makes the object easier to use can’t be bad, right?

It isn’t bad. In fact, it’s good. But is there a way to avoid the architectural “flaw” of adding viewImage directly to the MyWhatsit class? The solution is to use an extension. An extension is an unusual feature of Swift that solves thorny domain issues like this, without making your objects more difficult to use. Using an extension, you can still add a viewImage property to your MyWhatsit objects but do it in a different module—a view module, separate from your MyWhatsit data model class. You get the benefits of adding a viewImage property to MyWhatsit while keeping your data model code separate from your view code. I explain extensions in Chapter 20.
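To make the idea concrete, here’s a sketch of what that separation might look like. (You don’t need to write this; the file name MyWhatsitViewSupport.swift is invented purely for illustration.)

```swift
// MyWhatsit.swift: the data model stores the image and nothing else
class MyWhatsit {
    var image: UIImage?
    // ...name, location, and the rest of the data model...
}

// MyWhatsitViewSupport.swift: view-domain logic, kept in its own file
extension MyWhatsit {
    var viewImage: UIImage {
        return image ?? UIImage(named: "camera")
    }
}
```

At runtime the result is identical to declaring viewImage inside the class; only the organization of the source code changes.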

At runtime (when your app runs) your MyWhatsit object still has a viewImage property, just as if you’d added it directly to your MyWhatsit class. So, what does it matter? Not much, and for a small project like this the ramifications are negligible, which is why I didn’t have you create an extension for viewImage. Sometimes pragmatism trumps a fanatic adherence to design patterns. Just know that in a much more complex project, defining viewImage in MyWhatsit could become an obstacle, and the solution would be to move it into an extension.

For the computed viewImage property to work, you need to add that placeholder image file to your project. Find the camera.png file in the MyStuff (Resources) folder and drag it into the group list of your Images.xcassets asset catalog, as shown in Figure 7-2.


Figure 7-2. Adding camera.png resource

MyWhatsit is finished, so it’s time to add the new view objects to your interface.

Adding an Image View

The next step is to add the view objects to your detail interface. This should feel like familiar territory by now.

1. Add an imageView outlet to your DetailViewController class.

2. Add label and image view objects to your DetailViewController interface file.

3. Connect the imageView outlet to the image view object.

Start in your DetailViewController.swift file. Add the following property:

@IBOutlet var imageView: UIImageView!

Now switch to the Main.storyboard file. From the object library, add a new label object. Position it below the Location text field, as shown in Figure 7-3. Change the label’s title to Picture. Click the pin constraints control and add a top and leading edge constraint, as shown on the right in Figure 7-3.


Figure 7-3. Adding the Picture label

Add an image view object and position it underneath the new label. Select it. Click the pin constraints control, add a top constraint, and add height and width constraints, both set to 230, as shown in Figure 7-4. Finally, click the alignment constraint control and add a Center Horizontally in Container View constraint (with a value of 0). Your image view is now 230 by 230 pixels in size, centered, and just below the Picture label.


Figure 7-4. Adding size constraints to the image view

With the image view still selected, switch to the attributes inspector and change its Image property to camera. This will display the placeholder image when there’s no MyWhatsit object being edited. (This is mostly for the benefit of the split-view iPad interface.)

The last step is to select Detail View Controller. Switch to the connections inspector and locate the imageView outlet you added to the controller. Connect it to the image view object, as shown in Figure 7-5.


Figure 7-5. Connecting the imageView outlet

Note The layout of the image view could be adapted so it filled out a regular-width (iPad) interface better. You can define constraints that adapt to different size classes, and I show you how in Chapter 9. After you’ve read that chapter, come back and make this layout smarter.

With the view objects in place, it’s time to add the code to make your item images appear.

Updating the View Controller

You need to modify the code in the master view controller to add the image to the table cell and modify the code in the detail view controller to make the image appear in the new image view. Start with MasterViewController.swift. Locate the following code in tableView(_:,cellForRowAtIndexPath:) and add the cell.imageView line:

cell.textLabel?.text = thing.name
cell.detailTextLabel?.text = thing.location
cell.imageView?.image = thing.viewImage
return cell

The new code sets the image for the cell (cell.imageView.image) to the viewImage of the row’s MyWhatsit object. Remember that the view image will be either the item’s actual image or a placeholder. The act of setting the cell’s image view will alter the cell’s layout so the image appears on the left. (Refer to the “Cell Styles” section in Chapter 5.)

You’re all done with MasterViewController. Click DetailViewController.swift and locate the configureView() function. Find the following code and add the imageView.image line:

if nameField != nil {
    nameField.text = detail.name
    locationField.text = detail.location
    imageView.image = detail.viewImage
}

This new line sets the image of the UIImageView object (connected to the imageView outlet) to the image of the MyWhatsit object being edited.

From a data model and view standpoint, everything is ready to go, so give it a try. Set the scheme to the iPhone simulator and run the project. You’ll see the placeholder images appear in the table and the detail view, as shown in Figure 7-6.


Figure 7-6. Placeholder images

So far everything is working great—there’s just no way to change the picture. To accomplish that, you’ll need to create an action.

Connecting a Choose Image Action

You want the camera, or photo library picker, interface to appear when the user taps the image in the detail view. That’s simple enough to hook up: create an action method and connect the image view to it. Start by defining a new action in DetailViewController.swift (you don’t need to write it yet; just declare it).

@IBAction func choosePicture(_: AnyObject!) {
}

Now switch back to the Main.storyboard interface, select the image view object, and connect its action outlet to the choosePicture: action in the DetailViewController.

Uh-oh, we seem to have a problem. The image view object isn’t a button or any other kind of control view; it doesn’t send an action message. In fact, by default, it ignores all touch events (its User Interaction Enabled property is false). So, how do you get the image view object to send an action to your view controller?

There are a couple of ways. One solution would be to subclass UIImageView and override its touch event methods, as described in Chapter 4. But there’s a much simpler way: attach a gesture recognizer object to the view.

In the object library, locate the tap gesture recognizer. Drag a new tap gesture recognizer object into the interface and drop it into the image view object, as shown in Figure 7-7.


Figure 7-7. Attaching a tap gesture recognizer to the image view

When you drop a gesture recognizer into a view object, Interface Builder creates a new gesture recognizer object and connects the view object to it. This is a one-to-many relationship: a view can be connected to multiple gesture recognizers, but a recognizer works only on a single view object. To see the relationship, select the view object and use the connections inspector to see its recognizers. Hover your cursor over the connection, and Interface Builder will highlight the object it’s connected to, shown at the bottom of Figure 7-8.


Figure 7-8. Examining the gesture recognizer connection of the image view object

Tip You can also see the inverse connections in the connections inspector. Select a recognizer object. Toward the bottom of the inspector you’ll find the referencing outlet collections section. This section shows the connections from other view objects to this recognizer object. This works with all objects in Interface Builder.

By default, a new tap gesture recognizer is configured to recognize single-finger tap events, which is exactly what you want. You do, however, need to change the attributes of the image view object. Even though you have it connected to a gesture recognizer, the view object is still set to ignore touch events, so it will never receive any events to recognize. Rectify this by selecting the image view object and use the attributes inspector to check the User Interaction Enabled property, as shown in Figure 7-9.


Figure 7-9. Enabling touch events for the image view

The last step is to connect the gesture recognizer to the choosePicture: action. Holding down the Control key, drag from the gesture recognizer in the scene’s dock, as shown in Figure 7-10, or from the object outline. Both represent the same object. Drag the connection to the DetailViewController object and connect it to the choosePicture: action, also shown in Figure 7-10.


Figure 7-10. Connecting the -choosePicture: action

A choosePicture: action will now be sent to the detail view controller when the user taps the image. Now you have to implement the choosePicture(_:) function, which brings you to the fun part: letting the user take a picture.
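Interface Builder did all of this wiring for you, but it may help to see the equivalent in code. This is only a sketch; the storyboard connections you just made already do the same job, so you don’t need to add it.

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    // Image views ignore touches by default; opt in to touch events
    imageView.userInteractionEnabled = true
    // A single-finger, single-tap recognizer that sends choosePicture: to this controller
    let tap = UITapGestureRecognizer(target: self, action: "choosePicture:")
    imageView.addGestureRecognizer(tap)
}
```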

Taking Pictures

The UIImagePickerController class provides simple, self-contained interfaces for taking a picture, recording a movie, or choosing an existing item from the user’s photo library. The image picker controller does all of the hard work. For the most part, all your app has to do is create aUIImagePickerController object and present it as you would any other view controller. The delegate of the controller will receive messages that contain the image the user picked, the photo they took, or the movie they recorded.

That’s not to say the image picker controller can do everything. There are a number of decisions and considerations that your app must make before and after the image picker has done its thing. This will be the bulk of the logic in your app, and I’ll explain these decisions as you work through the code. Start by filling out the choosePicture(_:) function you started in your DetailViewController.swift file.

@IBAction func choosePicture(_: AnyObject!) {
    if detailItem == nil {
        return
    }

The first decision is easy: this action does something only if the detail view is currently editing a MyWhatsit object. If not (detailItem == nil), then return and do nothing. This can happen in the iPad interface when the detail view is visible, but the user has yet to select an item to edit.

You Can’t Always Get What You Want

Now you get to the code that deals with deciding what image picker interfaces are available to your app. Continue writing the choosePicture(_:) function in DetailViewController.swift.

let hasPhotos = UIImagePickerController.isSourceTypeAvailable(.PhotoLibrary)
let hasCamera = UIImagePickerController.isSourceTypeAvailable(.Camera)

This is the intersection of what your user wants to do and what your app can do. The UIImagePickerController has the potential to present a still camera, video camera, combination still and video camera, photo library picker, or camera roll (saved) photo picker. That doesn’t, however, mean it can do all of those things. Different iOS devices have different hardware. Some have a camera, some don’t, and some have two. Some cameras are capable of taking movies, while others aren’t. Even on devices that have cameras and photo libraries, security or restrictions may prohibit your app from using them.

The first step to using the image picker is to decide what you want to do and then find out what you can do. For this app, you want to either present a still camera interface or present a picker interface to choose an existing image from the photo library. Use the UIImagePickerController class function isSourceTypeAvailable(_:) to find out whether you can do either of those. You pass the function a constant indicating the kind of interface you’d like to present, and the method tells you whether that interface can be used.

The result of asking whether the photo library picker interface can be used is saved in the hasPhotos value. The hasCamera value will remember if the live camera interface is available.

Note There’s a third interface, UIImagePickerControllerSourceType.SavedPhotosAlbum. This presents the same interface as the photo library picker but allows the user to choose images only in their camera roll—called the Saved Photos album on devices that don’t have a camera.

The switch statement that follows decides what to do in each of the four possible situations:

switch (hasPhotos,hasCamera) {
    case (true,true):
        let alert = UIAlertController(title: nil,
                                    message: nil,
                             preferredStyle: .ActionSheet)
        alert.addAction(UIAlertAction(title: "Take a Picture",
                                      style: .Default,
                                    handler: { (_) in
            self.presentImagePicker(.Camera)
        }))
        alert.addAction(UIAlertAction(title: "Choose a Photo",
                                      style: .Default,
                                    handler: { (_) in
            self.presentImagePicker(.PhotoLibrary)
        }))
        alert.addAction(UIAlertAction(title: "Cancel",
                                      style: .Cancel,
                                    handler: nil))
        if let popover = alert.popoverPresentationController {
            popover.sourceView = imageView
            popover.sourceRect = imageView.bounds
            popover.permittedArrowDirections = ( .Up | .Down )
        }
        presentViewController(alert, animated: true, completion: nil)
    case (true,false):
        presentImagePicker(.PhotoLibrary)
    case (false,true):
        presentImagePicker(.Camera)
    default: /* (false,false) */
        break
}
}

The switch statement considers the tuple (hasPhotos,hasCamera). In the first case, both are true, which means you don’t know which interface to present. When in doubt, ask the user using an action sheet.

The action sheet has three buttons: Take a Picture, Choose a Photo, and Cancel. A UIAlertAction object defines each choice with properties for the button’s title, its style, and—most importantly—the code that will execute should the user tap that button.

The second and third cases of the switch statement ((true,false) and (false,true)) handle the situations where only one of the interfaces is available and simply present the one that is. The final case does nothing since there’s nothing to do.

Tip In the real world, it would be a good idea to put up an alert message telling the user that there are no available image sources, rather than just ignoring their tap—but I’ll leave that as an exercise you can explore on your own.

To review, you’ve queried the UIImagePickerController to determine which interfaces, in the subset of interfaces you’d like to present, are available. If none, do nothing. If only one is available, present that interface immediately. If more than one is available, ask the user which one they would like to use, wait for their answer, and present that. The next big task is to present the interface.

Presenting the Image Picker

Now add the presentImagePicker(_:) function to your DetailViewController class.

func presentImagePicker(source: UIImagePickerControllerSourceType) {
    let picker = UIImagePickerController()
    picker.sourceType = source
    picker.mediaTypes = [kUTTypeImage as NSString]
    picker.delegate = self
    presentViewController(picker, animated: true, completion: nil)
}

This method starts by creating a new UIImagePickerController object, a special subclass of UIViewController.

The sourceType property determines which interface the image picker will present. It should be set only to values that returned true by isSourceTypeAvailable(_:). In your app, it’s set to either UIImagePickerControllerSourceType.Camera orUIImagePickerControllerSourceType.PhotoLibrary, which you’ve already determined is available.

The mediaTypes property is an array of data types that your app is prepared to accept. The valid choices in iOS 8 are (currently) kUTTypeImage, kUTTypeMovie, or both. This property modifies the interface (camera or picker) so that only those image types are allowed. Setting only kUTTypeImage when presenting the camera interface limits the controls so the user can only take still images. If you included both types (kUTTypeImage and kUTTypeMovie), then the camera interface would allow the user to switch between still and movie capture as they please.

Tip To find out which media types are actually supported for a particular picker interface (say, for the .Camera interface), call the availableMediaTypesForSourceType(_:) function. Some cameras can record movies, while others can take only still photographs. And future versions of iOS may sport new media types, so it’s a good practice to check.
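A sketch of such a check might look like this (the canRecordMovies name is just illustrative):

```swift
// Ask the picker which media types the camera actually supports
if let types = UIImagePickerController.availableMediaTypesForSourceType(.Camera) {
    // types is an array of UTI strings, such as kUTTypeImage and kUTTypeMovie
    let canRecordMovies = (types as NSArray).containsObject(kUTTypeMovie as NSString)
    // ...only offer movie capture if canRecordMovies is true...
}
```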

The kUTTypeImage value is also an odd duck. First, it’s not part of the standard UIKit framework. It’s defined in the Mobile Core Services framework. To use it, add this import statement at the beginning of the DetailViewController.swift file:

import MobileCoreServices

Another quirk is that the constants kUTTypeImage and kUTTypeMovie aren’t native Swift values. They’re Core Foundation literals, which is why you had to coerce kUTTypeImage into a Cocoa string object (kUTTypeImage as NSString). Core Foundation types and the toll-free bridge are explained in Chapter 20.

Tip There are a number of other UIImagePickerController properties that you could set before you start the interface. For example, set its allowsEditing property to true if you’d like to give the user the ability to refine pictures or trim movies.

The last two lines of presentImagePicker(_:) set your controller as the delegate for the picker and start its interface. The controller slides into view and waits for the user to take a picture, pick an image, or cancel the operation. When one of those happens, your controller receives the appropriate delegate message. But to be the image picker delegate, your controller must adopt both the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols. Add those to your DetailViewController class declaration now.

class DetailViewController: UIViewController, UIImagePickerControllerDelegate,
UINavigationControllerDelegate {

Note Your DetailViewController isn’t interested in, and doesn’t implement, any of the UINavigationControllerDelegate functions. It adopts the protocol simply to avoid the compiler error that results if it doesn’t.

With the picker up and running, you’re now ready to deal with the image the user takes or picks.

Importing the Image

Ultimately, the user will take or choose a picture. This results in a call to your imagePickerController(_:,didFinishPickingMediaWithInfo:) delegate function. This is the function where you’ll take the image the user took/selected and add it to the MyWhatsit object. All of the information about what the user did is contained in a dictionary, passed to your function via the info parameter. Add this code to your DetailViewController.swift file. The function starts out simply enough.

func imagePickerController(picker: UIImagePickerController,
      didFinishPickingMediaWithInfo info: [NSObject : AnyObject]) {
    var image: UIImage! = info[UIImagePickerControllerEditedImage] as? UIImage
    if image == nil {
        image = info[UIImagePickerControllerOriginalImage] as UIImage
    }

The first task is to obtain the image object. There are, potentially, two possible images: the original one and the edited one. If the user cropped, filtered, or performed any other in-camera editing, the one you want is the edited version. Start by requesting that one (UIImagePickerControllerEditedImage) from the info dictionary. If that value is nil, then the original (UIImagePickerControllerOriginalImage) is the only image supplied.

Note If you configured a picker interface that allowed the user to use more than one media type (kUTTypeImage and kUTTypeMovie), the info[UIImagePickerControllerMediaType] value will tell you which was chosen.

The next block of code considers the case where the user has taken a picture. When users take a picture, most expect their photo to appear in their camera roll. This isn’t a requirement, and another app might act differently, but here you meet the user’s expectations by saving the picture they just took to their camera roll:

if picker.sourceType == .Camera {
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}

You don’t want to do this if the user picked an existing image from their photo library; you’d just duplicate the picture that was already in their library.

Tip Many apps allow users to save an image to their camera roll. You can do this at any time using the UIImageWriteToSavedPhotosAlbum() function. This function isn’t limited to being used in conjunction with the image picker interface.
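If you want to know when (and whether) the save finished, the same function accepts a target object and a selector for a completion callback. Here’s a hedged sketch, written inside a view controller; the callback method name is fixed by the API, but the error-handling body is up to you:

```swift
// Ask to be notified when the save completes; the selector must have
// exactly this signature: image:didFinishSavingWithError:contextInfo:
UIImageWriteToSavedPhotosAlbum(image, self,
    Selector("image:didFinishSavingWithError:contextInfo:"), nil)

func image(image: UIImage!, didFinishSavingWithError error: NSError!,
           contextInfo: UnsafeMutablePointer<Void>) {
    if error != nil {
        // the save failed (for example, the user denied Photos access)
    }
}
```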

Cropping and Resizing

Now that you have the image, what do you do with it? You could just set the MyWhatsit image property to the returned image object and return. While that would work, it’s a bit crude. First, modern iOS devices have high-resolution cameras that produce big images, consuming several megabytes of memory for each one. It won’t take too many such pictures before your app will run out of memory and crash. Also, the images are rectangular, and both the details interface and the table view would look better using square images.

To solve both of these problems, you’ll want to scale down and crop the user’s image. Start by cropping the image with this code, which is the next part of your imagePickerController(_:,didFinishPickingMediaWithInfo:) function:

let cgImage = image.CGImage
let height = Int(CGImageGetHeight(cgImage))
let width = Int(CGImageGetWidth(cgImage))
var crop = CGRect(x: 0, y: 0, width: width, height: height)
if height > width {
    crop.size.height = crop.size.width
    crop.origin.y = CGFloat((height-width)/2)
} else {
    crop.size.width = crop.size.height
    crop.origin.x = CGFloat((width-height)/2)
}
let croppedImage = CGImageCreateWithImageInRect(cgImage, crop)

The first step is to get a Core Graphics image reference from the UIImage object. UIImage is a convenient and simple-to-use object that handles all kinds of convoluted image storage, conversion, and drawing details for you. It does not, however, let you manipulate or modify the image in any significant way. To do that, you need to “step down” into the lower-level Core Graphics framework, where the real image manipulation and drawing functions reside. The cgImage value contains a CGImageRef, a Core Graphics image reference. This is a reference (think of it like an object reference) that contains primitive image data.

The next step is to get the height and width (in pixels) of the image. That’s accomplished by calling the functions CGImageGetHeight() and CGImageGetWidth().


Much of the Cocoa Touch framework is actually written in C and Objective-C, not Swift. C is a procedural language that has been around a long time and is one of the world’s most commonly used computer languages. Objective-C is built on top of C, adding the concept of objects to C.

In Chapter 6 I spoke of writing programs entirely by defining structures and passing those structures to functions. This is exactly how you program using C and the framework of C functions called Core Foundation.

While C is not an object-oriented language, you can still write object-oriented programs; it’s just more work. In Core Foundation, a class is called a type, and an object is a reference. Instead of calling the functions of an object, you call a global function and pass it a reference (typically as the first parameter). In other words, instead of writing myImage.height to get the height of an image, you write CGImageGetHeight(myImage). In C there are no classes, and structs and enums can’t have functions the way they can in Swift.

While most Core Foundation types will work only with Core Foundation functions, a few fundamental types are interchangeable with Swift (and Objective-C) objects. These include String/NSString/CFStringRef, NSNumber/CFNumberRef, Array/NSArray/CFArrayRef, Dictionary/NSDictionary/CFDictionaryRef, NSURL/CFURLRef, and others. Any C, Objective-C, or Swift function that expects one will accept the other as is. This is called the toll-free bridge, and you’ve already used it in this app. The kUTTypeImage string is really aCFStringRef, not an NSString object. But since the two are interchangeable, it was possible to pass the Core Foundation kUTTypeImage string value in the parameter that expected an NSString object.

The if block decides whether the image is landscape (width > height) or portrait (height > width). Based on this, it sets up a CGRect that describes a square in the middle of the image. If landscape, it makes the rectangle the height of the image and insets the left and right edges. If portrait, the rectangle is the width of the image, and the top and bottom are trimmed.

The function after the if/else block does all of the work. The CGImageCreateWithImageInRect() function takes an existing Core Graphics image, picks out just the pixels in the rectangle, and copies them to a new Core Graphics image. The end result is a square Core Graphics image with just the middle section of the original image.

The next step is to turn the CGImageRef back into a UIImage object so it can be stored in the MyWhatsit object. At the same time, you’re going to scale it down so it’s not so big.

let maxImageDimension: CGFloat = 640.0
image = UIImage(CGImage: croppedImage,
scale: max(crop.height/maxImageDimension,1.0),
orientation: image.imageOrientation)

The UIImage initializer UIImage(CGImage:scale:orientation:) creates a new UIImage object from an existing CGImageRef. At the same time, it can scale the image and change its orientation. The scale parameter is the ratio between the size of the original image and a 640-pixel one. This scales down the (probably) larger image from the device’s camera to a 640x640 pixel image, which is a manageable size. The max() function keeps the ratio from dropping below 1.0 (1:1); this prevents an image that’s already smaller than 640 pixels from being made larger.
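To see the arithmetic, consider two hypothetical crop sizes:

```swift
let maxImageDimension: CGFloat = 640.0

// A 2448x2448 crop from an 8-megapixel camera:
max(2448.0/maxImageDimension, 1.0)   // 3.825, so the UIImage reports 2448/3.825 = 640 points
// A 480x480 image that's already smaller than 640 pixels:
max(480.0/maxImageDimension, 1.0)    // 1.0, so the image keeps its original size
```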

Note UIImage has an orientation property. Core Graphics images do not. Images taken with the camera are all in landscape format. When you take a portrait (vertical) picture, you get a UIImage with a landscape image and an orientation that tells UIImage to draw the image vertically. When you started working with the CGImageRef, that orientation information was lost. If you step through the program with the Xcode debugger, you’ll see that the code crops a landscape image (width > height), even if you took a portrait photo. So to make the photo draw the way it was taken, you have to supply the original orientation when creating the new UIImage.

Winding Up

All of the hard part is over. The only thing left for this function to do is store the cropped and resized image in the MyWhatsit object and dismiss the image picker controller.

detailItem?.image = image
imageView.image = image
dismissViewControllerAnimated(true, completion: nil)

The first line stores the cropped image in the new image property of the MyWhatsit object. The second updates the image view in the detail view, so it reflects the same change. Finally, you must dismiss the picker view since the user is done with it.

But what if the user didn’t take a picture or refused to choose a photo from their library? If the user taps the cancel button in the picker, your imagePickerControllerDidCancel(_:) delegate function is called instead. You need to handle that too. Add this function right after your new imagePickerController(_:,didFinishPickingMediaWithInfo:) function:

func imagePickerControllerDidCancel(_: UIImagePickerController!) {
dismissViewControllerAnimated(true, completion: nil)

This method does nothing but dismiss the controller, making no change to your MyWhatsit object.

Testing the Camera

You’re ready to test your image picker interface—for real. The simulator, unfortunately, does not emulate the camera hardware or come with any images in its photo library. To test this app, you’ll need to run it on a real iOS device.

Note Ideally, you have an iPhone, iPod Touch, or similar compact iOS device to test with. If not, you’ll need to read through the “Handling Regular Interfaces” section before your app will work.

Plug in your device and set the project’s scheme to it. Run it. Your app’s interface should look like that in Figure 7-11.


Figure 7-11. Testing the iPhone interface

Tap an item, tap the placeholder image in the detail view, tap Take a Picture, and take a picture. The cropped image should appear in the detail view and again back in the master table, as shown on the right in Figure 7-11.

Congratulations, you’ve added picture taking to your app! Enjoy the moment and have fun with the camera. iPad users, however, aren’t feeling the love. Let’s see whether we can figure out why.

Handling Regular Interfaces

Regular iOS devices, and by “regular” I mean ones you can’t stuff in your jeans pocket, have a lot more screen real estate. They take advantage of that with some alternative interface techniques. On compact devices (iPhone, iPod, and so on) almost all views controllers are presented full-screen—your master table view, your detail view, the alert sheet, and the image picker all take over the entire screen. You just move from one screen to another.

On regular devices (like the iPad) there are popover windows, form sheets, split view controllers, and other techniques that don’t consume the entire screen. They show multiple views simultaneously or overlay a smaller interface on top of the existing one.

So, how do you know which interface style to use or which one will be used? I’ll talk a lot more about this in Chapter 12, but here’s the short lesson: every view controller has a modalPresentationStyle property that hints to iOS how that view controller would like to be presented, size permitting. You can set this property, but iOS might choose to ignore it on compact devices.

Run your app on an iPad and try to tap the image view. Nothing seems to happen. Actually, something disastrous happened; your app just crashed. Back in Xcode, look at the console pane (at the bottom of the workspace window). You’ll see a message like this one:

2014-08-12 15:24:47.429 MyStuff[494:211871] *** Terminating app due to uncaught exception 'NSGenericException', reason: 'UIPopoverPresentationController (<_UIAlertControllerActionSheetRegularPresentationController: 0x15683b00>) should have a non-nil sourceView or barButtonItem set before the presentation occurs.'


Let me explain. The action sheet has a preferred presentation style of “popover.” On an iPhone, there’s not enough room, so iOS ignores the style and presents the action sheet as an overlay (see Figure 7-11). On the iPad, a popover is a much tidier interface—much nicer than consuming the entire screen just to show three buttons—and iOS presents the alert in a popover.

And here’s the rub. A popover requires some additional information. At a minimum, you have to tell the popover where the focus of the interface is so it can be positioned intelligently. Find your choosePicture(_:) function in the DetailViewController.swift file and insert this new code, shown in bold:

alert.addAction(UIAlertAction(title: "Cancel",
style: .Cancel,
handler: nil))
if let popover = alert.popoverPresentationController {
popover.sourceView = imageView
popover.sourceRect = imageView.bounds
popover.permittedArrowDirections = ( .Up | .Down )
presentViewController(alert, animated: true, completion: nil)

The popoverPresentationController property returns a UIPopoverPresentationController if (and only if) that controller is, or will, be presented in a popover. This object does exactly what you think it would do—manages the appearance of the popover. At a minimum, you must tell the presentation controller either the view rectangle or the bar button item the popover should appear next to. If you’re setting a rectangle, you set the rectangle’s coordinates (sourceRect) along with the view object (sourceView) those coordinates are in. In this app, you use imageView and imageView.bounds, which defines the frame of imageView. (You could just as easily have used view and imageView.frame, which also defines the frame of imageView.)

UIPopoverPresentationController has lots of optional properties. A particularly useful one is permittedArrowDirections. Here you set that to (.Up|.Down), so the popover always tries to appear above, or below, the image view. Run your app again on an iPad and check out the results, shown on the left in Figure 7-12.


Figure 7-12. Regular alert controller in a popover

Your alert now appears in a popover with two buttons. Notice that the third action (Cancel) isn’t shown. Tapping anywhere outside the popover dismisses it. This makes a Cancel button redundant. The alert controller knows this and omitted it.

Choosing to take a picture presents a full-screen camera interface. That’s perfect. (Apple’s human interface guidelines recommend always presenting the camera interface full-screen.)

The photo library picker (shown on the right in figure 7-12), however, leaves something to be desired. Taking over the entire screen is heavy handed, and it supports only portrait orientation—awkward to say the least. I think it can be improved; what do you think? Find yourpresentImagePicker(_:) function and change it to this (new code in bold):

func presentImagePicker(source: UIImagePickerControllerSourceType) {
let picker = UIImagePickerController()
picker.sourceType = source
picker.mediaTypes = [kUTTypeImage as NSString]
picker.delegate = self
if source == .PhotoLibrary {
picker.modalPresentationStyle = .Popover
if let popover = picker.popoverPresentationController {
popover.sourceView = imageView
popover.sourceRect = imageView.bounds
presentViewController(picker, animated: true, completion: nil)

If you’re going to present the library picker interface, change its preferred presentation style to UIModalPresentationStyle.Popover. If your recommended style isn’t ignored by iOS (which it will be on compact devices), get the popover presentation controller—exactly as you did for the alert—and configure it. Now the library picker interface is much more iPadish, as shown in Figure 7-13.


Figure 7-13. Presenting the photo picker interface in a popover

There are even more ways to customize the presentation of view controllers, but I’ll save that for Chapter 12.

Sticky Keyboards

One quirk of your app, if you haven’t noticed, is the sticky keyboard. No, I’m not talking about the kind you get from eating chocolate while programming. I’m talking about the virtual keyboard in iOS. Figure 7-14 shows the virtual keyboard that appears when you tap in a text field.


Figure 7-14. iOS’s virtual keyboard

The problem is that, once summoned, it won’t go away. It hangs around, covering up your image view and generally being annoying. This has been a “feature” of iOS from its beginning, and it’s something you must deal with if it’s a problem for your app.

Now I’m sure you’ve noticed that many other apps you use don’t have this problem. Tapping outside of a text field makes the keyboard go away again. The authors of those apps intercept taps outside of the text field and dismiss the keyboard. There have been a wide variety of solutions to this problem, and you’ll find many of them floating around the Internet. I’m going to show you a particularly simple one that will take only a minute to add to your app.

The “trick” is to catch touch events outside any of the text field objects and translate those events into an action that will retract the keyboard. Start with the second part first: create an action to retract the keyboard. In your DetailViewController.swift file, add the following method to your file:

@IBAction func dismissKeyboard(_: AnyObject!) {

This simple method calls the endEditing(_:) function on the root view of your interface. The endEditing(_:) function is ready-built to solve this problem; it searches through the view’s subviews looking for an editable object that’s currently being edited. If it finds one, it asks the object to resign its first responder status, ending the editing session, and retracting the keyboard.

Tip The single value passed to the endEditing(_:) function is the force parameter. If true, it forces the view to end editing, even if it doesn’t want to. Passing false lets the view decide and might not end the editing session. I elected to be polite and let the view decide.

Now you’re going to add another tap gesture recognizer. In the Main.storyboard file, find the tap gesture recognizer in the object library. Drag one into your interface and drop it into the root view object, by dropping it either into the empty space in the interface, as shown in Figure 7-15, or directly into the root view object in the outline.


Figure 7-15. Adding a tap gesture recognizer to the root view

Control+right-click the new gesture recognizer, drag it to the Detail View Controller, and connect it to the new dismissKeyboard: action. (If you can’t figure out which gesture recognizer object belongs to the root view, use the connections inspector, as shown in Figure 7-8, in the section “Connecting a Choose Image Action.”) Now any tap that occurs outside a specific subview will pass those touch events to the root view, dismissing the keyboard. If you’re not sure why that happens, review the section “Hit Testing” in Chapter 4.

Give it a try. Run your app, tap inside a text field, and then tap outside the text field. You should see the keyboard appear and then disappear.

This happens anywhere you tap, except in the image view. That’s easy to fix. Find the point in the choosePicture(_:) function where the app intends to present an interface and add this one bold line of code:

let hasPhotos = UIImagePickerController.isSourceTypeAvailable(.PhotoLibrary)

This will cause the keyboard to retract when the user taps the image view to change it. Remember that in hit testing, it’s the most specific view object that gets the touch events. Since the image view object receives touch events, those events won’t make their way to the root view.

Advanced Camera Techniques

I’m sure you’re excited to add camera and photo library features to your app. But there are more features to UIImagePickerController than you’ve explored in this chapter. There are properties to adjust the flash and capture modes, show or hide the camera controls, add your own custom views to the interface, and programmatically take a picture or start recording a movie. Check out the details in the UIImagePickerController documentation.

If your goal, however, is to create the next Hipstamatic or Instagram, the UIImagePickerController isn’t what you want; you want the low-level camera controls. You’ll find that kind of control in the AVCaptureDevice object. That object represents a single image capture device (aka a camera) and gives you excruciatingly precise control over every aspect of it, from controlling the focus to setting the white balance of the exposure.

This is part of the much larger AV Foundation framework, which also encompasses video capture, video playback, audio recording, and audio playback. You’ll explore some parts of this framework later in this book. Some of its features are object-oriented, while others are C functions.

The advantage of using a class like UIImagePickerController is that so many of the picture-taking details are taken care of for you. But it also constrains your app’s functionality and design. The lower-level classes and functions open up a world of design and interface possibilities but require that you deal with those details yourself. To learn more, start with the AV Foundation Programming Guide you’ll find in Xcode’s Documentation and API Reference.


Adding picture taking to your MyStuff app spiffed it up considerably and made it much more exciting to use. You also learned a little about presenting view controllers and manipulating images. You now know how to export an image to the user’s camera roll, add tap gesture recognizers to an existing view, and get that pesky keyboard out of the way. You’re also getting comfortable with outlets, connections, and delegates; in other words, you’re turning into an iOS developer!

Throughout the past few chapters, I’ve constantly referred to view, controller, and data model objects. The next chapter is going to take another short recess from development to explain what that means and explore an important design pattern.


If there’s no camera or photo library, it would be nice to tell the user that, rather than just ignoring them. In the Shorty app, you put up an alert when a web page couldn’t be loaded for some reason. Use the same technique to present a dialog if neither the camera nor the photo library picker interface is available.

Also consider how to test this code. In the devices you’re likely to own, and in the simulator, one of those interfaces is always going to be available. You can find a modified MyStuff project, with comments, in the MyStuff E1 project folder for this chapter.