Chapter 2. Media capture with AV Foundation

This chapter covers

· Introducing media capture in AV Foundation

· How video frames flow through the AV Foundation components to a preview layer

· Configuring cameras and toggling device features

· Implementing autofocus and tap-to-focus

· Capturing still images

· Handling UI rotation

To be able to scan barcodes with the iPhone camera, you need to understand two things: AV Foundation’s media capture functionality and its metadata detector.

Most iOS developers have little reason to familiarize themselves with AV Foundation. The usual kinds of apps have no need to capture audio or video. Even less often do developers need to manipulate or compose media. But this knowledge is a requirement for the barcode scanning this chapter is devoted to, so we’ll run through a tutorial on AV Foundation and its components pertinent to media capture.

Chapter 3 will build on this foundation and add the actual barcode scanning via AV Foundation’s metadata detector.

2.1. Introducing AV Foundation

AV Foundation is Apple’s framework for working with audiovisual media. Initially it contained only functions for dealing with audio media, most notably AVAudioPlayer and AVAudioRecorder, which are still available today. The earliest traces of AV Foundation date back to iPhone OS 2.2 and OS X 10.7.

When iOS 4 was released in the summer of 2010, Apple added a plethora of new APIs for handling video media. AV Foundation rests on the lower-level frameworks Core Audio, Core Media, and Core Animation (see figure 2.1). Of these, both Core Audio and Core Media are C-based; Core Animation and AV Foundation provide comfortable Objective-C APIs that make working with them an order of magnitude more convenient.

Figure 2.1. AV Foundation rests on three lower-level frameworks.

These additions for video content found their way into OS X 10.7 (Lion), which was first unveiled at an event Apple titled “Back to the Mac” in the fall of 2010. This suggests that Apple made media handling on the mobile platform a priority but also gave developers on OS X access to the rich media functionality. Why wouldn’t they?

AV Foundation is vast. As of iOS 7 there are 78 public headers. With its richness and multitude of applications, AV Foundation is deserving of a book in its own right, but much of it is outside of what you’ll need for barcode scanning.

Pro Tip

Cmd-click on any class name or constant in your code to jump straight to the header file where it’s defined.

We’re only interested in the parts of AV Foundation that let you access the camera and scan barcodes, so we’ll focus on AVCapture functionality in this chapter. In the next chapter, we’ll add AVCaptureMetadataOutput for barcode recognition.

Dual platform: iOS and OS X

Apple is developing AV Foundation in parallel on OS X and iOS, with only minor differences between the two. For example, on OS X you can find a class to grab video from your desktop display, including mouse clicks. So far Apple hasn’t felt it necessary to provide similar functionality for iOS. This class is visible in the AVCaptureInput.h header file, but it’s marked as not available on iOS.

Apple provides exactly the same headers for AV Foundation on both iOS and OS X, and the NS_AVAILABLE macro is used to mark items available for the individual platforms. If a class is not available, it’s marked as NA; otherwise you’ll see the minimum OS version supporting it.

For example, the AVCaptureSessionPresetPhoto constant is available beginning with iOS 4.0 and OS X 10.7, as you can see in this excerpt from AVCaptureSession.h:

AVF_EXPORT NSString *const AVCaptureSessionPresetPhoto NS_AVAILABLE(10_7, 4_0);

This is the quickest way to determine whether something is available for iOS or OS X, should you ever want to check for a specific constant, class, or method.

2.2. Building a camera app

Imagine you’re tasked with building the next awesome camera app for the iOS App Store. Building it will give you a solid understanding of AV Foundation’s media capture functionality.

Note

This chapter’s sample project works on iOS 6 or higher. Because the iOS simulator doesn’t have any camera devices, you’ll need to run it on a physical iOS device.

Your camera app (see figure 2.2) will have the following features:

Figure 2.2. The finished camera app

· Show a live preview of the camera image

· Support interface rotation as you rotate the device

· Switch between multiple camera devices (if available)

· Ask the user for permission to access the camera (if required)

· Take a picture and save it to the camera roll

· Toggle the torch (video light) on and off

· Select a focus point by tapping

· Switch back to continuous autofocus if subject changes

To build this app, you’ll use classes and methods of AV Foundation that support media capture, whose names are collectively prefixed with AVCapture. Figure 2.3 shows an overview of the basic building blocks at your disposal.

Figure 2.3. AV Foundation components involved in media capture

You’ll start by selecting a camera device, for which you’ll create a device input. This will be added to a capture session. To get a live preview of the camera image, you’ll add a preview layer. Finally, to take pictures, you’ll add a still image output. The AVCaptureSession acts as a central manager that establishes and controls the connections between its inputs and outputs.

The number of moving parts involved in capturing media might seem daunting, but you’ll see how it all fits together as you build the camera app.

2.2.1. AV Foundation setup

Some initial setup is required before you can dive into media capture:

1. Create a new app project from the Single View Application template.

2. Rename the root view controller to DTCameraPreviewController.

3. Link in the AV Foundation framework.

4. Add the AV Foundation framework header import to your precompiled header.

5. Add private instance variables to hold onto references for often-used AV Foundation objects.

Let’s look at each of these steps in detail.

First, create a new app project by selecting File > New Project and choosing the Single View Application template (see figure 2.4). This template has the fewest unnecessary files while still having a storyboard for you to customize.

Figure 2.4. Creating a new single-view application

Next, rename the ViewController class to DTCameraPreviewController, and adjust the class name in the storyboard. Make sure that it still builds and runs.

Pro Tip

The fastest way to rename a class is to select the class name in the source file while it’s open in the editor and then select Edit > Refactor > Rename from the menu bar. This renames the .h and .m files and updates all interface builder files referencing them.

The third step is to link in the AV Foundation framework. In your app target, under Build Phases, add AVFoundation.framework (see figure 2.5). This is a dynamic framework (as you can tell from the yellow toolbox icon) that’s preinstalled on all iOS devices. Adding this in the Link Binary With Libraries build phase allows the linker to resolve references to AV Foundation symbols at link time.

Figure 2.5. Linking the target with AVFoundation.framework

Next you need to add the AV Foundation framework header to your precompiled header (PCH) file. Xcode optimizes the building of the app binary by creating a fast index of the precompiled header contents; during the build, it uses this index to quickly look up system symbols, classes, and methods.

This way, you don’t have to repeat the same imports for system headers in all the source files where you make use of the symbols. There’s less to type, your code is shorter, and builds go much faster ... isn’t that great?

Add the import for the AV Foundation header to your Camera-Prefix.pch, as follows:

#ifdef __OBJC__
   #import <UIKit/UIKit.h>
   #import <Foundation/Foundation.h>
   #import <AVFoundation/AVFoundation.h>
#endif

You can also safely remove the UIKit and Foundation imports from class headers that you create via Xcode templates. Those imports are redundant, and besides cluttering up your source files, they slow down your builds, because Xcode has to compile these headers every time it encounters an import for them.

Finally, you need to add several instance variables to DTCameraPreviewController to give you easy access to the AV Foundation instances you’re currently using. Put them in the implementation file to make them private. Classes outside of the preview controller never need to access these variables, and this way of hiding them from the outside world makes for simpler and cleaner headers. You can prefix them with an underscore to visually mark them as internal:
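The book’s listing isn’t reproduced in this excerpt. A minimal sketch, assuming the objects used later in this chapter (the exact names are illustrative), might look like this:

@implementation DTCameraPreviewController
{
   AVCaptureSession *_captureSession;        // central manager of the capture
   AVCaptureDevice *_camera;                 // currently selected camera
   AVCaptureDeviceInput *_videoInput;        // input wrapping the camera
   AVCaptureStillImageOutput *_imageOutput;  // output for taking still images
}

Further instance variables (such as _videoPreview for the live preview view) will join these as the chapter progresses.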

With this setup in place, you’re ready to dive into media capture with AV Foundation.

2.2.2. Building the camera UI

Now that you’ve created a single-view app, you have an empty Main.storyboard file. You next need to set up a few basic user interface (UI) elements that will allow you to interact with the example camera app. The UI calls for a camera preview that covers the entire display. Layered on top of it, there should be a dimmed bar that contains three buttons: Switch Cam (to switch cameras), Snap! (to take a picture), and Torch (to toggle the torch on and off).

You haven’t created the specialized DTVideoPreviewView yet, so leave the base view empty for now. An empty class string for the base view means that it defaults to UIView. The shaded bar at the bottom is 60 pixels high, and it has a black background color with 20% alpha. This creates a good contrast between the button text and the background, so that the user can more easily recognize the buttons.

Add three buttons to the shading bar and name them Switch Cam, Snap!, and Torch, from left to right (see figure 2.6). Add sufficient autolayout constraints to the buttons and the shaded bar to keep the bottom of the bar aligned with the bottom of the view and all buttons in their relative places. The UI layout should survive device rotation later in this chapter.

Figure 2.6. Camera UI storyboard

Connect the three buttons to one action and one outlet each.

It’s good practice to do a quick test of the actions after connecting them like this. Add an NSLog statement for each action handler and launch the app:

- (IBAction)snap:(UIButton *)sender {
   NSLog(@"Snap!");
}

- (IBAction)switchCam:(UIButton *)sender {
   NSLog(@"Switch Cam");
}

- (IBAction)toggleTorch:(UIButton *)sender {
   NSLog(@"Torch");
}

If everything is connected correctly, then tapping the individual buttons will log the corresponding text.

2.2.3. Selecting capture devices

The AVCaptureDevice class provides class methods for retrieving media capture devices for a given media type. You specify the media type with the AVMediaTypeVideo or AVMediaTypeAudio constants. Other media types are also defined in the AVMediaFormat.h header, but they aren’t used for media capture.

Note

Never rely on your assumptions about the capture hardware available for specific devices. Use the methods to query the system for what devices are actually available.

Quickly adding action and outlet connections

The fastest way to create an IBAction and IBOutlet for UI elements in Interface Builder is by enabling the assistant editor. In automatic mode (while showing a storyboard in the left pane) the assistant editor shows the header of the matching view controller class in the right pane. Ctrl-drag the UI element from the left canvas to the right editor, and choose whether to create an action or an outlet as shown.

Ctrl-drag to create outlet or action

Current iOS devices provide capture devices for video and audio media. The iPhone 5, for example, provides an AVCaptureDevice for the front-facing camera, one for the back-facing camera, and one for the microphone. For the camera app, we’ll use only the video capture hardware.

Open the DTCameraPreviewController.m implementation file. This is where you’ll add most of the code for interacting with AV Foundation. You need to add a _setupCamera method to set up the media capture stack. This will contain code for initializing various components of AV Foundation for media capture.

First, you need code to retrieve the default capture device for the AVMediaTypeVideo media type. Usually this will be a camera pointing away from the user. Add this code to _setupCamera:

_camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

The preceding code will retrieve the AVCaptureDevice, which you can think of as the camera hardware itself. In order to use it with your system, you have to connect it somehow, like you’d connect a physical camera via a USB cable. The job of the cable is taken on by AVCaptureDeviceInput (see figure 2.7).

Figure 2.7. The capture device is plugged into the device input.

To create a device input for the default camera, you initialize it with the device. The following code should also be added to _setupCamera:
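The listing is omitted in this excerpt; a sketch that matches the error handling described next (using the ivar names sketched earlier) could be:

NSError *error;
_videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:_camera
                                                      error:&error];

if (!_videoInput) {
   // could not connect the "cable" to the camera; abort the setup
   NSLog(@"Error connecting video input: %@", [error localizedDescription]);
   return;
}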

Plugging device input (the cable) into the device (the camera) might not always work, as evidenced by the existence of an error parameter. On OS X this fails if an app tries to access the iSight camera while another app is already recording video from it. On iOS devices, such a scenario is extremely unlikely, because iOS lets only the foreground app access media devices. Still, it’s safer to deal with the error case. If the connection fails, the result is nil and the error variable will be filled with the reason for the failure. The preceding example simply aborts the setup, but in a production app you might want to inform the user about the failure, possibly providing a way to retry later.

There’s no other (publicly visible) link between the capture device and the device input besides passing the camera in the -initWithDevice: method. Calling this method causes the device input class to set up the available ports. You can think of ports as strands in the metaphorical USB cable. Each port can transport a single stream of media data. On the iPhone 5, one input port provides an AVMediaTypeVideo stream and another provides an AVMediaTypeMetadata stream, regardless of which camera device you inspect.

In practice, you’ll probably never need to deal with the ports individually. The system hands you a device that it knows supports the media type you’re interested in, so you can simply rely on that.

Pro Tip

Code defensively. If a method provides an NSError ** output parameter, check the result and handle the potential problem gracefully.

2.2.4. Media capture session

The AVCaptureSession class is the central manager for a media capture session. Typically you only need a single one. It has inputs and outputs to plug in devices, and it takes care of connecting compatible inputs and outputs (see figure 2.8).

Figure 2.8. The capture session manages everything.

Some input or output objects might not be suitable for a session, so you have to ask it for permission to plug in the object with the -canAddInput: method. Add this code to _setupCamera:
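A sketch of this step, assuming the _captureSession ivar from earlier:

_captureSession = [[AVCaptureSession alloc] init];

if ([_captureSession canAddInput:_videoInput]) {
   [_captureSession addInput:_videoInput];
}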

When you add any input or output to a capture session, it checks which connections are reasonable and establishes these as instances of AVCaptureConnection. It would make little sense, for example, to connect an output writing a video file with the metadata stream from a camera, so that connection wouldn’t be made.

For the most part, you can rely on these automatic connections on iOS. But should you feel the need to establish these manually, AVCaptureSession provides the -addConnection: and -removeConnection: methods.

Any AVCaptureConnection has multiple inputPorts and either a single output or a videoPreviewLayer. An output and a preview layer can never be connected at the same time.

So far, there’s no video preview, nor is there any output capture device. That also means there are no connections yet in the session.

2.2.5. Showing live video preview

Apple provides the AVCaptureVideoPreviewLayer, which taps into the internal video stream from the camera and displays a high-fidelity video preview. Because it’s a CALayer subclass, the preview layer is the one item that AV Foundation gets from Core Animation. To make the preview layer play nicely with the rest of the UIView hierarchy, you can wrap it in a UIView of its own. This lets the preview layer work much better with autoresizing masks and autolayout.

To set this up, create a new DTVideoPreviewView class deriving from UIView. In the header, define a property allowing access to the video preview layer:
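The header isn’t shown in this excerpt; a plausible version is:

#import <AVFoundation/AVFoundation.h>

@interface DTVideoPreviewView : UIView

// the layer backing this view, typecast for convenience
@property (nonatomic, readonly) AVCaptureVideoPreviewLayer *previewLayer;

@end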

The implementation of this class sets a black background and specifies an autoresizing mask so that both dimensions follow the superview’s lead. The +layerClass method is overwritten to return an AVCaptureVideoPreviewLayer. This makes sure that the layer class backing this view will be a video preview layer instead of the default CALayer. The -previewLayer method simply passes the reference on, typecast for future convenience:
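The implementation isn’t reproduced here; a sketch covering the behavior just described (the _commonSetup name comes from the following paragraph) could look like this:

@implementation DTVideoPreviewView

- (void)_commonSetup {
   self.backgroundColor = [UIColor blackColor];
   self.autoresizingMask = UIViewAutoresizingFlexibleWidth |
                           UIViewAutoresizingFlexibleHeight;
}

- (instancetype)initWithFrame:(CGRect)frame {
   self = [super initWithFrame:frame];

   if (self) {
      [self _commonSetup];
   }

   return self;
}

- (void)awakeFromNib {
   [super awakeFromNib];

   [self _commonSetup];
}

+ (Class)layerClass {
   // back this view with a video preview layer instead of a plain CALayer
   return [AVCaptureVideoPreviewLayer class];
}

- (AVCaptureVideoPreviewLayer *)previewLayer {
   return (AVCaptureVideoPreviewLayer *)self.layer;
}

@end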

Having DTVideoPreviewView in your project lets you add the live video preview wherever you can put a view in Interface Builder. Note that the _commonSetup method is executed regardless of whether this object is instantiated from code via initWithFrame: or from a NIB.

Now open your storyboard and set the class of the DTCameraPreviewController root view to DTVideoPreviewView (see figure 2.9).

Figure 2.9. Changing the root view to be a video preview

In DTCameraPreviewController, add the following -viewDidLoad method, which sets a new instance variable (_videoPreview) to a DTVideoPreviewView reference after the view hierarchy is loaded from the storyboard:
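A sketch of this method, assuming the root view’s class was set as in figure 2.9:

- (void)viewDidLoad {
   [super viewDidLoad];

   NSAssert([self.view isKindOfClass:[DTVideoPreviewView class]],
            @"Root view must be a DTVideoPreviewView");

   _videoPreview = (DTVideoPreviewView *)self.view;

   // later sections replace this call with _setupCameraAfterCheckingAuthorization
   [self _setupCamera];
}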

With this in place, you can now add the final piece to _setupCamera to connect the preview layer to the capture session:

_videoPreview.previewLayer.session = _captureSession;

Now you have the first AVCaptureConnection established, connecting the video port of the AVCaptureDeviceInput with the preview layer (see figure 2.10).

Figure 2.10. The first connection between a video input port and the preview layer

To start the flow of data, you call -startRunning on the session when the view controller is presented. To make your app a good iOS citizen, add a corresponding -stopRunning when the view controller is dismissed:
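One way to tie this to the view controller’s lifecycle, for example in -viewWillAppear: and -viewWillDisappear:, as a sketch:

- (void)viewWillAppear:(BOOL)animated {
   [super viewWillAppear:animated];

   // begin the flow of video frames
   [_captureSession startRunning];
}

- (void)viewWillDisappear:(BOOL)animated {
   [super viewWillDisappear:animated];

   // stop capturing while the view controller is not visible
   [_captureSession stopRunning];
}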

Run the app on your iOS device, and you should see a live video feed ... unless you’re testing on a device that was sold in China. There the law requires that the user consent to any app accessing the camera.

2.2.6. Authorizing camera access (or not)

There are some situations where camera access might require user authorization, or access to the camera might have been disabled through device restrictions. You have to take charge of the authorization process as part of your app’s user experience. Failure to do so might leave your app unusable and only display an empty rectangle where the user might expect the video preview.

Disabling camera access

The most common way to disable access to the camera is via Settings > General > Restrictions > Enable Restrictions.

For configuring multiple iOS devices in a school or business environment, there’s also the Apple Configurator utility (https://itunes.apple.com/us/app/apple-configurator/id434433123). This tool builds a profile specifying available device features for “supervised” devices. This has the same restrictions available on a single iOS device, along with a plethora of additional configuration options.

Authorization for audio input is necessary on all iOS devices. Up until iOS 7, authorization for access to video input was only required for devices sold in China. Apple figured that it would be beneficial for users’ privacy to extend this requirement, so beginning with iOS 8, apps require user authorization to access the video camera as well. The authorization request dialog looks the same as the requests for accessing the user’s location or microphone (see figure 2.11).

Figure 2.11. Camera+ asking a Chinese iOS 7 user for camera authorization

You don’t have to do anything about this—the dialog will be presented in any case. But because the user might disallow camera access, you should be aware that this will probably make some functionality in your app impossible. You should disable functionality that won’t work without the camera and inform the user that it’s their choice. If you don’t disable camera functionality when camera access is denied, your code will progress without a problem but all video images will be black.

Add a new method that’s called from -viewDidLoad instead of -setupCamera. This checks the authorization status first, and only sets up the camera once the authorization status has been determined:
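The full listing isn’t reproduced in this excerpt; a sketch based on the description that follows, using the -_setupCameraAfterCheckingAuthorization and -_informUserAboutCamNotAuthorized names mentioned in the text, could be:

- (void)_setupCameraAfterCheckingAuthorization {
   if (![AVCaptureDevice respondsToSelector:
         @selector(authorizationStatusForMediaType:)]) {
      // running on iOS 6: no authorization API, set up right away
      [self _setupCamera];
      return;
   }

   AVAuthorizationStatus status =
      [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];

   switch (status) {
      case AVAuthorizationStatusAuthorized: {
         [self _setupCamera];
         break;
      }

      case AVAuthorizationStatusNotDetermined: {
         [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                                   completionHandler:^(BOOL granted) {
            // the completion handler arrives on a background queue
            dispatch_async(dispatch_get_main_queue(), ^{
               if (granted) {
                  [self _setupCamera];
                  // if the view is already on screen, you may also
                  // need to start the session here
               } else {
                  [self _informUserAboutCamNotAuthorized];
               }
            });
         }];
         break;
      }

      case AVAuthorizationStatusRestricted:
      case AVAuthorizationStatusDenied: {
         [self _informUserAboutCamNotAuthorized];
         break;
      }
   }
}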

This code also works on iOS 6. The AVCaptureDevice class doesn’t have the +authorizationStatusForMediaType: selector in that version, so you can skip right to camera setup in that case.

If the authorization status is AVAuthorizationStatusNotDetermined, you request access for the video media type, and only if the completion handler comes back with a positive result do you set up the camera.

If access is restricted or denied, or the completion handler comes back negative, you need to inform the user that camera functionality is unavailable as a result. An -_informUserAboutCamNotAuthorized helper method displays an alert view to that effect.

Note the dispatch_async to the main thread. This is needed because the completion handler will be called on a background queue, but you want the methods contained in the dispatch block to be executed on the main thread.

If the authorization status is restricted or denied, there’s nothing you can do about that from inside your app. All you can do is inform the user that this app requires authorization to be granted from the Settings app. If you call the access request method with the status being restricted or denied, the completion handler will be executed right away with granted being NO and no authorization dialog being shown. The dialog asking for permission only appears if the current authorization state is undetermined.

2.2.7. Toggling the video light

Video capture devices have a number of features that might vary quite a bit between devices and device generations. In this section you’ll add a video light, a.k.a. “torch,” to your camera. A torch isn’t a typical feature for a still image camera, but adding one will demonstrate how to query available features from a capture device. Also, in chapter 3 you’ll find the light useful for scanning barcodes in dimly lit places.

The AVCaptureDevice has a -hasTorch method that you can query, and you can use this to hide the Torch button on cameras that have no LED flashlight, like the user-facing camera. The following method adjusts the visibility of the Torch button based on the currently selected camera. Note the use of the Torch button outlet that you connected earlier:
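A minimal sketch, assuming the Torch button outlet is named toggleTorchButton (the outlet name is an assumption):

- (void)_setupTorchToggleButton {
   if ([_camera hasTorch]) {
      self.toggleTorchButton.hidden = NO;
   } else {
      self.toggleTorchButton.hidden = YES;
   }
}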

In section 2.2.2 you added a dummy implementation for the three actions linked with three buttons in Interface Builder. Now you need to add the actual code to toggle the torch feature. This method demonstrates how to query for the availability of a device feature as well as how to lock it while making configuration changes:
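The book’s listing is omitted here; a sketch of the torch toggle along the lines described might be:

- (IBAction)toggleTorch:(UIButton *)sender {
   if (![_camera hasTorch]) {
      return;   // nothing to toggle on this camera
   }

   NSError *error;

   if (![_camera lockForConfiguration:&error]) {
      // locking failed; don't touch the configuration
      NSLog(@"Unable to lock camera for configuration: %@",
            [error localizedDescription]);
      return;
   }

   if (_camera.torchMode == AVCaptureTorchModeOn) {
      _camera.torchMode = AVCaptureTorchModeOff;
   } else if ([_camera isTorchModeSupported:AVCaptureTorchModeOn]) {
      _camera.torchMode = AVCaptureTorchModeOn;
   }

   [_camera unlockForConfiguration];
}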

Your camera app now doubles as a flashlight app! If you tap the Torch button, you turn on the video light. Tap it again to turn it off.

Beware of AV exceptions!

You must lock a capture device before and unlock it after making configuration changes to it. Trying to change the configuration without locking will cause AV Foundation to trigger an exception.

Trying to set an unsupported value on a property of the capture device is equally bad. For every configuration property foo, there’s a corresponding -isFooSupported method, which you can find in the SDK documentation.

Exceptions are bad for the user experience: they terminate your app.

2.2.8. Taking pictures to the camera roll

Now you get to take pictures! Let’s fill in the action method for the Snap! button.

To take pictures, you need to add an AVCaptureStillImageOutput class to your capture session. This is the last piece to the video capture puzzle (see figure 2.12).

Figure 2.12. The completed AV capture puzzle

In figure 2.12 the media streams from the capture device (top left) and flows through the device input into the capture session. The session manages the capture connections between inputs and outputs. At the bottom, the media data flows into the still image output as well as the video preview layer.

The following code shows the complete _setupCamera implementation containing all the previously explained camera setup steps. It also adds a still image output to the capture session:
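The complete listing isn’t reproduced in this excerpt; a sketch that combines the steps explained so far (ivar names as sketched earlier) could look like this:

- (void)_setupCamera {
   // get the default camera, usually the one facing away from the user
   _camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

   // connect the camera to a device input
   NSError *error;
   _videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:_camera
                                                         error:&error];

   if (!_videoInput) {
      NSLog(@"Error connecting video input: %@", [error localizedDescription]);
      return;
   }

   // create the session and add the input
   _captureSession = [[AVCaptureSession alloc] init];

   if (![_captureSession canAddInput:_videoInput]) {
      NSLog(@"Unable to add video input to capture session");
      return;
   }

   [_captureSession addInput:_videoInput];

   // add a still image output for taking pictures
   _imageOutput = [[AVCaptureStillImageOutput alloc] init];

   if (![_captureSession canAddOutput:_imageOutput]) {
      NSLog(@"Unable to add still image output to capture session");
      return;
   }

   [_captureSession addOutput:_imageOutput];

   // connect the session to the preview layer
   _videoPreview.previewLayer.session = _captureSession;
}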

You need a reference to the current capture connection to take a still image. This helper method finds and returns it:
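A sketch of such a helper, consistent with the description that follows:

- (AVCaptureConnection *)_captureConnection {
   for (AVCaptureConnection *connection in _imageOutput.connections) {
      for (AVCaptureInputPort *port in connection.inputPorts) {
         if ([port.mediaType isEqualToString:AVMediaTypeVideo]) {
            return connection;
         }
      }
   }

   return nil;   // no connection with a video input port found
}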

With this setup in place, you can fill in the -snap: action. This is the second method you need to replace in the previous dummy implementation from section 2.2.2:
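The book’s listing isn’t reproduced here; a sketch of the action, matching the explanation below, might be:

- (IBAction)snap:(UIButton *)sender {
   AVCaptureConnection *videoConnection = [self _captureConnection];

   if (!videoConnection) {
      NSLog(@"Error: No video connection found on still image output");
      return;
   }

   [_imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
      completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
         if (error) {
            NSLog(@"Error capturing still image: %@",
                  [error localizedDescription]);
            return;
         }

         // convert the sample buffer to JPEG data and save to the camera roll
         NSData *imageData = [AVCaptureStillImageOutput
            jpegStillImageNSDataRepresentation:imageSampleBuffer];
         UIImage *image = [UIImage imageWithData:imageData];
         UIImageWriteToSavedPhotosAlbum(image, nil, NULL, NULL);
      }];
}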

For capturing still images, you need a reference to the AVCaptureConnection that’s feeding your still image output. AVCaptureSession doesn’t provide a method to get connections, so you have to iterate through the image output connections, as demonstrated by the _captureConnection helper method shown earlier. Capture connections have multiple input ports, so you’re looking for the one that carries the video stream.

The actual photography occurs when you call the awkwardly named -captureStillImageAsynchronouslyFromConnection:completionHandler:. After a few milliseconds, the completion handler will be called, providing a reference to a sample buffer.

Camera sound

The still image capturing method also makes a camera-shutter sound. Apparently there have been incidents of people sticking their camera apps where they don’t belong, so some governments mandate that you shouldn’t be able to disable the camera sound on mobile phones. If you can, you should abide by these rules, but if you absolutely need to eliminate the sound, there’s a way to do that: you can employ AVCaptureVideoDataOutput, which lets you grab individual video frame sample buffers and then convert them to images.

The CM prefix of CMSampleBufferRef tells you that it’s coming from the depths of Core Media, which AV Foundation rests on. This sample buffer contains the actual pixels of a frame of the video stream.

You don’t need to worry about converting this buffer data into a JPEG image—that’s taken care of by the +jpegStillImageNSDataRepresentation: class method of AVCaptureStillImageOutput. This gives you the actual JPEG data, which can easily be transmuted into a UIImage for saving to the user’s camera roll.

Note

UIImageWriteToSavedPhotosAlbum asks for user permission the first time it’s called, much like camera authorization. The additional function parameters let you specify a callback method to handle the user’s response. If you were to make a real-life photo app, you’d also have to deal with the user taking a picture but then denying access to the camera roll.

2.2.9. Supporting rotation of device and UI

If you run the camera app at this point, you should be able to take pictures. The default setting for iPhones is to support both landscape orientations and the one portrait orientation where the home button is at the bottom. But if you rotate to landscape, you’ll find that neither the preview image nor the photos you take rotate with the interface orientation as you would expect. To fix this, you need to update the video connection orientation whenever the phone is rotated.

In real life, you’ll probably want to keep the system default of UIInterfaceOrientationMaskAllButUpsideDown for iPhone and iPod touch, and UIInterfaceOrientationMaskAll for iPad. For demonstration purposes, let’s override -supportedInterfaceOrientations to allow all orientations, even upside-down portrait:

- (NSUInteger)supportedInterfaceOrientations {
   return UIInterfaceOrientationMaskAll;
}

AV capture connections allow you to specify a video orientation, and you’ll want to set this in sync with the current view controller’s interface orientation. Because interface orientation and video orientation are two different enums, you’ll need to implement a function to convert between them. In the sample code, you’ll find this function in DTAVFoundationFunctions.m:

AVCaptureVideoOrientation DTAVCaptureVideoOrientationForUIInterfaceOrientation(
   UIInterfaceOrientation interfaceOrientation) {
   switch (interfaceOrientation) {
      case UIInterfaceOrientationLandscapeLeft:
         return AVCaptureVideoOrientationLandscapeLeft;

      case UIInterfaceOrientationLandscapeRight:
         return AVCaptureVideoOrientationLandscapeRight;

      default:
      case UIInterfaceOrientationPortrait:
         return AVCaptureVideoOrientationPortrait;

      case UIInterfaceOrientationPortraitUpsideDown:
         return AVCaptureVideoOrientationPortraitUpsideDown;
   }
}

The following method iterates through the video connections and updates the videoOrientation where relevant, using the preceding helper function to determine the correct video orientation for the given interface orientation parameter. Grouping these updates together in a helper method allows you to call it where necessary, like before a rotation and after switching cameras:
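The listing is omitted in this excerpt; a sketch matching the description that follows (the method name is taken from the later section) could be:

- (void)_updateConnectionsForInterfaceOrientation:
            (UIInterfaceOrientation)interfaceOrientation {
   AVCaptureVideoOrientation videoOrientation =
      DTAVCaptureVideoOrientationForUIInterfaceOrientation(interfaceOrientation);

   // update all capture connections of the still image output
   for (AVCaptureConnection *connection in _imageOutput.connections) {
      if ([connection isVideoOrientationSupported]) {
         connection.videoOrientation = videoOrientation;
      }
   }

   // update the preview layer's connection as well
   AVCaptureConnection *previewConnection = _videoPreview.previewLayer.connection;

   if ([previewConnection isVideoOrientationSupported]) {
      previewConnection.videoOrientation = videoOrientation;
   }
}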

The update method in the preceding code iterates over all capture connections of the still image output object to update the video orientation where possible. The same is done for the video preview layer’s capture connection. Note the repeated pattern of inquiring -isVideoOrientationSupported and only making the change if the answer is YES.

Finally, you need to call the orientation update method right before a rotation animation occurs:

- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
                                duration:(NSTimeInterval)duration {
   [super willRotateToInterfaceOrientation:toInterfaceOrientation
                                  duration:duration];

   [self _updateConnectionsForInterfaceOrientation:toInterfaceOrientation];
}

Now you can launch the app to see that the video orientation stays in sync with the interface orientation.

Orientation and performance

Changing the video orientation of a capture connection has no impact on performance. AV Foundation avoids rotating sample buffers and instead adds metadata to the video streams indicating the video orientation. Apple says so in Q&A QA1744 (https://developer.apple.com/library/ios/qa/qa1744/).

The same is true for pictures saved to the camera roll and for images that users upload to websites. The orientation information is stored in an EXIF header in the image file. If you encounter user-uploaded images on the web that are awkwardly rotated by 90 degrees, this is often because the website didn’t properly handle the EXIF orientation value.

2.2.10. Switching between camera devices

You’ll be hard pressed to find a current iOS device that has only a single camera. At the time of writing, only the fifth-generation iPod Touch has a single FaceTime camera. All other devices supported by iOS 6 and up—except for the Apple TV—have two cameras.

The process for switching cameras is similar to the process for configuring devices. You have to call -beginConfiguration up front and end the configuration activities with -commitConfiguration.

Let’s implement a helper method to determine if there is indeed an alternative to the current camera available:
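A sketch of such a helper, consistent with how it is used later in this section:

- (AVCaptureDevice *)_alternativeCamToCurrent {
   if (!_camera) {
      return nil;   // no camera selected yet
   }

   NSArray *allCams = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

   for (AVCaptureDevice *oneCam in allCams) {
      if (oneCam != _camera) {
         return oneCam;   // any camera other than the current one
      }
   }

   return nil;   // no alternative camera found
}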

To inform the user about which camera is currently selected, you can update the text on the camera-switching button accordingly:

- (void)_setupCamSwitchButton {
   AVCaptureDevice *alternativeCam = [self _alternativeCamToCurrent];

   if (alternativeCam) {
      self.switchCamButton.hidden = NO;

      NSString *title;

      switch (alternativeCam.position) {
         case AVCaptureDevicePositionBack:
            title = @"Back";
            break;

         case AVCaptureDevicePositionFront:
            title = @"Front";
            break;

         case AVCaptureDevicePositionUnspecified:
            title = @"Other";
            break;
      }

      [self.switchCamButton setTitle:title forState:UIControlStateNormal];
   } else {
      self.switchCamButton.hidden = YES;
   }
}

The action method for switching cameras is called whenever the user taps on the camera-switching button. This is the third and last of the action methods you need to replace from section 2.2.2:
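The book’s listing isn’t reproduced here; a sketch following the steps described below could look like this:

- (IBAction)switchCam:(UIButton *)sender {
   [_captureSession beginConfiguration];

   _camera = [self _alternativeCamToCurrent];

   // remove all previous inputs from the session
   NSArray *currentInputs = [_captureSession.inputs copy];

   for (AVCaptureDeviceInput *input in currentInputs) {
      [_captureSession removeInput:input];
   }

   // add an input for the alternative camera
   NSError *error;
   _videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:_camera
                                                         error:&error];

   if (!_videoInput) {
      NSLog(@"Error connecting video input: %@", [error localizedDescription]);
      return;
   }

   if ([_captureSession canAddInput:_videoInput]) {
      [_captureSession addInput:_videoInput];
   }

   // the subject-area monitoring setup from section 2.2.11 would also go here

   // the new connections need the current interface orientation
   [self _updateConnectionsForInterfaceOrientation:self.interfaceOrientation];

   [_captureSession commitConfiguration];

   // update the buttons for the new camera's capabilities
   [self _setupCamSwitchButton];
   [self _setupTorchToggleButton];
}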

Just like you had a method for showing the Torch button only when a torch was available, you also have such a method for the camera switch button, albeit one that’s slightly more complex. The -position property of a camera lets you update the button title to read “Back” or “Front” to match the camera position. This is done in the _setupCamSwitchButton helper method shown earlier.

Switching the camera involves removing all previous -inputs from the capture session, adding a new input for the alternative camera, and updating the resulting video connections for the current interface orientation.

Both the _setupCamSwitchButton and _setupTorchToggleButton methods for updating the UI buttons need to be called in multiple places so that the UI always reflects the capabilities of the current camera. The sample code (in DTCameraPreviewController.m) calls these methods in -_setupCameraAfterCheckingAuthorization and -viewWillAppear: in addition to the -switchCam: action. If you aren’t sure where to add these, check the sample code.

Note

If you turn on the torch on a back-facing camera and switch to a front-facing camera, the torch is disabled automatically, because the video light would be facing the wrong direction. There is no flash/torch for the front-facing camera.

2.2.11. Implementing autofocus and tap-to-focus

The last feature of this chapter’s camera app to implement is tap-to-focus coupled with automatically switching back to continuous autofocus if the subject area changes.

The default capturing cameras on iOS devices—cameras pointing away from the user—generally support autofocus. The FaceTime camera—the camera pointing toward the user—doesn’t. There are three autofocus modes:

· AVCaptureFocusModeContinuousAutoFocus

· AVCaptureFocusModeAutoFocus

· AVCaptureFocusModeLocked

The continuous autofocus mode is the default; in this case the camera focuses automatically when needed. The noncontinuous option will focus on the current focus point; once focus has been found, it changes to the locked mode.

To enable subject-area change monitoring, there’s a new method (shown in the following code snippet) to configure the current camera. This method is called from _setupCamera and _switchCam:, and it enables this feature if it’s supported. Enabling subject-area change monitoring is a rare exception to the rule of having to inquire about an ability before using it—in this case, there’s no method available to do that. Monitoring the video stream for significant changes is not a function performed by the video capture hardware but rather is done by the OS itself:
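The listing isn’t reproduced in this excerpt; a sketch of such a configuration method (the _configureCurrentCamera name is an assumption) might be:

- (void)_configureCurrentCamera {
   // there is no -isSubjectAreaChangeMonitoringSupported to query;
   // this monitoring is performed by the OS, not the capture hardware
   if ([_camera lockForConfiguration:nil]) {
      _camera.subjectAreaChangeMonitoringEnabled = YES;
      [_camera unlockForConfiguration];
   }
}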

Once subject-area change monitoring is enabled, iOS sends an AVCaptureDevice-SubjectAreaDidChangeNotification whenever there’s a substantial change of what’s visible to the camera. You can subscribe to this notification in -viewDidLoad, as this is where you already put some setup code previously. At the same time, you can install a tap-gesture recognizer for detecting a tap-to-focus gesture:
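These additions to -viewDidLoad might look like the following sketch (the subjectChanged: and handleTap: selector names are illustrative):

// subscribe to subject-area change notifications
[[NSNotificationCenter defaultCenter]
          addObserver:self
             selector:@selector(subjectChanged:)
                 name:AVCaptureDeviceSubjectAreaDidChangeNotification
               object:nil];

// install a tap gesture recognizer for tap-to-focus
UITapGestureRecognizer *tap =
   [[UITapGestureRecognizer alloc] initWithTarget:self
                                           action:@selector(handleTap:)];
[self.view addGestureRecognizer:tap];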

The code to unsubscribe from the subject-area change notifications goes into the class dealloc method:

- (void)dealloc {
   [[NSNotificationCenter defaultCenter] removeObserver:self];
}

The method that’s called on the notification sets the autofocus mode back to continuous, if supported:
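A sketch of the notification handler, using the subjectChanged: name from the earlier snippet:

- (void)subjectChanged:(NSNotification *)notification {
   if ([_camera isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
      if ([_camera lockForConfiguration:nil]) {
         // reset the focus point of interest to the center of the image
         if ([_camera isFocusPointOfInterestSupported]) {
            _camera.focusPointOfInterest = CGPointMake(0.5, 0.5);
         }

         _camera.focusMode = AVCaptureFocusModeContinuousAutoFocus;
         [_camera unlockForConfiguration];
      }
   }
}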

Note the focusPointOfInterest property on the camera device, which gets set to the point (0.5, 0.5). The reason for this is that the focus point needs to be specified as a percentage rather than in points. The default setting is the center of the video image: 50% of the width and 50% of the height.

You can use the preview layer’s -captureDevicePointOfInterestForPoint: method to convert the tap coordinates from the preview layer’s coordinate system to the capture device’s point of interest:
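A sketch of the tap handler, using the handleTap: name from the earlier snippet:

- (void)handleTap:(UITapGestureRecognizer *)gesture {
   if (gesture.state != UIGestureRecognizerStateRecognized) {
      return;
   }

   // tap-to-focus requires both a focus point and one-shot autofocus
   if (![_camera isFocusPointOfInterestSupported] ||
       ![_camera isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
      return;
   }

   CGPoint locationInPreview = [gesture locationInView:_videoPreview];
   CGPoint locationInCapture = [_videoPreview.previewLayer
      captureDevicePointOfInterestForPoint:locationInPreview];

   if ([_camera lockForConfiguration:nil]) {
      _camera.focusPointOfInterest = locationInCapture;
      _camera.focusMode = AVCaptureFocusModeAutoFocus;
      [_camera unlockForConfiguration];
   }
}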

You don’t need to remove the tap gesture recognizer when switching the current camera. Instead, the tap-handler method checks whether tap-to-focus is possible by looking for the availability of both the focus point and the focus-once-then-lock mode AVCaptureFocusModeAutoFocus.

With this code in place, the camera app launches in continuous autofocus mode. If the user taps on the screen, the default focus point in the center of the screen is moved to the tap location. The device will focus on this point and then lock focus. If the subject area changes—causing the subject-area change notification to be sent—then autofocus is switched back to continuous mode.

Note

The built-in iOS camera app makes a distinction between a normal tap and a long press. Try to emulate this behavior as an advanced exercise.

2.3. Summary

Your camera app now has all the features promised at the start of this chapter. There are many more configuration properties available on AVCaptureDevice that you could use to further enhance your photo app. You could try adding support for the available flash modes, depending on which ones the current camera supports. An advanced exercise would be to capture video instead of still images. For this you’d need to use AVCaptureMovieFileOutput to send video output to a file, and you’d need to add an audio input so that your videos have sound as well.

There are several key takeaways for this chapter:

· Start a capture session with -startRunning. A corresponding -stopRunning doesn’t hurt if the view controller goes away.

· Enclose configuration changes for a running capture session between -beginConfiguration and -commitConfiguration.

· Enclose setting changes for capture devices in -lockForConfiguration: and -unlockForConfiguration.

· Always inquire from the device whether a setting is supported before setting it.

· Camera focus points are specified as percentages and they relate to the capture device’s coordinate system. Use the provided methods (such as -captureDevicePointOfInterestForPoint:) to convert from view coordinates to device coordinates.

You’ve now learned how to fit together the pieces of AV Foundation necessary for capturing media. This gives you a solid basis for adding barcode recognition in chapter 3.