Draw Me a Picture - Learn iOS 8 App Development, Second Edition (2014)

Chapter 11. Draw Me a Picture

You have arrived at a critical point in your mastery of iOS development. You have a good deal of experience adding existing view objects to your app. You’ve had them display your data, you’ve connected them to your custom controller logic, and you’ve customized their look and feel. But you’ve still been limited to the view classes that Apple has written for you. There’s no substitute for creating your own view object—an object that will draw things no one else has imagined.

OK, that’s not entirely true. You have created custom view objects, but in both cases I neglected to explain how they worked. Instead, there was a little note attached that read “Ignore the view behind the curtain; all will be explained in Chapter 11.” Welcome to Chapter 11! In this chapter, you will learn (more) about the following:

· Creating view subclasses

· View geometry

· How and when views are drawn

· Core Graphics

· Bézier paths

· Animation

· Gesture recognizers

· Off-screen drawing

This chapter is going to get a little technical, but I think you’re ready.

Creating a Custom View Class

You create a custom view by subclassing either UIView or UIControl, depending on whether your intent is to create a display object or something that acts like a control, such as a new kind of switch. In this chapter, you’ll be subclassing UIView only.

Caution Do not subclass concrete view classes, such as UIButton or UISwitch, in an attempt to “fiddle” with how they function. That is a recipe for disaster. Their internal workings are not public and often change from one iOS release to the next, meaning your class might stop working in the near future. View classes designed to be subclassed, such as UIControl, are clearly documented, usually with a section in their documentation titled “Subclassing Notes.”

To create your own view class, you need to understand three things.

· The view coordinate system

· User interface update events

· How to draw in a Core Graphics context

Let’s start at the top—literally.

View Coordinates

The device’s screen, windows, and views all have a graphics coordinate system. The coordinate system establishes the position and size of everything you see on the device: the screen, windows, views, images, and shapes. Every view object has its own coordinate system. The origin of a coordinate system is at its upper-left corner and has the coordinates (0,0), as shown in Figure 11-1.


Figure 11-1. Graphics coordinate system

X coordinates increase to the right, and Y coordinates increase downward. The y-axis is upside-down from the Cartesian coordinate system you learned in school or maybe from reading geometry books in your spare time. For computer programs, this arrangement is more convenient; most content “flows” from the upper-left corner, so it’s usually simpler to perform calculations from the upper-left corner than the lower-left corner.
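If you ever need to map a conventional bottom-left (Cartesian) y value into this top-left system—when importing graph data, for instance—the translation is just a flip. Here's a standalone sketch (the flipY function is made up for illustration; it's not a UIKit API):

```swift
import Foundation

// Flip a conventional Cartesian y (origin at bottom-left) into iOS's
// top-left coordinate system, given the height of the drawing area.
func flipY(_ cartesianY: CGFloat, height: CGFloat) -> CGFloat {
    return height - cartesianY
}

let bottom = flipY(0, height: 480)    // 480: the Cartesian origin is at the bottom
let top = flipY(480, height: 480)     // 0: the Cartesian top is iOS's origin
```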

Note If you’ve done any OS X programming, you’ll notice a lot of similarities between iOS and OS X view objects. iOS, however, has no flipped coordinates; from an OS X perspective, they’re always flipped.

There are four key types used to describe coordinates, positions, sizes, and areas in iOS, all described in Table 11-1.

Table 11-1. Coordinate Value Types

CGFloat

The fundamental scalar value. CGFloat is a floating-point type used to express a single coordinate or distance.

CGPoint

A pair of CGFloat values that specify a point (x,y) in a coordinate system.

CGSize

A pair of CGFloat values that describe the dimensions (width,height) of something.

CGRect

The combination of a point (CGPoint) and a size (CGSize) that, together, describe a rectangular area.

Frame and Bounds

View objects have two rectangle (CGRect) properties: bounds and frame. The bounds property describes the coordinate system of the object. All of the view’s graphic content, which includes any subviews, uses this coordinate system. The really important thing to understand is that all drawing of a view’s content is performed by that view, and it’s done using the view’s coordinate system—often referred to as its local coordinates.

Moving the view around (in its superview) does not change the view’s coordinate system. All of the graphics within the view remain the same, relative to the origin (upper-left corner) of that view object. In Figure 11-1, the subview is 160 points wide by 50 points high. Its bounds rectangle is, therefore, ((0,0),(160,50)); it has an origin (x,y) of (0,0) and a size (width,height) of (160,50). When the subview draws itself, it draws within the confines of that rectangle.

The frame property describes the view in the coordinates of its superview. In other words, the frame is the location of a subview in another view—often called its superview coordinates. In Figure 11-1, the origin of the subview is (20,60). The size of the view is (160,50), so its frame is ((20,60),(160,50)). If the view were moved down 10 points, its frame would become ((20,70),(160,50)). Everything drawn by the view would move down 10 points, but it wouldn’t change the bounds of the view or the relative coordinates of what’s drawn inside the view.

The size of the bounds and frame are linked. Changing the size of the frame changes the size of its bounds, and vice versa. If the frame of the subview in Figure 11-1 was 60 points narrower, its frame would become ((20,60),(100,50)). This change would alter its bounds so it was now ((0,0),(100,50)). Similarly, if the bounds were changed from ((0,0),(160,50)) to ((0,0),(100,40)), the frame would automatically change to ((20,60),(100,40)).

Note There are a few exceptions to the “size of the frame always equals the size of the bounds” rule. You’ve already met one of those exceptions: the scroll view. The size of a scroll view’s content (bounds) is controlled by its contentSize property that is independent of its frame size, the portion that appears on the screen. Other exceptions occur when transforms are applied, which I’ll talk about later.

UIView also provides a synthetic center property. This property returns the center point of the view’s frame rectangle. Technically, center is always equal to (frame.midX, frame.midY). If you change the center property, the view’s frame will be moved so it is centered over that point. The center property makes it easy to both move and center subviews, without resizing them. You’ll use this feature later in the chapter.
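You can check these relationships with plain arithmetic. This sketch uses bare CGRect values (no live view objects) to mirror the subview from Figure 11-1:

```swift
import Foundation

// The subview from Figure 11-1: 160×50 points, positioned at (20,60)
// in its superview's coordinates.
let frame = CGRect(origin: CGPoint(x: 20, y: 60),
                   size: CGSize(width: 160, height: 50))

// A view's bounds share the frame's size but always start at (0,0).
let bounds = CGRect(origin: CGPoint(x: 0, y: 0), size: frame.size)

// center is the midpoint of the frame: (frame.midX, frame.midY).
let center = CGPoint(x: frame.origin.x + frame.size.width / 2,
                     y: frame.origin.y + frame.size.height / 2)
// center is (100, 85)
```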

Converting Between Coordinate Systems

It will probably take you a while—it took me a long time—to wrap your head around the different coordinate systems and learn when to use bounds, when to use frame, and when to translate between them. Here are the quick-and-dirty rules to remember:

· The bounds are a view’s inner coordinates: the coordinates of everything inside that view.

· The frame is a view’s outer coordinates: the position of that view in its superview.

Should you need them, there are a number of functions that translate between the coordinate systems of views. The four most common are the UIView functions listed in Table 11-2. As an example, let’s say you have the coordinates of the lower-right corner of the subview in Figure 11-1 in its local coordinates, (160,50). If you want to know the coordinates of that same point in the superview’s coordinate system, call the function superview.convertPoint(CGPoint(x: 160, y: 50), fromView: subview). That statement will return the point (180,110), which is the same point but in the superview’s coordinate system.

Table 11-2. Coordinate Translation Functions in UIView

UIView function

Description

convertPoint(_:, toView:)

Converts a point in the view’s local coordinate system to the same point in the local coordinates of another view

convertPoint(_:, fromView:)

Converts a point in another view’s coordinates into this view’s local coordinate system

convertRect(_:, toView:)

Converts a rectangle in the view’s local coordinate system to the same rectangle in the local coordinates of another view

convertRect(_:, fromView:)

Converts a rectangle in another view’s coordinates into this view’s local coordinate system
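For an untransformed subview, convertPoint(_:, fromView:) boils down to offsetting the point by the subview's position in its superview. This standalone sketch (convertToSuperview is a made-up helper, not part of UIKit) reproduces the (160,50) to (180,110) example from earlier:

```swift
import Foundation

// Hypothetical stand-in for superview.convertPoint(_:fromView:) when the
// subview has no transform: add the subview's frame origin to the point.
func convertToSuperview(_ point: CGPoint, subviewFrame: CGRect) -> CGPoint {
    return CGPoint(x: point.x + subviewFrame.origin.x,
                   y: point.y + subviewFrame.origin.y)
}

let subviewFrame = CGRect(origin: CGPoint(x: 20, y: 60),
                          size: CGSize(width: 160, height: 50))
let corner = convertToSuperview(CGPoint(x: 160, y: 50),
                                subviewFrame: subviewFrame)
// corner is (180, 110): the same point, in superview coordinates
```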

Also, all of the event-related classes that deliver coordinates report them in the coordinate system of a specific view. For example, the UITouch class doesn’t have a location property. Instead, it has a locationInView(_:) function that translates the touch point into the local coordinates of the view you’re working with.

When Views Are Drawn

In Chapter 4, you learned that iOS apps are event-driven programs. Refreshing the user interface (programmer speak for drawing stuff on the screen) is also triggered by the event loop. When a view has something to draw, it doesn’t just draw it. Instead, it remembers what it wants to draw, and then it requests a draw event message. When your app’s event loop decides that it’s time to update the display, it sends user interface update messages to all the views that need to be redrawn. A view’s drawing life cycle, therefore, repeats this pattern:

1. Change the data to draw.

2. Call your view object’s setNeedsDisplay() function. This marks the view as needing to be redrawn.

3. When the event loop is ready to update the display, iOS will call your view’s drawRect(_:) function.

You rarely need to call another view’s setNeedsDisplay() function. Most views send themselves that message whenever they change in a way that requires redrawing. For example, when you set the text property of a UILabel object, the label object calls its own setNeedsDisplay() so the new label will appear. Similarly, a view automatically gets a setNeedsDisplay() call when something outside it changes in a way that requires redrawing, such as being added to a new superview.

This doesn’t mean that every change to a view will trigger another drawRect(_:) call. When a view draws itself, the resulting image is saved, or cached, by iOS—like taking a snapshot. Changes that don’t affect that image, such as moving the view around the screen (without resizing it), won’t result in another drawRect(_:) call; iOS simply reuses the snapshot of the view it already has.

Note The rect parameter passed to your drawRect(_:) function is the portion of your view that needs to be redrawn. Most of the time, it’s the same as bounds, which means you need to redraw everything. In rare cases, it can be a smaller portion. Most drawRect(_:) functions don’t pay much attention to it and simply draw their entire view. It never hurts to draw more than what’s required; just don’t draw less than what’s needed. If your drawing code is really complicated and time-consuming, you might save time by updating only the area in the rect parameter.

Now you know when and why views draw themselves, so you just need to know how.

Drawing a View

When your view object receives a drawRect(_:) call, it must draw itself. In simple terms, iOS prepares a “canvas” that your view object must then “paint.” The resulting masterpiece is then used by iOS to represent your view on the screen—until it needs to be drawn again.

Your “canvas” is a Core Graphics context, also called your current context or just context for short. It isn’t an object, per se. It’s a drawing environment that is prepared before your object’s drawRect(_:) function is called. While your drawRect(_:) function is executing, your code can use any of the Core Graphics drawing routines to “paint” into the prepared context. The context is valid until your drawRect(_:) function returns, and then it goes away.

Caution Your view’s Core Graphics context exists only while your drawRect(_:) function is being invoked by iOS. Because of this, you should never call your view’s drawRect(_:) function yourself, and you should never use any of the Core Graphics drawing functions outside of your drawRect(_:) function. (The exception is “off-screen” drawing, which I’ll describe toward the end of this chapter.)

For most of the object-oriented drawing functions, the current context is implied. That is, you call a paint function (myShape.fill()), and the function draws into the current context. If you use any of the C drawing functions, you’ll need to get the current context reference and pass that as the call’s first parameter, like this:

let currentContext = UIGraphicsGetCurrentContext()
CGContextSetAlpha(currentContext, 0.5)

A lot of the details of drawing are implied by the state of the current context. The context state consists of all the settings and properties that will be used for drawing in that context. This includes things such as the color used to fill shapes, the color of lines, the width of lines, the blend mode, the clipping regions, and so on.

Rather than specify all of these variables for every action, like drawing a line, you set up the state for each property first. Let’s say you want to draw a shape (myShape), filling it with red and stroking its outline with black.

UIColor.redColor().setFill()
UIColor.blackColor().setStroke()
myShape.fill()
myShape.stroke()
The setFill() and setStroke() functions set the current fill and stroke colors of the context. The fill() function uses the context’s current fill color, and stroke() uses the current stroke color. This arrangement makes it efficient to draw multiple shapes or effects using the same, or similar, parameters.

Now the only question remaining is what tools you have to draw with. Your fundamental painting tools are the following:

· Simple fill and stroke

· Bézier path (fill and stroke)

· Images

That doesn’t sound like a lot, but taken together, they are remarkably flexible. Let’s start with the simplest: the fill functions.

Fill and Stroke Functions

The Core Graphics framework includes a handful of functions that fill a region of the context with a color. The two principal functions are CGContextFillRect and CGContextFillEllipseInRect. The former fills a rectangle with the current fill color. The latter fills an oval that just fits inside the given rectangle (which will be a circle if the rectangle is a square).

CGContextFillRect is often used to fill in the background of the entire view before drawing its details. It’s not uncommon for a drawRect(_:) function to begin something like this:

override func drawRect(rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()
    backgroundColor?.setFill()
    CGContextFillRect(context, rect)
    // ... drawing of the view's details continues here ...
}

This code starts by getting the current context (which you’ll need for the CGContextFillRect call). It then obtains the background color for this view (backgroundColor) and makes that color the current fill color. It then fills the portion of the view that needs drawing (rect) with that color. Everything drawn after that will draw over a background painted with backgroundColor.

Tip Drawing in a Core Graphics context works much like painting on a real canvas. Whenever you draw something, you’re drawing over what’s been drawn before. So, just like a painting, you typically start by covering the entire surface with a neutral color—artists call this a ground. You then paint with different colors and shapes on top of that, until you’ve painted everything.

The functions CGContextStrokeRect and CGContextStrokeEllipseInRect perform a similar role, but instead of filling the area inside the rectangle or oval, they draw a line over the outline of the rectangle or oval, using the current stroke color, line width, and line join style. Stroke is the term used to describe the act of drawing a line.

Bézier Paths

You’ll notice that there are hardly any Core Graphics functions for drawing really simple things, like straight lines. Or what about the rounded rectangles you see everywhere in iOS, triangles, or any other shape, for that matter? Instead of giving you a bazillion different functions for drawing every shape, the iOS gods provide you an almost magical tool that will let you draw all of those things and more: the Bézier path.

A Bézier path, named after the French engineer Pierre Bézier, can represent any combination of straight or curved lines, as shown in Figure 11-2. It can be as simple as a square or as complex as the coastline of Canada. A Bézier path can be closed (circle, triangle, pie chart), or it can be open (a line, an arc, the letter W).


Figure 11-2. Bézier paths

You define a Bézier path by first creating a UIBezierPath object. You then construct the path by adding straight and curved line segments. When you’re done, you can use the path object to draw into the context by painting its interior (filling), drawing its outline (stroking), or both. You can reuse a path as often as you like.

Tip For common shapes, such as squares, rectangles, circles, ovals, rounded rectangles, and arcs, the UIBezierPath class provides convenience initializers that will create a Bézier path with that shape in a single statement.

To show you how easy it is to create paths, you’ll write an app that draws Bézier paths in a view. But before you get to that, let’s briefly talk about the last major source of view content.


Images

An image is a picture and doesn’t need much explaining. You’ve been using image (UIImage) objects since Chapter 2. Up until now, you’ve been assigning them to UIImageView objects (and other controls) that drew the image for you. But UIImage objects are easy to draw into the context of your own view too. The two most commonly used UIImage drawing functions are drawAtPoint(_:) and drawInRect(_:). The first draws an image into your context, at its original size, with its origin (upper-left corner) at the given coordinate. The second function draws the image into the given rectangle, scaling and stretching the image as necessary.

When I say an image is “drawn” into your context, I really mean it’s copied. An image is a two-dimensional array of pixels, and the canvas of your context is a two-dimensional array of pixels. So really, “drawing” a picture amounts to little more than overwriting a portion of your view’s pixels with the pixels in the image. The exceptions to this are images that have transparent pixels or if you’re using atypical blend modes, both of which I’ll touch on later.
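To make the “drawing is copying” idea concrete, here’s a toy sketch that treats a one-row “canvas” as an array of integer pixel values (not real image data):

```swift
// A 1×8 strip of destination pixels, all "background" (0).
var canvas = Array(repeating: 0, count: 8)

// A 1×3 "image" to draw at x = 2.
let image = [7, 8, 9]
let origin = 2

// Drawing an opaque image simply overwrites the destination pixels.
for (i, pixel) in image.enumerated() {
    canvas[origin + i] = pixel
}
// canvas is now [0, 0, 7, 8, 9, 0, 0, 0]
```

Transparency and blend modes complicate this picture, because then the source and destination pixels are combined rather than replaced.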

I’ll explain a lot about creating, converting, and drawing images in your custom view later in this chapter by revisiting an app you already wrote. But before I get to that, let’s draw some Bézier paths.


The Shapely App

You’re going to create an app that uses Bézier paths to draw simple shapes in a custom view. Through a few iterations of the app, you’ll expand it to include movement and resizing gestures and learn about transforms and animation—along with a heap of UIView and Bézier path goodness. The design of the app is simple, as shown in Figure 11-3.


Figure 11-3. Shapely app design

The app will have a row of buttons that create new shapes. Shapes appear in the middle area where they can be moved around, resized, and reordered. Get started by creating a new project. In Xcode, do the following:

1. Create a new project based on the single-view app template.

2. Name the project Shapely.

3. Set the language to Swift.

4. Set the devices to Universal.

The next thing to do is to create your custom view class. You’ve done this several times already. Select the Shapely group in your project navigator, choose New File (from the File menu or by Control+right-clicking the group), and then do the following:

1. From the iOS group, choose the Swift class template.

2. Name the file ShapeView.

3. Add it to your project.

Creating Views Programmatically

In this app, you’ll be creating your view objects programmatically, instead of using Interface Builder. In fact, you’ll be creating just about everything programmatically. By the end of the chapter, you should be good at it.

When you create any object, it must be initialized. This is accomplished by using one of the class’s initializer functions. Some classes, such as NSURL, provide a variety of initializers so you can create them in different ways: NSURL(string:), NSURL(scheme:, host:, path:), NSURL(string:, relativeToURL:), NSURL(fileURLWithPath:), and so on. Some of these initializers are called convenience initializers, for the obvious reason that they make object creation more convenient.

The UIView class has what is called a designated initializer. This initializer (UIView(frame:)) is the one you must use when constructing a subclass of UIView. Your subclass is free to define its own initializers, but it must call the designated initializer (via super.init(frame:)) so the UIView superclass gets set up correctly.

Note The UIView class actually has two designated initializers. The other is used when creating an object defined in an Interface Builder file, which is described in Chapter 15. The designated initializer you’re using here is for UIView objects created programmatically.

You’re going to define a single initializer to create a new ShapeView object that will draw a specific shape (square, circle, and so on). The object’s frame will be set to a predetermined placeholder frame, which you’ll reposition later. So, you’ll need a custom initializer function that tells the new object what kind of shape it’s going to draw. Your view will draw its shape in a specific color, so you’ll also need a property for its color. Start by editing the ShapeView.swift file. Change it so it looks like this:

import UIKit

enum ShapeSelector: Int {
    case Square = 1
    case Rectangle
    case Circle
    case Oval
    case Triangle
    case Star
}

class ShapeView: UIView {

The enum statement creates an enumeration that determines which shape the view will draw. An enumeration is a sequence of constant values assigned to names. You list the names, and the compiler assigns each a value. In this case, you’ve specified that the enumeration is compatible with the Int type, so all of the values assigned will have integer equivalents—you’ll need that later. Often you don’t care how the values are assigned, but for this app you want them to start at 1 (Square=1, Rectangle=2, Circle=3, and so on).
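You can see the compiler’s numbering for yourself. This standalone sketch repeats the enumeration and inspects the raw values:

```swift
enum ShapeSelector: Int {
    case Square = 1
    case Rectangle
    case Circle
    case Oval
    case Triangle
    case Star
}

// The compiler numbers the remaining cases sequentially from Square = 1.
let circleValue = ShapeSelector.Circle.rawValue   // 3
let fromRaw = ShapeSelector(rawValue: 2)          // .Rectangle
```

Going the other way—constructing a case from its integer value—returns an optional, because not every integer maps to a case.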

The next statement is the biggie; it declares a new class (ShapeView), which is a subclass of UIView. That’s where all your work will go.

Start by adding some properties (new code in bold):

class ShapeView: UIView {
    let initialSize = CGSize(width: 100.0, height: 100.0)
    let alternateHeight: CGFloat = 100.0/2
    let strokeWidth = CGFloat(8.0)

    let shape: ShapeSelector
    var color: UIColor = UIColor.whiteColor()
The properties that begin with let are constants. They’re immutable values that never change. You’ve defined constants for the initial dimensions of the view (initialSize, alternateHeight), the width of the line used to draw the shape (strokeWidth), and which shape to draw (shape). color is a variable property that stores the UIColor the shape will be drawn in. This permits you to later change the color of the shape. The default color is white.

You now have all the pieces you need to write your initializer function.

Initializing Your Object

Every class has at least one, and often several, initializer functions. Add a custom initializer that will construct a new ShapeView object for a given ShapeSelector.

init(shape: ShapeSelector) {
    self.shape = shape
    var frame = CGRect(origin: CGPointZero, size: initialSize)
    if shape == .Rectangle || shape == .Oval {
        frame.size.height = alternateHeight
    }
    super.init(frame: frame)
    opaque = false
    backgroundColor = nil
    clearsContextBeforeDrawing = true
}

This initializer function is called whenever you create a new ShapeView object from a ShapeSelector, like this: ShapeView(shape: .Circle). The first statement sets the shape property to the value chosen by the caller.

Now at this point you might be saying “Just wait a second there, mister. The shape property is an immutable (let) value and can’t be assigned!” And, you’re almost correct. The shape property is immutable but only after the object is created. During the call to your object’s initializer, there’s a narrow window of opportunity to set up immutable properties, allowing you to determine their value for the lifetime of the object.
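The same window exists for any Swift class. Here’s a minimal, self-contained illustration (Widget is a made-up class just for this example):

```swift
class Widget {
    let kind: Int            // immutable once the object exists

    init(kind: Int) {
        self.kind = kind     // allowed: initializers may set let properties
    }

    func changeKind() {
        // kind = 99         // compile error: can't assign after initialization
    }
}

let w = Widget(kind: 5)
// w.kind is 5 for the lifetime of the object
```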

The next four lines construct an initial frame for the view. It will have an origin of (0.0,0.0) and will be 100.0 points wide by 100.0 points high, unless it’s a rectangle or an oval, in which case it will be half as high.

The placeholder frame is then passed to the superclass’s designated initializer by calling super.init(frame: frame). The superclass completes the process of constructing the object. When it returns, the object is now fully initialized, and you can start using it.

In fact, you can start using it before you return from your initializer. The next three lines change the default properties of the object (the property values that were just set up by super.init(frame:)). The most important is resetting the opaque property. If your view object will have transparent regions, you must declare that your view isn’t opaque. The backgroundColor property is set to nil because this view doesn’t fill its background with a color. I'll explain the clearsContextBeforeDrawing property shortly.

Caution If your view leaves any portion of its image transparent, or even semitransparent, you must set the view’s opaque property to false or it may not appear correctly on the screen.

Oddly, the compiler is now complaining that your class doesn’t implement all of its required functions. As it turns out, there are a number of rules (explained in Chapter 20) regarding what initializers you can, can’t, and must implement. I won’t go into the details here, but because you defined a custom initializer, there’s now a required initializer that you must also define.

required init(coder decoder: NSCoder!) {
    shape = .Square
    super.init(coder: decoder)
}

This initializer is called whenever your object is constructed from a document or Interface Builder file. How that might happen is explained in Chapters 15 and 19. For now, just throw in this code, and the compiler will stop complaining.

The drawRect(_:) Function

I think it’s time to write your drawRect(_:) function. This is the heart of any custom view class. Add this function to your ShapeView.swift file:

override func drawRect(rect: CGRect) {
    color.setStroke()
    path.stroke()
}

Whoa! That’s it? Yes, that’s all the code your class needs to draw its shape. It gets the Bézier path object from its path property. The Bézier path defines the outline of the shape this view will draw. You then set the color you want to draw with, and the stroke() function draws the outline of the shape. The details of how the line is drawn—its width, the shape of joints, and so on—are properties of the path object.

You’ll also notice that you didn’t have to first fill the context (as I explained in the “Fill and Stroke” section). That’s because you set the view’s clearsContextBeforeDrawing property. Set this to true, and iOS will prefill your context with (black) transparent pixels before it calls your drawRect(_:) function. For views that need to start with a transparent “canvas”—as this one does—why not let iOS do that work for you? If your view always fills its context with an image or color, set clearsContextBeforeDrawing to false; leaving it true will pointlessly fill the context twice, slowing down your app and wasting CPU resources.

Creating the Bézier Path

Clearly, the heavy lifting is creating that Bézier path object. Do that now. Add this computed property to your class:

var path: UIBezierPath {
    var rect = bounds
    rect.inset(dx: strokeWidth/2.0, dy: strokeWidth/2.0)
    var shapePath: UIBezierPath!
    switch shape {
    case .Square, .Rectangle:
        shapePath = UIBezierPath(rect: rect)
    // TODO: Add cases for remaining shapes
    default:
        shapePath = UIBezierPath()
    }
    shapePath.lineWidth = strokeWidth
    shapePath.lineJoinStyle = kCGLineJoinRound
    return shapePath
}

This code declares a read-only variable named path that bestows the object with a UIBezierPath property. path is a computed property. Computed properties don’t store a value like the shape and color properties do. Instead, when you request the path property value, this block of code is executed. The code constructs a new UIBezierPath object that describes the shape this view draws (square, rectangle, circle, and so on), exactly fitting its current size (bounds).

Note The path property provides code only to get the value, so you can’t set the path property. While this makes the property immutable (it can’t be set), it’s still a var (variable) and not a let (constant) because the value it returns can change over the lifetime of the object.
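Computed properties aren’t specific to views. This standalone sketch shows the same read-only pattern with a made-up Thermometer type:

```swift
struct Thermometer {
    var celsius: Double

    // A read-only computed property: this code runs each time fahrenheit
    // is read, so its value always tracks the current celsius value.
    var fahrenheit: Double {
        return celsius * 9.0 / 5.0 + 32.0
    }
}

var t = Thermometer(celsius: 100)
let boiling = t.fahrenheit    // 212
t.celsius = 0
let freezing = t.fahrenheit   // 32
```

Like path, fahrenheit can’t be assigned, but it’s still declared var because the value it returns changes over the lifetime of the value.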

The first two lines of code create a CGRect variable that describes the outer dimensions of the shape. The reason it is strokeWidth/2.0 points smaller than the bounds is explained in the “Avoiding Pixelitis: Coordinates versus Pixels” sidebar.


AVOIDING PIXELITIS: COORDINATES VERSUS PIXELS

All coordinates in Core Graphics are mathematical points in space; they do not address individual pixels. This is an important concept to understand. Think of coordinates as infinitely thin lines between the pixels of your display or image. This has three ramifications.

· Points or coordinates are not pixels.

· Drawing occurs on and inside lines, not on or inside pixels.

· One point may not mean one pixel.

When you fill a shape, you’re filling the pixels inside the infinitely thin lines that define that shape. In the following figure, a rectangle ((2,1),(5,2)) is filled with a dark color. A lower-resolution display will have one physical pixel per coordinate space, as shown on the left. On the right is a “retina” display, with four physical pixels per coordinate space.


The rectangle defines a mathematically precise region, and the pixels that fall inside that region are filled with the color. This precision avoids a common programmer malady known as pixelitis: the anxiety of not knowing exactly what pixels will be affected by a particular drawing operation, common in many other graphic libraries.

This mathematical precision can have unanticipated side effects. One common artifact occurs when drawing a line with an odd width—“odd” meaning “not evenly divisible by 2.” A line’s stroke is centered over the mathematical line or curve. In the next figure, a horizontal line segment is drawn between two coordinates, with a stroke width of 1.0. The upper line does not appear solid on a lower-resolution display because the stroke covers only half of the pixels on either side of the line. Core Graphics draws partial pixels using anti-aliasing, which means that the color of those pixels is adjusted using half of the stroke’s color value. On a 2.0 retina display, this doesn’t occur because each pixel is half of a coordinate value.


The lower line in the figure avoids the “half-pixel” problem by centering the line between two coordinates. Now the 1.0 width line exactly fills the space between coordinate boundaries, neatly filling the pixels and appearing to the user as a clean, solid line.

If pixel-perfect alignment is important to your app, you may need to consult the contentScaleFactor property of UIView or its trait collection. It discloses the number of physical screen pixels between two whole coordinate values. As of this writing, it can be one of three values: 1.0 for lower-resolution displays, 2.0 for Retina displays, and 3.0 for Retina HD displays.
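One common use of that scale factor is snapping a coordinate to the nearest physical-pixel boundary. This is a sketch with a made-up helper function, not a UIKit API:

```swift
import Foundation

// Snap a coordinate to the nearest physical-pixel boundary for a given
// scale factor (1.0, 2.0, or 3.0). Hypothetical helper for illustration.
func pixelAligned(_ value: CGFloat, scale: CGFloat) -> CGFloat {
    return (value * scale).rounded() / scale
}

let onePointGrid = pixelAligned(10.3, scale: 1.0)   // 10.0
let retinaGrid = pixelAligned(10.3, scale: 2.0)     // 10.5
```

On a Retina display (scale 2.0), coordinates land on half-point boundaries because each point spans two physical pixels.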

The next block of code declares a UIBezierPath variable and then switches on the shape constant to build the desired shape. For right now, the switch statement only makes the paths for square and rectangular shapes, as shown in Figure 11-4. You’ll fill in the other cases later.


Figure 11-4. Unfinished -path function

Tip If you start a //-style comment with MARK:, TODO:, or FIXME:, that comment will automatically appear in the file navigation menu at the top of the editing area, as shown in Figure 11-4. This is a really handy way to make a note about something you need to address later because it will appear prominently in your file’s navigation menu until you remove it.

Sharp-eyed readers will notice that the code to create a square shape and a rectangular shape is the same. That’s because the difference between these shapes is the aspect ratio of the view, and that was established in init(shape:) when the object was created. If you go back and look at init(shape:), you’ll see this code:

if shape == .Rectangle || shape == .Oval {
    frame.size.height = alternateHeight
}

When the view’s frame was initialized, it was made half as high if the shape was a rectangle or oval. All other shape views begin life with a square frame.
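Pieced together from these fragments, the init(shape:) initializer probably looks something like the following sketch. The initialSize and alternateHeight values here are assumptions, not necessarily the book’s actual constants:

```swift
init(shape: ShapeSelector) {
    self.shape = shape
    // Every shape view starts with a square frame...
    var frame = CGRect(x: 0.0, y: 0.0, width: initialSize, height: initialSize)
    // ...but rectangles and ovals get a frame half as high.
    if shape == .Rectangle || shape == .Oval {
        frame.size.height = alternateHeight
    }
    super.init(frame: frame)
    opaque = false   // let the superview show through around the shape
    // (the required init(coder:) override is omitted from this sketch)
}
```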

Finally, the line width of the shape is set to strokeWidth, and the joint style is set to kCGLineJoinRound. This last property determines how a joint (the point where one line segment ends and the next begins) is drawn. Setting it to kCGLineJoinRound draws shapes with rounded “elbows.”

Testing Squares

That’s enough code to draw a square-shaped view, so let’s hook this up to something and try it. The Shapely app creates new shapes when the user taps a button, so define a button to test it. The buttons get custom images, so start by adding those image resources to your project. Select the Images.xcassets asset catalog item in the navigator. Find the Learn iOS Development Projects ➤ Ch 11 ➤ Shapely (Resources) folder and drag all 12 of the image files (addcircle.png, addcircle@2x.png, addoval.png, addoval@2x.png, addrect.png, addrect@2x.png, addsquare.png, addsquare@2x.png, addstar.png, addstar@2x.png, addtriangle.png, and addtriangle@2x.png) into the asset catalog, as shown in Figure 11-5. There are also some app icons in the Shapely (Icons) folder, which you’re free to drop into the AppIcon group.


Figure 11-5. Adding button image resources

Select the Main.storyboard file. Bring up the object library (View ➤ Utilities ➤ Show Object Library) and drag a button into the upper-left corner of your interface, as shown in Figure 11-6.


Figure 11-6. Adding the first button

Select the button and click the pin attributes control. Pin the height and width to 44 points, also shown in Figure 11-6. Choose the Add Missing Constraints in View Controller command, either from the layout issues control next to the pin constraints control or from the Editor menu.

Switch to the attributes inspector, select the root view object, and change its background property to Black Color. Select the new button again and make the following changes:

1. Change its type to Custom.

2. Erase its title (replacing Button with nothing).

3. Change its image to addsquare.png.

Now you’re going to connect the button’s action to a new Swift function, and you’re going to use a really nifty Interface Builder trick to do it. Start by switching to the assistant view (View ➤ Assistant Editor ➤ Show Assistant Editor). The source for your view controller (ViewController.swift) will appear in the right-hand pane. If it doesn’t, select the ViewController.swift file from the navigation ribbon immediately above the right-hand editor pane. In the ViewController.swift file (right-hand editing pane), add a new action.

@IBAction func addShape(sender: AnyObject!) {
    if let button = sender as? UIButton {
        let shapeView = ShapeView(shape: .Square)
        view.addSubview(shapeView)

        var shapeFrame = shapeView.frame
        let safeRect = CGRectInset(view.bounds, shapeFrame.width, shapeFrame.height)
        var randomCenter = safeRect.origin
        randomCenter.x += CGFloat(arc4random_uniform(UInt32(safeRect.width)))
        randomCenter.y += CGFloat(arc4random_uniform(UInt32(safeRect.height)))
        shapeView.center = randomCenter
    }
}

In brief, the first two lines of code create a new ShapeView object that draws a square. The new view object is then added to the root view so it will appear in your interface. If you did nothing else, a white square would be drawn at the upper-left corner of the display. The rest of the code simply picks a random location and moves the new view to that location. safeRect is inset by the height and width of the new view, so the randomly chosen position inside safeRect ensures the new view will be safely inside the bounds of the view.

Up until this point in this book, you’ve been creating and adding view objects using Interface Builder. This code demonstrates how you do it programmatically. Anything you add to a view using Interface Builder can be created and added programmatically, and you can create things in code that you can’t create in Interface Builder.

Note The addSubview(_:) function adds a view to another view. The view being added becomes a subview of the other view (now its superview). The subview will appear at the coordinates of its frame, in the superview’s local coordinate system. You can add a view to only one superview at a time; a view can’t appear in two superviews simultaneously. To remove a view, call the view’s removeFromSuperview() function.
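In code, the life cycle of a subview looks something like this (badge is a made-up example view):

```swift
// Create a view and add it to the hierarchy; it appears at (20,20)
// in its superview's coordinate system.
let badge = UIView(frame: CGRect(x: 20.0, y: 20.0, width: 44.0, height: 44.0))
view.addSubview(badge)

// Later, take it off the screen again. A view can have only one
// superview, so adding it to another view would implicitly remove it first.
badge.removeFromSuperview()
```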

Now for that Interface Builder trick. Notice that a little connection socket has appeared in the margin next to your addShape(_:) function. This acts exactly like the connectors in the connections inspector. Connect the button to the action by dragging the connection socket next to the addShape(_:) function into the new button, as shown in Figure 11-7.


Figure 11-7. Connecting the first button

The assistant editor allows you to write the Swift code for properties and actions in your view controller and then connect them directly to the view objects in your interface, without switching files or windows. It’s a great time-saver.

Fire up an iPad simulator and run your app, as shown in Figure 11-8. Tap the button a few times to create some shape view objects, as shown on the right in Figure 11-8.


Figure 11-8. Working square shape views

So far, you’ve designed a custom UIView object that draws a shape using a Bézier path. You’ve created an action that creates new view objects and adds them to a view, programmatically. This is a great start, but you still want to draw different shapes, in different colors, so expand the app to do that.

More Shapes, More Colors

Back in Xcode, stop the app and switch to the Main.storyboard file again. Your app will draw six shapes, so create five more buttons. I did this by holding down the Option key and dragging out copies of the first UIButton object, as shown in Figure 11-9. You could, alternatively, copy and paste the first button. If you’re a masochist, you could drag in new button objects from the library and individually change them to match the first one. I’ll leave those decisions to you.


Figure 11-9. Duplicating the first button

Just as you did in DrumDub, you’ll use the tag property of the button to identify the shape it will create. Since you duplicated the first button, all of the buttons are connected to the same addShape(_:) function in ViewController. (If not, connect them now.) Working from left to right, use the attributes inspector to set the tag and image properties of the buttons using Table 11-3.

Table 11-3. New Shape Button Properties

Tag	Image
1	addsquare.png
2	addrect.png
3	addcircle.png
4	addoval.png
5	addtriangle.png
6	addstar.png
You’ll notice that the tag values, cleverly, match up with the enum constants you defined in ShapeView.swift. You’ll change the first line of addShape(_:) (in ViewController.swift) to use the button’s tag value instead of the .Square constant, so each button will create a different shape.

Of course, the path property in ShapeView still knows only how to create shapes for squares and rectangles; you’ll correct that shortly. But before you leave ViewController.swift, modify your addShape(_:) function to choose the new shape based on the tag and give it a random color—just to make it pretty. Find your addShape(_:) function and make the following changes (in bold):

let colors = [ UIColor.redColor(), UIColor.greenColor(),
               UIColor.blueColor(), UIColor.yellowColor(),
               UIColor.purpleColor(), UIColor.orangeColor(),
               UIColor.grayColor(), UIColor.whiteColor() ]

@IBAction func addShape(sender: AnyObject!) {
    if let button = sender as? UIButton {
        if let shapeSelector = ShapeSelector(rawValue: button.tag) {
            let shapeView = ShapeView(shape: shapeSelector)
            shapeView.color = colors[Int(arc4random_uniform(UInt32(colors.count)))]

The modified function gets the tag of the button and uses the enumeration’s built-in init(rawValue:) initializer to convert the integer into a ShapeSelector value, which is then passed to the ShapeView initializer. Now the shape will be determined by which button was tapped. A random color is then selected and assigned to the new shape.

Note The enumeration’s init(rawValue:) initializer returns an optional. In other words, it won’t return any value at all if there’s no enumeration value that corresponds to the given integer value. So if the tag value is invalid (0 or 42, for example), the if statement fails and no code executes. Optionals are explained in Chapter 20.

To draw those shapes, your ShapeView object still needs some work. Switch to the ShapeView.swift file, find the path property’s getter function, and finish it with the code shown in bold in Listing 11-1. Oh, and you might as well remove the default: case from the unfinished version; you don’t need that anymore because the switch statement now covers all possible cases.

Listing 11-1. Finished Path Property Getter Function

var path: UIBezierPath {
    var rect = bounds
    rect.inset(dx: strokeWidth/2.0, dy: strokeWidth/2.0)

    var shapePath: UIBezierPath!
    switch shape {
        case .Square, .Rectangle:
            shapePath = UIBezierPath(rect: rect)
        case .Circle, .Oval:
            shapePath = UIBezierPath(ovalInRect: rect)
        case .Triangle:
            shapePath = UIBezierPath()
            shapePath.moveToPoint(CGPoint(x: rect.midX, y: rect.minY))
            shapePath.addLineToPoint(CGPoint(x: rect.maxX, y: rect.maxY))
            shapePath.addLineToPoint(CGPoint(x: rect.minX, y: rect.maxY))
            shapePath.closePath()
        case .Star:
            shapePath = UIBezierPath()
            let armRotation = CGFloat(M_PI)*2.0/5.0
            var angle = armRotation
            let distance = rect.width*0.38
            var point = CGPoint(x: rect.midX, y: rect.minY)
            shapePath.moveToPoint(point)
            for _ in 0..<5 {
                point.x += CGFloat(cos(Double(angle)))*distance
                point.y += CGFloat(sin(Double(angle)))*distance
                shapePath.addLineToPoint(point)
                angle -= armRotation
                point.x += CGFloat(cos(Double(angle)))*distance
                point.y += CGFloat(sin(Double(angle)))*distance
                shapePath.addLineToPoint(point)
                angle += armRotation*2
            }
            shapePath.closePath()
    }
    shapePath.lineWidth = strokeWidth
    shapePath.lineJoinStyle = kCGLineJoinRound
    return shapePath
}

The .Circle and .Oval cases use another UIBezierPath convenience initializer to create a finished path object that traces an ellipse that fits exactly inside the given rectangle.

The .Triangle case is where things get interesting. It shows a Bézier path being created, one line segment at a time. You begin a Bézier path by calling moveToPoint(_:) to establish the first point in the shape. After that, you add line segments by making a series of addLineToPoint(_:) calls. Each call adds one edge to the shape, just like playing “connect the dots.” The last edge is created using the closePath() function, which does two things: it connects the last point to the first point and makes this a closed path—one that describes a solid shape.

Note This app creates Bézier paths only with straight lines, but you can mix in calls to addArcWithCenter(_:,radius:,startAngle:,endAngle:,clockwise:), addCurveToPoint(_:,controlPoint1:,controlPoint2:), andaddQuadCurveToPoint(_:,controlPoint:), in any combination, to add curved segments to the path too.

The .Star case creates an even more complex shape. If you’re curious about the details, read the comments in the finished Shapely project code that you’ll find in the Learn iOS Development Projects ➤ Ch 11 ➤ Shapely folder. In brief, the code creates a path that starts at the top-center of the view (the top point of the star), adds a line that angles down to the interior point of the star, and then adds another (horizontal) line back out to the right-hand point of the star. It then rotates 72° and repeats these steps, four more times, to create a five-pointed star.

Tip Trigonometric math functions perform their calculations in radians. If your trig skills are a little rusty, angles in radians are expressed as fractions of the constant π, which is equal to 180°. The iOS math library includes constants for π (M_PI or 180°), π/2 (M_PI_2 or 90°), and π/4 (M_PI_4 or 45°), as well as other commonly used constants (e, the square root of 2, and so on).
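As a quick check, this is where the star code’s armRotation constant comes from:

```swift
// A five-pointed star turns 1/5 of a full circle (360°/5 = 72°) per arm.
let fullCircle = CGFloat(M_PI) * 2.0   // 2π radians == 360°
let armRotation = fullCircle / 5.0     // 2π/5 radians == 72°

// The general degree-to-radian conversion gives the same value:
let radians = 72.0 * CGFloat(M_PI) / 180.0
```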

Run your app again (see Figure 11-10) and make a bunch of shapes!


Figure 11-10. Multicolor shapes


Next up on your app’s feature list is dragging and resizing shapes. To implement that, you’re going to revisit gesture recognizers and learn something completely new. Let’s start with the gesture recognizer.

Like view objects, you can create, configure, and connect gesture recognizers programmatically. The concrete gesture recognizer classes supplied by iOS (tap, pinch, rotate, swipe, pan, and long-press) have all the logic needed to recognize these common gestures. All you have to do is create one, do a little configuration, and hook them up.

Return to your addShape(_:) function in ViewController.swift. At the end of the function, add this code:

let pan = UIPanGestureRecognizer(target: self, action: "moveShape:")
pan.maximumNumberOfTouches = 1
shapeView.addGestureRecognizer(pan)

The first statement creates a new pan (drag) gesture recognizer object. This recognizer will send its action message ("moveShape:") to your ViewController object (self). The maximumNumberOfTouches property is set to 1. This configures the object to recognize only single-finger drag gestures; it will ignore any two- or three-finger drags that it witnesses. Finally, the recognizer object is attached to the shape view that was just created.

Note This code is equivalent to going into an Interface Builder file, dragging a new Pan Gesture Recognizer into a ShapeView object, selecting it, changing its Maximum Touches to 1, and then connecting the recognizer to the moveShape(_:) action of the controller. And when I say “equivalent,” I mean “identical to.”

Now all you need is a moveShape(_:) function. In your ViewController class, add it now.

func moveShape(gesture: UIPanGestureRecognizer) {
    if let shapeView = gesture.view as? ShapeView {
        let dragDelta = gesture.translationInView(shapeView.superview)
        switch gesture.state {
            case .Began, .Changed:
                shapeView.transform = CGAffineTransformMakeTranslation(dragDelta.x, dragDelta.y)
            case .Ended:
                shapeView.transform = CGAffineTransformIdentity
                shapeView.frame = CGRectOffset(shapeView.frame, dragDelta.x, dragDelta.y)
            default:
                shapeView.transform = CGAffineTransformIdentity
        }
    }
}

Gesture recognizers analyze and absorb the low-level touch events sent to the view object and turn them into high-level gesture events. Like many high-level events, they have a phase. The phase of a continuous gesture, such as dragging, progresses through a predictable order: possible, began, changed, and finally ended or canceled.

Your moveShape(_:) function starts by getting the view that caused the gesture action; this will be the view the user touched and the one you’re going to move. It then gets some information about how far the user dragged and the gesture’s state. As long as the gesture is in the “began” or “changed” state, it means the user touched the view and is dragging their finger around the screen. When they release it, the state will change to “ended.” In rare circumstances, it may change to “canceled” or “failed,” in which case you ignore the gesture.

While the user is dragging their finger around, you want to adjust the origin of the shape view by the same distance on the screen, which gives the illusion that the user is physically dragging the view around the screen. (I hope you didn’t think you could actually move things inside your iPhone by touching it.) The way you’re going to do that uses a remarkable feature of the UIView class: the transform property.

Note Notice that you didn’t need the @IBAction keyword at the beginning of the moveShape(_:) function. That’s because you connected it to the gesture recognizer programmatically. The @IBAction keyword is needed only if you want your action to appear in, and be connected using, Interface Builder.

Applying a Translate Transform

iOS uses affine transforms in a number of different ways. An affine transform is a 3x3 matrix that describes a coordinate system transformation. In English, it’s a (seemingly) magical array of numbers that can describe a variety of complex coordinate conversions. It can move, resize, skew, flip, and rotate any set of points. And since just about everything (view objects, images, and Bézier paths) is a “set of points,” affine transforms can be used to move, flip, zoom, shrink, and spin any of those things. Even more amazing, a single affine transform can perform all of those transformations in a single operation.


iOS provides functions to create and combine the three common transforms: translate (shift), scale, and rotate. These are illustrated in the following figure. The gray shape represents the original shape and the dark figure its transformation:


You create a basic transform using the function CGAffineTransformMakeTranslation, CGAffineTransformMakeScale, or CGAffineTransformMakeRotation. If you’re a hotshot math whiz, you can create any arbitrary transform using CGAffineTransformMake.

The special identity transform (CGAffineTransformIdentity) performs no transformation at all. This is the default value for the transform property and the constant you use if you don’t want any transformation performed.

Transforms can be combined. The effect of this is illustrated in the following figure:


To add transforms together, use the functions CGAffineTransformTranslate, CGAffineTransformScale, CGAffineTransformRotate, and CGAffineTransformConcat. These functions take one transform (which might already be the sum of other transforms), apply an additional transform, and return the combined transform. You would then use this combined transform value to perform all of the individual transforms, in a single operation.
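As a sketch of how that combination works, here’s a transform that shifts, rotates, and doubles a view in a single operation (someView stands in for any UIView; note that the order in which you add the transforms matters):

```swift
// Start with a translation, then fold in a 45° rotation and a 2x scale.
var transform = CGAffineTransformMakeTranslation(100.0, 50.0)
transform = CGAffineTransformRotate(transform, CGFloat(M_PI_4))
transform = CGAffineTransformScale(transform, 2.0, 2.0)

// One assignment performs all three transformations at once.
someView.transform = transform
```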

The gesture cases for the “began” and “changed” states (in moveShape(_:)) take the distance the user dragged their finger and use it to create a translate transform. Try to say “translate transform” three times fast. The transform property is set to this value, and you’re done. But what, exactly, does this magic property do?

When you set the transform property of a view, all of the coordinates that the view occupies in its superview are transformed before they appear on the screen. The view’s content and location (its frame) don’t change. What changes is where the view’s image appears in the superview. I like to think of the UIView transform as a lens that “projects” the view so it appears elsewhere or in a different way. If you apply a translate transform, as you just did in moveShape(_:), then the view will appear at a different set of coordinates.

Caution If you set the transform property to anything other than the identity transform, the value of the frame property becomes meaningless. It’s not entirely meaningless, but it’s unusable for most practical purposes. Just remember this: after you set a transform to anything other than the identity transform, don’t use frame.

If you set the transform property back to the identity transform (CGAffineTransformIdentity), the view will reappear at its original location. Programmers call the transform property a nondestructive translation because setting it doesn’t alter any of the object’s other properties. Set it back, and everything returns to where it was. In the default: case, this is exactly what happens. The default: case handles the “canceled” and “failed” states by setting the transform property back to the identity transform.

The gesture “ended” case is where the work happens. First, the view’s transform property is reset to the identity transform. Then, the view’s frame origin is updated, based on the total distance the user dragged the view. The updated frame permanently relocates the view object to its new location.

Note The transform property of the view is set back to the identity transform before the frame property is used to change its location.

Run your project and try it. I didn’t supply a figure because (as my publisher explained it to me) the illustration in the book wouldn’t move. Create a few shapes and drag them around. It’s a lot of fun. When you’re done playing, get ready to add zooming and pinching to the mix.

But before you get to that, let me share a few nuggets about affine transforms. Transforms can be used in a variety of places, not just to distort the frame of a view. They can be used to transform the coordinate system of the current context while you draw your view. In essence, this use applies a transform to the bounds of your view, changing the effect of what you draw in your view, rather than translating the final results of your view. For example, you might have a complex drawing that you want to shift up or down in your view or maybe draw something upside down. Rather than recalculate all of the coordinates you want to draw, use the CGContextTranslateCTM, CGContextRotateCTM, or CGContextScaleCTM function to shift, rotate, or resize all of the drawing operations. You’ll use these functions in Chapter 16.

Tip You can also shift the drawing coordinates of your view by changing the origin of the bounds property.

Transforms can also be used to change the points in a Bézier path. Create the desired transform and then call the path’s applyTransform(_:) function. All of the points in the path will be altered using that transform. This is a destructive translation; the original points in the path are lost.
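For example, this sketch flips a path upside down within its own bounds (shapePath stands in for any UIBezierPath whose bounds origin is at 0,0):

```swift
// Scale y by -1 to mirror the path, then translate it back into
// positive coordinates so it stays inside its original bounds.
var flip = CGAffineTransformMakeScale(1.0, -1.0)
flip = CGAffineTransformTranslate(flip, 0.0, -shapePath.bounds.height)
shapePath.applyTransform(flip)   // destructive: the original points are gone
```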

Applying a Scale Transform

If one gesture recognizer is fun, then two must make a party. This time, you’re going to add a pinch/zoom gesture that will resize your shape view. As before, start by creating and attaching a second gesture recognizer object at the end of the addShape(_:) function (in ViewController.swift).

let pinch = UIPinchGestureRecognizer(target: self, action: "resizeShape:")
shapeView.addGestureRecognizer(pinch)

The pinch gesture recognizer object doesn’t need any configuration because a pinch/zoom is always a two-finger gesture.

Now add this resizeShape(_:) function:

func resizeShape(gesture: UIPinchGestureRecognizer) {
    if let shapeView = gesture.view as? ShapeView {
        let pinchScale = gesture.scale
        switch gesture.state {
            case .Began, .Changed:
                shapeView.transform = CGAffineTransformMakeScale(pinchScale, pinchScale)
            case .Ended:
                shapeView.transform = CGAffineTransformIdentity
                var frame = shapeView.frame
                let xDelta = frame.width*pinchScale-frame.width
                let yDelta = frame.height*pinchScale-frame.height
                frame.size.width += xDelta
                frame.size.height += yDelta
                frame.origin.x -= xDelta/2
                frame.origin.y -= yDelta/2
                shapeView.frame = frame
            default:
                shapeView.transform = CGAffineTransformIdentity
        }
    }
}


This function follows the same pattern as moveShape(_:). The only significant difference is in the code to adjust the view’s final size and position, which requires a little more math than the drag function.

Run the project and try it. Create a shape and then use two fingers to resize it, as shown in Figure 11-11.


Figure 11-11. Resizing using a transform

Tip If you’re using the simulator, hold down the Option key to simulate a two-finger pinch gesture. You’ll have to first position a shape in the middle of the view because the second “finger” in the simulator is always mirrored across the center point of the display, and you need to have both “fingers” inside the view to be recognized as a pinch gesture.

You’ll notice that when you zoom the shape out a lot, its image gets the “jaggies”: aliasing artifacts caused by magnifying the smaller image. This is because you’re not resizing the view during the pinch gesture. You’re just applying a transform to the original view’s image. Bézier paths are resolution independent and draw smoothly at any size. But a transform has only the pixels of the view’s current image to work with. At the end of the pinch gesture, the shape view’s size is adjusted and redrawn. This creates a new Bézier path, at the new size, and all is smooth again, as shown on the right in Figure 11-11.

Your app is looking pretty lively, but I think it could stand to be jazzed up a bit. What do you think about adding some animation?

Animation: It’s Not Just for Manga

Animation has become an integral and expected feature of modern apps. Without it, your app looks dull and uninteresting, even if it’s doing everything you intended. Fortunately for you, the designers of iOS know this, and they’ve done a staggering amount of work, all so you can easily add animation to your app. There are (broadly) four ways to add movement to your app.

· The built-in stuff

· Do it yourself

· Core Animation

· OpenGL, Sprite Kit, Scene Kit, and Metal

The “built-in stuff” includes those places in the iOS API where animation will be done for you. Countless functions, from view controllers to table views, include a Boolean animated parameter. If you want your view controller to slide over, your page to peel up, your toolbar buttons to resize smoothly, your table view rows to sprightly leap to their new positions, or your progress indicator to drift gently to its new value, all you have to do is pass true for the animated parameter and the iOS classes will do all of the work. So, keep an eye out for those animated parameters and use them.

Tip Some view properties have two setters: one that’s never animated and one that can be animated. For example, the UIProgressView class has a settable progress property (never animated) and a setProgress(_:,animated:) function (optionally animated). If you’re setting a visual property, check the documentation to see whether there’s an animated alternative.
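For example:

```swift
let progressView = UIProgressView(progressViewStyle: .Default)
progressView.progress = 0.25                   // jumps instantly to 25%
progressView.setProgress(0.75, animated: true) // glides smoothly to 75%
```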

In the do-it-yourself (DIY) animation solution, your code performs the frame-by-frame changes needed to animate your interface. This usually involves steps like this:

1. Create a timer that fires 30 times/second.

2. When the timer fires, update the position/look/size/content of a view.

3. Mark the view as needing to be redrawn.

4. Repeat steps 2 and 3 until the animation ends.

The DIY solution is, ironically, the method most often abused by amateurs. It might work “OK” in a handful of situations, but most often it suffers from a number of unavoidable performance pitfalls. The biggest problem is timing. It’s really difficult to balance the speed of an animation so it looks smooth but doesn’t run so fast that it wastes CPU resources, uses up battery life, and drags the rest of your app and the iOS system down with it.

Using Core Animation

Smart iOS developers—that’s you since you’re reading this book—use Core Animation. Core Animation has solved all of the thorny performance, load-balancing, background-threading, and efficiency problems for you. All you have to do is tell it what you want animated and let it work its magic.

Animated content is drawn in a layer (CALayer) object. A layer object is just like a UIView; it’s a canvas that you draw into using Core Graphics. Once drawn, the layer can be animated using Core Animation. In a nutshell, you tell Core Animation how you want the layer changed (moved, shrunk, spun, curled, flipped, and so on), over what time period, and how fast. You then forget about it and let Core Animation do all of the work. Core Animation doesn’t even bother your app’s event loop; it works quietly in the background, balancing the animation work with available CPU resources so it doesn’t interfere with whatever else your app needs to do. It’s really a remarkable system.

Keep in mind that Core Animation doesn’t change the contents of the layer object. It temporarily animates a copy of the layer, which disappears when the animation is over. I like to think of Core Animation as “live” transforms; it temporarily projects a distorted, animated version of your layer but never changes the layer.

Oh, did I say “a layer object is just like a UIView?” I should have said, “a layer object, like the one in UIView” because UIView is based on Core Animation layers. When you’re drawing your view in drawRect(_:), you’re drawing into a CALayer object. You can get your UIView’s layer object through the layer property, should you ever need to work with the layer object directly. The takeaway lesson is this: all UIView objects can be animated using Core Animation. Now you’re cooking with gas!
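Even before you create layers of your own, the layer property is handy for effects UIView doesn’t expose directly. A couple of illustrative examples (someView stands in for any UIView):

```swift
// Reach through a view to its backing Core Animation layer.
someView.layer.cornerRadius = 8.0    // round the corners
someView.layer.borderWidth = 2.0     // draw a border...
someView.layer.borderColor = UIColor.whiteColor().CGColor   // ...in white
```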

Adding Animation to Shapely

There are three ways to get Core Animation working for you. I already described the first: all of those “built-in” animated: parameters are based on Core Animation—that’s no surprise. The second, traditional Core Animation technique is to create an animation (CAAnimation) object. An animation object controls an animation sequence. It determines when it starts, when it stops, the speed of the animation (called the animation curve), what the animation does, whether it repeats, how many times, and so on. There are subclasses of CAAnimation that will animate a particular property of a view or animate a transition (the adding, removal, or exchange of view objects). There’s even an animation class (CAAnimationGroup) that synchronizes multiple animation objects.

Honestly, creating CAAnimation objects isn’t easy. Because it can be so convoluted, there are a ton of convenience constructors and functions that try to make it as painless as possible—but it’s still a hard row to hoe. You have to define the beginning and ending property values of what’s being animated. You have to define timing and animation curves; then you have to start the animation and change the actual property values at the appropriate time. Remember that animation doesn’t change the original view, so if you want a view to slide from the left to right, you have to create an animation that starts on the left and ends on the right, and then you have to set the position of the original view to the right, or the view will reappear on the left when the animation is over. It’s tedious.

Fortunately, the iOS gods have felt your pain and created a really simple way of creating basic animations called the block-based animation functions. These UIView functions let you write a few lines of code to tell Core Animation how you want the properties of your view changed. Core Animation then handles the work of creating, configuring, and starting the CAAnimation objects. It even updates your view’s properties, so when the animation is over, your properties will be at the end value of the animation—which is exactly what you want.

So, how simple are these block-based animation functions to use? You be the judge. Find your addShape(_:) function in ViewController.swift. Locate the code that randomly positions the new view and edit it so it looks like this (replacing the statement shapeView.center = randomCenter with the code in bold):

var shapeFrame = shapeView.frame
let safeRect = CGRectInset(view.bounds, shapeFrame.width, shapeFrame.height)
var randomCenter = safeRect.origin
randomCenter.x += CGFloat(arc4random_uniform(UInt32(safeRect.width)))
randomCenter.y += CGFloat(arc4random_uniform(UInt32(safeRect.height)))
shapeView.center = button.center
shapeView.transform = CGAffineTransformMakeScale(0.4, 0.4)
UIView.animateWithDuration(0.5) {
    shapeView.center = randomCenter
    shapeView.transform = CGAffineTransformIdentity
}

The new code starts by setting the center of the new view to the center of the button the user tapped, essentially positioning the new view right over the button. A transform is applied to shrink the size of the shape to 40 percent of its original size (approximately the same size as the button). If you stopped here, your shape view would appear right on top of the button you tapped, covering it.

The last statement is the magic. It starts an animation that will last a half second (0.5). The closure expression describes what you want animated, and by “describe” I mean you just write the code to set the properties that you want animated. It’s that simple. UIView will automatically animate any of these seven properties:

· frame

· bounds

· center

· transform

· alpha

· backgroundColor

· contentStretch

If you want a view to move or change size, animate its center or frame. Want it to fade away? Animate its alpha property from 1.0 to 0.0. Want it to smoothly turn to the right? Animate its transform from the identity transform to a rotated transform. You can do any of these, or even multiple ones (changing the alpha and center), at the same time. It’s that easy.

There are a number of variations on the animateWithDuration() function that provide different effects and even more control. With these functions you can easily do all of the following:

· Delay the start of the animation

· Animate a transition between two views

· Specify custom animation options, such as the following:

· Start this animation at the currently animated value (interrupting another animation)

· Select a different animation curve (ease in, ease out, and so on)

· Choose a transition style (flip, page curl, cross dissolve, and so on)

· Redraw the view’s content during the animation

· Reverse the animation

· Provide code that will be executed when the animation is finished, which can include code to start another animation, making it easy to create an animated sequence

See the “Animating Views with Block Objects” section of the UIView documentation for a complete list of functions.

Run your app again and create a few shapes. Pretty cool, huh? (Alas, animation is one thing a static figure can't show you.) As you tap each add shape button, the new shape flies into your view, right from underneath your finger, like some crazy arcade game. If you're fast, you can get several going at the same time. And all it took was five lines of code.

What if you want to animate something other than these seven properties, or create animations that run in a loop, move in an arc, or run backward? For that, you'll need to dig into Core Animation and create your own animation objects. You can read about it in the Core Animation Programming Guide you'll find in Xcode's Documentation and API Reference.

OpenGL, Sprite Kit, Scene Kit, and Metal

Oops, I almost forgot about those other animation technologies. Modern computer systems, even a tiny one like an iPod, are actually two computer systems in one: a central processing unit (CPU) that does what computers do and a graphics processing unit (GPU) that does the work of putting bits on the screen. Each has its own set of processing units and memory. There’s not much overlap between the two, save for some pathways that allow data and instructions to be exchanged. While the CPU is fast at a lot of tasks, it’s not well suited to the kind of massively parallel computations needed to draw and animate graphics. That’s where the GPU really shines. GPUs have dozens, sometimes hundreds, of small, simple, but blazingly fast processing units. I like to think of the GPU as “my army of pixel minions.”

In the past two decades, most advances in computer graphics and animation have been made by shifting more and more of the data and computations from the CPU to the GPU. This requires a lot of coordination. Instead of having an object (like a UIView) that contains all of the data and logic to draw that view in the CPU, you create a hybrid solution where the CPU prepares the data needed to draw a view and hands that over to the GPU. The GPU can later render that view when told to. The CPU has only a reference to the image information (often called a texture), while the actual image data sits in the GPU.

An even more advanced solution is to write tiny programs, called shaders, that execute your code in the GPU. This is quite a different experience than programming in Swift, but the advantages are huge. All of the logic (data and computer code) needed to generate and draw figures and scenes is now entirely in the GPU. The CPU is just directing the show, telling the GPU what it needs to draw and when.

The results can be nothing less than stunning. If you’ve ever run a 3D flight simulator, shoot-’em-up, or adventure game, you’re looking at code that’s leveraging the power of the GPU.

The various animation technologies available to you on iOS can be broadly characterized by how much you (the programmer) are insulated from the details of what the GPU is doing. At one end is Core Animation, which you’ve already used. From your perspective, everything has been happening in the CPU. There’s been no mention of texture buffers, shaders, or pipeline scheduling. Core Animation did those things for you. While ignorance might be bliss, there are also a lot of possibilities if you’re willing to step into GPU-land. Here are your options.

Sprite Kit

Sprite Kit appeared in iOS 7, so it’s a relatively recent addition to iOS. Sprite Kit’s view classes are based on SKNode instead of UIView, but many of the properties and relationships will be familiar. You’ll find a lot of what you’ve learned about UIView works with SKNode. (For example, SKNode is also a UIResponder subclass, so event handlers work the same way.)

Sprite Kit and UIView are both designed to draw and animate 2D graphics. Sprite Kit, however, is specifically designed for continuous animation and interaction. Core Animation tends to work best with short, simple sequences. Sprite Kit is designed to keep images moving around all the time, which is why it makes a great choice for 2D games. You’ll create a Sprite Kit game in Chapter 14.

Sprite Kit also has an extensive physics simulation engine. You can program Sprite Kit nodes with a shape, mass, velocity, and collision rules, and then let Sprite Kit animate their behavior.

Note Cocoa Touch also has a physics simulation engine, called UIKit Dynamics, that you can use with UIView objects. I'll show you how in Chapter 16.

Scene Kit

Scene Kit is new to iOS 8, although it’s been around on OS X for a while. Like Sprite Kit, it’s designed for high-speed rendering and animation. The big difference is that Scene Kit is primarily focused on 3D modeling and animation.

Scene Kit is Apple's answer to OpenGL. It's designed on the same underlying technology but—like Sprite Kit—uses objects, structures, data, concepts, and programming languages that are familiar and convenient to Cocoa Touch programmers. Scene Kit and Metal are beyond the scope of this book, but Apple has excellent tutorials and guides to get you started.


Metal

New in iOS 8 is Metal. And, just like it sounds, Metal gives you direct access to all of the power of the GPU. Using Metal, it's possible to create 2D and 3D animations that exceed the performance of even Scene Kit and OpenGL. It's also terribly technical.

Metal can also be used for nongraphic computations. The number of floating-point operations per second (known as FLOPS) that a GPU can perform is staggering. There are lots of applications that can exploit that. Cryptography, field calculations, and linear math problems can be greatly accelerated by running hundreds of small calculations in parallel—exactly what a GPU is designed to do. Metal provides an API for running and managing those calculations.


OpenGL

OpenGL is short for Open Graphics Library. It's a cross-language, multiplatform API for 2D and 3D animation and is the granddaddy of all GPU control libraries. Before Sprite Kit, Scene Kit, and Metal, OpenGL was the only way to fully realize the power of the GPU in iOS. The flavor of OpenGL included in iOS is OpenGL for Embedded Systems (OpenGL ES). It's a trimmed-down version of OpenGL suitable for running on small computer systems, like iOS devices.

The advantage of OpenGL is that it’s an industry standard. There’s a lot of OpenGL knowledge and code that will run on iOS. This gives you access to a vast reservoir of source code and solutions. And the OpenGL work you do on iOS will translate to other platforms as well.

The downside is that OpenGL is another world. An OpenGL view is programmed using a special C-like computer language called the OpenGL Shading Language (GLSL). To use it, you write vertex and fragment shader programs.1 This is not like Swift at all. Even on the CPU side, OpenGL programming is typically written in C++—not Swift or even Objective-C. While some of the C APIs could be called from Swift, I would hesitate to write an OpenGL application entirely in Swift.

Having tried to steer you away from OpenGL, I must admit that it’s a powerful technology and it works well in iOS—once you get past the learning curve and language barriers. Start with the OpenGL ES Programming Guide for iOS that you’ll find in Xcode’s Documentation and API Reference. But be warned, you’d need to learn a lot of OpenGL fundamentals before much of that document will make any sense.

The Order of Things

While you still have the Shapely project open, I want you to play around with view object order a little bit. Subviews have a specific order, called their Z-order. It determines how overlapping views are drawn. It’s not rocket science. The back view draws first, and subsequent views draw on top of it (if they overlap). If the overlapping view is opaque, it obscures any views behind it. If portions of it are transparent, the views behind it “peek” through holes.
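If it helps, you can think of a view's subviews as an ordered array, with index 0 at the back and the last index in front. Here's a plain-Swift sketch of that mental model; the sendToBack function is my own toy, not the UIKit API, but it mirrors what sendSubviewToBack(_:) does to the order.

```swift
// Toy model of subview Z-order: index 0 is the back, the last index is the front.
var subviews = ["square", "circle", "triangle"]  // "triangle" draws on top

// Move the named view to the back of the Z-order.
func sendToBack(_ name: String, in views: [String]) -> [String] {
    guard let i = views.firstIndex(of: name) else { return views }
    var result = views
    result.remove(at: i)
    result.insert(name, at: 0)  // index 0 draws first, so it's behind everything
    return result
}

print(sendToBack("triangle", in: subviews))  // ["triangle", "square", "circle"]
```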

This is easier to see than explain, so add two more gesture recognizers to Shapely. Once again, go back to the addShape(_:) action function in ViewController.swift. Immediately after the code that attaches the other two gesture recognizers, insert this:

let dblTap = UITapGestureRecognizer(target: self, action: "changeColor:")
dblTap.numberOfTapsRequired = 2
shapeView.addGestureRecognizer(dblTap)

let trplTap = UITapGestureRecognizer(target: self, action: "sendShapeToBack:")
trplTap.numberOfTapsRequired = 3
shapeView.addGestureRecognizer(trplTap)

This code adds double-tap and triple-tap gesture recognizers, which call changeColor(_:) and sendShapeToBack(_:), respectively. Now add these two new functions:

func changeColor(gesture: UITapGestureRecognizer) {
    if gesture.state == .Ended {
        if let shapeView = gesture.view as? ShapeView {
            let currentColor = shapeView.color
            var newColor: UIColor!
            do {
                // Pick a random color; any palette will do here.
                switch arc4random_uniform(5) {
                    case 0: newColor = UIColor.redColor()
                    case 1: newColor = UIColor.greenColor()
                    case 2: newColor = UIColor.blueColor()
                    case 3: newColor = UIColor.yellowColor()
                    default: newColor = UIColor.purpleColor()
                }
            } while currentColor == newColor
            shapeView.color = newColor
        }
    }
}

func sendShapeToBack(gesture: UITapGestureRecognizer) {
    if gesture.state == .Ended {
        if let shapeView = gesture.view {
            view.sendSubviewToBack(shapeView)
        }
    }
}
The changeColor(_:) function is mostly for fun. It determines which color the shape is and picks a new color for it at random.
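The "pick again until it's different" pattern in changeColor(_:) is worth seeing in isolation. Here's a minimal plain-Swift sketch with an injectable picker in place of arc4random_uniform(), so the loop's logic is easy to test; the names are mine, not Shapely's.

```swift
// Pick an element from `palette` that differs from `current`.
// `pick` stands in for arc4random_uniform, so the retry logic is testable.
// Assumes the palette contains at least two distinct entries.
func differentColor(from current: String, palette: [String],
                    pick: () -> Int) -> String {
    while true {
        let next = palette[pick() % palette.count]
        if next != current { return next }
    }
}

// A deterministic "random" source for demonstration: 1, 2, 3, ...
var calls = 0
let nextIndex = { () -> Int in calls += 1; return calls }

let picked = differentColor(from: "red",
                            palette: ["red", "green", "blue"],
                            pick: nextIndex)
print(picked)  // green
```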

The sendShapeToBack(_:) function illustrates how views overlap. When you add a subview to a view (using UIView’s addSubview(_:) function), the new view goes on top. But that’s not your only choice. If view order is important, a number of functions will insert a subview at a specific index or immediately below or above another (known) view. You can also adjust the order of existing views using bringSubviewToFront(_:) and sendSubviewToBack(_:), which you’ll use here. Your triple-tap gesture will “push” that subview to the back, behind all of the other shapes.

To make this effect more obvious, make a minor alteration to your drawRect(_:) function in ShapeView.swift, by adding the code in bold:

override func drawRect(rect: CGRect) {
    let shapePath = path
    UIColor(white: 0.0, alpha: 0.4).setFill()
    shapePath.fill()
    color.setStroke()
    shapePath.stroke()
}

The new code fills the shape with black that’s 40 percent opaque (60 percent transparent). It will appear that your shapes have a “smoky” middle that darkens any shapes that are drawn behind it. This will make it easy to see how shapes are overlapping.

Run your app, create a few shapes, resize them, and then move them so they overlap, as shown in Figure 11-12.


Figure 11-12. Overlapping shapes with semitransparent fill

The shapes you added last are “on top” of the shapes you added first. Now try double-tapping a shape to change its color. I’ll wait.

I’m still waiting.

Is something wrong? Double-tapping doesn't seem to be changing the color of a shape? There are two probable causes. The first is that the changeColor(_:) function isn't being called. Test that by setting a breakpoint in Xcode: click once in the margin next to the first line of code in changeColor(_:). Double-tap the shape again. Xcode stops your app in the changeColor(_:) function, as shown in Figure 11-13, so that isn't the problem. Delete or disable the breakpoint and click the Continue button (in the debug control ribbon) to resume app execution.


Figure 11-13. Determining whether changeColor(_:) is being called

The other possible problem is that the color is being changed but isn’t showing up. You can test that by resizing the shape. If you double-tap a shape and then resize it, you’ll see the color change. OK, it’s the latter. Take a moment to fix this.

The problem is that the ShapeView object doesn’t know that it should redraw itself whenever its color property changes. You could add a shapeView.setNeedsDisplay() statement to changeColor(_:), but that’s a bit of a hack. I’m a strong believer that view objects should trigger their own redrawing when any properties that change their appearance are altered. That way, client code doesn’t have to worry about whether to call setNeedsDisplay(); the view will take care of that automatically.

Return to ShapeView.swift and edit the color property so it looks like this (new code in bold):

var color: UIColor = UIColor.whiteColor() {
    didSet {
        setNeedsDisplay()
    }
}

You’ve added a property observer to the color property. The didSet code will be executed every time someone sets the color property of the object. The new code calls setNeedsDisplay() to indicate that this view needs to be redrawn since, we assume, its color has changed.

Run the app and try the double-tap again. That’s much better!

Finally, you get to the part of the demonstration that rearranges the view. Overlap some views and then triple-tap one of the top views. Do you see the difference when the view is pushed to the back?

What is that, you say? The color changed when you triple-tapped it?

Oh, for Pete's sake, don't any of these gesture recognizer things work? Well, actually they do, but you've created an impossible situation. You've attached both a double-tap and a triple-tap gesture recognizer to the same view. The problem is that there's no coordination between the two. What's happening is that the double-tap recognizer fires as soon as you tap the second time, before the triple-tap recognizer gets a chance to see the third tap.

There are a number of ways to fix this bug, but the most common recognizer conflicts can be fixed with one line of code. Return to the ViewController.swift file, find the addShape(_:) function, and locate the code that adds the double- and triple-tap recognizers. Immediately after that, add this line:

dblTap.requireGestureRecognizerToFail(trplTap)
This message creates a dependency between the two recognizers. Now, the double-tap recognizer won’t fire unless the triple-tap recognizer fails. When you tap twice, the triple-tap recognizer will fail (it sees two taps but never gets a third). This creates all of the conditions needed for the double-tap recognizer to fire. If you triple-tap, however, the triple-tap recognizer is successful, which prevents the double-tap from firing. Simple.
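The failure-dependency rule can be reduced to a tiny decision function. This plain-Swift sketch is my own toy model of the outcome, not the UIKit state machine, but it captures who fires for a completed run of taps.

```swift
// Which recognizer fires for a completed run of `tapCount` taps, given the
// dependency "double-tap requires triple-tap to fail"?
func firingRecognizer(tapCount: Int) -> String? {
    if tapCount >= 3 { return "tripleTap" }  // triple succeeds; double is blocked
    if tapCount == 2 { return "doubleTap" }  // triple failed after only two taps
    return nil                               // a single tap fires neither
}

print(firingRecognizer(tapCount: 2)!)  // doubleTap
```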

Now run your app for the last time. Resize and overlap some shapes. Triple-tap a top shape to push it to the back and marvel at the results, shown in Figure 11-14.


Figure 11-14. Working Shapely app

Note Hit testing knows nothing about the transparent portions of your view. So even if you can see a portion of one view in the middle, or near the edge, of the view on top of it, you won't be able to interact with it because the touch events are going to the view on top. It would be possible to change that by overriding the hitTest(_:withEvent:) and pointInside(_:withEvent:) functions of your view, but that's more work than I want to demonstrate. (Hint: use the hit test functions of UIBezierPath to determine whether a point is inside, or outside, of the view's shape.)

By now you should have a firm grasp of how view objects get drawn, when, and why. You understand the context, Bézier paths, the coordinate system, color, a little about transparency, 2D transforms, and even how to create simple animations. That’s a lot.

One thing you haven't explored much is images. Let's get to that by going back in time.

Drawing Images

As you’ve seen, drawing the content of your view is pretty straightforward. You draw your view’s content into its context in response to a drawRect(_:) call. In addition to filling rectangles and drawing paths, you can also paint an image (UIImage) object into your context. You did this in the ColorView class in Chapter 8:


When you draw an image into a context, its pixels are copied into the context buffer. You can also have fun with the image, stretching, transforming, and blending its pixels, depending on the drawing mode and the context’s state.

But it’s possible to go the other direction too: capturing the pixels in a context and turning them into a UIImage object. This is called off-screen drawing, because you’re drawing into a Core Graphics context that isn’t destined to be displayed on the screen.

You also performed off-screen drawing in Chapter 8—I just glossed over the details. Let’s take a closer look at that code now.

if hsImage == nil {
    let brightness = colorModel!.brightness
    UIGraphicsBeginImageContextWithOptions(bounds.size, true, 1.0)
    let imageContext = UIGraphicsGetCurrentContext()
    for y in 0..<Int(bounds.height) {
        for x in 0..<Int(bounds.width) {
            let color = UIColor(hue: CGFloat(x)/bounds.width,
                         saturation: CGFloat(y)/bounds.height,
                         brightness: brightness/100.0,
                              alpha: 1.0)
            CGContextSetFillColorWithColor(imageContext, color.CGColor)
            CGContextFillRect(imageContext, CGRect(x: x, y: y,
                                                   width: 1, height: 1))
        }
    }
    hsImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}

Off-screen drawing begins with a call to either UIGraphicsBeginImageContext() or UIGraphicsBeginImageContextWithOptions(). Both initialize a new Core Graphics context with a size you specify. The latter function provides additional options to control the transparency and scale of the context.

Once created, you perform Core Graphics drawing, exactly as you would in your drawRect(_:) function. In fact, if you want to call your drawRect(_:) function and let it draw into your ad hoc context, that’s fine. I’ll show you an example of this in Chapter 13. (This is the one exception to the rule of never calling your own drawRect(_:) function.)

Once your context is drawn, you take a “snapshot” of the finished image and preserve that in a new UIImage object using the aptly named UIGraphicsGetImageFromCurrentImageContext() function. The returned UIImage can be retained in a property, converted to a file, copied to the clipboard, saved on the camera roll, drawn into another context, or whatever else you want to do with it.

Caution UIGraphicsGetImageFromCurrentImageContext() works only with a context created by UIGraphicsBeginImageContext() or UIGraphicsBeginImageContextWithOptions(). It can't be used with a normal "onscreen" context.

When you’re all done, don’t forget to call UIGraphicsEndImageContext(). This tears down the off-screen context and frees up its resources.

Advanced Graphics

Oh, there’s more. Before your head explodes from all of this graphics talk, let me briefly mention a few more techniques that could come in handy.


Text

You can also draw text directly into your custom view. The basic technique is the following:

1. Create a UIFont object that describes the font, style, and size of the text.

2. Set the drawing color.

3. Call the drawAtPoint(_:...) or drawInRect(_:...) function of any String object.

You can also get the size that a string would draw (so you can calculate how much room it will take up) using the various sizeWithFont(_:...) functions.

You’ll find examples of this in the Touchy app you wrote in Chapter 4 and later in the Wonderland app in Chapter 12. The drawAtPoint(_:...) and drawInRect(_:,...) functions are just wrappers for the low-level text drawing functions, which are described in the “Text” chapter of the Quartz 2D Programming Guide. If you need really precise control over text, read the Core Text Programming Guide.

Shadows, Gradients, and Patterns

You’ve learned to draw solid shapes and solid lines. Core Graphics is capable of a lot more. It can paint with tiled patterns and gradients, and it can automatically draw “shadows” behind the shapes you draw.

You accomplish this by creating various pattern, gradient, and shadow objects and then setting them in your current context, just as you would set the color. You can find copious examples and sample code in the Quartz 2D Programming Guide.

Blend Modes

Another property of your context, and many drawing functions, is the blend mode. A blend mode determines how the pixels of what’s being drawn affect the pixels of what’s already in the context. Normally, the blend mode is kCGBlendModeNormal. This mode paints opaque pixels, ignores transparent ones, and blends the colors of partially transparent ones.

There are some two dozen other blend modes. You can perform “multiplies” and “adds,” paint only over the opaque portions of the existing image, paint only in the transparent portion of the existing image, paint using “hard” or “soft” light, affect just the luminosity or saturation—the list goes on and on. You set the current blend mode using the CGContextSetBlendMode() function. A few drawing functions take a blend mode parameter.
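Under the normal blend mode, the arithmetic for each color component is the classic "source over" formula: result = source × alpha + destination × (1 − alpha). Here's that math as a tiny plain-Swift function (my own name, not a Core Graphics API), applied to the 40-percent-opaque black fill you added earlier.

```swift
// "Source over" blending for one color component; alpha is in 0...1.
func blendNormal(source: Double, alpha: Double, destination: Double) -> Double {
    return source * alpha + destination * (1 - alpha)
}

// Black (0.0) at 40 percent opacity painted over a white background (1.0):
print(blendNormal(source: 0.0, alpha: 0.4, destination: 1.0))  // 0.6
```

That 0.6 is why the overlapped regions look "smoky" rather than solid black: 60 percent of the underlying color still shows through.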

The available blend modes are documented, with examples, in two places in the Quartz 2D Programming Guide. For drawing operations (shapes and fills), refer to the “Setting Blend Modes” section of the “Paths” chapter. For examples of blending images, find the “Using Blend Modes with Images” section of the “Bitmap Images and Image Masks” chapter.

The Context Stack

All of these settings can start to make your context hard to work with. Let's say you need to draw a complex shape with a gradient fill, a drop shadow, a rotation, and a special blend mode. After you've set up all of those properties and drawn the shape, you just want to draw a simple line. Yikes! Do you now have to reset every one of those settings (drop shadow, transform, blend mode, and so on)?

Don’t panic—this is a common situation, and there’s a simple mechanism for dealing with it. Before you make a bunch of changes, call the CGContextSaveGState(_:) function to save almost everything about the current context. It takes a snapshot of your current context settings and pushes them onto a stack. You can then change whatever drawing properties you need (clipping region, line width, stroke color, and so on) and draw whatever you want.

When you’re done, call CGContextRestoreGState(_:), and all of the context’s setting will be immediately restored to what they were when you called CGContextSaveGState(_:). You can nest these calls as deeply as you need: save, change, draw, save, change, draw, restore, draw, restore, draw. It’s not uncommon, in complex drawing functions, to begin with a call to CGContextRestoreGState(_:) so that later portions of the function can retrieve an unadulterated context.


I think it’s time for a little celebration. What you’ve learned in this chapter is more than just some drawing mechanics. The process of creating your own views, drawing your own graphics, and making your own animations is like trading in your erector set for a lathe. You’ve just graduated from building apps using pieces that other people have made to creating anything you can imagine.

I just hope the next chapter isn’t too boring after all of this freewheeling graphics talk. It doesn’t matter how cool your custom views are; users still need to get around your app. The next chapter is all about navigation.


If there’s a big flaw in Shapely’s interface, it’s that it allows the user to make shapes that are so big they cover the entire interface and allow them to move shapes off the edge of the screen or cover the button views. Wouldn’t it be nice if Shapely would gently slide, or shrink, shapes the user has dragged so this doesn’t happen? I think so too.

Here’s your challenge: add code to Shapely so that shapes can’t be moved off the edge of the screen or cover the add shape buttons. There are a variety of ways to approach this problem. You could simply prevent the user from moving or resizing the shape too much during the drag or pinch gesture. Another solution would be to let them move it wherever they want and then gently “correct” it afterward. Whatever solution you choose, make it clear to the user what’s happening so the user doesn’t just think your app is broken.

You’ll find my solution in the Learn iOS Development Projects image Ch 11 image Shapely E1 folder. (Hint: I added a corralShape(_:) function to ViewController.swift).


1. Sprite Kit, Scene Kit, and Metal can also employ GPU shader programs, if you have them.