Chapter 16. Drawing with Core Graphics

Every application we’ve built so far has been constructed from views and controls that are part of the UIKit framework. You can do a lot with UIKit, and a great many applications can be constructed using only its predefined objects. Some visual elements, however, can’t be fully realized without going beyond what the UIKit stock components offer.

For example, sometimes an application needs to be able to do custom drawing. Fortunately, iOS includes the Core Graphics framework, which allows us to do a wide array of drawing tasks. In this chapter, we’ll explore this powerful graphics environment. We’ll also build sample applications that demonstrate key features of Core Graphics and explain its main concepts.

Paint the World

One of the main components of Core Graphics is a set of APIs called Quartz 2D. This is a collection of functions, data types, and objects designed to let you draw directly into a view or an image in memory. Quartz 2D treats the view or image that is being drawn into as a virtual canvas. It follows what’s called a painter’s model, which is just a fancy way of saying that the drawing commands are applied in much the same way that paint is applied to a canvas.

If a painter paints an entire canvas red, and then paints the bottom half of the canvas blue, the canvas will be half red and half either blue or purple (blue if the paint is opaque; purple if the paint is semitransparent). Quartz 2D’s virtual canvas works the same way. If you paint the whole view red, and then paint the bottom half of the view blue, you’ll have a view that’s half red and half either blue or purple, depending on whether the second drawing action was fully opaque or partially transparent. Each drawing action is applied to the canvas on top of any previous drawing actions.

Quartz 2D provides a variety of line, shape, and image drawing functions. Though easy to use, Quartz 2D is limited to two-dimensional drawing.

Now that you have a general idea of Quartz 2D, let’s try it out. We’ll start with the basics of how Quartz 2D works, and then build a simple drawing application with it.

The Quartz 2D Approach to Drawing

When using Quartz 2D (Quartz for short), you’ll usually add the drawing code to the view doing the drawing. For example, you might create a subclass of UIView and add Quartz function calls to that class’s drawRect: method. The drawRect: method is part of the UIView class definition and is called every time a view needs to redraw itself. If you put your Quartz code in drawRect:, that code will run each time the view redraws itself.

Quartz 2D’s Graphics Contexts

In Quartz, as in the rest of Core Graphics, drawing happens in a graphics context, usually referred to simply as a context. Every view has an associated context. You retrieve the current context, use that context to make various Quartz drawing calls, and let the context worry about rendering your drawing onto the view. You can think of this context as a sort of canvas. The system provides you with a default context where the contents will appear on the screen. However, it’s also possible to create a context of your own for doing drawing that you don’t want to appear immediately, but to save for later or use for something else. We’re going to be focusing mainly on the default context, which you can acquire with this line of code:

CGContextRef context = UIGraphicsGetCurrentContext();

Note Core Graphics is a C-language API, so you’ll see a lot of C syntax in the code examples in this chapter.

Once you’ve defined your graphics context, you can draw into it by passing the context to a variety of Core Graphics drawing functions. For example, this sequence will create a path describing a simple line, and then draw that path:

CGContextSetLineWidth(context, 4.0);
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextMoveToPoint(context, 10.0, 10.0);
CGContextAddLineToPoint(context, 20.0, 20.0);
CGContextStrokePath(context);

The first call specifies that any subsequent drawing commands that create the current path should be performed with a brush that is 4 points wide. Think of this as selecting the size of the brush you’re about to paint with. Until you call this function again with a different number, all lines will have a width of 4 points when drawn. You then specify that the stroke color should be red. In Core Graphics, two colors are associated with drawing actions:

· The stroke color is used in drawing lines and for the outline of shapes.

· The fill color is used to fill in shapes.

A context has a sort of invisible pen associated with it that does the line drawing. As drawing commands are executed, the movements of this pen form a path. When you call CGContextMoveToPoint(), you lift the virtual pen and move to the location you specify, without actually drawing anything. Whatever operation comes next, it will do its work relative to the point to which you moved the pen. In the earlier example, for instance, we first moved the pen to (10, 10). The next function call added a line from the current pen location (10, 10) to the specified location (20, 20), which became the new pen location.

When you draw in Core Graphics, you’re not drawing anything you can actually see—at least not immediately. You’re creating a path, which can be a shape, a line, or some other object; however, it contains no color or other features to make it visible. It’s like writing in invisible ink. Until you do something to make it visible, your path can’t be seen. So, the next step is to call the CGContextStrokePath() function, which tells Quartz to draw the path you’ve constructed. This function will use the line width and the stroke color we set earlier to actually color (or “paint”) the path and make it visible.
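To see how these pieces fit together, here’s a minimal sketch of a custom view that wraps the snippet above in a drawRect: override (the class name LineView is just for illustration and isn’t part of any project in this book):

#import <UIKit/UIKit.h>

@interface LineView : UIView
@end

@implementation LineView

// Called automatically whenever the view needs to be redrawn.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 4.0);
    CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    CGContextMoveToPoint(context, 10.0, 10.0);
    CGContextAddLineToPoint(context, 20.0, 20.0);
    CGContextStrokePath(context);   // now the line actually becomes visible
}

@end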

The Coordinate System

In the previous chunk of code, we passed a pair of floating-point numbers as parameters to CGContextMoveToPoint() and CGContextAddLineToPoint(). These numbers represent positions in the Core Graphics coordinate system. Locations in this coordinate system are denoted by their x and y coordinates, which we usually represent as (x, y). The upper-left corner of the context is (0, 0). As you move down, y increases. As you move to the right, x increases.

In the previous code snippet, we drew a diagonal line from (10, 10) to (20, 20), which would look like the one shown in Figure 16-1.


Figure 16-1. Drawing a line using Quartz 2D’s coordinate system

The coordinate system is one of the gotchas in drawing with Quartz on iOS because its vertical component is flipped from what many graphics libraries use and from the traditional Cartesian coordinate system (introduced by René Descartes in the 17th century). In other systems, such as OpenGL, or even the OS X version of Quartz, (0, 0) is in the lower-left corner; and as the y coordinate increases, you move toward the top of the context or view, as shown in Figure 16-2.


Figure 16-2. In many graphics libraries, including OpenGL, drawing from (10, 10) to (20, 20) would produce a line that looks like this instead of the line in Figure 16-1

To specify a point in the coordinate system, some Quartz functions require two floating-point numbers as parameters. Other Quartz functions ask for the point to be embedded in a CGPoint, a struct that holds two floating-point values: x and y. To describe the size of a view or other object, Quartz uses CGSize, a struct that also holds two floating-point values: width and height. Quartz also declares a data type called CGRect, which is used to define a rectangle in the coordinate system. A CGRect contains two elements: a CGPoint called origin, with x and y values that identify the top left of the rectangle; and a CGSize called size, which identifies the width and height of the rectangle.
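Here’s a quick sketch (not part of this chapter’s project) showing how these types relate to one another:

CGPoint point = CGPointMake(10.0, 20.0);            // x = 10, y = 20
CGSize size = CGSizeMake(100.0, 50.0);              // width = 100, height = 50
CGRect rect = CGRectMake(10.0, 20.0, 100.0, 50.0);  // origin (10, 20), size 100 x 50
// rect.origin is a CGPoint and rect.size is a CGSize
CGFloat rightEdge = CGRectGetMaxX(rect);            // 110
CGFloat bottomEdge = CGRectGetMaxY(rect);           // 70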

Specifying Colors

An important part of drawing is color, so understanding the way colors work on iOS is critical. UIKit provides an Objective-C class that represents a color: UIColor. You can’t use a UIColor object directly in Core Graphics calls. However, UIColor is just a wrapper around CGColor (which is what the Core Graphics functions require), and you can retrieve a CGColor reference from a UIColor instance by using its CGColor property, as we showed earlier, in this code snippet:

CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);

We created a UIColor instance using a convenience method called redColor, and then retrieved its CGColor property and passed that into the function.

A Bit of Color Theory for Your iOS Device’s Display

In modern computer graphics, any color displayed on the screen has its data stored in some way based on something called a color model. A color model (sometimes called a color space) is simply a way of representing real-world color as digital values that a computer can use. One common way to represent colors is to use four components: red, green, blue, and alpha. In Quartz, each of these components is represented as a CGFloat (a 4-byte floating-point value on 32-bit systems and an 8-byte value on 64-bit systems), and each should always hold a value between 0.0 and 1.0.

Note A floating-point value that is expected to be in the range 0.0 to 1.0 is often referred to as a clamped floating-point variable, or sometimes just a clamp.

The red, green, and blue components are fairly easy to understand, as they represent the additive primary colors, or the RGB color model (see Figure 16-3). If you add together the light of these three colors in equal proportions, the result will appear to the eye as either white or a shade of gray, depending on the intensity of the light mixed. Combining the three additive primaries in different proportions gives you a range of different colors, referred to as a gamut.


Figure 16-3. A simple representation of the additive primary colors that make up the RGB color model

In grade school, you probably learned that the primary colors are red, yellow, and blue. These primaries, which are known as the historical subtractive primaries, or the RYB color model, have little application in modern color theory and are almost never used in computer graphics. The color gamut of the RYB color model is much more limited than the RGB color model, and it also doesn’t lend itself easily to mathematical definition. As much as we hate to tell you that your wonderful third-grade art teacher, Mrs. Smedlee, was wrong about anything—well, in the context of computer graphics, she was. For our purposes, the primary colors are red, green, and blue, not red, yellow, and blue.

In addition to red, green, and blue, Quartz uses another color component, called alpha, which represents how transparent a color is. When drawing one color on top of another color, alpha is used to determine the final color that is drawn. With an alpha of 1.0, the drawn color is 100% opaque and obscures any colors beneath it. With any value less than 1.0, the colors below will show through and mix with the color above. If the alpha is 0.0, then this color will be completely invisible and whatever is behind it will show through completely. When an alpha component is used, the color model is sometimes referred to as the RGBA color model, although technically speaking, the alpha isn’t really part of the color; it just defines how the color will interact with other colors when it is drawn.
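To see the painter’s model and alpha working together, you could fill a view with opaque red and then fill its bottom half with blue at 50% alpha inside a drawRect: method; something like this sketch (the rectangles are arbitrary):

CGContextRef context = UIGraphicsGetCurrentContext();
// First pass: opaque red over the whole view.
CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
CGContextFillRect(context, self.bounds);
// Second pass: blue at 50% alpha over the bottom half; the overlap appears purple.
UIColor *halfBlue = [UIColor colorWithRed:0.0 green:0.0 blue:1.0 alpha:0.5];
CGContextSetFillColorWithColor(context, halfBlue.CGColor);
CGRect bottomHalf = CGRectMake(0.0, CGRectGetMidY(self.bounds),
                               CGRectGetWidth(self.bounds),
                               CGRectGetHeight(self.bounds) / 2);
CGContextFillRect(context, bottomHalf);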

Other Color Models

Although the RGB model is the most commonly used in computer graphics, it is not the only color model. Several others are in use, including the following:

· Hue, saturation, value (HSV)

· Hue, saturation, lightness (HSL)

· Cyan, magenta, yellow, black (CMYK), which is used in four-color offset printing

· Grayscale

To make matters even more confusing, there are different versions of some of these models, including several variants of the RGB color space.

Fortunately, for most operations, we don’t need to worry about the color model that is being used. We can just use the CGColor property of our UIColor objects, and in most cases, Core Graphics will handle any necessary conversions.

Color Convenience Methods

UIColor has a large number of convenience methods that return UIColor objects initialized to a specific color. In our previous code sample, we used the redColor method to initialize a color to red.

Fortunately, the UIColor instances created by most of these convenience methods all use the RGBA color model. The only exceptions are the predefined UIColors that represent grayscale values—such as blackColor, whiteColor, and darkGrayColor—which are defined only in terms of white level and alpha. In our examples here, we’re not using those, so we can assume RGBA for now.
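If you do want a grayscale color with a specific brightness, UIColor also provides colorWithWhite:alpha:, for example:

UIColor *lightGray = [UIColor colorWithWhite:0.8 alpha:1.0];  // 80% white, fully opaque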

If you need more control over color, instead of using one of those convenience methods based on the name of the color, you can create a color by specifying all four of the components. Here’s an example:

UIColor *red = [UIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:1.0];

Drawing Images in Context

Quartz allows you to draw images directly into a context. This is another example of an Objective-C class (UIImage) that you can use as an alternative to working with a Core Graphics data structure (CGImage). The UIImage class contains methods to draw its image into the current context. You’ll need to identify where the image should appear in the context using either of the following techniques:

· By specifying a CGPoint to identify the image’s upper-left corner

· By specifying a CGRect to frame the image, which is resized to fit the frame, if necessary

You can draw a UIImage into the current context, like so:

UIImage *image; // assuming this exists and points at a UIImage instance
CGPoint drawPoint = CGPointMake(100.0, 100.0);
[image drawAtPoint:drawPoint];
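If you’d rather have the image scaled to fill a particular rectangle (the second technique in the list above), UIImage also provides drawInRect:; a quick sketch, with an arbitrary frame:

CGRect imageFrame = CGRectMake(50.0, 50.0, 200.0, 150.0);  // hypothetical frame
[image drawInRect:imageFrame];  // the image is resized to fit this rectangle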

Drawing Shapes: Polygons, Lines, and Curves

Quartz provides a number of functions to make it easier to create complex shapes. To draw a rectangle or a polygon, you don’t need to calculate angles, draw lines, or do any math at all. You can just call a Quartz function to do the work for you. For example, to draw an ellipse, you define the rectangle into which the ellipse needs to fit and let Core Graphics do the work:

CGRect theRect = CGRectMake(0, 0, 100, 100);
CGContextAddEllipseInRect(context, theRect);
CGContextDrawPath(context, kCGPathFillStroke);

You use similar methods for rectangles. Quartz also provides methods that let you create more complex shapes, such as arcs and Bezier paths.
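Just to give you a taste (we won’t need this for the chapter’s project), here’s a quick sketch that adds a quarter-circle arc to the current path and strokes it, assuming context is a valid graphics context:

// Arc centered at (100, 100) with radius 50, sweeping from 0 to π/2 radians.
// The last parameter chooses the direction (0 = counterclockwise in the default coordinate space).
CGContextAddArc(context, 100.0, 100.0, 50.0, 0.0, M_PI_2, 0);
CGContextStrokePath(context);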

Note We won’t be working with complex shapes in this chapter’s examples. To learn more about arcs and Bezier paths in Quartz, check out the Quartz 2D Programming Guide in the iOS Dev Center at http://developer.apple.com/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/ or in Xcode’s online documentation.

Quartz 2D Tool Sampler: Patterns, Gradients, and Dash Patterns

Quartz offers quite an impressive array of tools. For example, Quartz supports filling polygons not only with solid colors, but also with gradients. And in addition to drawing solid lines, it can also use an assortment of dash patterns. Take a look at the screenshots in Figure 16-4, which are from Apple’s QuartzDemo sample code, to see a sampling of what Quartz can do for you.
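Dashed strokes, for instance, take only a couple of extra calls; here’s a small sketch (the dash lengths are arbitrary), again assuming context is a valid graphics context:

CGFloat dashLengths[] = {10.0, 5.0};                 // 10 points drawn, 5 points skipped
CGContextSetLineDash(context, 0.0, dashLengths, 2);  // phase 0, two entries in the pattern
CGContextMoveToPoint(context, 10.0, 100.0);
CGContextAddLineToPoint(context, 300.0, 100.0);
CGContextStrokePath(context);
CGContextSetLineDash(context, 0.0, NULL, 0);         // switch back to solid lines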


Figure 16-4. Some examples of what Quartz 2D can do, from the QuartzDemo sample project provided by Apple

Now that you have a basic understanding of how Quartz works and what it is capable of doing, let’s try it out.

The QuartzFun Application

Our next application is a simple drawing program (see Figure 16-5). We’re going to build this application using Quartz to give you a real feel for how the concepts we’ve been describing fit together.


Figure 16-5. Our chapter’s simple drawing application in action

The application features a bar across the top and one across the bottom, each with a segmented control. The control at the top lets you change the drawing color, and the one at the bottom lets you change the shape to be drawn. When you touch and drag, the selected shape will be drawn in the selected color. To minimize the application’s complexity, only one shape will be drawn at a time.

Setting Up the QuartzFun Application

In Xcode, create a new project using the Single View Application template and call it QuartzFun. The template has already provided us with an application delegate and a view controller. We’re going to be executing our custom drawing in a custom view, so we need to also create a subclass of UIView where we’ll do the drawing by overriding the drawRect: method.

With the QuartzFun folder selected (the folder that currently contains the app delegate and view controller files), press ⌘N to bring up the new file assistant, and then select Cocoa Touch Class from the iOS section. Name the new class QuartzFunView and make it a subclass of UIView.

We’re going to define some constants, as we’ve done in previous projects; but this time, our constants will be needed by more than one class. We’ll create a header file just for the constants.

Select the QuartzFun group again and press ⌘N to bring up the new file assistant. Select the Header File template from the iOS section, and name the file Constants.h.

We have two more files to go. If you look at Figure 16-5, you can see that we offer an option to select a random color. UIColor doesn’t have a method to return a random color, so we’ll need to write code to do that. We could put that code into our controller class, but because we’re savvy Objective-C programmers, we’ll put it into a category on UIColor.

Again, select the QuartzFun folder and press ⌘N to bring up the new file assistant. Select Objective-C File from the iOS heading and hit Next. When prompted, name the file Random, set the File Type to Category and the Class to UIColor, then press Next and save the file in the project folder.

Creating a Random Color

Let’s tackle the category first. Add the following lines to UIColor+Random.h, replacing everything that’s currently in the file:

#import <UIKit/UIKit.h>

@interface UIColor (Random)
+ (UIColor *)randomColor;
@end

Now, switch over to UIColor+Random.m and add this code:

#import <Foundation/Foundation.h>
#import "UIColor+Random.h"

@implementation UIColor (Random)

+ (UIColor *)randomColor {
    CGFloat red = (CGFloat)(arc4random() % 256)/255;
    CGFloat blue = (CGFloat)(arc4random() % 256)/255;
    CGFloat green = (CGFloat)(arc4random() % 256)/255;
    return [UIColor colorWithRed:red green:green blue:blue alpha:1.0f];
}

@end

This is fairly straightforward. For each color component, we use the arc4random() function to generate a random number. Each component of the color needs to be between 0.0 and 1.0, so we take the remainder after dividing the random value by 256, which gives us a number in the range 0 to 255, and then divide by 255. Why 255? Quartz 2D on iOS supports 256 different intensities for each of the color components, so using the number 255 ensures that we have a chance to randomly select any one of them. Finally, we use those three random components to create a new color. We set the alpha value to 1.0 so that all generated colors will be opaque.
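If you prefer to avoid the modulo arithmetic, arc4random_uniform() gives you the same 0 to 255 range a little more directly; either approach works here:

CGFloat red = arc4random_uniform(256) / 255.0;  // arc4random_uniform(256) returns 0 through 255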

Defining Application Constants

Next, we’ll define constants for each of the options that the user can select using the segmented controllers. Single-click Constants.h and replace everything in the file with the following code:

typedef NS_ENUM(NSInteger, ShapeType) {
    kLineShape = 0,
    kRectShape,
    kEllipseShape,
    kImageShape
};

typedef NS_ENUM(NSInteger, ColorTabIndex) {
    kRedColorTab = 0,
    kBlueColorTab,
    kYellowColorTab,
    kGreenColorTab,
    kRandomColorTab
};

To make our code more readable, we’ve declared two enumerated types using typedef and the NS_ENUM macro. One will represent the shape options available in our application; the other will represent the various color options available. The values these constants hold correspond to the segments of the two segmented controls we’ll create in our application.

Implementing the QuartzFunView Skeleton

Since we’re going to do our drawing in a subclass of UIView, let’s set up that class with everything it needs, except for the actual code to do the drawing, which we’ll add later. Single-click QuartzFunView.h and add the following code at the top:

#import <UIKit/UIKit.h>
#import "Constants.h"

@interface QuartzFunView : UIView

@property (assign, nonatomic) ShapeType shapeType;
@property (assign, nonatomic) BOOL useRandomColor;
@property (strong, nonatomic) UIColor *currentColor;

@end

First, we import the Constants.h header we just created so we can use our enumeration values. We then declare three properties—a ShapeType property to keep track of the shape that the user wants to draw, a boolean that will be used to keep track of whether the user is requesting a random color, and a UIColor property to keep track of the currently chosen color.

Switch over to QuartzFunView.m; we have several changes we need to make in this file. For starters, import the UIColor+Random.h header so that we can generate random colors by adding this line near the top, just below the other import:

#import "UIColor+Random.h"

Next, we need to create the class extension and add three more properties to it:

#import "UIColor+Random.h"

@interface QuartzFunView ()

@property (assign, nonatomic) CGPoint firstTouchLocation;
@property (assign, nonatomic) CGPoint lastTouchLocation;
@property (strong, nonatomic) UIImage *image;

@end

The first two properties will track the user’s finger as it drags across the screen. We’ll store the location where the user first touches the screen in firstTouchLocation, and we store the location of the user’s finger while dragging and when the drag ends in lastTouchLocation. Our drawing code will use these two variables to determine where to draw the requested shape. The image property holds the image to be drawn on the screen when the user selects the rightmost segment of the bottom segmented control (see Figure 16-6). These properties are in the class extension and not in the QuartzFunView.h file because they are for internal use only, so they are not part of the view’s public API.


Figure 16-6. Using QuartzFun to draw a UIImage

Now on to the implementation itself. The template gave us a method called initWithFrame:, but we won’t be using that. Keep in mind that object instances in nibs and storyboards are stored as archived objects, which is the same mechanism we used in Chapter 13 to archive our objects. As a result, when an object instance is loaded from a nib or a storyboard, neither init nor initWithFrame: is ever called. Instead, initWithCoder: is used, so this is where we need to add any initialization code. In our case, we’ll set the initial color value to red, initialize useRandomColor to NO, and load the image file that we’re going to draw later in the chapter. Delete the existing stub implementation of initWithFrame: and replace it with the following method:

- (id)initWithCoder:(NSCoder *)coder {
    if (self = [super initWithCoder:coder]) {
        _currentColor = [UIColor redColor];
        _useRandomColor = NO;
        _image = [UIImage imageNamed:@"iphone"];
    }
    return self;
}

Next, we need to add a few more methods to respond to the user’s touches. After initWithCoder:, insert the following three methods:

#pragma mark - Touch Handling

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if (self.useRandomColor) {
        self.currentColor = [UIColor randomColor];
    }
    UITouch *touch = [touches anyObject];
    self.firstTouchLocation = [touch locationInView:self];
    self.lastTouchLocation = [touch locationInView:self];
    [self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    self.lastTouchLocation = [touch locationInView:self];
    [self setNeedsDisplay];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    self.lastTouchLocation = [touch locationInView:self];
    [self setNeedsDisplay];
}

These three methods are inherited from UIView, which in turn inherits them from UIView’s parent, UIResponder. They can be overridden to find out where the user is touching the screen. They work as follows:

· touchesBegan:withEvent: is called when the user’s finger first touches the screen. In that method, we change the color if the user has selected a random color using the new randomColor method we added to UIColor earlier. After that, we store the current location so that we know where the user first touched the screen, and we indicate that our view needs to be redrawn by calling setNeedsDisplay on self.

· touchesMoved:withEvent: is continuously called while the user is dragging a finger on the screen. All we do here is store the new location in lastTouchLocation and indicate that the screen needs to be redrawn.

· touchesEnded:withEvent: is called when the user lifts the finger off the screen. Just as in the touchesMoved:withEvent: method, all we do is store the final location in the lastTouchLocation variable and indicate that the view needs to be redrawn.

Don’t worry if you don’t fully understand the rest of the code here. We’ll get into the details of working with touches and the specifics of the touchesBegan:withEvent:, touchesMoved:withEvent:, and touchesEnded:withEvent: methods in Chapter 18.

We’ll come back to this class once we have our application skeleton up and running. That drawRect: method, which is currently commented out, is where we will do this application’s real work, and we haven’t written that yet. Let’s finish setting up the application before we add our drawing code.

Creating and Connecting Outlets and Actions

Before we can start drawing, we need to add the segmented controls to our GUI, and then hook up the actions and outlets. Single-click Main.storyboard to set these things up.

The first order of business is to change the class of the view. In the document outline, expand the items for the scene and for the view controller it contains, and then single-click the View item. Press ⌥⌘3 to bring up the identity inspector and change the class from UIView to QuartzFunView.

Now use the object library to find a segmented control and drag it to the top of the view, just below the status bar. Place it somewhere near the center, as shown in Figure 16-7. You don’t need to be too accurate with this because we’ll shortly add a layout constraint that will center it.


Figure 16-7. Adding a segmented control for color selection

With the segmented control selected, bring up the Attributes Inspector and change the number of segments from 2 to 5. Double-click each segment in turn, changing its label to (from left to right) Red, Blue, Yellow, Green, and Random, in that order. Now let’s apply layout constraints. In the Document Outline, Control-drag from the segmented control item to the Quartz Fun View item, release the mouse and select Top Space to Top Layout Guide. Repeat the Control-drag operation and, this time, select Center Horizontally in Container. So far, we’ve pinned the segmented control horizontally and vertically—all that remains is to set its size. Click the Pin button at the bottom of the editing area, click the Width check box and enter 290, as shown in Figure 16-8. Click Add 1 constraint. In the Document Outline, select the View Controller icon, and then back in the storyboard editor, click the Resolve Auto Layout Issues button (the one to the right of the Pin button) and select Update Frames. The segmented control should now be properly sized and positioned.


Figure 16-8. Setting the size of the color selection segmented control

Bring up the assistant editor, if it’s not already open, and select ViewController.m from the jump bar. Now Control-drag from the segmented control in the Document Outline to the ViewController.m file on the right, into the space between the @interface and @end lines near the top that delineate the class extension. When your cursor is between the @interface and @end declarations, release the mouse to create a new outlet. Name the new outlet colorControl, and leave all the other options at their default values.

Next, let’s add an action. With ViewController.m still open in the assistant editor, select Main.storyboard again and Control-drag from the segmented control over to the view controller file, directly above the @end declaration at the bottom. This time, change the connection type to Action and the name to changeColor. The pop-up should default to using the Value Changed event, which is what we want. You should also set the type to UISegmentedControl.

Now let’s add a second segmented control. This one will be used to choose the shape to be drawn. Drag a segmented control from the library and drop it near the bottom of the view. Select the segmented control in the Document Outline, bring up the Attributes Inspector, and change the number of segments from 2 to 4. Now double-click each segment and change the titles of the four segments to Line, Rect, Ellipse, and Image, in that order. Now we need to add layout constraints to fix the size and position of the control, just like we did with the color selection control. Here’s the sequence of steps that you need:

1. In the Document Outline, Control-drag from the new segmented control item to the Quartz Fun View item, release the mouse, and select Bottom Space to Bottom Layout Guide.

2. Control-drag again and select Center Horizontally in Container.

3. Click the Pin button at the bottom of the editing area, and then click the Width check box and enter 220. Click Add 1 constraint.

4. In the Document Outline, select the View Controller icon, and then back in the editor, click the Resolve Auto Layout Issues button and select Update Frames.

Once you’ve done that, open ViewController.m in the assistant editor again, and then Control-drag from the new segmented control over to just above the @end line in ViewController.m to create an action. Change the connection type to Action, name the action changeShape, and change the type to UISegmentedControl.

The storyboard should now look like Figure 16-9. Our next task is to implement the action methods.


Figure 16-9. Both segmented controls are in place

Implementing the Action Methods

Save the storyboard and feel free to close the assistant editor. Now select ViewController.m. The first thing we need to do is to import our constants file, so that we have access to our enumeration values. We’ll also be interacting with our custom view, so we need to import its header as well. At the top of the file, immediately below the existing import statement, add the following lines of code:

#import "Constants.h"
#import "QuartzFunView.h"

Next, look for the stub implementation of changeColor: that Xcode created for you and add the following code to it:

- (IBAction)changeColor:(UISegmentedControl *)sender {
    QuartzFunView *funView = (QuartzFunView *)self.view;
    ColorTabIndex index = [sender selectedSegmentIndex];
    switch (index) {
        case kRedColorTab:
            funView.currentColor = [UIColor redColor];
            funView.useRandomColor = NO;
            break;
        case kBlueColorTab:
            funView.currentColor = [UIColor blueColor];
            funView.useRandomColor = NO;
            break;
        case kYellowColorTab:
            funView.currentColor = [UIColor yellowColor];
            funView.useRandomColor = NO;
            break;
        case kGreenColorTab:
            funView.currentColor = [UIColor greenColor];
            funView.useRandomColor = NO;
            break;
        case kRandomColorTab:
            funView.useRandomColor = YES;
            break;
        default:
            break;
    }
}

This is pretty straightforward. We simply look at which segment was selected and set the currentColor property accordingly, so that our class knows which color to use when drawing, unless random color has been selected. In that case, we set the useRandomColor property to YES, and a new color will be chosen each time the user starts a new drawing action (you’ll find this code in the touchesBegan:withEvent: method, which we added a few pages ago). Since all the drawing code will be in the view itself, we don’t need to do anything else in this method.

Next, look for the existing implementation of changeShape: and add the following code to it:

- (IBAction)changeShape:(UISegmentedControl *)sender {
    [(QuartzFunView *)self.view setShapeType:[sender selectedSegmentIndex]];
    self.colorControl.hidden = [sender selectedSegmentIndex] == kImageShape;
}

In this method, all we do is set the shape type based on the selected segment of the control. Do you recall the ShapeType enum? The four elements of the enum correspond to the four segments of the control at the bottom of the application view. We set the shape to be the same as the currently selected segment, and we also hide or show the color selection control based on whether the Image segment was selected.

Make sure that everything is in order by compiling and running your app. You won’t be able to draw shapes on the screen yet, but the segmented controls should work; and when you tap the Image segment in the bottom control, the color controls should disappear.

Now that we have everything working, let’s do some drawing.

Adding Quartz 2D Drawing Code

We’re ready to add the code that does the drawing. We’ll draw a line, some shapes, and an image. We’re going to work incrementally, adding a small amount of code and then running the app to see what that code does.

Drawing the Line

Let’s do the simplest drawing option first: drawing a single line. Select QuartzFunView.m and replace the commented-out drawRect: method with this one:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, self.currentColor.CGColor);

    switch (self.shapeType) {
        case kLineShape:
            CGContextMoveToPoint(context,
                                 self.firstTouchLocation.x,
                                 self.firstTouchLocation.y);
            CGContextAddLineToPoint(context,
                                    self.lastTouchLocation.x,
                                    self.lastTouchLocation.y);
            CGContextStrokePath(context);
            break;
        case kRectShape:
            break;
        case kEllipseShape:
            break;
        case kImageShape:
            break;
        default:
            break;
    }
}

We start things off by retrieving a reference to the current context, so we know where to draw:

CGContextRef context = UIGraphicsGetCurrentContext();

Next, we set the line width to 2.0, which means that any line that we stroke will be 2 points wide:

CGContextSetLineWidth(context, 2.0);

After that, we set the color for stroking lines. Since UIColor has a CGColor property, which is what this function needs, we pass the CGColor of our currentColor property into the function.

CGContextSetStrokeColorWithColor(context, self.currentColor.CGColor);

We use a switch to jump to the appropriate code for each shape type. As we mentioned earlier, we’ll start off with the code to handle kLineShape, get that working, and then we’ll add code for each shape in turn as we make our way through this example:

switch (self.shapeType) {
    case kLineShape:

To draw a line, we tell the graphics context to create a path starting at the first place the user touched. Remember that we stored that value in the touchesBegan:withEvent: method, so it will always reflect the starting point of the most recent touch or drag:

CGContextMoveToPoint(context,
                     self.firstTouchLocation.x,
                     self.firstTouchLocation.y);

Next, we draw a line from that spot to the last spot the user touched. If the user’s finger is still in contact with the screen, lastTouchLocation contains the finger’s current location. If the user is no longer touching the screen, lastTouchLocation contains the location of the user’s finger when it was lifted off the screen:

CGContextAddLineToPoint(context,
                        self.lastTouchLocation.x,
                        self.lastTouchLocation.y);

This function doesn’t actually draw the line—it just adds it to the context’s current path. To make the line appear on the screen, we need to stroke the path. This function will stroke the line we just drew, using the color and width we set earlier:

CGContextStrokePath(context);

After that, we finish the switch statement:

break;
case kRectShape:
break;
case kEllipseShape:
break;
case kImageShape:
break;
default:
break;
}

And that’s it for now. At this point, you should be able to compile and run the app once more. The Rect, Ellipse, and Image options won’t work, but you should be able to draw lines just fine using any of the color choices (see Figure 16-10).


Figure 16-10. The line-drawing part of our application is now complete. Here, we are drawing using the color red

Drawing the Rectangle and Ellipse

Let’s write the code to draw the rectangle and the ellipse at the same time, since Quartz implements both of these objects in basically the same way. Update your existing drawRect: method to match the listing below; the new code sets the fill color, builds a CGRect from the two touch locations, and fills in the kRectShape and kEllipseShape cases:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, self.currentColor.CGColor);

    CGContextSetFillColorWithColor(context, self.currentColor.CGColor);
    CGRect currentRect = CGRectMake(self.firstTouchLocation.x,
                                    self.firstTouchLocation.y,
                                    self.lastTouchLocation.x - self.firstTouchLocation.x,
                                    self.lastTouchLocation.y - self.firstTouchLocation.y);

    switch (self.shapeType) {
        case kLineShape:
            CGContextMoveToPoint(context,
                                 self.firstTouchLocation.x,
                                 self.firstTouchLocation.y);
            CGContextAddLineToPoint(context,
                                    self.lastTouchLocation.x,
                                    self.lastTouchLocation.y);
            CGContextStrokePath(context);
            break;
        case kRectShape:
            CGContextAddRect(context, currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kEllipseShape:
            CGContextAddEllipseInRect(context, currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kImageShape:
            break;
        default:
            break;
    }
}

Because we want to stroke the outlines of the rectangle and ellipse and also fill their interiors, we add a call to set the fill color using currentColor:

CGContextSetFillColorWithColor(context, self.currentColor.CGColor);

Next, we declare a CGRect variable. We do this here because both the rectangle and ellipse are drawn based on a rectangle. We’ll use currentRect to hold the rectangle described by the user’s drag. Remember that a CGRect has two members: origin and size. A function called CGRectMake() lets us create a CGRect by specifying the x, y, width, and height values, so we use that to make our rectangle.

The code to create the rectangle is pretty straightforward. We use the point stored in firstTouchLocation to create the origin. Next, we figure out the size by getting the difference between the two x values and the two y values. Note that, depending on the direction of the drag, one or both size values may end up with negative numbers, but that’s OK. A CGRect with a negative size will simply be rendered in the opposite direction of its origin point (to the left for a negative width; upward for a negative height):

CGRect currentRect = CGRectMake(self.firstTouchLocation.x,
                                self.firstTouchLocation.y,
                                self.lastTouchLocation.x - self.firstTouchLocation.x,
                                self.lastTouchLocation.y - self.firstTouchLocation.y);
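As a side note, if you ever need a rectangle with a positive width and height (for example, to intersect or compare rectangles), CGRectStandardize() will normalize one of these “backward” rectangles for you; a quick sketch:

CGRect dragRect = CGRectMake(200.0, 200.0, -150.0, -100.0);  // a drag up and to the left
CGRect normalized = CGRectStandardize(dragRect);
// normalized is (50, 100, 150, 100): the same area, expressed with positive width and height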

Once we have this rectangle defined, drawing either a rectangle or an ellipse is as easy as calling two functions: one to draw the rectangle or ellipse in the CGRect we defined, and the other to stroke and fill it:

case kRectShape:
    CGContextAddRect(context, currentRect);
    CGContextDrawPath(context, kCGPathFillStroke);
    break;
case kEllipseShape:
    CGContextAddEllipseInRect(context, currentRect);
    CGContextDrawPath(context, kCGPathFillStroke);
    break;

Compile and run your application. Try out the Rect and Ellipse tools to see how you like them. Don’t forget to change colors, including using a random color.

Drawing the Image

For our last trick, let’s draw an image. The 16 – Image folder contains two images named iphone.png and iphone@2x.png that you can add to your project’s Images.xcassets item—create an image set called iphone and drop both images into it. Now add the following code to your drawRect: method:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, self.currentColor.CGColor);

    CGContextSetFillColorWithColor(context, self.currentColor.CGColor);
    CGRect currentRect = CGRectMake(self.firstTouchLocation.x,
                                    self.firstTouchLocation.y,
                                    self.lastTouchLocation.x - self.firstTouchLocation.x,
                                    self.lastTouchLocation.y - self.firstTouchLocation.y);

    switch (self.shapeType) {
        case kLineShape:
            CGContextMoveToPoint(context,
                                 self.firstTouchLocation.x,
                                 self.firstTouchLocation.y);
            CGContextAddLineToPoint(context,
                                    self.lastTouchLocation.x,
                                    self.lastTouchLocation.y);
            CGContextStrokePath(context);
            break;
        case kRectShape:
            CGContextAddRect(context, currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kEllipseShape:
            CGContextAddEllipseInRect(context, currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kImageShape: {
            CGFloat horizontalOffset = self.image.size.width / 2;
            CGFloat verticalOffset = self.image.size.height / 2;
            CGPoint drawPoint = CGPointMake(self.lastTouchLocation.x - horizontalOffset,
                                            self.lastTouchLocation.y - verticalOffset);
            [self.image drawAtPoint:drawPoint];
            break;
        }
        default:
            break;
    }
}

Note Notice that, in the switch statement, we added curly braces around the code following case kImageShape:. That’s because the compiler won’t accept a variable declaration as the first statement after a case label unless it’s wrapped in its own block. These curly braces give the declarations their own scope and keep the compiler happy. We could also have declared horizontalOffset before the switch statement, but our chosen approach keeps the related code together.

First, we calculate the center of the image, since we want the image drawn centered on the point where the user last touched. Without this adjustment, the image would be drawn with its upper-left corner at the user’s finger, which would also be a valid option. We then make a new CGPoint by subtracting these offsets from the x and y values in lastTouchLocation:

CGFloat horizontalOffset = self.image.size.width / 2;
CGFloat verticalOffset = self.image.size.height / 2;
CGPoint drawPoint = CGPointMake(self.lastTouchLocation.x - horizontalOffset,
                                self.lastTouchLocation.y - verticalOffset);

Now we tell the image to draw itself. This line of code will do the trick:

[self.image drawAtPoint:drawPoint];

Build and run the application, select Image from the segmented control and check that you can place an image on the drawing canvas.

Optimizing the QuartzFun Application

Our application does what we want, but we should consider a bit of optimization. In our little application, you won’t notice a slowdown; however, in a more complex application that is running on a slower processor, you might see some lag.

The problem occurs in QuartzFunView.m, in the methods touchesMoved:withEvent: and touchesEnded:withEvent:. Both methods include this line of code:

[self setNeedsDisplay];

Obviously, this is how we tell our view that something has changed and that it needs to redraw itself. This code works, but it causes the entire view to be erased and redrawn, even if only a tiny bit has changed. We do want to erase the screen when we get ready to drag out a new shape, but we don’t want to clear the screen several times a second as we drag out our shape.

Rather than forcing the entire view to be redrawn many times during our drag, we can use setNeedsDisplayInRect: instead, a UIView method that marks just one rectangular portion of a view’s region as needing redisplay. By using this method, we can be more efficient by marking only the part of the view that is affected by the current drawing operation as needing to be redrawn.

We need to redraw not just the rectangle between firstTouchLocation and lastTouchLocation, but any part of the screen encompassed by the current drag. If the user touched the screen and then scribbled all over, but we redrew only the section between firstTouchLocation and lastTouchLocation, we would leave a lot of content drawn during previous redraws on the screen that we don’t want to remain.

The solution is to keep track of the entire area that has been affected by a particular drag in a CGRect instance variable. In touchesBegan:withEvent:, we reset that instance variable to just the point where the user touched. Then, in touchesMoved:withEvent: and touchesEnded:withEvent:, we use a Core Graphics function to get the union of the current rectangle and the stored rectangle, and we store the resulting rectangle. We also use it to specify which part of the view needs to be redrawn. This approach gives us a running total of the area impacted by the current drag.
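The Core Graphics function we’ll use for that is CGRectUnion(), which returns the smallest rectangle that contains both of its arguments; conceptually it behaves like this (the values are arbitrary):

CGRect a = CGRectMake(10.0, 10.0, 50.0, 50.0);
CGRect b = CGRectMake(40.0, 80.0, 30.0, 30.0);
CGRect running = CGRectUnion(a, b);
// running is (10, 10, 60, 100), the smallest rect enclosing both a and b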

At the moment, we calculate the current rectangle in the drawRect: method for use in drawing the ellipse and rectangle shapes. We’ll move that calculation into a new method, so that it can be used in all three places without repeating code. Ready? Let’s do it.

Make the following changes at the top of QuartzFunView.m:

@interface QuartzFunView ()

@property (assign, nonatomic) CGPoint firstTouchLocation;
@property (assign, nonatomic) CGPoint lastTouchLocation;
@property (strong, nonatomic) UIImage *image;
@property (readonly, nonatomic) CGRect currentRect;
@property (assign, nonatomic) CGRect redrawRect;

@end

We declare a CGRect called redrawRect that we will use to keep track of the area that needs to be redrawn. We also declare a read-only property called currentRect, which will return the rectangle that we were previously calculating in drawRect:. Add the accessor method for the currentRect property at the end of the file:

- (CGRect)currentRect {
    return CGRectMake(self.firstTouchLocation.x,
                      self.firstTouchLocation.y,
                      self.lastTouchLocation.x - self.firstTouchLocation.x,
                      self.lastTouchLocation.y - self.firstTouchLocation.y);
}

Now, in the drawRect: method, change all references to currentRect to self.currentRect, so that the code uses that new accessor we just created. Next, delete the lines of code where we calculated currentRect:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetLineWidth(context, 2.0);
    CGContextSetStrokeColorWithColor(context, self.currentColor.CGColor);

    CGContextSetFillColorWithColor(context, self.currentColor.CGColor);

    switch (self.shapeType) {
        case kLineShape:
            CGContextMoveToPoint(context,
                                 self.firstTouchLocation.x,
                                 self.firstTouchLocation.y);
            CGContextAddLineToPoint(context,
                                    self.lastTouchLocation.x,
                                    self.lastTouchLocation.y);
            CGContextStrokePath(context);
            break;
        case kRectShape:
            CGContextAddRect(context, self.currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kEllipseShape:
            CGContextAddEllipseInRect(context, self.currentRect);
            CGContextDrawPath(context, kCGPathFillStroke);
            break;
        case kImageShape: {
            CGFloat horizontalOffset = self.image.size.width / 2;
            CGFloat verticalOffset = self.image.size.height / 2;
            CGPoint drawPoint = CGPointMake(self.lastTouchLocation.x - horizontalOffset,
                                            self.lastTouchLocation.y - verticalOffset);
            [self.image drawAtPoint:drawPoint];
            break;
        }
        default:
            break;
    }
}

We also need to make some changes to touchesEnded:withEvent: and touchesMoved:withEvent:. We will recalculate the space impacted by the current operation and use that to indicate that only a portion of our view needs to be redrawn. Replace the existing touchesEnded:withEvent: and touchesMoved:withEvent: methods with these new versions:

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    self.lastTouchLocation = [touch locationInView:self];

    if (self.shapeType == kImageShape) {
        CGFloat horizontalOffset = self.image.size.width / 2;
        CGFloat verticalOffset = self.image.size.height / 2;
        self.redrawRect = CGRectUnion(self.redrawRect,
                                      CGRectMake(self.lastTouchLocation.x - horizontalOffset,
                                                 self.lastTouchLocation.y - verticalOffset,
                                                 self.image.size.width,
                                                 self.image.size.height));
    } else {
        self.redrawRect = CGRectUnion(self.redrawRect, self.currentRect);
    }
    self.redrawRect = CGRectInset(self.redrawRect, -2.0, -2.0);
    [self setNeedsDisplayInRect:self.redrawRect];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    self.lastTouchLocation = [touch locationInView:self];

    if (self.shapeType == kImageShape) {
        CGFloat horizontalOffset = self.image.size.width / 2;
        CGFloat verticalOffset = self.image.size.height / 2;
        self.redrawRect = CGRectUnion(self.redrawRect,
                                      CGRectMake(self.lastTouchLocation.x - horizontalOffset,
                                                 self.lastTouchLocation.y - verticalOffset,
                                                 self.image.size.width,
                                                 self.image.size.height));
    } else {
        self.redrawRect = CGRectUnion(self.redrawRect, self.currentRect);
    }
    [self setNeedsDisplayInRect:self.redrawRect];
}

Also add the following line to the touchesBegan:withEvent: method:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if (self.useRandomColor) {
        self.currentColor = [UIColor randomColor];
    }
    UITouch *touch = [touches anyObject];
    self.firstTouchLocation = [touch locationInView:self];
    self.lastTouchLocation = [touch locationInView:self];
    self.redrawRect = CGRectZero;
    [self setNeedsDisplay];
}

Build and run the application again to see the final result. You probably won’t see any difference, but with only a few additional lines of code, we reduced the amount of work necessary to redraw our view by getting rid of the need to erase and redraw any portion of the view that hasn’t been affected by the current drag. Being kind to your iOS device’s precious processor cycles like this can make a big difference in the performance of your applications, especially as they get more complex.

Note If you’re interested in a more in-depth exploration of Quartz 2D topics, you might want to take a look at Beginning iPad Development for iPhone Developers: Mastering the iPad SDK by Jack Nutting, Dave Wooldridge, and David Mark (Apress, 2010). This book covers a lot of Quartz 2D drawing. All the drawing code and explanations in that book apply to the iPhone as well as the iPad.

Drawing to a Close

In this chapter, we’ve really just scratched the surface of the drawing capabilities built into iOS. You should feel pretty comfortable with Quartz 2D now; and with some occasional references to Apple’s documentation, you can probably handle most any drawing requirement that comes your way.

Now it’s time to level up your graphics skills even further! Chapter 17 will introduce you to the Sprite Kit framework, introduced in iOS 7, which lets you do blazingly fast bitmap rendering for creating games or other fast-moving, interactive content.