
iOS Drawing: Practical UIKit Solutions (2014)

Chapter 3. Drawing Images

This chapter introduces image drawing. It surveys techniques for creating, adjusting, and retrieving image instances. You can do a lot with images, drawing, and iOS. You can render items into graphics contexts, building altered versions. You can produce thumbnail versions or extract portions of the original. You can create items that know how to properly stretch for buttons and others that work seamlessly with Auto Layout. Contexts provide ways to convert image instances to and from data representations. This enables you to apply image-processing techniques and incorporate the results into your interfaces. In this chapter, you’ll read about common image drawing challenges and discover the solutions you can use.

UIKit Images

UIKit images center around the UIImage class. It’s a powerful and flexible class that hides its implementation details, enabling you to perform many presentation tasks with a minimum of code. Its most common pattern involves loading data from a file and adding the resulting image to a UIImageView instance. Here’s an example of this:

UIImage *image = [UIImage imageNamed:@"myImage"];
myImageView.image = image;

You are not limited to loading images from external data, of course. iOS enables you to create your own images from code when and how you need to. Listing 3-1 shows a trivial example. It builds a new UIImage instance, using a color and size you specify. This produces a color swatch that is returned from the function.

To accomplish this, Listing 3-1 starts by establishing an image context. It then sets a color and fills the context using UIRectFill(). It concludes by retrieving and returning a new image from the context.

Listing 3-1 demonstrates the basic drawing skeleton. Where this function draws a colored rectangle, you can create your own Mona Lisas. Just supply your own custom drawing routines and set the target drawing size to the extent your app demands.

Listing 3-1 Creating a Custom Image

UIImage *SwatchWithColor(UIColor *color, CGFloat side)
{
    // Create image context (using the main screen scale)
    UIGraphicsBeginImageContextWithOptions(
        CGSizeMake(side, side), YES, 0.0);

    // Perform drawing
    [color setFill];
    UIRectFill(CGRectMake(0, 0, side, side));

    // Retrieve image and end the context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

Here are a few quick things you’ll want to remember about images:

■ You query an image for its extent by inspecting its size property, as in this example. The size is returned in points rather than pixels, so on Retina systems the underlying pixel dimensions may be double the values returned:

UIImage *swatch = SwatchWithColor(greenColor, 120);
NSLog(@"%@", NSStringFromCGSize(swatch.size));

■ You transform an image instance to PNG or JPEG data by using the UIImagePNGRepresentation() function or the UIImageJPEGRepresentation() function. These functions return NSData instances containing the compressed image data.

■ You can retrieve an image’s Quartz representation through its CGImage property. The UIImage class is basically a lightweight wrapper around Core Graphics or Core Image images. You need a CGImage reference for many Core Graphics functions. Because this property is not available for images created using Core Image, you must convert the underlying CIImage into a CGImage for use in Core Graphics.


UIImage supports TIFF, JPEG, GIF, PNG, DIB (that is, BMP), ICO, CUR, and XBM formats. You can load additional formats (like RAW) by using the ImageIO framework.
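As a quick sketch of the PNG and JPEG conversions mentioned above (the output paths here are hypothetical, and swatch is any UIImage instance):

```objc
// PNG: lossless, no quality parameter
NSData *pngData = UIImagePNGRepresentation(swatch);
[pngData writeToFile:@"/tmp/swatch.png" atomically:YES];

// JPEG: supply a compression quality from 0.0 (smallest) to 1.0 (best)
NSData *jpegData = UIImageJPEGRepresentation(swatch, 0.75f);
[jpegData writeToFile:@"/tmp/swatch.jpg" atomically:YES];
```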

Building Thumbnails

By producing thumbnails, you convert large image instances into small ones. Thumbnails enable you to embed versions into table cells, contact summaries, and other situations where images provide an important supporting role. Chapter 2 introduced functions that calculate destination aspect. Thumbnails provide a practical, image-oriented use case for these, as well as a good jumping-off point for simple image drawing.

You build thumbnails by creating an image context sized as desired, such as 100 by 100. Use drawInRect: to draw the source into the context. Finish by retrieving your new thumbnail.

UIImage *image = [UIImage imageNamed:@"myImage"];
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
[image drawInRect:destinationRect];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

The key to proper image thumbnails is aspect. Regardless of whether you fit or fill, you’ll want to preserve the image’s internal features without distortion. Figure 3-1 shows a photograph of a cliff. The image is taller than it is wide—specifically, it is 1,933 pixels wide by 2,833 pixels high. The thumbnail shown on the right was drawn without any concern for aspect. Because the source is taller than it is wide, the result is vertically squeezed.
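The fitting and filling math that Chapter 2’s functions rely on reduces to comparing the two axis scale factors. Here is a minimal C sketch of that idea — the Size type and function names are illustrative stand-ins, not the book’s actual helpers:

```c
#include <assert.h>

typedef struct { double width, height; } Size;

// Fitting: scale by the smaller axis ratio, so the whole
// source fits inside the target (possibly leaving margins)
static double AspectScaleFit(Size source, Size target)
{
    double sx = target.width / source.width;
    double sy = target.height / source.height;
    return (sx < sy) ? sx : sy;
}

// Filling: scale by the larger axis ratio, so the source
// covers the whole target (possibly cropping edges)
static double AspectScaleFill(Size source, Size target)
{
    double sx = target.width / source.width;
    double sy = target.height / source.height;
    return (sx > sy) ? sx : sy;
}
```

For the 1,933×2,833 cliff photo and a 100×100 target, fitting scales by 100/2833 (the taller axis governs), while filling scales by 100/1933 (the wider axis governs).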


Figure 3-1 The thumbnail on the right is vertically compressed, providing an inaccurate representation of the original image. Pay attention to image aspect when drawing thumbnails to avoid squeezing. Public domain images courtesy of the National Park Service.

It’s not a bad-looking result—in fact, if you didn’t have the reference on the left, you might not notice any issues at all—but it’s not an accurate result. Using an image like this helps showcase the perils of incorrect thumbnail production. Errors may not jump out at you for your test data, but they assuredly will for users when applied to more scale-sensitive data like human faces.

The images in Figure 3-2 demonstrate what proper thumbnails should look like. Instead of drawing directly into the target, they calculate rectangles that fill (left) or fit (right) the target area. Listing 3-2 reveals the difference in approach, creating a fitting or filling rectangle to draw to instead of using the target rect directly.


Figure 3-2 Filling (left) and fitting (right) create thumbnail representations that preserve aspect. Public domain images courtesy of the National Park Service.

Listing 3-2 Building a Thumbnail Image

UIImage *BuildThumbnail(UIImage *sourceImage,
    CGSize targetSize, BOOL useFitting)
{
    UIGraphicsBeginImageContextWithOptions(
        targetSize, NO, 0.0);

    // Establish the output thumbnail rectangle
    CGRect targetRect = SizeMakeRect(targetSize);

    // Create the source image's bounding rectangle
    CGRect naturalRect = (CGRect){.size = sourceImage.size};

    // Calculate fitting or filling destination rectangle
    // See Chapter 2 for a discussion of these functions
    CGRect destinationRect = useFitting ?
        RectByFittingRect(naturalRect, targetRect) :
        RectByFillingRect(naturalRect, targetRect);

    // Draw the new thumbnail
    [sourceImage drawInRect:destinationRect];

    // Retrieve and return the new image
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}

Extracting Subimages

Unlike thumbnails, which compress image data to a smaller version, subimages retrieve portions of an image at the same resolution as the original. Figure 3-3 shows a detail version of a ferret’s head, extracted from the image at the top left. The enlarged subimage highlights the extracted portion. As you see, you cannot add resolution; the result grows fuzzier as you zoom in.


Figure 3-3 The bottom image is a subimage extract of a portion from the original image. Public domain images courtesy of the National Park Service.

Listing 3-3 shows the function that created this subimage. It uses the simple Quartz function CGImageCreateWithImageInRect() to build its new image from the content of the original. Going with Quartz instead of UIKit in this case provides a ready-built single function to work with.

When you use this Core Graphics function, the rectangle is automatically adjusted on your behalf to integral pixel boundaries using CGRectIntegral(). It’s then intersected with the natural image rectangle, so no portion of the subrectangle falls outside the original image bounds. This saves you a lot of work. All you have to do is convert the CGImageRef returned by the function into a UIImage instance.

A drawback occurs when your reference rectangle is defined for a Retina system, and you’re extracting data in terms of Quartz coordinates. For this reason, I’ve included a second function in Listing 3-3, one that operates entirely in UIKit. Rather than convert rectangles between coordinate systems, this function assumes that the rectangle you’re referencing is defined in points, not pixels. This is important when you’re asking an image for its bounds and then building a rectangle around its center. That “center” for a Retina image may actually be closer to its top-left corner if you’ve forgotten to convert from points to pixels. By staying in UIKit, you sidestep the entire issue, making sure the bit of the picture you’re extracting is the portion you really meant.

Listing 3-3 Extracting Portions of Images

UIImage *ExtractRectFromImage(
    UIImage *sourceImage, CGRect subRect)
{
    // Extract image
    CGImageRef imageRef = CGImageCreateWithImageInRect(
        sourceImage.CGImage, subRect);
    if (imageRef != NULL)
    {
        UIImage *output = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        return output;
    }

    NSLog(@"Error: Unable to extract subimage");
    return nil;
}

// This is a little less flaky
// when moving to and from Retina images
UIImage *ExtractSubimageFromRect(
    UIImage *sourceImage, CGRect rect)
{
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 1);
    CGRect destRect = CGRectMake(
        -rect.origin.x, -rect.origin.y,
        sourceImage.size.width, sourceImage.size.height);
    [sourceImage drawInRect:destRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Converting an Image to Grayscale

Figure 3-4 shows an image of a black bear. The center of this picture is a grayscale representation of the bear image, with all color removed.


Figure 3-4 Color spaces enable you to convert images to grayscale. Public domain images courtesy of the National Park Service.

To create this figure, I drew the black bear image twice. The first time, I drew the entire image. The second time, I clipped the context to the inner rectangle and drew a grayscale conversion of the image. I drew the new version on top of the original and added a black border around the grayscale area.

Here are the steps involved:

// Clip the context
CGRect insetRect = RectInsetByPercent(destinationRect, 0.40);
UIRectClip(insetRect);

// Draw the grayscale version
[GrayscaleVersionOfImage(sourceImage) drawInRect:destinationRect];

// Outline the border between the two versions
[[UIColor blackColor] setStroke];
UIRectFrame(insetRect);

The details of the grayscale conversion appear in Listing 3-4. The GrayscaleVersionOfImage() function works by building a new context using the source image dimensions. This “device gray”-based context stores 1 byte per pixel and uses no alpha information. It forms a new drawing area that can only handle grayscale results.

In the real world, when you draw with a purple crayon, the mark that appears on any paper will be purple. A grayscale drawing context is like black-and-white photo film. No matter what color you draw to it, the results will always be gray, matching the brightness of the crayon you’re drawing with but discarding the hue.

As with all contexts, it’s up to you to retrieve the results and store the data to an image. Here, the function draws the source image and retrieves a grayscale version, using CGBitmapContextCreateImage(). This call is analogous to the UIGraphicsGetImageFromCurrentImageContext() function for bitmap contexts, with a bit more memory management required.

You end up with a UIImage instance that retains the original’s luminance values but not its colors. Quartz handles all the detail work on your behalf. You don’t need to individually calculate brightness levels from each set of source pixels. You just specify the characteristics of the destination (its size and its color space), and the rest is done for you. This is an extremely easy way to work with images.

Listing 3-4 Building a Grayscale Version of an Image

UIImage *GrayscaleVersionOfImage(UIImage *sourceImage)
{
    // Establish grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    if (colorSpace == NULL)
    {
        NSLog(@"Error creating grayscale color space");
        return nil;
    }

    // Extents are integers
    int width = sourceImage.size.width;
    int height = sourceImage.size.height;

    // Build context: one byte per pixel, no alpha
    CGContextRef context = CGBitmapContextCreate(
        NULL, width, height,
        8,     // 8 bits per component
        width, // bytes per row (1 byte per pixel)
        colorSpace,
        (CGBitmapInfo) kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
    {
        NSLog(@"Error building grayscale bitmap context");
        return nil;
    }

    // Replicate image using new color space
    CGRect rect = SizeMakeRect(sourceImage.size);
    CGContextDrawImage(context, rect, sourceImage.CGImage);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    // Return the grayscale image
    UIImage *output = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return output;
}

Watermarking Images

Watermarking is a common image drawing request. Originally, watermarks were faint imprints in paper used to identify the paper’s source. Today’s text watermarks are used differently. They’re added over an image either to prevent copying and reuse or to brand the material with a particular hallmark or provenance.

Listing 3-5 shows how to create the simple text watermark shown in Figure 3-5. Watermarking involves nothing more than drawing an image and then drawing something else—be it a string, a logo, or a symbol—over that image and retrieving the new version.


Figure 3-5 A text watermark overlays images with words, logos, or symbols. Public domain images courtesy of the National Park Service.

The example in Listing 3-5 draws its string (“watermark”) diagonally across the image source. It does this by rotating the context by 45 degrees. It uses a blend mode to highlight the watermark while preserving details of the original photo. Because this listing is specific to iOS 7, you must include a text color along with the font attribute when drawing the string. If you do not, the string will “disappear,” and you’ll be left scratching your head—as I did when updating this example.

Other common approaches are using diffuse white overlays with a moderate alpha level and drawing just a logo’s shadow (without drawing the logo itself) onto some part of an image. Path clipping helps with that latter approach; it is discussed in more detail in Chapter 5.

Each watermark approach changes the original image in different ways. As image data becomes changed or obscured, removing the watermark becomes more difficult.

Listing 3-5 Watermarking an Image

UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();

// Draw the original image into the context
CGRect targetRect = SizeMakeRect(targetSize);
UIImage *sourceImage = [UIImage imageNamed:@"pronghorn.jpg"];
CGRect imgRect = RectByFillingRect(
    SizeMakeRect(sourceImage.size), targetRect);
[sourceImage drawInRect:imgRect];

// Rotate the context
CGPoint center = RectGetCenter(targetRect);
CGContextTranslateCTM(context, center.x, center.y);
CGContextRotateCTM(context, M_PI_4);
CGContextTranslateCTM(context, -center.x, -center.y);

// Create a string
NSString *watermark = @"watermark";
UIFont *font =
    [UIFont fontWithName:@"HelveticaNeue" size:48];
CGSize size = [watermark sizeWithAttributes:
    @{NSFontAttributeName: font}];
CGRect stringRect = RectCenteredInRect(
    SizeMakeRect(size), targetRect);

// Draw the string, using a blend mode
CGContextSetBlendMode(context, kCGBlendModeDifference);
[watermark drawInRect:stringRect withAttributes:
    @{NSFontAttributeName: font,
      NSForegroundColorAttributeName: [UIColor whiteColor]}];

// Retrieve the new image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;

Retrieving Image Data

Although you can query an image for its PNG (UIImagePNGRepresentation()) or JPEG (UIImageJPEGRepresentation()) representations, these functions return data suitable for storing images to file formats. They include file headers and markers, internal chunks, and compression. The data isn’t meant for byte-by-byte operations. When you plan to perform image processing, you’ll want to extract byte arrays from your contexts. Listing 3-6 shows how.

This function draws an image into a context and then uses CGBitmapContextGetData() to retrieve the source bytes. It copies those bytes into an NSData instance and returns that instance to the caller.

Wrapping the output data into an NSData object enables you to bypass issues regarding memory allocation, initialization, and management. Although you may end up using this data in C-based APIs like Accelerate, you’re able to do so from an Objective-C viewpoint.


The byte processing discussed here is not meant for use with Core Image, which has its own set of techniques and practices.

Creating Contexts

You’ve already met the CGBitmapContextCreate() function several times in this book. It requires seven arguments that define how the context should be created. For the most part, you can treat this as boilerplate, with very little variation. Here’s a breakdown of the parameters and what values you supply:

■ void *data—By passing NULL to the first argument, you ask Quartz to allocate memory on your behalf. Quartz then performs its own management on that memory, so you don’t have to explicitly allocate or free it. You access the data by calling CGBitmapContextGetData(), which is what Listing 3-6 does in order to populate the NSData object it creates. As the get in the name suggests, this function reads the data but does not copy it or otherwise interfere with its memory management.

■ size_t width and size_t height—The next two arguments are the image width and height. The size_t type is defined on iOS as unsigned long. Listing 3-6 passes extents it retrieves from the source image’s size to CGBitmapContextCreate().

■ size_t bitsPerComponent—In UIKit, you work with 8-bit bytes (uint8_t), so you just pass 8 unless you have a compelling reason to do otherwise. The Quartz 2D programming guide lists all supported pixel formats, which can include 5-, 16-, and 32-bit components. A component refers to a single channel of information. ARGB data uses four components per pixel. Grayscale data uses one (without alpha channel data) or two (with).

■ size_t bytesPerRow—You multiply the image width by the number of bytes per pixel to calculate the number of bytes per row. Typically, you pass width * 4 for ARGB images and just width for straight (non-alpha) grayscale. Take special note of this value. It’s not just useful as a parameter; you also use it to calculate the byte offset of any (x, y) pixel in a byte array, namely (y * bytesPerRow + x * bytesPerPixel).

■ CGColorSpaceRef colorspace—You pass the color space Quartz should use to create the bitmap context, typically device RGB or device gray.

■ CGBitmapInfo bitmapInfo—This parameter specifies the style of alpha channel the bitmap uses. As a rule of thumb, use kCGImageAlphaPremultipliedFirst for color images and kCGImageAlphaNone for grayscale. If you’re curious, refer to the Quartz 2D programming guide for a more complete list of options. In iOS 7 and later, make sure to cast any alpha settings to (CGBitmapInfo) to avoid compiler issues.
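The bytesPerRow offset arithmetic can be checked with a few lines of plain C. PixelOffset() is a hypothetical helper used only for illustration, not a Quartz call:

```c
#include <assert.h>
#include <stddef.h>

// Byte offset of pixel (x, y) in a bitmap buffer.
// bytesPerPixel is 4 for ARGB data, 1 for plain grayscale.
static size_t PixelOffset(size_t x, size_t y,
    size_t bytesPerRow, size_t bytesPerPixel)
{
    return y * bytesPerRow + x * bytesPerPixel;
}
```

In a 4-pixel-wide ARGB bitmap (bytesPerRow = 16), pixel (2, 1) begins at byte 24; under kCGImageAlphaPremultipliedFirst, that first byte is the pixel’s alpha component.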

Listing 3-6 Extracting Bytes

NSData *BytesFromRGBImage(UIImage *sourceImage)
{
    if (!sourceImage) return nil;

    // Establish color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        NSLog(@"Error creating RGB color space");
        return nil;
    }

    // Establish context
    int width = sourceImage.size.width;
    int height = sourceImage.size.height;
    CGContextRef context = CGBitmapContextCreate(
        NULL, width, height,
        8,          // bits per component
        width * 4,  // bytes per row
        colorSpace,
        (CGBitmapInfo) kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
    {
        NSLog(@"Error creating context");
        return nil;
    }

    // Draw source into context bytes
    CGRect rect = (CGRect){.size = sourceImage.size};
    CGContextDrawImage(context, rect, sourceImage.CGImage);

    // Create NSData from bytes
    NSData *data =
        [NSData dataWithBytes:CGBitmapContextGetData(context)
            length:(width * height * 4)]; // bytes per image
    CGContextRelease(context);
    return data;
}


When responsiveness is of the essence, don’t wait for UIImage instances and their underlying CGImage representations to decompress. Enable caching of decompressed images by loading your CGImage instances via CGImageSource. This implementation, which is part of the ImageIO framework, enables you to specify an option to cache decompressed data (kCGImageSourceShouldCache). This results in much faster drawing performance, albeit at the cost of extra storage. For more details, see www.cocoanetics.com/2011/10/avoiding-image-decompression-sickness.
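A minimal sketch of that ImageIO loading path might look like this, assuming a bundled JPEG resource named myImage:

```objc
#import <ImageIO/ImageIO.h>

NSURL *url = [[NSBundle mainBundle] URLForResource:@"myImage"
                                     withExtension:@"jpg"];
CGImageSourceRef source =
    CGImageSourceCreateWithURL((__bridge CFURLRef) url, NULL);

// Ask ImageIO to cache the decompressed representation
NSDictionary *options =
    @{(__bridge id) kCGImageSourceShouldCache : @YES};
CGImageRef cgImage = CGImageSourceCreateImageAtIndex(
    source, 0, (__bridge CFDictionaryRef) options);

UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CFRelease(source);
```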

Creating Images from Bytes

Listing 3-7 reverses the bytes-to-image scenario, producing images from the bytes you supply. This time, you pass those bytes to CGBitmapContextCreate() as the first argument. Doing so tells Quartz not to allocate memory but to use the data you supply as the initial contents of the new context.

Beyond this one small change, the code in Listing 3-7 should look pretty familiar by now. It creates an image from the context, transforming the CGImageRef to a UIImage, and returns that new image instance.

Being able to move data in both directions—from image to data and from data to image—means that you can integrate image processing into your drawing routines and use the results in your UIViews.

Listing 3-7 Turning Bytes into Images

UIImage *ImageFromBytes(NSData *data, CGSize targetSize)
{
    // Check data
    int width = targetSize.width;
    int height = targetSize.height;
    if (data.length < (width * height * 4))
    {
        NSLog(@"Error: Got %d bytes. Expected %d bytes",
            (int) data.length, width * height * 4);
        return nil;
    }

    // Create a color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        NSLog(@"Error creating RGB colorspace");
        return nil;
    }

    // Create the bitmap context
    Byte *bytes = (Byte *) data.bytes;
    CGContextRef context = CGBitmapContextCreate(
        bytes, width, height,
        8,          // bits per component
        width * 4,  // 4 bytes per ARGB pixel
        colorSpace,
        (CGBitmapInfo) kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
    {
        NSLog(@"Error. Could not create context");
        return nil;
    }

    // Convert to image
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:imageRef];

    // Clean up
    CGContextRelease(context);
    CGImageRelease(imageRef);

    return image;
}

Drawing and Auto Layout

Under Auto Layout, the new constraint-based system in iOS and OS X, a view’s content plays as important a role in its layout as its constraints. This is expressed through each view’s intrinsic content size. This size describes the minimum space needed to express the full view content, without squeezing or clipping that data. It derives from the properties of the content that each view presents. In the case of images and drawings, it represents the “natural size” of an image in points.

When you include embellishments in your pictures such as shadows, sparks, and other items that extend beyond the image’s core content, that natural size may no longer reflect the true way you want Auto Layout to handle layout. In Auto Layout, constraints determine view size and placement, using a geometric element called an alignment rectangle. As you’ll see, UIKit calls help control that placement.

Alignment Rectangles

As developers create complex views, they may introduce visual ornamentation such as shadows, exterior highlights, reflections, and engraving lines. As they do, these features are often drawn onto images. Unlike frames, a view’s alignment rectangle should be limited to a core visual element. Its size should remain unaffected as new items are drawn onto the view. Consider Figure 3-6 (left). It depicts a view drawn with a shadow and a badge. When laying out this view, you want Auto Layout to focus on aligning just the core element—the blue rectangle—and not the ornamentation.


Figure 3-6 A view’s alignment rectangle (center) refers strictly to the core visual element to be aligned, without embellishments.

The center image highlights the view’s alignment rectangle. This rectangle excludes all ornamentation, such as the drop shadow and badge. It’s the part of the view you want considered when Auto Layout does its work. Contrast this with the rectangle shown in the right image. This version includes all the visual ornamentation, extending the view’s frame beyond the area that should be considered for alignment.

The right-hand rectangle encompasses all the view’s visual elements, including the shadow and badge. These ornaments could potentially throw off a view’s alignment features (for example, its center, bottom, and right) if they were considered during layout.

By working with alignment rectangles instead of frames, Auto Layout ensures that key information like a view’s edges and center are properly considered during layout. In Figure 3-7, the adorned view is perfectly aligned on the background grid. Its badge and shadow are not considered during placement.


Figure 3-7 Auto Layout considers the view’s alignment rectangle when laying it out as centered in its parent. The shadow and badge don’t affect its placement.

Alignment Insets

Drawn art often contains hard-coded embellishments like highlights, shadows, and so forth. Because these predrawn effects take up little memory and render efficiently, many developers bake them directly into their images.

To accommodate extra visual elements, use imageWithAlignmentRectInsets:. You supply a UIEdgeInsets structure, and UIImage returns the inset-aware image. Insets define offsets from the top, left, bottom, and right of some rectangle. You use them to describe how far to move in (using positive values) or out (using negative values) from the rectangle’s edges. These insets ensure that the alignment rectangle is correct, even when drawn embellishments appear within the image.

typedef struct {
CGFloat top, left, bottom, right;
} UIEdgeInsets;

The following snippet accommodates a 20-point shadow by insetting the alignment rect on the bottom and right:

UIImage *image = [[UIImage imageNamed:@"Shadowed.png"]
imageWithAlignmentRectInsets:UIEdgeInsetsMake(0, 0, 20, 20)];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];

It’s a bit of a pain to construct these insets by hand, especially if you may later update your graphics. When you know the alignment rect and the overall image bounds, you can instead automatically calculate the edge insets you need to pass to this method. Listing 3-8 defines a simple inset builder. It determines how far the alignment rectangle lies from each edge of the parent rectangle and returns a UIEdgeInsets structure representing those values. Use this function to build insets from the intrinsic geometry of your core visuals.

Listing 3-8 Building Edge Insets from Alignment Rectangles

UIEdgeInsets BuildInsets(
    CGRect alignmentRect, CGRect imageBounds)
{
    // Ensure alignment rect is fully within source
    CGRect targetRect =
        CGRectIntersection(alignmentRect, imageBounds);

    // Calculate insets
    UIEdgeInsets insets;
    insets.left = CGRectGetMinX(targetRect) -
        CGRectGetMinX(imageBounds);
    insets.right = CGRectGetMaxX(imageBounds) -
        CGRectGetMaxX(targetRect);
    insets.top = CGRectGetMinY(targetRect) -
        CGRectGetMinY(imageBounds);
    insets.bottom = CGRectGetMaxY(imageBounds) -
        CGRectGetMaxY(targetRect);

    return insets;
}
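The inset arithmetic can be sanity-checked outside UIKit with a plain-C mirror of the same calculation. Rect and EdgeInsets here are hypothetical stand-ins for CGRect and UIEdgeInsets:

```c
#include <assert.h>

typedef struct { double x, y, width, height; } Rect;
typedef struct { double top, left, bottom, right; } EdgeInsets;

// Distance from each edge of the image bounds inward
// to the matching edge of the alignment rectangle
static EdgeInsets InsetsForAlignmentRect(Rect alignment, Rect bounds)
{
    EdgeInsets insets;
    insets.left   = alignment.x - bounds.x;
    insets.top    = alignment.y - bounds.y;
    insets.right  = (bounds.x + bounds.width) -
                    (alignment.x + alignment.width);
    insets.bottom = (bounds.y + bounds.height) -
                    (alignment.y + alignment.height);
    return insets;
}
```

For a 120-by-120 image whose core art occupies the top-left 100-by-100 region (leaving a 20-point shadow along the bottom and right), the result matches the {0, 0, 20, 20} insets used in the earlier shadow snippet.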

Drawing Images with Alignment Rects

Figure 3-8 demonstrates alignment rectangle layout in action. The image in the top view does not express alignment preferences. Therefore, the entire image (the translucent gray square, including the bunny, the shadow, and the star) is centered within the parent. The bottom screenshot uses alignment insets. These use just the bunny’s bounding box (the inner outline) as reference. Here, the centering changes. It’s now the bunny and not the overall image that’s centered in the parent. The extra drawing area and other image details no longer contribute to that placement.


Figure 3-8 You can integrate alignment rectangle awareness into your drawing routines to ensure that Auto Layout aligns them properly. In the top image, the large square outlined in black is used for alignment. In the bottom, the inset gray rectangle around the bunny is the alignment rect.

Listing 3-9 details the drawing and alignment process behind the bottom screenshot. It creates an image with the gray background, the green bunny, the red badge, and the outline showing the bunny’s bounds. As part of this process, it adds a shadow to the bunny and moves the badge to the bunny’s top-right corner. Shadows and badges are fairly common items used in normal iOS visual elements, even in iOS 7’s slimmed down, clean aesthetic.

In the end, however, the part that matters is specifying how to align the output image. To do that, this code retrieves the bounding box from the bunny’s UIBezierPath. This path is independent of the badge, the background, and the drawn shadow. By applying the edge insets returned by Listing 3-8, Listing 3-9 creates an image that aligns around the bunny and only the bunny.

This is a really powerful way to draw ornamented graphics using Quartz 2D and UIKit that work seamlessly within Auto Layout.

Listing 3-9 Drawing with Alignment in Mind

UIBezierPath *path;

// Begin the image context
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect targetRect = SizeMakeRect(targetSize);

// Fill the background of the image and outline it
[backgroundGrayColor setFill];
path = [UIBezierPath bezierPathWithRect:targetRect];
[path fill];
[path strokeInside:2];

// Fit bunny into an inset, offset rectangle
CGRect destinationRect =
    RectInsetByPercent(SizeMakeRect(targetSize), 0.25);
destinationRect.origin.x = 0;
UIBezierPath *bunny =
    [[UIBezierPath bunnyPath] pathWithinRect:destinationRect];

// Add a shadow to the context and draw the bunny
CGContextSetShadow(context, CGSizeMake(6, 6), 4);
[greenColor setFill];
[bunny fill];

// Outline bunny's bounds, which form the alignment rect
CGRect alignmentRect = bunny.bounds;
path = [UIBezierPath bezierPathWithRect:alignmentRect];
[darkGrayColor setStroke];
[path strokeOutside:2];

// Add a red badge at the bunny's top-right corner
UIBezierPath *badge = [[UIBezierPath badgePath]
    pathWithinRect:CGRectMake(0, 0, 40, 40)];
badge = [badge pathMoveCenterToPoint:
    RectGetTopRight(alignmentRect)];
[[UIColor redColor] setFill];
[badge fill];

// Retrieve the initial image
UIImage *initialImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Build and apply the insets
UIEdgeInsets insets =
    BuildInsets(alignmentRect, targetRect);
UIImage *image =
    [initialImage imageWithAlignmentRectInsets:insets];

// Return the updated image
return image;

Building Stretchable Images

Resizable drawings enable you to create images whose edges are not scaled or resized when adjusted to fit a view. They preserve image details, ensuring that you resize only the middle of an image. Figure 3-9 shows an example of a stretched image. It acts as a button background, where its middle extent may grow or shrink, depending on the text assigned to the button. To ensure that only the middle is resized, a set of cap insets specify the off-limits edges. These caps ensure that the center region (the large purple expanse) can grow and shrink without affecting the rounded corners, with their lined artwork.


Figure 3-9 You can create stretchable images for use with buttons.

To get a sense of how cap insets work, compare Figure 3-9 with Figure 3-10. Figure 3-10 shows the same image assigned to the same button, but without the cap insets. Its edges and corners stretch along with the rest of the button image, producing a visually confusing result. The clean presentation in Figure 3-9 becomes a graphic mess in Figure 3-10.


Figure 3-10 Without caps, the button proportionately stretches all parts of the image.

Listing 3-10 shows the code behind the button image. It builds a 40-by-40 image context, draws two rounded rectangles into that context, and fills the background with a solid color. It retrieves this base image, but before returning it, Listing 3-10 callsresizableImageWithCapInsets:. This method creates a new version of the image that uses those cap insets. This one line makes the difference between the button you see in Figure 3-9 and the one you see in Figure 3-10.
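Applying the result is a one-liner. In this sketch, BuildStretchableButtonImage() is a hypothetical wrapper around the code in Listing 3-10:

```objc
UIImage *stretchable = BuildStretchableButtonImage();
[button setBackgroundImage:stretchable
                  forState:UIControlStateNormal];
```

The button then stretches only the center region of the background as its text grows or shrinks.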

Although iOS 7 introduced borderless, trimmed-away buttons, the traditional button design demonstrated in this listing continues to play a role in third-party applications. This was emphasized in Apple’s own WWDC sessions, where demos included many skeuomorphic examples. You’ll read more about button effects, including glosses, textures, and other un-Ive enhancements, in Chapter 7. The age of custom buttons is not dead; they just need more thought than they used to.


Performance introduces another compelling reason for using stretchable images where possible. They are stretched on the GPU, providing a boost over manual stretching. The same is true for pattern colors, which you’ll read about later in this chapter. Filling with pattern colors executes faster than manually drawing that pattern into a context.

Listing 3-10 Images and Stretching

// bgColor, SizeMakeRect(), and the strokeInside: category
// are helper routines defined elsewhere in the book
UIImage *buildButtonImage()
{
    CGSize targetSize = CGSizeMake(40, 40);
    UIGraphicsBeginImageContextWithOptions(
        targetSize, NO, 0.0);

    // Create the outer rounded rectangle
    CGRect targetRect = SizeMakeRect(targetSize);
    UIBezierPath *path =
        [UIBezierPath bezierPathWithRoundedRect:targetRect
            cornerRadius:12];

    // Fill and stroke it
    [bgColor setFill];
    [path fill];
    [path strokeInside:2];

    // Create the inner rounded rectangle
    UIBezierPath *innerPath =
        [UIBezierPath bezierPathWithRoundedRect:
            CGRectInset(targetRect, 4, 4) cornerRadius:8];

    // Stroke it
    [innerPath strokeInside:1];

    // Retrieve the initial image
    UIImage *baseImage =
        UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Create a resizable version, with respect to
    // the primary corner radius
    UIImage *image =
        [baseImage resizableImageWithCapInsets:
            UIEdgeInsetsMake(12, 12, 12, 12)];
    return image;
}
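In practice, a resizable image like this serves as a button background. Here is a minimal usage sketch, assuming Listing 3-10’s code is wrapped in a function (call it buildButtonImage()) and that myButton is an existing UIButton instance:

```objectivec
// Assign the cap-inset image as a stretchable button background.
// As the button's bounds change, only the region inside the caps resizes.
UIImage *buttonArt = buildButtonImage();
[myButton setBackgroundImage:buttonArt
                    forState:UIControlStateNormal];
```

Note that resizableImageWithCapInsets: tiles the center region by default. Starting in iOS 6, the resizableImageWithCapInsets:resizingMode: variant lets you request UIImageResizingModeStretch to scale the center instead, which is usually what you want for solid-fill button art like this.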

Rendering a PDF

Figure 3-11 shows an image built by rendering a PDF page into an image context. This task is made a little complicated by the Core Graphics API, which does not offer a “draw page in rectangle” option.


Figure 3-11 Rendering PDF pages into contexts requires a few tweaks.

Several stages are involved in the process, as detailed in the next listings. You start by opening a PDF document, as demonstrated in Listing 3-11. The CGPDFDocumentCreateWithURL() function returns a new document reference, which you can use to extract and draw pages.

When you have that document, follow these steps:

1. Check the number of pages in the document by calling CGPDFDocumentGetNumberOfPages().

2. Retrieve each page by using CGPDFDocumentGetPage(). The page count starts with page 1, not page 0, as you might expect.

3. Make sure you release the document with CGPDFDocumentRelease() after you finish your work.
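These three steps can be sketched as a simple loop, using the pdfRef document reference opened in Listing 3-11 (the page-handling body is a placeholder):

```objectivec
// Iterate through the document's pages.
// PDF page numbering is 1-based, not 0-based.
size_t pageCount = CGPDFDocumentGetNumberOfPages(pdfRef);
for (size_t pageNumber = 1; pageNumber <= pageCount; pageNumber++)
{
    CGPDFPageRef pageRef = CGPDFDocumentGetPage(pdfRef, pageNumber);
    // ... draw or inspect the page here ...
}

// Balance the Create call when you finish with the document
CGPDFDocumentRelease(pdfRef);
```

Page references returned by CGPDFDocumentGetPage() are owned by the document, so you do not release them individually; releasing the document invalidates them all.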

Listing 3-11 Opening a PDF Document

// Open PDF document
NSString *pdfPath = [[NSBundle mainBundle]
    pathForResource:@"drawingwithquartz2d" ofType:@"pdf"];
CGPDFDocumentRef pdfRef = CGPDFDocumentCreateWithURL(
    (__bridge CFURLRef)[NSURL fileURLWithPath:pdfPath]);
if (pdfRef == NULL)
{
    NSLog(@"Error loading PDF");
    return nil;
}

// ... use PDF document here


Upon grabbing CGPDFPageRef pages, Listing 3-12 enables you to draw each one into an image context, using a rectangle you specify. What’s challenging is that the PDF functions draw using the Quartz coordinate system (with the origin at the bottom left), and the destinations you’ll want to draw to are in the UIKit coordinate system (with the origin at the top left).

To handle this, you have to play a little hokey-pokey with your destination and with the context. First, you flip the entire context vertically, to ensure that the PDF page draws from the top down. Next, you transform your destination rectangle, so the page draws at the right place.

You might wonder why you need to perform this double transformation, and the answer is this: After you flip your coordinate system to enable top-to-bottom Quartz drawing, your destination rectangle that used to be, for example, at the top right, will now draw at the bottom right. That’s because it’s still living in a UIKit world. After you adjust the drawing context’s transform, your rectangle must adapt to that transform, as it does about halfway down Listing 3-12. If you skip this step, your PDF output appears at the bottom right instead of the top right, as you intended.

I encourage you to try this out on your own by commenting out the rectangle transform step and testing various destination locations. What you’ll discover is an important lesson in coordinate system conformance. Flipping the context doesn’t just “fix” the Quartz drawing; it affects all position definitions. You’ll see this same problem pop up in Chapter 7, when you draw text into UIKit paths. In that case, you won’t be working with just rectangles. You must vertically mirror the entire path in the drawing destination.

When you’ve performed coordinate system adjustments, you calculate a proper-fitting rectangle for the page. As Chapter 2 discusses, fitting rectangles retain aspect while centering content within a destination. This requires one last context adjustment, so the drawing starts at the top left of that fitting rectangle. Finally, you draw. The CGContextDrawPDFPage() function renders page contents into the active context.

The DrawPDFPageInRect() function is meant only for drawing to UIKit image destinations. It cannot be used when drawing PDF pages into PDF contexts. It depends on retrieving a UIImage from the context in order to perform its geometric adjustments. To adapt this listing for more general use, you need to supply both a context parameter (rather than retrieve one from UIKit) and a context size, for the vertical transformation.

Listing 3-12 Drawing PDF Pages into Destination Rectangles

void DrawPDFPageInRect(CGPDFPageRef pageRef,
    CGRect destinationRect)
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (context == NULL)
    {
        NSLog(@"Error: No context to draw to");
        return;
    }

    // Retrieve the image backing the current context,
    // which supplies the context's size in points
    UIImage *image =
        UIGraphicsGetImageFromCurrentImageContext();

    // Flip the context to Quartz space
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
    transform = CGAffineTransformTranslate(
        transform, 0.0f, -image.size.height);
    CGContextConcatCTM(context, transform);

    // Flip the rect, which remains in UIKit space
    CGRect d = CGRectApplyAffineTransform(
        destinationRect, transform);

    // Calculate a rectangle to draw to.
    // CGPDFPageGetBoxRect() returns a rectangle
    // representing the page's dimensions.
    // AspectScaleFit() and RectByFittingInRect() are
    // fitting helpers from Chapter 2.
    CGRect pageRect =
        CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);
    CGFloat drawingAspect = AspectScaleFit(pageRect.size, d);
    CGRect drawingRect = RectByFittingInRect(pageRect, d);

    // Draw the page outline (optional)

    // Adjust the context so the page draws within
    // the fitting rectangle (drawingRect)
    CGContextTranslateCTM(context,
        drawingRect.origin.x, drawingRect.origin.y);
    CGContextScaleCTM(context, drawingAspect, drawingAspect);

    // Draw the page
    CGContextDrawPDFPage(context, pageRef);
}
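As noted earlier, adapting this routine for non-UIKit contexts means passing the context and its point size in explicitly, rather than querying UIKit for a backing image. A sketch of how the flip transform changes under that design (the function name is hypothetical; the remainder of the body is unchanged from Listing 3-12):

```objectivec
// Hypothetical general-purpose variant: the caller supplies the
// context and its size in points, so no UIImage retrieval is needed
void DrawPDFPageInContext(CGContextRef context, CGSize contextSize,
    CGPDFPageRef pageRef, CGRect destinationRect)
{
    // Flip vertically using the supplied size
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
    transform = CGAffineTransformTranslate(
        transform, 0.0f, -contextSize.height);
    CGContextConcatCTM(context, transform);

    // ... flip the rect, fit, and draw as in Listing 3-12 ...
}
```

Because the caller owns both the context and its geometry, this variant works equally well for bitmap contexts and PDF contexts.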

Bitmap Context Geometry

When working with bitmap contexts, you can retrieve certain details, like the context’s height and width, directly from the context reference. The CGBitmapContextGetHeight() and CGBitmapContextGetWidth() functions report integer dimensions. You cannot, however, retrieve a context’s scale—the way the number of pixels in the context relates to the number of points for the output image. That’s because scale lives a bit too high up in abstraction. As you’ve seen throughout this book, scale is set by UIKit’s UIGraphicsBeginImageContextWithOptions() and is not generally a feature associated with direct Quartz drawing.

For that reason, Listing 3-12 doesn’t use the bitmap context functions to retrieve size. This is important because transforms operate in points, not pixels. If you applied a Retina-scaled flip to your context, you’d be off, mathematically, by a factor of two. Your 200-point translation would use a 400-pixel value returned by CGBitmapContextGetHeight().

Point-versus-pixel calculations aside, the context functions are not without use. They can retrieve the current context transformation matrix (CGContextGetCTM()), bytes per row (CGBitmapContextGetBytesPerRow()), and alpha level (CGBitmapContextGetAlphaInfo()), among other options. Search for “ContextGet” in the Xcode Documentation Organizer for more context-specific functions.
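Here is a short sketch of querying those details from a bitmap context you create directly in Quartz (the 100-by-100 dimensions and RGBA layout are illustrative choices, not anything mandated by the chapter):

```objectivec
// Create a 100x100-pixel RGBA bitmap context and query its geometry.
// Quartz allocates the backing buffer because we pass NULL for data.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(
    NULL, 100, 100, 8, 100 * 4, colorSpace,
    (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

size_t width = CGBitmapContextGetWidth(bitmapContext);   // in pixels
size_t height = CGBitmapContextGetHeight(bitmapContext); // in pixels
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmapContext);
CGAffineTransform ctm = CGContextGetCTM(bitmapContext);

CGContextRelease(bitmapContext);
```

Because these values are pixel counts, remember to divide by the screen scale before feeding them into point-based transform math, as discussed above.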

Building a Pattern Image

Pattern images are a lovely little UIKit gem. You craft a pattern in code or load an image from a file and assign it as a “color” to any view, as in this snippet:

self.view.backgroundColor =
[UIColor colorWithPatternImage:[self buildPattern]];

Figure 3-12 shows a simple pattern created by Listing 3-13. This listing uses common techniques: rotating and mirroring shapes and alternating major and minor sizes.


Figure 3-12 You can create UIColor instances built from pattern images.

If you’d rather create your patterns externally and import them into apps, check out the wonderful design tool Patterno ($19.99 at the Mac App Store). It enables you to build repeating textures that naturally tile.

Scalable Vector Graphics (SVG) patterns are widely available. SVG uses an XML standard to define vector-based graphics for the Web, providing another avenue for pattern design. Sites like http://philbit.com/svgpatterns offer simple but visually pleasing possibilities. PaintCode ($99.99 at the Mac App Store) converts SVG snippets to standard UIKit drawing, providing an easy pathway from SVG sources to UIKit implementation. Just paste the SVG directives into TextEdit, rename to .svg, and import the result into PaintCode. PaintCode translates the SVG to Objective-C, which you can add to your Xcode projects.


Unfamiliar with Clarus the DogCow? Google for Apple Technote 31 or visit http://en.wikipedia.org/wiki/Dogcow. Moof!

Listing 3-13 Building Clarus the DogCow

// customPinkColor, SizeMakeRect(), BuildMoofPath(), and the
// path utilities are helper routines defined elsewhere in the book
- (UIImage *) buildPattern
{
    // Create a small tile
    CGSize targetSize = CGSizeMake(80, 80);
    CGRect targetRect = SizeMakeRect(targetSize);

    // Start a new image
    UIGraphicsBeginImageContextWithOptions(
        targetSize, NO, 0.0);

    // Fill background with pink
    [customPinkColor set];
    UIRectFill(targetRect);

    // Draw a couple of dogcattle in gray
    [[UIColor grayColor] set];

    // First, bigger with interior detail in the top-left.
    // Read more about Bezier path objects in Chapters 4 and 5
    CGRect weeRect = CGRectMake(0, 0, 40, 40);
    UIBezierPath *moof = BuildMoofPath();
    FitPathToRect(moof, weeRect);
    RotatePath(moof, M_PI_4);
    [moof fill];

    // Then smaller, flipped around, and offset down and right
    RotatePath(moof, M_PI);
    OffsetPath(moof, CGSizeMake(40, 40));
    ScalePath(moof, 0.5, 0.5);
    [moof fill];

    // Retrieve and return the pattern image
    UIImage *image =
        UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}


Summary

This chapter surveys several common challenges you encounter when drawing images to iOS contexts. You read about basic image generation, converting between bytes and images, adding inset adjustments, and performing PDF drawing. Before you move on, here are a few final thoughts about this chapter:

Alignment insets are your friends. They enable you to build complex visual presentations while simplifying their integration with Auto Layout. They will save you a ton of detail work if you keep the core visuals in mind while doing your drawing. From there, all you have to do is add those images into standard image views and let Auto Layout do the rest of the work.
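For instance, if your drawing adds decoration (a shadow or glow, say) outside the core visual, alignment insets tell Auto Layout to align against the core rectangle rather than the full bitmap. A minimal sketch, assuming 8 points of decorative padding on each side and an existing myImageView instance:

```objectivec
// Exclude 8 points of decorative padding per side from
// the image's alignment rectangle for Auto Layout purposes
UIImage *drawnImage = [UIImage imageNamed:@"decoratedArt"];
UIImage *alignedImage = [drawnImage
    imageWithAlignmentRectInsets:UIEdgeInsetsMake(8, 8, 8, 8)];
myImageView.image = alignedImage;
```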

When you pull the bytes out from a drawing or photograph, it’s really easy to work with them directly or to move them to a digital signal processing (DSP) library to create visual effects. The routines you saw in this chapter (image to bytes and bytes to image) provide an important bridge between DSP processing and UIKit image presentation.
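The image-to-bytes direction can be sketched as follows: render the image into a bitmap context whose buffer you allocate yourself, then hand that buffer to your processing code. This stands in for the chapter’s own routine; sourceImage is assumed to be an existing UIImage.

```objectivec
// Extract RGBA bytes from a UIImage by redrawing it into a
// bitmap context backed by a buffer we allocate ourselves
CGImageRef imageRef = sourceImage.CGImage;
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bytesPerRow = width * 4;
unsigned char *bytes = calloc(height * bytesPerRow, 1);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(
    bytes, width, height, 8, bytesPerRow, colorSpace,
    (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Drawing the image fills the buffer with premultiplied RGBA data
CGContextDrawImage(context,
    CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);

// ... hand `bytes` to your DSP routines, then free(bytes) ...
```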

When moving between UIKit drawing and Quartz drawing, don’t overlook the difference in coordinate systems. The PDF example in this chapter avoids not one but two separate gotchas. To use UIKit-sourced rectangles in your Quartz-flipped drawing system, you must transform those rectangles to match the reversed coordinates.