
CHAPTER 5


HTML5 Video and Canvas

Up to this point in the book, video has been treated as a somewhat static medium. As you have discovered, a video is nothing more than a series of images rendered on screen at a specific rate, and the user’s only interaction with the video is to click a control and/or read a transcript or subtitles. Other than that, there really is nothing more for the user to do than to sit back and enjoy the show. With a bit of JavaScript and the use of the HTML5 canvas, you can actually make this passive medium interactive and, more importantly, transform it into a creative medium. It all starts with that all-important concept: imaging.

When it comes to drawing images on the screen, HTML5 can render two imaging types: SVG (Scalable Vector Graphics) or Raster Bitmap graphics. SVG images are, to be brief, composed of points, lines, and fills in space. They are commonly driven by code and because of their nature they are device independent, meaning they can be sized and relocated on the screen with no loss in resolution.

Raster Graphics, on the other hand, are pixel-based. They are essentially chained to the pixels in the screen. With the advent of HTML5 video and the canvas element, the screen becomes exactly what the name implies: a blank canvas where you can draw anything from straight lines to complex graphics.

While the SVG environment is a declarative graphics environment dealing with vector-based shapes, the HTML canvas provides a script-based graphics environment revolving around pixels or bitmaps. In comparison with SVG, it is faster to manipulate data entities in canvas, since it is easier to get directly to individual pixels. On the other hand, SVG provides a DOM (Document Object Model) and has an event model not available to canvas. What this should tell you is that applications that need interactive graphics will typically choose SVG, while applications that do a lot of image manipulation will more typically reach for canvas. The available transforms and effects in both are similar, and the same visual results can be achieved with both, but with different programming effort and potentially different performance.

When comparing performance between SVG and canvas, typically the drawing of a lot of objects will eventually slow down SVG, which has to maintain all the references to the objects, while for canvas it’s just more pixels to light up. So, when you have a lot of objects to draw and it’s not really important that you continue to have access to the individual objects but are just after pixel drawings, you should use canvas.

In contrast, the size of the drawing area of canvas has a huge impact on the speed of a <canvas>, since it has to draw more pixels. So, when you have a large area to cover with a smaller number of objects, you should use SVG.

Note that the choice between canvas and SVG is not fully exclusive. It is possible to bring a canvas into an SVG image by converting it to an image using a function called toDataURL(). This can be used, for example, when drawing a fancy and repetitive background for an SVG image. It may often be more efficient to draw that background in the canvas and include it in the SVG image through the toDataURL() function, which explains why the focus of this chapter is canvas.
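
As a rough sketch of how that bridge works (the element IDs and the striped background here are purely illustrative):

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200"
     xmlns:xlink="http://www.w3.org/1999/xlink">
  <image id="bg" width="200" height="200"/>
</svg>
<canvas id="c" width="200" height="200" style="display: none;"></canvas>
<script>
  var canvas = document.getElementById("c");
  var ctx = canvas.getContext("2d");
  // draw a simple repetitive striped background into the hidden canvas
  for (var i = 0; i < 200; i += 20) {
    ctx.fillStyle = (i % 40 === 0) ? "#ccc" : "#eee";
    ctx.fillRect(i, 0, 20, 200);
  }
  // hand the rendered pixels to the SVG <image> as a data URL
  document.getElementById("bg").setAttributeNS(
      "http://www.w3.org/1999/xlink", "xlink:href", canvas.toDataURL());
</script>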

Like SVG, the canvas is, at its heart, a visually oriented medium—it doesn’t do anything with audio. Of course, you can combine background music with an awesome graphical display by simply using the <audio> element as part of your pages. An amazing example of how audio and canvas come together can be found at 9elements (http://9elements.com/io/?p=153). The project is a stunning visualization of Twitter chatter through the use of colored and animated circles on a background of music.

If you already have experience with JavaScript, canvas should not be too difficult to understand. It’s almost like a JavaScript library with drawing functionality. It supports, in particular, the following function categories:

· Canvas handling: creating a drawing area, a 2D context, saving and restoring state.

· Drawing basic shapes: rectangles, paths, lines, arcs, Bezier, and quadratic curves.

· Drawing text: drawing fill text and stroke text, and measuring text.

· Using images: creating, drawing, scaling, and slicing images.

· Applying styles: colors, fill styles, stroke styles, transparency, line styles, gradients, shadows, and patterns.

· Applying transformations: translating, rotating, scaling, and transformation matrices.

· Compositing: clipping and overlap drawing composition.

· Applying animations: execute drawing functions over time by associating time intervals and timeouts.

To start, let’s work with video in canvas.

Video in Canvas

The first step to understanding how to work with video in canvas is to pull the pixel data out of a <video> element and “paint” it on a canvas element. Like any great artist facing a blank canvas, we need to draw an image onto the canvas.

drawImage( )

The drawImage() function accepts a video element as well as an image or a canvas element. Listing 5-1 shows how to use it directly on a video. You can follow along with the examples at http://html5videoguide.net.

Listing 5-1. Introducing the Video Pixel Data into a Canvas

<video controls autoplay height="240" width="360">
  <source src="video/HelloWorld.mp4" type="video/mp4">
  <source src="video/HelloWorld.webm" type="video/webm">
</video>

<canvas width="400" height="300" style="border: 1px solid black;">
</canvas>
<script>
  var video, canvas, context;
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  video.addEventListener("timeupdate", paintFrame, false);

  function paintFrame() {
    context.drawImage(video, 0, 0, 160, 120);
  }
</script>

The HTML markup is simple. It contains only the <video> element and the <canvas> element into which we are painting the video data.

The JavaScript is pretty uncomplicated. The addEventListener is the key. Every time the video’s currentTime updates (the timeupdate event), the paintFrame function draws the captured pixels onto the canvas using the drawImage() method, which belongs to the getContext("2d") object. Those pixels are drawn, as shown in Figure 5-1, in the upper left corner of the <canvas> element (0,0) and fill a space that is 160 × 120. All browsers support this functionality.


Figure 5-1. Painting a video into a canvas with every timeupdate event

You will notice that the video is playing at a higher framerate than the canvas. This is because the timeupdate event does not fire for every frame of the video. It fires every few frames—roughly every 100–250 ms. There currently is no function to allow you to reliably grab every frame. We can, however, create a painting loop that is updated every time the screen refreshes using the requestAnimationFrame() function. In typical browsers, that’s about 60 times a second and given that most modern videos are about 30 frames a second, it should get most if not all of the frames.

In the next example, we use the play event to start the painting loop when the user starts playback and run it until the video is paused or ended. Another option would be to use the canplay or loadeddata events to start the display independently of a user interaction.

Also, let’s make the next example a bit more interesting. Since we now know how to capture and draw a video frame to the canvas, let’s start to play with that data. In Listing 5-2, we shift the frames by 10 pixels on each of the x- and y-axes each time the canvas redraws.

Listing 5-2. Painting Video Frames at Different Offsets into the Canvas Using requestAnimationFrame

<video controls autoplay height="240" width="360">
  <source src="video/HelloWorld.mp4" type="video/mp4">
  <source src="video/HelloWorld.webm" type="video/webm">
</video>
<canvas width="400" height="300" style="border: 1px solid black;">
</canvas>
<script>
  var video, canvas, context;
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  video.addEventListener("play", paintFrame, false);
  var x = 0, xpos = 10;
  var y = 0, ypos = 10;

  function paintFrame() {
    context.drawImage(video, x, y, 160, 120);
    if (x > 240) xpos = -10;
    if (x < 0) xpos = 10;
    x = x + xpos;
    if (y > 180) ypos = -10;
    if (y < 0) ypos = 10;
    y = y + ypos;
    if (video.paused || video.ended) {
      return;
    }
    requestAnimationFrame(paintFrame);
  }
</script>

The result, as shown in Figure 5-2, can be rather interesting. The video itself seems to be a paintbrush on the canvas, moving around and painting video frames into what seem to be random locations. In actual fact, if you carefully follow the paintFrame() function, this is not exactly the case. The size of each image is set to 160 × 120, and the motion of the video is determined by the xpos and ypos values. Every successive frame is offset from the previous one by 10 pixels to the right and 10 pixels down until it hits an edge of the canvas, at which point the offset is negated.


Figure 5-2. Painting a video into a canvas with the requestAnimationFrame function in Chrome

The framerate at which this example is painted equals the framerate of the requestAnimationFrame() function, which is typically 60Hz. This means that we are now updating the canvas as often as, or even more often than, the video provides new frames.

Since the requestAnimationFrame() method is still fairly new, in older browsers (particularly IE10 and below) you will need to use setTimeout() instead of requestAnimationFrame() to repeatedly grab a frame from the video after a given time interval.

Because the setTimeout() function calls a function after a given number of milliseconds, and we would normally run the video at 25 (PAL) or 30 (NTSC) frames per second, a timeout of 40 ms or 33 ms would be more than appropriate. To be on the safe side, you might want to go with a frame rate equal to that of requestAnimationFrame(), which equals your typical screen refresh rate of 60Hz. Thus, set the timeout to 1,000/60 = 16 ms to achieve a similar effect to Figure 5-2. For your application, you might want to reduce the frequency even further to make your web page less CPU (Central Processing Unit) intensive.
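
For those older browsers, one common approach is a small shim that falls back to setTimeout() under the requestAnimationFrame name, so the painting code itself stays unchanged. A minimal sketch (vendor-prefixed variants omitted for brevity):

// fall back to setTimeout() where requestAnimationFrame is missing
window.requestAnimationFrame = window.requestAnimationFrame ||
    function (callback) {
      // roughly 60Hz: 1000 / 60 ≈ 16 ms
      return window.setTimeout(callback, 16);
    };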

As you start experimenting with the setTimeout() function, you will notice that it allows us to “render” video frames into a canvas at higher framerates than the original video and than requestAnimationFrame() allows. Let’s take the example from Listing 5-2 and rewrite it with setTimeout() and a 0 timeout, so you can see what we mean (see Listing 5-3).

Listing 5-3. Painting Video Frames at Different Offsets into the Canvas Using setTimeout

<video controls autoplay height="240" width="360">
  <source src="video/HelloWorld.mp4" type="video/mp4">
  <source src="video/HelloWorld.webm" type="video/webm">
</video>
<canvas width="400" height="300" style="border: 1px solid black;">
</canvas>
<script>
  var video, canvas, context;
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  video.addEventListener("play", paintFrame, false);
  var x = 0, xpos = 10;
  var y = 0, ypos = 10;
  var count = 0;

  function paintFrame() {
    count++;
    context.drawImage(video, x, y, 160, 120);
    if (x > 240) xpos = -10;
    if (x < 0) xpos = 10;
    x = x + xpos;
    if (y > 180) ypos = -10;
    if (y < 0) ypos = 10;
    y = y + ypos;
    if (video.paused || video.ended) {
      alert(count);
      return;
    }
    setTimeout(function () {
      paintFrame();
    }, 0);
  }
</script>

The result, as shown in Figure 5-3, may at first be surprising. We see a lot more video frames being rendered into the canvas than with the requestAnimationFrame() approach. As you think about this some more, you will realize that all we have done is grab a frame from the video into the canvas as quickly as we can, without worrying about whether it is a new video frame or not. The visual effect is that we get a higher framerate in the canvas than we have in the video. In fact, in Google Chrome on one of our machines we achieved 210 fps in the canvas. Your screen will still refresh at its typical rate of around 60 fps, but between refreshes the canvas has been painted with three to four new frames, so the result looks a lot faster than the previous version.


Figure 5-3. Painting a video into a canvas with the setTimeout function in Chrome

If you have tried this in a variety of modern browsers, you may have noticed that each one managed to draw a different number of video frames during the playback of the complete 6-second clip. This is due to the varying speed of their JavaScript engines. They may even get stuck in the middle for a bit before continuing to draw more frames, because the browser has the ability to delay a setTimeout() call if there is other higher-priority work to do. The requestAnimationFrame() function doesn’t suffer from that problem: its callbacks are scheduled in step with the screen refresh, which avoids playback jitter.

Note Though we have demonstrated only one example, don’t forget this is code, and the neat thing about code is the ability to play with it. For example, a simple thing like changing the xpos and ypos values can yield quite different results from those shown.

Extended drawImage( )

So far we have used the drawImage() function to draw the pixels extracted from a video onto the canvas. This drawing also includes scaling that the canvas does for us to fit the pixels into the given width and height dimensions. There is also a version of drawImage() that allows you to extract a rectangular area from the original video and paint it onto a region in the canvas. An example of just such an approach is tiling, where the video is divided into multiple rectangles and redrawn with a gap between the rectangles. Listing 5-4 shows a naïve implementation of this. We only show the new paintFrame() function since the remainder of the code is identical to Listing 5-2. We also choose the requestAnimationFrame() version of painting because we really don’t need to paint at a higher framerate than the video.

Listing 5-4. Naïve Implementation of Video Tiling into a Canvas

function paintFrame() {
  in_w = 720; in_h = 480;
  w = 360; h = 240;
  // create 4x4 tiling
  tiles = 4;
  gap = 5;
  for (x = 0; x < tiles; x++) {
    for (y = 0; y < tiles; y++) {
      context.drawImage(video, x*in_w/tiles, y*in_h/tiles,
                        in_w/tiles, in_h/tiles,
                        x*(w/tiles+gap), y*(h/tiles+gap),
                        w/tiles, h/tiles);
    }
  }
  if (video.paused || video.ended) {
    return;
  }
  requestAnimationFrame(paintFrame);
}

The drawImage() function with its many parameters allows for the extraction of a rectangular region from any offset in the original video and the drawing of this pixel data into any scaled rectangular region in the canvas. Figure 5-4 shows how this function works. As you can see, a specific region of a video is taken from the source and drawn on to a specific area in the canvas. That specific region for both the source and the destination is set in the drawImage() parameters.


Figure 5-4. Extracting a rectangular region from a source video into a scaled rectangular region in the canvas using drawImage()

The parameters are as follows: drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh) (see Figure 5-4). In Listing 5-4 the parameters are used to subdivide the video into tiles whose sizes are set using in_w/tiles by in_h/tiles, where in_w and in_h are the intrinsic width and height of the used video file (i.e., video.videoWidth and video.videoHeight). These tiles are then scaled to size with w/tiles by h/tiles, where w and h are the scaled width and height of the video image in the canvas. Each tile is then placed on the canvas with a 5-pixel gap.

Note It is important to understand that the intrinsic width and height of the video resource is used to extract the region from the video, not the potentially scaled video in the video element. If this is disregarded, you may be calculating with the width and height of the scaled video and extract the wrong region. Also note that it is possible to scale the extracted region by placing it into a destination rectangle with different dimensions.

Figure 5-5 shows the result of running Listing 5-4. As you can see, the video is broken into a series of tiles on a 4 × 4 grid and spaced 5 pixels from each other. All browsers show the same behavior.


Figure 5-5. Tiling a video into a canvas in Chrome, video on left, canvas on right

This implementation is not exactly a best practice because we call the drawImage() function once per tile. If you set the variable tiles to a value of 32, some browsers can’t keep up with the canvas rendering and the framerate in the canvas drags to a halt. This is because each call to drawImage() for the video element retrieves and scales the video’s pixel data anew every time a tile is drawn to the canvas. The result is an overworked browser.

There are three ways to overcome this. All of them rely on getting the video image via the canvas into an intermediate storage area and repainting the image from there. In the first approach you will grab frames and repaint them, in the second you will grab frames but repaint pixels, in the last you will use a second canvas for pixel manipulation.

Frame Grabbing

This approach consists of drawing the video pixels into the canvas, then picking up the pixel data from the canvas with getImageData() and writing it out again with putImageData(). Since putImageData() has parameters to draw out only sections of the picture again, you should be able to replicate the same effect as above. Here is the signature of the function: putImageData(imagedata, dx, dy [, dirtyx, dirtyy, dirtyw, dirtyh ]).

Unfortunately, the parameters are not the same as for the drawImage() function. The “dirty” rectangle defines the region of the image data to draw (by default, the full image). The dx and dy parameters then offset the position of that rectangle on the x- and y-axes. No scaling will happen to the image.
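
As a small illustration of these semantics (the numbers here are arbitrary):

// assuming "frame" was obtained earlier via context.getImageData():
// paint only the 90 × 60 region starting at (90, 60) within frame,
// with the image data as a whole anchored at (5, 5) on the canvas,
// so that region appears at (95, 65), unscaled
context.putImageData(frame, 5, 5, 90, 60, 90, 60);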

You can see the code in Listing 5-5—again, only the paintFrame() function is provided since the remainder is identical with Listing 5-2.

Listing 5-5. Reimplementation of Video Tiling into a Canvas with putImageData( )

function paintFrame() {
  in_w = 720; in_h = 480;
  w = 360; h = 240;
  context.drawImage(video, 0, 0, in_w, in_h, 0, 0, w, h);
  frame = context.getImageData(0, 0, w, h);
  context.clearRect(0, 0, w, h);

  // create 4x4 tiling
  tiles = 4;
  gap = 5;
  for (x = 0; x < tiles; x++) {
    for (y = 0; y < tiles; y++) {
      context.putImageData(frame,
                           x*gap, y*gap,
                           x*w/tiles, y*h/tiles,
                           w/tiles, h/tiles);
    }
  }
  if (video.paused || video.ended) {
    return;
  }
  requestAnimationFrame(paintFrame);
}

In this version, the putImageData() function uses parameters to specify the drawing offset, which includes the gap and the size of the cut-out rectangle from the video frame. The frame has already been received through getImageData() as a resized image. Note that the frame drawn with drawImage() needs to be cleared before redrawing with putImageData(), because we don’t paint over the 5 px gaps. Figure 5-6 shows the result of running Listing 5-5.


Figure 5-6. Attempted tiling of a video into a canvas using putImageData()

Note You have to run this example from a web server, not from a file on your local computer. The reason is that getImageData() does not work cross-site, and security checks will ensure it only works on the same http domain. That rules out local file access.

Pixel Painting

The second approach is to perform the cut-outs manually. Seeing as we have the pixel data available through getImageData(), we can create each of the tiles ourselves and use putImageData() with only the offset attributes to place the tiles. Listing 5-6 shows an implementation of the paintFrame() function for this case.

Listing 5-6. Reimplementation of Video Tiling into a Canvas with createImageData

function paintFrame() {
  w = 360; h = 240;
  context.drawImage(video, 0, 0, w, h);
  frame = context.getImageData(0, 0, w, h);
  context.clearRect(0, 0, w, h);

  // create 15x15 tiling
  tiles = 15;
  gap = 2;
  nw = w/tiles;
  nh = h/tiles;

  // Loop over the tiles
  for (tx = 0; tx < tiles; tx++) {
    for (ty = 0; ty < tiles; ty++) {
      output = context.createImageData(nw, nh);

      // Loop over each pixel of the output tile
      for (x = 0; x < nw; x++) {
        for (y = 0; y < nh; y++) {
          // index in output image
          i = x + nw*y;
          // index in frame image
          j = x + w*y + tx*nw + w*nh*ty;
          // copy all the colours
          for (c = 0; c < 4; c++) {
            output.data[4*i+c] = frame.data[4*j+c];
          }
        }
      }

      // Draw the ImageData object.
      context.putImageData(output, tx*(nw+gap), ty*(nh+gap));
    }
  }

  if (video.paused || video.ended) {
    return;
  }
  requestAnimationFrame(paintFrame);
}

First we loop over each of the tiles and call createImageData() to create the tile image. To fill the tile with pixel data, we loop through the pixels of the tile image and fill it from the relevant pixels of the video frame image. Then we place the tile using putImageData(). Figure 5-7 shows the results with a 15 × 15 grid of tiles.


Figure 5-7. Attempted tiling of a video into a canvas using putImageData() in Chrome

This could obviously be improved by just writing a single image and placing the gap in between the tiles as we write that one image. The advantage of having an image for each tile is that you can more easily manipulate each individual tile—rotate, translate, or scale it, for example—but you will need to manage the list of tiles (i.e., keep a list of pointers to them).
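
A sketch of that single-image variant, assuming the same frame, w, h, tiles, gap, nw, and nh variables as in Listing 5-6: all tiles are written into one ImageData whose dimensions include the gaps, the gap pixels are left transparent, and the result is placed with a single putImageData() call.

// build one output image with transparent gaps instead of one per tile
output = context.createImageData(w + (tiles - 1) * gap,
                                 h + (tiles - 1) * gap);
for (x = 0; x < w; x++) {
  for (y = 0; y < h; y++) {
    // which tile does this source pixel belong to?
    tx = Math.floor(x / nw);
    ty = Math.floor(y / nh);
    // shift the pixel right/down by the accumulated gaps
    i = (x + tx * gap) + output.width * (y + ty * gap);
    j = x + w * y;
    // copy all the colour channels
    for (c = 0; c < 4; c++) {
      output.data[4 * i + c] = frame.data[4 * j + c];
    }
  }
}
context.putImageData(output, 0, 0);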

Note You have to run this example from a web server, not from a file on your local computer. The reason is that getImageData() does not work cross-site, and security checks will ensure it only works on the same http domain. That rules out local file access.

Scratch Canvas

The final approach is to store the video images with drawImage() into an intermediate canvas—we’ll call it the scratch canvas, since its only purpose is to hold the pixel data and it is not even displayed visually on screen. Once that is done, you use drawImage() with input from the scratch canvas to draw onto the displayed canvas. The expectation is that the image in the scratch canvas is already in a form that can be copied, piece by piece, into the displayed canvas, rather than being continuously rescaled as in the naïve approach from earlier. The code in Listing 5-7 shows how to use a scratch canvas.

Listing 5-7. Reimplementation of Video Tiling into a Canvas Using a Scratch Canvas

<video controls autoplay height="240" width="360">
  <source src="video/HelloWorld.mp4" type="video/mp4">
  <source src="video/HelloWorld.webm" type="video/webm">
</video>
<canvas width="400" height="300" style="border: 1px solid black;">
</canvas>
<canvas id="scratch" width="360" height="240" style="display: none;"></canvas>

<script>
  var context, sctxt, video;
  video = document.getElementsByTagName("video")[0];
  canvases = document.getElementsByTagName("canvas");
  canvas = canvases[0];
  scratch = canvases[1];
  context = canvas.getContext("2d");
  sctxt = scratch.getContext("2d");
  video.addEventListener("play", paintFrame, false);

  function paintFrame() {
    // set up scratch frames
    w = 360; h = 240;
    sctxt.drawImage(video, 0, 0, w, h);
    // create 4x4 tiling
    tiles = 4;
    gap = 5;
    tw = w/tiles; th = h/tiles;
    for (x = 0; x < tiles; x++) {
      for (y = 0; y < tiles; y++) {
        context.drawImage(scratch, x*tw, y*th, tw, th,
                          x*(tw+gap), y*(th+gap), tw, th);
      }
    }
    if (video.paused || video.ended) {
      return;
    }
    requestAnimationFrame(paintFrame);
  }
</script>

Notice the second canvas with id="scratch" in the HTML. It has to be set large enough to contain the video frame. If you do not give it width and height attributes, it will default to 300 × 150 and you may lose data around the edges. The purpose of this scratch canvas is to receive and scale the video frames before they are handed off to the displayed canvas. We don’t want to display it, which is why it is set to display:none. The tiles are then drawn (see Figure 5-8) into the displayed canvas using the extended drawImage() function shown in Listing 5-4.


Figure 5-8. Using the “scratch canvas” technique in Chrome

Note This is the most efficient implementation of the tiling since it doesn’t have to repeatedly copy the frames from the video, and it doesn’t have to continuously rescale the original frame size. It works across all browsers, including IE, and it doesn’t need to be run on a web server, which is an additional win.

As you may have gathered, tiling a video on the canvas offers some rather interesting creative possibilities. Due to the fact that each tile can be manipulated individually, each tile can use a different transform or other technique. An amazing example of tiling in combination with other canvas effects such as transformations is shown in “Blowing up your video” by Sean Christmann (see http://craftymind.com/factory/html5video/CanvasVideo.html). When you click on the video, the area is tiled and the tiles scatter as shown in Figure 5-9, creating an explosion effect.


Figure 5-9. Tiling offers you some serious creative possibilities

Styling

Now that we know how to handle video in a canvas, let’s do some simple manipulations to the canvas pixels which will yield some rather interesting results. We’ll start by making certain pixels in the video transparent.

Pixel Transparency to Replace the Background

One of the hallmarks of Flash video, before the arrival of HTML5 video, was the ability to use alpha channel video on top of an animation or a static image. This technique can’t be used directly in the HTML5 universe, but manipulation of the canvas allows us to determine which colors are transparent and to overlay that canvas over other content. Listing 5-8 shows a video where all colors but white are made transparent before being projected onto a canvas with a background image. In the browser, pixels consist of a combination of three colors: red, green, and blue. Each of the r, g, and b components can have a value between 0 and 255, equating to 0% to 100% intensity. Black is when all rgb values are 0, and white is when all of them are 255. In Listing 5-8, we identify a pixel whose r, g, and b components are all above 180 as close enough to white, so that slightly “dirty” whites are kept as well.

Listing 5-8. Making Certain Colors in a Video Transparent Through a Canvas Manipulation

function paintFrame() {
  w = 360; h = 240;
  context.drawImage(video, 0, 0, w, h);
  frame = context.getImageData(0, 0, w, h);
  context.clearRect(0, 0, w, h);
  output = context.createImageData(w, h);

  // Loop over each pixel of the output image
  for (x = 0; x < w; x++) {
    for (y = 0; y < h; y++) {
      // index in output image
      i = x + w*y;
      for (c = 0; c < 4; c++) {
        output.data[4*i+c] = frame.data[4*i+c];
      }
      // make pixels transparent
      r = frame.data[i * 4 + 0];
      g = frame.data[i * 4 + 1];
      b = frame.data[i * 4 + 2];
      if (!(r > 180 && g > 180 && b > 180))
        output.data[4*i + 3] = 0;
    }
  }
  context.putImageData(output, 0, 0);
  if (video.paused || video.ended) {
    return;
  }
  requestAnimationFrame(paintFrame);
}

Listing 5-8 shows the essential painting function. The rest of the page is very similar to Listing 5-2, with the addition of a background image to the <canvas> styling. All pixels are copied in exactly the same manner, except that the fourth color channel of each non-white pixel is set to 0. This is the a (alpha) channel, which determines opacity in the rgba color model, so we’re making all pixels that are not white transparent. Figure 5-10 shows the result, with the stars being the only remaining nontransparent pixels, producing the effect of a firework on the image of Hong Kong.


Figure 5-10. Projecting a masked video onto a background image in the canvas

Note This technique can also be applied to a blue or green screen video. In this case, the pixels composing the solid blue or green background in the video are turned transparent. This will not work if the lighting of the screen is uneven.
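
For a green screen, only the pixel test in Listing 5-8 changes. A rough sketch follows; the thresholds here are guesses and would need tuning to the actual footage:

// inside the pixel loop of Listing 5-8: make pixels that are strongly
// green-dominant transparent instead of keeping near-white pixels
r = frame.data[i * 4 + 0];
g = frame.data[i * 4 + 1];
b = frame.data[i * 4 + 2];
if (g > 100 && g > 1.4 * r && g > 1.4 * b) {
  output.data[4 * i + 3] = 0;
}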

Scaling Pixel Slices for a 3D Effect

Videos are often placed in a 3D display to make them look more like real-world screens through the use of perspective. This requires scaling the shape of the video to a trapezoid where both width and height are independently scaled. In a canvas, you can achieve this effect by drawing vertical slices of the video picture with different heights and scaling the width using the drawImage() function. Listing 5-9 shows a great example of this technique.

Listing 5-9. Rendering a Video in the 2D Canvas with a 3D Effect

function paintFrame() {
  // set up scratch frame
  w = 270; h = 180;
  sctxt.drawImage(video, 0, 0, w, h);

  // width should be between -500 and +500
  width = -250;
  // right side scaling should be between 0 and 200%
  scale = 2;

  // canvas width and height
  cw = 1000; ch = 400;
  // number of columns to draw
  columns = Math.abs(width);
  // display the picture mirrored?
  mirror = (width > 0) ? 1 : -1;
  // origin of the output picture
  ox = cw/2; oy = (ch-h)/2;
  // slice width
  sw = columns/w;
  // slice height increase steps
  sh = (h*scale-h)/columns;

  // Loop over each pixel column of the output picture
  for (x = 0; x < w; x++) {
    // place output columns
    dx = ox + mirror*x*sw;
    dy = oy - x*sh/2;
    // scale output columns
    dw = sw;
    dh = h + x*sh;
    // draw the pixel column
    context.drawImage(scratch, x, 0, 1, h, dx, dy, dw, dh);
  }
  if (video.paused || video.ended) {
    return;
  }
  requestAnimationFrame(paintFrame);
}

For this example, showing only the paintFrame() function, we use a 1,000 × 400 canvas and a scratch canvas into which we pull the pixel data.

As we pull the video frame into the scratch canvas, we scale the video to the size at which we want to apply the effect. Then we pull the frame, pixel column by pixel column, into the displayed canvas. As we do that, we scale the width and height of each pixel column to the desired width and height of the output image. The width of the output image is given through the width variable. The height of the output image scales between the original height on the left side of the output image and scale times the original height on the right side. A negative width means that we are looking at the video through its “back.”

The example is written in such a way that you can achieve innumerable creative effects by simply changing the width and scale variables. For example, you can achieve a book page turning effect by changing the width and scale values synchronously.
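
As a sketch of how those two constants from Listing 5-9 might be animated for such a page turn, using a frame counter we introduce here (the step sizes and ranges are arbitrary):

// hypothetical animation of the width/scale constants from Listing 5-9
var step = 0;
function animatedParams() {
  step += 1;
  // sweep width between +500 and -500 and back, like a turning page
  var width = 500 - Math.abs((step * 5) % 2000 - 1000);
  if (width === 0) width = 1;  // avoid a degenerate zero-width slice
  // let the right-side scaling follow the sweep, staying between 1 and 2
  var scale = 1 + Math.abs(width) / 500;
  return { width: width, scale: scale };
}

paintFrame() would then call animatedParams() once per invocation and use the returned values in place of the hard-coded width and scale.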

Figure 5-11 shows the result in Chrome. All browsers, including IE, support this example and will display the same result.


Figure 5-11. Rendering video in a 3D perspective in Chrome

Ambient CSS Color Frame

Another nice effect that the canvas can be used for is typically known as an ambient color frame for the video. In this effect, a colored frame or border area is created around the video, and the color of that frame is adjusted according to the average color of the video.

This technique is especially effective if your video needs a border on the page or you want it to be noticed. To that effect, you will frequently calculate the average color of the video and use it to fill a div that sits behind the video and is slightly larger than the video. Listing 5-10 shows an example implementation of this technique.

Listing 5-10. Calculation of Average Color in a Canvas and Display of Ambient Color Frame

<style type="text/css">
  #ambience {
    transition-property: all;
    transition-duration: 1s;
    transition-timing-function: linear;
    padding: 40px;
    width: 366px;
    outline: black solid 10px;
  }
  video {
    padding: 3px;
    background-color: white;
  }
  canvas {
    display: none;
  }
</style>
<div id="ambience">
  <video controls autoplay height="240" width="360">
    <source src="video/HelloWorld.mp4" type="video/mp4">
    <source src="video/HelloWorld.webm" type="video/webm">
  </video>
</div>
<canvas id="scratch" width="360" height="240"></canvas>

<script>
  var sctxt, video, ambience;
  ambience = document.getElementById("ambience");
  video = document.getElementsByTagName("video")[0];
  scratch = document.getElementById("scratch");
  sctxt = scratch.getContext("2d");
  video.addEventListener("play", paintAmbience, false);

  function paintAmbience() {
    // set up scratch frame
    sctxt.drawImage(video, 0, 0, 360, 240);
    frame = sctxt.getImageData(0, 0, 360, 240);
    // get average color for frame and transition to it
    color = getColorAvg(frame);
    ambience.style.backgroundColor =
      'rgb(' + color[0] + ',' + color[1] + ',' + color[2] + ')';
    if (video.paused || video.ended) {
      return;
    }
    // don't do it more often than once a second
    setTimeout(function () {
      paintAmbience();
    }, 1000);
  }

  function getColorAvg(frame) {
    r = 0;
    g = 0;
    b = 0;
    // calculate average color from image in canvas
    for (var i = 0; i < frame.data.length; i += 4) {
      r += frame.data[i];
      g += frame.data[i + 1];
      b += frame.data[i + 2];
    }
    r = Math.ceil(r / (frame.data.length / 4));
    g = Math.ceil(g / (frame.data.length / 4));
    b = Math.ceil(b / (frame.data.length / 4));
    return Array(r, g, b);
  }
</script>

Though the preceding code block appears to be rather complex, it is also fairly easy to follow.

We start by setting up the CSS style environment such that the video is placed in a separate <div> element whose background color starts with white but will change as the video plays. The video, itself, has a 3 px white padding frame to separate it from the color-changing <div>.

Thanks to the setTimeout() function, the color around the video will only change once each second. We decided to use setTimeout() rather than requestAnimationFrame() for this example because the frame around the video needs to adapt far less often than the canvas is refreshed. To ensure a smooth color transition, we use CSS transitions to make the change over the course of a second.

The canvas being used is invisible since it is used only to pull an image frame every second and calculate the average color of that frame. The background of the <div> is then updated with that color. Figure 5-12 shows the result.


Figure 5-12. Rendering of an ambient CSS color frame in Opera

If you are reading this in the print version, in Figure 5-12 you might see only a dark gray as the background of the video. However, the color actually changes to various shades of the predominant brown in the background.

Image Note Though this technique is right up there in the realm of “cool techniques,” use it sparingly. If there is a compelling design or branding reason to use it, by all means use it. Using it just because “I can” is not a valid reason.

Video as Pattern

The canvas provides a simple function to create regions tiled with images, another canvas, or frames from a video. The function is createPattern(). This will take an image and copy it into the given region until that region is filled with copies of the image or video. If your video doesn’t come in the size that your pattern requires, you will need to use a scratch canvas to resize the video frames first.

Listing 5-11 shows how it’s done.

Listing 5-11. Filling a Rectangular Canvas Region with a Video Pattern

<video autoplay style="display: none;">
  <source src="video/HelloWorld.mp4" type="video/mp4">
  <source src="video/HelloWorld.webm" type="video/webm">
</video>
<canvas width="720" height="480" style="border: 1px solid black;">
</canvas>
<canvas id="scratch" width="180" height="120" style="display: none;">
</canvas>

<script>
  var context, sctxt, video;
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  scratch = document.getElementById("scratch");
  sctxt = scratch.getContext("2d");
  video.addEventListener("play", paintFrame, false);

  function paintFrame() {
    sctxt.drawImage(video, 0, 0, 180, 120);
    pattern = context.createPattern(scratch, 'repeat');
    context.fillStyle = pattern;
    context.fillRect(0, 0, 720, 480);
    if (video.paused || video.ended) {
      return;
    }
    requestAnimationFrame(paintFrame);
  }
</script>

We’re hiding the original video element, since the video is already painted 16 times into the output canvas. The scratch canvas grabs a frame roughly every 16 ms (assuming requestAnimationFrame() runs at 60 fps), which is then painted into the output canvas using the “repeat” pattern of createPattern().

Each time the paintFrame() function is called, the current image in the video is grabbed and used as the replicated pattern in createPattern(). The HTML5 canvas specification states that if the image (or canvas frame or video frame) is changed after the createPattern() function call where it is used, the pattern will not be affected.

Since there is no means of specifying a scale on the pattern image being used, we have to first load the video frames into the scratch canvas and then create the pattern from this scratch canvas and apply it to the drawing region.

Figure 5-13 shows the result in Safari; since all browsers show the same behavior, it is representative of all of them.


Figure 5-13. Rendering of a video pattern in Safari

Gradient Transparency Mask

Gradient masks are used to gradually fade the opacity of an object. Though the availability of transparency masking is quite widespread in practically every video editing application on the market, a gradient mask can also be programmatically added at runtime. This is accomplished by placing the page content–let’s assume an image–under the video and applying a grayscale gradient over the video. Using the CSS mask property, we can apply transparency to the grayscale mask where the gradient was opaque. We can also do this using the canvas.

With canvas, we have a bit more flexibility, since we can play with the rgba values of the pixels in the gradient. In this example we simply reuse the previous code block and paint the video into the middle of a canvas. The video is blended into the ambient background through use of a radial gradient.

Listing 5-12 shows the key elements of the code.

Listing 5-12. Introducing a Gradient Transparency Mask into the Ambient Video

<style type="text/css">
  #ambience {
    transition-property: all;
    transition-duration: 1s;
    transition-timing-function: linear;
    width: 420px; height: 300px;
    outline: black solid 10px;
  }
  #canvas {
    position: relative;
    left: 30px; top: 30px;
  }
</style>
<div id="ambience">
  <canvas id="canvas" width="360" height="240"></canvas>
</div>
<video autoplay style="display: none;">
  <source src="video/HelloWorld.mp4" type="video/mp4">
  <source src="video/HelloWorld.webm" type="video/webm">
</video>
<canvas id="scratch" width="360" height="240" style="display: none;">
</canvas>
<script>
  var context, sctxt, video, ambience;
  ambience = document.getElementById("ambience");
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  context.globalCompositeOperation = "destination-in";
  scratch = document.getElementById("scratch");
  sctxt = scratch.getContext("2d");
  gradient = context.createRadialGradient(180, 120, 0, 180, 120, 180);
  gradient.addColorStop(0, "rgba(255, 255, 255, 1)");
  gradient.addColorStop(0.7, "rgba(125, 125, 125, 0.8)");
  gradient.addColorStop(1, "rgba(0, 0, 0, 0)");
  video.addEventListener("play", paintAmbience, false);

  function paintAmbience() {
    // set up scratch frame
    sctxt.drawImage(video, 0, 0, 360, 240);
    // get average color for frame and transition to it
    frame = sctxt.getImageData(0, 0, 360, 240);
    color = getColorAvg(frame);
    ambience.style.backgroundColor =
      'rgba(' + color[0] + ',' + color[1] + ',' + color[2] + ',0.8)';
    // paint video image
    context.putImageData(frame, 0, 0);
    // throw gradient onto canvas
    context.fillStyle = gradient;
    context.fillRect(0, 0, 360, 240);
    if (video.paused || video.ended) {
      return;
    }
    requestAnimationFrame(paintAmbience);
  }
</script>

We do not repeat the getColorAvg() function, which we defined in Listing 5-10.

We achieve the video masking with a gradient by changing the globalCompositeOperation property of the display canvas to destination-in. This means that we are able to use a gradient that is placed over the video frame to control the transparency of the pixels of the video frame. In this case we use a radial gradient, created once during setup and reused for every video frame.

Figure 5-14 shows the results in all browsers.


Figure 5-14. Rendering of video with a transparency mask onto an ambient color frame in a variety of browsers

Clipping a Region

Another useful compositing effect is to clip out a region from the canvas for display. This will cause everything else drawn onto the canvas afterward to be drawn only in the clipped-out region. For this technique, a path is “drawn” that may also include basic shapes. Then, instead of drawing these paths onto the canvas with the stroke() or fill() methods, we draw them using the clip() method, creating the clipped region(s) on the canvas to which further drawings will be confined. Listing 5-13 shows an example.

Listing 5-13. Using a Clipped Path to Filter out Regions of the Video for Display

<canvas id="canvas" width="360" height="240"></canvas>
<video autoplay style="display: none;">
  <source src="video/HelloWorld.mp4" type="video/mp4">
  <source src="video/HelloWorld.webm" type="video/webm">
</video>
<script>
  var canvas, context, video;
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  context.beginPath();
  // speech bubble
  context.moveTo(75, 25);
  context.quadraticCurveTo(25, 25, 25, 62.5);
  context.quadraticCurveTo(25, 100, 50, 100);
  context.quadraticCurveTo(100, 120, 100, 125);
  context.quadraticCurveTo(90, 120, 65, 100);
  context.quadraticCurveTo(125, 100, 125, 62.5);
  context.quadraticCurveTo(125, 25, 75, 25);
  // outer circle
  context.arc(180, 90, 50, 0, Math.PI*2, true);
  context.moveTo(215, 90);
  // mouth
  context.arc(180, 90, 30, 0, Math.PI, false);
  context.moveTo(170, 65);
  // eyes
  context.arc(165, 65, 5, 0, Math.PI*2, false);
  context.arc(195, 65, 5, 0, Math.PI*2, false);
  context.clip();
  video.addEventListener("play", drawFrame, false);

  function drawFrame() {
    context.drawImage(video, 0, 0, 360, 240);
    if (video.paused || video.ended) {
      return;
    }
    requestAnimationFrame(drawFrame);
  }
</script>

In this example, we don’t display the video element but only draw its frames onto the canvas. During setup of the canvas, we define a clip path consisting of a speech bubble and a smiley face. We then set up the event listener for the play event and start playback of the video. In the callback, we only need to draw the video frames onto the canvas.

This is a very simple and effective means of masking out regions of a video. Figure 5-15 shows the results in Chrome. It works in all browsers the same way, including IE.


Figure 5-15. Rendering of video on a clipped canvas in Google Chrome

Note Keep in mind this example uses a rather simple programmatically drawn shape to mask the video. Using logos or complex shapes to accomplish the same results is a difficult task, at best.

Drawing Text

As you saw in the previous example, simple shapes can be used to create masks for video. We can also use text as a mask for video. This technique is rather simple to both visualize–the text color is replaced with the video–and accomplish. Listing 5-14 shows how it is done with a canvas.

Listing 5-14. Text Filled with Video

<canvas id="canvas" width="360" height="240"></canvas>
<video autoplay style="display: none;">
  <source src="video/HelloWorld.mp4" type="video/mp4">
  <source src="video/HelloWorld.webm" type="video/webm">
</video>

<script>
  var canvas, context, video;
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  // paint text onto canvas as mask
  context.font = 'bold 70px sans-serif';
  context.textBaseline = 'top';
  context.fillText('Hello World!', 0, 0, 320);
  context.globalCompositeOperation = "source-in";
  video.addEventListener("play", paintFrame, false);

  function paintFrame() {
    context.drawImage(video, 0, 0, 360, 240);
    if (video.paused || video.ended) {
      return;
    }
    requestAnimationFrame(paintFrame);
  }
</script>

We have a target canvas and a hidden video element. In JavaScript, we first paint the text onto the canvas. Then we use the globalCompositeOperation property to use the text as a mask for all video frames painted onto the canvas afterward.

Note that we used source-in as the compositing function. This works in all browsers except for Opera, which only briefly paints the text, but afterward ignores the fillText() cut-out and draws the full video frames again. Figure 5-16 shows the results in the other browsers that all support this functionality.


Figure 5-16. Rendering of video used as a text fill in Google Chrome

Transformations

The usual transformations supported by CSS are also supported by canvas. These CSS Transforms include translation, rotation, scaling, and transformation matrices. We can apply them to the frames extracted from the video to give the video some special effects.

Reflections

A common visual effect used by web designers and developers is reflections. Reflections are relatively simple to implement and are quite effective, particularly when placed against a dark background. All you need to do is to copy the video frames into a canvas placed underneath the source video, flip the copy, reduce its opacity, and add a gradient, all of which we have learned before.

It would certainly be easier if we were able to create a reflection using a CSS-only approach with the box-reflect property. Unfortunately, this property is not yet standardized and therefore only Blink- and WebKit-based browsers implement it. That’s the bad news.
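
For reference, the CSS-only version would look roughly like the following sketch; it works only in WebKit/Blink browsers, and the gradient stops are illustrative:

/* WebKit/Blink only: the mask is flipped together with the element,
   so the reflection fades out toward the bottom */
video {
  -webkit-box-reflect: below 0px
      linear-gradient(transparent 70%, rgba(255, 255, 255, 0.3));
}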

The good news is canvas comes to the rescue. By using the canvas we can create consistent reflections in a cross-browser environment, while keeping the copied and source videos in sync.

Listing 5-15 is an example that works in all browsers.

Listing 5-15. Video Reflection Using a Canvas

<div style="padding: 50px; background-color: #090909;">
  <video autoplay style="vertical-align: bottom;" width="360">
    <source src="video/HelloWorld.mp4" type="video/mp4">
    <source src="video/HelloWorld.webm" type="video/webm">
  </video>
  <br/>
  <canvas id="reflection" width="360" height="55"
          style="vertical-align: top;"></canvas>
</div>
<script>
  var context, rctxt, video;
  video = document.getElementsByTagName("video")[0];
  reflection = document.getElementById("reflection");
  rctxt = reflection.getContext("2d");
  // flip canvas
  rctxt.translate(0, 160);
  rctxt.scale(1, -1);
  // create gradient
  gradient = rctxt.createLinearGradient(0, 105, 0, 160);
  gradient.addColorStop(0, "rgba(255, 255, 255, 1.0)");
  gradient.addColorStop(1, "rgba(255, 255, 255, 0.3)");
  rctxt.fillStyle = gradient;
  rctxt.rect(0, 105, 360, 160);
  video.addEventListener("play", paintFrame, false);

  function paintFrame() {
    // draw frame, and fill with the opacity gradient mask
    rctxt.drawImage(video, 0, 0, 360, 160);
    rctxt.globalCompositeOperation = "destination-out";
    rctxt.fill();
    // restore composition operation for next frame draw
    rctxt.globalCompositeOperation = "source-over";
    if (video.paused || video.ended) {
      return;
    }
    requestAnimationFrame(paintFrame);
  }
</script>

Note This example uses the <video> element to display the video, though a second canvas could be used for this purpose as well. If you take this approach, be sure to remove the @controls attribute, as it breaks the reflection perception.

The example places the video and the aligned canvas underneath it into a dark <div> element to provide some contrast for the reflection. Also, make sure to give the <video> and the <canvas> elements the same width. In this example, though, we have given the reflection only one-third the height of the original video.

As we set up the canvas, we prepare it as a mirrored drawing area using the scale() and translate() functions. The translation moves it down the height of the video and the scaling mirrors the pixels along the x axis. We then set up the gradient over the bottom 55 pixels of the video frames on the mirrored canvas.

The paintFrame() function applies the reflection effect after the video starts playback and while it is playing back at the maximum speed possible. Because we have decided to have the <video> element display the video, it is possible that the <canvas> cannot catch up with the display, which can result in a slight temporal disconnect between the <video> and its reflection. If that bothers you, the solution is to “paint” the video frames via another canvas and hide the video itself. You just need to set up a second <canvas> element and add a drawImage() function to that canvas above the paintFrame() function.
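
A sketch of that variant follows, assuming a second, visible canvas whose context we call dctxt (a name introduced here); both copies are then painted in the same callback, so they stay in sync:

// hypothetical variant: paint the video into a visible canvas first,
// then into the mirrored canvas, instead of displaying the video itself
function paintFrame() {
  dctxt.drawImage(video, 0, 0, 360, 160);   // visible copy
  rctxt.drawImage(video, 0, 0, 360, 160);   // mirrored copy
  rctxt.globalCompositeOperation = "destination-out";
  rctxt.fill();
  rctxt.globalCompositeOperation = "source-over";
  if (video.paused || video.ended) {
    return;
  }
  requestAnimationFrame(paintFrame);
}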

For the reflection, we “painted” the video frames onto the mirrored canvas. When using two <canvas> elements, you may be tempted to use getImageData() and putImageData() to apply the canvas transformations. However, canvas transformations are not applied to these functions. You have to use a canvas into which you have pulled the video data through drawImage() to apply the transformations.

Now we just need a gradient on the mirrored images.

To apply the gradient, we use a composition of the gradient with the video images. We have used compositing before to replace the current image in the canvas with the next one. Changing the compositing property alters that behavior, so we need to reset it after applying the gradient. Another solution is to use the save() and restore() functions before changing the compositing property and after applying the gradient. If you change more than one canvas property, or if you don’t want to keep track of the previous values you have to reset properties to, using save() and restore() is indeed the better approach.
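
With save() and restore(), the frame-drawing part of Listing 5-15 would look roughly like this:

rctxt.drawImage(video, 0, 0, 360, 160);
rctxt.save();                                  // remember the current state
rctxt.globalCompositeOperation = "destination-out";
rctxt.fill();                                  // apply the gradient mask
rctxt.restore();                               // back to source-over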

Figure 5-17 shows the resulting renderings.


Figure 5-17. Rendering of video with a reflection

Spiraling Video

Canvas transformations can make the pixel-based operations that we saw at the beginning of this chapter a lot easier, in particular when you want to apply them to the whole canvas. The example shown in Listing 5-2 and Figure 5-2 can also be achieved with a translate() function, except that you still need to calculate when you hit the boundaries of the canvas to change your translation. It amounts to adding a translate(xpos, ypos) call and always drawing the image at position (0,0), which doesn’t win you very much.

We want to look here at a more sophisticated example for using transformations. We will use both a translate() and a rotate() to make the frames of the video spiral through the canvas. Listing 5-16 shows how we achieve this.

Listing 5-16. Video Spiral Using Canvas

<script>
  var context, canvas, video;
  var i = 0;
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  // provide a shadow
  context.shadowOffsetX = 5;
  context.shadowOffsetY = 5;
  context.shadowBlur = 4;
  context.shadowColor = "rgba(0, 0, 0, 0.5)";
  video.addEventListener("play", paintFrame, false);

  function paintFrame() {
    context.drawImage(video, 0, 0, 120, 80);
    context.setTransform(1, 0, 0, 1, 0, 0);
    i += 1;
    context.translate(3 * i, 1.5 * i);
    context.rotate(0.2 * i);
    if (video.paused || video.ended) {
      alert(i);
      return;
    }
    requestAnimationFrame(paintFrame);
  }
</script>

The <video> and <canvas> element definitions are unchanged from previous examples. We only need to increase the size of our canvas to fit the full spiral. We also have given the frames being painted into the canvas a shadow, which offsets them from the previously drawn frames.

Note Shadows attached to a video element in Chrome don’t work at the moment. Google is working on the bug.

The way in which we paint the spiral is such that we paint the new video frame on top of a translated and rotated canvas. In order to apply the translation and rotation to the correct pixels, we need to reset the transformation matrix after painting a frame.

This is very important, because the previous transformations are already stored for the canvas, such that another call—to translate(), for example—will go along the tilted axis set by the rotation rather than straight down, as you might expect. Thus, the transformation matrix has to be reset; otherwise, the operations are cumulative.

We are also counting the number of frames that we’re displaying so we can compare performance between browsers. If you run the video through all the way to the end, you get an alert box with that number for comparison.

Figure 5-18 shows the resulting renderings in Firefox.


Figure 5-18. Rendering of spiraling video frames in Firefox

You may be thinking this is rather neat but what kind of performance “hit” does that lay on the browser? Let’s do a small performance comparison between browsers.

The video file has a duration of 6.06 seconds. The requestAnimationFrame() function probes the video at 60Hz, thus in theory picking up about 363 frames over the video duration. Chrome, Safari, Opera, and IE all achieve rendering around that many frames. Firefox achieves only about 165 frames. After some experimentation, it turns out that the size of the canvas is the problem: the larger the canvas that drawImage() has to paint into, the slower Firefox becomes. We hope that this is just a temporary problem.

This comparison was done on browsers downloaded and installed on Mac OS X without setting up extra hardware acceleration for graphics operations.

The upshot is there has to be a valid reason for using this technique because you have no control over which browser the user chooses.

Animations and Interactivity

We’ve already used requestAnimationFrame() and setTimeout() to create animated graphics from video frames in sync with the timeline of the video via the canvas. In this section we want to look at another way to animate the canvas: through user interaction.

The key “take-away” here is that the canvas only knows pixels and has no concept of objects. Thus it cannot associate events to a particular shape in the drawing. The canvas as a whole, however, accepts events, which means you can attach a click event to the <canvas> element, and then compare the [x,y] coordinates of the click event with the coordinates of your canvas to identify which object it might relate to.

In this section we will look at an example that is a bit like a simple game. After you start playback of the video through a click, you can click any time again to retrieve a quote from a collection of quotes. Think of it as a fortune cookie gamble. Listing 5-17 shows how we’ve done it.

Listing 5-17. Fortune Cookie and Video with User Interactivity in Canvas

<script>
  var quotes = ["Of those who say nothing,/ few are silent.",
                "Man is born to live,/ not to prepare for life.",
                "Time sneaks up on you/ like a windshield on a bug.",
                "Simplicity is the/ peak of civilization.",
                "Only I can change my life./ No one can do it for me."];
  var canvas, context, video;
  var w = 640, h = 320;
  video = document.getElementsByTagName("video")[0];
  canvas = document.getElementsByTagName("canvas")[0];
  context = canvas.getContext("2d");
  context.lineWidth = 5;
  context.font = 'bold 25px sans-serif';
  context.fillText('Click me!', w/4+20, h/2, w/2);
  context.strokeRect(w/16, h/4, w*7/8, h/2);
  canvas.addEventListener("click", procClick, false);
  video.addEventListener("play", paintFrame, false);
  video.addEventListener("pause", showRect, false);

  function paintFrame() {
    if (video.paused || video.ended) {
      return;
    }
    context.drawImage(video, 0, 0, w, h);
    context.strokeStyle = 'white';
    context.strokeRect(w/16, h/4, w*7/8, h/2);
    requestAnimationFrame(paintFrame);
  }

  function isPlaying(video) {
    return (!video.paused && !video.ended);
  }

  function showRect(e) {
    context.clearRect(w/16, h/4, w*7/8, h/2);
    quote = quotes[Math.floor(Math.random()*quotes.length)].split("/");
    context.fillStyle = 'blue';
    context.fillText(quote[0], w/4+5, h/2-10, w/2-10);
    context.fillText(quote[1], w/4+5, h/2+30, w/2-10);
    context.fillStyle = 'white';
    context.fillText("click again", w/10, h/8);
  }

  function procClick(e) {
    var pos = canvasPosition(e);
    if ((pos[0] < w/4) || (pos[0] > 3*w/4)) return;
    if ((pos[1] < h/4) || (pos[1] > 3*h/4)) return;
    !isPlaying(video) ? video.play() : video.pause();
  }
</script>

In this example, we use an array of quotes as the source for the displayed “fortune cookies.” Note how the strings have a “/” marker in them to show where they break into multiple lines. It is done this way for ease of storage in a single string.

We proceed to set up an empty canvas with a rectangle in it that has the text: "Click me!". Callbacks are registered for the click event on the canvas, and also for pause and play events on the video. The trick is to use the “click” callback to pause and play the video, which will then trigger the respective effects associated with the video pause and play events. We restrict the clickable region to the rectangular region to show how regions can be made interactive in the canvas, even without knowing what shapes there are.

The pause event triggers the display of the fortune cookie within the rectangular region in the middle of the video. The play event triggers continuation of the display of the video’s frames, thus wiping out the fortune cookie. Note that we do not do anything in paintFrame() if the video is paused. This deals with any potentially still-pending calls to paintFrame().

You may have noticed we are missing a function from the above example: the canvasPosition() function. This function is a helper to obtain the x and y coordinates of the click within the canvas. It has been extracted into Listing 5-18 (you can find this example at http://diveintohtml5.org/canvas.html) because it will be a constant companion for anyone doing interactive work with canvas.

Listing 5-18. Typical Function to Gain the x and y Coordinates of the Click in a Canvas

function canvasPosition(e) {
  // from http://www.naslu.com/resource.aspx?id=460
  // and http://diveintohtml5.org/canvas.html
  if (e.pageX || e.pageY) {
    x = e.pageX;
    y = e.pageY;
  } else {
    x = e.clientX + document.body.scrollLeft +
        document.documentElement.scrollLeft;
    y = e.clientY + document.body.scrollTop +
        document.documentElement.scrollTop;
  }
  // make coordinates relative to canvas
  x -= canvas.offsetLeft;
  y -= canvas.offsetTop;
  return [x, y];
}

Figure 5-19 shows the rendering of this example with screenshots from different browsers.


Figure 5-19. Rendering of the fortune cookies example through an interactive canvas with video

We can further improve this example by changing the mouse pointer display to a grabber hand while mousing over the box. To that end, we register a callback on the canvas for mousemove events, calling the function in Listing 5-19, which changes the pointer while within the box.

Listing 5-19. Function to Change the Mouse Cursor When over the Top of the White Box

function procMove(e) {
  var pos = canvasPosition(e);
  var x = pos[0], y = pos[1];
  if (x > (w/16) && x < (w*15/16) && y > (h/4) && y < (h*3/4)) {
    document.body.style.cursor = "pointer";
  } else {
    document.body.style.cursor = "default";
  }
}

You have to reuse the canvasPosition() function from earlier to get the cursor position and then decide whether the cursor is within the box before setting it to “pointer.”

Note The fonts are rendered differently between the browsers, but other than that, they all support the same functionality.

Summary

In this chapter we made use of some of the functionalities of canvas for manipulating video imagery.

We first learned that the drawImage() function allows us to pull images out of a <video> element and into a canvas as pixel data. We then determined the most efficient way of dealing with video frames in the canvas and found the “scratch canvas” as a useful preparation space for video frames that need to be manipulated once and reused multiple times as a pattern.

We identified the getImageData() and putImageData() functions as powerful helpers to manipulate parts of a video’s frame data.

We then made use of pixel manipulation functions such as changing the transparency of certain pixels to achieve a blue screen effect, scaling pixel slices to achieve a 3D effect, or calculating average colors on video frames to create an ambient surrounding. We also made use of the createPattern() function to replicate a video frame across a given rectangle.

Then we moved on to the compositing functionality of the canvas to put several of the individual functions together. We used a gradient to fade over from the video to an ambient background, a clip path, and text as a template to cut out certain areas from the video.

With the canvas transformation functionality we were finally able to create a video reflection that works across browsers. We also used it to rotate video frames and thus have them spiral around the canvas.

We concluded our look at canvas by connecting user interaction through clicks on the canvas to video activity. Because there are no addressable objects, but only addressable pixel positions on a canvas, it is not as well suited as SVG to catching events on objects.

In the next chapter, we take a deep dive into the Audio API. We’ll see you there.