
Part I. Foundations

Chapter 7. Canvas: Universal 2D Drawing

At the end of the day, 3D graphics are rendered on a 2D surface such as the display of your computer, tablet, or phone. What makes them 3D is the appearance of depth and perspective: some objects appear closer, others farther away. If we also want our 3D to be interactive, then the rendering must happen quickly enough that the changes are displayed without a perceptible delay—at least 30, and ideally up to 60, times per second.

WebGL and CSS3 enable real-time 3D rendering using the GPU, the specialized graphics-processing unit present on today’s computers and devices. While 3D hardware acceleration is extremely important to interactive 3D graphics, it is not a prerequisite. It is also possible to create compelling 3D experiences using software rendering. For web applications, software rendering means using the Canvas 2D context—the universal API for drawing 2D graphics in a browser.

There are a few situations in which we should consider using Canvas 2D over WebGL. First, although WebGL is nearly ubiquitous as of this writing, it is not supported on all mobile platforms, the most notable exception being Mobile Safari on iOS. For those platforms we can treat Canvas 2D as a fallback and deliver an experience that we know will work—albeit with potentially lower performance or less crisp graphics than its WebGL counterpart. Or we may be targeting power-challenged environments like certain smartphones where the GPU consumes battery quickly, and thus want to employ a software-only solution to extend battery life. Finally, we may want to create simple 3D effects for which WebGL is overkill but CSS3 is underpowered. Any of these are valid reasons to look into software-based rendering with the 2D Canvas API as an alternative to WebGL.

In this chapter, we will explore how the 2D Canvas API can be used to render 3D, and the performance and feature tradeoffs you should keep in mind when using Canvas 2D versus WebGL. We will also look at open source libraries that can be used to handle the 3D math and rendering, allowing us to focus on building the application.

Canvas Basics

Apple first introduced Canvas in 2004 to support advanced interface development in its Dashboard widgets and Safari browsers. The idea was to provide a general-purpose surface for drawing graphics. Over the next few years it was adopted in Mozilla’s Gecko engine, other WebKit-based browsers such as Google Chrome, and eventually in all HTML5 browsers and platforms.

Unlike DOM UI elements or SVG, the earlier standard for drawing 2D vector graphics, Canvas graphics are not constrained to a fixed set of shapes defined with markup tags; instead, an API is provided that allows JavaScript developers to draw and fill arbitrary shapes, including lines, curves, polygons, and text. Also unlike the DOM or SVG, Canvas employs a low-level procedural model akin to WebGL. The browser does not retain the visual content of Canvas-based elements in a scene graph; rather, the application must maintain its own objects and call drawing primitives each time the element needs to be redrawn (such as during an animation).

A full study of the Canvas API is beyond the scope of this book. But to understand Canvas drawing as it relates to 3D, we will go over the basics here.

The Canvas Element and 2D Drawing Context

HTML5 defines a new DOM element, <canvas>, which specifies a drawable region of the page with a given width and height. The Canvas element is similar to an Image element: you can create it in markup, or using a DOM API like document.createElement(). Once you’ve created it, you can style the Canvas element with CSS to give it borders and margins, position it, and even animate it with transitions.

The Canvas element simply defines the region on the page for drawing. In order to draw graphics, you must obtain a context, which is an object that exposes the drawing API. For Canvas drawing, we obtain a 2D context—as opposed to the 3D drawing context used to render WebGL graphics we have seen in previous chapters.

Example 7-1 shows how to create a Canvas element and draw a white square. In the styles section, we specify a black background for the canvas. In the markup, we create the canvas using a <canvas> tag, and specify a width and a height in pixels. In our page load function, we fetch the Canvas element by its id and get a 2D drawing context for it by calling canvas.getContext("2d"). Once we have a context, we can draw. We set the context’s fillStyle property to white using CSS color syntax; then we draw a filled rectangle by calling context.fillRect(), passing the x,y coordinates of the top-left corner, and a width and height.

Example 7-1. Basic Canvas drawing example

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Programming 3D Applications in HTML5 and WebGL —
    Basic Canvas Example</title>
<style>
#basicCanvas {
    background-color:Black;
}
</style>
</head>
<body>
<canvas id="basicCanvas" width="500" height="500"></canvas>
<script src="../libs/jquery-1.9.1/jquery-1.9.1.js"></script>
<script type="text/javascript">
$(document).ready(
    function() {
        var canvas = document.getElementById("basicCanvas");
        var context = canvas.getContext("2d");
        context.fillStyle = '#ffffff';
        context.fillRect(125, 125, 250, 250);
    }
);
</script>
</body>
</html>

The result should be quite familiar—see Figure 7-1.

Pretty simple stuff. This is a lot like the examples from Chapters 2 and 3, but it took only about a half-dozen lines of JavaScript. (When I said there are easier ways to draw 2D on a page, I wasn’t kidding.) The code for this example can be found in Chapter 7/canvasbasic.html.


Figure 7-1. Drawing a square with the Canvas API

Canvas API Features

The Canvas 2D context provides a raster-based API; that is, drawing is done in pixels (versus the vectors found in some graphics systems, like SVG). If an application needs to scale graphics based on window size, it must do so manually; a minimal sketch of this follows the list below. 2D Canvas API calls fall into the following rough categories:

Shape drawing

Rectangular, polygonal, and curved shapes, either filled or stroked (outlined).

Line and path drawing

Line segments, arcs, and Bézier curves.

Image drawing

Bitmap data from other sources such as Image elements or another canvas.

Text drawing

Filled or stroked text, with text properties defined through CSS-style attributes.

Fill and stroke styles

CSS styles and gradients for defining fill patterns and stroked line patterns.

Transformations

2D transformations, including translate, rotate, scale, and an arbitrary matrix (the transform() and setTransform() methods take six values, the free entries of a 3×3 homogeneous matrix).

Compositing

Control over how newly drawn shapes are blended with the existing canvas contents.
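Before moving on, here is the minimal scaling sketch promised above. It assumes the canvas is sized with CSS and that the application supplies a repaint routine like the draw() function in Example 7-2; both names are taken from this chapter’s examples, not from the Canvas API itself:

function resizeCanvas() {
    var canvas = document.getElementById("basicCanvas");
    // Resizing the drawing buffer also clears the canvas contents,
    // so we must repaint afterward.
    canvas.width = canvas.clientWidth;
    canvas.height = canvas.clientHeight;
    draw(canvas, canvas.getContext("2d"));
}
window.addEventListener("resize", resizeCanvas, false);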

Figure 7-2 shows a screenshot of a Canvas element drawn with various calls to illustrate the API’s drawing features. We can see a filled rectangle; a rectangle drawn with a stroke outline; filled and stroked text; a filled polygon (triangle); a filled Bézier curve with a stroke outline; a bitmap image; a circle filled with a bitmap pattern; a polyline; and a gradient-filled rectangle.


Figure 7-2. Canvas API features

Example 7-2 shows the JavaScript code for this example (source file Chapter 7/canvasfeatures.html). The rest of the markup has been omitted for brevity.

Example 7-2. Detailed Canvas drawing example

function init()
{
    // image1 is drawn directly to the canvas; draw() tests image1.width
    // to make sure it has loaded before drawing it
    image1 = new Image();
    image1.src = '../images/parisi1.jpg';

    // image2 becomes a repeating fill pattern; createPattern() requires
    // valid bitmap data, so we build the pattern in the onload handler
    image2 = new Image();
    image2.onload = function()
    {
        imagepattern = context.createPattern(image2, "repeat");
    }
    image2.src = '../images/ash_uvgrid01.jpg';

    // linear gradient used to fill the rectangle at the bottom right
    gradient = context.createLinearGradient(250, 0, 350, 0);
    gradient.addColorStop(0, "green");
    gradient.addColorStop(1, "blue");
}

function run()
{
    // request the next frame, then redraw the canvas
    requestAnimationFrame(run);
    draw(canvas, context);
}

$(document).ready(
    function() {
        canvas = document.getElementById("features");
        context = canvas.getContext("2d");
        init();
        run();
    }
);

First, our page load function finds the Canvas element and creates a 2D context. Then it calls init() and finally, run(), which implements a requestAnimationFrame()-based run loop. Unlike the previous example, which drew the canvas as a one-shot, this time we are going to paint it repeatedly. We do this for two reasons: 1) this is a more typical structure for a real Canvas-based application, where content is animated or reacting to user input in some way, so we may as well develop good practice now; and 2) we actually need it here for at least a few frames, because we need to test to see if our images have been loaded. We don’t want to try to draw the images unless their contents are ready to be painted. We will get into the details of this in a moment.

The function init() creates two Image elements: one whose bitmap will be drawn directly to the canvas, and one used to build the fill pattern for the circle. It also creates the linear gradient that will be used to fill the rectangle on the bottom right, and it creates the fill pattern by adding an onload event handler to the second image before loading it. The onload handler uses the context’s createPattern() method to create the fill pattern; this method requires valid bitmap data, so we must wait until the image is loaded.

The function run() implements the run loop. First, it requests a new animation frame so that it will get called again the next time through the browser’s update cycle. Then, it calls draw(), which does the drawing. The code for this function is presented here in its entirety.

function draw(canvas, context)
{
    context.clearRect(0, 0, canvas.width, canvas.height);
    context.save();
    context.translate(50, 0);

    // Small red filled rectangle
    context.save();
    context.fillStyle = '#ff0000';
    context.fillRect(25, 25, 100, 50);
    context.restore();

    // Small dark blue stroked rectangle
    context.save();
    context.strokeStyle = 'DarkBlue';
    context.strokeRect(250, 25, 100, 50);
    context.restore();

    // Filled text
    context.save();
    context.lineWidth = 1;
    context.fillStyle = 'Black';
    context.font = '30px sans-serif';
    context.fillText('Fill', 50, 125);
    context.restore();

    // Stroked text
    context.save();
    context.lineWidth = 1;
    context.strokeStyle = 'Orange';
    context.font = 'italic 2em Verdana';
    context.strokeText('Stroke', 250, 125);
    context.restore();

    // A triangle
    context.save();
    context.beginPath();
    context.fillStyle = 'Yellow';
    context.moveTo(75, 150);
    context.lineTo(25, 225);
    context.lineTo(125, 225);
    context.lineTo(75, 150);
    context.fill();
    context.closePath();
    context.restore();

    // A filled Bezier curve
    context.save();
    context.beginPath();
    context.strokeStyle = 'Green';
    context.fillStyle = 'LightBlue';
    context.moveTo(300, 150);
    context.bezierCurveTo(225, 175, 275, 225, 275, 225);
    context.bezierCurveTo(350, 250, 350, 225, 350, 225);
    context.bezierCurveTo(350, 175, 300, 175, 300, 150);
    context.stroke();
    context.fill();
    context.closePath();
    context.restore();

    // A bitmap
    if (image1.width)
    {
        context.save();
        context.drawImage(image1, 11, 250, 128, 128);
        context.restore();
    }

    // A bitmap-filled circle
    if (image2.width)
    {
        context.save();
        context.strokeStyle = 'DarkGray';
        context.fillStyle = imagepattern;
        context.beginPath();
        context.arc(300, 314, 64, 0, 2 * Math.PI, false);
        context.scale(.5, .5);
        context.fill();
        context.stroke();
        context.closePath();
        context.restore();
    }

    // A polyline
    context.save();
    context.strokeStyle = "rgb(128, 0, 255)";
    context.beginPath();
    context.lineWidth = 3;
    context.moveTo(25, 450);
    context.lineTo(75, 425);
    context.lineTo(150, 475);
    context.stroke();
    context.closePath();
    context.restore();

    // A gradient fill
    context.save();
    context.fillStyle = gradient;
    context.fillRect(250, 425, 100, 50);
    context.restore();

    context.restore();
}

draw() shows off many features of the 2D Canvas API. I will highlight a few of them here.

§ context.clearRect() is called to clear the contents of the canvas. Without this, graphics will continually be added to the canvas on top of the ones drawn in previous frames.

§ context.translate() is used to translate the position of all objects subsequently drawn; the values supplied are essentially added to the positions of any other drawing operations.

§ Note the liberal use of context.save() and context.restore(). These methods allow the programmer to take a snapshot of the graphics state before making changes, and restore it to that state after drawing. The saved state includes transformations, stroke and fill styles, fonts, line widths, and more. The browser maintains state on a stack, so calls to these methods can be nested. This is really handy for drawing hierarchical objects; see the sketch following this list. In general, you want to use these to keep state from “bleeding” from one drawing operation into another. However, understand that they incur some performance overhead, so you may need to put some thought into where and when to use them.

§ The context methods beginPath() and closePath() allow us to create user-defined paths for polylines and curves. The canvas maintains a virtual “pen” position, which we manipulate using methods such as moveTo(), lineTo(), and bezierCurveTo(). beginPath() resets the state of the pen; closePath() connects the current pen position to the initial position defined by the first moveTo() call.

§ Image drawing is done via context.drawImage(). We need to wait until the image has been loaded before drawing it to the canvas; we do that by testing for a nonzero width. drawImage() can draw images at their natural size, or scale them if we pass a width and height in the fourth and fifth arguments. Images can also be used as a pattern to fill objects; we use that feature here by calling context.createPattern() once image2 has been loaded. The resulting pattern is saved in the variable imagepattern and used as a fill for the circle.
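To illustrate the nested save()/restore() behavior mentioned above, here is a minimal hypothetical sketch of hierarchical drawing: a “child” square is positioned relative to its “parent” by inheriting the parent’s translation, while neither the child’s fill style nor its transform leaks back out:

function drawParentAndChild(context) {
    context.save();                  // snapshot the default state
    context.translate(100, 100);     // position the parent
    context.fillStyle = 'Gray';
    context.fillRect(0, 0, 80, 80);  // parent square

    context.save();                  // snapshot the parent's state
    context.translate(20, 20);       // child position, relative to the parent
    context.fillStyle = 'Red';
    context.fillRect(0, 0, 40, 40);  // child square
    context.restore();               // back to the parent's state

    context.restore();               // back to the default state
    // anything drawn after this point is unaffected by the
    // translations and fill styles above
}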

This example just touches on drawing 2D graphics with the Canvas API. It is a rich system with many capabilities. Rendering with Canvas also comes with a unique set of performance concerns and best practices. This is outside of what we can cover here. For a list of resources on the Canvas API, refer to the Appendix.

Rendering 3D with the Canvas API

Now that we have seen the basics of Canvas 2D drawing, we can discuss the issues involved in using it as a software rendering system for 3D. While there are many possible approaches, most software implementations mimic the operations of a hardware-based 3D rendering pipeline—namely, drawing shaded triangles, lines, and points in screen space after transforming them from model (object) space.

Using 3D hardware, we do most such calculations in GLSL shader code, with the help of very powerful built-in primitives compiled to low-level machine code on the GPU. Without 3D hardware, we need to do this in JavaScript before calling methods of the 2D Canvas API to render the final shaded, transformed objects on the screen. The calculations required to manipulate 3D geometry, transforms, lighting, and shading, as well as the math to project the 3D objects onto a 2D viewport, represent a lot of computation that can tax even the fastest machines—not to mention the brain power of the implementer.

A software renderer typically has to perform the following tasks:

§ Transform triangles from object space to screen space. This involves multiplying several matrices, depending on the complexity of the scene graph. At a minimum, it requires transforming the triangles of an object from world space (assuming it has no additional transforms) to camera space, then to 2D screen space via perspective projection.

§ Shade triangles based on materials. If lighting is involved, vertex normals and lighting have to be factored in. Using the 2D Canvas API requires dynamically generating textures or gradients to create the lighting effects; this can be very computationally expensive. If a material has textures, textures must also be filtered, perspective-corrected, and otherwise processed to look smooth and realistic. It is particularly difficult to perspective-correct and filter textures in real time. As you will discover in the examples to follow, texture mileage varies based on the application.

§ Sort triangles based on distance from the viewport. In order for our scenes to look correct, triangles that are closer to the viewport should be rendered in front of triangles that are farther away; that is, they should obscure them. Hardware-based systems use a depth buffer, also known as a z-buffer, to track the distance of each drawn pixel to the camera. The depth buffer is a parallel array to the color buffer. At each coordinate corresponding to the color buffer, the depth buffer maintains a distance value, which the renderer tests before drawing a pixel to the color buffer. If a pixel is closer to the viewport than any previously drawn pixel recorded in the depth buffer, the pixel will be drawn; if it is farther away, it will be discarded. Software rendering systems almost never have a depth buffer, because it is too memory- and CPU-intensive to calculate the depth values. Instead, they sort by triangle, based on the location of a point on the triangle. Often, this is the geometric center of the triangle, though it can also be its closest or farthest z value. There is no standard. Triangle sorting is one of the most performance-sensitive areas when it comes to software rendering, and you may find that overall triangle count represents one of your biggest performance bottlenecks. A minimal sketch of this transform, shade, and sort pipeline follows this list.
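The following is an illustrative sketch of the pipeline, not a production renderer; all names are hypothetical. It assumes triangles already transformed into camera space, with the camera looking down the negative z-axis, and a directional light given as a unit vector. It perspective-projects each vertex, computes a flat Lambertian shade per triangle, depth sorts far-to-near by average z (the painter’s algorithm), and fills each triangle with the 2D context:

// Perspective-project a camera-space point to 2D canvas coordinates.
function project(p, focal, cx, cy) {
    var s = focal / -p.z;  // points in front of the camera have negative z
    return { x: cx + p.x * s, y: cy - p.y * s };
}

function renderTriangles(context, triangles, light, focal, cx, cy) {
    // Painter's algorithm: sort far-to-near by average z, then draw in order.
    triangles.sort(function(a, b) {
        var za = (a.v0.z + a.v1.z + a.v2.z) / 3;
        var zb = (b.v0.z + b.v1.z + b.v2.z) / 3;
        return za - zb;  // more negative z is farther from the camera
    });

    for (var i = 0; i < triangles.length; i++) {
        var t = triangles[i];

        // Flat Lambertian shading: intensity = max(0, N dot L),
        // where light is a unit vector pointing toward the light source.
        var d = Math.max(0, t.normal.x * light.x +
                            t.normal.y * light.y +
                            t.normal.z * light.z);
        var c = Math.floor(d * 255);
        context.fillStyle = 'rgb(' + c + ',' + c + ',' + c + ')';

        // Project the three vertices and fill the resulting 2D triangle.
        var p0 = project(t.v0, focal, cx, cy);
        var p1 = project(t.v1, focal, cx, cy);
        var p2 = project(t.v2, focal, cx, cy);
        context.beginPath();
        context.moveTo(p0.x, p0.y);
        context.lineTo(p1.x, p1.y);
        context.lineTo(p2.x, p2.y);
        context.closePath();
        context.fill();
    }
}

A real renderer would add clipping, backface culling, and (as discussed above) texture mapping on top of this skeleton.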

Even a quality software rendering implementation that pulls out all the stops faces certain obstacles. Antialiased rendering—the smoothing out of aliased, or jagged, lines at the edges of objects—is very difficult to do in real time in software, requiring multiple passes to render an object or the entire scene. Texture filtering with techniques like mip-mapping and bilinear filtering can be computationally prohibitive; as a consequence, software-based texturing typically goes without such filtering and tends to look rough and grainy. See Figure 7-3 for an example. Also, bitmap fill rates (e.g., for sprites) are much slower in software than in hardware.


Figure 7-3. Software-based texture mapping; reproduced with permission

In addition, triangle sorting, while being a fair substitute for a depth buffer in some circumstances, breaks down completely in others. For example, when two triangles partially overlap, there is no good way to sort them. Refer to Figure 7-4: from the camera’s point of view, triangle B is both behind and in front of triangle A. A software-sorting algorithm would have to choose either triangle A or B to draw in front, hence obscuring the other triangle completely. As a result, you will occasionally see triangles “popping” in and out of the scene as objects move relative to the camera. This never happens with a hardware depth buffer.


Figure 7-4. Depth sorting triangles: a portion of triangle A is closer to the camera than triangle B, but a portion of triangle B is also closer to the camera than triangle A, so there is no good solution (image from MSDN article on depth sorting; reproduced with permission)

As we can see, software rendering is not an optimal solution if hardware rendering is available. It is difficult, if not impossible, to get the same visual quality and performance. But despite the inherent limitations, there have been some amazing efforts to create high-performance 3D in software using the 2D Canvas API.

A few years ago, UK-based Jean d’Arc created a 2D Canvas-based viewer for exploring Quake 3 level maps. Figure 7-5 shows a screenshot. Go to http://www.zynaps.com/site/experiments/quake.html to try it out. Performance is reasonable on a recent laptop, and the textures, though grainy because they are not filtered, look pretty good. This was a Chrome experiment originally designed to show off 2D Canvas capabilities, built at a time when WebGL was far from pervasive. While its significance now is largely historical, it shows what is possible with the 2D Canvas API and good software techniques.


Figure 7-5. Quake 3 map viewer rendered in software using the 2D Canvas API; reproduced with permission

Canvas-Based 3D Libraries

As discussed, there are intense technical problems to solve to render 3D in software using Canvas. Several libraries have cropped up to tackle the problem, including K3D, Cango3D, Nihilogic, and, of course, Three.js. In this section, we will take a look at two of these libraries, K3D and Three.js.

K3D

K3D is the creation of UK-based Kevin Roast (http://www.kevs3d.co.uk/dev/; Twitter @kevinroast). Kevin is a UI developer and graphics enthusiast. While K3D is early in its development and not as feature-rich as Three.js, it is very impressive. In particular, it is fast and does a great job with shading and textures. Figure 7-6 shows a screenshot from Asteroids [Reloaded], Kev’s K3D-based implementation of the arcade classic. Note the smooth shading, lighting, and highly detailed textures on the rocks.


Figure 7-6. Asteroids [Reloaded], a K3D-based 3D game rendered with the 2D Canvas API

Building upon his early work with K3D, Kev is now working on Phoria, a complete rewrite of K3D. Phoria promises to be more powerful and general purpose, but it is still in its early stages and, at the moment, the K3D demos are far more interesting.

The Three.js Canvas Renderer

Since we have been using Three.js to develop the other examples in the book, it makes sense to consider it as a solution for software-rendered 3D, especially if the main goal is to develop a fallback for non-WebGL platforms. By using Three.js, we can render to WebGL where it is available, and Canvas 2D where it is not, with a minimum of code changes. While the switch between 3D and 2D renderers is not completely transparent—you will have to make a few code changes to take full advantage of Canvas rendering—it is fairly unobtrusive.

NOTE

Three.js uses a plugin rendering architecture and comes with a ready-to-go renderer based on the 2D Canvas API. This is unsurprising, given its origin. Three.js was originally based on earlier work done by Mr.doob to render using Flash 2D graphics primitives, so the HTML5 Canvas renderer was a natural transition. In fact, the HTML5 Canvas renderer was implemented before the 3D WebGL renderer.

Three.js comes with a large number of Canvas-based samples. Unfortunately, most of them aren’t very interesting. There are a few worth noting here. In the Three.js project sources, open up examples/canvas_geometry_earth.html, shown in Figure 7-7. You will see a rotating texture-mapped Earth. It’s not as pretty as its WebGL counterpart, but it’s still nice. The biggest thing you might notice is that the sphere isn’t very highly tessellated; that is, there aren’t that many triangles used to render it. You can see the triangular edges as it rotates. It’s not quite a golf ball, but it’s cruder than we’d like. This is because of triangle depth sorting. You must take care to keep triangle counts down, because depth sorting the triangles is at best an O(N log N) operation, so higher triangle counts mean slower sorting. The sketch below shows how such a sample keeps its tessellation low.
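Keeping the triangle count down is just a matter of the segment counts passed to the sphere geometry. Here is a hedged sketch along the lines of the sample, using the r60-era Three.js API; the texture path and the scene variable are assumptions:

// Deliberately low segment counts keep the triangle count small
// so the Canvas renderer's depth sort stays cheap.
var geometry = new THREE.SphereGeometry(
        200,  // radius
        20,   // widthSegments
        20);  // heightSegments
var material = new THREE.MeshBasicMaterial({
        map: THREE.ImageUtils.loadTexture('../images/earth.jpg'),
        overdraw: true  // hide seams between adjacent triangles
    });
var earth = new THREE.Mesh(geometry, material);
scene.add(earth);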


Figure 7-7. Texture-mapped Earth rendered with Three.js Canvas renderer

The Canvas renderer is really good at simple 3D panoramas. Here we are talking about rendering 12 triangles (i.e., the faces on the inside of a cube), so triangle count is not an issue. Open the Three.js sample in examples/canvas_geometry_panoramas.html to see the 3D panorama depicted in Figure 7-8. Use the mouse to rotate the scene. The navigation is smooth and the panoramic textures look great.
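A hedged sketch of the basic setup, again under the r60-era Three.js API (texture paths and the scene variable are assumptions), places the camera inside a cube textured on its inward-facing sides:

// One material per cube face (THREE.CubeGeometry in earlier releases).
var sides = ['px', 'nx', 'py', 'ny', 'pz', 'nz'];
var materials = sides.map(function(side) {
    return new THREE.MeshBasicMaterial({
        map: THREE.ImageUtils.loadTexture('../images/pano_' + side + '.jpg'),
        overdraw: true
    });
});
var cube = new THREE.Mesh(
    new THREE.BoxGeometry(300, 300, 300),
    new THREE.MeshFaceMaterial(materials));
cube.scale.x = -1;  // flip the cube so its textures face inward
scene.add(cube);    // the camera sits at the origin, inside the cube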


Figure 7-8. Canvas-rendered Three.js panorama

The Three.js Canvas renderer also excels at drawing lots of simple shapes, such as flat 2D polygons, laid out in 3D space. This is a great way to create fancy page effects such as animated particles. Figure 7-9 shows an example (source file examples/canvas_particles_random.html) of 1,000 randomly placed particles animating as the mouse moves around on the screen. The shapes are flat, but they move around in 3D space. As an alternative, imagine doing this with CSS 3D transforms. A thousand individual moving elements would most certainly place undue burden on the browser’s DOM. With a Canvas implementation, it’s peppy, and Three.js makes it easy to create.
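A hedged sketch of this kind of particle setup, under the r60-era Three.js API (in which THREE.Particle and THREE.ParticleCanvasMaterial were the Canvas renderer’s particle types; the scene variable is an assumption):

// Each particle is drawn by a "program" callback using 2D context calls.
function circleProgram(context) {
    context.beginPath();
    context.arc(0, 0, 0.5, 0, Math.PI * 2, true);
    context.fill();
}

for (var i = 0; i < 1000; i++) {
    var material = new THREE.ParticleCanvasMaterial({
        color: Math.random() * 0xffffff,
        program: circleProgram
    });
    var particle = new THREE.Particle(material);
    // Scatter the particles randomly through 3D space.
    particle.position.set(
        Math.random() * 2000 - 1000,
        Math.random() * 2000 - 1000,
        Math.random() * 2000 - 1000);
    particle.scale.x = particle.scale.y = Math.random() * 10 + 5;
    scene.add(particle);
}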


Figure 7-9. 1,000 animated particles using the Three.js Canvas renderer

Using the Three.js Canvas renderer

Getting going with Canvas and Three.js is as simple as creating a different type of renderer object. But there are subtleties involved. Let’s take a look at a basic example to see it in operation. While we are at it, let’s also explore some visual and performance differences between the Three.js Canvas and WebGL renderers. Finally, we will do this in the context of something approaching a real-world example. The examples presented thus far are totally contrived—little more than tech demos. Let’s see what it would be like instead to try to build something tangible, like game graphics with polygonal models and textures.

Figure 7-10 shows a screenshot of an experiment using the Three.js Canvas renderer to build a game. It is a simple viewer intended to assess visual quality and frame rate. Open the file Chapter 7/threejscanvasmodel.html in your browser. You will see a trio of slowly rotating spaceships against a simple star background. Use the mouse to rotate the scene, and the scroll wheel to zoom in/out. You can start/stop the animation by clicking on the Animate checkbox. As a bonus, the demo allows you to switch between Canvas- and WebGL-based renderers to compare. But before we do that, let’s look at the Canvas version.

Note the frame rate counter at the top left. It stays in the range of 20–23 frames per second. If you stop the animation and zoom into the scene so that one of the ships is out of frame, you should see the frame rate bump up to around 30. Do it again, so that only one ship is visible; you will see around 50, perhaps up to 60, fps in the frame rate counter. This is a clear demonstration of the triangle-sorting issue discussed earlier. Because the Three.js Canvas renderer does not have a depth buffer, the library has to sort triangles. More triangles mean slower sorting. When we zoom in to see only one ship, Three.js is smart enough to ignore (cull) the off-screen objects and thus not sort their triangles. These spaceship models are quite simple, around 1,200 triangles per model. This is not a very high number for modern games, so this example illustrates how thrifty we might need to be with our polygon budgets when rendering to the 2D Canvas.

Now look at the materials. The ships are lit, and there is a directional light in the scene that should be highlighting various parts of the ships’ geometry; however, we don’t see that effect. Three.js is able to apply some lighting, but the effects are basic; we don’t see smooth shading across the faces. Play with some of the other Canvas examples in the Three.js project tree, and you will see how far you can take materials and lighting.


Figure 7-10. Rendering models with the Three.js Canvas renderer (spaceship models by gentlemenk, via Turbosquid)

Comparing the Canvas renderer to the WebGL renderer

It’s time to switch renderers so that we can compare. Click the WebGL radio button to render the scene with WebGL. See Figure 7-11. The visual contrast is pretty stark. In the WebGL version, textures look smoother, especially as the object gets far away, whereas they are quite grainy in the Canvas version. Edge antialiasing is much smoother in WebGL, though it is also present in the Canvas rendering. Most dramatic is the lighting, where we can now clearly see highlights from the directional light that simply weren’t there in the Canvas version. As to performance, look at the frame rate counter. It stays at a steady 60 fps, with no need to cull out objects. This is unsurprising given that Three.js has very little work to do in software. There are only a few objects in the scene, with modest polygon counts and simple textures. Nearly all of the computation is handled in hardware (i.e., in the GLSL shader code built into the Three.js WebGL renderer).


Figure 7-11. Spaceship scene rendered with Three.js WebGL renderer

All of this might seem to paint a discouraging picture of using Canvas for 3D game rendering. We topped out frame rate on a fairly simple scene, and we had to compromise visual quality. But that is the glass-half-empty point of view. If we look at this in another way, it’s pretty impressive that we can push thousands of textured triangles around on a page using JavaScript. If we are developing simple games, and we can create an art direction style conducive to the limitations of the medium (e.g., low polygon and prelit), we can do some amazing things.

It takes only a few lines of code to use the Three.js Canvas renderer. The source for this example can be found in the file Chapter 7/threejscanvasmodel.html. The listing in Example 7-3 shows the code for creating the renderer. Note the key line: instead of creating a WebGL renderer, we create an object of type THREE.CanvasRenderer.

Example 7-3. Creating the Three.js Canvas renderer

function createRenderer(container, useCanvas)
{
    if (useCanvas) {
        renderer = new THREE.CanvasRenderer( { } );
    }
    else {
        renderer = new THREE.WebGLRenderer( { antialias: true } );
    }
    container.appendChild( renderer.domElement );
    // Set the viewport size
    renderer.setSize(container.offsetWidth, container.offsetHeight);
}

Once the Canvas renderer is created, we can render to it in the same way we would render to WebGL:

// Render the scene
renderer.render( scene.scene, scene.camera );

For simple uses, that is actually the only line of code we need to change. But for this example, we are going to do one other thing. Three.js gives us the option of doing a simple edge antialiasing by “overdrawing” our triangles; that is, drawing everything a pixel bigger than it should be to hide seams between triangles. Unfortunately, instead of simply setting an antialias creation flag in the renderer (the way we would with WebGL), we need to set this up on a per-material basis. That requires iterating through the materials in the model after it is loaded, and setting the overdraw property to true. See Example 7-4. We set up a load callback for when each model is loaded. That callback iterates through the model by calling its traverse() method, which visits each descendant in its scene graph. Our helper function processNodes() tests to see if the object is a mesh. If so, it sets the overdraw property on the mesh’s material. This extra bit of work is a bit inconvenient, but overall the setup work required is still pretty trivial. These two changes are the only differences between the Canvas and WebGL-based versions of the code.

Example 7-4. Iterating through materials in the scene

function processNodes(n)
{
    if (n instanceof THREE.Mesh)
    {
        n.material.overdraw = true;
    }
}

function handleSceneLoaded(data, parent)
{
    // Add the mesh to our group
    parent.add( data.scene );
    data.scene.traverse(function(n) { processNodes(n) });
}

Chapter Summary

This chapter took a detailed look at software-based 3D rendering using the 2D Canvas API supported in all HTML5 browsers. After taking a quick tour of the drawing features of the Canvas API, we examined issues inherent in software rendering, including transformations, shading, and depth sorting. We surveyed Canvas-based 3D libraries, in particular how to use the Three.js Canvas renderer as an alternative to WebGL. While there are many tradeoffs, especially in the areas of performance and visual fidelity, Canvas presents a viable alternative to WebGL for simple, limited use cases and as a fallback when WebGL is not present on the target platform.