HTML5 Games: Creating Fun with HTML5, CSS3, and WebGL (2012)

part 3

Adding 3D and Sound

Chapter 10 Creating Audio for Games

Chapter 11 Creating 3D Graphics with WebGL

Chapter 10

Creating Audio for Games

in this chapter

• Introducing HTML5 audio

• Dealing with audio formats

• Using the audio data APIs

• Implementing an audio module

• Adding sound effects to the game

Now that the visual aspect of the game is taken care of, you can turn toward adding audio. This chapter introduces you to the new HTML5 audio element that aims to solve the age-old problem of adding sound to web applications.

First, you explore the basics of the audio element, covering most of the details and API functions described in the HTML5 specification. You also see a few examples of how work-in-progress features such as audio data APIs will soon enable even cooler things like audio analysis and dynamic audio generation.

Finally, you get to use the HTML5 audio element to implement an audio module for Jewel Warrior and see how you can easily bind sound effects to game events to add an extra dimension to the game experience.

HTML5 Audio

In the early days of the web, there was no way to put sound on web pages. There wasn’t any need for it, either, because the web was largely used as a means to display documents. However, with the games and applications being produced today, it’s suddenly a feature that makes sense.

Microsoft introduced a bgsound element to Internet Explorer that allowed authors to attach a single audio file to a page, which would then play in the background. Its use was frowned upon, however, because there was no way for the users to turn off the sound, and instead of enhancing the page, it often detracted from the experience and annoyed the users.

Various alternative solutions have since been used whenever audio was needed. Embedding sound files in web pages is relatively straightforward using embed tags but depends on plug-ins and leaves much to be desired in terms of controlling the audio. Eventually, Flash took over and has dominated both audio and video on the web ever since. Until now, at least.

The HTML5 specification introduces new media elements that let you work with both audio and video without using any plug-ins at all. The two new HTML elements, audio and video, are very easy to use and in their basic form require just a single line of HTML. For example, embedding an autoplaying sound can be as simple as:

<audio src="mysound.mp3" autoplay></audio>

Similarly, a video player with UI controls can be embedded with the following:

<video src="myvideo.avi" controls></video>

Both the audio and video elements implement the HTML5 MediaElement interface and therefore share a good portion of their APIs. This means that, although I do not discuss the video element directly, you still can take some of what you learn in this chapter and apply it to the video element.

Detecting audio support

One way to determine whether the browser supports the audio element is simply to create one using document.createElement() and test whether one of the audio-specific methods exists on the created element. One such method is the canPlayType() method, which you see again in a bit:

var audio = document.createElement("audio");

if (typeof audio.canPlayType == "function") {
    // HTML5 audio is supported
} else {
    // Load fallback code
}

Modernizr already has this test built in, so you can skip the preceding detection code and just check the Modernizr.audio property:

if (Modernizr.audio) {
    // HTML5 audio is supported
} else {
    // Load fallback code
}

If you need a good fallback solution, I recommend Scott Schiller’s excellent SoundManager 2, available at www.schillmania.com/projects/soundmanager2/. This library makes it easy to use audio with HTML5 and JavaScript. If the browser doesn’t support HTML5 audio, it falls back seamlessly to a Flash-based audio player.

Understanding the audio format wars

Just knowing that the audio element is available isn’t enough, however. You also need to make sure the browser can play the type of audio you’re using, be it MP3, Ogg Vorbis, or some other audio format.

The HTML5 specification describes the functionality of the audio element in plenty of detail, but it doesn’t specify a standard audio format or even hint as to what formats a browser should support. It is entirely up to the browser vendors to pick the formats that they deem suitable for inclusion. Now, you might think that innovative companies and organizations such as Mozilla, Google, and Microsoft would quickly come to some sort of agreement and settle on a common format. Unfortunately, that has yet to happen.

Apparently, every browser vendor has its own idea of what makes a good audio format and which formats simply don’t align with its own strategies and agendas. Formats such as MP3, Ogg Vorbis, AAC, and Google’s WebM all have advantages and drawbacks, and issues such as software ideals and patent concerns have slowed down the standardization process. What we, as web developers, are left with is a fragmented landscape of audio support where it is actually impossible to find a single audio format that is supported across the board. Table 10-1 shows the formats supported by the major browsers.

Table 10-1 Audio format support in the major browsers

As Table 10-1 shows, no format is universally supported. To get audio working in all the browsers, you need at least two versions of all your audio files, for example, Ogg Vorbis and MP3.

Detecting supported formats

Because HTML5 audio isn’t meant for one specific audio format, the API provides a method for detecting whether the browser can play a given type of audio. This function is, of course, audio.canPlayType(), which was used earlier to detect audio support. The audio.canPlayType() method takes a single argument, a string containing the MIME type of the format you want to test. For example, the following tests whether Ogg Vorbis audio is supported:

var canPlayOGG = audio.canPlayType('audio/ogg; codecs="vorbis"');

Notice the codecs parameter in the MIME type. Some MIME types allow this optional parameter to specify not only the format, which in this case is an Ogg container, but also the codec, here Vorbis.

The equivalent test for MP3 audio is:

var canPlayMP3 = audio.canPlayType("audio/mpeg");

Now, you might think that the audio.canPlayType() method returns either true or false, but it’s slightly more complicated than that. The return value is a string that has one of three values:

• probably

• maybe

• an empty string

The value probably means that the browser is reasonably sure that it can play audio files of this type. If the browser isn’t confident that it can play the specified type but doesn’t know that it can’t either, you get the value maybe. The empty string is returned when the browser knows there is no way it can play that type of audio. Depending on how optimistic you want your application to be, you can choose to accept either just the probably value or both probably and maybe:

if (canPlayMP3 == "probably") {
    ... // browser is confident that it can play MP3
}

if (canPlayMP3 == "probably" || canPlayMP3 == "maybe") {
    ... // there's a chance that it can play MP3
}

Note that, because the empty string evaluates to false when coerced to a Boolean value, you can simplify the last test to

if (canPlayMP3) {
    ... // there's a chance that it can play MP3
}

Using Modernizr’s format detection

Modernizr not only tells you whether the audio element is supported, but also simplifies testing for individual formats by storing the canPlayType() return values for a number of often-used formats.

if (Modernizr.audio.mp3 == "probably") {
    ...
}

The format properties available on Modernizr.audio are

• mp3

• ogg

• wav

• m4a
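Using these properties, a minimal sketch for picking a source file might look like the following, assuming HTML5 audio is supported; the file names are just placeholders:

var soundFile;
if (Modernizr.audio.ogg) {          // "probably" or "maybe"
    soundFile = "mysound.ogg";
} else if (Modernizr.audio.mp3) {
    soundFile = "mysound.mp3";
}
var sound = soundFile ? new Audio(soundFile) : null;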

Finding sound effects

Not everyone has the talent necessary to create great-sounding sound effects and background music, and, for hobby developers, budget concerns often get in the way of licensing ready-made audio or hiring outside talent. Fortunately, plenty of sites offer both sound effects and music with few or no restrictions on how you use them.

The Freesound Project (www.freesound.org/) is a great site for finding samples and sound effects of all kinds. The sound files are all licensed under the Creative Commons (CC) Sampling Plus license, which means you are free to use them in your projects as long as you properly attribute the authors of the sound clips.

If you need full music tracks to add a little ambience to your game, SoundClick (www.soundclick.com/) features thousands of music tracks, many of them licensed under various CC licenses.

You can find many more sites that provide CC licensed content at the Creative Commons website at http://wiki.creativecommons.org/Content_Directories.

Often, the sounds you find need a few adjustments before they’re perfect for your game. For that purpose, I recommend the free, open-source audio editor, Audacity (http://audacity.sourceforge.net/). It has more features than you’ll probably ever need and lets you easily modify the audio, add effects, and convert between various formats.

Using the audio Element

You can create audio elements either by adding them to the HTML markup or by creating them with JavaScript. Adding an audio element to the HTML is as simple as using

<audio src="mysound.mp3" />

Just like the canvas element, the audio element lets you put arbitrary content inside the tag that is rendered only if HTML5 audio is not supported. You can use that feature to, for example, include a Flash-based fallback solution or to simply display a helpful message:

<audio src="mysound.mp3">
    Sorry, your browser doesn't support HTML5 Audio!
</audio>

In the preceding snippet, any browser that supports the audio element will ignore the message.

You can also create audio elements with JavaScript:

var myaudio = new Audio("mysound.mp3");

If you don’t specify the source file in the Audio() constructor, you can specify it later by setting the src property on the audio element.
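For example, this minimal sketch creates the element first and assigns the file afterward (the file name is just a placeholder):

var myaudio = new Audio();
myaudio.src = "mysound.mp3"; // set the source after creation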

Adding user controls

The audio element comes with built-in UI controls. You can enable these controls by adding the controls attribute to the element:

<audio src="mysound.mp3" controls />

This line tells the browser to render the element with the browser’s own controls. The specification doesn’t dictate what controls must be available or what they should look like; it only recommends that the browser provide controls for standard behavior such as playing, pausing, seeking, changing volume, and so on. Figure 10-1 shows audio elements with controls as rendered in Firefox, Chrome, and Internet Explorer.

Figure 10-1: Audio elements with native controls in Firefox (top), Chrome (middle), and Internet Explorer (bottom)


As you can see, the overall appearance is the same, although the default dimensions vary a bit. If necessary, you can use CSS to change, for example, the width of the element.
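For example, an inline style like the following (the width value here is arbitrary) is enough to make the native controls wider; a regular CSS rule targeting audio elements works just as well:

<audio src="mysound.mp3" controls style="width: 400px"></audio>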

Note

The controls attribute is a Boolean attribute, which means that its mere presence enables the feature. The only allowed value for a Boolean attribute is the name of the attribute itself, that is, controls="controls". That value is optional, however; usually, you just want to use the shortened form.

If the controls attribute is absent, the audio element is simply not rendered and doesn’t affect the rest of the page content. You can still play the sound using the JavaScript API, though.

Preloading audio

In some cases, loading the audio before you’re going to use it makes sense. You can tell the browser to preload the audio file by setting the preload attribute on the audio element to one of three values:

• none

• metadata

• auto

Note that the preload attribute is just a hint to the browser. The browser is allowed to ignore the attribute altogether for any reason, such as available resources or user preferences.

The value none hints that the browser should not preload any data at all and start loading data only after the playback begins. For example, the following code hints that no data should be preloaded at all:

<audio src="mysound.mp3" preload="none" />

The metadata value makes the browser load only enough data that it knows the duration of the audio file. If the preload attribute is set to auto or if the attribute is absent, the browser decides for itself what gives the best user experience. This includes potentially loading the entire file.

You also can control the preloading in JavaScript through a property on the element:

audio.preload = "metadata"; // load only metadata

Although having the file ready for immediate playback is nice, you should always weigh this advantage against the added network traffic it requires. Preload only the files you are reasonably sure will actually be used.
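For example, a sketch along these lines preloads a short effect that is almost certain to play but leaves a long background track alone; the file names are made up:

// Preload a short, frequently used effect...
var click = new Audio("sounds/click.mp3");
click.preload = "auto";

// ...but don't fetch a large background track until it's actually played.
var music = new Audio("sounds/background.mp3");
music.preload = "none";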

Specifying multiple source files

I already mentioned that no single format is supported in all browsers, forcing you either to provide source files in multiple formats or leave out support for one or more browsers.

Fortunately, you can easily specify a list of audio files that the browser should try to play. The audio element can have one or more source child elements. These source elements must each point to an audio file. When the audio tag is parsed, the browser goes over the list of source elements and stops at the first one that it can play. The source element is relevant only as a child of an audio (or video) element; you can’t use it for anything on its own.

<audio controls>
    <source src="mysound.mp3" type="audio/mpeg">
    <source src="mysound.ogg" type='audio/ogg; codecs="vorbis"'>
</audio>

The preceding example makes the browser test for MP3 support first and then Ogg Vorbis, picking the first format that works. The type attribute specifies the MIME type of the audio file, optionally with a codec value. You can also use this attribute when specifying the audio source directly on the audio element with the src attribute. The type attribute is not required, but you should include it whenever you can. Without the MIME type, the browser is forced to download the audio file to determine whether it can play the file. If you let the browser know the type of audio, it can skip the resource fetching and just use the same mechanism as the canPlayType() method.

Tip

Firefox is a bit picky with respect to the format of the MIME type string and requires you to use double quotation marks around the codecs value, so be sure to use single quotation marks around the type value.

If you specify both an src attribute on the audio element and add source child elements, the src attribute takes precedence. Only the audio source specified by the src attribute is considered; the source elements are disregarded, even if the file from the src attribute is unplayable.

If you need to know which file ended up being selected, you can read the value from the currentSrc property on the audio element:

alert("Picked the file: " + audio.currentSrc);

If none of the specified audio sources are playable, the currentSrc property is set to the empty string.

Controlling playback

The audio element API exposes a few methods on the element, most importantly the play() method, which is used to start the playback:

audio.play();

This method begins playing the sound. If the audio was already at the end, the playback is restarted from the beginning. If you want to make the sound start playing automatically as soon as possible, you can use the autoplay attribute:

<audio src="mysound.mp3" autoplay />

The other method on the audio element is the pause() method:

audio.pause();

This method pauses the playback and sets the paused property on the audio element to true. Calling pause() more than once has no effect; to resume playing, you must call play() again. Using these two methods and the paused property, you can easily create a function that toggles the pause state:

function togglePause(audio) {
    if (audio.paused) {
        audio.play();
    } else {
        audio.pause();
    }
}

A common usage pattern is to make a sound loop back to the beginning and continue playing when it reaches the end. To make an audio clip loop, simply add the Boolean loop attribute to the audio element:

<audio src="mysound.mp3" loop />

You can also set the loop property with JavaScript after the element is created:

audio.loop = true; // audio is now looping

Note

Depending on the browser and platform, you might experience a small pause before the audio begins playing again when it loops back to the beginning. Unfortunately, there’s no easy fix for this problem; you can only hope that HTML5 audio implementations improve with time so the gap is eliminated.

If you need to control the playback position in a more detailed manner, you can do so via the audio.currentTime property:

audio.currentTime = 60; // skip to 1 minute into the clip (currentTime is in seconds)

The audio specification describes no stop() method, but constructing one yourself is simple. Just reset currentTime to 0 and pause the playback:

function stopAudio(audio) {
    audio.pause();         // pause playback
    audio.currentTime = 0; // move to beginning
}

Controlling the volume

You can adjust the volume of the audio clip by setting the volume property on the audio element. The value of the volume property is a number between 0 and 1, where 0 is completely silent and 1 is maximum output:

audio.volume = 0.75; // set the volume to 75%

You can also mute the audio by setting the muted property to true. This property sets the effective volume to 0 but doesn’t touch the volume value, so when you unmute the audio by setting muted to false, the original volume is restored:

audio.volume = 0.75; // effective volume = 0.75
audio.muted = true;  // effective volume = 0.0
...
audio.muted = false; // effective volume = 0.75

Because muted is a Boolean, toggling between the two states is as easy as setting muted to its negated value:

function toggleMute(audio) {
    audio.muted = !audio.muted;
}

Using audio events

You can use a number of events to detect when various things happen to the audio element. Table 10-2 shows a subset of these events.

Table 10-2 Audio events

• loadstart: Fired when the browser starts loading the audio resource

• abort: Fired if the loading is aborted for reasons other than an error

• error: Fired if there was an error while trying to load the audio

• loadedmetadata: Fired when the browser has loaded enough to know the duration of the sound

• canplay: Fired when the browser can start playing from the current position

• canplaythrough: Fired when the browser estimates that it can start playing from the current position and keep playing without running out of data

• ended: Fired when the audio reaches the end

• durationchange: Fired if the duration of the audio clip changes

• timeupdate: Fired every time the position changes during playback

• play: Fired when the audio starts playing

• pause: Fired when the audio is paused

• volumechange: Fired when the volume of the audio is changed

This list of events in Table 10-2 is not exhaustive but shows the most important events you need for most use cases.
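As a small illustration, the following sketch hooks into two of the events from Table 10-2 to report buffering progress and react when playback finishes; the handlers are just examples:

audio.addEventListener("canplaythrough", function() {
    console.log("Enough data is buffered to play to the end");
}, false);

audio.addEventListener("ended", function() {
    console.log("Playback finished");
}, false);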

Creating custom UI controls

Sometimes you might want to provide your own custom UI elements for controlling the audio playback. Perhaps the style of the native controls doesn’t fit with your application; perhaps you just want more control over their behavior. As you probably already figured out, you can easily use the aforementioned methods and events to create your controls, which is what the following example shows. Listing 10.1 shows the HTML elements.

Listing 10.1 Custom Elements for Audio Control

<audio id="myaudio" loop>
    <source src="beat.mp3" type="audio/mpeg" />
    <source src="beat.ogg" type='audio/ogg; codecs="vorbis"' />
</audio>
<section>
    <header><h3>Player Controls</h3></header>
    <div id="progress"><div class="value"></div></div>
    <button id="btn-play">Play</button>
    <button id="btn-pause">Pause</button>
    <button id="btn-stop">Stop</button>
    <button id="btn-mute">Mute</button>
</section>

You can find this example in the file 01-customcontrols.html. I also added a few CSS rules to style the progress bar. The play, pause, stop, and mute buttons are trivial to implement. Listing 10.2 shows the click event handlers attached to the buttons.

Listing 10.2 Binding Click Events to Audio Actions

var audio = $("#myaudio")[0];

$("#btn-play")[0].addEventListener("click", function() {
    audio.play();
}, false);

$("#btn-pause")[0].addEventListener("click", function() {
    audio.pause();
}, false);

$("#btn-stop")[0].addEventListener("click", function() {
    audio.pause();
    audio.currentTime = 0;
}, false);

$("#btn-mute")[0].addEventListener("click", function() {
    audio.muted = !audio.muted;
}, false);

The progress bar should also update automatically while the audio is playing. You can use the timeupdate event to read the currentTime value and update the progress bar element accordingly. Listing 10.3 shows how.

Listing 10.3 Updating the Custom Progress Bar

function updateProgress() {
    var prog = $("#progress .value")[0],
        pos = audio.currentTime / audio.duration * 100;
    prog.style.width = pos + "%";
}

audio.addEventListener("timeupdate", updateProgress, false);

Finally, the progress bar should respond to mouse clicks by changing the audio playback position. This problem is also easy to solve because you just need to update the currentTime value according to the relative click position. Listing 10.4 shows the click event handler for the progress bar.

Listing 10.4 Updating Playback Position

$("#progress")[0].addEventListener("click", function(e) {
    var rect = this.getBoundingClientRect(),
        pos = (e.clientX - rect.left) / rect.width;
    audio.currentTime = audio.duration * pos;
}, false);

That’s all it takes to control the audio playback with your own DOM elements.

Using audio on mobile devices

But what about mobile devices? Things are progressing, but there are still some issues to work out. Current versions of iOS (from 3.0) and Android (from 2.3) both have support for HTML5 audio. Some earlier versions of Android had partial and broken audio support, but not until 2.3 was it possible to actually play sounds with HTML5 audio.

One of the issues concerns volume control. As you’ve now learned, the audio element has its own volume value that you can use to control the volume of that specific audio clip. On iOS devices, audio elements always play at full volume, and you cannot change the volume value. The sound volume is completely in the hands of the user, and any attempt to modify it via JavaScript is ignored. In addition to the limitations on volume control, iOS is further crippled by the fact that only one audio stream is allowed to play at any time. Starting a new sound pauses any sound that is already playing. That means you can’t have overlapping sound effects or, for example, play background music while also playing smaller clips tied to game events.

Android does allow multiple audio clips to play simultaneously but comes with the same volume restrictions as iOS. It also has some serious latency issues when starting playback, making it hard to get responsive sound effects.

Working with Audio Data

The specification for HTML5 audio is far from finished, and even today work is being put into expanding the audio element with capabilities such as direct, sample-level access to audio data to allow both advanced audio analysis and audio generation and filters. Because these features are far from mature, you don’t get to use them in the Jewel Warrior game, but I discuss the APIs a bit and show you a few examples.

Currently, two different APIs are proposed for manipulating audio data. People at Mozilla are working on one, and the other is coming out of the Chromium project. The two APIs are very different, but a W3C audio working group has been set up to find common ground and flesh out a standard. Please see the following links for up-to-date information:

• Mozilla Audio Data API: https://wiki.mozilla.org/Audio_Data_API

• Web Audio API proposal: https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html

• W3C Audio Working Group: www.w3.org/2011/audio/

The Mozilla Audio Data API, which is available in Firefox 5, is very simple to work with, so I use that in the next section as I show you a few examples.

Note

Like the image data methods on the canvas element, the audio data API is also subject to same-origin restrictions. That means you can access audio data only from files hosted on the same domain as your application. You also need to run the code from a web server because access to local files (that is, file://) is similarly restricted.

Using the Mozilla Audio Data API

The Mozilla Audio Data API extends the audio element with extra events, properties, and methods. Let’s start by looking at a few of the new properties. The first property is audio.mozChannels, which lets you read the number of channels in the audio:

var channels = audio.mozChannels;

For a stereo audio clip, channels would now equal 2. You can also read the sample rate from the mozSampleRate property:

var sampleRate = audio.mozSampleRate;

This rate, which is usually a number such as 44100 or 22050, tells you how many data values, or samples, are used to describe 1 second of audio. Sample rates are often denoted using the unit Hz (hertz).

Reading audio data

The actual audio data is available via a new event, MozAudioAvailable. Attach a handler to this event and use it to read the data from the frameBuffer property on the event object:

audio.addEventListener("MozAudioAvailable",
    function(event) {
        var data = event.frameBuffer,
            time = event.time;
        // do stuff with audio data
    }, false
);

The time property gives the time in seconds, measured from the start of the audio clip. The frameBuffer data is essentially an array of samples that describe the sound at the time the event is fired. Each value is a floating-point value between -1.0 and 1.0. When you are playing multichannel audio, rather than having arrays for each channel, the audio is interleaved with values alternating between the channels. For example, the data for stereo audio is arranged like this:

[
    sample_000_left,
    sample_000_right,
    sample_001_left,
    sample_001_right,
    ...
]
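If you want to work with one channel at a time, you can split the interleaved buffer yourself. The following is a minimal sketch for a stereo clip; the splitChannels() name is just for illustration:

function splitChannels(frameBuffer) {
    var left = [],
        right = [],
        i;
    for (i = 0; i < frameBuffer.length; i += 2) {
        left.push(frameBuffer[i]);      // even indices hold the left channel
        right.push(frameBuffer[i + 1]); // odd indices hold the right channel
    }
    return { left : left, right : right };
}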

The size of the frame buffer is also available outside the MozAudioAvailable event handler via the audio.mozFrameBufferLength property:

var fbLength = audio.mozFrameBufferLength;

Note

The frame buffer array is not a regular JavaScript array but a Float32Array, which is part of the Typed Arrays specification currently being worked on by Khronos (which is also responsible for WebGL, where the typed arrays originated). Typed arrays allow only one type of data as opposed to regular JavaScript arrays that accept anything you throw at them. In the case of a Float32Array, the data type is 32-bit floating-point values. The upside to this limitation is that it allows for better performance in situations in which lots of values of the same type need to be processed — for example, in image and audio processing.
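As a tiny illustration of that difference (the values are arbitrary):

var floats = new Float32Array(3);
floats[0] = 0.25;    // stored as a 32-bit float
floats[1] = "0.5";   // coerced to the number 0.5
floats[2] = "hello"; // cannot be converted, stored as NaN
// floats.push(1.0); // would fail: typed arrays have a fixed length and no push()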

Writing audio data

Writing data to an audio element is just as easy. Create a new audio element and use audio.mozSetup() to initialize it. The audio.mozSetup() method takes two arguments, the first being the number of channels and the second being the desired sample rate:

var audio = new Audio();
audio.mozSetup(2, 44100);

This method sets up the audio element with two channels and a sample rate of 44100 samples per second. You can then use the audio.mozWriteAudio() method to write samples to the audio element:

audio.mozWriteAudio(data);

The data argument is an array of interleaved sample data like the one in event.frameBuffer in the MozAudioAvailable event handler. That means that to write 1 second of 44100Hz stereo audio data, you have to write 44100 * 2 = 88200 sample values to the audio element’s buffer. As long as there is data in the buffer, the audio element plays. The buffer isn’t unlimited, however, and you can write only a limited number of samples at a time. For very small audio clips, you can sometimes get away with writing all the needed data in one go, but for larger clips — and for any type of continuous audio stream — you have to write small chunks of data and periodically write new data to the buffer to make sure it is filled. If you pass more data to mozWriteAudio() than will fit, the remaining values are simply ignored. The return value indicates the number of values actually written so you know how much data still remains to be written.
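A minimal sketch of how you might honor that return value could look like this; the writeChunk() helper name is made up, and it simply writes as much as fits now and hands back the unwritten tail for a later attempt:

function writeChunk(audio, samples) {
    var written = audio.mozWriteAudio(samples);
    if (written < samples.length) {
        return samples.subarray(written); // the part that didn't fit
    }
    return null; // everything fit in the buffer
}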

A few examples

You’ve now learned the basics of the Mozilla Audio Data API, so let me show you a few simple examples of how you can use it.

Visualizing audio

Chances are your favorite music player has some sort of visualization feature that renders graphics that respond to the music. One possible use of the audio data API is to create visualizers. The first example I show you is a simple visualizer that paints the sample data on a canvas element. Listing 10.5 shows the necessary HTML.

Listing 10.5 HTML for the Audio Visualization

<div>
    <canvas id="output" width="512" height="256"></canvas>
</div>
<audio id="myaudio" controls>
    <source src="beat.mp3" type="audio/mpeg" />
    <source src="beat.ogg" type='audio/ogg; codecs="vorbis"' />
</audio>

Just a canvas element and an audio element with UI controls so you can start and stop the audio. The full code for this example is located in the file 02-audiodata-visualizer.html in the code archive for this chapter. The two audio files are also included in the archive.

The visualization is done by attaching a function to the MozAudioAvailable event. You then can use the sample values in event.frameBuffer to plot points on the canvas element. Listing 10.6 shows the rendering code.

Listing 10.6 Visualizing Audio Data

var canvas = document.getElementById("output"),
    ctx = canvas.getContext("2d"),
    audio = document.getElementById("myaudio");

audio.addEventListener("MozAudioAvailable", render, false);

function render(event) {
    var channels = audio.mozChannels,
        ch = canvas.height / channels, // pixels per channel
        fb = event.frameBuffer,
        sample, i, j, x, y;

    // fade to white
    ctx.fillStyle = "rgba(255,255,255,0.2)";
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    // draw sample data
    ctx.fillStyle = "black";
    for (i=0; i < channels; i++) {
        for (j=0; j < fb.length; j += channels) {
            sample = fb[j + i];
            x = j / fb.length * canvas.width;
            y = ch * (i + 0.5) + sample * ch;
            ctx.fillRect(x, y, 2, 2);
        }
    }
}

The interesting part is in the two nested loops. The outer loop runs once for each of the channels in the audio data. For each channel, the inner loop iterates through all the data but looks only at the values for the current channel. It does so by adding i to the index when looking up samples in the frame buffer array. The x position is just the sample’s relative position in the frame buffer mapped to the width of the canvas. The y position uses the sample value but adds an offset so each channel is drawn in its own horizontal band. The resulting visualization renders a waveform for each channel, as shown in Figure 10-2.

Figure 10-2: The audio visualizer as rendered in Firefox 5


More complicated analysis is often needed, and this simple example is just the tip of the iceberg. Although audio analysis and visualization are fascinating topics, they are outside the scope of this book. If you are interested in exploring this subject further in the context of HTML5 audio, check out the audio data page on Mozilla’s wiki: https://wiki.mozilla.org/Audio_Data_API.

There, you can find not only a good tutorial on the Mozilla API but also links to many examples, demos, and libraries such as Fast Fourier Transform (FFT) and various real-time audio effects.

Making a tone generator

Up next is an example of how to generate audio data dynamically. The generateTone() function in Listing 10.7 takes a frequency and sample rate and returns an array of sample values that generate a tone at the specified frequency if written to an audio element.

Listing 10.7 Generating Tone Data

function generateTone(freq, sampleRate) {
    var samples = Math.round(sampleRate / freq),
        data = new Float32Array(samples * 2),
        sample, i;
    for (i = 0; i < samples; i++) {
        sample = Math.sin(Math.PI * 2 * i / samples);
        data[i * 2] = sample;
        data[i * 2 + 1] = sample;
    }
    return data;
}

Sounds can be described as sinusoidal waves where the wavelength (and therefore the frequency) determines the pitch of the sound. For example, the A note has a frequency of 440Hz, which means the wave oscillates 440 times per second. Because sampleRate is the number of samples per second, it takes (sampleRate / freq) samples to describe one full wave; at a 44100Hz sample rate, a 440Hz tone therefore needs roughly 44100 / 440 ≈ 100 samples per cycle.

The data array is created with a length of samples * 2 to make room for both the left and right channels. Finally, the individual samples are calculated and stored in the data array. The loop goes from 0 to (samples - 1) but writes two values in each iteration: first the left channel and then the right channel.

Let’s expand the code in Listing 10.7 a bit and turn it into a simple, interactive tone generator. The input mechanism I use is a single 512x512 div element for capturing mouse events:

<div id="input" style="width:512px;height:512px;"></div>

The idea is that holding down the mouse button on the div produces a tone with the frequency and left/right balance determined by the mouse coordinates. The mouse event handlers shown in Listing 10.8 calculate the right values.

Listing 10.8 Mapping Frequency and Balance to Mouse Position

var input = document.getElementById("input"),
    minFreq = 100,
    maxFreq = 1200,
    balance = 0,
    freq = 0;

input.addEventListener("mousedown", function(e) {
    var rect = this.getBoundingClientRect();

    function update(e) {
        var x = (e.clientX - rect.left) / rect.width,
            y = (e.clientY - rect.top) / rect.height;
        balance = (x - 0.5);
        freq = minFreq + (maxFreq - minFreq) * (1 - y);
        e.preventDefault();
    }

    function onMouseUp(e) {
        freq = 0;
        input.removeEventListener("mousemove", update, false);
        input.removeEventListener("mouseup", onMouseUp, false);
    }

    input.addEventListener("mousemove", update, false);
    input.addEventListener("mouseup", onMouseUp, false);

    update(e);
}, false);

When you click the mouse on the div, the update() function is attached as a handler for the mousemove event. Now, every time you move the mouse, the freq and balance values are recalculated. The frequency is in the range [minFreq, maxFreq] and is calculated in a linear way using the relative y coordinate. The balance value uses the relative x coordinate and goes from -0.5 to +0.5, where -0.5 means the sound is turned all the way to the left channel and +0.5 is all the way to the right.

Adding a balance value to the generateTone() function is easy. Listing 10.9 shows the modified function.

Listing 10.9 Generating Tones with Left/Right Balance

function generateTone(freq, balance, sampleRate) {
    var samples = Math.round(sampleRate / freq),
        data = new Float32Array(samples * 2),
        sample, i;
    for (i = 0; i < samples; i++) {
        sample = Math.sin(Math.PI * 2 * i / samples);
        data[i * 2] = sample * (0.5 - balance);
        data[i * 2 + 1] = sample * (0.5 + balance);
    }
    return data;
}

Subtracting the balance value from a constant value of 0.5 increases the value for the left channel when balance is negative. Similarly, adding the balance value to 0.5 increases the right value when balance is positive. This creates the desired effect where the values written to the left and right channels are increased or decreased, depending on whether the balance value is negative or positive.

Now for the interesting part, writing the tone data to an audio element. Listing 10.10 shows the code for writing the data.

Listing 10.10 Writing Tone Audio Data

var audio = new Audio(),
    sampleRate = 44100,
    totalSamples = 0,
    minSamples = sampleRate * 0.25 * 2;

audio.mozSetup(2, sampleRate);

function updateAudio() {
    if (!freq) {
        return;
    }
    var sampleOffset = audio.mozCurrentSampleOffset(),
        toneData = generateTone(
            freq, balance, audio.mozSampleRate);
    while (totalSamples - sampleOffset < minSamples) {
        totalSamples += audio.mozWriteAudio(toneData);
    }
}

setInterval(updateAudio, 10);

First, a new audio element is created and initialized with two channels and a 44100Hz sample rate. Because the sound should keep playing as long as the mouse button is held down and the frequency is set to a positive value, it is necessary to keep writing audio so the audio always has data available. The minSamples value determines the minimum number of sample values that the updateAudio() function should make sure exist in the buffer. I chose a minimum of 0.25 seconds of audio.

The audio.mozCurrentSampleOffset() method returns the number of samples played back so far. The totalSamples value is used in this example to keep track of how many samples have been written in total. It follows that the difference between totalSamples and the sample offset equals the number of samples still available. A single instance of the tone data is created and written to the audio element until the minSamples requirement is fulfilled. Note that this approach incurs a slight delay when switching to a new tone because there is a bit of audio data left in the buffer for the old tone.

In a more robust version of this example, you might want to check the return value from audio.mozWriteAudio() to see if all the samples were written and, if not, save them for the next update. Another issue to look out for is whether the updateAudio() function can actually keep up with the playback of the audio at the specified sample rate and interval time. If it lags behind, you can try using a lower sample rate.
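A sketch of that more robust approach, building on Listing 10.10, might keep the unwritten samples around and flush them first on the next update; the pending variable and writeTone() name are just illustrative:

var pending = null;

function writeTone(toneData) {
    var buffer = pending || toneData,
        written = audio.mozWriteAudio(buffer);
    totalSamples += written;
    // Keep whatever didn't fit so the next update can try again.
    pending = (written < buffer.length) ? buffer.subarray(written) : null;
}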

Building the Audio Module

You’ve now seen how easy it is to use audio with HTML5. Now you can put that knowledge to use by adding an audio module to Jewel Warrior. Create a new audio module in a fresh audio.js file and start out with the basic module structure as shown in Listing 10.11.

Listing 10.11 The Audio Module

jewel.audio = (function() {

    function initialize() {
    }

    return {
        initialize : initialize
    };
})();

Preparing for audio playback

The first task is to determine which audio format the audio module is going to use. In the code archive for this chapter, I included MP3 and Ogg Vorbis versions of all the sound effects to implement. Listing 10.12 shows a formatTest() function that returns the file extension of the most suitable audio format.

Listing 10.12 Determining a Suitable Format and File Extension

jewel.audio = (function() {
    var extension;

    function initialize() {
        extension = formatTest();
        if (!extension) {
            return;
        }
    }

    function formatTest() {
        var exts = ["ogg", "mp3"],
            i;
        for (i=0;i<exts.length;i++) {
            if (Modernizr.audio[exts[i]] == "probably") {
                return exts[i];
            }
        }
        for (i=0;i<exts.length;i++) {
            if (Modernizr.audio[exts[i]] == "maybe") {
                return exts[i];
            }
        }
    }

    ...
})();

The test is done by iterating over a list of audio formats and returning the first format that returns probably in Modernizr’s feature tests. If no such format is found, a second loop looks for the less confident maybe value. This test ensures that, for example, a probably value for WAV files is chosen over a maybe value for Ogg Vorbis, even if the former is usually a less desirable format for web applications.

Playing sound effects

The most important function of the audio module is to play sounds. Each sound effect that is played needs its own audio element. Listing 10.13 shows the createAudio() function responsible for creating these elements.

Listing 10.13 Creating Audio Elements

jewel.audio = (function() {
    var extension,
        sounds;

    function initialize() {
        extension = formatTest();
        if (!extension) {
            return;
        }
        sounds = {};
    }

    function createAudio(name) {
        var el = new Audio("sounds/" + name + "." + extension);
        sounds[name] = sounds[name] || [];
        sounds[name].push(el);
        return el;
    }

    ...
})();

The createAudio() function has a single parameter, the name of the sound file minus the extension, which was determined previously in the initialization of the audio module. It doesn’t just return the element, however; it also keeps a reference to that element in the sounds object. This object contains an array for each sound effect with all the audio elements created so far for that specific sound. That makes it possible to reuse elements that have finished playing. You can see this being used in the getAudioElement() function in Listing 10.14.

Listing 10.14 Getting an Audio Element

jewel.audio = (function() {
    ...

    function getAudioElement(name) {
        if (sounds[name]) {
            for (var i=0,n=sounds[name].length;i<n;i++) {
                if (sounds[name][i].ended) {
                    return sounds[name][i];
                }
            }
        }
        return createAudio(name);
    }

    ...
})();

The getAudioElement() function checks whether there is already an audio element that it can use. Only if no element is available — either because none have been created yet or because they are all playing — is a new element created. Now you can easily create a play() function that plays a given sound effect. Listing 10.15 shows the new function.

Listing 10.15 The Play Function

jewel.audio = (function() {
    var extension,
        sounds,
        activeSounds;

    function initialize() {
        extension = formatTest();
        if (!extension) {
            return;
        }
        sounds = {};
        activeSounds = [];
    }

    function play(name) {
        var audio = getAudioElement(name);
        audio.play();
        activeSounds.push(audio);
    }

    return {
        initialize : initialize,
        play : play
    };
})();

When the play() function plays a sound, it also stores a reference to that sound in an activeSounds array. We use this array to solve the next problem: stopping sounds.

Stopping sounds

Stopping any currently playing sounds is easy. Simply iterate through the activeSounds array, stop each audio element by pausing it and resetting its currentTime (remember, audio elements have no built-in stop() method), and then empty the array. Listing 10.16 shows the stop() function added to audio.js.

Listing 10.16 The Stop Function

jewel.audio = (function() {
    ...

    function stop() {
        for (var i=activeSounds.length-1;i>=0;i--) {
            activeSounds[i].pause();
            activeSounds[i].currentTime = 0;
        }
        activeSounds = [];
    }

    return {
        initialize : initialize,
        play : play,
        stop : stop
    };
})();

Cleaning up

You have one more thing left to do. When a sound is started, it is added to the activeSounds array. You need to make sure that it is removed again after the playback finishes. To solve this problem, you can take advantage of the ended event that is fired when the end of the sound is reached. Whenever a new audio element is created, attach an event handler to the ended event that removes the audio element from the activeSounds array. Listing 10.17 shows the new event handler.

Listing 10.17 Maintaining the Active Sounds List

jewel.audio = (function() {
    var dom = jewel.dom,
        ...

    function createAudio(name) {
        var el = new Audio("sounds/" + name + "." + extension);
        dom.bind(el, "ended", cleanActive);
        ...
    }

    function cleanActive() {
        for (var i=0;i<activeSounds.length;i++) {
            if (activeSounds[i].ended) {
                activeSounds.splice(i,1);
            }
        }
    }

    ...
})();

Because you don’t keep track of where in the activeSounds array the audio element exists, the easiest approach is to just do a blanket removal of any audio element that has ended. The elements are removed by using the splice() method, which modifies an array by removing a specified number of elements starting at a given index. This, of course, changes the length of the array, which is why it is important that the loop condition keeps comparing i to the current length and not, as is usually the best practice, a previously cached value.
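As a quick illustration of how splice() shrinks the array, and why the loop must re-read the length on every iteration:

var arr = ["a", "b", "c", "d"];
arr.splice(1, 1); // removes one element starting at index 1 ("b")
// arr is now ["a", "c", "d"], and arr.length has dropped from 4 to 3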

Remember to add the audio.js file to the loader module. Add it in the second stage loader in the last batch of files:

// loading stage 2
if (Modernizr.standalone) {
    Modernizr.load([
        {
            ...
        },{
            load : [
                "loader!scripts/audio.js",
                "loader!scripts/input.js",
                "loader!scripts/screen.main-menu.js",
                "loader!scripts/screen.game.js",
                "loader!images/jewels"
                    + jewel.settings.jewelSize + ".png"
            ]
        }
    ]);
}

Adding Sound Effects to the Game

Now that the audio module is complete, you can start adding sound effects to the game. I included a set of sound effects in the sounds folder in the code archive for this chapter. The included sound effects are to be used for the following game events:

• Successfully matching jewels

• Performing an invalid jewel swap

• Advancing to the next level

• Indicating the game is over

Playing audio from the game screen

Time to return to the game screen module, screen.game.js. The first thing to do is make sure the audio module is initialized when the game starts, that is, when the startGame() function in the game screen module is called. Listing 10.18 shows the modifications.

Listing 10.18 Initializing the Audio Module

jewel.screens["game-screen"] = (function() {
    var audio = jewel.audio,
        ...

    function startGame() {
        ...
        board.initialize(function() {
            display.initialize(function() {
                display.redraw(board.getBoard(), function() {
                    audio.initialize();
                    advanceLevel();
                });
            });
        });
    }

    ...
})();

Make sure you initialize the audio module before you call advanceLevel(). Playing sound effects is now as simple as adding audio.play() calls wherever you need them, as shown in Listing 10.19.

Listing 10.19 Adding Sound Effects

jewel.screens["game-screen"] = (function() {
    ...

    function advanceLevel() {
        audio.play("levelup");
        ...
    }

    function gameOver() {
        audio.play("gameover");
        ...
    }

    function playBoardEvents(events) {
        if (events.length > 0) {
            ...
            switch (boardEvent.type) {
                ...
                case "remove" :
                    audio.play("match");
                    ...
                case "badswap" :
                    audio.play("badswap");
                    ...
                ...
            }
        }
    }

    ...
})();

And there you have it. The game now plays sound effects for the most significant events.

Summary

In this chapter, you learned how to use the new audio element to add sound to your games and applications without Flash or other plug-in-based technologies. It’s not all roses, though, because of audio format conflicts and issues on mobile devices.

Nevertheless, support in modern desktop browsers has reached a point where you can confidently use HTML5 audio. The last part of this chapter showed you how to make an audio module and use it to add sound effects to Jewel Warrior.

You also got a peek at the future in the form of the audio data APIs that are slowly taking form. They are still early in their development, however, and there are many issues to work out, not least of which is the fact that the various parties involved haven’t settled on a single specification yet. It is definitely an area to keep your eyes on in the near future, though.