Working with Video - Foundation ActionScript 3, Second Edition (2014)


Chapter 7. Working with Video

This chapter covers the following topics:

· Types of video format

· How to access, load, and display video

· How to control video once it has loaded

· How to send data to and from video

In this chapter, you will learn the basics of how to load and control video using ActionScript 3.0. After an introduction to encoding, delivering, and playing video, you’ll build a simple video player. This example will demonstrate how to define the video location, control its loading and use, and display information about it and its state.

Video on the Modern Web

We are currently in the middle of a video revolution on the Web, which is made possible, for the most part, by the availability of better Internet connections and better codecs. Video usage in rich Internet applications (RIAs) and websites is becoming very common.

A codec, in ActionScript video terms, is a device or program capable of performing encoding and decoding on a digital data stream or signal. The word codec can be a combination of any of the following: compressor-decompressor, coder-decoder, or compression/decompression algorithm.

The Video Experience

Developers are excited about the experience video allows them to give to web users, or “viewers,” as they have now become. This is often as good as or better than the standard TV experience, because it can be made in high-definition (HD) video, fully interactive and integrated with multiple data sources.

Marketing in general is making some excellent use of web video, creating beautifully shot and produced interactive experiences. Nokia’s interactive spoof “Great Pockets” clothes and online store (http://www.greatpockets.com), which ultimately is an advertisement for their excellent N95 phone, is a great example of this.

Other companies and individuals have gone the obvious route of creating their own “channels.” They usually record digital video (DV) clips of something they have a passion for (be it a summary of their band’s last concert or a video tutorial on how to create a Flash widget). Then they encode it so Flash can play it (which is very easy, as you’ll see in this chapter), and then share it on their site. Many people have scheduled regular programs and streamed live content (see http://www.stickam.com for an example of a site that specializes in live webcam feeds), and their channels are beginning to gather a rather respectable number of viewers. Ask a Ninja is a permanent favorite of mine (http://www.askaninja.com/). Issy Depew, of Denver, Colorado, created a channel for mothers (http://www.mommetv.com/), which is a massive broadcasting success. I’m a big golf fan and GolfSpan is very helpful for me and thousands of other golfers for video lessons (http://www.golflink.com).

Third-party video-sharing sites like YouTube (http://www.youtube.com) and Revver (http://www.revver.com) allow anyone to share their video clips, with or without a website or host of their own. Many companies have now jumped on this bandwagon—AOL, Google, Yahoo, and blip.tv are just a few.

In addition, companies such as PixelFish (http://www.eyespot.com) and Brightcove (http://www.brightcove.com) are providing APIs and hosting services that go even further, allowing you to add special effects, events, and more to your video clips; create custom players; edit your videos online; and so on. Also check out Jumpcut (http://www.jumpcut.com), Motionbox (http://www.motionbox.com), and Crackle (http://www.crackle.com).

Finally, major networks such as Fox, ABC, and CBS now offer their entire programming catalog via the Internet. This trend has spawned a series of third-party content distributors such as Hulu (http://www.hulu.com/).

I think you can start to see why this is so exciting. We are all filmmakers and broadcasters now—from large companies to the man on the street with nothing more than an Internet connection and a DV camera, or even a camera phone.

Where ActionScript Comes in

So where does ActionScript come into all this? Flash/ActionScript is at the center of the video revolution, because it provides one of the easiest and best ways to deliver video onto the Web, for the following reasons:

· Flash CC has an easy workflow available for video, including an easy-to-use video encoder, comprehensive support for video in ActionScript 3.0, and a video player component that can just be dropped onto the stage and wired up.

· The ubiquity of the Flash Player means that your new video application can be seen easily and quickly by almost any user. Also, with the recent introduction of support for the H.264 codec standard, repurposing existing broadcast and HD video is quick, easy, and affordable.

· It allows you to easily integrate animation, graphics, text, and even other videos, as dynamically and interactively as you can conceive.

Though a video player component is available in both Flex and Flash, using ActionScript is far more rewarding, provides for an optimal solution, and allows a greater diversity of video implementations. Often, you will want to create a more flexible or powerful video player. If you know ActionScript 3.0, creating a video player from scratch is just as quick as trying to learn about the video component for the first time. Once you have created all the class files for your video player, you have effectively the same thing as a video player component, but with none of the overhead. And you can reuse the class files any time you need to create a new video player, with very little modification of the original code (or none, if you’re a good object-oriented developer). The bottom line is that using ActionScript 3.0 allows you to take advantage of all of its extended video control and modification capabilities, many of which simply aren’t available or as flexible when using the built-in components and property panels within Flash or Flex.

But video players aside, video can be used in many ways that don’t require the standard player layout (with play, stop, fast-forward, and rewind buttons, and so on). Examples of these other uses include demos, interactive navigation, information, tutorials, banners, and backgrounds.

As I’ve said, the Flash Player is one of the best ways of delivering video to the Web. Let’s move on to the basics of just how to do that.

Encoding Your Video

Until very recently, your only choice for delivering video in the Flash Player was to encode it using the Sorenson Spark codec or On2 VP6.2 codec, giving you an FLV (Flash video) file. While these codecs were optimized for web delivery, they did not support HD. From a process point of view, they were time-consuming and poorly automated, and made commercial delivery of video comparatively costly. Many video professionals suggested to Adobe that support for the industry-standard H.264 codec would be very desirable for all these reasons, and supporting that codec would open many new doors for the use of video in the Flash Player. (I was among the early developers to complain about it when I was working for an Internet Protocol television project startup some years ago.)

Adobe listened to the increasing requests for the H.264 codec support and implemented it in Flash Player 9. This is a huge step forward and allows for the repurposing of commercial, broadcast-quality HD video directly in ActionScript. But this does not mean that FLV is now redundant! Far from it. The FLV format has its own strengths, and even within FLV files, the choice of Spark and VP6.2 codecs is based, for the most part, on their inherent and different strengths. We simply have more choices, so that we can provide our clients with the right solutions.

So let’s take a look at the process of getting your video ready for the Web. If your video source is in FLV or H.264 format, you really have to do very little with it. Supported H.264 formats include MOV, 3GP, MP4, M4V, and M4A. The Flash Player reads the file header to ascertain the file format, so provided your file has been encoded to a supported H.264 format, the file name extension doesn’t matter. You could name a supported MOV file file.txt, and it would still load and play. In addition, Flash Player 10 is capable of playing back On2’s VP7 content created in other software tools, provided it is saved with the .flv extension.

So, you can use a video file in a supported format directly in ActionScript. You may, however, want to go on to encode the file into FLV format for the benefits that format offers: the ability to provide cue points (timeline-related events, which are discussed in the “Cue point events” section later in this chapter) and a reduced file size. Bear in mind that in some cases the production of FLV-encoded footage takes more time, and thus is not as cost-effective.

Now suppose you have a video in a format that is completely unsupported in the Flash Player—say an AVI file—and you want to share it with the world through the Flash Player. You have a couple of choices. The Flash Player supports H.264- and FLV-based codecs natively. You can therefore choose to encode your unsupported video format to H.264-based video or an FLV file.

H.264 is industry standard for most broadcast quality and HD format video, and this can be easily repurposed for use in the Flash Player. This approach is less time-consuming, and thus less costly. The main drawbacks of this method are that there can be licensing cost issues and you cannot place cue points into the video.

Although you probably won’t receive FLV-formatted video from the client, you are likely to receive video source files that are H.264-encoded. In fact, many companies pay good money to have H.264-based video files reduced to something more manageable before they even try to use the footage on the Web. Thankfully, this is no longer always necessary. We can now take things like movie trailers, advertisements, and even entire programs and films, and with little or no optimization, use them directly in our applications. However, you will often need to optimize video for delivery platforms with a different aspect ratio or resolution, such as mobile phones. Or you might want to remove specific advertisements or even add some. All of these things constitute repurposing.

Should you choose to deliver your video as an FLV, this requires some extra work, though to be honest, sometimes repurposing H.264 source format can require a little work, too. And you need to consider the quite legitimate need for cue points, which will require encoding.

Capturing Your Video

If you don’t yet have your video in a digital format (for example, it’s still on your DV camera), you’ll need to capture it. You can use some great commercial packages, like Adobe Premiere Pro, Apple Final Cut, and so on, but they all cost a lot. I’m here to tell you how it can be done on a shoestring budget.

If you have Windows, the much-underrated Windows Movie Maker is the easiest and cheapest (free) choice. It automates much of the process for you. In fact, if you use a FireWire (DV) connection, Windows will even prompt you to launch Movie Maker to rip the video. Launch Movie Maker, as shown in Figure 7-1, and you will see that it offers a host of easy-to-use features—from format conversion to automatic scene definition and special effects. Although I have had the odd problem with it, it’s a great free capture and production package. However, at present it doesn’t support FLV format output. So save your final video as a .mov or an .avi file, and you won’t go wrong. At this point, you can just use the .mov file directly in ActionScript, as noted earlier. However, you may want to continue and encode it to an FLV file.

image

Figure 7-1. Windows Movie Maker

If you’re running Mac OS X, then the obvious choice for capturing your video is iMovie HD, shown in Figure 7-2. This little baby is free, user-friendly, and ready to capture your input from your DV camera, your HDV camera, an MPEG-4 source, or even your iSight camera. We’ve come to expect a lot of functionality and grace from Apple media software, and though iMovie HD will never be used on the next Hollywood blockbuster, it is perfect for your web-based video ventures. Once again, if you have an H.264 source, you can just use it or repurpose it first. iMovie HD supports full HD 1080 interlaced input. I have no doubt Apple will add true HD 1080 progressive in the next full release. But this kind of geek jargon is really for another book, so let’s move on.

image

Figure 7-2. Apple iMovie

Working with video in most video-capture and video-editing software often bears a remarkable resemblance to working in the Flash CC IDE itself, and this is no accident. They share many of the same workflow processes, naming conventions, and functionalities. In fact, if you take a look at Adobe After Effects, you’ll be surprised at how similar it is to the Flash IDE, and the differences are quickly being reduced as both of these software packages evolve. I foresee a time when the workflow between video applications and Flash will be seamless.

So, now you have your AVI, MOV, MP4, or other format video file. Next, to get an FLV file, you need to take your captured video files and encode them using the Flash Video Encoder.

Using the Flash Video Encoder

The Flash Video Encoder is a separate program installed along with Flash. It supports files in MOV, AVI, MP4, ASF, DV, MPG/MPEG, and WMV format. You can use any video you like if it’s in one of the supported formats. If you don’t have a video in one of these formats, you can use the video provided with this book’s downloadable code (available at www.friendsofed.com).

To begin, open the Flash Video Encoder. You will find it in a separate directory in the same place that Flash CC was installed. On your PC, that will usually be in C:\Program Files\Adobe\FlashMediaEncoder\. On a Mac, it will usually be found in Macintosh HD/Applications.

To add your video file, click the Add button and search for your video. Find it and click OK. Now you will see your source video as the one and only entry in the queue, as shown in Figure 7-3.

image

Figure 7-3. Adding a file to be encoded

To check and adjust encoding settings, if necessary, click Edit > Settings. This takes you to the Export Settings screen, as shown in Figure 7-4.

image

Figure 7-4. Flash Video Encoder Profiles Export Settings window

As you can see, this screen has a number of tabs. In most cases, you won’t make any changes on the Audio tab. You’ll learn about the settings on the Cue Points tab in the “Cue point events” section later in this chapter. The Crop and Resize tab is one I believe has very limited use. Frankly, if you’ve left your video sizing and cropping until you’re encoding it, you’re not implementing a good video production workflow. Videos should be edited before they are encoded using tools much better suited to the job (Adobe Premiere Pro or After Effects, for example). That leaves the Profiles and Video tabs to address here.

If you try to encode your file to a child directory and that directory doesn’t exist, the encoder will generate an error, and you will have difficulty getting it to export again without first removing the entry from the encoder and starting from scratch! This appears to be a bug.

The Video tab, shown in Figure 7-5, allows you to set the following options:

· Video codec: Choose the compression codec, either Sorenson Spark or On2 VP6. On2 provides better compression and quality as a rule. Beneath this, you can choose Encode alpha channel and/or Deinterlace, if you have video with green-screen footage, for example, or broadcast footage that needs deinterlacing.

· Frame rate: It is often best to leave this at the default (Same as Source).

· Quality: You can set the quality of the video in both a generalized quality setting and maximum data rate. The higher the data rate, the better the quality but the larger the final encoded FLV will be. It takes some experimenting to get these settings right.

· Key frame placement: The encoder will place keyframes at a variable rate, depending on the changes in the displayed video. If you want to ensure quality (and increased size) by increasing that to a more formal rate of keyframes, you can set the Key frame placement option to Custom and define a Key frame interval setting. Having more keyframes is actually less processor-intensive, so there is a benefit.

image

Figure 7-5. Flash Video Encoder Video tab

For this example, you will use the default settings. So return to the Flash Video Encoder queue screen and click the Start Queue button. You’ll see the encoding process kick off quickly and smoothly. It should look like Figure 7-6.

image

Figure 7-6. Flash video encoding in progress

You will see the video play in a preview panel. The progress bar shows the encoding status, and the elapsed time and the time left to finish encoding your video are displayed. When the video has finished encoding, you’ll be notified, and that’s it. You now have video encoded to FLV format.

Exactly how long it will take your video to encode depends on a lot of factors: how large your source file is, how powerful your machine is, what else is running on it, and what encoding settings you’ve chosen. I can say that any decent size, reasonable quality, usable video source is going to require enough encoding time to allow you to get a cup of coffee and read some more of this book. The example I used was a video of just under 6 seconds. With all of the default settings, it took 17 seconds to encode.

Delivering Your Video

Video can be referenced and used from a number of different places:

· From imported video embedded in the timeline or Flash Library

· From an external FLV/H.264 file, using progressive download or streamed through the Flash Media Server (FMS) or another server

· From a webcam, live encoded and streamed to and from the FMS, Flash Video Encoder, or similar

· From DV camera feeds, which can be streamed

So, the three methods of delivery are embedded, progressive, and streamed. The method you use has a direct bearing on the performance you can expect and the code you will write to access your video. Table 7-1 compares these methods, based on video delivery across the Internet.

Table 7-1. Comparison of video delivery methods

image
image

Embedded video is hardly ever used and is considered an amateur way of delivering video content in all but rare cases, so you can basically ignore it. Streaming, while offering the best control in terms of content and digital rights management (DRM), requires a server-side technology to control the streaming, such as the FMS. Streaming may be the best delivery platform in many situations, but it is quite complex. The examples in this chapter will demonstrate the most commonly used method of video delivery on the Web today: progressive download. Playing external FLV/H.264 files provides several advantages over embedding video in a Flash document, such as better performance and memory management, and independent video and Flash frame rates.

A note about security: by default, loading and playing an audio or video file is not allowed if the SWF file is local and tries to load and play a remote file. A user must grant explicit permission to allow this. Additionally, the security sandbox must be placated by the use of a cross-domain policy file.
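As a rough illustration (the domain shown is a placeholder, not a real host), a minimal cross-domain policy file, served as crossdomain.xml from the root of the server hosting the video, might look like this:

```xml
<?xml version="1.0"?>
<!-- Allows SWFs served from subdomains of example.com to load media from this host -->
<cross-domain-policy>
    <allow-access-from domain="*.example.com" />
</cross-domain-policy>
```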

Using ActionScript to Play Videos

Generally, four main ActionScript classes are used to access, control, and render video:

· NetConnection: Sets up a connection to a video source, either network or local. It also provides the ability to invoke commands on a remote application server.

· NetStream: Provides methods and properties for playing, controlling, monitoring, and even recording video files.

· Video: Allows you to create a display object for your video feeds.

· Camera: Allows you to connect to one or more webcam feeds for viewing or recording.

The NetConnection class is essentially a data access class. It connects your Flash movie to the video source location. Through this class, you can access the videos you need or call server-side ActionScript. The NetStream class provides control over the video stream connection that NetConnection has set up, including play, pause, and seek functions. It also allows a video source to be recorded to a media server.

The Video and Camera class instances are display objects that actually display or capture the output and input for the end user. The Camera class is also capable of publishing back to the server (when used in conjunction with a streaming server like the FMS) or to the open page (when used in conjunction with a video object).

Before we get to building the video player application, let’s look at the classes and events you’ll be using in some detail. This will serve as a valuable reference section as you create your first projects. If you just want to build something and you can’t wait, you can skip these sections and get straight to building the video player (the “Building a video player” section).

Managing Connections with the NetConnection Class

The NetConnection class opens and closes connections to a video source, either network or local. It can invoke commands on a remote application server, such as the FMS or Flex server.

Table 7-2 briefly summarizes the public properties of the NetConnection class.

Table 7-2. NetConnection public properties

Property

Type

Description

client

Object

Indicates the object on which callback methods should be invoked.

connected

Boolean

Read-only. Indicates whether Flash Player has connected to a server through a persistent Real Time Messaging Protocol (RTMP) connection (true) or not (false).

connectedProxyType

String

Read-only. If a successful connection is made, indicates the method that was used to make it: a direct connection, the connect() method, or HTTP tunneling.

defaultObjectEncoding

uint

Static. The default object encoding (AMF version) for NetConnection objects created in the SWF file.

objectEncoding

uint

The object encoding (AMF version) for this NetConnection instance.

proxyType

String

Determines whether native Secure Sockets Layer (SSL) is used for RTMPS (RTMP over SSL) instead of HTTPS (HTTP over SSL), and whether the connect() method of tunneling is used to connect through a proxy server.

uri

String

Read-only. The URI of the application server that was passed to NetConnection.connect(), if connect() was used to connect to a server.

usingTLS

Boolean

Read-only. Indicates whether a secure connection was made using native Transport Layer Security (TLS) rather than HTTPS.

Here’s an example of using NetConnection:

// Import the NetConnection class
import flash.net.NetConnection;
. . .
// Declare a NetConnection data type variable in the class header
private var ncVideoPlayer:NetConnection;
. . .
// Create a new instance of the NetConnection class and connect it in
// the class constructor
ncVideoPlayer = new NetConnection();
ncVideoPlayer.connect(null);

This example connects the NetConnection instance to null. You would do this if you were actually going to connect to a local FLV file. If you want to connect to an FMS service, the connection instruction would look more like this:

ncVideoPlayer.connect("rtmp://www.flashcoder.net/videoChatRoom", parameters);

where parameters signifies any additional parameters.

Loading and Controlling Video with the NetStream Class

The NetStream class provides methods and properties for playing, controlling, and monitoring FLV files from the local file system or from an HTTP address. Using the NetStream class gives you a conduit through which to load and control video files (FLV or MPEG-4 files) in a Video object from a NetConnection object. The video you are working with through the NetStream instance can be changed dynamically at any time.

Event handlers need to be in place to handle onMetaData and, if they exist, onCuePoint events, as discussed in the “Handling video events” section later in this chapter. If they are not, the player will raise asynchronous errors at runtime. This will not prevent the code from compiling successfully, nor prevent the final SWF from working properly. But be aware of this, as it is pointing out poor convention on your part as an object-oriented programmer.
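As a minimal sketch, assuming the nsVideoPlayer instance used throughout this chapter, one way to put those handlers in place is to assign a client object to the stream (the trace calls are placeholders):

```actionscript
// Object whose methods NetStream will invoke as callbacks
var streamClient:Object = new Object();
streamClient.onMetaData = function(info:Object):void {
    // Metadata such as duration, width, and height arrives here
    trace("duration: " + info.duration + " seconds");
};
streamClient.onCuePoint = function(info:Object):void {
    trace("cue point '" + info.name + "' at " + info.time);
};
nsVideoPlayer.client = streamClient;
```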

Table 7-3 briefly summarizes the public properties of the NetStream class.

Table 7-3. NetStream class public properties

Property

Type

Description

bufferLength

Number

Read-only. The number of seconds of data currently in the buffer.

bufferTime

Number

The number of seconds of data to buffer before the stream starts to display. Unlike most properties in this table, it is writable.

bytesLoaded

Number

Read-only. The number of bytes of data that have been loaded into the player.

bytesTotal

Number

Read-only. The total size in bytes of the file being loaded into the player.

checkPolicyFile

Boolean

Specifies whether Flash Player should attempt to download a cross-domain policy file from the loaded FLV file’s server before beginning to load the FLV file itself.

currentFps

Number

Read-only. The number of frames per second being displayed.

time

Number

Read-only. The position of the playhead, in seconds.

Initially, you need to import, declare, and instantiate the NetStream class, but you also need to give it a reference to an existing NetConnection object, so it will have access to the video source.

// Import the NetStream class
import flash.net.NetStream;
. . .
// Declare a NetStream data type variable in the class header
private var nsVideoPlayer:NetStream;
. . .
// Create a new instance of the NetStream class and pass it a reference
// to the NetConnection object we created for it.
nsVideoPlayer = new NetStream(ncVideoPlayer);

Now you want to be able to play, pause, stop, and otherwise control the FLV file. The NetStream instance needs to be assigned to an actual Video display object before you will see anything happening, as discussed in the upcoming section about the Video class.

Buffering Your Video

FLV buffering should definitely be a consideration. By default, the FLV you try to access and play will start playing as soon as the player receives one-tenth of a second of the file. For obvious reasons, you will probably want to allow a buffer of the video to load before you start playing it, to avoid jerky playback as much as possible. For very small videos, this is not so important; but these days, hardly anyone is showing small videos.

You can set the bufferTime parameter of your NetStream instance to tell the player how many seconds of FLV video to load before beginning playback. This is, of course, entirely individual to the project. It’s worth experimenting with this value to get it right for your expected bandwidth, FLV size, and so on. Here is an example of buffering 10 seconds of video before it starts to play:

// Set buffer load before playing
nsVideoPlayer.bufferTime = 10;

Playing Your Video

Playing the video couldn’t be simpler:

// Tell the NetStream instance what FLV file to play
nsVideoPlayer.play("video_final.flv");

You can easily make references to local or remote FLV files. However, remember that if they are on another domain, you will need to address the cross-domain security policy before you do that.

Pausing Your Video

Pausing the video requires another simple command:

nsVideoPlayer.pause();

From a display point of view, it’s important to remember to toggle the play button to be the pause button and vice versa when going from play to pause to play again.
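One way to sketch that toggle, assuming hypothetical btnPlay and btnPause display objects alongside the nsVideoPlayer instance from earlier:

```actionscript
private var isPaused:Boolean = false;

private function onClickPlayPause(e:MouseEvent):void {
    if (isPaused) {
        nsVideoPlayer.resume();  // Continue from the paused position
    } else {
        nsVideoPlayer.pause();
    }
    isPaused = !isPaused;
    // Show whichever button applies to the current state
    btnPlay.visible = isPaused;
    btnPause.visible = !isPaused;
}
```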

Stopping Your Video

Use close() to stop playing the video:

nsVideoPlayer.close();

Stopping the video does not clear it from the cache (though it does unload it from the player). It automatically resets the NetStream.time property to 0, and it makes this NetStream instance available for use by another video, but not by another Video object.

The video will stop exactly where it is, which makes it appear as if it has paused at the point where you stopped it. This is neither aesthetically nor functionally pleasing. When the user clicks the play button again, because the NetStream.time property has been reset to 0, your video will start again from the beginning. A better solution is to call NetStream.seek(0) first. If you still want the video to continue loading after you have stopped it, don’t use the NetStream.close() function. Instead, call the NetStream.pause() function after NetStream.seek(0).
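Putting that advice together, a stop handler might be sketched like this (the function name is hypothetical; the stream instance follows the examples in this chapter):

```actionscript
private function onClickStop(e:MouseEvent):void {
    // Rewind so a later play starts from the beginning
    nsVideoPlayer.seek(0);
    // Pause rather than close, so the file keeps loading in the background
    nsVideoPlayer.pause();
}
```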

Fast-Forwarding and Rewinding Your Video

Fast-forwarding and rewinding are slightly less straightforward than the other functions (no single line of code here, I’m afraid). This is because these functions need to loop through incremental or decremental seek calls using a timer. You will need to decide on the size of the seek increment/decrement steps (in seconds) for the fast-forward (FF) and rewind (RW) functions to use.

// Set the seek increment/decrement in seconds
private var seekIncDec:int;
private var playHeadPosition:int;
private var timerFF:Timer;
private var timerRW:Timer;
...
seekIncDec = 3;
timerFF = new Timer(100, 0);
timerRW = new Timer(100, 0);

Fast-forwarding is simply a matter of seeking through the video feed in the specified increments (set in the seekIncDec variable; 3 seconds in this example), incrementing the playHeadPosition variable based on this, and seeking to that position while the fast-forward button is selected. (I have not included the button code here, as we’ll discuss the button classes shortly.)

private function onClickFF():void {
timerFF.addEventListener(TimerEvent.TIMER, FFward);
timerFF.start();
}

private function FFward(e:TimerEvent):void {
playHeadPosition = Math.floor(nsVideoPlayer.time) + seekIncDec;
nsVideoPlayer.seek(playHeadPosition);
}

When the fast-forward button is released, simply clear the timer and tell the NetStream object to play(), and it will play from the new playhead position.

private function onReleaseFF():void {
timerFF.reset();
nsVideoPlayer.play();
}

Rewinding your video is almost the same as fast-forwarding, but in reverse:

private function onClickRW():void {
timerRW.addEventListener(TimerEvent.TIMER, RWind);
timerRW.start();
}
private function RWind(e:TimerEvent):void {
playHeadPosition = Math.floor(nsVideoPlayer.time) - seekIncDec;
nsVideoPlayer.seek(playHeadPosition);
}

When the rewind button is released, simply clear the timer and tell the NetStream object to play(), and it will play from the new head position:

private function onReleaseRW():void {
timerRW.reset();
nsVideoPlayer.play();
}

Creating Video Objects with the Video Class

In previous versions of ActionScript, the Video class didn’t exist (at least not in the same way it does in ActionScript 3.0). You needed to create a physical video object and drag it on screen, then give it an instance name, which you would then use to link the NetStream instance to it. This meant that the development cycle of a video-based application could not be done by code alone (not something I mind, to be honest, as you can’t make syntax errors with a physical object). This was, however, a bit of a glaring inconsistency in the way we develop video-based applications, and it has now been addressed with the Video class.

Table 7-4 briefly summarizes the public properties of the Video class.

Table 7-4. Video class public properties

Property

Type

Description

deblocking

int

Indicates the type of filter applied to decoded video as part of postprocessing.

smoothing

Boolean

Specifies whether the video should be smoothed (interpolated) when it is scaled.

videoHeight

int

Read-only. Specifies the height of the video stream, in pixels.

videoWidth

int

Read-only. Specifies the width of the video stream, in pixels.

Create a Video object like this:

// Import the Video class
import flash.media.Video;
. . .
// Declare a Video data-typed variable in the class header
private var vid1:Video;
. . .
// Create an instance of the Video class in the class constructor
vid1 = new Video();

The Video class is a display object: it can render video and modify how that video appears, but it has no control over the content itself. Remember to add your Video object instance to the display list as well.

// Add Video instance to display list
addChild(vid1);
. . .
// Set display properties

vid1.x = 166;
vid1.y = 77;
vid1.width = 490;
vid1.height = 365;

Since the Video instance has no control over the video it displays, you must attach a NetStream control object to the Video instance, as follows:

vid1.attachNetStream(nsVideoPlayer);

When your video is stopped using the NetStream.close() function, the video display object does not clear. To get around this cleanly, use the Video.clear() function once the video has stopped playing. You can also remove the Video object from the display list:

vid1.clear();
removeChild(vid1);

That’s pretty much it for setting up the Video class. It can be modified in other ways to alter how it is displayed, but that would be overkill for this example. Its work is done, and now the NetStream class will do all the controlling.
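As a quick illustration of the Table 7-4 properties, here is a sketch (using the vid1 and nsVideoPlayer names from this example, and assuming the stream has started delivering data) that enables smoothing and scales the display from the native stream size:

```actionscript
// Sketch only: enable smoothing so a scaled-up video is
// interpolated rather than blocky
vid1.smoothing = true;

// videoWidth/videoHeight are read-only and report the native
// stream size (they remain 0 until data has arrived)
trace("Native size: " + vid1.videoWidth + "x" + vid1.videoHeight);

// Scale to double the native size while keeping the aspect ratio
if (vid1.videoWidth > 0) {
    vid1.width = vid1.videoWidth * 2;
    vid1.height = vid1.videoHeight * 2;
}
```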

Creating Camera Objects with the Camera Class

The Camera class provides access to and control of the user’s webcam, so it can be used as the video source. Generally, the Camera class is used in conjunction with the FMS; however, it can be used without any back-end communication requirements.

When using the Camera class to call the camera in question, the user will be challenged for access to the camera. This is a security feature, and the user must agree to allow access in the pop-up box that appears.

The pop-up challenge box is 215 x 138 pixels, so the SWF that displays the camera feed must be at least this size for the dialog box to appear.

Table 7-5 briefly summarizes the public properties of the Camera class.

Table 7-5. Camera class public properties

activityLevel (Number): Read-only. Specifies the amount of motion the camera is detecting.

bandwidth (int): Read-only. Specifies the maximum amount of bandwidth the current outgoing video feed can use, in bytes.

constructor (Object): A reference to the class object or constructor function for a given object instance.

currentFps (Number): Read-only. The rate at which the camera is capturing data, in frames per second.

fps (Number): Read-only. The maximum rate at which you want the camera to capture data, in frames per second.

height (int): Read-only. The current capture height, in pixels.

index (int): Read-only. A zero-based integer that specifies the index of the camera, as reflected in the array returned by the names property.

keyFrameInterval (int): Read-only. Specifies which video frames are transmitted in full (called keyframes) instead of being interpolated by the video compression algorithm.

loopback (Boolean): Read-only. Specifies whether a local view of what the camera is capturing is compressed and decompressed (true), as it would be for live transmission using FMS, or uncompressed (false).

motionLevel (int): Read-only. Specifies the amount of motion required to invoke the activity event.

motionTimeout (int): Read-only. The number of milliseconds between the time the camera stops detecting motion and the time the activity event is invoked.

muted (Boolean): Read-only. Specifies whether the user has denied access to the camera (true) or allowed access (false) in the Flash Player Privacy panel.

name (String): Read-only. Specifies the name of the current camera, as returned by the camera hardware.

names (Array): Static, read-only. Retrieves an array of strings reflecting the names of all available cameras without displaying the Flash Player Privacy panel.

prototype (Object): Static. A reference to the prototype object of a class or function object.

quality (int): Read-only. Specifies the required level of picture quality, as determined by the amount of compression being applied to each video frame.

width (int): Read-only. The current capture width, in pixels.

Creating a Camera object is very simple. Note that the Camera class has no public constructor; you obtain an instance via the static Camera.getCamera() method. Just as with the NetStream object, it needs to be assigned a Video object in order to display it:

private var cam1:Camera;
. . .
// Load camera source
public function loadCamera():void {
addChild(vid1);
vid1.x = 40;
vid1.y = 70;
vid1.width = 500;
vid1.height = 375;
cam1 = Camera.getCamera();
vid1.attachCamera(cam1);
}

The Camera class has a number of useful methods, which are summarized in Table 7-6.

Table 7-6. Camera class methods

getCamera(name:String = null):Camera (static)
Returns a reference to a Camera object for capturing video.

setKeyFrameInterval(keyFrameInterval:int):void
Specifies which video frames are transmitted in full (called keyframes) instead of being interpolated by the video compression algorithm.

setLoopback(compress:Boolean = false):void
Specifies whether to use a compressed video stream for a local view of the camera.

setMode(width:int, height:int, fps:Number, favorArea:Boolean = true):void
Sets the camera capture mode to the native mode that best meets the specified requirements.

setMotionLevel(motionLevel:int, timeout:int = 2000):void
Specifies how much motion is required to dispatch the activity event.

setQuality(bandwidth:int, quality:int):void
Sets the maximum amount of bandwidth per second or the required picture quality of the current outgoing video feed.
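Putting a few of these methods together, here is a sketch (building on the cam1 and vid1 instances used in this example) that requests a specific capture mode and quality before attaching the camera:

```actionscript
cam1 = Camera.getCamera();
if (cam1 != null) {
    // Ask for 320x240 capture at 15 fps; Flash Player picks the
    // closest native mode the hardware actually supports
    cam1.setMode(320, 240, 15);
    // A bandwidth of 0 means "use as much as needed" to reach the
    // quality figure; quality is a 1-100 compression target
    cam1.setQuality(0, 80);
    // Fire the activity event only on significant motion, and wait
    // 1 second of stillness before reporting inactivity
    cam1.setMotionLevel(30, 1000);
    vid1.attachCamera(cam1);
} else {
    trace("No camera found");
}
```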

Handling Video Events

A number of useful events are associated with video. These include mouse events, status events, metadata events, and cue point events.

Mouse Events

All of the buttons for standard video player functionality—such as play, pause, fast-forward, rewind, and stop—require listeners for their mouse-based events to be handled. To set up these listeners, you need to import both the SimpleButton class and the MouseEvent class:

import flash.display.SimpleButton;
import flash.events.MouseEvent;

Then create the button instances:

// Rewind, Play, Pause, Stop and Fast Forward buttons
private var butRW:SimpleButton;
private var butPlay:SimpleButton;
private var butPause:SimpleButton;
private var butStop:SimpleButton;
private var butFF:SimpleButton;

Add MouseEvent listeners for each button. Here’s an example of adding listeners for the CLICK event:

// Add button listeners
butRW.addEventListener(MouseEvent.CLICK, doRewind);
butPlay.addEventListener(MouseEvent.CLICK, doPlay);
butPause.addEventListener(MouseEvent.CLICK, doPause);
butStop.addEventListener(MouseEvent.CLICK, doStop);
butFF.addEventListener(MouseEvent.CLICK, doFastForward);

In some cases, you will also need to listen for the MouseEvent.MOUSE_UP event, such as with the fast-forward and rewind buttons.
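For example, the fast-forward button acts while it is held down, so it needs a pair of listeners (a sketch using the butFF instance; the handler names match those used later in the chapter):

```actionscript
// Start fast-forwarding while the button is held down...
butFF.addEventListener(MouseEvent.MOUSE_DOWN, onClickFF);
// ...and resume normal playback when it is released
butFF.addEventListener(MouseEvent.MOUSE_UP, onReleaseFF);
```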

Status Events

Status events are broadcast by any existing NetStream instance whenever it has a change in status. The event tells you information such as whether the NetStream has stopped or paused, whether the buffer is full, and whether it has errored out.

import flash.events.NetStatusEvent;
. . .
// Add a listener for any status events (playing, stopped, etc.)
nsVideoPlayer.addEventListener(NetStatusEvent.NET_STATUS, nsOnStatus);
. . .
private function nsOnStatus(infoObject:NetStatusEvent):void {
for (var prop:String in infoObject.info) {
trace("\t" + prop + ":\t" + infoObject.info[prop]);
}
}

The nsOnStatus function will trace through the infoObject and its contents whenever a status event is broadcast.

Table 7-7 summarizes the NetStream onStatus events and errors.

Table 7-7. NetStream onStatus events and errors

NetStream.Buffer.Empty: Data is not being received quickly enough to fill the buffer. Data flow will be interrupted until the buffer refills, at which time a NetStream.Buffer.Full message will be sent and the stream will begin playing again.

NetStream.Buffer.Full: The buffer is full and the stream will begin playing.

NetStream.Buffer.Flush: Data has finished streaming, and the remaining buffer will be emptied.

NetStream.Play.Start: Playback has started.

NetStream.Play.Stop: Playback has stopped.

NetStream.Play.StreamNotFound: The video file passed to the play() method can’t be found.

NetStream.Seek.InvalidTime: For video downloaded with progressive download, the user has tried to seek or play past the end of the video data that has downloaded thus far, or past the end of the video once the entire file has downloaded. The event’s info.details property contains a time code that indicates the last valid position to which the user can seek.

NetStream.Seek.Notify: The seek operation is complete.

The Camera class has the same onStatus event as the NetStream class, and should be handled in the same way. Camera also has a unique activity event (ActivityEvent.ACTIVITY), which is fired whenever the camera detects or stops detecting motion.

First, import the event class:

import flash.events.ActivityEvent;

Then, once you have created a Camera instance, add a listener for any activity:

cam1.addEventListener(ActivityEvent.ACTIVITY, activityHandler);
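The handler itself receives an ActivityEvent whose activating property tells you whether motion has started or stopped. A sketch (the handler name matches the listener above):

```actionscript
private function activityHandler(event:ActivityEvent):void {
    // activating is true when the camera starts detecting motion,
    // and false once motion has stopped for the motionTimeout period
    if (event.activating) {
        trace("Camera activity started");
    } else {
        trace("Camera activity stopped");
    }
}
```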

Metadata Events

You can use the onMetaData callback handler to view the metadata information in your video files. Metadata includes information about your video file, such as duration, width, height, frame rate, and more. The metadata information that is added to your FLV or H.264 file depends on the software you use to encode it, or the software you use to add metadata information after encoding. You can use the metadata to do things like work out and display the video length. The metadata gives you a way to interrogate the video file prior to playing it, and gives the user this feedback as soon as the video file is accessible.

ActionScript 3.0’s metadata callback handler is considerably different from that of ActionScript 2.0, and indeed, from the callback handlers of just about any other class. It uses a client object, to which the onMetaData handler method is assigned. The callback method is invoked on whatever is set with the client property.

var objTempClient:Object = new Object();
objTempClient.onMetaData = mdHandler;
nsVideoPlayer.client = objTempClient;

// This function cycles through and displays all of the video file's
// metadata, so you can see what metadata it has and get used to
// seeing the sort of metadata that is attached to video files.
private function mdHandler(obj:Object):void {
for(var x in obj){
trace(x + " : " + obj[x]);
}
}

Using the previous code snippet to trace the returned metadata information object in the mdHandler() method creates the following output on the included FLV file:

width: 320
audiodatarate: 96
audiocodecid: 2
videocodecid: 4
videodatarate: 400
canSeekToEnd: true
duration: 16.334
audiodelay: 0.038
height: 213
framerate: 15

Table 7-8 shows the possible values for video metadata in FLV files.

Table 7-8. Video metadata in FLV files

audiocodecid: A number that indicates the audio codec (code/decode technique) that was used. Possible values are 0 (uncompressed), 1 (ADPCM), 2 (MP3), 5 (Nellymoser 8kHz mono), and 6 (Nellymoser).

audiodatarate: A number that indicates the rate at which audio was encoded, in kilobytes per second.

audiodelay: A number that indicates at what time in the FLV file “time 0” of the original FLV file exists. The video content needs to be delayed by a small amount to properly synchronize the audio.

canSeekToEnd: A Boolean value that is true if the FLV file is encoded with a keyframe on the last frame, which allows seeking to the end of a progressive download movie clip. It is false if the FLV file is not encoded with a keyframe on the last frame.

cuePoints: An array of objects, one for each cue point embedded in the FLV file. The value is undefined if the FLV file does not contain any cue points. Each object has type, name, time, and parameters properties.

duration: A number that specifies the duration of the FLV file, in seconds.

framerate: A number that is the frame rate of the FLV file.

height: A number that is the height of the FLV file, in pixels.

videocodecid: A number that is the codec version that was used to encode the video. Possible values are 2 (Sorenson H.263), 3 (screen video; SWF 7 and later only), 4 (On2 VP6; SWF 8 and later only), and 5 (On2 VP6 video with alpha channel; SWF 8 and later only).

videodatarate: A number that is the video data rate of the FLV file.

width: A number that is the width of the FLV file, in pixels.

Table 7-9 shows the video metadata reported on H.264 files.

Table 7-9. Video metadata in H.264 files

duration: Shows the length of the video. (Unlike for FLV files, this field is always present.)

videocodecid: For H.264, avc1 is reported.

audiocodecid: For AAC, mp4a is reported. For MP3, mp3 is reported.

avcprofile: The H.264 profile. Possible values are 66, 77, 88, 100, 110, 122, and 144.

avclevel: A number between 10 and 51.

aottype: Audio type. Possible values are 0 (AAC Main), 1 (AAC LC), and 2 (SBR).

moovposition: The offset, in bytes, of the moov atom in the file.

trackinfo: An array of objects containing various information about all the tracks in the file.

chapters: Information about chapters in audiobooks.

seekpoints: An array of times that can be fed directly into NetStream.seek().

videoframerate: The frame rate of the video, if a monotone frame rate is used. Most videos will have a monotone frame rate.

audiosamplerate: The original sampling rate of the audio track.

audiochannels: The original number of channels of the audio track.

width: The width of the video source.

height: The height of the video source.

“The time has come,” the Walrus said, “to talk of many things.” Of moov atoms and seekpoints, of cabbages and kings. Well, OK, perhaps we won’t get into vegetables and royalty, but now is a good time to talk about a couple of interesting differences when interrogating H.264-based video files, which involve moov atoms and seekpoints.

Atoms are metadata in their own right. Specifically, you can get information about the moov atom. The moov atom is movie resource metadata about the movie (number and type of tracks, location of sample data, and so on). It describes where the movie data can be found and how to interpret it.

Since H.264 files contain an index, unlike FLV files, you can provide a list of seekpoints, which are times you can seek to without having the playhead jump around. You’ll get this information through the onMetaData callback from an array with the name seekpoints. Some files, however, are not encoded with this information, which means that these files are not seekable at all. This works differently from keyframe-based FLV files, which use cue points rather than seekpoints. H.264-based video cannot use cue points.
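For instance, a seek that snaps to the nearest listed seekpoint might look like the following sketch. It assumes the seekpoints array was stored from the onMetaData callback and that each entry carries a time property:

```actionscript
// Assumed to be populated in the onMetaData handler:
// seekPoints = obj.seekpoints;
private var seekPoints:Array;

private function seekToNearest(target:Number):void {
    if (seekPoints == null || seekPoints.length == 0) {
        return; // encoded without seekpoints; the file is not seekable
    }
    // Find the seekpoint time closest to the requested time
    var best:Number = seekPoints[0].time;
    for (var i:int = 1; i < seekPoints.length; i++) {
        if (Math.abs(seekPoints[i].time - target) < Math.abs(best - target)) {
            best = seekPoints[i].time;
        }
    }
    nsVideoPlayer.seek(best);
}
```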

Building a Video Player

So now that you have all the theory, let’s build an actual video player application. You will eventually end up with something like the player shown in Figure 7-7. All the physical assets have already been created for the example.

image

Figure 7-7. The final video player

This simple video player will have a loading progress bar incorporated into the scrubber bar. Beneath that is a status text field to tell the user when a video is loading, playing, paused, and so on. At the bottom are the standard play, pause, stop, rewind, and fast-forward buttons. The video play length will be displayed on the right of the scrubber bar, and the position of the playhead will be displayed on the left of the scrubber bar.

You’ll use four class files for the video player:

· Main.as: This class will contain the instances of the video and button controls. You will need to create, address, and display buttons, video, and text fields, so you will need to import the Flash classes for these.

· Videos.as: This class will handle the video control, which will load the video, read its metadata, and respond to button-control commands.

· ButtonManager.as: This class will handle the interactive controls.

· MediaControlEvent.as: This class will allow us to fire off bespoke events for button presses.

A number of “manager” classes that I use really should be made into singletons. A singleton is a pattern as well as a code implementation, which enforces the convention of creating only one instance of a given class. With the release of ActionScript 3.0, Adobe has chosen to comply with the ECMA-262 standard, and thus has been forced to disallow private constructors. These were essential for the Java standard way of implementing singletons. Without private constructors, implementing a singleton-based class is a sticky-tape and elastic-band quality build proposition that is doomed to lack consistency, and has my object-oriented spidey senses tingling away like crazy. Bring back the private constructor!
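One common ActionScript 3.0 workaround is the “enforcer” convention: the constructor demands a parameter of a type that only code in the class’s own source file can supply. This is a sketch of that convention (the class names are my own, not part of this chapter’s player):

```actionscript
package foundationAS.ch07 {
    public class VideoManager {
        private static var instance:VideoManager;

        // The constructor must be public (ECMA-262), but it can only
        // be called with the file-private SingletonEnforcer class
        public function VideoManager(enforcer:SingletonEnforcer) {
            if (enforcer == null) {
                throw new Error("Use VideoManager.getInstance()");
            }
        }

        public static function getInstance():VideoManager {
            if (instance == null) {
                instance = new VideoManager(new SingletonEnforcer());
            }
            return instance;
        }
    }
}

// Declared outside the package block, so it is visible
// only within this source file
class SingletonEnforcer {}
```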

Setting up the Project

I have created the physical assets and initial FLA file (videoDemo_final.fla) to save you some time on this example. You can find the starting point for this exercise in this book’s downloadable code. You will also find all the class files (in case you have the urge to cheat).

Open videoDemo_final.fla in Flash. You will see all the graphical assets are already laid out on the stage for you, as shown in Figure 7-8. They also already have instance names for your convenience. Save this file to a new work directory of your choice.

image

Figure 7-8. Video player assets inside Flash

Creating the Main.as File

The FLA has a document class called Main.as, and this is where you will start. Let’s get to work.

1. Create a new .as file, and add the following code to it:

package foundationAS.ch07 {

import flash.display.MovieClip;
import foundationAS.ch07.Videos;
import foundationAS.ch07.ButtonManager;
import flash.text.TextField;
import flash.display.SimpleButton;

public class Main extends MovieClip {

}
}

2. Save the file in a subdirectory called foundationAS.ch07 as Main.as. You have created the class and imported the external classes you will be using.

3. Add the code to declare the Video and ButtonManager classes and create the Main constructor. These will take care of the video and the button control and functionality. Your file should look like this:

package foundationAS.ch07 {

import flash.display.MovieClip;
import foundationAS.ch07.Videos;
import foundationAS.ch07.ButtonManager;
import flash.text.TextField;
import flash.display.SimpleButton;

public class Main extends MovieClip {
private var vids:Videos;
public var buts:ButtonManager;

public function Main() {
}
}
}

The Main.as FLA document class is complete. This won’t do much at the moment, but never fear—you’ll come back and put the calls to the Videos and ButtonManager classes in after you’ve created those classes.

Creating the Videos.as File

Now it’s time to turn our attention to another class file: Videos.as. You will start by importing all the classes you will need for this file. I will explain what these are for as we go along.

The qualified constructor you will create is designed so you can pass references for the scrubber movie clip and the text fields I have physically put on the stage so the class can directly update them. I could have created them within the Videos class file in code, as they are specifically for the video player, but for the sake of simplicity (less code), I have chosen to create them physically and reference them. There is nothing wrong with this approach (no matter what strict object-oriented purists may tell you).

1. Create another new .as file and save it as Videos.as in your foundationAS.ch07 directory.

2. Add the following code to the Videos.as file, and then save it. You’ll notice that it declares a number of variables—all of the variables you will use in the code. This is to save time, so you don’t need to later go back to the top of the class to create them.

package foundationAS.ch07 {
import flash.net.NetConnection;
import flash.net.NetStream;
import flash.media.Video;
import flash.display.MovieClip;
import flash.events.TimerEvent;
import flash.events.NetStatusEvent;
import flash.utils.Timer;
import flash.text.TextField;
import foundationAS.ch07.MediaControlEvent;

public class Videos extends MovieClip {
private var vid1:Video;
private var ncVideoPlayer:NetConnection;
private var nsVideoPlayer:NetStream;
private var flvTarget:String;
private var vidDuration:Number;
private var trackLength:int;
private var timerLoading:Timer;
private var timerPlayHead:Timer;
private var timerFF:Timer;
private var timerRW:Timer;
private var txtStatus:TextField;
private var txtTrackLength:TextField;
private var txtHeadPosition:TextField;
private var bytLoaded:int;
private var bytTotal:int;
private var opct:int;
private var movScrubber:MovieClip;
private var ns_minutes:Number;
private var ns_seconds:Number;
private var seekRate:Number=3;
private var headPos:Number;

// CONSTRUCTOR (note: a constructor cannot declare a return type)
public function Videos(movScrubber:MovieClip, txtStatus:TextField,
txtHeadPosition:TextField, txtTrackLength:TextField) {
}
}
}

3. Now assign the references passed into the constructor to the instance variables you declared earlier. Because the constructor parameters share names with the instance variables, you must use the this keyword; writing movScrubber = movScrubber would simply assign each parameter to itself. Also set the loading progress bar to its initial size, turn off the scrubber playhead (until you have loaded enough video to play), and set the initial status message. Among other things, these initial settings prove your references are working. Add the following code to the constructor:

// Store the passed-in references, and set start
// positions and contents
this.movScrubber = movScrubber;
this.txtStatus = txtStatus;
this.txtHeadPosition = txtHeadPosition;
this.txtTrackLength = txtTrackLength;
movScrubber.movLoaderBar.width = 1;
movScrubber.movHead.alpha = 0;
txtStatus.text = "AWAITING LOCATION";

4. Set up the NetConnection and NetStream instances, and set the video target file. To do so, add the following into the constructor, just before the closing curly brace:

// Instantiate vars, connect NC and NS
ncVideoPlayer = new NetConnection();
ncVideoPlayer.connect(null);
nsVideoPlayer = new NetStream(ncVideoPlayer);
flvTarget = "video_final.flv";

Although you are targeting an FLV file in this example, you could just as easily target an H.264-encoded file at this point.

5. Before playing the video, you want to set up the buffer so that you will have preloaded an acceptable amount of video into the buffer before you play it. Set the buffer to 5 seconds. Add the following to the constructor:

nsVideoPlayer.bufferTime = 5;

6. Instantiate the Video display object, like so (add this to the constructor):

vid1 = new Video();

7. You’re finally ready to call and play your video file. Add the following line to the constructor:

loadFLV();

But hang on—that’s a function call, isn’t it? Yes, it is. This keeps the call to action modularized. The specifics of which video to load, where to place it, and how big to make it, along with the actual instructions to play it, are all in this one function. In a refactored future version, you could easily make this method public and allow the user to pass in the video target and extend the Video object settings when it is called.

8. Add the function the constructor calls to your Videos.as file:

// Load FLV source
private function loadFLV():void {
addChild(vid1);
vid1.x = 166;
vid1.y = 77;
vid1.width = 490;
vid1.height = 365;
vid1.attachNetStream(nsVideoPlayer);
nsVideoPlayer.play(flvTarget);
}

For the moment, you have hard-coded the dimensions and position of the Video instance and added it to the display list, before attaching the NetStream instance to it. Then you simply issue the NetStream play() command.

You now need to address a number of important and complementary issues in order to make use of the event handling and the control functionality that the NetStream class affords. You can see the buttons on the screen, and although they will respond when you click them, they do not have any control over the actual video yet.

This section leaves you with a lot of the basic code set up, although you still have a way to go. Next, you’ll turn your attention to the control of the video player.

Controlling the Video Player

Now we will add the status text field, loading progress bar, and playhead bar. You’ll also handle the metadata and the cue points.

Setting up the Status Text Field

Let’s start by setting up a listener and handler for the NetStream onStatus event. This event is fired off whenever the NetStream starts playing, stops playing, the buffer fills, and so on (see Table 7-7 for the NetStream events).

Using the NetStream onStatus event is a great way of populating the status text field initially. You might think the button event listeners would be the most consistent way to do that, but they are only command events and do not reflect if the video actually responded to those commands.

First, add the following event listener in the constructor, just before the closing curly brace:

nsVideoPlayer.addEventListener(NetStatusEvent.NET_STATUS, nsOnStatus);

You have set up your NetStream instance to listen for the NetStatus event NET_STATUS and call the nsOnStatus function when it receives an event object of that type. It will automatically send the event object with it when it calls the handler.

Next, create the following event handler in the Videos.as file, outside the Videos.as constructor, as a function in its own right:

public function nsOnStatus(infoObject:NetStatusEvent):void {
for (var prop:String in infoObject.info) {
// This trace will show what properties the NetStatus event contains
trace("\t" + prop + ":\t" + infoObject.info[prop]);
// This If checks to see if it is a code property and if that contains
// a stop notification. If it is, then it displays this in the status
// text field
if (prop == "code" && infoObject.info[prop] == "NetStream.Play.Stop") {
txtStatus.text = "Stopped";
}
// This If checks to see if it is a code property and if that contains
// a start notification. If it is, then it displays this in the status
// text field and makes the scrubhead movie clip visible
else if (prop == "code" && infoObject.info[prop] ==
"NetStream.Play.Start") {
txtStatus.text = "Playing";
// alpha runs from 0 to 1 in ActionScript 3.0, not 0 to 100
movScrubber.movHead.alpha = 1;
}
}
}

The received object is of type NetStatusEvent. To give you a better idea of the sort of things the NET_STATUS event reports, the code includes a for in loop that cycles through all the properties of the event’s info object and traces them. When you next publish your SWF, you will see a lot of NET_STATUS events being reported to the Output panel of your IDE, like the following:

level: status
code: NetStream.Play.Start

Of course, you really need the NET_STATUS event to confirm a few important things at the moment: when the video starts playing and when it stops, for example, because you need to adjust the status text, the playhead, and so forth. It would be better if you could also get pause, fast-forward, and rewind status events. However, although NET_STATUS can provide seek event notification, it cannot report which way the seek is going, and it has no concept of pausing at all. So for these functions, you will need to rely on the buttons themselves dispatching events. This is less satisfactory, as it tells you only that the command was sent, not that it has been executed, but it’s the best you’re going to do until Adobe extends the NetStream events.
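As a sketch of that button-driven approach (the constant and property names here are hypothetical; the real MediaControlEvent class is built later in the chapter), a button handler can dispatch a bespoke event that the video class listens for:

```actionscript
// In the button manager: announce which command was clicked
// (CONTROL_TYPE and command are assumed names for this sketch)
dispatchEvent(new MediaControlEvent(MediaControlEvent.CONTROL_TYPE, "pause"));

// In the video class: react to the command event
buts.addEventListener(MediaControlEvent.CONTROL_TYPE, onMediaControl);

private function onMediaControl(event:MediaControlEvent):void {
    if (event.command == "pause") {
        nsVideoPlayer.togglePause();
        txtStatus.text = "Paused";
    }
}
```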

Now you will implement the loading progress bar of your video player.

Implementing the Loading Progress Bar

The loading progress bar will display how far the video is through the loading process. This should complement the playhead bar, and indeed, it will operate within the scrubber movie clip. I have already created the physical asset. You will need to loop the check at regular intervals until the load is complete. In ActionScript 2.0, you would have used setInterval or an onEnterFrame. In ActionScript 3.0, it is more powerful and elegant to use the new Timer class. You have already imported this and declared it in the constructor, so let’s instantiate it, add a listener, and start the timer running.

Add the following code to your Videos.as constructor, just before the closing curly brace:

// Add Timers
timerLoading = new Timer(10, 0);
timerLoading.addEventListener(TimerEvent.TIMER, this.onLoading);
timerLoading.start();

The first line instantiates the new Timer instance with the parameters of interval and number of loops. You have set a 10-millisecond interval and told it to loop indefinitely. You will be a good programming citizen and stop the timer when it has finished its job.

Now that you have defined an event listener and started the timer, let’s take a look at the event handler code. Create the following function at the bottom of the Videos.as class file:

private function onLoading(event:TimerEvent):void {
bytLoaded = nsVideoPlayer.bytesLoaded;
bytTotal = nsVideoPlayer.bytesTotal;
opct = ((nsVideoPlayer.bytesTotal) / 100);
movScrubber.movLoaderBar.width = (Math.floor(bytLoaded / opct)) * 4;
if (bytLoaded == bytTotal) {
timerLoading.stop();
}
}

This is all fairly self-explanatory. The first few lines work out the amount loaded and the total size of the video in bytes. You then calculate what 1 percent of the total value would be. After this, it is a simple matter of setting the scrubber movie clip’s loader bar movie clip to be the appropriate width based on these calculations and taking into account that the entire bar is 400 pixels wide. Finally, you check if the video has completed loading (if the bytes loaded equal the bytes total), and if so, stop the loading Timer instance.

The movie loading will seem instantaneous if the file is being loaded locally. To see the loading progress bar in action, you really need to load a video from a web server.

Let’s follow this by creating the playhead bar.

Creating the Playhead Bar

The playhead bar will show where the playhead is when you are watching the video. Once again, I have already created the graphical object on the stage, within the scrubber movie clip. This will be coded similarly to the loading progress bar. You already have the necessary variables defined in the Videos.as class file, so let’s go ahead and create a Timer instance for this function in its constructor.

Add the following code to the Videos.as constructor, again just before the closing curly brace:

timerPlayHead = new Timer(100, 0);
timerPlayHead.addEventListener(TimerEvent.TIMER, this.headPosition);
timerPlayHead.start();

The first line instantiates the new Timer instance with the parameters of interval and number of loops. You have set a 100-millisecond interval and told it to loop indefinitely.

Now that you have defined an event listener and started the timer, let’s look at the event handler code. Create the following function at the bottom of the Videos.as class file:

private function headPosition(event:TimerEvent):void {
// Set Head movie clip to correct width but don't run till we get the
// track length from the metadata
if (trackLength > 0) {
movScrubber.movHead.width = (nsVideoPlayer.time / (trackLength / 100)) * 4;
}
// Format and set timer display text field
ns_minutes = int(nsVideoPlayer.time / 60);
ns_seconds = int(nsVideoPlayer.time % 60);
if (ns_seconds < 10) {
txtHeadPosition.text = ns_minutes.toString() + ":0" + ns_seconds.toString();
} else {
txtHeadPosition.text = ns_minutes.toString() + ":" + ns_seconds.toString();

}
}

As you can see, you don’t set the playhead movie clip width until you have received the duration metadata to establish the track length. You will learn how to handle the metadata in the next section.

The playhead movie clip calculations differ from those for the loader bar movie clip in that they cannot use the bytes loaded to indicate the playhead position, nor the bytes total to indicate the total track length. This is because you are working in chronological time units here, not bytes as you did with the loading progress bar. So you need the NetStream.time property, which tells you the playhead position in seconds, and the duration metadata, which tells you the total duration of the video in seconds. Once you have the necessary calculation from those figures, you need to do a little formatting to show the result in minutes:seconds format, which you then display in the head position text field. Because this is on a timer, it will update in real time for the user.

I have deliberately not shown the duration in hours:minutes:seconds format, because these examples will not play anything that stretches into hours. If you need to do this, the calculation is a simple extension of the one above.
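Should you need it, the hours version is just one more division and modulo. Here is a minimal sketch, written in plain JavaScript for illustration (the arithmetic is identical in ActionScript 3, and the function name is mine, not part of the player code):

```javascript
// Format a duration given in seconds as hours:minutes:seconds,
// zero-padding minutes and seconds to two digits.
function formatHMS(totalSeconds) {
  var hours = Math.floor(totalSeconds / 3600);
  var minutes = Math.floor((totalSeconds % 3600) / 60);
  var seconds = Math.floor(totalSeconds % 60);
  var pad = function (n) { return n < 10 ? "0" + n : String(n); };
  return hours + ":" + pad(minutes) + ":" + pad(seconds);
}
// formatHMS(3725) returns "1:02:05"
```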

Handling the Metadata

Now you need to get the duration of the video from the video’s metadata. Although you need only the duration, you will use a for in loop in the metadata event handler to see what metadata your video contains (see the “Metadata events” section earlier in the chapter for more information).

FLV video encoded with versions of the tools before Flash CC was often missing the duration metadata, and you would need third-party software to add it specifically. You may still find this is the case with any video you have not encoded yourself, or that was not encoded with a recent version of the available video encoders. This is why it is important to check specifically for the duration metadata in early testing of any video you will play. It is a relatively simple matter to add the duration information to the metadata after the fact.

Let’s set up the metadata event listener. Again, you have already imported and defined any necessary classes and variables. Add the following lines of code to the Videos.as class file constructor:

// Create a metadata event handling object
var objTempClient:Object = new Object();
objTempClient.onMetaData = mdHandler;
nsVideoPlayer.client = objTempClient;

Metadata (and for that matter, cue point) handling is quite different in ActionScript 3.0 than it was in ActionScript 2.0. You now use NetStream’s client property (discussed in the “Metadata events” section earlier in the chapter).

Now that you’ve assigned the NetStream client property and set the metadata handler, let’s look at the event handler itself. You have assigned the mdHandler function to deal with all metadata events. Add this function to your Videos.as class file:

private function mdHandler(obj:Object):void {
    for (var x:String in obj) {
        trace("METADATA " + x + " is " + obj[x]);
        // If this is the duration, format it and display it
        if (x == "duration") {
            trackLength = obj[x];
            var tlMinutes:int = trackLength / 60;
            if (tlMinutes < 1) {
                tlMinutes = 0;
            }
            var tlSeconds:int = trackLength % 60;
            if (tlSeconds < 10) {
                txtTrackLength.text = tlMinutes.toString() + ":0" + tlSeconds.toString();
            } else {
                txtTrackLength.text = tlMinutes.toString() + ":" + tlSeconds.toString();
            }
        }
    }
}

You loop through the properties of the object that this function receives. You check specifically for only the duration property. Once you find it, you format it for use in the track length text field and store it for use in the playhead calculations. Now when you publish your movie, you will see all the metadata information in the Output panel.

Handling Cue Points

Cue point handling is very similar to metadata handling. As explained in the “Cue point events” section earlier in the chapter, not all videos have cue points, and it’s up to you or your production team to add them. You generally use them to add enhanced interactivity to video. This is an incredibly powerful feature of ActionScript.

So let’s set up the cue point event listener. As usual, you have already imported and defined any necessary classes and variables. Add the following line of code to the Videos.as class file constructor just below where you defined your metadata listener:

objTempClient.onCuePoint = cpHandler;

This should now leave this small section of the constructor looking like so:

// Create a metadata and cue point event handling object
var objTempClient:Object = new Object();
objTempClient.onMetaData = mdHandler;
objTempClient.onCuePoint = cpHandler;
nsVideoPlayer.client = objTempClient;

As you can see, the cue point and metadata events use the same NetStream client object; they differ only in the handler function assigned.

Now add the following function to your Videos.as class file to handle the cue points:

private function cpHandler(obj:Object):void {
    for (var c:String in obj) {
        trace("CUEPOINT " + c + " is " + obj[c]);
        if (c == "parameters") {
            for (var p:String in obj[c]) {
                trace("  PARAMETER " + p + " is " + obj[c][p]);
            }
        }
    }
}

This function will loop through the returned cue point object to display the standard cue point information (time, type, and name). Most important, when it finds the parameters array within it, it will also loop through this to output any extra parameters you defined and set during the cue point encoding process. When a cue point fires, the trace output looks something like this:

CUEPOINT time is 5.38
CUEPOINT type is event
CUEPOINT name is onMary
CUEPOINT parameters is
PARAMETER activity is snowboarding
PARAMETER with is Fluffy
PARAMETER name is Mary
PARAMETER location is Chamonix

For the purposes of this demo, you do not actually use cue points to enhance interaction with the video. I have added some simply to show the sort of output you can expect from the parameters you set.

OK, so you’ve loaded, played, monitored, and event-handled the video. Now you really need to exercise some control over it. You have a full complement of buttons on the stage, but as of yet, no control is exercised using them. So let’s change that now.

Controlling the Video on the Stage

Now you will create another class, called ButtonManager, which will handle all the button-based events in the video player.

Create a new file inside your foundationAS.ch07 directory called ButtonManager.as. Add the following code, which takes care of all the classes you need to import and the variable definitions you will need:

package foundationAS.ch07 {
    import flash.net.*;
    import flash.display.Sprite;
    import flash.display.SimpleButton;
    import flash.events.MouseEvent;
    import flash.events.EventDispatcher;
    import flash.events.Event;

    import foundationAS.ch07.MediaControlEvent;

    public class ButtonManager extends Sprite {
        private var butRW:SimpleButton;
        private var butPlay:SimpleButton;
        private var butPause:SimpleButton;
        private var butStop:SimpleButton;
        private var butFF:SimpleButton;
        private var pauseOn:Boolean = false;

        public function ButtonManager(butRW:SimpleButton, butPlay:SimpleButton,
                butPause:SimpleButton, butStop:SimpleButton,
                butFF:SimpleButton):void {
            this.butRW = butRW;
            this.butPlay = butPlay;
            this.butPause = butPause;
            this.butStop = butStop;
            this.butFF = butFF;
        }
    }
}

Because I have deliberately created the buttons graphically on the stage rather than in code, you pass references to them into the class constructor. You then immediately copy these references into the class's own properties so you can access them throughout the scope of the class.

Let’s now return to the Main.as file and instantiate our new Videos and ButtonManager classes and pass in the appropriate instance references. Add the following lines inside the Main function:

public function Main() {
    vids = new Videos(movScrubber, txtStatus, txtHeadPosition, txtTrackLength);
    addChild(vids);
    buts = new ButtonManager(butRW, butPlay, butPause, butStop, butFF);
}

Adding Button Functionality

Now let’s add the button functionality. Start by adding event listeners to the ButtonManager.as class file constructor so each button listens for MOUSE_DOWN events. These tell you as soon as a button is pressed; you do not want to wait until the button is released to be notified. This is especially important for functions like fast-forward (FF) and rewind (RW), which rely on the user pressing and holding down the button. The FF and RW buttons use a Timer class instance to continue running while the button is held, so they also need a release event handler to stop them when the user releases the mouse. For the release, you will use the CLICK mouse event rather than the more obvious MOUSE_UP event.

The MOUSE_UP event is fired on any button, even if it is not the one that fired the MOUSE_DOWN event. If you moved your mouse during the press, you could easily get an erroneous release function call for another button. The CLICK event registers the button that was pressed and registers a mouse release against only that button, no matter where the mouse may have slid before it was released.

In the ButtonManager.as constructor, add the following listener definitions:

// Add button listeners
butRW.addEventListener(MouseEvent.MOUSE_DOWN, doRewind);
butRW.addEventListener(MouseEvent.CLICK, stopRewind);
butPlay.addEventListener(MouseEvent.MOUSE_DOWN, doPlay);
butPause.addEventListener(MouseEvent.MOUSE_DOWN, doPause);
butStop.addEventListener(MouseEvent.MOUSE_DOWN, doStop);
butFF.addEventListener(MouseEvent.MOUSE_DOWN, doFastForward);
butFF.addEventListener(MouseEvent.CLICK, stopFastForward);

In the constructor, you also need to set the initial state of the buttons. By default, the enabled state of all the buttons is true. However, because the video will automatically begin playing, the play button should not be enabled initially; in fact, it would be very poor usability to enable it. The same logic applies to the other buttons based on the status of the video: if the video is paused, for example, none of the other buttons need to be enabled. So, let’s start by adding the following to the end of the ButtonManager.as constructor before coding the other related button states:

butPlay.enabled = false;

The event handling itself could have been done in a number of ways, and I spent considerable time deciding on the best approach here; I chose good convention over quick-and-dirty code. In principle, you need to notify the Videos.as class instance that a button has been pressed or released so it can execute that command on the video it is controlling. The simpler way might have been to add a function for every button and every press within the Videos.as class instance, and then add each one as an event listener on each button and each event. Long-winded, but it would look simple enough and it would work. However, that calls for a lot of functions and is really not good convention, so I decided to use button event handlers within the ButtonManager.as class instance instead. Add the following functions to the body of your ButtonManager.as class file:

private function doRewind(evnt:MouseEvent):void {
    dispatchEvent(new MediaControlEvent("RW"));
}

private function stopRewind(evnt:MouseEvent):void {
    dispatchEvent(new MediaControlEvent("RWEND"));
}

private function doPlay(event:MouseEvent):void {
    butPlay.enabled = false;
    butPause.enabled = true;
    butRW.enabled = true;
    butFF.enabled = true;
    butStop.enabled = true;
    dispatchEvent(new MediaControlEvent("PLAY"));
}

private function doPause(event:MouseEvent):void {
    if (pauseOn) {
        butRW.enabled = true;
        butFF.enabled = true;
        butStop.enabled = true;
        pauseOn = false;
    } else {
        butRW.enabled = false;
        butFF.enabled = false;
        butStop.enabled = false;
        pauseOn = true;
    }
    dispatchEvent(new MediaControlEvent("PAUSE"));
}

private function doStop(event:MouseEvent):void {
    butPlay.enabled = true;
    butPause.enabled = false;
    butRW.enabled = false;
    butFF.enabled = false;
    butStop.enabled = false;
    dispatchEvent(new MediaControlEvent("STOP"));
}

private function doFastForward(event:MouseEvent):void {
    dispatchEvent(new MediaControlEvent("FF"));
}

private function stopFastForward(event:MouseEvent):void {
    dispatchEvent(new MediaControlEvent("FFEND"));
}

Let’s look at the pause functionality for a moment before we move on. Notice that you have set a Boolean variable in the class called pauseOn. You need this because there is no easy way to detect whether the video is paused or unpaused when the pause button is pressed, as it toggles. You know that, by default, when the application loads, the video starts playing, so it is not paused. Therefore, the first time through, you know pauseOn is false, and you can toggle the status within the pause event handler based on this knowledge, as you will see.

Next, notice that these functions are dispatching their own event: MediaControlEvent. This is an event that you will create by extending the Event class, in order to fire off notifications of stop, play, pause, and so on when the buttons are clicked. You’ll create the custom event after finishing the three class files, but let’s look at the reasoning for handling events this way.

You want to allow for simple, modular, extendable event handling and registration for any class that needs to use the media control buttons created in the ButtonManager class. You also want to allow for the possibility of sending parameters with the dispatched object, which the receiver can read by interrogating it. You will include a string that identifies the button command (rewind or rewind end, for example). Because of the design of this solution, you could refactor it to send any amount of data the event handler might need in the future, or indeed accommodate any other type of class that needs to use these media control buttons but requires extended event data. Additionally, you can assign a single event handler function in the listening class to handle any events for which it receives notification. (In a stricter object-oriented project, you would be looking at supporting the use of interfaces through this approach, but that’s a subject for another book.)
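To make the idea concrete, here is a minimal sketch of an event object carrying extra data alongside its command string. It is written in plain JavaScript purely for illustration (the pattern ports directly to the ActionScript 3 class you will build shortly), and the data payload shown is hypothetical, not part of the player you are building:

```javascript
// Hypothetical variant of the custom event: alongside the command string,
// it carries an optional payload object that the handler can interrogate.
function MediaControlEvent(command, data) {
  this.type = MediaControlEvent.CONTROL_TYPE; // event type identifier
  this.command = command;                     // e.g. "RW", "PLAY", "FFEND"
  this.data = data || {};                     // any extra parameters
}
MediaControlEvent.CONTROL_TYPE = "headControl";

var evt = new MediaControlEvent("FF", { seekRate: 3 });
// evt.command is "FF"; evt.data.seekRate is 3
```

A single handler can then switch on evt.command exactly as the player does, while reading any extra fields from evt.data when it needs them.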

So let’s add an event listener for the new event you are going to create. Add this function into the ButtonManager class. It will register an external handler for any MediaControlEvent.CONTROL_TYPE events:

// This function adds any external objects to the listener list
// for the mediaControl event
public function addMediaControlListener(funcObj:Function):void {
    addEventListener(MediaControlEvent.CONTROL_TYPE, funcObj);
}

Adding the listener now is a little back to front, as you should really create the event class first. However, it wouldn’t have meant that much to you if you had created the event class first, with no frame of reference to its purpose. Also, I wanted to keep the ButtonManager class code all together, as it’s a simple class.

And that’s it for the ButtonManager class. You will call this function from the FLA document class, Main.as. This will be the last line of code in the Main.as class:

buts.addMediaControlListener(vids.onControlCommand);

As you can see, you have defined the Videos.as class function onControlCommand to handle the MediaControlEvent.CONTROL_TYPE events, and you’ll add that next.

Save and close both the Main.as and ButtonManager.as classes. They are complete.

Your Main.as file should look like this:

package foundationAS.ch07 {

    import flash.display.MovieClip;
    import foundationAS.ch07.Sounds;
    import foundationAS.ch07.Videos;
    import foundationAS.ch07.ButtonManager;
    import flash.text.TextField;
    import flash.display.SimpleButton;

    public class Main extends MovieClip {
        private var sound1:Sounds;
        private var vids:Videos;
        public var buts:ButtonManager;

        public function Main() {
            vids = new Videos(movScrubber, txtStatus, txtHeadPosition, txtTrackLength);
            addChild(vids);
            buts = new ButtonManager(butRW, butPlay, butPause, butStop, butFF);
            buts.addMediaControlListener(vids.onControlCommand);
        }
    }
}

And your ButtonManager.as file should look like this:

package foundationAS.ch07 {
    import flash.net.*;
    import flash.display.Sprite;
    import flash.display.SimpleButton;
    import flash.events.MouseEvent;
    import flash.events.EventDispatcher;
    import flash.events.Event;

    import foundationAS.ch07.MediaControlEvent;

    public class ButtonManager extends Sprite {
        private var butRW:SimpleButton;
        private var butPlay:SimpleButton;
        private var butPause:SimpleButton;
        private var butStop:SimpleButton;
        private var butFF:SimpleButton;
        private var pauseOn:Boolean = false;

        public function ButtonManager(butRW:SimpleButton, butPlay:SimpleButton,
                butPause:SimpleButton, butStop:SimpleButton,
                butFF:SimpleButton):void {
            this.butRW = butRW;
            this.butPlay = butPlay;
            this.butPause = butPause;
            this.butStop = butStop;
            this.butFF = butFF;

            // Add button listeners
            butRW.addEventListener(MouseEvent.MOUSE_DOWN, doRewind);
            butRW.addEventListener(MouseEvent.CLICK, stopRewind);
            butPlay.addEventListener(MouseEvent.MOUSE_DOWN, doPlay);
            butPause.addEventListener(MouseEvent.MOUSE_DOWN, doPause);
            butStop.addEventListener(MouseEvent.MOUSE_DOWN, doStop);
            butFF.addEventListener(MouseEvent.MOUSE_DOWN, doFastForward);
            butFF.addEventListener(MouseEvent.CLICK, stopFastForward);
            butPlay.enabled = false;
        }

        // This function adds any external objects to the listener list for
        // the mediaControl event
        public function addMediaControlListener(funcObj:Function):void {
            addEventListener(MediaControlEvent.CONTROL_TYPE, funcObj);
        }

        private function doRewind(evnt:MouseEvent):void {
            dispatchEvent(new MediaControlEvent("RW"));
        }

        private function stopRewind(evnt:MouseEvent):void {
            dispatchEvent(new MediaControlEvent("RWEND"));
        }

        private function doPlay(event:MouseEvent):void {
            butPlay.enabled = false;
            butPause.enabled = true;
            butRW.enabled = true;
            butFF.enabled = true;
            butStop.enabled = true;
            dispatchEvent(new MediaControlEvent("PLAY"));
        }

        private function doPause(event:MouseEvent):void {
            if (pauseOn) {
                butRW.enabled = true;
                butFF.enabled = true;
                butStop.enabled = true;
                pauseOn = false;
            } else {
                butRW.enabled = false;
                butFF.enabled = false;
                butStop.enabled = false;
                pauseOn = true;
            }
            dispatchEvent(new MediaControlEvent("PAUSE"));
        }

        private function doStop(event:MouseEvent):void {
            butPlay.enabled = true;
            butPause.enabled = false;
            butRW.enabled = false;
            butFF.enabled = false;
            butStop.enabled = false;
            dispatchEvent(new MediaControlEvent("STOP"));
        }

        private function doFastForward(event:MouseEvent):void {
            dispatchEvent(new MediaControlEvent("FF"));
        }

        private function stopFastForward(event:MouseEvent):void {
            dispatchEvent(new MediaControlEvent("FFEND"));
        }
    }
}

Finishing the Videos.as Class

Open the Videos.as class file and add the following function to it to handle the MediaControlEvent.CONTROL_TYPE events:

public function onControlCommand(evt:MediaControlEvent):void {
    switch (evt.command) {
        //---- PAUSE ----
        case "PAUSE":
            nsVideoPlayer.togglePause();
            txtStatus.text = (txtStatus.text == "Playing") ? "Paused" : "Playing";
            break;
        //---- PLAY ----
        case "PLAY":
            nsVideoPlayer.play(flvTarget);
            break;
        //---- STOP ----
        case "STOP":
            nsVideoPlayer.seek(0);
            nsVideoPlayer.pause();
            txtStatus.text = "Stopped";
            break;
        //---- RW ----
        case "RW":
            nsVideoPlayer.pause();
            timerRW.start();
            txtStatus.text = "Rewind";
            break;
        //---- RW END ----
        case "RWEND":
            nsVideoPlayer.resume();
            timerRW.stop();
            txtStatus.text = "Playing";
            break;
        //---- FF ----
        case "FF":
            timerFF.start();
            txtStatus.text = "Fast Forward";
            break;
        //---- FF END ----
        case "FFEND":
            timerFF.stop();
            txtStatus.text = "Playing";
            break;
    }
}

You use a single switch/case statement to deal with each command. As previously mentioned, dispatching your own custom event allows you to send extra parameters in the dispatched object, and here you interrogate it for a variable called command. This is a String containing the type of command a particular button fired off (such as STOP, RW, FF, or FFEND). When a case matches, the handler sets the status text field to reflect the change and executes the appropriate NetStream function.

The PAUSE case needs some special handling. In ActionScript 2.0, you used the NetStream.pause() function with a Boolean parameter (true or false) to pause or resume playing. In ActionScript 3.0, the pause() command still pauses, but it does not resume play if called again, and it no longer accepts a Boolean parameter. ActionScript 3.0 instead provides togglePause() and resume() functions, and for this example, you need only togglePause(). This doesn’t, however, fire any event or give any indication of what state it’s in, so you need some logic to determine what the status text field should show based on whether the video has been toggled to paused or resumed. This can be done by checking the status text field’s current text each time the pause button is clicked and toggling it accordingly, using the ternary conditional operator:

txtStatus.text = (txtStatus.text == "Playing") ? "Paused" : "Playing";

The fast-forward (FF) and fast-forward end (FFEND) events, along with their rewind counterparts (RW and RWEND), also require special consideration. They need to continue firing for as long as the FF or RW button is held down. To support this functionality, you use a pair of Timer class instances. Add the following timer code to the constructor of the Videos.as class file:

timerFF = new Timer(100, 0);
timerFF.addEventListener(TimerEvent.TIMER, this.runFF);
...
timerRW = new Timer(100, 0);
timerRW.addEventListener(TimerEvent.TIMER, this.runRW);

You may notice that these Timer instances are not told to start yet. That is because they should start when the appropriate button is clicked, and this functionality will be dealt with by the onControlCommand event handler. If the case is RW or FF, the appropriate timer gets started, which in turn calls the timer event handlers. Add the following FF and RW timer handler functions to the bottom of the Videos.as class file:

private function runFF(event:TimerEvent):void {
    headPos = Number(Math.floor(nsVideoPlayer.time) + seekRate);
    nsVideoPlayer.seek(headPos);
}

private function runRW(event:TimerEvent):void {
    headPos = Number(Math.floor(nsVideoPlayer.time) - seekRate);
    nsVideoPlayer.seek(headPos);
}

Basically, the runFF() function adds the seekRate amount to the current playhead time to set the headPos value, and then seeks to that position in order to fast-forward the playhead. The runRW() function subtracts the same amount in order to rewind the playhead. When the FF or RW button is released, the case changes to FFEND or RWEND, the timers are stopped, and the status text is changed to reflect this.

This example uses seek() to fast-forward and rewind through a video. However, H.264-encoded video does not seek in the same way as FLV-encoded video. There is a parameter in the metadata of an H.264-encoded file called seekpoint, which is an array of saved seekpoints. You can seek directly to these time points, provided that part of the video has downloaded when you try. However, a large number of encoded files do not have this information embedded and are thus not seekable (that is, you cannot fast-forward or rewind through them in this way). This is a limitation of using H.264-based video at this time; however, I have no doubt that this issue will be addressed very soon.
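As a sketch of how such seekpoint data could be used, suppose the metadata exposed the seekpoint times as a plain array of seconds (the exact shape varies by encoder, and this helper and its name are illustrative only, written in plain JavaScript):

```javascript
// Hypothetical helper: given seekpoint times from an H.264 file's metadata
// (in seconds, sorted ascending), find the nearest seekpoint at or before
// the requested time -- the position a seek would actually land on.
function nearestSeekpoint(seekpoints, target) {
  var best = 0;
  for (var i = 0; i < seekpoints.length; i++) {
    if (seekpoints[i] <= target) {
      best = seekpoints[i];
    } else {
      break; // the list is sorted, so no later point can qualify
    }
  }
  return best;
}
// nearestSeekpoint([0, 4.2, 8.5, 12.7], 10) returns 8.5
```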

So finally, you are also finished with the Videos.as file, which looks like this:

package foundationAS.ch07 {
    import flash.net.*;
    import flash.media.Video;
    import flash.display.MovieClip;
    import flash.events.TimerEvent;
    import flash.events.NetStatusEvent;
    import flash.utils.Timer;
    import flash.text.TextField;

    import foundationAS.ch07.MediaControlEvent;

    public class Videos extends MovieClip {
        private var vid1:Video;
        private var ncVideoPlayer:NetConnection;
        private var nsVideoPlayer:NetStream;
        private var flvTarget:String;
        private var vidDuration:Number;
        private var trackLength:int;
        private var timerLoading:Timer;
        private var timerPlayHead:Timer;
        private var timerFF:Timer;
        private var timerRW:Timer;
        private var txtStatus:TextField;
        private var txtTrackLength:TextField;
        private var txtHeadPosition:TextField;
        private var bytLoaded:int;
        private var bytTotal:int;
        private var opct:int;
        private var movScrubber:MovieClip;
        private var ns_minutes:Number;
        private var ns_seconds:Number;
        private var seekRate:Number = 3;
        private var headPos:Number;

        // CONSTRUCTOR
        public function Videos(movScrubber:MovieClip, txtStatus:TextField,
                txtHeadPosition:TextField, txtTrackLength:TextField):void {
            // Set movies and text fields to local references and to start
            // positions and contents
            this.movScrubber = movScrubber;
            this.txtStatus = txtStatus;
            this.txtHeadPosition = txtHeadPosition;
            this.txtTrackLength = txtTrackLength;
            movScrubber.movLoaderBar.width = 1;
            movScrubber.movHead.alpha = 0;
            txtStatus.text = "AWAITING LOCATION";

            // Instantiate vars and connect NC
            ncVideoPlayer = new NetConnection();
            ncVideoPlayer.connect(null);
            nsVideoPlayer = new NetStream(ncVideoPlayer);
            nsVideoPlayer.bufferTime = 3;
            flvTarget = "video_final.flv";

            // Add event listeners and handlers
            nsVideoPlayer.addEventListener(NetStatusEvent.NET_STATUS, nsOnStatus);

            // Instantiate display objects
            vid1 = new Video();

            // Create a metadata and cue point event handling object
            var objTempClient:Object = new Object();
            objTempClient.onMetaData = mdHandler;
            objTempClient.onCuePoint = cpHandler;
            nsVideoPlayer.client = objTempClient;

            // Add Timers
            timerLoading = new Timer(10, 0);
            timerLoading.addEventListener(TimerEvent.TIMER, this.onLoading);
            timerLoading.start();
            timerPlayHead = new Timer(100, 0);
            timerPlayHead.addEventListener(TimerEvent.TIMER, this.headPosition);
            timerPlayHead.start();
            timerFF = new Timer(100, 0);
            timerFF.addEventListener(TimerEvent.TIMER, this.runFF);
            timerRW = new Timer(100, 0);
            timerRW.addEventListener(TimerEvent.TIMER, this.runRW);

            loadFLV();
        }

        // Load FLV source
        public function loadFLV():void {
            addChild(vid1);
            vid1.x = 166;
            vid1.y = 77;
            vid1.width = 490;
            vid1.height = 365;
            vid1.attachNetStream(nsVideoPlayer);
            nsVideoPlayer.play(flvTarget);
        }

        //------------- FLV's metadata ------------------------------
        private function mdHandler(obj:Object):void {
            for (var x:String in obj) {
                trace("METADATA " + x + " is " + obj[x]);
                // If this is the duration, format it and display it
                if (x == "duration") {
                    trackLength = obj[x];
                    var tlMinutes:int = trackLength / 60;
                    if (tlMinutes < 1) {
                        tlMinutes = 0;
                    }
                    var tlSeconds:int = trackLength % 60;
                    if (tlSeconds < 10) {
                        txtTrackLength.text = tlMinutes.toString() + ":0" + tlSeconds.toString();
                    } else {
                        txtTrackLength.text = tlMinutes.toString() + ":" + tlSeconds.toString();
                    }
                }
            }
        }

        //------------- FLV's cue points ------------------------------
        private function cpHandler(obj:Object):void {
            for (var c:String in obj) {
                trace("CUEPOINT " + c + " is " + obj[c]);
                if (c == "parameters") {
                    for (var p:String in obj[c]) {
                        trace("  PARAMETER " + p + " is " + obj[c][p]);
                    }
                }
            }
        }

        //--------------- ON STATUS LISTENER --------------------------
        public function nsOnStatus(infoObject:NetStatusEvent):void {
            for (var prop:String in infoObject.info) {
                trace("\t" + prop + ":\t" + infoObject.info[prop]);
                // If end of video is found, then stop the movHeadSlider moving.
                if (prop == "code" && infoObject.info[prop] == "NetStream.Play.Stop") {
                    txtStatus.text = "Stopped";
                } else if (prop == "code" && infoObject.info[prop] == "NetStream.Play.Start") {
                    txtStatus.text = "Playing";
                    movScrubber.movHead.alpha = 1;
                }
            }
        }

        //------------------ HEAD POSITION & COUNT --------------------
        private function headPosition(event:TimerEvent):void {
            // Set Head movie clip to correct width but don't run till we get the
            // track length from the metadata
            if (trackLength > 0) {
                movScrubber.movHead.width = (nsVideoPlayer.time / (trackLength / 100)) * 4;
            }
            // Set timer display text field
            ns_minutes = int(nsVideoPlayer.time / 60);
            ns_seconds = int(nsVideoPlayer.time % 60);
            if (ns_seconds < 10) {
                txtHeadPosition.text = ns_minutes.toString() + ":0" + ns_seconds.toString();
            } else {
                txtHeadPosition.text = ns_minutes.toString() + ":" + ns_seconds.toString();
            }
        }

        //------------------- FILE LOADER -----------------------------
        // --- Load bar calculations & text field settings ----------
        private function onLoading(event:TimerEvent):void {
            bytLoaded = nsVideoPlayer.bytesLoaded;
            bytTotal = nsVideoPlayer.bytesTotal;
            opct = (nsVideoPlayer.bytesTotal / 100);
            movScrubber.movLoaderBar.width = (Math.floor(bytLoaded / opct)) * 4;
            if (bytLoaded == bytTotal) {
                timerLoading.stop();
            }
        }

        //----------------- CONTROL BUTTONS ---------------------------
        public function onControlCommand(evt:MediaControlEvent):void {
            switch (evt.command) {
                //---- PAUSE ----
                case "PAUSE":
                    nsVideoPlayer.togglePause();
                    txtStatus.text = (txtStatus.text == "Playing") ? "Paused" : "Playing";
                    break;
                //---- PLAY ----
                case "PLAY":
                    nsVideoPlayer.play(flvTarget);
                    break;
                //---- STOP ----
                case "STOP":
                    nsVideoPlayer.seek(0);
                    nsVideoPlayer.pause();
                    txtStatus.text = "Stopped";
                    break;
                //---- RW ----
                case "RW":
                    nsVideoPlayer.pause();
                    timerRW.start();
                    txtStatus.text = "Rewind";
                    break;
                //---- RW END ----
                case "RWEND":
                    nsVideoPlayer.resume();
                    timerRW.stop();
                    txtStatus.text = "Playing";
                    break;
                //---- FF ----
                case "FF":
                    timerFF.start();
                    txtStatus.text = "Fast Forward";
                    break;
                //---- FF END ----
                case "FFEND":
                    timerFF.stop();
                    txtStatus.text = "Playing";
                    break;
            }
        }

        private function runFF(event:TimerEvent):void {
            headPos = Number(Math.floor(nsVideoPlayer.time) + seekRate);
            nsVideoPlayer.seek(headPos);
        }

        private function runRW(event:TimerEvent):void {
            headPos = Number(Math.floor(nsVideoPlayer.time) - seekRate);
            nsVideoPlayer.seek(headPos);
        }
    }
}

Creating a Custom Event

Finally, let’s code the custom event class. Don’t panic; you’ll be amazed how short and simple a custom event is to create.

Open a new ActionScript file and save it in foundationAS.ch07 as MediaControlEvent.as. Now put this code inside it:

package foundationAS.ch07 {
    import flash.events.Event;

    public class MediaControlEvent extends Event {
        public static const CONTROL_TYPE:String = "headControl";
        public var command:String;

        public function MediaControlEvent(command:String) {
            super(CONTROL_TYPE);
            this.command = command;
        }
    }
}

This class simply needs to extend the Event class. You add a static constant String variable to identify the event type when you interrogate the returned event object. In this case, you want it to identify itself as type headControl. Then you add as many variables as you want to be able to pass to it and get from it when the event is fired. In this case, you just want to set up a String variable called command. (Remember that you interrogated the returned event object in the MediaControlEvent event handler for the command variable in order to determine which command button was clicked.)

As you saw earlier, a MediaControlEvent instance is created on the fly when you dispatch the event in the ButtonManager.as class, like this:

dispatchEvent(new MediaControlEvent("button command"));

And that’s it. You’re finished! Save all your classes and publish your FLA. You’ll have a working video player. If you find it doesn’t work and you want to see the working version before you track down your bugs, just check it against the complete code you downloaded for this book.

Summary

This chapter covered the basics of video—enough to begin to use it in your own projects. An entire book could be written on the subject, so I’ve concentrated on the essentials. You’ve learned how to do the following:

· Load a video or access the camera

· Encode your videos

· Monitor and report on video load and play status

· Read metadata

· Create and read cue point data

· Control video loading and playback

You can experiment with the video player you built in this chapter and see what else you can do with it. For example, you could add filter effects, have multiple sources, or look at live streaming and recording.

We’ve done video, so now let’s trip into sound. Onward!