User Interfaces - Game Programming Algorithms and Techniques: A Platform-Agnostic Approach (2014)


Chapter 10. User Interfaces

A game typically has two main components to its user interface: a menu system and an in-game heads-up display (HUD). The menu system defines how the player gets in and out of the game—including changing modes, pausing the game, selecting options, and so on. Some games (especially RPGs) may also have menus to manage an inventory or upgrade skills.

The HUD includes any elements that give additional information to the player as he or she is actually playing the game. This can include a radar, ammo count, compass, and an aiming reticule. Although not all games will have a HUD (and some might have the option to disable the HUD), the vast majority have at least a basic one.

Menu Systems

A well-implemented menu system should provide for flexibility in a lot of different ways—there should be no limit to the number of elements and distinct screens, and it should be easy to quickly add new submenus. At the same time, it must be implemented in a manner that centralizes as much common functionality as possible. This section discusses what must be taken into account to implement a solid menu system; many of the techniques discussed here are also utilized in the tower defense game in Chapter 14, “Sample Game: Tower Defense for PC/Mac.”

Menu Stack

The menu system for a typical console game might start with the platform-mandated “Press Start” screen. Once the user presses Start, he enters the main menu. Perhaps he can go into Options, which brings up an options menu, or maybe he can take a look at the credits or instructions on how to play. Typically, the player is also provided a way to exit the current menu and return to a previous one. This sort of traditional menu flow is illustrated in Figure 10.1.


Figure 10.1 Sample console game menu flow.

One way to ensure menu flow can always return to the base menu is by utilizing the stack data structure. The “top” element on the stack is the currently active menu item, and going to a new menu involves pushing that menu onto the stack. Returning to the previous menu involves popping the current menu off the stack. This can be further modified so that multiple menus can be visible at once—for example, a dialog box can appear on top of a particular menu if there’s a need to accept/reject a request. In order to do this, the menus would need to be rendered from the bottom to the top of the stack.

To store all the menus in one stack, you will need some sort of base class from which all menus derive. This base class may store information such as the title of the menu and a linked list of buttons (or sub-elements that the menu has). I used an approach similar to this to implement the menu system for Tony Hawk’s Project 8, and it’s also the approach that’s used for the overall UI in Chapter 14’s tower defense game. One aspect of this system that’s not quite a standard stack is the operation to allow the stack to be cleared and have a new element pushed on top. This is necessary when going from the main menu into gameplay, because you typically don’t want the main menu to still be visible while in gameplay.
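The stack-based flow described above can be sketched briefly. This is a minimal illustration, not the book's implementation; the `Menu` base class and method names are assumptions, and a real menu would store a title plus a list of buttons as described.

```python
class Menu:
    """Hypothetical base class all menus derive from."""
    def __init__(self, title):
        self.title = title

    def draw(self):
        # A real menu would draw its title and buttons here.
        pass

class MenuSystem:
    def __init__(self):
        self.stack = []

    def push(self, menu):
        # Entering a submenu: the new menu becomes the active (top) menu.
        self.stack.append(menu)

    def pop(self):
        # Returning to the previous menu.
        return self.stack.pop()

    def clear_and_push(self, menu):
        # The non-standard stack operation: wipe the whole menu flow and
        # start fresh (e.g., when leaving the main menu to enter gameplay).
        self.stack.clear()
        self.stack.append(menu)

    def draw_all(self):
        # Render from the bottom to the top of the stack, so a dialog box
        # pushed on top appears over the menu beneath it.
        for menu in self.stack:
            menu.draw()
```

A dialog box is then just another `Menu` pushed onto the stack, and dismissing it is a pop.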

Buttons

Almost every menu system has buttons that the user can interact with. For a PC or console game, there need to be two visual states for a button: unselected and selected. This way, the currently selected button is clear to the user. The simplest way to signify that a button is selected is to change its color, but sometimes the texture might change or the size of the button might increase. Some PC/console menus also use a third state to denote when a button is being pressed, but this third state is not required by any means. On a touch device, however, a common approach is to have only the default (unselected) state and a pressed state. This way, when the user taps the button, it changes to a different visual state.

If the menu can only be navigated with a keyboard or controller, supporting buttons is fairly straightforward. A particular menu screen could have a doubly linked list of buttons, and when the user presses the appropriate keys (such as up and down), the system can unselect the current button and select the next one. Because the buttons are in a doubly linked list, it’s easy to go back or forward, and also wrap around if the user goes past the first or last element in the list.

Usually when a button is pressed, the user expects something to happen. One way to abstractly support this is to have a member variable in the button class that’s a function. This will vary based on the language, but it could be an action (as in C#), a function pointer (as in C), or a lambda expression (as in C++). This way, when a new button is created, you can simply associate it with the correct function and that function will execute when the button is pressed.
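A rough sketch of these two ideas follows; the class and method names here are illustrative, not from the book. In Python the "function member variable" is simply a callable, and modulo arithmetic over a plain list gives the same wraparound behavior a circular doubly linked list would.

```python
class Button:
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click   # function executed when pressed
        self.selected = False

class ButtonList:
    def __init__(self, buttons):
        self.buttons = buttons
        self.index = 0
        self.buttons[0].selected = True

    def _move(self, step):
        self.buttons[self.index].selected = False
        # Wrap around past the first or last element.
        self.index = (self.index + step) % len(self.buttons)
        self.buttons[self.index].selected = True

    def select_next(self):
        self._move(1)

    def select_previous(self):
        self._move(-1)

    def press_selected(self):
        # Execute whatever function was associated with this button.
        self.buttons[self.index].on_click()
```

Creating a button is then just a matter of pairing a label with the function that should run when it's pressed.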

If the game also supports menu navigation via the mouse, a little bit of complexity has to be added to the system. Each button needs to have a hot zone, or 2D bounding box that represents where the mouse can select a particular button. So as the user moves the mouse around the menu, the code needs to check whether the new position of the mouse intersects with the hot zone of any buttons. This approach is used for menu navigation in Chapter 14’s game, and is illustrated in Figure 10.2.


Figure 10.2 A debug mode shows the hot zones of the main menu buttons in Chapter 14’s game.

It’s also possible to allow the user to seamlessly switch between mouse and keyboard navigation of the menus. One common way to accomplish this is to hide the mouse cursor (and ignore the mouse position) when the user presses a keyboard navigation button. Then, as soon as the mouse is moved again, mouse selection is activated once more. This type of system is further extended by games that support a keyboard/mouse as well as a controller. In these cases, it might also be designed so that the menu navigation tips change based on whether the controller or the keyboard/mouse is active.

Typing

A typical usage case for a computer game is to allow the player to type in one or more words while in a menu. This might be for the purposes of a high score list, or maybe so the player can type in a filename to save/load. When a traditional program supports typing in words, it’s usually done via standard input, but as previously covered in Chapter 5, “Input,” standard input typically is not available for a game.

The first step to allow typing is to start with an empty string. Every time the player types in a letter, we can then append the appropriate character to said string. But in order to do this, we need to determine the correct character. Recall that Chapter 5 also discussed the idea of virtual keys (how every key on the keyboard corresponds to an index in an enum). So, for example, K_A might correspond to the “A” key on the keyboard. Conveniently, in a typical system, the letter virtual keys are sequential within the enum. This means that K_B would be one index after K_A, K_C would be one index after K_B, and so on. It just so happens that the ASCII character “B” is also sequentially after the ASCII character “A.” Taking advantage of this parallel allows us to implement a function that converts from a letter key code to a particular character:

function KeyCodeToChar(int keyCode)
    // Make sure this is a letter key
    if keyCode >= K_A && keyCode <= K_Z
        // For now, assume upper case.
        // Depending on language, may have to cast to a char
        return ('A' + (char)(keyCode - K_A))
    else if keyCode == K_SPACE
        return ' '
    else
        return ''
    end
end

Let’s test out this code with a couple of examples. If keyCode is K_A, the result of the subtraction is 0, which means the letter returned is simply “A.” If instead keyCode is K_C, the subtraction yields 2, which means the function returns 'A' + 2, or the letter “C.” These examples confirm that the function gives the results we expect.

Once we’ve implemented this conversion function, the only other code that needs to be written is the code that checks whether any key from K_A to K_Z was “just pressed.” If this happens, we convert that key to its appropriate character and append it to our string. If we wanted to, we could also further extend KeyCodeToChar to support upper- and lowercase by defaulting to lowercase letters, and only switching to uppercase letters if the Shift key is also down at the time.
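The conversion plus the Shift extension can be sketched as follows. The K_A/K_Z/K_SPACE constants here are stand-ins for the virtual-key enum from Chapter 5; their actual values are platform-specific, so the ones below are assumptions.

```python
# Assumed virtual-key values; a real input system defines these.
K_A = 65
K_Z = K_A + 25
K_SPACE = 32

def key_code_to_char(key_code, shift_down=False):
    # Letter keys are sequential, paralleling the ASCII alphabet,
    # so a single subtraction gives the letter's offset from 'a'.
    if K_A <= key_code <= K_Z:
        ch = chr(ord('a') + (key_code - K_A))
        return ch.upper() if shift_down else ch
    elif key_code == K_SPACE:
        return ' '
    return ''

def append_typed_key(text, key_code, shift_down=False):
    # Called once per "just pressed" key event; builds up the string.
    return text + key_code_to_char(key_code, shift_down)
```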

HUD Elements

The most basic HUD (heads-up display) is one that displays elements such as the player’s score and number of lives remaining. This type of HUD is relatively trivial to implement—once the main game world has been rendered, we only need to draw on top of it some text or icons that convey the appropriate information. But certain games utilize more complex types of elements, including waypoint arrows, radars, compasses, and aiming reticules. This section takes a look at how some of these elements might be implemented.

Waypoint Arrow

A waypoint arrow is designed to point the player to the next objective. One of the easiest ways to implement such an arrow is to use an actual 3D arrow object that is placed at a set location onscreen. Then as the player moves around in the world, this 3D arrow can rotate to point in the direction that the player needs to travel. This type of waypoint arrow has been used in games such as Crazy Taxi, and an illustration of such an arrow is shown in Figure 10.3.


Figure 10.3 A driving game where a waypoint arrow instructs the player to turn left.

The first step in implementing this type of waypoint arrow is to create an arrow model that points straight forward when no rotation is applied to it. Then there are three parameters to keep track of during gameplay: a vector that represents the facing of the arrow, the screen space location of the arrow, and the current waypoint the arrow should be tracking.

The facing vector should first be initialized to the axis that travels into the screen. In a traditional left-handed coordinate system, this will be the +z-axis. The screen space location of the arrow corresponds to where we want the arrow to be onscreen. Because it’s a screen space position, it will be an (x, y) coordinate that we need to convert into a 3D world space coordinate for the purposes of rendering the 3D arrow in the correct spot. In order to do this, we will need to use an unprojection, much like in the “Picking” section of Chapter 8, “Cameras.” Finally, the waypoint position is the target at which the arrow will point.

In order to update the waypoint arrow, in every frame we must construct a vector from the player’s position to the target; once normalized, this will give us the direction the arrow should be facing. Then it’s a matter of using the dot product and cross product to determine the angle and axis of rotation between the original (straight into the screen) facing vector and the new facing vector. The rotation can then be performed with a quaternion, using a lerp if we want the arrow to more smoothly rotate to the target. Figure 10.4 shows the basic calculations that must be done to update the waypoint arrow.


Figure 10.4 Waypoint arrow calculations, assuming old and new are normalized.

An implementation of this type of waypoint arrow is provided in Listing 10.1. There are a couple of things to keep in mind with this implementation. First of all, it assumes that the camera is updated prior to the waypoint arrow being updated. That’s because it’s the only way to really guarantee that the waypoint arrow is always at the same spot on the screen. Otherwise, when the camera changes, the waypoint arrow will always be one frame behind.

Furthermore, the waypoint arrow must be rendered with z-buffering disabled and after all the other 3D objects in the world have been rendered. This ensures that the arrow is always visible, even if there is a 3D object that should technically be in front of it.

Listing 10.1 Waypoint Arrow


class WaypointArrow
    // Stores the current facing of the arrow
    Vector3 facing
    // 2D position of the arrow on screen
    Vector2 screenPosition
    // Current waypoint the arrow points at
    Vector3 waypoint
    // World transform matrix used to render the arrow
    Matrix worldTransform

    // Computes world transform matrix from given position/rotation
    function ComputeMatrix(Vector3 worldPosition, Quaternion rotation)
        // Scale, rotate, translate (but we don't have a scale this time)
        worldTransform = CreateFromQuaternion(rotation) *
                         CreateTranslation(worldPosition)
    end

    // Gets world position of the 3D arrow based on screenPosition
    function ComputeWorldPosition()
        // In order to do the unprojection, we need a 3D vector.
        // The z component is the percent between the near and far plane.
        // In this case, I select a point 10% between the two (z=0.1).
        Vector3 unprojectPos = Vector3(screenPosition.x,
                                       screenPosition.y, 0.1)

        // Grab the camera and projection matrices
        ...

        // Call Unproject function from Chapter 8
        return Unproject(unprojectPos, cameraMatrix, projectionMatrix)
    end

    function Initialize(Vector2 myScreenPos, Vector3 myWaypoint)
        screenPosition = myScreenPos
        // For left-handed coordinate system with Y up
        facing = Vector3(0, 0, 1)
        SetNewWaypoint(myWaypoint)

        // Initialize the world transform matrix
        ComputeMatrix(ComputeWorldPosition(), Quaternion.Identity)
    end

    function SetNewWaypoint(Vector3 myWaypoint)
        waypoint = myWaypoint
    end

    function Update(float deltaTime)
        // Get the current world position of the arrow
        Vector3 worldPos = ComputeWorldPosition()

        // Grab player position
        ...
        // The new facing of the arrow is the normalized vector
        // from the player's position to the waypoint.
        facing = waypoint - playerPosition
        facing.Normalize()

        // Use the dot product to get the angle between the original
        // facing (0, 0, 1) and the new one
        float angle = acos(DotProduct(Vector3(0, 0, 1), facing))
        // Use the cross product to get the axis of rotation
        Vector3 axis = CrossProduct(Vector3(0, 0, 1), facing)
        Quaternion quat
        // If the magnitude is 0, it means they are parallel,
        // which means no rotation should occur.
        if axis.Length() < 0.01f
            quat = Quaternion.Identity
        else
            // Compute the quaternion representing this rotation
            axis.Normalize()
            quat = CreateFromAxisAngle(axis, angle)
        end

        // Now set the final world transform of the arrow
        ComputeMatrix(worldPos, quat)
    end
end


Aiming Reticule

An aiming reticule is a standard HUD element used in most first- and third-person games that have ranged combat. It allows the player to know where he is aiming as well as to acquire further information regarding the target (such as whether the target is a friend or foe). Whether the reticule is a traditional crosshair or more circular, the implementation ends up being roughly the same. In fact, the implementation is extremely similar to mouse picking as discussed in Chapter 8.

As with a mouse cursor, with an aiming reticule there will be a 2D position on the screen. We take this 2D position and perform two unprojections: one at the near plane and one at the far plane. Given these two points, we can perform a ray cast from the near plane point to the far plane point. We can then use the physics calculations discussed in Chapter 7, “Physics,” to generate a list of all the objects the ray cast intersects with. From the resultant list, we want to select the first object that the ray cast intersects with. In a simple game, it may be the case that the first object the ray cast intersects with is also the one whose position is closest to the near plane point. But with larger objects, that may not always be the case. So for more complex games, we might need to actually compare the points of intersection between the ray and each object.
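The "pick the first intersection, not the closest object center" distinction can be captured with a small helper. This is a sketch under assumptions: the hit list format and names are hypothetical, standing in for whatever the physics system's ray-cast query returns.

```python
import math

def closest_hit(ray_origin, hits):
    # hits is an assumed list of (object_name, intersection_point)
    # pairs from a ray cast. We want the intersection point nearest
    # the ray's origin (the near plane point), because with large
    # objects the nearest intersection may not belong to the object
    # whose center is closest.
    if not hits:
        return None
    return min(hits, key=lambda h: math.dist(ray_origin, h[1]))[0]
```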

Once we know which object the aiming ray intersects with, we can then check to see whether it’s a friendly, foe, or an object that can’t be targeted. This then determines what color the reticule should be rendered in—most games use green to signify a friendly, red to signify a foe, and white to signify that the object is not a target (though this coloring scheme is not particularly usable for a player who happens to be red-green color blind). Figure 10.5 illustrates a couple of different scenarios with an aiming reticule in a hypothetical game.


Figure 10.5 An aiming reticule changes colors based on the target.

Radar

Some games have a radar that displays nearby enemies (or friendlies) who are within a certain radius of the player. There are a couple of different radar variations; in one variation, anyone within the proper radius will show up on the radar. In another variation, enemies only show up on the radar if they have recently fired a weapon. Regardless of the variation, however, the implementation ends up being roughly the same. A sample screenshot of a game with a radar is shown in Figure 10.6.


Figure 10.6 Radar (in the top right corner) in Unreal Tournament 3.

Two main things must be done in order to get a radar to work. First, we need a way to iterate through all of the objects that could show up on the radar, and check whether or not they are within the range of the radar. Then any objects that are within range of the radar must be converted into the appropriate 2D offset from the center of the radar in the UI. Both when calculating the distance and converting into a 2D offset, we want to ignore the height component. This means we are essentially projecting the radar objects onto the plane of the radar, which sounds harder than it actually is in practice.

But before we do these calculations, it’s worthwhile to declare a struct for a radar blip, or the actual dot that represents an object on the radar. This way, it would be possible to have different size and color blips depending on the target in question.

struct RadarBlip
    // The color of the radar blip
    Color color = Color.Red
    // The 2D position of the radar blip
    Vector2 position
    // Scale of the radar blip
    float scale = 1.0f
end

We could also inherit from RadarBlip to create all sorts of different types of blips for different enemies if we so desired. But that’s more of an aesthetic decision, and it’s irrelevant to the calculations behind implementing the radar.

For the actual radar class, there are two main parameters to set: the maximum distance an object can be detected in the game world, and the radius of the radar that’s displayed onscreen. With these two parameters, once we have a blip position it is possible to convert it into the correct location on the screen.

Suppose a game has a radar that can detect objects 50 units away. Now imagine there is an object 25 units straight in front of the player. Because the object positions are in 3D, we need to first convert both the position of the player and the object in question into 2D coordinates for the purposes of the radar. If the world is y-up, this means the radar’s plane is the xz-plane. So to project onto the plane, we can ignore the y-component because it represents the height. In other words, we want to take a 3D point that’s (x, y, z) and map it instead to a 2D point that’s essentially (x, z), though a 2D point’s second component will always be referred to as “y” in the actual code.

Once both the player’s and the object’s positions are converted into 2D coordinates projected onto the plane of the radar, we can construct a vector from the player to the object, which for clarity we’ll refer to as vector v. Although v can be used to determine whether or not an object is within range of the radar (by computing the length of v), there is one problem with this vector. Most game radars rotate as the player rotates, such that the topmost point on the radar (90° on a unit circle) always corresponds to the direction the player is facing. So in order to allow for such functionality, v must be rotated depending on the player’s facing vector. This issue is illustrated in Figure 10.7.


Figure 10.7 A top-down view of the player facing to the east (a), so east should correspond to the topmost point on the radar (b).

To solve this problem, we can take the normalized facing vector of the player and project it, too, onto the plane of the radar. If we then take the dot product between this projected vector and the “forward” vector of the radar onscreen, which will usually be ⟨0, 1⟩, we can determine the angle of rotation. Then, much as in the “Rotating a 2D Character” sample problem in Chapter 3, “Linear Algebra for Games,” we can convert both vectors into 3D vectors with a z-component of 0 and use the cross product to determine whether the rotation should be performed clockwise or counterclockwise. Remember that if the cross product between two converted 2D vectors yields a positive z-value, the rotation between the two vectors is counterclockwise.

Armed with the angle of rotation and whether the rotation should be clockwise or counterclockwise, we can use a 2D rotation matrix to rotate v to face the correct direction. A 2D rotation matrix only has one form, because the rotation is always about the z-axis. Assuming row-major vectors, the 2D rotation matrix is

| cos θ    sin θ |
| -sin θ   cos θ |

In this case, the θ we pass in will be either positive or negative, depending on whether the rotation should be counterclockwise (positive) or clockwise (negative), which we know based on the cross product result.
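Multiplying the row vector (x, y) by this matrix gives the rotated components directly, which is what a Rotate2D helper would compute. A minimal sketch:

```python
import math

def rotate_2d(v, theta):
    # Rotate the 2D vector v by theta radians about the z-axis.
    # With row-major vectors, (x, y) times the rotation matrix gives
    # (x*cos θ - y*sin θ, x*sin θ + y*cos θ); positive theta rotates
    # counterclockwise.
    x, y = v
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)
```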

Returning to our example, if the vector v is the vector from the player to an object 25 units in front of the player, after the rotation is applied we should end up with v = ⟨0, 25⟩. Once we have this 2D vector, we then divide each component of it by the maximum distance of the radar to get the vector that represents where the blip would be positioned if the radar were a unit circle—so in this case, ⟨0, 25⟩/50 = ⟨0, 0.5⟩. We then take the radius of the radar on the screen and multiply each component of our 2D vector by it. So if the radar onscreen had a radius of 200 pixels, we would end up with the 2D vector ⟨0, 100⟩, which represents the offset from the center of the radar to where the blip should be positioned. Figure 10.8 shows what the radar would look like for our example of an enemy 25 units directly in front of the player.


Figure 10.8 An enemy 25 units directly in front on a radar with a max range of 50.

These calculations can then be applied to every object that’s within range in order to determine which blips to draw. A full implementation of a radar is provided in Listing 10.2.

Listing 10.2 Radar System


class Radar
    // Range of radar in game world units
    float range
    // (x,y) center of radar on screen
    Vector2 position
    // Radius of radar on screen
    float radius
    // Radar background image
    ImageFile radarImage
    // List of all active RadarBlips
    List blips

    // Initialize function sets range, center, radius, and image
    ...

    function Update(float deltaTime)
        // Clear out the List of blips from last frame
        blips.Clear()
        // Get the player's position
        ...
        // Convert playerPosition to a 2D coordinate.
        // The below assumes a y-up world.
        Vector2 playerPos2D = Vector2(playerPosition.x, playerPosition.z)

        // Calculate the rotation that may need to be applied to the blip
        // Get the player's normalized facing vector
        ...
        // Convert playerFacing to 2D
        Vector2 playerFacing2D = Vector2(playerFacing.x, playerFacing.z)
        // Angle between the player's 2D facing and radar "forward"
        float angle = acos(DotProduct(playerFacing2D, Vector2(0, 1)))
        // Convert to a 3D vector so we can perform a cross product
        Vector3 playerFacing3D = Vector3(playerFacing2D.x,
                                         playerFacing2D.y, 0)
        // Use cross product to determine which way to rotate
        Vector3 crossResult = CrossProduct(playerFacing3D,
                                           Vector3(0, 1, 0))
        // Clockwise is -z, and it means the angle needs to be negative.
        if crossResult.z < 0
            angle *= -1
        end

        // Determine which enemies are in range
        foreach Enemy e in gameWorld
            // Convert Enemy's position to 2D coordinate
            Vector2 enemyPos2D = Vector2(e.position.x, e.position.z)
            // Construct vector from player to enemy
            Vector2 playerToEnemy = enemyPos2D - playerPos2D
            // Check the length, and see if it's within range
            if playerToEnemy.Length() <= range
                // Rotate playerToEnemy so it's oriented relative to
                // the player's facing (using a 2D rotation matrix).
                playerToEnemy = Rotate2D(playerToEnemy, angle)
                // Make a radar blip for this Enemy
                RadarBlip blip
                // Take the playerToEnemy vector and convert it to
                // offset from the center of the on-screen radar.
                blip.position = playerToEnemy
                blip.position /= range
                blip.position *= radius
                // Add blip to list of blips
                blips.Add(blip)
            end
        loop
    end

    function Draw(float deltaTime)
        // Draw radarImage
        ...

        foreach RadarBlip r in blips
            // Draw r at position + r.position, since the blip
            // contains the offset from the radar center.
            ...
        loop
    end
end


This radar could be improved in a few different ways. First of all, we may want to have the radar not only show enemies, but allies as well. In this case, we could just change the code so that it loops through both enemies and allies, setting the color of each RadarBlip as appropriate. Another common improvement for games that have a sense of verticality is to show different blips depending on whether the enemy is above, below, or on roughly the same level as the player. In order to support this, we can compare the height value of the player to the height value of the enemy that’s on the radar, and use that to determine which blip to show.

A further modification might be to only show blips for enemies that have fired their weapons recently, as in Call of Duty. In order to support this, every time an enemy fires, it might have a flag temporarily set that denotes when it is visible on the radar. So when iterating through the enemies for potential blips, we can also check whether or not the flag is set. If the flag isn’t set, the enemy is ignored for the purposes of the radar.
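One simple way to express that flag is as an expiry timestamp rather than a boolean, so it clears itself without extra bookkeeping. A sketch under assumptions (the class name, field names, and the 3-second window are all illustrative):

```python
RADAR_VISIBLE_DURATION = 3.0  # assumed window, in seconds

class RadarTarget:
    def __init__(self):
        # Time (in game seconds) until which this target shows up.
        self.radar_visible_until = 0.0

    def on_weapon_fired(self, current_time):
        # Firing stamps the target with a new expiry time.
        self.radar_visible_until = current_time + RADAR_VISIBLE_DURATION

    def visible_on_radar(self, current_time):
        # The radar's update loop checks this instead of a raw flag.
        return current_time < self.radar_visible_until
```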

Other UI Considerations

Other UI considerations include support for multiple resolutions, localizations, UI middleware, and the user experience.

Supporting Multiple Resolutions

For PC games, it’s rather common to have a wide range of screen resolutions—the most popular for new monitors is 1920×1080, but other common resolutions still see use, such as 1680×1050. This means that the total number of pixels the user interface has to play with can vary from monitor to monitor. But supporting multiple resolutions isn’t only in the realm of computer games. For example, both the Xbox 360 and PS3 require games to support traditional CRT televisions in addition to widescreen ones (though the Xbox One and PS4 do not support older televisions, so games for these platforms may not have to worry about more than one resolution).

One way to support multiple resolutions is to avoid using specific pixel locations, which are known as absolute coordinates. An example of an absolute coordinate would be (1900, 1000), or the precise pixel that a UI element is drawn at. The problem with using this type of coordinate is that if the monitor is only running at 1680×1050, a UI element at (1900, 1000) would be completely off the screen.

The solution to this type of problem is to instead use relative coordinates, or coordinates that are relative to something else. For example, if you want something in the bottom-right corner of the screen, you might place an element at (–100, –100) relative to the bottom-right corner. This means that the element would be placed at (1820, 980) on a 1920×1080 screen, whereas it would be placed at (1580, 950) on a 1680×1050 screen (as illustrated in Figure 10.9).


Figure 10.9 A UI element is positioned relative to the bottom-right corner of the screen.
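Resolving a corner-relative coordinate to an absolute pixel position is a one-line computation per axis. A minimal sketch, treating negative offsets as measured from the bottom-right corner as in the example above:

```python
def resolve_bottom_right(offset, resolution):
    # offset is relative to the bottom-right corner (negative values
    # move up and to the left); resolution is (width, height).
    ox, oy = offset
    width, height = resolution
    return (width + ox, height + oy)
```

The same element definition then lands in the visually equivalent spot on any screen size.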

Relative coordinates can be expressed relative to key points on the screen (usually the corners or the center of the screen), or even relative to other UI elements. Implementing this second approach is a bit more complex, and beyond the scope of this chapter.

One refinement might also be to scale the size of UI elements depending on the resolution. The reason for this is that at very high resolutions, the UI might just become too small to be usable. So at higher resolutions, the UI can be scaled up so it’s more easily seen by the player. Some MMORPGs even have a UI scale slider that allows the player to adjust the UI with a high degree of granularity. If scaling is to be supported, it’s doubly important that relative coordinates are utilized.

Localization

Although it may be acceptable for smaller games to support only one language (usually English), most commercial games typically need to support more than one language. Localization is the process of adding support for these additional languages. Because many menus and HUD elements have text, it’s important to take localization into account when designing user interface systems. Even if a game does not need to be localized, it’s a bad idea to have onscreen text hard-coded into source files because it makes it more difficult for nonprogrammers to modify. But if the game is to be localized, staying away from hard-coded text is particularly critical.

The simplest solution to the text localization problem is to have an external file that stores all of the text in the game. This external file could be in XML, JSON, or the like (a more in-depth discussion of different file formats can be found in the following chapter). This could then map to a simple dictionary that has a key identifying the particular string. So whenever the code needs to display text onscreen, it will request the text from the dictionary using the appropriate key. This means that instead of having a button hard-coded to display the text “Cancel,” it may instead query the dictionary for the string associated with “ui_cancel.” In this scenario, supporting a new language might only require creating a different file to populate the dictionary from. This style of approach is utilized by the tower defense game example in Chapter 14.
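A sketch of such a string table, here populated from JSON text; the class name and fallback behavior are assumptions, and in a real game the JSON would come from one external file per language.

```python
import json

class StringTable:
    def __init__(self, json_text):
        # Maps keys like "ui_cancel" to display strings.
        self.strings = json.loads(json_text)

    def get(self, key):
        # Fall back to the key itself so a missing entry shows up
        # onscreen as an obvious placeholder rather than crashing.
        return self.strings.get(key, key)

# Each language is just a different data file feeding the same code.
english = StringTable('{"ui_cancel": "Cancel", "ui_ok": "OK"}')
french = StringTable('{"ui_cancel": "Annuler", "ui_ok": "OK"}')
```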

Unfortunately, supporting languages that use different character sets can complicate the matter. The traditional ASCII character set (which is what’s used by the char type in most languages) can only support the English alphabet. There’s no support for accent marks, let alone any support for other writing systems such as Arabic and simplified Chinese.


Last-Minute Localization

On one project I worked on, all of the user interface was created using a scripting language. Throughout development, no one really considered localization. The result of this was that all of the text that was to be displayed onscreen was hard-coded into the scripts. When it was time to localize, no one wanted to manually go through and change all the strings in the script files into keys. So an idea was hatched: the English strings in the scripts were made into the keys for the localization dictionary, and then the values were set in external files for each language.

This worked, but there was one little problem: If someone changed a string in the script file, not knowing there was now a localization system, that string would no longer function properly. That’s because the localization file would not have the new English string listed in it. This caused some headaches on the project, and certainly is not a recommended approach for most games. It’s a cautionary tale as to why planning for localization is important, even if there are no immediate plans to localize a particular game.


Most games that need to support different writing systems end up using the Unicode character set. There are multiple ways to encode Unicode characters, but the most popular is arguably UTF-8, which is a variable-width encoding. Standard ASCII characters are only one byte in UTF-8, whereas other Unicode characters take two to four bytes. Due to this variable width, assumptions about the size of strings in memory cannot be made with UTF-8 because the size will vary from language to language.
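The divergence between character count and byte count is easy to demonstrate:

```python
def utf8_byte_length(text):
    # Number of bytes the string occupies once UTF-8 encoded.
    return len(text.encode("utf-8"))

ascii_word = "Cancel"   # ASCII only: one byte per character
accented = "café"       # "é" needs two bytes
chinese = "取消"        # "cancel" in simplified Chinese; three bytes each
```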

But changing the text strings and character encoding are not the only things to consider. One issue is that certain languages have lengthier words than others. One language that is commonly localized for is German; it turns out that the average German word is longer than the average English word (I could not find a reputable article on the exact numbers, but I think 20%–25% longer is a decent rule of thumb). This means that if a UI element barely fits its text in English, there is a good chance the text will not fit in other languages. These types of issues can be frustrating to fix, because they may require changing art assets months after they were supposed to be finalized, yet this is one problem that consistently comes up late in the localization process. Figure 10.10 illustrates a button in English and a few other languages (translations courtesy of Google).

Image

Figure 10.10 A Cancel button and the problems with localizing it.

In addition to text and voice-over, one other aspect of localization is localizing the actual content for certain countries. For example, any World War II game must be careful not to violate German laws, which forbid the use of certain symbols related to the Third Reich. Other countries may have stricter laws on violence, which may require reducing or removing blood and/or objectionable scenes. Finally, the most difficult type of localization error to catch is an idiomatic phrase or other reference that is fine in American culture but confusing or perhaps even offensive in other cultures. These types of localization problems extend well beyond the realm of user interfaces, but they are something to keep in mind when preparing a game for another region.

UI Middleware

Recall from Chapter 1, “Game Programming Overview,” that middleware is an external code library used to simplify certain parts of development. No discussion of user interfaces for games, especially AAA ones, would be complete without discussing the predominant middleware in this category: Autodesk Scaleform. Games that have used Scaleform include Star Wars: The Old Republic and BioShock Infinite. Although Scaleform is very popular for large-scale commercial games, it’s not as popular for smaller games because it’s not a free solution.

The premise behind Scaleform is that it allows artists and content creators to use Adobe Flash to design the entire layout of the UI. This way, a programmer does not have to spend time manually setting up the locations of all the elements. Much of the UI programming can also be done in ActionScript, which is a JavaScript-esque scripting language that Flash uses. Using Scaleform does mean that some time will have to be spent integrating it into the overall game engine, but once that is complete the UI can go through many iterations without any system-level changes.

User Experience

One very important aspect of UIs that this chapter has not discussed is user experience (or UX), which is the reaction a user has while actually using the interface. An example of a poorly designed game UI is one where the user feels he or she must make several button presses to perform simple actions, a notorious problem in certain console RPGs. If you are tasked with not only programming but also designing the UI, it’s critical to take UX into account. However, because this topic is not specifically related to programming, I have relegated any further discussion of it to the references.

Summary

Game UIs can vary a great deal—an MMORPG such as World of Warcraft might have dozens of UI elements on screen at once, whereas a minimalistic game may eschew the UI almost entirely. Often, the UI is the player’s first interaction with the game as he or she navigates through the initial menu system. But the UI is also featured during gameplay through the HUD, using elements such as an aiming reticule, radar, and waypoint arrow to provide important information to the player. Although experienced programmers often express a dislike of programming UI systems, a well-implemented UI can greatly improve the player’s overall experience. A poorly implemented UI, however, can ruin an otherwise enjoyable game.

Review Questions

1. What are the benefits of using a “menu stack”?

2. What property of letter key codes can be taken into account when implementing a function to convert from key codes to characters?

3. When implementing a waypoint arrow, how does one determine the position of the 3D arrow in world space?

4. How are the waypoint arrow calculations for the axis and angle of rotation performed?

5. Describe the calculations that allow for an aiming reticule to determine whether a friend or foe is being targeted.

6. When implementing a radar, how is the 3D coordinate of an object converted into a 2D radar coordinate?

7. What is the difference between absolute and relative coordinates?

8. Why should UI text not be hard-coded?

9. What problem does the Unicode character set solve?

10. What is user experience (UX)?

Additional References

Komppa, Jari. “Sol on Immediate Mode GUIs.” http://iki.fi/sol/imgui/. An in-depth tutorial on the implementation of different elements of a game UI using C and SDL.

Quintans, Desi. “Game UI By Example: A Crash Course in the Good and the Bad.” http://tinyurl.com/d6wy2yg. This relatively short but informative article goes over examples of games that had good and bad UIs, and takeaways that should be considered when designing your own game’s UI.

Spolsky, Joel. User Interface Design for Programmers. Berkeley: Apress, 2001. This programmer’s approach to designing UI is not specifically written for games, but it does provide interesting insights into how to create an effective UI.