Network Topologies and Sample Games - Multiplayer Game Programming: Architecting Networked Games (2016)


Chapter 6. Network Topologies and Sample Games

The first part of this chapter takes a look at the two main configurations that can be used when multiple computers must communicate in a networked game: client-server and peer-to-peer. The remainder of the chapter combines all the topics covered up to this point in the book, and creates initial versions of two sample games.

Network Topologies

By and large, Chapters 1 to 5 have focused specifically on the issue of two computers communicating over the Internet and sharing information in a manner that is conducive to networked games. Although there absolutely are networked two-player games, many of the more popular games feature higher player counts. But even with only two players, some important questions arise. How will the players send game updates to each other? Will there be object replication as in Chapter 5, or will only the input state be replicated? What happens if the computers disagree on the game state? These are all important questions that must be answered for any networked multiplayer game.

A network topology determines how the computers in a network are connected to each other. In the context of a game, the topology determines how the computers participating in the game will be organized in order to ensure all players can see an up-to-date version of the game state. As with the decision of network protocol, there are tradeoffs regardless of the selected topology. This section explores the two main types of topologies used by games, client-server and peer-to-peer, and the small variations that can also exist within these types.


In a client-server topology, one game instance is designated the server, and all of the other game instances are designated as clients. Each client only ever communicates with the server, while the server is responsible for communicating with all of the clients. Figure 6.1 illustrates this topology.


Figure 6.1 Client-server topology

In a client-server topology, given n clients there are a total of O(n) connections. However, the topology is asymmetric: the server maintains O(n) connections (one to each client), while each client has only its single connection to the server. In terms of bandwidth, if there are n clients and each client sends b bytes per second of data, the server must have enough bandwidth to handle b · n incoming bytes per second. Similarly, if the server needs to send c bytes per second of data to each client, the server must support c · n outgoing bytes per second. However, each client need only support c bytes per second downstream and b bytes per second upstream. This means that as the number of clients increases, the bandwidth required for the server increases linearly. In theory, the bandwidth requirements for the client will not change based on the number of clients. However, in practice, supporting more clients leads to more objects in the world to replicate, which may lead to a slight increase in bandwidth for each client.
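As a quick sanity check of the arithmetic above, the server's requirements can be computed directly. The struct and function names here are illustrative, not from the sample game:

```cpp
#include <cassert>

// Back-of-the-envelope client-server bandwidth: with n clients each
// sending b bytes per second and receiving c bytes per second, the
// server's load grows linearly in n while each client's stays constant.
struct ServerBandwidth
{
    int incomingBytesPerSec; // b * n
    int outgoingBytesPerSec; // c * n
};

ServerBandwidth ComputeServerBandwidth(int numClients,
                                       int bytesUpPerClient,   // b
                                       int bytesDownPerClient) // c
{
    return ServerBandwidth{ numClients * bytesUpPerClient,
                            numClients * bytesDownPerClient };
}
```

For example, with 8 clients each sending 100 bytes per second and receiving 1000 bytes per second, the server needs 800 bytes per second of downstream and 8000 bytes per second of upstream bandwidth.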

Although by no means the only approach to client-server, most games that implement client-server utilize an authoritative server. This means that the game server’s simulation of the game is considered to be correct. If the client ever finds itself in disagreement with the server, it should update its game state based on what the server says is the game state. For instance, in the sample Robo Cat Action game discussed later in this chapter, each player cat can throw a ball of yarn. But with an authoritative server model, the client is forbidden from making a determination of whether or not the yarn hits another player. Instead, the client must inform the server that it wants to throw a ball of yarn. The server then decides both if the client is even allowed to throw a ball of yarn and, if so, whether or not the other player is hit by the ball of yarn.

By placing the server as an authority, this means there is some amount of “lag” or delay in actions performed by the client. The topic of latency is discussed in great detail in Chapter 7, “Latency, Jitter, and Reliability,” but a brief discussion is in order. In the case of the ball throw, the server is the only game instance allowed to make a decision on what happens. But it will take some time to send a ball throw request to the server, which in turn will process it before sending the result to all of the clients. One contributing factor to this delay is the round trip time, or RTT, which is the amount of time (typically expressed in milliseconds) that it takes for packets to travel to and back from a particular computer on the network. In an ideal scenario, this RTT is 100 ms or less, though even on modern Internet connections there are many factors that may not allow for such a low RTT.

Suppose there is a game with a server and two clients, Clients A and B. Because the server sends all game data to each client, this means that if Client A throws a ball of yarn, the packet containing the yarn throw request must first travel to the server. Then the server will process the throw before sending the result back to Clients A and B. In this scenario, the worst case network latency experienced by Client B would be equal to ½ Client A’s RTT, plus the server processing time, plus ½ Client B’s RTT. In fast network conditions, this may not be an issue, but realistically, most games must use a variety of techniques to hide this latency. This is covered in detail in Chapter 8, “Improved Latency Handling.”
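The worst-case breakdown above can be expressed as a simple formula. The function name and sample values below are illustrative assumptions:

```cpp
#include <cassert>

// Worst-case latency from Client A's action to Client B seeing the
// result: half of A's round trip (A to server), plus server processing
// time, plus half of B's round trip (server to B). Values are in
// milliseconds.
float WorstCaseLatencyMs(float clientARttMs,
                         float serverProcessingMs,
                         float clientBRttMs)
{
    return 0.5f * clientARttMs + serverProcessingMs + 0.5f * clientBRttMs;
}
```

With Client A at 100 ms RTT, 20 ms of server processing, and Client B at 80 ms RTT, Client B would see the yarn throw roughly 110 ms after Client A initiated it.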

There is also a subclassification of types of servers. Some servers are dedicated, meaning they only run the game state and communicate with all of the clients. The dedicated server process is completely separate from any client processes running the game. This means that the dedicated server typically is headless and does not actually display any graphics. This type of server is often used by big-budget games such as Battlefield, which allows the developer to run multiple dedicated server processes on a single powerful machine.

The alternative to a dedicated server is a listen server. In this configuration, the server is also an active participant in the game itself. One advantage of a listen server configuration is that it may reduce deployment costs, because it is not necessary to rent servers in a data center—instead, one of the players can use their machine as both a server and a client. However, the disadvantage of a listen server is that a machine running as a listen server must be powerful enough and have a fast enough connection to handle this increased load. The listen server approach is sometimes erroneously referred to as a peer-to-peer connection, but a more precise term is peer hosted. There is still a server, it just so happens that the server is hosted by a player in the game.

One caution about a listen server is that, assuming it is authoritative, it will have a complete picture of the game state. This means that the player running the listen server could potentially use this information to cheat. Furthermore, in a client-server model, typically only the server knows the network address of every active client. This can be a huge issue in the event that the server disconnects—whether due to a network issue or perhaps an angry player deciding to exit their game. Some games that utilize a listen server implement the concept of host migration, which means that if a listen server disconnects, one of the clients is promoted to be the new server. For this to be possible, however, there must be some amount of communication between the clients. This means that host migration requires a hybrid model combining client-server and peer-to-peer topologies.


In a peer-to-peer topology, each individual participant is connected to every other participant. As is apparent from Figure 6.2, this means that there is a great deal of data transmitted back and forth between peers. The number of connections is a quadratic function: given n peers, each peer must maintain O(n − 1) connections, which leads to O(n²) connections across the network. This also means that the bandwidth requirements for each peer increase as more and more peers connect to the game. However, unlike in client-server, the bandwidth requirements are symmetric, so every peer will require the same amount of available bandwidth upstream and downstream.
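The quadratic growth is easy to see by counting links. The helper below is illustrative:

```cpp
#include <cassert>

// In a full peer-to-peer mesh, each of the n peers links to the other
// n - 1 peers; counting each link once gives n * (n - 1) / 2 total
// connections, which grows quadratically in the number of peers.
int PeerConnectionCount(int numPeers)
{
    return numPeers * (numPeers - 1) / 2;
}
```

So 4 peers need 6 connections, while doubling to 8 peers more than quadruples that to 28.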


Figure 6.2 Peer-to-peer topology

The concept of authority is much more nebulous in a peer-to-peer game. One possible approach is that certain peers have authority over certain parts of the game, but in practice such a system can be difficult to implement. A more common approach in peer-to-peer games is to share all actions across every peer, and have every peer simulate these actions. This model is sometimes called an input sharing model.

One aspect of the peer-to-peer topology that makes input sharing more viable is the fact that there is less latency to be concerned about. As opposed to the client-server model, which has an intermediary between clients, in a peer-to-peer game all peers are communicating with each other directly. This means that at worst, the latency between peers is ½ RTT. However, there still is some latency, which can lead to what is the largest technical challenge in a peer-to-peer game: ensuring that all peers remain synchronized with each other.

Recall that the discussion of the deterministic lockstep model in Chapter 1 presented one such approach. To recap, in the Age of Empires implementation, the game was broken down into “turns” of 200 ms. All input commands during these 200 ms are queued up, and when the 200 ms ends, the commands are sent to all of the peers. Furthermore, there is a one turn delay such that when each peer is displaying the results of turn 1, the commands are being queued to be executed on turn 3. Although this type of turn synchronization is conceptually simple, the actual implementation details can be far more complex. The Robo Cat RTS sample game, discussed later in this chapter, implements a very similar model.
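The turn-delay bookkeeping described above can be sketched in a few lines. The names here (Command, TurnScheduler) are hypothetical, not from the sample game:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Minimal sketch of lockstep turn scheduling: commands queued during one
// turn execute two turns later, giving every peer time to receive every
// other peer's commands before that turn begins.
struct Command
{
    int playerId;
    std::string action;
};

class TurnScheduler
{
public:
    // Queue a command issued during the current turn for later execution.
    void QueueCommand(int currentTurn, const Command& inCommand)
    {
        mScheduledCommands[currentTurn + 2].push_back(inCommand);
    }

    // Every peer executes the same commands in the same order each turn,
    // which keeps all of the simulations in lockstep.
    const std::vector<Command>& GetCommandsForTurn(int turn)
    {
        return mScheduledCommands[turn];
    }

private:
    std::map<int, std::vector<Command>> mScheduledCommands;
};
```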

Furthermore, it is important to ensure that the game state is consistent between all peers. This means that the game implementation needs to be fully deterministic. In other words, a given set of inputs must always result in the same outputs. A few important aspects of this include using checksums to verify consistency of the game state across peers and synchronizing random number generation across all peers, both topics that are covered in detail later in this chapter.
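As a concrete sketch of the checksum idea, each peer can hash its serialized game state at the end of a turn and share the result. FNV-1a is used here purely as an illustrative choice; any deterministic checksum works:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hash the serialized game state with FNV-1a. Because the hash is
// deterministic, identical state always yields an identical checksum.
uint32_t ComputeStateChecksum(const std::vector<uint8_t>& serializedState)
{
    uint32_t hash = 2166136261u; // FNV offset basis
    for(uint8_t byte : serializedState)
    {
        hash ^= byte;
        hash *= 16777619u; // FNV prime
    }
    return hash;
}
```

If any two peers report different checksums for the same turn, their simulations have diverged, and the game can halt or attempt some form of recovery.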

Another issue that arises in peer-to-peer games is connecting new players. Since every peer must know the address of every other peer, in theory a new player could connect to any peer. However, matchmaking services that list available games typically only accept a single address. In this case, one peer may be designated as a so-called master peer, which is the only peer that greets new players.

Finally, the server disconnection problem that is a concern in client-server doesn’t really exist in peer-to-peer. Typically, if communication is lost with a peer, the game may pause for a few seconds before removing the peer from the game. Once the peer is disconnected, the remaining peers can continue simulating the game.

Implementing Client-Server

Combining all of the concepts that have been covered to this point in the book, it is now possible to create an initial version of a networked game. This section discusses one such game, Robo Cat Action, a top-down game featuring cats competing to collect as many mice as possible, all while throwing balls of yarn at each other. The game is shown in action in Figure 6.3. The code for this first version of the game is in the Chapter6/RoboCatAction directory of the online code repository.


Figure 6.3 The initial version of Robo Cat Action

The controls for Robo Cat Action are not very complex. The D and A keys can be used to rotate the cat clockwise and counterclockwise, respectively. The W and S keys can be used to move the cat forward and back. The K key can be used to throw a ball of yarn that damages other cats. Mice can also be collected by moving over them.

This first version of the game code makes a large assumption: That there is little-to-no network latency, and that all packets will arrive at their destinations. This is clearly an unrealistic assumption for any networked game, and subsequent chapters, especially Chapter 7, “Latency, Jitter, and Reliability,” discuss how to remove these assumptions. But for now, it is useful to discuss the basics of a client-server game without worrying about the added complexity of handling latency or packet loss.

Separating Server and Client Code

One of the cornerstones of the client-server model with an authoritative server is that the code that executes on the server is different from the code that executes on each client. Take the example of the main character, the robo-cat. One of the properties of the cat is the mHealth variable that tracks its remaining health. The server needs to know about the health of the cat, because if the health hits zero, then the cat should go into its respawn state (cats have at least nine lives, after all). Similarly, the client needs to know how much health the cat has because it will display the remaining health in the top right corner. Even though the server’s instance of mHealth is the authoritative version of the variable, the client will need to cache the variable locally in order to display it in the user interface.

The same can be said about functions. There may be some member functions of the RoboCat class that are needed only for the server, some that are needed only for the client, and some that are needed for both. To account for this, Robo Cat Action takes advantage of inheritance and virtual functions. Thus, there is a RoboCat base class and two derived classes: RoboCatServer and RoboCatClient, both of which override and implement new member functions as necessary. From a performance standpoint, using virtual functions in this manner may not give the highest possible performance, but from the perspective of ease-of-use, an inheritance hierarchy is perhaps the simplest.
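A minimal sketch of this split looks like the following. The class names match the text, but the specific members shown are illustrative stand-ins:

```cpp
#include <cassert>

// Shared library: state and behavior both sides need, such as mHealth.
class RoboCat
{
public:
    virtual ~RoboCat() {}
    virtual void Update() {} // overridden per target
    float GetHealth() const { return mHealth; }
protected:
    float mHealth = 10.f;
};

// Server executable: the authoritative simulation.
class RoboCatServer : public RoboCat
{
public:
    void Update() override
    {
        // authoritative logic: apply damage, trigger respawns,
        // replicate the resulting state to clients...
    }
};

// Client executable: presentation of the locally cached state.
class RoboCatClient : public RoboCat
{
public:
    void Update() override
    {
        // presentation logic: draw the cached health in the UI,
        // play sounds and effects...
    }
};
```

Code that only needs shared behavior can work through `RoboCat*` pointers, while each executable instantiates only its own derived class.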

The concept of splitting up the code into separate classes is taken a step further—inspecting the code will reveal that the code is separated into three separate targets. The first target is the RoboCat library that contains the shared code that is used by both the server and the client. This includes classes such as the UDPSocket class as implemented in Chapter 3 and the OutputMemoryBitStream class as implemented in Chapter 4. Next, there are two executable targets—RoboCatServer for the server, and RoboCatClient for the client.


Because there are two separate executables for the server and client, in order to test Robo Cat Action, you must run both executables separately. The server takes a single command line parameter to specify the port to accept connections on. For example:

RoboCatServer 45000

This specifies that the server should listen for connecting clients on port 45000.

The client executable takes in two command line parameters: the full address of the server (including the port) and the name of the connecting client. So for instance:

RoboCatClient localhost:45000 John

This specifies that the client wants to connect to the server at localhost port 45000, with a player name of “John.” Naturally, multiple clients can connect to one server, and because the game does not use very many resources, multiple instances of the game can be run on one machine for the purposes of testing.

For the example of the RoboCat class hierarchy, the three individual classes reside in different targets—the base RoboCat class is in the shared library, and the RoboCatServer and RoboCatClient classes are unsurprisingly in their corresponding executable. This approach leads to a very clean separation of the code, and it makes it clear which code is specific to only the server or the client. To help visualize the approach, Figure 6.4 presents the class hierarchy for the GameObject class in Robo Cat Action.


Figure 6.4 Hierarchy of the GameObject class in Robo Cat Action (items in gold are in the shared library, items in blue are in the client executable, and items in green are in the server executable)

Network Manager and Welcoming New Clients

The NetworkManager and the derived classes NetworkManagerClient and NetworkManagerServer do much of the heavy lifting in terms of interacting with the network. For example, all of the code that reads in available packets into a queue of packets to be processed is placed in the base NetworkManager class. The code to handle packets is very similar to what was covered in Chapter 3, “Berkeley Sockets,” so it won’t be covered again here.

One of the other responsibilities of the NetworkManager is to handle new clients joining the game. Robo Cat Action is designed for drop in/drop out multiplayer, so at any time a new client can try to join the match. As you might imagine, the responsibilities when welcoming new clients are different between the server and the client, and thus the functionality is split up between NetworkManagerClient and NetworkManagerServer.

Before we dive into the code, it’s worthwhile to look at the connecting process at a high level. In essence, there are four steps to the procedure:

1. When a client wants to join a game, it sends the server a “hello” packet. This packet only contains the literal "HELO" (to identify the type of packet) and the serialized string representing the player’s name. The client will keep sending these hello packets until it is acknowledged by the server.

2. Once the server receives the hello packet, it assigns a player ID to the new player, and also does some bookkeeping such as associating the incoming SocketAddress with the player ID. Then the server sends a “welcome” packet to the client. This packet contains the literal "WLCM" and the ID assigned to the player.

3. When the client receives the welcome packet, it saves its player ID, and starts sending and receiving replication information to the server.

4. At some point in the future, the server sends the information about any objects spawned for the new client to both the new client and all of the existing clients.

In this particular case, it is fairly straightforward to build redundancy into the system in the event of packet loss. If the client doesn’t receive the welcome packet, it will continue sending hello packets to the server. If the server receives a hello packet from a client whose SocketAddress is already on file, it will simply resend the welcome packet.

Looking at the code more closely, there are two literals used to identify the packets, and so these are initialized as constants in the base NetworkManager class:

static const uint32_t kHelloCC = 'HELO';
static const uint32_t kWelcomeCC = 'WLCM';

Specifically on the client side of things, the NetworkManagerClient defines an enum to specify the current state of the client:

enum NetworkClientState
{
    NCS_Uninitialized,
    NCS_SayingHello,
    NCS_Welcomed,
};

When the NetworkManagerClient is initialized, it sets its mState member variable to NCS_SayingHello. While in the NCS_SayingHello state, the client will keep sending hello packets to the server. On the other hand, if the client has been welcomed, then it needs to start sending updates to the server. In this case, the updates are input packets, which are covered shortly.

Furthermore, the client also knows the type of packets it is receiving based on the four-character literals that identify the packet. In the case of Robo Cat Action, there are only two types of packets it might receive: a welcome packet, and a state packet, which contains replication data. The code to handle sending and receiving packets is implemented in a manner similar to a state machine, as shown in Listing 6.1.

Listing 6.1 Client Sending and Receiving Packets

void NetworkManagerClient::SendOutgoingPackets()
{
    switch(mState)
    {
    case NCS_SayingHello:
        UpdateSayingHello();
        break;
    case NCS_Welcomed:
        //throttles calls to SendInputPacket
        UpdateSendingInputPacket();
        break;
    default:
        break;
    }
}

void NetworkManagerClient::ProcessPacket(
    InputMemoryBitStream& inInputStream,
    const SocketAddress& inFromAddress)
{
    uint32_t packetType;
    inInputStream.Read(packetType);
    switch(packetType)
    {
    case kWelcomeCC:
        HandleWelcomePacket(inInputStream);
        break;
    case kStateCC:
        HandleStatePacket(inInputStream);
        break;
    }
}

In terms of sending the hello packets, the only wrinkle is that the client ensures that it does not send hello packets too frequently. It does this by checking the elapsed time since the last hello packet. The actual packet itself is very straightforward, as the client need only write the 'HELO' literal and its name. Similarly, the welcome packet only contains the player ID as a payload, so the client only needs to save this ID. This code is shown in Listing 6.2. Notice how HandleWelcomePacket tests to ensure that the client is in the expected state for a welcome packet. This is to ensure no bugs can result in the event that a welcome packet is received after the client has already been welcomed. A similar test is used in HandleStatePacket.

Listing 6.2 Client Sending Hello Packets and Reading Welcome Packets

void NetworkManagerClient::UpdateSayingHello()
{
    float time = Timing::sInstance.GetTimef();

    if(time > mTimeOfLastHello + kTimeBetweenHellos)
    {
        SendHelloPacket();
        mTimeOfLastHello = time;
    }
}

void NetworkManagerClient::SendHelloPacket()
{
    OutputMemoryBitStream helloPacket;

    helloPacket.Write(kHelloCC);
    helloPacket.Write(mName);

    SendPacket(helloPacket, mServerAddress);
}

void NetworkManagerClient::HandleWelcomePacket(InputMemoryBitStream& inInputStream)
{
    if(mState == NCS_SayingHello)
    {
        //if we received a player id, we've been welcomed!
        int playerId;
        inInputStream.Read(playerId);
        mPlayerId = playerId;
        mState = NCS_Welcomed;
        LOG("'%s' was welcomed on client as player %d",
            mName.c_str(), mPlayerId);
    }
}

The server side of things is a bit more complex. First, the server has a hash map called mAddressToClientMap that it uses to track all known clients. The key for the map is the SocketAddress, and the value is a pointer to a ClientProxy. We’ll discuss client proxies in more detail later in this chapter, but for now, you can think of it as a class that the server uses to track the state of all known clients. Keep in mind that because we are using the socket address directly, there could potentially be NAT traversal issues as previously discussed in Chapter 2. We will not worry about handling the traversal in the code for Robo Cat.

When the server first receives a packet, it performs a lookup into the address map to see whether or not the sender is known. If the sender is unknown, the server will then check to see if the packet is a hello packet. If the packet isn’t a hello packet, it will simply be ignored. Otherwise, the server will create a client proxy for the new client and send it a welcome packet. This is shown in Listing 6.3, though the code for sending a welcome packet is omitted as it is as straightforward as sending the hello packet.

Listing 6.3 Server Handling New Clients

void NetworkManagerServer::ProcessPacket(
    InputMemoryBitStream& inInputStream,
    const SocketAddress& inFromAddress)
{
    //do we know who this client is?
    auto it = mAddressToClientMap.find(inFromAddress);
    if(it == mAddressToClientMap.end())
    {
        HandlePacketFromNewClient(inInputStream, inFromAddress);
    }
    else
    {
        ProcessPacket((*it).second, inInputStream);
    }
}

void NetworkManagerServer::HandlePacketFromNewClient(
    InputMemoryBitStream& inInputStream,
    const SocketAddress& inFromAddress)
{
    uint32_t packetType;
    inInputStream.Read(packetType);
    if(packetType == kHelloCC)
    {
        string name;
        inInputStream.Read(name);

        //create a client proxy
        // ...

        //and welcome the client ...
        // ...

        //init replication manager for this client
        // ...
    }
    else
    {
        LOG("Bad incoming packet from unknown client at socket %s",
            inFromAddress.ToString().c_str());
    }
}

Input Sharing and Client Proxies

The implementation of replication for game objects in Robo Cat Action is very similar to the approach discussed in Chapter 5, “Object Replication.” There are three replication commands: create, update, and destroy. Furthermore, a partial object replication system is implemented to reduce the amount of information sent in an update packet. Since the game uses an authoritative server model, objects are only ever replicated from the server to the client—thus the server is responsible for sending the replication update packets (assigned the literal 'STAT'), and the client is responsible for processing the replication commands as necessary. There’s a bit of work that needs to be done in order to ensure that the appropriate commands are sent to each of the clients, which will be covered later in this section.

For now, consider what the client needs to send to the server. Since the server is the authority, the client ideally should not be sending any replication commands for objects. However, in order for the server to accurately simulate each client, it needs to know what each client is trying to do. This leads to the concept of an input packet. In every frame, the client processes the input events. If any of these input events lead to something that needs to be processed server side—such as movement of the cat or throwing of a ball of yarn—the client will send the input events to the server. The server then accepts the input packet, and saves the input state into a client proxy—an object used by the server to track a particular client. Finally, when the server updates the simulation, it will take into account any input stored in a client proxy.

The InputState class tracks a snapshot of the client input on a particular frame. Every frame, the InputManager class updates the InputState based on the client’s input. What is stored in the InputState will vary from game to game. In this particular case, the only information stored is the desired movement offsets in each of the four cardinal directions, and whether or not the player pressed the button to throw a ball of yarn. This leads to a class with only a handful of members, as shown in Listing 6.4.

Listing 6.4 InputState Class Declaration

class InputState
{
public:
    InputState():
        mDesiredRightAmount(0),
        mDesiredLeftAmount(0),
        mDesiredForwardAmount(0),
        mDesiredBackAmount(0),
        mIsShooting(false)
    {}

    float GetDesiredHorizontalDelta() const
        {return mDesiredRightAmount - mDesiredLeftAmount;}
    float GetDesiredVerticalDelta() const
        {return mDesiredForwardAmount - mDesiredBackAmount;}
    bool IsShooting() const
        {return mIsShooting;}

    bool Write(OutputMemoryBitStream& inOutputStream) const;
    bool Read(InputMemoryBitStream& inInputStream);

private:
    friend class InputManager;

    float mDesiredRightAmount, mDesiredLeftAmount;
    float mDesiredForwardAmount, mDesiredBackAmount;
    bool mIsShooting;
};

The GetDesiredHorizontalDelta and GetDesiredVerticalDelta functions are helper functions that determine the overall offset on each axis. So for example, if the player holds both the A and D keys, the overall horizontal delta should be zero. The code for the Read and Write functions is not included in Listing 6.4—these functions just read and write the member variables to the provided memory bit stream.

Keep in mind that the InputState is updated every single frame by the InputManager. For most games, it would be impractical to send the InputState to the server at the same frequency. Ideally, the InputState over the course of several frames should be combined into a single move. To keep things simple, Robo Cat Action doesn’t combine the InputState in any way—instead, every x seconds, it will grab the current InputState and save this as a Move.
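This fixed-interval sampling can be sketched as follows. The constant and type names here are illustrative assumptions, not from the sample game's code:

```cpp
#include <cassert>
#include <vector>

// Interval between move samples; an illustrative value for the "every x
// seconds" described in the text.
const float kMoveSampleInterval = 0.0333f;

struct SampledInput { float horizontal = 0.f; bool shooting = false; };
struct SampledMove  { SampledInput state; float timestamp; };

class MoveSampler
{
public:
    // Called once per frame; records a new move only when enough time
    // has passed since the last one. Returns true if a move was added.
    bool Update(const SampledInput& currentInput, float now,
                std::vector<SampledMove>& outMoves)
    {
        if(now >= mNextSampleTime)
        {
            outMoves.push_back(SampledMove{ currentInput, now });
            mNextSampleTime = now + kMoveSampleInterval;
            return true;
        }
        return false;
    }

private:
    float mNextSampleTime = 0.f;
};
```

Frames that fall between sample times simply update the current input state without generating network traffic.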

The Move class is essentially a wrapper for the InputState, with the addition of two floats: one to track the timestamp of the Move, and one to track the amount of delta time between the current move and the previous move. This is shown in Listing 6.5.

Listing 6.5 Move Class

class Move
{
public:
    Move() {}
    Move(const InputState& inInputState, float inTimestamp,
        float inDeltaTime):
        mInputState(inInputState),
        mTimestamp(inTimestamp),
        mDeltaTime(inDeltaTime)
    {}

    const InputState& GetInputState() const {return mInputState;}
    float GetTimestamp() const {return mTimestamp;}
    float GetDeltaTime() const {return mDeltaTime;}

    bool Write(OutputMemoryBitStream& inOutputStream) const;
    bool Read(InputMemoryBitStream& inInputStream);

private:
    InputState mInputState;
    float mTimestamp;
    float mDeltaTime;
};

The Read and Write functions here will read and write both the input state and the timestamp from/to the provided stream.


Although the Move class is just a thin wrapper for InputState with additional time variables, the distinction is made in order to allow for cleaner code on a frame-to-frame basis. The InputManager polls the keyboard every frame, and saves the data into an InputState. Only when the client actually needs to create a Move does the timestamp matter.

Next, a series of moves is stored in a MoveList. This class contains, unsurprisingly, a list of moves, as well as the timestamp of the last move in the list. On the client side, when the client determines it should store a new move, it will add the move to the move list. Then the NetworkManagerClient will write out the sequence of moves into an input packet when it is time to do so. Note that the code for writing the sequence of moves optimizes the bit count by assuming that there will never be more than three moves to write at a time. It can make this assumption based on the constant factors that dictate the frequency of moves and input packets. The client code related to move lists is shown in Listing 6.6.

Listing 6.6 Client-Side Code for Move Lists

const Move& MoveList::AddMove(const InputState& inInputState,
    float inTimestamp)
{
    //first move has 0 delta time
    float deltaTime = mLastMoveTimestamp >= 0.f ?
        inTimestamp - mLastMoveTimestamp : 0.f;

    mMoves.emplace_back(inInputState, inTimestamp, deltaTime);

    mLastMoveTimestamp = inTimestamp;

    return mMoves.back();
}

void NetworkManagerClient::SendInputPacket()
{
    //only send if there's any input to send!
    MoveList& moveList = InputManager::sInstance->GetMoveList();
    if(moveList.HasMoves())
    {
        OutputMemoryBitStream inputPacket;
        inputPacket.Write(kInputCC);

        //we only want to send the last three moves; the count below is
        //written with only 2 bits, so at most three moves may be written
        int moveCount = moveList.GetMoveCount();
        int startIndex = moveCount > 3 ? moveCount - 3 : 0;
        inputPacket.Write(moveCount - startIndex, 2);
        for(int i = startIndex; i < moveCount; ++i)
        {
            moveList[i].Write(inputPacket);
        }

        SendPacket(inputPacket, mServerAddress);
        moveList.Clear();
    }
}

Note that the code for SendInputPacket uses the array indexing operator on the MoveList. The MoveList internally uses a deque data structure, so this operation is constant time. In terms of redundancy, SendInputPacket really is not very fault tolerant. The client only ever sends the moves once. So for example, if an input packet contains a “throw” input command, but that packet never reaches the server, the client will never actually throw a ball of yarn. Clearly, this is not a tenable situation in a multiplayer game.

In Chapter 7, “Latency, Jitter, and Reliability,” you will see how some redundancy can be added to the input packets. In particular, each move will be sent three times in order to give the server three opportunities to recognize the move. This adds a bit of complexity on the server side of things, because the server needs to recognize whether or not it has already processed a move when it receives it.

As previously mentioned, the client proxy is what the server uses to track the state of each client. One of the client proxy’s most important responsibilities is maintaining a separate replication manager for each client. This allows the server to have a complete picture of what information it has or has not sent to each client. Since the server will most likely not send a replication packet to every client every frame, a separate replication manager for each client is necessary. This becomes especially important when redundancy is added, because it will allow the server to know the exact variables that need to be resent for a particular client.

Each client proxy also stores the socket address, name, and ID of its player. The client proxy is also where the move information for each client is stored. When an input packet is received, all of the moves associated with a client are added to the ClientProxy instance representing that client. Listing 6.7 shows a partial declaration of the ClientProxy class.

Listing 6.7 Partial Declaration of the ClientProxy Class

class ClientProxy
{
public:
    ClientProxy(const SocketAddress& inSocketAddress, const string& inName,
        int inPlayerId);

    // Functions omitted
    // ...
    MoveList& GetUnprocessedMoveList() {return mUnprocessedMoveList;}

    ReplicationManagerServer mReplicationManagerServer;

private:
    // Variables omitted
    // ...
    MoveList mUnprocessedMoveList;
    bool mIsLastMoveTimestampDirty;
};

Finally, the RoboCatServer class uses the unprocessed move data in its Update function, as shown in Listing 6.8. It is important to note that the delta time passed to each call of ProcessInput and SimulateMovement is based on the delta time between the moves, as opposed to the delta time of the server’s frame. This is how the server tries to keep the simulation as close to the client’s actions as possible, even if it receives multiple moves in one packet. It also allows the server and client to run at different frame rates. This can add some complications for physics objects that must be simulated at a fixed time step. If this is the case for your game, you will want to lock the physics frame rate separately from the other frame rates.

Listing 6.8 Updating the RoboCatServer Class

void RoboCatServer::Update()
{
    // Code omitted
    // ...

    ClientProxyPtr client = NetworkManagerServer::sInstance->
        GetClientProxy(GetPlayerId());
    if( client )
    {
        MoveList& moveList = client->GetUnprocessedMoveList();
        for( const Move& unprocessedMove: moveList )
        {
            const InputState& currentState = unprocessedMove.GetInputState();
            float deltaTime = unprocessedMove.GetDeltaTime();
            ProcessInput(deltaTime, currentState);
            SimulateMovement(deltaTime);
        }
        moveList.Clear();
    }

    // Code omitted
    // ...
}

Implementing Peer-to-Peer

Robo Cat RTS is a real-time strategy game that supports up to four players. Each player is given a herd of three cats. Cats can be controlled by first left clicking to select a cat, and then right clicking on a target. If the target is a location, the cat will move to that location. If the target is an enemy cat, the cat will move into range of the enemy cat before beginning to attack. As in the action game, the cats attack each other by throwing balls of yarn. Robo Cat RTS is shown in action in Figure 6.5. The code for the initial version of the game is in Chapter6/RoboCatRTS.


Figure 6.5 Robo Cat RTS in action

Although both games utilize UDP, the network model used for Robo Cat RTS is very different from Robo Cat Action. As with the action game, this initial version of the RTS assumes there is no packet loss. However, due to the nature of lockstep turns, the game will still function with some amount of latency—though there definitely is a degradation in the quality of the experience if the latency becomes too high.

Because Robo Cat RTS uses a peer-to-peer model, there is no need to separate the code into multiple projects. Each peer uses the same exact code. This reduces the number of files somewhat, and also means that the same executable is used by all players in the game.


There are two different ways to launch Robo Cat RTS, although both use the same executable. To initialize as a master peer, specify a port number and player name:

RoboCatRTS 45000 John

To initialize as a normal peer, specify the full address of the master peer (including the port number), as well as a player name:

RoboCatRTS 127.0.0.1:45000 Jane

Note that if the address specified is of a non-master peer, the player will still successfully connect, though it is faster if the master peer is specified.

Even though there is no server, Robo Cat RTS does employ the idea of a master peer. The primary purpose of the master peer is to provide a known IP address for a peer in the game. This is especially relevant when using a matchmaking service that maintains a list of known available games. Furthermore, the master peer is the only peer allowed to assign a player ID to a new player. This avoids a race condition that could occur if two different new players simultaneously contacted two different peers. Other than this one special case, the master peer behaves in the same manner as all of the other peers. Because each peer independently maintains the state of the entire game, the game can still continue if the master peer disconnects.

Welcoming New Peers and Game Start

The welcoming process for a peer-to-peer game is a bit more complex than in a client-server game. As in Robo Cat Action, the new peer first sends a “hello” packet ('HELO') with its player name. However, the hello packet can now have one of three responses:

1. Welcome ('WLCM')—This means that the hello packet was received by the master peer, and the new peer is welcomed into the game. The welcome packet contains the new peer’s player ID, the player ID of the master peer, and the number of players in the game (not including the new peer). Furthermore, the packet contains the names and IP addresses of all of the peers.

2. Not joinable ('NOJN')—This means that either the game is already in progress, or the game is full. If the new peer receives this packet, the game exits.

3. Not master peer ('NOMP')—This happens if the hello packet was sent to a peer who is not the master peer. In this instance, the packet will contain the address of the master peer so that the new peer can send a hello packet to the master peer.

However, once a new peer receives the welcome packet, the process is not complete. It is also the responsibility of the new peer to send an introduction packet ('INTR') to every other peer in the game. This packet contains the new peer’s player ID and name. This way, each peer in the game is guaranteed to have the new peer stored in the data structures used to track the players in the game.

Because the addresses stored by each peer are gleaned from incoming packets, there is potential for issues when one or more peers are connected on a local network. For example, suppose that Peer A is the master peer and Peer B is on the same local network as Peer A. This means that Peer A’s map of peers will include the local network address of Peer B. Now suppose a new peer, Peer C, connects to Peer A via an external IP address. Peer A will welcome Peer C to the game, and give Peer C the address of Peer B. However, this address of Peer B is not reachable by Peer C, because Peer C is not on the same local network as Peer A and Peer B. Thus Peer C will fail to communicate with Peer B, and will not be able to properly join the game. This problem is shown in Figure 6.6a.


Figure 6.6 (a) Peer C is unable to connect to Peer B; (b) A rendezvous server facilitates initial communication between peers

Recall that Chapter 2, “The Internet,” described one such solution to this problem via NAT punchthrough. Other approaches involve the use of an external server in some way. In one approach, the external server, sometimes called a rendezvous server, only facilitates the initial connection between peers. In this way, it is guaranteed that every peer connects to every other peer via an externally reachable IP address. Use of a rendezvous server is illustrated in Figure 6.6b.

Another approach used by some gaming services is to have a central server handle the entire packet routing between peers. What this means is that all peer traffic goes to the central server, and then is routed to the correct peer. Although this second approach requires a far more powerful server, it ensures that no peer will ever know the public IP address of any other peer. From a security standpoint, this may be preferred as it would, for instance, prevent one peer from trying to disconnect another peer via a distributed denial of service attack.

One other edge case worth considering is what should happen if a peer can connect to only some of the players in the game. This could happen even with a rendezvous server or a central server routing packets. The simplest solution is to not let this peer join the game, but that requires additional code to track the case. Since this chapter assumes that there are no connection issues to worry about, no code is provided to handle this. But a commercial peer-to-peer game would absolutely need to handle such a case.

When each peer is added to a game, their NetworkManager goes into a lobby state. When the master peer presses the return key, this will send a start packet ('STRT') to every peer in the game. This will signal all peers to enter a 3-second countdown. Once the countdown hits zero, the game officially begins.

Note that this starting approach is naïve in that the timer does not compensate for any latency between the master peer and the other peers. Thus, the master peer will always end up starting the game before the other peers. This does not affect the synchronization of the game, due to the lockstep model, but it may mean that the master peer’s game has to temporarily pause to allow the other peers to catch up. One way to solve this issue would be for each peer to subtract ½ RTT from the timer duration. So if the master peer’s RTT to Peer A were 100 ms, Peer A could subtract 50 ms from the total countdown duration, which should allow it to be better synchronized.
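The ½ RTT adjustment is simple arithmetic; a minimal sketch follows. The names kCountdownDuration and GetAdjustedCountdown are illustrative assumptions, not from the book's code:

```cpp
#include <cassert>

// the 3-second countdown described above
const float kCountdownDuration = 3.0f; // seconds

float GetAdjustedCountdown(float inRTTToMasterPeer)
{
    // the start packet took roughly RTT/2 to arrive, so this peer
    // is already that far behind the master peer's countdown
    return kCountdownDuration - inRTTToMasterPeer * 0.5f;
}
```

With a 100-ms RTT, a peer would run a 2.95-second countdown instead of the full 3 seconds.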

Command Sharing and Lockstep Turns

To simplify things, Robo Cat RTS runs at a locked 30 FPS, with a locked delta time of ~33 ms. This means that even if a particular peer takes longer than 33 ms to render a frame, the simulation still runs as if it were a 33-ms frame. Robo Cat RTS refers to each of these 33-ms ticks as a “sub-turn.” There are three sub-turns per full turn. Thus, each full turn is 100 ms in length; in other words, there are 10 turns per second. Ideally, the duration of sub-turns and full turns would vary based on network and performance conditions. In fact, this is one of the topics of discussion in the Bettner and Terrano paper on Age of Empires. However, to keep things simple, Robo Cat RTS never adjusts the length of turns or sub-turns.
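The locked 33-ms tick can be driven by a standard fixed-timestep accumulator. The sketch below is an assumption about the structure (SubTurnClock is not a class from the book); it shows how variable real frame time maps onto fixed sub-turns, with every third sub-turn completing a full turn:

```cpp
#include <cassert>

struct SubTurnClock
{
    static constexpr float kSubTurnLength = 0.033f; // ~33 ms per sub-turn
    static constexpr int kSubTurnsPerTurn = 3;      // 3 sub-turns = 100-ms turn

    float mAccumulator = 0.0f;
    int mSubTurnNumber = 0;
    int mTurnNumber = 0;

    // feed in real elapsed frame time; returns how many fixed
    // sub-turns the simulation should advance this frame
    int AdvanceBy(float inDeltaTime)
    {
        mAccumulator += inDeltaTime;
        int subTurns = 0;
        while (mAccumulator >= kSubTurnLength)
        {
            mAccumulator -= kSubTurnLength;
            ++subTurns;
            if (++mSubTurnNumber == kSubTurnsPerTurn)
            {
                mSubTurnNumber = 0;
                ++mTurnNumber; // a full 100-ms turn has elapsed
            }
        }
        return subTurns;
    }
};
```

A slow frame simply advances the simulation by several fixed sub-turns at once, so every peer simulates identical 33-ms steps regardless of rendering speed.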

In terms of replication, each peer runs a full simulation of the game world. This means that objects are not replicated in any way whatsoever. Instead, during gameplay only “turn” packets are transmitted. These packets contain a list of the commands issued by each peer on a particular turn, along with a couple of other key pieces of data.

It should be noted that there is a clear delineation between “commands” and input. For example, left clicking on a cat selects a particular cat. However, as this selection does not affect the game state in any way, it does not generate a command. On the other hand, if a cat is selected and the player right clicks, this means the player wants the cat to either move or attack. Since both of these actions would affect the game state, both will generate commands.

Furthermore, no command is executed the very instant it is issued. Rather, each peer queues up all commands issued on a particular turn. At the end of a turn, each peer will send its command list to every other peer. This command list is scheduled for execution on a future turn. Specifically, a command issued by a peer on turn x is not executed until turn x + 2. This allows for roughly 100 ms for turn packets to be received and processed by every peer. What this means is that in normal conditions, there is a delay of up to 200 ms from when a command is issued to when it is executed. However, because the delay is consistent, this does not really negatively affect the game experience, at least in the case of an RTS.

The concept of commands lends itself naturally to an inheritance hierarchy. Specifically, there is a base Command class, which is declared in Listing 6.9.

Listing 6.9 Declaration of the Command Class

class Command
{
public:
    enum ECommandType
    {
        CM_INVALID,
        CM_ATTACK,
        CM_MOVE,
    };

    //given a buffer, will construct the appropriate command subclass
    static shared_ptr<Command> StaticReadAndCreate(
        InputMemoryBitStream& inInputStream);

    // ...

    virtual void Write(OutputMemoryBitStream& inOutputStream);
    virtual void ProcessCommand() = 0;
protected:
    virtual void Read(InputMemoryBitStream& inInputStream) = 0;

    ECommandType mCommandType;
    uint32_t mNetworkId;
    uint32_t mPlayerId;
};

The implementation of the Command class is mostly self-explanatory. There is an enum specifying the type of command, and an unsigned integer to store the network ID of the unit to whom the command was issued. The ProcessCommand pure virtual function is used when a command is actually executed. The Read and Write functions are used to read/write the commands to memory bit streams. The StaticReadAndCreate function first reads the command type enum from the memory bit stream. Then based on the value of the enum, it will construct an instance of an appropriate subclass and call the subclass’ Read function.

There are only two subclasses in this case. The “move” command moves a cat to the targeted location. The “attack” command tells a cat to attack an enemy cat. The move command has an additional member variable, a Vector3 containing the target of the move. Each command subclass also has a custom StaticCreate function that is used as a helper to create a shared_ptr to a command. The implementation of StaticCreate and ProcessCommand for the move command is shown in Listing 6.10.

Listing 6.10 Select Functions from MoveCommand

MoveCommandPtr MoveCommand::StaticCreate(uint32_t inNetworkId,
                                         const Vector3& inTarget)
{
    MoveCommandPtr retVal;
    GameObjectPtr go = NetworkManager::sInstance->
        GetGameObject(inNetworkId);
    uint32_t playerId = NetworkManager::sInstance->GetMyPlayerId();

    //can only issue commands to this unit if I own it, and it's a cat
    if (go && go->GetClassId() == RoboCat::kClassId &&
        go->GetPlayerId() == playerId)
    {
        retVal = std::make_shared<MoveCommand>();
        retVal->mNetworkId = inNetworkId;
        retVal->mPlayerId = playerId;
        retVal->mTarget = inTarget;
    }
    return retVal;
}

void MoveCommand::ProcessCommand()
{
    GameObjectPtr obj = NetworkManager::sInstance->
        GetGameObject(mNetworkId);
    if (obj && obj->GetClassId() == RoboCat::kClassId &&
        obj->GetPlayerId() == mPlayerId)
    {
        RoboCat* rc = obj->GetAsCat();
        rc->EnterMovingState(mTarget);
    }
}

The StaticCreate function takes in the network ID of the cat that is receiving the command, as well as the target location. It also does some validation to ensure that the command is only being issued to a game object that exists, that the object is a cat, and that the cat is controlled by the peer issuing the command. The ProcessCommand function does similar validation to ensure that the network ID it receives is that of a cat, and that the player ID corresponds to the player controlling the cat. The call to EnterMovingState simply tells the cat to start executing its moving behavior, which occurs over the course of one or more sub-turns. The moving state is implemented much as it would be in a single-player game, so it is not explained in this text.

Commands are stored in a CommandList. As with the MoveList class in the action game, the CommandList is just a wrapper for a deque of commands. It also has a ProcessCommands function that calls ProcessCommand on each command in the list.

Each peer’s input manager has an instance of CommandList. When a local peer either uses the keyboard or mouse to request a command, the input manager adds the command to its list. A class called TurnData is used to encapsulate a peer’s command list, as well as data related to synchronization, for each completed 100-ms turn. The network manager then has a vector where the index corresponds to a turn number. At each index, the network manager stores a map where the key is the player ID, and the value is the TurnData for that player. This way, for each turn, each player’s turn data is separate. This is what allows the network manager to validate it has received data from each peer.

When each peer completes a sub-turn, it checks to see whether or not the full turn is over. If the turn is over, then it prepares turn packets to send to each peer. This function is a bit involved, and therefore is shown in Listing 6.11.

Listing 6.11 Sending Turn Packets to Each Peer

void NetworkManager::UpdateSendTurnPacket()
{
    if (mSubTurnNumber == kSubTurnsPerTurn)
    {
        //create our turn data
        TurnData data(mPlayerId,
            RandGen::sInstance->GetRandomUInt32(0, UINT32_MAX),
            ComputeGlobalCRC(),
            InputManager::sInstance->GetCommandList());

        //we need to send a turn packet to all of our peers
        OutputMemoryBitStream packet;
        packet.Write(kTurnCC);
        //we're sending data for 2 turns from now
        packet.Write(mTurnNumber + 2);
        packet.Write(mPlayerId);
        data.Write(packet);

        for (auto& iter: mPlayerToSocketMap)
        {
            SendPacket(packet, iter.second);
        }

        //save our turn data for turn + 2
        mTurnData[mTurnNumber + 2].emplace(mPlayerId, data);
        InputManager::sInstance->ClearCommandList();

        if (mTurnNumber >= 0)
        {
            TryAdvanceTurn();
        }
        else
        {
            //a negative turn means there's no possible commands yet
            mTurnNumber++;
            mSubTurnNumber = 0;
        }
    }
}

Two of the parameters passed to the TurnData constructor—the random value and the CRC—are discussed in the next section. The main item to note for now is that the peer prepares a turn packet that includes a list of all the commands to be executed two turns from now. This turn packet is then sent to all of the peers. Furthermore, the peer locally keeps its own turn data before clearing the input manager’s command list.

Finally, there is code that checks for a negative turn number. When the game begins, the turn number is set to −2. This way, commands that are issued on turn −2 will be scheduled for execution on turn 0. This means that no commands are executed for the first 200 ms, but there is no way to avoid this initial delay—it is a property of the lockstep turn mechanism.

The TryAdvanceTurn function, shown in Listing 6.12, is named as such because it does not guarantee that the turn advances. This is because it is the responsibility of TryAdvanceTurn to enforce the lockstep nature of the turns. In essence, if it is currently turn x, TryAdvanceTurn will only advance to turn x + 1 if all of the turn data for turn x + 1 has been received. If some turn data for turn x + 1 is still missing, the network manager will enter into a delay state.

Listing 6.12 TryAdvanceTurn Function

void NetworkManager::TryAdvanceTurn()
{
    //only advance the turn IF we received the data for everyone
    if (mTurnData[mTurnNumber + 1].size() == mPlayerCount)
    {
        if (mState == NMS_Delay)
        {
            //throw away any input accrued during delay
            InputManager::sInstance->ClearCommandList();
            mState = NMS_Playing;
            //wait 100ms to give the slow peer a chance to catch up
            SDL_Delay(100);
        }

        mTurnNumber++;
        mSubTurnNumber = 0;

        if (CheckSync(mTurnData[mTurnNumber]))
        {
            //process all the moves for this turn
            for (auto& iter: mTurnData[mTurnNumber])
            {
                iter.second.GetCommandList().ProcessCommands(iter.first);
            }
        }
        else
        {
            //for simplicity, just end the game if it desyncs
            Engine::sInstance->SetShouldKeepRunning(false);
        }
    }
    else
    {
        //don't have all player's turn data, we have to delay:(
        mState = NMS_Delay;
    }
}

While in the delay state, objects in the world are not updated. Instead, the network manager will wait for the turn packets it still needs to receive. Every time a new turn packet is received while in delay, the network manager will again call TryAdvanceTurn, hoping that the new turn packet fills in the gap in turn data. This process will repeat until all necessary data is received. Similarly, if a connection is reset while in delay, the reset peer will be removed from the game and all other peers will attempt to continue.

Don’t forget that this first version of Robo Cat RTS is assuming that all packets are eventually received. To account for packet loss, the delay state could be augmented so that while in delay, the peer determines whose command data is missing. It could then send a request to the peer in question to resend the command data. If several such resend requests are ignored, the peer would eventually be dropped. Furthermore, future turn packets could contain previous turn data, so in the event that a prior turn packet was dropped, a subsequent incoming turn packet may contain the required data.

Maintaining Synchronization

One of the largest challenges in designing a peer-to-peer game in which each peer simulates the game independently is ensuring that each instance of the game stays synchronized. Even minor discrepancies such as inconsistent positions can propagate into more serious issues in the future. If these discrepancies are allowed to persist, over time the simulations will diverge. At some point, the simulations may be so different that it seems like the peers are playing a different game! Clearly, this cannot be allowed, so ensuring and verifying synchronization is very important.

Synchronizing Pseudo-Random Number Generators

Some sources of desynchronization are more apparent than others. One obvious example is the pseudo-random number generator (PRNG), which is the only way for a computer to produce seemingly random numbers. Random elements are a cornerstone of many games, so eliminating random numbers altogether typically is not a viable option. However, in a peer-to-peer game, it is necessary to guarantee that on any particular turn, two peers will always receive the same results from the random number generator.

If you have ever used random numbers in a C/C++ program, you are likely familiar with the rand and srand functions. The rand function generates a pseudo-random number, while the srand function seeds the PRNG. Given a particular seed, a particular PRNG guarantees to always produce the same sequence of numbers. A typical approach is to use the current time as a seed passed to srand. In theory, this means that the numbers will be different every time.

In terms of keeping the peers in sync, this means there are two main things that need to be done in order to ensure each peer generates the same numbers:

1. Each peer’s random number generator should be seeded to the same initial value. In the case of Robo Cat RTS, the master peer selects a seed when it sends out the start packet. The seed is included inside the start packet, so every peer knows what seed value to start the game with.

2. It must be guaranteed that each peer always makes the same number of calls to the PRNG every turn, in the same order, and in the same location in the code. This means there cannot be different versions of the build that use the PRNG more or less, such as for different hardware in cross-platform play.

However, there is a third issue that may not be apparent at first. It turns out that rand and srand are not particularly suitable for guaranteeing synchronization. The C standard does not specify which PRNG algorithm rand must use. This means that different implementations of the C library on different platforms (or even just in different compilers) are not guaranteed to use the same PRNG algorithm. If this is the case, it makes no difference whether or not the seeds are the same—different algorithms will give different results. Furthermore, because there are no guarantees regarding the PRNG algorithm used by rand, the quality of the random numbers, or entropy of the values, is dubious.

In the past, the poorly defined nature of rand meant that most games implemented their own PRNG. Thankfully, C++11 introduced standardized and higher-quality pseudo-random number generators. Though the provided PRNGs are not considered cryptographically secure—meaning safe to use when random numbers are part of a security protocol—they are more than sufficient for the purposes of a game. Specifically, the code for Robo Cat RTS uses the C++11 implementation of the Mersenne Twister PRNG algorithm. The 32-bit Mersenne Twister, referred to as MT19937, has a period of 2^19937 − 1, meaning that the sequence of numbers will realistically never repeat during the course of a given game.
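Because MT19937's algorithm is fully specified by the C++ standard, identically seeded generators stay in lockstep regardless of platform or compiler—exactly the guarantee rand cannot make. A quick demonstration of this property (GeneratorsStayInSync is an illustrative helper, not book code):

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// two mt19937 generators seeded with the same value must produce
// identical sequences on any conforming implementation
bool GeneratorsStayInSync(uint32_t inSeed, int inDraws)
{
    std::mt19937 peerA(inSeed);
    std::mt19937 peerB(inSeed);
    for (int i = 0; i < inDraws; ++i)
    {
        if (peerA() != peerB())
        {
            return false;
        }
    }
    return true;
}
```

The standard even pins down the sequence itself: the 10000th value drawn from a default-constructed mt19937 is required to be 4123659995.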

The interface for the C++11 random number generators is slightly more complex than the old rand and srand functions, so Robo Cat RTS wraps this in a RandGen class, as declared in Listing 6.13.

Listing 6.13 Declaration of the RandGen Class

class RandGen
{
public:
    static std::unique_ptr<RandGen> sInstance;

    static void StaticInit();
    void Seed(uint32_t inSeed);
    std::mt19937& GetGeneratorRef() {return mGenerator;}

    float GetRandomFloat();
    uint32_t GetRandomUInt32(uint32_t inMin, uint32_t inMax);
    int32_t GetRandomInt(int32_t inMin, int32_t inMax);
    Vector3 GetRandomVector(const Vector3& inMin, const Vector3& inMax);
private:
    std::mt19937 mGenerator;
    std::uniform_real_distribution<float> mFloatDistr;
};

The implementation of a handful of the RandGen functions is likewise shown in Listing 6.14.

Listing 6.14 Select Functions from RandGen

void RandGen::StaticInit()
{
    sInstance = std::make_unique<RandGen>();
    //just use a default random seed, we'll reseed later
    std::random_device rd;
    sInstance->mGenerator.seed(rd());
}

void RandGen::Seed(uint32_t inSeed)
{
    mGenerator.seed(inSeed);
}

uint32_t RandGen::GetRandomUInt32(uint32_t inMin, uint32_t inMax)
{
    std::uniform_int_distribution<uint32_t> dist(inMin, inMax);
    return dist(mGenerator);
}

Note that when the RandGen is first initialized, it seeds itself using the random_device class, which yields a platform-specific random value. Random devices are intended to be used for seeding a random number generator, but the device itself should not be used as a generator. The uniform_int_distribution class used in one of the functions is simply a way to specify a range of numbers and receive a pseudo-random number within that range. This approach is preferable to the commonplace practice of taking an integer modulus of a random result. C++11 introduces several additional types of distributions.
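To see why the distribution is preferable to the modulus trick, consider a toy source that uniformly produces the values 0 through 7. Reducing it with % 6 makes the results 0 and 1 twice as likely as the others, because 8 is not a multiple of 6; uniform_int_distribution corrects for this skew internally. (ModuloBiasCounts is an illustrative helper, not book code.)

```cpp
#include <array>
#include <cassert>

// tally how often each result 0..5 occurs when an 8-valued
// uniform source is reduced with the modulus operator
std::array<int, 6> ModuloBiasCounts()
{
    std::array<int, 6> counts{};
    for (int raw = 0; raw < 8; ++raw)
    {
        counts[raw % 6]++; // 6 % 6 == 0 and 7 % 6 == 1 double up
    }
    return counts;
}
```

The same skew exists, just more subtly, when a full 32-bit generator range is reduced with % N for most values of N.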

To synchronize the random numbers, the master peer generates a random number to use as the new seed when the countdown begins. This random number is transmitted to all of the other peers to ensure that when turn −2 begins, all peers will have their generators seeded to the same value:

//select a seed value
uint32_t seed = RandGen::sInstance->GetRandomUInt32(0, UINT32_MAX);

Furthermore, when creating a turn packet at the end of a turn, each peer generates a random integer. This random integer is sent as part of the turn data inside the turn packet. This makes it easy for the peers to verify that all the random number generators remain in sync as the turns progress.

Keep in mind that if your game code requires random numbers that do not affect the game state in any way, it is possible to keep a different generator for these cases. One example is simulating random packet loss—this should not use the game’s generator, because it means every peer would simulate packet loss at the same time. However, be very careful when having multiple generators. You must make sure that any other programmers working on your game understand when to use which PRNG.

Verifying Game Synchronization

Other sources of desynchronization may not be as readily apparent as a PRNG. For example, while floating point math is deterministic on a given processor, results can differ across hardware implementations. For instance, faster SIMD instructions may yield different results than regular floating point instructions. There are also typically different flags that can be set on a processor to change floating point behavior, such as whether or not it strictly follows the IEEE 754 standard.

Other issues in synchronization may just be the result of an unintended error by a programmer. Perhaps the programmer wasn’t aware how the synchronization worked, or perhaps they just made a mistake. Either way, it is important that the game has code that checks for synchronization on a regular basis. This way, desynchronization bugs can hopefully be caught soon after they are introduced.

A common approach is to utilize a checksum, much like how network packets use checksums to validate the integrity of packet data. In essence, at the end of each turn, a checksum of the game state is computed. This checksum is transmitted inside the turn packet so that every peer can validate that all game instances computed the same checksum at the end of the turn.

In terms of selecting an algorithm for the checksum, there are many different choices. Robo Cat RTS uses the cyclic redundancy check (CRC), which yields a 32-bit checksum value. Rather than implement a CRC function from scratch, this game uses the crc32 function from the open-source zlib library. This was a matter of convenience, because zlib was already a dependency due to use of PNG image files. Furthermore, because zlib is designed to handle large amounts of data at once, it stands to reason that the CRC implementation is both vetted and performant.

In the spirit of further code reuse, the code for ComputeGlobalCRC, shown in Listing 6.15, uses the OutputMemoryBitStream class. Each game object in the world writes its relevant data into the provided bit stream via the WriteForCRC function. These objects are written in ascending order by network ID. Once every object has written its relevant data, the CRC is computed on the stream buffer as a whole.

Listing 6.15 ComputeGlobalCRC Function

uint32_t NetworkManager::ComputeGlobalCRC()
{
    OutputMemoryBitStream crcStream;

    uint32_t crc = crc32(0, Z_NULL, 0);
    for (auto& iter: mNetworkIdToGameObjectMap)
    {
        iter.second->WriteForCRC(crcStream);
    }
    crc = crc32(crc, reinterpret_cast<const Bytef*>(crcStream.GetBufferPtr()),
                crcStream.GetByteLength());
    return crc;
}

There are a couple of additional items to consider regarding ComputeGlobalCRC. First, not every value for every game object is written into the stream. In the case of the RoboCat class, the values written are the controlling player ID, network ID, location, health, state, and target network ID. Some of the other member variables, such as the variable that tracks the cooldown between yarn throws, are not synchronized. This selectivity reduces the amount of time spent computing the CRC.

Furthermore, because the CRC can be computed incrementally, it is not actually necessary to write all the data to a stream prior to computing the CRC. In fact, copying the data may be less efficient than computing the CRC of each value on the fly. It would even be possible to write an interface similar to OutputMemoryBitStream—essentially a class that only computes the CRC of the values fed into it, but does not save them into a memory buffer. However, to keep the code simple, the existing OutputMemoryBitStream class was reused.
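To illustrate the incremental property, here is a small self-contained CRC-32 using the same reflected polynomial as zlib (0xEDB88320). This is an illustrative stand-in for zlib's crc32, not the book's code; the key point is that hashing the data in chunks yields exactly the same checksum as hashing the whole buffer at once:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// bitwise CRC-32 with the zlib/PNG polynomial; pass the previous result
// as inCrc (or 0 to start) to continue a partial computation
uint32_t Crc32(uint32_t inCrc, const unsigned char* inData, size_t inLen)
{
    uint32_t crc = inCrc ^ 0xFFFFFFFFu; // undo/apply the final inversion
    for (size_t i = 0; i < inLen; ++i)
    {
        crc ^= inData[i];
        for (int bit = 0; bit < 8; ++bit)
        {
            // conditionally xor the polynomial when the low bit is set
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}
```

Because the final inversion cancels when the previous result is fed back in, chaining Crc32 over chunks is equivalent to one pass over the concatenated data, which is what makes the on-the-fly approach described above possible.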

Back to the task at hand, recall that the TryAdvanceTurn function in Listing 6.12 makes a call to a CheckSync function when it advances the turn. This function loops through all of the random numbers and CRC values in every peer’s turn data, and ensures that every peer computed the same random number and same CRC value when the turn packet was sent out.

In the event that CheckSync detects a desynchronization, Robo Cat RTS simply ends the game immediately. A more robust system would be to utilize a form of voting. Suppose there are four players in the game. In the event that players 1 to 3 computed checksum value A and player 4 computed checksum value B, this means that three of the players are still in sync. Thus it may be possible for the game to continue if player 4 is dropped from the game.


When developing a peer-to-peer game with independent simulation, desynchronization is the source of much angst. Desynchronization bugs are often the most difficult to troubleshoot. To help debug them, it is important to implement a logging system that can be enabled to show the commands executed by each peer in excruciating detail.

While developing the sample code for Robo Cat RTS, a desynchronization would occur if a peer went into the delay state while a cat was moving. It turned out that when the delayed player resumed the game, it would skip a sub-turn. This was determined thanks to a logging system that recorded when a peer was executing a sub-turn, as well as the location of each cat at the end of each sub-turn. This made it possible to see that one of the peers was skipping a sub-turn. Without the logging, it would have been much more time consuming to locate and fix this issue.

In the checksum-voting scenario described earlier, a much more complex approach would be to actually replicate the entire game state back to player 4 in an effort to resynchronize them with the game. Depending on the amount of data in a game, this may be impractical. But it is something to keep in mind if it is important that players not be dropped from the game when they desync.


Selecting a network topology is one of the most important decisions made when creating a networked game. In the client-server topology, one game instance is denoted as the server, and it is generally the authority of the entire game. All other game instances are clients, and only communicate with the server. This usually means that object replication data is sent from the server to the client. In a peer-to-peer topology, each game instance is more or less on equal footing. One approach in a peer-to-peer game is to have each peer simulate the game independently.

The deep dive into the code for Robo Cat Action covered several different topics. To help modularize the code, it was split into three separate targets: a shared library, a server, and a client. The process of the server welcoming a new client involves the transmission of a hello packet to the server and a welcome packet back to the client. In the input system, the client sends input packets that contain “moves” executed by the client, such as moving a cat around and throwing balls of yarn. The server maintains a client proxy for each client, both to track what replication data needs to be sent to each client and to provide an object in which pending moves can be stored.

The section on Robo Cat RTS discussed many of the major challenges in designing a peer-to-peer game with independent simulation. The use of a master peer allows a known IP address to be associated with a particular game. Each peer maintains a list of the addresses of all the other peers in the game. The welcoming of new peers is a bit more complex than in a client-server game, because the new peer must inform all the other peers of its existence. The peers maintain lockstep by transmitting turn packets at the end of each 100-ms turn. The commands in these turn packets are scheduled for execution two turns later. Each peer continues on to the next turn only after all of the turn data for that turn has been received. Finally, synchronizing random number generation and using checksums of the game state are necessary to keep each peer’s game instance in sync.

Review Questions

1. In the client-server model, how do the responsibilities of a client differ from the responsibilities of the server?

2. What is the worst possible latency in a client-server game, and how does it compare to the worst possible latency in a peer-to-peer game?

3. How many connections does a peer-to-peer game require in comparison to a client-server game?

4. What is one approach to simulating the game state in a peer-to-peer game?

5. The current implementation of Robo Cat Action does not average the input state over several frames when creating a move. Implement this functionality.

6. In what manner could the start procedure be improved in Robo Cat RTS? Implement this improvement.

Additional Reading

Bettner, Paul and Mark Terrano. 1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond. Presented at the Game Developers Conference, San Francisco, CA, 2001.