
Node.js in Practice (2015)

Part 1. Node fundamentals

Chapter 7. Networking: Node’s true “Hello, World”

This chapter covers

· Networking concepts and how they relate to Node

· TCP, UDP, and HTTP clients and servers

· DNS

· Network encryption

The Node.js platform itself is billed as a solution for writing fast and scalable network applications. To write network-oriented software, you need to understand how networking technologies and protocols interrelate. In the next section, we explain how networks have been designed around technology stacks with clear boundaries, how Node implements these protocols, and what their APIs look like.

In this chapter you’ll learn about how Node’s networking modules work. This includes the dgram, dns, http, and net modules. If you’re unsure about network terminology like socket, packet, and protocol, then don’t worry: we also introduce key networking concepts to give you a solid foundation in network programming.

7.1. Networking in Node

This section is an introduction to networking. You’ll learn about network layers, packets, sockets—all of the stuff that networks are made of. These ideas are critical to understanding Node’s networking APIs.

7.1.1. Networking terminology

Networking jargon can quickly become overwhelming. To get everyone on the same page, we’ve included table 7.1, which summarizes the main concepts that will form the basis of this chapter.

Table 7.1. Networking concepts

Layer: A slice of related networking protocols that represents a logical group. The application layer, where we work, is the highest level; physical is the lowest.

HTTP: Hypertext Transfer Protocol—An application-layer client-server protocol built on TCP.

TCP: Transmission Control Protocol—Allows communication in both directions between client and server, and is the foundation on which application-layer protocols like HTTP are built.

UDP: User Datagram Protocol—A lightweight protocol, typically chosen where speed is desired over reliability.

Socket: The combination of an IP address and a port number.

Packet: TCP packets are also known as segments—the combination of a chunk of data along with a header.

Datagram: The UDP equivalent of a packet.

MTU: Maximum Transmission Unit—The largest size of a protocol data unit. Each layer can have an MTU: IPv4 is at least 68 bytes, and Ethernet v2 is 1,500 bytes.

To understand Node’s networking APIs, it’s crucial to learn about layers, packets, sockets, and all the other things that networks are made of. If you don’t understand the difference between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), it will be difficult to know when to use each. In this section we introduce the terms you need to know and then explore the concepts a bit more, so you leave the section with a solid foundation.

If you’re responsible for implementing high-level protocols that run on top of HTTP or even low-latency game code that uses UDP, then you should understand each of these concepts. We break each of these concepts down into more detail over the next few sections.

Layers

The stack of protocols and standards that make up the internet and internet technology in general can be modeled as layers. The lowest layers represent physical media—Ethernet, Bluetooth, fiber optics—the world of pins, voltages, and network adapters. As software developers, we work at a higher level than the hardware. When talking to networks with Node, we’re concerned with the application and transport layers of the Internet Protocol (IP) suite.

Layers are best represented visually. Figure 7.1 relates logical network layers to packets. The lower-level physical and data-link layer protocols wrap higher-level protocols.

Figure 7.1. Protocols are grouped into seven logical layers. Packets are wrapped by protocols at consecutive layers.

Packets are wrapped by protocols at consecutive layers. A TCP packet, which could represent part of a series of packets from an HTTP request, is contained in the data section of an IP packet, which in turn is wrapped by an Ethernet packet. Going back to figure 7.1, TCP packets from HTTP requests cut through the transport and application layers: TCP is the transport layer, used to create the higher-level HTTP protocol. The other layers are also involved, but we don’t always know which specific protocols are used at each layer: HTTP is always transmitted over TCP/IP, but beyond that, Wi-Fi or Ethernet can be used—your programs won’t know the difference.

Figure 7.2 shows how network layers are wrapped by each protocol. Notice that data is never seen to move more than one step between layers—we don’t talk about transport layer protocols interacting with the network layer.

Figure 7.2. Network layer wrapping

When writing Node programs, you should appreciate that HTTP is implemented using TCP because Node’s http module is built on the underlying TCP implementation found in the net module. But you don’t need to understand how Ethernet, 10BASE-T, or Bluetooth works.

TCP/IP

You’ve probably heard of TCP/IP—this is what we call the Internet Protocol suite because the Transmission Control Protocol (TCP) and the Internet Protocol (IP) are the most important and earliest protocols defined by this standard.

In Internet Protocol, a host is identified by an IP address. In IPv4, addresses are 32-bit, which limits the available address space. IP has been at the center of controversy over the last decade because addresses are running out. To fix this, a new version of the protocol known as IPv6 was developed.

You can make TCP connections with Node by using the net module. This allows you to implement application-layer protocols that aren’t provided by the core modules: IRC, POP, and even FTP could be implemented this way. If you find yourself needing to talk to nonstandard TCP protocols, perhaps something used internally in your company, then net.Socket and net.createConnection will make light work of it.

Node supports both IPv4 and IPv6 in several ways: the dns module can query IPv4 and IPv6 records, and the net module can transmit and receive data to hosts on IPv4 and IPv6 networks.

The interesting thing about IP is that it doesn’t guarantee data integrity or delivery. For reliable communication, we need a transport layer protocol like TCP. There are also times when guaranteed delivery isn’t required, although of course it’s preferred—in these situations a lighter protocol is needed, and that’s where UDP comes in. The next section examines TCP and UDP in more detail.

UDP and how it compares to TCP

Datagrams are the basic unit of communication in UDP. These messages are self-contained, holding a source, destination, and some user data. UDP doesn’t guarantee delivery or message order, or offer protection against duplicated data. Most protocols you’ll use with Node programs will be built on TCP, but there are times when UDP is useful. If delivery isn’t critical, but performance is desired, then UDP may be a better choice. One example is a streaming video service, where occasional glitches are an acceptable trade-off to gain more throughput.

TCP and UDP both use the same network layer—IP. Both provide services to application layer protocols. But they’re very different. TCP is a connection-oriented and reliable byte stream service, whereas UDP is based around datagrams, and doesn’t guarantee the delivery of data.

Contrast this to TCP, which is a full-duplex[1] connection-oriented protocol. In TCP, there are only ever two endpoints for a given connection. The basic unit of information passed between endpoints is known as a segment—the combination of a chunk of data along with a header. When you hear the term packet, a TCP segment is generally being referred to.

1 Full-duplex: messages can be sent and received in the same connection.

Although UDP packets include checksums that help detect corruption, which can occur as a datagram travels across the internet, there’s no automatic retransmission of corrupt packets—it’s up to your application to handle this if required. Packets with invalid data will be effectively silently discarded.

Every packet, whether it’s TCP or UDP, has an origin and destination address. But the source and destination programs are also important. When your Node program connects to a DNS server or accepts incoming HTTP connections, there has to be a way to map between the packets traveling along the network and the programs that generated them. To fully describe a connection, you need an extra piece of information. This is known as a port number—the combination of a port number and an address is known as a socket. Read on to learn more about ports and how they relate to sockets.

Sockets

The basic unit of a network, from a programmer’s perspective, is the socket. A socket is the combination of an IP address and a port number—and there are both TCP and UDP sockets. As you saw in the previous section, a TCP connection is full-duplex—opening a connection to a given host allows communication to flow to and from that host. Although the term socket is correct, historically “socket” meant the Berkeley Sockets API.

The Berkeley Sockets API

Berkeley Sockets, released in 1983, was an API for working with internet sockets. This is the original API for the TCP/IP suite. Although the origins lie in Unix, Microsoft Windows includes a networking stack that closely follows Berkeley Sockets.

There are well-known port numbers for standard TCP/IP services. They include DNS, HTTP, SSH, and more. These port numbers are usually odd numbers due to historical reasons. TCP and UDP ports are distinct namespaces, so the same number can be used for both without conflict. If an application layer protocol requires both TCP and UDP connections, then the convention is to use the same port number for both. An example of a protocol that uses both UDP and TCP is DNS.

In Node, you can create TCP sockets with the net module, and UDP is supported by the dgram module. Other networking protocols are also supported—DNS is a good example.

The following sections look at the application layer protocols included in Node’s core modules.

7.1.2. Node’s networking modules

Node has a suite of networking modules that allows you to build web and other server applications. Over the next few sections we’ll cover DNS, TCP, HTTP, and encryption.

DNS

The Domain Name System (DNS) is the naming system for addressing resources connected to the internet (or even a private network). Node has a core module called dns for looking up and resolving addresses. Like other core modules, dns has asynchronous APIs. In this case, the implementation is also asynchronous, apart from certain methods that are backed by a thread pool. This means DNS queries in Node are fast, but also have a friendly API that is easy to learn.

You don’t often have to use this module, but we’ve included techniques because it’s a powerful API that can come in handy for network programming. Most application layer protocols, HTTP included, accept hostnames rather than IP addresses.

Node also provides modules for networking protocols that we’re more familiar with—for example, HTTP.

HTTP

HTTP is important to most Node developers. Whether you’re building web applications or calling web services, you’re probably interacting with HTTP in some way. Node’s http core module is built on the net, stream, buffer, and events modules. It’s low-level, but can be used to create simple HTTP servers and clients without too much effort.

Due to the importance of the web to Node development, we’ve included several techniques that explore Node’s http module. Also, when we’re working with HTTP we often need to use encryption—Node also supports encryption through the crypto and tls modules.

Encryption

You should know the term SSL—Secure Sockets Layer—because it’s how secure web pages are served to web browsers. Not just HTTP traffic gets encrypted, though—other services, like email, encrypt messages as well. Encrypted TCP connections use TLS: Transport Layer Security. Node’s tls module is implemented using OpenSSL.

This type of encryption is called public key cryptography. Both clients and servers must have private keys. The server can then make its public key available so clients can encrypt messages. To decrypt these messages, access to the server’s private key is required.

Node supports TLS by allowing TCP servers to be created that support several ciphers. The TCP server itself inherits from net.Server—once you’ve got your head around TCP clients and servers in Node, encrypted connections are just an extension of these principles.

A solid understanding of TLS is important if you want to deploy web applications with Node. People are increasingly concerned with security and privacy, and unfortunately SSL/TLS is designed in such a way that programmer error can cause security weaknesses.

There’s one final aspect of networking in Node that we’d like to introduce before we move on to the techniques for this chapter: how Node is able to give you asynchronous APIs to networking technologies that are sometimes blocking at the system level.

7.1.3. Non-blocking networking and thread pools

This section delves into Node’s lower-level implementation to explore how networking works under the hood. If you’re confused about what exactly “asynchronous” means in the context of networking, then read on for some background information on what makes Node’s networking APIs tick.

Remember that in Node, APIs are said to be asynchronous when they accept a callback and return immediately. At the operating system level, I/O operations can also be asynchronous, or they can be synchronous and wrapped with threads to appear asynchronous.

Node employs several techniques to provide asynchronous network APIs. The main ones are non-blocking system calls and thread pools to wrap around blocking system calls.

Behind the scenes, most of Node’s networking code is written in C and C++—the JavaScript code in Node’s source gives you an asynchronous binding to features provided by libuv and c-ares.

Figure 7.3 shows Apple’s Instruments tool recording the activity of a Node program that makes 50 HTTP requests. HTTP requests are non-blocking—each takes place using callbacks that are run on the main thread. The BSD sockets library, which is used by libuv, can make non-blocking TCP and UDP connections.

Figure 7.3. Node’s threads when making HTTP requests

For HTTP and other TCP connections, Node is able to access the network using a system-level non-blocking API.

When writing networking or file system code, the Node code looks asynchronous: you pass a function to a method that will execute the function when the I/O operation has reached the desired state. But for file operations, the underlying implementation is not asynchronous: thread pools are used instead.

When dealing with I/O operations, understanding the difference between non-blocking I/O, thread pools, and asynchronous APIs is important if you want to truly understand how Node works.

For those interested in reading more about libuv and networking, the freely available book, An Introduction to libuv (http://nikhilm.github.io/uvbook/networking.html#tcp) has a section on networking that covers TCP, DNS, and UDP.

Now on to the first set of networking techniques: TCP clients and servers.

7.2. TCP clients and servers

Node has a simple API for creating TCP connections and servers. Most of the lowest-level classes and methods can be found in the net module. In the next technique, you’ll learn how to create a TCP server and track the clients that connect to it. The cool thing about this is that higher-level protocols like HTTP are built on top of the TCP API, so once you’ve got the hang of TCP clients and servers, you can really start to exploit some of the more subtle features of the HTTP API as well.

Technique 45 Creating a TCP server and tracking clients

The net module forms the foundation of many of Node’s networking features. This technique demonstrates how to create a TCP server.

Problem

You want to start your own TCP server, bind to a port, and send data over the network.

Solution

Use net.createServer to create a server, and then call server.listen to bind it to a port. To connect to the server, either use the command-line tool telnet or create an in-process client connection with its client counterpart, net.connect.

Discussion

The net.createServer method returns an object that can be used to listen on a given TCP port for incoming connections. When a client makes a new connection, the callback passed to net.createServer will run. This callback receives a connection object which extends EventEmitter.

The server object itself is an instance of net.Server, which is just a wrapper around the net.Socket class. It’s interesting to note that net.Socket is implemented using a duplex stream—for more on streams, see chapter 5.

Before going into more theory, let’s look at an example that you can run and connect to with telnet. The following listing shows a simple TCP server that accepts connections and echoes data back to the client.

Listing 7.1. A simple TCP server
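
A minimal sketch of such a server, assuming port 8000 and a welcome message that includes the client ID:

var net = require('net');

var totalClients = 0; // "global" counter shared by every connection

var server = net.createServer(function (client) {
  totalClients++;
  var clientId = totalClients; // captured in this connection's closure

  console.log('Client connected:', clientId);

  client.on('end', function () {
    console.log('Client disconnected:', clientId);
  });

  client.write('Welcome client: ' + clientId + '\r\n');
  client.pipe(client); // echo any received data back to the sender
});

server.listen(8000, function () {
  console.log('Server started on port 8000');
});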

To try out this example, run node server.js to start a server, and then run telnet localhost 8000 to connect to it with telnet. You can connect several times to see the ID incremented. If you disconnect, a message should be printed that contains the correct client ID.

Most programs that use TCP clients and servers load the net module. Once it has been loaded, TCP servers can be created using net.createServer, which is just a shortcut for creating a new net.Server and adding a connection event listener. After a server has been instantiated, it can be set to listen for connections on a given port using server.listen.

To echo back data sent by the client, pipe is used. Sockets are streams, so you can use the standard stream API methods with them as you saw in chapter 5.

In this example, we track each client that has connected using a numerical ID by incrementing a “global” value that tracks the number of clients. Each client’s ID is captured in the connection callback’s scope by creating a local variable called clientId.

This value is displayed whenever a client connects or disconnects. The client argument passed to the server’s callback is actually a socket—you can write to it with client.write and data will be sent over the network.

The important thing to note is that any event listener added to the socket in the server’s callback will share the same scope—it will create closures around any variables inside this callback. That means the client ID is unique to each connection, and you can also store other values that clients might need. This forms a common pattern employed by client-server applications in Node.

The next technique builds on this example by adding client connections in the same process.

Technique 46 Testing TCP servers with clients

Node makes creating TCP servers and clients in the same process a breeze—it’s an approach particularly useful for testing your network programs. In this technique you’ll learn how to make TCP clients, and use them to test a server.

Problem

You want to test a TCP server.

Solution

Use net.connect to connect to the server’s port.

Discussion

Due to how TCP and UDP ports work, it’s entirely possible to create multiple servers and clients in the same process. For example, a Node HTTP server could also run a simple TCP server on another port that allows telnet connections for remote administration.

In technique 45, we demonstrated a TCP server that can track client connections by issuing each client a unique ID. Let’s write a test to ensure this worked correctly.

Listing 7.2 shows how to create client connections to an in-process server, and then run assertions on the data sent over the network by the server. Of course, technically this isn’t running over a real network because it all happens in the same process, but it could easily be adapted to work that way; just copy the program to a server and specify its IP address or hostname in the client.

Listing 7.2. Creating TCP clients to test servers
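
A sketch of such a test, with the echo server from the previous technique included inline so it is self-contained (the runTest name, port, and welcome-message format are assumptions):

var assert = require('assert');
var net = require('net');

var totalClients = 0;
var expectedAssertions = 2;

var server = net.createServer(function (client) {
  totalClients++;
  var clientId = totalClients;
  client.write('Welcome client: ' + clientId + '\r\n');
  client.pipe(client);
});

server.listen(8000, function () {
  runTest(1, function () {
    runTest(2, function () {
      console.log('Tests finished');
      assert.equal(expectedAssertions, 0, 'Not all assertions were run');
      server.close();
    });
  });
});

function runTest(expectedId, done) {
  var client = net.connect(8000); // localhost is the default host

  client.on('data', function (data) {
    var expected = 'Welcome client: ' + expectedId + '\r\n';
    assert.equal(data.toString(), expected);
    expectedAssertions--;
    client.end(); // disconnect once the assertion has run
  });

  client.on('end', done); // fires when the connection has closed
}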

This is a long example, but it centers around a relatively simple method: net.connect. This method accepts some optional arguments to describe the remote host. Here we’ve just specified a port number, but the second argument can be a hostname or IP address—localhost is the default. It also accepts a callback, which can be used to write data to the other end once the client has connected. Remember that TCP connections are full-duplex, so both ends can receive and send data.

The runTest function in this example will run once the server has started listening. It accepts an expected client ID, and a callback called done. The callback will be triggered once the client has connected, received some data by subscribing to the data event, and then disconnected.

Whenever clients are disconnected, the end event will be emitted. We bind the done callback to this event. When the test has finished in the data callback, we call client.end to disconnect the socket manually, but end events will be triggered when servers close connections, as well.

The data event is where the main test is performed. The expected message is passed to assert.equal with the data passed to the event listener. The data is a buffer, so toString is called for the assertion to work. Once the test has finished, and the end event has been triggered, the callback passed to runTest will be executed.

Error handling

If you need to collect errors generated by TCP connections, just subscribe to the error event on the EventEmitter objects returned by net.connect. If you don’t, an exception will be raised; this is standard behavior in Node.

Unfortunately, this isn’t easy to work with when dealing with sets of distinct network connections. In such cases, a better technique is to use the domain module. Creating a new domain with domain.create() will cause error events to be sent to the domain; you can then handle them in a centralized error handler by subscribing to error events on the domain.

For more about domains, refer to technique 21.

We’ve used two calls to runTest here by calling one inside the callback. Once both have run, the number of expected assertions is checked, and the server is shut down.

This example highlights two important things: clients and servers can be run together in-process, and Node TCP clients and servers are easy to unit test. If the server in this example were a remote service that we had no control over, then we could create a “mock” server for the express purpose of testing our client code. This forms the basis of how most developers write tests for web applications written with Node.

In the next technique we’ll dig deeper into TCP networking by looking at Nagle’s algorithm and how it can affect the performance characteristics of network traffic.

Technique 47 Improve low-latency applications

Although Node’s net module is relatively high-level, it does provide access to some low-level functionality. One example of this is control over the TCP_NODELAY flag, which determines whether Nagle’s algorithm is used. This technique explains what Nagle’s algorithm is, when you should use it, and how to turn it off for specific sockets.

Problem

You want to improve connection latency in a real-time application.

Solution

Use socket.setNoDelay() to enable TCP_NODELAY.

Discussion

Sometimes it’s more efficient to move batches of things together, rather than separately. Every day millions of products are shipped around the globe, but they’re not carried one at a time—instead they’re grouped together in shipping containers, based on their final destination. TCP works exactly the same way, and this feature is made possible by Nagle’s algorithm.

Nagle’s algorithm says that when a connection has data that hasn’t yet been acknowledged, small segments should be retained. These small segments will be batched into larger segments that can be transmitted when sufficient data has been acknowledged by the recipient.

In networks where many small packets are transmitted, it can be desirable to reduce congestion by combining small outgoing messages, and sending them together. But sometimes latency is desired over all else, so transmitting small packets is important.

This is particularly true for interactive applications, like ssh, or the X Window System. In these applications, small messages should be delivered without delay to create a sense of real-time feedback. Figure 7.4 illustrates the concept.

Figure 7.4. When Nagle’s algorithm is used, smaller packets are collected into a larger payload.

Certain classes of Node programs benefit from turning off Nagle’s algorithm. For example, you may have created a REPL that transmits a single character at a time as the user types messages, or a game that transmits location data of players. The next listing shows a program that disables Nagle’s algorithm.

Listing 7.3. Turning off Nagle’s algorithm
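
A sketch of such a server; the Telnet option bytes are the standard IAC WILL ECHO and IAC WILL SUPPRESS-GO-AHEAD codes used to ask the client for character-at-a-time input:

var net = require('net');

var server = net.createServer(function (socket) {
  socket.setNoDelay(); // disable Nagle's algorithm for this socket

  // Telnet option negotiation (RFC 854/857/858): ask the client to let the
  // server handle echoing and to leave line mode, so each keypress is sent
  // in its own packet
  socket.write(new Buffer([255, 251, 1, 255, 251, 3]));

  socket.on('data', function (data) {
    process.stdout.write(data.toString()); // print each character as it arrives
  });
});

server.listen(8000, function () {
  console.log('Listening on port 8000');
  server.unref(); // exit once no clients remain connected
});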

To use this example, run the program in a terminal with node nagle.js, and then connect to it with telnet localhost 8000. The server turns off Nagle’s algorithm, and then forces the client to use character mode. Character mode is part of the Telnet Protocol (RFC 854), and will cause the Telnet client to send a packet whenever a key is pressed.

Next, unref is used to cause the program to exit when there are no more client connections. Finally, the data event is used to capture characters sent by the client and print them to the server’s terminal.

This technique could form the basis for creating low-latency applications where data integrity is important, which therefore excludes UDP. If you really want to get more control over the transmission of data, then read on for some techniques that use UDP.

7.3. UDP clients and servers

Compared to TCP, UDP is a much simpler protocol. That can mean more work for you: rather than being able to rely on data being sent and received, you have to cater to UDP’s more volatile nature. UDP is suitable for query-response protocols, which is why it’s used for the Domain Name System (DNS). It’s also stateless—if you want to transfer data and you value lower latency over data integrity, then UDP is a good choice. That might sound unusual, but there are applications that fit these characteristics: media streaming protocols and online games generally use UDP.

If you wanted to build a video streaming service, you could transfer video over TCP, but each packet would have a lot of overhead for ensuring delivery. With UDP, it would be possible for data to be lost with no simple means of discovery, but with video you don’t care about occasional glitches—you just want data as fast as possible. In fact, some video and image formats can survive a small amount of data loss: the JPEG format is resilient to corrupt bytes to a certain extent.

The next technique combines Node’s file streams with UDP to create a simple server that can be used to transfer files. Although this can potentially result in data loss, it can be useful when you care about speed over all else.

Technique 48 Transferring a file with UDP

This technique is really about sending data from a stream to a UDP server rather than creating a generalized file transfer mechanism. You can use it to learn the basics of Node’s datagram API.

Problem

You want to transfer data from a client to a server using datagrams.

Solution

Use the dgram module to create datagram sockets, and then send data with socket.send.

Discussion

Sending datagrams is similar to using TCP sockets, but the API is slightly different, and datagrams have their own rules that reflect the actual structure of UDP packets. To set up a server, use the following snippet:
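
A minimal sketch of that setup, assuming an IPv4 socket and an arbitrary unprivileged port:

var dgram = require('dgram');

var socket = dgram.createSocket('udp4'); // this socket will act as the server
socket.bind(41230); // any port above the privileged range will do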

This example creates a socket that will act as the server, and then binds it to a port. The port can be anything you want, but in both TCP and UDP the first 1,023 ports are privileged.

The client API is different from TCP sockets because UDP is a stateless protocol. You must write data a packet at a time, and packets (datagrams) must be relatively small—under 65,507 bytes. The maximum size of a datagram depends on the Maximum Transmission Unit (MTU) of the network. 64 KB is the upper limit, but isn’t usually used because large datagrams may be silently dropped by the network.

Creating a client socket is the same as servers—use dgram.createSocket. Sending a datagram requires a buffer for the payload, an offset to indicate where in the buffer the message starts, the message length, the server port, the remote IP, and an optional callback that will be triggered when the message has been sent:

var message = 'Sample message';

socket.send(new Buffer(message), 0, message.length, port, remoteIP);

Listing 7.4 combines a client and a server into a single program. To run it, you must issue two commands: node udp-client-server.js server to run the server, and then node udp-client-server.js client remoteIP to start a client. The remoteIP option can be omitted if you run both locally; we designed this example to be a single file so you can easily copy it to another computer to test sending things over the internet or a local network.

Listing 7.4. A UDP client and server
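
A condensed sketch of how the client and server halves might share one file (the port number and the 16-byte read size are illustrative choices):

var dgram = require('dgram');
var fs = require('fs');

var port = 41230;
var defaultSize = 16;

function Client(remoteIP) {
  var inStream = fs.createReadStream(__filename); // send this file's own source
  var socket = dgram.createSocket('udp4');

  inStream.on('readable', sendData);

  function sendData() {
    var message = inStream.read(defaultSize); // read a small chunk at a time

    if (!message) {
      return socket.unref(); // nothing left to send: let the program exit
    }

    // Send the chunk, then queue up the next one once it has gone out
    socket.send(message, 0, message.length, port, remoteIP, sendData);
  }
}

function Server() {
  var socket = dgram.createSocket('udp4');

  socket.on('message', function (msg) {
    process.stdout.write(msg.toString());
  });

  socket.on('listening', function () {
    console.log('Server ready:', socket.address());
  });

  socket.bind(port);
}

if (process.argv[2] === 'client') {
  new Client(process.argv[3] || '127.0.0.1');
} else {
  new Server();
}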

When you run this example, it starts by checking the command-line options to see if the client or server is required. It also accepts an optional argument for clients so you can connect to remote servers.

If the client was specified, then a new client will be created by making a new datagram socket. This involves using a read stream from the fs module so we have some data to send to the server—we’ve used __filename to make it read the current file, but you could make it send any file.

Before sending any data, we need to make sure the file has been opened and is ready for reading, so the readable event is subscribed to. The callback for this event executes the sendData function. This will be called repeatedly for each chunk of the file—files are read in small chunks at a time using inStream.read, because UDP packets can be silently dropped if they’re too large. The socket.send method is used to push the data to the server. The message object returned when reading the file is an instance of Buffer, and it can be passed straight to socket.send.

When all of the data has been read, the last chunk is set to null. The socket.unref method is called to cause the program to exit when the socket is no longer required—in this case, once it has sent the last message.

Datagram packet layout and datagram size

UDP packets are comparatively simple. They’re composed of a source port, the destination port, datagram length, checksum, and the payload data. The length is the total size of the packet—the header size added to the payload’s size. When deciding on your application’s buffer size for UDP packets, you should remember that the length passed to socket.send is only for the buffer (payload), and the overall packet size must be under the MTU on the network. The structure of a datagram looks like the following.

The UDP header is 8 bytes, followed by an optional payload of up to 65,507 bytes for IPv4 and 65,527 bytes for IPv6.

The server is simpler than the client. It sets up a socket in the same way, and then subscribes to two events. The first event is message, which is emitted when a datagram is received. The data is written to the terminal by using process.stdout.write. This looks better than using console.log because it won’t automatically add newlines.

The listening event is emitted when the server is ready to accept connections. A message is displayed to indicate this so you know it’s safe to try connecting a client.

Even though this is a simple example, it’s immediately obvious how UDP is different from TCP—you need to pay attention to the size of the messages you send, and realize that it’s possible for messages to get lost. Although datagrams have a checksum, lost or damaged packets aren’t reported to the application layer, which means data loss is possible. It’s generally best to use UDP for sending data where assured integrity is second place to low latency and throughput.

In the next technique you’ll see how to build on this example by sending messages back to the client, essentially setting up bidirectional communication channels with UDP.

Technique 49 UDP client-server applications

UDP is often used for query-response protocols, like DNS and DHCP. This technique demonstrates how to send messages back to the client.

Problem

You’ve created a UDP server that responds to requests, but you want to send messages back to the client.

Solution

Once you’ve created a server and it has received a message, create a datagram connection back to the client based on the rinfo argument that’s passed to message events. Optionally create a unique reference by combining the client port and IP address to send subsequent messages.

Discussion

Chat servers are the classic network programming example for new Node programmers, but this one has a twist—it uses UDP instead of TCP or HTTP.

TCP connections are different from UDP, and this is apparent in the design of Node’s networking API. TCP connections are represented as a stream of bidirectional events, so sending a message back to the sender is straightforward—once a client has connected you can write messages to it at any time using client.write. UDP, on the other hand, is connectionless—messages are received without an active connection to the client.

There are some protocol-level similarities that enable you to respond to messages from clients, however. Both TCP and UDP connections use source and destination ports. Given a suitable network setup, it’s possible to open a connection back to the client based on this information. In Node the rinfo object that’s included with every message event contains the relevant details. Figure 7.5 shows how messages flow between two clients using this scheme.

Figure 7.5. Even though UDP isn’t full-duplex, it’s possible to create connections in two directions given a port number at both sides.

Listing 7.5 presents a client-server program that allows clients to connect to a central server over UDP and message each other. The server keeps details of each client in an array, so it can refer to each one uniquely. By storing the client’s address and port, you can even run multiple clients on the same machine—it’s safe to run this program several times on the same computer.

Listing 7.5. Sending messages back to clients
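
A condensed sketch of how such a chat program might be structured; the port, the '<JOIN>' control-message convention, and the prompt text are illustrative assumptions:

var dgram = require('dgram');
var readline = require('readline');

var port = 41234;

function Client(remoteIP) {
  var socket = dgram.createSocket('udp4');
  var rl = readline.createInterface(process.stdin, process.stdout);

  sendData('<JOIN>'); // let the server know we exist so it can store our address

  rl.setPrompt('Message> ');
  rl.prompt();

  rl.on('line', function (line) {
    sendData(line);
    rl.prompt();
  });

  socket.on('message', function (msg) {
    console.log('\n< ' + msg.toString());
    rl.prompt();
  });

  function sendData(message) {
    var buf = new Buffer(message);
    socket.send(buf, 0, buf.length, port, remoteIP);
  }
}

function Server() {
  var socket = dgram.createSocket('udp4');
  var clients = {};

  socket.on('message', function (msg, rinfo) {
    // The source address and port uniquely identify a client
    var clientId = rinfo.address + ':' + rinfo.port;
    var text = msg.toString();

    if (!clients[clientId]) {
      clients[clientId] = rinfo;
    }

    if (text.charAt(0) === '<') {
      console.log('Control message from', clientId + ':', text);
      return;
    }

    // Relay the message to every client except the sender
    Object.keys(clients).forEach(function (id) {
      if (id === clientId) return;
      var client = clients[id];
      var buf = new Buffer(text);
      socket.send(buf, 0, buf.length, client.port, client.address);
    });
  });

  socket.on('listening', function () {
    console.log('Chat server listening on port', port);
  });

  socket.bind(port);
}

if (process.argv[2] === 'client') {
  new Client(process.argv[3] || '127.0.0.1');
} else {
  new Server();
}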

This example builds on technique 48—you can run it in a similar way. Type node udp-chat.js server to start a server, and then node udp-chat.js client to connect a client. You should run more than one client for it to work; otherwise messages won’t get routed anywhere.

The readline module has been used to capture user input in a friendly manner. Like most of the other core modules you’ve seen, this one is event-based. It’ll emit the line event whenever a line of text is entered.

Before messages can be sent by the user, an initial join message is sent. This is just to let the server know it has connected—the server code uses it to store a unique reference to the client.

The Client constructor wraps socket.send inside a function called sendData. This is so messages can be easily sent whenever a line of text is typed. Also, when a client itself receives a message, it’ll print it to the console and create a new prompt.

Messages received by the server are used to create a unique reference to the client by combining the port and remote address. We get all of this information from the rinfo object, and it’s safe to run multiple clients on the same machine because the port will be the client’s port rather than the port the server listens on (which doesn’t change). To understand how this is possible, recall that UDP headers include a source and destination port, much like TCP.

Finally, whenever a message is seen that isn’t a control message, each client is iterated over and sent the message. The client that has sent the message won’t receive a copy. Because we’ve stored references to each rinfo object in the clients array, messages can be sent back to clients.

Client-server networking is the basis of HTTP. Even though HTTP uses TCP connections, it’s slightly different from the type of protocols you’ve seen so far: it’s stateless. That means you need different patterns to model it. The next section has more details on how to make HTTP clients and servers.

7.4. HTTP clients and servers

Today most of us work with HTTP—whether we’re producing or consuming web services, or building web applications. The HTTP protocol is stateless and built on TCP, and Node’s HTTP module is similarly built on top of its TCP module.

You could, of course, use your own protocol built with TCP. After all, HTTP is built on top of TCP. But due to the prevalence of web browsers and tools for working with web-based services, HTTP is a natural fit for many problems that involve communicating between remote systems.

In the next section you’ll learn how to write a basic HTTP server using Node’s core modules.

Technique 50 HTTP servers

In this technique you’ll learn how to create HTTP servers with Node’s http module. Although this is more work than using a web framework built on top of Node, popular web frameworks generally use the same techniques internally, and the objects they expose are derived from Node’s standard classes. Understanding the underlying modules and classes is therefore useful for working extensively with HTTP.

Problem

You want to run HTTP servers and test them.

Solution

Use http.createServer and http.request.

Discussion

The http.createServer method is a shortcut for creating a new http.Server object that descends from net.Server. The HTTP server is extended to handle various elements of the HTTP protocol—parsing headers, dealing with response codes, and setting up various events on sockets. The major focus in Node’s HTTP handling code is parsing; a C++ wrapper around Joyent’s own C parser library is used. This library can extract header fields and values, Content-Length, request method, response status code, and more.

The following listing shows a small “Hello World” web server that uses the http module.

Listing 7.6. A simple HTTP server
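
A minimal sketch of such a server, plus an in-process request to exercise it (the response body and port are assumptions):

var assert = require('assert');
var http = require('http');

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.write('Hello, world.\r\n');
  res.end();
});

server.listen(8000, function () {
  console.log('Listening on port 8000');

  // Make a request to our own server to check the response
  var req = http.request({ port: 8000 }, function (res) {
    console.log('HTTP status:', res.statusCode);

    res.on('data', function (data) {
      assert.equal(data.toString(), 'Hello, world.\r\n');
      assert.equal(res.statusCode, 200);
      server.unref(); // stop keeping the process alive once this client is done
    });
  });

  req.end();
});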

The http module contains both Node’s client and server HTTP classes. The http.createServer method creates a new server object and returns it. The argument is a callback that receives req and res objects—request and response, respectively. You may be familiar with these objects if you’ve used higher-level Node web frameworks like Express and restify.

The interesting thing about the listener callback passed to http.createServer is that it behaves much like the listener passed to net.createServer. Indeed, the mechanism is the same—we’re creating TCP sockets, but layering HTTP on top. The main conceptual difference between the HTTP protocol and TCP socket communication is a question of state: HTTP is a stateless protocol. It’s perfectly acceptable and in fact typical to create and tear down TCP sockets per request. This partly explains why Node’s underlying HTTP implementation is low-level C++ and C: it needs to be fast and use as little memory as possible.

In listing 7.6, the listener runs for every request. In the TCP example from technique 45, the server kept a connection open as long as the client was connected. Because HTTP connections are just TCP sockets, we can use res and req much like the sockets in listing 7.1: res.write will write to the socket, and headers can be written back with res.writeHead, which is where the socket connection and HTTP APIs visibly diverge—the underlying socket will be closed as soon as the response has been written.

After the server has been set up, we can set it to listen on a port with server.listen.

Now that we can create servers, let’s look at creating HTTP requests. The http.request method will create new connections, and accepts an options argument object and a callback that will be run when a connection is made. This means we still need to attach a data listener to the response passed to the callback to slurp down any sent data.

The data callback ensures the response from the server has the expected format: the body content and status code are checked. The server is told to stop listening for connections when the last client has disconnected by calling server.unref, which means the script exits cleanly. This makes it easy to see if any errors were encountered.

One small feature of the HTTP module is the http.STATUS_CODES object. This allows human-readable messages to be generated by looking up the integer status code: http.STATUS_CODES[302] will evaluate to Moved Temporarily.

Now that you’ve seen how to create HTTP servers, in the next technique we’ll look at the role state plays in HTTP clients—despite HTTP being a stateless protocol—by implementing HTTP redirects.

Technique 51 Following redirects

Node’s http module provides a convenient API for handling HTTP requests. But it doesn’t follow redirects, and because redirects are so common on the web, it’s an important technique to master. You could use a third-party module that handles redirection, like the popular request module by Mikeal Rogers,[2] but you’ll learn much more about Node by looking at how it can be implemented with the core modules.

2 https://npmjs.org/package/request

In this technique we’ll look at how to use straightforward JavaScript to maintain state across several requests. This allows a redirect to be followed correctly without creating redirect loops or other issues.

Problem

You want to download pages and follow redirects if necessary.

Solution

Handling redirection is fairly straightforward once the basics of the protocol are understood. The HTTP standard defines status codes that denote when redirection has occurred, and it also states that clients should detect infinite redirect loops. To satisfy these requirements, we’ll use a simple prototype class to retain the state of each request, redirecting if needed and detecting redirect loops.

Discussion

In this example we’ll use Node’s core http module to make a GET request to a URL that we know will generate a redirection. To determine if a given response is a redirect, we need to check whether the returned status code begins with a 3. All of the status codes in the 3xx family of responses indicate that a redirect of some kind has occurred.

According to the specification, this is the full set of status codes that we need to deal with:

· 300 —Multiple choices

· 301 —Moved permanently

· 302 —Found

· 303 —See other

· 304 —Not modified

· 305 —Use proxy

· 307 —Temporary redirect

Exactly how each of these status codes is handled depends on the application. For example, it might be extremely important for a search engine to identify responses that return a 301, because it means the search engine’s list of URLs should be permanently updated. For this technique we simply need to follow redirects, which means a single statement is sufficient to check whether the request is being redirected: if (response.statusCode >= 300 && response.statusCode < 400).

Testing for redirection loops is more involved. A request can no longer exist in isolation—we need to track the state of several requests. The easiest way to model this is by using a class that includes an instance variable for counting how many redirects have occurred. When the counter reaches a limit, an error is raised. Figure 7.6 shows how HTTP redirects are handled.

Figure 7.6. Redirection is cyclical, and requests will be made until a 200 status is encountered.

Before writing any code, it’s important to consider what kind of API we need. Since we’ve already determined that a “class” should be used to manage state, users of our module will need to instantiate an instance of this class. Node’s http module is asynchronous, and our code should be as well. That means that to get a result back, we’ll have to pass a callback to a method.

The signature for this callback should use the same format as Node’s core modules, where an error variable is the first parameter. Designing the API in this way has the advantage of making error handling straightforward. Making an HTTP request can result in several errors, so it’s important to handle them correctly.

The following listing puts all of this together to successfully follow redirects.

Listing 7.7. Making an HTTP GET request that follows redirects
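
A sketch of how such a class might be put together (the starting URL and the limit of 10 redirects are illustrative assumptions):

var http = require('http');
var https = require('https');
var url = require('url');

function Request() {
  this.maxRedirects = 10;
  this.redirects = 0;
}

Request.prototype.get = function (href, callback) {
  var uri = url.parse(href);
  var options = { hostname: uri.hostname, port: uri.port, path: uri.path };
  var httpGet = uri.protocol === 'https:' ? https.get : http.get;

  console.log('GET:', href);

  function processResponse(response) {
    if (response.statusCode >= 300 && response.statusCode < 400) {
      if (this.redirects >= this.maxRedirects) {
        this.error = new Error('Too many redirects for: ' + href);
      } else {
        this.redirects++;
        response.resume(); // discard this body before following the redirect
        href = url.resolve(href, response.headers.location);
        return this.get(href, callback);
      }
    }

    response.url = href;
    response.redirects = this.redirects;

    function end() {
      callback(this.error, response);
    }

    response.on('data', function (data) {
      console.log('Got data, length:', data.length);
    });

    // Bind to the Request instance so the callback can see this.error
    response.on('end', end.bind(this));
  }

  httpGet(options, processResponse.bind(this)).on('error', function (err) {
    callback(err);
  });
};

var request = new Request();

request.get('http://google.com/', function (err, res) {
  if (err) {
    console.error(err);
  } else {
    console.log('Fetched:', res.url, 'with', res.redirects, 'redirects');
  }
});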

Running this code will display the last-fetched URL, and the number of times the request was redirected. Try it with a few URLs to see what happens: even nonexistent URLs that result in DNS errors should cause error information to be printed to stderr.

After loading the necessary modules, the Request constructor function is used to create an object that models the lifetime of a request. Using a class in this way keeps implementation details neatly encapsulated from the user. Meanwhile, the Request.prototype.get method does most of the work. It sets up a standard HTTP request, or HTTPS if necessary, and then calls itself recursively whenever a redirect is encountered. Note that the URL has to be parsed into an object that we use to create the options object that is compatible with Node’s http module.

The request protocol (HTTP or HTTPS) is checked to ensure we use the right method from Node’s http or https module. Some servers are configured to always redirect HTTP traffic to HTTPS. Without checking for the protocol, this method would repeatedly fetch the original HTTP URL until maxRedirects is hit—this is a trivial mistake that’s easily avoided.

Once the response has been received, the statusCode is checked. The number of redirects is incremented as long as maxRedirects hasn’t been reached. This process is repeated until there’s no longer a status in the 300 range, or too many redirects have been encountered.

When the final request has finished (or the first if there were no redirects), the user-supplied callback function is run. The standard Node API signature of error, result has been used here to stay consistent with Node’s core modules. An error is generated when maxRedirects is reached, or when creating the HTTP request by listening for an error event.

The user-supplied callback runs after the last request has finished, allowing the callback to access the requested resource. This is handled by running the callback after the end event for the last request has been triggered, and by binding the event handler to the current Request instance. Binding the event handler means it’ll have access to any useful instance variables that the user might need—including errors that are stored in this.error.

Lastly, we create an instance of Request to try out the class. You can use it with other URLs if you like.

This technique illustrates an important point: state is important, even though HTTP is technically a stateless protocol. Some misconfigured web applications and servers can create redirect loops, which would cause a client to fetch URLs forever until it’s forcibly stopped.

Though listing 7.7 showcases some of Node’s HTTP- and URL-handling features, it isn’t a complete solution. For a more advanced HTTP API, take a look at Request by Mikeal Rogers (https://github.com/mikeal/request), a widely used simplified Node HTTP API.

In the next technique we’ll dissect a simple HTTP proxy. This expands on the client and server techniques discussed here, and could be expanded to create numerous useful applications.

Technique 52 HTTP proxies

HTTP proxies are used more often than you might expect—ISPs use transparent proxies to make networks more efficient, corporate systems administrators use caching proxies to reduce bandwidth, and web application DevOps use them to improve the performance of their apps. This technique only scratches the surface of proxies—it catches HTTP requests and responses, and then mirrors them to their intended destinations.

Problem

You want to capture and retransmit HTTP requests.

Solution

Use Node’s built-in HTTP module to act as a simple HTTP proxy.

Discussion

A proxy server offers a level of redirection, which facilitates a variety of useful applications: caching, logging, and security-related software. This technique explores how to use the core http module to create HTTP proxies. Fundamentally all that’s required is an HTTP server that catches requests, and then an HTTP client to clone them.

The http.createServer and http.request methods can catch and retransmit requests. We’ll also need to interpret the original request so we can safely copy it—the url core module has an ideal URL-parsing method that can help do this.

The next listing shows how simple it is to create a working proxy in Node.

Listing 7.8. Using the http module to create a proxy
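
A sketch of such a proxy (the log messages are just for illustration):

var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
  console.log('Request:', req.method, req.url);

  // The parsed URL object is in the format http.request expects
  var options = url.parse(req.url);
  options.method = req.method;
  options.headers = req.headers;

  var proxyRequest = http.request(options, function (proxyResponse) {
    // Send the remote server's status and headers back to the browser
    res.writeHead(proxyResponse.statusCode, proxyResponse.headers);

    proxyResponse.on('data', function (chunk) {
      res.write(chunk); // pass response data straight back to the browser
    });

    proxyResponse.on('end', function () {
      res.end(); // close the browser's connection when the remote server is done
    });
  });

  // Mirror anything the browser sends (for example, POST bodies) to the remote server
  req.on('data', function (chunk) {
    proxyRequest.write(chunk);
  });

  req.on('end', function () {
    proxyRequest.end();
  });
}).listen(8080); // the port to configure as the HTTP proxy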

To use this example, your computer will need a bit of configuration. Find your system’s internet options, and then look for HTTP proxies. From there you should be able to enter localhost:8080 as the proxy. Alternatively, add the proxy in a browser’s settings if possible. Some browsers don’t support this; Google Chrome will open the system proxy dialog.

Figure 7.7 shows how to configure the proxy on a Mac. Make sure you click OK and then Apply in the main Network dialog to save the setting. And remember to disable the proxy once you’re done!

Figure 7.7. To use the Node proxy we’ve created, set localhost:8080 as the Web Proxy Server.

Once your system is set up to use the proxy, start the Node process up with node listings/network/proxy.js in a shell. Now when you visit web pages, you should see the successive requests and responses logged to the console.

This example works by first creating a server using the http module. The callback will be triggered when a browser makes a request. We’ve used url.parse (url is another core module) to separate out the URL’s various parts so they can be passed as arguments to http.request. The parsed URL object is compatible with the arguments that http.request expects, so this is convenient.

From within the request’s callback, we can subscribe to events that need to be repeated back to the browser. The data event is useful because it allows us to capture the response from the server and pass it back to the client with res.write. We also respond to the end of the server’s connection by closing the connection to the browser. The status code is also written back to the client based on the server’s response.

Any data sent by the client is also proxied to the remote server by subscribing to the browser’s data events. Similarly, the browser’s original request is watched for an end event so it can be reflected back to the proxied request.

Finally, the HTTP server used as the proxy is set up to listen on port 8080.

This example creates a special server that sits between the browser and the server the browser wants to talk to. It could be extended to do lots of interesting things. For example, you could cache image files and compress them based on the remote client, sending mobile browsers heavily compressed images. You could even strip out certain content based on rules; some ad-blocking and parental filters work this way.

We’ve been using the DNS so far without really thinking about it too much. DNS uses TCP and UDP for its request/response-based protocol. Fortunately, Node hides this complexity for us with a slick asynchronous DNS module. The next section demonstrates how to make DNS requests using Node’s dns module.

7.5. Making DNS requests

Node’s DNS module lives outside of the net module, in dns. When the http or net modules are used to connect to remote servers, Node will look up IP addresses using dns.lookup internally.

Technique 53 Making a DNS request

Node has multiple methods for making DNS requests. In this technique you’ll learn how and why you should use each to resolve a domain name to an IP address.

When you query a DNS record, the results may include answers for different record types. The DNS is a distributed database, so it isn’t used purely for resolving IP addresses—some records like TXT are used to build features off the back of the DNS itself.

Table 7.2 includes a list of each type along with the associated dns module method.

Table 7.2. DNS record types

A (dns.resolve): An A record stores the IP address. It can have an associated time-to-live (TTL) field to indicate how often the record should be updated.

TXT (dns.resolveTxt): Text values that can be used by other services for additional features built on top of DNS.

SRV (dns.resolveSrv): Service records define “location” data for a service; this usually includes the port number and hostname.

NS (dns.resolveNs): Used for name servers themselves.

CNAME (dns.resolveCname): Canonical name records. These are set to domain names rather than IP addresses.

Problem

You want to look up a single or multiple domain names quickly.

Solution

The dns.lookup method can be used to look up either IPv4 or IPv6 addresses. When looking up multiple addresses, it can be faster to use dns.resolve instead.

Discussion

According to Node’s documentation, dns.lookup is backed by a thread pool, whereas dns.resolve uses the c-ares library, which is faster. The dns.lookup API is a little friendlier—it uses getaddrinfo, which is more consistent with the other programs on your system. Indeed, the Socket.prototype.connect method, and any of Node’s core modules that inherit from the objects in the net module, all use dns.lookup for consistency:
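
A minimal sketch of this kind of lookup (the hostname here is a placeholder; the original example queried a domain that resolved to the address mentioned below):

var dns = require('dns');

dns.lookup('example.com', function (err, address) {
  if (err) {
    return console.error('Lookup failed:', err);
  }
  console.log(address); // a single IP address as a string
});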

This example loads the dns module, and then looks up the IP address using dns.lookup. The API is asynchronous, so we have to pass a callback to receive the IP address and any errors that were raised when looking up the address. Note that the domain name has to be provided, rather than a URL—don’t include http:// here.

If everything runs correctly, then you should see 68.180.151.75 printed as the IP address. Conversely, if the previous example is run when you’re offline, then a rather interesting error should be printed instead:

The error object includes a standard error code alongside the system call that raised the error. You can use the error code in your programs to detect when this kind of error was raised and handle it appropriately. The syscall property, meanwhile, is useful to us as programmers: it shows that the error was generated by a service outside of our Node code that is provided by the operating system.

Now compare this to the version that uses dns.resolve:
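
A sketch of the same lookup using dns.resolve, which passes an array of addresses to its callback:

var dns = require('dns');

dns.resolve('example.com', function (err, addresses) {
  if (err) {
    return console.error('Resolve failed:', err);
  }
  console.log(addresses); // an array, even when only one address is returned
});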

The API looks similar to the previous example, apart from dns.resolve. You’ll still see an error object that includes ECONNREFUSED if the DNS server couldn’t be reached, but this time the result is different: we receive an array of addresses instead of a single result. In this example you should see [ '68.180.151.75' ], but some servers may return more than one address.

Node’s dns module is flexible, friendly, and fast. It can scale up well from infrequent single requests to making batches of requests.

The last part of Node’s networking suite left to look at is perhaps the hardest to learn, yet paradoxically the most important to get right: encryption. The next section introduces SSL/TLS with the tls and https modules.

7.6. Encryption

Node’s encryption module, tls, uses OpenSSL to provide Transport Layer Security/Secure Sockets Layer (TLS/SSL). This is a public key system, where each client and server both have a private key. The server makes its public key available so clients can encrypt subsequent communications in a way that only that server can decrypt again.

The tls module is used as the basis for the https module—this allows HTTP servers and clients to communicate over TLS/SSL. Unfortunately, TLS/SSL is a world of potential pitfalls. Node potentially supports different ciphers based on what version of OpenSSL it has been linked against. You can specify what ciphers you want to use when creating servers with tls.createServer, but we recommend using the defaults unless you have specific expertise in this area.

In the following technique you’ll learn how to start a TCP server that uses SSL and a self-signed certificate. After that, we end the chapter with a technique that shows how encrypting web server communication works in Node.

Technique 54 A TCP server that uses encryption

TLS can be used to encrypt servers made with net.createServer. This technique demonstrates how to do this by first creating the necessary certificates and then starting a client and server.

Problem

You want to encrypt communication sent and received over a TCP connection.

Solution

Use the tls module to start a client and server. Set up the required certificate files using OpenSSL.

Discussion

The main thing to master when working with encryption, whether it’s web servers, mail servers, or any TCP-based protocol, is how to properly set up the key and certificate files. Public key cryptography is dependent on public-private key pairs—a pair is required for both clients and servers. But an additional file is needed: the public key of the Certificate Authority (CA).

Our goal in this technique is to create a TLS client and server that both report authorized after the TLS handshake. This state is reported when both parties have verified each other’s identity. When working with web server certificates, your CA will be one of the well-known organizations that commercially distribute certificates. But for the purposes of testing, you can become your own CA and sign certificates. This is also useful for secure communication between your own systems that don’t need publicly verifiable certificates.

That means before you can run any Node examples, you’ll need certificates. The OpenSSL command-line tools are required for this. If you don’t have them, you should be able to install them with your operating system’s package manager, or by visiting www.openssl.org.

The openssl tool takes a command as the first argument, and then options as subsequent arguments. For example, openssl req is used for X.509 Certificate Signing Request (CSR) management. To make a certificate signed by an authority you control, you’ll need to issue the following commands:

· genrsa—Generate an RSA key; this is our private key.

· req—Create a CSR.

· x509—Sign the CSR with the private key to produce a public certificate.

When the process is broken down like this, it’s fairly easy to understand: certificates require an authority and must be signed, and we need a public and private key. The process is similar when creating a public and private key signed against a commercial certificate authority, which you’ll do if you want to buy certificates to use with public web servers.

The full command list for creating a public and private key is as follows:
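A typical sequence looks like the following sketch; the file names, key size, and expiry period are assumptions, and you’ll want to repeat the process to create a client-key.pem/client-cert.pem pair for the client:

openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -out server-csr.pem
openssl x509 -req -days 365 -in server-csr.pem \
  -signkey server-key.pem -out server-cert.pem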

After creating a private key, you’ll create a CSR. When prompted for the “Common Name”, enter your computer’s hostname, which you can find by typing hostname in the terminal on a Unix system. This is important, because when your code sends or receives certificates, it’ll check the name value against the servername property passed to the tls.connect method.

The next listing reads the server’s keys and starts a server running using tls.createServer.

Listing 7.9. A TCP server that uses TLS for encryption
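In sketch form, assuming the certificate file names generated earlier:

var fs = require('fs');
var tls = require('tls');

var options = {
  key: fs.readFileSync('server-key.pem'),     // The server's private key
  cert: fs.readFileSync('server-cert.pem'),   // The server's public key (certificate)
  ca: [fs.readFileSync('client-cert.pem')],   // Treat the client's certificate as a CA
  requestCert: true                           // Force clients to present a certificate
};

var server = tls.createServer(options, function(cleartext) {
  var status = cleartext.authorized ? 'authorized' : 'unauthorized';
  cleartext.write('Welcome, you are ' + status + '\n');
  cleartext.end();
});

server.listen(8000, function() {
  console.log('Server listening on port 8000');
});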

The network code in listing 7.9 is very similar to the net.createServer method—that’s because the tls module inherits from it. The rest of the code is concerned with managing certificates, and unfortunately this process is left to us to handle and is often the cause of programmer errors, which can compromise security. First we load the private and public keys, passing them to tls.createServer. We also load the client’s public key as a certificate authority—when using a commercially obtained certificate, this stage isn’t usually required.

When clients connect, we want to send them some data, but for the purposes of this example we really just want to see if the client was authorized. Client authorization has been forced by setting the requestCert option.

This server can be run with node tls.js—but there’s something missing: a client! The next listing contains a client that can connect to this server.

Listing 7.10. A TCP client that uses TLS
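In sketch form, with the same assumed file names:

var fs = require('fs');
var os = require('os');
var tls = require('tls');

var options = {
  port: 8000,
  key: fs.readFileSync('client-key.pem'),     // The client's private key
  cert: fs.readFileSync('client-cert.pem'),   // The client's public key (certificate)
  ca: [fs.readFileSync('server-cert.pem')],   // Treat the server's certificate as the CA
  servername: os.hostname()                   // Must match the Common Name in the CSR
};

var cleartext = tls.connect(options, function() {
  console.log(cleartext.authorized ? 'authorized' : 'unauthorized');
});

cleartext.pipe(process.stdout);   // Display whatever the server sends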

The client is similar to the server: the private and public keys are loaded, and this time the server is treated as the CA. The server’s name is set to the same value as the Common Name in the CSR by using os.hostname—you could type in the name manually if you set it to something else. After that the client connects, displays whether it was able to authorize the certificates, and then reads data sent by the server and pipes it to the standard output.

Testing SSL/TLS

When testing secure certificates, it can be hard to tell whether the problem lies in your code or elsewhere. One way around this is to use the openssl command-line tool to simulate a client or server. The following command will start a client that connects to a server with the given certificate file:

openssl s_client -connect 127.0.0.1:8000 \
  -CAfile ./server-cert.pem

The openssl tool will display a lot of extra information about the connection. When we wrote the example in this technique, we used it to figure out that the certificate we’d generated had the wrong value for its Common Name.

An instance of tls.Server is instantiated when you call tls.createServer. This constructor calls net.Server—there’s a clear inheritance chain between each networking module. That means the events emitted by net.Server are the same for TLS servers.

In the next technique you’ll see how to use HTTPS, and how this is also related to the tls and net modules.

Technique 55 Encrypted web servers and clients

Though it’s possible to host Node applications behind other web servers like Apache and nginx, there are times when you’ll want to run your own HTTPS servers. This technique introduces the https module and shows how it’s related to the tls module.

Problem

You want to run a server that supports SSL/TLS.

Solution

Use the https module and https.createServer.

Discussion

To run the examples in this technique, you’ll need to have followed the steps to create suitable self-signed certificates, as found in technique 54. Once you’ve set up some public and private keys, you’ll be able to run the examples.

The following listing shows an HTTPS server.

Listing 7.11. A basic HTTP server that uses TLS for encryption
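In sketch form, using the same certificate files as technique 54:

var fs = require('fs');
var https = require('https');

var options = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  ca: [fs.readFileSync('client-cert.pem')],
  requestCert: true
};

https.createServer(options, function(req, res) {
  var status = req.socket.authorized ? 'authorized' : 'unauthorized';
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Welcome, you are ' + status + '\n');
}).listen(8000);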

The server in listing 7.11 is basically the same as the one in technique 54. Again, the private and public keys are loaded and passed to https.createServer.

When browsers request a page, we check the req.socket.authorized property to see if the request was authorized. This status is returned to the browser. If you want to try this out with a browser, ensure you type https:// into the address bar; otherwise it won’t work. You’ll see a warning message because the browser won’t be able to verify the server’s certificate—that’s OK; you know what’s going on because you created the server. The server will respond saying that you’re unauthorized because it won’t be able to authorize you, either.

To make a client that can connect to this server, follow the code shown next.

Listing 7.12. An example HTTPS client
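A sketch of the client, with the same assumed file names:

var fs = require('fs');
var os = require('os');
var https = require('https');

var options = {
  hostname: os.hostname(),                    // Must match the certificate's Common Name
  port: 8000,
  path: '/',
  method: 'GET',
  key: fs.readFileSync('client-key.pem'),
  cert: fs.readFileSync('client-cert.pem'),
  ca: [fs.readFileSync('server-cert.pem')]
};

var req = https.request(options, function(res) {
  res.pipe(process.stdout);                   // Print the server's response
});

req.on('error', function(err) {
  console.error('Request failed:', err.message);
});

req.end();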

This example sets the private and public keys for the client, which is what your browser does transparently when making secure requests. It also sets the server as a certificate authority, which wouldn’t usually be required. The hostname used for the HTTP request is the machine’s current hostname.

Once all of this setup is done, the HTTPS request can be made using https.request. The API is identical to the http module’s. In this example the server checks whether the SSL/TLS authorization procedure succeeded, and returns text indicating whether the connection was fully authorized.

In real HTTPS code, you probably wouldn’t act as your own CA. Doing so can be useful, though, if you have internal systems that you want to communicate with using HTTPS—perhaps for testing or for API requests over the internet. When making HTTPS requests against public web servers, Node will be able to verify the server’s certificates for you, so you won’t need to set the key, cert, and ca options.

The https module has some other features—there’s an https.get convenience method for making GET requests more easily.
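For example, fetching a page from a public web server needs no certificate options at all. Here’s a minimal sketch, with a placeholder URL:

var https = require('https');

https.get('https://www.example.com/', function(res) {
  console.log('Status:', res.statusCode);
  res.pipe(process.stdout);
});

That wraps up our set of techniques on encryption in Node.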

Secure pairs

Before moving off encryption for greener pastures, there’s one patch of delicious turf left to chew: SecurePair. This is a class in the tls module that can be used to create a secure pair of streams: one reads and writes encrypted data, and the other reads and writes clear text. This potentially allows you to stream anything to an encrypted output.

There’s a convenience method for this: tls.createSecurePair. When a SecurePair establishes a secure connection, it’ll emit a secure event, but you’ll still need to check for cleartext.authorized to ensure the certificates were properly authorized.
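Here’s a sketch of the client side of such a pair; it assumes a reasonably recent Node where tls.createSecureContext is available (older releases used crypto.createCredentials for the same job), and it reuses the certificate files from technique 54:

var fs = require('fs');
var net = require('net');
var tls = require('tls');

var context = tls.createSecureContext({
  key: fs.readFileSync('client-key.pem'),
  cert: fs.readFileSync('client-cert.pem'),
  ca: [fs.readFileSync('server-cert.pem')]
});

var socket = net.connect(8000, function() {
  var pair = tls.createSecurePair(context, false);   // false: act as the TLS client

  // The encrypted side talks to the network; the cleartext side carries plain text
  socket.pipe(pair.encrypted);
  pair.encrypted.pipe(socket);

  pair.on('secure', function() {
    console.log(pair.cleartext.authorized ? 'authorized' : 'unauthorized');
    pair.cleartext.pipe(process.stdout);
  });
});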

7.7. Summary

This chapter has been long, but that’s because networking in Node is important. Node is built on excellent foundations for network programming; buffers, streams, and asynchronous I/O all contribute to an environment that is perfect for writing the next generation of network-oriented programs.

With this chapter you should be able to appreciate how Node fits into the wider world of network software. Whether you’re developing Unix daemons, Windows-based game servers, or the next big web app, you should now know where to start.

It goes without saying that networking and encryption are closely related. With Node’s tls and https modules, you should be able to write network clients and servers that can talk to other systems without fear of eavesdroppers.

The next chapter, which covers child_process, is the last on Node’s core modules; it looks at techniques for interfacing with other command-line programs.