# Functional Programming in Scala (2015)

### Part 1. Introduction to functional programming

### Chapter 6. Purely functional state

In this chapter, we’ll see how to write purely functional programs that manipulate state, using the simple domain of *random number generation* as the example. Although by itself it’s not the most compelling use case for the techniques in this chapter, the simplicity of random number generation makes it a good first example. We’ll see more compelling use cases in *parts 3* and *4* of the book, especially *part 4*, where we’ll say a lot more about dealing with state and effects. The goal here is to give you the basic pattern for how to make *any* stateful API purely functional. As you start writing your own functional APIs, you’ll likely run into many of the same questions that we’ll explore here.

**6.1. Generating random numbers using side effects**

If you need to generate random[1] numbers in Scala, there’s a class in the standard library, scala.util.Random,[2] with a pretty typical imperative API that relies on side effects. Here’s an example of its use.

[1] Actually, pseudo-random, but we’ll ignore this distinction.

[2] Scala API link: *http://mng.bz/3DP7*.

**Listing 6.1. Using scala.util.Random to generate random numbers**
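A sketch of the kind of usage the listing shows; since scala.util.Random mutates its internal state on every call and is seeded from the clock, the outputs vary from run to run, so none are shown:

```scala
val rng = new scala.util.Random  // a new generator, seeded from the system clock

val d = rng.nextDouble   // a Double between 0 (inclusive) and 1 (exclusive)
val i = rng.nextInt      // an Int anywhere in the 32-bit range
val i6 = rng.nextInt(6)  // an Int between 0 and 5, inclusive
```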

Even if we didn’t know anything about what happens inside scala.util.Random, we can assume that the object rng has some internal state that gets updated after each invocation, since we’d otherwise get the same value each time we called nextInt or nextDouble. Because the state updates are performed as a side effect, these methods aren’t referentially transparent. And as we know, this implies that they aren’t as testable, composable, modular, and easily parallelized as they could be.

Let’s just take testability as an example. If we want to write a method that makes use of randomness, we need tests to be reproducible. Let’s say we had the following side-effecting method, intended to simulate the rolling of a single six-sided die, which *should* return a value between 1 and 6, inclusive:
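In sketch form, with the bug intact (the method creates its own generator, so every call is a fresh side effect):

```scala
def rollDie: Int = {
  val rng = new scala.util.Random
  rng.nextInt(6) // returns a random Int between 0 and 5 -- off by one!
}
```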

This method has an off-by-one error. Whereas it’s supposed to return a value between 1 and 6, it actually returns a value from 0 to 5. But even though it doesn’t work properly, five out of six times a test of this method will meet the specification! And if a test did fail, it would be ideal if we could reliably reproduce the failure.

Note that what’s important here is not this specific example, but the general idea. In this case, the bug is obvious and easy to reproduce. But we can easily imagine a situation where the method is much more complicated and the bug far more subtle. The more complex the program and the subtler the bug, the more important it is to be able to reproduce bugs in a reliable way.

One suggestion might be to pass in the random number generator. That way, when we want to reproduce a failed test, we can pass the same generator that caused the test to fail:

```scala
def rollDie(rng: scala.util.Random): Int = rng.nextInt(6)
```

But there’s a problem with this solution. The “same” generator has to be both created with the same seed, and also be in the same state, which means that its methods have been called a certain number of times since it was created. That will be really difficult to guarantee, because every time we call nextInt, for example, the previous state of the random number generator is destroyed. Do we now need a separate mechanism to keep track of how many times we’ve called the methods on Random?

No! The answer to all of this, of course, is that we should eschew side effects on principle!

**6.2. Purely functional random number generation**

The key to recovering referential transparency is to make the state updates *explicit*. Don’t update the state as a side effect, but simply return the new state along with the value that we’re generating. Here’s one possible interface to a random number generator:

```scala
trait RNG {
  def nextInt: (Int, RNG)
}
```

This method should generate a random Int. We’ll later define other functions in terms of nextInt. Rather than returning only the generated random number (as is done in scala.util.Random) and updating some internal state by *mutating* it in place, we return the random number and the new state, leaving the old state unmodified.[3] In effect, we separate the concern of *computing* what the next state is from the concern of *communicating* the new state to the rest of the program. No global mutable memory is being used—we simply return the next state back to the caller. This leaves the caller of nextInt in complete control of what to do with the new state. Note that we’re still *encapsulating* the state, in the sense that users of this API don’t know anything about the implementation of the random number generator itself.

[3] Recall that (A,B) is the type of two-element tuples, and given p: (A,B), you can use p._1 to extract the A and p._2 to extract the B.

But we do need to have an implementation, so let’s pick a simple one. The following is a simple random number generator that uses the same algorithm as scala.util.Random, which happens to be what’s called a *linear congruential generator* (*http://mng.bz/r046*). The details of this implementation aren’t really important, but notice that nextInt returns both the generated value and a new RNG to use for generating the next value.

**Listing 6.2. A purely functional random number generator**
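A sketch of such an implementation; the multiplier and mask constants are the ones java.util.Random uses, and the RNG trait is repeated so the listing stands on its own:

```scala
trait RNG { // the interface from above, repeated for completeness
  def nextInt: (Int, RNG)
}

case class SimpleRNG(seed: Long) extends RNG {
  def nextInt: (Int, RNG) = {
    // & is bitwise AND; the mask keeps the low 48 bits of the product.
    val newSeed = (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL
    val nextRNG = SimpleRNG(newSeed) // the next state, wrapped in a new RNG value
    val n = (newSeed >>> 16).toInt   // >>> is right shift with zero fill
    (n, nextRNG)                     // return the value and the new state; seed is untouched
  }
}
```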

Here’s an example of using this API from the interpreter:
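With the SimpleRNG of listing 6.2 in scope, a session goes like this (these exact values follow from the seed 42):

```
scala> val rng = SimpleRNG(42)
rng: SimpleRNG = SimpleRNG(42)

scala> val (n1, rng2) = rng.nextInt
n1: Int = 16159453
rng2: RNG = SimpleRNG(1059025964525)

scala> val (n2, rng3) = rng2.nextInt
n2: Int = -1281479697
rng3: RNG = SimpleRNG(197491923327988)
```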

We can run this sequence of statements as many times as we want and we’ll always get the same values. When we call rng.nextInt, it will always return 16159453 and a new RNG, whose nextInt will always return -1281479697. In other words, this API is pure.

**6.3. Making stateful APIs pure**

This problem of making seemingly stateful APIs pure and its solution (having the API *compute* the next state rather than actually mutate anything) aren’t unique to random number generation. It comes up frequently, and we can always deal with it in this same way.[4]

[4] An efficiency loss comes with computing next states using pure functions, because it means we can’t actually mutate the data in place. (Here, it’s not really a problem since the state is just a single Long that must be copied.) This loss of efficiency can be mitigated by using efficient purely functional data structures. It’s also possible in some cases to mutate the data in place without breaking referential transparency, which we’ll talk about in *part 4*.

For instance, suppose you have a class like this:

```scala
class Foo {
  private var s: FooState = ...
  def bar: Bar
  def baz: Int
}
```

Suppose bar and baz each mutate s in some way. We can mechanically translate this to the purely functional API by making explicit the transition from one state to the next:

```scala
trait Foo {
  def bar: (Bar, Foo)
  def baz: (Int, Foo)
}
```

Whenever we use this pattern, we make the caller responsible for passing the computed next state through the rest of the program. For the pure RNG interface just shown, if we reuse a previous RNG, it will always generate the same value it generated before. For instance:

```scala
def randomPair(rng: RNG): (Int,Int) = {
  val (i1,_) = rng.nextInt
  val (i2,_) = rng.nextInt
  (i1,i2)
}
```

Here i1 and i2 will be the same! If we want to generate two distinct numbers, we need to use the RNG returned by the first call to nextInt to generate the second Int:
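For example (RNG and SimpleRNG are repeated from the earlier listings so the snippet stands alone; note that this version also returns the final RNG, so callers can keep generating):

```scala
// RNG and SimpleRNG repeated from the earlier listings:
trait RNG { def nextInt: (Int, RNG) }
case class SimpleRNG(seed: Long) extends RNG {
  def nextInt: (Int, RNG) = {
    val newSeed = (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL
    ((newSeed >>> 16).toInt, SimpleRNG(newSeed))
  }
}

// Thread the state: the second call uses rng2, the RNG returned by the first.
def randomPair(rng: RNG): ((Int, Int), RNG) = {
  val (i1, rng2) = rng.nextInt
  val (i2, rng3) = rng2.nextInt
  ((i1, i2), rng3) // also return the final state
}
```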

You can see the general pattern, and perhaps you can also see how it might get tedious to use this API directly. Let’s write a few functions to generate random values and see if we notice any repetition that we can factor out.

**Exercise 6.1**

Write a function that uses RNG.nextInt to generate a random integer between 0 and Int.MaxValue (inclusive). Make sure to handle the corner case when nextInt returns Int.MinValue, which doesn’t have a non-negative counterpart.

```scala
def nonNegativeInt(rng: RNG): (Int, RNG)
```

**Dealing with awkwardness in functional programming**

As you write more functional programs, you’ll sometimes encounter situations like this where the functional way of expressing a program feels awkward or tedious. Does this imply that purity is the equivalent of trying to write an entire novel without using the letter *E*? Of course not. Awkwardness like this is almost always a sign of some missing abstraction waiting to be discovered.

When you encounter these situations, we encourage you to plow ahead and look for common patterns that you can factor out. Most likely, this is a problem that others have encountered, and you may even rediscover the “standard” solution yourself. Even if you get stuck, struggling to puzzle out a clean solution yourself will help you to better understand what solutions others have discovered to deal with similar problems.

With practice, experience, and more familiarity with the idioms contained in this book, expressing a program functionally will become effortless and natural. Of course, good design is still hard, but programming using pure functions greatly simplifies the design space.

**Exercise 6.2**

Write a function to generate a Double between 0 and 1, not including 1. Note: You can use Int.MaxValue to obtain the maximum positive integer value, and you can use x.toDouble to convert an x: Int to a Double.

```scala
def double(rng: RNG): (Double, RNG)
```

**Exercise 6.3**

Write functions to generate an (Int, Double) pair, a (Double, Int) pair, and a (Double, Double, Double) 3-tuple. You should be able to reuse the functions you’ve already written.

```scala
def intDouble(rng: RNG): ((Int,Double), RNG)

def doubleInt(rng: RNG): ((Double,Int), RNG)

def double3(rng: RNG): ((Double,Double,Double), RNG)
```

**Exercise 6.4**

Write a function to generate a list of random integers.

```scala
def ints(count: Int)(rng: RNG): (List[Int], RNG)
```

**6.4. A better API for state actions**

Looking back at our implementations, we’ll notice a common pattern: each of our functions has a type of the form RNG => (A, RNG) for some type A. Functions of this type are called *state actions* or *state transitions* because they transform RNG states from one to the next. These state actions can be combined using *combinators*, which are higher-order functions that we’ll define in this section. Since it’s pretty tedious and repetitive to pass the state along ourselves, we want our combinators to pass the state from one action to the next automatically.

To make the type of actions convenient to talk about, and to simplify our thinking about them, let’s make a type alias for the RNG state action data type:

```scala
type Rand[+A] = RNG => (A, RNG)
```

We can think of a value of type Rand[A] as “a randomly generated A,” although that’s not really precise. It’s really a state action—a *program* that depends on some RNG, uses it to generate an A, and also transitions the RNG to a new state that can be used by another action later.

We can now turn methods such as RNG’s nextInt into values of this new type:

```scala
val int: Rand[Int] = _.nextInt
```

We want to write combinators that let us combine Rand actions while avoiding explicitly passing along the RNG state. We’ll end up with a kind of domain-specific language that does all of the passing for us. For example, a simple RNG state transition is the unit action, which passes the RNG state through without using it, always returning a constant value rather than a random value:

```scala
def unit[A](a: A): Rand[A] =
  rng => (a, rng)
```

There’s also map for transforming the output of a state action without modifying the state itself. Remember, Rand[A] is just a type alias for a function type RNG => (A, RNG), so this is just a kind of function composition:

```scala
def map[A,B](s: Rand[A])(f: A => B): Rand[B] =
  rng => {
    val (a, rng2) = s(rng)
    (f(a), rng2)
  }
```

As an example of how map is used, here’s nonNegativeEven, which reuses nonNegativeInt to generate an Int that’s greater than or equal to zero and divisible by two:

```scala
def nonNegativeEven: Rand[Int] =
  map(nonNegativeInt)(i => i - i % 2)
```

**Exercise 6.5**

Use map to reimplement double in a more elegant way. See *exercise 6.2*.

**6.4.1. Combining state actions**

Unfortunately, map isn’t powerful enough to implement intDouble and doubleInt from *exercise 6.3*. What we need is a new combinator map2 that can combine two RNG actions into one using a binary rather than unary function.

**Exercise 6.6**

Write the implementation of map2 based on the following signature. This function takes two actions, ra and rb, and a function f for combining their results, and returns a new action that combines them:

```scala
def map2[A,B,C](ra: Rand[A], rb: Rand[B])(f: (A, B) => C): Rand[C]
```

We only have to write the map2 combinator once, and then we can use it to combine arbitrary RNG state actions. For example, if we have an action that generates values of type A and an action to generate values of type B, then we can combine them into one action that generates pairs of both A and B:

```scala
def both[A,B](ra: Rand[A], rb: Rand[B]): Rand[(A,B)] =
  map2(ra, rb)((_, _))
```

We can use this to reimplement intDouble and doubleInt from *exercise 6.3* more succinctly:

```scala
val randIntDouble: Rand[(Int, Double)] =
  both(int, double)

val randDoubleInt: Rand[(Double, Int)] =
  both(double, int)
```

**Exercise 6.7**

*Hard:* If you can combine two RNG transitions, you should be able to combine a whole list of them. Implement sequence for combining a List of transitions into a single transition. Use it to reimplement the ints function you wrote before. For the latter, you can use the standard library function List.fill(n)(x) to make a list with x repeated n times.

```scala
def sequence[A](fs: List[Rand[A]]): Rand[List[A]]
```

**6.4.2. Nesting state actions**

A pattern is beginning to emerge: we’re progressing toward implementations that don’t explicitly mention or pass along the RNG value. The map and map2 combinators allowed us to implement, in a rather succinct and elegant way, functions that were otherwise tedious and error-prone to write. But there are some functions that we can’t very well write in terms of map and map2.

One such function is nonNegativeLessThan, which generates an integer between 0 (inclusive) and n (exclusive):

```scala
def nonNegativeLessThan(n: Int): Rand[Int]
```

A first stab at an implementation might be to generate a non-negative integer modulo n:

```scala
def nonNegativeLessThan(n: Int): Rand[Int] =
  map(nonNegativeInt) { _ % n }
```

This will certainly generate a number in the range, but it’ll be skewed because Int.MaxValue may not be exactly divisible by n. So numbers that are less than the remainder of that division will come up more frequently. When nonNegativeInt generates numbers higher than the largest multiple of n that fits in a 32-bit integer, we should *retry* the generator and hope to get a smaller number. We might attempt this:
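A sketch of that attempt; it’s deliberately incomplete, since the ??? marks the spot where the retry needs an RNG that map gives us no access to:

```scala
def nonNegativeLessThan(n: Int): Rand[Int] =
  map(nonNegativeInt) { i =>
    val mod = i % n
    if (i + (n-1) - mod >= 0) mod
    else nonNegativeLessThan(n)(???) // we want to retry, but with what RNG?
  }
```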

This is moving in the right direction, but nonNegativeLessThan(n) has the wrong type to be used right there. Remember, it should return a Rand[Int], which *is a function* that expects an RNG! But we don’t have one right there. What we would like is to chain things together so that the RNG that’s returned by nonNegativeInt is passed along to the recursive call to nonNegativeLessThan. We could pass it along explicitly instead of using map, like this:

```scala
def nonNegativeLessThan(n: Int): Rand[Int] = { rng =>
  val (i, rng2) = nonNegativeInt(rng)
  val mod = i % n
  if (i + (n-1) - mod >= 0)
    (mod, rng2)
  else nonNegativeLessThan(n)(rng2) // retry with the new state, rng2
}
```

But it would be better to have a combinator that does this passing along for us. Neither map nor map2 will cut it. We need a more powerful combinator, flatMap.

**Exercise 6.8**

Implement flatMap, and then use it to implement nonNegativeLessThan.

```scala
def flatMap[A,B](f: Rand[A])(g: A => Rand[B]): Rand[B]
```

flatMap allows us to generate a random A with Rand[A], and then take that A and choose a Rand[B] based on its value. In nonNegativeLessThan, we use it to choose whether to retry or not, based on the value generated by nonNegativeInt.

**Exercise 6.9**

Reimplement map and map2 in terms of flatMap. The fact that this is possible is what we’re referring to when we say that flatMap is *more powerful* than map and map2.

We can now revisit our example from the beginning of this chapter. Can we make a more testable die roll using our purely functional API?

Here’s an implementation of rollDie using nonNegativeLessThan, including the off-by-one error we had before:

```scala
def rollDie: Rand[Int] = nonNegativeLessThan(6)
```

If we test this function with various RNG states, we’ll pretty soon find an RNG that causes this function to return 0:

```
scala> val zero = rollDie(SimpleRNG(5))._1
zero: Int = 0
```

And we can re-create this reliably by using the same SimpleRNG(5) random generator, without having to worry that its state is destroyed after it’s been used.

Fixing the bug is trivial:

```scala
def rollDie: Rand[Int] = map(nonNegativeLessThan(6))(_ + 1)
```

**6.5. A general state action data type**

The functions we’ve just written—unit, map, map2, flatMap, and sequence—aren’t really specific to random number generation at all. They’re general-purpose functions for working with state actions, and don’t care about the type of the state. Note that, for instance, map doesn’t care that it’s dealing with RNG state actions, and we can give it a more general signature:

```scala
def map[S,A,B](a: S => (A,S))(f: A => B): S => (B,S)
```

Changing this signature doesn’t require modifying the implementation of map! The more general signature was there all along; we just didn’t see it.

We should then come up with a more general type than Rand, for handling any type of state:

```scala
type State[S,+A] = S => (A,S)
```

Here State is short for *computation that carries some state along*, or *state action*, *state transition*, or even *statement* (see the next section). We might want to write it as its own class, wrapping the underlying function like this:

```scala
case class State[S,+A](run: S => (A,S))
```

The representation doesn’t matter too much. What’s important is that we have a single, general-purpose type, and using this type we can write general-purpose functions for capturing common patterns of stateful programs.

We can now just make Rand a type alias for State:

```scala
type Rand[A] = State[RNG, A]
```

**Exercise 6.10**

Generalize the functions unit, map, map2, flatMap, and sequence. Add them as methods on the State case class where possible. Otherwise you should put them in a State companion object.

The functions we’ve written here capture only a few of the most common patterns. As you write more functional code, you’ll likely encounter other patterns and discover other functions to capture them.

**6.6. Purely functional imperative programming**

In the preceding sections, we were writing functions that followed a definite pattern. We’d run a state action, assign its result to a val, then run another state action that used that val, assign its result to another val, and so on. It looks a lot like *imperative* programming.

In the imperative programming paradigm, a program is a sequence of statements where each statement may modify the program state. That’s exactly what we’ve been doing, except that our “statements” are really State actions, which are really functions. As functions, they read the current program state simply by receiving it in their argument, and they write to the program state simply by returning a value.

**Aren’t imperative and functional programming opposites?**

Absolutely not. Remember, functional programming is simply programming without side effects. Imperative programming is about programming with statements that modify some program state, and as we’ve seen, it’s entirely reasonable to maintain state without side effects.

Functional programming has excellent support for writing imperative programs, with the added benefit that such programs can be reasoned about equationally because they’re referentially transparent. We’ll have much more to say about equational reasoning about programs in *part 2*, and imperative programs in particular in *parts 3* and *4*.

We implemented some combinators like map, map2, and ultimately flatMap to handle the propagation of the state from one statement to the next. But in doing so, we seem to have lost a bit of the imperative mood.

Consider as an example the following (which assumes that we’ve made Rand[A] a type alias for State[RNG, A]):
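A plausible reconstruction of that example, written with explicit flatMap and map calls. The scaffolding (RNG and SimpleRNG from listing 6.2, a minimal State carrying the map and flatMap of exercise 6.10, an int action, and a recursive ints) is repeated so the snippet stands alone:

```scala
// Scaffolding repeated from earlier sections so the example is self-contained:
trait RNG { def nextInt: (Int, RNG) }
case class SimpleRNG(seed: Long) extends RNG {
  def nextInt: (Int, RNG) = {
    val newSeed = (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL
    ((newSeed >>> 16).toInt, SimpleRNG(newSeed))
  }
}
case class State[S, +A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] = flatMap(a => State(s => (f(a), s)))
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s => val (a, s2) = run(s); f(a).run(s2) }
}
type Rand[A] = State[RNG, A]
val int: Rand[Int] = State(_.nextInt)
def ints(count: Int): Rand[List[Int]] = State { rng =>
  if (count <= 0) (Nil, rng)
  else {
    val (i, r2) = rng.nextInt
    val (rest, r3) = ints(count - 1).run(r2)
    (i :: rest, r3)
  }
}

// The example itself: two Ints, then a list of length x, all taken mod y.
val ns: Rand[List[Int]] =
  int.flatMap(x =>
    int.flatMap(y =>
      ints(x).map(xs =>
        xs.map(_ % y))))
```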

It’s not clear what’s going on here. But since we have map and flatMap defined, we can use a for-comprehension to recover the imperative style:
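A sketch, assuming int, ints, and the State combinators from exercise 6.10 are in scope; the compiler desugars the for-comprehension into nested flatMap and map calls:

```scala
val ns: Rand[List[Int]] = for {
  x  <- int             // generate an Int, call it x
  y  <- int             // generate another Int, call it y
  xs <- ints(x)         // generate a list of x random Ints
} yield xs.map(_ % y)   // replace every element with its value mod y
```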

This code is much easier to read (and write), and it looks like what it is—an imperative program that maintains some state. But it’s *the same code*. We get the next Int and assign it to x, get the next Int after that and assign it to y, then generate a list of length x, and finally return the list with all of its elements modulo y.

To facilitate this kind of imperative programming with for-comprehensions (or flatMaps), we really only need two primitive State combinators—one for reading the state and one for writing the state. If we imagine that we have a combinator get for getting the current state, and a combinator set for setting a new state, we could implement a combinator that can modify the state in arbitrary ways:
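A sketch of such a modify combinator, built from the get and set actions defined next; a minimal State with the map and flatMap of exercise 6.10 is repeated so the snippet runs on its own:

```scala
case class State[S, +A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] = flatMap(a => State(s => (f(a), s)))
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s => val (a, s2) = run(s); f(a).run(s2) }
}

def get[S]: State[S, S] = State(s => (s, s))           // the state is also the value
def set[S](s: S): State[S, Unit] = State(_ => ((), s)) // replace the state, return ()

// modify: read the current state, apply f, and write the result back.
def modify[S](f: S => S): State[S, Unit] = for {
  s <- get[S]
  _ <- set(f(s))
} yield ()
```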

This method returns a State action that modifies the incoming state by the function f. It yields Unit to indicate that it doesn’t have a return value other than the state.

What would the get and set actions look like? They’re exceedingly simple. The get action simply passes the incoming state along and returns it as the value:

```scala
def get[S]: State[S, S] = State(s => (s, s))
```

The set action is constructed with a new state s. The resulting action ignores the incoming state, replaces it with the new state, and returns () instead of a meaningful value:

```scala
def set[S](s: S): State[S, Unit] = State(_ => ((), s))
```

These two simple actions, together with the State combinators that we wrote—unit, map, map2, and flatMap—are all the tools we need to implement any kind of state machine or stateful program in a purely functional way.

**Exercise 6.11**

*Hard:* To gain experience with the use of State, implement a finite state automaton that models a simple candy dispenser. The machine has two types of input: you can insert a coin, or you can turn the knob to dispense candy. It can be in one of two states: locked or unlocked. It also tracks how many candies are left and how many coins it contains.

```scala
sealed trait Input
case object Coin extends Input
case object Turn extends Input

case class Machine(locked: Boolean, candies: Int, coins: Int)
```

The rules of the machine are as follows:

- Inserting a coin into a locked machine will cause it to unlock if there’s any candy left.

- Turning the knob on an unlocked machine will cause it to dispense candy and become locked.

- Turning the knob on a locked machine or inserting a coin into an unlocked machine does nothing.

- A machine that’s out of candy ignores all inputs.

The method simulateMachine should operate the machine based on the list of inputs and return the number of coins and candies left in the machine at the end. For example, if the input Machine has 10 coins and 5 candies, and a total of 4 candies are successfully bought, the output should be (14, 1).

```scala
def simulateMachine(inputs: List[Input]): State[Machine, (Int, Int)]
```

**6.7. Summary**

In this chapter, we touched on the subject of how to write purely functional programs that have state. We used random number generation as the motivating example, but the overall pattern comes up in many different domains. The idea is simple: use a pure function that accepts a state as its argument, and it returns the new state alongside its result. Next time you encounter an imperative API that relies on side effects, see if you can provide a purely functional version of it, and use some of the functions we wrote here to make working with it more convenient.