6. Testing

Program testing can be used to show the presence of bugs, but never to show their absence! – Edsger W. Dijkstra

Testing is an important part of software development, especially with dynamic languages like Ruby. Ensuring that our code works as specified is one of the most critical parts of building a successful product. Projects which don’t have tests often collapse under the uncertainty that small changes might introduce unwanted and unanticipated side effects.

Ruby comes with a testing framework in its standard library and there are many, many testing libraries available as Ruby Gems. This chapter will show you how to use these tools to write effective tests. While testing doesn’t offer proof that your code is free of bugs, it does increase your confidence that you’ve found them before your customers do.

Item 36: Familiarize Yourself with MiniTest Unit Testing

While there are all sorts of fancy testing frameworks available for Ruby, there’s something refreshing about the easy-to-use alternative known as MiniTest. It has been distributed with Ruby since version 1.9 and continues to get better with each successive version. There’s nothing to install; you just have to require a file and write a test case. I think that’s a big win for MiniTest and Ruby.

MiniTest replaced an existing standard testing framework known as Test::Unit. One of the really nice things about MiniTest is that it includes a compatibility shim which emulates the Test::Unit interface. This means that if you find yourself updating older code to work with newer versions of Ruby, you can leave the existing Test::Unit tests alone and they’ll work just fine.

In addition to being one of Ruby’s standard library packages, MiniTest is also an independent Ruby Gem. If you need a newer version of MiniTest and can’t upgrade Ruby itself, you can always add an external dependency to your project (see Item 42). The vast majority of the time you won’t need to do this. The unit testing component of MiniTest is very stable and hasn’t changed much from version to version. That said, some of MiniTest’s advanced features have been greatly improved over the years and you may need to upgrade to take advantage of them. (Mocking is one such feature, which we’ll look at in Item 38.)

Getting started with MiniTest is easy. Traditionally, each test case is written into its own file and covers a single “unit” of code. In Ruby, that’s often a single class. Let’s write a test case which validates the features of a class we’ve already seen.

Back in Item 13 we played with the Version class for parsing and comparing version numbers. Given a string containing a version number, the Version::new method returns a new object which can be used to access the individual components of a version number: major, minor, and patch level. We can also compare two version objects using the standard comparison operators. Assuming that the Version class is defined in a file named “version.rb”, it’s common practice to store its test case in a file named “version_test.rb” within a “test” or “tests” folder. It’s also fairly common to call the file “test_version.rb”. Choose the naming convention you like best and the one which fits into any application frameworks you might be using. (For example, Ruby on Rails prefers the “_test” suffix.)
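As a quick refresher before we write any tests, here’s the interface we’ll be exercising (the return values shown assume the Item 13 implementation):

v = Version.new("2.1.3")
v.major # => 2
v.minor # => 1
v.patch # => 3

Version.new("2.1.3") < Version.new("2.10.0") # => true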

The first step when using MiniTest is to require the appropriate library file. There are a handful of files you can choose from depending on which specific features you plan on using. Most of the time, however, it’s easiest to load the entire library which also brings in some glue code to automatically execute any tests you define in the test case file:

require('minitest/autorun')

The “autorun” file begins by loading the three major components of the MiniTest library (unit testing, spec testing, and mocking). It then registers itself with Ruby so that after all tests have been defined they will be executed. This means that you don’t have to do anything special to run your tests; you just have to execute the file they’re defined in.

After the MiniTest library is loaded the next step is to define a class to represent the test case. The name of the class isn’t important (it’s usually named after the file) but it does need to use MiniTest::Unit::TestCase as its superclass. Since we’re testing the Version class and the tests are in the “version_test.rb” file, we’ll call the class VersionTest:

class VersionTest < MiniTest::Unit::TestCase
  ...
end

Now comes the fun part, writing the actual testing logic. Individual tests are written as instance methods whose names must begin with the “test_” prefix. How you divide the test case up into individual test methods is entirely up to you. Typically you’ll want each test method to be as small as possible. Just like any method, this makes it easier to troubleshoot and maintain. Smaller methods can also shed light on test ordering dependencies—tests which only work when called in a specific order—since MiniTest randomizes the methods before executing each test case. Let’s start with a simple test to see if the Version class can properly parse the major version number out of a string:

def test_major_number
  v = Version.new("2.1.3")
  assert(v.major == 2, "major should be 2")
end

This method shows how to actually get something tested: through assertions. There are a lot of assertion methods to choose from; the easiest is the assert method. It has one mandatory argument, a value which should be true. If the argument is true then the assertion passes and the code continues along its normal path. But if the value is not true then the assertion halts the current test method and reports a failure. In this case, the optional second argument can be used to give additional details about why the assertion failed.

While you could certainly stick with the simple assert method for all of your assertions, there’s a good reason to explore the MiniTest::Assertions module documentation and use more specific assertions. The more appropriate the assertion is for your test, the better the error message will be when an assertion fails. The plain assert method relies on its second argument to provide a meaningful failure message. Other assertion methods can automatically report what was being tested, what was expected, and what actually happened. Using more specific assertions also makes your test methods more succinct. Let’s rewrite the previous test method with a different assertion and expand it to test other components of a version number:

def test_can_parse_version_string
  v = Version.new("2.1.3")
  assert_equal(2, v.major, "major")
  assert_equal(1, v.minor, "minor")
  assert_equal(3, v.patch, "patch")
end

The assert_equal method takes two mandatory arguments and like assert, an optional message which is displayed for failures. The first argument is the expected value and the second is the actual value. It might not seem like the order of the two arguments matters much, but it turns out to be quite useful for the assertion failure message. Let’s look at a test method which intentionally fails so we can see the message produced by assert_equal:

def test_force_assertion_failure
  v = Version.new("3.8.11")
  assert_equal(4, v.major, "parsing major")
end

Running this test produces a failure message like the following:

VersionTest#test_force_assertion_failure:
parsing major.
Expected: 4
Actual: 3

Each assertion method generates its own failure message which incorporates the optional description given as its last argument. This makes it easy to understand why an assertion is failing. Before you go too far with MiniTest make sure you read the documentation and familiarize yourself with the available assertion methods. As I mentioned previously, they’re documented in the MiniTest::Assertions module.
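To give you a taste of what’s available, here’s a quick sampler of some other assertions (a sketch only; the bogus method is intentionally undefined so that assert_raises has something to catch):

def test_assorted_assertions
  v = Version.new("2.1.3")

  assert_instance_of(Version, v)   # fails unless v.class == Version
  assert_respond_to(v, :major)     # fails unless v responds to major
  assert_operator(v.major, :>=, 0) # fails unless v.major >= 0

  assert_raises(NoMethodError) do  # fails unless the block raises
    v.bogus
  end
end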

Sometimes you might notice a lot of duplication among your test methods, especially in the steps they take to prepare the objects to be tested. There are two main ways to deal with this. First, since MiniTest will only invoke methods whose names begin with “test_”, you’re free to write helper methods in your test case which can then be called from the test methods. It’s common practice to mark these methods as private to further indicate that they won’t be called directly by MiniTest. The second way is more automated. If you define a method named setup, it will be invoked before each test method. Its partner, the teardown method, is called after each test method finishes. Typically you’ll use the setup method to create some test objects which get assigned to instance variables. Then each of the test methods can use them directly. Here’s an example:

def setup
  @v1 = Version.new("2.1.1")
  @v2 = Version.new("2.3.0")
end

def test_version_compare
  refute_equal(@v1, @v2)
  assert_operator(@v1, :<, @v2)
end
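The first approach mentioned above, a plain helper method, might look like the following sketch; build_version is a hypothetical helper which MiniTest will never invoke directly because its name lacks the “test_” prefix:

def test_minor_number
  assert_equal(1, build_version.minor, "minor")
end

private

# Helper methods below the private keyword won't be run as tests.
def build_version (string = "2.1.1")
  Version.new(string)
end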

You might have noticed that the test_version_compare method used a method called refute_equal. Nearly all of the assertion methods have matching refutations which negate their test. For example, the refute_equal method fails if its first two arguments are the same (when compared with the “==” operator), just the opposite of assert_equal. Again, this is mostly useful for producing meaningful failure messages without having to manually write specific details into every test. Each of the refutation methods is documented along with the assertion methods in the MiniTest::Assertions module.

When you have more than a single test file you should find a way to run all of your tests from a single location. Let’s face it, tests which don’t get run aren’t of much use. The most common way to group all of your tests into a single command is by using a tool that is part of every Ruby installation: Rake. Frameworks like Ruby on Rails already provide a configuration file for Rake which can execute all of your tests. But writing one of these files yourself is pretty easy:

require('rake/testtask')

Rake::TestTask.new do |t|
  t.test_files = FileList['test/*_test.rb']
  t.warning = true # Turn on Ruby warnings.
end

Putting these lines into a file named “Rakefile” will give you everything you need to execute all of the test files. Rake will use the shell pattern given to the FileList class to find test files and execute them. You probably noticed that a “Rakefile” is really just Ruby code, which means you don’t have to learn a new language to use Rake. With just these few lines in place you can run “rake test” from your favorite shell and watch all of your tests execute.
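One optional addition (a common convention, not a requirement) is to make the test task the default, so that running a bare rake with no arguments executes the tests:

task(default: :test)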

Things to Remember

• Test methods must begin with the “test_” prefix.

• Keep your test methods short to help with troubleshooting and maintenance.

• Use the most appropriate assertions possible to get better failure messages.

• Assertions and refutations are documented in the MiniTest::Assertions module.

Item 37: Familiarize Yourself with MiniTest Spec Testing

Many testing methodologies have emerged over the past few decades. The Ruby community is mostly divided between two major testing paradigms, unit testing (often associated with test-driven development) and spec testing (more formally referred to as behavioral specification and associated with behavior-driven development). Spec tests can be divided into two styles of behavioral specification, those written in a formal language which is shared by programmers and non-programmers alike, and those written in the syntax of the hosting programming language. The previous item dealt with unit testing using the MiniTest framework; this item focuses on using the spec interface for writing behavioral specifications in Ruby.

Spec testing in MiniTest is done through a thin wrapper around its unit testing interface and mostly provides an alternative style for writing tests. Therefore we won’t go into any specific details about behavior-driven development but instead we’ll focus on this alternate interface for writing unit tests.

As with the previous item, we’ll use the Version class which was developed in Item 13 as the subject for our tests. Let’s jump right in and explore the previously written unit test for the Version class, this time implemented as spec tests:

require('minitest/autorun')

describe(Version) do
  describe("when parsing") do
    before do
      @version = Version.new("10.8.9")
    end

    it("creates three integers") do
      @version.major.must_equal(10)
      @version.minor.must_equal(8)
      @version.patch.must_equal(9)
    end
  end

  describe("when comparing") do
    before do
      @v1 = Version.new("2.1.1")
      @v2 = Version.new("2.3.0")
    end

    it("orders correctly") do
      @v1.wont_equal(@v2)
      @v1.must_be(:<, @v2)
    end
  end
end

Notice that the very first line uses require to load in the entire MiniTest library. It’s the same require from the previous item which means that executing this file with the Ruby interpreter will cause MiniTest to execute all of the tests defined within it. This is the common way to load the MiniTest library regardless of which style of testing you happen to be performing.

I wanted to specifically point out the require line because I’m willing to bet your eyes jumped right to the describe method and that you probably noticed there’s no class definition. Also missing are the top-level assertion methods which we used in the unit testing item. They’ve been replaced by method calls which begin with “must_”, pretty strange indeed. I’ll tackle these one at a time.

Each invocation of the describe method automatically defines a new class which inherits from MiniTest::Spec. While you can certainly define your own class directly like we did when working with the unit testing interface, it’s much more common to use the describe method to define the class indirectly. The argument given to the describe method can be any object. It’s converted into a string and used as an internal label for the new, anonymous class. This label is included with test failure messages in a similar way to how the unit testing interface uses the class name. As you can see from the example, the describe method is given a block which is used to nest additional describe methods for more context in failure messages. The outer describe method is usually given the class name which is being tested and the nested describe methods are given strings which provide human-readable messages about the context in which the test is taking place. You don’t have to nest describe methods; one is enough. You’re also free to add as much context through them as you need.

The before method can be used inside a describe block and acts just like the setup method from MiniTest unit testing. There’s also an after method which corresponds to the unit testing teardown method. Both before and after take blocks which are evaluated before and after each test, respectively. In the example spec test, the before method is used to create Version objects and store them in instance variables so they can be used by each of the tests in the same describe block. And that brings us to the actual testing.

A test is defined using the it method. Like describe, the it method takes a description which is used in failure messages, and a block where the actual testing takes place. In spec testing, assertions have been replaced with some monkey patching which allows you to call methods like must_equal on any object. The assertion methods we used in the unit testing item are still available in spec tests, but in spec tests it’s more common to use what are called “expectations”. These are the familiar assertion methods injected into the Object class with new names and are documented in the MiniTest::Expectations module. The refutations from the unit testing interface are also available in spec testing; those methods begin with “wont_”. In the example above you can see wont_equal which is the same as the refutation refute_equal.
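For orientation, here are a few assertion/expectation pairs (a partial mapping; the MiniTest::Expectations documentation lists them all):

assert_equal(a, b)       -> b.must_equal(a)
refute_equal(a, b)       -> b.wont_equal(a)
assert_nil(a)            -> a.must_be_nil
assert_includes(a, b)    -> a.must_include(b)
assert_raises(E) {...}   -> proc {...}.must_raise(E)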

There’s not much more to it than that. Under the hood, spec testing and unit testing with MiniTest are very similar. Which you choose is mostly a matter of taste. If you like the style of testing offered by the MiniTest spec testing interface, you might want to take a look at the RSpec or the Cucumber Ruby Gems which take behavior-driven development much further than MiniTest.

Things to Remember

• Use the describe method to create a testing class and the it method to define your tests.

• While assertion methods are still available, spec tests generally use expectation methods which are injected into the Object class.

• Expectations are documented in the MiniTest::Expectations module.

Item 38: Simulate Determinism with Mock Objects

Suppose you’ve written a monitoring program which can report the status of servers in production. It does this by sending an HTTP request to each server with some data that the server should echo back. If the HTTP request is successful and the response contains the correct echo data then the server is considered to be alive and healthy. Let’s take a look at the class which is responsible for making and verifying the HTTP request:

require('uri')

class Monitor
  def initialize (server)
    @server = server
  end

  def alive?
    echo = Time.now.to_f.to_s
    response = get(echo)
    response.success? && response.body == echo
  end

  private

  def get (echo)
    url = URI::HTTP.build(host: @server,
                          path: "/echo/#{echo}")

    HTTP.get(url.to_s)
  end
end

There are two interesting parts to this class. The Monitor#get method constructs a URL and then initiates an HTTP request. All the heavy lifting is in the HTTP class which isn’t shown here. The HTTP::get method takes a URL and returns an object which contains all the information we need to know in order to determine the status of the server. The Monitor#alive? method uses this response object to test if the request was successful and whether the correct echo data was included in the response body. So far, so good.
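The HTTP class is deliberately left out of this item. Purely for illustration, a hypothetical version built on the standard library’s Net::HTTP might look like the sketch below; the only contract Monitor relies on is that HTTP::get returns an object responding to success? and body:

require('net/http')

class HTTP
  # Minimal response wrapper exposing just what Monitor needs.
  class Response
    attr_reader(:body)

    def initialize (success, body)
      @success, @body = success, body
    end

    def success?
      @success
    end
  end

  def self.get (url)
    response = Net::HTTP.get_response(URI(url))
    Response.new(response.is_a?(Net::HTTPSuccess), response.body)
  end
end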

Now you’d like to test this Monitor class. You could certainly write a unit test with some assertions to ensure that the Monitor#alive? method works correctly, but what if one of the production servers is offline while the tests are running? Along the same lines, what if you wanted to pretend that one of the servers is offline to make sure that Monitor#alive? returns false?

This sort of thing comes up all the time in testing. If at all possible it’s best to isolate your tests from the non-determinism of the outside world. This allows you to simulate successful and unsuccessful conditions to ensure that your code can correctly handle both. This extends beyond network services. It’s important to test situations like disks filling up and February having an extra day due to a leap year. The way we handle this in our tests is through mocking, and as we’ll see, a pinch of metaprogramming.

With test mocking you can build objects which respond in any way you need. Mock objects (also called test doubles) stand in for code which is non-deterministic but for which you need specific results. The HTTP class is a good candidate for mocking. We don’t want the unit tests to make connections to the production servers every time they run, and we certainly don’t want the tests to depend on the state of the servers either. Instead, we’d like to create HTTP response objects that can simulate healthy and unhealthy servers.

One of the downsides to mocking is that we’re no longer testing just the interface of a class; we’re exploiting specific details about its implementation. This is known as white-box testing (as opposed to black-box testing) and the unit tests will be coupled to a specific implementation of the class. If we mock the response object from the HTTP::get method and then later the Monitor class starts using a different HTTP library, the tests might silently start using the network again instead of a simulated network.

To combat this, mocking libraries support something called expectations. Not only can we simulate specific conditions with mock objects but we can also require that certain methods be called during the lifetime of the mock object. If the expected methods are not invoked during the test, it will be considered a failure. Expectations also allow us to ensure that methods on a mock object are called with specific arguments and return specific results.
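As a quick sketch of argument expectations (the fetch method and its URL are hypothetical), the optional third argument to expect lists the arguments the mocked method must receive:

fetcher = MiniTest::Mock.new

# fetch must be called with exactly this argument, and returns "body".
fetcher.expect(:fetch, "body", ["http://example.com/"])

fetcher.fetch("http://example.com/") # => "body"
fetcher.verify # passes; raises if fetch was never called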

The MiniTest library which we’ve been exploring throughout this chapter comes with a simple and easy to use class for mocking. Let’s use it to simulate successful and unsuccessful HTTP requests to the production servers. We don’t need to do anything special to use the MiniTest mocking class. Assuming we already have a test class similar to the one in Item 36, let’s take a look at a test method which uses mocking:

def test_successful_monitor
  monitor = Monitor.new("example.com")
  response = MiniTest::Mock.new

  monitor.define_singleton_method(:get) do |echo|
    response.expect(:success?, true)
    response.expect(:body, echo)
    response
  end

  assert(monitor.alive?, "should be alive")
  response.verify
end

The MiniTest::Mock::new method returns a blank object which is ready and eager to pretend to be any other object. Since the Monitor#get method deals with all of the HTTP work it’s the ideal method to replace using define_singleton_method. Mock objects respond to the expect method which takes at least two arguments. The first is a symbol which is the name of the method which is expected to be called. The second is the return value for that method. The monkey patched get method creates two expectations on the mocked object: its success? method will return true and its body method will return the echo data. That’s exactly what we need in order to simulate a successful connection and healthy server. After setting up some expectations, the get method returns the mocked response object back to its caller, just like the real Monitor#get method does.

When Monitor#alive? is invoked it will call the get method and then use the mocked response object to determine if it should return true or false. Since the get singleton method always returns a successful response the tests should always pass (as long as alive? is doing the right thing).

Did you notice the call to response.verify at the end of the test method? This is an important step in using MiniTest mock objects. When you call MiniTest::Mock#verify the mock object confirms that all expected methods were called with the correct arguments. If any of the expected methods were not called, the verify method will raise an exception which will cause the tests to fail.

The next test we need to write should simulate a failed HTTP request. This is where the beauty of mocking starts to show. We can create a response that looks like a network failure regardless of the current network status. This test method is even simpler than the successful case:

def test_failed_monitor
  monitor = Monitor.new("example.com")
  response = MiniTest::Mock.new

  monitor.define_singleton_method(:get) do |echo|
    response.expect(:success?, false)
    response
  end

  refute(monitor.alive?, "shouldn't be alive")
  response.verify
end

This test definitely exploits our knowledge of the alive? method’s internal implementation. Unlike with the successful mock object, in the failure case we don’t create an expectation for the body method. If the implementation of the alive? method changed and it started calling the body method on the response object even when success? returned false, this test would raise a NoMethodError explaining that an unmocked method was called. Yet another benefit of explicit expectations on mock objects.

With a little bit of metaprogramming you can do quite a bit with the MiniTest::Mock class. Everything shown in this item will work with all of the MiniTest versions shipped since Ruby 1.9.3. However, MiniTest mocking has undergone some improvements over the years and you may want to consider using the Ruby Gem version over the one packaged with Ruby. That said, it’s a very simple mocking library which isn’t very sophisticated compared to other mocking libraries. One of my favorite libraries is Mocha, which is available as a Ruby Gem. Mocha allows you to do everything which MiniTest::Mock can do and a whole lot more. MiniTest is nice because it’s bundled with Ruby, but if you find yourself outgrowing it there are plenty of other options available as Ruby Gems.
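To give a flavor of Mocha, here’s the failed monitor test again as a hedged sketch using its stubs and returns methods (the require path varies between Mocha releases, and the Monitor class is assumed to be loaded):

require('minitest/autorun')
require('mocha/mini_test') # 'mocha/minitest' in newer Mocha releases

class MonitorTest < MiniTest::Unit::TestCase
  def test_failed_monitor_with_mocha
    monitor = Monitor.new("example.com")

    # stub builds an anonymous object with canned responses; stubs
    # replaces a method for the duration of this one test only.
    monitor.stubs(:get).returns(stub(success?: false))

    refute(monitor.alive?, "shouldn't be alive")
  end
end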

Things to Remember

• Use mocking to isolate tests from the non-determinism of the outside world.

• Be sure to call MiniTest::Mock#verify before the end of the test method.

Item 39: Strive for Effectively Tested Code

Ruby has a lot going for it. Each of us came to the language for different reasons, but I’m willing to bet there’s a lot of overlap between them. For me, the best way I can describe my first experience with Ruby is that it was fun. The syntax was really easy to learn and behind it was a consistent language that, compared to most other languages, didn’t appear to have any dark corners, places where even experts feared to tread. You know, those strange parts of other languages where some odd construct is allowed but produces undefined behavior, something you probably have to end up learning the hard way.

I would venture to say that most Ruby programmers consider working with the language to be a joy. I’ve been to a lot of Ruby get-togethers and the meetings are always electric with enthusiasm from everyone involved. But that doesn’t mean that Ruby doesn’t have a darker side. The same features which make Ruby so great also produce sharp edges that are not always so easy to recognize. Because Ruby is an interpreted language, everything happens at runtime. Even simple mistakes like typos can’t be found without actually executing your code. Consider this:

w = Widget.new
w.seed(:name)

If I were seeing this code for the first time I would wonder if the Widget class actually responds to the seed message. But let’s say you were able to read the definition of the Widget class and there was no seed method to be found. Does that mean that you can’t send the seed message to the Widget object? What if that method was monkey patched into the Widget class or the Kernel module? What if that method only existed after loading a specific library? What I’m getting at is that it’s really hard to tell if Widget#seed is a real method or just a typo. (Could the author have meant to type send instead of seed?) The only way to know for sure is to run the code and see what happens. Let’s look at something similar:

def update (location)
  @status = location.description
end

I could ask all the same questions from earlier about this location object and the description method, but here are a few new ones. What is a location? What if the location object isn’t of the expected class, or worse, what if it’s nil? Is there a way for us to be warned that location might not be what we think it is? The only way to know for sure is to run the code. And so, as Ruby programmers, that’s what we do: we run the code. We just do it in a controlled way, by writing tests which execute our code and hopefully verify that it works correctly. We write tests to make sure we don’t have any syntax errors in our code, to make sure we haven’t accidentally introduced typos, and just as important, to make sure the business logic we’re weaving into the code conforms to its specification. But there’s a lot more we should be doing.

Throughout this chapter we’ve mostly been performing functional testing, that is, confirming that the logic in the code behaves the way we expect it to. The beautiful part about writing tests is that we can run them as often as we like. They give us a baseline to work from so that after changing code we can verify that the changes haven’t altered the code in such a way that it no longer behaves as expected. This only works, however, if we write tests which themselves don’t contain any bugs. I’ve worked with too many projects where the tests all continued to pass even when I sabotaged the code in a way which violated its specification. My point: it’s easy to write functional tests which pass even when they shouldn’t.

I consider this to be one of the major motivating forces behind methodologies like test-driven development (TDD). On the surface it makes sense that writing a test before the implementation leads to better code and better tests. But in my experience it’s still really easy to write tests that miss the mark and pass when they shouldn’t. And with something like TDD, when the tests start to pass, it’s supposed to be a signal that you’re done and can move on to the next feature. Clearly, writing tests that actually verify that your code works as expected isn’t as easy as it might first appear. Let’s look closer at some of the ways our tests can be inadequate and what we can do about that.

A common mistake found in all forms of testing is only performing happy path testing. This is especially common if you’re writing tests for code which you recently authored. Happy path testing is when you carefully establish all of the preconditions for the code you’re testing and then provide only valid inputs to it. You’re basically asking the question: “Does this code work in a perfect world?” But the world isn’t perfect. Customers feed invalid data into our applications. We forget to validate an input field and a sloppy database schema allows a NULL value to slip in where it shouldn’t. Testing the happy path is very valuable but doesn’t expose any of these bugs because we’ve tailored tests to only exercise the code in expected ways. One way to fix this is with exception path testing, that is, sending in various inputs and ensuring that all code branches are executed. This can become very complicated, very fast. Fortunately, there are tools which can help us, namely, fuzz testing and property testing.
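Before turning to those tools, here’s a tiny hand-written exception path test as a sketch. It assumes a Version class that validates its input and raises ArgumentError for malformed strings; adjust the expected exception to whatever your implementation actually guarantees:

def test_rejects_malformed_version_string
  assert_raises(ArgumentError) do
    Version.new("not a version")
  end
end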

Fuzz testing and property testing are related but have different goals. The basic idea is that we want to feed many different types of data into our code and ensure that it holds up. Traditionally, fuzz testing was focused on security. Sending lots of random data into a program or a specific method is a great way to find out if it crashes or exposes a security hole. With fuzzing our focus moves away from the idea of passing and failing and instead we look to see if we can cause some code to crash or raise an unexpected exception. The process usually involves a generator that produces random values and a piece of code we want to test. For example, suppose we want to know if the URI::HTTP::build method would crash if given completely random and invalid host names. To answer that question we could turn to FuzzBert, one of the fuzzing libraries available for Ruby as a Ruby Gem:

require('fuzzbert')
require('uri')

fuzz('URI::HTTP::build') do
  data("random server names") do
    FuzzBert::Generators.random
  end

  deploy do |data|
    URI::HTTP.build(host: data, path: '/')
  end
end

This fuzz test is made up of two parts: the generator and the test. The data block is used to configure the random data we want to use as host names. There are several types of generators available in FuzzBert; the random generator produces completely random values. You’re not limited to using a single data block; you can define as many as you want. This allows you to feed various types of random data into the code you’re testing. The actual testing is done in the deploy block. This block is invoked over and over again and is yielded values from the generator. Notice that we don’t test the return value from the URI::HTTP::build method; we’re only concerned with it raising an exception or crashing the test program. FuzzBert goes to great lengths to run the fuzz testing in a separate process in order to detect if the fuzzing caused a crash.

Fuzz testing is definitely interesting but it’s not practical for everyday use. Part of this is because tools like FuzzBert run your fuzz tests continuously until you manually terminate them. In order for fuzz tests to be effective you should leave them running for several days. There are configuration options available to limit how long FuzzBert executes the tests, but the longer you let it run the more confident you can be that your code doesn’t contain any crashing bugs.

Another way to exercise the happy and exception paths of your code is through property-based testing. Like fuzz testing, property testing involves sending random inputs into your code, but with the added test that the code should behave in a specified way. It is also much more practical than fuzz testing because the number of tests is finite and they can be run along with your automated unit tests. To explore property testing let’s turn our attention back to Item 13 where we wrote a Version class for parsing a version number out of a string. Previously, we focused on comparing different version numbers, but for now we’ll only concern ourselves with parsing the version string. Here’s a simplified Version class with an added method, to_s:

class Version
  def initialize (version)
    @major, @minor, @patch =
      version.split('.').map(&:to_i)
  end

  def to_s
    [@major, @minor, @patch].join('.')
  end
end

When performing property-based testing you define properties that your code should satisfy and then a tool generates a large number of test cases in an attempt to falsify the property. One easy way to come up with properties is to think of inverses. For the Version class, the initialize method turns a version string into three integers. The inverse of that is the to_s method which takes those same three integers and turns them back into a version string. Therefore one property of the Version class is that the string returned from the to_s method should have the same content as the string given to initialize. If they’re not the same then there’s a bug in either initialize or to_s. Let’s use the MrProper Ruby Gem to define and test this property:

require('mrproper')

properties("Version") do
  data([Integer, Integer, Integer])

  property("new(str).to_s == str") do |data|
    str = data.join('.')
    assert_equal(str, Version.new(str).to_s)
  end
end

The data method is a lot like the data block we used with FuzzBert. It tells MrProper the type of random data we want to generate for the test cases. Since the property we’re testing requires three integers we can give the data method an array with three elements, all of which are the Integer class. MrProper will then generate random arrays of three integers and feed them into each of the property blocks. Since the properties are turned into MiniTest unit tests you can use all of the assertion methods you’ve come to know and love. For the Version class, the property block turns the three integers into a version string like “2.3.4”. It then confirms that the to_s method can produce the same string. Property testing can help expose any assumptions that were written into your code. For example, if you assumed that a major version number would never be bigger than 2 digits then the MrProper example would fail by raising a MrProper::FalsableProperty exception.

Even after you’ve made an effort to test the happy and exception paths of your code, how do you know if you’ve succeeded? That’s where test coverage tools like the SimpleCov Ruby Gem come in. Code coverage tools generate reports that tell you which lines of your code were actually executed while the tests were running. SimpleCov produces detailed HTML reports which include all of the source code from your project with highlighting so that executed lines have a green background and lines which were not executed have a red background. Test code coverage analysis can be useful for guiding you while writing tests, specifically for making sure you’ve exercised all branches of a particularly complex method. But it can also give you a false sense of security since executed code isn’t necessarily correct code. You still need to make sure you’re writing effective tests for the branches you want to execute.
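For reference, a minimal setup is typically just two lines at the very top of your test helper, before any application code is loaded (a sketch; see the SimpleCov README for project-specific configuration):

require('simplecov')
SimpleCov.start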

Using the SimpleCov tool really is that simple, so I won’t go into it further here. The website contains all the information you’ll need for using it with any Ruby project, including Ruby on Rails applications. As long as you keep its downsides in mind and don’t get lured into a false sense of security, coverage tools can be very, very helpful when writing tests.

Regardless of the tools you choose to use for testing, there are a few general rules you should follow. The most important is to write tests as soon as possible. Waiting until you’re nearing the completion of a project is much too late. It makes testing a separate project in and of itself and in the meantime you’ve likely forgotten some important properties which you wanted to test. It’s much easier to test a feature while you’re writing it.

When you do write tests, make sure they fail. I like to comment out a critical section of code and check to see if the tests for it start failing. If they don’t, you’ve got a major problem with your tests. This goes hand in hand with bug finding. Before you start to search for the root cause of a bug, write a test which fails because of it. Reproducing the bug is the first step to fixing it, and having a test specifically for that bug means it should never return after it has been fixed.

Finally, automate your tests as much as possible. The best tests are useless if you don’t run them. Some people like to configure their source code control system to reject a commit if the tests don’t pass. You can also use a continuous integration tool to run your tests automatically when new commits are pushed up to a central repository. It can be fun to shame the developer who pushed changes which broke the tests.

You can also turn to tools like ZenTest which watch your source code and automatically run the project’s tests when a source file changes. Whichever method you choose, make sure you’re actually running your tests and running them often. They’re your last defense against preventable bugs showing up in production.

Things to Remember

• Use fuzzing and property-based testing tools to help exercise both the happy and exception paths of your code.

• Test code coverage can give you a false sense of security since executed code isn’t necessarily correct code.

• It’s much easier to test a feature while you’re writing it.

• Before you start to search for the root cause of a bug, write a test which fails because of it.

• Automate your tests as much as possible.