
Selenium Design Patterns and Best Practices (2014)

Chapter 4. Data-driven Testing

"Errors using inadequate data are much less than those using no data at all."

--Charles Babbage

Test data is a crucial part of automated tests; the old truism garbage in, garbage out applies especially well in this case. Tests will feed some data into our Turing machine and compare the output with the expectations. In a manner of speaking, a perfect test is a little insane; it will keep doing the same things over and over while expecting a different result.

As automated test developers, our goal is to make the tests fulfill their destiny of endlessly repeating identical steps, forever. The only way to accomplish this goal is to have as much control as possible over every single piece of data our application consumes. Test data is not just the text our test will type into the purchase form; test data is the complete state of the whole environment we are testing. In this chapter, we will take control of the environment we are testing by using these concepts:

· Fixtures

· Stubs

· YAML

· JSON

· Using API endpoints

· Generating test cases with a loop

· The default values pattern

· The faker library

Data relevance versus data accessibility

Controlling the test data, or the state of our environment, is a continuous battle of how relevant our data is versus how easily accessible it is. Relevance is a scale of how closely our environment mimics our production environment. Accessibility is a scale of how easy it is to control the data in a given environment. Each of the environments we will test will fall somewhere in between these two scales. The following graph is a rough representation of this idea:

[Graph: data relevance versus data accessibility]

In this graph, points higher up on the y axis mimic production data most closely. Conversely, lower points do not resemble production at all. The x axis represents the ability to control our data and environment, with the rightmost point having full control and the leftmost point having close to zero control.

Testing our application on localhost will yield some of the most consistent results, as we have full and total control over every variable. But this comes at a cost; our tests may miss bugs, since we might not be able to use real production data or simulate production-like load on the website. Running tests against the production environment is generally frowned upon, since our tests will fill up the database with fake usernames. Worse still, they might be making test purchases in your store!

Never write a Selenium test to make real currency purchases on your production website; if you choose to disobey this rule, at least make sure not to leave your personal credit card information in the test for all to see! We will now try to resolve this delicate balance of data we can and cannot control between the different environments. A very good starting point is to extract as much of the test data out of the test implementation as possible.

Hardcoding input data

Hardcoding test data is just like hardcoding anything else in software: a quick and dirty fix that will forever haunt your nightmares. In Chapter 3, Refactoring Tests, we refactored some bad practices out of our tests. We, however, left the test data hardcoded in the tests. Let's take a look at how each piece of test data can make our life difficult:

· URL of the website: Like most web projects, we have several testing environments: staging, localhost, development, and so on. Our tests have the URL of the application hardcoded; thus, without changing the test code, we cannot have the tests execute on both the staging and production environments.

· Hardcoded product: Typically, different test environments do not share the same identical data such as products. Furthermore, most environments will only have a subset of the products available in production. Test environments in particular will have products that never did and never will exist in production.

· Private user data: Due to legal reasons, our test environment should never contain user data from the production environment. This is doubly true for sensitive user information, such as credit card numbers and e-mails.

Our test should be able to, within reason, run on any environment we have. But this is not possible if every single piece of data we use is hardcoded for a test environment.

Hiding test data from tests

The act of hiding data from tests sounds counterintuitive at first; the tests need to do things with the data, after all. To make our tests flexible enough to work on any test environment we want, we will need to provide them with data applicable to the said environment. However, the test itself does not need to know what data we are using. When the data is properly hidden, the test does not care what username and password are used; the information is fed into the test from the outside and stored in a variable.

To start hiding our data, we will need a single place that stores data and provides it to the test on request. For this, we will create a new class called TestData in the test_data.rb file. Let's create this file and add an empty class inside it:

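A minimal sketch of this file:

# test_data.rb
class TestData
end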

The first variable we want to move into the TestData class is the URL of the website we are testing. It is the simplest and fastest way to start adding this functionality. Let's take a look at the get_base_url method we created:

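A sketch of what this method might look like; the URL is a stand-in for your own test environment:

# test_data.rb
class TestData
  # self turns this into a class (static) method, so it can be
  # called as TestData.get_base_url without creating an instance
  def self.get_base_url
    'http://awful-valentine.com'
  end
end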

Now that we have a simple way to get the URL of our test environment, all we have to do is call TestData.get_base_url from anywhere in the test. We are ready to hide the test environment URL from the tests.

Note

The naming convention of the get_base_url method is slightly different from before; it is now prefixed with the self keyword. By adding self in front of a method name, we turn it into a class method (static method), which allows us to call it directly without first creating a new instance of the TestData object.

Let's modify the product_review_test.rb file; we will need to tell our test to include the code from test_data.rb, making the TestData class accessible.

Note

The File.dirname(__FILE__) call is used to locate the current relative directory of our test file, and File.join is used to join the relative path with a file called test_data.

The following snippet shows how to require another Ruby file such as test_data.rb:

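A sketch of the require statement:

# product_review_test.rb
require File.join(File.dirname(__FILE__), 'test_data')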

Part of the refactoring effort in Chapter 3, Refactoring Tests, was to create the navigate_to_homepage method. Both of the tests in product_review_test.rb use this method, so we only need to modify our code in one place to start using the TestData class. Without the DRY principle, we would have to locate every test that navigated to the home page and modify the URL. Instead, our modification simply looks like this:

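A sketch of the updated method, assuming the WebDriver instance is stored in @selenium as in the previous chapters:

def navigate_to_homepage
  # the URL now comes from TestData instead of being hardcoded
  @selenium.get(TestData.get_base_url)
end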

We have successfully obfuscated the URL of the environment from the test. As always, when refactoring, let's run our test and verify we did not break our tests. Our refactored tests yield the following results:

[Screenshot: test run output showing the refactored tests passing]

Choosing the test environment

Now that the environment URL is hidden from the test, switching between the staging, test, and production environments will become easy. By using environment variables, we can control a lot of the test data at runtime.

Note

Environment variables are dynamically named values at the operating system level. Using environment variables, the application's behavior can be easily altered. To set an environment variable through the command-line interface on Windows, we run the following command:

set environment=staging

On UNIX-based systems, we use the export command to set environment variables, as follows:

export environment=production

Let's create a method our tests will use to retrieve the current test environment. Let's take a look at this method in the TestData class:

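A sketch of the method:

def self.get_environment
  # fall back to the safe default when the variable is not set
  ENV['environment'] || 'test'
end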

The get_environment method uses the ENV['environment'] call to see whether the environment variable is set on the current system. If it's not set, our environment defaults to test; this way, we are never accidentally testing in the production environment.

Tip

Always have safety measures in place to prevent production testing with automated tests. Having a localhost or test environment is the best default value.

Next, let's update our get_base_url method to hold every test environment we have in a hash. As you can see in the following snippet, the hash contains key-value pairs of environment names and the URLs they use, and we use the get_environment method to choose the appropriate URL:

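A sketch, with stand-in URLs, since for this book every environment points at the same website:

def self.get_base_url
  base_urls = {
    'test' => 'http://test.awful-valentine.com',
    'staging' => 'http://staging.awful-valentine.com',
    'production' => 'http://awful-valentine.com'
  }
  # pick the URL that matches the current environment
  base_urls[get_environment]
end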

Let's set the environment in the terminal and run our tests again to make sure everything is still passing. The following screenshot demonstrates the test run output; the underlined section shows us how to set a different environment:

[Screenshot: test run output, with the command that sets the environment variable underlined]

And that's it! Our tests can now run on three different environments, and all we have to do is specify the test environment we want our tests to run against. Now that the URL of the website is no longer hardcoded and can be dynamically specified at runtime, we can start migrating test data into fixtures.

Note

For the purpose of this book, our test, staging, and production environments are actually the same, but we will pretend that different web addresses go to a different environment.

Introducing test fixtures

In software development, test fixtures (fixtures) is a term used to describe any test data that lives outside of a particular test and is used to set the application to a known, fixed state. Fixtures give us a constant against which to compare individual test runs.

Fixtures work best in any environment that is high on the accessibility scale. If we are testing on the localhost or in the CI environment, we can start with a completely empty test database and fill it up with fixture data. When the tests are ready to run, the tests will know the exact state of the application, how many registered users we have, prices of every product, and so on. Let's take a look at a sample fixture, which was used to create a product on our website:

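An illustrative sketch of such a fixture; the field names and values are stand-ins:

our_love_is_special:
  name: Our Love is Special
  permalink: /product/our-love-is-special
  description: A one-of-a-kind gift for a one-of-a-kind valentine.
  price: '49.99'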

A script parsed the YAML fixture file and then inserted the YAML data into the website's database. As you can see, our fixtures are really simple and easy to read. This is a great advantage of using the YAML format for data, because it is easy for both humans and machines to read.

Note

YAML is an acronym for YAML Ain't Markup Language. Unlike the XML and Comma Separated Value (CSV) formats, YAML tries to display data in a manner that is as readable as possible.

Parsing fixture data

Parsing YAML fixtures in Ruby is surprisingly simple. After telling Ruby where the fixture file is, it will do a lot of work for us; the end result is a simple hash filled with data.

Tip

Parsing YAML, or any other data representation, differs from one programming language to another. Since programming idioms vary so greatly between languages, follow the best standards for the toolset you have at hand.

Since the test fixture file is quite large, we will need to download it from http://awful-valentine.com/code/chapter-4 and save it as product_fixtures.yml to continue our work. After the fixture file has been downloaded, let's modify the test_data.rb file to look like this:

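A sketch of the updated class, assuming product_fixtures.yml lives next to test_data.rb:

# test_data.rb
require 'yaml'

class TestData
  # ... existing methods ...

  def self.get_product_fixtures
    YAML.load_file(File.join(File.dirname(__FILE__), 'product_fixtures.yml'))
  end
end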

The TestData class has two modifications; the first is a one-line change, where we require a YAML parsing library in our class. The second modification is the addition of the get_product_fixtures method, which reads the contents of product_fixtures.yml and returns the parsed file as a large hash.

Using fixture data in the tests

In Chapter 3, Refactoring Tests, we created the select_desired_product_on_homepage method to click on the MORE INFO button for a given product. The method looks like this:

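A sketch of the method as it stood, with an illustrative hardcoded locator:

def select_desired_product_on_homepage
  # the HREF of the MORE INFO button is hardcoded here
  @selenium.find_element(:css, "a[href='/product/our-love-is-special']").click
end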

As explained previously, this method chooses a product to review based on the HREF attribute of the MORE INFO button. When inspecting the fixture data, it is easy to find the permalink URL for every single product offered on the website. The permalink is a permanent, static, and unique link for any given page. Let's take a look at the permalink in fixtures:

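An illustrative fixture excerpt:

our_love_is_special:
  permalink: /product/our-love-is-special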

Because the fixtures give us the permalink of every product available, we no longer need to have the HREF attribute hardcoded. Let's modify the select_desired_product_on_homepage method to accept different permalinks, as shown in the following snippet:

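A sketch of the parameterized method:

def select_desired_product_on_homepage(product_permalink)
  # the permalink is now supplied by the caller
  @selenium.find_element(:css, "a[href='#{product_permalink}']").click
end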

Now, let's update the setup method to get the product permalink from TestData, and store it as a @product_permalink instance variable, as follows:

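A sketch of the updated setup method; the fixture key is illustrative:

def setup
  @selenium = Selenium::WebDriver.for :firefox
  @product_permalink = TestData.get_product_fixtures['our_love_is_special']['permalink']
end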

The final change is to modify the generate_new_product_review method, so that it uses the @product_permalink variable, as seen here:

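A sketch of the change; the rest of the method's body stays as it was:

def generate_new_product_review
  navigate_to_homepage
  select_desired_product_on_homepage(@product_permalink)
  # ... fill out and submit the review form ...
end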

Once more, let's rerun our tests to make sure everything is still passing. The test results should look something like this:

[Screenshot: test run output showing both tests passing]

Using fixtures to validate products

Before we started to refactor tests in Chapter 3, Refactoring Tests, our tests were doing too much. Not only were they trying to create a new product review, but they were also trying to verify the information displayed on the page for a given product. We removed that assertion with a promise to create a test whose only job would be to verify products. Now that we have access to the product fixtures, we can write a test for every product sold on our website. Let's create product_validation_test.rb to do just that. The contents of the file are as follows:

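A sketch of the new file's skeleton, following the structure of our earlier tests:

# product_validation_test.rb
require 'rubygems'
require 'selenium-webdriver'
require 'test/unit'
require File.join(File.dirname(__FILE__), 'test_data')

class ProductValidationTest < Test::Unit::TestCase
  def setup
    @selenium = Selenium::WebDriver.for :firefox
  end

  def teardown
    @selenium.quit
  end

  def test_validate_our_love_is_special
    # navigation and assertions are added in the next steps
  end
end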

So far, everything in our new file should look familiar to the previous tests we have written. Since we have the permalink for the product being tested, we do not need to land on the home page and click on the MORE INFO button for that product. Instead, we will have our test go directly to the product page.

Tip

Since we navigate directly to the product's page, we are making our tests more resilient. Even if the home page of the website does not load properly, this test will be able to check individual products.

In the following code, we store the fixture for the tested product in the product_info variable and then combine it with TestData.get_base_url to navigate directly to the product's page:

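A sketch of these two lines; the fixture key is illustrative:

product_info = TestData.get_product_fixtures['our_love_is_special']
# skip the home page and go straight to the product page
@selenium.get(TestData.get_base_url + product_info['permalink'])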

Once the product page loads, we can start validating that everything was rendered correctly. Let's add four assertions to our test, shown as follows, and understand what each one does:

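A sketch of the four assertions; the description and price selectors are illustrative:

assert_equal(product_info['name'],
             @selenium.find_element(:css, 'div.category-title').text)
assert_equal(TestData.get_base_url + product_info['permalink'],
             @selenium.current_url)
assert_equal(product_info['description'],
             @selenium.find_element(:css, '#main-products .content').text)
assert_equal(product_info['price'],
             @selenium.find_element(:css, '.price').text.gsub(/[\n$]/, ''))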

Our first assertion compares the product's name from the fixtures against what is displayed in the DIV with the category-title class. The second assertion compares the current URL of the product page against the URL generated from the fixtures. The third verifies the product description, and the fourth verifies the product's price.

Note

When comparing the product price, we used the gsub method to find and delete any instance of the newline character (\n) and the dollar sign ($).

Let's see the test result of our new test case:

[Screenshot: test run output for the new product validation test]

Before we move on to the next section, let's refactor a little. Since searching for individual elements on the page can look cryptic, let's move these lookups out into well-named methods that are easy to understand. Our refactored code will look like this:

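A sketch of the extracted helpers; the method names and selectors are illustrative:

def displayed_product_title
  @selenium.find_element(:css, 'div.category-title').text
end

def displayed_product_description
  @selenium.find_element(:css, '#main-products .content').text
end

def displayed_product_price
  # strip newlines and the dollar sign before comparing
  @selenium.find_element(:css, '.price').text.gsub(/[\n$]/, '')
end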

Is test refactoring becoming a habit yet? It should be! Our goal is to constantly improve the quality of our tests; even if it is something as simple as renaming a method so it better explains its actions.

Tip

This refactoring might seem unnecessary at first, but six months from now when we are updating this test to accommodate new functionalities, will you remember what #main-products .content is?

Testing the remaining products

We are currently at a crossroads and need to decide how to proceed with adding tests for the remaining products. We can create a test for every single product, or loop through the fixtures and programmatically test every product. Technically, there is no right or wrong choice here; both options have advantages and disadvantages. When faced with a similar situation, we should weigh the pros and cons of each approach and select the right answer for the given moment. Let's compare the multiple test model to the loop model.

Multiple test models

There are several advantages to writing a test for every single product in our store; let's take a look at some of them:

· Clear test: Each test clearly describes the product it is testing. At a glance, we can tell how many products we are testing and how long the test run will take.

· Clear test failure: When a test fails for any product, we will know right away which product it was by simply looking at the name of the test. A clear test failure should never be underestimated, especially if the test suite has 1,000 similar-looking tests.

· Parallel execution: When we have many individual tests, we can execute them all in parallel.

Since every coin has two sides, this approach has some disadvantages too. Let's take a look at those:

· More verbose: If we are only going to test a handful of products, this approach is perfectly good. However, if we were to test 30 products, our test file would grow in size rapidly.

· Duplication: Each test is identical to every other test, the only difference being the product it is testing. Managing this many tests can get tiresome quickly.

A single test model

Now that we have weighed the advantages and disadvantages of the verbose option, let's take a look at the idea of having a single test that loops through the products. It has some very distinct advantages:

· Less duplication: This one is obvious; a single test is always cleaner than two dozen duplicates.

· Automatic catalog updates: If our product catalog changes in the future and we add or remove items in the fixture, our test will follow suit. There's no need to add or delete tests at all; out of sight and out of mind!

· Faster runtime: Having a single test means that we will not have to restart our web browser every time a test is completed. These restarts will save a significant portion of the runtime compared to multiple tests.

Tip

If we run our test suite in parallel, this argument becomes weaker.

However, there are some disadvantages to looping through all of the products:

· Obfuscation: Every time a new product is tested, there is no clear separation between the products. We will have to add some very clear messages to our tests to make sure that we can quickly find which product was not meeting our expectations.

· Testing all the products: The old proverb goes, "When you are a hammer, everything looks like a nail." Just because we can test every single product, should we really do it? Generally, if three or four of our products are rendered correctly, chances are the rest will be rendered in a similar fashion. Automatically testing every new product added can be a waste of resources; if we have to write a test for each product, we might get tired and stop adding new ones after the fifth or sixth test we write.

· Single test runtime increase: If our test is very involved, we could be looking at a 20-minute runtime to cover several products. This might not seem like a big problem at first, but let's pretend that we want to reduce the test execution time by running our tests in parallel. By running seven tests at the same time, we can reduce the suite runtime to 10 minutes, except for that one 20-minute test; our test suite is only as fast as its slowest running test.

Implementing a single test model

At some point, we all have to make a decision between having many tests that are easy to debug and having one complicated test. Make sure you do not rush into anything without considering the consequences of every approach. Since copying and pasting half a dozen new tests does not take much imagination, we will implement a single complicated test here.

Let's start by renaming test_validate_our_love_is_special to something a little more generic, such as test_all_products_with_fixtures. Next, we create a loop to go through all of the parsed fixtures; the loop looks like this:

TestData.get_product_fixtures.values.each

Now, every time the loop moves to the next product from the fixtures file, it will store the current fixture in the product_info variable. The refactored test now looks like this:

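A sketch of the looping test, reusing the helper methods from earlier:

def test_all_products_with_fixtures
  TestData.get_product_fixtures.values.each do |product_info|
    @selenium.get(TestData.get_base_url + product_info['permalink'])
    assert_equal(product_info['name'], displayed_product_title)
    assert_equal(TestData.get_base_url + product_info['permalink'], @selenium.current_url)
    assert_equal(product_info['description'], displayed_product_description)
    assert_equal(product_info['price'], displayed_product_price)
  end
end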

After we run the new test, we will see right away that every single product page is now visited in the browser. Also, notice that the assertion count went up to 24:

[Screenshot: test run output showing the assertion count at 24]

Making test failures more expressive

Sadly, by using the looping approach, we gave up expressive test failures. If we had an individual test per product, we could look at the test name and instantly know which product failed. In the current state, test failures will look like this:

[Screenshot: a failing test with a long expected-versus-actual string and no indication of which product failed]

Aside from the very long string of text that shows the difference between expectation and reality, we have very little clue about which product is not being rendered properly. We now have to open up the fixture file and find the description that matches our failure, so that we can understand why our test failed.

There is a way to make our tests more descriptive; we do this by passing a third parameter into the assert_equal method. The third parameter can be an arbitrary string that will be displayed on a failed assertion. Let's store some information about the product in the failure_info variable, like this:

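A sketch of the variable; the exact wording of the message is illustrative:

failure_info = "Failed while testing product: #{product_info['name']} (#{product_info['permalink']})"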

Our assertions now accept the failure_info parameter, and they look like this:

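A sketch of the updated assertions:

assert_equal(product_info['name'], displayed_product_title, failure_info)
assert_equal(TestData.get_base_url + product_info['permalink'], @selenium.current_url, failure_info)
assert_equal(product_info['description'], displayed_product_description, failure_info)
assert_equal(product_info['price'], displayed_product_price, failure_info)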

The result of this simple modification is that our test failures are a lot simpler to understand. Take a look at the new test failure message:

[Screenshot: a failing test whose message now includes the product information from failure_info]

The additional information in a test failure does not take a lot of effort to implement, which makes debugging failures so much simpler for everyone involved.

Tip

Err on the side of having your test provide too much information; having too little information is always regrettable.

Using an API as a source of fixture data

Using fixtures for test data is great for environments that are highly accessible. If we need to test something other than the localhost or CI environment, where we cannot easily load fixture data into the database, we will have to use a different approach. The trick is to utilize any and all the resources we can find to make testing possible.

One of these resources is a public-facing web API. If your website has a native mobile application or uses a lot of AJAX to load content, then our tests will have some data to work with. All we have to do is interrogate the API to get an idea of the state of the application.

A common API endpoint for most e-commerce websites is a list of all the available products. This list is used by mobile phones to display what a user can purchase. Our website stores the product catalog at http://api.awful-valentine.com/; if you navigate to this URL in your browser, you will see something like this:

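An illustrative excerpt of the kind of JSON the endpoint returns, mirroring the fixture data:

{
  "our_love_is_special": {
    "name": "Our Love is Special",
    "permalink": "/product/our-love-is-special",
    "description": "A one-of-a-kind gift for a one-of-a-kind valentine.",
    "price": "49.99"
  }
}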

Our API endpoint returns a product catalog in the JavaScript Object Notation (JSON) format. If we compare our test fixtures to the returned JSON, we will find a lot of similarities; in fact, the data is identical! By consuming the product catalog, we are able to create a test similar to the one we just made. Let's begin by adding two more libraries to the test_data.rb file, shown here:

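A sketch of the new require statements:

# test_data.rb
require 'yaml'
require 'net/http'
require 'json'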

We will be using these libraries to make an HTTP request against the website's API endpoint. Then, we will use the json library to parse the received data and use it in the test. Now we are going to add a method to the TestData class called get_products_from_api. It looks like this:

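A sketch of the method:

def self.get_products_from_api
  uri = URI('http://api.awful-valentine.com/')
  # returns the response body as a string of unparsed JSON
  response = Net::HTTP.get(uri)
  JSON.parse(response)
end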

Let's take a look at the individual things happening in this method. First, we create a URI object from a string. We pass this object to the Net::HTTP.get method call and get a string of unparsed JSON in return. Finally, we use the JSON.parse method to parse the string and return the value as a hash. Now that we have a way to get the product catalog from the environment, let's create a test to take advantage of this data.

We will add a new test called test_all_products_with_api_response, and it will be 98 percent identical to test_all_products_with_fixtures. Let's compare the two tests; the only major difference is the source of the product data:

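A sketch of the new test; the loop body is identical to the fixtures version:

def test_all_products_with_api_response
  TestData.get_products_from_api.values.each do |product_info|
    # ... the same navigation and assertions as test_all_products_with_fixtures ...
  end
end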

Let's save and run both tests. Our test results should now look like this:

[Screenshot: test run output showing both the fixture-based and the API-based tests passing]

As always, when we get our tests running, it's time to refactor the duplication. Let's take a look at the final product; all of it should look familiar and make sense:

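A sketch of the refactored result, with the shared loop body extracted into a helper; the validate_product name is illustrative:

def test_all_products_with_fixtures
  TestData.get_product_fixtures.values.each { |product_info| validate_product(product_info) }
end

def test_all_products_with_api_response
  TestData.get_products_from_api.values.each { |product_info| validate_product(product_info) }
end

def validate_product(product_info)
  failure_info = "Failed while testing product: #{product_info['name']} (#{product_info['permalink']})"
  @selenium.get(TestData.get_base_url + product_info['permalink'])
  assert_equal(product_info['name'], displayed_product_title, failure_info)
  assert_equal(TestData.get_base_url + product_info['permalink'], @selenium.current_url, failure_info)
  assert_equal(product_info['description'], displayed_product_description, failure_info)
  assert_equal(product_info['price'], displayed_product_price, failure_info)
end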

Using data stubs

Modern websites are incredibly complicated and combine many external services. Most e-commerce websites do not actually process the credit cards themselves. Instead, the payment information is passed on to the bank, and the bank tells the website whether the transaction is successful.

Getting all of the external services running is a difficult task, especially if the service our website is using is also being developed at the same time as our project. We cannot afford to wait until all of the services are completely written and integrated to start writing our tests. So, we have to stub some of the services until they are fully developed.

Stubs are premade responses to our application's requests. Stub responses are formatted and look like real responses, but they don't actually do any work. For example, the API endpoint we used to write our test does not actually communicate with the database. Instead, it responds with a premade JSON file. Stubbing the application is a great way to set up a test environment for automated tests, and should be used as much as possible when running automated tests in the CI system.

The default values pattern

Filling out form information is one of the core principles of writing tests with Selenium. The test will need to register a new user, or make a purchase, or log in to an account at some point. The default values pattern aims to extract any data that our test does not actually care about. Tests should not have to know what the username and password are for every test user on every environment we have. Instead, it should rely on defaults that are appropriate for the current state.

Advantages of the default values pattern

Isolating irrelevant data from the test implementation has many advantages:

· Need to know basis: If our test is testing whether a purchase can be made with a credit card, the test does not need to know which credit card was used. However, if our test needs to check whether a certain credit card is accepted, then the card number is known to the test.

· Simpler tests: Extracting all of the unnecessary data out of the test implementation makes the test easier to read and understand.

· More focus: While writing the test, it is easy to get distracted by data that is used in the test setup. Having all of the setup data handed to us as we are writing the test allows us to concentrate on the test implementation.

· Overwrite only important values: If we are testing the registration flow, we only care that the username is unique. The default values pattern allows you to provide just the important values while reusing the defaults.

Disadvantages of the default values pattern

There aren't many disadvantages to the default values pattern, but here are the top two:

· Implementing overwrite: Depending on the programming language and framework used to write the test, we might need to implement the data merge/overwrite logic ourselves.

· Homogeneous data: Having static default data might not always be preferable. In the comment-creation test, we had to add timestamps to the comment string to make the website accept our new comments. Using a library like faker can alleviate this pain point.

Merging the default values pattern and the faker library

Every test should strive to use input data that is as close to real life scenarios as possible. If our test always uses test_selenium_user_34256 as the user's first and last name, we are not using our application in the same manner as our customers. For example, how will our application handle having a title in the name such as Mr., Sr., or PhD?

Faker is a library that was written to solve these types of scenarios. It has been ported into many programming languages including Perl, Java, and Ruby. For the rest of this chapter, we will implement the default values pattern and integrate the faker library into our test to help us create default values that mimic real world scenarios.

Implementing faker methods

Let's install the faker gem and implement several methods that will be used for the new comment form functionalities. We need to install the faker gem, since it is not shipped with Ruby; run the following command in your terminal:

gem install faker

Now, we are ready to modify the test_data.rb file. As always, we will require the faker library at the top of the file. Then, we will add a couple of methods to get some life-like data for our tests. The code for the TestData class, with the additions, looks like this:

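A sketch of the additions; apart from get_buzzword, the method names and the specific faker generators are illustrative:

# test_data.rb
require 'faker'

class TestData
  # ... existing methods ...

  def self.get_name
    Faker::Name.name
  end

  def self.get_email
    Faker::Internet.email
  end

  def self.get_website
    Faker::Internet.url
  end

  def self.get_buzzword
    Faker::Company.catch_phrase
  end
end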

All of these faker methods should be self-explanatory, except for get_buzzword. This method is used to generate a catchphrase that some Fortune 500 companies would use in their advertisements. Since these phrases are a collection of randomly pieced-together buzzwords, they will most likely be unique enough to be used in the comment section of our reviews. Let's create a method that ties all of these items together for us; we will call it get_comment_form_values and it will look like this:

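A sketch of the method; the hash keys other than :name are illustrative:

def self.get_comment_form_values(overwrites = {})
  form_values = {
    :name => get_name,
    :email => get_email,
    :website => get_website,
    :comment => get_buzzword
  }
  # values passed in by the caller win over the generated defaults
  form_values.merge(overwrites)
end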

This method is not very complicated; all it does is create a new hash and populate it with faker data. Here are a couple of key parts:

· This method accepts an optional parameter, overwrites, which defaults to an empty hash. This will allow us to overwrite any field value at will. Also, if we are so inclined, we can add a new key and value that is not set by this method. This makes the get_comment_form_values method incredibly flexible.

· After the hash with new fake data is created, we overwrite the generated values with values from overwrites by using the merge(overwrites) method.

Every time the get_comment_form_values method is executed, it will create beautifully nonsensical but real-world looking data. If we are to invoke this method in irb, we will get this output:

[Screenshot: irb session pretty-printing the hash returned by TestData.get_comment_form_values]

Note

The pp method call before the TestData.get_comment_form_values call is Ruby shorthand for Pretty Print. This allows us to see each value of the hash on a new line instead of a single long string.

Updating the comment test to use default values

We need to revisit product_review_test.rb to take advantage of the default values we just implemented in the TestData class. The implementation is actually quite simple and fast. Let's make it happen!

Remember the fill_out_comment_form method, which we wrote to fill out the review form? It looked like this the last time we modified it:

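A sketch of the method as it stood; the element locators and hardcoded values are illustrative:

def fill_out_comment_form(comment)
  @selenium.find_element(:id, 'comment').send_keys(comment)
  @selenium.find_element(:id, 'author').send_keys('Dima')
  @selenium.find_element(:id, 'email').send_keys('dima@example.com')
  @selenium.find_element(:id, 'submit').click
end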

As we can see, most of the data it fills out is hardcoded; the only portion that is not hardcoded is the comment variable. Our goal is to pass in every piece of data this method uses. We will rename the comment argument to form_info to make our intentions clearer. This new argument is a hash, so we will have to retrieve the appropriate key for each field we fill out in the form. Let's take a look at the new code:

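A sketch of the updated method:

def fill_out_comment_form(form_info)
  @selenium.find_element(:id, 'comment').send_keys(form_info[:comment])
  @selenium.find_element(:id, 'author').send_keys(form_info[:name])
  @selenium.find_element(:id, 'email').send_keys(form_info[:email])
  @selenium.find_element(:id, 'url').send_keys(form_info[:website])
  @selenium.find_element(:id, 'submit').click
end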

Let's modify test_add_new_review to use the faker methods. Our test will now look like this:

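A sketch of the updated test; the surrounding steps keep their earlier structure:

def test_add_new_review
  generate_new_product_review
  fill_out_comment_form(TestData.get_comment_form_values(:name => 'Dima'))
  # ... verify that the new review is displayed ...
end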

The only major change in our test is that we no longer use the generate_unique_comment method, calling TestData.get_comment_form_values instead. Note that we are overwriting the faker value for :name with Dima to demonstrate the overwriting capability of our new method.

Finally, let's update test_adding_a_duplicate_review in a similar fashion so that it looks like this:

[Screenshot: the updated test_adding_a_duplicate_review method]

And this wraps up the changes that we needed to finish. Let's run our tests to make sure everything still passes.

Tip

Since we are no longer using the generate_unique_comment method in our tests, this is probably a good time to clean up our code base by deleting this and any other unused methods.

The test output should look like this:

[Screenshot: test run output showing all tests passing]

Summary

To be completely honest, managing test data is by far the single most difficult task in test automation. Locating an element on a complicated web page pales in comparison to dealing with test data. There are so many technical and legal restrictions on using production data that maintaining a grid of hundreds of browsers will feel like a vacation.

In this chapter, we only scratched the surface of data management. By using fixtures, we can control some of the chaos in the CI test environment. When fixtures are not an option, we can find other ways to interrogate the state of the application by using API endpoints, or we can stub out external services to make sure our application can still function. With the use of the faker library and default values pattern, we can simplify our test implementation by generating real-looking data that has been abstracted away.

In the next chapter, we will be improving the stability of our small test suite. We will fix a lot of the common causes of instability, thereby making our test suite as stable as humanly possible.