3.4 Testing Legacy Code

"This is all well and good," I can hear you say, "but I just inherited a swamp of 27 programs and 14 modules and they have no tests. What do I do?"

By now you've learned that it is far more appealing to write tests as you write the code they test, so if you can possibly rewrite this application, do so. But if you're stuck with having to tweak an existing application, then adopt a top-down approach. Start by testing that the application meets its requirements . . . assuming you were given requirements or can figure out what they were. See what a successful run of the program outputs and how it may have changed its environment, then write tests that look for those effects.

3.4.1 A Simple Example

You have an inventory control program for an aquarium, and it produces output files called cetaceans.txt, crustaceans.txt, molluscs.txt, pinnipeds.txt, and so on. Capture the output files from a successful run and put them in a subdirectory called success. Then run this test:

Example 3.6. Demonstration of Testing Program Output

1  my @Success_files;
2  BEGIN {
3    @Success_files = glob "success/*.txt";
4  }
5
6  use Test::More tests => 1 + 2 * @Success_files;
7
8  is(system("aquarium"), 0, "Program succeeded");
9
10 for my $success (@Success_files)
11 {
12   (my $output = $success) =~ s#.*/##;
13
14   ok(-e $output, "$output present");
15
16   is(system("cmp $output $success > /dev/null 2>&1"),
17      0, "$output is valid");
18 }

First, we capture the names of the output files in the success subdirectory. We do that in a BEGIN block so that the number of names is already known when the use statement on line 6 runs at compile time. In line 8 we run the program and check that it has a successful return code. Then for each of the required output files, in line 14 we test that it is present, and in line 16 we use the UNIX cmp utility to check that it matches the saved version. If you don't have a cmp program, you can write a Perl subroutine to perform the same test, as sketched below: just read both files and compare successive chunks until you find a mismatch or reach the end of both files.
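
Here is a minimal sketch of such a comparison subroutine; the name compare_files() and the 8K chunk size are illustrative choices, not anything the test requires:

sub compare_files {
    my ($file_a, $file_b) = @_;
    open my $fh_a, '<', $file_a or return 0;
    open my $fh_b, '<', $file_b or return 0;
    binmode $fh_a;
    binmode $fh_b;
    while (1) {
        my $len_a = read $fh_a, my $chunk_a, 8192;
        my $len_b = read $fh_b, my $chunk_b, 8192;
        return 0 unless defined $len_a && defined $len_b;  # read error
        return 1 if $len_a == 0 && $len_b == 0;            # both files ended together
        return 0 unless $chunk_a eq $chunk_b;              # sizes or contents differ
    }
}

With that in place, the check on line 16 could become ok(compare_files($output, $success), "$output is valid").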

3.4.2 Testing Web Applications

A Common Gateway Interface (CGI) program that hasn't been developed with a view toward automated testing may be a solid block of congealed code with pieces of web interface functionality sprinkled throughout it like raisins in a fruit cake. But you don't need to rip it apart to write a test for it; you can verify that it meets its requirements with an end-to-end test. All you need is a program that pretends to be a user at a web browser and checks that the response to each input is correct. It doesn't matter how the CGI program is written, because the test interacts with it at arm's length, through the same HTTP interface a browser would use.

The WWW::Mechanize module by Andy Lester comes to your rescue here. It allows you to automate web site interaction by pretending to be a web browser, a function ably pulled off by Gisle Aas' LWP::UserAgent module. WWW::Mechanize goes several steps farther, however (in fact, it is a subclass of LWP::UserAgent), enabling cookie handling by default and providing methods for following hyperlinks and submitting forms easily, including transparent handling of hidden fields.[9]

[9] If you're thinking, "Hey! I could use this to write an agent that will stuff the ballot box on surveys I want to fix," forget it; it's been done before. Chris Nandor used Perl to cast thousands of votes for his choice for American League All-Star shortstop [GLOBE99]. And this was before WWW::Mechanize was even invented.
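
To give a flavor of the hyperlink-following support just mentioned, here is a minimal sketch; the URL, the link text, and the pattern being checked are made up for illustration:

use WWW::Mechanize;
use Test::More tests => 2;

my $ua = WWW::Mechanize->new;
$ua->get('http://localhost/index.html');          # hypothetical starting page
ok($ua->success, "Got index page");
# follow_link() finds a link by its text, fetches it, and carries
# any cookies along automatically
$ua->follow_link(text => 'Price list');
like($ua->content, qr/cuttlefish/i, "Price list mentions cuttlefish");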

Suppose we have an application that provides a login screen. For the usual obscure reasons, the login form, login.html, contains one or more hidden fields in addition to the user-visible input fields, like this:


<FORM ACTION="login.cgi" METHOD="POST">
 <INPUT NAME="username" TYPE="text">
 <INPUT NAME="password" TYPE="text">
 <INPUT NAME="fruglido" TYPE="hidden" VALUE="grilku">
 <INPUT TYPE="Submit">
</FORM>

On successful login, the response page greets the user with "Welcome, " followed by the user's first name. We can write the following test for this login function:

Example 3.7. Using WWW::Mechanize to Test a Web Application

1  #!/usr/bin/perl
2  use strict;
3  use warnings;
4
5  use WWW::Mechanize;
6  use Test::More tests => 3;
7
8  my $URL = 'http://localhost/login.html';
9  my $USERNAME = 'peter';
10 my $PASSWORD = 'secret';
11
12 my $ua = WWW::Mechanize->new;
13 ok($ua->get($URL)->is_success, "Got first page")
14   or die $ua->res->message;
15
16 $ua->set_fields(username => $USERNAME,
17                 password => $PASSWORD);
18 ok($ua->submit->is_success, "Submitted form")
19   or die $ua->res->message;
20
21 like($ua->content, qr/Welcome, Peter/, "Logged in okay");

In line 12 we create a new WWW::Mechanize user agent to act as a pretend browser, and in line 13 we test whether it was able to get the login page; the get() method returns an HTTP::Response object, which has an is_success() method. If something went wrong with fetching the page, the false value will be passed through the ok() function; there's no point in going further, so we might as well die() (line 14). We can get at the HTTP::Response object again via the res() method of the user agent to call its message() method, which returns the text of the reason for failure.

In lines 16 and 17 we provide the form inputs by name, and in line 18 the submit() method of the user agent submits the form and reads the response, again returning an HTTP::Response object that lets us verify success as before. Once we have a response page, we check whether it looks like what we wanted.

Note that WWW::Mechanize can be used to test interaction with any web application, regardless of where that application is running or what it is written in.

3.4.3 What Next?

The kind of end-to-end testing we have been doing is useful and necessary; it is also a lot easier than the next step. To construct comprehensive tests for a large package, we must include unit tests; that means testing each function and method. However, unless we have descriptions of what each subroutine does, we will need to do some investigative work to find out what they are supposed to do before we can test them; once we know, an individual test can be as small as the sketch below. I'll go into those investigative techniques later.
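
For instance, once you have worked out what a given subroutine is supposed to do, its unit test might look like this; the Aquarium::Stock module and its add_fish() routine are purely hypothetical stand-ins for whatever subroutine you are pinning down:

use Test::More tests => 2;
use Aquarium::Stock;   # hypothetical module under test

# Suppose investigation shows that add_fish() keeps a running total
# of stock and returns the new total.
is(add_fish(guppy => 3), 3, "three guppies added to an empty tank");
is(add_fish(guppy => 2), 5, "running total includes the earlier guppies");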
