Painless Automated Testing
I’m guessing that you don’t like testing. Fortunately, we can make it a lot less painful by writing a script to do it for us. In this post, you’ll learn how to write a simple shell script that runs your program on a set of test inputs and compares its output against the expected output.
This post looks long, but it’s mostly sections with one paragraph each. Just follow the instructions step-by-step and you’ll be set.
What’s a shell script?
When you log into CAEN over SSH, you’re presented with a prompt where you can type things like `cd`, `cp`, `mv`, and so on. The program that does this for you is called a shell. Instead of typing in commands by hand, we can put them in a file to be executed automatically, called a shell script.
Writing our first shell script
Create a file called `helloworld.sh` and open it up in your editor. The `.sh` extension is the conventional extension for a shell script. Type this out in your file:
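A minimal version might look like this (the exact message is up to you):

```bash
#!/bin/bash
# Print a friendly greeting to standard output.
echo "Hello, world!"
```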
Then go to your terminal and run the script using `bash`:
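Assuming you’re in the directory containing the file, that’s:

```bash
bash helloworld.sh
```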
That wasn’t too hard!
Example test files
For the sake of example, let’s say we have a program `adder` which adds up all the numbers given to it on standard input and prints the result to standard output. (This means it reads from `cin` and writes to `cout`.)
Let’s write the first test. Create `test-1.input` with the following contents:
2 3 4
And `test-1.output`:
9
Writing the script
Now onto writing the test runner. Create `run_tests.sh`:
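A first version might look something like this (we’ll fill in the loop body as we go; for now it just prints each test’s file name):

```bash
#!/bin/bash
# Loop over every test input file in the current directory.
for test in ./test-*.input; do
    echo "Test name: $test"
done
```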
As you can see, we have a `for`-loop here. But what’s this `./test-*.input`? This is a pattern-matching expression called a glob. It matches all of the files in the current directory which start with `test-` and end with `.input`. For each such file, it puts the file name in the `test` variable and runs the `for`-loop body. Basically, we’re iterating over a list of file names matching the glob pattern.
A brief note on bash syntax
Some notes before we continue: in `bash`, whenever we want to read the value of a variable, we have to prefix its name with `$`. (In the `for`-loop header, we were writing to the variable, so we didn’t need a `$`.)
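As a quick illustration (this snippet isn’t part of the test runner):

```bash
greeting="hello"   # writing to the variable: no $
echo "$greeting"   # reading the variable: prefix its name with $
```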
Additionally, `bash` will automatically interpolate string variables. If we say this:
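(This is the same `echo` line from the sketch above.)

```bash
# Print which test we're about to run.
echo "Test name: $test"
```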
then `bash` will automatically replace `$test` with the actual contents of the `test` variable, so we’ll get something like `Test name: ./test-1.input`.
Getting the output file name
Now that we have the input file name, we need to get the output file name. In short, we need to remove the `.input` extension and replace it with the `.output` extension.
We can use some convenient `bash` string-manipulation facilities to do this. Let’s declare a new variable inside the `for`-loop:
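(I’m calling it `expected_output` here; the name itself is just a placeholder.)

```bash
# Start with the input file name; the next two steps turn it into the
# matching .output file name.
expected_output=$test
```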
If we want to remove a substring from the end of a string in `bash`, we can use the `%` operator:
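Something like this, using the placeholder name from above:

```bash
# ${test%.input} expands to the value of $test with a trailing ".input"
# removed, so "./test-1.input" becomes "./test-1".
expected_output=${test%.input}
```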
Now we just need to tack on the `.output` extension:
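With our placeholder variable, that looks like:

```bash
# Append ".output", turning "./test-1.input" into "./test-1.output".
expected_output=${test%.input}.output
```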
Actually doing the comparison
First, we want to run our program and see its output:
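Assuming the compiled program is called `adder` and lives in the current directory, that might look like:

```bash
# Feed the test input to adder on standard input; its result prints to the screen.
./adder < "$test"
```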
We’re using the input redirection operator `<` here to feed the contents of the input test file to our `adder` program. But as it is, this just prints the result to the screen. We want to feed it to the `diff` program, along with the correct output, to tell us whether they’re identical.
To hook together our program and `diff`, we can use a pipeline. This will redirect the standard output of `adder` to the standard input of `diff`. To use a pipeline for `cmd1` and `cmd2`, we join the two with the `|` operator to get `cmd1 | cmd2`:
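In our test runner, that looks something like this:

```bash
# Compare adder's output against the expected output file.
# The "-" argument tells diff to read that file from standard input.
./adder < "$test" | diff - "$expected_output"
```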
You’ll notice that we have the file `-` as one of the parameters to `diff`. This is a special file name meaning "use standard input for this file". The other file is the correct output file.
Checking to see if the test passed
Finally, we just need to check the return value of `diff` to see if the files were identical. We can use `bash`’s `if`-statement to determine if a command succeeded or not. If the command has an exit code of zero, the `if`-statement considers it a success and goes into the `if`-branch; otherwise, it goes into the `else`-branch (if there is one).
We’ll just move the command into the condition of an `if`-statement:
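Putting everything together, the whole script might look something like this (a sketch; the exact messages are up to you):

```bash
#!/bin/bash
# Run every test case in the current directory against ./adder.
for test in ./test-*.input; do
    echo "Test name: $test"
    expected_output=${test%.input}.output
    if ./adder < "$test" | diff - "$expected_output"; then
        echo "PASS: $test"
    else
        echo "FAIL: $test"
        exit 1
    fi
done
```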
That’s it! We have our automated test runner in about ten lines of code. If we run `bash run_tests.sh` now, it will automatically run all of the tests in the current directory. If any of them fail, the test runner will exit `1`.
Adding automatic test-running to our Makefile
Now that we’ve written our test runner, we can do a few nifty things with it. First off, let’s make it so that if we run `make test`, our tests will be run. Add these lines to your `Makefile`:
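Something along these lines should work, assuming `run_tests.sh` sits next to your `Makefile` (note that the indented recipe line must start with a tab):

```make
test:
	bash run_tests.sh
```

You may also want to mark the target as phony (`.PHONY: test`) so `make` doesn’t confuse it with a file named `test`.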
Now if we run `make test`, we’ll either get output like this:
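(The exact messages depend on how you wrote your script; these examples assume the sketch from earlier.)

```
bash run_tests.sh
Test name: ./test-1.input
PASS: ./test-1.input
```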
Or, if we’re unlucky, this:
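```
bash run_tests.sh
Test name: ./test-1.input
1c1
< 10
---
> 9
FAIL: ./test-1.input
make: *** [test] Error 1
```

Here `diff` prints the lines that didn’t match, the script exits with a nonzero status, and `make` reports the error. (Your exact `diff` and `make` messages may differ.)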
Preventing submission if our tests failed
Since we’ve added the `test` target to our `Makefile`, we can also make it so that `make` will refuse to build `submit.tar.gz` if your tests don’t pass. (This can save you quite a few submits down the road if you’re lazy and don’t want to test every single thing by hand.) Just modify the `submit` target from this:
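(a sketch; your actual prerequisites and `tar` command will differ)

```make
submit: main.cpp Makefile
	tar -czvf submit.tar.gz main.cpp Makefile
```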
to this:
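```make
# "test" is now a prerequisite, so the tests must run (and pass)
# before the tarball is built.
submit: test main.cpp Makefile
	tar -czvf submit.tar.gz main.cpp Makefile
```

The only change is adding `test` to the prerequisite list.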
Now if you try to run `make submit` and your tests fail, `make` will refuse to build your submission file and let you know.
If you get stuck…
Shell scripting can have some really hard-to-diagnose errors, especially if you’re not experienced in it. If you get stuck, make a new post on Piazza, or just email me!
If you’re writing a significantly-sized shell script and are having trouble with it, consider enabling the so-called bash strict mode. This might cause your error to trigger earlier so you can figure out what went wrong.
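One common formulation, sometimes called the "unofficial strict mode", is to put a line like this near the top of your script:

```bash
# Exit on errors, treat unset variables as errors, and make a pipeline fail
# if any command in it fails.
set -euo pipefail
```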