June 24, 2008

Pairwise Data-Driven Automation - Part 2

In this post I'm going to use PICT to model the "unsuccessful login" test cases for our application.

I'll freely admit that the example I use here is probably not the strongest application of PICT, being relatively straightforward in the number of variants and potential cases it can output (i.e. I could keep all these rules in my head and construct a small number of tests to cover all of the variants anyway). Where it really comes into its own is when more complex interactions and constraints blow out the number of possible test combinations. I started looking at using it for the credit card validation rules within the app, where there are a lot more rules around minimum data requirements and validation. But that's all internal logic, and the login screen is something anyone can see - so let's play with that.

Let's start by defining the rules for the login screen. To log in you need a valid email address, a password, and an account in the system with those credentials. The system gives one of three different error messages, depending on the rules we'll specify in the model file.

Read up on the way PICT works in the user guide (it comes with the download) or the article I referred to in the last post. Basically the above "spec" translates into a model file like this, with two input parameters and an output parameter. I have taken a minimal set of invalid email address data to illustrate the point (i.e. there are others I'd include here if I were being thorough).

#
# Login To ActionThis
#

EMAIL: nodomain,,@domain.only, onepart@domain,correct@format.em.ail
PASSWORD: password,
$RESULT: The Email Address entered does not appear to be valid., Email Address and Password may not be blank., Login failed. Please check your username and password and try again.

# Blank Email or Password rule
IF [EMAIL] = "" OR [PASSWORD] = ""
THEN [$RESULT] = "Email Address and Password may not be blank.";

# Invalid Email address rule
IF [EMAIL] IN {"nodomain", "@domain.only", "onepart@domain" }
THEN [$RESULT] = "The Email Address entered does not appear to be valid.";

# Valid email / password but no account rule
IF [EMAIL] = "correct@format.em.ail" AND [PASSWORD] = "password"
THEN [$RESULT] = "Login failed. Please check your username and password and try again.";

Assuming you have PICT installed, you can run it directly from the command line as: pict.exe ModelFile.txt > Outputfile.txt (try it and see). The output file is a tab-delimited text file, which I'm intending to feed into the automation.
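The exact rows you get will depend on the PICT version and the order it chooses, but the shape of the output is a header row of the parameter names followed by one tab-separated row per generated test case - something roughly like this (rows illustrative only):

EMAIL	PASSWORD	$RESULT
nodomain	password	The Email Address entered does not appear to be valid.
	password	Email Address and Password may not be blank.
correct@format.em.ail	password	Login failed. Please check your username and password and try again.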

So let's get a new C# test project happening in VS2008. I'll plumb PICT in as a pre-build event for the project, so that any updates to the model file are pulled through for the test run whenever a build happens.

One gotcha here is that running pict.exe anywhere under the local solution directory gives me 9009 errors (the "command is not recognized" exit code). Not sure why, possibly a PATH conflict? I was forced to run pict.exe from its native location (i.e. in C:\Program Files\PICT\) but referencing the files in the project folder. Anyway, the pre-build event command line looks like:

cd \"Program Files"\PICT\
pict.exe "$(ProjectDir)ModelFile.txt" > "$(ProjectDir)pictoutput.txt"

OK, so if your input file exists in the project directory, that should build and output a tab-delimited file, pictoutput.txt.
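Before we get to the WatiN side, here's a minimal sketch of how that output might be read back in from C#. None of this is the final test wiring - the class name and path handling are just placeholders - but it shows the idea: the first row gives the parameter names, and every subsequent row is one test case.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Sketch only: parse the tab-delimited PICT output into one
// dictionary per generated test case, keyed by the header row.
public static class PictOutputReader
{
    public static List<Dictionary<string, string>> Read(string path)
    {
        var lines = File.ReadAllLines(path);
        var headers = lines[0].Split('\t');

        return lines.Skip(1)
            .Where(line => line.Trim().Length > 0)
            .Select(line =>
            {
                var values = line.Split('\t');
                var testCase = new Dictionary<string, string>();
                for (int i = 0; i < headers.Length; i++)
                {
                    // Guard against rows that end in a blank value.
                    testCase[headers[i]] = i < values.Length ? values[i] : string.Empty;
                }
                return testCase;
            })
            .ToList();
    }
}

Each dictionary then hands us an EMAIL, PASSWORD and expected $RESULT for a single login attempt, which is exactly the shape the WatiN test will want.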

Next up we'll wire it up to WatiN via a unit test in Visual Studio...

June 20, 2008

Pairwise Data-Driven Automation

I've spent a bit of time in the last few days getting the WatiN framework going with a tool called PICT (the Pairwise Independent Combinatorial Testing tool) for pairwise test case generation.

To date we've used hardcoded data values for most automated tests, and this leads to embedded data all over the show (smelly!) and/or multiple calls to the execution logic handled within the test (inelegant)... and of course the coverage could be better.

My first step towards a data-driven test used a "blunt force" attack on the data inputs (i.e. an all-combinations approach, varying one input at a time), which generated a phenomenal number of test cases. As it was, our suite took quite some time to complete, but this approach just made it insane - anything that reduces execution time without compromising the new juicy coverage was going to be awesome.

That's where PICT comes in. It was developed by a couple of engineers at Microsoft and is used internally by them to generate test cases "pairwise", in order to get the input combinations most likely to find issues. You can read about the tool here and download v3.3 from here. Basically it offers a way to specify a set of inputs, their dependencies or constraints (if any), and the expected results for those inputs. This gives us the ability to specify the input/output rules in one place (the PICT input file) and get an output set of test cases optimised for coverage without redundancy.
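To make that a bit more concrete, a model file in its simplest form is just one line per parameter listing its values, plus optional IF/THEN constraints. A made-up illustration (nothing to do with our app - the real login model comes in the next post):

BROWSER: IE6, IE7, Firefox
OS: XP, Vista
ACCOUNT: admin, standard
IF [BROWSER] = "IE6" THEN [OS] <> "Vista";

Run through PICT, that gives a handful of rows in which every allowed pair of values appears at least once, rather than the full set of exhaustive combinations.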

So where I'll get to over the next few posts is a rather trivial but nonetheless working sample of PICT and WatiN working together to give us data-driven automation, hopefully in a maintainable way. There were some issues I had to work around along the way, so I'll let you know about those too!

Next up: I'll dive into the PICT mapping rules for our example.