December 23, 2008

A new virtual horizon

James Whittaker points to virtualisation as one of the key things that will emerge in the future of testing, and in my mind the Hyper-V revolution is definitely shaping up. At work here, we're already planning ways to cope with private and shared VM "whole deployment" test environments as we iterate through the development cycles. Perhaps this is old-ish news to teams in product-focussed businesses. For Intergen as an IT service provider working in this space, with our small (days to weeks) test windows and many concurrent projects, this is huge.


I am also about to start executing performance tests against a P to V (production to virtualised) environment. I remember talking to a client 18 months ago about how performance testing against a virtualised environment should be treated as purely indicative, as the capability of VMs under load just didn't really approach that of real servers. Perhaps this is still true under stress, but the difference here is that we're talking about achieving acceptable "production-ready" response times and throughput, with only CPU utilisation explicitly excluded from the set of acceptance criteria we'd use against the bare metal. Funny how quickly things change. In any case, no doubt we'll learn something new.


I also have a weather eye on the emergence of cloud computing: Windows Azure and Amazon's EC2. Presuming that the application scales at all, I predict that these services will make it possible to scale out an app even if it is inefficient, just by buying more capacity. While you could worry that this threatens the performance testing discipline in some ways, I predict that it will make the justification for measurement and tuning more direct.


In the old days, if your app worked acceptably well on the machines you already had (with some headroom), that was all you cared about. With the cloud, however, there is no fixed asset, so any performance improvement will save you operational cost pretty much immediately, and there will probably need to be a more regular assessment of your needs, especially if you have variability in your weekly / monthly / yearly demand.


In any case, this whole area is going to be a new frontier, with a myriad of concerns, not only technical but legal and ethical as well (e.g. jurisdiction, security/privacy). Exciting!

Feel free to share your thoughts on how VM technology is going to affect the tester.

December 3, 2008

User Modelling and Test Automation

Not many people have talked about the "other side" of test modelling - what I call the user model, or the human proxy :-) It's taken me some time to settle on a pattern that separates (decouples) the input data from the app model in a way that I'm happy with.

My Test projects typically have 3 main pieces:


  • Test fixtures - the tests themselves; these use both the page model and the user model
  • An app model - whatever test framework drives the app (e.g. WatiN, UIAutomation, etc.)
  • A user model

The user model is the mechanism I use to deliver data to the AUT. Think of the typical user view of the app: as a user of the app I have a username, password, email, date of birth, etc. The user might need a shopping list, which in itself might have many items or only a few. They may need a credit card with specific properties - Good, Expired, NoAvailableCredit, WeirdCardType, etc.

So for my example app here, I have simple property-bag classes of type User, PurchaseItem and CreditCard - with a User instance having a ShoppingList (made up of PurchaseItems) and a CreditCard. The nature of this model will depend on what your app does - a zoo management app would probably need animal, cage and refreshment-kiosk objects.
Broadly speaking I end up with a model object per form in the UI, although things like registering for an account and logging in will both use the "User" object.
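
To make that concrete, here's a minimal sketch of what those property-bag classes might look like - the property names and default values are illustrative, not the real app's:

using System;
using System.Collections.Generic;

// Illustrative property-bag classes for the user model (names and defaults are assumptions).
public class CreditCard
{
    public string Number { get; set; }
    public string CardType { get; set; }
    public DateTime Expiry { get; set; }
}

public class PurchaseItem
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class User
{
    public User()
    {
        // Mandatory defaults so each test only overrides what it cares about.
        UserName = "defaultUser";
        Password = "P@ssw0rd";
        Email = "default.user@example.com";
        DateOfBirth = new DateTime(1980, 1, 1);
        ShoppingList = new List<PurchaseItem>();
    }

    public string UserName { get; set; }
    public string Password { get; set; }
    public string Email { get; set; }
    public DateTime DateOfBirth { get; set; }
    public List<PurchaseItem> ShoppingList { get; set; }
    public CreditCard CreditCard { get; set; }
}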

When I initialise a test I instantiate pieces of the user model and assign them values. Mostly the defaults will be fine; in other situations I construct specific data required for the test. Mandatory data defaults are built into the model (or, even better, dynamically retrieved from a known DB state and set in the fixture setup) - I'll keep clear of the "setup is good" vs "setup is bad" debate here...

It means that the intent of THIS TEST is more obvious, as only the data that really matters for the test gets set here. I quite often find I use random generators where uniqueness is required; I have a set of helper classes in the user model to do this kind of thing with GUIDs, text, dates, etc.
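
The helpers themselves are nothing fancy - something along these lines (a sketch; the names are mine, not the real project's):

using System;

// Sketch of a uniqueness helper for test data.
public static class TestData
{
    // Unique email address for registration-style tests.
    public static string UniqueEmail()
    {
        return "user." + Guid.NewGuid().ToString("N") + "@example.com";
    }

    // Short unique text value, e.g. for names or titles.
    public static string UniqueText(string prefix)
    {
        return prefix + "_" + DateTime.Now.Ticks;
    }
}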

Once the user model is initialised I can start calling the page model and passing the user model components in as parameters.

Here is an example... in this case the user object (an existing registered user), their default shopping list and the expiredMasterCard would be set in the setup fixture.

[Test, Description("Purchase fails when attempting to use an expired CreditCard")]
public void PurchaseFailsWithExpiredCreditCard()
{
    // Setup test context
    user.CreditCard = expiredMasterCard;

    using (HomePage homePage = new HomePage(ConfigurationHandler.PreProdUrl))
    {
        LoginPage loginPage = homePage.GoToLogin();
        ProfilePage profilePage = loginPage.Login(user);
        StorePage storePage = profilePage.GoToStore();

        // Method names on the store and checkout pages are indicative
        CheckOutPage checkOutPage = storePage.Checkout(user.ShoppingList);
        PaymentPage paymentPage = checkOutPage.PayWith(user.CreditCard);

        Assert.IsTrue(paymentPage.ExpiredCardMessage());
    }
}

This way the test code becomes pretty generic for the "login, select items and pay" flow. Only the expected result would change for different card info. Maybe you could even parameterise this further with something like the MbUnit RowTest pattern, one row per card type - see the sketch below.
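
For example, a rough sketch of how that might look with MbUnit row tests (the card numbers and messages here are made up for illustration):

// Hypothetical MbUnit row test - one row per card scenario (requires MbUnit.Framework).
[RowTest]
[Row("1111222233334444", "2007-01-31", "Your card has expired")]
[Row("5555666677778888", "2010-01-31", "There is not enough credit available")]
public void PurchaseFailsForBadCard(string cardNumber, string expiryDate, string expectedMessage)
{
    // Build the card from the row values; everything else uses the fixture defaults.
    user.CreditCard = new CreditCard
    {
        Number = cardNumber,
        Expiry = DateTime.Parse(expiryDate)
    };

    // ...then the same login / select items / pay flow as the test above,
    // asserting that the payment page shows expectedMessage.
}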

Anyway this seems to work for me at the moment, but I'm sure there are many more ways to skin this cat. Let me know what works for you.

November 5, 2008

VS2010 Testing Features - Part 1 Camano

So two of our "strategy and innovation" team represented Intergen up at the Microsoft Professional Developers Conference in Los Angeles last week. They brought back a hard drive packed with goodies, including the new Windows 7 OS and the latest CTP of Visual Studio 2010. I was super excited to get the virtual machine running today and play with the new testing features in VS2010 - there are several massive steps forward for QA pros (...and developers that care).

We're a Microsoft shop. I've been using the MS VS tools since 2005 for web / load test projects and using the unit test types for hooking into WatiN and other frameworks. What has been lacking is support for planning tests and managing the results. Camano is the intended solution: it's a standalone app that runs against a TFS repository, with test artifacts all managed as work items.

The first (and most important) step is to get those document-based test plans / test cases into a centralised tool that allows:
  • progress tracking
  • run reporting
  • bug / issue administration
  • requirements and user story traceability, if you go as far as managing these in TFS - which you should (I can see you need not... but you still should)

By recording a manual test you can generate a number of artifacts, including a video (to assist with communicating the activities that uncover bugs) and an "automation strip" for playback, which can also act as a basis for generating a true automated test.

Playing with the features to "automate a manual test" reveals some of the more alpha-quality areas; it just doesn't seem fully baked, although having said that, the 1 GB restriction on my VM was not helping things (grindingly sloooow). I haven't quite grasped the Camano mechanism for running an automated test - perhaps that is just not there yet.

To be honest though, these particular features are icing, and I'll live through some pain in that area to get the management stuff I need. I also understand that the CTP was cut in July - so they are 3 or 4 months on from this at Redmond.

Where things aren't quite there in Camano, it does appear that TFS itself is further along. For example, the reporting features and linking a test case with an automated test work in TFS but not in Camano. The only real downside as I see it is the need for a Team Foundation Server and the associated maintenance / cost overhead. I could see that being a battle for test teams that would otherwise win big out of going in this direction.

While there are some complexities in learning how this works, it offers a powerful set of features. There are also some promises of "test environment" VM management in the Team Lab SKU.
Comparing it to the Rational / Mercury players isn't easy, but at this price point (ESPECIALLY if your org uses VS / TFS already) it's likely to come out swinging...

October 9, 2008

Learning and Understanding

I've just read this sample chapter from Andy Hunt's new book, Pragmatic Thinking and Learning: Refactor Your "Wetware", which deals with the Dreyfus model of skill acquisition. While this kind of introspection is probably more the domain of Michael (the Braidy Tester) Hunter's blog, I found it fascinating.
In particular it explained two things for me:

  1. That Agile projects really do need those really skilled practitioners driving them in order to succeed. These are the Jedi Masters that just "feel the force" rather than "read the manual".
  2. That a lot of the anxiety I feel when learning something new comes from the fact that I don't yet have a fully-formed picture of the conceptual framework. I can almost feel that discomfort lessen each time I get one of those "a-ha" moments - which, interestingly, occur not when something I'm trying works, but rather when I understand why all the previous attempts didn't.

I feel that this second point is important for testers to tune into - the better your internal model of the business / technological processes, the less likely you are to miss important bugs in the business / technological logic.

I'm sure others will relate to this idea - let me know if you "feel it" too. Anyway, I can't wait to digest the rest of the book.

October 7, 2008

Visual Studio Web Tests - Random Access is not quite Random

I've just been grappling with an issue in VS2008 Tester Edition that was skewing the load profile I'm after for my latest performance test assignment.

In a WebTest (either coded or UI-friendly), if you set "Random" access on a data source and set the number of test runs to a "Fixed run count", the test runner will randomly select from all but the last row of data.
It works fine with one row, but with two or more rows it will always ignore the last row. There is an n-1 bug in there somewhere... I've reported it to MS via the forums.

I've checked, and VS2005 seems to have the same issue; it affects at least CSV and XML data source types.

The workaround is easy - all you need is a padding row - but if you're wondering why the maths is off when using random access, this might be why.
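
For example, with a CSV data source like this (values made up), the final "padding" row exists only so that the rows you actually want are all eligible for random selection:

username,password
alice,Passw0rd1
bob,Passw0rd2
carol,Passw0rd3
padding,never-picked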

August 29, 2008

Pairwise Data-Driven Automation - Post Script

A caveat at this point: James Bach and Patrick Schroeder are of the opinion that pairwise testing should be approached with eyes open and with an understanding of the deficiencies of the approach, suggesting that random testing may be just as good. I've read their excellent article and I think I grok what they're saying.

They propose that the effectiveness of pairwise testing depends on 7 factors:
1) The actual interdependencies among variables in the product under test.
2) The probability of any given combination of variables occurring in the field.
3) The severity of any given problem that may be triggered by a particular combination of variables.
4) The particular variables you decide to combine.
5) The particular values of each variable you decide to test with.
6) The combinations of values you actually test.
7) Your ability to detect a problem if it occurs.

The features of PICT address some of these points - not on its own, but alongside robust analysis of the requirements and correctly implementing a model that follows them. The advantage it gives us in automation is that the model is external to the test (and therefore maintainable), and it generates the dataset rather than requiring a static source.

Anyway I've not seen anything else like this approach - I'd be interested to hear thoughts about whether you think it's useful.

Future thoughts - I could dynamically generate some wrapper code to refer to the model elements. This would give me build errors if the expected parameters weren't present in the source, and it could do the tilde stripping or any other input polishing outside the test.
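
As a rough sketch of what such a wrapper might look like (hand-written here for now, and with hypothetical class / property names):

using System.Data;

// Hypothetical strongly-typed wrapper over a row of PICT output.
public class LoginModelRow
{
    private readonly DataRow row;

    public LoginModelRow(DataRow row)
    {
        this.row = row;
    }

    public string Email { get { return Clean(row["EMAIL"].ToString()); } }
    public string Password { get { return Clean(row["PASSWORD"].ToString()); } }
    public string ExpectedResult { get { return Clean(row["$RESULT"].ToString()); } }

    // Input polishing (e.g. tilde stripping) lives here instead of in the test body.
    private static string Clean(string value)
    {
        return value.StartsWith("~") ? value.Substring(1) : value;
    }
}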

July 22, 2008

Pairwise Data-Driven Automation - Part 3

So what we hope to achieve in this post is:
1. Get a WatiN test going under the Visual Studio test framework
2. Hook the appropriate PICT output up as a data source for the test
3. Run the test and see many inputs and many results running and passing.


1. First, specify a data source from the PICT output file. You can also use the app.config to hook this up if you wish, e.g.



[TestClass]
public class UnitTest1
{
    // MSTest sets this at runtime; it provides access to the current data row.
    private TestContext context;
    public TestContext TestContext
    {
        get { return context; }
        set { context = value; }
    }

    [DataSource("System.Data.OleDb",
        "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=.;Extended Properties='text;FMT=TabDelimited;HDR=YES'",
        "frombat#txt",
        DataAccessMethod.Sequential),
     TestMethod]
    public void MainSiteIsUp()
    {
        // test body shown in step 2 below
    }
}


One gotcha here: you'll need to include the model input file and schema.ini (for tab-delimited text files) in the test deployment settings
(Test > Edit Test Run Configurations > Deployment).
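
If you haven't met schema.ini before, a minimal one for a tab-delimited file (assuming the data file is named frombat.txt, matching the "frombat#txt" table name above) would look like:

[frombat.txt]
Format=TabDelimited
ColNameHeader=True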

2. I'll assume you have some familiarity with WatiN, so build a test method that hits our login page and wire in the parameters from the PICT output.

For our purposes we'll want to supply a bad username / password and verify that the right error message is returned.
Something like this... another gotcha: the tildes need to be stripped (I've left that handling in).



public void MainSiteIsUp()
{
    using (IE ie = new IE("http://www.actionthis.com"))
    {
        // Load variables from the current data row
        string email = context.DataRow["EMAIL"].ToString();

        // Strip the leading tilde if there is one...
        if (email.StartsWith("~")) email = email.Remove(0, 1);

        string password = context.DataRow["PASSWORD"].ToString();
        string message = context.DataRow["$RESULT"].ToString();

        LoginPage.Login(ie, email, password);
        Assert.IsTrue(ie.Text.Contains(message));
    }
}


3. Select and run the test. It should build, cycle through the 7 combinations and check for the various error messages - all green!

Yay - big pat on the back!

June 24, 2008

Pairwise Data-Driven Automation - Part 2

In this post I'm going to use the PICT model to define the "unsuccessful login" test cases for our application.

I'll freely admit that the example I use here is probably not the strongest application of PICT, being relatively straightforward in the number of variants and potential cases it can output (i.e. I can keep all these rules in my head and construct a small number of tests to cover all of the variants anyway). Where it really comes into its own is when there are more complex interactions and constraints that blow out the number of possible test combinations. I started looking at using it for the credit card validation rules within the app, where there are a lot more rules regarding the minimum data requirements and the validation logic. But that's all internal logic, and the login screen anyone can see - so let's play with that.

Let's start by defining the rules for the login screen.
To log in you need a valid email address, a password, and an account in the system with those credentials. The system gives one of three different error messages, depending on the rules we'll specify in the model file.

Read up about the way that PICT works in the user guide (it comes with the download) or the article I referred to last post. Basically the above "spec" translates into a model file like this, with two input parameters and an output parameter. I have taken a minimal set of invalid email address data to illustrate the point (i.e. there are others I'd include here if I were being thorough).

#
# Login To ActionThis
#

EMAIL: nodomain,,@domain.only, onepart@domain,correct@format.em.ail
PASSWORD: password,
$RESULT: The Email Address entered does not appear to be valid., Email Address and Password may not be blank., Login failed. Please check your username and password and try again.

# Blank Email or Password rule
IF [EMAIL] = "" OR [PASSWORD] = ""
THEN [$RESULT] = "Email Address and Password may not be blank.";

# Invalid Email address rule
IF [EMAIL] IN {"nodomain", "@domain.only", "onepart@domain" }
THEN [$RESULT] = "The Email Address entered does not appear to be valid.";

# Valid email / password but no account rule
IF [EMAIL] = "correct@format.em.ail" AND [PASSWORD] = "password"
THEN [$RESULT] = "Login failed. Please check your username and password and try again.";

Assuming you have PICT installed, you can run it directly from the command line as: pict.exe ModelFile.txt > Outputfile.txt (try it and see). The output file is a tab-delimited text file, which I'm intending to feed into the automation.

So let's get a new C# test project happening in VS2008. I'll plumb PICT in as a pre-build event for the project, so that any updates in the model file are pulled through for the test run whenever the project builds.

One gotcha here is that running pict.exe anywhere under the local solution directory gives me 9009 errors. Not sure why - possibly a PATH conflict? I was forced to run pict.exe from its native location (i.e. in C:\Program Files\PICT\) but referencing the files in the project folder. Anyway, the pre-build event command line looks like:

cd \"Program Files"\PICT\
pict.exe "$(ProjectDir)ModelFile.txt" > "$(ProjectDir)pictoutput.txt"

OK, so if your input file exists in the project dir, that should actually build and output a tab-delimited file, pictoutput.txt.

Next up, we'll wire it up to WatiN via a unit test in Visual Studio...

June 20, 2008

Pairwise Data-Driven Automation

I've spent a bit of time in the last few days getting the WatiN framework going with PICT (the Pairwise Independent Combinatorial Testing tool) for pairwise test case generation.

To date we've used hardcoded data values for most automated tests, and this leads to embedded data all over the show (smelly!) and/or multiple calls to the execution logic handled within the test (inelegant)... and of course the coverage could be better.

My first step towards a data-driven test used a "blunt force" attack on the data inputs (i.e. trying all combinations of the input values), which generated a phenomenal number of test cases. As it was, our suite took quite some time to complete, but this approach just made it insane - anything that reduces execution time without compromising the new, juicy coverage was going to be awesome.

That's where PICT comes in. It was developed by a couple of engineers at Microsoft and is used internally by them to generate test cases "pairwise", in order to hit the level of input combination most likely to find issues. You can read about the tool here and download v3.3 from here. Basically it offers a way to specify a set of inputs, their dependencies or constraints (if any), and the expected results of those inputs. This gives us the ability to specify the input/output rules in one place (the PICT input file) and get an output set of test cases optimised for coverage without redundancy.

So where I'll get to over the next few posts is a rather trivial but nonetheless working sample of PICT and WatiN working together to give us data-driven automation, hopefully in a maintainable way. There were some issues that I found I had to work around along the way, so I'll let you know about those too!

Next up: I'll dive into the PICT mapping rules for our example.