December 23, 2008

A new virtual horizon

James Whittaker points to virtualisation as one of the key things that will emerge in the future of testing, and in my mind the Hyper-V revolution is definitely shaping up. At work here, we're already planning ways to cope with private and shared VM "whole deployment" test environments as we iterate through the development cycles. Perhaps this is old-ish news to teams in product-focussed businesses, but for Intergen as an IT service provider working in this space, with our small (days to weeks) test windows and many concurrent projects, this is huge.


I am also about to start executing performance tests against a P to V (production to virtualised) environment. I remember talking to a client 18 months ago about how performance testing against a virtualised environment should be treated as purely indicative, because the capability of VMs under load just didn't approach that of real servers. Perhaps this is still true under stress, but the difference here is that we're talking about achieving acceptable "production-ready" response times and throughput, with only CPU utilisation explicitly excluded from the set of acceptance criteria we'd use against bare metal. Funny how quickly things change. In any case, no doubt we'll learn something new.


I have a weather eye on the emergence of cloud computing - Windows Azure and Amazon's EC2. Presuming the application scales at all, I predict that these services will make it possible to scale out an app even if it is inefficient, just by buying more capacity. While you could worry that this threatens the performance testing discipline in some ways, I predict it will make the justification for measurement and tuning more direct.


In the old days, if your app worked acceptably well on the machines you already had (with some headroom), that was all you cared about. With the cloud, however, there is no fixed asset, so any performance improvement will save you operational cost pretty much immediately, and you will probably need to reassess your capacity more regularly, especially if your demand varies weekly, monthly or yearly.


In any case, this whole area is going to be a new frontier, with a myriad of concerns, not only technical but legal and moral as well (e.g. jurisdiction, security/privacy). Exciting!

Feel free to share your thoughts on how VM technology is going to affect the tester.

December 3, 2008

User Modelling and Test Automation

Not many people have talked about the "other side" of test modelling - what I call the user model, or the human proxy :-) It has taken me some time to settle on a pattern that separates (decouples) the input data from the app model in a way that I'm happy with.

My test projects typically have three main pieces:


  • One or more test fixtures - the tests themselves, which use both the page model and the user model

  • An app model - whatever test framework drives the app (e.g. WatiN, UIAutomation, etc.)

  • A user model

The user model is the mechanism I use to deliver data to the AUT. Think of the typical user's view of the app: as a user of the app I have a username, password, email, date of birth and so on. The user might need a shopping list, which in itself might have many items or only a few. They may need a credit card with specific properties - Good, Expired, NoAvailableCredit, WeirdCardType, etc.

So for my example app here, I have simple property bag classes of type User, PurchaseItem and CreditCard, with a User instance having a ShoppingList (made up of PurchaseItems) and a CreditCard. The nature of this model will depend on what your app does - a zoo management app would probably need animal, cage and refreshment-kiosk objects.
Broadly speaking I end up with a model object per form in the UI, although things like registering for an account and logging in will both use the User object.
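
To make that concrete, here's a minimal sketch of what those property bag classes might look like - the property names and defaults are illustrative, not lifted from a real project:

using System;
using System.Collections.Generic;

// Minimal sketch of the user model property bags - names and defaults are illustrative.
public class CreditCard
{
    public string Type { get; set; }             // e.g. "MasterCard"
    public string Number { get; set; }
    public DateTime Expiry { get; set; }
    public decimal AvailableCredit { get; set; }
}

public class PurchaseItem
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

public class User
{
    public string UserName { get; set; }
    public string Password { get; set; }
    public string Email { get; set; }
    public DateTime DateOfBirth { get; set; }

    // A User owns a ShoppingList (made up of PurchaseItems) and a CreditCard.
    public List<PurchaseItem> ShoppingList { get; set; }
    public CreditCard CreditCard { get; set; }

    public User()
    {
        // Mandatory defaults live here, so a test only overrides what it cares about.
        ShoppingList = new List<PurchaseItem>();
    }
}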

When I initialise a test I instantiate pieces of the user model and assign them values. Mostly the defaults will be fine; in other situations I construct specific data required for the test. The mandatory data defaults are built into the model (or, even better, dynamically retrieved from a known DB state and set in the fixture setup) - I'll keep clear of the "setup is good" vs "setup is bad" debate here...

It means that the intent of THIS TEST is more obvious, as only the data that really matters for the test gets set here. I quite often find I use random generators where uniqueness is required; I have a set of helper classes in the user model to do this kind of thing with GUIDs, text, dates, etc.
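
By way of illustration, one of those helpers might be as simple as this (a sketch only; the class and method names are placeholders):

using System;

// Sketch of a uniqueness helper in the user model - class and method names are placeholders.
public static class DataGen
{
    // A unique-enough string for usernames, emails, etc., built from a GUID.
    public static string UniqueText(string prefix)
    {
        return prefix + Guid.NewGuid().ToString("N").Substring(0, 8);
    }

    // A date guaranteed to be in the past, handy for building an expired card.
    public static DateTime ExpiredDate()
    {
        return DateTime.Today.AddYears(-1);
    }
}

// Typical use in a fixture setup:
//   user.UserName = DataGen.UniqueText("testuser_");
//   user.Email = DataGen.UniqueText("test_") + "@example.com";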

Once the user model is initialised I can start calling the page model and passing the user model components in as parameters.

Here is an example. In this case the existingUser object, their default shopping list and the expiredMasterCard would be set up in the setup fixture.
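
For completeness, here's a rough sketch of what that setup fixture might contain - the values and card details are made up, and your own fixture might pull this data from a known database state instead:

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class PurchaseTests
{
    private User user;
    private CreditCard expiredMasterCard;

    [SetUp]
    public void SetUp()
    {
        // An existing user with a default shopping list - values are illustrative.
        user = new User
        {
            UserName = "existingUser",
            Password = "Password1",
            ShoppingList = new List<PurchaseItem>
            {
                new PurchaseItem { Name = "Widget", Price = 9.95m, Quantity = 1 }
            }
        };

        // A card that is valid in every respect except its expiry date.
        expiredMasterCard = new CreditCard
        {
            Type = "MasterCard",
            Number = "5105105105105100",
            Expiry = DateTime.Today.AddYears(-1)
        };
    }

    // ... individual [Test] methods, like the one below, sit alongside this SetUp ...
}

And with that context in place, the test itself: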

[Test, Description("Purchase fails when attempting to use an expired CreditCard")]
public void PurchaseFailsWithExpiredCreditCard()
{
    // Set up the test context - only the data that matters for this test is overridden here.
    user.CreditCard = expiredMasterCard;

    using (HomePage homePage = new HomePage(ConfigurationHandler.PreProdUrl))
    {
        LoginPage loginPage = homePage.GoToLogin();
        ProfilePage profilePage = loginPage.Login(user);
        StorePage storePage = profilePage.GoToStore();

        // Checkout(...) and Pay(...) stand in for whatever your store and checkout pages expose.
        CheckOutPage checkOutPage = storePage.Checkout(user.ShoppingList);
        PaymentPage paymentPage = checkOutPage.Pay(user.CreditCard);

        Assert.IsTrue(paymentPage.ExpiredCardMessage());
    }
}

This way the test code becomes pretty generic for the "login, select items and pay" flow. Only the card data and the expected result would change for different card types. Maybe you could even parameterise this further with something like the MbUnit RowTest pattern, one row per card type.
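
As a rough sketch of that idea, assuming MbUnit 2.x RowTest attributes - the TestCards.Get(...) helper, the PaymentErrorMessageContains(...) check and the messages themselves are invented for illustration:

// Sketch only: TestCards.Get(...) and PaymentErrorMessageContains(...) are hypothetical helpers.
[RowTest]
[Row("Expired", "This card has expired")]
[Row("NoAvailableCredit", "Insufficient funds available")]
[Row("WeirdCardType", "Card type not accepted")]
public void PurchaseFailsForInvalidCard(string cardLabel, string expectedMessage)
{
    user.CreditCard = TestCards.Get(cardLabel);

    using (HomePage homePage = new HomePage(ConfigurationHandler.PreProdUrl))
    {
        LoginPage loginPage = homePage.GoToLogin();
        ProfilePage profilePage = loginPage.Login(user);
        StorePage storePage = profilePage.GoToStore();
        CheckOutPage checkOutPage = storePage.Checkout(user.ShoppingList);
        PaymentPage paymentPage = checkOutPage.Pay(user.CreditCard);

        Assert.IsTrue(paymentPage.PaymentErrorMessageContains(expectedMessage));
    }
}

The generic flow stays put; only the row data and the expected message vary per card type.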

Anyway this seems to work for me at the moment, but I'm sure there are many more ways to skin this cat. Let me know what works for you.