February 10, 2010

Reporting on VSTS and external perfmon data

In an ideal world, after any test run I would have just one centralised store of data that held all the performance-related information. So, while the VSTS load test data collation works really well, there are situations where you can't avoid reporting across multiple data sources.

For instance, when testing locked-down servers from outside the firewall (with no option to tunnel through), the only real option is to log the counters on the server side and get these sent back for processing and correlation with the response stats. Grant H shows you here how to re-format a binary log file into another format (e.g. SQL) via the ReLog tool, which is really handy for this kind of thing. Another situation could be correlating a test run log file with some type of information (text etc.) that isn't suited to placing in a custom performance counter.
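If you go down that path, the ReLog call is a one-liner – something like the following, where the input file, DSN and log-set name are placeholders for your own environment:

relog ServerCounters.blg -f SQL -o SQL:PerfmonDSN!TestRun01

That pushes the binary .blg counters into a SQL database via an ODBC DSN, which gives you the CounterData / CounterDetails tables used in the query further down.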

For reporting across multiple data sources I have been extremely impressed by Tableau – a data visualisation tool – but I'm sure that this could be done with other charting software too.

The key issues are:

  • creating a Union select statement that draws the results together across the two databases, and
  • ensuring that the timestamps play nicely together.


I had issues with an invalid character (a line return or some such) getting appended to the perfmon CounterDateTime – hence the "LEFT([CounterDateTime], LEN([CounterDateTime])-1)" to trim that off – and then needed to convert the formatted result into a datetime datatype for consistency with the VSTS data.

If you have the Desktop Professional version of Tableau you can manage the custom SQL directly within the Tableau data connection – I only have the Personal version, so the data from the following query was drawn into Excel as an intermediate step.


To get the VSTS data into a suitable form, here is a query that compiles a good data set for this sort of view. It takes some trawling through the VSTS Beta 2 database schema to link this up, so here is something to get you going....

--Select the perfmon data from the SQL database [PDB]
SELECT CONVERT(datetime, LEFT([CounterDateTime], LEN([CounterDateTime]) - 1)) AS time_stamp,
       ObjectName, CounterName, CounterValue
FROM [PDB].[dbo].[CounterData]
  INNER JOIN [PDB].[dbo].[CounterDetails]
    ON [CounterDetails].CounterID = [CounterData].CounterID

UNION

--Select the VSTS data from the load test database [2010ResultsDB]
SELECT pd.TimeStamp AS time_stamp,
       wrm.RequestUri AS ObjectName,
       'Response Time' AS CounterName,
       pd.ResponseTime AS CounterValue
FROM [2010ResultsDB].[dbo].[LoadTestPageDetail] AS pd
  INNER JOIN [2010ResultsDB].[dbo].[WebLoadTestRequestMap] AS wrm
    ON wrm.RequestId = pd.PageId
WHERE pd.LoadTestRunId = ::your test run id here::
ORDER BY time_stamp ASC

Here you can see the response time stats (from VSTS) plotted against the server CPU (from perfmon).




If you have a lot of data it can really pay to create the suggested Tableau data extract, which speeds up the rate at which the display refreshes when you dig around the results with the visualisations.

December 23, 2008

A new virtual horizon

James Whittaker points to virtualisation as one of the key things that will emerge in the future of testing, and in my mind the Hyper-V revolution is definitely shaping up. At work here, we're already planning ways to cope with private and shared VM "whole deployment" test environments as we iterate through the development cycles. Perhaps this is old-ish news to teams in product-focussed businesses. For Intergen, as an IT service provider working in this space with our small (days to weeks) test windows and many concurrent projects, this is huge.


I am also about to start execution of performance tests against a P to V (production to virtualised) environment. I remember talking to a client 18 months ago about how performance testing against a virtualised environment should be treated as purely indicative, as the capability of VMs under load just didn't really approach that of real servers. Perhaps this is still true under stress, but the difference here is that we're talking about achieving acceptable "production-ready" response and throughput, with only the CPU utilisation explicitly excluded from the set of acceptance criteria we'd use against the bare metal. Funny how quickly things change. In any case, no doubt we'll learn something new.


I have a weather eye on the emergence of cloud computing – Windows Azure / Amazon's EC2. Presuming that the application scales at all, I predict that these services will make it possible to scale out an app even if it is inefficient, just by buying more capacity. While you could worry that this threatens the 'performance testing' discipline in some ways, I predict that it will make the justification for measurement and tuning more direct.


In the old days, if your app worked acceptably well on the machines you already had (with some headroom), that was all you cared about. With the cloud, however, there is no fixed asset, so any performance improvements will save you operational cost pretty much immediately, and there will probably need to be a more regular assessment of your needs, especially if you have variability in your weekly / monthly / yearly demand.


In any case, this whole area is going to be a new frontier, with a myriad of concerns, not only technical but moral as well (e.g. legal jurisdiction, security/privacy). Exciting!

Feel free to share your thoughts on how VM technology is going to affect the tester.

December 3, 2008

User Modelling and Test Automation

Not many people have talked about the "other side" of test modelling – what I call the user model, or the human proxy :-) It has taken me some time to settle on this pattern for separating the input from the app model (decoupling it) in a way that I'm happy with.

My Test projects typically have 3 main pieces:


  • A test fixture (or fixtures) – the test(s) themselves, which use both the page model and the user model
  • An app model – whatever test framework drives the app (e.g. WatiN, UIAutomation, etc.)
  • A user model

The user model is the mechanism I use to deliver data to the AUT. Think of the typical user view of the app: as a user of the app I have a username, password, email, DOB, etc. The user might need a shopping list, which in itself might have many items, or only a few. They may need a credit card with specific properties – Good, Expired, NoAvailableCredit, WeirdCardType, etc.

So for my example app here, I have simple property bag classes of type User, PurchaseItem and CreditCard - with a User instance having a ShoppingList (made up of PurchaseItems) and a CreditCard. The nature of this model will depend on what your app does - a zoo management app would probably need animal, cage and refreshment-kiosk objects.
Broadly speaking I end up with a model object per form in the UI, although things like registering for an account and logging in will both use the "User" object.
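As a rough sketch – the class and property names are just from my example app, and the defaults are purely illustrative – the property bags look something like this:

using System;
using System.Collections.Generic;

// Simple property bags for the user model - one object per "form's worth" of data
public class PurchaseItem
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class CreditCard
{
    public string CardType { get; set; }
    public string Number { get; set; }
    public DateTime Expiry { get; set; }
}

public class User
{
    public User()
    {
        // Mandatory defaults, so a test only overrides the data it actually cares about
        Username = "defaultUser";
        Password = "P@ssw0rd!";
        Email = "default.user@example.com";
        DateOfBirth = new DateTime(1980, 1, 1);
        ShoppingList = new List<PurchaseItem> { new PurchaseItem { Name = "Widget", Price = 9.95m } };
        CreditCard = new CreditCard { CardType = "Visa", Number = "4111111111111111", Expiry = DateTime.Today.AddYears(2) };
    }

    public string Username { get; set; }
    public string Password { get; set; }
    public string Email { get; set; }
    public DateTime DateOfBirth { get; set; }
    public List<PurchaseItem> ShoppingList { get; set; }
    public CreditCard CreditCard { get; set; }
}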

When I initialise a test I instantiate pieces of the user model and assign them values. Mostly the defaults will be fine; in other situations I construct specific data required for the test. Mandatory data defaults are built into the model (or, even better, dynamically retrieved from a known DB state and set in the fixture setup) – I'll steer clear of the "setup is good" vs "setup is bad" debate here...

This means that the intent of THIS TEST is more obvious, as only the data that really matters for the test gets set here. I quite often find I use random generators where uniqueness is required; I have a set of helper classes in the user model to do this kind of thing with GUIDs, text, dates, etc.
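A minimal sketch of that sort of helper – the class and method names here are just what I happen to use in this example:

using System;

// Helpers for generating unique data where the app demands uniqueness
public static class TestDataHelper
{
    // e.g. "user_3f2504e0" - unique enough for a test run
    public static string UniqueUsername(string prefix)
    {
        return prefix + "_" + Guid.NewGuid().ToString("N").Substring(0, 8);
    }

    public static string UniqueEmail(string domain)
    {
        return UniqueUsername("test") + "@" + domain;
    }
}

So where a test needs a brand-new account, the setup can just do user.Username = TestDataHelper.UniqueUsername("user"); before the test runs.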

Once the user model is initialised I can start calling the page model and passing the user model components in as parameters.

Here is an example. In this case the existingUser object, their default shopping list and the expiredMasterCard would be set in the setup fixture.

[Test, Description("Purchase fails when attempting to use an expired CreditCard")]
public void PurchaseFailsWhenUsingExpiredCreditCard()
{
    // Setup test context - only the data that matters for this test is set here
    existingUser.CreditCard = expiredMasterCard;

    using (HomePage homePage = new HomePage(ConfigurationHandler.PreProdUrl))
    {
        // CheckOut and EnterPaymentDetails are methods on my app model
        LoginPage loginPage = homePage.GoToLogin();
        ProfilePage profilePage = loginPage.Login(existingUser);
        StorePage storePage = profilePage.GoToStore();
        CheckOutPage checkOutPage = storePage.CheckOut(existingUser.ShoppingList);
        PaymentPage paymentPage = checkOutPage.EnterPaymentDetails(existingUser.CreditCard);

        Assert.IsTrue(paymentPage.ExpiredCardMessage());
    }
}

This way the test code becomes pretty generic for the "login, select items and pay" flow. Only the expected result would change for different card info. Maybe you could even parameterise this further with something like the MbUnit RowTest pattern, one row per card type.
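For instance, a rough MbUnit sketch might look like this – the card names, the testCards lookup and the PaymentAccepted() call are assumptions for illustration, not anything in the app above:

[RowTest]
[Row("ExpiredMasterCard", false)]
[Row("GoodVisa", true)]
[Row("NoAvailableCreditAmex", false)]
public void PurchaseOutcomeDependsOnCardType(string cardName, bool purchaseShouldSucceed)
{
    // testCards is a dictionary of pre-built card objects from the setup fixture
    existingUser.CreditCard = testCards[cardName];

    using (HomePage homePage = new HomePage(ConfigurationHandler.PreProdUrl))
    {
        LoginPage loginPage = homePage.GoToLogin();
        ProfilePage profilePage = loginPage.Login(existingUser);
        StorePage storePage = profilePage.GoToStore();
        CheckOutPage checkOutPage = storePage.CheckOut(existingUser.ShoppingList);
        PaymentPage paymentPage = checkOutPage.EnterPaymentDetails(existingUser.CreditCard);

        Assert.AreEqual(purchaseShouldSucceed, paymentPage.PaymentAccepted());
    }
}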

Anyway this seems to work for me at the moment, but I'm sure there are many more ways to skin this cat. Let me know what works for you.