October 3, 2005

MAF - Data wars

Here we go. I'm on MAF with the Featherston Street team. Prob'ly be there in the new year.

Weird kind of testing, this Data Migration - building an application you hope will be run as few times as possible. I'm guessing that because there is no UI (and because Mr Andrew Peters is on-board) I'll be encouraged to implement a FitNesse suite.

Check out FitNesse here.
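If the FitNesse route happens, I imagine most of the checks boil down to column fixtures that compare source rows against what lands in the target database. A very rough sketch of what I have in mind, assuming the standard fit.dll ColumnFixture - the MigrationRepository and Customer types are purely hypothetical and just stand in for whatever data access the migration code ends up exposing:

using fit;

public class MigratedCustomerFixture : ColumnFixture
{
    // Input column: the legacy key we expect to find migrated into the target database.
    public string legacyCustomerId;

    // Output columns: FitNesse compares these against the expected values in the wiki table.
    public bool rowExists()
    {
        return MigrationRepository.Find(legacyCustomerId) != null;
    }

    public string migratedName()
    {
        Customer c = MigrationRepository.Find(legacyCustomerId);
        return c == null ? "(missing)" : c.Name;
    }
}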

September 9, 2005

VS2005 Web test custom validation rule

using System;
using System.Text;
using System.ComponentModel;
using System.Collections.Generic;
using System.Collections.Specialized;
using Microsoft.VisualStudio.QualityTools.WebTestFramework;
using Microsoft.VisualStudio.QualityTools.WebTestFramework.Rules;

// Handle validation of response header values
namespace HeaderValidationRule
{
    // Inherits validation rule
    public class ValidateHeader : ValidationRule
    {
        // Header name
        private string m_headerStringParameterName;
        public string HeaderStringParameterName
        {
            get { return m_headerStringParameterName; }
            set { m_headerStringParameterName = value; }
        }

        // Header value
        private string m_headerStringParameterExpectedValue;
        public string HeaderStringParameterExpectedValue
        {
            get { return m_headerStringParameterExpectedValue; }
            set { m_headerStringParameterExpectedValue = value; }
        }

        // Fail if not found?
        private bool m_isRequired = true;
        [DisplayName("Is Required")]
        public bool IsRequired
        {
            get { return m_isRequired; }
            set { m_isRequired = value; }
        }

        // Add logic to handle header validation
        public override void Validate(object sender, ValidationEventArgs e)
        {
            e.IsValid = false;
            e.Message = "Not found";
            for (int i = 0; i < e.Response.Headers.Keys.Count; ++i)
            {
                if (e.Response.Headers.GetKey(i) == HeaderStringParameterName)
                {
                    string strVal = e.Response.Headers.Get(HeaderStringParameterName);
                    if (strVal.Contains(HeaderStringParameterExpectedValue))
                    {
                        e.IsValid = true;
                        e.Message = "Found";
                        break;
                    }
                }
            }
        }
    }
}
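To attach the rule from a coded web test (rather than through the web test editor), it gets wired to the request's ValidateResponse event inside GetRequestEnumerator() - roughly as below, going from memory of what the generated code looks like; the URL and header values are just examples:

WebTestRequest request1 = new WebTestRequest("http://localhost/MyApp/Default.aspx");
ValidateHeader headerRule = new ValidateHeader();
headerRule.HeaderStringParameterName = "Content-Type";
headerRule.HeaderStringParameterExpectedValue = "text/html";
request1.ValidateResponse += new EventHandler<ValidationEventArgs>(headerRule.Validate);
yield return request1;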

Binding CSV data to a web test in VS2005

(nicked from the VSTS Test Tools Forum)

To add a csv file as a datasource, do the following:
1) Create a csv file that looks like the following:
username,password
user1,password1
user2,password2

The first row is for column headers.

2) Click the add datasource button for the webtest
3) Choose Microsoft Jet 4.0 OLE DB Provider as the OLE DB Provider
4) Click the Datalinks button
5) On the connection tab, enter the directory that the csv file is in for the "Select or enter a database name:" text box. Enter just the directory.
6) Click on the all tab.
7) Double Click Extended Properties
8) Enter "text" as the value for Extended Properties and hit OK
9) Click Ok for the Data Link Properties dialog
10) Click Ok for the connection properties dialog
11) Choose the csv file in the choose tables dialog.
12) Add the datasource to the field you want to bind to. The column headers in the csv file will be used for the field names.
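For what it's worth, the connection string that falls out of those steps looks roughly like the one below (using C:\TestData as the directory from step 5 - directory and file names are just examples), and the bound field ends up with a binding expression of the form DataSourceName.TableName.ColumnName, where the csv's "table" name comes through as filename#csv:

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\TestData;Extended Properties=text

{{DataSource1.logins#csv.username}}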

The Perfect Bug

(nicked from a VSTS Quality Tools blog) Whether we’re running automated or manual test cases, whenever we come across a failure it is often a good idea to log the issue in the bug database. In the Test Results window after a test run, you can use the list of failures to investigate a failure, rerun a test under the debugger, and finally associate the failure with a work item.

When you activate this feature, the product does a little bit of the work for you by opening a new bug form and filling in some of the fields.

If you have access to a Team Foundation Server, I recommend trying this out. First execute a test case that will fail, publish it, and then execute the “Create Work Item” menu item off of the context menu from the failed case.

[TestMethod]
public void TestMethod1()
{
    Assert.Fail("Misc. bug in this code");
}

What you’ll see is a new bug form open with a bug title prefix (in my case “TestMethod1: “). You’ll also see in the Comment and History section the error message text (in my case “Assert.Fail failed. Misc. bug in this code”).

Note: Alternatively, if the bug you want to associate this failure with already exists, you can execute another menu item: Add to Work Item. This helps you add failure information from this case to an existing bug.

The rest, and arguably the real value, comes from you – we’ve just tried to automate some of the process that slows you down.

So, what do you put into the bug? Another way to ask this question is: what will the developer need to see in order to fix the bug in a very efficient manner? What information will increase the likelihood of a bug fix? How can I reduce the number of bugs that get resolved as Not Repro? A process that helps you enter bugs that achieve all those things could be called the Perfect Bug.

The Perfect Bug is a good thing to achieve. Others will more quickly understand it. Management will make quicker, but more informed and accurate decisions for the product. The developer will be less likely to misinterpret the bug and provide the wrong fix. You have supplied the developer with critical information that makes fixing the issue as quick and painless as is possible. Everyone will spend less time on the bug (reading, comprehending, etc).

The absolute perfect bug is not usually attainable. There is obviously a limit to the amount of time you should put into a bug. There has to be a corresponding benefit to the time you put into it. However, we can first focus on entering really good bugs and work our way up.

A high quality bug is easy to read. It is concise, yet contains additional crucial information. How can we communicate so concisely and clearly?

Some of these items are specific rules to remember, but in general they can be wrapped up in a set of principles:

  • Make it concise and easy to parse.
  • Data large enough to break the previous principle can be included elsewhere and simply referred to.
  • The bug should be easy for another person to find (a tester looking for a duplicate, others looking for a bug they once saw).
  • The bug entry should make the best and most accurate case for fixing.

A perfect bug has…

An accurate and concise title

  • A bug title should not be too generic. The reader won’t understand what the bug really is. Bad example: App doesn’t work. What app? How doesn’t it work? What is specific about this scenario that causes it to happen?
  • A bug title should not be too long. The longer the title is, the more the reader has to concentrate; they may have to reread it several times. Bad example: Leave defaults in a new test and run it. Test outcome is "Failed". Should it be "Error" or "Not Runnable" instead? This really could be said in a lot fewer words. Most of that title belongs in the Repro Steps section. Try Defaults for new test yields a ‘Failed’ result.
  • A bug title should include relevant error messages or crash address. This makes it easier for others who are searching for duplicates. Good example: FileNotFound Exception in ObjectStore.css line 47 when opening file with .xxx extension. If I get this error when testing and search on it, this bug title will immediately pop out at me. Perfect!
  • A bug title should have all words spelled correctly, especially error messages. Take the time to make sure you spell the words in your title correctly (do I hear an endorsement for built-in spell check?). Otherwise, people looking for duplicates will not find yours, and this causes more work for everyone.

Severity and Priority that are accurate

  • Really think through the severity and priority you set for a bug. Consider other bugs you have entered and compare this bug to those. Know that developers (hopefully) use priority to set the order in which they fix bugs.
  • If you enter a bug with what may seem like an unusual sev/pri or if the sev/pri are high, it would be very helpful to others if you explain your case for it. Especially do this if you change pri/sev.
  • As a product group, define what it means for a bug to be sev 1 and so forth. The work item tracking solution has a feature to show a tool tip when you hover over the labels, which can be used to reinforce the definitions. Here are some typical definitions used by Microsoft for your benefit:

Sev 1: Critical Failure. Completely breaks product or large set of features. Unusable. Significant risk or liability if released (security, legal).

Sev 2: Major Impact / Functionality Broken. Breaks major functionality or contributes to overall instability in this area; non-fatal assertions. E.g., Statement Completion not active at all, or memory leaks. Regression from prior release.

Sev 3: Minor Impact / Functionality Impaired. Breaks major functionality in a minor way or breaks minor functionality completely. E.g., missing item from list for statement completion.

Sev 4: Little / No user impact. Can still use product / features. Minor functionality problems or UI blemishes or other issues that do not impact customers’ use or perception.

Pri 0: “NOW” Bug. Work stoppage, no work around. Blocking further progress in area or by group. Fix Immediately! 24 hr turn around expected!

Pri 1: Showstopper. This is deeply impacting customers OR internal progress. Worthy of a Service Pack or QFE. Fix Soon! Also, required to fix for RTM.

Pri 2: Important Bug. Required fix for RTM. Can be fixed any time before RTM.

Pri 3: Something we would like to fix but not required to fix to ship the product.

Pri 4: An unimportant bug or request. A bug that will likely not be fixed.

One manager at MS puts bug priority in terms that might resonate better with you:

Pri 1: Will slip the product indefinitely to get this in.

Pri 2: Will slip our date within limits to get this in. Painful cut if not in.

Pri 3: Will not even think about slipping the product for any of these.

If you are unsure about pri/sev, chat with others to level set your expectations.

If you notice that management or others regularly reset your bugs’ sev/pri, ask them to discuss why they see it differently.

Filled in fields (customize your bug form)

  • These are all examples of custom fields we use at Microsoft. You can also add these to your own custom bug form.
  • How Found

The idea is that the tester indicates how they found the bug, as in via what kind of testing. Was it a Test Pass? Automation? Ad hoc Testing? Customer Feedback? Bug Bash?

Your organization can use this field for metrics. If you know you found 30% of your bugs via automation, it’s a compelling reason for your organization to invest more heavily in it. If you find that bug bashes result in more bugs then you’ll know to plan more of those. You get the idea.

  • Environment

Can be very helpful with reproducing a bug. Enter OS, proc (32/64-bit), product flavor.

  • Blocking

Is this a blocking bug? If so, you should definitely mark the bug as such and explain why in the description.

A blocking bug usually is around one of these:

    1. Precludes a build from being generally testable
    2. Prevents testing of other features in the area
    3. Precludes a build from being generally safe for dogfooding
    4. Breaks defined user scenario
    5. Degrades a feature area's quality bar so that it is not meeting expectations

Repro Steps

  • You can add a large, multi-line pane next to Comment and History to hold the repro steps. Unlike Comment and History, which can only be appended to, this field can be updated at any time (a rough sketch of the work item type XML for such a field follows this list).
  • This is where you can make the biggest difference. Repro steps tell the reader a LOT about the bug. How easy to repro is this? Is the customer likely to run into it? Does it require the planets to align?
  • Should include three sections: Repro Steps, Results, and Expected Results.
  • If your actual steps to repro seem long, you will confuse the reader. They will also think the likelihood of the bug impacting most customers is slight. My suggestion is to include two repro steps. Have one be how most customers will hit it. The second one will be how to actually reproduce the bug from start to finish for a developer or another tester. Blurring the line between these two often masks how likely a bug is to be encountered. Make a good case for your bug to be fixed by describing the customer scenario separate from the straight steps to repro.
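As an aside, adding a pane like that is a work item type customization: the witd XML boils down to a new HTML field plus a form control to display it. Very roughly - the refname and label here are made up, not the actual Microsoft template:

<!-- in the FIELDS section -->
<FIELD name="Repro Steps" refname="MyCompany.Bug.ReproSteps" type="HTML" />

<!-- in the FORM / Layout section -->
<Control Type="HtmlFieldControl" FieldName="MyCompany.Bug.ReproSteps" Label="Repro Steps:" Dock="Fill" />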

Extra info in the description

  • Here is where you can put any other input you have. Your expertise is highly valued. Did you debug into the code a bit and find something relevant? Put that here!

Your bug may include a reference to the offending source. Excellent example: Incorrect permission looked up - uses UPDATE_DATA permission instead of PUBLISH_DATA. The description could say, “Note: This is because <path to file>\Service.cs:80 is requesting "UPDATE_DATA" rather than the publish permission.”

You could include a detailed description of the architectural flaws that led to the problem.

Do you have any root cause analysis? Do you have any proposed fixes and comments about pros/cons of each fix?

  • Are there caveats to the bug that you think should be known? Is the problem easily avoided or resolved? Does the issue never go away? What about after reopening the solution or restarting the IDE?
  • This is also where you can comment on your sev/pri rating. If this is a regression, here is where you should mark it. A regression is a set of repro steps that used to work, either in a previous release or previous build that no longer works. Management may be more likely to take a bug if quality has regressed.
  • Is this issue a crashing bug?

If the OS has crashed (blue screen), do you have a Kernel Dump or a Kernel Mode Debugger attached to a repro machine?

UI crashing bugs should have a call-stack and mini-dump.

ICE (internal compiler error) bugs should have a preprocessed file. If a CHK build is available does the crash repro on CHK, and are there any Asserts hit? When in doubt, keep the debug session active and contact the owning dev.

  • You could include a list of test cases to run to verify the fix. Oooh, ahhhh.
  • Still very important… make all of the above readable. Format it in such a way that one can read through the bug and quickly identify relevant information.

Attached files

  • There are several different kinds of files that may be very relevant. Perhaps the most helpful is a screenshot of UI that is bugged, especially in localization issues. Often you can say a lot more with a picture than you can accurately convey in text. Indicate that you’ve attached a picture so others know to look in the Files tab.

The picture should be in JPG or PNG format. BMP and other formats are generally too big.

Crop the picture to just what needs to be shown.

If you have a lot of code to include in a repro step, it might be best to simply attach a file and refer to it in the bug Repro Steps.

August 25, 2005

VS2005 Load Testing

Ok, so the ACT testing didn't go so great. Basically the testing window was 4 hours (from 3 p.m. to 7 p.m.) and I saw some significant responsiveness issues during that period. Anecdotally we saw a huge improvement after changing a (seemingly unrelated) registry setting regarding the Active Directory implementation, but who knows? (we didn't re-test...)

I've been asked to look at the performance for another client and have spent quite some time with the VS2005 Beta 2, getting a series of tests automated. It is a huge step up from the ACT tool, although grappling with the most 'beta' aspects is still a challenge.

Things I really like:
  • The web test recorder works with HTTP 1.1, SSL and NTLM etc...
  • The automatic handling of VIEWSTATE (thankyou, thankyou, thankyou)
  • Out of the box extraction / validation rules and their extensibility (very powerful).
  • Working in an Intellisense IDE.
  • Visual representation of the requests, querystrings, form posts etc
I'll just quickly list some of the areas where I've had some pain - if there is anyone out there who needs more info about any of these, I'm happy to share....
  1. XML data binding - Nope, not supported just yet as far as I can tell - use an SQL table if it's required.
  2. Web / Load Test renaming and moving - also works inconsistently. I get warnings and misnamed files quite regularly when doing these pretty basic operations. I've learned to be careful about this and usually any problems are obvious immediately. I just kill the copy and re-try until the tool gets it right (which it does eventually).
  3. Load Test Results - Don't seem to be saved in the same way as other test type results are. When I end a test session and shut down VS2005, the result view basically disappears. I suspect there is something more I could do if we had the ability to publish the results to the Foundation reporting service, but I'm not sure. The back-end data can still be pulled from the DB, but it's the front-end presentation that is so good. What I'm intending to do to work around this is run the tests in VS2005 but have perfmon record the counters separately (a rough logman sketch follows this list).
  4. Test Result DB connectivity - This required some in-depth research to get running. There is information out there around this (http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=55303) but the connection string dialog is still flaky. I did have to re-install the tool to get back at the appropriate setting when I wanted to re-set it.
  5. Random crashes - these happen a lot with test creation and debugging - although I must admit that I haven't lost data as a result, and I haven't had to abort a load test run in this manner either.... When running web tests it is common to get the system "freezing".
  6. The API is largely undocumented as yet - a pain if you want to do anything beyond what MSFT have provided for.
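For item 3, the perfmon side of that workaround is just a logman-driven counter log on the servers under test, kicked off around the load test run - something like the following, where the counter names, interval and output path are only examples:

logman create counter VSLoadTest -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\ASP.NET\Requests Queued" -si 15 -o C:\PerfLogs\VSLoadTest
logman start VSLoadTest
rem ... run the load test from VS2005 ...
logman stop VSLoadTest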

May 22, 2005

Start at Intergen and performance testing

This is my first post since joining Intergen. A pretty scary first week just trying to get to grips with the what and how of the projects and company ... seems like a good bunch of people, though.

I'm going to have to dust off the VBScript skills for the performance testing I'm doing with Damon. The ACT tool has some serious shortcomings in its 'development environment', which will have the side effect of making me think HARD. This will be kind of good because it will drum in some understanding, but will probably take longer than the PM would like. In any case, it's nice to have some under-the-hood work to do and is precisely the kind of thing that MoE just didn't get near.

A couple of things that DC mentioned seemed important, so I will note them here while I remember.
  1. DC mentioned that there is a machine config setting that needs to be watched on the webserver under test. I think it's in the web server's machine.config file: the enableViewStateMac="false" setting (see the snippet after this list).
  2. The client machine needs to be quite good even for emulating 100 concurrent users. As the spec is calling for 2400 for extreme load, this will be interesting to try on anything that isn't massive.
  3. I'll need to ensure I get permissions on the target web and db servers so I have access to the performance metrics.
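If that first setting is what I think it is, it sits under system.web (machine.config, or the app's web.config) and stops the server rejecting ViewState replayed by the recorded scripts - roughly:

<configuration>
  <system.web>
    <pages enableViewStateMac="false" />
  </system.web>
</configuration>

Test environment only, obviously - it switches off the ViewState tamper check.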
Here is an interesting white paper from AK uni, which I have yet to digest... and an interesting idea for a converted development framework.


April 23, 2005

Good Ideas

I reckon everyone has good ideas. I mean really good ideas. Ideas that would probably make you rich, powerful or insanely popular. I have two and I'm not sharing. That's the problem. Nobody shares these - and what good are they sitting in my head? Just because they require as-yet-not-invented pre-requisite technology or ridiculously large sums of capital investment doesn't mean they should be secret. Yet most of us would rather die with them than see someone else use them, profit from them or take the credit. I am damn sure I would!

Not sure at all if this idea - writing real thoughts in diarised form for public consumption - is good. Possibly risky ... but probably ill-advised (::mental note - don't post credit card details::)

By the way, you can tell ME your good idea/s and I promise I wouldn't tell.

Foundation

Right, so the environment is ready.... now I just need something to say.