September 9, 2005
VS2005 Web test custom validation rule
using System;
using System.ComponentModel;
using Microsoft.VisualStudio.QualityTools.WebTestFramework;
using Microsoft.VisualStudio.QualityTools.WebTestFramework.Rules;

/**********************************************************************************************/
// Handle validation of response header values
/**********************************************************************************************/
namespace HeaderValidationRule
{
    /**********************************************************************************************/
    // Inherits validation rule
    /**********************************************************************************************/
    public class ValidateHeader : ValidationRule
    {
        // Name of the header to look for
        private string m_headerStringParameterName;
        public string HeaderStringParameterName
        {
            get { return m_headerStringParameterName; }
            set { m_headerStringParameterName = value; }
        }

        // Value the header is expected to contain
        private string m_headerStringParameterExpectedValue;
        public string HeaderStringParameterExpectedValue
        {
            get { return m_headerStringParameterExpectedValue; }
            set { m_headerStringParameterExpectedValue = value; }
        }

        // Fail if not found?
        private bool m_isRequired = true;
        [DisplayName("Is Required")]
        public bool IsRequired
        {
            get { return m_isRequired; }
            set { m_isRequired = value; }
        }

        /**********************************************************************************************/
        // Add logic to handle header validation
        /**********************************************************************************************/
        public override void Validate(object sender, ValidationEventArgs e)
        {
            e.IsValid = false;
            e.Message = "Not found";
            for (int i = 0; i < e.Response.Headers.Keys.Count; ++i)
            {
                // Header names are case-insensitive per the HTTP spec
                if (string.Equals(e.Response.Headers.GetKey(i), HeaderStringParameterName,
                                  StringComparison.OrdinalIgnoreCase))
                {
                    string strVal = e.Response.Headers.Get(i);
                    if (strVal != null && strVal.Contains(HeaderStringParameterExpectedValue))
                    {
                        e.IsValid = true;
                        e.Message = "Found";
                        break;
                    }
                }
            }

            // Honor IsRequired: an optional header that is missing should not fail the rule
            if (!e.IsValid && !m_isRequired)
            {
                e.IsValid = true;
            }
        }
    }
}
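
To use the rule from a coded web test, one option is to wire its Validate method to a request's ValidateResponse event. Here is a minimal sketch, assuming the usual coded web test pattern; the URL and header values below are placeholders:

using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.QualityTools.WebTestFramework;
using HeaderValidationRule;

public class HeaderCheckWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Placeholder URL - point this at the page under test
        WebTestRequest request = new WebTestRequest("http://localhost/myapp/default.aspx");

        // Expect the response to declare an HTML content type
        ValidateHeader rule = new ValidateHeader();
        rule.HeaderStringParameterName = "Content-Type";
        rule.HeaderStringParameterExpectedValue = "text/html";
        request.ValidateResponse += new EventHandler<ValidationEventArgs>(rule.Validate);

        yield return request;
    }
}

For declarative web tests, build the rule into a class library and add a reference from the test project; the rule should then show up in the Add Validation Rule dialog.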
Binding CSV data to a web test in VS2005
To add a csv file as a data source, do the following:
1) Create a csv file that looks like the following:
username,password
user1,password1
user2,password2
The first row is for column headers.
2) Click the Add Data Source button for the web test
3) Choose Microsoft Jet 4.0 OLE DB Provider as the OLE DB provider
4) Click the Data Links button
5) On the Connection tab, in the "Select or enter a database name:" text box, enter the directory that the csv file is in. Enter just the directory, not the file name.
6) Click on the All tab
7) Double-click Extended Properties
8) Enter "text" (the literal word) as the value and hit OK
9) Click OK in the Data Link Properties dialog
10) Click OK in the Connection Properties dialog
11) Choose the csv file in the Choose Tables dialog
12) Add the data source to the field you want to bind to. The column headers in the csv file will be used for the field names.
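
For reference, what the wizard builds behind the scenes is a standard Jet text-driver connection string along these lines (the directory path here is a placeholder):

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\TestData;Extended Properties=text

Once bound, the field shows a binding expression of the form {{DataSourceName.TableName.ColumnName}}. With the Jet text driver the table name is the file name with the dot escaped, so for a file named users.csv the expression would look something like {{DataSource1.users#csv.username}}.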
The Perfect Bug
(nicked from a VSTS Quality Tools blog) Whether we're running automated or manual test cases, whenever we come across a failure it is often a good idea to log the issue in the bug database. In the Test Results window after a test run, you can use the list of failures to investigate a failure, rerun a test under the debugger, and finally associate the failure with a work item. When you activate this feature, the product does a little bit of the work for you by opening a new bug form and filling in some of the fields. If you have access to a Team Foundation Server, I recommend trying this out. First execute a test case that will fail, publish it, and then execute the "Create Work Item" menu item off of the context menu of the failed case.

[TestMethod]
public void TestMethod1()
{
    Assert.Fail("Misc. bug in this code");
}

What you'll see is a new bug form open with a bug title prefix (in my case "TestMethod1: "). You'll also see the error message text in the Comment and History section (in my case "Assert.Fail failed. Misc. bug in this code"). Note: alternatively, if the bug you want to associate this failure with already exists, you can execute another menu item, Add to Work Item, which adds the failure information from this case to an existing bug.

The rest, and arguably the real value, comes from you – we've just tried to automate some of the process that slows you down. So, what do you put into the bug? Another way to ask this question: what will the developer need to see in order to fix the bug efficiently? What information will increase the likelihood of a bug fix? How can I reduce the number of bugs that get resolved as Not Repro? A process that helps you enter bugs that achieve all those things could be called the Perfect Bug.

The Perfect Bug is a good thing to achieve. Others will understand it more quickly. Management will make quicker, but more informed and accurate, decisions for the product. The developer will be less likely to misinterpret the bug and provide the wrong fix. You will have supplied the developer with critical information that makes fixing the issue as quick and painless as possible. Everyone will spend less time on the bug (reading, comprehending, etc.).

The absolutely perfect bug is not usually attainable. There is obviously a limit to the amount of time you should put into a bug; there has to be a corresponding benefit to the time you put into it. However, we can first focus on entering really good bugs and work our way up. A high quality bug is easy to read. It is concise, yet contains the crucial additional information. How can we communicate so concisely and clearly? Some of these items are specific rules to remember, but in general it can be wrapped up in a set of principles. A perfect bug has…

An accurate and concise title.

Severity and Priority that are accurate.

Sev 1: Critical Failure. Completely breaks product or large set of features. Unusable. Significant risk or liability if released (security, legal).
Sev 2: Major Impact / Functionality Broken. Breaks major functionality, contributes to overall instability in this area, non-fatal assertions. E.g., Statement Completion not active at all, or memory leaks. Regression from prior release.
Sev 3: Minor Impact / Functionality Impaired. Breaks major functionality in a minor way or breaks minor functionality completely. E.g., missing item from the statement completion list.
Sev 4: Little / No user impact. Can still use product / features. Minor functionality problems or UI blemishes or other issues that do not impact customers' use or perception.

Pri 0: "NOW" Bug. Work stoppage, no workaround. Blocking further progress in an area or by a group. Fix immediately! 24 hr turnaround expected!
Pri 1: Showstopper. Deeply impacting customers OR internal progress. Worthy of a Service Pack or QFE. Fix soon! Also required to fix for RTM.
Pri 2: Important Bug. Required fix for RTM. Can be fixed any time before RTM.
Pri 3: Something we would like to fix but are not required to fix to ship the product.
Pri 4: An unimportant bug or request. A bug that will likely not be fixed.

One manager at MS puts bug priority in terms that might resonate better with you:

Pri 1: Will slip the product indefinitely to get this in.
Pri 2: Will slip our date within limits to get this in. Painful cut if not in.
Pri 3: Will not even think about slipping the product for any of these.

If you are unsure about pri/sev, chat with others to level set your expectations. If you notice that management or others regularly reset your bugs' sev/pri, ask them to discuss why they see it differently.

Filled-in fields (customize your bug form). How found: the tester indicates how they found the bug, as in via what kind of testing (automation, a bug bash, ad hoc testing). Your organization can use this field for metrics. If you know you found 30% of your bugs via automation, that's a compelling reason for your organization to invest more heavily in it. If you find that bug bashes result in more bugs, then you'll know to plan more of those. You get the idea. Environment: can be very helpful with reproducing a bug. Enter OS, processor (32/64-bit), and product flavor. Blocking: is this a blocking bug? You should definitely mark the bug as such and explain why in the description. A blocking bug usually falls into one of a few common categories.

Repro Steps.

Extra info in the description. Your bug may include a reference to the offending source. Excellent example: incorrect permission looked up - uses UPDATE_DATA permission instead of PUBLISH_DATA. The description could say, "Note: This is because <path to file>\Service.cs:80 is requesting "UPDATE_DATA" rather than the publish permission." You could include a detailed description of the architectural flaws that led to the problem. Do you have any root cause analysis? Do you have any proposed fixes and comments about the pros/cons of each fix? If the OS has crashed (blue screen), do you have a Kernel Dump or a Kernel Mode Debugger attached to a repro machine? UI crashing bugs should have a call stack and mini-dump. ICE (internal compiler error) bugs should have a preprocessed file. If a CHK build is available, does the crash repro on CHK, and are there any Asserts hit? When in doubt, keep the debug session active and contact the owning dev.

Attached files. Pictures should be in JPG or PNG format; BMP and other formats are generally too big. Crop the picture to what is needed. If you have a lot of code to include in a repro step, it might be best to simply attach a file and refer to it in the bug's Repro Steps.