Staying Positive

Testing is a joyful and creative exercise for some of us who are professional testers and/or test managers.  We automatically view an identified and confirmed issue as a victory of sorts, and strive to find more.  To the non-professional testers on the team, however, finding issues can be disheartening.  The greatest danger to the team and project arises when the disheartened are the end users and/or customers fulfilling their part of the testing required on the solution (the user-testers).  If “working software” is the goal of the development episodes in the project, then “adopted solution” is the goal of the project once working software has been achieved.

Test managers in particular will have to be diligent about keeping end-user testers positive during the user acceptance testing stage.  Issues discovered in this phase reflect the organization’s ability to adopt the solution, so they feel particularly severe to the testers no matter how fast a fix comes in.  Maintaining a positive atmosphere and avoiding finger-pointing between the customer and supplier organizations (even within the same company) demands creativity.  The last thing a test manager wants to see is a rift between developers and testers.

Here are some ideas:

  • Change “root cause analysis” to “resolution classification” so that user-testers witness where the resolution of any identified issue will come from.  One thing people learn quickly is to befriend problem solvers.  On our last project, we took this so far as to change the title of the “root cause” field to “resolution category” in the issue log.
  • Report positive headlines (passing tests) as frequently as negative headlines (failing tests).  Some teams focus closely on issues and timelines to resolution, and rightly so – but for the few minutes it takes to also collect and highlight items that have passed, the effort is well worth it.
  • Iterations.  Completing an iteration yields a sense of accomplishment, even when there are issues discovered.  The end-of-iteration retrospective is a chance to build hope and confidence for the next iteration. Don’t skip the retrospectives.  We did on our last project and regretted it later because we began to lose focus on our process improvement efforts after that.
  • Continuous retrospective – post a learning matrix (things to change, things to keep, things to research, things to praise) in the testing common area (you do have one, don’t you?) and encourage people to post things to it on a regular basis.  Follow up on each item, from every category. It’s easy to address only the “things to change” but avoid that pitfall. Address items in the other quadrants during the daily stand-up/triage, or create an event just for that purpose.
  • Model the test logistics using a big visible model in any notation you desire.  Find a way to enable people to mark up the model as they find things that bug them about the test process or any other logistics.  We found that we got more consistent test results from people that shared a mental model of what we were up to.
  • Use test missions and a hierarchical sense of what needs to be tested so that part-time testers can participate on an equal footing with full-time testers.  If you’re using business testers, some of them will, for at least some of the time, be part-time.  If you can provide them with an achievable “call sheet” or session sheet that describes their test mission, then they usually work to satisfy the mission as opposed to just putting in time.  You’ll squeeze a bit more work out of them and they will leave with a sense of accomplishment.  You might have to coach the individuals who are writing the test missions – that’s not an easy task, and not everyone can do it.

I’m sure there are many more ways of encouraging testers during the intense periods of a project.  Contact me on Twitter at @testfirst if you have more.

Posted in Uncategorized | Leave a comment

PSExpect is Back

For anyone who stumbled on this blog looking for the PSExpect testing framework, it’s live again at

Thanks for your patience – my over-the-top complex project is finished and I expect to be able to respond to questions and issues again.



Developers: How to Save Money and Release Faster with PowerShell

I use a fictional Volunteer Coordination System (VCS) in much of my work as an example.  It’s a small application based on the need to coordinate volunteers at conferences.  I was a volunteer coordinator for some large conferences, and while (I think) I was successful, it was a lot of manual work.

Hence the idea to provide some assistance with an application.  The goal of the application is to help volunteer coordinators ensure adequate coverage of all the various requirements the conference managers have, and to help volunteers optimize their conference experience, balancing their work requirements against their desire to attend sessions.

The Short Version

Building a graphical user interface is more complex than building a PowerShell cmdlet.  If you build the System Administrator and User Administrator tasks as cmdlets, you save the effort required to build and test graphical user interfaces for those tasks.  Your PowerShell-friendly system administrators will love you for considering their perspective, and you’ve delivered faster because of the reduced effort on the project.

How do I know cmdlets are less expensive than graphical user interfaces?  You save time by avoiding most (but not all) user interaction design effort, and by reducing the test effort.  If you write your cmdlets using a test-driven development style, then you can promote all those automated tests that helped you work out the cmdlet interface as regression tests with minimal effort.  To do the same thing with a graphical user interface, you will need at least another tool to help with the automation, the tests will be harder to maintain, and on top of that, you will probably still need the expertise of an exploratory tester to give you feedback on the user experience.

How many functions could I divert to cmdlets? This will depend on the application target domain.  In the example below, 9 of 28 user tasks might be candidates for building as cmdlets.  This could mean that you’ve saved the effort required to build 9 screens, or more, depending on the desired user interaction. Seems worthy from my perspective.

What if I built cmdlets for all the functionality and then layered the graphical user interface on top of them as needed?  By Jove, I think you get the point.  That’s a great idea.  You’ve saved testing time since you can use simple PowerShell scripts to test all the domain functionality without the GUI (bypass testing).  This reduces the scope of the testing that must go through the GUI and helps you respond to changes later in the project.  And it leaves you room to adapt to the users’ needs, just in case they decide later that they need a GUI for one or more of their tasks.  In the meantime, you’ve deferred the development of that GUI to as late as is responsible (ideally).
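To make the cmdlet-first idea concrete, here is a minimal sketch of what one Conference Manager task might look like as a PowerShell function.  Everything in it is hypothetical – Add-Conference, New-ConferenceEntity, and Save-Conference are invented names standing in for the command interface and the business services layer:

```powershell
# Hypothetical sketch only: a thin command for the "Add Conference" task.
# New-ConferenceEntity and Save-Conference stand in for the business
# services layer; the real names and parameters depend on your design.
function global:Add-Conference([string] $Name,
                               [datetime] $StartDate,
                               [datetime] $EndDate)
{
    # Domain rules live below this layer, so a later GUI can reuse
    # exactly the same calls without duplicating logic.
    $conference = New-ConferenceEntity -Name $Name `
        -StartDate $StartDate -EndDate $EndDate
    Save-Conference $conference
    return $conference   # emit the entity so callers can pipe or inspect it
}
```

A PowerShell-friendly administrator would then run something like `Add-Conference -Name "Agile 2008" -StartDate 2008-08-04 -EndDate 2008-08-08` from the console – no web form required.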

The Long Version

Consider the functionality of this application based on a breakdown by user type (or persona).  Some of these have some business rules behind them, some of them are aspects of CRUD (create, read, update, delete).  Some of the functionality is required before the conference, some of it is required after the conference.

Conference Manager, focused more on the conference program than on logistics

  • Add Conference
  • Remove Conference
  • Add Conference Session
  • Remove Conference Session
  • Set Volunteer Work Requirement

Volunteer Coordinator, focused more on conference logistics than on the program

  • Add Volunteer Role
  • Remove Volunteer Role
  • Screen Applicant
  • Hire Volunteer
  • Drop Volunteer
  • Add Work Shift
  • Remove Work Shift
  • Check Coverage
  • Assess Volunteers Against Volunteer Work Requirement
  • Approve Work Shift Trade
  • Notify Volunteer
  • Broadcast Notify Volunteers

Applicant, will work for conference fee waiver

  • Apply for Volunteer Position
  • Accept Volunteer Position Offer

Volunteer, wants to maximize conference experience

  • View Volunteer Role Description
  • Sign Up for Work Shift
  • Block off Time to Attend Conference Session
  • Initiate Work Shift Trade
  • Accept Work Shift Trade
  • Decline Work Shift After Having Signed Up
  • View Personal Conference Schedule
  • Assess Self Against Volunteer Work Requirement
  • Notify Volunteer Coordinator

There might be more.  As a scope definition, that’s plenty of detail – certainly more than the goal description, so it’s a start.  Strong verbs and strong nouns make for a reasonable model of the desired functionality.  There’s some ambiguity since the business rules are not modeled; however, for the most part, it seems clear what the intended functionality is.

Solution Concept #1: All functionality is exposed via the web browser.

To build this solution, you first do a couple of architectural spikes to set the high-level design, and then, armed with the feedback from those spikes, you plan a number of releases and iterations that ultimately deliver the desired functionality.  For each of the above, the tasks would be something like the following, ordered whichever way you feel comfortable, refactoring as you go:

  • Elaborate requirements by identifying acceptance tests
  • Create the user interface storyboard (say using balsamiq)
  • Verify and validate the user interface storyboard against usability, branding objectives
  • Automate acceptance tests (excluding GUI)
  • Automate selected acceptance tests for regression suite (including GUI)
  • Build the presentation layer (user interface objects and tests)
  • Build the business services layer (process and control objects, entities, and tests)
  • Build the data services layer (database and tests)
  • Verify and validate the functionality (exploratory testing using the GUI)

Of course, these are arbitrary.  Maybe your list is longer.  I’m also excluding the logistics around source code management and version control.

Solution Concept #2: User administration functionality is exposed via PowerShell cmdlets, remainder exposed via web browser

To build this solution, you would first decide whether or not each user task counts as “administration”.  The greatest factors in this decision are the user persona, the frequency and timing of use, and how well the functionality maps to a graphical user interface.

I posit that all the Conference Manager tasks are administrative.  They won’t be run very often, do not require a lot of information to be typed in, and are performed by a role that aligns pretty well with my definition of “administrator”.  That saves 5 web forms.

I posit that some of the Volunteer Coordinator tasks are administrative, and I would break them down based on when each task is most likely to be needed.  Everything typically run before the conference is administrative; everything typically run during the conference is not.  That saves 4 web forms.

For the tasks that are exposed by graphical user interface, the development tasks from solution concept #1 would apply.  For those based on cmdlets, the development tasks are as follows:

  • Elaborate the requirements by identifying acceptance tests
  • Automate the acceptance tests by writing PowerShell scripts
  • Build the presentation layer (the command interface and snap-in)
  • Build the business services layer (process and control objects, entities, and tests)
  • Build the data services layer (database and tests)
  • Verify and validate the functionality (exploratory testing using the PowerShell console)

This feels like a simpler list of development tasks.  Depending on the target architecture, this could also mean one less programming language and one less third party library (for example, no javascript and no jQuery) for these functions.
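As a sketch of the “automate the acceptance tests by writing PowerShell scripts” task – with Set-VolunteerWorkRequirement and Get-VolunteerWorkRequirement as invented cmdlet names, and the Assert call following the PSExpect style used elsewhere on this blog – a bypass test can be as small as this:

```powershell
# Hypothetical bypass test for the "Set Volunteer Work Requirement"
# task: exercise the domain logic through the command interface, no GUI
# involved. The cmdlet names here are assumptions for illustration.
Set-VolunteerWorkRequirement -Conference "Agile 2008" -Hours 16
$requirement = Get-VolunteerWorkRequirement -Conference "Agile 2008"
Assert ($requirement.Hours -eq 16) -Label "WorkRequirement.Hours"
```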


Failing Generously

“The way you explore complex ecosystems is you just try lots and lots and lots of things, and you hope that everybody who fails fails informatively so that you can at least find a skull on a pikestaff near where you’re going.”
– Clay Shirky,

I know that @cshirky wasn’t talking about agile development in that post.  My belief, however, is that software development does qualify as a complex ecosystem and that there is tremendous value in “failing generously” when you do fail.

“Failing generously” has two aspects.  The informative aspect highlighted by @cshirky is the public one – the one you might hear in discussions at conferences or read in publications.

“Failing generously” is also a willingness, an openness, an agape of sorts, to avoid blaming the tool, the process, co-workers, the customer, or yourself.  Truth be told, the real reason for the failure is probably a myriad of those things, slap-dashed together with just the right timing in place for … the failure.  The etymology of the failure might be as complex as the ecosystem itself.

Fail generously. Study the context, plant pikestaffs for yourself and for others. Learn from it and understand what you could do better next time. If you simply must blame, then at least be willing to forgive, yourself included.


Test-Infecting a Powershell Script

Writing Powershell scripts is generally a lot of fun, but once in a while you might come across a need to become test-infected, that is, become a little obsessive about how well your script is tested before you distribute it.  There might be several reasons for getting infected: you’ve been burned, you have to distribute your scripts to people that don’t have quick and easy access to you, or the intent of the script has rather serious overtones and/or consequences should something go wrong.

Not to say there is anything wrong with _always_ being test-infected.  But sometimes you just want to have fun with the script and jam it out there for other Powershell hacks like me to have a look at.  There is real value in that.

Caveats aside, let’s look at test-infecting an existing script.  Steve Murawski gleefully (via twitter) offered his script Show-ADObject.ps1 as lab rat for my intended infection.  Here’s what happened.

My starting point was the existing Show-ADObject script.  To infect the script, the first step was to create a test script that describes the intended behaviour.  The second step was to refactor the target script until a) all test cases are automated and b) all test cases pass.

After running Steve’s script, I came up with seven behaviour-defining test cases.  I chose to describe each one of those test cases using the given-when-then phrasing from behaviour-driven development (BDD) since the appropriate aliases are built into the PSExpect testing framework.

Steve’s code had some dependencies that I have inoculated against infection through isolation.  In other words, I made them out of scope of my tests.  In particular, the graphing routine and the graphing class library that was used.  Consequently none of the following defining behaviours refer to the actual visual graph itself – only the items that are being fed into that graphing routine.

Written this way, the tests should read as naturally as plain text.  When you say each test, don’t skip to the text in quotes – read them with the words given-when-then included.

    # first defining test case - collect the nodes for a valid AD class name
    given "an empty list of objects to map and a valid class name that I know has entries"
    when "I request to view the node map for that class"
    then "there should be more than zero nodes to view"

    # second defining test case - collect the nodes for a valid set of AD class names
    given "an empty list of objects to map and a list of valid class names that I know each have entries"
    when "I request to view the node map for those classes"
    then "there should be more nodes than were discovered with just a single class name"

    # third defining test case - display a list of AD class names
    given "an empty list of AD class names"
    when "I request the list of valid AD class names"
    then "there should be more than zero valid AD class names returned"
    then "known class names like group, organizationalunit should be in the list"

    # fourth defining test case - adding color to the nodes
    given "an empty list of objects to map and a list of valid class names that I know each have entries"
    when "I request to view a colorized node map for those classes"
    then "there should be more than zero nodes to view and filtering only nodes without color should not yield any objects"

    # fifth defining test case - displaying help and usage examples
    when "I request to view the help and usage examples"
    then "the answer should be text and contain key help contents"

    # sixth defining test case - performing the check for required files when all files are in place
    given "a list of files that must be available for the script to perform as expected and that all the files are in place"
    when "I request to check to see if those files exist"
    then "the answer from the check should be true and the missing files report should be empty"

    # seventh defining test case - performing the check for required files when all files are _NOT_ in place
    given "a list of files that must be available for the script to perform as expected and that _not_ all the files are in place"
    when "I request to check to see if those files exist"
    then "the answer from the check should be false and the missing files report should contain the number of missing files"

These seven defined behaviours form the backlog of work that I have to do in order to make the infection complete.  The next step is to provide the script blocks for each of the given-when-then phrases, in effect, automating the test case.  It’s best to implement one test case at a time so that you can get to the first passing test quicker – you’ll enjoy the positive feedback you get when you work this way.

    given "an empty list of objects to map and a valid class name `
        that I know has entries"
        $script:ObjectsToMap = @()
        $script:TargetADClass = "group"
    when "I request to view the node map for that class" {
        $script:ObjectsToMap = Get-ADObjectsToMap $TargetADClass 
    then "there should be more than zero nodes to view" {
        $script:NumberOfNodesFromOneClass = `
            $script:ObjectsToMap.GetUpperBound(0) + 1
        Assert ($script:NumberOfNodesFromOneClass -gt 0) `
            -Label ("ObjectsToMap.Single.Count:" + `

Writing these code blocks is a translation exercise – you translate the natural language in text into Powershell script, calling the target script in order to exercise the true target of your test.  In this case, the target of the test is the function Get-ADObjectsToMap.  I’ve refactored its signature based on what the test needed.  Sure, there is a design element to the translation and you might come up with a different signature, but the point is – the function signature needs to support the test.  So the function Get-ADObjectsToMap isn’t allowed to perform any input or output – it must be written to accept input from the test script and then it must return something that the test script can inspect.  This enables the test oracle – the indicator of the pass/fail of the test – to be automated.
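The shape that falls out of that constraint looks something like the following – the body is illustrative, not Steve’s actual refactored code; only the “parameters in, objects out, no console I/O” contract matters:

```powershell
# Illustrative shape only: input arrives as a parameter and results are
# returned as objects, so the test script can both drive the function
# and inspect its output. Get-QADObject is the Quest AD cmdlet the
# original script depends on; the exact query here is an assumption.
function Get-ADObjectsToMap([string[]] $ClassNames)
{
    $objects = @()
    foreach ($class in $ClassNames)
    {
        $objects += Get-QADObject -Type $class
    }
    return $objects   # no write-host, no prompts: the test is the caller
}
```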

The ‘then’ clause has one test condition in it, confirming that at least something came back from the dip into Active Directory. The test condition is the line that starts with ‘Assert’, that is, a call to one of the functions in PSExpect.  I thought about adding another test condition to confirm that a specific item was being returned, but rejected the idea since I thought that was more a test of the Get-QADObject cmdlet than it was of my script.  Borderline.  At least this way, I’ve not injected any test condition that is specific to my Active Directory.

Running the script the first time failed miserably since I hadn’t created the target function with the right signature yet.  So parsing errors everywhere.  Next step: fix the parsing errors so that the script actually runs cleanly, albeit still fails the test.  This is an important step in test-driven development – getting to the first clean fail.  Next step after that, perhaps obviously, is to write the script so that the test passes.

Querying for group
03/01/2009 3:20:29 PM,SHOULDPASS,PASSED,PSpec,an empty list of objects to map and a valid class name that I know has entries
03/01/2009 3:20:29 PM,SHOULDPASS,PASSED,PSpec,I request to view the node map for that class
03/01/2009 3:20:42 PM,SHOULDPASS,PASSED,PSpec,there should be more than zero nodes to view
03/01/2009 3:20:42 PM,SHOULDPASS,PASSED,ObjectsToMap.Single.Count:237

Now you can choose to move on to the next test case, or you can choose to refactor the code written so far in order to improve its quality.  I recommend you refactor immediately so that it doesn’t feel like work later on.  Keep your focus tight.  Refactor to eliminate side effects, improve modularity, improve readability, etc.  Do this as you go instead of waiting until the end.

The process really doesn’t change for the remaining test cases, with the exception that, every once in a while, you may have to go back to a previously-passing test and revise it based on what you have learned from getting other test cases to pass.  This was certainly required here once it came time to handle the coloring of the nodes that were intended to be graphed.

Download both the test script and the (refactored) target script here:


Steps for test-infecting a script:

  1. Write a test script that describes the intended behaviour.  The script should not run since you haven’t written the target script yet.
  2. Get the test script running by filling in _just enough_ of the target script so that there are no syntax or parsing errors – the test script should run smoothly, but still fail the test cases.  The PSExpect testing framework was designed to be able to highlight test case failures yet still have the test script run smoothly.
  3. Choose a single defining test case – ideally the most significant or intended-behaviour-defining one – and write the target script so that the test case passes.  Avoid over-engineering the target script to handle the other test cases – instead, focus on the one test case that you’ve chosen and get it passing, quickest way possible.
  4. Refactor your target script to meet your personal or enterprise standards.  If in Step 3 you’ve got the test case passing in the simplest possible way, Step 4 is to keep the test passing but to improve its quality.
  5. Choose the next-best defining test case and repeat Steps 3 and 4.  This time, you will need to ensure that the first test case stays passing.  This might mean refactoring your target script some more in order to handle the new test case.  It may also mean refactoring your first test case.  By the end of this step, though, all the tests you’ve tried to get passing, should be.
  6. Rinse. Lather. Repeat.  That is, repeat Step 5 until you’ve run out of behaviour-defining test cases.
  7. Refactor.  This time, add some test cases that are only there to confirm the stated quality.  These are tests like how to deal with null inputs, how to deal with exceptions, etc.   These are supporting test cases since they don’t describe core intended behaviour, nonetheless may be necessary to confirm the intended design or quality level.
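For Step 7, a supporting test case might look like the following, written in the same given-when-then/Assert style.  Whether empty input should yield an empty result or an error is a design decision – this sketch assumes “empty in, empty out”:

```powershell
# Supporting test case: confirm behaviour on empty input. This does not
# define core behaviour; it pins down a quality/design decision.
given "an empty list of class names" {
    $script:ObjectsToMap = $null
}
when "I request to view the node map with no class names" {
    $script:ObjectsToMap = Get-ADObjectsToMap @()
}
then "there should be zero nodes to view and no error" {
    Assert (@($script:ObjectsToMap).Count -eq 0) `
        -Label "ObjectsToMap.EmptyInput"
}
```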


Have Fun With Your Tweets in Powershell

I can’t remember who got me interested in Wordle, but it struck me that I would love to see all of my Twitter posts as a “Beautiful Word Cloud” (their phrase).  The Java applet on that site transforms a bunch of text (or a site) into a word cloud, with the size of each word representing its frequency in the text.  I saw the post (and source code) from Adam Franco for retrieving Twitter content using PHP, but I was more interested in a Powershell version, and I didn’t want the posts in XML – that wouldn’t be useful input to Wordle.

Starting from the end … what I wanted was to use either of the following lines in Powershell:

"testfirst" | Export-Tweets | out-file adam.txt
Export-Tweets -User "testfirst" | out-file adam.txt

The first of these uses the pipeline to receive the Twitter username, useful if you want to put it into a loop to say, extract the contents of more than one user account.  The second one is a more literal request for the tweets of a single Twitterer.  In both cases, the output is to be saved in a file name of my choice using the handy out-file cmdlet.  In other words, I don’t want a script that hard-codes the output file name.
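That pipeline-friendly design makes the multi-user case trivial – for example (usernames are placeholders):

```powershell
# Gather the posts of several accounts into one file for the word cloud.
"testfirst", "cshirky" | Export-Tweets | out-file everyone.txt
```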

Recognize that first you have to get the Export-Tweets function into memory, so perhaps a more complete scenario that depicts my intended usage is the following, typed into a Powershell console:

"testfirst" | Export-Tweets | out-file adam.txt

The following script accomplishes these modest goals.  There are two Powershell tricks to point out.  The first trick is enabling pipeline input for a function.  You’ll see that in the definition of the function Export-Tweets since it has the begin{}, process{}, and end{} blocks.  The pipeline input is enabled in the process{} block while the explicit syntax with the User parameter is enabled in the end{} block.  The trick is to define the function that does the work in the begin{} block.

The second trick is substituting % for foreach.  This really cuts down on the script length, however, use that substitution with caution if you’re writing scripts that other people have to read.  It’s a useful and learnable enough reduction that I wanted to use it in this case.
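To see the substitution in isolation: `%` is simply the built-in alias for ForEach-Object, so these two pipelines behave identically:

```powershell
# Long form, friendlier for readers new to PowerShell:
1..3 | ForEach-Object { $_ * 2 }    # emits 2, 4, 6

# Short form using the % alias -- identical behaviour:
1..3 | % { $_ * 2 }                 # emits 2, 4, 6
```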

Here’s the script:

function global:AppendTweets([xml] $tweetsPage)
{
    $tweetsPage.selectnodes("/statuses/status") | % { $_.selectSingleNode("text").get_InnerXml() | write-output }
}

function global:CountTweets([xml] $tweetsPage)
{
    return $tweetsPage.selectnodes("/statuses/status").Count
}

function global:GetTweetsPage([string] $user, [string] $page)
{
    # $urlbase must already be set to the base URL of the Twitter XML
    # timeline API you are calling; it is not defined in this script.
    [string]$url = $urlbase + $user + ".xml?page=" + $page
    write-host "Connecting to URL " $url

    $webclient = New-Object "System.Net.WebClient"
    [xml] $tweetsPage = $webclient.DownloadString($url)
    return $tweetsPage
}

function global:Export-Tweets([string] $User)
{
    begin
    {
        function GetUserTweets([string] $User)
        {
            $pageIndex = 1
            $numTweets = 1
            while ($numTweets -ge 1)
            {
                $tweetsPage = GetTweetsPage $User $pageIndex
                $numTweets = CountTweets $tweetsPage
                if ($numTweets -ge 1)
                {
                    AppendTweets $tweetsPage
                }
                $pageIndex += 1
            }
        }
    }
    process
    {
        if ($_)
        {
            GetUserTweets $_
        }
    }
    end
    {
        if ($User)
        {
            GetUserTweets $User
        }
    }
}


AA-FTT Workshop – Iteration Two

Otherwise known as the functional testing tools workshop, this workshop will be held in advance of the Agile 2008 conference in Toronto this coming August.

My goals/intentions for a next-generation functional testing tool are simple:

  • test-first support
  • test-last support
  • exploratory testing support
  • one environment for all three of the above

In other words, I want it all.  Because I believe that testers need to have all styles/techniques in their personal portfolio, and I don’t want them to have to switch between tools when they choose the style that best fits their current test mission.

So what does that mean?

Test-first support means elaborating requirements and designs using tests.  That may or may not be a specification, that’s a semantic I’m choosing not to care about.  I just want to elaborate the requirements using examples and I want to run those examples and fulfill that happy test-driven development-with-customer-tests cycle.  Ditto for designs.  An environment that supports this would allow me to craft these examples in the absence of the system and it needs to do that because I’m not describing a system (yet).  I’m describing a domain.  I envision doing things like registering verbs, registering nouns and then constructing examples using verb-noun combinations of those registered verbs and nouns in specific sequences.  The nouns have state-like attributes so that I can describe things like ‘good customer’ and ‘bad customer’ without a ton of syntax.  Domain-specific, yes.  Language? Script? Don’t know.  I know it’s not XML because ‘flow’ doesn’t just appear out of an XML document, and it’s important that sequence and control does come easily when reading whatever artifact I’m describing.

Test-last support means verifying and validating a system under test, again using examples.  I may choose to run these as manual or automated tests.  Same thing – register verbs, nouns, build tests from verb-noun combinations in sequence.  With states.  You record-playback-refactor within the context of a single verb-noun combination, building larger scripts later, and only once those verb-noun combinations are registered.  In the age of Supervising Controller or Model-View-Presenter or Model-View-Controller, I believe these verb-noun combinations should work inside or outside the presentation layer, so back-end business process testing can be done at a different time than the front-end human-computer interaction testing.  But the scenarios we use at the back-end can in fact be re-run to include the front-end when the time is right.

Exploratory testing support means interacting with the system to explore its functionality, maintaining state as I go, providing me with logs and the ability to adjust that system state as I go.  I can adapt those logs for auditors to see later.  I can share the logs with other testers, and they are more productive because of that.  In other words, I can share what I have learned about the domain and the system with others on the team.  I would expect that again, the common verbs, nouns, and states that have been registered form the backdrop for this.

And yes, I want to switch between any of test-first, test-last, and exploratory testing, even within a single project.  Because all requirements are not equal, and the micro-context for any given requirement might just implicate one testing style over the other.

Maybe this is too abstract, but we have to start somewhere.  I’d like to think that we start by confirming support for all schools of testing, and to quote Jerry Weinberg at CAST 2008 – "Business is too complicated for any one school of testing.  Learn from them all.  Taste everything, only swallow what you want/need."
