It’s been pretty well-documented that test automation, especially at the higher test levels such as system or acceptance testing, is something you should undertake carefully. Like any other development project, the economic argument has to be there to support the effort – and that economic model can be screwed up pretty easily by analysis-paralysis (planning tests without ever writing or running one) or by creating unmaintainable, brittle test scripts.
There are ways of lowering the cost of automating tests. There is a PowerShell hook, honest.
Increase the ‘Test Cases to Test Script’ Ratio
One of the reasons that FIT is spectacular is that it standardizes the test script and permits the tester to focus on providing examples in the form of combinations of test input and expected output. It does this through the use of test fixtures like the ColumnFixture. To use ColumnFixture, you design your test by creating a table containing all the test data (inputs and expected outputs) and then write a test fixture that translates a row in the table into a call to the system under test (SUT). Each row in the table represents a test case – and the running of each test case is all done implicitly by FIT and your test fixture.
It’s pure-and-simple data-driven testing. You achieve a high ‘test case to test script’ ratio by virtue of never having to write the script beyond writing the test fixture. The framework implicitly provides the script – and runs it.
You can replicate the use of ColumnFixture in PowerShell/PSExpect by storing the data in Excel (and learning what a named range in Excel is). See samples\test-getweatherperf.ps1 for an example. There are several advantages of using Excel for the test data:
- You can change the scope of a test run by changing the definition of the named range on the worksheet – eliminating the need to comment out parts of an input file or worse yet, storing multiple versions of it.
- You can use formulas to help specify the test data – so some of the test cases can use randomly-generated data instead of the same data every time.
- You can ask business users or other domain experts to create or revise the worksheet – Excel is ubiquitous in corporations. Little or no training required.
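As a minimal sketch of this data-driven style, here is what the test loop might look like using Excel COM automation. The workbook path, the named range ‘WeatherTests’, and the Get-WeatherForecast function are all hypothetical – substitute your own workbook and system under test:

```powershell
# Sketch: data-driven test loop over an Excel named range.
# The workbook path, range name, and Get-WeatherForecast are placeholders.
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $false
$workbook = $excel.Workbooks.Open('C:\tests\weather-tests.xlsx')
$range = $workbook.Names.Item('WeatherTests').RefersToRange

# Each row is a test case: column 1 = input city, column 2 = expected result.
for ($row = 1; $row -le $range.Rows.Count; $row++) {
    $city     = $range.Cells.Item($row, 1).Value2
    $expected = $range.Cells.Item($row, 2).Value2
    $actual   = Get-WeatherForecast $city        # call into the SUT
    if ($actual -ne $expected) {
        Write-Warning "Row ${row}: expected '$expected' but got '$actual'"
    }
}

$workbook.Close($false)
$excel.Quit()
```

Redefining the named range on the worksheet changes which rows this loop sees – which is exactly how you change the scope of a test run without touching the script.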
But I’ve written about PowerShell/PSExpect and Excel before. On to other interesting methods of minimizing test automation costs.
Eliminate Waste – Part 1 – Use Test Targets Wisely (Lean Test Management)
‘Waste’ in testing comes in two forms: writing test scripts that target the wrong thing, and writing scripts in a form that makes them difficult to evolve as the system under test evolves, or as your understanding of the system under test evolves.
You can avoid testing the wrong thing by first building a list of ‘test targets’. A test target is basically anything that needs testing. Application functionality, security, performance, and so on might be types of tests, but they are not test targets – at least, they aren’t specific enough to be used wisely.
Break out the types of tests to get specific test targets. List the application functions (use cases, user stories, whatever) – those are functional test targets. List the application interfaces – those are integration test targets. Security test targets include authentication, authorization, and then any specific vulnerability you want to test for (SQL command injection, OS command injection, etc.). Performance test targets will be objectives-based, like throughput, resource consumption limits, or user concurrency.
You need this list so that you can be wise about setting priorities and so that you can plan to write scripts for only those test targets that really matter. Host a workshop to decide these priorities so that the whole team is on the same page.
The list of test targets is – dare I say it – a testing backlog. If you’re familiar with Scrum and the use of a product backlog, then take those same concepts and apply them to testing and the testing backlog. This process works well – I’ve used testing backlogs and Scrum concepts now on a large enterprise project (an SAP upgrade), an infrastructure project (computing infrastructure migration to Windows Server 2003), and a business intelligence project. All with favourable results.
Using a testing backlog is only part of the picture though – the scripts themselves can become waste if they are not written wisely.
Eliminate Waste – Part 2 – Write Lean Scripts (Or None At All)
Test scripts are unmaintainable when they hard-wire the test logic, the test data, and the calls to the system under test. Basically, if any of those three factors changes, then you have to revise the script. As a test author, you want a test automation strategy that enables you to avoid the hard-wiring trifecta.
For manual tests, avoid writing scripts if you can – instead, rely on your testers to make the leap from the testing backlog to the test results. They can journal their way through the test targets and still provide excellent test results documentation. If you can’t avoid writing manual test scripts, then choose to write them using only language from the domain – avoid referring to specific user interface elements, avoid binding the script to a particular processing style, etc. For example, say ‘approve invoice’, not ‘click the Approve button’. Then if that handy little ‘Approve’ button disappears in a later revision of the system, the test script survives without changes.
Eliminate Waste – Part 3 – Dependency Injection for Testers (Keyword-Driven Testing)
Automate tests but minimize the test script’s dependency on the automation API of the system under test. This is analogous to writing manual test scripts that minimize the script’s dependency on the user interface elements such as buttons or menus.
Consider the test script whose automation depends directly on the system under test. Any changes whatsoever, and the test script may have to be revised. To do this in PowerShell, for example, your PowerShell test script would make calls directly against the assemblies that make up the system under test.
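As a sketch of what that tight coupling looks like, here is a script that calls the system under test directly. The assembly path and the InvoiceService API are hypothetical stand-ins for whatever your SUT exposes:

```powershell
# Tightly coupled: the test script loads and calls the SUT's assemblies
# directly, so any change to the automation API can break every script
# written this way. (Assembly path and type names are hypothetical.)
Add-Type -Path 'C:\app\bin\MyCompany.Billing.dll'

$service = New-Object MyCompany.Billing.InvoiceService
$invoice = $service.GetInvoice('INV-10042')
if ($invoice.Status -ne 'Pending') {
    Write-Warning 'Expected a pending invoice'
}
```

If InvoiceService is renamed, restructured, or re-versioned, this script has to be revised along with it – multiplied across every script that calls it.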
Now consider injecting a dependency that shields the test script from the system under test. It’s true that the test fixture may have to change, but your test scripts – ideally – would not be subject to the same degree of change as the test fixture might.
Unlike the FIT test fixtures that allow you to run several test cases, these injected test fixtures translate test steps into calls to the system under test. That’s key. A better name for them is actually ‘test-step fixtures’, given their role. Give these test-step fixtures names that originate with the business domain, and you have given life to ‘keyword-driven testing’ in the domain of the system under test – the ‘ubiquitous language’ of domain-driven design, applied to testing.
I’m advocating using PowerShell to write the test scripts and PowerShell cmdlets to implement the test-step fixtures (the bridge). Until their usefulness is proven in context, the cmdlets can be prototyped as PowerShell functions.
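As a sketch, a test-step fixture prototyped as a PowerShell function might look like this. The approve-invoice name, the assembly, and the InvoiceService API are all hypothetical – only the fixture body would change if the SUT’s automation API changed:

```powershell
# Hypothetical test-step fixture: 'approve invoice' in domain language.
# Only the fixture knows about the SUT's automation API; the test script
# never sees it. (Type and assembly names are placeholders.)
function approve-invoice {
    param(
        [Parameter(Mandatory = $true)]
        [string] $InvoiceNumber
    )
    $service = New-Object MyCompany.Billing.InvoiceService
    $service.Approve($InvoiceNumber)
}

# The test script stays in the language of the domain:
approve-invoice 'INV-10042'
```

Once the fixture has proven useful, promoting it from a function to a compiled cmdlet changes nothing in the test scripts that call it.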
Two roles emerge: the business test scripter that knows the business domain, and the developer scripter that knows the automation API of the system under test. The test scripter uses the test-step fixtures to assemble scenario tests. The developer scripter builds the test-step fixtures, either proactively or in response to the needs of the test scripter.
So does this lower the cost of test automation? We’ve decreased the coupling between the tests and the system under test by injecting a dependency that we control – so we’ve made the test less susceptible to technical changes, if not entirely to requirements changes. But even then – the scenario tests are built from modular components (test-step fixtures) – and modularity is a good thing – because it increases the likelihood of high maintainability and extensibility. The test-step fixtures may change, but ideally the test scripts themselves would only be adapted as the complexity of the system under test evolves.
On to the PowerShell hook. Write the test-step fixtures in the form of cmdlets or PowerShell functions, following the same verb-noun naming convention that we might use for building any cmdlet or function. EXCEPT, for testing purposes, we would prefer to take the ‘nouns’ from the domain (maybe even the verbs, but then we could no longer build cmdlets unless the verb was also on the standard list of PowerShell verbs). For example:
$MyAccount = get-account 'name'
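A scenario test assembled from several such test-steps might read like this. Every fixture name here is hypothetical – the point is the shape of the script, not the particular domain:

```powershell
# Illustrative scenario built entirely from domain-language test-steps.
# All fixture names (get-account, new-invoice, approve-invoice,
# assert-balance) are hypothetical; none of them reveals the SUT's API.
$MyAccount = get-account 'Northwind Traders'
new-invoice -Account $MyAccount -Amount 1500
approve-invoice -Account $MyAccount
assert-balance -Account $MyAccount -Expected 1500
```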
Once you’ve established a number of these domain-specific verb-noun combinations, you can build test scenarios in PowerShell whose complexity matches the complexity of the system under test – they can evolve independently but at the same pace. Test-steps in this style expose nothing of the automation API of the system under test – a good thing if we want to retain the test scripts’ usefulness over time.
In summary, here are the ways that PowerShell excels for keyword-driven testing:
- The verb-noun style is PowerShell vernacular – we’re not introducing anything new by asking the test scripters to use cmdlets or functions that follow a verb-noun naming convention.
- PowerShell provides access to both System and custom assemblies and namespaces. You might use either or both to implement the test-step fixtures.
- The system under test may ship with an administration module that is itself built on PowerShell and cmdlets – the overlap between administration-type cmdlets and test-step fixtures means that building them for one purpose helps satisfy the other requirement as well.
- PowerShell provides exception handling so that test scripters can control the impact of any failed test-steps.
- PowerShell provides all the control structures so that test scripters can control the flow and execution of test scripts – including looping. Many other test automation styles only provide rudimentary control, if any.
- The test script development cycle is short – you write a script, you run it, edit, run it again, etc. very quickly. You can even prototype script steps in the console.
- Test-step fixtures can vary in their complexity – you can extend their domain-specific nature by giving them domain-oriented parameters. For example, new-customer 'GOOD CREDIT' might generate a new customer with a good credit rating to use in the test script.
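A sketch of that last point – a fixture whose domain-oriented parameter drives the test data it generates. The new-customer name and the credit-profile values are hypothetical; a real fixture would also register the customer with the system under test:

```powershell
# Hypothetical test-step fixture with a domain-oriented parameter:
# the credit-rating keyword drives how the test data is generated.
function new-customer {
    param(
        [ValidateSet('GOOD CREDIT', 'BAD CREDIT')]
        [string] $CreditProfile = 'GOOD CREDIT'
    )
    $customer = @{
        Name        = 'Customer-{0}' -f (Get-Random -Maximum 10000)
        CreditScore = if ($CreditProfile -eq 'GOOD CREDIT') { 780 } else { 520 }
    }
    # A real fixture would create this customer in the SUT here.
    return $customer
}

$c = new-customer 'GOOD CREDIT'
```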
For a slightly more detailed example, there is one posted for PSExpect that uses this style of test scripting – samples\Test-Vcs.ps1. The sample domain is a volunteer coordination system (VCS).