Testability is a software quality attribute that measures how easy (or difficult) it is to create test cases for a test target and, in combination, how easy (or difficult) it is to run those tests. High testability is a good thing, since it means the total cost of making changes to the software is correspondingly lower. It also typically means you can have more than one individual contributing to the software, since each contributor can determine whether their change has been a net positive or a net negative simply by running the tests again. Hence, we like test automation.
It turns out that when we value testability, we tend to write software that enables us to automate more types of tests, and we set up this fantastic feedback loop where defining a test becomes a means of production – the tests specify the desired behaviour, guide the development, and then verify the as-built software for as long as that behaviour remains desired. Pretty nice, when we get to that level.
Tools like xUnit were revolutionary in that they gave developers both a reason to value testability and a mechanism for expressing low-level behaviour in the same language that the target software was (or would be) written in. Developers liked it, and teams in general are now much more inclined to automate unit tests than ever before.
But is it enough? That’s a question Brian Marick has been pondering publicly in his blog (one of those blogs that actually expands your mind when you read it) and in an on-line forum discussing automated functional testing. I was most struck by the phrase "Perhaps it would be better to invest heavily in unprecedented amounts of built-in support for manual exploratory testing." He offers this as an alternative to automated functional testing with tools like FIT/FitNesse (read the blog entry to find out why this isn’t agile heresy).
You can see where I’m going. If you have a set of PowerShell cmdlets that exposes an application’s functionality – and if the cmdlets expose ALL of the functionality – then ‘exploring’ means working with those cmdlets (or an API) in a PowerShell session and applying whatever testing heuristics you have in your tester’s toolkit. You may not be using the same UI as the end-users; however, there is great value in first confirming that the functionality is good enough, and only then moving on to confirm that the UI is good enough.
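To make that concrete, here is a sketch of what such a session might look like. The snap-in name, the cmdlets (New-Order, Get-Order), and the object properties are all invented for illustration; only the shape of the session is the point.

```powershell
# Hypothetical session: exploring an order-processing app through its cmdlets.
Add-PSSnapin OrderApp.Admin                  # load the application's cmdlets

New-Order -Customer 'Contoso' -Quantity 0    # boundary-value heuristic: is a
                                             # zero-quantity order accepted?
Get-Order | Where-Object { $_.Total -lt 0 }  # heuristic: hunt for impossible states
```

Every probe is just another pipeline expression, so a hunch can be tested the moment you have it.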
The PowerShell features that make exploratory testing an excellent experience include (but are not limited to):
- Objects on the pipeline, and in fact, having objects to work with at all.
- The get-help cmdlet that can access custom help files for custom cmdlets.
- The get-member (gm) cmdlet.
- The get-history (h) cmdlet.
- The naming convention for cmdlets.
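The features above combine nicely in practice. A sketch, again using a hypothetical ‘OrderApp’ snap-in and its imagined cmdlets:

```powershell
# Discovering unfamiliar cmdlets at run-time ('OrderApp' is hypothetical).
Get-Command -PSSnapin OrderApp               # verb-noun names reveal intent at a glance
Get-Help New-Order -Detailed                 # custom help, if the developers shipped it
Get-Order | Get-Member -MemberType Property  # inspect the objects a cmdlet emits
```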
You don’t even have to know the cmdlets ahead of time, because those features let you retrieve their calling syntax at run-time. The get-history cmdlet can then be used to save useful cmdlets and sequences of calls, supporting future semi-automated exploratory testing (where some scripts are run, but only for the sake of returning to a known state) or, ultimately, fully automated testing for something like detecting regression. It seems the ability of a script to pass on knowledge isn’t limited to administrators …
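Capturing a session as a script might look something like this. Get-History, its alias h, and the CommandLine property are real; the history id numbers and the script name are placeholders.

```powershell
# Harvest the useful parts of an exploratory session for later reuse.
h                                     # review what was run, with id numbers
Get-History -Id 12,15,18 |            # cherry-pick the calls worth keeping
    ForEach-Object { $_.CommandLine } |
    Out-File restore-known-state.ps1  # replay later to return to a known state
```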
I can envision, however, a mismatch between the cmdlets that an application designer foresees might be a useful way of exposing the functionality and the cmdlets that a tester might find useful. They will each bring, I believe, a different perspective.
So it might be wise to undertake this ‘unprecedented support for exploratory testing’ or hyper-testability using a test-driven approach where the functionality of those cmdlets is influenced, or possibly specified, by the examples that a tester might deem appropriate. Supporting manual exploratory testing by writing automated test scripts ahead of time … seems contradictory. But there is a distinction between ‘understanding-clarifying tests’ (Brian Marick again) and ‘verification and validation’ tests. So using a test-driven approach for understanding and clarifying requirements, and ultimately the design of the cmdlets, isn’t the same as writing tests for verifying and validating the resulting application.
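An ‘understanding-clarifying’ test for a cmdlet that doesn’t exist yet might be no more than a few lines a tester writes to pin down expected behaviour. Everything here – the cmdlet name, its parameters, the property names – is an illustrative assumption, not a prescription:

```powershell
# Written before New-Order exists, to clarify what it should do.
$order = New-Order -Customer 'Contoso' -Quantity 3

if ($order.Status -ne 'Open') { Write-Warning "expected a new order to start as Open" }
if ($order.Quantity -ne 3)    { Write-Warning "expected Quantity to round-trip" }
```

The value is in the conversation the example forces: the tester’s expectations shape the cmdlet’s design before any code is committed.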
The bottom line? Exposing application functionality in PowerShell makes your application hyper-testable. Use test scripts that you or your testers write ahead of time as a means of designing the cmdlets, and separate the concern of testing the application’s functionality from the concern of testing the user interface.