Context: For 2.5 years, I wrote Java-based desktop software at software company A (the supplier). The software also has a server and a web component. It was custom-built for company B (the client). After a short period away from the project, I left company A and joined company B.
The software is being developed in a scope-to-budget kind of way, where business analysis is done by the supplier. There is no clear list of requirements, and the relationship is very open and trusting.
The problem: when the client receives a new version, we cannot answer basic questions such as:

- What has been tested, and with which specific test steps? (Currently it is very difficult to write a good bug report: "I clicked around and suddenly: error message!")
- Which areas were not tested (e.g. due to blocking bugs), and what did we plan to test?
- Did we understand the functional scope correctly?
As the software was developed scope-to-budget, we do not even have a good overview of all the features that have been built. Currently, there is a constant back-and-forth between client and supplier: the client does free-form exploratory testing and discovers ten-odd bugs, some of them blocking. The supplier then delivers a new version, after which the client does some more exploratory testing and discovers further bugs (some of which could also have been found in the old version).
The solution: I want to address this by introducing Gherkin-style tests, which may or may not be automated. The goal is two-fold:
- Force us (the client) to describe how we think the functionality should work. This ensures we are genuinely reporting bugs, not just misunderstanding the functional scope.
- Make it clear how well our test cases cover the full software.
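To illustrate the style (the feature, user role, and step wording below are invented examples, not taken from the actual software), a single manually executable scenario might look like:

```gherkin
Feature: Order export
  As a back-office user
  I want to export completed orders to CSV
  So that I can process them in our ERP system

  Scenario: Export a single completed order
    Given a completed order exists for customer "ACME"
    When I select the order and choose "Export to CSV"
    Then a CSV file containing exactly one order line is downloaded
    And the order is marked as "Exported"
```

Even without automation, each Scenario gives a concrete, reproducible test script, and the collection of Feature files doubles as the feature inventory we currently lack.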
My question: Are there any good tools for managing Gherkin-syntax (Given-When-Then) test cases for manual, client-side validation? Something that can manage Features and all of their different Scenarios, and ideally also record the results of a manual test run?