The Challenges of the Lone Software Tester in Agile

PART ONE

Introduction

Until relatively recently, the chances are that if you were a tester on a project, you’d be one of a number of such people. You’d have other members of the team to try ideas out with, to share the workload, and to cover for you when you’re away.

With the recent drive towards agile, we’re seeing the makeup of the team change dramatically. Projects can typically be supported by much smaller teams working permanently to evolve the product. This can often result in there being only a single tester on a team.

What are the challenges of being the sole tester on such a project? How can you work within these constraints? This has been the subject of a series of workshops with fellow testers within my company, and I’m excited to share the outcome with you …

The Iron Triangle

Before we get underway, it’s worth revisiting a project management principle which we found underpinned many of our conversations. It’s useful for thinking about the constraints we’re working within on a project, especially in agile.

[Figure: the iron triangle – scope, cost, and schedule, with quality constrained by all three]

The iron triangle gives us the idea that the “quality” of a project is determined by three attributes of your project – cost, scope, and schedule (or time).

You might have heard the adage “cost, scope, schedule … pick two”. However, ideally on a project, there should be only one cast iron attribute – what management consultant Johanna Rothman calls “your project driver” in her book Manage It.

Within any project you can really only have one attribute which is fixed – it could be “we need this done by X” (schedule) or “there is only Y budget for this” (cost). The skill of a manager is to work with this constraint and plan around what can be done with the other two attributes to achieve this goal.

Within traditional test management, there are clear parallels for applying this same theory to test planning. Within this dynamic, the attributes are:

  • Scope – how much testing you’d like to achieve
  • Cost (or, more typically, resources) – having more testers allows you to execute more activity
  • Schedule or timeframe – how long you have to do things

It should be obvious that if you have a large scope and a short timeframe, one solution would be to have more testers on it. Of course, in the real world there are constraints as to how much this can be pushed, and good test management revolves around knowing and pragmatically working within these constraints.
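As a back-of-the-envelope illustration only – the numbers and the simple linear model below are hypothetical, not something taken from our workshops – the trade-off can be sketched in a few lines of Python:

```python
# A deliberately naive model of the testing iron triangle:
# scope (in tester-days of work) is roughly testers * schedule.
# Fix any one attribute and the other two must flex around it.
# All figures are hypothetical, for illustration only.

def testers_needed(scope_tester_days: float, schedule_days: float) -> float:
    """Testers required to cover a given scope within a given schedule."""
    return scope_tester_days / schedule_days

def schedule_needed(scope_tester_days: float, testers: int) -> float:
    """Days a given team needs to cover a given scope."""
    return scope_tester_days / testers

print(testers_needed(60, schedule_days=10))  # 6.0 testers for a 10-day window
print(schedule_needed(60, testers=2))        # 30.0 days with only two testers
```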

Another solution, of course, is fewer testers, but it means it takes longer to get through everything you’d like. That’s great for the test budget, but the bugs are found later in the cycle, so people like developers need to be paid to be available for longer to fix them.

Finally, if you find yourself in a situation where your available people and schedule are fixed, the only thing to do is to prioritise your scope, as it’s the only attribute you have control over.

Understanding this dynamic is important, because these trade-offs – and the ways they could be handled and occasionally hacked – were a core part of the discussions we held.

Under pressure

A common initial experience of someone stepping into the role of a sole tester was that of feeling under pressure.

Especially in an agile project, the timeframe is set by the sprint duration, and your testing team size is fixed (although this can be “hacked”, as we’ll discuss later).

As recently as 2013, one of our projects had an annual release, which involved a two-month testing window and kept our test team of six busy.

Fast forward to 2018, and we’re now working in agile teams where we are creating deliverable code in a two-week sprint window using only two testers.

A key enabler in this was adopting a robust automated testing framework which was easy to maintain as the system under test changed. Such a suite did not grow overnight – it required a lot of work between testers and developers to build the right thing from a framework perspective, as well as to work through a prioritised list of useful automated tests to have in place. In working out ideal scenarios and prioritisation, testers found themselves well placed to lead these discussions. Over time, this suite was able to carry the functional regression load.
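For a flavour of what a single check in such a suite might look like – a minimal sketch only, in which the URL, element IDs, and credentials are hypothetical placeholders rather than our actual framework – here is a functional regression test written with pytest and Selenium:

```python
# Minimal sketch of one automated functional regression check,
# using pytest and Selenium. The URL, element IDs, and credentials
# below are hypothetical placeholders, not a real system under test.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver   # hand the browser to the test
    driver.quit()  # always clean up, pass or fail

def test_login_happy_path(browser):
    """High-priority regression: a valid user can log in."""
    browser.get("https://example.test/login")
    browser.find_element(By.ID, "username").send_keys("demo-user")
    browser.find_element(By.ID, "password").send_keys("demo-pass")
    browser.find_element(By.ID, "login-button").click()
    assert "Dashboard" in browser.title
```

The value lies less in any one test than in keeping the suite cheap to maintain as the system under test changes – hence the emphasis above on building the right framework before building lots of tests.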

Automated testing helped; however, it didn’t eliminate the testing role, though testers found that their role did change dramatically. Most manual testing effort now focused on testing new or changed functionality in depth during a sprint, along with taking increasing ownership of test scenario selection for the automated suite (as well as – shock horror – learning to code their own tests).

In teams still undergoing a level of “forming” – a term used to describe those with relatively new team members, some of whom were also new to working in an agile team – it was quite common for the sole tester to feel initially like they were the “point of blame”. If something gets out into production, the inevitable uncomfortable question can be asked: “why didn’t you test that?”

We shared a few of our experiences, looking for general themes. Part of the problem, we were acutely aware, was time: it’s not always possible to test everything you want to.

In many examples of a release where a defect had gone undetected, manual testing had in fact occurred. Typically though, something was missed, or no one had imagined that a particular scenario could cause an issue.

It’s worth taking a moment to think about how this was addressed in “classic” waterfall projects. A test lead would create a plan of what was to be covered, in consultation with many people on the project, but drawing especially on the requirements. From this, they would build up a series of scenarios to be covered and make estimates around resources and timescales.

However, on these classic projects, this was not the end of the story. It was the tester’s job to produce the best schedule they could, but it was known that this would not be perfect on the first draft. This was why such emphasis was put on the importance of reviewing – firstly by peer testers, to see if enough testing heuristic variation had been employed, but also by the wider team: project managers, customers, and developers.

The aim with reviews was to find gaps in the plan and address them, making the final scheme of testing as robust as possible. Input could come from developers saying, “we’re also making changes in this area”, or from customers stating there’s an expectation that “most people will …”.

Within agile, it can be easy to forget that this level of contribution is still required. It still needs to occur; however, it happens in a more informal, often verbal manner.
Among my colleagues, there is a general consensus that the tester becomes more responsible for facilitating a discussion around testing – much closer to what some organisations call “a quality coach”.

A core tool for having these conversations is the mind map, which the group has been using with success since 2013. A mind map allows the author to show, for a particular feature, all the different variations and factors they’re planning to cover, in a one-page diagram.
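For a flavour of the content (this example is entirely invented, not taken from one of our projects), a coverage mind map for a hypothetical “export to CSV” story might, flattened from its one-page diagram, branch out like this:

```python
# A hypothetical coverage mind map for an "export to CSV" story,
# flattened from a one-page diagram into nested branches.
# Every branch and leaf here is invented for illustration.
mind_map = {
    "export to CSV": {
        "data variations": ["empty result set", "10,000+ rows", "unicode in names"],
        "user permissions": ["admin", "read-only user", "logged-out user"],
        "browsers": ["Chrome", "Firefox", "mobile Safari"],
        "failure modes": ["network drop mid-download", "server timeout"],
    }
}
```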

When done well, they’re intuitive to read and can even be posted in common areas for people to look at. Their brevity helps get people to read them – “I haven’t had time to read that thirty-page document you’ve sent yet” is a frequent complaint in IT.

Even with a mind map in place, there is a natural tendency for the rest of the team to rubber stamp things. A sample conversation might go like this:

Tester: Did you have anything to add to the test mind map I sent out?

Team member: Uh … I guess it’s okay?

We all have a tendency to say something along the lines of “I guess so” for something we’ve not properly read. It’s important to still follow up with a brief conversation about what’s in your coverage – this can be individually with each team member, but often it’s better with the whole team. Just after stand-up can be a great time for this to occur.

If a member of the team notices a mistake in the approach, or items that are missing, they’re expected to provide that feedback. Likewise, if the developer makes more changes than initially anticipated, there’s an expectation that they’ll tell the tester what they might also want to consider.

Often what you’ll read in agile literature about a “whole team approach” is essentially this: the whole team takes responsibility to give feedback whether it’s about how a story is defined, how a feature is being developed, or how testing is being planned.

A good indicator of when a team has made this mind shift is the use in retrospective of “we” instead of “you” – “WE missed this, WE need to fix this”. Teams where this happens have a much more positive dynamic. It’s important that this applies not just to testing.

Other examples include when a developer builds exactly what was on the story card, but not what was actually wanted (“we failed to elaborate”), when a story turns out much bigger than first thought (“we failed to estimate”) etc.

That said though, agile does not mean the breakdown of individual responsibility. A core part of the tester’s role is to set clear expectations for the team of what they can do, how much effort it will take, and how they’re approaching it. But there needs to be team input to fine-tune this to deliver the best value.

Mainly, testing will revolve around changes to a product, for which the rest of your team are your first “go-tos” as fellow subject matter experts on the item. Occasionally though, as a tester you will find value in consulting another peer tester – and there is an expectation that testers who are part of the same organisation but in other teams can be approached for their advice and thoughts on a test approach. Within our company, there is an expectation that all testers make some time in their schedule to support each other in this way. This, in many ways, echoes the “chapter” part of the Spotify model, with testing being its own chapter of specialists spread across multiple teams/squads who provide test discipline expertise.

Reaching out to other testers like this is important; it creates a sense of community and the opportunity to knowledge share across your organisation.

Waterfall into agile won’t go…

There have been some “agile-hybrid” projects where there has been an expectation that a set number of people can perform a set volume of testing in a set time (sprint). This can sometimes be problematic, as the tester involved in execution hasn’t been involved in setting the expectation of what volume of tests is likely – and hence it can feel like working against an arbitrary measure not based in reality.

In such a situation, it’s like being handed an iron triangle with “here’s your schedule, here’s your resources … so you need to fit in this much scope”. When faced with so many tests to run, it obviously helps to have them prioritised so that you’re always running the most important test next (see the sketch below). When all three attributes are fixed, what suffers is the quality – it gets squished.
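One simple way to mechanise “always running the most important test next” – a sketch only, with invented test names, risk scores, and time budget – is to keep the backlog in a priority queue and pop from it until the clock runs out:

```python
# Sketch: run the highest-risk test next until time runs out.
# Test names, risk scores, and the slot budget are illustrative only.
import heapq

tests = [
    ("payment submission", 9),
    ("profile photo upload", 3),
    ("password reset", 7),
]

# heapq is a min-heap, so negate the risk score to pop highest-risk first.
queue = [(-risk, name) for name, risk in tests]
heapq.heapify(queue)

slots_remaining = 2  # hypothetical: only two test slots left in the sprint
while queue and slots_remaining > 0:
    neg_risk, name = heapq.heappop(queue)
    print(f"Running '{name}' (risk {-neg_risk})")
    slots_remaining -= 1
# "profile photo upload" never runs - that's the scope being squeezed.
```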

On projects where test scripting was not mandated by contract, there was always a preference for exploratory testing, because it allowed the manual tester to focus their time on test execution with very little wastage – more tests could be run, which helped reduce the risk.

Summing up for now …

There was so much material, we had to split it up. So far we’ve taken a first dip in, looking at how teams found themselves evolving towards a whole-team responsibility for quality.
Next time we’ll look at how testers found their voice, and some of the key skills and approaches my colleagues found increasingly pivotal in their day-to-day role.

Thank you to Janet Gregory for reviewing, editing, and donating her expertise for this article.



Source link https://crossbrowsertesting.com/blog/uncategorized/challenges-lone-software-tester/
