Lessons in Automated Testing

Automated testing is increasingly seen as the future of testing in DevOps and Continuous Delivery environments. It's often assumed that the solution is simply to train testers to move into purely technical roles where they focus almost entirely on test automation. In this article, we explore some of the pitfalls that can arise from this assumption.

I once quit a job over automated testing. Not because of something philosophical like the test strategy or the team’s perception of testing. No, I quit because I wasn’t allowed to contribute to the automated tests. The reason given: I was a performance tester, not an automation tester.

All around us, we're trying to put people into boxes. We talk about manual testers, security testers, and agile testers, and act like we have a hierarchy of skills. Almost every week I hear of someone, usually not a tester, discussing how to "up-skill" manual testers to become automation engineers. We expect all of our testing needs to fit neatly into predetermined scripts that can be executed time after time without any deep testing expertise.

The rise of DevOps, Continuous Delivery, and Lean development practices has been influential in moving testing from an afterthought to an integral part of successful software delivery. We know that batching up large changes increases risk, and there is growing demand for repeatable, reliable build, test, and release pipelines. Unfortunately, the desire to break down siloed test teams and replace manual test phases with automated test suites to keep up with fast-paced delivery cycles can lead to unintended consequences.

Lesson #1

One of my first experiences of streamlined release processes resulted in the test team being broken up to form cross-functional teams along with developers, designers, and product managers. It can be hard to break a team up, and there was pressure for every tester to act as an expert in all types of testing. Previously the test team had been diverse and our skills complementary. Maybe we should have been “up-skilling” to spread the skills to every tester, or maybe we should have recognized that testing is a broad role requiring different approaches and specialties.

The goal of creating cross-functional teams to develop and release every two weeks had another hitch. The first ten days of the iteration were a mix of developing and testing new features. The final two days of the iteration were dedicated to release testing and bug fixing. Sometimes things were delayed and we’d end up with untested features in the release candidate. Already tight, the release testing time would now need to include feature testing as well as release testing. The responsibility — and extra work — to get the release testing completed fell exclusively to the testers.

During the days leading up to and immediately after the release, the testers were obviously rather busy. Rather than sitting around waiting for testing to complete, the developers started working on the next round of features because there were deadlines to meet. It sounded sensible, but this subtle separation of the team left the testers falling further and further behind with their normal "feature testing".

Eventually, someone suggested that testing shouldn’t even be in the same sprint as the development, but should intentionally take place in the following iteration to try and reduce the amount of rework on the automated tests. Despite being a single “team,” splitting responsibility for work created a new type of silo.

If we had owned a single vision of achieving fully released features, encompassing both development and testing, we might have collaborated more successfully to meet the goal.

Lesson Learned – Own the vision but share the work.

Lesson #2

On my next project, the testers had been hired for cross-functional teams and our skills were complementary to the entire Scrum team. We followed a process that allowed new features to be exploratory tested before automated tests were added to the regression test suite. More importantly, the entire team was expected to contribute to testing. On paper, we had everything we needed to succeed.

Releases took place once a month and the entire team of developers and testers performed the release testing to check for regressions as well as make sure the new features were working as expected. One of the test engineers was responsible for maintaining and executing the automated regression test suite. Everyone else used ad-hoc scripts or exploratory testing to check high-risk areas.

One day, the release testing coincided with the automated regression test engineer’s holiday. The automated tests failed.

As we investigated the failures, it became obvious that there was considerable duplication in our release testing. We had failed to share the test scenarios with the entire team, so the automated regression tests were being duplicated by the ad-hoc scripts, and by the exploratory testing too. No one in the team felt they had the authority to disregard the automated test results and make the release, so we were forced to delay the release date to give us time to fix all the broken regression tests.

Lesson Learned – Share the test scenarios with the entire team. New automated tests should replace existing testing rather than duplicate effort.

Lesson #3

On another project, we'd managed to avoid most of the testing politics. Developers and testers were bought into the cause and actively contributing to the test suites. As we built new features, we exploratory tested them and automated the checks that we believed we'd want to run against future release candidates. We had tests running in different browsers and on multiple devices. The test suites were well respected and formed an essential part of the automated pipeline that allowed us to deploy to production as frequently as we needed.

For a while, at least.
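To make the multi-browser setup concrete, here is a minimal sketch (not the project's actual suite) of how a single check can be parametrized across browsers with Selenium and pytest. The browser list, the example URL, and the test itself are illustrative assumptions.

    import pytest
    from selenium import webdriver

    # Hypothetical browser matrix; a real suite would add devices and versions.
    BROWSERS = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
    }

    @pytest.fixture(params=sorted(BROWSERS))
    def browser(request):
        # Assumes the matching driver binaries are installed locally.
        driver = BROWSERS[request.param]()
        yield driver
        driver.quit()

    def test_homepage_loads(browser):
        # example.com stands in for the application under test.
        browser.get("https://example.com")
        assert "Example" in browser.title

Every test written this way runs once per browser, which is exactly how such suites grow quietly: one new check multiplies into several executions.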

As time passed and we continued to diligently add tests, we built up such a large suite that it took a long time to execute. Developers became increasingly reluctant to run the tests locally before committing to the build because of how long a full run took.

We’d already worked out which tests were most likely to fail and were executing these first. Now there was talk of running the tests overnight instead of with each commit. The team started to question whether we’d finally outgrown our release approach and looked at moving to scheduled releases to give us more time to execute the vast number of automated tests.
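For what it's worth, that kind of fail-fast ordering can be sketched in a few lines. The pytest hook below assumes a hypothetical failure_counts.json file recording recent failures per test; it is an illustration of the idea, not the mechanism we used at the time.

    # conftest.py
    import json
    import pathlib

    def pytest_collection_modifyitems(session, config, items):
        # Hypothetical record of how often each test has failed recently.
        path = pathlib.Path("failure_counts.json")
        counts = json.loads(path.read_text()) if path.exists() else {}
        # Sort in place so historically failing tests run first and the
        # build can fail fast.
        items.sort(key=lambda item: counts.get(item.nodeid, 0), reverse=True)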

Fortunately, we decided to analyze what we were actually testing before doing anything drastic. As we dug deeper, we discovered a number of problems: we were running automated tests on features that were fine to break as long as we fixed them soon after (in other words, non-critical features), and we had a significant number of tests for each feature when, in fact, a simple sanity test would often have sufficed.

We’d fallen into the biggest automation trap: believing that more automation would make us safer.

As we switched our test approach to only automate the things that really mattered, we were able to get our release cycles back to a reasonable duration and avoided having to change our release cadence.

Lesson Learned – Only automate the things you actually care about.
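As a rough sketch of how that lesson might look in practice, pytest markers can separate the handful of business-critical checks from everything else. The feature names and the marker below are assumptions for illustration.

    import pytest

    # Registering the marker in pytest.ini avoids "unknown marker" warnings:
    #   [pytest]
    #   markers = critical: checks that gate every release
    @pytest.mark.critical
    def test_checkout_completes():
        ...  # a release-gating check: this must never break

    def test_promo_banner_renders():
        ...  # non-critical: fine to break briefly and fix soon after

Running pytest -m critical on every commit then keeps the fast path fast, while the remaining tests can run on a schedule or be covered by exploratory testing instead.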

Lesson #4

Test automation can be a powerful tool in helping to create fast and flexible release processes. As the rise of DevOps and Continuous Delivery continues, we should embrace the fact that testing is an integral piece of the puzzle. Building truly cross-functional teams around the skills we really need rather than job titles, and creating and sharing a testing vision, avoids many of the test automation pitfalls. Teams that agree on the end goal are less likely to fall into the "automate everything" trap that has been the undoing of so many projects.

As you go through your testing journey, you’ll make mistakes and discover unexpected opportunities. The most important lesson of all is to use other people’s stories and lessons as a starting point for your own, rather than treating them as a direct set of instructions.

I remember listening to a talk about how Google had achieved incredible things with their test automation. It was inspiring, with so many technical and cultural issues solved by a deep and fully featured test suite. Finally, someone in the audience asked how long all the tests took to run. The answer: six minutes. We were amazed, until we discovered that Google used whole data centers to run its tests.

Most of us can't ever hope to replicate Google's test execution power, but that doesn't mean the talk was wasted. Taking inspiration from what can be done and applying it pragmatically to your own situation will help you create your own success.

Lesson Learned – Your team is unique.

About the Author: Amy Phillips is an Engineering Manager at MOO. She manages the Platform team and supports cross-functional product teams in their quest to build awesome products. To learn more about Amy, visit her website or follow her on Twitter.




