These are really good questions. Here are my thoughts on them.
Automation testing can be done at every level, starting right from the requirements phase and all the way till the user acceptance and deployment phase. This is especially true in the current era of DevOps.
DevOps has helped software development and operations teams collaborate better, ensuring there is constant automation and monitoring throughout the software development lifecycle (SDLC), including infrastructure management.
You may ask, how is this going to influence automation testing? The answer: Everything we do as part of testing is going to change. The changes I foresee include:
- A need to start automation right at the beginning of the SDLC and ensure nearly all test cases are automated
- All the QA tasks would need to be aligned to ensure a smooth CI/CD cycle
- A high level of collaboration would be needed to ensure there is continuous monitoring in the production environment
- All the QA environments would need to be standardized
- The testing mindset changes from “completed testing on this module” to “what business risks have been mitigated in the release candidate?”
The key to all the above changes is automation. DevOps and automation go hand in hand—without one, the other won’t work. This is where smart people and tools can help in bringing shorter and more dependable release cycles.
As you can see, automation testing is not only about writing code. There are various facets to test automation, and writing code is only one of them.
I have worked in companies where:
- At the unit testing level, we have automated unit tests that kick off every time code is checked into the branch, ensuring the newly implemented features do not break existing ones. Beyond verifying that the new code works, this helps surface issues such as memory leaks, stale code, code vulnerabilities, and buffer overflow problems.
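As a minimal sketch of what such a check-in-triggered unit test looks like, here is a stdlib `unittest` example. The function under test (`apply_discount`) and its behaviour are hypothetical, purely for illustration:

```python
import unittest

# Hypothetical function under test -- the name and rules are
# illustrative, not from any specific codebase.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # In a CI setup, tests like these run automatically on every
    # check-in, catching regressions in existing behaviour.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

A CI server would simply invoke the test module (e.g. `python -m unittest`) as part of the check-in hook and fail the build if any test fails.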
- At the integration/system testing level, we have UI tests that run through different scenarios of our application. These usually consist of smoke tests, which ensure the major functionality of the application still works; regression tests, which ensure existing features are not affected by the new feature; and feature-level tests, which ensure the newly implemented feature works properly.
The smoke tests are usually run after every code check-in. The regression tests run once daily (usually at night), and the feature-level tests run continuously until the story is deemed complete, after which we fold them into the smoke or regression suites. The exact split depends on the project and team.
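One way to wire up that split is to group tests into named suites so the pipeline can pick the right set per trigger. This is a sketch under assumed conventions; the test names and the `fake_*` stand-ins are hypothetical:

```python
import unittest

# Minimal stand-ins for the application under test (hypothetical).
def fake_login(user, password):
    return bool(user and password)

def fake_render_report():
    return "Report: Total = 42"

class SmokeTests(unittest.TestCase):
    """Fast checks of major functionality; run on every check-in."""
    def test_user_can_log_in(self):
        self.assertTrue(fake_login("alice", "s3cret"))

class RegressionTests(unittest.TestCase):
    """Broader checks of existing features; run nightly."""
    def test_existing_report_still_renders(self):
        self.assertIn("Total", fake_render_report())

def build_suite(level):
    """Return the suite the pipeline should run for a given trigger."""
    loader = unittest.TestLoader()
    if level == "smoke":
        return loader.loadTestsFromTestCase(SmokeTests)
    # Nightly run: smoke pack plus the wider regression pack.
    return unittest.TestSuite([
        loader.loadTestsFromTestCase(SmokeTests),
        loader.loadTestsFromTestCase(RegressionTests),
    ])

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite("smoke"))
```

The check-in hook would call `build_suite("smoke")` while the nightly job calls `build_suite("nightly")`; many teams achieve the same selection with pytest markers instead.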
Apart from these, we may also have API-level tests that run in parallel to ensure the application works as expected underneath the GUI as well.
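An API-level test of this kind can be sketched with only the standard library. To keep the example self-contained, a stub HTTP server stands in for the real deployed API, and the `/health` endpoint is an assumed example, not a real service:

```python
import json
import threading
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubAPIHandler(BaseHTTPRequestHandler):
    """Stub endpoint standing in for the application's real API."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep request logging out of the test output

class ApiHealthTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Port 0 lets the OS pick a free port for the stub server.
        cls.server = HTTPServer(("127.0.0.1", 0), StubAPIHandler)
        cls.port = cls.server.server_address[1]
        threading.Thread(target=cls.server.serve_forever, daemon=True).start()

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()

    def test_health_endpoint_reports_ok(self):
        # Exercises the service contract directly, bypassing the GUI.
        with urlopen(f"http://127.0.0.1:{self.port}/health") as resp:
            self.assertEqual(resp.status, 200)
            self.assertEqual(json.loads(resp.read())["status"], "ok")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

In practice the tests would point at a deployed environment and a richer client (e.g. `requests` or a generated API client) rather than a local stub, but the structure is the same: assert on status codes and response payloads, independently of the UI.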
- In user acceptance testing, the smoke tests are usually run again, with some high-level manual testing done in parallel.
All of this becomes more relevant as teams try to release faster by implementing seamless CI/CD pipelines, which in turn depend on automation.
Source link https://sqa.stackexchange.com/questions/30691/how-will-automation-testing-be-different-at-all-software-testing-level