AI, machine learning, and related technologies are more popular than ever, but when it comes to software testing, the hype outpaces the reality. While there are a few testing solutions that take advantage of AI, and perhaps machine learning or deep learning, the level of chatter might lead one to believe that such tools and services are more pervasive than they actually are, yet.

“There’s a lot of confusion in the market. Everything and everybody is [claiming to have] AI or machine learning, but when you go to a trade show and ask, ‘What do you really do?’ they say, ‘Record and playback,’ or ‘static code analysis.’ Neither of those is really AI,” said Theresa Lanowitz, founder and head analyst of market research firm Voke. “We’re a ways away from people implementing AI technology in the enterprise to help them with their development, test and operations, but there are tools going in that direction.”

For example, there’s AppliTools Eyes for UI testing; the AutonomIQ and Functionize autonomous testing solutions; Mabl, a machine-learning test automation service for web apps and websites; and Parasoft SOAtest for API testing.

It’s a good idea to experiment with the tools to understand the scope of their capabilities.   It’s also wise to learn a bit about AI, machine learning, and deep learning to understand what they do.

“I think it would be a mistake to use it without understanding its limitations,” said Bob Binder, senior software engineer at the Carnegie Mellon Software Engineering Institute.

How AI Can Help Automated Testing
Software teams are under constant pressure to deliver better-quality products in ever-shorter timeframes. To do that, testing has shifted both left and right, and test automation has become critical. Meanwhile, however, traditional test automation has itself become a bottleneck.

“Over the past several years, we’ve told testers you have to become more technical, you have to learn how to code,” said Voke’s Lanowitz. “They’ve now become Selenium coders, and that’s really not the best use of their time.  If you’re an enterprise, you want to take those expensive resources and have them developing products, not just test cases.”

Rather than having engineers write scripts, there are now solutions that can automatically generate them.

“When we talk about test automation today, for the most part, we are really talking about the automated ‘execution’ of tests. We’re not talking about the automated ‘creation’ of tests,” said Joachim Herschmann, research director at Gartner.

One approach is to provide an autonomous testing solution with a test case written in natural language; the solution then autonomously creates the test scripts, test cases, and test data. The autonomous nature of the system frees up testers to do other things, such as explore new technologies, advocate for the customer or line of business, and be more influential and strategic, said Voke’s Lanowitz.
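A toy sketch can convey the idea, at least in spirit. Real autonomous tools use NLP and machine learning rather than fixed patterns, and the step phrasing, rules, and generated Selenium-style calls below are all invented for illustration:

```python
# Hypothetical sketch: map natural-language test steps to executable
# actions via keyword rules. Real autonomous testing tools use NLP/ML,
# not the hand-written patterns shown here.
import re

RULES = [
    (r'open "(?P<url>[^"]+)"', "driver.get('{url}')"),
    (r'type "(?P<text>[^"]+)" into (?P<field>\w+)',
     "driver.find_element_by_id('{field}').send_keys('{text}')"),
    (r'click (?P<button>\w+)',
     "driver.find_element_by_id('{button}').click()"),
]

def to_script(steps):
    """Translate each natural-language step into a script line."""
    script = []
    for step in steps:
        for pattern, template in RULES:
            m = re.match(pattern, step)
            if m:
                script.append(template.format(**m.groupdict()))
                break
    return script

steps = ['open "https://example.com/login"',
         'type "alice" into username',
         'click submit']
print(to_script(steps))
```

The point of the autonomous products is precisely that testers no longer write the rules or the script by hand; the system derives both from the natural-language description and the application under test.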

Web and mobile app developers have options that help ensure the user experience is as expected.

“The AppliTools Eyes product uses the same style of algorithms that Google would use for facial recognition or people are using for other types of optical recognition,” said Thomas Murphy, research director at Gartner.  “Using Eyes, I can tell you whether the application is working right and whether it looks the way you expect it to.”
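Murphy’s description can be reduced to a minimal sketch: compare a rendered screenshot against a baseline and flag deviations beyond a tolerance. This naive pixel diff is only a stand-in; AppliTools’ actual algorithms are perceptual and ML-based, and far more robust:

```python
# Illustrative sketch of visual regression testing via pixel comparison.
# Real visual AI tools use perceptual, ML-based comparison rather than
# the naive diff shown here; this only conveys the basic idea.

def visual_diff(baseline, current, tolerance=0.01):
    """Return True if `current` deviates from `baseline` beyond tolerance.

    Each image is represented as a flat list of pixel intensities (0-255).
    """
    if len(baseline) != len(current):
        return True  # dimensions differ: treat as a layout change
    changed = sum(1 for b, c in zip(baseline, current) if b != c)
    return (changed / len(baseline)) > tolerance

# A small tolerance absorbs anti-aliasing noise while catching real changes.
baseline = [255] * 1000
print(visual_diff(baseline, baseline))    # False: identical screenshots
print(visual_diff(baseline, [0] * 1000))  # True: wholesale visual change
```

The ML-based approach matters because a strict pixel diff would fail on every font-rendering or anti-aliasing difference, which is exactly the noise the commercial tools learn to ignore.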

Software development firm Gunner Technology has used AI-aided automated testing a couple of times.

“So much of it is repetitive because you’re doing the same thing over and over again,” said Dary Merkens, CTO of Gunner Technology, a custom software development shop.  “AI can look at your application, parse out the elements on a page and apply pre-existing heuristics to generate thousands of tests itself, which would be a tremendous human effort.”
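The heuristic generation Merkens describes can be sketched in a few lines. The element types, field names, and boundary inputs below are invented examples of the kinds of pre-existing heuristics such a tool might apply:

```python
# Hypothetical sketch of heuristic test generation: given parsed page
# elements, emit one test case per element/heuristic pair. A real tool
# would parse the live DOM and apply far richer heuristics.

BOUNDARY_INPUTS = {
    "text": ["", "a" * 10000, "<script>alert(1)</script>"],
    "number": ["-1", "0", "2147483647", "not-a-number"],
    "email": ["", "no-at-sign", "user@example.com"],
}

def generate_tests(page_elements):
    """Yield (element_id, payload) pairs covering common edge cases."""
    for element in page_elements:
        for payload in BOUNDARY_INPUTS.get(element["type"], [""]):
            yield (element["id"], payload)

page = [
    {"id": "qty", "type": "number"},
    {"id": "email", "type": "email"},
]
tests = list(generate_tests(page))
print(len(tests))  # 4 + 3 = 7 generated cases from just two fields
```

Even this trivial version shows why the approach scales: the case count is the product of elements and heuristics, which is exactly the combinatorial work that would otherwise fall to humans.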

Gunner Technology is currently developing a mobile app, slated to launch at the end of the summer, for which it is also using AI-based automated testing.

“There are these little edge cases where you don’t know what is going to happen on a pixel-by-pixel basis,” said Merkens.

The benefit of machine learning is pattern identification. In the case of automated testing, that means, given the right training, it is able to distinguish between a failed test and a passed test, although there are other interesting possibilities.

“It could be used to understand which tests should be run based on a change that was made or risk of going into production with a particular release,” said Gartner’s Murphy.  “This is where the more that people use cloud-based tools that allow them to run analytics across anonymized data, you can start looking for patterns and trends to help people to understand what to focus on, what to do.  We’re in the early phase of this.”
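The test-selection idea Murphy mentions can be sketched with a simple mapping. The file paths, test names, and the mapping itself are invented; real tools derive the relationship from coverage data or learned patterns rather than a hand-maintained table:

```python
# Sketch of change-based test selection, assuming a (hypothetical)
# mapping from source files to the tests that exercise them.

COVERAGE_MAP = {
    "checkout.py": {"test_checkout", "test_cart_total"},
    "auth.py": {"test_login", "test_logout"},
    "ui/banner.py": {"test_banner_renders"},
}

def select_tests(changed_files):
    """Return only the tests that touch the changed files."""
    selected = set()
    for path in changed_files:
        selected |= COVERAGE_MAP.get(path, set())
    return selected

print(sorted(select_tests(["auth.py"])))
# Runs two tests instead of the full suite of five.
```

The hard part, and where the machine learning comes in, is building and maintaining that mapping automatically as the codebase and the test suite keep changing.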

Getting Started
Some vendors are promoting the merits of AI and machine learning, whether their product actually uses the technologies or not.  As with other types of products, the noise of similar sounding offerings can make it difficult to discern between what’s real and what isn’t, particularly for the uninitiated.

“Really do due diligence and a POC to make sure it’s going to reduce or eliminate that human interaction you have to have,” said Voke’s Lanowitz.  “Realize this is new technology and new innovation, because we haven’t had a lot of innovation in the testing space. Tools have gotten less expensive, they produce fewer false positives, but we haven’t had a lot of innovation. This is innovation.”

While it’s fun to be technologically curious, it’s also wise to consider how the organization could benefit from such a product or service and whether the organization is actually ready for it. Machine learning requires data, which may not be readily available. Even if the data is available, it may not have been curated because no one knows how to make sense of it.

“Increasingly, people talk about shift right, and the idea is essentially I have all this data about how end users are using my application, where errors are occurring and the load on the system. I can use AI to make it much more meaningful,” said Gartner’s Herschmann. “The whole notion of testing and QA broadened in scope from the ideation phase to the requirements phase all the way to when things are live in production. I can use the data in a machine learning context to identify patterns, and then based on the patterns, I can make certain changes. Then rinse and repeat all the time.”

It’s a mistake to underestimate the dynamic nature of machine learning, because it’s a continuous process as opposed to an event.  Common goals are to teach the system something new and improve the accuracy of outcomes, both of which are based on data. For example, to understand what a test failure looks like, the system has to understand what a test pass looks like.  Every time a test is run, new data is generated. Every time new code is generated, new data is generated. The reason some vendors are able to provide users with fast results is because the system is not just using the user’s data, it’s comparing what the user provided with massive amounts of relevant, aggregated data.
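A toy illustration of that continuous loop: every run both gets judged against the accumulated history and becomes part of it. The feature (run duration) and the nearest-average rule are invented simplifications; real systems learn from far richer signals:

```python
# Toy sketch: the system's notion of "pass" vs. "fail" is learned from
# accumulated run data, so every new run is also new training data.

class FailurePredictor:
    def __init__(self):
        self.history = []  # (duration_ms, passed) pairs from past runs

    def record(self, duration_ms, passed):
        """Every completed run feeds back into the model."""
        self.history.append((duration_ms, passed))

    def likely_failure(self, duration_ms):
        """Flag a run whose duration is closer to past failures than passes."""
        passes = [d for d, ok in self.history if ok]
        fails = [d for d, ok in self.history if not ok]
        if not passes or not fails:
            return False  # not enough data yet to judge either way
        pass_avg = sum(passes) / len(passes)
        fail_avg = sum(fails) / len(fails)
        return abs(duration_ms - fail_avg) < abs(duration_ms - pass_avg)

p = FailurePredictor()
for duration, ok in [(100, True), (110, True), (900, False), (950, False)]:
    p.record(duration, ok)
print(p.likely_failure(870))  # True: this run resembles past failures
```

Note the cold-start problem in the sketch: with no labeled passes and failures the model can say nothing, which is why vendors that pool anonymized data across customers can deliver useful results so much faster.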

“Three or four years ago, Google said that their code base then was like 100 million lines and it’s well past that now.  Every day, that code base is growing linearly and so is their test code base, so that means that test execution is growing exponentially and at some point it’s no longer affordable,” said Gartner’s Murphy.  “They built tools to determine which tests need to be fixed or thrown out, which tests are of no value anymore, what tests should be run based on what changes have been checked into [a] build. These things are what organizations have to look at and now you’re seeing other companies other than Google do this.”

What to Expect Along the Way
While neither autonomous testing nor AI technologies are new, the combination of the two is in the early stages. More and different types of products will hit the market in the coming months and years. Meanwhile, there will be a lot of trial and error by both end users and vendors.

“If you look at the Gartner Hype Cycle, all of the technologies that are in some shape or form related to machine learning are all just climbing the slope.  Basically that means they are still ahead of getting into the trough of disillusionment,” said Gartner’s Herschmann. “I think we will see people fail at using these kinds of technologies [because] there’s a lot of over-promise.  We tell people you’ve got to have the right expectations about what you can do with this because yes, we’ve seen some very cool things like Google and Facebook and some of the other big guys, but keep in mind this is very, very narrowly focused. We’re decades away from anything that’s general purpose AI.”

Voke recommends taking a long-term view of the technology and considering how it will impact the organization’s mix of skills and workflows.

“Understand where skills can go and how you can use the skills to benefit the overall software life cycle,” said Lanowitz. “You can’t absolve all your responsibility and say we have this new tool so we don’t have to have a test engineer sitting there monitoring it. The role changes. Also, you can’t just plug these things in and let them go. You can’t assume they’re perfect out of the box. That’s where the idea of training comes in.”

While it’s common to focus on the technology itself, what’s often underestimated is the impact on people and processes.

“You can’t have one group trying to control the software life cycle through a turf war. This is going to change the way software is developed, tested, deployed, so I think the organization has to be ready to embrace this innovation and not stay stuck,” said Lanowitz. “We know from our research that there’s not a lot of automated testing [or] software release management going on. People say they’re releasing more frequently, but what they’re doing is still very manual so there’s a lot of room for error. People have to be ready for this mode of more autonomous, more artificially intelligent, guided solutions within their organization to be ready to embrace it.”

Gartner’s Murphy said organizations should plan for a four-month window to understand how the new tools differ from traditional tools, how to apply the new tools well and get the staff trained.

“Expect to get some positive benefits over a year’s worth of time, but expect it to take some time to get it going and have things move forward,” said Murphy.

Don’t get too comfortable, though, because many of the companies behind AI and machine-learning-aided automated testing tools and services are startups. They may look hot or be hot now, but some of them may fail while others may be acquired by industry titans.

“Most of these new AI-ish guys are $5 million and under.  They’re not enterprise-scale types of organizations,” said Murphy.  “Open source requires some assembly, sometimes major assembly, so it’s a market that’s going to be changing quite a bit over the next few years.”

Meanwhile, Lanowitz thinks some automated testers may evolve into automated testing system trainers.

“If you’re currently an automated test engineer, this is an opportunity to increase your skill set because what you’re going to be doing is not just using a tool, you’re going to be training that software that fits within the scope of what you want to do in your own organization,” said Lanowitz.

Potential Pitfalls to Avoid
The silver-bullet mentality is alive and well in the AI space, and that will trickle down to the automated testing space. Despite the fact that there are no silver bullets, massive amounts of industry hype continue to set unrealistic expectations. Organizations sometimes jump on the latest bandwagon without necessarily understanding what they’re adopting, why, and what their current state really is. They want answers that will help them navigate the new territory faster and more intelligently, but they don’t always know which questions to ask.

“If you’re only looking at what you’re spending versus what others are spending, how is that going to tell you if you’re getting better or not?” said Gartner’s Murphy. “I try to get clients to understand what would make them better, such as what their weaknesses are, but they often don’t have a very good handle on that. These are big transitions culturally and technically so you should move forward in an agile, incremental fashion: pick a team, a project, school them up and see how it works.”

Don’t plan a wholesale shift that involves the entire organization, in other words.  Start small, experiment, make mistakes, learn from the mistakes, build upon successes, and keep learning.

One Gartner client spent a year experimenting with what was available and doing a pilot. The results were not as expected, but instead of considering the endeavor a failure, the organization realized the tool had a lot of potential that would probably take another year or two to realize.

“I think [the ability to pivot] is more important here than in many other technologies because you can’t just drop this thing in tomorrow and then be good for the next five years,” said Gartner’s Herschmann. “This is an investment in the sense that you need to adapt to how it’s changing.”

Gartner’s analysts think the AI landscape may look very different five years from now. There’s a lot of innovation, and a lot of venture capital and private equity is flowing into the space.

“Don’t expect that this is going to be only the way forward for the next 10 years,” said Gartner’s Herschmann.

5 Intelligent Test Automation Tools
AI and machine-assisted automated testing tools are relatively new.  The only way to understand exactly what they do and how their capabilities can benefit your organization is to try them. Following are five of the early contenders:

AppliTools Eyes is an automated visual AI testing platform targeted at test automation engineers, DevOps and front-end developers who want to ensure their mobile, web and native apps look right, feel right and deliver the intended user experience.

AutonomIQ is an autonomous platform that automates the entire testing life cycle, from test case creation to impact analysis. It accelerates the generation of test cases, data and scripts. It also self-corrects test assets automatically to avoid false positives and script issues.

Functionize is an autonomous cloud testing platform that accelerates test creation and executes thousands of tests in minutes.  It also enables autonomous test maintenance.

Mabl is machine learning-driven test automation for web apps that simplifies the creation of automated tests.  It also identifies regressions and automatically maintains tests.

Parasoft SOAtest API testing is not a new product.  However, the latest release introduces AI to convert manual UI tests into automated, scriptless API tests.


