Testing autonomously is only possible if AI conducts Discovery.
Post by Rafael E. Santos · Apr 2, 2024 · 3 min read

Everybody is talking about autonomous software testing. Are we there yet?

In February, Antithesis came out of stealth mode with a $47 million seed round and a post-money valuation of $215 million, according to PitchBook, where the company describes itself as a “Developer of quality assurance software intended to facilitate autonomous software testing.”

This announcement was a significant shot in the arm for all of us trying to build the future of testing. It validates the opportunity in the software testing market, and it is another marker of the arrival of autonomous, AI-based testing. In short, it is excellent news for us. Plus, Antithesis is focusing on software reliability, while our focus starts with functional testing.

Now, the title of this blog post suggests there is a “but” to my comments. Here it is: can we actually achieve the goal of autonomously testing software products? Are we there yet? Several vendors claim to have autonomous platforms, but they are all missing a key component.

Before we go into the missing component, let’s review how autonomous testing is defined. In an article for Splunk, Shanika Wickramasinghe defines it as “an emerging technology that uses AI/ML to create and drive software testing without human intervention. From data creation to execution, autonomous tests can perform a full end-to-end test by working as an independent entity. In addition, autonomous tests can learn from historical data and evolve.” In a Medium post, Brian Anderson defines the same goal as “an emerging testing practice in which tests are created, driven, and managed by AI/ML, completely removing the need for human intervention.”

Sounds great, doesn’t it? So what is the problem? Well, in both articles, the issues are apparent. Shanika mentions several problems with the products she reviews. Her article contains sentences like the following:

  • “It can be complex to learn and use. Also, pricing can be higher than automated testing tools if you have a limited budget.”
  • “Cons of this tool include limited customization, a larger learning curve, and higher subscription costs.”
  • “One disadvantage of this tool is that pricing can be higher than automated testing tools if you have a limited budget.”

Do you notice a pattern? They are complex to use and expensive. What does that mean? It means that setup is complicated and that it takes time to get these tools working. Why? The answer lies in the missing component. According to Shanika, the features of autonomous platforms are:

  • AI-based test generation – Autonomous testing leverages ML and other AI techniques to automatically generate test cases. This feature helps increase test coverage and minimize human intervention.
  • Self-healing – The self-healing feature enables AI-based testing systems to adapt to changes in the software under test and recover from unexpected issues, helping reduce broken tests and improving the reliability of the testing process (a minimal sketch of the idea follows this list).
  • Predictive analysis – Autonomous AI/ML testing tools can perform predictive analysis by identifying patterns in historical test data, which helps them identify and address potential issues before they occur.
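
To make the self-healing idea concrete, here is a minimal sketch of one common way it is implemented: a locator-fallback strategy in a Selenium-based UI test. The page URL, element names, and selectors are hypothetical; this illustrates the general technique, not any particular vendor’s implementation.

```python
# Minimal sketch of a "self-healing" locator: try an ordered list of
# selectors for the same logical element and fall back when the UI changes.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Hypothetical fallback selectors for a single logical element (a "Submit" button).
SUBMIT_SELECTORS = [
    (By.ID, "submit-btn"),                               # preferred: stable id
    (By.CSS_SELECTOR, "button[type='submit']"),          # fallback: semantic attribute
    (By.XPATH, "//button[normalize-space()='Submit']"),  # last resort: visible text
]

def find_with_healing(driver, selectors):
    """Return the first element that matches; skip selectors broken by UI changes."""
    for by, value in selectors:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this selector no longer matches; try the next one
    raise NoSuchElementException(f"No selector matched: {selectors}")

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # hypothetical page under test
find_with_healing(driver, SUBMIT_SELECTORS).click()
driver.quit()
```

Commercial tools typically go further, using ML to pick replacement locators automatically, but the underlying maintenance question is the same: something has to know the application well enough to supply those candidates in the first place.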

All of those are great features, but they are of little value if it takes days or weeks to complete the setup. Some of these tools take months to set up fully because you must manually codify the application model, a model you then have to maintain every time the application changes.
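
For readers who have never built one, a hand-coded application model usually looks something like the sketch below: a catalog of pages, elements, and transitions. Every page, selector, and transition here is hypothetical and written in plain Python purely for illustration; the point is that each entry is authored and updated by hand.

```python
# Illustrative only: a hand-maintained application model for a web app.
# Each page lists its URL, the selectors for its elements, and where each
# action leads. None of this is generated; a person writes and updates it.
APP_MODEL = {
    "login": {
        "url": "/login",
        "elements": {
            "username": "input#username",
            "password": "input#password",
            "sign_in": "button[type='submit']",
        },
        "transitions": {"sign_in": "dashboard"},
    },
    "dashboard": {
        "url": "/home",
        "elements": {
            "new_order": "a.new-order",
            "reports": "nav a[href='/reports']",
        },
        "transitions": {"new_order": "order_form", "reports": "reports"},
    },
    # ...dozens or hundreds more pages, each revisited by hand on every release
}
```

Multiply that by every page and workflow in a real product, and the weeks-to-months setup estimates stop being surprising.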

As Brian Anderson states in his blog post, “Currently, autonomous testing is only in its infancy. Even the leading brands who have implemented AI features to their automated software testing processes still require a certain level of human intervention.” He concludes, “For now, there is no true autonomous testing platform on the market, but many software testing solutions are heading towards that future.”

How do you solve the setup problem? That is what escapes all the so-called autonomous platforms: they cannot discover the application on their own. Notice that application discovery does not appear in the feature list above. Some tools try to reconstruct it from log files, but that is too late and insufficient.

At Testaify, we recognize that you can only generate tests if you know what you are testing. We also know that your product changes regularly, so a static model defined once will only stay valid for a short time. That is why the critical component of the Testaify platform is AI Application Discovery. Setup for Testaify takes minutes: you only enter the URL and user credentials, and you can start testing immediately. The Testaify AI Discovery Engine rediscovers your application every time you begin a new test session, picking up whatever has changed, and it does not need access to your logs or code to do it.

Welcome to the Future of Testing! Sign up for the Testaify waitlist now!

About the Author

Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. He is committed to a vision of delivering continuous, comprehensive testing through Testaify's AI-first testing platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.

Take the Next Step

Join the waitlist to be among the first to know when you can bring Testaify into your testing process.