Testers shouldn't fear AI testing tools; they support testers' goals!
Jun 24, 2024 7:45:00 AM · 3 min read

Embracing AI Testing Tools: A New Frontier for Testers

Over the last few years, I have discussed AI testing tools with many people. Interestingly, the data does not support the assumption underlying most of those conversations. What is that assumption? It is the same one people make about every new technology: that it will take your job or make your job obsolete.

Allow me to share a personal encounter that sheds light on this. Last year, a fellow Testaify partner and I spoke with an entrepreneur who had set out to build a company around an AI visual testing tool. While I have my reservations about visual testing in general, I want to focus on a different part of our discussion. At one point, he asked: who would be Testaify's target audience? He shared his struggle selling his own tool, as the testers he encountered resisted it at every turn.

As the entrepreneur's experience highlights, the primary obstacle to selling testing tools is testers' resistance. Testers play a pivotal role in adopting new technologies, so that resistance can be a significant roadblock.

It's a common human reaction to be cautious when faced with significant change.

Like many others, testers may not trust a manager suggesting new tool X or Y, fearing those tools might replace them. That shared skepticism is a natural response to new technology.

The default assumption about new technologies is that they will harm you. Developers whose bosses ask them to embrace GitHub Copilot and similar AI code assistants are probably going through the same emotional reaction.

Interestingly, the data so far does not support this assumption. What am I talking about? I have written about it before in this blog post: in a September 2023 issue of The Economist, Carl Benedikt Frey and Michael Osborne wrote about AI and its impact on jobs.

We have learned that the most recent set of AI tools is helping junior employees improve their performance.

For example, Microsoft's GitHub Copilot helps junior software engineers far more than senior ones. It also lets them learn by reviewing and analyzing Copilot's recommendations. As the article states, "It's not the seasoned experts whose productivity is increasing the most, but rather those with the least experience in programming."

There is no reason to assume that the AI testing tools now on or coming to the market will have a different impact. I have worked with many testers in my career, and most of them have something in common: they feel they never have enough time to test. Given more time, they could create more tests and find more issues. In other words, they are never satisfied with the number of tests they get to design and run. With AI testing tools, testers can significantly increase their productivity, running more tests in less time and improving the overall quality of the software.

If you are a tester who always feels short of time to test, you should embrace new AI testing tools like Testaify.

For example, we are conducting alpha testing with Testaify. One of the software products under test is a CRM application, which is large and complicated, like most CRM applications. Interestingly, we have learned that our AI testing tool shares the goal of most testers we have worked with: it wants to create and run more tests.

In a recent test run, even with added guardrails, Testaify still created over 3,600 test cases. The difference is that the AI tool designed and executed all of them in a couple of hours. Now imagine you are a tester with two years of experience, testing a real estate management application. Currently, you maintain a suite of only 200 test cases, because that is what you can run in the time allowed.

Now, you add Testaify to your toolbox. You run Testaify, and it creates and executes 800 test cases in one hour. Reviewing Testaify's test cases, you notice that some cases from your own suite are not covered; say 25 still need to be run. Instead of executing all 200 test cases, you spend one day running those 25 to ensure nothing breaks. Once done, you still have four days left on the old schedule you had budgeted for the full 200-case suite. What do you do?

Well, if you are like the testers I have worked with, you go back and design new test cases: the ones you have always dreamed of adding to your suite but never had the chance to. Suddenly, you have added 50 new test cases and still have time for additional exploratory testing.
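The arithmetic behind this scenario can be sketched in a few lines. The numbers below come straight from the hypothetical above; the resulting multiplier is illustrative, not a measured benchmark:

```python
# Illustrative productivity arithmetic from the hypothetical scenario above.
baseline_suite = 200        # test cases the tester could run manually per cycle
ai_generated = 800          # test cases Testaify designs and executes
manual_gap_tests = 25       # existing manual cases the AI run did not cover
new_tests_added = 50        # new cases designed with the freed-up time

total_with_ai = ai_generated + manual_gap_tests + new_tests_added
multiplier = total_with_ai / baseline_suite
print(f"Tests per cycle: {baseline_suite} -> {total_with_ai} ({multiplier:.2f}x)")
```

With these assumed numbers, the tester goes from 200 to 875 executed test cases per cycle, which is where the "at least four times more tests" figure comes from.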

As a tester using Testaify, you can unleash your potential to design and run at least four times more tests in less time. Embracing AI testing tools is not just about efficiency; it's about conquering that nagging feeling with every release that you didn't run enough tests. 

The Future of Software Testing is Coming Soon!

Testers, be ready and don’t be left behind. 

About the Author

Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver continuous, comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.

Take the Next Step

Join the waitlist to be among the first to know when you can bring Testaify into your testing process.