How AI Performance Testing Works
AI to the Rescue - Part 7
This blog post is the seventh in a long series. We recently introduced the concept of Continuous Comprehensive Testing (CCT), but we have yet to discuss in depth what it means. This series of blog posts provides that deeper understanding of CCT.
In our introductory CCT blog post, we said the following:
Our goal with Testaify is to provide a Continuous Comprehensive Testing (CCT) platform. The Testaify platform will enable you to evaluate the following aspects:
While we cannot offer all these perspectives with the first release, we want you to know where we want to go as we reach for the CCT star.
In the previous blog posts, we talked about performance testing. Performance testing covers several quality attributes: responsiveness, scalability, stability, and reliability.
While our first product will focus on functional testing, the second will focus on performance. Still, the same underlying platform powers all our products, so it is important to discuss the Testaify discovery process, an essential component of that platform.
AI Application Discovery
Testaify’s discovery uses AI to navigate your application and build a model of it on every test run. Because the model is rebuilt each time, we can see how your product evolves as you add features and fix defects, and that data will allow us to identify potential response-time issues immediately.
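Testaify's AI-driven discovery is proprietary, but the basic idea of navigating an application and recording a model of it can be illustrated with a much simpler stand-in: a breadth-first crawl over the application's link graph. Everything in this sketch (the `get_links` callback, the toy site) is hypothetical and for illustration only.

```python
from collections import deque

def build_site_model(start, get_links):
    """Breadth-first traversal from a start page, recording each discovered
    page and its outgoing links. A crude stand-in for AI-driven discovery.

    get_links(page) -> iterable of pages linked from `page`.
    """
    model = {}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        if page in model:
            continue  # already visited
        links = list(get_links(page))
        model[page] = links
        queue.extend(links)
    return model

# Toy application expressed as an in-memory link graph:
site = {"/": ["/login", "/about"], "/login": ["/dashboard"],
        "/about": [], "/dashboard": ["/login"]}
print(build_site_model("/", site.__getitem__))
```

Rebuilding such a model on every run is what makes change detection possible: two models from consecutive runs can be diffed to see which pages appeared, disappeared, or changed.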
Our performance testing team used to discuss the idea of a single-user test with engineers at Ultimate Software. If you are developing a new page or modifying an existing one, you can easily run a manual test and measure the response time for a single user. Essentially, we were trying to help the engineers shift performance testing left. Only a few engineers ever engaged in that effort.
Testaify Performance for Web will do this automatically as part of every run, giving us an early warning of where a performance problem might occur. While discovering the application, it can identify specific pages with increased response times, and users can review those performance warnings before the full test run starts.
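A single-user response-time check of this kind takes only a few lines of scripting. The sketch below is a minimal illustration, not Testaify's implementation; the 1.0-second baseline and the page names are made-up values.

```python
import time
from urllib.request import urlopen

# Hypothetical baseline: flag pages slower than this for a single user.
BASELINE_SECONDS = 1.0

def time_page(url: str) -> float:
    """Fetch a page once and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    with urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

def flag_slow_pages(timings, baseline=BASELINE_SECONDS):
    """Return the pages whose single-user response time exceeds the baseline."""
    return [page for page, elapsed in timings.items() if elapsed > baseline]

# Example with pre-recorded timings (seconds) instead of live requests:
timings = {"/login": 0.42, "/dashboard": 1.35, "/reports": 0.88}
print(flag_slow_pages(timings))  # → ['/dashboard']
```

Running a check like this on every page, on every run, is the "early warning" described above: the slow page surfaces long before a full load test is needed.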
AI Test Design and Execution
After building the model of the application, Testaify Performance for Web will design a set of tests to determine the system's scalability, using the model to generate a comprehensive suite of performance test scripts.
Inspired by our experience at Ultimate Software, we will execute simultaneous user tests against critical pages. Our AI system will choose the targets for that type of testing. It will also design a concurrent user test scenario against the overall system.
Our AI workers can generate different load levels to determine the application's scalability threshold. The amazing serverless architecture built by our excellent engineering team will allow us to find that threshold quickly. We can horizontally scale as needed to complete the test.
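The idea of stepping through load levels to find a scalability threshold can be sketched as a simple loop: increase the concurrent-user count until the system breaches an error budget, and report the last level it sustained. The `run_load_test` callback, the error-rate budget, and the simulated system below are all hypothetical stand-ins for Testaify's actual load generation.

```python
def find_scalability_threshold(run_load_test, levels, max_error_rate=0.01):
    """Step through increasing concurrent-user levels and return the last
    level at which the system stayed within the error-rate budget.

    run_load_test(users) -> observed error rate (0.0-1.0) at that load.
    """
    threshold = 0
    for users in levels:
        error_rate = run_load_test(users)
        if error_rate > max_error_rate:
            break  # system degraded; the previous level is the threshold
        threshold = users
    return threshold

# Simulated system that starts failing above 400 concurrent users:
def fake_load_test(users):
    return 0.0 if users <= 400 else 0.05

print(find_scalability_threshold(fake_load_test, [100, 200, 400, 800]))  # → 400
```

In practice, each step of such a loop is expensive, which is why horizontally scaling the load generators, as described above, matters for finding the threshold quickly.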
One component we will not be able to control is the test environment, as it is provided by our customers when they register the specific application instance.
Testaify Performance for Web will give its users a clear picture of the response time of every page at the single-user level. It will tell customers the scalability threshold of the application and provide insights into the overall performance of the system. It will also show the long-term trend for every page in the system and provide an early warning if specific pages are getting slower or becoming error-prone during performance testing.
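One simple way to detect a page "getting slower" across runs is to fit a trend line to its response times and flag a positive slope. The sketch below uses a least-squares slope; the 0.05 s-per-run limit and the sample data are illustrative, not Testaify's actual thresholds.

```python
def response_time_trend(samples):
    """Least-squares slope of response time (seconds) across test runs.
    A positive slope means the page is getting slower over time."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def is_getting_slower(samples, slope_limit=0.05):
    """Flag a page whose response time grows more than slope_limit s per run."""
    return response_time_trend(samples) > slope_limit

# Response times for one page over five consecutive runs:
print(is_getting_slower([0.50, 0.55, 0.61, 0.70, 0.82]))  # → True
print(is_getting_slower([0.50, 0.49, 0.51, 0.50, 0.50]))  # → False
```

A check like this, run per page after every test run, is what turns raw timing history into the early warning described above.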
As we expand the capabilities of Testaify Performance for Web, we will integrate our data with customers’ APM systems. That will open up the opportunity to provide more information about the other quality attributes of the application. The combination of the test data and production data will allow us to provide forecasts regarding the system's scalability, stability, and reliability. It will also enable Testaify Performance for Web to refine the test suite for every test run.
And, yes, we will tell you if the baby is breathtaking like Elaine.
Stay tuned as we move on to discuss Usability in future blog posts.
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver Continuous Comprehensive Testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Join the waitlist to be among the first to know when you can bring Testaify Functional for Web into your testing process.
Are you interested in the other blogs in this series?
- The Heartbreaking Truth About Functional Testing (AI to the Rescue - Part 1)
- Have you said, “AI won’t help me as much as I thought?” (AI to the Rescue - Part 2)
- What is Performance Testing? (AI to the Rescue - Part 3)
- How to Conduct Performance Testing (AI to the Rescue - Part 4)
- Performance Testing: What about Scalability, Stability, and Reliability? (AI to the Rescue - Part 5)
- Back in the ‘SSR (AI to the Rescue - Part 6)