PSSR (Performance, Scalability, Stability, and Reliability) helps us know how to ask tough performance testing questions.
Posted Dec 4, 2023 · 3 min read

Performance Testing: What about Scalability, Stability, & Reliability?

AI to the Rescue - Part 5

This blog post is the fifth in a long series. We recently introduced the concept of Continuous Comprehensive Testing (CCT), but we have yet to discuss in depth what that means. This series of blog posts will provide a deeper understanding of CCT.

In our introductory CCT blog post, we said the following:

Our goal with Testaify is to provide a Continuous Comprehensive Testing (CCT) platform. The Testaify platform will enable you to evaluate the following aspects:

  • Functional
  • Usability
  • Performance
  • Accessibility
  • Security

While we cannot offer all these perspectives with the first release, we want you to know where we want to go as we reach for the CCT star.

Our previous blog post on performance testing discussed defining a test. That test must answer the following questions: 

  • Performance – Can the system provide an acceptable response time with no errors and efficient use of resources?
  • Scalability – At what load does the system stop having an acceptable response time with no errors?
  • Stability – Can the system provide an acceptable response time with no errors for a significant period without intervention?
  • Reliability – How reliable is the system after months of use without intervention?
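The four questions above can be turned into concrete pass/fail checks against the metrics a test run produces. Here is a minimal sketch of that idea; the metric names, the result shape, and the 2-second limit are illustrative assumptions, not Testaify's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    """Aggregated metrics from one test run (hypothetical shape)."""
    p95_response_ms: float   # 95th-percentile response time
    error_count: int         # failed requests during the run
    concurrent_users: int    # load level applied during the run
    duration_hours: float    # how long the load was sustained

# Assumed acceptance limit for illustration only.
MAX_ACCEPTABLE_MS = 2000

def passes_performance(r: RunResult) -> bool:
    """Performance: acceptable response time with zero errors."""
    return r.p95_response_ms <= MAX_ACCEPTABLE_MS and r.error_count == 0

def scalability_limit(runs: list[RunResult]) -> int:
    """Scalability: the highest load level that still passes."""
    passing = [r.concurrent_users for r in runs if passes_performance(r)]
    return max(passing, default=0)

def stability_hours(runs: list[RunResult]) -> float:
    """Stability: the longest sustained run that still passes."""
    passing = [r.duration_hours for r in runs if passes_performance(r)]
    return max(passing, default=0.0)
```

Reliability over months of use is harder to compress into a single check like these; it typically comes from production monitoring rather than a single test run.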

In this blog post, we will continue analyzing Performance testing.

Performance: Can the system provide an acceptable response time with no errors and efficient use of resources?

Before we move from the “P” to the two “S”s and the “R,” let me add one more thought on the first question about the system providing an acceptable response time with zero errors. As discussed in my previous blog post on performance testing, we evaluate products using the smallest footprint possible, and we take an approach that focuses on breaking the system.

We do this because performance testing is about critiquing the product (you can learn about Marick’s testing quadrants in this post). At Ultimate Software, we used a threshold to evaluate products—that famous 100 simultaneous users for single transactions and 100 concurrent users for the whole system. And we did not use “think times,” the artificial pauses between requests that simulate a user reading the page.
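To make that concrete, here is a minimal load harness that drives N concurrent workers issuing transactions back-to-back, with no sleep between requests (zero think time). This is an illustrative sketch, not Testaify's or Ultimate Software's actual tooling; `make_request` stands in for whatever single transaction is under test, such as a wrapper around an HTTP client call that raises on failure:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(make_request, users=100, requests_per_user=10):
    """Drive `users` concurrent workers with zero think time.

    make_request: a zero-argument callable performing one transaction,
    raising an exception on failure. (Hypothetical harness for
    illustration only.)
    Returns (list of per-request durations in seconds, error count).
    """
    def worker():
        timings, errors = [], 0
        for _ in range(requests_per_user):
            start = time.perf_counter()
            try:
                make_request()  # no sleep afterwards: zero think time
                timings.append(time.perf_counter() - start)
            except Exception:
                errors += 1
        return timings, errors

    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(worker) for _ in range(users)]

    all_timings, total_errors = [], 0
    for f in futures:
        t, e = f.result()
        all_timings.extend(t)
        total_errors += e
    return all_timings, total_errors
```

Removing think times is what makes this a critique rather than a simulation: every worker hammers the system as fast as it can respond, which surfaces the breaking point much sooner than realistic pacing would.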

To be completely transparent, most of the products we tested did not meet this threshold—fewer than 20% did. The ones that did had the highest margins because their TCO (Total Cost of Ownership) was low. We made a lot of money on those products.

However, many did not meet the threshold. In many cases, the company acquired them anyway (either the whole company or just the codebase), even when we, the engineering team, advised the leadership team against the purchase. In one instance, Ultimate Software acquired a company for one of its talent management products. That product could not get past 25 users. After a difficult period in production, the company decided to rewrite the product.

It’s essential to understand the business perspective. In my experience, I know you can have a successful business with a product that only reaches 35 users in our test. Your TCO will be very high, but you can make a lot of money if you have little competition in a niche market. Technical debt is usually a slowly growing curve. You can throw resources at it until you hit the critical threshold of scalability. Eventually, that technical debt will get you and completely slow down your business unless you aggressively address the issues. The longer you wait, the more it is going to hurt.

Because the company decided to buy many products against our recommendation, we had no choice but to figure out the other quality attributes of those products, too. Even our homegrown product had a difficult journey trying to meet the threshold. And there is a big difference between a product that reaches 25 users and one that reaches 75.

As such, we had to answer other questions, but we’ll pause here. You know you love a good cliffhanger. 

About the Author

Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver Continuous Comprehensive Testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.

Take the Next Step

Join the waitlist to be among the first to know when you can bring Testaify into your testing process. 


Are you interested in the other blogs in this series?