Nov 28, 2023 · 5 min read

How to Conduct Performance Testing

AI to the Rescue! - Part 4

This blog post is the fourth in a long series. We recently introduced the concept of Continuous Comprehensive Testing (CCT), but we still need to discuss in depth what that means. This series of blog posts will provide a deeper understanding of CCT.

In our introductory CCT blog post, we said the following:

Our goal with Testaify is to provide a Continuous Comprehensive Testing (CCT) platform. The Testaify platform will enable you to evaluate the following aspects:

  • Functional
  • Usability
  • Performance
  • Accessibility
  • Security

While we cannot offer all these perspectives with the first release, we want you to know where we want to go as we reach for the CCT star.

In this blog post, we continue our discussion about Performance testing.

Our previous blog post on performance testing discussed how to define a test. That test must answer the following question: can the system provide an acceptable response time with no errors and efficient use of resources?

One of my most significant issues with how the industry treats performance testing is that it pushes specialists in the field to become tool users who provide data without any analysis beyond what the tool itself offers. It is incredible how much money you can waste on performance testing when the person using the tool is a glorified test runner.

At Ultimate Software, our PSR team had a reputation for being the smartest guys in the company. We moved away from tool experts toward performance test engineers. Our team knew how to use the tools, but more importantly, they could dig deep, develop their own analysis, and provide valuable feedback to the software engineers.

As such, we had a very opinionated team. Because Ultimate Software was one of the early SaaS companies, we had to run our own infrastructure (this was before AWS existed). We were obsessed with reducing TCO (Total Cost of Ownership).

We designed our tests in that direction and concluded that certain industry practices made no sense in that pursuit. In particular, we did not believe in using tests with active users.

It's time to provide some definitions.

Simultaneous users all execute the same transaction at the same time. For example, if you have 100 users, they all log in at once. More specifically, this type of test uses a rendezvous: each user stays in sync, waiting for all the other users, so that everyone works simultaneously. This type of test is excellent for identifying issues in specific transactions. We used it often, especially on login and landing pages. You want your first impression to be great.
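To make the rendezvous concrete, here is a minimal sketch of a simultaneous-user login test in Python (3.11+ for asyncio.Barrier, plus the third-party aiohttp library). The endpoint and payload are hypothetical placeholders, not a real API.

```python
# A simultaneous-user test: a barrier acts as the rendezvous, so all
# 100 virtual users release and log in at the exact same moment.
import asyncio
import time

import aiohttp

USERS = 100
LOGIN_URL = "https://example.com/api/login"  # hypothetical endpoint

async def login_user(user_id: int, barrier: asyncio.Barrier,
                     session: aiohttp.ClientSession) -> float:
    await barrier.wait()  # rendezvous: wait until every user is ready
    start = time.perf_counter()
    async with session.post(LOGIN_URL, json={"user": f"u{user_id}"}) as resp:
        await resp.read()
        assert resp.status == 200  # a single error fails the test
    return time.perf_counter() - start

async def main() -> None:
    barrier = asyncio.Barrier(USERS)
    async with aiohttp.ClientSession() as session:
        times = await asyncio.gather(
            *(login_user(i, barrier, session) for i in range(USERS)))
    print(f"slowest login: {max(times):.2f}s")

asyncio.run(main())
```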

Concurrent users all execute a transaction at the same time, but not necessarily the same one. For example, ten users are logging in, 15 are checking their paystubs (our suite covered HR, Benefits, Payroll, etc.), and 20 are entering PTO. The key is that all users are doing something simultaneously, even if it is not the same thing. We loved concurrent users: they let us test the whole system and quickly evaluate its architecture.
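Here is the same sketch adapted to a concurrent-user mix. Again, the base URL, paths, and HTTP methods are hypothetical placeholders; the point is that every virtual user fires a transaction at once, just not the same transaction.

```python
# A concurrent-user test: every virtual user acts at the same time,
# but the transactions differ. The 10/15/20 mix mirrors the example
# in the text; endpoints are hypothetical.
import asyncio

import aiohttp

BASE = "https://example.com/api"  # hypothetical base URL
MIX = [("POST", "/login", 10), ("GET", "/paystub", 15), ("POST", "/pto", 20)]

async def run_transaction(session: aiohttp.ClientSession,
                          method: str, path: str) -> None:
    async with session.request(method, BASE + path) as resp:
        await resp.read()
        assert resp.status == 200  # any error fails the test

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        tasks = [run_transaction(session, method, path)
                 for method, path, count in MIX
                 for _ in range(count)]
        await asyncio.gather(*tasks)  # all 45 users run concurrently

asyncio.run(main())
```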

Finally, the industry popularized certain types of scenario testing using active users. Active users are all in the system at the same time, but they are not all doing something at the same time. These tests use random wait times, usually between 1 and 30 seconds. The argument is that in real life, not all users act simultaneously; some are thinking about what to eat for lunch or drinking coffee. Perhaps, but as a performance test, using active users is useless.
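For contrast, here is roughly the only thing an active-user script adds to the previous sketches: a random think time before each transaction. At any instant, most virtual users are sleeping instead of exercising the system, which is exactly why this style hides architectural problems.

```python
# The "active user" pattern: sleep a random 1-30 seconds, then act.
# Most users are idle at any given moment, so the system is never
# under the load the user count suggests.
import asyncio
import random

async def active_user(transaction) -> None:
    await asyncio.sleep(random.uniform(1, 30))  # user "drinks coffee"
    await transaction()                         # only now touches the system

async def main() -> None:
    async def noop_transaction() -> None:  # stand-in for a real request
        pass
    await asyncio.gather(*(active_user(noop_transaction) for _ in range(100)))

asyncio.run(main())
```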

DIY Due Diligence

As a technology executive, I had to perform due diligence on products we considered buying and adding to our portfolio. We always asked for performance testing results (ideally, we wanted to run our own tests). Whenever a vendor provided a performance testing paper with a scenario test using active users (wait times), I knew their product architecture was terrible. Nothing reveals the quality of a system's architecture like performance testing. But if you want to hide those issues, drink the industry Kool-Aid and use active users.

For us, only performance tests using simultaneous or concurrent users (no wait times) are worth doing. The rest is just smoke and mirrors. As I said, we were an opinionated group. The objective of testing is to reveal problems, not to hide them. I might be showing my age here, but if you are a fan of Seinfeld, you remember the episode The Hamptons (the original title was The Ugly Baby). Yes, that is the one. Performance testing is about radical honesty; if the baby is ugly, you must tell them.

Testing a Replica? 

Another industry practice with little value is testing in a replica of your production environment. First, how do you know what the production environment requires? Of course, I forgot: the architect who designed the system told you. Performance testing is about testing the architecture, so by definition, we are trying to break the architect's work. Instead, we tested with the smallest possible resource footprint. The company's objective is to make money, and we helped it do that by reducing the TCO, not by trying to make the architect happy. Remember, it is the architect's baby; if it is ugly, you have to tell him. Besides, the results from the PSR team's tests should tell us what the production environment needs to look like, not vice versa.

We started with load tests using the smallest resource setup possible. If the system had web servers, app servers, and database servers, we tested with one web server, one app server, and one database server. We wanted the least amount of infrastructure between our test and the code. These days, with serverless architectures, you get the actual cost of each test run, which makes things even easier: just reduce that amount as much as possible to improve your TCO.
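To illustrate the serverless angle, here is a minimal sketch of what "the cost of the test" might look like as a single number you can drive down. The per-GB-second and per-request rates are hypothetical placeholders, not real cloud pricing.

```python
# TCO tuning in a serverless world: each test run reports its own
# resource consumption, so improving TCO means making this number
# smaller. Rates below are illustrative, not actual pricing.
GB_SECOND_RATE = 0.0000166667   # hypothetical $ per GB-second
REQUEST_RATE = 0.0000002        # hypothetical $ per request

def cost_of_test(requests: int, avg_duration_s: float,
                 memory_gb: float) -> float:
    compute = requests * avg_duration_s * memory_gb * GB_SECOND_RATE
    return compute + requests * REQUEST_RATE

# 100 concurrent users, each issuing 50 requests of ~0.3s at 512 MB.
print(f"${cost_of_test(100 * 50, 0.3, 0.5):.4f} per test run")
```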

For single transactions like logging in, we used simultaneous users. For the whole application, we used concurrent users. We also set a bar: an application had to pass with 100 users before we would call it well designed. You would not believe the fights we had with engineering when we picked that number.

In other words, for a system to meet the minimum requirement, it has to support 100 simultaneous users per transaction with a 90th percentile response time below 10 seconds, no errors, and no excessive use of resources (85% utilization or less).

Why the 90th percentile? Hopefully, you know the answer: with an average, only about half of your users are meeting the criteria, and you want most of your users to meet the requirements. Another important lesson: if someone gives you results using averages, feel free to slap them. Also, hand them a copy of The Flaw of Averages. Reporting only average results is another sign of a performance paper trying to hide something.
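To make the point concrete, here is a minimal sketch of the pass/fail check described above, using Python's statistics module. The response times are made up for illustration: the average looks healthy while the 90th percentile blows past the 10-second bar.

```python
# Pass/fail check: 90th percentile < 10s, zero errors, <= 85% resources.
import statistics

def passes(response_times: list[float], errors: int,
           peak_utilization: float) -> bool:
    p90 = statistics.quantiles(response_times, n=10)[8]  # 90th percentile
    return p90 < 10.0 and errors == 0 and peak_utilization <= 0.85

# 100 simultaneous users: 80 fast responses, 20 painfully slow ones.
times = [2.0] * 80 + [14.0] * 20
print(statistics.mean(times))                           # 4.4s -- looks fine
print(passes(times, errors=0, peak_utilization=0.70))   # False: p90 is 14.0s
```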

AI is Coming to the Rescue!

At Testaify, we are equally opinionated. If your application cannot meet specific requirements, we will tell you. Sorry, but you have an ugly baby; we might even show a photo of Kramer (kidding, we do not want to pay royalties).

An advantage of Testaify is that we will discover your system in a secure way, before your customers find any issues. We will know the different transactions and the navigation of your system. We will learn your domain. Using that information, Testaify can design your performance tests, execute them, and tell you whether your baby is ugly.

What about the other questions? “You need to tell us the whole PSR process.” Relax; that information will come in the next blog post. Stay tuned!

Special note for those who enjoy our content: Please feel free to link to any of our blog posts if you want to refer to any in your materials.

Are you interested in the other blogs in this series? 

About the Author

Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver Continuous Comprehensive Testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.

Take the Next Step

Join the waitlist to be among the first to know when you can bring Testaify Functional for Web into your testing process.