Steve, a fictional technology executive, needs to teach his peers about product quality: not all defects are equal, testing is sampling, and there is no perfect software.
Posted Nov 6, 2023 · 5 min read

Ready to Teach Your Exec Team about Product Quality? Follow Steve!

As a technology executive, one of your primary responsibilities is keeping the release pipeline moving and delivering features. In a SaaS product, those releases are essential to maintaining high retention and bringing in new customers. But what happens when a release goes bad?

When you have a bad release, the number of customer calls to support increases by over 400%. A prospect stops the sales process because the referral just told them about your terrible release. Online messages about the poor quality of your product start showing up. Now, you are praying your product does not become a viral meme about how not to build software.

Suddenly, everyone on the executive team wants to discuss quality and testing. All eyes turn to Steve, a tech exec. Steve’s peers ask him: “Why did we not know about these issues before the release? How come our testing did not find these issues before the release? Why can’t we test everything?”

His boss might devise some “brilliant” idea: define a single metric that captures the quality of the product release, one number to rule them all. Steve knows what I am talking about.

If you are a technology executive like Steve, one who understands testing, the first thing you need to do is educate your peers (and probably your boss); most executives know very little about software testing. One of the first lessons is that not all defects are created equal.

Most people do not realize that a single defect is all it takes to put you in this horrible situation. On Wednesday, April 9, 2014, one coding error caused a six-hour, multi-state outage of the 911 system. According to the FCC report:

At 11:54 p.m. PDT on April 9, 2014, the PSAP Trunk Member’s (PTM) counter at Intrado’s Englewood, Colorado, ECMC exceeded its threshold and could send no more 911 calls to PSAPs using CAMA trunks. Under normal operations, the PTM assigns a unique identifier for each call that terminates using CAMA trunks. This is how Intrado has implemented the ATIS protocol commonly used to complete 911 calls over CAMA trunks, which (unlike SS7) require additional features to carry the signaling along the TDM path.

In this case, the trunk assignment counter reached a pre-set capacity limit to assign trunks, which meant that no additional database entries to reserve a PSAP CAMA trunk could be created, no trunk assignments for call delivery could be made for PSAPs with CAMA trunks, and, therefore, no 911 calls could be completed to these PSAPs or any backup PSAP through the Englewood ECMC.

A single number in a single line of code stopped the 911 system from working for six hours in several states. Clearly, no one had created a test case for this scenario. In one of my blog posts (The Heartbreaking Truth About Functional Testing), I discussed the impossibility of exhaustively testing every possible path. You need to learn the testing methodologies that will allow you to find well-known categories of defects. Software testing is a risk management exercise.

In other words, testing is sampling. That is an important lesson. To be a good tester, you need to become good at choosing the best sample of test cases. This specific defect in the 911 system looks like a boundary defect: the limit was probably chosen by the developers and never documented outside the codebase. So, if the developer who wrote this code did not write at least one unit test for it, it most likely had no tests at all. A performance test focused on stressing the system’s limits might have revealed the issue, too, but I would bet money no one was conducting that kind of testing.
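To make the lesson concrete, here is a minimal sketch of the kind of boundary test that appears to have been missing. The `TrunkCounter` class, its limit, and its behavior are hypothetical stand-ins for illustration, not Intrado’s actual implementation:

```python
class TrunkCounter:
    """Hypothetical sketch: assigns a unique identifier to each call,
    up to a pre-set capacity limit (the kind of limit that failed in 2014)."""

    def __init__(self, limit: int):
        self.limit = limit
        self.count = 0

    def assign(self) -> int:
        # Once the counter hits its limit, no more calls can be routed.
        if self.count >= self.limit:
            raise RuntimeError("trunk counter exhausted: calls can no longer be routed")
        self.count += 1
        return self.count


def test_counter_at_boundary():
    """The one test that documents the boundary behavior."""
    counter = TrunkCounter(limit=3)
    for _ in range(3):
        counter.assign()  # fills the counter to its limit
    try:
        counter.assign()  # one past the boundary must fail loudly
        assert False, "expected failure at the boundary"
    except RuntimeError:
        pass  # the boundary is now documented and protected by a test


test_counter_at_boundary()
print("boundary test passed")
```

A single test like this turns an undocumented, hard-coded limit into an explicit, protected system boundary.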

So, what do you do after an incident like this one if you are a technology executive like Steve? You need to educate your executive team about the nature of software testing and steer them away from silver-bullet thinking.

The third lesson to teach them is there is no perfect software. The impossibility of testing every possible path guarantees there is no perfect software. Yet, you can still develop a better testing strategy.

That strategy will require teaching your team about the importance of different kinds of testing. Having the QA team manually test your extensive product portfolio will not cut it.

You can start by reviewing the agile testing quadrants as defined by Brian Marick:

[Image: Brian Marick’s agile testing quadrants matrix]

You must cover each quadrant to develop a comprehensive testing strategy to produce high-quality products.

Quadrants 1 and 2 are about building quality-in. Today, you implement unit testing (hopefully TDD) for Q1 and Behavior Driven Development (BDD) for Q2. This approach will expand the responsibility regarding testing beyond your QA team to include developers, product managers, UX designers, etc. You will Shift Left.
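As a rough illustration of the two quadrants, here is the same hypothetical business rule (a discount calculation invented for this example) covered both ways: a developer-facing unit test for Q1, and a business-readable, BDD-style scenario for Q2:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule, used only for illustration."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Q1: a technology-facing unit test, the kind TDD produces.
def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0


# Q2: a business-facing, BDD-style scenario. The Given/When/Then
# structure makes it readable by product managers and designers,
# which is how testing responsibility expands beyond QA.
def test_given_100_dollar_cart_when_25_pct_coupon_then_total_is_75():
    cart_total = 100.0                       # Given a $100 cart
    total = apply_discount(cart_total, 25)   # When a 25% coupon is applied
    assert total == 75.0                     # Then the total is $75


test_apply_discount()
test_given_100_dollar_cart_when_25_pct_coupon_then_total_is_75()
```

In practice, Q2 scenarios are usually written in a tool like Cucumber with Gherkin syntax; the point here is only the shift in audience, not the specific framework.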

Quadrants 3 and 4 are about critiquing the product, and you must not neglect them. That means having an automated regression testing suite. To build a solid regression suite, you need to bring in a QA Architect or get training for your team; they need to learn software testing methodologies. If you care about your users, you also want usability testing in place. Plus, you need time to conduct exploratory testing that goes beyond your regression suite and hunts for obscure defects. That will cover Q3.

For Q4, you must implement performance, security, accessibility, and any other testing types essential to your business context. The testing for quadrant 4 requires a group of specialists and expensive tools.

As Steve presents this comprehensive testing strategy to his peers on the executive team, their faces grow pensive. The realization that achieving high quality will require a significant investment is dawning on all of them.

Steve’s boss might say, “Okay, that testing strategy sounds fine, but how do we know it works?” At this point, Steve covers some metrics you can track: escaped defects, defect density, test coverage, code coverage, and so on. You can start measuring each release’s Defect Removal Efficiency (DRE). To make DRE meaningful, count the defects reported by your customers, not just those found internally; in an Agile environment with a “zero defects” approach, you do not release known issues, so internal counts alone tell you little. The catch is that DRE is a lagging indicator, and DRE alone cannot tell you whether a release is ready to ship.
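As a sketch, the standard DRE calculation (defects removed before release divided by all defects, including customer-found ones) looks like this; the numbers are invented for illustration:

```python
def defect_removal_efficiency(found_internally: int, found_by_customers: int) -> float:
    """DRE = defects removed before release / total defects, as a percentage.

    Customer-found defects must be included, or the metric is meaningless."""
    total = found_internally + found_by_customers
    if total == 0:
        return 100.0  # no defects found anywhere
    return 100.0 * found_internally / total


# Example: QA found 95 defects before release; customers reported 5 after.
print(defect_removal_efficiency(95, 5))  # → 95.0
```

Note why it is a lagging indicator: the denominator is not known until customers have used the release for a while.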

Another executive will ask, “That’s great for measuring and improving our quality in the long run, but how do we know the quality of a release before it goes out? Is there a way to know ahead of time?” So, Steve talks about defining quality gates: every release must meet exit criteria before it ships. The criteria might include items like these:

  • Performance test suite variance should be less than 8% from the previous release. It should be getting better, not worse.
  • All regression suite test cases are passing.
  • The usability test did not show a degradation in the user experience.
  • The team did not introduce new security issues.

I can keep going, but you get the point.
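Gates like these are straightforward to automate in a CI pipeline. Here is a minimal sketch with thresholds mirroring the example criteria above; the function name, inputs, and the idea that these numbers arrive as simple counts are all assumptions for illustration:

```python
def release_gate(perf_variance_pct: float,
                 regression_failures: int,
                 usability_regressions: int,
                 new_security_issues: int):
    """Return (can_release, reasons_blocked) for a candidate release."""
    blocked = []
    if perf_variance_pct >= 8.0:  # performance must stay within 8% of the prior release
        blocked.append(f"performance variance {perf_variance_pct}% >= 8%")
    if regression_failures > 0:   # all regression suite test cases must pass
        blocked.append(f"{regression_failures} regression test(s) failing")
    if usability_regressions > 0:  # no degradation in the user experience
        blocked.append("usability degraded vs. previous release")
    if new_security_issues > 0:    # no new security issues introduced
        blocked.append(f"{new_security_issues} new security issue(s)")
    return (not blocked, blocked)


ok, reasons = release_gate(3.2, 0, 0, 0)
print("release" if ok else f"blocked: {reasons}")
```

The value of encoding the gates is that “can we ship?” stops being a judgment call in a meeting and becomes a repeatable check anyone can run.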

Finally, the CEO will ask three crucial questions:

  • How much will this cost?
  • Will these changes slow down the development process?
  • Can you guarantee that this investment will avoid the experience we just had this year?

Here is the ugly truth: Steve’s answers to these questions today are not encouraging. Why? And how can we change that dynamic to protect both product quality and the bottom line? In my next blog, we will discuss how these questions get answered today and how to answer them with Testaify. Read part two of this blog now: Turn Testing into a Productivity Powerhouse (like Steve!).

About the Author

Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael is committed to a vision for Testaify: delivering Continuous Comprehensive Testing through Testaify's AI-first testing platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.

Take the Next Step

Join the waitlist to be among the first to know when you can bring Testaify into your testing process.