[Header image: A digital school bus navigating a road with signs pointing to the analytical, factory, QA, and context-driven schools of test automation.]
Posted Mar 30, 2026 · 5 min read

A History of Test Automation — Part 1: Before Agile Changed Everything

Test automation evolved through competing “schools” of thought—analytical, factory, QA, and context-driven—each shaping tools, roles, and practices before Agile emerged to address the growing need for speed and better feedback loops. 


This is Part 1 of a three-part series on how test automation evolved and where it's going.

The history of test automation is not a clean, linear story.

It's a series of collisions between what developers believed software should be, what managers needed to ship, and what testers could actually pull off with the tools they had. To understand where we are today and what's coming, you have to start at the beginning.

That means starting before Agile. Before CI/CD. Before sprints, backlogs, and stand-ups. It means starting in an era when testing was either a formal science or an afterthought, depending on where you worked.

I've written before about the schools of testing — the framework Bret Pettichord used to organize the different belief systems that have competed for dominance in our field. That post covers the five schools from a conceptual angle. This series tells the same story through the lens of automation: what the tools and workforce looked like, and what was actually happening on the ground within QA organizations at each stage.

Let me walk you through it.

The Analytical School: Testing as an Engineering Discipline

The earliest serious attempts at test automation came from an environment that treated software like mathematics.

If you wrote a specification, you derived tests from it. If you defined a state machine, you generated paths through it. Testing was a branch of computer science, not a craft you learned by doing. Pioneers like Edsger Dijkstra and later Boris Beizer gave us structured testing techniques: equivalence partitioning, boundary value analysis, basis path coverage, and state transition testing.

This school produced things we still use daily, even if we've forgotten where they came from.

The Analytical School's belief system:

  • Software is a logical artifact
  • Testing techniques must have a logical-mathematical form
  • Coverage is measurable and should be maximized
  • Key question: Which techniques should we use?

Automation in this era looked like what we'd now call unit testing: small, deterministic programs that verified functions against specifications. Tools were largely homegrown. There was no Selenium. No JUnit. No frameworks. If you wanted an automated test, you wrote one from scratch.
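To make that concrete, here is a minimal sketch of what one of those homegrown, specification-derived checks might have looked like. Everything here is hypothetical, including the `discount_rate` function and its "10% off at 100 units" rule; the point is the Analytical School pattern of testing the values on and around a specified boundary (boundary value analysis), with no framework in sight.

```python
def discount_rate(quantity: int) -> float:
    """Hypothetical function under test: per the spec, orders of
    100 units or more earn a 10% discount."""
    return 0.10 if quantity >= 100 else 0.0


def run_boundary_tests() -> list[str]:
    """Boundary value analysis: exercise the values just below, on,
    and just above the specified boundary of 100 units."""
    cases = [(99, 0.0), (100, 0.10), (101, 0.10)]
    failures = []
    for quantity, expected in cases:
        actual = discount_rate(quantity)
        if actual != expected:
            failures.append(
                f"quantity={quantity}: expected {expected}, got {actual}"
            )
    return failures


if __name__ == "__main__":
    failures = run_boundary_tests()
    print("PASS" if not failures else "\n".join(failures))
```

A small, deterministic program like this is the whole "framework": it derives its cases mechanically from the specification, runs them, and reports pass or fail.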

The workforce in this era barely registered as a distinct category. The U.S. Bureau of Labor Statistics (BLS) didn't track "software QA analysts" as a separate occupation until the late 1990s. Testing was what developers did after they finished developing.

The Factory School: Testing as Production Line

The software industry scaled during the 1980s and 1990s, and with that scale came a new problem: how to manage testing across large teams, long release cycles, and complex product portfolios.

The answer, largely, was to industrialize it.

This gave us test plans, test cases, traceability matrices, defect-tracking systems, and the concept of the testing phase: a defined period at the end of a waterfall cycle when a dedicated team put the product through its paces. Testing became a project. It had a budget, a schedule, and a headcount.

The Factory School:

  • Software development is a project
  • Testing is a measure of progress
  • Testing must be managed and cost-effective
  • Test coverage is tracked through metrics
  • Key question: What metrics should we use?

The commercial test automation tools of this era were built for the Factory School. Mercury Interactive gave us WinRunner, then QuickTest Professional. Compuware gave us QADirector. IBM Rational gave us an entire platform. These were expensive, enterprise-grade, heavyweight tools.

And they created an entirely new kind of job: the test automation engineer.

By the late 1990s, the BLS was starting to capture QA-specific employment for the first time. Estimates put the dedicated US QA workforce at 150,000–200,000 by the end of the decade, growing rapidly and fueled by the dot-com boom. Companies were hiring anyone who could run a test script. People were getting CSTE certifications. Test automation was becoming a career path.

The problem was that the tools were brittle. Record-and-playback worked until the UI changed, which was constantly. Maintaining a suite of WinRunner scripts for a rapidly evolving application was a full-time job. A painful one.

I lived this. If you worked in QA during this era, you know exactly what I mean.

The QA School: Testing as Gatekeeper

Running alongside the Factory School, sometimes overlapping, sometimes in conflict, was an older tradition that saw testing as more than metric production.

The QA School believed software quality required discipline. Testers weren't just running scripts or tracking defects. They were guardians. They held the gate between development and users. They enforced the process. Some even believed they needed to police developers.

The QA School:

  • Software quality requires discipline
  • Testing determines whether development processes are followed
  • Testers protect users from bad software
  • Quality is a state achieved through rigor
  • Key question: Are we following a good process?

This school gave us the title "QA" and a lot of the professional identity that came with it. At companies like the one where I spent many years, there were testers who wore that title with genuine pride. They had deep product knowledge. They cared intensely about the user experience. They weren't just running scripts; they were system thinkers.

Automation in the QA School was often shallow: heavy investment in manual test cases, complex test plans, and regression suites that humans executed and that were only occasionally, and partially, automated. The economics didn't always favor deep automation because the processes assumed human judgment at the center.

The Context-Driven School: Testing as Skilled Craft

By the late 1990s, a group of testing thinkers, including James Bach, Michael Bolton, Cem Kaner, and Bret Pettichord, pushed back against what they saw as the over-industrialization of testing.

They argued that testing was fundamentally a human, intellectual activity. Techniques weren't one-size-fits-all. The value of a test depended on context. And perhaps most controversially: you couldn't automate judgment.

The Context-Driven School:

  • People create software; people test it
  • Testing finds bugs — a bug is anything that could bug a stakeholder
  • Testing provides information to the project
  • Testing is skilled, creative, and multidisciplinary
  • Key question: What tests would be most valuable right now?

This school gave us exploratory testing, the practice of simultaneously designing and executing tests, learning about the system as you go. It was a deliberate counterweight to the scripted, factory model.

Automation wasn't rejected outright, but it was put in its place. "Automated checking," as Bach called it, was useful for confirming known behavior. It was not a substitute for skilled human investigation.

The Peak, and the Warning Signs

The pre-Agile era ended somewhere around 2001.

The dot-com crash wiped out a significant portion of the software workforce, and QA was not spared. Employment in the field had likely peaked around 260,000 by 2000–2001 before contracting sharply. Many expensive enterprise testing tools lost their customer base. Offshore testing became attractive. The manual testing model, with large teams executing scripted test cases, suddenly looked economically fragile.

Something needed to change.

The Four Schools had each contributed something essential. The Analytical School gave us techniques. The Factory School gave us scale. The QA School gave us product ownership and depth. The Context-Driven School gave us honesty about what testing could and couldn't do.

But none of them had fully solved the speed problem. Waterfall cycles were too long. Test phases came too late. Feedback loops were too slow.

A small group of developers was about to propose a different way to build software. It would reshape testing more dramatically than anything that came before it.

About the Author

Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver continuous, comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.

Take the Next Step

Testaify is in managed roll-out. Request more information to see when you can bring Testaify into your testing process.