The Sixth School
The “Sixth School” replaces human-written tests with autonomous agents that discover, design, and run tests, shifting humans to strategy and oversight.
A History of Test Automation, Part 3
Part 3 of a three-part series. Read Part 1, Before Agile Changed Everything, and Part 2, How Agile Rewrote the Rules, first.
Every school of testing emerged because something wasn't working.
The Analytical School emerged because ad-hoc testing wasn't rigorous enough. The Factory School emerged because rigorous testing wasn't scalable. The QA School emerged because scale without product ownership produced hollow quality. The Context-Driven School emerged because scripted processes couldn't replace human judgment. The Agile School emerged because testing phases were too slow for iterative development.
Each school was a correction. And each correction introduced new problems.
We're now watching a sixth school form in real time. It's being shaped by the same pressure that produced every previous one: the gap between what we need from testing and what current methods can deliver.
The Problem Statement
The Agile School solved the speed problem by making testing continuous. But it never solved the cost problem.
Automated tests don't maintain themselves. Every UI change breaks something. Every refactoring ripples through the test suite. SDETs are expensive to hire and expensive to retain. And even the best-maintained automation suites cover a fraction of what a thorough human tester would actually check.
There's also the coverage problem that nobody likes to talk about. The test pyramid told us to invest heavily in unit tests and lightly in end-to-end tests. That was sound advice for the economics of 2010. But it left large portions of the application, the actual user experience, under-tested.
Large language models changed the economics of all of this.
The Sixth School: Autonomous Testing
I've spent years thinking about what comes after the Agile School. What I see forming is something I'll call the Autonomous Testing School — though I suspect the final name is still being debated somewhere.
Here is its belief system:
- AI agents can discover, design, execute, and report tests independently
- Test creation and maintenance can be largely automated
- The application under test is the specification
- Testing should be continuous and comprehensive, not sampled
- Human testers shift from execution to strategy, oversight, and judgment
- The ratio of testing effort to code changes should approach zero
- Key question: What is the AI finding that we haven't thought to test?
This is not an incremental improvement on the Agile School. It's a different theory of software testing.
What Changed
The catalyst was the capability jump in large language models beginning around 2022–2023.
AI had been in testing tools for years before that: anomaly detection, flaky test identification, and visual regression tools like Applitools. But those were narrow applications. Smart features in traditional tools.
What changed with LLMs was the ability to understand context, reason about intent, and generate novel test scenarios without being explicitly told what to test. An agent could look at an application, infer its purpose, and produce tests that a human would recognize as sensible without anyone writing a test plan, a Gherkin file, or a Selenium script.
The agentic coding tools arrived around the same time. GitHub Copilot. Later, tools like Claude Code. These started helping developers write test code faster. That's useful, but it's still the Agile School model. A human engineer, now with an AI assistant, is still maintaining a test codebase. The bottleneck moves, but doesn't disappear.
The more interesting development is the autonomous agent: software that can explore an application the way a human tester would, discovering behavior, identifying anomalies, and producing findings — without a human in the loop during execution.
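Stripped to its core, the loop such an agent runs is simple to state: explore, check, report. The sketch below is purely illustrative, a toy page graph standing in for a real application, with two generic checks standing in for inferred test oracles. None of the names or checks here describe any particular product's implementation.

```python
# Hypothetical sketch of an autonomous testing agent's core loop.
# A real agent would drive a browser and use a model to infer intent;
# here the "application" is a toy page graph and the checks are fixed.

def explore_and_test(app, start="home"):
    """Crawl every reachable page, apply generic checks, report findings."""
    findings = []
    seen = set()
    frontier = [start]
    while frontier:
        page_id = frontier.pop()
        if page_id in seen:
            continue
        seen.add(page_id)
        page = app.get(page_id)
        if page is None:
            # Discovered anomaly: a link points at a page that doesn't exist.
            findings.append(f"dead link: {page_id}")
            continue
        if not page.get("title"):
            # Generic invariant applied to whatever the agent finds.
            findings.append(f"missing title: {page_id}")
        frontier.extend(page.get("links", []))
    return findings

# A toy application under test: the page graph itself is the specification.
toy_app = {
    "home": {"title": "Home", "links": ["pricing", "docs"]},
    "pricing": {"title": "", "links": ["home"]},      # missing title
    "docs": {"title": "Docs", "links": ["missing"]},  # dead link
}

print(explore_and_test(toy_app))
```

Note what's absent: no test script, no list of expected pages. The agent derives its coverage from what it discovers, which is the structural difference from the Agile School's hand-maintained suites.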
What This Means for the Workforce
Let's be honest about what the data says.
Dedicated QA analysts and testers number about 201,700 in BLS data as of 2024, and that figure is a floor. The broader quality-focused workforce, including SDETs and quality engineers classified as software developers, is estimated at 400,000–450,000. And the ACS, which captures self-reported occupation data, puts the number closer to 88,000 for those who identify QA as their primary work.
All three numbers are declining as a share of the total software workforce.
The Agile School didn't shrink the QA profession — it transformed it. The people who survived the shift from manual testing to Agile testing were the ones who learned to write code, maintain frameworks, and think in terms of CI pipelines. The ones who didn't adapt lost their jobs when "shift left" became shorthand for "we're moving this work to developers."
The Autonomous Testing School will produce a similar reckoning. The people best positioned for it aren't the ones who can write the best Selenium suite. They're the ones who can define what quality means for a product, interpret what an AI agent found, decide what matters, and design testing strategies that a machine can execute but can't invent on its own.
The Analytical School would call that a testing architect. The Context-Driven School would call it a skilled investigator. I call it the future of the QA career.
The Sixth School Is Not the End of Human Testing
I want to be careful here, because this is where the conversation usually goes wrong.
People hear "autonomous testing" and assume "no more testers." That's not what the Sixth School believes.
It believes that the execution layer — the writing and running of tests — is increasingly automatable. What isn't automatable — at least not yet — is the judgment layer that decides what matters: interpreting edge cases, defining acceptable behavior for complex, ambiguous, human-facing systems, and advocating for the user in product decisions.
Those are deeply human skills. The context-driven thinkers were right about that.
The honest version of this is: there will be fewer QA roles. The ones that remain will require more, not less, strategic capability. And entirely new roles will emerge around managing, interpreting, and improving autonomous testing systems. Roles that don't yet exist in the way software developers, QA analysts, and SDETs do today.
We went through this once before. The Agile School eliminated many manual testing roles and created SDET roles. The net effect on total employment was roughly neutral, but the type of work changed completely. Most people who saw it coming adapted. Most people who didn't see it coming didn't.
The Sixth School's Contribution
Every school left something behind that the next ones built on.
The Sixth School is inheriting the techniques of the Analytical School, the scale of the Factory School, the product ownership of the QA School, the judgment of the Context-Driven School, and the speed of the Agile School.
It's not discarding any of them. It's putting them together in a machine.
The contribution of the Autonomous Testing School, the thing it will leave for whatever comes after, is something we don't have a name for yet. Comprehensive coverage, maybe. The idea that every application can be tested thoroughly, continuously, without the bottleneck of human execution time.
That would be genuinely new. None of the previous five schools could offer it.
This Is What We're Building
I'm not just observing the Sixth School from the outside. We're building it.
At Testaify, we've spent years working on what true autonomous testing actually means in practice — not AI-assisted script generation, not smarter record-and-playback, but an agent that can discover an application, design tests based on what it finds, execute them, and report meaningful findings without a human engineer in the loop.
No test scripts. No source code access. No manual configuration.
The core belief behind Testaify is the same belief behind the Sixth School: the application under test is the specification. If you can see it, you can test it. And if you can test it autonomously, you can test it comprehensively — every build, every change, every release.
We're still early. So is the Sixth School. But the direction is clear, and the economics are already moving. Teams that start thinking in these terms now will be in a very different position from those that wait for the Agile School's model to stop working before they look for an alternative.
If you're curious what this looks like in practice, take a look at what Testaify does. And if any of this resonates — or if you think I'm wrong about something — I'd genuinely like to hear from you.
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Testaify is in managed roll-out. Request more information to see when you can bring Testaify into your testing process.