How Agile Rewrote the Rules - History of Test Automation, Part 2
Agile transformed test automation from a cost-saving afterthought into a continuous, developer-driven safety net—but left teams with a new bottleneck: the heavy human effort required to build and maintain those automated tests.
A History of Test Automation — Part 2
Missed Part 1? Read "Before Agile Changed Everything" (A History of Test Automation — Part 1) first.
In 2001, seventeen software developers met at a ski resort in Snowbird, Utah.
They weren't there to ski. They were there because they were frustrated with long release cycles, heavyweight processes, and the persistent gap between what software teams promised and what they delivered. What came out of that meeting was the Agile Manifesto. Four values. Twelve principles. A few hundred words. And an earthquake that's still reshaping software development today.
For testing, the earthquake hit harder than most people realized at the time.
The Agile School: Testing as Continuous Feedback
Agile didn't invent test automation. But it fundamentally changed what test automation was for.
Before Agile, automation was about reducing the cost of regression testing — replacing human effort with scripts that verified known behavior. After Agile, automation became something more foundational: a safety net that let teams move fast without breaking things. If you were shipping every two weeks instead of every six months, you couldn't afford a two-week testing phase. You needed testing baked into the process, not bolted onto the end.
The Agile School:
- Testing can drive development
- Tests can be the specification (TDD, BDD)
- Tests should be automated — especially below the UI
- Run tests as often as possible, ideally with every code check-in
- Deliver frequently and reduce lead time as much as possible
- Practice ongoing exploratory testing to surface hidden assumptions
- Key question: Are the automated tests passing?
This school gave us the unit test as a first-class artifact. It gave us TDD, writing the test before the code. It gave us BDD, which extended that idea to acceptance criteria and brought testers, developers, and product owners into the same conversation. It gave us CI/CD pipelines, where automated tests ran on every commit, and a failing test was a blocker, not a report for next week's meeting.
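To make that concrete, here is a minimal TDD-style sketch in JUnit 5. The ShoppingCart class and its API are hypothetical, invented for illustration; in the TDD rhythm, the failing test is written first and the implementation follows.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

// Hypothetical code under test. In TDD it is written *after* the
// test below, with just enough logic to make the assertion pass.
class ShoppingCart {
    private final List<int[]> items = new ArrayList<>(); // {quantity, unitPriceCents}

    void add(String name, int quantity, int unitPriceCents) {
        items.add(new int[] { quantity, unitPriceCents });
    }

    int totalInCents() {
        return items.stream().mapToInt(i -> i[0] * i[1]).sum();
    }
}

class ShoppingCartTest {
    // In TDD this test comes first and fails ("red") until the
    // implementation above makes it pass ("green").
    @Test
    void totalReflectsAddedItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 2, 1500); // name, quantity, unit price in cents
        cart.add("pen", 1, 300);

        assertEquals(3300, cart.totalInCents()); // 2 * 1500 + 300 = 3300
    }
}
```

The test is the specification: the assertion states the intended behavior before any production code exists.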
The New Automation Engineer
The Agile School created a new archetype: the SDET.
Software Development Engineer in Test. The title said it all. These weren't testers who learned some automation. These were coding-fluent, framework-savvy, architecture-aware engineers who happened to focus on quality. They wrote Selenium tests, JUnit suites, and API test harnesses. They maintained the CI pipeline. They lived in the same tools as developers and committed to the same repositories.
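For a flavor of that work, here is a minimal Selenium WebDriver sketch of the kind of browser check an SDET might own. The URL, element IDs, and page title are placeholders, not a real application, and the snippet assumes a local Chrome installation.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// A sketch of a typical SDET-era browser smoke test. Everything it
// targets (URL, element IDs, expected title) is a placeholder.
public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // requires local Chrome
        try {
            driver.get("https://example.test/login");
            driver.findElement(By.id("username")).sendKeys("demo");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Crude assertion: the post-login page title changed.
            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError("Login flow did not reach the dashboard");
            }
        } finally {
            driver.quit(); // always release the browser session
        }
    }
}
```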
Open-source automation tools exploded during this period. Selenium was launched in 2004. JUnit had been around since the late 1990s, but became standard practice in Agile shops. Cucumber gave BDD a practical implementation. Later, frameworks like Cypress, Playwright, and RestAssured raised the bar again. And the entire ecosystem was free, which meant the expensive enterprise tool vendors from the Factory School era lost a significant portion of their market.
The workforce changed with it. Employment in the dedicated US QA analyst category (the BLS floor number) flattened after the dot-com bust and then held roughly steady through the 2010s, hovering between 195,000 and 215,000. That sounds like stability. It wasn't. The tech industry was doubling in size during this period. QA's share of total software employment was quietly shrinking.
Where were the quality-focused jobs going? Into the Software Developer category. SDETs, automation engineers, and quality engineers who wrote code were counted as developers because, by job function, that's what they were.
Shift Left, and What It Actually Meant
The phrase "shift left" became the decade's testing mantra.
The idea was simple: find defects earlier. Move testing closer to development. Don't wait until the end of the sprint, let alone the end of the release cycle. The earlier you catch a bug, the cheaper it is to fix.
In practice, this meant developers owned more testing. Unit tests became a developer's responsibility. Integration tests were written alongside feature code. Testers focused on higher-level scenarios, exploratory testing, and test strategy — in theory.
The reality was messier.
"Shift left" often became an excuse to reduce QA headcount while claiming the work was being absorbed by developers. Sometimes that was true. Often it wasn't. What got eliminated was the systematic, thorough, regression-focused testing that dedicated QA teams did well. What replaced it was fast-moving CI pipelines with incomplete coverage and developers who wrote unit tests for the happy path.
I've seen this pattern in too many organizations to dismiss it as an edge case.
The other shift was geographic. Offshore QA, primarily manual testing in lower-cost markets, had grown significantly through the 2000s. Many of those roles survived longer than the domestic manual testing workforce because the economics were different. But Agile's emphasis on embedded, collaborative quality put pressure even on that model.
BDD and the Promise of Collaboration
One of the genuinely important contributions of the Agile School was BDD.
The idea was that tests written in natural language — Gherkin's Given/When/Then syntax — could serve as living documentation, executable specifications, and a shared language between technical and non-technical team members. You wrote the acceptance criteria, and those criteria became the tests.
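Concretely, a Gherkin scenario and the Java step definitions that back it might look like the sketch below (a Cucumber-style illustration; the scenario, step text, and classes are all hypothetical).

```java
// Feature file (maintained separately, in Gherkin):
//
//   Scenario: Returning customer logs in
//     Given a registered user "demo"
//     When they sign in with a valid password
//     Then they see their account dashboard
//
// Each natural-language step above is mapped to code like this.
import static org.junit.jupiter.api.Assertions.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class LoginSteps {
    private String username;
    private boolean signedIn;

    @Given("a registered user {string}")
    public void aRegisteredUser(String name) {
        username = name; // in a real suite: create or look up test data
    }

    @When("they sign in with a valid password")
    public void theySignIn() {
        signedIn = username != null; // stand-in for driving the app
    }

    @Then("they see their account dashboard")
    public void theySeeDashboard() {
        assertTrue(signedIn);
    }
}
```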
It was a powerful idea. In many teams, it worked.
But it also created new maintenance burdens. Someone had to write and maintain the feature files. Someone had to map the natural-language steps to code. When business requirements changed, and they always did, someone had to update both the feature files and the step definitions. BDD frameworks could become their own kind of technical debt.
The pattern was consistent throughout the Agile era: each advance in automation capability created new complexity that required more sophisticated engineers to manage.
What the Agile School Built — and What It Left Unfinished
By the mid-2010s, the best Agile shops had impressive automation infrastructure. Thousands of unit tests. Hundreds of integration tests. End-to-end suites running in CI. Deployment pipelines that took code from commit to production in hours.
But there were still gaps.
End-to-end test suites were famously fragile. UI tests broke when designers changed a button. Test data management was a nightmare. The "test pyramid" (many unit tests, some integration tests, few end-to-end tests) became orthodoxy, but it left entire categories of testing underfunded: exploratory testing, accessibility testing, performance testing, and security testing.
And the maintenance burden was relentless. A large automation codebase is a living codebase. It needs refactoring, cleanup, and continuous investment. Many organizations found that maintaining their test suite took as much engineering effort as writing new features.
The BLS data tells a quiet version of this story. The broader QA workforce, when you include SDETs and quality engineers classified as developers, had grown to perhaps 400,000 by the late 2010s. But that growth wasn't keeping pace with the software industry. The ratio of quality-focused engineers to total software engineers was declining.
More software was being shipped. Less of it was being tested the way it probably should be.
Something was about to change.
The Setup
The Agile School's contribution to testing is enormous and largely positive. It brought quality closer to development. It eliminated a lot of waste. It created a generation of engineers who think about testability from the start, not as an afterthought.
But it left one fundamental problem unsolved: test automation still required significant human engineering effort to create, maintain, and evolve. The bottleneck had shifted from "someone needs to manually execute this test" to "someone needs to manually write and maintain this test."
That bottleneck is exactly what the next school is designed to eliminate.
About the Author
Testaify founder and COO Rafael E. Santos is a Stevie Award winner whose decades-long career includes strategic technology and product leadership roles. Rafael's goal for Testaify is to deliver comprehensive testing through Testaify's AI-first platform, which will change testing forever. Before Testaify, Rafael held executive positions at organizations like Ultimate Software and Trimble eBuilder.
Take the Next Step
Testaify is in managed roll-out. Request more information to see when you can bring Testaify into your testing process.