The 10 Most Common Myths About the Waterfall Model

Everyone has an opinion about waterfall, and many of those opinions are objectively wrong.

Not because waterfall is secretly great, because it isn’t, but because the myths that have grown around it on both sides obscure the real lessons. The waterfall defenders insist it works if only people would follow it properly. The agile evangelists treat it as a cautionary tale without ever understanding what it was trying to solve.

Both camps are doing the industry a disservice, and both have a stake in keeping the myths alive.

So let’s clear the air.


Myth 1: “Waterfall was designed by software engineers”

It wasn’t, and the origin story matters more than most people realise.

Winston Royce’s 1970 paper, the one everyone cites as the birth of waterfall, described the sequential phase model as an example of a flawed process, not a recommended one. He wrote that the implementation “is risky and invites failure”. The industry read the diagram, ignored the surrounding text, and standardised on it anyway. It is one of the most successful misreadings in the history of engineering.

What most organisations standardised on wasn’t even a coherent interpretation of Royce’s diagram; it was a cargo-cult approximation, with infinite local variations in phase count, gate structure, and feedback loops, all called “waterfall” regardless of whether they resembled each other.

Waterfall was borrowed from manufacturing and civil engineering, where fully specifying a bridge before pouring concrete is both possible and necessary. Software has none of those physical constraints. You can’t fix a borrowed process by following it more carefully if it was never designed for the problem you’re solving.


Myth 2: “Agile killed waterfall”

Agile didn’t kill waterfall; waterfall died of its own complications, and it did so in the late 1980s, well over a decade before the Agile Manifesto was written in 2001.

The Chaos Reports of the 1990s documented the wreckage in uncomfortable detail: projects overrunning budgets, products delivered late or not at all, requirements that bore no resemblance to what users actually needed, and integration phases that turned into emergency rooms. Waterfall was already on life support by the time Beck, Fowler, Martin, and the rest gathered in Snowbird, Utah.

Agile didn’t kill waterfall; it held the funeral.

The replacement processes were already emerging independently across the industry, in XP, in Scrum, in the lightweight methods practitioners had invented out of necessity, because the alternative was continuing to fail on a predictable schedule.

The “agile vs. waterfall” framing has been obsolete for at least fifteen years. The practitioners who are actually building software in complex organisations aren’t asking which side they’re on; they’re asking what a specific project needs, given specific constraints, with a specific team. That’s the right question, and it doesn’t fit on a conference slide.


Myth 3: “Waterfall works fine for large enterprises”

This one persists because large enterprises still use waterfall, and because the people running those enterprises often genuinely believe it is working, which is not quite the same thing.

Large enterprises have the budget to absorb failure gracefully enough that nobody calls it failure.

They have legal departments that love the predictable contract structures waterfall enables, procurement processes that prefer a five-hundred-page requirements document to a Product Backlog because at least it looks like something you can sign, and program offices that measure success by whether documentation was completed on schedule rather than whether the software solved a real problem.

These are not signs that waterfall works; they are signs that large organisations are very good at surviving their own processes, and at building cultures that reward process compliance over outcomes.

Ask the engineers and product managers inside those organisations how the last waterfall project actually went, what was cut, what was delivered late, how much of what shipped was actually used, and then decide whether “works fine” is the right characterisation.


Myth 4: “Waterfall produces better documentation”

Waterfall produces more documentation, and that is not the same thing, though it is also worth being clear that agile’s track record here is not something to be smug about.

The waterfall failure mode is documentation that is voluminous, formally structured, and often wrong in the subtle way AI text is.

These errors only become visible when someone tries to build against the document, certify against it, or hand it to a maintenance team who weren’t on the original project. A specification written in month one and frozen in month three is archaeology by the time the project ships.

Agile’s failure mode is different, not better. Agile teams produce working software and a backlog of user stories describing intentions rather than outcomes, while the architecture lives in the heads of the three senior engineers who’ve been there from the start.

In regulated domains, where traceability is a certification requirement, neither tradition has a satisfying answer without deliberate, sustained effort. However, traceability doesn’t require a waterfall way of producing the documentation.


Myth 5: “Agile is just waterfall without the planning”

This one comes from waterfall defenders who’ve watched agile teams fail and concluded that insufficient planning was the cause, which is sometimes true but misses the broader point.

Agile does less upfront planning, and that is a deliberate design choice. It does not do less planning overall. A well-run agile team plans continuously, at the sprint level, at the release level, and at the roadmap level, and re-plans when reality contradicts the assumptions the plan was built on, which happens constantly in any domain where the problem is not fully understood at the start.

The difference isn’t the quantity of planning; it’s the timing of commitment, and the willingness to update the plan when the plan is demonstrably wrong. Waterfall commits early and changes late, when the cost of change is highest. Agile commits late and changes often, when the cost of change is manageable.

Neither approach is universally correct, but only one of them is designed to handle the fact that you will always know more at the end of a project than you knew at the beginning.


Myth 6: “Waterfall is safer because the requirements are locked”

Locked requirements are not a safety feature; they are a risk concentration device.

A particularly dangerous one, at that, because they feel like certainty. “There will never be more than two digits in a year” felt like certainty once, too.

When requirements are frozen early, every error, omission, and misunderstanding is locked in with them, and the cost of those errors compounds through design, implementation, and test. A wrong requirement in month one is cheap to fix in month one. The same error discovered during system integration in month fourteen means rework across every phase it touched, which can mean the whole project.

Waterfall doesn’t eliminate risk; it delays it, and delayed risk is expensive risk. The late-stage integration phase is where all the quietly accumulating problems surface at once, in the most expensive context, with the least available time to fix them.

“Locked” requirements are not stable requirements; they are unverified requirements that haven’t yet had the opportunity to be proven wrong. Stability and correctness are not the same thing, and a process that achieves one by preventing the discovery of the other is certainly not making you safer.


Myth 7: “Waterfall failed because teams didn’t follow it properly”

This is the waterfall version of the No True Scotsman fallacy, and it appears with remarkable regularity in post-mortems.

“It failed because the requirements phase wasn’t done properly.” “It failed because testing was compressed to hit the delivery date.” “If only the stakeholders had been more disciplined about change control…” The implication is always the same: the method is sound, and the humans are the problem. Follow the process perfectly, with complete information, stable requirements, disciplined stakeholders, and no surprises, and waterfall will work.

That’s not an argument for a method; it’s a description of conditions that are almost always violated in software development, sooner or later. A method that only works under ideal conditions is not a practical method; it’s a theoretical exercise.

The value of a real engineering process is measured by how it performs when the conditions aren’t ideal, because the conditions are never ideal. A process that requires perfection to function is a liability masquerading as a safeguard, and defending it by listing the ways people failed to be perfect is not a defence, it’s a confession.


Myth 8: “The V-model is just waterfall with a bend in it”

This is the myth that costs the most credibility in a room full of safety-critical engineers.

The V-model and waterfall share a sequential structure and an upfront requirements phase, but they are not the same thing. Waterfall’s core failure mode is verification at the end, when the cost of a requirements error is at its maximum. The V-model addresses that directly, by pairing each level of specification with a corresponding verification activity defined at the same time, so system requirements are paired with system tests, architectural design with integration tests, and detailed design with unit tests.

That is a structural difference, not a cosmetic one. A requirement that cannot be expressed as a verifiable acceptance criterion is a bad requirement, and the V-model surfaces that at specification time rather than at integration. It maps naturally onto what DO-178C, ISO 26262, and IEC 62304 require: bidirectional traceability from requirement to test at every level.
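To make “bidirectional traceability” concrete, here is a minimal sketch of the gap analysis such audits formalise: every requirement must trace to at least one test, and every test link must trace back to a real requirement. The requirement IDs, test IDs, and data structures are hypothetical, invented purely for illustration; real toolchains do this over a requirements database, not dictionaries.

```python
# Hypothetical example: a bidirectional requirement-to-test traceability check.
# All IDs and texts below are invented for illustration.

requirements = {
    "SYS-001": "System shall log every dose administered",
    "SYS-002": "System shall alarm on sensor disconnect",
    "SYS-003": "System shall reject doses above the configured limit",
}

# Each test declares which requirement(s) it claims to verify.
tests = {
    "TST-101": ["SYS-001"],
    "TST-102": ["SYS-002"],
    "TST-103": ["SYS-001", "SYS-004"],  # SYS-004 does not exist: a dangling link
}

# Forward direction: which requirements are covered by at least one test?
covered = {req for links in tests.values() for req in links}

# Gaps in both directions.
unverified = sorted(set(requirements) - covered)  # requirements with no test
dangling = sorted(covered - set(requirements))    # test links to nothing

print("Requirements without a test:", unverified)          # ['SYS-003']
print("Tests tracing to unknown requirements:", dangling)  # ['SYS-004']
```

The point of running this at specification time, rather than at integration, is exactly the V-model’s bet: `SYS-003` is cheap to notice before a line of code exists, and ruinous to notice in front of an auditor.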

The V-model has its own failure mode when verification criteria are written once and filed until the auditor arrives. But conflating it with waterfall signals to anyone in a regulated domain that your mental model stopped at the introductory chapter.


Myth 9: “Modern teams have moved past waterfall entirely”

Walk into any regulated industry, medical devices, aerospace, defence, or automotive electronics, and you will find waterfall, often required by law, mandated by certification standards written before the Agile Manifesto existed, or embedded in contractual frameworks designed around sequential phases and formal gate reviews.

The question isn’t whether waterfall is used, because it clearly is; the question is whether it’s appropriate. In domains where a wrong answer in production is catastrophic and irreversible, where regulatory traceability is required, and where the difference between a requirement and its implementation can be a matter of life and safety, heavy upfront specification has genuine merit.

The mistake is generalising that experience to domains where those constraints don’t exist. The engineer who insists on a full-spec waterfall process for a three-month internal tool project is making the same category error as the startup that runs safety-critical avionics development on two-week sprints with no formal verification. Both are applying the wrong method to the wrong problem.

Context is everything, and the engineer who understands that is worth more than one who has simply picked a side.


Myth 10: “Waterfall is the opposite of agile”

Waterfall and agile are not opposites on a linear scale, and treating them as such makes it harder to choose between them intelligently, or to combine elements of both where that makes sense.

They make different bets about the nature of the work. Waterfall bets that requirements can be known and fully specified before work begins, and that the cost of upfront analysis is lower than the cost of mid-project change. Agile bets that requirements will change as understanding grows, and that continuous adaptation is worth paying for throughout the lifecycle.

Both bets are sometimes right. Hardware bring-up, one-way-door regulatory submissions, and large capital programs are problems where the waterfall assumptions hold reasonably well. Consumer products, internal tools, and platforms that need to respond to user feedback are problems where the agile assumptions are more realistic.

The failure isn’t in using waterfall; it’s in using it on problems that demand agility, or in using agile on problems that require rigor and traceability that waterfall, done well, can provide. Know your problem. Pick your method. Hold it loosely enough to change it when the evidence says you should.


The Actual Lesson

Waterfall isn’t evil, and it isn’t a mistake; it’s a tool with a specific set of assumptions baked in, assumptions that were reasonable for the contexts it was borrowed from but unreasonable for most of the contexts it was applied to.

When those assumptions hold, when requirements are stable, when change after execution is genuinely expensive, when formal accountability and traceability are required, waterfall can work, and in some domains it works well. When they don’t hold, which is most of the time in software, it fails in predictable ways that have been documented since the 1980s.

The lesson isn’t “don’t use waterfall.” The lesson is: understand what you’re using, understand the assumptions it makes, understand what happens when those assumptions break, and be honest enough to change methods when the evidence says your current one isn’t working. That’s not a waterfall lesson or an agile lesson; that’s an engineering lesson, and it applies regardless of what colour your sticky notes are.


Modelithe is built on more than 25 years of hard-won lessons across small, high-velocity teams and large, multi-country enterprises. We believe the best method is the one that fits your actual problem, not the one your consultant sold you last quarter, and we’ve built Modelithe to support you wherever you are on that spectrum.