SAFe (the Scaled Agile Framework) is one of the most widely deployed scaling frameworks in enterprise software development. It is also one of the most polarizing. After twenty-five years of watching organizations adopt, adapt, abandon, and misapply scaling frameworks across industries ranging from embedded systems to financial platforms, the same misunderstandings keep surfacing: in kick-off presentations, in consultant pitches, in the quiet resignation of engineers asked to implement something nobody in the room fully understands.
The problem is not that SAFe doesn’t work. The problem is that most organizations implement a myth, then blame the framework when reality pushes back.
But some problems are not myths: a few of them belong to the framework itself.

A note on versions: SAFe has evolved substantially across its major releases. SAFe 6.0 introduced a meaningful shift toward flow metrics, team topologies, and Business Agility as organizing concepts, and it differs from SAFe 5.x in ways that matter in practice. Much of the organizational folklore about SAFe, including some of the myths below, was formed during earlier versions and has survived into organizations now nominally running 6.0. Where the distinction is significant, it is worth noting.
SAFe is also a large, complex body of practice, and that size creates fertile ground for myths. Some emerge from oversimplification, others from motivated reasoning (either by those selling the transformation or by those resisting it), and a few are remnants of older versions of the framework that have simply never been updated in people’s mental models.
What follows is an attempt to name them honestly: not to defend SAFe, nor to dismiss it, but to describe it accurately enough that the decision to adopt it, adapt it, or replace it can be made on the basis of what it actually is.
Myth 1: SAFe is just Scrum at scale
This is perhaps the most widespread misconception, repeated most often by people who have heard of SAFe but never studied it carefully enough to distinguish its layers. Scrum is a lightweight framework for a single small team, typically ten or fewer people (the 2020 Scrum Guide’s wording), working toward a shared product goal. SAFe is a multi-layered system that addresses portfolio management, program-level coordination, value stream alignment, and enterprise architecture, none of which Scrum even attempts to touch.
SAFe draws from Scrum, but it also draws heavily from Lean product development, Kanban flow theory, XP engineering practices, and systems thinking at the organizational level. The Planning Interval (PI), the Agile Release Train, the PI Planning event, the portfolio Kanban board: none of these constructs exist anywhere in the Scrum Guide, and treating them as extensions of familiar Scrum concepts causes practitioners to misapply them from the very first day of adoption.
The practical consequence of this myth is that organizations scope their SAFe implementation as if it were simply a matter of connecting their existing Scrum teams with an extra layer of coordination. When the ART fails to deliver on its PI objectives, or when the portfolio remains a chaotic queue of competing demands, the temptation is to blame the teams rather than acknowledge that the organizational transformation required by SAFe goes substantially deeper than anything a Scrum adoption ever demanded. The two frameworks solve genuinely different problems, at genuinely different scales, and confusing them is a reliable way to ensure both are done poorly.
Myth 2: Adopting SAFe equals an agile transformation
This myth is expensive in the most literal sense, because organizations that believe it tend to declare victory at the point of installation rather than at the point of cultural change.
Adopting SAFe is not a transformation; it is the beginning of a framework installation.
Whether it leads to genuine agility depends entirely on whether the organization changes how it thinks about planning, about leadership authority, about failure as information rather than blame, and about the relationship between the people doing the work and the people funding it.
A company that installs SAFe without changing its culture has not become agile. It has added ceremony on top of the same organizational dysfunction it had before.
What is worth saying plainly here is that SAFe’s structural design makes it particularly susceptible to this failure mode. The framework accommodates existing organizational hierarchies rather than dismantling them; it introduces new roles (Release Train Engineer, Product Management, Business Owners) that sit alongside rather than replace existing management structures. The result, in many organizations, is not a leaner, more responsive way of working but a parallel bureaucracy that consumes coordination overhead without delivering the alignment it was meant to create. This is not only a misapplication problem; it is a structural tension within the framework itself.
The related failure mode is cargo cult adoption: organizations that run all the SAFe ceremonies correctly (PI Planning attended, Inspect and Adapt workshops scheduled, ART cadence maintained) while the underlying decisions about funding, staffing, priorities, and architecture continue to be made in the same rooms by the same people using the same criteria as before the transformation. The ceremonies become a reporting layer rather than a coordination mechanism, and the teams below them learn quickly that the plan is decorative. That outcome is not inevitable in SAFe, but the framework’s tolerance for existing hierarchy makes it considerably more likely than in frameworks that require structural simplification as a precondition.
Myth 3: SAFe is only relevant for very large enterprises
The name contains the word “scaled”, and SAFe’s documentation tends to illustrate its examples with hundreds of engineers distributed across multiple value streams and time zones. Medium-sized companies therefore dismiss it as irrelevant to their situation, and then proceed to build bespoke scaling solutions of their own that end up recreating the same coordination structures, without the benefit of the underlying thinking that went into the framework’s design.
Essential SAFe, the smallest configuration, is designed for a single Agile Release Train, nominally fifty to one hundred and twenty-five people. For organizations with several teams that need to coordinate deliverables across a shared release boundary, the core constructs of SAFe address real coordination problems that informal mechanisms often handle poorly as headcount grows past twenty or thirty people.
That said, the honest version of this is more qualified than SAFe’s own documentation tends to be. Essential SAFe can still feel heavy for organizations with only three to five tightly coupled teams, particularly if the teams are working on a single product with a shared codebase and a small, accessible leadership structure. In those contexts, the overhead of ART ceremonies, the formality of PI Planning, and the role definitions can create more coordination cost than they remove, and frameworks with a lighter coordination model (LeSS, or even well-run multi-team Scrum with a shared backlog) may produce better outcomes with less organizational friction. The framework’s fit depends not only on headcount but on the degree of coupling between teams and the maturity of the organization’s product management function, and the honest answer is that Essential SAFe is sometimes the right choice for a fifty-person organization and sometimes excessive for one of two hundred.
Myth 4: SAFe requires all teams to use Scrum
SAFe specifies that Agile teams should use either Scrum or Kanban, or a combination of the two. It does not mandate Scrum. The framework explicitly acknowledges that some teams work better with flow-based Kanban, particularly those dealing with high rates of incoming unplanned work, support queues, or maintenance work where fixed-length iterations create more friction than value by forcing artificial decomposition of work that does not naturally fit a two-week boundary.
The myth likely persists because SAFe’s most visible program-level artifacts (the Program Board, the PI objectives, the iteration-level commitments) are easier to explain and coordinate in a Scrum vocabulary, since most practitioners encountered SAFe through a Scrum background and tend to translate unfamiliar structures into familiar ones. But forcing Scrum on a team for whom it is genuinely the wrong tool is a reliable way to generate sustained resistance, inflate ceremony without value, and undermine the credibility of the SAFe adoption in the teams that would actually benefit from Scrum’s cadence and accountability structure.
What SAFe does require at the team level is a regular cadence, a visible backlog, and the ability to demonstrate working output at the end of each iteration, regardless of whether that iteration is called a Sprint or a flow period. Teams that meet those requirements using Kanban contribute to PI Planning, participate in System Demos, and integrate their work into the ART’s shared cadence in exactly the same way as Scrum teams. The framework’s value comes from the coordination structures above the team level, not from prescribing identical ceremonies at every team in the organization.
Myth 5: PI Planning is just a big, expensive meeting
The PI Planning event is typically a two-day, ART-wide planning session: all teams, all product managers, the System Architect, Business Owners, and usually senior leadership as well, aligning on a shared set of objectives for the next eight to twelve weeks (the Planning Interval) and making the dependencies between teams visible before work begins, rather than discovering them mid-PI when they become blockers. SAFe guidance notes that rich, real-time interaction is the goal, whether that is achieved through physical co-location (where practical) or through effective virtual facilitation when it is not. When it is run well, it is one of the highest-leverage events in the entire framework. Two days of synchronized planning can prevent months of misaligned execution.
The cost of PI Planning is not the two days. It is the organizational discipline required to make those two days count.
The remote and distributed reality deserves more than a passing acknowledgement here. Post-2020, a substantial proportion of ARTs operate across multiple time zones, sometimes spanning half a globe, and the logistical challenge of running an effective PI Planning event under those conditions is genuinely material, not merely a facilitation inconvenience. Asynchronous preparation, time zone compression for synchronous sessions, the loss of the informal corridor conversations that resolve half the dependency conflicts before they reach the program board: these are real costs that organizations running distributed ARTs pay with every PI, and the available tooling (digital program boards, video conferencing, shared backlogs) partially compensates but does not fully substitute for the density of communication that physical co-location enables. Organizations entering SAFe under the assumption that PI Planning works equally well in all configurations should pressure-test that assumption against their specific geographic distribution before committing.
Tooling dependency is the other material reality this myth tends to obscure. A small ART of fifty people can coordinate PI Planning on physical boards and shared spreadsheets, but as the ART grows toward one hundred and twenty-five people with forty or more dependencies on the program board, the practical management of that coordination without dedicated tooling (Jira Align, Planview Portfolios, Rally, and their equivalents) becomes unworkable. That tooling introduces its own failure modes: significant licensing cost, steep learning curves, the political dynamics of who controls the tool’s configuration, and the tendency of any sufficiently complex tracking system to become the thing the organization optimizes for rather than the delivery outcomes it was meant to support. The cost of PI Planning is not only the two days; it is also the infrastructure required to make those two days operationally viable at scale, and that infrastructure budget deserves to be in any honest SAFe business case.
Myth 6: SAFe eliminates the need for long-term planning
This myth tends to emerge specifically from practitioners who came from a heavy waterfall background and who embraced SAFe as permission to stop making commitments they could not keep. Since SAFe is “agile”, they reason, long-term commitments are no longer necessary or even permissible, and any request for a roadmap beyond the current PI is evidence of the old thinking the transformation was supposed to leave behind.
SAFe does not eliminate long-term planning; it restructures it. The portfolio level explicitly handles strategy, funding decisions, and investment horizons that span years rather than quarters. The concept of a Value Stream exists precisely to connect long-term business outcomes to the cadence of quarterly planning at the program level and the iteration-level delivery at the team level. Lean portfolio management, when understood correctly, is the replacement of fixed annual project budgets and detailed up-front requirement specifications with rolling, outcome-oriented investment theses that can be adjusted as the organization learns, while still providing the business with the financial predictability and strategic visibility it needs.
The honest caveat here is that SAFe’s economic prioritization mechanisms, principally Weighted Shortest Job First (WSJF), are frequently misunderstood in practice, and when they are understood, they are frequently gamed.
WSJF divides cost of delay by job duration (job size), and cost of delay itself aggregates relative estimates of user-business value, time criticality, and risk reduction or opportunity enablement, all across initiatives that are often highly uncertain. WSJF scoring is also politically charged, and subject to the same anchoring biases that afflict any prioritization exercise.
In many organizations, WSJF scoring becomes a post-hoc rationalization of decisions already made by seniority rather than a genuine analytical tool, and the framework provides limited structural protection against that outcome. The gap between how Lean portfolio management is described in SAFe training and how it functions in organizations where the product management function is immature or politically fractured is one of the most consistent disappointments practitioners report from real implementations.
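For all the ways the scoring is gamed, the WSJF arithmetic itself is simple. A minimal sketch of the standard calculation (WSJF = cost of delay / job size, with cost of delay summing user-business value, time criticality, and risk reduction or opportunity enablement); the initiative names and scores below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Initiative:
    name: str
    business_value: int    # relative scale, often modified Fibonacci
    time_criticality: int
    risk_opportunity: int  # risk reduction / opportunity enablement
    job_size: int          # duration proxy; must be > 0

    @property
    def cost_of_delay(self) -> int:
        return self.business_value + self.time_criticality + self.risk_opportunity

    @property
    def wsjf(self) -> float:
        return self.cost_of_delay / self.job_size


initiatives = [
    Initiative("Compliance deadline", 5, 13, 3, 3),
    Initiative("Platform migration", 8, 3, 8, 13),
    Initiative("Checkout redesign", 13, 5, 2, 8),
]

# Highest WSJF first: small, urgent jobs outrank large, valuable ones.
for item in sorted(initiatives, key=lambda i: i.wsjf, reverse=True):
    print(f"{item.name}: WSJF = {item.wsjf:.2f}")
```

The division by job size is the point: a small, urgent job outranks a large, valuable one, which is exactly the economic intuition WSJF is meant to encode, and exactly the ranking that political scoring tends to override.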
Myth 7: More SAFe means more agility
SAFe is a large framework by design, because it needs to address the coordination problems of organizations large enough that a single team cannot solve them and complex enough that informal communication channels have broken down.
But size does not mean completeness, and implementing every role, every event, every artifact in the framework regardless of whether the organization has the dysfunction those elements were designed to address is the fastest path back to the same bureaucratic overhead that agile was supposed to replace.
The Swedish concept of lagom applies here as well as anywhere: not too little, not too much, but exactly the right amount for the specific organization at its specific stage of growth. SAFe’s own guidance recommends starting with Essential SAFe and adding layers when specific problems emerge, and that guidance is sound.
But it is worth being clear about what the framework actually is: SAFe is a defined, branded framework with named roles, named events, and a structured configuration model. It is not a pick-and-mix menu from which organizations freely select only the elements they find convenient. Teams that selectively implement only the parts of SAFe they prefer while omitting the coordination mechanisms and portfolio management elements they find difficult are not running a lean version of SAFe; they are running something else and calling it SAFe, which tends to produce the worst of both worlds.
The version question is also relevant here. SAFe 6.0 introduced meaningful changes: a greater emphasis on flow metrics (rather than velocity), an explicit integration of team topologies, and a sharper focus on Business Agility as the organizing concept at the portfolio level. Organizations currently running what they describe as SAFe but trained and configured on SAFe 5.x patterns are operating from a different set of assumptions than the current framework documentation describes. The gap between installed version and current guidance is often invisible to the teams living inside the ART, but it shapes how they interpret the framework’s intent and whether their Inspect and Adapt cycles are asking the right questions. A practical implication is that inherited SAFe implementations should be periodically audited against the current framework version rather than assumed to reflect it.
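The shift from velocity to flow metrics is concrete enough to illustrate. A minimal sketch of two SAFe 6.0 flow metrics, flow time (how long a work item takes from start to done) and flow velocity (items completed per period), computed from work-item dates; the field names and data are illustrative, not a SAFe-mandated schema:

```python
from datetime import date
from statistics import median

# Hypothetical work items completed during one month.
items = [
    {"id": "F-101", "started": date(2024, 3, 1), "done": date(2024, 3, 12)},
    {"id": "F-102", "started": date(2024, 3, 4), "done": date(2024, 3, 8)},
    {"id": "F-103", "started": date(2024, 3, 5), "done": date(2024, 3, 22)},
]

# Flow time: elapsed days from start to completion, per item.
flow_times = [(i["done"] - i["started"]).days for i in items]

print(f"median flow time: {median(flow_times)} days")
print(f"flow velocity (March): {len(items)} items completed")
```

Unlike story-point velocity, neither number depends on estimation, which is a large part of why SAFe 6.0 prefers them: they measure the system, not the team’s estimating habits.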
The deeper structural critique is this: SAFe’s complexity is not solely an implementation risk that careful adoption prevents. It is also an inherent tradeoff that some organizations will never justify economically. The coordination overhead of maintaining an ART, running quarterly PI Planning, sustaining the role structure, and operating the portfolio Kanban is a fixed cost that the framework’s benefits must offset. For organizations whose coordination problem is not large enough, not distributed enough, or not chronic enough to warrant that overhead, simpler alternatives will consistently outperform SAFe regardless of how carefully it is implemented. That is not a failure of implementation; it is a mismatch between the problem and the tool.
Myth 8: SAFe handles engineering quality by default
SAFe has a Built-In Quality principle and includes practices drawn directly from Extreme Programming: test-driven development, continuous integration, pair programming, collective code ownership, and refactoring. It is one of the few scaling frameworks that explicitly addresses the technical hygiene without which any scaling effort will eventually drown in accumulated defects, unresolvable integration failures, and an architecture too rigid to support the rate of change the business is demanding.
The myth is that adopting SAFe transfers these practices to the teams automatically. It does not. The XP practices are recommended, not enforced, and in the understandable rush to establish the organizational structure, stand up the ART, and run the first PI Planning, the engineering practices are frequently the last thing addressed; and sometimes they are never addressed at all, because they are technically demanding, require sustained coaching investment, and their absence is invisible to everyone except the engineers accumulating the debt.
It is worth being precise about what this means in practice. An Agile Release Train that cannot continuously integrate and cannot demonstrate working software at the System Demo is violating the Built-In Quality principle in a way that will eventually undermine the ART’s ability to deliver on its PI objectives. Whether that constitutes “not running SAFe” is a normative question rather than a descriptive one; many organizations are unambiguously running SAFe ceremonies while systematically violating Built-In Quality, and calling that something else does not help them fix it. The more useful observation is that Built-In Quality is a precondition for SAFe working at the claimed level of throughput and predictability, and organizations that treat it as an optional engineering preference rather than a structural dependency are making a bet that accumulated technical debt will not eventually collapse their ART cadence; a bet that compound interest tends to collect on.
There is also a genuine tension between SAFe’s engineering quality guidance and the continuous delivery models that many modern software organizations operate. SAFe’s cadence-based model (iterate, integrate, demo at the end of each PI) was designed for a world where software releases are bounded events; it fits less naturally for organizations practicing continuous deployment, trunk-based development, and feature flagging, where the concept of a bounded iteration sits awkwardly alongside a deployment pipeline that may release dozens of times per day. SAFe 6.0 acknowledges this tension and provides some guidance on continuous flow, but the fundamental cadence structure of the ART remains a source of friction for teams at the high end of continuous delivery maturity, and that friction is a framework-level tension rather than an implementation error.
Myth 9: SAFe can be implemented in a few months
Vendors and consultants with implementation timelines to sell have contributed substantially to this myth, and the SAFe certification ecosystem has, arguably, reinforced it. A Scaled Agile Partner or consultant can deliver a Leading SAFe training program in two days and certify participants as SAFe Agilists (SA) at the end of it. A Release Train Engineer certification (RTE) takes somewhat longer. The existence of a certification pathway with defined timelines creates the impression that the framework can be absorbed and operationalized in the same timeframe that a certification can be obtained, and organizations making their first significant investment in a SAFe implementation tend to anchor their expectations to the training calendar rather than the transformation reality.
The certification economy around SAFe deserves a more explicit critique than it usually receives. The SP, SPC, and RTE certification structure generates significant revenue for Scaled Agile Inc. and its network of training partners, and the incentives embedded in that structure do not always align with the framework’s stated goals. Certifications are time-bounded, renewable at cost, and tied to continuing education requirements that run through the same partner network that sold the original certification. The result is an ecosystem in which the people best positioned to evaluate whether a SAFe implementation is working (the SPCs and RTEs inside the organization) have a professional and financial stake in the continuation of the implementation, regardless of whether it is producing the outcomes it was meant to produce. This is not a conspiracy; it is a straightforward misalignment of incentives, and organizations should factor it into how they commission and evaluate SAFe transformations.
A genuine SAFe implementation involves retraining leadership on Lean-Agile thinking in a way that actually changes how they make decisions; restructuring how teams are formed, staffed, and funded; establishing new cadences at team and program level; redesigning how the portfolio is managed and prioritized; and changing how the organization responds to the information those cadences generate when what they reveal is inconvenient. It also involves the real, non-trivial costs of that work: consulting fees for the initial launch and ongoing coaching, training investment across the ART and into the portfolio layer, tooling costs for the platforms required to make the coordination visible at scale, and the productivity dip that any significant organizational change produces in its first several months. None of that appears in the training calendar, and organizations that budget for certification without budgeting for the full cost of transformation consistently discover the gap at the worst possible moment.
The organizations that report genuine, durable results from SAFe are those that treat it as a multi-year evolution rather than a six-month installation. The first PI is not a success condition; it is the first data point, and the most valuable thing it produces is usually a clear picture of all the organizational assumptions that need to change before the second PI can be more effective than the first. Acting on that feedback, consistently and without defaulting to the organizational habits and political dynamics that pre-dated the transformation, is the real work of a SAFe adoption, and it is work that requires sustained leadership commitment over a timescale that most transformation roadmaps are not honest about.
Myth 10: SAFe is the only serious option for scaling agile
SAFe is the most widely deployed scaling framework, and that market position reflects genuine adoption across industries where the coordination problems it addresses are common. But market leadership is not the same as universal suitability, and treating SAFe as the only serious option leads organizations either to apply a framework whose overhead and prescription are disproportionate to their actual coordination problem, or to dismiss the entire category of scaling frameworks because a SAFe implementation that was never right for them failed predictably.
The landscape of serious alternatives is broader than many practitioners realize. LeSS (Large-Scale Scrum) works well for organizations willing to invest in genuine organizational simplification: fewer roles, fewer handoffs, feature teams with direct and persistent customer contact, and a willingness to remove the managerial layers that SAFe accommodates rather than requiring their elimination. Scrum@Scale addresses coordination at the Scrum-of-Scrums level without prescribing the portfolio and solution layers, which makes it a better fit for organizations whose primary problem is team-level coordination rather than enterprise portfolio management. Disciplined Agile (DA), acquired by PMI in 2019, offers a toolkit-based approach that is explicitly context-sensitive; rather than prescribing a fixed set of roles and events, it provides decision-support guidance for choosing practices based on the organization’s specific context, scale, and constraints. DA’s approach is less prescriptive than SAFe’s, which makes it harder to implement consistently but potentially more appropriate for organizations whose contexts are genuinely too varied for a single framework to cover well.
Spotify’s model (more honestly described as an aspiration than a repeatable blueprint, and one that Spotify itself has evolved significantly since the model was first documented) demonstrated that autonomous squads with lightweight coordination mechanisms and a strong engineering culture could scale without a heavyweight framework, though the number of organizations that have successfully replicated that model outside of the specific cultural and organizational context in which it emerged is considerably smaller than the number that have tried.
Choosing the wrong tool or the wrong process at the wrong time in the organization’s growth journey can do more harm than good.
None of the existing scaling frameworks (not even SAFe) fully solve the path from a small founding team to a world-wide enterprise without tradeoffs in overhead, prescription, or the degree to which they preserve the speed and autonomy of the small team while managing the coordination demands of the large one. Every framework was a reasonable answer to the specific dysfunction of a specific type of organization at a specific point in time. SAFe was a reasonable answer to the coordination and portfolio management problems of large, traditionally structured software organizations in the 2010s. Whether it remains the most reasonable answer for any given organization in the mid-2020s depends on the organization’s specific dysfunction, not on SAFe’s market share.
Scaling without losing what made you effective
SAFe is a serious framework built on serious thinking, and the myths above do real, measurable damage (either by setting unrealistic expectations that produce disillusionment when the transformation proves harder than the training material suggested, or by dismissing SAFe entirely in organizations that could genuinely benefit from its coordination structures and are instead building ad hoc alternatives that recreate the same problems with less rigour). Neither outcome serves the engineers and teams doing the actual work.
It is also worth being honest about something else: not all of the problems described above are myths about SAFe. Some of them are real, inherent tensions within the framework itself (the susceptibility to hierarchy entrenchment, the certification ecosystem’s misaligned incentives, the complexity overhead that smaller organizations may never justify, the friction with continuous delivery models) and acknowledging them is not a dismissal of SAFe but a precondition for using it with clear eyes.
This is one of the core problems that Modelithe was designed to address: how to grow from a single agile team to a multi-team, multi-country organization without abandoning the principles that made the small team effective in the first place.
It does this while acknowledging that parts of predictive product management and enterprise budgeting must operate on a timescale well exceeding that of a Sprint – even that of a Planning Interval – and that the tools needed for those decisions are different in kind from the tools that help a three-person team ship a feature.
SAFe, when understood correctly and implemented with discipline, honest self-assessment, and the willingness to measure outcomes rather than ceremony compliance, is one of the more complete answers the industry has produced to the scaling problem. But it is only an answer for the organizations that have the specific coordination dysfunctions it was designed to address, the organizational maturity to sustain its cadences, the leadership commitment to actually change in response to what the framework’s inspect-and-adapt cycles reveal, and the budget to fund the full cost of transformation rather than only the cost of training.
Take a moment to think about your own organization. Which of these myths are quietly shaping your current implementation, your resistance to it, or the way leadership talks about the transformation in all-hands meetings? And how many of the genuine framework tensions are being honestly named in those conversations, rather than quietly attributed to the teams for not implementing SAFe correctly?

