IT Directors are often sold a binary. Keep the legacy platform and accept a growing maintenance tax, or commit to a multi-year rewrite and accept the cancellation risk. Both are losing bets.
A big-bang rewrite concentrates every integration, data-migration, and user-training risk into one cutover event. Standing still, meanwhile, means ceding budget share to keep Oracle Forms, COBOL, or aging Java stacks alive while retiring specialists take institutional knowledge with them.
There is a third path: phased legacy system migration. The pattern replaces the whole-system cutover with a series of small, reversible module releases, each validated in parallel against the legacy system before it takes traffic, so the risk a big-bang rewrite concentrates into one event is distributed across the program instead.
This article walks through the enterprise software modernization methodology Pretius uses on engagements like the Sweco Obsurv rebuild: what phased modernization means, how to sequence modules, how to run a parallel-run program, and what a 24-week timeline looks like from Audit through Adoption.
Phased legacy modernization is an architectural strategy that replaces a legacy system one module at a time, with each module’s cutover validated in production before the next begins. It rests on two foundational patterns from the continuous-delivery community: the Strangler Fig pattern and parallel run.
Definition — Strangler Fig Application. An architectural approach that incrementally replaces a legacy system by routing traffic through a facade that gradually shifts requests from legacy to new components. Introduced by Martin Fowler in 2004 (martinfowler.com/bliki/StranglerFigApplication.html).
Definition — Parallel Run. Legacy and new systems process the same inputs simultaneously. Outputs are compared until equivalence is proven, at which point traffic is promoted to the new system.
Fowler’s original framing remains the clearest description of the pattern:
“An alternative route is to gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled.” — Martin Fowler
Reference implementations are documented by the Azure Architecture Center, AWS Prescriptive Guidance, and Thoughtworks. The facade is usually an API gateway, reverse proxy, or feature-flag layer. Over time, each route points at the new implementation and the old code path is removed.
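As a minimal sketch of the facade mechanics, a feature-flag route table can shift one endpoint at a time. All handler and route names below are illustrative, not taken from any real gateway configuration:

```python
# Strangler Fig facade sketch: a route table decides, per endpoint,
# whether a request goes to the legacy system or the new one.
# Handler names are hypothetical placeholders.

def handle_orders_legacy(request):
    return f"legacy:orders:{request}"

def handle_orders_new(request):
    return f"new:orders:{request}"

# Feature-flag layer: flipping one entry migrates one route.
ROUTES = {
    "/orders": {
        "legacy": handle_orders_legacy,
        "new": handle_orders_new,
        "use_new": False,
    },
}

def facade(path, request):
    route = ROUTES[path]
    handler = route["new"] if route["use_new"] else route["legacy"]
    return handler(request)
```

Flipping `use_new` migrates one route; deleting the legacy handler retires the old code path for that route, which is the "strangling" step.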
When a module cannot be physically separated from the monolith — typically because of a shared database or deeply coupled functions — Branch by Abstraction is the complement to Strangler Fig. Paul Hammant popularized the term and Jez Humble formalized it in Continuous Delivery. A stable interface is introduced between callers and the legacy implementation, the new implementation is built behind the same interface, and callers are switched once parity is proven.
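A minimal Branch by Abstraction sketch, with a hypothetical `TaxCalculator` interface standing in for the deeply coupled legacy function:

```python
# Branch by Abstraction sketch: callers depend on a stable interface;
# legacy and new implementations live behind it, and the switch happens
# in one place once parity is proven. All names are illustrative.
from abc import ABC, abstractmethod

class TaxCalculator(ABC):
    """The stable abstraction callers are migrated onto first."""
    @abstractmethod
    def calculate(self, amount: float) -> float: ...

class LegacyTaxCalculator(TaxCalculator):
    def calculate(self, amount):
        return round(amount * 0.23, 2)  # existing behavior, unchanged

class NewTaxCalculator(TaxCalculator):
    def calculate(self, amount):
        return round(amount * 0.23, 2)  # must match legacy before the switch

def make_calculator(use_new: bool) -> TaxCalculator:
    # The single switch point: callers never change when this flips.
    return NewTaxCalculator() if use_new else LegacyTaxCalculator()
```

The key property is that the monolith keeps compiling and shipping throughout: callers are moved onto the abstraction first, and the implementation swap is a one-line change.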
A modern parallel-run program combines three techniques: dual execution, in which legacy and new systems process the same inputs; automated output comparison, which records every divergence; and gradual traffic promotion once equivalence is proven. GitHub’s open-source Scientist library and Stripe’s public writing on “shadow mode” migrations show the same mechanics applied at internet scale. Parallel run reduces rollback cost from “restore from backup” to “stop routing to the new system.”
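The mechanics fit in a few lines. This is a simplified stand-in for what a Scientist-style experiment does, not the library's actual API, and the function names are hypothetical:

```python
# Scientist-style experiment sketch: run legacy (control) and new
# (candidate) on the same input, always return the legacy result, and
# record any mismatch for later analysis.
mismatches = []

def experiment(name, control, candidate, *args):
    control_result = control(*args)
    try:
        candidate_result = candidate(*args)
        if candidate_result != control_result:
            mismatches.append((name, args, control_result, candidate_result))
    except Exception as exc:
        # A crashing candidate is recorded, never surfaced to the caller.
        mismatches.append((name, args, control_result, repr(exc)))
    return control_result  # legacy stays authoritative
```

Because the legacy result is always returned, the new code runs against real production inputs with zero user-facing risk until the mismatch log stays empty.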
Big-bang vs. phased migration: at a glance
| Dimension | Big-bang rewrite | Phased (Strangler Fig) |
| --- | --- | --- |
| Cutover risk | Single large event | Distributed across sprints |
| Rollback blast radius | Whole system | One module |
| Time to first business value | 12-24+ months | 4-8 weeks |
| Data validation | End-of-project | Continuous (parallel run) |
| Typical failure mode | Cancelled or shipped broken | Module-level reverts; program continues |
| Team learning curve | Concentrated; late | Distributed; early and ongoing |
Joel Spolsky argued in 2000 that “the single worst strategic mistake that any software company can make [is to] rewrite the code from scratch” (joelonsoftware.com). Per the Standish Group CHAOS research, large IT projects succeed less than 10% of the time, a figure whose methodology has academic critics but one that still anchors how boards think about rewrite risk.
Pretius structures phased modernization engagements into four sequential phases across a typical 24-week window. The table below is the anchor; the subsections below expand each phase.
| Phase | Weeks | Key activities | Deliverable |
| --- | --- | --- | --- |
| Audit & Discovery | 1-2 | Dependency map; stakeholder interviews; risk × impact matrix | Modernization roadmap |
| Technology Toolkit | 3-6 | Stack shortlist; PoC on a representative module; target architecture | Approved tech stack and architecture |
| Core Implementation | 7-18 | Module sprints; parallel run; continuous data validation | Incrementally modernized modules |
| Adoption & Scale | 19-24 | Training; KPI baselining; operational handover | Operationalized system |
The goal of the Audit phase is to produce a defensible modernization roadmap rather than a polished architecture diagram. Activities include static code analysis, database schema inventory, API mapping, and stakeholder interviews across business, operations, security, and compliance.
Dependency mapping surfaces the hidden coupling that kills naive module separation. A per-module risk-and-impact matrix plots each candidate on two axes: technical risk (complexity, unknown integrations, state migration) and business impact (revenue contribution, user count, regulatory criticality).
The deliverable is a prioritized modernization roadmap, not a green-lit implementation plan. It names which modules go in which sprint, which stay on legacy, and which are candidates for retirement.
In the Toolkit phase, the modernization team narrows the stack. The shortlist Pretius runs most often includes Oracle APEX, Mendix (the low-code platform owned by Siemens), custom Java with Angular, and optional Oracle Forms bridging for modules that need to coexist with legacy during cutover.
Selection criteria are pragmatic: functional fit, total cost of ownership, existing team skills, vendor-lock risk, and roadmap alignment. The phase ends with a proof-of-concept on one representative module and an approved target-state architecture.
| Option | Best fit when… | Watch out for… |
| --- | --- | --- |
| Oracle APEX | Oracle DB already deployed; forms-heavy enterprise apps | Skills availability outside Oracle shops |
| Mendix (low-code, Siemens) | Speed-to-market matters; non-differentiated business logic | Long-term vendor lock-in; runtime licensing model |
| Custom Java + Angular | Competitive differentiation; complex UX requirements | Longer build time; higher headcount |
| Oracle Forms bridging | Temporary coexistence during migration | Technical debt if used beyond transition window |
Core Implementation is the longest phase and the one where phased modernization earns its keep. Sprints run on a two-to-three-week cadence, and each sprint delivers at least one module cutover.
Every module moves through the same incremental ladder: read-only shadow mode, shadow writes, canary at 1-5%, full cutover, then legacy retired for that module. Reconciliation jobs run continuously. The regression test suite grows sprint-over-sprint because each retired module adds its invariants to the shared guardrail.
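The canary rung of the ladder is usually implemented as deterministic hash-bucket routing, so a given user sees consistent behavior as the percentage ramps. A sketch, with illustrative function names:

```python
# Deterministic canary routing sketch: a stable hash of the user id
# picks a bucket 0-99; buckets below the canary percentage route to
# the new module. The same user always lands in the same bucket.
import hashlib

# The per-module cutover ladder described in the text.
LADDER = ["shadow_reads", "shadow_writes", "canary",
          "full_cutover", "legacy_retired"]

def routes_to_new(user_id: str, canary_percent: int) -> bool:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < canary_percent

def traffic_split(user_ids, canary_percent):
    new = sum(routes_to_new(u, canary_percent) for u in user_ids)
    return new, len(user_ids) - new
```

Raising `canary_percent` from 1 to 5 to 100 walks the module up the ladder without any user flapping between implementations mid-session.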
The program maintains a short, written rollback-trigger list: data divergence detected in reconciliation, performance regression below SLA, integration failure with a downstream system, regulatory objection, or user-workflow regression. Any trigger returns traffic to legacy for that module — not for the program.
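The written trigger list can be codified as an automated check. The signal names and thresholds below are illustrative assumptions, not a standard schema:

```python
# Rollback-trigger sketch: each named trigger is a predicate over a
# signals dict; any single firing trigger returns the module to legacy.
# Signal names and thresholds are hypothetical.
TRIGGERS = {
    "data_divergence":        lambda s: s["drifted_records"] > 0,
    "performance_regression": lambda s: s["p95_latency_ms"] > s["sla_latency_ms"],
    "integration_failure":    lambda s: s["downstream_errors"] > 0,
    "regulatory_objection":   lambda s: s["regulatory_hold"],
    "workflow_regression":    lambda s: s["workflow_defects"] > 0,
}

def rollback_decision(signals):
    fired = [name for name, check in TRIGGERS.items() if check(signals)]
    return {"rollback_module": bool(fired), "triggers": fired}
```

Making the list executable keeps the rollback decision mechanical during an incident, when nobody should be debating thresholds.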
Adoption closes the engagement with the operational handover that big-bang rewrites almost always skip. End-user training is delivered module by module rather than all at once. KPIs are baselined against the legacy SLAs so that regressions are visible in week one, not month six.
Operational runbooks are handed to the client’s operations team. The phase ends with a post-migration review and a forward-looking roadmap for any modules still on legacy.
Sequencing is the first place phased programs go wrong. The right order is rarely “easiest first” or “biggest first.” It is value-weighted and risk-aware.
A simple 2×2 maps each module by business value and technical risk and produces clear sequencing guidance.
| Business value | Technical risk | Sequencing guidance | Typical example |
| --- | --- | --- | --- |
| High | Low | Migrate first — build stakeholder confidence with quick wins | User-facing reporting module with a stable data model |
| High | High | Migrate second — invest heavily in parallel run and canary | Core transactional module with complex integrations |
| Low | Low | Migrate opportunistically alongside higher-priority work | Internal admin screen |
| Low | High | Consider retirement or indefinite legacy hold | Rarely used integration with brittle downstream |
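Under the illustrative assumption of 0-10 scores and a midpoint threshold of 5, the 2×2 can be made executable; the module names and scores below are made up for the example:

```python
# Value-by-risk 2x2 sketch: score each module, assign a quadrant, and
# emit a sequencing order. Thresholds and module data are illustrative.
def quadrant(value: int, risk: int, threshold: int = 5):
    hi_value, hi_risk = value >= threshold, risk >= threshold
    if hi_value and not hi_risk:
        return 1, "migrate first (quick win)"
    if hi_value and hi_risk:
        return 2, "migrate second (invest in parallel run)"
    if not hi_value and not hi_risk:
        return 3, "migrate opportunistically"
    return 4, "consider retirement or legacy hold"

def sequence(modules):
    # modules: list of (name, business_value 0-10, technical_risk 0-10)
    scored = [(quadrant(v, r)[0], name) for name, v, r in modules]
    return [name for _, name in sorted(scored)]
```

The output order matches the table: quick wins first, high-stakes modules second, low-priority work opportunistically, retirement candidates last.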
For backlogs that need a numerical score rather than a quadrant, teams running SAFe use WSJF. WSJF prioritizes modules by dividing cost of delay by job duration.
Cost of Delay = User-Business Value + Time Criticality + Risk Reduction/Opportunity Enablement
WSJF = Cost of Delay / Job Duration
Note that “Risk Reduction/Opportunity Enablement” is a single scored SAFe component, not a division. Source: Scaled Agile Framework (framework.scaledagile.com/wsjf/). Higher WSJF scores move up the queue.
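A minimal WSJF calculator, assuming the usual relative scoring for each component; the backlog data is illustrative:

```python
# WSJF sketch per the SAFe formula: cost of delay is the sum of three
# relative scores, and WSJF = cost of delay / job duration.
def wsjf(value, time_criticality, rr_oe, duration):
    # rr_oe is the single "Risk Reduction/Opportunity Enablement" score.
    cost_of_delay = value + time_criticality + rr_oe
    return cost_of_delay / duration

def rank(backlog):
    # backlog: {module_name: (value, time_criticality, rr_oe, duration)}
    return sorted(backlog, key=lambda m: wsjf(*backlog[m]), reverse=True)
```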
MoSCoW (Must / Should / Could / Won’t) is most useful during the Audit phase for scope negotiation, not for ongoing sequencing. The Gartner TIME model — Tolerate, Invest, Migrate, Eliminate — is the portfolio lens for application rationalization and helps separate migration candidates from retirement candidates before sequencing even starts.
Data integrity is what IT Directors actually lose sleep over, and it is why parallel run is the single most important risk-reduction mechanism in phased migration. Data integrity incidents are the highest-probability trigger for migration rollback.
The validation stack layers techniques in order of cost and coverage. Row-count checksums catch gross data loss at near-zero cost. Hash-based field checksums catch field-level drift. Sampled full-record diffs test semantic equivalence on a representative slice. A 100% record diff catches everything and is the gold standard for financial and regulated workloads. Business-rule invariants catch violations that the legacy system silently tolerated for years. Replay testing, in which production traffic is replayed against the new system in staging, proves behavioral parity under real load.
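The two cheapest layers, row counts and field hashes, can be sketched as a reconciliation job. Record shapes and field names here are illustrative:

```python
# Reconciliation sketch: compare row counts (gross data loss) and
# per-record field hashes (field-level drift) between legacy and new
# datasets, keyed on a shared primary key.
import hashlib

def record_hash(record: dict) -> str:
    # Canonical form: sorted field names so key order never causes drift.
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(legacy_rows, new_rows, key="id"):
    report = {"row_count_delta": len(new_rows) - len(legacy_rows),
              "drifted": []}
    new_by_key = {r[key]: r for r in new_rows}
    for row in legacy_rows:
        other = new_by_key.get(row[key])
        if other is None or record_hash(other) != record_hash(row):
            report["drifted"].append(row[key])
    return report
```

A non-empty `drifted` list is exactly the data-divergence rollback trigger: routing returns to legacy for that module while the drift is investigated.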
A mature program runs multiple layers in parallel. The rollback-trigger list stays short: data divergence, performance regression, integration failure with a downstream system, regulatory objection, and user-workflow breakage. Any one of them returns traffic to the legacy module while the team investigates.
The DORA DevOps metrics — from DevOps Research and Assessment, now part of Google Cloud, not to be confused with the EU Digital Operational Resilience Act, which is a financial-services regulation — are the SLOs of the migration program itself.
| Metric | Definition | Migration-program target |
| --- | --- | --- |
| Deployment frequency | How often production deployments occur | Weekly, trending to daily as the pipeline matures |
| Lead time for changes | Commit to production | Under one week on modernized modules |
| Change failure rate | Deployments causing a production incident | Under 15% (DORA elite band: 0-15%) |
| Mean time to recover | Time to restore service after an incident | Under one hour for modernized modules |
Source: dora.dev/guides/dora-metrics/.
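Two of these metrics can be computed directly from a deployment log; the log shape below is an assumption for illustration, not a DORA-defined schema:

```python
# DORA metric sketch: change failure rate and mean time to recover,
# computed from a simple deployment log. The record shape is hypothetical.
def change_failure_rate(deployments):
    # deployments: list of {"failed": bool, "recovery_minutes": int | None}
    failed = sum(1 for d in deployments if d["failed"])
    return failed / len(deployments)

def mean_time_to_recover(deployments):
    times = [d["recovery_minutes"] for d in deployments if d["failed"]]
    return sum(times) / len(times) if times else 0.0
```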
Sweco AB is a European engineering, architecture, and consultancy group of approximately 22,000 people across urban development, water, energy, transport, and industry verticals. One of Sweco’s products, Obsurv, is a public-space and municipal asset management platform used by cities to manage streets, signage, sewers, and green spaces.
Obsurv had become costly to maintain. Each municipal deployment required heavy custom development, and the platform’s architecture limited Sweco’s ability to grow the product commercially. Sweco engaged Pretius to modernize it.
The engagement was a module-by-module rewrite on Oracle APEX, explicitly not a big-bang. Advanced GIS functionality was integrated into the core, and a new Ribic module for sewer manhole inspection was built and shipped as a SaaS product — a new recurring-revenue line for Sweco on the modernized platform.
Publicly stated outcomes on the Pretius case study page: the platform is easier and more cost-effective to maintain, customer onboarding is faster, deployments and updates that once took months now take minutes, and Obsurv is in production with more than 120 cities. The partnership is ongoing.
Read the full Sweco case study here.
Phased legacy modernization is not a silver bullet. It is a disciplined way to reduce rollback risk by turning a single cutover into a sequence of reversible module releases, anchored to a modernization roadmap, a technology toolkit, and a parallel-run program that keeps legacy authoritative until the new system earns its seat.
If you are evaluating enterprise software modernization methodology for an Oracle-heavy or forms-heavy portfolio, the Sweco Obsurv rebuild shows what this approach looks like in production: a module-by-module rewrite on Oracle APEX, a new SaaS product line, and a platform now serving more than 120 cities.
How long does a phased legacy migration take?
A focused program typically runs three to six months, anchored to the 24-week Pretius phased-migration timeline: two weeks of Audit, four weeks of Toolkit selection, twelve weeks of Core Implementation with parallel run, and six weeks of Adoption. Larger portfolios often require several sequential engagements, each scoped to a coherent module cluster.
Which technology stack fits which situation?
Oracle APEX is the strongest fit for Oracle-database-heavy shops and forms-style enterprise apps. Mendix suits speed-to-market programs where the business logic is non-differentiated and low-code delivers faster. Custom Java with Angular is the right answer when the user experience or business logic is a competitive asset.
How should data integrity be validated during migration?
Layer the techniques. Start with row-count checksums and field-level hashes for cheap drift detection, add sampled full-record diffs for semantic equivalence, and codify business-rule invariants to catch violations the legacy system silently tolerated. For high-stakes workloads, add replay testing against production traffic.
What makes rollback cheap in a phased migration?
Parallel run keeps the legacy system authoritative until the new module is proven in production, so rollback is not “restore from backup.” It is “stop routing traffic to the new module” at the facade layer. That is the defining risk-reduction property of phased migration.
How should modules be prioritized?
Start with a business-value-by-technical-risk matrix to identify quick wins and high-leverage risks. Use WSJF for numerical backlog ordering in SAFe programs. Use MoSCoW during the Audit phase to negotiate scope with business stakeholders.
What team structure supports a phased migration?
A cross-functional squad per module stream, a platform or architecture function owning the facade and the shared data-validation tooling, and a product or stakeholder owner anchoring business priorities. The facade layer is a shared asset — it should not be owned by any single module squad.
How should legacy vendor support be handled during the transition?
Keep legacy support live through the full parallel-run window. Retire modules from the support scope only after cutover is proven and stable. Negotiate stepped-down support pricing with the legacy vendor as modules retire so that run-rate savings start accruing during the program, not after it ends.