Migrating Legacy .NET to Modern Architecture: What Actually Works
I have migrated more legacy .NET applications than I can count at this point. SOAP services that nobody dared touch. WebForms apps where the business logic lived in code-behind files. VB6 applications that predated half the team. Each migration was different in the details, but the patterns that actually work have been remarkably consistent.
If you are staring down a legacy .NET codebase and wondering where to start, this is what I have learned.
The Strangler Fig Is Not Optional
Every successful migration I have been part of used some form of the Strangler Fig pattern. You do not rewrite the whole thing at once. You identify a module, build the modern replacement alongside the legacy version, route traffic to the new one when it is ready, and decommission the old one. Repeat.
On one project, we migrated a .NET Framework pricing engine to .NET Core this way. The pricing rules were the lifeblood of the business, so we could not afford downtime or regressions. We migrated one module at a time, ran both versions in parallel, compared outputs, and only switched over when the numbers matched exactly.
// The bridge pattern in practice — route to new or legacy based on readiness
app.MapGet("/api/fares/{route}", async (string route, IFeatureFlags flags,
    NewFareService newFareService, LegacyFareService legacyFareService) =>
{
    if (await flags.IsEnabled("new-fare-engine"))
        return await newFareService.Calculate(route);
    return await legacyFareService.Calculate(route);
});

This is not fancy architecture. It is disciplined engineering. And it works every single time.
The Anti-Corruption Layer Changes Everything
Here is something I wish I had understood earlier in my career: you do not have to make your new code understand the legacy system's data model. That is what anti-corruption layers are for.
On a government modernization project, we were updating applications with backends that had data models reflecting decades of accumulated business rules. If we had tried to make the new frontends speak the legacy language directly, we would have imported all the technical debt into the new system.
Instead, we built an anti-corruption layer using serverless functions and message queues. The new APIs spoke clean, modern contracts. The functions translated between the new world and the old world behind the scenes. The new frontend developers never had to know or care about the legacy data model.
// Clean API contract for the new world
public record CitizenProfile(string Id, string FullName, string Email, Address Address);
// The anti-corruption function handles the messy translation
[Function("SyncCitizenProfile")]
public async Task Run([QueueTrigger("citizen-updates")] CitizenUpdate update)
{
    var legacyData = await _legacyDb.GetCitizen(update.LegacyId);
    // Map the legacy model (with its quirks) to the clean model
    var profile = new CitizenProfile(
        Id: update.NewId,
        FullName: $"{legacyData.FIRST_NM} {legacyData.LAST_NM}".Trim(),
        Email: legacyData.EMAIL_ADDR ?? legacyData.ALT_EMAIL,
        Address: MapLegacyAddress(legacyData)
    );
    await _modernDb.UpsertProfile(profile);
}

This pattern kept the new codebase clean, let the legacy system continue running without modification, and gave us a clear path to eventually decommission it.
Database Migrations Are Where Projects Go to Die
I am not exaggerating. I have seen more migration projects stall on database issues than on any other single factor.
On one project, we had to upgrade from SQL Server 2012 to SQL Server 2019. On paper, this sounds routine. In practice, it involved rewriting queries that relied on deprecated behavior, validating data integrity across millions of rows, and testing that execution plans had not regressed. We built automated comparison tools that ran the same queries against both versions and diffed the results.
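The core of such a comparison tool is a diff over keyed result sets. This is a minimal sketch of the idea, not the actual tool from that project: run the same query against both servers, serialize each row against its key, and flag anything missing, extra, or changed.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ResultDiff
{
    // oldRows/newRows: primary key -> serialized row values from each server
    public static List<string> Diff(
        IReadOnlyDictionary<string, string> oldRows,
        IReadOnlyDictionary<string, string> newRows)
    {
        var problems = new List<string>();
        foreach (var (key, oldVal) in oldRows)
        {
            if (!newRows.TryGetValue(key, out var newVal))
                problems.Add($"missing after upgrade: {key}");
            else if (oldVal != newVal)
                problems.Add($"value drift: {key} ({oldVal} -> {newVal})");
        }
        foreach (var key in newRows.Keys.Except(oldRows.Keys))
            problems.Add($"extra after upgrade: {key}");
        return problems;
    }
}
```

An empty problems list for every query in the regression suite is the signal that the upgrade is safe to promote.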
On another project, migrating VB6 applications meant the database schemas were designed for a different era of software. Column names like CUST_NM_1 and PROC_FLG_3. No foreign keys. Business logic encoded in stored procedures that nobody fully understood.
The lesson: budget twice as much time for the database as you think you need, and build automated validation early. Manual spot-checking does not scale, and a single corrupted record in a financial system can cascade into a major incident.
Feature Flags Make You Brave
You know what makes developers hesitant to ship migrations? The fear that they cannot roll back. Feature flags solve this completely.
Every migration I run now uses feature flags from day one. New code path ready? Toggle it on for 5% of traffic. Metrics look good? Bump it to 50%. Something weird? Toggle it off instantly, no deployment required.
This is not just a technical practice. It changes the culture. Developers who know they can roll back in seconds are willing to ship more frequently. Product managers who can see the new and old systems running side by side gain confidence faster. Everyone sleeps better.
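The gradual rollout works because flag membership is deterministic: a stable hash of the user id picks a bucket from 0 to 99, and the user is in the cohort when the bucket falls below the rollout percentage. This is a minimal sketch of that mechanism (the `Rollout` class and its method are illustrative; real flag services like LaunchDarkly or Microsoft.FeatureManagement handle this for you):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class Rollout
{
    // Deterministic: the same user sees the same path for as long as the
    // percentage holds, so raising 5% -> 50% only adds users, never flip-flops them.
    public static bool IsEnabled(string flagName, string userId, int percent)
    {
        if (percent <= 0) return false;
        if (percent >= 100) return true;
        using var md5 = MD5.Create();
        var hash = md5.ComputeHash(Encoding.UTF8.GetBytes($"{flagName}:{userId}"));
        var bucket = BitConverter.ToUInt32(hash, 0) % 100;
        return bucket < percent;
    }
}
```

Hashing the flag name together with the user id means each flag gets its own independent cohort, so one user is not stuck in the experimental group for every migration at once.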
What I Would Tell My Younger Self
If I could go back and talk to the version of me starting my first legacy migration, I would say three things.
Resist the rewrite urge. It feels productive to start from scratch. It is almost always a trap. The legacy system works. It just works in ways that are hard to maintain. Your job is to make it maintainable, not to prove you can write it better.
Invest in testing before you change anything. Write characterization tests that capture the legacy system's current behavior. You need a safety net before you start moving the trapeze.
Talk to the people who use the system. The codebase will not tell you which features matter, which edge cases are critical, and which parts nobody has used in years. The users will.
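The characterization-test advice above looks like this in practice: assert what the legacy code does today, quirks included, not what you wish it did. `LegacyFormatter` here is a hypothetical stand-in for whatever routine you are about to migrate.

```csharp
using System;

public static class LegacyFormatter
{
    // Imagine this was lifted verbatim from a legacy code-behind file.
    public static string FormatName(string first, string last) =>
        $"{(first ?? "").Trim()} {(last ?? "").Trim()}".Trim();
}

public static class CharacterizationTests
{
    public static void Run()
    {
        // These pin current behavior, not desired behavior. If a "fix" to the
        // new implementation breaks one, that is a conversation, not a green light.
        Assert(LegacyFormatter.FormatName("Ada", "Lovelace") == "Ada Lovelace");
        Assert(LegacyFormatter.FormatName(null, "Lovelace") == "Lovelace");
        Assert(LegacyFormatter.FormatName("  Ada  ", null) == "Ada");
    }

    static void Assert(bool condition)
    {
        if (!condition) throw new Exception("characterization broken");
    }
}
```

Run the same suite against the legacy version and the replacement; when both pass, you have evidence the behavior survived the migration.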
Legacy migration is not glamorous work. But it is some of the most valuable. Every system you modernize unlocks years of future productivity for the team that inherits it. That is a pretty good reason to get up in the morning.
Alvin Almodal
Cloud & Data Engineering Consultant. Your partner for cloud-native builds and data pipelines.