System Migration Services

Overview

Migration projects fail more often than most other categories of software work, not because they are technically complex beyond the abilities of the team involved, but because they are consistently underestimated. The data has more edge cases than the schema suggests. The downstream systems have more undocumented dependencies on the source system's behaviour than anyone knew. The cutover window turns out to be smaller than planned. The rollback procedure that was designed on paper does not work as expected when it is actually needed.

The consequences of a failed migration range from hours of unplanned downtime to permanent data loss. Either outcome is unacceptable when the system being migrated is critical to business operations, and the systems that are hardest to migrate are precisely the ones that are most critical: their importance is why they have been left running so long without being touched.

We approach migrations as infrastructure engineering problems rather than data movement exercises. The technical work of moving data from one system to another is straightforward. The engineering work — designing the migration so that it can be validated before cutover, executed within the available window, verified after cutover, and rolled back cleanly if something goes wrong — is where migrations succeed or fail.


Types of Migration We Handle

Database Migrations

Database migrations are the most common and often the most consequential migrations a business undertakes. The database holds the record of the business — customer data, transaction history, operational state — and migrating it incorrectly means losing or corrupting that record.

Database migrations we handle include:

  • Version upgrades within the same database platform — PostgreSQL 12 to 16, MySQL 5.7 to 8.0 — where the migration path is well-defined but the compatibility implications for application code need to be evaluated and addressed.
  • Cross-platform migrations — Oracle to PostgreSQL, SQL Server to MySQL, proprietary database to open source — where schema translation, function compatibility, data type mapping, and query rewriting are all required.
  • Schema modernisation migrations, where the data model itself is being restructured — normalising a denormalised schema, splitting a monolithic table, restructuring hierarchies — alongside the platform migration or independently.

Database migrations at scale — large datasets that cannot be migrated in a single offline window — require a live migration strategy: replicating data from the source to the destination while the source remains live, keeping the destination in sync with ongoing changes, and cutting over when the destination is sufficiently current. We design and execute these live migration strategies with explicit handling of the cutover moment — the window during which in-flight transactions need to be drained, the source locked, the destination validated, and traffic switched — keeping the cutover window as short as the data volume and application architecture allow.

Platform Migrations

Platform migrations move an application or service from one hosting environment to another — from on-premises infrastructure to cloud hosting, from one cloud provider to another, from a managed hosting environment to self-managed infrastructure, or from an end-of-life platform to a supported one.

The challenge of platform migrations is that the application being migrated was built for the source environment — it may rely on filesystem paths that do not exist in the destination, environment variables that are configured differently, network topology assumptions that do not hold, or platform-specific services that have no direct equivalent. A platform migration that treats the application as a black box to be moved without inspection will discover these dependencies during cutover, when they are most expensive to address.

We audit the application's platform dependencies before designing the migration — identifying every assumption the application makes about its environment and resolving each one explicitly before the migration runs.

Application and Service Migrations

Migrating from one application to another — replacing a legacy CRM with a modern alternative, moving from one ERP to another, switching ecommerce platforms — requires migrating not just the data but the business processes that the source system supports. Data that was structured for the source system's model needs to be transformed for the destination system. Integrations built against the source system's APIs need to be rebuilt against the destination. Workflows that were supported by the source system's features need to be replicated or redesigned in the destination.

We scope application migrations with explicit identification of every capability the source system provides, how each capability maps to the destination system, what transformation is required for the data, and what integration work is required for the connections the business depends on.

Cloud Provider Migrations

Moving between cloud providers in any direction — AWS to Azure, Azure to GCP, a cloud provider to Hetzner or another VPS provider — requires mapping every cloud-native service used by the application to its equivalent in the destination environment. Object storage, managed databases, queue services, DNS, load balancers, and secrets management all have provider-specific implementations that need to be replaced with their equivalents or with self-managed alternatives.

Cloud provider migrations are also an opportunity to audit cloud spend and architecture — removing services that are no longer needed, right-sizing infrastructure, and improving the architecture in ways that were not possible within the constraints of the source provider's service model.

Data Format and Protocol Migrations

Some migrations do not involve moving to a new platform but changing how data is structured or how systems communicate. Migrating from a file-based integration to an API-based one. Replacing a legacy messaging protocol with a modern REST or WebSocket interface. Restructuring data formats — from XML to JSON, from CSV to Parquet, from a proprietary binary format to an open standard — for compatibility with modern tooling. These migrations require the same discipline around validation and cutover management as platform migrations, even though the infrastructure itself may not change.


How We Approach Every Migration

Regardless of migration type, the methodology is consistent. The specifics vary — the tools used, the cutover strategy, the validation approach — but the structure is the same.

Inventory and dependency mapping. Before designing the migration, we produce a complete picture of what is being migrated and what depends on it. Every table, every integration, every downstream consumer of the source system, every assumption the application makes about the source system's behaviour. Surprises during migration execution are almost always things that were present in the source system but not in the inventory. We invest in making the inventory complete rather than discovering it mid-migration.
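Part of that inventory can be produced mechanically rather than from memory. As a minimal sketch, declared foreign-key relationships can be enumerated from the database catalogue — here using SQLite's PRAGMA interface against an invented toy schema; the table and column names are illustrative only:

```python
import sqlite3

# Toy source schema standing in for a production database (names are invented).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
""")

def inventory(con):
    """List every table and every declared foreign-key dependency."""
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    deps = []
    for t in tables:
        # PRAGMA foreign_key_list rows are (id, seq, ref_table, from, to, ...)
        for row in con.execute(f"PRAGMA foreign_key_list({t})"):
            deps.append((t, row[3], row[2], row[4]))  # (table, column, ref_table, ref_column)
    return tables, deps

tables, deps = inventory(con)
print(tables)  # ['customers', 'orders']
print(deps)    # [('orders', 'customer_id', 'customers', 'id')]
```

A catalogue scan only finds the dependencies the schema declares; application-level joins, undeclared conventions, and external consumers are exactly the part that still requires manual inventory work.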

Destination environment design. The destination environment is designed before migration begins — not provisioned and assumed to be appropriate. Schema design for the destination database, infrastructure configuration for the destination platform, and the transformation logic that maps source data to destination structure are all specified and reviewed before the first record is moved.

Test migration on production data. Migrations are tested against a copy of production data, not against a sample or a synthetic dataset. Production data contains the edge cases that synthetic data does not — the records with null values in columns that the schema declares non-nullable, the strings with encoding anomalies, the timestamp values at the boundaries of valid ranges. Running the migration against production data in a test environment surfaces these issues before the production migration window.
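A profiling pass over the production copy can surface those edge cases before the migration tooling trips on them. The sketch below checks an invented table for three of the issue classes mentioned above: nulls where the application assumes values, control characters hiding in strings, and timestamps outside the expected range — column names and thresholds are assumptions:

```python
import sqlite3
import datetime

# Profile a copy of production data for the edge cases synthetic data misses.
# Table, column names, and valid ranges are illustrative, not from any real system.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, label TEXT, created_at TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, "ok",           "2021-06-01T12:00:00"),
    (2, None,           "2021-06-02T09:30:00"),  # null where the app assumes not-null
    (3, "stray\rreturn","2021-06-03T10:00:00"),  # control character inside a string
    (4, "boundary",     "1899-12-31T23:59:59"),  # timestamp outside the expected range
])

def profile(con):
    issues = {"null_label": [], "control_chars": [], "ts_out_of_range": []}
    lo = datetime.datetime(2000, 1, 1)
    hi = datetime.datetime(2100, 1, 1)
    for rid, label, created in con.execute("SELECT id, label, created_at FROM events"):
        if label is None:
            issues["null_label"].append(rid)
        elif any(ord(c) < 32 for c in label):
            issues["control_chars"].append(rid)
        if not (lo <= datetime.datetime.fromisoformat(created) < hi):
            issues["ts_out_of_range"].append(rid)
    return issues

issues = profile(con)
print(issues)  # each list holds the ids of the offending rows
```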

Validation framework. Every migration has a validation framework — a set of checks that confirm the destination system contains what it should contain after the migration completes. Record counts by table and by meaningful business category. Aggregate value comparisons — total transaction amounts, total inventory quantities — that confirm no data was lost or corrupted in transformation. Referential integrity checks that confirm foreign key relationships are intact. Application-level smoke tests that confirm the application behaves correctly against the migrated data.
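The mechanical core of such a framework can be small: a list of checks, each a query that must return the same value on the source and the destination. A minimal sketch, with an invented transactions table and check list:

```python
import sqlite3

# Minimal validation harness comparing a source and a migrated destination.
# The schema and the check definitions are illustrative.
def make_db(rows):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE txns (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
    con.executemany("INSERT INTO txns VALUES (?, ?, ?)", rows)
    return con

rows = [(1, "a", 10.0), (2, "a", 5.5), (3, "b", 7.25)]
source, dest = make_db(rows), make_db(rows)

CHECKS = [
    # (description, SQL that must return the same value on both sides)
    ("row count",    "SELECT COUNT(*) FROM txns"),
    ("amount total", "SELECT ROUND(SUM(amount), 2) FROM txns"),
    ("accounts",     "SELECT COUNT(DISTINCT account) FROM txns"),
]

def validate(source, dest):
    failures = []
    for name, sql in CHECKS:
        s = source.execute(sql).fetchone()[0]
        d = dest.execute(sql).fetchone()[0]
        if s != d:
            failures.append((name, s, d))
    return failures

failures = validate(source, dest)
print(failures)  # [] when source and destination agree on every check
```

The same harness runs before cutover against the test migration and after cutover against production, so a discrepancy is a named, reproducible failure rather than a vague suspicion.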

Cutover planning. The cutover — the moment of transition from source to destination — is planned at the procedure level, not the concept level. Each step is documented with its expected duration, its success criterion, and its rollback action if the success criterion is not met. The cutover procedure is rehearsed against the test environment before the production cutover runs. The people executing the cutover know exactly what they are doing and what to do if something goes wrong.
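Treating the cutover plan as data makes that rehearsal concrete. A sketch, with invented step names and trivial stand-in actions, showing the shape of a runbook where each step carries its budget, its success criterion, and its rollback action:

```python
from dataclasses import dataclass
from typing import Callable

# A cutover plan as data. Step names and budgets are illustrative.
@dataclass
class Step:
    name: str
    budget_minutes: int
    run: Callable[[], bool]        # returns True when the success criterion holds
    rollback: Callable[[], None]

def execute(plan):
    """Run steps in order; on the first failure, roll back completed steps in reverse."""
    done = []
    for step in plan:
        if not step.run():
            for prev in reversed(done):
                prev.rollback()
            return False
        done.append(step)
    return True

plan = [
    Step("drain in-flight writes", 5, lambda: True, lambda: None),
    Step("lock source",            1, lambda: True, lambda: None),
    Step("validate destination",  10, lambda: True, lambda: None),
    Step("switch traffic",         2, lambda: True, lambda: None),
]
print(execute(plan))  # True when every success criterion is met
```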

Rollback design. Every migration has a rollback path that has been tested. For database migrations, the rollback path is the ability to redirect application traffic back to the source database, which remains live until the migration is validated. For platform migrations, the rollback path is a running instance of the source environment that can receive traffic within the switchover time. Rollback procedures that exist only on paper and have never been executed are not reliable.

Post-migration validation and monitoring. After cutover, the destination system is monitored closely — error rates, query performance, data access patterns — for the period during which issues are most likely to surface. The source system remains available as a fallback until the destination system has been validated under production load for a sufficient period.


Data Quality as a Migration Input

Migrations expose data quality problems that the source system has accumulated over years of operation. Duplicate records that the source system's UI prevented but that exist in the database. Orphaned records that reference parents that no longer exist. Values in columns that do not match the validation rules that were added after the data was written. Inconsistent categorisation across records written by different users over different periods.

These problems do not go away during migration — they arrive in the destination system unless they are addressed. A migration that moves dirty data from a source system to a destination system delivers a dirty destination system.

We treat data quality as a migration input — auditing the source data for quality issues during the inventory phase, scoping the data cleaning work required alongside the migration work, and delivering a destination system with data that is cleaner than the source rather than identically dirty.
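Two of the most common findings in that audit, duplicates the UI prevented but the database allowed and orphaned child rows, are each one query. A sketch against an invented schema:

```python
import sqlite3

# Audit a source copy for duplicate and orphaned records. Names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'a@x.test'), (2, 'a@x.test'), (3, 'b@x.test');
    INSERT INTO orders VALUES (10, 1), (11, 99);  -- 99 references no customer
""")

dupes = con.execute("""
    SELECT email, COUNT(*) FROM customers
    GROUP BY email HAVING COUNT(*) > 1
""").fetchall()

orphans = con.execute("""
    SELECT o.id FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id
    WHERE c.id IS NULL
""").fetchall()

print(dupes)    # duplicated emails with their counts
print(orphans)  # order ids whose customer no longer exists
```

What to do with each finding (merge, delete, quarantine, or migrate as-is) is a business decision; the audit's job is to make the decision explicit before cutover rather than implicit after it.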


Zero-Downtime Migration Strategies

For systems where any downtime is unacceptable — production databases serving live applications, platforms that operate continuously without maintenance windows — zero-downtime migration strategies keep the source system available throughout the migration and minimise the cutover window to seconds rather than hours.

Dual-write patterns. During the migration period, writes go to both the source and destination systems simultaneously. The destination is populated with historical data via bulk migration while new writes keep it current. When the destination is validated as current and correct, reads switch to the destination and dual-write stops.
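The pattern reduces to a thin wrapper around the two stores. A sketch, using plain dicts as stand-ins for the real storage APIs; the error-handling policy shown is one common choice, not the only one:

```python
# Dual-write sketch: writes go to both stores; reads come from whichever side
# the cutover flag selects. Plain dicts stand in for the real storage clients.
class DualWriter:
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination
        self.read_from_destination = False  # flipped once the destination is validated

    def write(self, key, value):
        # The source remains authoritative during migration; destination write
        # failures are tolerated here but must be queued for reconciliation
        # in a real system, never dropped silently.
        self.source[key] = value
        try:
            self.destination[key] = value
        except Exception:
            pass

    def read(self, key):
        side = self.destination if self.read_from_destination else self.source
        return side[key]

src, dst = {}, {}
dw = DualWriter(src, dst)
dw.write("order:1", "paid")
dw.read_from_destination = True   # cutover: reads switch, dual-write can then stop
print(dw.read("order:1"))
```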

Change data capture. For database migrations, change data capture monitors the source database for all inserts, updates, and deletes and replicates them to the destination in near real time. The destination starts from a bulk copy of the source and then stays current through CDC replication. The cutover window is the time needed to drain in-flight transactions and switch the application connection string — typically seconds.
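Production CDC reads the database's write-ahead log through tooling such as Debezium or logical replication; the simplest self-contained stand-in for the idea is trigger-based capture into a changes table, replayed to the destination in order. A sketch with an invented schema:

```python
import sqlite3

# Trigger-based change capture on the source, replayed to the destination.
# Real CDC reads the write-ahead log; triggers are the simplest stand-in.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER);
    CREATE TABLE changes (seq INTEGER PRIMARY KEY AUTOINCREMENT,
                          op TEXT, id INTEGER, qty INTEGER);
    CREATE TRIGGER t_ins AFTER INSERT ON items BEGIN
        INSERT INTO changes (op, id, qty) VALUES ('I', NEW.id, NEW.qty);
    END;
    CREATE TRIGGER t_upd AFTER UPDATE ON items BEGIN
        INSERT INTO changes (op, id, qty) VALUES ('U', NEW.id, NEW.qty);
    END;
    CREATE TRIGGER t_del AFTER DELETE ON items BEGIN
        INSERT INTO changes (op, id, qty) VALUES ('D', OLD.id, OLD.qty);
    END;
""")

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER)")

def replay(src, dst, after_seq=0):
    """Apply captured changes to the destination in order; return the last seq."""
    last = after_seq
    for seq, op, rid, qty in src.execute(
            "SELECT seq, op, id, qty FROM changes WHERE seq > ? ORDER BY seq",
            (after_seq,)):
        if op == 'D':
            dst.execute("DELETE FROM items WHERE id = ?", (rid,))
        else:
            dst.execute("INSERT OR REPLACE INTO items VALUES (?, ?)", (rid, qty))
        last = seq
    return last

src.execute("INSERT INTO items VALUES (1, 5)")
src.execute("UPDATE items SET qty = 7 WHERE id = 1")
src.execute("INSERT INTO items VALUES (2, 3)")
src.execute("DELETE FROM items WHERE id = 2")
replay(src, dst)
print(dst.execute("SELECT * FROM items").fetchall())  # [(1, 7)]
```

Running `replay` repeatedly with the returned sequence number keeps the destination current; at cutover, one final replay after the source is drained closes the gap.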

Feature flag cutover. For application migrations, feature flags in the application layer control which system handles each category of request. Traffic is migrated progressively — first read traffic, then write traffic for low-risk operations, then write traffic for high-risk operations — with each step validated before proceeding. Rollback at any stage is a feature flag change.
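The routing layer itself can be very small; the discipline is in the staging and validation around it. A sketch with invented request categories:

```python
# Progressive cutover behind flags: each request category is routed
# independently, and rolling a stage back is a flag change, not a deployment.
# The categories and their ordering are illustrative.
FLAGS = {
    "read":       "destination",  # migrated first
    "write:low":  "destination",  # low-risk writes migrated next
    "write:high": "source",       # high-risk writes still on the old system
}

def route(category):
    """Pick the backing system for a request category; default to the source."""
    return FLAGS.get(category, "source")

print(route("read"))        # destination
print(route("write:high"))  # source

# Rollback of one stage:
FLAGS["write:low"] = "source"
print(route("write:low"))   # source
```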


Technologies We Migrate Between

Our migration work covers the full range of platforms and technologies in our stack and beyond:

Database platforms. PostgreSQL, MySQL, SQLite, SQL Server, Oracle, MongoDB — migrations between any combination, in either direction, with schema transformation and data type mapping as required.

Application platforms. Linux VPS and bare metal, AWS EC2, Hetzner, DigitalOcean, Azure, GCP — migrations between hosting environments with full application dependency auditing and environment configuration translation.

ERP and business platforms. Exact Online, AFAS, Twinfield, SAP, Salesforce, HubSpot — data migrations between business platforms with transformation logic that maps the source system's data model to the destination system's.

Ecommerce platforms. Shopify, WooCommerce, Magento, Bol.com — product catalogues, customer records, order history, and operational data migrated between platforms with transformation for structural differences between the source and destination data models.

File and protocol formats. XML to JSON, CSV to Parquet, legacy binary formats to open standards, file-based integrations to API-based integrations — format and protocol migrations that update the interfaces between systems without requiring simultaneous changes to all connected systems.
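As a small illustration of the format-migration category, an XML record set can be converted to JSON with the standard library alone; the element and field names here are invented, and real conversions need streaming parsers for large files plus the same validation framework described above:

```python
import json
import xml.etree.ElementTree as ET

# Format migration sketch: a flat XML record set converted to JSON records.
# Element and field names are invented for illustration.
XML = """
<orders>
  <order id="1"><customer>acme</customer><total>42.50</total></order>
  <order id="2"><customer>globex</customer><total>17.00</total></order>
</orders>
"""

def xml_to_records(text):
    root = ET.fromstring(text)
    records = []
    for order in root.findall("order"):
        rec = {"id": int(order.get("id"))}
        for child in order:
            rec[child.tag] = child.text
        rec["total"] = float(rec["total"])  # type mapping is part of the migration
        records.append(rec)
    return records

records = xml_to_records(XML)
print(json.dumps(records))
```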


Technologies Used

  • Rust — high-throughput data transformation pipelines, binary format parsing, migration tooling where processing speed matters
  • C# — migration tooling for enterprise system integrations, Excel and file processing, .NET ecosystem source or destination systems
  • SQL (PostgreSQL, MySQL, SQLite) — schema design, migration scripting, validation queries, analytical reconciliation
  • Python — data transformation scripting, format conversion, validation tooling where ecosystem breadth matters
  • REST / WebSocket — API-based data extraction from source systems and loading into destination systems
  • AWS S3 / SQS — staging storage and queuing for large-scale migration pipelines
  • Linux / Systemd — migration job orchestration and scheduling on Linux infrastructure

Starting a Migration Project

Migration projects begin with an honest assessment of the source system — what it contains, what depends on it, what the risk profile of the migration is, and what the available cutover window looks like. We do not propose a migration approach before we understand the source system.

For complex migrations — large datasets, many downstream dependencies, zero-downtime requirements — we scope a discovery phase before the migration itself. The discovery phase produces the inventory, the dependency map, the data quality assessment, the destination design, and the migration plan. The migration execution then proceeds against a plan that has been reviewed and validated rather than a concept that is being figured out as it runs.


Move With Confidence

The goal of every migration is to arrive at the destination with everything intact — data complete, integrations working, application behaviour preserved, and the source system available to fall back to if anything is not right. That outcome is not achieved by moving fast and hoping. It is achieved by planning thoroughly, testing against real data, designing rollback as carefully as the forward path, and validating completely before decommissioning the source.