Organizations are urgently searching for a cloud data migration process that both migrates and modernizes legacy Oracle databases. Open source database alternatives are cheaper, more scalable, and far easier to manage as services from cloud providers.
The potential cost savings alone are compelling. According to Gartner, organizations can reduce their database spend by 80% or more with a data migration process to an open source database such as EDB Postgres in place of a traditional Oracle database.
Current results are falling short
While the rewards may be lucrative, a data migration process is full of risk. Production must continue without disruption, and data must remain secure. Cloud data must be verified as consistent with on-premises data before any changes are made to legacy on-premises databases.
Several tools and products claim to fit the requirements, but in many cases the wrong data migration process is chosen or poorly applied with disastrous results. According to Gartner, through 2019, more than 50 percent of data migration projects will exceed budget and/or result in some form of business disruption due to flawed execution.
| Use Case | Attributes | Storage tools / VM replication | Logical replication (Griddable) |
| --- | --- | --- | --- |
| Lift and shift to identical stack | Identical hypervisor, OS/DB version | Storage tools: significant downtime as source VM and storage are migrated to target. VM replication: live motion requires specialized networking within distance limitations; disk replication requires downtime and crash-recovers the database after migration. | Live migration with zero switchover. Automated bulk load and cutover to incremental changes. |
| Lift and shift to non-identical stack | Minimal change in software stack | Limited hypervisor upgrade and changes to storage configuration. Other changes after migration using point tools and component upgrades. | Live migration independent of cloud, stack versions, and database type. |
| Data and database transformation | Convert legacy databases and schema, protect PII | Unaware of schema and data structure. No data/DB transformations or privacy controls. | Live migration for heterogeneous targets; policy definition of data transformation, partial data, and data privacy controls. |
Database lift and shift – still trapped in legacy technology
Oracle offers several database lift and shift tools as the basis for a customized data migration process. The possible options include:
- plugging and unplugging pluggable databases (PDBs)
- remote cloning
- Oracle Data Pump
- Recovery Manager (RMAN) backup and recovery
- SQL Developer with either INSERT statements or SQL*Loader
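As one illustration, a minimal Data Pump export and import of a single schema might look like the sketch below. The connect strings, schema name, and directory object are placeholders, and a real migration would add network, character set, and downtime planning around these two commands.

```shell
# Export the HR schema from the source database.
# DATA_PUMP_DIR must be a directory object that already exists on the server.
expdp system@onprem_db SCHEMAS=hr DIRECTORY=DATA_PUMP_DIR \
      DUMPFILE=hr.dmp LOGFILE=hr_exp.log

# Import the dump into the target database, remapping the tablespace
# to match the target storage layout.
impdp system@cloud_db SCHEMAS=hr DIRECTORY=DATA_PUMP_DIR \
      DUMPFILE=hr.dmp LOGFILE=hr_imp.log REMAP_TABLESPACE=users:cloud_data
```

Note that the dump file must still be transferred (or a database link used), and the source schema must remain quiesced between export and cutover, which is exactly the downtime problem discussed below.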
Most vendors provide database tools for migrating from one deployment of that vendor’s database to another. As a natural result, these tools perpetuate use of the legacy database. Unfortunately, that fact usually defeats much of the desired cost benefit of migrating to the cloud. This is true because legacy databases are expensive and their applications are a significant drain on most IT budgets.
In addition, vendor tools are generally laborious. To be successful, they demand significant expertise in vendor technology and insight into each migrating database instance.
The actual choices available for a specific migration depend on several factors. These factors include the Oracle version, character set, quantity of data, use of indexes, data types, and storage. The length of acceptable system downtime and network bandwidth are also key issues during the data migration process. These factors are critical because lift and shift requires at least some downtime for the migration and switchover to occur.
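A back-of-envelope calculation shows why bandwidth and downtime dominate the planning. The sketch below (illustrative only; the 70% link-efficiency derating is an assumption, and real migrations also pay for export, import, and validation time) estimates the raw transfer window for a given database size and link speed:

```python
# Rough downtime estimate for a cold-copy migration window.
# Assumes the only cost is moving bytes over the wire.

def transfer_hours(data_gb: float, bandwidth_gbps: float,
                   efficiency: float = 0.7) -> float:
    """Hours to move data_gb gigabytes over a bandwidth_gbps link,
    derated by an assumed protocol/contention efficiency factor."""
    effective_gbps = bandwidth_gbps * efficiency
    seconds = (data_gb * 8) / effective_gbps  # GB -> gigabits, then / Gbps
    return seconds / 3600

# A 2 TB database over a 1 Gbps link at 70% efficiency:
print(f"{transfer_hours(2000, 1.0):.1f} hours")  # roughly 6.3 hours
```

Even this optimistic figure is far longer than most production maintenance windows, which is why approaches that require a full copy during downtime are so constrained.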
Hypervisor lift and shift – much tougher than it looks
Modern hypervisors provide both storage and live VM migration for load balancing, hardware maintenance, high availability (HA), or disaster recovery.
Storage migration copies VM storage between clouds. Even with a well-designed storage infrastructure, application I/O characteristics can dramatically impact the performance of storage migration. The most feasible data migration process using storage motion is a cold copy of the storage after a database shutdown. While this approach works, it increases database downtime.
Live migration copies the memory and device state of a VM from one physical host to another. It imposes limits on the maximum network round-trip time, disqualifying this choice for many data migration processes. In addition, live migration requires L2 connectivity between data centers, and stretching networks to achieve L2 connectivity is one of the most challenging aspects of a data migration process. Finally, hypervisor lift and shift is rarely an option with public clouds, most of which do not support on-premises hypervisors.
Logical replication – a better data migration process
Griddable uses a completely different data migration process. Griddable operates at the data layer, so it seamlessly migrates across heterogeneous clouds and database types. The Griddable data pipeline preserves the transaction sequence, synchronizing the source and target databases for near-zero switchover. Best of all, the Griddable policy engine provides user-friendly controls to select exactly which data to migrate and transform in transit. Griddable masks or encrypts any number of individual data elements using separate algorithms or encryption keys. Griddable also filters and replaces data values, or selectively removes entire rows or columns, with an easily-defined user policy.
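To make the policy idea concrete, the sketch below shows the general pattern of per-field transforms applied to rows in transit. This is an illustration only: Griddable's actual policy engine and configuration format are not shown here, and the policy table and function names are invented for the example.

```python
import hashlib

# Hypothetical per-column policy: mask, pseudonymize, or drop fields
# as rows flow from source to target. Not Griddable's actual API.
POLICY = {
    "ssn": lambda v: "***-**-" + v[-4:],   # mask all but the last 4 digits
    "email": lambda v: hashlib.sha256(v.encode()).hexdigest()[:12],  # pseudonymize
    "internal_notes": None,                # remove the column entirely
}

def apply_policy(row: dict) -> dict:
    """Apply the column policy to one row; unlisted columns pass through."""
    out = {}
    for col, val in row.items():
        transform = POLICY.get(col, lambda v: v)
        if transform is None:
            continue  # column dropped by policy
        out[col] = transform(val)
    return out

row = {"id": 7, "ssn": "123-45-6789", "email": "a@b.com", "internal_notes": "vip"}
print(apply_policy(row))
```

The key property, as described above, is that these transforms run inside the replication pipeline itself, so sensitive values never reach the target in cleartext and transaction ordering is preserved.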
Click the “Live Demo” button at the top of this page for a 10 minute, no-obligation tour.