Zero Downtime Database Migration using Oracle Cloud Infrastructure (OCI) Database Migration Service (DMS)

Written by Dinusha Rathnamalala | Apr 14, 2022 4:30:00 PM

When you see “Zero Downtime”, you might think we are going to discuss Oracle Zero Downtime Migration (ZDM) – Oracle’s premier offering for simplified and automated database migration. ZDM is a well-known Oracle Maximum Availability Architecture (MAA) recommended solution for migrating Oracle databases to and between Oracle infrastructures, including Exadata Database Machine, Exadata Cloud at Customer (ExaC@C), and Oracle Cloud.

But what does that have to do with Oracle Database Migration Service (DMS)? DMS is a fairly new addition to the OCI service offering that can be used as an alternative to ZDM when migrating databases to the Oracle Cloud. Oracle DMS is backed by the Oracle ZDM engine and Oracle GoldenGate replication. However, it is a fully managed service that provides a high-performing, self-service experience for migrating databases to the Oracle Cloud from various sources, such as on-premises, third-party clouds, or Oracle legacy cloud, as well as for migrating databases within OCI.

One of the biggest advantages of using DMS in your migration is that it eliminates complex infrastructure preparation. You only need to provide connectivity between the source and target databases via the DMS service. While knowledge of GoldenGate or ZDM is nice to have, you need none of it because DMS handles all of that on your behalf.

In this article, I will explain how to migrate an Oracle 12c database hosted on an OCI VM instance (assuming this is our on-premises source database) to an OCI DB System 19c pluggable database (this is going to be our target database).

Before we begin, there are some prerequisites that I have already completed. I have noted them below:

  • The source database is installed, configured, and loaded with test data in the MIS schema
  • The Target database is provisioned and necessary schemas (MIS in our case) and tablespaces are created
  • Vault and the Master Encryption Key are created under Identity and Security
  • Oracle Object Storage Bucket is created (Bucket name DMS in our case)
  • The ports required for communication are opened on OCI NSG or Security List

For more details about the prerequisites, please refer to the OCI documentation and the Database Migration Service documentation (https://docs.oracle.com/en/cloud/paas/database-migration/dmsus/getting-started-oracle-cloud-infrastructure-database-migration.html)
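For reference, the DMS bucket from the prerequisites could also be created with the OCI CLI. This is only a minimal sketch, assuming the CLI is already configured and <compartment-ocid> is a placeholder for your compartment OCID:

oci os bucket create --name DMS --compartment-id <compartment-ocid>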

Let’s start by deploying the Oracle Cloud Marketplace solution – “Oracle GoldenGate for Oracle – Database Migrations”. It will be used as the data replication mechanism in this migration.

 

Configure Oracle GoldenGate Microservices for Data Replication:

Login to the OCI Console and navigate to the Marketplace. Under All Applications, search for “Oracle GoldenGate for Oracle – Database Migrations” and select it.

 

Hit Launch Stack to create the necessary resources, then click Next.

 

Select your network settings accordingly.

 

On the Create OGG Deployments page, select Oracle 12c as the Source Deployment 1 – Database and Oracle 19c as the Target Deployment 2 – Database. Add your SSH public key under Shell Access and hit Next to review. If everything looks good, hit Create to start provisioning.

 

When provisioning is done, you should see a log output similar to the one below.

 

Navigate to Compute in the cloud console. Under Instances, you should see that the Oracle GoldenGate Microservices VM has been created. Note the public IP of the server.

 

Using a web browser, load the Oracle GoldenGate Service Manager. Ignore any certificate error. If you get a connection timeout error, check the ingress rule (443) in your NSG or Security List.
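The Service Manager is served over HTTPS on port 443, so the URL is simply the public IP of the GoldenGate VM (the address below is a placeholder, not a value from this environment):

https://<goldengate-vm-public-ip>/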

 

The Service Manager's username is oggadmin. Log in to the server as the opc user to get the password. You will find a JSON document – ogg-credentials.json – under opc's home directory. Retrieve the oggadmin credentials as shown below; we will use them later in the configuration.
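A quick way to pull the credentials from the VM is sketched below. The file name and location match what the Marketplace image creates; the sample output in the comment is only an assumption of the JSON layout:

ssh opc@<goldengate-vm-public-ip>
cat ~/ogg-credentials.json
# sample output: {"username": "oggadmin", "credential": "************"}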

 

Before registering the source and target databases with DMS, we need to complete a few mandatory steps, as described below.

 

Preparing the Source Database for Migration:

Create a GoldenGate User on the source database.

CREATE USER ggadmin IDENTIFIED BY ggadmin_password DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp;
ALTER USER ggadmin QUOTA 100M ON USERS;
GRANT UNLIMITED TABLESPACE TO ggadmin;
GRANT CONNECT, RESOURCE TO ggadmin;
GRANT SELECT ANY DICTIONARY TO GGADMIN;
GRANT CREATE VIEW TO GGADMIN;
GRANT EXECUTE ON dbms_lock TO ggadmin;
EXEC dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('ggadmin');

 

If the source database is multitenant, create the ggadmin user in the PDB, and also create a different user in the CDB root (for example, c##ggadmin).

CREATE USER c##ggadmin IDENTIFIED BY c##ggadmin_password DEFAULT TABLESPACE
users TEMPORARY TABLESPACE temp;
ALTER USER c##ggadmin QUOTA 100M ON USERS;
GRANT UNLIMITED TABLESPACE TO c##ggadmin;
GRANT CONNECT, RESOURCE TO c##ggadmin container=all;
GRANT SELECT ANY DICTIONARY TO C##GGADMIN container=all;
GRANT CREATE VIEW TO C##GGADMIN container=all;
GRANT EXECUTE ON dbms_lock TO c##ggadmin container=all;
EXEC dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('c##ggadmin',container=>'all');

Check the GLOBAL_NAMES parameter. If it is set to TRUE, change it to FALSE.

sqlplus > show parameter global_names
sqlplus > alter system set global_names=false;

Enable ARCHIVELOG if it is not already enabled.
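If the source database is not yet in ARCHIVELOG mode, a typical way to enable it looks like the following (run as SYSDBA; note that this restarts the database):

sqlplus > archive log list
sqlplus > shutdown immediate
sqlplus > startup mount
sqlplus > alter database archivelog;
sqlplus > alter database open;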

Enable supplemental logging.

sqlplus > SELECT supplemental_log_data_min, force_logging FROM v$database;
sqlplus > ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
sqlplus > ALTER DATABASE FORCE LOGGING;

Enable GoldenGate Replication:

sqlplus > ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;

Create a database directory. This is the location where the dump files are created.

sqlplus > CREATE OR REPLACE DIRECTORY backup_dir AS '/u01/backups/DB12C';
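The directory object only maps to an OS path, so make sure the path itself exists on the source database server. A quick sketch, assuming the usual oracle:oinstall ownership (adjust if yours differs):

mkdir -p /u01/backups/DB12C
chown oracle:oinstall /u01/backups/DB12C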

 

Preparing the Target Database for Migration:

Create a GoldenGate user on the target database. In our case, I created the ggadmin user in the PDB. You do not need to create any user in the container database on the target side.

CREATE USER ggadmin IDENTIFIED BY ggadmin_password DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp;
ALTER USER ggadmin QUOTA 100M ON USERS;
GRANT UNLIMITED TABLESPACE TO ggadmin;
GRANT CONNECT, RESOURCE TO ggadmin;
GRANT SELECT ANY DICTIONARY TO GGADMIN;
GRANT CREATE VIEW TO GGADMIN;
GRANT EXECUTE ON dbms_lock TO ggadmin;
EXEC dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('ggadmin');

Check the GLOBAL_NAMES parameter. If it is set to TRUE, change it to FALSE.

sqlplus > show parameter global_names
sqlplus > alter system set global_names=false;

Enable ARCHIVELOG if it is not already enabled.

Create a database directory. This is the location where the dump files are stored.

sqlplus > CREATE OR REPLACE DIRECTORY backup_dir AS '/u01/backups/PDB1';
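As on the source, make sure the OS path exists on the target DB system host (run as the oracle user or adjust ownership accordingly):

mkdir -p /u01/backups/PDB1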

 

Now we are ready to register the source and target databases with the DMS service. Navigate to Database Migration in the cloud console and click Registered Databases. Hit Register Database to register our first database. Enter the source database details and hit Next.

 

Give your database user (system) credentials and add the database server IP for SSH connectivity. Then hit the Register button.

 

Once your source database is added, you can proceed to the target database. Follow the same steps as for the source database, but change the connect string to reflect the target database IP address and the PDB's service name.
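For reference, an Easy Connect string for the target PDB typically looks like the sketch below. The host IP and service name are placeholders; the exact service name of your PDB can be confirmed with lsnrctl status on the DB system:

<target-db-ip>:1521/<pdb-service-name>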

 

Enter your target database connection details as below and hit Register.

 

You should be able to see that both the source and target databases are Active.

 

We have just completed registering our source and target databases with the DMS service. Now we can create a migration job. Navigate to Migrations and hit the Create Migration button. Name it accordingly and hit Next.

 

Select your source and target database and navigate to the Migration Options.

 

For the initial load, select the option Data Pump via Object Storage and select the bucket we have already created.

 

Make sure to select the checkbox Use Online Replication and enter the GoldenGate Hub URL (the same HTTPS address of the Service Manager we opened earlier). Update the rest of the details as given below.

 

Hit the Create button to create the migration job.

 

Once the job is created, we need to validate it before starting the actual migration. Hit the Validate button to start the validation process.

 

Click on View Details to see the job progress.

 

Validation runs through several phases, and if your configuration is all good, you should see an outcome similar to the one below. In my case, it failed several times and I had to revalidate after fixing the issues. Most likely you will hit ‘connection timeout’ errors due to incorrect ingress/egress rules.

 

Once everything is validated, we can start the migration. Here we have the option to pause it after the replication starts. I would suggest pausing at this stage, because the next phase is the switchover and we need to validate the data before switching over.

 

When you start the migration, DMS performs a series of actions in phases. If you look at the name of each phase, you will recognize familiar steps if you have done a similar migration using Oracle GoldenGate. DMS wraps every step behind the scenes by connecting to the Oracle GoldenGate Microservices instance. Now you can sit back and relax until it reaches the Monitor Replication Lag phase. Please note that the duration depends on the data volume and your network bandwidth.

 

Now data replication is in progress. Technically, our migration is done at this point. Let's do some testing and check some stats to confirm that data is in sync between the two databases. Log in to the Oracle GoldenGate Service Manager and open the Administration Server in another tab. Click on port 9011 to log in to the Source Administration Server. Use the same credentials as for the Service Manager.
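A quick sanity check is to compare row counts of the migrated schema on both sides, for example against the MIS.T1 test table referenced below:

sqlplus > select count(*) from mis.t1;  -- run on both source and target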

 

On the Administration Server home page, click on the Extract – EXTZSU4F (the name could differ in your setup) and navigate to the Statistics page.

 

Note that only two inserts have been processed on the source database. I inserted two records into the MIS.T1 table just to make sure replication is working.

 

Similarly, log in to the Target Administration Server and navigate to the Statistics of the Replicat process (RU1K0). We can see that the two inserts have been applied to the target MIS.T1 table as well.

Now let’s insert a few more records and update a record on the source database to verify it further. My green PuTTY session is the source database and the blue one is the target database.
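For illustration, the test DML was along these lines. This is only a sketch, since the actual column layout of MIS.T1 is not shown here; the column names and values below are hypothetical:

sqlplus > insert into mis.t1 values (3, 'row three');
sqlplus > insert into mis.t1 values (4, 'row four');
sqlplus > insert into mis.t1 values (5, 'row five');
sqlplus > update mis.t1 set c2 = 'updated' where c1 = 1;
sqlplus > commit;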

 

Once again, check the statistics of the Extract and the Replicat processes. You should see that a total of five inserts and one update have been processed by both the Extract and the Replicat.

 

We can see that both our source and target databases are in sync and whenever a change happens on the source side it gets replicated to the target. Additionally, you can check the process performance on the Performance Metrics Server. In this demo, we do not see a significant load because no changes are happening on the source database.

 

Finally, we have reached the cutover. If everything is set, we can start switching over to the target. Click on the Resume button on the migration job to kickstart the switchover.

 

Once the switchover is completed, you should be able to see in the source database alert log that the Capture process has been detached from the source database.

 

As the final step, you can initiate the cleanup process to remove the capture process from the source database.

This concludes our migration, which completed successfully using Oracle Database Migration Service. It is super easy to operate, and you do not have to be an expert in Oracle GoldenGate or ZDM to perform a “Zero Downtime” database migration. I hope this will be useful to anyone who is planning to migrate on-premises databases to the Oracle Cloud. Happy migrations, folks!