Reference
This section contains reference materials.
Set Up Source and Target Configurations for Online Migrations for Your GoldenGate Instance
To use your own GoldenGate instance, Oracle Cloud Infrastructure Database Migration requires a few additional prerequisite tasks: create GoldenGate users on the source database, and create or unlock the GoldenGate user on the target database.
You can use a single user for the database connection if it has the required privileges. For the source database, the users shown below for the CDB and PDBs have all the privileges needed for both GoldenGate and Data Pump.
Create GoldenGate Users on the Source Database
On the source database, you must create a GoldenGate administration user, for example ggadmin. If the source database is multitenant, create the user in the PDB, and also create a different user in the CDB root, for example c##ggadmin.

To create ggadmin, connect to the PDB and run the following commands:

CREATE TABLESPACE gg_admin DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER ggadmin IDENTIFIED BY ggadmin_pwd DEFAULT TABLESPACE gg_admin TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON gg_admin;
GRANT CONNECT TO ggadmin;
GRANT RESOURCE TO ggadmin;
GRANT CREATE SESSION TO ggadmin;
GRANT SELECT_CATALOG_ROLE TO ggadmin;
GRANT DV_GOLDENGATE_ADMIN TO ggadmin;
GRANT DV_GOLDENGATE_REDO_ACCESS TO ggadmin;
GRANT ALTER SYSTEM TO ggadmin;
GRANT ALTER USER TO ggadmin;
GRANT DATAPUMP_EXP_FULL_DATABASE TO ggadmin;
GRANT DATAPUMP_IMP_FULL_DATABASE TO ggadmin;
GRANT SELECT ANY DICTIONARY TO ggadmin;
GRANT SELECT ANY TRANSACTION TO ggadmin;
GRANT INSERT ANY TABLE TO ggadmin;
GRANT UPDATE ANY TABLE TO ggadmin;
GRANT DELETE ANY TABLE TO ggadmin;
GRANT LOCK ANY TABLE TO ggadmin;
GRANT CREATE ANY TABLE TO ggadmin;
GRANT CREATE ANY INDEX TO ggadmin;
GRANT CREATE ANY CLUSTER TO ggadmin;
GRANT CREATE ANY INDEXTYPE TO ggadmin;
GRANT CREATE ANY OPERATOR TO ggadmin;
GRANT CREATE ANY PROCEDURE TO ggadmin;
GRANT CREATE ANY SEQUENCE TO ggadmin;
GRANT CREATE ANY TRIGGER TO ggadmin;
GRANT CREATE ANY TYPE TO ggadmin;
GRANT CREATE ANY VIEW TO ggadmin;
GRANT ALTER ANY TABLE TO ggadmin;
GRANT ALTER ANY INDEX TO ggadmin;
GRANT ALTER ANY CLUSTER TO ggadmin;
GRANT ALTER ANY INDEXTYPE TO ggadmin;
GRANT ALTER ANY OPERATOR TO ggadmin;
GRANT ALTER ANY PROCEDURE TO ggadmin;
GRANT ALTER ANY SEQUENCE TO ggadmin;
GRANT ALTER ANY TRIGGER TO ggadmin;
GRANT ALTER ANY TYPE TO ggadmin;
GRANT CREATE DATABASE LINK TO ggadmin;
GRANT EXECUTE ON dbms_lock TO ggadmin;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('ggadmin');
To create c##ggadmin, connect to the CDB root and run the following commands:

CREATE USER c##ggadmin IDENTIFIED BY cggadmin_pwd CONTAINER=ALL DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS;
GRANT CONNECT TO c##ggadmin CONTAINER=ALL;
GRANT RESOURCE TO c##ggadmin CONTAINER=ALL;
GRANT CREATE TABLE TO c##ggadmin CONTAINER=ALL;
GRANT CREATE VIEW TO c##ggadmin CONTAINER=ALL;
GRANT CREATE SESSION TO c##ggadmin CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO c##ggadmin CONTAINER=ALL;
GRANT DV_GOLDENGATE_ADMIN TO c##ggadmin CONTAINER=ALL;
GRANT DV_GOLDENGATE_REDO_ACCESS TO c##ggadmin CONTAINER=ALL;
GRANT ALTER SYSTEM TO c##ggadmin CONTAINER=ALL;
GRANT ALTER USER TO c##ggadmin CONTAINER=ALL;
GRANT SELECT ANY DICTIONARY TO c##ggadmin CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO c##ggadmin CONTAINER=ALL;
GRANT EXECUTE ON dbms_lock TO c##ggadmin CONTAINER=ALL;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('c##ggadmin',CONTAINER=>'ALL');
Create or Unlock the GoldenGate User on the Target Database
On co-managed targets:

If the target is not Oracle Autonomous Database, create a ggadmin user in the target PDB. This user is similar to the ggadmin user you created on the source database, but requires more privileges. See Establishing Oracle GoldenGate Credentials for information about the privileges required for a "Replicat all modes" user.

On Autonomous targets:

Autonomous Database has a pre-created ggadmin user that you must unlock. Run the following commands on the GoldenGate target instance.
- Connect to the target database as admin:

export TNS_ADMIN=/u02/deployments/Target/etc
export ORACLE_HOME=/u01/app/client/oracle19
$ORACLE_HOME/bin/sqlplus admin/ADW_password@ADW_name

An example of the ADW_name would be targetatp_high.

- Unlock ggadmin:

SQL> ALTER USER ggadmin IDENTIFIED BY ggadmin_password ACCOUNT UNLOCK;

- Verify that ggadmin is unlocked:

export TNS_ADMIN=/u02/deployments/Target/etc
export ORACLE_HOME=/u01/app/client/oracle19
$ORACLE_HOME/bin/sqlplus ggadmin/ggadmin_password@ADW_name
Oracle Cloud Infrastructure Database Migration Service supports only those scenarios where the target database and Oracle GoldenGate both run on private IP addresses.
Database Migration Data Pump Defaults
Oracle Cloud Infrastructure Database Migration automatically sets optimal defaults for Data Pump parameters to achieve better performance and ensure security of data. There are also Data Pump errors that are ignored by default.
The following table lists the Data Pump parameters set by Database Migration, and the values they are set to. If there is a Database Migration setting available to override the default, it is listed in the Optional DMS Setting to Override column.
Table 10-1 Database Migration Data Pump Parameter Defaults
Data Pump Parameter | Default Value | Optional DMS Setting to Override |
---|---|---|
EXCLUDE | index (ADW-S), cluster (ADB-D, ADB-S), indextype (ADW-S), materialized_view (ADW-S), materialized_view_log (ADW-S), materialized_zonemap (ADW-S), db_link (ADB), statistics (user-managed target and ADB) | Excluded Objects: Object Owner and Object Name |
PARALLEL | For a user-managed database: sum of (2 x number of physical CPUs) per node, capped at 32. For Autonomous Database: the number of OCPUs. | Import Parallelism Degree, Export Parallelism Degree. See Configuring initial load advanced options for Oracle migrations. |
CLUSTER | Database Migration always sets cluster mode by default. | Cluster. See Configuring initial load advanced options for Oracle migrations. |
COMPRESSION | COMPRESSION_ALGORITHM is set to BASIC (Oracle Database 11.2) or MEDIUM (Oracle Database 12.1 and later). COMPRESSION is set to ALL. | N/A |
ENCRYPTION | ENCRYPTION is set to ALL. ENCRYPTION_ALGORITHM is set to AES128. ENCRYPTION_MODE is set to PASSWORD. | N/A |
FILESIZE | FILESIZE is set to 5G. | N/A |
FLASHBACK_SCN | For offline migrations, sets FLASHBACK_TIME to the current system time. For online migrations, uses neither FLASHBACK_SCN nor FLASHBACK_TIME. | N/A |
REUSE_DUMPFILES | Always set to YES. | N/A |
TRANSFORM | Always sets OMIT_ENCRYPTION_CLAUSE:Y for Oracle Database 19c and later target releases. Always sets LOB_STORAGE:SECUREFILE. For an Autonomous Database target, the following transforms are set by default: SEGMENT_ATTRIBUTES:N, DWCS_CVT_IOTS:Y, CONSTRAINT_USE_DEFAULT_INDEX:Y. | Allows additional TRANSFORM values to be specified. |
METRICS | Always set to Yes. | N/A |
LOGTIME | Always set to ALL. | N/A |
TRACE | Always set to 1FF0b00. | N/A |
LOGFILE | Always set to the Data Pump job name and created under the specified export or import directory object. For example, if the Data Pump job is ZDM_2_DP_EXPORT_8417 and the directory object used is DATA_PUMP_DIR, then the operation log is created with the name ZDM_2_DP_EXPORT_8417.log under DATA_PUMP_DIR. | N/A |
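The PARALLEL default described above can be expressed as a small sketch. The function and its arguments are invented for illustration and are not part of any Database Migration API:

```python
def default_parallel(user_managed: bool, physical_cpus_per_node: int = 0,
                     node_count: int = 1, ocpus: int = 0) -> int:
    """Sketch of the PARALLEL default from Table 10-1.

    User-managed database: sum of (2 x physical CPUs) per node, capped at 32.
    Autonomous Database: the number of OCPUs.
    """
    if user_managed:
        return min(32, 2 * physical_cpus_per_node * node_count)
    return ocpus

# A 2-node RAC system with 10 physical CPUs per node: 2 x 10 x 2 = 40,
# which is capped at 32.
print(default_parallel(True, physical_cpus_per_node=10, node_count=2))  # 32
print(default_parallel(False, ocpus=8))                                 # 8
```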
Oracle Data Pump errors that are ignored by default are as follows:

- ORA-31684: XXXX already exists
- ORA-39111: Dependent object type XXXX skipped, base object type
- ORA-39082: Object type ALTER_PROCEDURE: XXXX created with compilation warnings

This is not configurable.
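The ignore behavior above can be sketched as a small classifier. The helper function is hypothetical, for illustration only; the list of codes comes from the documentation and is fixed by the service:

```python
# Codes that Database Migration ignores by default (not configurable).
IGNORED_ORA_CODES = {"ORA-31684", "ORA-39111", "ORA-39082"}

def is_ignored(error_line: str) -> bool:
    """Return True if a Data Pump error line starts with an ignored code."""
    code = error_line.split(":", 1)[0].strip()
    return code in IGNORED_ORA_CODES

print(is_ignored('ORA-31684: Object type USER:"HR" already exists'))  # True
print(is_ignored("ORA-39002: invalid operation"))                     # False
```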
Oracle migration phases
A migration job in Oracle Cloud Infrastructure Database Migration runs in operational phases as a workflow.
The phases in Database Migration are shown in the console with user-friendly names (DMS Phase), and in the REST API with codes prefixed with "ODMS_", as shown in the table below.
Note that Database Migration harnesses the Zero Downtime Migration tool to run the migration job workflow, so in the logs the migration phase names have a "ZDM_" prefix. Also note that one Database Migration phase corresponds to one or more Zero Downtime Migration phases, which gives you a more granular view of the workflow.
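The prefix convention above can be illustrated with a small helper, useful when correlating API responses with log entries. The function is invented for this example:

```python
def phase_source(phase_code: str) -> str:
    """Classify a phase code by its documented prefix:
    "ODMS_" codes appear in the Database Migration REST API,
    "ZDM_" codes appear in the Zero Downtime Migration logs."""
    if phase_code.startswith("ODMS_"):
        return "Database Migration REST API"
    if phase_code.startswith("ZDM_"):
        return "Zero Downtime Migration log"
    return "unknown"

print(phase_source("ODMS_VALIDATE_SRC"))   # Database Migration REST API
print(phase_source("ZDM_MONITOR_GG_LAG"))  # Zero Downtime Migration log
```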
Table 10-2 Database Migration Process Phase Descriptions
DMS Phase Name in Console (API Codes) | Description | ZDM Phase Name | Description |
---|---|---|---|
Validate ( |
Performs validation of the source and target database, the GoldenGate Hub, and Data Pump configuration. |
Validate Source ( |
Validates the source database access credentials and database parameter settings. |
Validate Target ( |
Verifies that the target database exists and that the database type is Autonomous Database, and validates access credentials, security, and connectivity. |
||
Validate Pre-migration Advisor ( |
Cloud Pre-Migration Advisor Tool is run. |
||
Validate GoldenGate Hub ( |
Verifies GoldenGate Microservices REST endpoints, software configuration, health, and connectivity to the source and target databases. |
||
Validate Datapump Source Settings ( |
Validates the export directory object (if applicable), and checks for sufficient space and permission for specified user in the source database to export dumps. Checks if the specified Oracle Cloud Object Store buckets, data bucket, and wallet bucket are accessible from the source. Also validates the proxy configuration if applicable. |
||
Validate Datapump Target Settings ( |
Verifies that the Data Pump import directory object exists. If a pre-existing DBLINK was specified, checks if it exists and is valid, and ensures that the ADB requirements for the DBLINK and wallet files are met. |
||
Prepare ( |
Prepares for and starts the GoldenGate Extract process, and enables supplemental logging. |
Prepare GoldenGate Hub ( |
Registers database connection details and credentials with GoldenGate Microservices. |
|
Creates GoldenGate heartbeat table in the source database. If the table already exists, sets update frequency to 60 seconds. |
||
|
Prepares the source database schemas for instantiation by enabling schema level supplemental logging. |
||
Create GoldenGate Source Extract ( |
Starts the GoldenGate Extract process at the source database |
||
Prepare (PREPARE_INITIAL_LOAD) |
Creates any necessary directory objects for Data Pump, and creates a DBLINK, if applicable. |
Prepare Source Datapump ( |
Creates a new directory object for Data Pump, if required. Creates OCI Auth Token to access OCI OSS bucket if required. |
Prepare Target Datapump ( |
Creates a new directory object for Data Pump, if required. Stores OCI Auth token in the database for secure OSS access. If migrating via DBLINK, and a DBLINK must be created, creates the necessary database credentials to access the source and create a new DBLINK. Ensures Autonomous Database security requirements are met using DBLINK over SSL. |
||
Datapump Source Export ( |
Starts and monitors the Data Pump Export on the source database. |
Datapump Source Export |
Starts and monitors the Data Pump Export on the source database. |
Upload Source Dump Files ( |
Uploads Data Pump dump files from the source to OCI OSS. |
Upload Source Dump Files ( |
Uploads Data Pump dump files from the source to OCI OSS. |
Datapump Target Import ( |
Starts import of Data Pump Dumps to the target database, either from the OCI OSS bucket or via DBLINK, and monitors the Data Pump import progress. |
Datapump Target Import ( |
Starts import of Data Pump Dumps to the target database, either from the OCI OSS bucket or via DBLINK, and monitors the Data Pump import progress. |
Post Datapump ( |
Removes directory objects, access credentials, and DBLINK that were created for Data Pump by Database Migration. |
Post Source Datapump ( |
Removes any Data Pump directory object created by Database Migration. |
Post Target Datapump ( |
Fixes any invalid objects in the target database. Removes the database access and OCI OSS access credentials that were created for the migration. Removes any DBLINK created by Database Migration. Optionally, removes source database dumps stored in OCI OSS bucket. |
||
Prepare GoldenGate ( |
Prepares for GoldenGate replication. |
|
Creates the GoldenGate heartbeat table in the target database. If the table already exists, sets update frequency to 60 seconds. |
|
Creates GoldenGate checkpoint table in the target database to track Replicat progress. |
||
|
Creates GoldenGate Replicat process for the target database. |
||
START_GG_REPLICAT_TGT |
Starts GoldenGate Replicat process for the target database. |
||
Monitor GoldenGate Lag ( |
Polls the GoldenGate checkpoint and heartbeat data to measure end-to-end apply lag until lag decreases below desired threshold. |
Monitor GoldenGate Lag ( |
Polls the GoldenGate checkpoint and heartbeat data to measure end-to-end apply lag until lag decreases below desired threshold. |
Switchover App ( |
When the user is ready to perform the switchover and resumes the migration job that is currently in a waiting state after the Monitor Replication Lag phase, the service stops GoldenGate Extract, waits for GoldenGate Replicat to complete apply, and then stops GoldenGate Replicat. |
Switchover App ( |
When the user is ready to perform the Switchover and resumes the migration job that is currently in a waiting state after the Monitor Replication Lag phase, then the service stops GoldenGate Extract, waits for GoldenGate Replicat to complete apply, and then stops GoldenGate Replicat. |
Cleanup ( |
Performs cleanup operations such as deleting GoldenGate Extract and GoldenGate Replicat processes and connection details on source and target database respectively, removing Autonomous Database access to wallet, and so on. |
Post Switchover ( |
Performs post-switchover actions for the target database. |
|
Deletes the GoldenGate Extract process on the source database.
||
|
Deletes the GoldenGate Replicat process on the target database.
||
|
Disables schema level supplemental logging on the source database.
||
|
Drops the GoldenGate heartbeat table in source database, if the table was created by Database Migration. Otherwise, resets update frequency to original setting. |
||
|
Drops the GoldenGate checkpoint table in the target database. |
||
|
Drops the GoldenGate heartbeat table in the target database, if the table was created by Database Migration. Otherwise, resets the update frequency to the original value. |
||
Clean GoldenGate Hub ( |
Deletes the database connection details and credentials saved with GoldenGate Microservices.
||
Post Actions ( |
Removes Autonomous Database access wallet from Database Migration. |
||
CLEANUP_SRC |
Deletes the Cloud Pre-Migration Advisor Tool related binaries on source database servers. |
MySQL migration phases
A migration job in Oracle Cloud Infrastructure Database Migration runs in operational phases as a workflow.
The phases in Database Migration service are shown in the console with user-friendly names (DMS Phase), and in the REST API with the codes prefixed with "ODMS_", as shown in the table below.
Table 10-3 MySQL Validation/Evaluation Job Phases (offline and online)
Phase Name in Console | Phase enum value | Description |
---|---|---|
Validate source | | Validates the source database connectivity. |
Validate target | ODMS_VALIDATE_TGT | Validates the target database connectivity. |
Validate source metadata | ODMS_FETCH_METADATA_SRC | Validates the source database basic configuration. |
Validate target metadata | ODMS_FETCH_METADATA_TGT | Validates the target database basic configuration. |
Run premigration advisor | ODMS_VALIDATE_PREMIGRATION_ADVISOR | Runs MySQL Evaluator (Export/Import and GoldenGate checks). |
Table 10-4 MySQL Migration Job Phases (offline)
Phase Name in Console | Phase enum value | Description |
---|---|---|
Validate | | Validates the source and target database connectivity. |
Export initial load and upload data | ODMS_INITIAL_LOAD_EXPORT_DATA_UPLOAD | Performs the MySQL source dump to Object Storage. |
Import initial load | ODMS_INITIAL_LOAD_IMPORT | Performs the MySQL target import from Object Storage. |
Validate target configuration | ODMS_FETCH_METADATA_TGT | Validates the target database basic configuration. |
Cleanup | ODMS_CLEANUP | Cleans up temporary artifacts and object storage. |
Table 10-5 MySQL Migration Job Phases (online GGS)
Phase Name in Console | Phase enum value | Description |
---|---|---|
Initialize replication infrastructure | | Sets up the GGS deployment, connections, and networking. |
Validate | ODMS_VALIDATE | Validates the source and target database connectivity. |
Export initial load and upload data | ODMS_INITIAL_LOAD_EXPORT_DATA_UPLOAD | Performs the MySQL source dump to Object Storage. |
Import initial load | ODMS_INITIAL_LOAD_IMPORT | Performs the MySQL target import from Object Storage. |
Prepare | ODMS_PREPARE | Sets up the GoldenGate Replicat to apply changes to the target. |
Monitor replication lag | ODMS_MONITOR_REPLICATION_LAG | Monitors the GoldenGate replication lag. The replication lag must be equal to or lower than the value defined when creating the migration before the job moves to the next phase. |
Switchover | ODMS_SWITCHOVER | Stops replication based on the switchover logic. Extract and Replicat are monitored to ensure that there are no pending transactions; after confirmation, Extract and Replicat are stopped. |
Cleanup | ODMS_CLEANUP | Cleans up temporary artifacts and object storage. |
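The Monitor replication lag phase described above polls until the lag is at or below the configured threshold. As a sketch, with a caller-supplied lag source standing in for the real GoldenGate heartbeat data (all names here are invented for illustration):

```python
import time

def monitor_replication_lag(get_lag_seconds, threshold_seconds,
                            poll_interval=0.0):
    """Poll the current apply lag until it is equal to or lower than
    the threshold defined when the migration was created."""
    while True:
        lag = get_lag_seconds()
        if lag <= threshold_seconds:
            return lag
        time.sleep(poll_interval)

# Simulated lag readings that shrink over time:
readings = iter([120, 45, 9, 3])
final = monitor_replication_lag(lambda: next(readings), threshold_seconds=10)
print(final)  # 9
```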
Database Migration Events for OCI Monitoring
Oracle Cloud Infrastructure Database Migration service emits events in Oracle Cloud Infrastructure (OCI), which are structured messages that indicate state changes in resources.
You can define rules in the OCI Events service to get notified of events happening in an OCI native service, and use the Notifications service (ONS) to send notifications or trigger actions.
Table 10-6 Database Migration Service Event Types
Resource | Attributes (Common for Object) | Event name | Notes |
---|---|---|---|
Migration | | Migration Clone Begin | When a migration clone request is received. |
 | | Migration Clone End | When a migration clone request is completed. |
 | | Migration Create Begin | When a migration create request is received. |
 | | Migration Create End | When a migration create request is completed. |
 | | Migration Delete Begin | When a migration delete request is received. |
 | | Migration Delete End | When a migration delete request is completed. |
 | | Migration Evaluate Begin | When a migration evaluate request is received. |
 | | Migration Evaluate End | When a migration evaluate request is completed. |
 | | Migration Start Begin | When a migration start request is received. |
 | | Migration Start End | When a migration start request is completed. |
 | | Migration State Change | When a migration state change request is received. |
 | | Migration Update Begin | When a migration update request is received. |
 | | Migration Update End | When a migration update request is completed. |
Job | compartmentId, compartmentName, resourceName, resourceId, availabilityDomain, eventID | Job Abort Begin | When a job abort request is received. |
 | | Job Abort End | When a job is aborted. |
 | | Job Delete Begin | When a job delete request is received. |
 | | Job Delete End | When a job is deleted. |
 | | Job Resume Begin | When a job resume request is received. |
 | | Job Resume End | When a job resume is completed. |
 | | Job State Changed | When a job state is changed. |
 | | Job State Failed | When a job fails. |
 | | Job State Successful | When a job state is successful. |
 | | Job State Waiting | When a job is in a waiting state. |
 | | Job Update Begin | When a job update request is received. |
 | | Job Update End | When a job update request is completed. |
Connection | | Connection Create Begin | When a connection create request is sent. |
 | | Connection Create End | When a connection create request is completed. |
 | | Connection Delete Begin | When a connection delete request is sent. |
 | | Connection Delete End | When a connection delete request is completed. |
 | | Connection Update Begin | When a connection update request is sent. |
 | | Connection Update End | When a connection update request is completed. |
Phase | compartmentName, compartmentId, migrationName, migrationId, jobName, jobId, phaseName, phaseStatus | Phase Begin | Emitted when the phase starts. |
 | | Phase End | Emitted when the phase ends. |
Table 10-7 Example Use Cases
Use case | Event | Attribute filters | Action |
---|---|---|---|
To start a process when Migration XYZ is started | Migration - Start | migrationName=XYZ | Function or Streaming |
To start a function for a given job after the Data Pump file is uploaded and before import starts (you must also configure a pause after phase ZDM_UPLOAD_DATAPUMP_DUMP_FILES, and the function must call the API to resume the job) | Job - State Change | jobId=job_OCID, jobLifecycleState=PAUSED, phaseName=ZDM_UPLOAD_DATAPUMP_DUMP_FILES | Function |
To send a notification whenever a Migration in my compartment is failing | Job - State Change | compartmentId=myCompartmentId, migrationLifecycleState=FAILED | Notification |
To send a notification when Migration XYZ starts waiting on replication | Job - State Change | migrationName=XYZ, jobLifecycleState=PAUSED, phaseName=ZDM_MONITOR_GG_LAG | Notification |
To send a notification when Migration XYZ starts Data Pump export | Phase - Begin | migrationName=XYZ, phaseName=ZDM_DATAPUMP_EXPORT | Notification |
For information about the migration job phases, see Oracle migration phases.
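The attribute filters in Table 10-7 select events whose attributes contain every filter key/value. As a sketch of that matching logic (the payload shape below is invented; it is not the actual OCI event envelope):

```python
def rule_matches(filters: dict, event_attributes: dict) -> bool:
    """A rule matches when every filter key/value pair is present
    in the event's attributes."""
    return all(event_attributes.get(k) == v for k, v in filters.items())

event = {
    "migrationName": "XYZ",
    "jobLifecycleState": "PAUSED",
    "phaseName": "ZDM_MONITOR_GG_LAG",
}
waiting_on_replication = {
    "migrationName": "XYZ",
    "jobLifecycleState": "PAUSED",
    "phaseName": "ZDM_MONITOR_GG_LAG",
}
print(rule_matches(waiting_on_replication, event))                # True
print(rule_matches({"phaseName": "ZDM_DATAPUMP_EXPORT"}, event))  # False
```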
Database Migration Port Requirements
The ports required for communication when using Oracle Cloud Infrastructure Database Migration are described in the following table.
Table 10-8 Database Migration Communication Ports
Initiator | Target | Protocol | Port | Purpose |
---|---|---|---|---|
Source database servers | Oracle Cloud Object Store Service | SSL | 443 | Allows Data Pump dumps to be uploaded to Oracle Cloud Storage. |
Database Migration | Oracle Autonomous Database Serverless target | TCP | 1522 | Allows Oracle client connections to the database over Oracle's SQL*Net protocol. |
Database Migration | Oracle Autonomous Database on Dedicated Exadata Infrastructure target | TCP | 2484 | Allows Oracle client connections to the database over Oracle's SQL*Net protocol. |
Database Migration agent service host | Source and target database servers | TCP | 22 | SSH authentication-based operations to run Database Migration operational phases. Source and target database servers should accept incoming connections from the Database Migration agent service host. Not applicable to Autonomous Database targets. Note: Required only for SSH connections. |
Target database servers | Source database servers | TCP | 1521 or the applicable database SCAN listener port | Allows Oracle client connections to the database over Oracle's SQL*Net protocol. Allows redo log shipping if the source database needs to be in sync with the new primary on Oracle Cloud after switchover. Note: If you are using a non-default port number (that is, something other than port 1521) for the local listener address, then the non-default port should allow connections. |
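A quick pre-flight check of the ports in Table 10-8 can be sketched as follows. This is illustrative only; Database Migration's validation phases perform their own connectivity checks:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to host:port, e.g. 443 to Object
    Storage or 1522 to an ADB-S target; return True on success."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is a placeholder): verify the ADB-S SQL*Net port
# is reachable from the source network.
# print(port_reachable("adb.us-ashburn-1.oraclecloud.com", 1522))
```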
Configuring Network Security Rules
If you have Oracle Database or Oracle GoldenGate compute instances in private subnets, ensure their subnet security rules or network security groups allow traffic required for Database Migration jobs.
Database Migration allows you to specify a subnet to create a Private Endpoint for Database Migration Connections (Connections). Refer to steps 9 and 10 in Managing Connections. For Autonomous Database Connections, the Console pre-populates the subnet field using the Autonomous Database (ADB) subnet; however, you can use the dropdown list to select a different subnet. The corresponding Database Migration API is CreateConnection.
- The following EGRESS security rules must be configured for the subnet specified for privateEndpointDetails when creating Database Migration connections:

Rule Type | Stateful | Direction | Source Port Range | Protocol | Destination | Destination Port Range |
---|---|---|---|---|---|---|
SecurityList | No | Egress | All | TCP | CIDR (Classless Inter-Domain Routing) of subnet hosting co-managed database or Oracle Autonomous Database Serverless | 1521-1523 |
SecurityList | No | Egress | All | TCP | CIDR of subnet hosting Oracle Autonomous Database on Dedicated Exadata Infrastructure | 2484 |
SecurityList | No | Egress | All | TCP | CIDR of subnet hosting co-managed database (required only for SSH connection) | 22 |
SecurityList | No | Egress | All | TCP | CIDR of subnet hosting Oracle GoldenGate compute instance | 443 |

- The following INGRESS security rules must be configured for the subnets hosting your databases or Oracle GoldenGate compute instances:

Subnet Hosting Co-managed System

Rule Type | Stateful | Direction | Source | Source Port Range | Protocol | Destination Port Range |
---|---|---|---|---|---|---|
SecurityList | No | Ingress | CIDR of subnet specified for PrivateEndpoint for Database Migration Connection (Connection) | All | TCP | 1521-1523 |
SecurityList | No | Ingress | CIDR of subnet specified for PrivateEndpoint for Database Migration Connection (Connection) (required only for SSH connection) | All | TCP | 22 |

Subnet Hosting ADB-S

Rule Type | Stateful | Direction | Source | Source Port Range | Protocol | Destination Port Range |
---|---|---|---|---|---|---|
SecurityList | No | Ingress | CIDR of subnet specified for PrivateEndpoint for Database Migration Connection (Connection) | All | TCP | 1521-1523 |

Subnet Hosting ADB-D

Rule Type | Stateful | Direction | Source | Source Port Range | Protocol | Destination Port Range |
---|---|---|---|---|---|---|
SecurityList | No | Ingress | CIDR of subnet specified for PrivateEndpoint for Database Migration Connection (Connection) | All | TCP | 2484 |

Subnet Hosting Oracle GoldenGate Compute Instance

Rule Type | Stateful | Direction | Source | Source Port Range | Protocol | Destination Port Range |
---|---|---|---|---|---|---|
SecurityList | No | Ingress | CIDR of subnet specified for PrivateEndpoint for Database Migration Connection (Connection) for target database | All | TCP | 443 |

- Additionally, if you have configured Network Security Groups (NSGs) for ADB-S or Oracle GoldenGate compute instances, then the following INGRESS rules must be set for the Network Security Groups:

NSG Associated With ADB-S

Rule Type | Stateful | Direction | Source | Source Port Range | Protocol | Destination Port Range |
---|---|---|---|---|---|---|
NSG rule | No | Ingress | CIDR of subnet specified for PrivateEndpoint for Database Migration Connection (Connection) | All | TCP | 1521-1523 |

NSG Associated With Oracle GoldenGate Compute Instance

Rule Type | Stateful | Direction | Source | Source Port Range | Protocol | Destination Port Range |
---|---|---|---|---|---|---|
NSG rule | No | Ingress | CIDR of subnet specified for PrivateEndpoint for Database Migration Connection (Connection) for target database | All | TCP | 443 |
Network security groups (NSGs) are associated with individual virtual network interface cards (VNIC), ADBs, compute instances, and so on. You can configure INGRESS and EGRESS NSG rules.
Security lists apply to the entire subnet.
You can use both security lists and NSGs. In this case, a union of security list rules and NSG rules is applied.
For more details, see Comparison of Security Lists and Network Security Groups and If You Use Both Security Lists and Network Security Groups.
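The union behavior described above can be sketched in a few lines: traffic is allowed if it is permitted by the subnet's security lists or by any NSG attached to the VNIC. The rule representation below (a set of allowed ports) is invented for illustration:

```python
def allowed(port: int, security_list_ports: set, nsg_ports: set) -> bool:
    """Traffic is admitted if the port is allowed by the security
    lists OR by the NSG rules (the union of both rule sets)."""
    return port in (security_list_ports | nsg_ports)

print(allowed(1522, {22}, {1521, 1522}))  # True  (permitted by the NSG)
print(allowed(443, {22}, {1521, 1522}))   # False (permitted by neither)
```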
Private Endpoint Support
CreateConnection supports the following use cases for databases with private IP addresses:
- Database is in subnetA and the customer specifies subnetB to create a PrivateEndpoint:
  - SubnetA must allow INGRESS from subnetB for the relevant ports.
  - SubnetB must allow EGRESS to subnetA for the relevant ports.
- Database is in subnetA and the customer selects subnetA to create a PrivateEndpoint:
  - SubnetA's INGRESS rules must not prohibit subnetA as a source for the relevant ports.
  - SubnetA's EGRESS rules must not prohibit subnetA as a destination for the relevant ports.
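The subnetA/subnetB checks above reduce to CIDR membership tests. As a sketch using the standard library (the CIDRs below are placeholder values):

```python
import ipaddress

def cidr_allows(source_ip: str, allowed_cidr: str) -> bool:
    """Does an ingress rule whose source is allowed_cidr admit
    traffic originating from source_ip?"""
    return ipaddress.ip_address(source_ip) in ipaddress.ip_network(allowed_cidr)

# subnetB (10.0.2.0/24, example only) hosts the private endpoint;
# subnetA's ingress rule must list it as a source:
print(cidr_allows("10.0.2.15", "10.0.2.0/24"))  # True
print(cidr_allows("10.0.3.15", "10.0.2.0/24"))  # False
```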
Migrating Databases from Amazon Web Services RDS to Supported Oracle Targets
You can migrate an Oracle Database from Amazon Web Services (AWS) RDS to the supported Oracle targets using the Oracle Cloud Infrastructure Database Migration offline and online migration methods.
Configuring Secure Connections
Ensure that the Amazon RDS subnet security policy allows connections from Database Migration to the DB instance on the specified secure port. See the AWS documentation for details.

To allow Database Migration to connect to the Amazon RDS Oracle DB instance using SSL/TLS:

- Enable Secure Sockets Layer (SSL) or Transport Layer Security (TLS) in the Amazon RDS Oracle instance to secure the connection from Database Migration to the Amazon RDS Oracle instance. See Encrypting client connections with SSL for details.
- Create an orapki wallet as detailed in Updating applications to use new SSL/TLS certificates.
Configuring a Connection for an Amazon RDS Source
- Find the Amazon RDS Oracle instance endpoint (DNS name) and port number on the Connectivity & security tab of the DB instance in the RDS console. See Finding the endpoint of your Oracle DB instance for detailed help.
- In OCI Database Migration, create the Connection resource for the Amazon RDS Oracle source database, using the following guidelines:
  - In the Database Connections wizard, Database details step, select Manually configure database, choose Amazon RDS in the Database type list, and enter the full connect string with host, port, and service name in the following format: host:port/db-service-name
  - In the Connection details step, enter the database administrator credentials for the Amazon RDS Oracle source database. The user must have full Data Pump Export privileges. If you intend to use a database link to transfer the data, also set the TLS parameters.
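The connect string format above can be sketched as a small formatter. The hostname and service name below are placeholders, not real endpoints:

```python
def rds_connect_string(host: str, port: int, service_name: str) -> str:
    """Build the host:port/db-service-name string entered in the
    Database details step of the Database Connections wizard."""
    return f"{host}:{port}/{service_name}"

print(rds_connect_string(
    "mydb.abc123.us-east-1.rds.amazonaws.com", 1521, "ORCL"))
# mydb.abc123.us-east-1.rds.amazonaws.com:1521/ORCL
```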
Configuring a Migration Resource with an Amazon RDS Source
To transfer the data from AWS, you have the following options:
- Amazon Simple Storage Service (Amazon S3) Bucket
- Database link
When you create the Migration resource, in the Migration options step configure one of the initial load settings as follows:

- Datapump via database link: Enable this option to use a direct SQL*Net connection between the source and target databases. Note that using Data Pump with a database link to Autonomous Database targets requires that the source database be set up with SSL encryption. To use a database link to migrate an Amazon RDS Oracle Database schema to Oracle Autonomous Database (ADB), you must have direct network connectivity between the Amazon RDS Oracle instance and the ADB target.
- Datapump via object storage: This option lets you select the Amazon S3 bucket option so that Data Pump can temporarily store the exported database in an Amazon S3 bucket. Enter the details for the Amazon S3 bucket; this option is only shown if the source Connection is of type Amazon RDS. The bucket Name must be between 3 and 63 characters, and can consist only of lower case letters, numbers, dots (.), and hyphens (-). It must begin and end with a letter or number. The Region must be the same region as the RDS Oracle database, for example, us-east-1.

Note that you must also configure the OCI Object Storage bucket so that Database Migration can store Cloud Pre-migration Advisor Tool reports, Database Migration logs, and Data Pump logs there.
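The bucket Name rules stated above can be captured in a small validator. The regular expression is a sketch of those rules (3-63 characters; lower case letters, numbers, dots, hyphens; must begin and end with a letter or number):

```python
import re

# First and last characters must be a lower case letter or digit;
# the 1-61 characters in between may also include dots and hyphens.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def valid_bucket_name(name: str) -> bool:
    return bool(_BUCKET_RE.fullmatch(name))

print(valid_bucket_name("my-dms-dumps.2024"))  # True
print(valid_bucket_name("My_Bucket"))          # False (upper case, underscore)
print(valid_bucket_name("ab"))                 # False (shorter than 3)
```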