Run a Data Load
After you create the data load, you are taken to the Data Load Detail page that displays the details that you need to run a data load.
It includes the details of the source schema, the data entities that are loaded from the source schema, and the details of the target schema. You can choose the action that you want to apply to each data entity - recreate, truncate, or append - and load tables in bulk to the target schema. When you run a data load, multiple jobs run in the background to complete the request.
You can run a data load in either of the following ways:
- From the Data Load Detail page
- From the Workflow Details page
Note
Data load is not supported for Oracle Object Storage connections.
Running a Data Load from the Data Load Detail Page
The Data Load Detail page displays the information that you need to run a data load. You can apply different actions - incremental merge, incremental append, recreate, truncate, append - on the data entities before loading them to the target schema.
APPLIES TO: Data Transforms that is available as a separate listing on Marketplace called Data Integrator: Web Edition.
If the data load is large, you might want to increase the memory of the ODI Agent to avoid any issues. Follow the instructions in Increase the Memory of ODI Agent before you run the data load.
To run a data load from the Data Load Detail Page:
- In the Data Load Detail page, select the data entities that you want to move to the target schema.
To filter the list, you can do one of the following:
- Enter the name or part of the name in the Name text box. This search returns data entities that include the specified string in the name. For example, the search string AD returns results such as ADMIN, ADDRESS, BADGE, UPGRADE, WORKLOAD, and so on.
- Turn on the Use Regular Expression toggle to use pattern matching to search for particular strings of characters. For example, the search string CO.* returns results such as CONTACT, COUNT, COUNTRY and so on.
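The difference between the two filter modes can be sketched as follows (the entity names are hypothetical, and Data Transforms performs this filtering in the UI, not with this code):

```python
import re

# Hypothetical list of data entity names (illustrative only)
entities = ["ADMIN", "ADDRESS", "BADGE", "CONTACT", "COUNT", "COUNTRY"]

# Name text box: a plain substring search anywhere in the name
substring_matches = [e for e in entities if "AD" in e]

# Use Regular Expression toggle: pattern matching instead of a substring search
regex_matches = [e for e in entities if re.search(r"CO.*", e)]
```

Here the substring search for AD matches ADMIN, ADDRESS, and BADGE, while the pattern CO.* matches CONTACT, COUNT, and COUNTRY.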
You can select multiple data entities and load them to the target schema. You can also filter the displayed list using the following options:
- All - Displays all the data entities that match the search criteria.
- Selected - Displays all the rows that you selected.
- Unselected - Displays all the unselected rows.
- Invalid - Displays all the invalid rows.
Note
These options display the list of data entities based on the search criteria. To view the list of all the data entities, clear any applied filters.
- Click the required icon to choose any of the following actions:
- Incremental Merge - Updates the data in the selected column by comparing the source table with the target table based on the specified merge key. To use this option, select the column that you want to merge, select the merge key, and then click the Validate icon to validate the selected values.
- Incremental Append - Updates data in the selected column in the target schema. To use this option, select the column that you want to update and click the Validate icon to validate the selection.
- Recreate - If the table is already present in the target schema, drops the existing table and recreates it.
Note
This option is not available for data entities that are loaded using OCI GoldenGate.
- Truncate - If the table is already present in the target schema, deletes all the data from the selected table. Nothing is dropped.
Note
For Delta Share data loads, the Data Load Detail page only includes the option. You cannot apply different actions - incremental merge, incremental append, recreate, truncate, append - on the data entities before loading them to the target schema. This is to make sure that the data is consistent between the Delta Sharing server and the target schema.
- Append - If the table is already present in the target schema, adds rows to the table.
- Do Not Load – Skips the selected data entity from the data load job. After you click Save, these data entities are no longer available for future data load jobs.
You can select multiple data entities and apply different actions. The unsaved rows are highlighted in bold.
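The incremental merge action described above can be sketched conceptually as follows. The row structures here are hypothetical, and the actual merge runs inside the target database rather than in application code; this only illustrates the matched-update, unmatched-insert semantics:

```python
def incremental_merge(target_rows, source_rows, merge_key):
    """Sketch of incremental-merge semantics over list-of-dict rows.

    Rows whose merge key already exists in the target are updated;
    rows with an unseen key are inserted.
    """
    indexed = {row[merge_key]: row for row in target_rows}
    for row in source_rows:
        if row[merge_key] in indexed:
            indexed[row[merge_key]].update(row)  # key matched: update the row
        else:
            target_rows.append(row)              # no match: insert a new row
    return target_rows
```

For example, merging a source row that shares a key with a target row overwrites the target row's columns, while a source row with a new key is appended.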
Note
These options are not available for Delta Share connections.
- To specify how you want to store the source column names in the target tables, click Advanced Options on the right side of the Data Load Detail page. Choose one of the following:
- Retain original names by enclosing all names with delimiters - Creates column names in the target table exactly as they appear in the source tables.
- Use no delimiters - This is the default selection. Converts all the column names to upper case and replaces spaces and special characters with underscores.
The following options are applicable to reserved words such as Date, Timestamp, Start, and so on.
- Enclose with delimiters - This is the default selection. Encloses column names that are reserved words with delimiters (not all column names).
- Use a prefix - Adds the specified prefix to column names that are reserved words (not all column names).
For column names that have the same name after conversion, the names are suffixed with a numeric value to maintain uniqueness. For example, the column names Date, date, DATE, Item_@Code, Item$$Code, and Item%%Code are created in the target table as DATE, DATE_0, DATE_1, ITEM__CODE, ITEM__CODE_0, and ITEM__CODE_1.
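The conversion described above can be sketched as follows. This is an illustration of the documented "Use no delimiters" behavior, not the product's actual implementation:

```python
import re

def target_column_names(source_names):
    """Sketch of the 'Use no delimiters' conversion: upper-case each name,
    replace spaces and special characters with underscores, and suffix
    duplicates with _0, _1, ... to keep the names unique."""
    counts = {}
    result = []
    for name in source_names:
        converted = re.sub(r"[^A-Za-z0-9_]", "_", name).upper()
        if converted in counts:
            result.append(f"{converted}_{counts[converted]}")
            counts[converted] += 1
        else:
            counts[converted] = 0
            result.append(converted)
    return result
```

Applied to the example above, this reproduces the documented results: the first occurrence of a converted name keeps the base name, and later collisions receive numeric suffixes starting at _0.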
Note
Once the data load is run, the selected options are applied and retained for all subsequent runs. You cannot change the configuration.
- Click the Save icon to save the changes. A green checkmark in the row indicates that the changes are saved.
- To start the data load:
- Click the execute icon.
- For GoldenGate data loads, click the corresponding execute icon.
A confirmation prompt appears when the data load starts successfully.
To check the status of the data load, see the Status panel on the right below the Target Schema details. For details about the Status panel, see Monitor Status of Data Loads, Data Flows, and Workflows. This panel shows links to the jobs that execute to run this data load. Click the link to monitor the progress on the Job Details page. For more information about jobs, see Create and Manage Jobs.
Running a Data Load from the Workflow Details Page
You can add multiple data loads to a workflow along with data flows or workflows and run them as separate steps. The left panel of the Workflow Details page lists the data flows, workflows, and data loads that are available for use.
- Data Loads - This folder lists all the data loads that you have created in the local Data Transforms instance. When you select a step in the workflow that is a local data load, the Properties Panel available on the right side of the design canvas displays the Type as Data Load and the Linked Object as Home > <name of Data Load>.
- Remote Data Loads - This folder lists all the data loads that you have created in a remote Data Transforms instance. See Create a Data Transforms Connection for Remote Data Load. When you select a step in the workflow that is a remote data load, the Properties Panel available on the right side of the design canvas displays the Type as Remote Data Load and the Linked Object as <Name of the Data Transforms connection> > <name of Data Load>.
After you have added the data loads to the workflow, click the execute icon to run them.
- Increase the Memory of ODI Agent
If the data that you are loading from the source schema is large, you may want to increase the memory of the ODI Agent to avoid OutOfMemory exceptions.
Parent topic: The Data Transforms Page
Increase the Memory of ODI Agent
APPLIES TO: Data Transforms that is available as a separate listing on Marketplace called Data Integrator: Web Edition.
To increase the memory of the ODI Agent:
- Edit the /u01/oracle/transforms_home/common/scripts/jettyServer.sh file.
- Add the -Xms1024m -Xmx4096m parameters to the java command.
- Restart the jetty server. Log in as the OPC user and execute the following commands:
ssh -i <path to id_rsa> opc@<Instance IP>
sudo su
systemctl stop jettyserver.service
systemctl start jettyserver.service
exit
Parent topic: Run a Data Load