Importing Using the Data Import Feature

Use the data import feature to import data from an Object Storage bucket to a standalone DB system.

You can import only to a standalone DB system in the same region as the Object Storage bucket. You can also import a dump while it is still being exported to an Object Storage bucket, but doing so may prevent bulk ingest from being used.

To import data to a high availability DB system, first import data to a standalone DB system and then enable high availability.

HeatWave can use bulk ingest to speed up the data import when the following conditions are met:
  • The MySQL version of the DB system is 8.4.0 or higher.
  • The CSV file is not compressed, or is compressed with zstd compression.
  • The column terminator in the CSV file is a single-byte character.
  • The table has an explicitly created primary key that does not use a prefix index. Generated invisible primary keys (GIPK) are not supported.
  • The table uses a file-per-table tablespace.
  • The table uses the dynamic row format.
  • The table has no generated (virtual or stored) columns.
  • The table has no CHECK constraints.
  • The table uses only the following supported data types:
    • INTEGER, INT, SMALLINT, TINYINT, MEDIUMINT, BIGINT (UNSIGNED is supported)
    • NUMERIC, DECIMAL (UNSIGNED is not supported; it is deprecated for these types as of MySQL 8.0.17)
    • FLOAT, DOUBLE (UNSIGNED is not supported; it is deprecated for these types as of MySQL 8.0.17)
    • CHAR, VARCHAR (no large-object support; the record must fit within the page)
    • DATE, DATETIME
  • In MySQL 9.0 or higher, the table can also use the following supported data types:
    • TINYTEXT, TEXT, MEDIUMTEXT, LONGTEXT
    • JSON
    • VARCHAR (supported up to the maximum length)
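As a concrete illustration, the following sketch shows a table definition that satisfies the bulk ingest conditions above: an explicit primary key without a prefix index, no generated columns or CHECK constraints, and only supported data types. The table and column names are examples only; ROW_FORMAT=DYNAMIC and file-per-table tablespaces are the InnoDB defaults in MySQL 8.4, shown here explicitly for clarity.

```shell
# Print example DDL for a bulk-ingest-friendly table; run it against the
# source database with your own client. All identifiers are placeholders.
DDL='CREATE TABLE orders (
  order_id BIGINT UNSIGNED NOT NULL,
  customer VARCHAR(64),
  amount DECIMAL(12,2),
  created DATETIME,
  PRIMARY KEY (order_id)
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;'
echo "$DDL"
```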
Note

Importing data using the data import feature in the Console is recommended. The import is managed by the HeatWave Service and is optimized for fast import processing.
Note

Use the latest version of MySQL Shell, with the ocimds option enabled, to export the data. This avoids potential data import errors.
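For reference, a MySQL Shell dump targeting an Object Storage bucket with ocimds enabled might look like the sketch below. The connection string, bucket name, and namespace are placeholders for your environment; the block prints the command for copy/paste rather than executing it, since running it requires MySQL Shell and access to your tenancy.

```shell
# Placeholders (assumptions): replace with your own connection and bucket details.
SRC="admin@source-host:3306"
BUCKET="my-bucket"
NAMESPACE="my-tenancy-namespace"

# util.dumpInstance with ocimds:true runs HeatWave Service compatibility
# checks during the export. Printed here rather than executed.
CMD="mysqlsh $SRC --js -e 'util.dumpInstance(\"dump-prefix\", {osBucketName: \"$BUCKET\", osNamespace: \"$NAMESPACE\", ocimds: true})'"
echo "$CMD"
```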

Using the Console

Use the data import feature in the Console to import data from an Object Storage bucket to a MySQL DB system. Ensure that you define enough storage to accommodate the imported data.

  1. Open the navigation menu, and select Databases. Under HeatWave MySQL, click DB Systems.
  2. Click Create DB system.
  3. Configure the DB system, and then click Show advanced options.
  4. Click the Data import tab and provide the following information:
    • PAR source URL: If you have the Pre-Authenticated Request (PAR) URL, specify the PAR URL for the bucket or bucket prefix.
    • Click here to create a PAR URL for an existing bucket: If you do not have a PAR URL, click the link to create a PAR URL for an existing bucket, and provide the following information:
      • Select a bucket in <CompartmentName>: Select the Object Storage bucket that contains your dump.
      • Configure prefix:
        • Select the prefix: Select the prefix from the list of valid prefixes.
        • Enter a prefix: Select this option to define a bucket prefix, similar to a folder name. The prefix must exist in the selected bucket. Prefix names take the format prefixName/. Omitting the forward slash delimiter in the PAR results in an invalid URL. You can specify paths with multiple levels of folders, for example, prefixName/prefixName1/prefixName2/.

        HeatWave supports only folder-type prefixes. Filename-matching prefixes are not supported.

      • Specify an expiration time for the PAR: Select an expiration time for the PAR. The default value is one week.
  5. Click Create and set PAR URL to generate the PAR URL and populate the PAR source URL field with the generated PAR URL.
  6. Click Create.
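Because omitting the trailing slash on a prefix produces an invalid PAR URL, a quick sanity check before pasting the URL into the PAR source URL field can save a failed import. The helper below is a hypothetical convenience, not part of any Oracle tooling; the example URLs use placeholder region, namespace, bucket, and token values.

```shell
# Hypothetical helper: verify that a PAR source URL ends with the "/" delimiter.
check_par_url() {
  case "$1" in
    */) echo "ok" ;;
    *)  echo "missing trailing /" ;;
  esac
}

check_par_url "https://objectstorage.us-ashburn-1.oraclecloud.com/p/TOKEN/n/namespace/b/bucket/o/dump-prefix/"
check_par_url "https://objectstorage.us-ashburn-1.oraclecloud.com/p/TOKEN/n/namespace/b/bucket/o/dump-prefix"
```

The first call prints `ok`; the second prints `missing trailing /`.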