Help Sheet - Appliance Import Procedures
Follow the tasks in this help sheet after you log in to the host where you will be mounting the Data Transfer Appliance (import appliance) and copying data.
You must run all Command Line Interface (CLI) tasks as sudo.
- Have the IP address for the import appliance.
- Have the access token for the import appliance.
- Have the transfer job OCID.
- Have the appliance label information.
- Ensure that no firewall blocks communication between the import appliance and the host where it will be mounted.
- Open the firewall to the Data Transfer service IP address range: 140.91.0.0/16.
- Open the firewall to the Object Storage IP address range: 134.70.0.0/17.
- If an HTTP proxy is needed to allow access to the internet, set the proxy environment variable. The proxy environment allows the Oracle Cloud Infrastructure CLI to communicate with the Data Transfer Appliance Management Service and with the import appliance over a local network connection.
export HTTPS_PROXY=http://www-proxy.myorg.com:80
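If the proxy would otherwise intercept traffic to the appliance on your local network, you may also need to exclude its address from proxying. A minimal sketch, assuming the CLI honors the common NO_PROXY convention:
export NO_PROXY=10.0.0.5   # hypothetical appliance IP; substitute your appliance's address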
- Switch to root using sudo and install the NFS utilities if they are not already installed (first command for RHEL and OEL, second command for Debian and Ubuntu):
sudo yum install nfs-utils
sudo apt-get install nfs-common
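To confirm that the utilities are present, you can query the package manager (first command for RHEL/OEL, second for Debian/Ubuntu):
rpm -q nfs-utils
dpkg -s nfs-common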
- Continue as root.
- Initialize the appliance. Have the import appliance access token ready.
oci dts physical-appliance initialize-authentication --job-id job_id --appliance-cert-fingerprint appliance_cert_fingerprint --appliance-ip appliance_ip --appliance-label appliance_label
When prompted, supply the access token, and answer y to permit overwriting of data.
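For illustration, a hypothetical invocation with placeholder values filled in (none of these are real identifiers):
# all values below are hypothetical placeholders
oci dts physical-appliance initialize-authentication --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-cert-fingerprint example_fingerprint --appliance-ip 10.0.0.5 --appliance-label XAKWEGKZT9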
- Configure the import appliance encryption:
oci dts physical-appliance configure-encryption --job-id job_id --appliance-label appliance_label
- Unlock the import appliance:
oci dts physical-appliance unlock --job-id job_id --appliance-label appliance_label
- Create an NFS dataset:
oci dts nfs-dataset create --name dataset_name
- Export the dataset:
oci dts nfs-dataset set-export --name dataset_name --rw true --world true
- Activate the dataset:
oci dts nfs-dataset activate --name dataset_name
- Check that the dataset is exported:
showmount -e appliance_ip
- Mount the dataset:
mkdir -p /mnt/mountpoint_name
mount appliance_ip:/dataset_name /mnt/mountpoint_name
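For example, assuming a hypothetical appliance at 10.0.0.5 exporting a dataset named nfs-dts-1, the mount and a quick verification might look like:
# 10.0.0.5 and nfs-dts-1 are hypothetical; substitute your appliance IP and dataset name
mkdir -p /mnt/nfs-dts-1
mount 10.0.0.5:/nfs-dts-1 /mnt/nfs-dts-1
df -h /mnt/nfs-dts-1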
- Copy files to the DTA. The tar method is recommended, but other copy methods such as cp or rsync can also be used. Here are two examples, followed by an optional integrity check; the second example also records an MD5 checksum of each source file as it is archived, saving the list to tarzip_md5:
- tar -cvzf /mnt/nfs-dts-1/filesystem.tgz filesystem/
- tar cvzf /mnt/nfs-dts-1/filesystem.tgz filesystem/ | xargs -I '{}' sh -c "test -f '{}' && md5sum '{}'" | tee tarzip_md5
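After the copy completes, you can sanity-check the archive on the appliance before deactivating the dataset. This sketch assumes the tar examples above and only verifies that the archive is readable end to end:
tar -tzf /mnt/nfs-dts-1/filesystem.tgz > /dev/null && echo "archive readable"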
- Deactivate the dataset:
oci dts nfs-dataset deactivate --name dataset_name
- Seal the dataset. Note that this can be a long-running process.
oci dts nfs-dataset seal --name dataset_name [--wait]
- Monitor the sealing process:
oci dts nfs-dataset seal-status --name dataset_name
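Since sealing can run for a long time, one way to poll is to rerun the status command on an interval, for example with watch (the 60-second interval is an arbitrary choice):
watch -n 60 oci dts nfs-dataset seal-status --name dataset_name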
- Download the dataset seal manifest:
oci dts nfs-dataset get-seal-manifest --name dataset_name --output-file output_file_path
- Finalize the import appliance:
oci dts physical-appliance finalize --job-id job_id --appliance-label appliance_label
- Shut down the import appliance by selecting option #8 on the terminal emulation host.
- Have the import appliance packed and shipped to Oracle.
- Monitor the status of the data upload from the DTA to your object storage bucket in OCI:
oci dts appliance show --job-id job_id --appliance-label appliance_label
- Once the data upload is finished, check the object storage bucket from the Console and get the upload file location.
- Download the upload file and review it to understand what was transferred:
oci os object get --namespace object_storage_namespace --bucket-name bucket_name --name object_name --file file_location
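For illustration, a hypothetical download of an upload report (the namespace, bucket, and object names below are placeholders, not real values):
# mynamespace, mybucket, and upload_summary.txt are hypothetical placeholders
oci os object get --namespace mynamespace --bucket-name mybucket --name upload_summary.txt --file /tmp/upload_summary.txt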
- Close the import job:
oci dts job close --job-id job_id
- Delete the import appliance associated with the import job:
oci dts appliance delete --job-id job_id --appliance-label appliance_label
- Delete the transfer job:
oci dts job delete --job-id job_id