Creating a Job

Create and run a job in Data Science.

Ensure that you have created the necessary policies, authentication, and authorization for your jobs.

Before you begin:

  • Create a job artifact file or build a custom container.

  • To store and manage job logs, learn about logging.

  • To use storage mounts, you must have an Object Storage bucket or OCI File Storage Service (FSS) mount target and export path.

    To use FSS, you must first create the file system and the mount point. Use the custom networking option and ensure that the mount target and the job are configured with the same subnet. Configure security list rules for the subnet with the ports and protocols that File Storage requires.

    Ensure that your tenancy has service limits allocated for file-system-count and mount-target-count.

  • Use the Console to create the job by following these steps:

    1. Use the Console to sign in to a tenancy with the necessary policies.
    2. Open the navigation menu and click Analytics & AI. Under Machine Learning, click Data Science.
    3. Select the compartment that contains the project in question.

      All projects in the compartment are listed.

    4. Click the name of the project.

      The project details page opens and lists the notebook sessions.

    5. Under Resources, click Jobs.

      A tabular list of jobs in the project is displayed.

    6. Click Create job.
    7. (Optional) Select a different compartment for the job.
    8. (Optional) Enter a unique name and description for the job (limit of 255 characters). If you don't provide a name, a name is automatically generated.

      For example, job20210808222435.

    9. (Optional) To use Bring Your Own Container, in Environment configuration click Select.
      In the Set your BYOC environment panel, follow these steps:
      1. In Repository select a repository from the list. If the repository is in a different compartment, click Change compartment.
      2. In Image select an image from the list.
      3. (Optional) In Entrypoint enter an entry point. To add another, click +Add parameter.
      4. (Optional) In CMD enter a CMD. To add another, click +Add parameter.
        Note

        Use CMD as arguments to the ENTRYPOINT, or as the only command to run when no ENTRYPOINT is specified.
      5. (Optional) In Image digest enter an image digest.
      6. (Optional) In Signature ID, if using signature verification, enter the OCID of the image signature. For example, ocid1.containerimagesignature.oc1.iad.aaaaaaaaab....
      7. Click Select.
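
      For reference, the BYOC environment configuration can also be described in code with the ADS SDK. The following is a minimal sketch assuming a recent oracle-ads version; the image path, ENTRYPOINT, and CMD values are placeholders, and the image digest and signature ID fields aren't shown.

        from ads.jobs import ContainerRuntime

        # Placeholder OCIR image; use the repository and image selected in the panel.
        runtime = (
            ContainerRuntime()
            .with_image("<region>.ocir.io/<tenancy_namespace>/<image>:<tag>")
            .with_entrypoint(["/bin/sh", "-c"])           # optional ENTRYPOINT
            .with_cmd("python /home/datascience/run.py")  # optional CMD, passed as arguments to the ENTRYPOINT
        )
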
    10. Upload the job artifact by dragging the job artifact file into the box. This step is optional only if a BYOC environment is configured.
    11. (Optional) Create a default job configuration that's used when the job is run.

      Enter or select any of the following values:

      Custom environment variable key

      The environment variables that control the job.

      Note

      If you uploaded a zip file or a compressed tar file, add JOB_RUN_ENTRYPOINT as a custom environment variable that points to the file to run inside the archive.

      Value

      The value for the custom environment variable key.

      You can click Additional custom environment key to specify more variables.

      Command line arguments

      The command line arguments that you want to use for running the job.

      Maximum runtime (in minutes)

      The maximum number of minutes that the job can run. The service cancels the job run if its runtime exceeds the specified value. The maximum runtime is 30 days (43,200 minutes). We recommend that you configure a maximum runtime on all job runs to prevent runaway job runs.
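
      For reference, these default configuration values map to the job runtime in the ADS SDK. The following is a minimal sketch assuming a recent oracle-ads version and a hypothetical zip artifact; the variable names and values are placeholders. The maximum runtime corresponds to maximumRuntimeInMinutes in the jobs configuration JSON shown in the CLI example later in this topic.

        from ads.jobs import ScriptRuntime

        runtime = (
            ScriptRuntime()
            .with_source("job_artifact.zip")  # hypothetical zip artifact uploaded for the job
            .with_environment_variable(
                JOB_RUN_ENTRYPOINT="job_artifact/main.py",  # file to run inside the archive
                SOME_ENV_KEY="some_env_value",              # custom environment variable
            )
            .with_argument("test-arg")  # command line arguments
        )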

    12. Select a Compute shape.
    13. (Optional) Change the Compute shape by clicking Change shape. Then, follow these steps in the Select compute panel.
      1. Select an instance type.
      2. Select a shape series.
      3. Select one of the supported Compute shapes in the series.
      4. Select the shape that best suits how you want to use the resource. For AMD shapes, you can use the default or set the number of OCPUs and the amount of memory.

        For each OCPU, select up to 64 GB of memory and a maximum total of 512 GB. The minimum amount of memory allowed is either 1 GB or a value matching the number of OCPUs, whichever is greater.

      5. Click Select shape.
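
      The OCPU and memory rules are easiest to see in a concrete configuration. The following is a minimal ADS SDK sketch assuming a recent oracle-ads version; the AMD flexible shape name is an example.

        from ads.jobs import DataScienceJob

        infrastructure = (
            DataScienceJob()
            .with_shape_name("VM.Standard.E4.Flex")                # example AMD flexible shape
            .with_shape_config_details(ocpus=2, memory_in_gbs=32)  # 32 GB <= 2 OCPUs x 64 GB, total <= 512 GB
        )
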
    14. (Optional) To use logging, click Select, and then ensure that Enable logging is selected.
      1. Select a log group from the list. To use a log group in a different compartment than the job, change the compartment.
      2. Select one of the following to store all stdout and stderr messages:
        Enable automatic log creation

        Data Science automatically creates a log when the job starts.

        Select a log

        Select a log to use.

      3. Click Select to return to the job run creation page.
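
      The same logging choice can be expressed on the infrastructure in code. The following is a minimal ADS SDK sketch assuming a recent oracle-ads version; the OCIDs are placeholders, and omitting the log OCID is assumed to correspond to automatic log creation.

        from ads.jobs import DataScienceJob

        infrastructure = (
            DataScienceJob()
            .with_log_group_id("ocid1.loggroup.oc1..<unique_id>")  # placeholder log group OCID
            .with_log_id("ocid1.log.oc1..<unique_id>")             # placeholder log OCID; omit to create a log automatically (assumed)
        )
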
    15. For Storage, enter the amount of block storage to use, from 50 GB to 10,240 GB (10 TB). You can change the value in 1 GB increments. The default value is 100 GB.
    16. Select one of the following options to configure the network type:
      • Default networking—The workload is attached by using a secondary VNIC to a preconfigured, service-managed VCN and subnet. This subnet allows egress to the public internet through a NAT gateway, and access to other Oracle Cloud services through a service gateway.

        If you need access only to the public internet and OCI services, we recommend using this option. It doesn't require you to create networking resources or write policies for networking permissions.

      • Custom networking—Select the VCN and subnet that you want to use for the resource (notebook session or job).

        For egress access to the public internet, use a private subnet with a route to a NAT gateway.

        If you don't see the VCN or subnet that you want to use, click Change Compartment, and then select the compartment that contains the VCN or subnet.

        Important

        You must use custom networking to use a file storage mount.
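
      In code, the network type maps to whether a subnet is supplied. The following is a minimal ADS SDK sketch assuming a recent oracle-ads version; the subnet OCID is a placeholder, and leaving the subnet unset is assumed to select default networking.

        from ads.jobs import DataScienceJob

        # Custom networking: supply the subnet to use (required for file storage mounts).
        infrastructure = (
            DataScienceJob()
            .with_subnet_id("ocid1.subnet.oc1.iad.<unique_id>")  # placeholder subnet OCID
        )

        # Default networking: leave the subnet unset (assumed behavior in recent oracle-ads versions).
        default_networking_infrastructure = DataScienceJob()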

    17. (Optional) To use storage mounts, click +Add storage mount.
      1. Select a storage mount type, OCI Object Storage or OCI File Storage.
      2. Select a compartment that contains the storage resource that you want to mount.
      3. Select one of the following:
        Object Storage

        The bucket you want to use.

        You can add an object name prefix. The prefix must start with an alphanumeric character. The allowed characters are alphanumerics, slash ( / ), hyphen ( - ) and underscore ( _ ).

        File Storage

        The mount target and export path you want to use.

        You must use a custom network to use file storage.

      4. Enter the path under which the storage is to be mounted.

        Storage is mounted under the specified mount path. The path must start with an alphanumeric character. The destination directory must be unique across the storage mounts provided. The allowed characters are alphanumerics, hyphen ( - ) and underscore ( _ ).

        You can specify the full path, such as /opc/storage-directory. If only a directory is specified, such as /storage-directory, then it's mounted under the default /mnt directory. You can't specify OS-specific directories, such as /bin or /etc.

      5. Click Submit.

        Repeat these steps to add up to two storage mounts for notebook sessions and five storage mounts for jobs.
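
      A storage mount can also be declared on the infrastructure in code. The following is a minimal ADS SDK sketch for an Object Storage mount, assuming a recent oracle-ads version; the bucket, namespace, prefix, and destination are placeholders. The exact src format for File Storage mounts depends on the oracle-ads version, so it isn't shown here.

        from ads.jobs import DataScienceJob

        infrastructure = (
            DataScienceJob()
            .with_storage_mount(
                {
                    "src": "oci://<bucket_name>@<namespace>/<prefix>",  # placeholder bucket and prefix
                    "dest": "storage-directory",                        # mounted under the default /mnt directory
                }
            )
        )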

    18. (Optional) Click Show advanced options to add tags to the job.
    19. (Optional) Enter the tag namespace (for a defined tag), key, and value to assign tags to the resource.

      To add more than one tag, click Add tag.

      Tagging describes the various tags that you can use to organize and find resources, including cost-tracking tags.

    20. Click Create.

      After the job is in an active state, you can use job runs to repeatedly run the job.

  • Use the Data Science CLI to create a job, as in the following example:

    1. Create a job with:
      oci data-science job create \
      --display-name <job_name> \
      --compartment-id <compartment_ocid> \
      --project-id <project_ocid> \
      --configuration-details file://<jobs_configuration_json_file> \
      --infrastructure-configuration-details file://<jobs_infrastructure_configuration_json_file> \
      --log-configuration-details file://<optional_jobs_logging_configuration_json_file>
    2. Use this jobs configuration JSON file:
      {
        "jobType": "DEFAULT",
        "maximumRuntimeInMinutes": 240,
        "commandLineArguments" : "test-arg",
        "environmentVariables": {
          "SOME_ENV_KEY": "some_env_value" 
        }
      }
    3. Use this jobs infrastructure configuration JSON file:
      {
        "jobInfrastructureType": "STANDALONE",
        "shapeName": "VM.Standard2.1",
        "blockStorageSizeInGBs": "50",
        "subnetId": "<subnet_ocid>"
      }
    4. (Optional) Use this jobs logging configuration JSON file:
      {
        "enableLogging": true,
        "enableAutoLogCreation": true,
        "logGroupId": "<log_group_ocid>"
      }
    5. Upload a job artifact file for the job you created with:
      oci data-science job create-job-artifact \
      --job-id <job_ocid> \
      --job-artifact-file <job_artifact_file_path> \
      --content-disposition "attachment; filename=<job_artifact_file_name>"
  • The ADS SDK is also a publicly available Python library that you can install with this command:

    pip install oracle-ads

    It provides a wrapper that makes it easy to create and run jobs from notebooks or from your client machine.

    Use the ADS SDK to create and run jobs.
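
    The following is a minimal sketch of creating and running a job with the ADS SDK, assuming a recent oracle-ads version and configured authentication (for example, an API key or resource principal). The OCIDs, shape, script name, and environment values are placeholders.

      import ads
      from ads.jobs import Job, DataScienceJob, PythonRuntime

      ads.set_auth("api_key")  # or "resource_principal" when running inside OCI

      job = (
          Job(name="job-ads-example")
          .with_infrastructure(
              DataScienceJob()
              .with_compartment_id("ocid1.compartment.oc1..<unique_id>")
              .with_project_id("ocid1.datascienceproject.oc1.iad.<unique_id>")
              .with_shape_name("VM.Standard.E4.Flex")
              .with_shape_config_details(ocpus=1, memory_in_gbs=16)
              .with_block_storage_size(50)
              .with_log_group_id("ocid1.loggroup.oc1.iad.<unique_id>")
          )
          .with_runtime(
              PythonRuntime()
              .with_source("train.py")  # local script uploaded as the job artifact
              .with_environment_variable(SOME_ENV_KEY="some_env_value")
              .with_argument("test-arg")
          )
      )

      job.create()         # creates the job and uploads the artifact
      job_run = job.run()  # starts a job run
      job_run.watch()      # streams job run logs when logging is configured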