Creating and Saving a Model with the Console
Create a model in the Console and save it directly to the model catalog.
To document a model, you must prepare the metadata before you create and save it.
This task involves creating a model, adding metadata, defining the training environment, specifying prediction schemas, and saving the model to the model catalog.
- We recommend that you create and save models to the model catalog programmatically instead, either using ADS or the OCI Python SDK.
- You can use ADS to create large models. Large model artifacts can be up to 400 GB.

If you're saving a model trained elsewhere, or want to use the Console, use these steps to save a model:
- Use the Console to sign in to a tenancy with the necessary policies.
- Open the navigation menu and select Analytics & AI. Under Machine Learning, select Data Science.
- Select the compartment that contains the project that you want to save the model in. All projects in the compartment are listed.
- Select the name of the project. The project details page opens and lists the notebook sessions.
- Under Resources, select Models. A tabular list of models in the compartment is displayed.
- Create a model artifact zip archive on your local machine containing the score.py and runtime.yaml files (and any other files needed to run your model). Select Download sample artifact zip to get sample files that you can change to create your model artifact.
- Select Create model.
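The artifact step above can be sketched in Python. This is a minimal, self-contained illustration: the score.py contents here use a hardcoded stand-in model rather than a real trained model, and the runtime.yaml is a placeholder (start from the sample artifact zip for the real file contents).

```python
import zipfile
from pathlib import Path

# Minimal sketch of a score.py with the load_model()/predict() functions the
# artifact is expected to provide. The "model" is a hardcoded coefficient so
# the sketch stays self-contained.
SCORE_PY = '''
def load_model():
    # Normally deserializes the trained model file shipped in the artifact.
    return {"coef": 2.0}

def predict(data, model=load_model()):
    # `data` is the payload sent to the model; the return value is the
    # prediction output described by the output schema.
    return {"prediction": [x * model["coef"] for x in data["input"]]}
'''

# Placeholder runtime.yaml; use the sample artifact zip for a real one.
RUNTIME_YAML = "MODEL_ARTIFACT_VERSION: '3.0'\n"

def build_artifact(out_dir: Path) -> Path:
    """Write score.py and runtime.yaml, then zip them into an artifact."""
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "score.py").write_text(SCORE_PY)
    (out_dir / "runtime.yaml").write_text(RUNTIME_YAML)
    artifact = out_dir / "model_artifact.zip"
    with zipfile.ZipFile(artifact, "w") as zf:
        # Keep the files at the root of the zip, not inside a subdirectory.
        zf.write(out_dir / "score.py", arcname="score.py")
        zf.write(out_dir / "runtime.yaml", arcname="runtime.yaml")
    return artifact

artifact = build_artifact(Path("artifact_build"))
with zipfile.ZipFile(artifact) as zf:
    print(sorted(zf.namelist()))
```

The resulting model_artifact.zip is the file you upload in the Upload model artifact step below.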
- Select the compartment to contain the model.
- (Optional) Enter a unique name (limit of 255 characters). If you don't provide a name, a name is automatically generated, for example, model20200108222435.
- (Optional) Enter a description (limit of 400 characters) for the model.
- In the Upload model artifact box, select Select to upload the model artifact archive (a zip file).
- Drag the zip file into the Upload an artifact file box, and then select Upload.
- (Optional) In the Model version set box, select Select, and then configure with an existing version set or create a new set.
- (Optional) In the Model provenance box, select Select.
- Select Notebook session or Job run depending on where the model was trained.
- Find the notebook session or job run that the model was trained with by using one of the following options:
  - Choose a project:
    - Select the name of the project to use in the selected compartment. The selected compartment applies to both the project and the notebook session or job run, and both must be in the same compartment. If not, then use the OCID search instead. You can change the compartment for both the project and the notebook session or job run.
    - Select the notebook session or job run that the model was trained with.
  - OCID search:
    - If the notebook session or job run is in a different compartment than the project, then enter the OCID of the notebook session or job run that you trained the model in.
    - Select the notebook session or job run that the model was trained with.
- (Optional) Select Show advanced options to identify Git and model training information. Enter or select any of the following values:
  - Git repository URL: The URL of the remote Git repository.
  - Git commit: The commit ID of the Git repository.
  - Git branch: The name of the branch.
  - Local model directory: The directory path where the model artifact was temporarily stored, for example, a path in a notebook session or a directory on a local computer.
  - Model training script: The name of the Python script or notebook that the model was trained with.
Tip
You can also populate model provenance metadata when you save a model to the model catalog using the OCI SDKs or the CLI.
- Select Select.
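As the Tip notes, the same provenance fields can be supplied when saving a model with the SDKs or CLI. A rough sketch of assembling that payload follows; the field names mirror the OCI Python SDK's CreateModelProvenanceDetails as an assumption (check the SDK reference for the authoritative names), and all the values here are hypothetical.

```python
import json

# Hypothetical provenance values; replace with your real Git and training info.
provenance = {
    "repository_url": "https://github.com/example-org/example-model.git",
    "git_branch": "main",
    "git_commit": "0123abc",                   # hypothetical commit ID
    "script_dir": "/home/datascience/model",   # local model directory
    "training_script": "train.py",             # script the model was trained with
}

# Drop empty fields so only the provenance you actually know is recorded.
payload = {k: v for k, v in provenance.items() if v}
print(json.dumps(payload, indent=2))
```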
- (Optional) In the Model taxonomy box, select Select to specify what the model does, the machine learning framework, and hyperparameters, or to create custom metadata to document the model.
Important
The maximum allowed size for all the model metadata is 32000 bytes. The size is a combination of the preset model taxonomy and the custom attributes.
- In the Model taxonomy section, enter or select the following preset labels:
  - Use case: The type of machine learning use case.
  - Model framework: The Python library that you used to train the model.
  - Model framework version: The version of the machine learning framework. This is a free text value, for example, 2.3.
  - Model algorithm or model estimator object: The algorithm used or the model instance class. This is a free text value, for example, sklearn.ensemble.RandomForestRegressor.
  - Model hyperparameters: The hyperparameters of the model in JSON format.
  - Artifact test results: The JSON output of the introspection tests run on the client side. These tests are included in the model artifact boilerplate code, and you can optionally run them before saving the model to the model catalog.
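Because the 32000-byte cap applies to the preset taxonomy and custom attributes combined, it can help to sanity-check the size before saving. A small sketch follows; the key names and the JSON-serialization measurement are approximations for illustration, not the service's exact accounting.

```python
import json

MAX_METADATA_BYTES = 32000  # combined limit for preset taxonomy + custom metadata

# Preset taxonomy entries modeled as key/value pairs (key names are illustrative).
preset = [
    {"key": "UseCaseType", "value": "binary_classification"},
    {"key": "Framework", "value": "scikit-learn"},
    {"key": "FrameworkVersion", "value": "2.3"},
    {"key": "Algorithm", "value": "sklearn.ensemble.RandomForestRegressor"},
    {"key": "Hyperparameters", "value": json.dumps({"n_estimators": 100, "max_depth": 8})},
]

def metadata_size(entries):
    # Approximate the stored size as the UTF-8 JSON serialization in bytes.
    return len(json.dumps(entries).encode("utf-8"))

size = metadata_size(preset)
assert size <= MAX_METADATA_BYTES, f"metadata too large: {size} bytes"
print(size, "bytes")
```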
  Create custom label and value attribute pairs:
  - Label: The key label of your custom metadata.
  - Value: The value attached to the key.
  - Category: (Optional) The category of the metadata, one of performance, training profile, training and validation datasets, training environment, or other. You can use the category to group and filter custom metadata to display in the Console. This is useful when you have many custom metadata entries to track.
  - Description: (Optional) Enter a description of the custom metadata.
- Select Select.
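The custom label and value pairs above are easy to model as a list of entries, with the category field supporting the grouping and filtering that the Console offers. A short sketch follows; the entry shape and all values are illustrative, not the service's exact record format.

```python
from collections import defaultdict

# Custom metadata entries: label, value, optional category and description.
custom_metadata = [
    {"label": "accuracy", "value": "0.94", "category": "performance"},
    {"label": "f1_score", "value": "0.91", "category": "performance"},
    {"label": "training_dataset", "value": "sales_2024.csv",
     "category": "training and validation datasets",
     "description": "snapshot used for the final training run"},
    {"label": "owner", "value": "data-science-team", "category": "other"},
]

# Group labels by category, mirroring the Console's filter view.
by_category = defaultdict(list)
for entry in custom_metadata:
    by_category[entry["category"]].append(entry["label"])

print(dict(by_category))
```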
- (Optional) Select Select in the Document model input and output data schema box to document the model predictions. You define model prediction features that the model requires to make a successful prediction. You also define input and output schemas that describe the predictions returned by the model (defined in the score.py file with the predict() function).
  Important
  The maximum allowed file size for the combined input and output schemas is 32000 bytes.
- Drag your input schema JSON file into the Upload an input schema box.
- Drag your output schema JSON file into the Upload an output schema box.
- Select Select.
Important
You can only document the input and output data schemas when you create the model. You can't edit the schemas after the model is created.
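A sketch of preparing the two schema JSON files and checking the combined 32000-byte limit follows. The exact schema field set here is an assumption for illustration; start from the schema files generated by ADS or included with the sample artifact for the authoritative format.

```python
import json

MAX_SCHEMA_BYTES = 32000  # combined limit for input + output schema files

# Illustrative input/output schemas; field names are assumptions.
input_schema = {
    "schema": [
        {"name": "age", "dtype": "int64", "feature_type": "Integer",
         "required": True, "order": 0},
        {"name": "income", "dtype": "float64", "feature_type": "Continuous",
         "required": True, "order": 1},
    ]
}
output_schema = {
    "schema": [
        {"name": "prediction", "dtype": "int64", "feature_type": "Integer",
         "required": True, "order": 0},
    ]
}

def combined_size(*schemas):
    # Measure the combined UTF-8 JSON serialization in bytes.
    return sum(len(json.dumps(s).encode("utf-8")) for s in schemas)

size = combined_size(input_schema, output_schema)
assert size <= MAX_SCHEMA_BYTES, f"schemas too large: {size} bytes"

# Write the files you would drag into the Console upload boxes.
with open("input_schema.json", "w") as f:
    json.dump(input_schema, f, indent=2)
with open("output_schema.json", "w") as f:
    json.dump(output_schema, f, indent=2)
print(size, "bytes")
```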
- (Optional) Select Show Advanced Options to add tags.
- (Optional) Enter the tag namespace (for a defined tag), key, and value to assign tags to the resource. To add more than one tag, select Add tag.
  Tagging describes the various tags that you can use to organize and find resources, including cost-tracking tags.
- Select Create.