The Catalog Page
The Catalog page displays information about entities in the Oracle Autonomous Database.
To reach the Catalog page, select the Catalog menu in the Data Studio tab of the Launchpad, or click the Selector icon and select Catalog from the Data Studio menu in the navigation pane.
If you do not see the Catalog card, your database user is missing the required DWROLE role.
- About the Catalog Page: Use the Catalog page to get information about the entities in and available to your Oracle Autonomous Database. You can see the data in an entity, the sources of that data, the objects that are derived from the entity, and the impact on derived objects from changes in the sources.
- Browse and Search Catalogs: After you load data into the Data Studio tool, you can easily find data for analysis with the catalog tool.
- Sorting and Filtering Entities: On the Catalog page, you can sort and filter the displayed entities.
- Filter Entities: The faceted results of the entity types vary according to the quick filters you select.
- Display Entity Search: If you do not select a quick filter, the Tables and Views filter is selected by default.
- Viewing Entity Details: To view details about an entity, click the Actions icon at the top right of the Table and View Loads header entry, then click Table and select View Details.
- Registering Cloud Links to access data: Cloud Links enable you to remotely access read only data on an Autonomous Database instance.
- Export Data to Cloud: Use the Export Data to Cloud menu in the table actions of the Browse Catalog page to export data as text from an Autonomous Database to a cloud Object Store. The text format export options are CSV, JSON, Parquet, or XML.
- Gathering Statistics for Tables: You can generate statistics that measure the data distribution and storage characteristics of tables.
- Editing Tables: You can create and edit objects using the Edit Table wizard, available from the Edit menu in Actions (three vertical dots) beside the table entity.
Parent topic: Analyze
Registering Cloud Links to access data
Cloud Links enable you to remotely access read only data on an Autonomous Database instance.
You can register a table for remote access for a selected audience. Scope indicates who can remotely access the data. Scope can be set to various levels, including to the region where the database resides, to individual tenancies, or to compartments. For security purposes or for naming convenience, you can provide a namespace and a name other than the original schema and object names; for example, you can register a table under the namespace FOREST.
To register a table to a cloud link, click Actions next to the table entity and select Register to Cloud Link.
- Cloud Link Namespace: Enter a namespace to register the table. For example, ADMIN.
- Cloud Link Name: Enter a name for the cloud link. For example, Sales.
- Description: Specify the description of your cloud link.
- Scope: Specifies who and from where a user is allowed to access the registered table.
You can choose from any of the following available options:
- MY$REGION: You can grant remote data access to other tenancies in the region of the Autonomous Database instance that is registering the data set. This is the least restrictive scope.
- MY$TENANCY: You can grant remote data access to any resource, tenancy, compartment, or database in the tenancy of the Autonomous Database instance that is registering the data set. This scope is more restrictive than MY$REGION scope.
- MY$COMPARTMENT: You can grant remote data access to any resource, compartment, or database in the compartment of the Autonomous Database instance that is registering the data set.
- OCID: Access to the data set is allowed for the specific Autonomous Database instances identified by OCID.
Click OK to finish the registration of the selected table with the cloud link.
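Behind the Register to Cloud Link dialog, Autonomous Database provides the DBMS_CLOUD_LINK PL/SQL package. The following is a minimal sketch of an equivalent registration call; the ADMIN schema, SALES table, FOREST namespace, and scope value are illustrative assumptions, and you should confirm the exact parameter list against your database version.

```sql
-- Hedged sketch: register ADMIN.SALES under the namespace FOREST for
-- tenancy-wide remote read access. All names here are placeholders.
BEGIN
  DBMS_CLOUD_LINK.REGISTER(
    schema_name   => 'ADMIN',     -- owner of the table being registered
    schema_object => 'SALES',     -- table to expose through the cloud link
    namespace     => 'FOREST',    -- Cloud Link Namespace from the dialog
    name          => 'SALES',     -- Cloud Link Name from the dialog
    description   => 'Sales data registered for remote access',
    scope         => 'MY$TENANCY' -- MY$REGION, MY$TENANCY, MY$COMPARTMENT, or OCIDs
  );
END;
/
```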
Parent topic: The Catalog Page
Export Data to Cloud
Use the Export Data to Cloud menu in the table actions of the Browse Catalog page to export data as text from an Autonomous Database to a cloud Object Store. The text format export options are CSV, JSON, Parquet, or XML.
To export data to a cloud storage location, click Actions next to the Table entity on the Browse Catalog page and select Export Data to Cloud.
The Export Data to Cloud Storage Location displays the following fields:
- Table Name: This field displays the table name.
- Export As: This drop-down lists the formats in which you can export the data: CSV, JSON, Parquet, or XML. When you export the data as CSV and click Show Options, you can:
- Set the maximum file size using the slider. The value can range from 10 MB to 1 GB.
- Select the header option to write column names as the first line in output files.
- Select the Escape option to escape occurrences of the quote character in field values with the “\” character.
- Text Enclosure and Field delimiter: These options are visible only when the selected format is CSV. Select or enter the character used in the source file for text enclosure and field delimiter.
- Compression: Select No Compression to disable compression, or GZIP to enable GZIP compression for the selected file.
- Cloud storage location: Select the URI of an existing bucket on your Cloud Object Storage from the drop-down list. If you do not have a cloud storage location, select Create Cloud Location to create one to export your data. For more details, see Create Credentials.
Note
The Cloud storage location is highlighted in red if the tool does not accept the selected cloud location. In this case, you receive a Cloud Location Access Error that says “You do not have access to Read, Write and Delete”.
- File prefix: Specify the file prefix. This is the prefix of the files you export.
Click Export.
After the file export is complete, you can view the newly created cloud object in the entity list of the Browse Catalog page.
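The export itself corresponds to the DBMS_CLOUD.EXPORT_DATA procedure. Below is a minimal sketch of a comparable CSV export; the credential name, bucket URI, and SALES table are illustrative assumptions.

```sql
-- Hedged sketch: export the SALES table as GZIP-compressed CSV files with a
-- header row and a file prefix of sales_export. The credential and URI are
-- placeholders; replace them with your own values.
BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'MY_CLOUD_CRED',
    file_uri_list   => 'https://objectstorage.region.oraclecloud.com/n/namespace/b/bucket/o/sales_export',
    query           => 'SELECT * FROM sales',
    format          => '{"type" : "csv", "compression" : "gzip", "header" : true}'
  );
END;
/
```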
Parent topic: The Catalog Page
Gathering Statistics for Tables
You can generate statistics that measure the data distribution and storage characteristics of tables.
You must gather statistics periodically for tables where the statistics become stale over time because of changing data volumes or changes in column values. The Gather Statistics icon gathers new statistics after the table’s data or structure is modified in ways that make the previous statistics inaccurate. For example, after loading a significant number of rows into a table, collect new statistics on the number of rows. After updating data in a table, you do not need to collect new statistics on the number of rows, but you might need new statistics on the average row length.
Table Statistics also include column statistics. The statistics you generate include the following:
- Table Size: Specifies the size of the table in bytes.
- Number of rows: Displays the number of rows in the table.
- Number of columns: Displays the number of columns in the table.
- Compressed size: Displays the size of the compressed table in bytes.
- Statistics gathered on: Displays the date and time when statistics were last gathered.
- Number of distinct values (NDV) in column
- Number of nulls in column
- Data distribution (histogram)
The above information is displayed in the statistics details of tables.
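The same statistics can be gathered and inspected from SQL with the DBMS_STATS package and the data dictionary; the sketch below assumes a SALES table owned by ADMIN.

```sql
-- Hedged sketch: gather fresh optimizer statistics for ADMIN.SALES, then
-- inspect the table- and column-level figures listed above.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'ADMIN', tabname => 'SALES');
END;
/

-- Table statistics: number of rows, average row length, last gather time.
SELECT num_rows, avg_row_len, last_analyzed
FROM   all_tab_statistics
WHERE  owner = 'ADMIN' AND table_name = 'SALES';

-- Column statistics: distinct values, nulls, and histogram type.
SELECT column_name, num_distinct, num_nulls, histogram
FROM   all_tab_col_statistics
WHERE  owner = 'ADMIN' AND table_name = 'SALES';
```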
Parent topic: The Catalog Page
Editing Tables
You can create and edit objects using the Edit Table wizard, available from the Edit menu in Actions (three vertical dots) beside the table entity.
Clicking Edit from the Actions menu opens the Edit Table wizard. You can visit the panes in any order to edit a table. The table properties are grouped in several panes.
- Schema: Database schema in which the table exists.
- Name: Name of the table.
- Columns Pane
- Primary Key Pane
- Unique Keys Pane
- Indexes Pane
- Foreign Keys Pane
- Table Constraints Pane
- Comments Pane
- Storage Pane
- External Table Properties Pane
- Materialized View Pane
- DDL Pane
- Output Pane
- Columns Pane: Specifies properties for each column in the table.
- Primary Key Pane: Specifies the primary key for the table.
- Unique Keys Pane: Specifies one or more unique constraints for the table.
- Indexes Pane: Lists the indexes defined for the table.
- Foreign Keys Pane: Specifies one or more foreign keys for the table.
- Table Constraints Pane: Specifies one or more check constraints for the table.
- Comments Pane: Enter descriptive comments in this pane.
- Storage Pane: Enables you to specify storage options for the table.
- External Table Properties Pane: Specifies options for an external table.
- Materialized View Pane: Specifies options for a materialized view.
- DDL Pane: You can review and save the SQL statements that are generated when creating or editing the object. If you want to make any changes, go back to the relevant panes and make the changes there.
- Output Pane: Displays the results of the DDL commands.
Parent topic: The Catalog Page
Columns Pane
Specifies properties for each column in the table.
General tab
- Name: Name for the column.
- Datatype: Data type for the column.
- Default: Default value for the column. If no value is specified, the default value is null.
- Default on NULL: Applicable for Oracle Database 12c and later releases. If this option is selected, when a row is inserted into the table and the value specified for the column is NULL, the default value is inserted into the column.
- Expression: Expression for computing the value in the column.
- Comments: Optional descriptive comments about the column. Use this field to provide descriptions for the attributes.
- PK: If this option is selected, the column becomes the primary key.
- Identity Column: If this option is selected, the column becomes an identity column. This is applicable only for Oracle Database 12c and later releases. For more details, see the Identity Column tab.
Constraints tab
- Not Null Constraint: Name: Name for the Not Null constraint.
- Not Null Constraint: Not Null: If this option is selected, the column must contain data. You cannot specify no value or an explicit null value for this column when you insert a row. If this option is not checked, the column can contain either data or no data. A primary key column cannot be null.
- Check Constraint: Name: Name for the check constraint definition.
- Check Constraint: Constraint: Condition that must be met for a column to fulfill the check constraint. You can use any valid CHECK clause (without the CHECK keyword). For example, to indicate that the value in a numeric column named RATING must be from 1 to 10, you can specify: rating >=1 and rating <= 10.
- Enabled: If this option is selected, the constraint is checked when data is entered or updated in the column.
- Deferrable: If this option is selected, you can defer checking the validity of the constraint until the end of a transaction.
- Initially Immediate: If this option is selected, the constraint is checked whenever you add, update, or delete data from the column.
- Validate: If this option is selected, the existing data is checked to see if it conforms to the constraint.
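For reference, the column properties and constraints above map onto standard column clauses of a CREATE TABLE statement. The sketch below uses an assumed EMPLOYEES table to illustrate an identity column, a default on null, and named not-null and check constraints.

```sql
-- Hedged sketch: an assumed EMPLOYEES table showing the column options above.
CREATE TABLE employees (
  employee_id NUMBER GENERATED BY DEFAULT AS IDENTITY,        -- Identity Column
  first_name  VARCHAR2(50)
              CONSTRAINT emp_first_name_nn NOT NULL,          -- named Not Null constraint
  status      VARCHAR2(10) DEFAULT ON NULL 'ACTIVE',          -- Default on NULL
  rating      NUMBER
              CONSTRAINT emp_rating_chk
              CHECK (rating >= 1 AND rating <= 10),           -- Check constraint
  CONSTRAINT employees_pk PRIMARY KEY (employee_id)           -- PK option
);

COMMENT ON COLUMN employees.status IS 'Employment status';    -- Comments field
```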
Parent topic: Editing Tables
Primary Key Pane
Specifies the primary key for the table.
The primary key is the column, or set of columns, that uniquely identifies each row in the table. If the Primary Key checkbox is selected for a column in the General tab, the corresponding fields are automatically populated in the Primary Key pane. You can make changes to the properties as required.
An index is automatically created on the primary key.
- Name: Name of the constraint to be associated with the primary key definition.
- Enabled: If this option is checked, the primary key constraint is enforced: that is, the data in the primary key column (or set of columns) must be unique and not null.
- Index: Name of the index to which the primary key refers.
- Tablespace: Name of the tablespace associated with the index.
- Available Columns: Lists the columns that are available to be added to the primary key definition. You can select multiple attributes, if required, for the primary key.
- Selected Columns: Lists the columns that are included in the primary key definition.
To add a column to the primary key definition, select it in Available Columns and click the Add (>) icon; to remove a column from the primary key definition, select it in Selected Columns and click the Remove (<) icon. To move all columns from available to selected (or the reverse), use the Add All (>>) or Remove All (<<) icon. To move a column up or down in the primary key definition, select it in Selected Columns and use the arrow buttons.
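The DDL generated from this pane is essentially an ALTER TABLE statement; the sketch below assumes a DEPARTMENTS table and illustrative constraint, index, and tablespace names.

```sql
-- Hedged sketch: add a named primary key backed by an explicitly named index.
ALTER TABLE departments
  ADD CONSTRAINT departments_pk PRIMARY KEY (department_id)
  USING INDEX (CREATE UNIQUE INDEX departments_pk_idx
               ON departments (department_id) TABLESPACE users);
```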
Parent topic: Editing Tables
Unique Keys Pane
Specifies one or more unique constraints for the table.
A unique constraint specifies a column, or set of columns, whose data values must be unique: each data value must not be null, and it must not be the same as any other value in the column.
- Name: Name of the unique constraint.
- Enabled: If this option is selected, the unique constraint is enforced.
- Rely: If this option is selected, the constraint in NOVALIDATE mode is taken into account during query rewrite.
- Deferrable: If this option is selected, in subsequent transactions, constraint checking can be deferred until the end of the transaction using the SET CONSTRAINT(S) statement.
- Initially Immediate: If this option is selected, the constraint is checked at the end of each subsequent SQL statement.
- Validate: If the option is selected, the existing data is checked to see if it conforms to the constraint.
- Index: Name of the index to which the unique key refers.
- Tablespace: Name of the tablespace associated with the index.
- Available Columns: Lists the columns that are available to be added to the unique constraint definition.
- Selected Columns: Lists the columns that are included in the unique constraint definition.
To add a column to the unique constraint definition, select it in Available Columns and click the Add (>) icon; to remove a column from the unique constraint definition, select it in Selected Columns and click the Remove (<) icon. To move all columns from available to selected (or the reverse), use the Add All (>>) or Remove All (<<) icon. To move a column up or down in the unique constraint definition, select it in Selected Columns and use the arrow buttons.
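As a reference point, these options correspond to a unique constraint clause like the sketch below, which assumes an EMPLOYEES table with an EMAIL column.

```sql
-- Hedged sketch: a named, deferrable unique constraint that is enabled,
-- validated against existing data, and checked at the end of each statement.
ALTER TABLE employees
  ADD CONSTRAINT employees_email_uk UNIQUE (email)
  DEFERRABLE INITIALLY IMMEDIATE
  ENABLE VALIDATE;
```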
Parent topic: Editing Tables
Indexes Pane
Lists the indexes defined for the table.
To add an index, click Add Index (+); to delete an index, select it and click Remove Index (-).
- Name: Name of the index.
- Type: The type of Oracle index. Non-unique means that the index can contain multiple identical values; Unique means that no duplicate values are permitted; Bitmap stores rowids associated with a key value as a bitmap.
- Tablespace: Name of the tablespace for the index.
- Expression: A column expression is an expression built from columns, constants, SQL functions, and user-defined functions. When you specify a column expression, you create a function-based index.
- Available Columns and Selected Columns: Columns selected for the index. To select a column, click the column in the Available Columns box, and then click the Add Selected Columns icon to move it to the Selected Columns box.
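For reference, this pane produces CREATE INDEX statements such as the sketch below; the EMPLOYEES table, column names, and tablespace are illustrative assumptions.

```sql
-- Hedged sketch: a non-unique index in a named tablespace.
CREATE INDEX employees_last_name_ix
  ON employees (last_name)
  TABLESPACE users;

-- Specifying a column expression instead of a plain column creates a
-- function-based index.
CREATE INDEX employees_upper_email_ix
  ON employees (UPPER(email));
```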
Parent topic: Editing Tables
Foreign Keys Pane
Specifies one or more foreign keys for the table.
A foreign key specifies a column (the "local column") whose data values match values in the primary key or unique constraint of another table.
- Name: Name of the foreign key definition.
- Enabled: If this option is checked, the foreign key is enforced.
- Rely, Deferrable, Initially Immediate, Validate: See the description of these fields in the Unique Keys pane.
- Referenced Constraint: Schema: Name of the schema containing the table with the primary key or unique constraint to which this foreign key refers.
- Referenced Constraint: Table: Name of the table with the primary key or unique constraint to which this foreign key refers.
- Referenced Constraint: Constraint: Name of the primary key or unique constraint to which this foreign key refers.
- Referenced Constraint: On Delete: Action to take automatically when a row in the referenced table is deleted and rows with that value exist in the table containing this foreign key: NO ACTION (shown by a crossing line in diagrams) performs no action on these rows; CASCADE (shown by an "X") deletes these rows; SET NULL (shown by a small circle) sets null all columns in those rows that can be set to a null value.
- Associations: Local Column: Lists the column in the currently selected (local) table that is included in the foreign key definition. For each referenced column in the foreign key definition, select the name of a column in the edited table.
- Associations: Referenced Column: For each local column, identifies the column in the other (foreign) table that must have a value matching the value in the local column.
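The generated DDL is an ALTER TABLE statement along the lines of the sketch below, which assumes EMPLOYEES and DEPARTMENTS tables and a SET NULL delete rule.

```sql
-- Hedged sketch: a foreign key from EMPLOYEES.DEPARTMENT_ID (the local column)
-- to the DEPARTMENTS primary key, with nulls set on delete of the parent row.
ALTER TABLE employees
  ADD CONSTRAINT employees_dept_fk
  FOREIGN KEY (department_id)
  REFERENCES departments (department_id)
  ON DELETE SET NULL
  ENABLE VALIDATE;
```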
Parent topic: Editing Tables
Table Constraints Pane
Specifies one or more check constraints for the table.
A check constraint specifies a condition that must be met when a row is inserted into the table or when an existing row is modified.
- Name: Name of the check constraint definition.
- Check Condition: Condition that must be met for a row to fulfill the check constraint. You can use any valid CHECK clause (without the CHECK keyword). For example, to indicate that the value in a numeric column named RATING must be from 1 to 10, you can specify rating >=1 and rating <= 10.
- Enabled: If this option is checked, the check constraint is enforced.
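For reference, a table-level check constraint defined in this pane corresponds to DDL like the sketch below; the EMPLOYEES table and SALARY column are illustrative assumptions.

```sql
-- Hedged sketch: a named, enabled check constraint at the table level.
ALTER TABLE employees
  ADD CONSTRAINT employees_salary_chk CHECK (salary > 0)
  ENABLE;
```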
Parent topic: Editing Tables
Comments Pane
Enter descriptive comments in this pane.
This is optional.
Parent topic: Editing Tables
Storage Pane
Enables you to specify storage options for the table.
When you create or edit a table or an index, you can override the default storage options.
- Organization: Specifies whether the table is stored and organized with an index (Index), without an index (Heap), or as an external table (External).
- Tablespace: Name of the tablespace for the table or index.
- Logging: ON means that the table creation and any subsequent INSERT operations against the table are logged in the redo log file. OFF means that these operations are not logged in the redo log file.
- Row Archival: YES enables in-database archiving, which allows you to archive rows within the table by marking them as invisible.
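As a reference, the storage options above correspond to table-level clauses such as those in the sketch below; the ORDERS_ARCHIVE table and USERS tablespace are illustrative assumptions, and available tablespaces and storage clauses can be restricted on Autonomous Database.

```sql
-- Hedged sketch: a heap-organized table in a named tablespace, with redo
-- logging disabled and in-database row archival enabled.
CREATE TABLE orders_archive (
  order_id   NUMBER,
  order_date DATE
)
ORGANIZATION HEAP
TABLESPACE users
NOLOGGING
ROW ARCHIVAL;
```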
Parent topic: Editing Tables
External Table Properties Pane
Specifies options for an external table.
An external table is a read-only table whose metadata is stored in the database but whose data is stored outside the database.
- Access Driver Type: Specifies the type of external table.
  - ORACLE_LOADER: Extracts data from text data files. This is the default access driver, which loads data from external tables to internal tables.
  - ORACLE_DATAPUMP: Extracts data from binary dump files. This access driver can perform both loads and unloads.
  - ORACLE_BIGDATA: Extracts data from Oracle Big Data Appliance.
  - ORACLE_HDFS: Extracts data stored in a Hadoop Distributed File System (HDFS).
  - ORACLE_HIVE: Extracts data stored in Apache HIVE.
- Default Directory: Specifies the default directory to use for all input and output files that do not explicitly name a directory object. The location is specified with a directory object, not a directory path.
- Access Params: Assigns values to the parameters of the specific access driver for the external table. Access parameters are optional.
- OPAQUE_FORMAT_SPEC: The opaque_format_spec specifies all access parameters for the ORACLE_LOADER, ORACLE_DATAPUMP, ORACLE_HDFS, and ORACLE_HIVE access drivers. For descriptions of the access parameters, see Oracle Database Utilities. Field names specified in the opaque_format_spec must match columns in the table definition, else Oracle Database ignores them.
- USING CLOB: Enables you to derive the parameters and their values through a subquery. The subquery cannot contain any set operators or an ORDER BY clause. It must return one row containing a single item of data type CLOB.
- Reject Limit: The number of conversion errors that can occur during a query of the external data before an Oracle Database error is returned and the query is aborted.
- Project Column: Determines how the access driver validates the rows of an external table in subsequent queries.
- ALL: Processes all column values, regardless of which columns are selected, and validates only those rows with fully valid column entries. If any column value raises an error, such as a data type conversion error, the row is rejected even if that column was not referenced in the select list of the query.
- REFERENCED: Processes only those columns in the select list of the query. The ALL setting guarantees consistent result sets. The REFERENCED setting can result in different numbers of rows returned, depending on the columns referenced in subsequent queries, but is faster than the ALL setting. If a subsequent query selects all columns of the external table, then the settings behave identically.
- Location: Specifies the data files for the external table. Use the Add (+) icon to add each location specification.
- For ORACLE_LOADER and ORACLE_DATAPUMP, the files are named in the form directory:file. The directory portion is optional. If it is missing, then the default directory is used as the directory for the file. If you are using the ORACLE_LOADER access driver, then you can use wildcards in the file name: an asterisk (*) signifies multiple characters and a question mark (?) signifies a single character.
- For ORACLE_HDFS, LOCATION is a list of Uniform Resource Identifiers (URIs) for a directory or for a file. There is no directory object associated with a URI.
- For ORACLE_HIVE, LOCATION is not used. Instead, the Hadoop HCatalog table is read to obtain information about the location of the data source (which could be a file or another database).
- Opaque Format Spec: Specifies all access parameters for the ORACLE_LOADER, ORACLE_DATAPUMP, ORACLE_HDFS, and ORACLE_HIVE access drivers.
- CLOB Subquery: Type or copy and paste the query.
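For reference, the options above assemble into external table DDL like the sketch below; the directory object, file name, and columns are illustrative assumptions.

```sql
-- Hedged sketch: an external table over a comma-separated text file using the
-- ORACLE_LOADER access driver. DATA_PUMP_DIR and sales_2024.csv are placeholders.
-- The text inside ACCESS PARAMETERS is the opaque_format_spec, and the LOCATION
-- file uses the default directory because no directory prefix is given.
CREATE TABLE sales_ext (
  sale_id   NUMBER,
  sale_date VARCHAR2(10),
  amount    NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_pump_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('sales_2024.csv')
)
REJECT LIMIT UNLIMITED;
```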
Parent topic: Editing Tables
Materialized View Pane
Specifies options for a materialized view.
- Query: Contains the SQL code for the query part of the view definition. Type or copy and paste the query.
- On Pre-built Table: If Yes, an existing table is registered as a preinitialized materialized view. This option is particularly useful for registering large materialized views in a data warehousing environment. The table must have the same name and be in the same schema as the resulting materialized view, and the table should reflect the materialization of a subquery.
- Reduced Precision: Yes authorizes the loss of precision that will result if the precision of the table or materialized view columns do not exactly match the precision returned by the subquery. If No, the precision of the table or materialized view columns must exactly match the precision returned by the subquery, or the create operation will fail.
- For Update: Select Yes to allow a subquery, primary key, object, or rowid materialized view to be updated. When used in conjunction with Advanced Replication, these updates will be propagated to the master.
- Real Time MV: Select Yes to create a real-time materialized view, or No to create a regular materialized view. A real-time materialized view provides fresh data to user queries even when the materialized view is not in sync with its base tables due to data changes. Instead of modifying the materialized view, the optimizer writes a query that combines the existing rows in the materialized view with changes recorded in log files (either materialized view logs or the direct loader logs). This is called on-query computation.
- Query Rewrite: If Enable, the materialized view is enabled for query rewrite, which transforms a user request written in terms of master tables into a semantically equivalent request that includes one or more materialized views.
- Build: Specifies when to populate the materialized view. Immediate indicates that the materialized view is to be populated immediately. Deferred indicates that the materialized view is to be populated by the next refresh operation. If you specify Deferred, the first (deferred) refresh must always be a complete refresh; until then, the materialized view has a staleness value of unusable, so it cannot be used for query rewrite.
- Use Index: If Yes, a default index is created and used to speed up incremental (fast) refresh of the materialized view. If No, this default index is not created. (For example, you might choose to suppress the index creation now and to create such an index explicitly later.)
- Index Tablespace: Specifies the tablespace in which the materialized view is to be created. If a tablespace is not selected, the materialized view is created in the default tablespace of the schema containing the materialized view.
- Cache: If Yes, the blocks retrieved for this table are placed at the most recently used end of the least recently used (LRU) list in the buffer cache when a full table scan is performed. This setting is useful for small lookup tables. If No, the blocks are placed at the least recently used end of the LRU list.
Refresh Clause
- Refresh: Select Yes to enable refresh operations.
- Refresh Type: The method of refresh operation to be performed:
- Complete Refresh: Executes the defining query of the materialized view, even if a fast refresh is possible.
- Fast Refresh: Uses the incremental refresh method, which performs the refresh according to the changes that have occurred to the master tables. The changes for conventional DML changes are stored in the materialized view log associated with the master table. The changes for direct-path INSERT operations are stored in the direct loader log.
- Force Refresh: Performs a fast refresh if one is possible; otherwise, performs a complete refresh.
- Action: The type of refresh operation to be performed:
- On Demand: Performs a refresh when one of the DBMS_MVIEW refresh procedures is called.
- On Commit: Performs a fast refresh whenever the database commits a transaction that operates on a master table of the materialized view. This may increase the time taken to complete the commit, because the database performs the refresh operation as part of the commit process.
- Specify: Performs refresh operations according to what you specify in the Start on and Next fields.
- Start Date: Starting date and time for the first automatic refresh operation. Must be in the future.
- Next Date: Time for the next automatic refresh operation. The interval between the Start on and Next times establishes the interval for subsequent automatic refresh operations. If you do not specify a value, the refresh operation is performed only once at the time specified for Start on.
- With: Refresh type, which determines the type of materialized view:
- Primary Key: Creates a primary key materialized view, which allows materialized view master tables to be reorganized without affecting the eligibility of the materialized view for fast refresh.
- Row ID: Creates a rowid materialized view, which is useful if the materialized view does not include all primary key columns of the master tables.
- Default Storage: If Yes, DEFAULT specifies that Oracle Database will choose automatically which rollback segment to use. If you specify DEFAULT, you cannot specify the rollback_segment. DEFAULT is most useful when modifying, rather than creating, a materialized view.
- Storage Type: MASTER specifies the remote rollback segment to be used at the remote master site for the individual materialized view. LOCAL specifies the remote rollback segment to be used for the local refresh group that contains the materialized view. This is the default.
- Rollback Segment: Enter the name of the rollback segment.
- Using Constraint: If this option is checked, more rewrite alternatives can be used during the refresh operation, resulting in more efficient refresh execution. The behavior of this option is affected by whether you select Enforced or Trusted.
- Enforced: Causes only enforced constraints to be used during the refresh operation.
- Trusted: Enables the use of dimension and constraint information that has been declared trustworthy by the database administrator but that has not been validated by the database. If the dimension and constraint information is valid, performance may improve. However, if this information is invalid, then the refresh procedure may corrupt the materialized view even though it returns a success status.
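For reference, the options in this pane and the refresh clause correspond to CREATE MATERIALIZED VIEW syntax like the sketch below; the SALES table, its columns, and the presence of a suitable materialized view log are illustrative assumptions required for fast refresh on commit.

```sql
-- Hedged sketch: a materialized view populated immediately, fast-refreshed on
-- commit, and enabled for query rewrite. Assumes a materialized view log on
-- SALES that records customer_id and amount and includes new values.
CREATE MATERIALIZED VIEW sales_by_customer_mv
  BUILD IMMEDIATE
  REFRESH FAST ON COMMIT
  ENABLE QUERY REWRITE
AS
  SELECT customer_id,
         SUM(amount)   AS total_amount,
         COUNT(amount) AS amount_count,   -- required alongside SUM for fast refresh
         COUNT(*)      AS row_count
  FROM   sales
  GROUP  BY customer_id;
```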
Parent topic: Editing Tables
DDL Pane
You can review and save the SQL statements that are generated when creating or editing the object. If you want to make any changes, go back to the relevant panes and make the changes there.
For a new table, click CREATE to view the generated DDL statements.
When you edit table properties, click UPDATE to view the generated ALTER statements.
For a new table, the UPDATE tab will not be available. When you are finished, click Apply.
Parent topic: Editing Tables
Output Pane
Displays the results of the DDL commands.
If there are any errors, go to the appropriate pane, fix the errors, and run the commands again. You can save to a text file or clear the output.
Parent topic: Editing Tables