Creating a PySpark Data Flow Application
Follow these steps to create a PySpark application in Data Flow.
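These steps assume you already have a PySpark script to run. As a rough, hypothetical sketch of such a script (the file name, input path, and transformation are placeholders, not part of Data Flow itself):

    # example_app.py - hypothetical minimal PySpark script for Data Flow
    from pyspark.sql import SparkSession

    if __name__ == "__main__":
        # Data Flow supplies the cluster configuration at run time.
        spark = SparkSession.builder.appName("example-pyspark-app").getOrCreate()

        # Hypothetical input: a CSV file in Object Storage.
        df = spark.read.option("header", "true").csv(
            "oci://<bucket_name>@<objectstore_namespace>/input.csv"
        )

        # A trivial transformation so the job does some work.
        df.groupBy(df.columns[0]).count().show()

        spark.stop()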
Upload your Spark-submit files to Oracle Cloud Infrastructure Object Storage. See Set Up Object Store for details.
- Open the navigation menu and click Analytics and AI. Under Data Lake, click Data Flow.
- In the left-side menu, click Applications.
- Under List scope, select the compartment that you want to create the application in.
- On the Applications page, click Create application.
- In the Create application panel, enter a name for the application and an optional description that can help you search for it.
- Under Resource configuration, provide the following values. To help calculate the number of resources that you need, see Sizing the Data Flow Application.
- Select the Spark version.
- (Optional) Select a pool.
- For Driver shape, select the type of cluster node to use to host the Spark driver.
- (Optional) If you selected a flexible shape for the driver, customize the number of OCPUs and the amount of memory.
- For Executor shape, select the type of cluster node to use to host each Spark executor.
- (Optional) If you selected a flexible shape for the executor, customize the number of OCPUs and the amount of memory.
- (Optional) To enable Spark dynamic allocation (autoscaling), select Enable autoscaling.
- Enter the Number of executors you need. If you enabled autoscaling, enter a minimum and maximum number of executors.
- Under Application configuration, provide the following values.
- (Optional) If the application is for Spark streaming, select Spark streaming.
Note
You must have followed the steps in Getting Started with Spark Streaming for your streaming application to work.
- Don't select Use Spark-Submit options.
- Select Python from the Language options.
- Under Select a file, specify the file URL to the application in one of the following ways:
- Select the file from the Object Storage file name list. Click Change compartment if the bucket is in a different compartment.
- Select Enter the file URL manually and enter the file name and the path to it using this format:
oci://<bucket_name>@<objectstore_namespace>/<file_name>
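For example, assuming a bucket named data-flow-apps in a namespace named mytenancy (both hypothetical names), the URL for a script called example_app.py would be:
    oci://data-flow-apps@mytenancy/example_app.py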
- Enter the Main class name.
- (Optional) Enter any arguments to use to invoke the main class. There is no limit to their number or their names. For example, in the Arguments field, enter:
${<argument_1>} ${<argument_2>}
Each time you add an argument, a parameter is displayed with the name as entered in the Arguments field and a text box in which to enter its default value. It's a good idea to enter the default values now.
If Spark streaming is specified, then you must include the checkpoint folder as an argument. See the sample code on GitHub for an example of how to pass a checkpoint as an argument.
Note
Don't include either "$" or "/" characters in the parameter name or value.
- (Optional) If you have an archive.zip file, upload the file to Oracle Cloud Infrastructure Object Storage and then populate Archive URI with the path to it in one of the following ways:
- Select the file from the Object Storage file name list. Click Change compartment if the bucket is in a different compartment.
- Select Enter the file path manually and enter the file name and the path to it using this format:
oci://<bucket_name>@<namespace_name>/<file_name>
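Arguments entered in the Arguments field are passed to the script as ordinary command-line arguments. As a sketch, assuming two hypothetical arguments (an input path and a streaming checkpoint folder), the script might read them like this:

    import sys
    from pyspark.sql import SparkSession

    if __name__ == "__main__":
        # sys.argv[1:] holds the values supplied for ${<argument_1>}, ${<argument_2>}, ...
        input_path = sys.argv[1]      # hypothetical first argument
        checkpoint_dir = sys.argv[2]  # for streaming, the checkpoint folder argument

        spark = SparkSession.builder.appName("args-example").getOrCreate()
        print("Reading from:", input_path, "checkpointing to:", checkpoint_dir)
        spark.stop()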
- Under Application log location, specify where you want the application logs for Oracle Cloud Infrastructure Logging to be stored, in one of the following ways:
- Select the dataflow-logs bucket from the Object Storage file name list. Click Change compartment if the bucket is in a different compartment.
- Select Enter the bucket path manually and enter the bucket path using this format:
oci://dataflow-logs@<namespace_name>
- (Optional) Select the metastore from the list. If the metastore is in a different compartment, click Change compartment. The default managed table location is automatically populated based on the metastore.
- (Optional) To add tags to the application, select a tag namespace (for defined tags), then specify a tag key and value. Add more tags as needed. For more information about tagging, see Overview of Tagging.
- (Optional) Add advanced configuration options.
- Click Show advanced options.
- (Optional) Select Use resource principal auth to enable faster starting or if you expect the Run to last more than 24 hours.
- (Optional) Click Enable Spark Oracle data source to use Spark Oracle Datasource.
- Select a Delta Lake version. The value you select is reflected in the Spark configuration properties Key/Value pair. See Data Flow and Delta Lake for information on Delta Lake.
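With a Delta Lake version selected, the script can read and write Delta tables through the standard Delta Lake API. A minimal sketch, assuming a hypothetical Object Storage path:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-example").getOrCreate()
    delta_path = "oci://<bucket_name>@<objectstore_namespace>/delta/events"  # hypothetical path

    # Write a small DataFrame as a Delta table, then read it back.
    spark.range(100).write.format("delta").mode("overwrite").save(delta_path)
    spark.read.format("delta").load(delta_path).show()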
- In the Logs section, select the log groups and the application logs for Oracle Cloud Infrastructure Logging. Click Change compartment if the log groups are in a different compartment.
- Enter the key of the Spark configuration property and a value.
- If you're using Spark streaming, include a key of spark.sql.streaming.graceful.shutdown.timeout with a value of no more than 30 minutes (in milliseconds).
- If you're using Spark Oracle Datasource, include a key of spark.oracle.datasource.enabled with a value of true.
- Click + Another property to add another configuration property.
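For a streaming application, the graceful shutdown timeout works together with a checkpoint location so the query can resume cleanly. A minimal sketch of a Structured Streaming query that uses a checkpoint folder passed as an application argument (the rate source and console sink are placeholders for real sources and sinks):

    import sys
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("streaming-example").getOrCreate()
    checkpoint_dir = sys.argv[1]  # checkpoint folder passed as an application argument

    # Placeholder source; a real application might read from Kafka or Object Storage.
    stream = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

    query = (
        stream.writeStream
        .format("console")
        .option("checkpointLocation", checkpoint_dir)
        .start()
    )
    query.awaitTermination()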
- (Optional) Override the default value for the warehouse bucket by populating Warehouse bucket URI in the following format:
oci://<warehouse-name>@<tenancy>
- Select the network access.
- If you're attaching a private endpoint to Data Flow, click Secure access to private subnet. Select the private endpoint from the resulting list.
Note
You can't use an IP address to connect to the private endpoint; you must use the FQDN.
- If you're not using a private endpoint, click Internet access (No subnet).
- (Optional) To enable data lineage collection:
- Click Enable data lineage collection.
- Click Enter data catalog info manually or select a Data Catalog instance from a configurable compartment in the current tenancy.
- (Optional) If you clicked Enter data catalog info manually in the previous step, enter the values for Data catalog tenancy OCID, Data catalog compartment OCID, and Data Catalog instance OCID.
- For Max run duration in minutes, enter a value between 60 (1 hour) and 10080 (7 days). If you don't enter a value, the submitted run continues until it succeeds, fails, is canceled, or reaches its default maximum duration (24 hours).
- Click Create to create the application, or click Save as stack to create it later.
To change the values for language, name, and file URL in the future, see Editing an Application. You can change the language only between Java and Scala. You can't change it to Python or SQL.
Use the create command and required parameters to create an application:
oci data-flow application create [OPTIONS]
For a complete list of flags and variable options for CLI commands, see the CLI Command Reference.
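For example, a create call might look like the following sketch. The OCIDs, shapes, versions, and paths are placeholders, and the exact flag names should be confirmed against the CLI Command Reference:

    oci data-flow application create \
      --compartment-id ocid1.compartment.oc1..<unique_id> \
      --display-name "example-pyspark-app" \
      --language PYTHON \
      --spark-version 3.2.1 \
      --file-uri oci://<bucket_name>@<objectstore_namespace>/example_app.py \
      --driver-shape VM.Standard.E4.Flex \
      --driver-shape-config '{"ocpus": 1, "memoryInGBs": 16}' \
      --executor-shape VM.Standard.E4.Flex \
      --executor-shape-config '{"ocpus": 1, "memoryInGBs": 16}' \
      --num-executors 1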
Run the CreateApplication operation to create an application.
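As a sketch of the same operation with the OCI Python SDK (values are placeholders, and authentication here assumes a standard ~/.oci/config file):

    import oci

    # Assumes a standard SDK/CLI config file for authentication.
    config = oci.config.from_file()
    client = oci.data_flow.DataFlowClient(config)

    details = oci.data_flow.models.CreateApplicationDetails(
        compartment_id="ocid1.compartment.oc1..<unique_id>",
        display_name="example-pyspark-app",
        language="PYTHON",
        spark_version="3.2.1",
        file_uri="oci://<bucket_name>@<objectstore_namespace>/example_app.py",
        driver_shape="VM.Standard.E4.Flex",
        driver_shape_config=oci.data_flow.models.ShapeConfig(ocpus=1, memory_in_gbs=16),
        executor_shape="VM.Standard.E4.Flex",
        executor_shape_config=oci.data_flow.models.ShapeConfig(ocpus=1, memory_in_gbs=16),
        num_executors=1,
    )

    application = client.create_application(details).data
    print("Created application:", application.id)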