Using the Code Editor
Learn about Data Flow and the Oracle Cloud Infrastructure Code Editor.
- Create, build, edit, and deploy Applications in Java, Scala, and Python, without having to switch between the Console and the local development environment.
- Get started with Data Flow templates that are included with the Code Editor.
- Run and test your code locally with Cloud Shell before deploying to Data Flow (see the sketch after this list).
- Set Spark parameters.
- Git integration that enables you to clone any Git-based repository, track changes to files, and commit, pull, and push code directly from within the Code Editor, letting you contribute code and revert changes with ease. See the Developer Guide for information on using Git and GitHub.
- A persistent state across sessions automatically saves your progress, so the Code Editor opens the last edited page on start-up.
- Direct access to Apache Spark and over 30 tools, including sbt and Scala, pre-installed with Cloud Shell.
- Over a dozen Data Flow examples covering different features, bundled as templates to help you get started.
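For example, the following is a minimal sketch of a Scala application that you could build with sbt and run against the local Spark instance in Cloud Shell before deploying it to Data Flow. The object name `WordCount` and the positional argument for the input path are illustrative only, not part of any bundled template.

```scala
import org.apache.spark.sql.SparkSession

// Minimal word-count sketch for local testing in Cloud Shell.
// The application name and input path argument are illustrative.
object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WordCount")
      .getOrCreate()
    import spark.implicits._

    // Read a text file passed as the first argument (for example,
    // a local path in Cloud Shell or an Object Storage URI).
    val counts = spark.read.textFile(args(0))
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .groupBy("value")
      .count()

    counts.show()
    spark.stop()
  }
}
```

After testing it locally (for example, with `spark-submit` in Cloud Shell), you could upload the resulting JAR to Object Storage and create a Data Flow Application that points at it.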
For more information about the Code Editor's features and functionality, see the Code Editor documentation.
Prerequisites
- The Code Editor uses the same IAM policies as Cloud Shell. For more information, see Cloud Shell Required IAM Policy.
- Confirm that the languages and tools listed in the following table are installed in Cloud Shell.
- If you're using Data Catalog Metastore, set up the appropriate policies.
Tool | Version | Description |
---|---|---|
Scala | 2.12.15 | Used to write Scala-based code in the Code Editor. |
sbt | 1.7.1 | Used to interactively build Scala applications. |
Python | 3.8.14 | Python interpreter. |
Git | 2.27.0 | Git Bash to interactively run Git commands. |
JDK | 11.0.17 | Used to develop, build, and test Data Flow Java Applications. |
Apache Spark | 3.2.1 | A local instance of Apache Spark running on Cloud Shell, used to test the code. |
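As a concrete example, a minimal `build.sbt` for a Scala-based Application might pin the versions listed above. The project name here is hypothetical, and Spark is marked `provided` because both the Data Flow runtime and the local Cloud Shell instance supply it:

```scala
// Minimal build.sbt sketch matching the tool versions above.
// The project name and version are illustrative only.
name := "dataflow-example"
version := "0.1.0"
scalaVersion := "2.12.15"

// Spark is supplied by the Data Flow runtime and by the local
// Cloud Shell instance, so it isn't bundled into the application JAR.
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.2.1" % "provided"
```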
Limitations
- Data Flow can access only resources in the region selected in the Console's Region selection menu when Cloud Shell was started.
- Only Java-based, Python-based, and Scala-based Data Flow Applications are supported.
- The Code Editor doesn't support compilation or debugging; use Cloud Shell for those tasks.
- The plug-in is supported only with Apache Spark version 3.2.1.
- All the limitations of Cloud Shell apply.
Setting Up the Data Flow Spark Plug-In
Follow these steps to set up the Data Flow Spark Plug-in.