New release for Data Integration (new features and bug fixes)

You can now:

  • Use the SYS.TASK_KEY and SYS.TASK_NAME system and output parameters when configuring incoming parameters in a pipeline
  • Specify a SYS.LAST_LOAD_DATE value in the format "HH:mm:ss UTC"
  • Use scalar parameters in a schema name in SQL tasks (illustrated in the sketch after this list)

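To make the second and third items above concrete, here is a minimal Python sketch of the idea. The ${...} placeholder syntax and the names SCHEMA_NAME, SALES_DEV, and ORDERS are illustrative assumptions, not the product's actual expression syntax:

    from datetime import datetime, timezone
    from string import Template

    # SYS.LAST_LOAD_DATE takes a value in the format "HH:mm:ss UTC":
    # 24-hour time followed by the literal suffix "UTC".
    last_load_date = datetime.now(timezone.utc).strftime("%H:%M:%S UTC")
    print(last_load_date)  # e.g. 14:05:32 UTC

    # A scalar parameter supplying the schema name in a SQL task's statement
    # (hypothetical placeholder syntax; names are made up for illustration):
    sql = Template("SELECT * FROM ${SCHEMA_NAME}.ORDERS")
    print(sql.substitute(SCHEMA_NAME="SALES_DEV"))
    # -> SELECT * FROM SALES_DEV.ORDERS
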
This release also contains security remediations and bug fixes such as the following:

  • Fixed the error that occurred when a task run had inputs and outputs with the same "args" map key.
  • Fixed the issue that caused columns generated through an expression operator to be dropped when the data type was inferred and the data was flattened by a flatten operator.
  • Fixed the issue where a table function operator did not load on first-time rendering.
  • Fixed the issue that affected downstream nodes in a pipeline when an upstream node was removed and those downstream nodes had configured parameters associated with the upstream node's outputs.
  • Fixed the issue where the attributes of a source in a data flow were not loaded in the Property Inspector when schema drift was disabled.
  • Fixed the issue that caused column names with special characters to not display on the Data tab in the Property Inspector.
  • Fixed the issue where source attributes failed to reappear after a source was added with schema drift disabled and the source schema and data entity were parameterized.
  • Fixed the region ID issue that prevented an OCI Function in a data flow from being invoked.

The following bug fix was completed in a previous release:

  • Fixed the issue that caused concurrently running tasks with user-defined functions to intermittently fail with the error "DIS_EXEC_0025 - Unable to generate scala code".

For details, see Data Integration.