New Release for Data Integration (revised)
- Services: Data Integration
- Release Date: October 05, 2022
A new release for Data Integration is now available.
You can now:
- Connect to a data source and extract data using the base URL of a public or private REST API endpoint.
- Use the Excel file type when you configure the Object Storage, S3, or HDFS source in a data flow or data loader task.
- Use the flatten operator in a data flow to denormalize hierarchical array data from JSON, Avro, and Parquet files into a relational format (a conceptual flatten sketch follows this list).
- Use the decision operator in a pipeline to specify the branching flow based on a conditional expression that evaluates to a Boolean value.
- Select the type of columns to include as target attributes when you configure the BICC Fusion Applications source in a data flow or data loader task.
- Use the multiple data entities load type when working with a BICC Fusion Applications source in a data loader task.
- Use a file pattern in a data loader task with a logical entity qualifier to select multiple Object Storage, S3, or HDFS source data entities, and consolidate matching files into logical entities at runtime (a conceptual file-pattern sketch follows this list).
- Use the JSON_TEXT type in CAST expressions with the json_path function when configuring polling in a REST task.
- Use the JSON data type response outputs SYS.RESPONSE_PAYLOAD_JSON and SYS.RESPONSE_HEADERS_JSON in pipeline REST tasks.
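The flatten operator is configured in the Data Integration designer rather than in code. Purely as a conceptual sketch of what denormalization means here (the record and field names are hypothetical, and this is not the operator's implementation), hierarchical array data becomes one flat row per array element:

```python
import json

# Hypothetical hierarchical record, as it might be read from a JSON source.
record = json.loads("""
{
  "order_id": 1001,
  "customer": "ACME",
  "items": [
    {"sku": "A-1", "qty": 2},
    {"sku": "B-7", "qty": 5}
  ]
}
""")

# Flattening projects the parent attributes onto every element of the
# nested array, yielding one relational-style row per array element.
rows = [
    {
        "order_id": record["order_id"],
        "customer": record["customer"],
        "sku": item["sku"],
        "qty": item["qty"],
    }
    for item in record["items"]
]

for row in rows:
    print(row)  # two flat rows, one per element of "items"
```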
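Similarly, file patterns are defined on the source configuration, not in code. As a rough illustration only (the object names and pattern below are made up), files matching the pattern are grouped so they load as a single logical entity at runtime:

```python
from fnmatch import fnmatch

# Hypothetical object names in an Object Storage, S3, or HDFS location.
objects = [
    "sales/2022/orders_01.csv",
    "sales/2022/orders_02.csv",
    "sales/2022/returns_01.csv",
]

# A hypothetical wildcard pattern; every matching file is consolidated
# into one logical entity and processed together by the data loader task.
pattern = "sales/2022/orders_*.csv"
orders_entity = [name for name in objects if fnmatch(name, pattern)]

print(orders_entity)  # ['sales/2022/orders_01.csv', 'sales/2022/orders_02.csv']
```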
Deprecation:
The REST task String output parameters SYS.RESPONSE_PAYLOAD and SYS.RESPONSE_HEADERS are deprecated. We recommend converting any existing usage to their JSON data type equivalents, SYS.RESPONSE_PAYLOAD_JSON and SYS.RESPONSE_HEADERS_JSON, respectively.
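Conceptually, the difference is that the deprecated String outputs carry the response as serialized text that must be parsed before individual fields can be read, while the JSON outputs are already structured, which is what functions such as json_path operate on in polling expressions. The sketch below is an illustration only; the variable names simply mirror the output parameter names, and the sample payload and condition are hypothetical.

```python
import json

# Hypothetical response body returned by a polled REST endpoint.
raw_body = '{"status": "SUCCEEDED", "records": 42}'

# Deprecated pattern: SYS.RESPONSE_PAYLOAD is a String, so the payload
# must be parsed before a field such as "status" can be inspected.
sys_response_payload = raw_body
status = json.loads(sys_response_payload)["status"]

# Recommended pattern: SYS.RESPONSE_PAYLOAD_JSON is JSON-typed, so fields
# can be addressed directly, which is conceptually what a json_path
# expression evaluates in a REST task polling or termination condition.
sys_response_payload_json = json.loads(raw_body)
status = sys_response_payload_json["status"]

print(status == "SUCCEEDED")  # hypothetical stop-polling condition
```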
For details, see Data Integration.