package output
Type Members
- case class BigqueryDatasourceOutput(table: String, project: String, dataset: String, saveMode: SaveMode = SaveMode.Append, gcsKeyJsonFilePath: Option[String], bigqueryOptions: BigqueryOutputOptions, sparkOptions: Map[String, String]) extends DatasourceOutput with Product with Serializable
Represents a BigQuery datasource output configuration.
- table
The BigQuery table name.
- project
The BigQuery project.
- dataset
The BigQuery dataset.
- gcsKeyJsonFilePath
The optional file path to the GCS key JSON file.
- bigqueryOptions
Additional options for the BigQuery output.
- sparkOptions
Spark BigQuery datasource-related options.
- case class BigqueryOutputOptions(temporaryBucket: Option[String], persistentBucket: Option[String]) extends Product with Serializable
Represents options for writing data to a BigQuery data source.
- temporaryBucket
An optional temporary bucket for BigQuery data.
- persistentBucket
An optional persistent bucket for BigQuery data.
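A minimal construction sketch for the two BigQuery classes above. All values are hypothetical, the import path is assumed from the package name at the top of this page, and SaveMode is assumed to be Spark's org.apache.spark.sql.SaveMode:

    import org.apache.spark.sql.SaveMode
    import output._ // leaf package shown above; the full path may differ

    val bqOptions = BigqueryOutputOptions(
      temporaryBucket = Some("gs://my-temp-bucket"), // hypothetical bucket
      persistentBucket = None
    )

    val bqOutput = BigqueryDatasourceOutput(
      table = "events",                // hypothetical table, project, and dataset
      project = "my-gcp-project",
      dataset = "analytics",
      gcsKeyJsonFilePath = Some("/secrets/gcs-key.json"),
      bigqueryOptions = bqOptions,
      sparkOptions = Map.empty         // no default in this signature, so pass it explicitly
    )
    // saveMode is omitted and falls back to its default, SaveMode.Append.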
- case class CassandraDatasourceOutput(table: String, saveMode: SaveMode, options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents a Cassandra datasource output configuration.
- options
Additional options for the Cassandra output.
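Since saveMode carries no default in this signature, it must be passed explicitly. A sketch with hypothetical values (import path assumed, as above):

    import org.apache.spark.sql.SaveMode
    import output._ // assumed import path

    val cassandraOut = CassandraDatasourceOutput(
      table = "user_events",              // hypothetical table
      saveMode = SaveMode.Append,         // required: no default here
      options = Map("keyspace" -> "prod") // hypothetical connector option
    )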
- case class DatasetOutput(name: String, dataset: String, saveMode: SaveMode = SaveMode.Append, format: Option[String], columnTags: Option[List[Map[String, Any]]], assertions: Option[List[Map[String, Any]]], options: DatasetOutputOptions = ...) extends OutputConfig with Product with Serializable
Represents the configuration for a dataset output.
- name
The name of the dataset output.
- dataset
The dataset address.
- saveMode
The SaveMode for writing data to the dataset. Default is SaveMode.Append.
- format
The output format.
- columnTags
The tags associated with each column of the dataset.
- assertions
The assertions for the dataset.
- options
Additional options for the dataset output.
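A sketch with hypothetical values; the dataset address format shown is an assumption, not documented on this page:

    import output._ // assumed import path

    val datasetOut = DatasetOutput(
      name = "sales_output",                     // hypothetical name
      dataset = "dataos://icebase:retail/sales", // hypothetical dataset address
      format = Some("iceberg"),
      columnTags = None,
      assertions = None
      // saveMode and options fall back to their defaults
    )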
- case class DatasetOutputOptions(title: Option[String], description: Option[String], tags: Option[List[String]], saveMode: SaveMode = SaveMode.Append, queryParams: Option[String], pathParams: Map[String, String] = Map.empty, sparkOptions: Map[String, String], sortOptions: Option[SortOptions], partitionColumns: Seq[String] = Seq.empty, streamingConfig: Option[Streaming], icebergOptions: Option[IcebergOutputOptions], jdbcOptions: Option[JDBCOutputOptions], bigqueryOptions: Option[BigqueryOutputOptions]) extends Product with Serializable
Represents options for writing data to a dataset output.
- title
An optional title for the dataset output.
- description
An optional description for the dataset output.
- tags
An optional list of tags for the dataset output.
- saveMode
The SaveMode for writing data to the dataset. Default is SaveMode.Append.
- queryParams
An optional string of query parameters to append to the resolved DataOS URL.
- pathParams
The map of path parameters to substitute in the resolved DataOS URL.
- sparkOptions
The map of Spark options for configuring the dataset output.
- sortOptions
An optional SortOptions for sorting data during output.
- partitionColumns
The sequence of partition columns for the dataset output.
- streamingConfig
An optional Streaming configuration for the dataset output.
- icebergOptions
An optional IcebergOutputOptions for writing data to Iceberg data sources.
- jdbcOptions
An optional JDBCOutputOptions for writing data to JDBC data sources.
- bigqueryOptions
An optional BigqueryOutputOptions for writing data to BigQuery data sources.
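Only saveMode, pathParams, and partitionColumns carry defaults, so a direct construction must supply every other parameter; the DatasetOutputOptionsBuilder below may be more convenient, though its methods are not documented on this page. A sketch with hypothetical values:

    import output._ // assumed import path

    val datasetOpts = DatasetOutputOptions(
      title = Some("Daily sales"),                 // hypothetical values throughout
      description = Some("Daily sales snapshots"),
      tags = Some(List("sales", "daily")),
      queryParams = None,
      sparkOptions = Map.empty,                    // passed through to the Spark writer
      sortOptions = None,
      partitionColumns = Seq("sale_date"),
      streamingConfig = None,
      icebergOptions = None,
      jdbcOptions = None,
      bigqueryOptions = None
    )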
- class DatasetOutputOptionsBuilder extends AnyRef
Builder class for constructing DatasetOutputOptions.
- abstract class DatasourceOutput extends OutputConfig
This abstract class represents a DatasourceOutput, which is a subclass of OutputConfig. DatasourceOutput provides a configuration for output settings related to a data source.
- case class ElasticsearchDatasourceOutput(nodes: String, index: String, username: Option[String], password: Option[String], saveMode: SaveMode, options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the output configuration for an Elasticsearch datasource.
- nodes
The Elasticsearch nodes to connect to.
- index
The name of the Elasticsearch index.
- username
(Optional) The username for authentication.
- password
(Optional) The password for authentication.
- options
(Optional) Spark Elasticsearch datasource options.
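A sketch with hypothetical connection details; es.nodes.wan.only is a standard elasticsearch-hadoop option, though which options this class forwards is an assumption (the OpensearchDatasourceOutput further down takes the same shape):

    import org.apache.spark.sql.SaveMode
    import output._ // assumed import path

    val esOut = ElasticsearchDatasourceOutput(
      nodes = "es-node1:9200,es-node2:9200",       // hypothetical nodes
      index = "app-logs",
      username = Some("elastic"),
      password = Some(sys.env("ES_PASSWORD")),     // read the secret from the environment
      saveMode = SaveMode.Append,                  // required: no default in this signature
      options = Map("es.nodes.wan.only" -> "true") // standard elasticsearch-hadoop option
    )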
- case class EventHubDatasourceOutput(endpoint: String, eventhubName: String, sasKeyName: String, sasKey: String, options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the output configuration for an Event Hub datasource.
- endpoint
The Event Hub endpoint.
- eventhubName
The name of the Event Hub.
- sasKeyName
The SAS key name for authentication.
- sasKey
The SAS key for authentication.
- options
(Optional) Additional options for the Event Hub Spark datasource.
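A sketch with a hypothetical namespace; note that this signature has no saveMode parameter:

    import output._ // assumed import path

    val ehOut = EventHubDatasourceOutput(
      endpoint = "sb://my-namespace.servicebus.windows.net", // hypothetical namespace
      eventhubName = "telemetry",
      sasKeyName = "RootManageSharedAccessKey",              // Azure's default SAS policy name
      sasKey = sys.env("EVENTHUB_SAS_KEY")                   // keep the key out of source
    )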
- case class FileDatasourceOutput(path: Option[String], warehousePath: Option[String], catalogName: Option[String], icebergCatalogType: Option[String], schemaName: Option[String], tableName: Option[String], format: String = "parquet", saveMode: SaveMode = SaveMode.Append, metastoreUris: Option[String], title: Option[String], description: Option[String], tags: Option[List[String]], icebergOptions: Option[IcebergOutputOptions], sortOptions: Option[SortOptions], partitionColumns: Seq[String], sparkOptions: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the output configuration for a file-based datasource.
- path
(Optional) The output file path.
- warehousePath
(Optional) The warehouse path.
- catalogName
(Optional) The catalog name.
- icebergCatalogType
(Optional) The type of the Iceberg catalog.
- schemaName
(Optional) The schema name.
- tableName
(Optional) The table name.
- format
The output format, default is "parquet".
- metastoreUris
(Optional) The URIs of the metastore.
- title
(Optional) The title of the output.
- description
(Optional) The description of the output.
- tags
(Optional) The tags associated with the output.
- icebergOptions
(Optional) Additional options for the Iceberg datasource.
- sparkOptions
(Optional) Additional Spark options for the file-based output.
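Only format, saveMode, and sparkOptions carry defaults, so the remaining parameters must be passed even when unused. A hypothetical Iceberg-catalog example ("hadoop" and "hive" are the usual Iceberg catalog types; whether this class accepts exactly those strings is an assumption):

    import output._ // assumed import path

    val fileOut = FileDatasourceOutput(
      path = None,                              // unused for catalog-based output
      warehousePath = Some("s3a://warehouse/"), // hypothetical warehouse location
      catalogName = Some("my_catalog"),
      icebergCatalogType = Some("hadoop"),
      schemaName = Some("retail"),
      tableName = Some("sales"),
      format = "iceberg",                       // overrides the "parquet" default
      metastoreUris = None,
      title = Some("Sales"),
      description = None,
      tags = None,
      icebergOptions = None,
      sortOptions = None,
      partitionColumns = Seq("sale_date")       // required: no default in this signature
    )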
- case class IcebergMergeOptions(onClause: String, whenClause: String) extends Product with Serializable
Represents options for merging Iceberg data.
- onClause
The ON clause for the merge.
- whenClause
The WHEN clause for the merge.
- case class IcebergOutputOptions(properties: Map[String, String] = Map.empty, partitionSpec: List[IcebergPartitionSpecItem] = List.empty, merge: Option[IcebergMergeOptions]) extends Product with Serializable
Represents options for writing data to an Iceberg data source.
- properties
Additional properties for the Iceberg data source.
- partitionSpec
The list of Iceberg partition specification items.
- merge
Optional merge options for Iceberg data.
- case class IcebergPartitionSpecItem(type: String, column: String, name: Option[String], numBuckets: Option[Int]) extends Product with Serializable
Represents an item in the Iceberg partition specification.
- column
The column used for partitioning.
- name
An optional name for the partition.
- numBuckets
An optional number of buckets for the partition.
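A combined sketch for the three Iceberg option classes above. The transform names ("day", "bucket") and the MERGE clause shapes follow standard Iceberg usage but are assumptions here, and type needs backticks in Scala source because it is a keyword:

    import output._ // assumed import path

    val icebergOpts = IcebergOutputOptions(
      properties = Map("write.format.default" -> "parquet"), // standard Iceberg table property
      partitionSpec = List(
        IcebergPartitionSpecItem(`type` = "day", column = "event_ts", name = None, numBuckets = None),
        IcebergPartitionSpecItem(`type` = "bucket", column = "user_id", name = None, numBuckets = Some(16))
      ),
      merge = Some(IcebergMergeOptions(
        onClause = "target.id = source.id",
        whenClause = "WHEN MATCHED THEN UPDATE SET * WHEN NOT MATCHED THEN INSERT *"
      ))
    )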
- case class JDBCDatasourceOutput(url: String, username: String, password: String, table: String, saveMode: SaveMode = SaveMode.Append, jdbcOptions: JDBCOutputOptions, sparkOptions: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the output configuration for a JDBC datasource.
- url
The JDBC connection URL.
- username
The username for authentication.
- password
The password for authentication.
- table
The table to write into.
- jdbcOptions
Additional options for the JDBC connection.
- sparkOptions
(Optional) JDBC Spark datasource-related options.
- case class JDBCOutputOptions(driver: String, query: String, maxBatchSize: Int = 500, minPartitions: Option[Int], maxPartitions: Option[Int]) extends Product with Serializable
Represents options for writing data to a JDBC data source.
- driver
The JDBC driver class name.
- query
The SQL query to write data to the data source.
- maxBatchSize
The maximum batch size for writing data. Default is 500.
- minPartitions
An optional parameter to specify the minimum number of partitions for writing data.
- maxPartitions
An optional parameter to specify the maximum number of partitions for writing data.
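A combined sketch for JDBCOutputOptions and the JDBCDatasourceOutput above, targeting a hypothetical Postgres database; the expected shape of query is an assumption. The same JDBCOutputOptions value also feeds the query-based JDBCQueryDatasourceOutput below:

    import output._ // assumed import path

    val jdbcOpts = JDBCOutputOptions(
      driver = "org.postgresql.Driver",          // hypothetical target database
      query = "INSERT INTO sales VALUES (?, ?)", // query shape is an assumption
      minPartitions = None,
      maxPartitions = Some(8)
      // maxBatchSize keeps its default of 500
    )

    val jdbcOut = JDBCDatasourceOutput(
      url = "jdbc:postgresql://db:5432/warehouse",
      username = "writer",
      password = sys.env("DB_PASSWORD"), // keep credentials out of source
      table = "sales",
      jdbcOptions = jdbcOpts             // required: no default in this signature
    )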
- case class JDBCQueryDatasourceOutput(url: String, username: String, password: String, jdbcQueryOptions: JDBCOutputOptions, sparkOptions: Map[String, Any] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the output configuration for a JDBC query-based datasource.
- url
The JDBC connection URL.
- username
The username for authentication.
- password
The password for authentication.
- sparkOptions
(Optional) JDBC Spark datasource-related options.
- case class KafkaDatasourceOutput(brokers: String, topic: String, format: String, saveMode: SaveMode = SaveMode.Append, schemaRegistryUrl: Option[String], options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the output configuration for a Kafka datasource.
- brokers
The Kafka broker addresses.
- topic
The Kafka topic.
- schemaRegistryUrl
(Optional) The URL of the schema registry for Avro format.
- options
(Optional) Additional options for the Kafka Spark datasource.
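A sketch with hypothetical brokers; per the docs above, schemaRegistryUrl is relevant for the Avro format:

    import output._ // assumed import path

    val kafkaOut = KafkaDatasourceOutput(
      brokers = "broker1:9092,broker2:9092",                  // hypothetical brokers
      topic = "events",
      format = "avro",
      schemaRegistryUrl = Some("http://schema-registry:8081") // used with Avro
    )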
- case class MongoDbDatasourceOutput(nodes: List[String], subprotocol: String, database: String, table: String, username: String, password: String, saveMode: SaveMode = SaveMode.Append, connectionProps: Option[String], sparkOptions: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the output configuration for a MongoDB datasource.
- nodes
The list of MongoDB server nodes.
- subprotocol
The MongoDB connection subprotocol.
- database
The name of the MongoDB database.
- table
The name of the MongoDB collection/table.
- username
The username for authentication.
- password
The password for authentication.
- connectionProps
(Optional) MongoDB connection URL-specific properties.
- sparkOptions
(Optional) Additional options for the MongoDB Spark datasource.
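A sketch with a hypothetical replica set; connectionProps is assumed to take URL query-string properties, per the description above:

    import output._ // assumed import path

    val mongoOut = MongoDbDatasourceOutput(
      nodes = List("mongo1:27017", "mongo2:27017"), // hypothetical nodes
      subprotocol = "mongodb",                      // "mongodb+srv" for SRV connection strings
      database = "app",
      table = "events",
      username = "writer",
      password = sys.env("MONGO_PASSWORD"),
      connectionProps = Some("replicaSet=rs0&authSource=admin") // hypothetical URL properties
    )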
- case class OpensearchDatasourceOutput(nodes: String, index: String, username: Option[String], password: Option[String], saveMode: SaveMode, options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the output configuration for an Opensearch datasource.
- nodes
The Opensearch nodes to connect to.
- index
The name of the Opensearch index.
- username
(Optional) The username for authentication.
- password
(Optional) The password for authentication.
- options
(Optional) Spark OpenSearch datasource options.
- trait OutputConfig extends AnyRef
Represents the configuration for Flare output.
- case class PulsarDatasourceOutput(serviceUrl: String, adminUrl: String, topic: String, options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the configuration for a Pulsar datasource output.
- serviceUrl
The Pulsar service URL.
- adminUrl
The Pulsar admin URL.
- topic
The Pulsar topic.
- options
Additional options for the datasource output (optional).
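A sketch using Pulsar's standard URL and topic conventions, with hypothetical hosts:

    import output._ // assumed import path

    val pulsarOut = PulsarDatasourceOutput(
      serviceUrl = "pulsar://pulsar-broker:6650",  // hypothetical broker
      adminUrl = "http://pulsar-broker:8080",
      topic = "persistent://public/default/events" // tenant/namespace/topic form
    )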
- case class RedisDatasourceOutput(host: String, port: Int, table: String, db: Int, password: Option[String], saveMode: SaveMode = SaveMode.Append, options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the configuration for a Redis datasource output.
- host
The Redis server host.
- port
The Redis server port.
- table
The Redis table.
- db
The Redis database index.
- password
The password for Redis authentication (optional).
- options
Additional options for the Spark Redis datasource output (optional).
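A sketch with hypothetical connection details (6379 is the default Redis port):

    import output._ // assumed import path

    val redisOut = RedisDatasourceOutput(
      host = "redis.internal",                   // hypothetical host
      port = 6379,
      table = "sessions",
      db = 0,
      password = Some(sys.env("REDIS_PASSWORD")) // required parameter, even when None
    )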
- case class RedshiftDatasourceOutput(jdbcUrl: String, tempDir: String, username: String, password: String, dbTable: String, saveMode: SaveMode, options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the configuration for a Redshift datasource output.
- jdbcUrl
The JDBC URL for the Redshift connection.
- tempDir
The temporary directory path for Redshift data staging.
- username
The username for Redshift authentication.
- password
The password for Redshift authentication.
- dbTable
The Redshift database table.
- options
Additional options for the Redshift datasource output (optional).
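A sketch with a hypothetical cluster; tempDir points at the S3 staging location described above, and saveMode has no default in this signature:

    import org.apache.spark.sql.SaveMode
    import output._ // assumed import path

    val redshiftOut = RedshiftDatasourceOutput(
      jdbcUrl = "jdbc:redshift://cluster:5439/warehouse", // hypothetical cluster
      tempDir = "s3a://staging-bucket/redshift/",         // staging directory
      username = "writer",
      password = sys.env("REDSHIFT_PASSWORD"),
      dbTable = "public.sales",
      saveMode = SaveMode.Append // required: no default here
    )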
- case class SnowflakeDatasourceOutput(url: String, database: String, table: String, schema: String, saveMode: SaveMode = SaveMode.Append, warehouse: Option[String], user: String, password: Option[String], token: Option[String], pemPrivateKey: Option[String], options: Map[String, String] = Map.empty) extends DatasourceOutput with Product with Serializable
Represents the configuration for a Snowflake datasource output.
- url
The Snowflake URL.
- database
The Snowflake database name.
- table
The Snowflake table name.
- schema
The Snowflake schema name.
- warehouse
The Snowflake warehouse name (optional).
- user
The Snowflake username.
- password
The Snowflake password (optional).
- token
The Snowflake authentication token (optional).
- pemPrivateKey
The Snowflake PEM private key (optional).
- options
Additional options for the Snowflake Spark datasource output (optional).
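A password-based sketch with a hypothetical account; which of password, token, and pemPrivateKey may be combined is not documented on this page:

    import output._ // assumed import path

    val sfOut = SnowflakeDatasourceOutput(
      url = "https://myaccount.snowflakecomputing.com", // hypothetical account URL
      database = "ANALYTICS",
      table = "SALES",
      schema = "PUBLIC",
      warehouse = Some("COMPUTE_WH"),
      user = "WRITER",
      password = Some(sys.env("SNOWFLAKE_PASSWORD")),
      token = None,
      pemPrivateKey = None
    )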
- case class SortColumnSpecItem(name: String, order: SortOrder = SortOrder.ASC) extends Product with Serializable
Represents a sort column specification item.
- name
The name of the sort column.
- order
The sort order for the column. Default is ASC.
- case class SortOptions(columns: List[SortColumnSpecItem], mode: SortMode = SortMode.PARTITION) extends Product with Serializable
Represents options for sorting data during output.
- columns
The list of sort column specifications.
- mode
The sort mode. Default is SortMode.PARTITION.
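A sketch that sorts each output partition by event_ts descending, then id ascending; SortOrder.DESC is an assumption, since only ASC is named on this page:

    import output._ // assumed import path

    val sortOpts = SortOptions(
      columns = List(
        SortColumnSpecItem(name = "event_ts", order = SortOrder.DESC), // DESC assumed to exist
        SortColumnSpecItem(name = "id")                                // order defaults to ASC
      )
      // mode defaults to SortMode.PARTITION
    )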
Value Members
- object DatasetOutputOptions extends Serializable
- object OutputFormat extends Enumeration
Represents the available output formats supported by Flare.
- object SortMode extends Enumeration
Enumeration for specifying sort mode.
- object SortOrder extends Enumeration
Enumeration for specifying sort order.