pyflare package
Subpackages
- pyflare.sdk package
- Subpackages
- pyflare.sdk.config package
- pyflare.sdk.core package
- pyflare.sdk.depots package
- pyflare.sdk.readers package
- Submodules
- pyflare.sdk.readers.bigquery_reader module
- pyflare.sdk.readers.delta_reader module
- pyflare.sdk.readers.elasticsearch_reader module
- pyflare.sdk.readers.fastbase_reader module
- pyflare.sdk.readers.file_reader module
- pyflare.sdk.readers.iceberg_reader module
- pyflare.sdk.readers.jdbc_reader module
- pyflare.sdk.readers.minerva_reader module
- pyflare.sdk.readers.reader module
- pyflare.sdk.readers.snowflake_reader module
- Module contents
- pyflare.sdk.utils package
- pyflare.sdk.writers package
- Submodules
- pyflare.sdk.writers.bigquery_writer module
- pyflare.sdk.writers.delta_writer module
- pyflare.sdk.writers.elasticsearch_writer module
- pyflare.sdk.writers.fastbase_writer module
- pyflare.sdk.writers.file_writer module
- pyflare.sdk.writers.iceberg_writer module
- pyflare.sdk.writers.jdbc_writer module
- pyflare.sdk.writers.snowflake_writer module
- pyflare.sdk.writers.writer module
- Module contents
- Module contents
Module contents
- pyflare.load(name, format, driver=None, query=None, options=None)[source]
Read a dataset from the source.
- Parameters:
name (str) – Depot address of the source.
format (str) – Read format.
driver (str) – Driver needed to read from the source (optional).
query (str) – Query to execute (optional).
options (dict) – Additional Spark and source properties (optional).
- Returns:
A Spark DataFrame with governed data.
- Return type:
pyspark.sql.DataFrame
- Raises:
PyflareReadException – If the dataset does not exist or read access fails.
Examples
Iceberg:
read_options = {
    'compression': 'gzip',
    'iceberg': {
        'table_properties': {
            'read.split.target-size': 134217728,
            'read.split.metadata-target-size': 33554432
        }
    }
}
load(name="dataos://lakehouse:retail/city", format="iceberg", options=read_options)
JDBC:
import datetime

read_options = {
    'compression': 'gzip',
    'partitionColumn': 'last_update',
    'lowerBound': datetime.datetime(2008, 1, 1),
    'upperBound': datetime.datetime(2009, 1, 1),
    'numPartitions': 6
}
load(name="dataos://sanitypostgres:public/city", format="postgresql", driver="org.postgresql.Driver", options=read_options)
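A minimal end-to-end sketch of working with the returned DataFrame (the column names are taken from the surrounding examples and may not match your dataset):

import datetime

# load returns a governed pyspark.sql.DataFrame, so standard Spark
# transformations can be chained directly onto the result.
city_df = load(name="dataos://lakehouse:retail/city", format="iceberg")
recent = city_df.filter(city_df["last_update"] >= datetime.datetime(2009, 1, 1))
recent.select("city_id", "city_name").show(10)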
- pyflare.minerva_input(name, query, cluster_name='system', driver='io.trino.jdbc.TrinoDriver', options=None)[source]
Read a dataset by running a query on a Minerva cluster.
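Example
A minimal sketch (the depot address and SQL are illustrative; cluster_name and driver keep their defaults):

# Execute a Trino query against the default Minerva cluster and
# get the result back as a pyspark.sql.DataFrame.
df = minerva_input(
    name="dataos://lakehouse:retail/city",
    query="SELECT city_id, city_name FROM city LIMIT 100"
)
df.show()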
- pyflare.save(name: str, dataframe, format: Optional[str] = None, mode='append', driver=None, options=None)[source]
Write the transformed dataset to the output sink.
- Parameters:
name (str) – Depot address of the output sink.
dataframe (pyspark.sql.DataFrame) – The DataFrame to write.
format (str) – Output format, e.g. iceberg or parquet (optional).
mode (str) – Write mode (default is “append”).
driver (str) – Driver to use for the sink (optional).
options (dict) – Additional write configuration (optional).
- Raises:
PyflareWriteException – If the dataset does not exist or write access fails.
Example
write_options = { "compression": "gzip", "iceberg": { "table_properties": { "write.format.default": "parquet", "write.parquet.compression-codec": "gzip", "write.metadata.previous-versions-max": 3, "parquet.page.write-checksum.enabled": "false" }, "partition": [ {"type": "months", "column": "ts_city"}, {"type": "bucket", "column": "city_id", "bucket_count": 8}, {"type": "identity", "column": "city_name"} ] } } save(name="dataos://lakehouse:sdk/city", format="iceberg", mode="append", options=write_options)
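For comparison, a minimal sketch of a plain overwrite that needs no per-format options (the depot address is illustrative):

# df is the transformed pyspark.sql.DataFrame produced upstream;
# mode="overwrite" replaces the sink's contents instead of appending.
save(name="dataos://lakehouse:sdk/city", dataframe=df, format="parquet", mode="overwrite")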