pyspark.pandas.read_delta

pyspark.pandas.read_delta(path, version=None, timestamp=None, index_col=None, **options)
Read a Delta Lake table on some file system and return a DataFrame.
If the Delta Lake table is already stored in the catalog (aka the metastore), use ‘read_table’.
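For instance, a minimal sketch of the distinction (the table name 'my_catalog_table' and the path '/tmp/delta/my_table' are hypothetical):

>>> import pyspark.pandas as ps
>>> ps.read_table('my_catalog_table')     # table registered in the catalog/metastore
>>> ps.read_delta('/tmp/delta/my_table')  # Delta files at a file-system path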
Parameters
path : string
    Path to the Delta Lake table.
version : string, optional
    Specifies the table version (based on Delta’s internal transaction version) to read from, using Delta’s time travel feature. This sets Delta’s ‘versionAsOf’ option. This parameter cannot be used together with the timestamp parameter; otherwise a ValueError is raised.
timestamp : string, optional
    Specifies the table version (based on timestamp) to read from, using Delta’s time travel feature. This must be a valid date or timestamp string in Spark, and sets Delta’s ‘timestampAsOf’ option. This parameter cannot be used together with the version parameter; otherwise a ValueError is raised (see the time travel sketches at the end of the Examples section).
index_col : str or list of str, optional
    Index column of table in Spark.
options
    Additional options that can be passed onto Delta.
See also
DataFrame.to_delta
read_table
read_spark_io
read_parquet
Examples
>>> ps.range(1).to_delta('%s/read_delta/foo' % path)
>>> ps.read_delta('%s/read_delta/foo' % path)
   id
0   0
>>> ps.range(10, 15, num_partitions=1).to_delta('%s/read_delta/foo' % path,
...                                             mode='overwrite')
>>> ps.read_delta('%s/read_delta/foo' % path)
   id
0  10
1  11
2  12
3  13
4  14
>>> ps.read_delta('%s/read_delta/foo' % path, version=0)
   id
0   0
You can preserve the index in the roundtrip as below.
>>> ps.range(10, 15, num_partitions=1).to_delta(
...     '%s/read_delta/bar' % path, index_col="index")
>>> ps.read_delta('%s/read_delta/bar' % path, index_col="index")
       id
index
0      10
1      11
2      12
3      13
4      14
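The timestamp parameter selects a version by time rather than by transaction number. A sketch, where the timestamp string is hypothetical and must fall within the table’s history:

>>> ps.read_delta('%s/read_delta/foo' % path,
...               timestamp='2025-01-01 00:00:00')

The rows returned depend on which table version was current at the given timestamp.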
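As noted above, passing version and timestamp together is rejected; a sketch of the failure mode (the timestamp string is again hypothetical):

>>> ps.read_delta('%s/read_delta/foo' % path,
...               version=0, timestamp='2025-01-01 00:00:00')
Traceback (most recent call last):
    ...
ValueError: ...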