pyspark.sql.DataFrameWriter.parquet¶

DataFrameWriter.parquet(path, mode=None, partitionBy=None, compression=None)[source]¶

Saves the content of the DataFrame in Parquet format at the specified path.

New in version 1.4.0.
- Parameters
- path : str
  the path in any Hadoop-supported file system
- mode : str, optional
  specifies the behavior of the save operation when data already exists.
  - append: Append contents of this DataFrame to existing data.
  - overwrite: Overwrite existing data.
  - ignore: Silently ignore this operation if data already exists.
  - error or errorifexists (default case): Throw an exception if data already exists.
- partitionBy : str or list, optional
  names of partitioning columns
- compression : str, optional
  compression codec to use when saving to file. This can be one of the known case-insensitive shortened names (none, uncompressed, snappy, gzip, lzo, brotli, lz4, and zstd). This will override spark.sql.parquet.compression.codec.
- Other Parameters
- Extra options
  For the extra options, refer to Data Source Option in the documentation for the Spark version you use.
Examples
>>> import os
>>> import tempfile
>>> df.write.parquet(os.path.join(tempfile.mkdtemp(), 'data'))