write.parquet {SparkR}    R Documentation

Description

Save the contents of a SparkDataFrame as a Parquet file, preserving the schema. Files written out with this method can be read back in as a SparkDataFrame using read.parquet().
Usage

write.parquet(x, path)

saveAsParquetFile(x, path)

## S4 method for signature 'SparkDataFrame,character'
write.parquet(x, path)

## S4 method for signature 'SparkDataFrame,character'
saveAsParquetFile(x, path)
Arguments

x       A SparkDataFrame
path    The directory where the file is saved
See Also

Other SparkDataFrame functions: SparkDataFrame-class, [[, agg, arrange, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, registerTempTable, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unionAll, unpersist, withColumn, write.df, write.jdbc, write.json, write.text
Examples

## Not run:
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
path <- "path/to/file.json"
df <- read.json(sqlContext, path)
write.parquet(df, "/tmp/sparkr-tmp1/")
saveAsParquetFile(df, "/tmp/sparkr-tmp2/")
## End(Not run)
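A minimal round-trip sketch, assuming a SparkR session initialized as in the example above (sc and sqlContext already exist); the output directory /tmp/sparkr-roundtrip/ is illustrative:

## Not run:
##  Build a small SparkDataFrame from R's built-in faithful data set
df <- createDataFrame(sqlContext, faithful)
##  write.parquet writes a directory of Parquet part files, not a single file
write.parquet(df, "/tmp/sparkr-roundtrip/")
##  Read it back; the schema is preserved, as the Description states
df2 <- read.parquet(sqlContext, "/tmp/sparkr-roundtrip/")
printSchema(df2)
head(df2)
## End(Not run)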