write.text {SparkR}    R Documentation

Save the content of the SparkDataFrame in a text file at the specified path.

Description

Save the content of the SparkDataFrame in a text file at the specified path. The SparkDataFrame must have only one column of string type with the name "value". Each row becomes a new line in the output file. The text files will be encoded as UTF-8.
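A minimal sketch of the single-column requirement, assuming an active Spark session; the output directory name is only an example:

library(SparkR)
sparkR.session()

# A local data.frame with one character column named "value" becomes a
# SparkDataFrame with a single string column, as write.text expects.
df <- createDataFrame(data.frame(value = c("first line", "second line"),
                                 stringsAsFactors = FALSE))
printSchema(df)   # value: string
write.text(df, "/tmp/sparkr-write-text-example/")

Each row of df is written as one UTF-8 line under the given directory.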

Usage

write.text(x, path, ...)

## S4 method for signature 'SparkDataFrame,character'
write.text(x, path, mode = "error", ...)

Arguments

x

A SparkDataFrame

path

The directory where the file is saved

...

additional argument(s) passed to the method. Text-specific options for writing text files are listed under Data Source Option (https://spark.apache.org/docs/latest/sql-data-sources-text.html#data-source-option) for the Spark version you use.

mode

one of 'append', 'overwrite', 'error', 'errorifexists', or 'ignore'; the save mode (default is 'error'). See the sketch below.
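A hedged sketch combining mode with one text-specific option; "compression" is among the write options listed on the Data Source Option page, and df is assumed to be a single-column string SparkDataFrame as above:

# Overwrite any existing output directory and gzip-compress the part files.
write.text(df, "/tmp/sparkr-write-text-example/",
           mode = "overwrite", compression = "gzip")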

Note

write.text since 2.0.0

See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream()

Examples



## Not run: 
sparkR.session()
path <- "path/to/file.txt"
df <- read.text(path)
write.text(df, "/tmp/sparkr-tmp/")
## End(Not run)




[Package SparkR version 3.2.4]