pyspark.sql.SparkSession.range
SparkSession.range(start: int, end: Optional[int] = None, step: int = 1, numPartitions: Optional[int] = None) → pyspark.sql.dataframe.DataFrame

Create a DataFrame with a single pyspark.sql.types.LongType column named id, containing elements in a range from start to end (exclusive) with step value step.

New in version 2.0.0.

Changed in version 3.4.0: Supports Spark Connect.
Parameters
- start : int
  the start value
- end : int, optional
  the end value (exclusive)
- step : int, optional
  the incremental step (default: 1)
- numPartitions : int, optional
  the number of partitions of the DataFrame (see the partition-count sketch after the examples below)

Returns
DataFrame
Examples
>>> spark.range(1, 7, 2).show()
+---+
| id|
+---+
|  1|
|  3|
|  5|
+---+
If only one argument is specified, it will be used as the end value.
>>> spark.range(3).show()
+---+
| id|
+---+
|  0|
|  1|
|  2|
+---+
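The numPartitions argument only controls how the generated ids are distributed across partitions, not their values. A minimal sketch of checking the resulting partition count (assuming a classic, non-Connect SparkSession bound to the same spark variable as above, since DataFrame.rdd is not available over Spark Connect):

>>> df = spark.range(0, 10, numPartitions=2)  # illustrative values
>>> df.rdd.getNumPartitions()
2
>>> df.count()
10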