RDD.groupBy(f, numPartitions=None, partitionFunc=<function portable_hash>)
Return an RDD of grouped items.
New in version 0.7.0.
Parameters
f : function
    a function to compute the key
numPartitions : int, optional
    the number of partitions in new RDD
partitionFunc : function, optional
    a function to compute the partition index

Returns
RDD
    a new RDD of grouped items
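The role of partitionFunc can be sketched in plain Python: Spark's shuffle places each key into a partition by applying the partition function to the key and taking the result modulo the number of partitions. The helper name assign_partition below is illustrative, not part of the PySpark API.

```python
def assign_partition(key, num_partitions, partition_func=hash):
    # Sketch of how a key is routed to a partition during the shuffle:
    # the partition index is partition_func(key) modulo num_partitions.
    # PySpark's default partition_func is portable_hash rather than
    # the builtin hash used here for illustration.
    return partition_func(key) % num_partitions

# Keys 0 and 1 (from lambda x: x % 2) land in distinct partitions
# when there are at least two partitions.
print(assign_partition(0, 2), assign_partition(1, 2))
```

Supplying a custom partitionFunc is useful when the default hash would skew data across partitions.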
See also
RDD.groupByKey()
pyspark.sql.DataFrame.groupBy()
Examples
>>> rdd = sc.parallelize([1, 1, 2, 3, 5, 8])
>>> result = rdd.groupBy(lambda x: x % 2).collect()
>>> sorted([(x, sorted(y)) for (x, y) in result])
[(0, [2, 8]), (1, [1, 1, 3, 5])]
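The grouping semantics of the doctest above can be sketched in plain Python, with no SparkContext required: each element is keyed by f, and elements that share a key are collected together. The function group_by here is a local stand-in for illustration, not the distributed PySpark implementation.

```python
from collections import defaultdict

def group_by(elements, f):
    # Sketch of RDD.groupBy semantics on a local iterable:
    # key each element with f, then collect elements sharing a key.
    groups = defaultdict(list)
    for x in elements:
        groups[f(x)].append(x)
    return list(groups.items())

result = group_by([1, 1, 2, 3, 5, 8], lambda x: x % 2)
print(sorted((k, sorted(v)) for k, v in result))
# [(0, [2, 8]), (1, [1, 1, 3, 5])]
```

Unlike this local sketch, the real operation shuffles data across partitions, which is why groupBy can be expensive on large datasets; when a subsequent aggregation is all that is needed, reduceByKey or aggregateByKey avoids materializing the full groups.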