PySpark
Class Hierarchy
pyspark.mllib._common.LinearModel
pyspark.storagelevel.StorageLevel: Flags for controlling the storage of an RDD.
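A level can be passed to RDD.persist(); a minimal sketch, assuming an already-running SparkContext named sc:
    >>> from pyspark.storagelevel import StorageLevel
    >>> rdd = sc.parallelize(range(100)).persist(StorageLevel.MEMORY_AND_DISK)
    >>> rdd.count()
    100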
object: The most base type
pyspark.mllib.recommendation.ALS
pyspark.accumulators.Accumulator: A shared variable that can be accumulated, i.e., has a commutative and associative "add" operation.
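A minimal counter sketch (assumes a live SparkContext sc); tasks may only add to it, and only the driver can read its value:
    >>> acc = sc.accumulator(0)
    >>> sc.parallelize([1, 2, 3, 4]).foreach(lambda x: acc.add(x))
    >>> acc.value
    10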
pyspark.accumulators.AccumulatorParam: Helper object that defines how to accumulate values of a given type.
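Subclassing it allows accumulating non-numeric types; a sketch for elementwise list addition (assumes sc as above):
    >>> from pyspark.accumulators import AccumulatorParam
    >>> class VectorAccumulatorParam(AccumulatorParam):
    ...     def zero(self, value):
    ...         # initial value, matching the shape of the final result
    ...         return [0.0] * len(value)
    ...     def addInPlace(self, val1, val2):
    ...         # merge two partial results elementwise
    ...         for i in range(len(val1)):
    ...             val1[i] += val2[i]
    ...         return val1
    >>> va = sc.accumulator([1.0, 2.0, 3.0], VectorAccumulatorParam())
    >>> va.value
    [1.0, 2.0, 3.0]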
pyspark.broadcast.Broadcast: A broadcast variable created with SparkContext.broadcast().
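A sketch of shipping a read-only lookup table to the workers once (assumes sc):
    >>> lookup = sc.broadcast({"a": 1, "b": 2})
    >>> sc.parallelize(["a", "b", "a"]).map(lambda k: lookup.value[k]).sum()
    4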
pyspark.mllib.clustering.KMeans
pyspark.mllib.clustering.KMeansModel: A clustering model derived from the k-means method.
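A hedged sketch of training and querying a model (assumes sc and NumPy; the point values are illustrative):
    >>> from numpy import array
    >>> from pyspark.mllib.clustering import KMeans
    >>> points = sc.parallelize([array([0.0, 0.0]), array([1.0, 1.0]),
    ...                          array([9.0, 8.0]), array([8.0, 9.0])])
    >>> model = KMeans.train(points, 2, maxIterations=10)
    >>> model.predict(array([0.5, 0.5])) == model.predict(array([1.0, 1.0]))
    True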
pyspark.mllib.regression.LassoWithSGD
pyspark.mllib.regression.LinearModel: A model with a vector of coefficients and an intercept.
pyspark.mllib.regression.LinearRegressionWithSGD
pyspark.mllib.classification.LogisticRegressionWithSGD
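The *WithSGD trainers above return models with a weight vector and an intercept; a sketch using LinearRegressionWithSGD (assumes sc, NumPy, and LabeledPoint from pyspark.mllib.regression; the convergence check is illustrative):
    >>> from numpy import array
    >>> from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD
    >>> data = sc.parallelize([LabeledPoint(0.0, [0.0]), LabeledPoint(1.0, [1.0]),
    ...                        LabeledPoint(2.0, [2.0]), LabeledPoint(3.0, [3.0])])
    >>> model = LinearRegressionWithSGD.train(data, iterations=100)
    >>> abs(model.predict(array([2.0])) - 2.0) < 0.5
    True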
pyspark.mllib.recommendation.MatrixFactorizationModel: A matrix factorisation model trained by regularized alternating least-squares.
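A sketch of collaborative filtering with ALS (assumes sc; ratings are (user, product, rating) triples and the values are illustrative):
    >>> from pyspark.mllib.recommendation import ALS
    >>> ratings = sc.parallelize([(1, 1, 5.0), (1, 2, 1.0),
    ...                           (2, 1, 1.0), (2, 2, 5.0)])
    >>> model = ALS.train(ratings, rank=10, iterations=5)
    >>> model.predict(2, 2) > model.predict(2, 1)
    True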
pyspark.mllib.classification.NaiveBayes
pyspark.mllib.classification.NaiveBayesModel: Model for Naive Bayes classifiers.
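A sketch of training on labeled vectors (assumes sc, NumPy, and LabeledPoint):
    >>> from numpy import array
    >>> from pyspark.mllib.classification import NaiveBayes
    >>> from pyspark.mllib.regression import LabeledPoint
    >>> data = sc.parallelize([LabeledPoint(0.0, [1.0, 0.0]),
    ...                        LabeledPoint(1.0, [0.0, 1.0])])
    >>> model = NaiveBayes.train(data)
    >>> model.predict(array([1.0, 0.0]))
    0.0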
pyspark.rdd.RDD: A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
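A minimal transformation/action sketch (assumes sc):
    >>> rdd = sc.parallelize([1, 2, 3, 4])
    >>> rdd.map(lambda x: x * x).filter(lambda x: x > 4).collect()
    [9, 16]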
pyspark.mllib.regression.RidgeRegressionWithSGD
pyspark.mllib.classification.SVMWithSGD
pyspark.serializers.Serializer
pyspark.conf.SparkConf: Configuration for a Spark application.
pyspark.context.SparkContext: Main entry point for Spark functionality.
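A typical startup sketch combining SparkConf and SparkContext (the app name and master URL are illustrative):
    >>> from pyspark import SparkConf, SparkContext
    >>> conf = SparkConf().setAppName("example").setMaster("local[2]")
    >>> sc = SparkContext(conf=conf)
    >>> sc.parallelize([1, 2, 3]).count()
    3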
pyspark.files.SparkFiles: Resolves paths to files added through SparkContext.addFile().
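A sketch of reading a distributed file on the workers (assumes sc; the path is hypothetical):
    >>> from pyspark import SparkFiles
    >>> sc.addFile("/path/to/lookup.txt")  # hypothetical local file
    >>> def first_line(_):
    ...     # SparkFiles.get resolves the file's per-worker location
    ...     with open(SparkFiles.get("lookup.txt")) as f:
    ...         return f.readline()
    >>> sc.parallelize([0]).map(first_line).collect()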
pyspark.statcounter.StatCounter
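RDD.stats() returns a StatCounter of basic summary statistics; a sketch (assumes sc; the printed floats are illustrative):
    >>> stats = sc.parallelize([1.0, 2.0, 3.0, 4.0]).stats()
    >>> stats.mean()
    2.5
    >>> stats.stdev()  # population standard deviation
    1.118033988749895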
Generated by Epydoc 3.0.1 on Thu Jul 17 20:36:18 2014 (http://epydoc.sourceforge.net)