pyspark.ml.feature.VectorIndexer
Class for indexing categorical feature columns in a dataset of Vector.
This has two usage modes:

Automatically identify categorical features (default behavior):
- This helps process a dataset of unknown vectors into a dataset with some continuous features and some categorical features. The choice between continuous and categorical is based upon a maxCategories parameter.
- Set maxCategories to the maximum number of categories any categorical feature should have.
- E.g.: Feature 0 has unique values {-1.0, 0.0}, and feature 1 has values {1.0, 3.0, 5.0}. If maxCategories = 2, then feature 0 will be declared categorical and use indices {0, 1}, and feature 1 will be declared continuous.

Index all features, if all features are categorical:
- If maxCategories is set to be very large, then this will build an index of unique values for all features.
- Warning: This can cause problems if features are continuous, since this will collect ALL unique values to the driver.
- E.g.: Feature 0 has unique values {-1.0, 0.0}, and feature 1 has values {1.0, 3.0, 5.0}. If maxCategories >= 3, then both features will be declared categorical.
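A minimal sketch of both modes on the example features above; it assumes a live SparkSession bound to the name spark, just as the doctest in the Examples section does:

from pyspark.ml.feature import VectorIndexer
from pyspark.ml.linalg import Vectors

# Feature 0 takes values {-1.0, 0.0}; feature 1 takes values {1.0, 3.0, 5.0}.
df = spark.createDataFrame([(Vectors.dense([-1.0, 1.0]),),
                            (Vectors.dense([0.0, 3.0]),),
                            (Vectors.dense([0.0, 5.0]),)], ["features"])

# Mode 1: maxCategories=2 indexes only feature 0 (2 unique values);
# feature 1 (3 unique values) is left continuous.
model_auto = VectorIndexer(maxCategories=2, inputCol="features",
                           outputCol="indexed").fit(df)
print(model_auto.categoryMaps)        # {0: {0.0: 0, -1.0: 1}}

# Mode 2: maxCategories >= 3 declares both features categorical.
model_all = VectorIndexer(maxCategories=3, inputCol="features",
                          outputCol="indexed").fit(df)
print(sorted(model_all.categoryMaps))  # [0, 1]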
This returns a model which can transform categorical features to use 0-based indices.

Index stability:
- This is not guaranteed to choose the same category index across multiple runs.
- If a categorical feature includes value 0, then this is guaranteed to map value 0 to index 0. This maintains vector sparsity (see the sketch below, after the planned-extensions list).
- More stability may be added in the future.

Planned future extensions:
- Preserve metadata in transform; if a feature's metadata is already present, do not recompute.
- Specify certain features to not index, either via a parameter or via existing metadata.
- Add warning if a categorical feature has only 1 category.
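A short sketch of the index-0 guarantee noted under index stability; again this assumes a SparkSession bound to spark:

from pyspark.ml.feature import VectorIndexer
from pyspark.ml.linalg import Vectors

# A categorical feature whose values include 0.0.
df = spark.createDataFrame([(Vectors.dense([0.0]),),
                            (Vectors.dense([7.0]),)], ["features"])
model = VectorIndexer(maxCategories=2, inputCol="features",
                      outputCol="indexed").fit(df)

# Value 0.0 is guaranteed to receive index 0, so zeros stay zeros and a
# sparse input vector stays sparse after transform().
assert model.categoryMaps[0][0.0] == 0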
New in version 1.4.0.
Examples
>>> from pyspark.ml.linalg import Vectors
>>> df = spark.createDataFrame([(Vectors.dense([-1.0, 0.0]),),
...     (Vectors.dense([0.0, 1.0]),), (Vectors.dense([0.0, 2.0]),)], ["a"])
>>> indexer = VectorIndexer(maxCategories=2, inputCol="a")
>>> indexer.setOutputCol("indexed")
VectorIndexer...
>>> model = indexer.fit(df)
>>> indexer.getHandleInvalid()
'error'
>>> model.setOutputCol("output")
VectorIndexerModel...
>>> model.transform(df).head().output
DenseVector([1.0, 0.0])
>>> model.numFeatures
2
>>> model.categoryMaps
{0: {0.0: 0, -1.0: 1}}
>>> indexer.setParams(outputCol="test").fit(df).transform(df).collect()[1].test
DenseVector([0.0, 1.0])
>>> params = {indexer.maxCategories: 3, indexer.outputCol: "vector"}
>>> model2 = indexer.fit(df, params)
>>> model2.transform(df).head().vector
DenseVector([1.0, 0.0])
>>> vectorIndexerPath = temp_path + "/vector-indexer"
>>> indexer.save(vectorIndexerPath)
>>> loadedIndexer = VectorIndexer.load(vectorIndexerPath)
>>> loadedIndexer.getMaxCategories() == indexer.getMaxCategories()
True
>>> modelPath = temp_path + "/vector-indexer-model"
>>> model.save(modelPath)
>>> loadedModel = VectorIndexerModel.load(modelPath)
>>> loadedModel.numFeatures == model.numFeatures
True
>>> loadedModel.categoryMaps == model.categoryMaps
True
>>> loadedModel.transform(df).take(1) == model.transform(df).take(1)
True
>>> dfWithInvalid = spark.createDataFrame([(Vectors.dense([3.0, 1.0]),)], ["a"])
>>> indexer.getHandleInvalid()
'error'
>>> model3 = indexer.setHandleInvalid("skip").fit(df)
>>> model3.transform(dfWithInvalid).count()
0
>>> model4 = indexer.setParams(handleInvalid="keep", outputCol="indexed").fit(df)
>>> model4.transform(dfWithInvalid).head().indexed
DenseVector([2.0, 1.0])
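The last two outputs above show the handleInvalid modes: 'skip' drops the row containing the unseen value 3.0, while 'keep' routes it to an extra bucket whose index equals the number of known categories for that feature. A small check of that invariant, reusing the model4 and dfWithInvalid objects from the doctest:

# Feature 0 has two known categories ({0.0: 0, -1.0: 1}), so the unseen
# value 3.0 lands in the extra bucket at index 2; feature 1 is continuous
# and passes through unchanged.
row = model4.transform(dfWithInvalid).head()
assert row.indexed[0] == len(model4.categoryMaps[0])   # 2.0 == 2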
Methods
clear(param)
Clears a param from the param map if it has been explicitly set.
copy([extra])
Creates a copy of this instance with the same uid and some extra params.
explainParam(param)
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()
Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap([extra])
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
fit(dataset[, params])
Fits a model to the input dataset with optional parameters.
fitMultiple(dataset, paramMaps)
Fits a model to the input dataset for each param map in paramMaps.
getHandleInvalid()
Gets the value of handleInvalid or its default value.
getInputCol()
Gets the value of inputCol or its default value.
getMaxCategories()
Gets the value of maxCategories or its default value.
getOrDefault(param)
Gets the value of a param in the user-supplied param map or its default value.
getOutputCol()
Gets the value of outputCol or its default value.
getParam(paramName)
Gets a param by its name.
hasDefault(param)
Checks whether a param has a default value.
hasParam(paramName)
Tests whether this instance contains a param with a given (string) name.
isDefined(param)
Checks whether a param is explicitly set by user or has a default value.
isSet(param)
Checks whether a param is explicitly set by user.
load(path)
Reads an ML instance from the input path, a shortcut of read().load(path).
read()
Returns an MLReader instance for this class.
save(path)
Saves this ML instance to the given path, a shortcut of write().save(path).
set(param, value)
Sets a parameter in the embedded param map.
setHandleInvalid(value)
Sets the value of handleInvalid.
setInputCol(value)
Sets the value of inputCol.
setMaxCategories(value)
Sets the value of maxCategories.
setOutputCol(value)
Sets the value of outputCol.
setParams(self, *[, maxCategories, …])
Sets params for this VectorIndexer.
write()
Returns an MLWriter instance for this ML instance.
Attributes
handleInvalid
How to handle invalid data (unseen labels or NULL values). Options are 'skip' (filter out rows with invalid data), 'error' (throw an error), or 'keep' (put invalid data in a special additional bucket, at the index of the number of categories of the feature).
inputCol
input column name.
maxCategories
Threshold for the number of values a categorical feature can take (>= 2). If a feature is found to have > maxCategories values, then it is declared continuous.
outputCol
output column name.
params
Returns all params ordered by name.
Methods Documentation
copy([extra])
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.
Parameters:
extra : dict, optional
    Extra parameters to copy to the new instance
Returns:
JavaParams
    Copy of this instance
extractParamMap([extra])
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, with ordering: default param values < user-supplied values < extra.
Parameters:
extra : dict, optional
    extra param values
Returns:
dict
    merged param map
fit(dataset[, params])
Fits a model to the input dataset with optional parameters.
New in version 1.3.0.
Parameters:
dataset : pyspark.sql.DataFrame
    input dataset.
params : dict or list or tuple, optional
    an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.
Returns:
Transformer or a list of Transformer
    fitted model(s)
fitMultiple(dataset, paramMaps)
Fits a model to the input dataset for each param map in paramMaps.
New in version 2.3.0.
Parameters:
dataset : pyspark.sql.DataFrame
    input dataset.
paramMaps : collections.abc.Sequence
    A Sequence of param maps.
Returns:
_FitMultipleIterator
    A thread-safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
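A sketch of consuming this iterator, assuming the df and indexer objects from the Examples section; the two param maps here are illustrative:

# One model is produced per param map; results may arrive out of
# completion order, so slot each model back by its index.
paramMaps = [{indexer.maxCategories: 2}, {indexer.maxCategories: 3}]
models = [None] * len(paramMaps)
for index, model in indexer.fitMultiple(df, paramMaps):
    models[index] = model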
getOrDefault(param)
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
Attributes Documentation
params
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.