Normalizer

class pyspark.ml.feature.Normalizer(*, p: float = 2.0, inputCol: Optional[str] = None, outputCol: Optional[str] = None)

Normalize a vector to have unit norm using the given p-norm.
New in version 1.4.0.
Examples
>>> from pyspark.ml.linalg import Vectors
>>> svec = Vectors.sparse(4, {1: 4.0, 3: 3.0})
>>> df = spark.createDataFrame([(Vectors.dense([3.0, -4.0]), svec)], ["dense", "sparse"])
>>> normalizer = Normalizer(p=2.0)
>>> normalizer.setInputCol("dense")
Normalizer...
>>> normalizer.setOutputCol("features")
Normalizer...
>>> normalizer.transform(df).head().features
DenseVector([0.6, -0.8])
>>> normalizer.setParams(inputCol="sparse", outputCol="freqs").transform(df).head().freqs
SparseVector(4, {1: 0.8, 3: 0.6})
>>> params = {normalizer.p: 1.0, normalizer.inputCol: "dense", normalizer.outputCol: "vector"}
>>> normalizer.transform(df, params).head().vector
DenseVector([0.4286, -0.5714])
>>> normalizerPath = temp_path + "/normalizer"
>>> normalizer.save(normalizerPath)
>>> loadedNormalizer = Normalizer.load(normalizerPath)
>>> loadedNormalizer.getP() == normalizer.getP()
True
>>> loadedNormalizer.transform(df).take(1) == normalizer.transform(df).take(1)
True
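The outputs above follow directly from dividing each vector by its p-norm. A minimal sketch of that arithmetic in plain Python (no Spark session required), reproducing the dense-vector results shown in the examples:

>>> dense = [3.0, -4.0]
>>> l2 = sum(x ** 2 for x in dense) ** 0.5    # p = 2: sqrt(9 + 16) = 5.0
>>> [x / l2 for x in dense]
[0.6, -0.8]
>>> l1 = sum(abs(x) for x in dense)           # p = 1: |3| + |-4| = 7.0
>>> [round(x / l1, 4) for x in dense]
[0.4286, -0.5714]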
Methods
clear(param)  Clears a param from the param map if it has been explicitly set.
copy([extra])  Creates a copy of this instance with the same uid and some extra params.
explainParam(param)  Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()  Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap([extra])  Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getInputCol()  Gets the value of inputCol or its default value.
getOrDefault(param)  Gets the value of a param in the user-supplied param map or its default value.
getOutputCol()  Gets the value of outputCol or its default value.
getP()  Gets the value of p or its default value.
getParam(paramName)  Gets a param by its name.
hasDefault(param)  Checks whether a param has a default value.
hasParam(paramName)  Tests whether this instance contains a param with a given (string) name.
isDefined(param)  Checks whether a param is explicitly set by user or has a default value.
isSet(param)  Checks whether a param is explicitly set by user.
load(path)  Reads an ML instance from the input path, a shortcut of read().load(path).
read()  Returns an MLReader instance for this class.
save(path)  Save this ML instance to the given path, a shortcut of write().save(path).
set(param, value)  Sets a parameter in the embedded param map.
setInputCol(value)  Sets the value of inputCol.
setOutputCol(value)  Sets the value of outputCol.
setP(value)  Sets the value of p.
setParams(self, *[, p, inputCol, outputCol])  Sets params for this Normalizer.
transform(dataset[, params])  Transforms the input dataset with optional parameters.
write()  Returns an MLWriter instance for this ML instance.
Attributes
inputCol  input column name.
outputCol  output column name.
p  the p norm value.
params  Returns all params ordered by name.
Methods Documentation
clear(param: pyspark.ml.param.Param) → None

Clears a param from the param map if it has been explicitly set.
copy(extra: Optional[ParamMap] = None) → JP

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

Parameters

extra : dict, optional
    Extra parameters to copy to the new instance.

Returns

JavaParams
    Copy of this instance.
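A minimal sketch (parameter values are illustrative) of copying a configured Normalizer while overriding one param only on the copy:

>>> from pyspark.ml.feature import Normalizer
>>> normalizer = Normalizer(p=2.0, inputCol="dense", outputCol="features")
>>> copied = normalizer.copy({normalizer.p: 1.0})
>>> normalizer.getP()   # the original keeps its own value
2.0
>>> copied.getP()       # the extra value applies only to the copy
1.0
>>> copied.uid == normalizer.uid   # the copy shares the same uid
True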
explainParam(param: Union[str, pyspark.ml.param.Param]) → str

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() → str

Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap(extra: Optional[ParamMap] = None) → ParamMap

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra : dict, optional
    extra param values

Returns

dict
    merged param map
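A minimal sketch (values are illustrative) of the default < user-supplied < extra ordering described above:

>>> from pyspark.ml.feature import Normalizer
>>> normalizer = Normalizer()            # p falls back to its default, 2.0
>>> normalizer.extractParamMap()[normalizer.p]
2.0
>>> normalizer.setP(1.0)                 # a user-supplied value wins over the default
Normalizer...
>>> normalizer.extractParamMap()[normalizer.p]
1.0
>>> normalizer.extractParamMap({normalizer.p: 3.0})[normalizer.p]   # extra wins over both
3.0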
getInputCol() → str

Gets the value of inputCol or its default value.

getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol() → str

Gets the value of outputCol or its default value.

getP() → float

Gets the value of p or its default value.

getParam(paramName: str) → pyspark.ml.param.Param

Gets a param by its name.

hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param has a default value.

hasParam(paramName: str) → bool

Tests whether this instance contains a param with a given (string) name.

isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user or has a default value.

isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user.
classmethod load(path: str) → RL

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read() → pyspark.ml.util.JavaMLReader[RL]

Returns an MLReader instance for this class.

save(path: str) → None

Save this ML instance to the given path, a shortcut of write().save(path).

set(param: pyspark.ml.param.Param, value: Any) → None

Sets a parameter in the embedded param map.
setInputCol(value: str) → pyspark.ml.feature.Normalizer

Sets the value of inputCol.

setOutputCol(value: str) → pyspark.ml.feature.Normalizer

Sets the value of outputCol.

setP(value: float) → pyspark.ml.feature.Normalizer

Sets the value of p.

New in version 1.4.0.

setParams(self, *, p=2.0, inputCol=None, outputCol=None)

Sets params for this Normalizer.

New in version 1.4.0.
transform(dataset: pyspark.sql.dataframe.DataFrame, params: Optional[ParamMap] = None) → pyspark.sql.dataframe.DataFrame

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters

dataset : pyspark.sql.DataFrame
    input dataset
params : dict, optional
    an optional param map that overrides embedded params.

Returns

pyspark.sql.DataFrame
    transformed dataset
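A minimal sketch (assumes an active SparkSession bound to spark, as in the class examples above) showing that a param map passed to transform overrides the embedded params for that call only:

>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.ml.feature import Normalizer
>>> df = spark.createDataFrame([(Vectors.dense([3.0, -4.0]),)], ["dense"])
>>> normalizer = Normalizer(p=2.0, inputCol="dense", outputCol="features")
>>> normalizer.transform(df).head().features
DenseVector([0.6, -0.8])
>>> normalizer.transform(df, {normalizer.p: 1.0}).head().features
DenseVector([0.4286, -0.5714])
>>> normalizer.getP()   # the embedded param is left unchanged
2.0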
write() → pyspark.ml.util.JavaMLWriter

Returns an MLWriter instance for this ML instance.
Attributes Documentation
inputCol = Param(parent='undefined', name='inputCol', doc='input column name.')

outputCol = Param(parent='undefined', name='outputCol', doc='output column name.')

p = Param(parent='undefined', name='p', doc='the p norm value.')

params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.