spark.logit {SparkR} | R Documentation
Fits a logistic regression model against a SparkDataFrame. It supports "binomial": binary logistic regression with pivoting; and "multinomial": multinomial logistic (softmax) regression without pivoting, similar to glmnet. Users can print a summary of the fitted model, make predictions with it, and save the model to a given path.
spark.logit(data, formula, ...)

## S4 method for signature 'SparkDataFrame,formula'
spark.logit(data, formula, regParam = 0, elasticNetParam = 0, maxIter = 100,
  tol = 1e-06, family = "auto", standardization = TRUE, thresholds = 0.5,
  weightCol = NULL)

## S4 method for signature 'LogisticRegressionModel'
predict(object, newData)

## S4 method for signature 'LogisticRegressionModel'
summary(object)

## S4 method for signature 'LogisticRegressionModel,character'
write.ml(object, path, overwrite = FALSE)
data |
SparkDataFrame for training. |
formula |
A symbolic description of the model to be fitted. Currently only a few formula operators are supported, including '~', '.', ':', '+', and '-'. |
... |
additional arguments passed to the method. |
regParam |
the regularization parameter. |
elasticNetParam |
the ElasticNet mixing parameter. For alpha = 0.0, the penalty is an L2 penalty; for alpha = 1.0, an L1 penalty; for 0.0 < alpha < 1.0, a combination of L1 and L2. Default is 0.0, which is an L2 penalty. |
maxIter |
maximum iteration number. |
tol |
convergence tolerance of iterations. |
family |
the name of the family, which describes the label distribution to be used in the model. Supported options: "auto", "binomial", and "multinomial". With "auto", the family is selected automatically based on the number of classes in the label column. |
standardization |
whether to standardize the training features before fitting the model. The model coefficients are always returned on the original scale, so standardization is transparent to users. Note that with or without standardization, the model should converge to the same solution when no regularization is applied. Default is TRUE, same as glmnet. |
thresholds |
in binary classification, a threshold in range [0, 1]: if the estimated probability of class label 1 is greater than the threshold, predict 1, otherwise 0. A high threshold encourages the model to predict 0 more often; a low threshold encourages predicting 1 more often. Note: setting this with threshold p is equivalent to setting thresholds c(1-p, p). In multiclass (or binary) classification, a vector that adjusts the probability of predicting each class. The array must have length equal to the number of classes, with values > 0, except that at most one value may be 0. The class with the largest value of p/t is predicted, where p is the original probability of that class and t is the class's threshold. |
weightCol |
The weight column name. |
object |
a LogisticRegressionModel fitted by spark.logit. |
newData |
a SparkDataFrame for testing. |
path |
The directory where the model is saved. |
overwrite |
whether to overwrite if the output path already exists. Default is FALSE, which throws an exception if the output path exists. |
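The multiclass thresholds rule described above can be sketched in plain base R (a hypothetical helper for illustration, not part of the SparkR API): the predicted class is the one with the largest ratio p/t of probability to threshold.

```r
# Sketch of the multiclass thresholds rule: predict the class whose
# ratio p/t (probability over per-class threshold) is largest.
# This is an illustrative helper, not a SparkR function.
predict_with_thresholds <- function(probs, thresholds) {
  which.max(probs / thresholds)
}

probs <- c(0.40, 0.35, 0.25)       # model probabilities for 3 classes
thresholds <- c(0.50, 0.25, 0.25)  # per-class thresholds
predict_with_thresholds(probs, thresholds)
# ratios are 0.8, 1.4, 1.0, so class 2 is predicted
```

Note how the high threshold on class 1 demotes it even though it has the largest raw probability.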
spark.logit
returns a fitted logistic regression model.
predict
returns the predicted values based on a LogisticRegressionModel.
summary
returns summary information about the fitted model as a list. The list includes coefficients (the coefficient matrix of the fitted model).
spark.logit since 2.1.0
predict(LogisticRegressionModel) since 2.1.0
summary(LogisticRegressionModel) since 2.1.0
write.ml(LogisticRegressionModel, character) since 2.1.0
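How regParam and elasticNetParam interact can be sketched in plain base R (a hypothetical helper following the glmnet-style formulation the description refers to; not a SparkR API): the penalty mixes an L1 and an L2 term according to the mixing parameter alpha.

```r
# Sketch of the glmnet-style elastic-net penalty (illustrative only,
# not a SparkR function):
#   penalty(w) = regParam * (alpha * sum(|w|) + (1 - alpha)/2 * sum(w^2))
# alpha = 0 gives a pure L2 penalty, alpha = 1 a pure L1 penalty.
elastic_net_penalty <- function(w, regParam, alpha) {
  regParam * (alpha * sum(abs(w)) + (1 - alpha) / 2 * sum(w^2))
}

w <- c(0.5, -1.0, 2.0)
elastic_net_penalty(w, regParam = 0.1, alpha = 0)  # pure L2: 0.1 * 5.25/2 = 0.2625
elastic_net_penalty(w, regParam = 0.1, alpha = 1)  # pure L1: 0.1 * 3.5  = 0.35
```

regParam scales the overall strength of the penalty, while elasticNetParam (alpha) only controls the L1/L2 mix.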
## Not run:
##D sparkR.session()
##D # binary logistic regression
##D df <- createDataFrame(iris)
##D training <- df[df$Species %in% c("versicolor", "virginica"), ]
##D model <- spark.logit(training, Species ~ ., regParam = 0.5)
##D summary <- summary(model)
##D
##D # fitted values on training data
##D fitted <- predict(model, training)
##D
##D # save fitted model to input path
##D path <- "path/to/model"
##D write.ml(model, path)
##D
##D # can also read back the saved model and predict
##D # Note that summary does not work on loaded model
##D savedModel <- read.ml(path)
##D summary(savedModel)
##D
##D # multinomial logistic regression
##D
##D df <- createDataFrame(iris)
##D model <- spark.logit(df, Species ~ ., regParam = 0.5)
##D summary <- summary(model)
##D
## End(Not run)