Spark ML -- Linear Regression
Perform linear regression on a Spark DataFrame.
ml_linear_regression(x, response, features, intercept = TRUE, alpha = 0,
  lambda = 0, weights.column = NULL, iter.max = 100L,
  ml.options = ml_options(), ...)

Arguments
| x | An object coercible to a Spark DataFrame (typically, a |
| response | The name of the response vector (as a length-one character vector), or a formula giving a symbolic description of the model to be fitted. When |
| features | The name of features (terms) to use for the model fit. |
| intercept | Boolean; should the model be fit with an intercept term? |
| alpha, lambda | Parameters controlling loss function penalization (e.g., for lasso, elastic net, and ridge regression). See Details for more information. |
| weights.column | The name of the column to use as weights for the model fit. |
| iter.max | The maximum number of iterations to use. |
| ml.options | Optional arguments, used to affect the model generated. See |
| ... | Optional arguments. The |
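A minimal usage sketch is shown below. The connection settings and the choice of dataset and columns (`mtcars`, `mpg`, `wt`, `cyl`) are illustrative assumptions, not part of this help page; a live Spark connection is required to run it.

```r
library(sparklyr)

# Connect to a local Spark instance and copy a sample dataset into Spark
# (connection master and dataset are illustrative).
sc <- spark_connect(master = "local")
mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)

# Fit mpg on weight and cylinder count with a small elastic net penalty:
# alpha = 0.5 mixes the L1 and L2 terms, lambda sets the overall strength.
fit <- ml_linear_regression(
  mtcars_tbl,
  response = "mpg",
  features = c("wt", "cyl"),
  alpha    = 0.5,
  lambda   = 0.01
)

summary(fit)
spark_disconnect(sc)
```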
Details
Spark supports both \(L1\) and \(L2\) regularization in linear regression models. See the preamble in the Spark Classification and Regression documentation for more details on how the loss function is parameterized.
In particular, with alpha set to 1, the parameterization is equivalent to a lasso model; with alpha set to 0, it is equivalent to a ridge regression model; values of alpha strictly between 0 and 1 yield an elastic net penalty that mixes the two.
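Concretely, Spark's elastic net penalty on the coefficient vector \(w\) can be written (identifying lambda with Spark's regParam and alpha with its elasticNetParam) as:

```latex
\lambda \left( \alpha \,\lVert w \rVert_1 + \frac{1 - \alpha}{2} \,\lVert w \rVert_2^2 \right)
```

Setting \(\alpha = 1\) leaves only the \(\lVert w \rVert_1\) term (lasso), while \(\alpha = 0\) leaves only the \(\lVert w \rVert_2^2\) term (ridge), matching the special cases described above.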
See also
Other Spark ML routines: ml_als_factorization, ml_decision_tree, ml_generalized_linear_regression, ml_gradient_boosted_trees, ml_kmeans, ml_lda, ml_logistic_regression, ml_multilayer_perceptron, ml_naive_bayes, ml_one_vs_rest, ml_pca, ml_random_forest, ml_survival_regression