tessl/pypi-scikit-learn

A comprehensive machine learning library providing supervised and unsupervised learning algorithms with consistent APIs and extensive tools for data preprocessing, model evaluation, and deployment.


Supervised Learning

This document covers all supervised learning algorithms in scikit-learn, including classification and regression methods.

Linear Models

Regression

LinearRegression { .api }

from sklearn.linear_model import LinearRegression

LinearRegression(
    fit_intercept: bool = True,
    copy_X: bool = True,
    n_jobs: int | None = None,
    positive: bool = False
)

Ordinary least squares Linear Regression.
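
A minimal usage sketch on toy data (the dataset and numbers are illustrative, not from the source). Fitting a perfectly linear relationship recovers the slope and intercept exactly:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit y = 2*x + 1 on a tiny synthetic dataset.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

model = LinearRegression().fit(X, y)
# Exact fit: coef_ ~ [2.0], intercept_ ~ 1.0
```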

Ridge { .api }

from sklearn.linear_model import Ridge

Ridge(
    alpha: float = 1.0,
    fit_intercept: bool = True,
    copy_X: bool = True,
    max_iter: int | None = None,
    tol: float = 0.0001,
    solver: str = "auto",
    positive: bool = False,
    random_state: int | RandomState | None = None
)

Linear least squares with l2 regularization.
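
A quick sketch (synthetic data, chosen for illustration) showing the effect of the L2 penalty: compared with unpenalized least squares, `alpha` shrinks the coefficient vector toward zero:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

# The L2 penalty shrinks the coefficient norm relative to OLS.
shrunk = np.linalg.norm(ridge.coef_) < np.linalg.norm(ols.coef_)
```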

RidgeCV { .api }

from sklearn.linear_model import RidgeCV

RidgeCV(
    alphas: ArrayLike = (0.1, 1.0, 10.0),
    fit_intercept: bool = True,
    scoring: str | Callable | None = None,
    cv: int | BaseCrossValidator | Iterable | None = None,
    gcv_mode: str | None = None,
    store_cv_values: bool = False,
    alpha_per_target: bool = False
)

Ridge regression with built-in cross-validation.

Lasso { .api }

from sklearn.linear_model import Lasso

Lasso(
    alpha: float = 1.0,
    fit_intercept: bool = True,
    precompute: bool | ArrayLike = False,
    copy_X: bool = True,
    max_iter: int = 1000,
    tol: float = 0.0001,
    warm_start: bool = False,
    positive: bool = False,
    random_state: int | RandomState | None = None,
    selection: str = "cyclic"
)

Linear Model trained with L1 prior as regularizer (aka the Lasso).
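
A sketch of the Lasso's characteristic behavior on toy data (values illustrative): the L1 penalty drives most irrelevant coefficients exactly to zero, performing feature selection:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only the first two features carry signal.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.5).fit(X, y)
n_selected = int(np.count_nonzero(lasso.coef_))
```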

LassoCV { .api }

from sklearn.linear_model import LassoCV

LassoCV(
    eps: float = 0.001,
    n_alphas: int = 100,
    alphas: ArrayLike | None = None,
    fit_intercept: bool = True,
    precompute: bool | str | ArrayLike = "auto",
    max_iter: int = 1000,
    tol: float = 0.0001,
    copy_X: bool = True,
    cv: int | BaseCrossValidator | Iterable | None = None,
    verbose: bool | int = False,
    n_jobs: int | None = None,
    positive: bool = False,
    random_state: int | RandomState | None = None,
    selection: str = "cyclic"
)

Lasso linear model with iterative fitting along a regularization path.

ElasticNet { .api }

from sklearn.linear_model import ElasticNet

ElasticNet(
    alpha: float = 1.0,
    l1_ratio: float = 0.5,
    fit_intercept: bool = True,
    precompute: bool | ArrayLike = False,
    max_iter: int = 1000,
    copy_X: bool = True,
    tol: float = 0.0001,
    warm_start: bool = False,
    positive: bool = False,
    random_state: int | RandomState | None = None,
    selection: str = "cyclic"
)

Linear regression with combined L1 and L2 priors as regularizer.
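
A minimal sketch (toy data): `l1_ratio` blends the two penalties, with 1.0 being pure Lasso and 0.0 pure Ridge:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(42)
X = rng.normal(size=(80, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=80)

# 70% L1 / 30% L2 mix of the total penalty strength alpha.
enet = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)
```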

ElasticNetCV { .api }

from sklearn.linear_model import ElasticNetCV

ElasticNetCV(
    l1_ratio: float | ArrayLike = 0.5,
    eps: float = 0.001,
    n_alphas: int = 100,
    alphas: ArrayLike | None = None,
    fit_intercept: bool = True,
    precompute: bool | str | ArrayLike = "auto",
    max_iter: int = 1000,
    tol: float = 0.0001,
    cv: int | BaseCrossValidator | Iterable | None = None,
    copy_X: bool = True,
    verbose: bool | int = False,
    n_jobs: int | None = None,
    positive: bool = False,
    random_state: int | RandomState | None = None,
    selection: str = "cyclic"
)

Elastic Net model with iterative fitting along a regularization path.

Lars { .api }

from sklearn.linear_model import Lars

Lars(
    fit_intercept: bool = True,
    verbose: bool | int = False,
    precompute: bool | str | ArrayLike = "auto",
    n_nonzero_coefs: int = 500,
    eps: float = ...,
    copy_X: bool = True,
    fit_path: bool = True,
    jitter: float | None = None,
    random_state: int | RandomState | None = None
)

Least Angle Regression model a.k.a. LAR.

LarsCV { .api }

from sklearn.linear_model import LarsCV

LarsCV(
    fit_intercept: bool = True,
    verbose: bool | int = False,
    max_iter: int = 500,
    precompute: bool | str | ArrayLike = "auto",
    cv: int | BaseCrossValidator | Iterable | None = None,
    max_n_alphas: int = 1000,
    n_jobs: int | None = None,
    eps: float = ...,
    copy_X: bool = True
)

Cross-validated Least Angle Regression model.

LassoLars { .api }

from sklearn.linear_model import LassoLars

LassoLars(
    alpha: float = 1.0,
    fit_intercept: bool = True,
    verbose: bool | int = False,
    precompute: bool | str | ArrayLike = "auto",
    max_iter: int = 500,
    eps: float = ...,
    copy_X: bool = True,
    fit_path: bool = True,
    positive: bool = False,
    jitter: float | None = None,
    random_state: int | RandomState | None = None
)

Lasso model fit with Least Angle Regression a.k.a. Lars.

LassoLarsCV { .api }

from sklearn.linear_model import LassoLarsCV

LassoLarsCV(
    fit_intercept: bool = True,
    verbose: bool | int = False,
    max_iter: int = 500,
    precompute: bool | str | ArrayLike = "auto",
    cv: int | BaseCrossValidator | Iterable | None = None,
    max_n_alphas: int = 1000,
    n_jobs: int | None = None,
    eps: float = ...,
    copy_X: bool = True,
    positive: bool = False
)

Cross-validated Lasso, using the LARS algorithm.

LassoLarsIC { .api }

from sklearn.linear_model import LassoLarsIC

LassoLarsIC(
    criterion: str = "aic",
    fit_intercept: bool = True,
    verbose: bool | int = False,
    precompute: bool | str | ArrayLike = "auto",
    max_iter: int = 500,
    eps: float = ...,
    copy_X: bool = True,
    positive: bool = False,
    noise_variance: float | None = None
)

Lasso model fit with Lars using BIC or AIC for model selection.

OrthogonalMatchingPursuit { .api }

from sklearn.linear_model import OrthogonalMatchingPursuit

OrthogonalMatchingPursuit(
    n_nonzero_coefs: int | None = None,
    tol: float | None = None,
    fit_intercept: bool = True,
    precompute: bool | str | ArrayLike = "auto"
)

Orthogonal Matching Pursuit model (OMP).

OrthogonalMatchingPursuitCV { .api }

from sklearn.linear_model import OrthogonalMatchingPursuitCV

OrthogonalMatchingPursuitCV(
    copy: bool = True,
    fit_intercept: bool = True,
    max_iter: int | None = None,
    cv: int | BaseCrossValidator | Iterable | None = None,
    n_jobs: int | None = None,
    verbose: bool | int = False
)

Cross-validated Orthogonal Matching Pursuit model (OMP).

BayesianRidge { .api }

from sklearn.linear_model import BayesianRidge

BayesianRidge(
    max_iter: int = 300,
    tol: float = 0.001,
    alpha_1: float = 1e-06,
    alpha_2: float = 1e-06,
    lambda_1: float = 1e-06,
    lambda_2: float = 1e-06,
    alpha_init: float | None = None,
    lambda_init: float | None = None,
    compute_score: bool = False,
    fit_intercept: bool = True,
    copy_X: bool = True,
    verbose: bool = False
)

Bayesian ridge regression.

ARDRegression { .api }

from sklearn.linear_model import ARDRegression

ARDRegression(
    max_iter: int = 300,
    tol: float = 0.001,
    alpha_1: float = 1e-06,
    alpha_2: float = 1e-06,
    lambda_1: float = 1e-06,
    lambda_2: float = 1e-06,
    compute_score: bool = False,
    threshold_lambda: float = 10000.0,
    fit_intercept: bool = True,
    copy_X: bool = True,
    verbose: bool = False
)

Bayesian ARD regression.

MultiTaskLasso { .api }

from sklearn.linear_model import MultiTaskLasso

MultiTaskLasso(
    alpha: float = 1.0,
    fit_intercept: bool = True,
    copy_X: bool = True,
    max_iter: int = 1000,
    tol: float = 0.0001,
    warm_start: bool = False,
    random_state: int | RandomState | None = None,
    selection: str = "cyclic"
)

Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer.

MultiTaskLassoCV { .api }

from sklearn.linear_model import MultiTaskLassoCV

MultiTaskLassoCV(
    eps: float = 0.001,
    n_alphas: int = 100,
    alphas: ArrayLike | None = None,
    fit_intercept: bool = True,
    max_iter: int = 1000,
    tol: float = 0.0001,
    copy_X: bool = True,
    cv: int | BaseCrossValidator | Iterable | None = None,
    verbose: bool | int = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    selection: str = "cyclic"
)

Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer.

MultiTaskElasticNet { .api }

from sklearn.linear_model import MultiTaskElasticNet

MultiTaskElasticNet(
    alpha: float = 1.0,
    l1_ratio: float = 0.5,
    fit_intercept: bool = True,
    copy_X: bool = True,
    max_iter: int = 1000,
    tol: float = 0.0001,
    warm_start: bool = False,
    random_state: int | RandomState | None = None,
    selection: str = "cyclic"
)

Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer.

MultiTaskElasticNetCV { .api }

from sklearn.linear_model import MultiTaskElasticNetCV

MultiTaskElasticNetCV(
    l1_ratio: float | ArrayLike = 0.5,
    eps: float = 0.001,
    n_alphas: int = 100,
    alphas: ArrayLike | None = None,
    fit_intercept: bool = True,
    max_iter: int = 1000,
    tol: float = 0.0001,
    cv: int | BaseCrossValidator | Iterable | None = None,
    copy_X: bool = True,
    verbose: bool | int = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    selection: str = "cyclic"
)

Multi-task L1/L2 ElasticNet with built-in cross-validation.

HuberRegressor { .api }

from sklearn.linear_model import HuberRegressor

HuberRegressor(
    epsilon: float = 1.35,
    max_iter: int = 100,
    alpha: float = 0.0001,
    warm_start: bool = False,
    fit_intercept: bool = True,
    tol: float = 1e-05
)

Linear regression model that is robust to outliers.
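
A sketch of the robustness property on contrived data (outlier placement chosen to bias ordinary least squares): the Huber loss bounds each point's influence, so a few gross outliers barely move the slope:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50).reshape(-1, 1)
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=50)
y[-5:] += 20.0  # gross outliers at the high end of x

huber = HuberRegressor().fit(X, y)
ols = LinearRegression().fit(X, y)
# OLS is pulled toward the outliers; Huber stays near the true slope 0.5.
```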

RANSACRegressor { .api }

from sklearn.linear_model import RANSACRegressor

RANSACRegressor(
    estimator: object | None = None,
    min_samples: int | float | None = None,
    residual_threshold: float | None = None,
    is_data_valid: Callable | None = None,
    is_model_valid: Callable | None = None,
    max_trials: int = 100,
    max_skips: int = ...,
    stop_n_inliers: int = ...,
    stop_score: float = ...,
    stop_probability: float = 0.99,
    loss: str | Callable = "absolute_error",
    random_state: int | RandomState | None = None,
    base_estimator: object = "deprecated"
)

RANSAC (RANdom SAmple Consensus) algorithm.
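
A sketch with synthetic contaminated data (10% gross outliers): RANSAC finds the consensus set of inliers and refits only on them, and the fitted base estimator and inlier mask are exposed afterwards:

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(0)
X = np.arange(100, dtype=float).reshape(-1, 1)
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=100)
y[::10] += 300.0  # 10% gross outliers

ransac = RANSACRegressor(random_state=0).fit(X, y)
slope = ransac.estimator_.coef_[0]       # refit on inliers only
inlier_frac = ransac.inlier_mask_.mean()
```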

TheilSenRegressor { .api }

from sklearn.linear_model import TheilSenRegressor

TheilSenRegressor(
    fit_intercept: bool = True,
    copy_X: bool = True,
    max_subpopulation: int = 10000,
    n_subsamples: int | None = None,
    max_iter: int = 300,
    tol: float = 0.001,
    random_state: int | RandomState | None = None,
    n_jobs: int | None = None,
    verbose: bool = False
)

Theil-Sen Estimator: robust multivariate regression model.

PassiveAggressiveRegressor { .api }

from sklearn.linear_model import PassiveAggressiveRegressor

PassiveAggressiveRegressor(
    C: float = 1.0,
    fit_intercept: bool = True,
    max_iter: int = 1000,
    tol: float = 0.001,
    early_stopping: bool = False,
    validation_fraction: float = 0.1,
    n_iter_no_change: int = 5,
    shuffle: bool = True,
    verbose: int = 0,
    loss: str = "epsilon_insensitive",
    epsilon: float = 0.1,
    random_state: int | RandomState | None = None,
    warm_start: bool = False,
    average: bool | int = False
)

Passive Aggressive Regressor.

SGDRegressor { .api }

from sklearn.linear_model import SGDRegressor

SGDRegressor(
    loss: str = "squared_error",
    penalty: str = "l2",
    alpha: float = 0.0001,
    l1_ratio: float = 0.15,
    fit_intercept: bool = True,
    max_iter: int = 1000,
    tol: float = 0.001,
    shuffle: bool = True,
    verbose: int = 0,
    epsilon: float = 0.1,
    random_state: int | RandomState | None = None,
    learning_rate: str = "invscaling",
    eta0: float = 0.01,
    power_t: float = 0.25,
    early_stopping: bool = False,
    validation_fraction: float = 0.1,
    n_iter_no_change: int = 5,
    warm_start: bool = False,
    average: bool | int = False
)

Linear model fitted by minimizing a regularized empirical loss with SGD.
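
SGD is sensitive to feature scaling, so a standardization step usually belongs in front of it. A minimal sketch (synthetic features with wildly different scales, values illustrative):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * [1.0, 100.0, 0.01]  # mismatched scales
y = X @ np.array([1.0, 0.02, 50.0]) + rng.normal(scale=0.1, size=200)

# Standardize before SGD so all features update at comparable rates.
model = make_pipeline(StandardScaler(),
                      SGDRegressor(max_iter=2000, tol=1e-4, random_state=0))
model.fit(X, y)
r2 = model.score(X, y)
```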

TweedieRegressor { .api }

from sklearn.linear_model import TweedieRegressor

TweedieRegressor(
    power: float = 0.0,
    alpha: float = 1.0,
    fit_intercept: bool = True,
    link: str = "auto",
    solver: str = "lbfgs",
    max_iter: int = 100,
    tol: float = 0.0001,
    warm_start: bool = False,
    verbose: int = 0
)

Generalized Linear Model with a Tweedie distribution.

PoissonRegressor { .api }

from sklearn.linear_model import PoissonRegressor

PoissonRegressor(
    alpha: float = 1.0,
    fit_intercept: bool = True,
    solver: str = "lbfgs",
    max_iter: int = 100,
    tol: float = 0.0001,
    warm_start: bool = False,
    verbose: int = 0
)

Generalized Linear Model with a Poisson distribution.

GammaRegressor { .api }

from sklearn.linear_model import GammaRegressor

GammaRegressor(
    alpha: float = 1.0,
    fit_intercept: bool = True,
    solver: str = "lbfgs",
    max_iter: int = 100,
    tol: float = 0.0001,
    warm_start: bool = False,
    verbose: int = 0
)

Generalized Linear Model with a Gamma distribution.

QuantileRegressor { .api }

from sklearn.linear_model import QuantileRegressor

QuantileRegressor(
    quantile: float = 0.5,
    alpha: float = 1.0,
    fit_intercept: bool = True,
    solver: str = "highs",
    solver_options: dict | None = None
)

Linear regression model that predicts conditional quantiles.

Classification

LogisticRegression { .api }

from sklearn.linear_model import LogisticRegression

LogisticRegression(
    penalty: str | None = "l2",
    dual: bool = False,
    tol: float = 0.0001,
    C: float = 1.0,
    fit_intercept: bool = True,
    intercept_scaling: float = 1,
    class_weight: dict | str | None = None,
    random_state: int | RandomState | None = None,
    solver: str = "lbfgs",
    max_iter: int = 100,
    multi_class: str = "auto",
    verbose: int = 0,
    warm_start: bool = False,
    n_jobs: int | None = None,
    l1_ratio: float | None = None
)

Logistic Regression (aka logit, MaxEnt) classifier.
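
A minimal sketch on a bundled dataset: multiclass fitting works out of the box, and `predict_proba` returns per-class probabilities that sum to one:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

acc = clf.score(X, y)
proba = clf.predict_proba(X[:1])  # shape (1, 3), rows sum to 1
```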

LogisticRegressionCV { .api }

from sklearn.linear_model import LogisticRegressionCV

LogisticRegressionCV(
    Cs: int | ArrayLike = 10,
    fit_intercept: bool = True,
    cv: int | BaseCrossValidator | Iterable | None = None,
    dual: bool = False,
    penalty: str = "l2",
    scoring: str | Callable | None = None,
    solver: str = "lbfgs",
    tol: float = 0.0001,
    max_iter: int = 100,
    class_weight: dict | str | None = None,
    n_jobs: int | None = None,
    verbose: int = 0,
    refit: bool = True,
    intercept_scaling: float = 1.0,
    multi_class: str = "auto",
    random_state: int | RandomState | None = None,
    l1_ratios: ArrayLike | None = None
)

Logistic Regression CV (aka logit, MaxEnt) classifier.

RidgeClassifier { .api }

from sklearn.linear_model import RidgeClassifier

RidgeClassifier(
    alpha: float = 1.0,
    fit_intercept: bool = True,
    copy_X: bool = True,
    max_iter: int | None = None,
    tol: float = 0.0001,
    class_weight: dict | str | None = None,
    solver: str = "auto",
    positive: bool = False,
    random_state: int | RandomState | None = None
)

Classifier using Ridge regression.

RidgeClassifierCV { .api }

from sklearn.linear_model import RidgeClassifierCV

RidgeClassifierCV(
    alphas: ArrayLike = (0.1, 1.0, 10.0),
    fit_intercept: bool = True,
    scoring: str | Callable | None = None,
    cv: int | BaseCrossValidator | Iterable | None = None,
    class_weight: dict | str | None = None,
    store_cv_values: bool = False
)

Ridge classifier with built-in cross-validation.

SGDClassifier { .api }

from sklearn.linear_model import SGDClassifier

SGDClassifier(
    loss: str = "hinge",
    penalty: str = "l2",
    alpha: float = 0.0001,
    l1_ratio: float = 0.15,
    fit_intercept: bool = True,
    max_iter: int = 1000,
    tol: float = 0.001,
    shuffle: bool = True,
    verbose: int = 0,
    epsilon: float = 0.1,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    learning_rate: str = "optimal",
    eta0: float = 0.0,
    power_t: float = 0.5,
    early_stopping: bool = False,
    validation_fraction: float = 0.1,
    n_iter_no_change: int = 5,
    class_weight: dict | str | None = None,
    warm_start: bool = False,
    average: bool | int = False
)

Linear classifiers (SVM, logistic regression, etc.) with SGD training.

SGDOneClassSVM { .api }

from sklearn.linear_model import SGDOneClassSVM

SGDOneClassSVM(
    nu: float = 0.5,
    fit_intercept: bool = True,
    max_iter: int = 1000,
    tol: float = 0.001,
    shuffle: bool = True,
    verbose: int = 0,
    random_state: int | RandomState | None = None,
    learning_rate: str = "optimal",
    eta0: float = 0.0,
    power_t: float = 0.5,
    warm_start: bool = False,
    average: bool | int = False
)

Solves linear One-Class SVM using Stochastic Gradient Descent.

Perceptron { .api }

from sklearn.linear_model import Perceptron

Perceptron(
    penalty: str | None = None,
    alpha: float = 0.0001,
    l1_ratio: float = 0.15,
    fit_intercept: bool = True,
    max_iter: int = 1000,
    tol: float = 0.001,
    shuffle: bool = True,
    verbose: int = 0,
    eta0: float = 1.0,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    early_stopping: bool = False,
    validation_fraction: float = 0.1,
    n_iter_no_change: int = 5,
    class_weight: dict | str | None = None,
    warm_start: bool = False
)

Perceptron classifier.

PassiveAggressiveClassifier { .api }

from sklearn.linear_model import PassiveAggressiveClassifier

PassiveAggressiveClassifier(
    C: float = 1.0,
    fit_intercept: bool = True,
    max_iter: int = 1000,
    tol: float = 0.001,
    early_stopping: bool = False,
    validation_fraction: float = 0.1,
    n_iter_no_change: int = 5,
    shuffle: bool = True,
    verbose: int = 0,
    loss: str = "hinge",
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    warm_start: bool = False,
    class_weight: dict | str | None = None,
    average: bool | int = False
)

Passive Aggressive Classifier.

Linear Model Functions

ridge_regression { .api }

from sklearn.linear_model import ridge_regression

ridge_regression(
    X: ArrayLike,
    y: ArrayLike,
    alpha: float | ArrayLike,
    sample_weight: ArrayLike | None = None,
    solver: str = "auto",
    max_iter: int | None = None,
    tol: float = 0.0001,
    verbose: int = 0,
    positive: bool = False,
    random_state: int | RandomState | None = None,
    return_n_iter: bool = False,
    return_intercept: bool = False,
    check_input: bool = True
) -> ArrayLike | tuple[ArrayLike, int] | tuple[ArrayLike, ArrayLike] | tuple[ArrayLike, int, ArrayLike]

Solve the ridge equation by the method of normal equations.

lasso_path { .api }

from sklearn.linear_model import lasso_path

lasso_path(
    X: ArrayLike,
    y: ArrayLike,
    eps: float = 0.001,
    n_alphas: int = 100,
    alphas: ArrayLike | None = None,
    precompute: bool | str | ArrayLike = "auto",
    Xy: ArrayLike | None = None,
    copy_X: bool = True,
    coef_init: ArrayLike | None = None,
    verbose: bool | int = False,
    return_n_iter: bool = False,
    positive: bool = False,
    **params
) -> tuple[ArrayLike, ArrayLike, ArrayLike] | tuple[ArrayLike, ArrayLike, ArrayLike, ArrayLike]

Compute Lasso path with coordinate descent.
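
A sketch of the path function's output shapes (generated data): alphas come back in decreasing order, coefficients are all zero at the largest alpha, and more features enter as alpha shrinks:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

X, y = make_regression(n_samples=100, n_features=8, n_informative=3,
                       noise=1.0, random_state=0)

# coefs has shape (n_features, n_alphas), one column per alpha.
alphas, coefs, dual_gaps = lasso_path(X, y, n_alphas=50)
```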

lars_path { .api }

from sklearn.linear_model import lars_path

lars_path(
    X: ArrayLike,
    y: ArrayLike,
    Xy: ArrayLike | None = None,
    Gram: ArrayLike | None = None,
    max_iter: int = 500,
    alpha_min: float = 0,
    method: str = "lar",
    copy_X: bool = True,
    eps: float = ...,
    copy_Gram: bool = True,
    verbose: int = 0,
    return_path: bool = True,
    return_n_iter: bool = False,
    positive: bool = False
) -> tuple[ArrayLike, ArrayLike] | tuple[ArrayLike, ArrayLike, ArrayLike] | tuple[ArrayLike, ArrayLike, int] | tuple[ArrayLike, ArrayLike, ArrayLike, int]

Compute Least Angle Regression or Lasso path using LARS algorithm.

lars_path_gram { .api }

from sklearn.linear_model import lars_path_gram

lars_path_gram(
    Xy: ArrayLike,
    Gram: ArrayLike,
    n_samples: int,
    max_iter: int = 500,
    alpha_min: float = 0,
    method: str = "lar",
    copy_X: bool = True,
    eps: float = ...,
    copy_Gram: bool = True,
    verbose: int = 0,
    return_path: bool = True,
    return_n_iter: bool = False,
    positive: bool = False
) -> tuple[ArrayLike, ArrayLike] | tuple[ArrayLike, ArrayLike, ArrayLike] | tuple[ArrayLike, ArrayLike, int] | tuple[ArrayLike, ArrayLike, ArrayLike, int]

lars_path in the sufficient stats mode.

enet_path { .api }

from sklearn.linear_model import enet_path

enet_path(
    X: ArrayLike,
    y: ArrayLike,
    l1_ratio: float = 0.5,
    eps: float = 0.001,
    n_alphas: int = 100,
    alphas: ArrayLike | None = None,
    precompute: bool | str | ArrayLike = "auto",
    Xy: ArrayLike | None = None,
    copy_X: bool = True,
    coef_init: ArrayLike | None = None,
    verbose: bool | int = False,
    return_n_iter: bool = False,
    positive: bool = False,
    check_input: bool = True,
    **params
) -> tuple[ArrayLike, ArrayLike, ArrayLike] | tuple[ArrayLike, ArrayLike, ArrayLike, ArrayLike]

Compute elastic net path with coordinate descent.

orthogonal_mp { .api }

from sklearn.linear_model import orthogonal_mp

orthogonal_mp(
    X: ArrayLike,
    y: ArrayLike,
    n_nonzero_coefs: int | None = None,
    tol: float | None = None,
    precompute: bool = False,
    copy_X: bool = True,
    return_path: bool = False,
    return_n_iter: bool = False
) -> ArrayLike | tuple[ArrayLike, ArrayLike] | tuple[ArrayLike, int] | tuple[ArrayLike, ArrayLike, int]

Orthogonal Matching Pursuit (OMP).

orthogonal_mp_gram { .api }

from sklearn.linear_model import orthogonal_mp_gram

orthogonal_mp_gram(
    Gram: ArrayLike,
    Xy: ArrayLike,
    n_nonzero_coefs: int | None = None,
    tol: float | None = None,
    norms_squared: ArrayLike | None = None,
    copy_Gram: bool = True,
    copy_Xy: bool = True,
    return_path: bool = False,
    return_n_iter: bool = False
) -> ArrayLike | tuple[ArrayLike, ArrayLike] | tuple[ArrayLike, int] | tuple[ArrayLike, ArrayLike, int]

Gram Orthogonal Matching Pursuit (OMP).

Support Vector Machines

Classes

SVC { .api }

from sklearn.svm import SVC

SVC(
    C: float = 1.0,
    kernel: str | Callable = "rbf",
    degree: int = 3,
    gamma: str | float = "scale",
    coef0: float = 0.0,
    shrinking: bool = True,
    probability: bool = False,
    tol: float = 0.001,
    cache_size: float = 200,
    class_weight: dict | str | None = None,
    verbose: bool = False,
    max_iter: int = -1,
    decision_function_shape: str = "ovr",
    break_ties: bool = False,
    random_state: int | RandomState | None = None
)

C-Support Vector Classification.
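
A minimal sketch on a nonlinearly separable toy problem, where the default RBF kernel earns its keep:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.15, random_state=0)

# The RBF kernel handles the curved class boundary a linear model cannot.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
acc = clf.score(X, y)
```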

NuSVC { .api }

from sklearn.svm import NuSVC

NuSVC(
    nu: float = 0.5,
    kernel: str | Callable = "rbf",
    degree: int = 3,
    gamma: str | float = "scale",
    coef0: float = 0.0,
    shrinking: bool = True,
    probability: bool = False,
    tol: float = 0.001,
    cache_size: float = 200,
    class_weight: dict | str | None = None,
    verbose: bool = False,
    max_iter: int = -1,
    decision_function_shape: str = "ovr",
    break_ties: bool = False,
    random_state: int | RandomState | None = None
)

Nu-Support Vector Classification.

LinearSVC { .api }

from sklearn.svm import LinearSVC

LinearSVC(
    penalty: str = "l2",
    loss: str = "squared_hinge",
    dual: bool | str = "auto",
    tol: float = 0.0001,
    C: float = 1.0,
    multi_class: str = "ovr",
    fit_intercept: bool = True,
    intercept_scaling: float = 1,
    class_weight: dict | str | None = None,
    verbose: int = 0,
    random_state: int | RandomState | None = None,
    max_iter: int = 1000
)

Linear Support Vector Classification.

SVR { .api }

from sklearn.svm import SVR

SVR(
    kernel: str | Callable = "rbf",
    degree: int = 3,
    gamma: str | float = "scale",
    coef0: float = 0.0,
    tol: float = 0.001,
    C: float = 1.0,
    epsilon: float = 0.1,
    shrinking: bool = True,
    cache_size: float = 200,
    verbose: bool = False,
    max_iter: int = -1
)

Epsilon-Support Vector Regression.

NuSVR { .api }

from sklearn.svm import NuSVR

NuSVR(
    nu: float = 0.5,
    C: float = 1.0,
    kernel: str | Callable = "rbf",
    degree: int = 3,
    gamma: str | float = "scale",
    coef0: float = 0.0,
    shrinking: bool = True,
    tol: float = 0.001,
    cache_size: float = 200,
    verbose: bool = False,
    max_iter: int = -1
)

Nu Support Vector Regression.

LinearSVR { .api }

from sklearn.svm import LinearSVR

LinearSVR(
    epsilon: float = 0.0,
    tol: float = 0.0001,
    C: float = 1.0,
    loss: str = "epsilon_insensitive",
    fit_intercept: bool = True,
    intercept_scaling: float = 1.0,
    dual: bool | str = "auto",
    verbose: int = 0,
    random_state: int | RandomState | None = None,
    max_iter: int = 1000
)

Linear Support Vector Regression.

OneClassSVM { .api }

from sklearn.svm import OneClassSVM

OneClassSVM(
    kernel: str | Callable = "rbf",
    degree: int = 3,
    gamma: str | float = "scale",
    coef0: float = 0.0,
    tol: float = 0.001,
    nu: float = 0.5,
    shrinking: bool = True,
    cache_size: float = 200,
    verbose: bool = False,
    max_iter: int = -1
)

Unsupervised Outlier Detection.

Functions

l1_min_c { .api }

from sklearn.svm import l1_min_c

l1_min_c(
    X: ArrayLike,
    y: ArrayLike,
    loss: str = "squared_hinge",
    fit_intercept: bool = True,
    intercept_scaling: float = 1.0
) -> float

Return the lowest bound for C.

Decision Trees

DecisionTreeClassifier { .api }

from sklearn.tree import DecisionTreeClassifier

DecisionTreeClassifier(
    criterion: str = "gini",
    splitter: str = "best",
    max_depth: int | None = None,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_features: int | float | str | None = None,
    random_state: int | RandomState | None = None,
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    class_weight: dict | list[dict] | str | None = None,
    ccp_alpha: float = 0.0,
    monotonic_cst: ArrayLike | None = None
)

A decision tree classifier.
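
A minimal sketch on a bundled dataset: capping `max_depth` is the simplest way to regularize a tree, and the fitted depth respects the cap:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

acc = tree.score(X, y)
depth = tree.get_depth()  # actual depth, never exceeds max_depth
```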

DecisionTreeRegressor { .api }

from sklearn.tree import DecisionTreeRegressor

DecisionTreeRegressor(
    criterion: str = "squared_error",
    splitter: str = "best",
    max_depth: int | None = None,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_features: int | float | str | None = None,
    random_state: int | RandomState | None = None,
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    ccp_alpha: float = 0.0,
    monotonic_cst: ArrayLike | None = None
)

A decision tree regressor.

ExtraTreeClassifier { .api }

from sklearn.tree import ExtraTreeClassifier

ExtraTreeClassifier(
    criterion: str = "gini",
    splitter: str = "random",
    max_depth: int | None = None,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_features: int | float | str | None = "sqrt",
    random_state: int | RandomState | None = None,
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    class_weight: dict | list[dict] | str | None = None,
    ccp_alpha: float = 0.0
)

An extremely randomized tree classifier.

ExtraTreeRegressor { .api }

from sklearn.tree import ExtraTreeRegressor

ExtraTreeRegressor(
    criterion: str = "squared_error",
    splitter: str = "random",
    max_depth: int | None = None,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_features: int | float | str | None = 1.0,
    random_state: int | RandomState | None = None,
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    ccp_alpha: float = 0.0
)

An extremely randomized tree regressor.

BaseDecisionTree { .api }

from sklearn.tree import BaseDecisionTree

BaseDecisionTree(
    criterion: str,
    splitter: str,
    max_depth: int | None,
    min_samples_split: int | float,
    min_samples_leaf: int | float,
    min_weight_fraction_leaf: float,
    max_features: int | float | str | None,
    max_leaf_nodes: int | None,
    random_state: int | RandomState | None,
    min_impurity_decrease: float,
    class_weight: dict | list[dict] | str | None = None,
    ccp_alpha: float = 0.0
)

Base class for decision trees.

Decision Tree Functions

export_graphviz { .api }

from sklearn.tree import export_graphviz

export_graphviz(
    decision_tree: BaseDecisionTree,
    out_file: str | None = None,
    max_depth: int | None = None,
    feature_names: ArrayLike | None = None,
    class_names: ArrayLike | bool | None = None,
    label: str = "all",
    filled: bool = False,
    leaves_parallel: bool = False,
    impurity: bool = True,
    node_ids: bool = False,
    proportion: bool = False,
    rotate: bool = False,
    rounded: bool = False,
    special_characters: bool = False,
    precision: int = 3,
    fontname: str = "helvetica"
) -> str | None

Export a decision tree in DOT format.

export_text { .api }

from sklearn.tree import export_text

export_text(
    decision_tree: BaseDecisionTree,
    feature_names: ArrayLike | None = None,
    max_depth: int | None = 10,
    spacing: int = 3,
    decimals: int = 2,
    show_weights: bool = False
) -> str

Build a text report showing the rules of a decision tree.
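
A sketch of generating the text report for a small fitted tree; passing `feature_names` makes the rules readable:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Each line shows a split condition or a leaf's predicted class.
rules = export_text(tree, feature_names=list(iris.feature_names))
```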

plot_tree { .api }

from sklearn.tree import plot_tree

plot_tree(
    decision_tree: BaseDecisionTree,
    max_depth: int | None = None,
    feature_names: ArrayLike | None = None,
    class_names: ArrayLike | None = None,
    label: str = "all",
    filled: bool = False,
    impurity: bool = True,
    node_ids: bool = False,
    proportion: bool = False,
    rotate: bool = False,
    rounded: bool = False,
    precision: int = 3,
    ax: Axes | None = None,
    fontsize: int | None = None
) -> list[Annotation]

Plot a decision tree.

Ensemble Methods

RandomForestClassifier { .api }

from sklearn.ensemble import RandomForestClassifier

RandomForestClassifier(
    n_estimators: int = 100,
    criterion: str = "gini",
    max_depth: int | None = None,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_features: int | float | str | None = "sqrt",
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    bootstrap: bool = True,
    oob_score: bool = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    verbose: int = 0,
    warm_start: bool = False,
    class_weight: dict | list[dict] | str | None = None,
    ccp_alpha: float = 0.0,
    max_samples: int | float | None = None,
    monotonic_cst: ArrayLike | None = None
)

A random forest classifier.
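
A sketch tying together a few forest-specific features (generated data, split sizes illustrative): the out-of-bag score gives a built-in generalization estimate, and `feature_importances_` is normalized to sum to one:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                            random_state=0).fit(X_tr, y_tr)
test_acc = rf.score(X_te, y_te)
importances = rf.feature_importances_  # sums to 1 across features
```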

RandomForestRegressor { .api }

from sklearn.ensemble import RandomForestRegressor

RandomForestRegressor(
    n_estimators: int = 100,
    criterion: str = "squared_error",
    max_depth: int | None = None,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_features: int | float | str | None = 1.0,
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    bootstrap: bool = True,
    oob_score: bool = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    verbose: int = 0,
    warm_start: bool = False,
    ccp_alpha: float = 0.0,
    max_samples: int | float | None = None,
    monotonic_cst: ArrayLike | None = None
)

A random forest regressor.

ExtraTreesClassifier { .api }

from sklearn.ensemble import ExtraTreesClassifier

ExtraTreesClassifier(
    n_estimators: int = 100,
    criterion: str = "gini",
    max_depth: int | None = None,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_features: int | float | str | None = "sqrt",
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    bootstrap: bool = False,
    oob_score: bool = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    verbose: int = 0,
    warm_start: bool = False,
    class_weight: dict | list[dict] | str | None = None,
    ccp_alpha: float = 0.0,
    max_samples: int | float | None = None,
    monotonic_cst: ArrayLike | None = None
)

An extra-trees classifier.

ExtraTreesRegressor { .api }

from sklearn.ensemble import ExtraTreesRegressor

ExtraTreesRegressor(
    n_estimators: int = 100,
    criterion: str = "squared_error",
    max_depth: int | None = None,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_features: int | float | str | None = 1.0,
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    bootstrap: bool = False,
    oob_score: bool = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    verbose: int = 0,
    warm_start: bool = False,
    ccp_alpha: float = 0.0,
    max_samples: int | float | None = None,
    monotonic_cst: ArrayLike | None = None
)

An extra-trees regressor.

GradientBoostingClassifier { .api }

from sklearn.ensemble import GradientBoostingClassifier

GradientBoostingClassifier(
    loss: str = "log_loss",
    learning_rate: float = 0.1,
    n_estimators: int = 100,
    subsample: float = 1.0,
    criterion: str = "friedman_mse",
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_depth: int = 3,
    min_impurity_decrease: float = 0.0,
    init: BaseClassifier | str | None = None,
    random_state: int | RandomState | None = None,
    max_features: int | float | str | None = None,
    alpha: float = 0.9,
    verbose: int = 0,
    max_leaf_nodes: int | None = None,
    warm_start: bool = False,
    validation_fraction: float = 0.1,
    n_iter_no_change: int | None = None,
    tol: float = 0.0001,
    ccp_alpha: float = 0.0
)

Gradient Boosting for classification.

GradientBoostingRegressor { .api }

from sklearn.ensemble import GradientBoostingRegressor

GradientBoostingRegressor(
    loss: str = "squared_error",
    learning_rate: float = 0.1,
    n_estimators: int = 100,
    subsample: float = 1.0,
    criterion: str = "friedman_mse",
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_depth: int = 3,
    min_impurity_decrease: float = 0.0,
    init: BaseRegressor | str | None = None,
    random_state: int | RandomState | None = None,
    max_features: int | float | str | None = None,
    alpha: float = 0.9,
    verbose: int = 0,
    max_leaf_nodes: int | None = None,
    warm_start: bool = False,
    validation_fraction: float = 0.1,
    n_iter_no_change: int | None = None,
    tol: float = 0.0001,
    ccp_alpha: float = 0.0
)

Gradient Boosting for regression.

HistGradientBoostingClassifier { .api }

from sklearn.ensemble import HistGradientBoostingClassifier

HistGradientBoostingClassifier(
    loss: str = "log_loss",
    learning_rate: float = 0.1,
    max_iter: int = 100,
    max_leaf_nodes: int = 31,
    max_depth: int | None = None,
    min_samples_leaf: int = 20,
    l2_regularization: float = 0.0,
    max_features: float = 1.0,
    max_bins: int = 255,
    categorical_features: ArrayLike | str | None = None,
    monotonic_cst: ArrayLike | dict | None = None,
    interaction_cst: ArrayLike | str | None = None,
    warm_start: bool = False,
    early_stopping: str | bool = "auto",
    scoring: str | Callable | None = "loss",
    validation_fraction: int | float | None = 0.1,
    n_iter_no_change: int = 10,
    tol: float = 1e-07,
    verbose: int = 0,
    random_state: int | RandomState | None = None,
    class_weight: dict | str | None = None
)

Histogram-based Gradient Boosting Classification Tree.

HistGradientBoostingRegressor { .api }

from sklearn.ensemble import HistGradientBoostingRegressor

HistGradientBoostingRegressor(
    loss: str = "squared_error",
    quantile: float | None = None,
    learning_rate: float = 0.1,
    max_iter: int = 100,
    max_leaf_nodes: int = 31,
    max_depth: int | None = None,
    min_samples_leaf: int = 20,
    l2_regularization: float = 0.0,
    max_features: float = 1.0,
    max_bins: int = 255,
    categorical_features: ArrayLike | str | None = None,
    monotonic_cst: ArrayLike | dict | None = None,
    interaction_cst: ArrayLike | str | None = None,
    warm_start: bool = False,
    early_stopping: str | bool = "auto",
    scoring: str | Callable | None = "loss",
    validation_fraction: int | float | None = 0.1,
    n_iter_no_change: int = 10,
    tol: float = 1e-07,
    verbose: int = 0,
    random_state: int | RandomState | None = None
)

Histogram-based Gradient Boosting Regression Tree.

AdaBoostClassifier { .api }

from sklearn.ensemble import AdaBoostClassifier

AdaBoostClassifier(
    estimator: object | None = None,
    n_estimators: int = 50,
    learning_rate: float = 1.0,
    algorithm: str = "SAMME.R",
    random_state: int | RandomState | None = None,
    base_estimator: object = "deprecated"
)

An AdaBoost classifier.

AdaBoostRegressor { .api }

from sklearn.ensemble import AdaBoostRegressor

AdaBoostRegressor(
    estimator: object | None = None,
    n_estimators: int = 50,
    learning_rate: float = 1.0,
    loss: str = "linear",
    random_state: int | RandomState | None = None,
    base_estimator: object = "deprecated"
)

An AdaBoost regressor.

BaggingClassifier { .api }

from sklearn.ensemble import BaggingClassifier

BaggingClassifier(
    estimator: object | None = None,
    n_estimators: int = 10,
    max_samples: int | float = 1.0,
    max_features: int | float = 1.0,
    bootstrap: bool = True,
    bootstrap_features: bool = False,
    oob_score: bool = False,
    warm_start: bool = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    verbose: int = 0,
    base_estimator: object = "deprecated"
)

A Bagging classifier.

BaggingRegressor { .api }

from sklearn.ensemble import BaggingRegressor

BaggingRegressor(
    estimator: object | None = None,
    n_estimators: int = 10,
    max_samples: int | float = 1.0,
    max_features: int | float = 1.0,
    bootstrap: bool = True,
    bootstrap_features: bool = False,
    oob_score: bool = False,
    warm_start: bool = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    verbose: int = 0,
    base_estimator: object = "deprecated"
)

A Bagging regressor.

VotingClassifier { .api }

from sklearn.ensemble import VotingClassifier

VotingClassifier(
    estimators: list[tuple[str, BaseEstimator]],
    voting: str = "hard",
    weights: ArrayLike | None = None,
    n_jobs: int | None = None,
    flatten_transform: bool = True,
    verbose: bool = False
)

Soft Voting/Majority Rule classifier for unfitted estimators.
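
A sketch of soft voting over heterogeneous base models (estimator choices here are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
vote = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities rather than majority vote
)
vote.fit(X, y)
print(round(vote.score(X, y), 2))
```

`voting="soft"` requires every base estimator to implement `predict_proba`; `voting="hard"` only needs `predict`.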

VotingRegressor { .api }

from sklearn.ensemble import VotingRegressor

VotingRegressor(
    estimators: list[tuple[str, BaseEstimator]],
    weights: ArrayLike | None = None,
    n_jobs: int | None = None,
    verbose: bool = False
)

Prediction voting regressor for unfitted estimators.

StackingClassifier { .api }

from sklearn.ensemble import StackingClassifier

StackingClassifier(
    estimators: list[tuple[str, BaseEstimator]],
    final_estimator: BaseClassifier | None = None,
    cv: int | BaseCrossValidator | Iterable | str | None = None,
    stack_method: str = "auto",
    n_jobs: int | None = None,
    passthrough: bool = False,
    verbose: int = 0
)

Stack of estimators with a final classifier.

StackingRegressor { .api }

from sklearn.ensemble import StackingRegressor

StackingRegressor(
    estimators: list[tuple[str, BaseEstimator]],
    final_estimator: BaseRegressor | None = None,
    cv: int | BaseCrossValidator | Iterable | str | None = None,
    n_jobs: int | None = None,
    passthrough: bool = False,
    verbose: int = 0
)

Stack of estimators with a final regressor.
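
A minimal stacking sketch: base estimators produce cross-validated predictions that the final estimator learns to combine (data and models below are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=0)
stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("tree", DecisionTreeRegressor(random_state=0))],
    final_estimator=Ridge(),
    cv=5,  # out-of-fold predictions feed the final estimator
)
stack.fit(X, y)
print(round(stack.score(X, y), 2))
```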

IsolationForest { .api }

from sklearn.ensemble import IsolationForest

IsolationForest(
    n_estimators: int = 100,
    max_samples: int | float | str = "auto",
    contamination: float | str = "auto",
    max_features: int | float = 1.0,
    bootstrap: bool = False,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    verbose: int = 0,
    warm_start: bool = False
)

Isolation Forest Algorithm.
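
Unlike the other ensemble estimators here, `IsolationForest` is an unsupervised outlier detector: `predict` returns `-1` for anomalies and `+1` for inliers. A small sketch with obviously separated outliers (synthetic data):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X_inliers = rng.normal(size=(200, 2))          # bulk of the data near the origin
X_outliers = rng.uniform(low=6, high=8, size=(5, 2))  # far-away points

iso = IsolationForest(random_state=0).fit(np.vstack([X_inliers, X_outliers]))
pred = iso.predict(X_outliers)  # -1 flags anomalies, +1 inliers
print(pred)
```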

RandomTreesEmbedding { .api }

from sklearn.ensemble import RandomTreesEmbedding

RandomTreesEmbedding(
    n_estimators: int = 100,
    max_depth: int = 5,
    min_samples_split: int | float = 2,
    min_samples_leaf: int | float = 1,
    min_weight_fraction_leaf: float = 0.0,
    max_leaf_nodes: int | None = None,
    min_impurity_decrease: float = 0.0,
    sparse_output: bool = True,
    n_jobs: int | None = None,
    random_state: int | RandomState | None = None,
    verbose: int = 0,
    warm_start: bool = False
)

An ensemble of totally random trees.

BaseEnsemble { .api }

from sklearn.ensemble import BaseEnsemble

BaseEnsemble(
    estimator: object | None,
    n_estimators: int = 10,
    estimator_params: tuple = ()
)

Base class for all ensemble classes.

Naive Bayes

GaussianNB { .api }

from sklearn.naive_bayes import GaussianNB

GaussianNB(
    priors: ArrayLike | None = None,
    var_smoothing: float = 1e-09
)

Gaussian Naive Bayes (GaussianNB).

MultinomialNB { .api }

from sklearn.naive_bayes import MultinomialNB

MultinomialNB(
    alpha: float | ArrayLike = 1.0,
    force_alpha: bool = False,
    fit_prior: bool = True,
    class_prior: ArrayLike | None = None
)

Naive Bayes classifier for multinomial models.

ComplementNB { .api }

from sklearn.naive_bayes import ComplementNB

ComplementNB(
    alpha: float | ArrayLike = 1.0,
    force_alpha: bool = False,
    fit_prior: bool = True,
    class_prior: ArrayLike | None = None,
    norm: bool = False
)

The Complement Naive Bayes classifier described in Rennie et al. (2003).

BernoulliNB { .api }

from sklearn.naive_bayes import BernoulliNB

BernoulliNB(
    alpha: float | ArrayLike = 1.0,
    force_alpha: bool = False,
    binarize: float | None = 0.0,
    fit_prior: bool = True,
    class_prior: ArrayLike | None = None
)

Naive Bayes classifier for multivariate Bernoulli models.

CategoricalNB { .api }

from sklearn.naive_bayes import CategoricalNB

CategoricalNB(
    alpha: float | ArrayLike = 1.0,
    force_alpha: bool = False,
    fit_prior: bool = True,
    class_prior: ArrayLike | None = None,
    min_categories: int | ArrayLike | None = None
)

Naive Bayes classifier for categorical features.
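
All of the naive Bayes variants share the same fit/predict interface; a quick sketch with `GaussianNB` on a bundled dataset:

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
gnb = GaussianNB()
gnb.fit(X, y)

# predict_proba returns one posterior probability per class
proba = gnb.predict_proba(X[:1])
print(proba.shape)
```

The variant choice follows the feature distribution: `GaussianNB` for continuous features, `MultinomialNB`/`ComplementNB` for counts, `BernoulliNB` for binary indicators, and `CategoricalNB` for encoded categories.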

k-Nearest Neighbors

KNeighborsClassifier { .api }

from sklearn.neighbors import KNeighborsClassifier

KNeighborsClassifier(
    n_neighbors: int = 5,
    weights: str | Callable = "uniform",
    algorithm: str = "auto",
    leaf_size: int = 30,
    p: int | float = 2,
    metric: str | Callable = "minkowski",
    metric_params: dict | None = None,
    n_jobs: int | None = None
)

Classifier implementing the k-nearest neighbors vote.

KNeighborsRegressor { .api }

from sklearn.neighbors import KNeighborsRegressor

KNeighborsRegressor(
    n_neighbors: int = 5,
    weights: str | Callable = "uniform",
    algorithm: str = "auto",
    leaf_size: int = 30,
    p: int | float = 2,
    metric: str | Callable = "minkowski",
    metric_params: dict | None = None,
    n_jobs: int | None = None
)

Regression based on k-nearest neighbors.

RadiusNeighborsClassifier { .api }

from sklearn.neighbors import RadiusNeighborsClassifier

RadiusNeighborsClassifier(
    radius: float = 1.0,
    weights: str | Callable = "uniform",
    algorithm: str = "auto",
    leaf_size: int = 30,
    p: int | float = 2,
    metric: str | Callable = "minkowski",
    outlier_label: int | str | ArrayLike | None = None,
    metric_params: dict | None = None,
    n_jobs: int | None = None
)

Classifier implementing a vote among neighbors within a radius.

RadiusNeighborsRegressor { .api }

from sklearn.neighbors import RadiusNeighborsRegressor

RadiusNeighborsRegressor(
    radius: float = 1.0,
    weights: str | Callable = "uniform",
    algorithm: str = "auto",
    leaf_size: int = 30,
    p: int | float = 2,
    metric: str | Callable = "minkowski",
    metric_params: dict | None = None,
    n_jobs: int | None = None
)

Regression based on neighbors within a fixed radius.

NearestCentroid { .api }

from sklearn.neighbors import NearestCentroid

NearestCentroid(
    metric: str | Callable = "euclidean",
    shrink_threshold: float | None = None
)

Nearest centroid classifier.
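
A short sketch of the neighbors API (the `n_neighbors` and `weights` choices are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
# weights="distance" gives closer neighbors more influence in the vote
knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
scores = cross_val_score(knn, X, y, cv=5)
print(round(scores.mean(), 2))
```

Because neighbor methods rely on distances, feature scaling (e.g. `StandardScaler`) usually matters in practice.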

Neural Networks

MLPClassifier { .api }

from sklearn.neural_network import MLPClassifier

MLPClassifier(
    hidden_layer_sizes: tuple = (100,),
    activation: str = "relu",
    solver: str = "adam",
    alpha: float = 0.0001,
    batch_size: int | str = "auto",
    learning_rate: str = "constant",
    learning_rate_init: float = 0.001,
    power_t: float = 0.5,
    max_iter: int = 200,
    shuffle: bool = True,
    random_state: int | RandomState | None = None,
    tol: float = 0.0001,
    verbose: bool = False,
    warm_start: bool = False,
    momentum: float = 0.9,
    nesterovs_momentum: bool = True,
    early_stopping: bool = False,
    validation_fraction: float = 0.1,
    beta_1: float = 0.9,
    beta_2: float = 0.999,
    epsilon: float = 1e-08,
    n_iter_no_change: int = 10,
    max_fun: int = 15000
)

Multi-layer Perceptron classifier.

MLPRegressor { .api }

from sklearn.neural_network import MLPRegressor

MLPRegressor(
    hidden_layer_sizes: tuple = (100,),
    activation: str = "relu",
    solver: str = "adam",
    alpha: float = 0.0001,
    batch_size: int | str = "auto",
    learning_rate: str = "constant",
    learning_rate_init: float = 0.001,
    power_t: float = 0.5,
    max_iter: int = 200,
    shuffle: bool = True,
    random_state: int | RandomState | None = None,
    tol: float = 0.0001,
    verbose: bool = False,
    warm_start: bool = False,
    momentum: float = 0.9,
    nesterovs_momentum: bool = True,
    early_stopping: bool = False,
    validation_fraction: float = 0.1,
    beta_1: float = 0.9,
    beta_2: float = 0.999,
    epsilon: float = 1e-08,
    n_iter_no_change: int = 10,
    max_fun: int = 15000
)

Multi-layer Perceptron regressor.
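
MLPs are sensitive to feature scale, so they are typically combined with a scaler in a pipeline. A minimal sketch (layer size and iteration budget are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
mlp = make_pipeline(
    StandardScaler(),  # gradient-based solvers converge poorly on unscaled data
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=300, random_state=0),
)
mlp.fit(X, y)
print(round(mlp.score(X, y), 2))
```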

BernoulliRBM { .api }

from sklearn.neural_network import BernoulliRBM

BernoulliRBM(
    n_components: int = 256,
    learning_rate: float = 0.1,
    batch_size: int = 10,
    n_iter: int = 10,
    verbose: int = 0,
    random_state: int | RandomState | None = None
)

Bernoulli Restricted Boltzmann Machine (RBM).

Discriminant Analysis

LinearDiscriminantAnalysis { .api }

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

LinearDiscriminantAnalysis(
    solver: str = "svd",
    shrinkage: str | float | None = None,
    priors: ArrayLike | None = None,
    n_components: int | None = None,
    store_covariance: bool = False,
    tol: float = 0.0001,
    covariance_estimator: BaseEstimator | None = None
)

Linear Discriminant Analysis.

QuadraticDiscriminantAnalysis { .api }

from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

QuadraticDiscriminantAnalysis(
    priors: ArrayLike | None = None,
    reg_param: float = 0.0,
    store_covariance: bool = False,
    tol: float = 0.0001
)

Quadratic Discriminant Analysis.
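
Besides classification, `LinearDiscriminantAnalysis` doubles as a supervised dimensionality reducer via `transform`. A small sketch:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
# n_components is capped at min(n_classes - 1, n_features); 2 for iris
lda = LinearDiscriminantAnalysis(n_components=2)
X_2d = lda.fit_transform(X, y)
print(X_2d.shape)
```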

Gaussian Processes

GaussianProcessClassifier { .api }

from sklearn.gaussian_process import GaussianProcessClassifier

GaussianProcessClassifier(
    kernel: Kernel | None = None,
    optimizer: str | Callable | None = "fmin_l_bfgs_b",
    n_restarts_optimizer: int = 0,
    max_iter_predict: int = 100,
    warm_start: bool = False,
    copy_X_train: bool = True,
    random_state: int | RandomState | None = None,
    multi_class: str = "one_vs_rest",
    n_jobs: int | None = None
)

Gaussian process classification (GPC) based on Laplace approximation.

GaussianProcessRegressor { .api }

from sklearn.gaussian_process import GaussianProcessRegressor

GaussianProcessRegressor(
    kernel: Kernel | None = None,
    alpha: float | ArrayLike = 1e-10,
    optimizer: str | Callable | None = "fmin_l_bfgs_b",
    n_restarts_optimizer: int = 0,
    normalize_y: bool = False,
    copy_X_train: bool = True,
    n_targets: int | None = None,
    random_state: int | RandomState | None = None
)

Gaussian process regression (GPR).
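
A distinguishing feature of GPR is that `predict` can return a predictive standard deviation alongside the mean. A sketch on a 1-D toy function (kernel choice is illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 5, 30).reshape(-1, 1)
y = np.sin(X).ravel()

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), random_state=0)
gpr.fit(X, y)
mean, std = gpr.predict(X, return_std=True)  # predictive mean and uncertainty
print(mean.shape, std.shape)
```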

Kernel Ridge Regression

KernelRidge { .api }

from sklearn.kernel_ridge import KernelRidge

KernelRidge(
    alpha: float | ArrayLike = 1,
    kernel: str | Callable = "linear",
    gamma: float | None = None,
    degree: float = 3,
    coef0: float = 1,
    kernel_params: dict | None = None
)

Kernel ridge regression.
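
Kernel ridge combines ridge regularization with the kernel trick, fitting a non-linear function in closed form. A small sketch (the `alpha` and `gamma` values are illustrative):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(100, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=100)

kr = KernelRidge(alpha=0.1, kernel="rbf", gamma=1.0)
kr.fit(X, y)
print(round(kr.score(X, y), 2))
```

Unlike `SVR`, kernel ridge has a closed-form solution, which is typically faster to fit on medium-sized data but produces a dense (non-sparse) model.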

Isotonic Regression

IsotonicRegression { .api }

from sklearn.isotonic import IsotonicRegression

IsotonicRegression(
    y_min: float | None = None,
    y_max: float | None = None,
    increasing: bool | str = True,
    out_of_bounds: str = "nan"
)

Isotonic regression model.
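
Isotonic regression fits the best monotone step function to the data. A sketch showing that the fitted values come out non-decreasing:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

x = np.arange(10, dtype=float)
y = np.array([1.0, 2.0, 1.5, 3.0, 3.2, 5.0, 4.8, 6.0, 7.0, 7.5])

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)  # monotone least-squares fit
print(bool(np.all(np.diff(y_fit) >= 0)))
```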

Multiclass and Multioutput

OneVsRestClassifier { .api }

from sklearn.multiclass import OneVsRestClassifier

OneVsRestClassifier(
    estimator: BaseEstimator,
    n_jobs: int | None = None
)

One-vs-the-rest (OvR) multiclass strategy.

OneVsOneClassifier { .api }

from sklearn.multiclass import OneVsOneClassifier

OneVsOneClassifier(
    estimator: BaseEstimator,
    n_jobs: int | None = None
)

One-vs-one multiclass strategy.

OutputCodeClassifier { .api }

from sklearn.multiclass import OutputCodeClassifier

OutputCodeClassifier(
    estimator: BaseEstimator,
    code_size: float = 1.5,
    random_state: int | RandomState | None = None,
    n_jobs: int | None = None
)

(Error-Correcting) Output-Code multiclass strategy.

MultiOutputClassifier { .api }

from sklearn.multioutput import MultiOutputClassifier

MultiOutputClassifier(
    estimator: BaseEstimator,
    n_jobs: int | None = None
)

Multi target classification.

MultiOutputRegressor { .api }

from sklearn.multioutput import MultiOutputRegressor

MultiOutputRegressor(
    estimator: BaseEstimator,
    n_jobs: int | None = None
)

Multi target regression.
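
The multioutput wrappers fit one clone of the base estimator per target column. A minimal sketch with two targets (synthetic data, illustrative base estimator):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 4))
# Two linear targets derived from the same features
Y = np.column_stack([X @ rng.uniform(size=4), X @ rng.uniform(size=4)])

multi = MultiOutputRegressor(Ridge()).fit(X, Y)  # one Ridge per target
print(multi.predict(X[:3]).shape)
```

`ClassifierChain`/`RegressorChain` differ in that each model in the chain also sees the predictions of the models before it, capturing correlations between targets.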

ClassifierChain { .api }

from sklearn.multioutput import ClassifierChain

ClassifierChain(
    base_estimator: BaseEstimator,
    order: ArrayLike | str | None = None,
    cv: int | BaseCrossValidator | Iterable | None = None,
    random_state: int | RandomState | None = None
)

A multi-label model that arranges binary classifiers into a chain.

RegressorChain { .api }

from sklearn.multioutput import RegressorChain

RegressorChain(
    base_estimator: BaseEstimator,
    order: ArrayLike | str | None = None,
    cv: int | BaseCrossValidator | Iterable | None = None,
    random_state: int | RandomState | None = None
)

A multi-label model that arranges regressors into a chain.

Semi-Supervised Learning

LabelPropagation { .api }

from sklearn.semi_supervised import LabelPropagation

LabelPropagation(
    kernel: str | Callable = "rbf",
    gamma: float = 20,
    n_neighbors: int = 7,
    max_iter: int = 1000,
    tol: float = 0.001,
    n_jobs: int | None = None
)

Label Propagation classifier.

LabelSpreading { .api }

from sklearn.semi_supervised import LabelSpreading

LabelSpreading(
    kernel: str | Callable = "rbf",
    gamma: float = 20,
    n_neighbors: int = 7,
    alpha: float = 0.2,
    max_iter: int = 30,
    tol: float = 0.001,
    n_jobs: int | None = None
)

LabelSpreading model for semi-supervised learning.

SelfTrainingClassifier { .api }

from sklearn.semi_supervised import SelfTrainingClassifier

SelfTrainingClassifier(
    base_estimator: BaseEstimator,
    threshold: float = 0.75,
    criterion: str = "threshold",
    k_best: int = 10,
    max_iter: int | None = 10,
    verbose: bool = False
)

Self-training classifier.
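
In the semi-supervised API, unlabeled samples are marked with the label `-1`. A sketch of self-training around a probabilistic base classifier (the masking fraction and threshold are illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.5] = -1  # -1 marks unlabeled samples

base = SVC(probability=True, random_state=0)  # base must expose predict_proba
self_train = SelfTrainingClassifier(base, threshold=0.75).fit(X, y_partial)
print(round(self_train.score(X, y), 2))
```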

Dummy Estimators

DummyClassifier { .api }

from sklearn.dummy import DummyClassifier

DummyClassifier(
    strategy: str = "prior",
    random_state: int | RandomState | None = None,
    constant: int | str | ArrayLike | None = None
)

DummyClassifier makes predictions that ignore the input features.

DummyRegressor { .api }

from sklearn.dummy import DummyRegressor

DummyRegressor(
    strategy: str = "mean",
    constant: int | float | ArrayLike | None = None,
    quantile: float | None = None
)

DummyRegressor makes predictions that ignore the input features.
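
Dummy estimators provide sanity-check baselines: any real model should beat them. A quick sketch:

```python
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier

X, y = load_iris(return_X_y=True)
# Always predicts the most common class, ignoring the features entirely
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print(round(baseline.score(X, y), 2))
```

On a balanced three-class dataset such as iris, this baseline scores about 1/3.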

Calibration

CalibratedClassifierCV { .api }

from sklearn.calibration import CalibratedClassifierCV

CalibratedClassifierCV(
    estimator: BaseClassifier | None = None,
    method: str = "sigmoid",
    cv: int | BaseCrossValidator | Iterable | str | None = None,
    n_jobs: int | None = None,
    ensemble: bool = True,
    base_estimator: BaseClassifier = "deprecated"
)

Probability calibration with isotonic regression or logistic regression.
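
A common use is wrapping a margin classifier (one without `predict_proba`, such as `LinearSVC`) to obtain calibrated probability estimates. A minimal sketch (data and `cv` value are illustrative):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, random_state=0)
calibrated = CalibratedClassifierCV(LinearSVC(random_state=0), method="sigmoid", cv=3)
calibrated.fit(X, y)

proba = calibrated.predict_proba(X[:5])  # now available, rows sum to 1
print(proba.shape)
```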

Install with Tessl CLI

npx tessl i tessl/pypi-scikit-learn
