A comprehensive machine learning library providing supervised and unsupervised learning algorithms with consistent APIs and extensive tools for data preprocessing, model evaluation, and deployment.
Build a utility that tunes a classification estimator using the package's model selection tools and reports evaluation diagnostics.
Returns best_params, best_score, fold_scores (a list of per-fold metrics), cv_mean_score, learning_curve (train_sizes, train_scores, val_scores), and the refit estimator.
from typing import Any, Dict, Iterable, List, Optional, Sequence, Tuple

def run_model_selection(
    X: Sequence[Sequence[float]],
    y: Sequence[Any],
    *,
    estimator: Any,
    param_grid: Dict[str, Iterable[Any]],
    scoring: str,
    cv_folds: int = 5,
    group_labels: Optional[Sequence[Any]] = None,
    learning_curve_sizes: Optional[Sequence[float]] = None,
    random_state: Optional[int] = None,
) -> Dict[str, Any]:
    """Fit and evaluate the estimator using package-native model selection utilities.

    Returns keys: best_params, best_score, fold_scores, cv_mean_score,
    learning_curve, estimator.
    """
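One way the signature above could be filled in is a minimal sketch built on scikit-learn's GridSearchCV, StratifiedKFold/GroupKFold, and learning_curve. The choice of stratified folds as the ungrouped default, and of GroupKFold when group_labels is given, are assumptions of this sketch, not requirements stated by the spec.

```python
# Hedged sketch of run_model_selection using scikit-learn's model selection
# utilities. Fold-handling defaults here are assumptions, not spec mandates.
from typing import Any, Dict, Iterable, Optional, Sequence

import numpy as np
from sklearn.model_selection import (
    GridSearchCV, GroupKFold, StratifiedKFold, learning_curve,
)


def run_model_selection(
    X: Sequence[Sequence[float]],
    y: Sequence[Any],
    *,
    estimator: Any,
    param_grid: Dict[str, Iterable[Any]],
    scoring: str,
    cv_folds: int = 5,
    group_labels: Optional[Sequence[Any]] = None,
    learning_curve_sizes: Optional[Sequence[float]] = None,
    random_state: Optional[int] = None,
) -> Dict[str, Any]:
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)

    # Grouped data uses GroupKFold so samples sharing a label never span
    # train and validation; otherwise stratified folds with a fixed seed.
    if group_labels is not None:
        cv = GroupKFold(n_splits=cv_folds)
        groups = np.asarray(group_labels)
    else:
        cv = StratifiedKFold(n_splits=cv_folds, shuffle=True,
                             random_state=random_state)
        groups = None

    search = GridSearchCV(estimator, param_grid, scoring=scoring,
                          cv=cv, refit=True)
    search.fit(X, y, groups=groups)

    # Per-fold scores of the winning parameter set, read from cv_results_.
    best_idx = search.best_index_
    fold_scores = [
        float(search.cv_results_[f"split{i}_test_score"][best_idx])
        for i in range(cv_folds)
    ]

    result: Dict[str, Any] = {
        "best_params": search.best_params_,
        "best_score": float(search.best_score_),
        "fold_scores": fold_scores,
        "cv_mean_score": float(np.mean(fold_scores)),
        "learning_curve": None,
        "estimator": search.best_estimator_,
    }

    # Learning-curve diagnostics are computed only when sizes are requested.
    if learning_curve_sizes is not None:
        train_sizes, train_scores, val_scores = learning_curve(
            search.best_estimator_, X, y, groups=groups,
            train_sizes=list(learning_curve_sizes),
            cv=cv, scoring=scoring,
        )
        result["learning_curve"] = {
            "train_sizes": train_sizes.tolist(),
            "train_scores": train_scores.tolist(),
            "val_scores": val_scores.tolist(),
        }

    return result
```

With refit=True the returned estimator is already retrained on the full dataset with the best parameters, so callers can predict with it directly.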
Install with Tessl CLI
npx tessl i tessl/pypi-scikit-learndocs
evals: scenario-1 through scenario-10