
tessl/pypi-torchmetrics

PyTorch native metrics library providing 400+ rigorously tested metrics across classification, regression, audio, image, text, and other ML domains


docs/nominal.md

Nominal/Categorical Metrics

Statistical measures of association and agreement between categorical variables, useful for evaluating classification models and understanding relationships in categorical data.

Capabilities

Association Measures

Metrics that quantify the strength of association between two categorical variables.

class CramersV(Metric):
    def __init__(
        self,
        num_classes: int,
        bias_correction: bool = True,
        **kwargs
    ): ...

class TheilsU(Metric):
    def __init__(
        self,
        num_classes: int,
        **kwargs
    ): ...

class TschuprowsT(Metric):
    def __init__(
        self,
        num_classes: int,
        bias_correction: bool = True,
        **kwargs
    ): ...

class PearsonsContingencyCoefficient(Metric):
    def __init__(
        self,
        num_classes: int,
        **kwargs
    ): ...

Agreement Measures

Metrics for evaluating inter-rater agreement and consistency.

class FleissKappa(Metric):
    def __init__(
        self,
        mode: str = "counts",
        **kwargs
    ): ...

Usage Examples

import torch
from torchmetrics.nominal import CramersV, FleissKappa, TheilsU

# Association between two categorical variables
cramers_v = CramersV(num_classes=3)
theil_u = TheilsU(num_classes=3)

# Sample categorical data
preds = torch.randint(0, 3, (100,))  # Predicted categories
target = torch.randint(0, 3, (100,))  # True categories

# Compute association measures
cv_score = cramers_v(preds, target)
tu_score = theil_u(preds, target)

print(f"Cramer's V: {cv_score:.4f}")
print(f"Theil's U: {tu_score:.4f}")

# Inter-rater agreement
fleiss_kappa = FleissKappa(mode="counts")

# Rating counts: (subjects, categories)
# Each row represents how many raters assigned each category to a subject
ratings = torch.tensor([
    [0, 0, 0, 0, 14],  # Subject 1: all 14 raters chose category 5
    [0, 2, 6, 4, 2],   # Subject 2: mixed ratings
    [0, 0, 3, 5, 6],   # Subject 3: mixed ratings
])

kappa_score = fleiss_kappa(ratings)
print(f"Fleiss' Kappa: {kappa_score:.4f}")

Types

CategoricalData = Tensor  # Integer categorical labels
RatingCounts = Tensor     # Inter-rater agreement count matrix

Install with Tessl CLI

npx tessl i tessl/pypi-torchmetrics
