# tessl/pypi-kedro

Describes `pypi/kedro@1.1.x`.

```shell
tessl install tessl/pypi-kedro@1.1.0
```

Kedro helps you build production-ready data and analytics pipelines.

- **Agent Success** (agent success rate when using this tile): 98%
- **Improvement** (compared to baseline): 1.32x
- **Baseline** (agent success rate without this tile): 74%

`docs/reference/constants.md`

# Constants Reference

Constants and enumerations used throughout Kedro.

## Protocol Constants

From `kedro.utils` and `kedro.io.core`:

```python
HTTP_PROTOCOLS: tuple[str, ...] = ("http", "https")

CLOUD_PROTOCOLS: tuple[str, ...] = (
    "abfs",   # Azure Blob File System
    "abfss",  # Azure Blob File System (secure)
    "adl",    # Azure Data Lake
    "gcs",    # Google Cloud Storage
    "gdrive", # Google Drive
    "gs",     # Google Storage
    "oci",    # Oracle Cloud Infrastructure
    "oss",    # Alibaba Cloud OSS
    "s3",     # AWS S3
    "s3a",    # AWS S3 (Hadoop)
    "s3n",    # AWS S3 (legacy Hadoop)
)

PROTOCOL_DELIMITER: str = "://"
```
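As a hedged sketch of how these constants fit together (the `split_filepath` helper below is illustrative, not Kedro's actual parsing code), a filepath can be split on `PROTOCOL_DELIMITER` and the resulting protocol checked against `CLOUD_PROTOCOLS`:

```python
# Illustrative only: not Kedro's internal implementation.
CLOUD_PROTOCOLS = ("abfs", "abfss", "adl", "gcs", "gdrive", "gs",
                   "oci", "oss", "s3", "s3a", "s3n")
PROTOCOL_DELIMITER = "://"

def split_filepath(filepath: str) -> tuple[str, str]:
    """Split a filepath into (protocol, path); default to the local filesystem."""
    if PROTOCOL_DELIMITER in filepath:
        protocol, _, path = filepath.partition(PROTOCOL_DELIMITER)
        return protocol, path
    return "file", filepath

protocol, path = split_filepath("s3://my-bucket/data/cars.csv")
print(protocol in CLOUD_PROTOCOLS)  # True
```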

## Version Constants

```python
VERSION_FORMAT: str = "%Y-%m-%dT%H.%M.%S.%fZ"  # Timestamp format
VERSIONED_FLAG_KEY: str = "versioned"  # Key in config to enable versioning
VERSION_KEY: str = "version"  # Key for version specification
```
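`VERSION_FORMAT` is a standard `strftime`/`strptime` format string, so version timestamps can be generated and parsed with the `datetime` module. A small sketch (the fixed datetime is just an example value):

```python
from datetime import datetime, timezone

VERSION_FORMAT = "%Y-%m-%dT%H.%M.%S.%fZ"

# Format a UTC timestamp as a version string.
stamp = datetime(2024, 1, 2, 3, 4, 5, 123456, tzinfo=timezone.utc).strftime(VERSION_FORMAT)
print(stamp)  # 2024-01-02T03.04.05.123456Z

# The format round-trips: parse the version string back into a datetime.
parsed = datetime.strptime(stamp, VERSION_FORMAT)
```

Note that the format uses dots rather than colons as time separators, which keeps version strings safe for use in file and directory names.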

## Configuration Constants

```python
TYPE_KEY: str = "type"  # Key in dataset config to specify dataset class
CREDENTIALS_KEY: str = "credentials"  # Key in dataset config for credentials reference
```
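These keys index into a dataset's catalog entry once the YAML is loaded into a dict. A hedged sketch (the entry values below are illustrative, not from a real project):

```python
TYPE_KEY = "type"
CREDENTIALS_KEY = "credentials"

# A catalog entry as it might look after loading catalog.yml (illustrative values).
entry = {
    "type": "pandas.CSVDataset",
    "filepath": "s3://my-bucket/cars.csv",
    "credentials": "dev_s3",
}

dataset_class = entry[TYPE_KEY]               # which dataset class to instantiate
credentials_ref = entry.get(CREDENTIALS_KEY)  # name to look up in credentials config
print(dataset_class, credentials_ref)
```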

## Pipeline Constants

From `kedro.pipeline.transcoding`:

```python
TRANSCODING_SEPARATOR: str = "@"
```

The separator used in dataset names to specify transcoding formats. When a dataset name contains the transcoding separator, the part after the separator specifies the format to transcode to.

Example usage:

```python
# Use the same underlying data with different formats
node(load_data, "raw_data@pandas", "processed_data")     # Load as pandas DataFrame
node(spark_transform, "raw_data@spark", "spark_output")  # Load as Spark DataFrame
```

## Catalog Configuration Constants

From `kedro.io.catalog_config_resolver`:

```python
DEFAULT_RUNTIME_PATTERN: dict[str, dict[str, Any]] = {
    "{default}": {"type": "kedro.io.MemoryDataset"}
}
```

The default runtime pattern used by `CatalogConfigResolver` when no custom runtime patterns are provided. This pattern matches any dataset name not explicitly defined in the catalog and creates a `MemoryDataset` for it.
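A simplified sketch of that fallback behavior (the `resolve_config` function is illustrative and skips the real resolver's pattern-specificity logic; `{default}` simply matches any name):

```python
DEFAULT_RUNTIME_PATTERN = {"{default}": {"type": "kedro.io.MemoryDataset"}}

def resolve_config(name: str, catalog: dict,
                   runtime_pattern: dict = DEFAULT_RUNTIME_PATTERN) -> dict:
    """Return the explicit catalog entry if present; otherwise fall back to the runtime pattern."""
    if name in catalog:
        return catalog[name]
    # "{default}" matches any dataset name, so the fallback always applies.
    return next(iter(runtime_pattern.values()))

catalog = {"cars": {"type": "pandas.CSVDataset", "filepath": "cars.csv"}}
print(resolve_config("cars", catalog))          # explicit entry wins
print(resolve_config("intermediate", catalog))  # falls back to MemoryDataset
```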

## Config Loader Constants

From `kedro.config`:

```python
MERGING_IMPLEMENTATIONS: dict[MergeStrategies, str] = {
    MergeStrategies.SOFT: "_soft_merge",
    MergeStrategies.DESTRUCTIVE: "_destructive_merge"
}
```

Internal mapping of `MergeStrategies` enum values to their corresponding merge method names in `OmegaConfigLoader`. This is used to dispatch to the appropriate merge strategy implementation.
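To give a feel for the difference between the two strategies, here is a hedged sketch of their semantics on plain dicts (illustrative only; `OmegaConfigLoader` operates on OmegaConf objects and its actual merge logic may differ):

```python
def destructive_merge(base: dict, new: dict) -> dict:
    """Top-level keys from `new` replace those in `base` wholesale."""
    return {**base, **new}

def soft_merge(base: dict, new: dict) -> dict:
    """Nested mappings are merged recursively instead of replaced."""
    merged = dict(base)
    for key, value in new.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = soft_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"model": {"alpha": 0.1, "beta": 0.2}}
override = {"model": {"alpha": 0.5}}
print(destructive_merge(base, override))  # "beta" is lost
print(soft_merge(base, override))         # "beta" survives
```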

## Version Information

```python
__version__: str  # Package version "1.1.1"
```