tessl/pypi-prowler

Open source cloud security assessment tool for AWS, Azure, GCP, and Kubernetes with hundreds of compliance checks.

Check Metadata and Models

Standardized data models for representing security check metadata, severity levels, remediation information, and compliance mappings. These Pydantic models ensure consistent structure and validation across all security checks and findings in the Prowler ecosystem.

Capabilities

Check Metadata Model

Comprehensive metadata model for security checks containing all necessary information for execution, reporting, and compliance mapping.

import re
from enum import Enum
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, validator

# Severity, Remediation, Code and Recommendation are defined later in this document.

class CheckMetadata(BaseModel):
    """
    Model representing the metadata of a check.
    
    This Pydantic model standardizes how security checks are defined and provides
    the foundation for check execution, finding generation, and compliance
    reporting across all supported providers.
    
    Attributes:
    - Provider: str - The provider of the check (aws, azure, gcp, kubernetes, etc.)
    - CheckID: str - The unique ID of the check
    - CheckTitle: str - The human-readable title of the check
    - CheckType: list[str] - The type/categories of the check
    - CheckAliases: list[str] - Optional aliases for the check (defaults to empty list)
    - ServiceName: str - The name of the cloud service
    - SubServiceName: str - The name of the sub-service or component
    - ResourceIdTemplate: str - The template for the resource ID
    - Severity: Severity - The severity level of the check
    - ResourceType: str - The type of the resource being checked
    - Description: str - The description of the check
    - Risk: str - The risk associated with the check
    - RelatedUrl: str - The URL related to the check or documentation
    - Remediation: Remediation - The remediation steps for the check
    - Categories: list[str] - The categories of the check
    - DependsOn: list[str] - The dependencies of the check
    - RelatedTo: list[str] - The related checks
    - Notes: str - Additional notes for the check
    - Compliance: Optional[list] - The compliance information for the check (defaults to empty list)
    
    Validators:
    - valid_category(value): Validator function to validate the categories of the check
    - severity_to_lower(severity): Validator function to convert the severity to lowercase
    - valid_cli_command(remediation): Validator function to validate the CLI command is not a URL
    - valid_resource_type(resource_type): Validator function to validate the resource type is not empty
    """
    
    Provider: str
    CheckID: str
    CheckTitle: str
    CheckType: list[str]
    CheckAliases: list[str] = []
    ServiceName: str
    SubServiceName: str
    ResourceIdTemplate: str
    Severity: Severity
    ResourceType: str
    Description: str
    Risk: str
    RelatedUrl: str
    Remediation: Remediation
    Categories: list[str]
    DependsOn: list[str]
    RelatedTo: list[str]
    Notes: str
    # Compliance defaults to an empty list; it is populated later if compliance data is supplied
    Compliance: Optional[list[Any]] = []

    @validator("Categories", each_item=True, pre=True, always=True)
    def valid_category(value):
        """
        Validate category format - must be lowercase letters, numbers and hyphens only.
        
        Parameters:
        - value: Category string to validate
        
        Returns:
        str: Validated and normalized category value
        
        Raises:
        ValueError: If category format is invalid
        """
        if not isinstance(value, str):
            raise ValueError("Categories must be a list of strings")
        value_lower = value.lower()
        if not re.match("^[a-z0-9-]+$", value_lower):
            raise ValueError(
                f"Invalid category: {value}. Categories can only contain lowercase letters, numbers and hyphen '-'"
            )
        return value_lower

    @validator("Severity", pre=True, always=True)
    def severity_to_lower(severity):
        """
        Convert severity to lowercase for consistency.
        
        Parameters:
        - severity: Severity value to normalize
        
        Returns:
        str: Lowercase severity value
        """
        return severity.lower()

    @validator("Remediation")
    def valid_cli_command(remediation):
        """
        Validate that CLI remediation command is not a URL.
        
        Parameters:
        - remediation: Remediation object to validate
        
        Returns:
        Remediation: Validated remediation object
        
        Raises:
        ValueError: If CLI command is a URL
        """
        if remediation.Code and remediation.Code.CLI and re.match(
            r"^https?://", remediation.Code.CLI
        ):
            raise ValueError("CLI command cannot be a URL")
        return remediation

    @validator("ResourceType", pre=True, always=True)
    def valid_resource_type(resource_type):
        """
        Validate that resource type is not empty.
        
        Parameters:
        - resource_type: Resource type string to validate
        
        Returns:
        str: Validated resource type
        
        Raises:
        ValueError: If resource type is empty or invalid
        """
        if not resource_type or not isinstance(resource_type, str):
            raise ValueError("ResourceType must be a non-empty string")
        return resource_type
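The Categories validator enforces a strict slug format. A minimal standalone sketch of that rule (the helper name `normalize_category` is illustrative, not part of Prowler's API):

```python
import re

def normalize_category(value: str) -> str:
    # Lowercase first, then enforce the same pattern valid_category uses:
    # lowercase letters, digits, and hyphens only.
    value_lower = value.lower()
    if not re.match(r"^[a-z0-9-]+$", value_lower):
        raise ValueError(
            f"Invalid category: {value}. Categories can only contain "
            "lowercase letters, numbers and hyphen '-'"
        )
    return value_lower
```

For example, `normalize_category("Internet-Exposed")` normalizes to `"internet-exposed"`, while a value containing spaces or underscores raises `ValueError`.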

Severity Enumeration

Standardized severity levels for security findings aligned with industry standards.

class Severity(str, Enum):
    """
    Severity level enumeration for security findings.
    
    Subclassing str lets members compare equal to their plain-string values
    and pass through the severity_to_lower validator unchanged. Levels are
    aligned with industry security frameworks and vulnerability assessment
    standards.
    """
    
    critical = "critical"            # Immediate action required, severe security risk
    high = "high"                    # High priority, significant security risk
    medium = "medium"                # Medium priority, moderate security risk
    low = "low"                      # Low priority, minor security risk
    informational = "informational"  # Information only, no immediate risk
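Enum members can be looked up by value, which is what Pydantic does when coercing string input into the Severity field. A small standalone sketch (redefining the enum locally, as a str mixin, purely for illustration):

```python
from enum import Enum

class Severity(str, Enum):
    critical = "critical"
    high = "high"
    medium = "medium"
    low = "low"
    informational = "informational"

# Value lookup returns the matching member; this mirrors Pydantic's coercion
# of a lowercase string (e.g. the output of severity_to_lower) into the enum.
assert Severity("high") is Severity.high
assert Severity("medium").value == "medium"
```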

Remediation Models

Structured models for representing remediation information including code samples and recommendations.

class Code(BaseModel):
    """
    Model for remediation code in various formats.
    
    Provides code samples for fixing security issues using
    different infrastructure management approaches.
    
    Attributes:
    - NativeIaC: Optional[str] - Native infrastructure-as-code (CloudFormation, ARM, etc.)
    - Terraform: Optional[str] - Terraform configuration code
    - CLI: Optional[str] - Command-line interface commands
    - Other: Optional[str] - Other remediation code formats
    """
    
    NativeIaC: Optional[str] = None
    Terraform: Optional[str] = None
    CLI: Optional[str] = None
    Other: Optional[str] = None

class Recommendation(BaseModel):
    """
    Model for remediation recommendations and guidance.
    
    Provides textual guidance and reference URLs for
    understanding and implementing security fixes.
    
    Attributes:
    - Text: str - Detailed remediation guidance text
    - Url: Optional[str] - Reference URL for additional information
    """
    
    Text: str
    Url: Optional[str] = None

class Remediation(BaseModel):
    """
    Combined remediation information model.
    
    Contains both code samples and textual recommendations
    for comprehensive remediation guidance.
    
    Attributes:
    - Code: Optional[Code] - Code samples for various platforms
    - Recommendation: Recommendation - Textual guidance and references
    """
    
    Code: Optional[Code] = None
    Recommendation: Recommendation

Compliance Models

Models for representing compliance framework mappings and requirements.

class ComplianceBaseModel(BaseModel):
    """
    Base model for compliance framework mappings.
    
    Provides the foundation for mapping security checks to
    various compliance frameworks and regulatory requirements.
    
    Attributes:
    - Framework: str - Compliance framework name
    - Provider: str - Cloud provider for framework
    - Version: str - Framework version
    - Description: str - Framework description
    - Requirements: list[dict] - Specific compliance requirements
    """
    
    Framework: str
    Provider: str
    Version: str
    Description: str
    Requirements: List[Dict[str, Any]] = []

class Compliance(BaseModel):
    """
    Main compliance framework model.
    
    Comprehensive model for representing compliance frameworks
    with their associated checks and requirements.
    
    Attributes:
    - Framework: str - Framework identifier
    - Provider: str - Provider name
    - Version: str - Framework version
    - Description: str - Framework description
    - Requirements: list - List of compliance requirements
    - Checks: dict - Mapping of checks to requirements
    """
    
    Framework: str
    Provider: str
    Version: str
    Description: str
    Requirements: List[Dict[str, Any]]
    Checks: Dict[str, List[str]]

Custom Check Metadata

Functions for parsing and updating custom check metadata.

def parse_custom_checks_metadata_file(metadata_file: str) -> dict:
    """
    Parse custom checks metadata from file.
    
    Loads and validates custom check metadata from YAML or JSON
    files, ensuring compatibility with the CheckMetadata model.
    
    Parameters:
    - metadata_file: Path to metadata file (YAML or JSON)
    
    Returns:
    Dictionary containing parsed and validated metadata
    
    Raises:
    ProwlerException: On file parsing or validation errors
    """

def update_checks_metadata(
    checks_metadata: dict,
    custom_metadata: dict
) -> dict:
    """
    Update checks metadata with custom definitions.
    
    Merges custom check metadata with built-in metadata,
    allowing for customization and extension of check behavior.
    
    Parameters:
    - checks_metadata: Built-in checks metadata dictionary
    - custom_metadata: Custom metadata to merge
    
    Returns:
    Updated metadata dictionary with custom overrides applied
    
    Raises:
    ProwlerException: On metadata merge conflicts or validation errors
    """
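A custom metadata file typically overrides selected fields on a per-check basis. A hypothetical example (the `CustomChecksMetadata` layout shown here is an assumption about the file schema; consult the Prowler documentation for the authoritative format):

```yaml
# Hypothetical custom-checks-metadata.yaml raising a check's severity
CustomChecksMetadata:
  aws:
    Checks:
      iam_user_mfa_enabled:
        Severity: critical
```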

Usage Examples

Creating Check Metadata

from prowler.lib.check.models import (
    CheckMetadata,
    Severity,
    Remediation,
    Recommendation,
    Code
)

# Create remediation information
code = Code(
    CLI="aws iam put-user-policy --user-name <user> --policy-name MFARequired",
    Terraform="""
resource "aws_iam_user_policy" "mfa_required" {
  name = "MFARequired"
  user = var.user_name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Deny"
      Action = "*"
      Resource = "*"
      Condition = {
        BoolIfExists = {
          "aws:MultiFactorAuthPresent" = "false"
        }
      }
    }]
  })
}
"""
)

recommendation = Recommendation(
    Text="Enable MFA for all IAM users to enhance account security",
    Url="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html"
)

remediation = Remediation(
    Code=code,
    Recommendation=recommendation
)

# Create check metadata (every field without a default must be supplied,
# including DependsOn, RelatedTo and Notes)
check_metadata = CheckMetadata(
    Provider="aws",
    CheckID="iam_user_mfa_enabled",
    CheckTitle="Ensure MFA is enabled for all IAM users",
    CheckType=["Identity and Access Management"],
    ServiceName="iam",
    SubServiceName="user",
    ResourceIdTemplate="arn:aws:iam::account:user/user-name",
    Severity=Severity.high,
    ResourceType="AwsIamUser",
    Description="Checks if MFA is enabled for all IAM users",
    Risk="Users without MFA are vulnerable to credential compromise",
    RelatedUrl="",
    Remediation=remediation,
    Categories=["security", "iam"],
    DependsOn=[],
    RelatedTo=[],
    Notes="",
    Compliance=[]
)

Working with Severity Levels

from prowler.lib.check.models import Severity

# Check severity levels
critical_checks = []
high_priority_checks = []

for check in all_checks:
    if check.metadata.Severity == Severity.critical:
        critical_checks.append(check)
    elif check.metadata.Severity == Severity.high:
        high_priority_checks.append(check)

# Sort by severity
severity_order = {
    Severity.critical: 5,
    Severity.high: 4,
    Severity.medium: 3,
    Severity.low: 2,
    Severity.informational: 1
}

sorted_checks = sorted(
    all_checks,
    key=lambda c: severity_order[c.metadata.Severity],
    reverse=True
)

Custom Metadata Processing

from prowler.lib.check.custom_checks_metadata import (
    parse_custom_checks_metadata_file,
    update_checks_metadata
)

# Load custom metadata
custom_metadata = parse_custom_checks_metadata_file(
    '/path/to/custom-checks-metadata.yaml'
)

# Update built-in metadata with customizations
updated_metadata = update_checks_metadata(
    built_in_metadata,
    custom_metadata
)

# Apply updated metadata to checks
for check_id, metadata in updated_metadata.items():
    if check_id in active_checks:
        active_checks[check_id].metadata = CheckMetadata(**metadata)

Compliance Framework Integration

from prowler.lib.check.compliance_models import Compliance
from prowler.lib.check.compliance import update_checks_metadata_with_compliance

# Load compliance framework
cis_aws_framework = Compliance(
    Framework="CIS",
    Provider="aws",
    Version="1.5",
    Description="CIS Amazon Web Services Foundations Benchmark v1.5.0",
    Requirements=[
        {
            "Id": "1.1",
            "Description": "Maintain current contact details",
            "Checks": ["account_maintain_current_contact_details"]
        }
    ],
    Checks={
        "1.1": ["account_maintain_current_contact_details"],
        "1.4": ["iam_user_mfa_enabled"]
    }
)

# Update check metadata with compliance mapping
compliance_updated_metadata = update_checks_metadata_with_compliance(
    checks_metadata,
    [cis_aws_framework]
)
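The exact behavior of update_checks_metadata_with_compliance is Prowler-internal; the following is a hedged sketch of the general idea, merging a framework's Checks mapping into each check's Compliance list using plain dicts (`attach_compliance` and the entry shape are illustrative assumptions, not Prowler's API):

```python
def attach_compliance(checks_metadata: dict, frameworks: list) -> dict:
    # For every framework, walk its requirement-to-checks mapping and record
    # the (framework, version, requirement) triple on each referenced check.
    for framework in frameworks:
        for requirement_id, check_ids in framework["Checks"].items():
            for check_id in check_ids:
                metadata = checks_metadata.get(check_id)
                if metadata is not None:
                    metadata.setdefault("Compliance", []).append(
                        {
                            "Framework": framework["Framework"],
                            "Version": framework["Version"],
                            "Requirement": requirement_id,
                        }
                    )
    return checks_metadata
```

Checks referenced by a framework but absent from `checks_metadata` are simply skipped, so a framework can safely map checks that are not enabled in the current run.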
