An extensible cyclomatic complexity analyzer for many programming languages including C/C++, Java, JavaScript, Python, Ruby, Swift, Objective-C, and 20+ others. Lizard provides comprehensive code quality metrics including cyclomatic complexity, lines of code without comments (nloc), copy-paste detection (code clone/duplicate detection), and various other forms of static analysis.
Install from PyPI:

```
pip install lizard
```

Import the package:

```python
import lizard
```

For direct analysis functions:

```python
from lizard import analyze, analyze_files, main
```

For data classes:

```python
from lizard import FunctionInfo, FileInformation, FileAnalyzer
```

```python
import lizard

# Analyze source files in a directory
results = lizard.analyze(['src/'])
for file_info in results:
    print(f"File: {file_info.filename}")
    print(f"  NLOC: {file_info.nloc}")
    print(f"  CCN: {file_info.CCN}")
    for func in file_info.function_list:
        print(f"    Function {func.name}: CCN={func.cyclomatic_complexity}, NLOC={func.nloc}")

# Analyze specific files
files = ['app.py', 'utils.py']
results = lizard.analyze_files(files)

# Use the command-line interface programmatically
lizard.main(['-l', 'python', 'src/'])
```

Lizard's extensible architecture consists of:

- the `lizard.py` module with analysis functions and data models
- the `lizard_languages` package supporting 26+ programming languages
- the `lizard_ext` package with analysis extensions and output formatters

This design enables comprehensive code analysis across multiple languages while maintaining extensibility for custom metrics and output formats.
Primary analysis functions for processing source code and extracting complexity metrics from files and directories.
```python
def analyze(paths, exclude_pattern=None, threads=1, exts=None, lans=None):
    """
    Main analysis function that processes source files.

    Args:
        paths: List of file/directory paths to analyze
        exclude_pattern: List of patterns to exclude from analysis
        threads: Number of threads for parallel processing
        exts: List of extension objects for additional analysis
        lans: List of languages to analyze

    Returns:
        Iterator of FileInformation objects containing function statistics
    """

def analyze_files(files, threads=1, exts=None):
    """
    Analyzes specific files using FileAnalyzer.

    Args:
        files: List of file paths to analyze
        threads: Number of threads for parallel processing
        exts: List of extension objects for additional analysis

    Returns:
        Iterator of FileInformation objects
    """
```
```python
def main(argv=None):
    """
    Command-line entry point for Lizard.

    Args:
        argv: Optional command-line arguments list
    """
```

Core data structures representing analysis results including function information, file statistics, and complexity metrics.
```python
class FunctionInfo:
    """Represents function information with complexity metrics."""
    name: str
    cyclomatic_complexity: int
    nloc: int              # Lines of code without comments
    token_count: int
    parameter_count: int
    length: int            # Total lines including comments
    location: str          # File path and line number

class FileInformation:
    """Contains file-level statistics and function list."""
    filename: str
    nloc: int
    function_list: list    # List of FunctionInfo objects
    average_nloc: float
    average_token_count: float
    average_cyclomatic_complexity: float
    CCN: int               # Total cyclomatic complexity
    ND: int                # Total of maximum nesting depths across all functions
```

Language parsing capabilities supporting 26+ programming languages through the `lizard_languages` package.
```python
def languages():
    """
    Returns list of all available language reader classes.

    Returns:
        List of language reader classes for supported languages
    """

def get_reader_for(filename):
    """
    Returns appropriate language reader class for a filename.

    Args:
        filename: File path or name to match

    Returns:
        Language reader class or None if no match found
    """
```

Extension framework for custom analysis metrics and output formats through the `lizard_ext` package.
```python
def get_extensions(extension_names):
    """
    Loads and expands extension modules for analysis.

    Args:
        extension_names: List of extension names to load

    Returns:
        List of extension objects
    """
```

Available extensions include duplicate detection, nesting depth analysis, output formatters (HTML, CSV, XML, Checkstyle), and 20+ specialized analysis extensions.
Helper functions for file processing, filtering, and output formatting.
```python
def get_all_source_files(paths, exclude_patterns, lans):
    """
    Gets all source files from paths with exclusion patterns.

    Args:
        paths: List of paths to search
        exclude_patterns: List of exclusion patterns
        lans: List of languages to filter

    Returns:
        Iterator of filtered source file paths
    """

def warning_filter(option, module_infos):
    """
    Filters functions that exceed specified thresholds.

    Args:
        option: Configuration object with threshold settings
        module_infos: Iterator of file information objects

    Returns:
        Generator yielding functions exceeding thresholds
    """
```

```python
class FileAnalyzer:
    """Main file analysis engine with extension support."""

    def __call__(self, filename: str) -> FileInformation:
        """Analyze a single file and return file information."""

    def analyze_source_code(self, filename: str, code: str) -> FileInformation:
        """Analyze source code string and return file information."""

class Nesting:
    """Base class representing one level of nesting."""
    name_in_space: str

class Namespace(Nesting):
    """Represents namespace nesting level."""

    def __init__(self, name: str):
        """Initialize namespace with given name."""

# Constants
DEFAULT_CCN_THRESHOLD: int = 15
DEFAULT_WHITELIST: str = "whitelizard.txt"
DEFAULT_MAX_FUNC_LENGTH: int = 1000

analyze_file: FileAnalyzer
"""Pre-instantiated FileAnalyzer with default extensions for quick single-file analysis"""
```

Install with Tessl CLI
```
npx tessl i tessl/pypi-lizard
```