
general-secure-coding-agent-skills

github.com/santosomar/general-secure-coding-agent-skills


test-oracle-generator

Generates test oracles — the "expected output" part of a test — by choosing among reference implementations, invariants, inverse functions, or differential comparison when the correct answer isn't obvious. Use when the hard part of testing is knowing what the right answer is, not generating inputs.
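A minimal sketch of the inverse-function strategy, using a hypothetical Newton's-method square root as the function under test (all names are illustrative): we may not know the correct output in closed form, but squaring the result must recover the input.

```python
import math
import random

def sqrt_newton(x: float) -> float:
    """Function under test: Newton's method square root (illustrative)."""
    guess = x if x > 1 else 1.0
    for _ in range(50):
        guess = (guess + x / guess) / 2
    return guess

# Inverse-function oracle: instead of a precomputed expected value,
# check that applying the inverse (squaring) recovers the input.
random.seed(0)
for _ in range(100):
    x = random.uniform(1.0, 1e6)
    y = sqrt_newton(x)
    assert math.isclose(y * y, x, rel_tol=1e-9), (x, y)
print("inverse-function oracle passed on 100 random inputs")
```

The same pattern applies to encode/decode, serialize/parse, and compress/decompress pairs, where the round trip is the oracle.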

behavior-preservation-checker

Verifies that a refactoring or transformation preserved observable behavior by comparing before and after execution, differential testing, or I/O capture. Use after a refactoring, after automated code transformation, before merging a structural PR, or whenever the claim is that two code versions do the same thing.
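The core idea can be sketched with a hypothetical before/after pair (a hand-rolled dedupe loop refactored to a comprehension-style one-liner; names are illustrative, not from the repository):

```python
def dedupe_before(items):
    """Original implementation, prior to the refactoring."""
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def dedupe_after(items):
    """Refactored version: relies on dict preserving insertion order."""
    return list(dict.fromkeys(items))

# Differential check: identical inputs, compare observable outputs.
cases = [[], [1, 1, 2], ["b", "a", "b"], list(range(5)) * 3]
for case in cases:
    before, after = dedupe_before(case), dedupe_after(case)
    assert before == after, (case, before, after)
print("behavior preserved on", len(cases), "cases")
```

A real check would also compare side effects and exceptions, not just return values.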

mocking-test-generator

Generates tests that mock external dependencies — HTTP, databases, filesystems, clocks — isolating the unit under test while still exercising realistic interactions. Use when the code has side effects you can't run in a test, when external services are slow or unavailable, or when testing error paths that are hard to trigger for real.
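A minimal sketch using the standard library's `unittest.mock`, with a hypothetical `fetch_with_retry` as the unit under test (the function and URL are illustrative):

```python
from unittest import mock

def fetch_with_retry(get, url, retries=2):
    """Unit under test: fetch a URL via an injected HTTP client,
    retrying on connection errors."""
    for attempt in range(retries + 1):
        try:
            return get(url)
        except ConnectionError:
            if attempt == retries:
                raise

# Mock the HTTP dependency: fail twice, then succeed. This exercises an
# error path that would be hard to trigger against a real service.
get = mock.Mock(side_effect=[ConnectionError, ConnectionError, "ok"])
assert fetch_with_retry(get, "https://example.test") == "ok"
assert get.call_count == 3
print("retry error path covered without a network")
```

Clocks, databases, and filesystems follow the same pattern: inject the dependency, then script its behavior with `side_effect` or `return_value`.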

c-cpp-to-lean4-translator

Translates C/C++ into Lean 4 for interactive theorem proving — deep verification where automated tools fail. Use when Dafny's automation isn't enough, when proving mathematical properties of an algorithm, or when building a machine-checked proof for publication or certification.

scenario-generator

Generates concrete scenarios from a requirement — happy paths, edge cases, and error conditions — expressed as Given/When/Then or equivalent structured narratives. Use when turning a requirement into acceptance tests, when exploring what could go wrong, or when the requirement is abstract and needs grounding.
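A sketch of the output shape, for a hypothetical requirement "withdraw an amount from a balance" (the function and scenario table are illustrative): each row is a Given/When/Then triple covering a happy path, an edge case, and error conditions.

```python
def withdraw(balance: int, amount: int) -> int:
    """Illustrative system under test for the requirement."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

scenarios = [
    # (name,              given balance, when amount, then result-or-error)
    ("happy path",        100,  30, 70),
    ("exact balance",     100, 100, 0),           # edge case
    ("overdraw rejected", 100, 101, ValueError),  # error condition
    ("zero amount",       100,   0, ValueError),  # error condition
]

for name, balance, amount, expected in scenarios:
    if isinstance(expected, type) and issubclass(expected, Exception):
        try:
            withdraw(balance, amount)
        except expected:
            pass
        else:
            raise AssertionError(name)
    else:
        assert withdraw(balance, amount) == expected, name
print("all", len(scenarios), "scenarios pass")
```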

specification-to-temporal-logic-generator

Translates specifications into temporal logic formulas (LTL, CTL, or TLA) by matching the specification's shape to the right logic and operators. Use when formalizing requirements for any model checker, when choosing between LTL and CTL for a property, or when the user has a temporal claim and doesn't know which operators express it.
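The shape-matching can be illustrated on the classic response property (the requirement wording is an example; only the operator mapping is the point):

```
Requirement: "every request is eventually granted"
LTL:  G (request -> F grant)       -- holds of every individual trace
CTL:  AG (request -> AF grant)     -- holds over the branching tree
TLA:  [](request => <>grant)
```

A property that quantifies over possible futures ("from every state, a reset is *possible*") needs CTL's `EF`; a property about each execution in isolation fits LTL.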

pseudocode-to-python-code

Translates pseudocode into idiomatic Python, choosing the right standard-library structures and leveraging Python idioms that pseudocode doesn't express. Use when implementing an algorithm from a paper or spec, when the user hands you pseudocode and wants Python, or when realizing a verified-pseudocode artifact.
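A small sketch of the kind of translation involved, on illustrative pseudocode for a top-k word count: the counting loop and manual top-k selection collapse into standard-library idioms (`collections.Counter`, `most_common`) that the pseudocode doesn't express.

```python
# Pseudocode (e.g. from a paper):
#   for each word in document:
#       if word in counts: counts[word] <- counts[word] + 1
#       else: counts[word] <- 1
#   return the k words with highest count
from collections import Counter

def top_k_words(document: str, k: int) -> list[str]:
    # Counter expresses the counting loop directly;
    # most_common replaces the manual top-k selection.
    counts = Counter(document.split())
    return [word for word, _ in counts.most_common(k)]

print(top_k_words("a b a c a b", 2))  # → ['a', 'b']
```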

semantic-equivalence-verifier

Proves two program fragments semantically equivalent using symbolic reasoning — stronger than testing, applicable when differential testing is insufficient or impossible. Use when behavior preservation must be proven rather than sampled, when the input space is too large to enumerate, or when a transformation needs a correctness argument.

formal-spec-generator

Dispatch skill — routes a formal specification request to the right concrete generator based on what's being specified and what needs to be proven. Use when the user asks to formally specify something without naming a target formalism, or when unsure which verification tool fits the problem.

static-bug-detector

Identifies bugs through static code analysis (null dereferences, type mismatches, control flow issues) without executing the program. Use when scanning code for defects before running tests, when the user asks for static analysis, or when integrating with CI for defect detection.
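A toy illustration of the approach, not the skill's actual analyzer: a checker built on Python's `ast` module that flags `== None` comparisons without executing the code (the source snippet is illustrative).

```python
import ast

SOURCE = """
def f(x):
    if x == None:
        return 0
    return x.value
"""

class NoneComparisonFinder(ast.NodeVisitor):
    """Flags comparisons written as '== None' / '!= None'
    (should be 'is None' / 'is not None')."""

    def __init__(self):
        self.findings = []

    def visit_Compare(self, node):
        for op, comp in zip(node.ops, node.comparators):
            if isinstance(op, (ast.Eq, ast.NotEq)) and \
               isinstance(comp, ast.Constant) and comp.value is None:
                self.findings.append(node.lineno)
        self.generic_visit(node)

finder = NoneComparisonFinder()
finder.visit(ast.parse(SOURCE))
print("None-comparison at lines:", finder.findings)  # → [3]
```

Real detectors layer data-flow and type information on top of this kind of syntax-tree walk, but the execute-nothing principle is the same.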

taint-instrumentation-assistant

Sets up taint tracking by defining sources, sinks, and sanitizers from Project CodeGuard's input-validation taxonomy, then configures the target tool (CodeQL, Semgrep, custom instrumentation). Use when wiring taint analysis into CI, when the user asks for taint tracking, or when you need a source/sink catalog for a specific language.
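As a sketch of the shape of the output, a minimal Semgrep taint-mode rule wiring a web-input source to a SQL sink (the rule id, patterns, and sanitizer are illustrative placeholders, not entries from Project CodeGuard's taxonomy):

```yaml
rules:
  - id: user-input-to-sql
    mode: taint
    languages: [python]
    severity: ERROR
    message: Untrusted request data reaches a SQL execution sink.
    pattern-sources:
      - pattern: flask.request.args.get(...)
    pattern-sinks:
      - pattern: cursor.execute(...)
    pattern-sanitizers:
      - pattern: int(...)
```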

counterexample-debugger

Interprets and explains counterexamples produced by model checkers or property-based testing tools to make them actionable. Use when TLC, NuSMV, CBMC, or a property-based test emits a counterexample the user doesn't understand, when a trace is too long to read, or when mapping a model-level trace back to source code.

test-suite-prioritizer

Orders tests so failures surface earliest — runs tests covering changed code first, historically flaky/failing tests early, and slow low-value tests last. Use when the suite is too slow to run in full on every change, when CI feedback takes too long, or when deciding what to run in a smoke-test tier.
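The ordering policy can be sketched as a sort key (the scoring fields — coverage of changed code, recent failure rate, duration — are illustrative; real values would come from coverage maps and CI history):

```python
tests = [
    {"name": "test_slow_e2e",   "covers_changed_code": False,
     "recent_failure_rate": 0.0, "duration_s": 120.0},
    {"name": "test_changed_fn", "covers_changed_code": True,
     "recent_failure_rate": 0.1, "duration_s": 0.3},
    {"name": "test_flaky_io",   "covers_changed_code": False,
     "recent_failure_rate": 0.4, "duration_s": 2.0},
]

def priority(t):
    # Changed-code coverage first, then failure history (descending),
    # then cost (ascending) as the tiebreaker.
    return (not t["covers_changed_code"],
            -t["recent_failure_rate"],
            t["duration_s"])

ordered = sorted(tests, key=priority)
print([t["name"] for t in ordered])
# → ['test_changed_fn', 'test_flaky_io', 'test_slow_e2e']
```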

code-refactoring-assistant

Executes refactorings — extract method, inline, rename, move — in small, behavior-preserving steps with a test between each. Use when the user wants to restructure working code, when cleaning up after a feature lands, or when a smell has been identified and needs fixing.
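One extract-method step, sketched on a hypothetical reporting function (names are illustrative), with the "test between each step" shown as a before/after agreement check:

```python
def report_before(orders):
    """Original: computation and formatting tangled together."""
    total = 0
    for o in orders:
        total += o["qty"] * o["price"]
    return f"Total: {total:.2f}"

# Step 1: extract the computation into its own function.
def order_total(orders):
    return sum(o["qty"] * o["price"] for o in orders)

def report_after(orders):
    """After the extraction; the caller's behavior is unchanged."""
    return f"Total: {order_total(orders):.2f}"

# The test between steps: both versions agree on sample inputs.
sample = [{"qty": 2, "price": 3.5}, {"qty": 1, "price": 0.5}]
assert report_before(sample) == report_after(sample) == "Total: 7.50"
print("extract-method step preserved behavior")
```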

code-summarizer

Produces natural-language summaries of what code does at the function, class, module, or subsystem level, with length and abstraction scaled to the scope. Explains purpose, side effects, and non-obvious behavior rather than restating syntax. Use when onboarding to unfamiliar code, when the user asks what something does, when generating docstrings or architecture notes, or when preparing a handoff document.

code-translation

Translates a single function or small code unit between programming languages, mapping idioms and preserving observable behavior. Use when porting one function, when the user pastes code and asks for it in another language, or as the per-unit primitive for larger migrations.

multi-version-behavior-comparator

Compares the runtime behavior of two or more versions of the same code by running them on identical inputs and diffing outputs, side effects, and errors. Use when validating a refactor, port, or optimization; when the user asks if two implementations behave the same; or when investigating a suspected regression across versions.
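A minimal comparator harness over two illustrative versions of a parser, diffing both return values and raised exceptions (the versions and inputs are hypothetical):

```python
def parse_v1(s):
    return int(s)

def parse_v2(s):
    # Newer version also accepts thousands separators.
    return int(s.replace(",", ""))

def observe(fn, arg):
    """Capture a comparable observation: (status, value-or-error-type)."""
    try:
        return ("ok", fn(arg))
    except Exception as e:
        return ("error", type(e).__name__)

for arg in ["42", "1,000", "x"]:
    a, b = observe(parse_v1, arg), observe(parse_v2, arg)
    status = "same" if a == b else "DIFF"
    print(f"{arg!r}: v1={a} v2={b} -> {status}")
# '1,000' surfaces as a DIFF: ValueError in v1, 1000 in v2.
```

Treating the exception type as part of the observation is what distinguishes this from a plain output diff: a version that silently swallows an error also shows up as a behavioral difference.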