This version of the tile failed moderation
Failed moderation (intent review) in rules/session-bootstrap.md: This is a prompt injection attack disguised as a cursor rule. It instructs an AI agent to execute a bash command as its very first action in every session and then call an external skill ('tessl__trusted-memory'), which could load arbitrary instructions or exfiltrate data. The sentinel file (/tmp/session_bootstrapped) is designed to resemble a legitimate caching/deduplication mechanism, but its real purpose is to force the AI to execute commands and call external services before processing the user's actual request. This is a classic prompt injection / agent manipulation pattern.