Microsoft Teams bots and AI agents - Claude/OpenAI, Adaptive Cards, Graph API
Overall score: 44

| Metric | Result |
|---|---|
| Quality (Does it follow best practices?) | 32% |
| Impact | Pending (no eval scenarios have been run) |
| Advisory | Suggest reviewing before use |
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/ms-teams-apps/SKILL.md

Quality
Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is essentially a comma-separated list of technologies with no verbs, no concrete actions, and no explicit trigger guidance. While it names a recognizable domain (Microsoft Teams bots), it fails to communicate what the skill actually does or when it should be selected, making it poorly suited for skill selection among many options.
Suggestions
Add concrete action verbs describing what the skill does, e.g., 'Builds Microsoft Teams bots and AI agents, designs Adaptive Cards, integrates with Microsoft Graph API for user and channel management.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about creating Teams bots, building Teams apps, designing Adaptive Cards, or integrating with Microsoft Graph API.'
Include common natural-language variations users might say, such as 'Teams app', 'Teams chatbot', 'Teams integration', 'bot framework', or 'Teams messaging'.
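Putting the suggestions above together, the skill's frontmatter might read like the following sketch (the field names follow the usual SKILL.md conventions; the exact wording is illustrative, not the actual file):

```yaml
# Hypothetical frontmatter incorporating the suggestions above.
name: ms-teams-apps
description: >
  Builds Microsoft Teams bots and AI agents (Claude/OpenAI), designs
  Adaptive Cards, and integrates with the Microsoft Graph API for user
  and channel management. Use when the user asks about creating Teams
  bots, building a Teams app or chatbot, designing Adaptive Cards,
  adding a Teams integration, or calling Microsoft Graph from Teams.
```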
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lists technologies (Claude/OpenAI, Adaptive Cards, Graph API) but does not describe any concrete actions. There are no verbs indicating what the skill actually does—it reads more like a tag list than a capability description. | 1 / 3 |
| Completeness | The description does not clearly answer 'what does this do' (no actions described) nor 'when should Claude use it' (no 'Use when...' clause or equivalent trigger guidance). Both dimensions are very weak. | 1 / 3 |
| Trigger Term Quality | It includes some relevant keywords a user might mention ('Microsoft Teams', 'bots', 'AI agents', 'Adaptive Cards', 'Graph API'), but lacks common natural-language variations like 'Teams app', 'chatbot', 'Teams integration', or 'messaging extension'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Microsoft Teams bots' provides some niche specificity, but the inclusion of broad terms like 'AI agents' and 'Claude/OpenAI' could cause overlap with general AI or chatbot skills. It's somewhat specific but not sharply delineated. | 2 / 3 |
| Total | | 6 / 12 Passed |
Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides highly actionable, executable code examples covering the full breadth of Teams app development with Claude/OpenAI integration, which is its primary strength. However, it is extremely verbose and monolithic — nearly everything is inlined into a single massive file with no progressive disclosure. Much of the content (architecture overviews, ASCII diagrams, full manifest boilerplate, env variable listings) could be trimmed or split into referenced files to dramatically improve token efficiency.
Suggestions
Split into multiple files: move the full manifest JSON, Graph operations, Adaptive Card templates, RAG implementation, and deployment scripts into separate referenced files (e.g., MANIFEST.md, GRAPH.md, CARDS.md, DEPLOYMENT.md), keeping only a concise overview with links in the main skill.
Remove the ASCII box diagrams for architecture and UX guidelines — replace with brief bullet lists that convey the same information in a fraction of the tokens.
Trim boilerplate that Claude can generate on its own: full .env templates, docker-compose files, basic Dockerfile patterns, and the complete manifest JSON. Instead, highlight only Teams-specific or non-obvious configuration.
Add explicit validation checkpoints to the deployment workflow: verify bot registration, validate manifest schema before publishing, confirm endpoint reachability before testing.
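As a sketch of the manifest-validation checkpoint suggested above, a deployment script could fail fast on a structurally incomplete manifest before publishing. The field checks here are illustrative only and do not implement the full Teams manifest schema:

```typescript
// Hypothetical pre-publish checkpoint: reject a Teams app manifest that is
// missing fields commonly required for bot registration. This is a sketch,
// not a full manifest-schema validator.
interface TeamsManifest {
  id?: string;
  version?: string;
  bots?: { botId?: string }[];
}

function manifestProblems(m: TeamsManifest): string[] {
  const problems: string[] = [];
  if (!m.id) problems.push("missing app id");
  if (!m.version) problems.push("missing version");
  if (!m.bots?.length || !m.bots[0]?.botId) {
    problems.push("no bot registration (bots[0].botId)");
  }
  return problems;
}

// A deploy workflow would call this and abort if any problems are reported,
// then separately confirm bot registration and endpoint reachability.
const problems = manifestProblems({
  id: "00000000-0000-0000-0000-000000000000", // placeholder app id
  version: "1.0.0",
  bots: [{ botId: "00000000-0000-0000-0000-000000000001" }],
});
if (problems.length > 0) {
  throw new Error(`manifest invalid: ${problems.join(", ")}`);
}
```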
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~800+ lines. The ASCII box diagrams for architecture and UX guidelines waste significant tokens. Includes extensive boilerplate (full manifest JSON, Docker compose, env files) that Claude already knows how to produce. Many sections explain concepts Claude understands (what tabs are, what webhooks do, what SSO is). | 1 / 3 |
| Actionability | The code examples are fully executable and copy-paste ready — complete TypeScript bot implementations, CLI commands, Azure deployment scripts, Dockerfile, unit tests, and Adaptive Card definitions with proper imports and types. | 3 / 3 |
| Workflow Clarity | Deployment steps are listed clearly (Azure CLI commands, Teams Toolkit commands), and testing has a local debug flow. However, there are no explicit validation checkpoints — e.g., no step to verify the bot registration succeeded before deploying, no manifest validation step before publishing, no verification that the ngrok tunnel is working before testing. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of content with no references to external files. The manifest examples, Graph operations, RAG implementation, authentication, Adaptive Cards, deployment, and testing are all inlined. Content like the full manifest JSON, Graph operations class, and RAG implementation should be split into separate reference files with links from the main skill. | 1 / 3 |
| Total | | 7 / 12 Passed |
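The progressive-disclosure split recommended above could look like the following layout (file names are taken from the suggestions; the exact structure is illustrative):

```
skills/ms-teams-apps/
├── SKILL.md          # concise overview, workflow, links to references
└── references/
    ├── MANIFEST.md   # full manifest JSON and Teams-specific config
    ├── GRAPH.md      # Graph operations (users, channels, messages)
    ├── CARDS.md      # Adaptive Card templates
    ├── RAG.md        # RAG implementation details
    └── DEPLOYMENT.md # Azure CLI / Teams Toolkit deployment steps
```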
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (1254 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |