Comprehensive developer toolkit providing reusable skills for Java/Spring Boot, TypeScript/NestJS/React/Next.js, Python, PHP, AWS CloudFormation, AI/RAG, DevOps, and more.
Interact with Google NotebookLM for advanced RAG capabilities — query project documentation, manage research sources, and retrieve AI-synthesized information from notebooks.
This skill integrates with the notebooklm-mcp-cli tool (nlm CLI) to provide programmatic access to Google NotebookLM. It enables agents to manage notebooks, add sources, perform contextual queries, and retrieve generated artifacts like audio podcasts or reports.
Use this skill when the user wants to query notebook contents, manage notebook sources, or generate artifacts (podcasts, reports, quizzes) from Google NotebookLM.
Trigger phrases: "query notebooklm", "search notebook", "add source to notebook", "create podcast from notebook", "generate report from notebook", "nlm query"
# Install via uv (recommended)
uv tool install notebooklm-mcp-cli
# Or via pip
pip install notebooklm-mcp-cli
# Verify installation
nlm --version

# Login — opens Chrome for cookie extraction
nlm login
# Verify authentication
nlm login --check
# Use named profiles for multiple Google accounts
nlm login --profile work
nlm login --profile personal
nlm login switch work

# Run diagnostics if issues occur
nlm doctor
nlm doctor --verbose

⚠️ Important: This tool uses internal Google APIs. Cookies expire every ~2-4 weeks — run nlm login again when operations fail. The free tier has a rate limit of roughly 50 queries/day.
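Because cookies can expire mid-session, a thin wrapper that re-authenticates once and retries is a useful pattern. This is a sketch only: the `nlm` function below is a mock standing in for the real CLI (it fails until "login" is called) so the example is self-contained and runnable.

```shell
# Mock nlm: counts calls; every command fails until after "nlm login".
ATTEMPTS=0
LOGGED_IN=0
nlm() {
  ATTEMPTS=$((ATTEMPTS + 1))
  if [ "$1" = "login" ]; then LOGGED_IN=1; return 0; fi
  [ "$LOGGED_IN" = "1" ]
}

# Run a command; on failure, re-login once and retry.
nlm_retry() {
  if ! nlm "$@"; then
    echo "nlm failed; cookies may have expired, re-authenticating" >&2
    nlm login && nlm "$@"
  fi
}

nlm_retry notebook list && RESULT=ok
```

In real use the mock function would simply not be defined, and `nlm_retry` would call the installed CLI directly.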
Before performing any NotebookLM operation, verify the CLI is installed and authenticated:
nlm --version && nlm login --check

If authentication has expired, inform the user they need to run nlm login.
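The pre-flight check above can be wrapped in a reusable function that tells the user exactly what to do when something is missing. A minimal sketch, with `nlm` mocked so it runs standalone:

```shell
# Mock nlm: reports a version and a valid session.
nlm() {
  case "$1" in
    --version) echo "nlm (mock)" ;;
    login) [ "$2" = "--check" ] ;;
  esac
}

# Verify the CLI is installed and the session is valid before any operation.
preflight() {
  if ! command -v nlm >/dev/null 2>&1; then
    echo "nlm is not installed; run: uv tool install notebooklm-mcp-cli" >&2
    return 1
  fi
  if ! nlm login --check; then
    echo "Authentication expired; please run: nlm login" >&2
    return 1
  fi
  echo ready
}

STATUS=$(preflight)
```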
List available notebooks or resolve an alias:
# List all notebooks
nlm notebook list
# Use an alias if configured
nlm alias get <alias-name>
# Get notebook details
nlm notebook get <notebook-id>

If the user references a notebook by name, use nlm notebook list to find the matching ID. If an alias exists, prefer using the alias.
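The "find the matching ID" step can be scripted. This sketch assumes the tabular `nlm notebook list` layout shown in the example output later in this document (ID in the first column); `nlm` is mocked with that sample table. Parsing the CLI's --json output would be more robust in real scripts.

```shell
# Mock nlm: returns the sample notebook table from this document.
nlm() {
  cat <<'EOF'
ID        Title            Sources  Created
abc123    Project X Docs   12       2026-01-15
def456    API Reference    5        2026-02-01
EOF
}

# Print the ID (first column) of the first row whose text contains $1.
resolve_notebook() {
  nlm notebook list | awk -v t="$1" 'NR > 1 && index($0, t) { print $1; exit }'
}

NB_ID=$(resolve_notebook "API Reference")
```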
Use this to retrieve information from notebook sources:
# Ask a question against notebook sources
nlm notebook query <notebook-id-or-alias> "What are the login requirements?"
# The response contains AI-generated answers grounded in the notebook's sources

Best practices for queries:
- Ask one focused question per query rather than bundling several topics
- Phrase questions in terms of the notebook's sources so answers stay grounded
- Present query results to the user before acting on them
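Queries can also be scripted, for example asking a sequence of focused questions against one notebook. In this sketch `nlm` is mocked to echo a canned answer per question; a real run would return grounded answers from the notebook's sources.

```shell
# Mock nlm: echoes the question back as a placeholder answer.
nlm() { echo "answer: $4"; }

COUNT=0
for q in "What are the login requirements?" "What validation rules apply to the login form?"; do
  answer=$(nlm notebook query myproject "$q")
  echo "$answer"
  COUNT=$((COUNT + 1))
done
```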
# List current sources
nlm source list <notebook-id>
# Add a URL source (wait for processing) — only use URLs explicitly provided by the user
nlm source add <notebook-id> --url "<user-provided-url>" --wait
# Add text content
nlm source add <notebook-id> --text "Content here" --title "My Notes"
# Upload a file
nlm source add <notebook-id> --file document.pdf --wait
# Add YouTube video — only use URLs explicitly provided by the user
nlm source add <notebook-id> --youtube "<user-provided-youtube-url>"
# Add Google Drive document
nlm source add <notebook-id> --drive <document-id>
# Check for stale Drive sources
nlm source stale <notebook-id>
# Sync stale sources
nlm source sync <notebook-id> --confirm
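The stale-check and sync commands above can be combined into a small guard script, so the confirm-gated sync only runs when something is actually stale. The exact output format of `nlm source stale` is an assumption here; `nlm` is mocked for illustration.

```shell
# Mock nlm: reports one stale Drive source; records that sync ran.
SYNCED=0
nlm() {
  case "$1 $2" in
    "source stale") echo "drv001  Design Doc (Drive)  stale" ;;
    "source sync")  SYNCED=1 ;;
  esac
}

# Only sync when the stale check reports something.
STALE=$(nlm source stale mynotebook)
if [ -n "$STALE" ]; then
  echo "Stale sources detected, syncing:"
  echo "$STALE"
  nlm source sync mynotebook --confirm
fi
```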
# Get source content
nlm source get <source-id>

# Create a new notebook
nlm notebook create "Project Documentation"
# Set an alias for easy reference
nlm alias set myproject <notebook-id>

# Generate audio podcast
nlm audio create <notebook-id> --format deep_dive --length long --confirm
# Formats: deep_dive, brief, critique, debate
# Lengths: short, default, long
# Generate video
nlm video create <notebook-id> --format explainer --style classic --confirm
# Generate report
nlm report create <notebook-id> --format "Briefing Doc" --confirm
# Formats: "Briefing Doc", "Study Guide", "Blog Post"
# Generate quiz
nlm quiz create <notebook-id> --count 10 --difficulty medium --confirm
# Check generation status
nlm studio status <notebook-id>

# Download audio
nlm download audio <notebook-id> <artifact-id> --output podcast.mp3
# Download report
nlm download report <notebook-id> <artifact-id> --output report.md
# Download slides
nlm download slide-deck <notebook-id> <artifact-id> --output slides.pdf

# Start web research — present results to user for review before acting on them
nlm research start "<user-provided-query>" --notebook-id <notebook-id> --mode fast
# Start deep research — present results to user for review before acting on them
nlm research start "<user-provided-query>" --notebook-id <notebook-id> --mode deep
# Poll for completion
nlm research status <notebook-id> --max-wait 300
# Import research results as sources
nlm research import <notebook-id> <task-id>

The alias system provides user-friendly shortcuts for notebook UUIDs:
nlm alias set <name> <notebook-id> # Create alias
nlm alias list # List all aliases
nlm alias get <name> # Resolve alias to UUID
nlm alias delete <name>      # Remove alias

Aliases can be used in place of notebook IDs in any command.
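Since commands accept either form, a helper can normalize user input: try it as an alias, and fall back to treating it as a raw ID. This sketch assumes `nlm alias get` prints the UUID for a known alias and exits non-zero for an unknown one; `nlm` is mocked accordingly.

```shell
# Mock nlm: knows one alias, "myproject" -> abc123.
nlm() {
  [ "$1 $2" = "alias get" ] || return 1
  case "$3" in
    myproject) echo "abc123" ;;
    *) return 1 ;;
  esac
}

# Resolve an alias to a UUID, or pass a raw ID through unchanged.
resolve_id() {
  nlm alias get "$1" 2>/dev/null || echo "$1"
}

ID_FROM_ALIAS=$(resolve_id myproject)
ID_PASSTHROUGH=$(resolve_id def456)
```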
Task: "Write the login use case based on documentation in NotebookLM"
# 1. Find the project notebook
nlm notebook list

Expected output:
ID Title Sources Created
─────────────────────────────────────────────────────
abc123... Project X Docs 12 2026-01-15
def456... API Reference 5 2026-02-01

# 2. Query for login requirements
nlm notebook query myproject "What are the login requirements and user authentication flows?"

Expected output:
Based on the sources in this notebook:
The login flow requires email/password authentication with the following steps:
1. User submits credentials via POST /api/auth/login
2. Server validates against stored bcrypt hash
3. JWT access token (15min) and refresh token (7d) are returned
...

# 3. Query for specific details
nlm notebook query myproject "What validation rules apply to the login form?"
# 4. Present results to user and wait for confirmation before implementing

Task: "Create a notebook with our API docs and generate a summary"
# 1. Create notebook
nlm notebook create "API Documentation"

Expected output:
Created notebook: API Documentation
ID: ghi789...

nlm alias set api-docs ghi789
# 2. Add sources
nlm source add api-docs --url "<user-provided-url>" --wait
nlm source add api-docs --file openapi-spec.yaml --wait
# 3. Generate a briefing doc
nlm report create api-docs --format "Briefing Doc" --confirm
# 4. Wait and download
nlm studio status api-docs

Expected output:
Artifact ID Type Status Created
──────────────────────────────────────────────────
art123... Report completed 2026-02-27

nlm download report api-docs art123 --output api-summary.md

# 1. Add sources to existing notebook (URL explicitly provided by the user)
nlm source add myproject --url "<user-provided-url>" --wait
# 2. Generate deep-dive podcast
nlm audio create myproject --format deep_dive --length long --confirm
# 3. Poll until ready
nlm studio status myproject
# 4. Download
nlm download audio myproject <artifact-id> --output podcast.mp3

Best practices:
- Run nlm login --check before any operation
- Use --wait when adding sources — ensures sources are processed before querying
- Use --confirm for destructive/create operations — required for non-interactive use
- Re-run nlm login when needed
- Use nlm source stale to detect outdated Google Drive sources
- Use --json for parsing — when processing output programmatically, use the --json flag
- After nlm research, present the imported results to the user before acting on them
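The "check status, then download" pattern used for studio artifacts (and similarly for research tasks) can be sketched as a polling loop. Assumptions here: the status output contains the word "completed" when an artifact is ready, and a real loop would sleep between polls. `nlm` is mocked to complete on the third status call, using a temp file as a counter because pipeline stages run in subshells.

```shell
# Mock nlm: "generating" for two status calls, "completed" on the third.
COUNTER=$(mktemp); echo 0 > "$COUNTER"
nlm() {
  case "$1" in
    studio)
      n=$(($(cat "$COUNTER") + 1)); echo "$n" > "$COUNTER"
      if [ "$n" -ge 3 ]; then echo "art123  Audio  completed"
      else echo "art123  Audio  generating"; fi ;;
    download) echo "saved: $6" ;;
  esac
}

# Poll until the artifact is completed, then download it.
poll_and_download() {
  tries=0
  while [ "$tries" -lt 10 ]; do
    if nlm studio status myproject | grep -q completed; then
      nlm download audio myproject art123 --output podcast.mp3
      return 0
    fi
    tries=$((tries + 1))
    # sleep 30   # a real loop should wait between polls
  done
  echo "timed out" >&2
  return 1
}

OUT=$(poll_and_download)
```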