Build premier landscape PDF proposals for Metis Strategy business development. Use whenever the user asks to create, build, draft, rebuild, refine, or iterate on a proposal, BD follow-up document, pitch document, or client-facing document to be sent to an external prospect after a discovery call. Output is a 16:9 landscape PDF (13.33" x 7.5") combining full-bleed photography, branded graphic devices, and coordinate-based ReportLab layout. Do NOT use for PowerPoint decks (use metis-pptx), whitepapers (use metis-whitepaper), one-pagers or internal reports (use metis-pdf-creator), or SOWs/MSAs (use metis-legal-drafting).
Verification workflow for proposal PDFs. The core lesson from the Citi engagement: programmatic checks alone are not sufficient; every meaningful page must be rendered and visually reviewed before declaring the PDF done.
For every meaningful page (not just the ones that changed), render to PNG at 1.5–2x zoom and actually look at it. Use the Read tool to view each PNG.
What visual review catches that bbox analysis misses:
Do not skip this pass. The Citi engagement repeatedly declared success based on bbox checks, then had user feedback flag exactly the issues that visual review would have caught.
See scripts/verify.py. Usage:

```bash
# Overlap + Y-position check across all pages
python scripts/verify.py --pdf output.pdf --mode overlaps

# Render all pages to PNG for visual review
python scripts/verify.py --pdf output.pdf --mode render --out verify/

# Check for banned content (em dashes, absolutes)
python scripts/verify.py --pdf output.pdf --mode content
```

The overlap check is built on PyMuPDF:

```python
import fitz

def rects_overlap(r1, r2, threshold=3):
    x_overlap = min(r1[2], r2[2]) - max(r1[0], r2[0])
    y_overlap = min(r1[3], r2[3]) - max(r1[1], r2[1])
    return x_overlap > threshold and y_overlap > threshold

doc = fitz.open(pdf_path)
for i, page in enumerate(doc):
    spans = []
    for b in page.get_text('dict')['blocks']:
        if 'lines' in b:
            for line in b['lines']:
                for span in line['spans']:
                    t = span['text'].strip()
                    if 'Page ' in t and 'Citi' in t: continue  # footer
                    if 'Proprietary' in t: continue            # footer
                    if len(t) < 2: continue                    # stray glyphs
                    spans.append((span['bbox'], t))
    overlaps = 0
    for j, (b1, t1) in enumerate(spans):
        for k, (b2, t2) in enumerate(spans):
            if j >= k: continue
            if t1.strip() == t2.strip(): continue  # identical repeated text is not a collision
            if rects_overlap(b1, b2):
                overlaps += 1
    if overlaps:
        print(f'Page {i+1}: {overlaps} overlaps')
```

Threshold tuning: threshold=3 (points) is right for proposals. Below that you get false positives from glyph descenders touching the next line's ascenders. Above that you miss small but real overlaps.
Common false positive: Metric cards where the big number (24pt font) has a bbox that extends slightly into the small label's ascender region. This is solved by making the card at least 56pt tall, not 48pt.
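The 56pt figure can be sanity-checked with rough arithmetic. This sketch uses generic font metrics (ascent roughly 0.8 em, descent roughly 0.3 em; assumptions, not measurements of the proposal fonts) and the verifier's 3pt overlap threshold:

```python
# Rough vertical-clearance check for a stacked metric card.
# Font metrics are generic approximations, not measured values.

def text_bbox_y(baseline_from_top, size, ascent=0.8, descent=0.3):
    """Vertical extent (top, bottom) of a text span, top-left origin."""
    return (baseline_from_top - ascent * size,
            baseline_from_top + descent * size)

def y_overlap(a, b, threshold=3):
    """Same rule as the verifier: vertical overlap > threshold pts counts."""
    return min(a[1], b[1]) - max(a[0], b[0]) > threshold

def card_overlaps(card_height):
    number = text_bbox_y(30, 24)             # 24pt number, baseline 30pt down
    label = text_bbox_y(card_height - 8, 9)  # 9pt label, 8pt above the bottom
    return y_overlap(number, label)

print(card_overlaps(48))  # True: bboxes collide at 48pt
print(card_overlaps(56))  # False: 56pt gives clearance
```

With these metrics a 48pt card puts the number's descenders within 3pt of the label's ascenders, while 56pt leaves clear air; rerun with the real font metrics before trusting the exact cutoff.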
For each content page (not dividers, not cover), find the bottom-most text line. It should be at y > 460pt (85% of the 540pt page height). If lower, there is likely excess whitespace.
```python
for i, page in enumerate(doc):
    max_y = 0
    for b in page.get_text('dict')['blocks']:
        if 'lines' in b:
            for line in b['lines']:
                for span in line['spans']:
                    y = span['origin'][1]
                    if y > max_y:
                        max_y = y
    pct = max_y / 540 * 100
    print(f'Page {i+1}: bottom text at {max_y:.0f}pt ({pct:.0f}%)')
```

Caveat: this check doesn't work for image-dominant pages because the bbox is only for text, not the image. For those pages, visual review is the only option.
```python
banned = {
    'em_dash': ['—', '\u2014'],
    'absolutes': ['Zero', 'Never', 'Always', 'Every'],
    'ai_tells': ['leverage', 'utilize', 'impactful', 'synergies'],
}

for i, page in enumerate(doc):
    text = page.get_text()
    for category, words in banned.items():
        for w in words:
            if w in text:
                print(f'Page {i+1}: banned "{w}" ({category})')
```

Review hits manually; some may be legitimate (e.g., a word from a verbatim client quote).
```python
import fitz

doc = fitz.open(pdf_path)
for i in range(len(doc)):
    pix = doc[i].get_pixmap(matrix=fitz.Matrix(1.5, 1.5))
    pix.save(f'verify/page-{i+1:02d}.png')
```

In Claude Code, the Read tool displays PNGs inline. Walk through each page and assess:
Cover page checklist:
Section divider checklist:
Content page checklist:
Image page checklist:
Proof point (case study) checklist:
Closing page checklist:
Before every deployment, verify the spelling of every named person referenced in the proposal:
Cross-reference against:
This is a 2-minute check that prevents a "Neil Ciderman / Neil Seideman"-level credibility hit.
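A helper can pre-screen the text for unvetted names before the manual pass. The Capitalized-pair regex and the approved list below are illustrative assumptions; anything it flags still needs the cross-reference above:

```python
import re

def candidate_names(text):
    """Naive heuristic: adjacent Capitalized words look like person names."""
    return set(re.findall(r'\b[A-Z][a-z]+ [A-Z][a-z]+\b', text))

def unverified_names(text, approved):
    """Names that appear in the text but not in the approved spellings."""
    return candidate_names(text) - set(approved)

# Hypothetical approved list: build it from LinkedIn, call notes, email signatures.
approved = {'Neil Seideman'}
text = 'Follow-up for Neil Ciderman after the discovery call.'
print(unverified_names(text, approved))  # flags the misspelling
```

The heuristic over-matches (headings, product names), so treat its output as a review queue, not a verdict.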
The PDF is ready for deployment when:
- Content follows the rules in content-rules.md
- Working copies live in _working/, NOT on the shared drive
- Filename follows <Client> <Context> - Metis Strategy <YYYY-MM-DD>.pdf
- Deployed to G:\Shared drives\BizDev Collaboration\<Client>\<Year>\

When the user reviews the shared drive copy, if they surface any issue, the debug loop should be:
Before declaring the skill stable for a new client, run a build-reproducibility test:
If builds are not reproducible, there is a non-determinism bug. Common sources:
These matter when the user wants to diff before/after builds to see what changed. Reproducible builds make tiny edits trackable.