Extract actionable insights and valuable artifacts from web posts, articles, and technical documentation. Use when summarizing content, extracting key ideas from URLs/articles, preserving code snippets and diagrams, or creating visual summaries. Triggers on requests like "summarize this post", "extract insights from", "distill this article", "what are the key takeaways", or when a URL is shared for analysis.
Score: 97 · Quality: 100% (Does it follow best practices?) · Impact: 94% · 1.25x average score across 5 eval scenarios
A startup CTO needs to share technical learnings from a lengthy engineering post-mortem with the company's non-technical board members and investors. The original document is detailed but contains valuable insights about their platform's recent scalability improvements. The audience has limited time and needs information presented in a scannable, no-nonsense format that respects their time while conveying the essential technical decisions and their business impact.
The summary will be included in the quarterly board package where brevity and clarity are essential - every unnecessary word detracts from the message.
The following files are provided as inputs. Extract them before beginning.
=============== FILE: inputs/postmortem.txt ===============
Authors: Platform Engineering Team
Date: October 2024
Stakeholders: Engineering, Product, Operations, Executive Leadership
Following our Series B funding and aggressive customer acquisition, our platform experienced significant growth challenges that necessitated a comprehensive re-architecture effort. This document details our learnings.
Our platform powers real-time collaboration features for enterprise customers. When we started the quarter, we were serving 2,000 concurrent users. By quarter's end, we needed to support 15,000 concurrent users - a 7.5x increase.
On September 3rd, our primary database server reached 95% CPU utilization during peak hours. Response times degraded from an average of 200ms to over 3 seconds. We lost approximately $45,000 in potential revenue due to customers being unable to complete transactions.
The investigation revealed three interconnected issues: all read traffic was hitting the primary database, N+1 query patterns in our GraphQL resolvers, and the absence of a caching layer.
We implemented read replicas with automatic failover. Configuration:

    database:
      primary:
        host: db-primary.internal
        pool_size: 50
      replicas:
        - host: db-replica-1.internal
          pool_size: 100
        - host: db-replica-2.internal
          pool_size: 100
      read_write_splitting: auto

Results:
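The read/write split enabled by `read_write_splitting: auto` can be sketched as follows. This is a minimal illustration, not the team's actual routing code: writes go to the primary, reads round-robin across the replicas named in the config above.

```python
import itertools

class ReadWriteRouter:
    """Sketch of read/write splitting: mutations must hit the primary,
    while plain SELECTs can be served by any replica."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin over replicas

    def route(self, sql):
        # Crude read detection for illustration; real middleware is smarter.
        is_read = sql.lstrip().upper().startswith("SELECT")
        return next(self._replicas) if is_read else self.primary

router = ReadWriteRouter(
    "db-primary.internal",
    ["db-replica-1.internal", "db-replica-2.internal"],
)
print(router.route("SELECT * FROM users"))      # db-replica-1.internal
print(router.route("UPDATE users SET plan=1"))  # db-primary.internal
```

Spreading reads across two replicas with larger pools (100 each vs. 50 on the primary) is what relieves the primary's CPU saturation described above.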
We refactored 47 GraphQL resolvers to use the DataLoader pattern, eliminating N+1 queries.
Before: 847 queries for a typical dashboard load
After: 12 queries for the same dashboard
This reduced average page load time from 4.2 seconds to 0.8 seconds.
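The DataLoader pattern behind this fix can be sketched in a few lines. This is a hypothetical, simplified loader (not the production resolver code): keys requested while resolving one response are collected and fetched in a single batched query instead of one query per key.

```python
class BatchLoader:
    """Minimal sketch of the DataLoader pattern: queue keys, then
    resolve them all with one batched fetch."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # fetches many keys in one query
        self.queue = []           # keys awaiting the next batch
        self.cache = {}           # key -> resolved value

    def load(self, key):
        if key not in self.cache:
            self.queue.append(key)
        return lambda: self.cache[key]  # readable after dispatch()

    def dispatch(self):
        if self.queue:
            self.cache.update(self.batch_fn(self.queue))
            self.queue = []

# One batched query for all authors instead of one per post (the N+1 shape):
calls = []
def fetch_users(ids):
    calls.append(list(ids))
    return {i: f"user-{i}" for i in ids}

loader = BatchLoader(fetch_users)
pending = [loader.load(i) for i in (1, 2, 1, 3)]
loader.dispatch()
print([p() for p in pending])  # ['user-1', 'user-2', 'user-1', 'user-3']
print(len(calls))              # 1  (single batched query)
```

The same idea, applied across 47 resolvers, is what collapsed 847 queries per dashboard load into 12.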
We deployed Redis as a caching layer. [Cache policy details omitted.]
Cache hit rate stabilized at 73%, reducing database load by an estimated 4x.
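The caching layer follows the standard cache-aside shape: check the cache first, and only fall through to the database on a miss. The sketch below uses a plain dict in place of Redis, with an illustrative TTL; it is an assumption-laden illustration, not the deployed code.

```python
import time

class CacheAside:
    """Sketch of cache-aside caching. A dict stands in for Redis here;
    the TTL value is an illustrative assumption."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}   # key -> (value, expires_at)
        self.hits = 0
        self.misses = 0

    def get(self, key, load_from_db):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]
        # Miss: fall through to the database, then populate the cache.
        self.misses += 1
        value = load_from_db(key)
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

cache = CacheAside(ttl_seconds=60)
db_reads = []
def load(key):
    db_reads.append(key)
    return f"row-{key}"

for _ in range(4):
    cache.get("dashboard:42", load)
print(len(db_reads))                             # 1 database read
print(cache.hits / (cache.hits + cache.misses))  # 0.75 hit rate
```

At the reported 73% hit rate, roughly three of every four reads never touch the database, which is consistent with the estimated 4x reduction in database load.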
| Metric | Before | After | Change |
|---|---|---|---|
| p99 Response Time | 3.2s | 290ms | -91% |
| Database CPU (peak) | 95% | 35% | -63% |
| Concurrent Users Supported | 2,000 | 18,000 | +800% |
| Infrastructure Cost | $8,200/mo | $11,400/mo | +39% |
| Revenue Lost to Downtime | $45,000 | $0 | -100% |
This incident revealed communication gaps between our sales and engineering teams. Sales had closed three enterprise contracts without engineering capacity assessment. We've since implemented a "technical readiness review" for deals above $50k ARR.
Special thanks to our on-call engineers who worked through the September 3rd incident: Maria Chen, David Park, and James Wilson. Also thanks to our customers who patiently worked with us during the degradation period.
The executive team's quick approval of the additional infrastructure budget enabled this rapid response.
[Detailed hour-by-hour timeline omitted for brevity]
[Diagrams omitted - see engineering wiki]
=============== END FILE ===============
Create a file named board-summary.md that distills the post-mortem into a format suitable for the board package. The summary should be professional, scannable, and free of unnecessary content.