alonso-skills/mssql-server

Writes, optimizes, and debugs T-SQL queries. Explains SQL Server internals, troubleshoots performance issues, and guides database administration tasks including backup/restore, high availability, security, and index design. Use when the user asks about T-SQL syntax, SQL Server administration, query performance, stored procedures, indexes, locking, transactions, backup/restore, high availability, security, or any MSSQL-related topic — even without saying 'SQL Server' explicitly. Also trigger on terms like SSMS, tempdb, bcp, sqlcmd, MSSQL, sp_executesql, NOLOCK, columnstore, Hekaton, RCSI, param sniffing, or execution plan.


28 — Statistics

<!-- TOC --> <!-- /TOC -->

When to Use This Reference

Load this file when the user asks about:

  • Stale statistics causing bad query plans or incorrect row estimates
  • Auto-update statistics behavior and thresholds
  • DBCC SHOW_STATISTICS interpretation
  • Histograms, RANGE_HI_KEY, EQ_ROWS, AVG_RANGE_ROWS
  • Ascending key / new data distribution problem
  • UPDATE STATISTICS options (FULLSCAN, SAMPLE, ROWCOUNT, PAGECOUNT)
  • Filtered statistics for partial table statistics
  • Multi-column statistics and density vectors
  • STATISTICS_INCREMENTAL for partitioned tables
  • Cardinality estimation errors and plan regression

Statistics Overview

Statistics are metadata objects that describe the data distribution of one or more columns. The query optimizer uses statistics to estimate the number of rows that will satisfy a predicate (cardinality estimation), which drives plan choices: whether to seek vs. scan, which join algorithm to use, how large a memory grant to allocate.

Each statistics object contains:

  1. Header — when updated, row count, sampled rows, steps
  2. Density vector — all-density for each column prefix (for multi-column stats)
  3. Histogram — up to 200 steps describing value distribution for the leading column

Statistics are created:

  • Automatically on index creation (always)
  • Automatically on query columns when AUTO_CREATE_STATISTICS is ON (default)
  • Manually via CREATE STATISTICS or UPDATE STATISTICS
-- Check auto-statistics settings
SELECT
    name,
    is_auto_create_stats_on,
    is_auto_update_stats_on,
    is_auto_update_stats_async_on
FROM sys.databases
WHERE name = DB_NAME();

[!WARNING] AUTO_UPDATE_STATS_ASYNC Async stats update means the optimizer uses stale stats for the current query and schedules an update in the background. This can cause repeated bad plans until the update completes. Enable async only if you prefer consistent response times over accurate plans for the triggering query.
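If async does fit your workload, the setting is per-database. A minimal sketch (`ALTER DATABASE CURRENT` requires 2012+; note that AUTO_UPDATE_STATISTICS must also be ON for the async variant to have any effect):

```sql
-- Enable background (async) statistics updates for the current database
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Revert to the default synchronous behavior
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC OFF;
```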


How Statistics Are Created

Automatic statistics (AUTO_CREATE_STATISTICS = ON):

  • Created when a query references a column in a predicate (WHERE, JOIN, GROUP BY, ORDER BY) and no statistics exist for that column
  • Created as single-column statistics
  • Named _WA_Sys_<column_id_hex>_<object_id_hex> (hexadecimal column ID and object ID)
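To see which auto-created statistics exist on a table, a sketch scoped to the dbo.Orders example used throughout this file:

```sql
-- List auto-created (_WA_Sys_...) statistics on dbo.Orders and their columns
SELECT s.name AS stats_name, c.name AS column_name
FROM sys.stats s
INNER JOIN sys.stats_columns sc ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
INNER JOIN sys.columns c        ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE s.auto_created = 1
  AND s.object_id = OBJECT_ID('dbo.Orders');
```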

Index statistics — created automatically when an index is built; statistics name = index name; leading key column drives the histogram.

Manual creation:

-- Single column
CREATE STATISTICS stat_OrderDate
ON dbo.Orders (OrderDate);

-- Multi-column
CREATE STATISTICS stat_CustomerStatus
ON dbo.Orders (CustomerID, Status);

-- With specific sample rate
CREATE STATISTICS stat_OrderDate_Full
ON dbo.Orders (OrderDate)
WITH FULLSCAN;

-- Filtered (partial table)
CREATE STATISTICS stat_OrderDate_Active
ON dbo.Orders (OrderDate)
WHERE Status = 'Active';

Histogram Structure

The histogram covers only the leading column of a statistics object. It has up to 200 steps. Each step (row) in the histogram represents a range:

| Column | Description |
| --- | --- |
| RANGE_HI_KEY | Upper bound value of the histogram step |
| EQ_ROWS | Estimated number of rows equal to RANGE_HI_KEY |
| RANGE_ROWS | Estimated number of rows between the previous RANGE_HI_KEY and this one (exclusive) |
| DISTINCT_RANGE_ROWS | Estimated distinct values in the range (not counting RANGE_HI_KEY itself) |
| AVG_RANGE_ROWS | RANGE_ROWS / DISTINCT_RANGE_ROWS — avg rows per distinct value in the range |

How the optimizer uses it:

  • Predicate col = @value: if @value matches a RANGE_HI_KEY, use EQ_ROWS; otherwise use AVG_RANGE_ROWS from the enclosing step
  • Predicate col BETWEEN @lo AND @hi: sum EQ_ROWS and RANGE_ROWS across spanned steps (with partial interpolation at boundaries)
  • If @value is outside the histogram range (ascending key problem), the CE falls back to density-based estimates — often wildly wrong

200-step limit implication: For a table with millions of distinct values, each histogram step covers a wide range. AVG_RANGE_ROWS becomes an average over a large range and can be very inaccurate for skewed distributions.
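On 2016 SP1+ the histogram is also exposed as a relational DMF, sys.dm_db_stats_histogram, which is easier to join and filter than DBCC output. A sketch against the example index:

```sql
-- Histogram as a rowset (2016 SP1+); same data as DBCC SHOW_STATISTICS ... WITH HISTOGRAM
SELECT h.step_number, h.range_high_key, h.range_rows,
       h.equal_rows, h.distinct_range_rows, h.average_range_rows
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) h
WHERE s.object_id = OBJECT_ID('dbo.Orders')
  AND s.name = 'IX_Orders_OrderDate'
ORDER BY h.step_number;
```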


DBCC SHOW_STATISTICS Output Interpretation

-- Full output (header + density vector + histogram)
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');

-- Or specify statistics name explicitly
DBCC SHOW_STATISTICS ('dbo.Orders', 'stat_OrderDate');

-- Show only histogram
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate') WITH HISTOGRAM;

-- Show only density vector
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate') WITH DENSITY_VECTOR;

-- Show only header
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate') WITH STAT_HEADER;

Header fields:

| Field | Meaning |
| --- | --- |
| Name | Statistics object name |
| Updated | Datetime of last update |
| Rows | Row count at last update |
| Rows Sampled | Rows actually read (may be < Rows for sampled update) |
| Steps | Histogram step count (max 200) |
| Density | 1 / distinct values for leading column (deprecated in favor of density vector) |
| Average key length | Avg bytes in key columns |
| String Index | Whether a string summary index exists |
| Filter Expression | For filtered statistics |
| Unfiltered Rows | Total rows regardless of filter |

Density vector:

  • Row per column prefix: (col1), (col1, col2), etc.
  • All density = 1 / distinct values for that prefix
  • Estimated rows for an equality predicate = all density × table row count (e.g., density 0.0001 on a 1,000,000-row table estimates 100 rows per equality match)
  • Low density (close to 0) = high selectivity; high density (close to 1) = low selectivity / poor candidate for index seek

Reading the histogram:

-- Look for histogram holes (large range between steps covering skewed data)
-- Look for steps where AVG_RANGE_ROWS >> EQ_ROWS (high within-step skew)
-- Look for max RANGE_HI_KEY vs actual max value (ascending key gap)
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate') WITH HISTOGRAM;

SELECT MAX(OrderDate) FROM dbo.Orders;
-- If max > last histogram RANGE_HI_KEY, ascending key problem is present

Auto-Update Thresholds

SQL Server auto-updates statistics when a modification counter exceeds a threshold. The counter (rowmodctr or the newer modification_counter) increments on INSERT, UPDATE, DELETE, and MERGE for each affected row (updates count as 1 delete + 1 insert for stats purposes).

Legacy threshold (pre-2016, compat < 130):

| Table type | Threshold to trigger auto-update |
| --- | --- |
| Empty table | First INSERT (any rows) |
| Table with < 500 rows | 500 modifications |
| Table with ≥ 500 rows | 500 + 20% of row count |

Problem: On a 10-million-row table, 20% = 2 million modifications required before auto-update fires. By that point, statistics may be badly stale.

Check current modification counts:

SELECT
    OBJECT_NAME(s.object_id)           AS table_name,
    s.name                              AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.rows_sampled,
    sp.modification_counter,
    CAST(100.0 * sp.modification_counter / NULLIF(sp.rows, 0) AS DECIMAL(5,2))
                                        AS pct_modified
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('dbo.Orders')
ORDER BY sp.modification_counter DESC;

Dynamic Statistics Threshold (2016+)

[!NOTE] SQL Server 2016 / Compatibility Level 130 Trace flag 2371 (which enabled dynamic threshold in earlier versions) is superseded by compatibility level 130+, where dynamic threshold is ON by default.

Dynamic threshold formula:

threshold = SQRT(1000 × current_row_count)

Examples:

| Row count | Old threshold (20%) | Dynamic threshold |
| --- | --- | --- |
| 10,000 | 2,000 | ~3,162 |
| 100,000 | 20,000 | ~10,000 |
| 1,000,000 | 200,000 | ~31,623 |
| 10,000,000 | 2,000,000 | ~100,000 |
| 100,000,000 | 20,000,000 | ~316,228 |

For large tables the dynamic threshold fires much sooner than the legacy 20% rule, significantly reducing the window of stale statistics.
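The formula can be evaluated directly against catalog metadata to see how far each table sits from its next auto-update. A sketch (approximate: at compat 130+ SQL Server actually uses the smaller of the legacy 20% rule and the dynamic threshold):

```sql
-- Approximate dynamic auto-update threshold per user table
SELECT
    t.name                     AS table_name,
    SUM(p.rows)                AS row_count,
    SQRT(1000.0 * SUM(p.rows)) AS dynamic_threshold
FROM sys.tables t
INNER JOIN sys.partitions p
    ON p.object_id = t.object_id AND p.index_id IN (0, 1)  -- heap or clustered index only
WHERE t.is_ms_shipped = 0
GROUP BY t.name
ORDER BY row_count DESC;
```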

To check which CE/threshold behavior is active:

SELECT compatibility_level FROM sys.databases WHERE name = DB_NAME();
-- 130+ = dynamic threshold enabled

Ascending Key Problem

Symptom: Queries filtering on a date/timestamp/identity column for recent values produce dramatically overestimated or underestimated row counts, causing bad plans.

Root cause: Statistics are updated only when the modification threshold is crossed. In the meantime, new rows are inserted with values beyond the histogram's RANGE_HI_KEY maximum. The CE has no histogram data for these values and falls back to a fixed fraction of the density estimate — often 1 row or a tiny fraction of actual rows.

Diagnosis:

-- Compare histogram max to actual max
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate') WITH HISTOGRAM;
-- Note the max RANGE_HI_KEY

SELECT MAX(OrderDate) AS actual_max FROM dbo.Orders;
-- If actual_max >> RANGE_HI_KEY, ascending key problem is present

-- Confirm via estimated vs actual in execution plan
-- Look for large discrepancy on the date column predicate

Workarounds (in order of preference):

  1. Increase update frequency — use a maintenance job with UPDATE STATISTICS ... WITH FULLSCAN or Ola Hallengren's IndexOptimize job.

  2. Trace flag 2389/2390 (pre-2016) — marks the leading column as "ascending key" so the CE applies a higher estimate for out-of-range values. Not needed at compat 130+ with CE 120+ which has built-in ascending key heuristics.

  3. STATISTICS_INCREMENTAL (2014+, partitioned tables) — update only the newest partition's statistics, avoiding a full-table scan.

  4. Filtered statistics — create a statistics object covering only recent data (requires maintenance to keep the filter relevant).

  5. Query hints — OPTION (USE HINT ('ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES')) or OPTIMIZE FOR as a last resort.

-- Force a full statistics update on the problematic index/column
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH FULLSCAN;

-- Or update all statistics on the table
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

UPDATE STATISTICS Options

-- Syntax
UPDATE STATISTICS table_or_view [ index_or_stats_name ]
    [ WITH
        { FULLSCAN [ , PERSIST_SAMPLE_PERCENT = { ON | OFF } ]
        | SAMPLE n { PERCENT | ROWS }
        | RESAMPLE
        | ROWCOUNT = n, PAGECOUNT = n
        | NORECOMPUTE
        | INCREMENTAL = { ON | OFF }
        | MAXDOP = n   -- 2022+
        }
    ]
| Option | Description | When to use |
| --- | --- | --- |
| FULLSCAN | Read every row; most accurate | After bulk load, ascending key fix, initial setup |
| SAMPLE n PERCENT | Read n% of rows | Balance between speed and accuracy on very large tables |
| SAMPLE n ROWS | Read exactly n rows | Rarely needed; use PERCENT instead |
| RESAMPLE | Use same sample rate as last update | Consistent behavior in maintenance jobs |
| ROWCOUNT = n, PAGECOUNT = n | Inject fake row/page counts without updating histogram | Rarely: forcing optimizer to treat table as larger/smaller for testing |
| NORECOMPUTE | Disable auto-update for this stats object after manual update | Use with caution — breaks auto-maintenance |
| INCREMENTAL = ON | Partition-level update (requires STATISTICS_INCREMENTAL setup) | Partitioned tables; see section below |
| PERSIST_SAMPLE_PERCENT = ON | Remember the FULLSCAN or SAMPLE rate for future auto-updates | 2016+; prevents auto-update from downgrading to default sampling |

[!NOTE] SQL Server 2022 MAXDOP = n option added to UPDATE STATISTICS — allows controlling parallelism during stats update without changing the instance-level MAXDOP.

[!NOTE] SQL Server 2016 / Compatibility Level 130 PERSIST_SAMPLE_PERCENT = ON persists the explicit sample rate for subsequent auto-updates on that statistics object, so a FULLSCAN doesn't revert to default sampling on the next auto-update.
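Combining the FULLSCAN and persistence options, a sketch that locks in 100% sampling so later auto-updates do not downgrade to default sampling (introduced around 2016 SP1 CU4):

```sql
-- FULLSCAN now, and keep that sample rate on subsequent auto-updates
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate
WITH FULLSCAN, PERSIST_SAMPLE_PERCENT = ON;
```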

Update all statistics on a table:

-- All statistics (index + auto-created column stats)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- All statistics in a database (maintenance pattern)
EXEC sp_updatestats;
-- Note: sp_updatestats only updates stats with modification_counter > 0
-- It uses default sampling, not FULLSCAN — may not fix ascending key

Filtered Statistics

Filtered statistics cover a subset of rows defined by a WHERE predicate. The optimizer uses them when a query's WHERE clause matches (or implies) the filter.

When to create filtered statistics:

  • Table has heavily skewed value distribution (e.g., 95% of orders are Status = 'Completed', 5% are Status = 'Active')
  • Queries almost always filter on a specific value or range
  • Ascending key problem on recent data only
-- Filtered statistics for active orders only
CREATE STATISTICS stat_Orders_Active_Date
ON dbo.Orders (OrderDate)
WHERE Status = 'Active'
WITH FULLSCAN;

-- For recent data (ascending key workaround — requires periodic maintenance)
CREATE STATISTICS stat_Orders_Recent
ON dbo.Orders (OrderDate)
WHERE OrderDate >= '2024-01-01'
WITH FULLSCAN;

Optimizer match rules:

  • The query's WHERE clause must imply the statistics filter — not just overlap
  • Parameterized queries may not match filtered stats if the optimizer can't prove the parameter satisfies the filter at compile time
  • Filtered statistics on a column with filtered indexes often complement each other

Maintaining filtered statistics:

  • Auto-update respects the filter — modification_counter counts only filtered rows
  • If the filter covers a moving time window, the filter definition must be recreated periodically (DROP + CREATE — cannot ALTER the filter)
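To illustrate the match rules with the stat_Orders_Active_Date object created above (hypothetical data): a literal predicate implies the filter, while a parameterized one normally cannot unless the statement is recompiled per execution:

```sql
-- Literal predicate: implies Status = 'Active', so filtered stats can be used
SELECT COUNT(*) FROM dbo.Orders
WHERE Status = 'Active' AND OrderDate >= '2024-06-01';

-- Parameterized predicate: @status is unknown at compile time, so the
-- filtered stats are normally skipped. OPTION (RECOMPILE) embeds the
-- sniffed value and lets the optimizer prove the filter is implied.
DECLARE @status VARCHAR(20) = 'Active';
SELECT COUNT(*) FROM dbo.Orders
WHERE Status = @status AND OrderDate >= '2024-06-01'
OPTION (RECOMPILE);
```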

Multi-Column Statistics

Statistics on multiple columns capture correlation between columns and provide density vectors for column prefixes.

-- Two-column statistics
CREATE STATISTICS stat_CustomerStatus
ON dbo.Orders (CustomerID, Status)
WITH FULLSCAN;

Density vector for multi-column stats:

  • Row 1: density for (CustomerID) alone — same as single-column stats
  • Row 2: density for (CustomerID, Status) — captures joint selectivity

When multi-column stats help:

  • Queries with WHERE CustomerID = @id AND Status = @status — optimizer can use joint density instead of multiplying individual selectivities (which assumes independence, usually wrong)
  • Column ordering matters: put the most selective / equality-predicate column first

Limitation: The histogram covers only the leading column. If the second column drives the range predicate, a separate single-column stats object is better.

-- Check if correlated columns exist without multi-column stats
-- (Look for columns that are always queried together in plan cache)
SELECT
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2)+1) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC;

STATISTICS_INCREMENTAL

[!NOTE] SQL Server 2014 / Compatibility Level 120 Incremental statistics require a partitioned table, and the statistics object must be created with INCREMENTAL = ON (or the index built with STATISTICS_INCREMENTAL = ON).

Incremental statistics maintain per-partition statistics metadata, allowing updates to target only changed partitions — critical for large partitioned tables where a full FULLSCAN is prohibitively expensive.

Setup:

-- Enable when creating the index (statistics created with index inherit this)
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)
WITH (STATISTICS_INCREMENTAL = ON);

-- Or create standalone incremental statistics
CREATE STATISTICS stat_Orders_Inc
ON dbo.Orders (OrderDate)
WITH FULLSCAN, INCREMENTAL = ON;

Updating only the newest partition:

-- Update statistics for partition 12 only (e.g., after loading December data)
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate
WITH RESAMPLE ON PARTITIONS (12);

-- Or use partition function to identify partition number
DECLARE @partition_num INT;
SELECT @partition_num = $PARTITION.PF_OrderDate('2024-12-01');

UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate
WITH RESAMPLE ON PARTITIONS (@partition_num);

How the optimizer uses incremental stats:

  • The histogram is synthesized from per-partition component histograms
  • Each partition's component histogram has up to 200 steps
  • The merged histogram seen by the optimizer has up to 200 steps total (may lose resolution when merging many partitions)

Limitation: Incremental stats do not improve cardinality estimation for queries that span multiple partitions — the merged histogram has reduced resolution. They are primarily a maintenance efficiency feature, not a plan quality feature.

-- Check which stats are incremental
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name                    AS stats_name,
    s.is_incremental
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('dbo.Orders')
  AND s.is_incremental = 1;

Cardinality Estimator (CE) Versions

The cardinality estimator version affects how histograms are used and how multi-predicate selectivity is calculated.

| CE Version | Compat Level | Default for | Key behavioral changes |
| --- | --- | --- | --- |
| CE 70 | 70 | SQL Server 7.0 | Original CE |
| CE 120 | 120 | SQL Server 2014 | Rewritten; better multi-predicate, ascending key heuristics |
| CE 130 | 130 | SQL Server 2016 | Dynamic threshold, incremental updates |
| CE 140 | 140 | SQL Server 2017 | IQP interleaved execution, memory grant feedback |
| CE 150 | 150 | SQL Server 2019 | Batch mode on rowstore, table variable deferred compilation |
| CE 160 | 160 | SQL Server 2022 | DOP feedback, CE feedback, PSPO |

[!WARNING] CE version change on upgrade Upgrading compatibility level changes the CE. Plans that were good under the old CE may regress. Test with Query Store's plan forcing to revert individual queries.

Force old CE for a specific query:

-- Use legacy CE 70 for this query
SELECT * FROM dbo.Orders WHERE OrderDate > '2024-01-01'
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

-- Or pin the optimizer to a specific CE model (2017+)
SELECT * FROM dbo.Orders WHERE OrderDate > '2024-01-01'
OPTION (USE HINT ('QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_120'));

Verify CE version in use:

-- From execution plan XML: look for CardinalityEstimationModelVersion attribute
SELECT qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE qp.query_plan.value('(//*[@StatementId])[1]/@CardinalityEstimationModelVersion',
    'INT') < 120;  -- find queries using legacy CE

Statistics and Index Maintenance

INDEX REBUILD vs statistics:

  • ALTER INDEX ... REBUILD updates statistics with FULLSCAN (reads all rows)
  • ALTER INDEX ... REORGANIZE does not update statistics
  • After a rebuild, auto-update statistics won't fire until the modification threshold is crossed again

Recommendation: Use a maintenance strategy (e.g., Ola Hallengren's IndexOptimize) that handles both index defragmentation and statistics updates independently — don't rely on rebuilds as your stats update mechanism.

-- Rebuild updates stats; reorganize does not
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
-- Equivalent to UPDATE STATISTICS with FULLSCAN for that index

ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
-- Stats remain untouched

After bulk loads:

-- Always update statistics after significant bulk load
BULK INSERT dbo.Orders FROM '\\server\share\orders.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

-- Then update stats
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
-- Or just the relevant index
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH FULLSCAN;

Common Patterns

Identify most stale statistics:

SELECT TOP 20
    OBJECT_SCHEMA_NAME(s.object_id) + '.' + OBJECT_NAME(s.object_id) AS table_name,
    s.name          AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter,
    CAST(100.0 * sp.modification_counter / NULLIF(sp.rows, 0) AS DECIMAL(5,2)) AS pct_modified,
    sp.rows_sampled,
    CAST(100.0 * sp.rows_sampled / NULLIF(sp.rows, 0) AS DECIMAL(5,2))         AS sample_pct
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id > 100   -- exclude system tables
ORDER BY sp.modification_counter DESC;

Check if ascending key problem exists:

-- For each index, compare histogram max to actual max
SELECT
    OBJECT_NAME(s.object_id)    AS table_name,
    s.name                       AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('dbo.Orders')
  AND s.name LIKE 'IX_%';

-- Then inspect histogram for the suspicious index
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate') WITH HISTOGRAM;

Force stats update for all tables in database:

-- sp_updatestats: only updates stats where modification_counter > 0
-- Uses default sampling (not FULLSCAN)
EXEC sp_updatestats;

-- For FULLSCAN on all user tables:
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'UPDATE STATISTICS ' +
    QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name) +
    ' WITH FULLSCAN;' + CHAR(10)
FROM sys.tables
WHERE is_ms_shipped = 0;
EXEC sp_executesql @sql;

Diagnose bad cardinality estimate from a plan:

-- Capture estimated vs actual row counts from recent plans
SELECT TOP 20
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2)+1) AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qs.total_logical_reads / qs.execution_count > 10000
ORDER BY qs.total_logical_reads DESC;

Metadata Queries

All statistics on a table:

SELECT
    s.stats_id,
    s.name                  AS stats_name,
    s.auto_created,
    s.user_created,
    s.is_incremental,
    s.filter_definition,
    sp.last_updated,
    sp.rows,
    sp.rows_sampled,
    sp.steps,
    sp.unfiltered_rows,
    sp.modification_counter,
    STRING_AGG(c.name, ', ') WITHIN GROUP (ORDER BY sc.stats_column_id)
                             AS columns
FROM sys.stats s
INNER JOIN sys.stats_columns sc ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
INNER JOIN sys.columns c        ON c.object_id = sc.object_id AND c.column_id = sc.column_id
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('dbo.Orders')
GROUP BY s.stats_id, s.name, s.auto_created, s.user_created, s.is_incremental,
         s.filter_definition, sp.last_updated, sp.rows, sp.rows_sampled,
         sp.steps, sp.unfiltered_rows, sp.modification_counter
ORDER BY s.stats_id;

Statistics not updated in 7 days:

SELECT
    OBJECT_SCHEMA_NAME(s.object_id) + '.' + OBJECT_NAME(s.object_id) AS table_name,
    s.name                  AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE sp.last_updated < DATEADD(DAY, -7, GETDATE())
  AND s.object_id > 100
ORDER BY sp.last_updated ASC;

Auto-created statistics (candidates for review):

SELECT
    OBJECT_SCHEMA_NAME(s.object_id) + '.' + OBJECT_NAME(s.object_id) AS table_name,
    s.name AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter,
    c.name AS column_name
FROM sys.stats s
INNER JOIN sys.stats_columns sc ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
INNER JOIN sys.columns c        ON c.object_id = sc.object_id AND c.column_id = sc.column_id
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.auto_created = 1
  AND s.object_id > 100
ORDER BY sp.modification_counter DESC;

Incremental statistics per partition:

-- Requires sys.dm_db_incremental_stats_properties (2014+)
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name                   AS stats_name,
    isp.partition_number,
    isp.last_updated,
    isp.rows,
    isp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_incremental_stats_properties(s.object_id, s.stats_id) isp
WHERE s.object_id = OBJECT_ID('dbo.Orders')
ORDER BY isp.partition_number;

Gotchas

  1. Auto-update fires at the end of the triggering query — the query that trips the modification threshold runs with stale stats and possibly a bad plan. Auto-update happens after that query completes.

  2. NORECOMPUTE disables auto-updates permanently — if you use UPDATE STATISTICS ... WITH NORECOMPUTE (or sp_autostats 'OFF'), SQL Server will never auto-update that stats object again. Always remove NORECOMPUTE after one-off manual updates.

  3. sp_updatestats uses sampling, not FULLSCAN — it's faster but won't fix ascending key problems. Use UPDATE STATISTICS ... WITH FULLSCAN for problem tables.

  4. Rebuilding an index also updates its statistics — but doesn't update auto-created column-level statistics. You may need to update both.

  5. Filtered statistics don't help if the predicate doesn't match at compile time — parameterized queries with WHERE Status = @status won't use filtered stats for Status = 'Active' unless the optimizer can prove at compile time that @status = 'Active'.

  6. 200-step histogram limit is the same regardless of table size — a 1-billion-row table and a 1,000-row table both get at most 200 histogram steps. Bucket width grows proportionally with table size.

  7. Statistics update during index rebuild is FULLSCAN, not sampled — good for accuracy, but a large index rebuild reads every row for the stats update as a side effect; budget for it in maintenance windows.

  8. STATISTICS_INCREMENTAL doesn't improve plan quality for cross-partition queries — the merged histogram has reduced resolution. It is purely a maintenance performance optimization.

  9. Dropping and recreating a table drops all statistics — including manually created ones. TRUNCATE TABLE retains statistics structure but resets modification counters.

  10. Statistics names with auto-generated names (_WA_Sys_...) are not stable across environments — don't reference them by name in scripts. Use sys.stats to find by column name.

  11. CE feedback (2022+) in Query Store can silently override histogram-based estimates — if a plan has repeatedly seen a cardinality mismatch, the CE may apply a learned correction. This is good, but can make debugging confusing. Check sys.query_store_plan_feedback for active corrections.

  12. UPDATE STATISTICS without specifying a stats name updates all statistics on the table — intended behavior, but can be slow on wide tables with many indexes. Be explicit in maintenance windows.
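For gotcha 11, active feedback corrections can be inspected in Query Store. A sketch against the 2022 catalog view (treat the exact feature_desc string as an assumption to verify on your build):

```sql
-- Inspect CE feedback stored by Query Store (SQL Server 2022+)
SELECT pf.plan_feedback_id,
       pf.plan_id,
       pf.feature_desc,
       pf.state_desc,
       pf.feedback_data
FROM sys.query_store_plan_feedback pf
WHERE pf.feature_desc = 'CE Feedback';
```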


See Also

  • references/29-query-plans.md — reading estimated vs actual row counts in execution plans
  • references/30-query-store.md — CE feedback, plan regression tracking, forced plans
  • references/31-intelligent-query-processing.md — memory grant feedback, DOP feedback, how IQP uses statistics
  • references/10-partitioning.md — STATISTICS_INCREMENTAL in context of partition switching
  • references/08-indexes.md — index rebuild vs reorganize and stats impact
  • references/32-performance-diagnostics.md — missing index DMVs, wait stats, sp_Blitz
