pg_stat_statements: The Complete 2026 Guide
Every column, every gotcha, the queries you should run today, and why pg_stat_statements is still the most useful 80 lines of telemetry in Postgres in 2026.
pg_stat_statements is the most useful 80 lines of telemetry in Postgres. It gives you a per-query-shape rollup of execution count, total time, rows touched, and (since Postgres 13) planning time — for free, in shared memory, with no agent. Every meaningful Postgres monitoring product is built on top of it.
And yet most engineers use maybe four of its seventeen columns. This is the complete 2026 guide: every column, every gotcha, the queries you should run today, and an honest take on its limits and the five complementary tools worth pairing it with in 2026.
What pg_stat_statements actually is
It’s a contrib extension that hooks into Postgres’s query executor. Every time a query runs, Postgres normalizes the statement (replaces literals with $1, $2), hashes it into a queryid, and accumulates per-execution stats into a fixed-size in-memory hash table. You read it like a regular view.
Two important properties: it’s aggregated (no per-call rows — only sums and counts), and the hash table has a maximum size (pg_stat_statements.max, default 5,000). Once you exceed that, Postgres evicts the least-used entries — which is where most surprises come from.
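To see the normalization in action, run the same query shape twice with different literals and read the view back (the `users` table here is a hypothetical example):

```sql
-- Two executions that differ only in their literal:
SELECT * FROM users WHERE id = 1;
SELECT * FROM users WHERE id = 2;

-- Both land in one normalized entry:
SELECT queryid, query, calls
FROM pg_stat_statements
WHERE query LIKE 'SELECT * FROM users%';
-- query shows 'SELECT * FROM users WHERE id = $1', calls = 2
```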
Enabling it (and the right settings)
pg_stat_statements ships with every contrib package and every cloud provider (RDS / Aurora / Cloud SQL / Azure / Crunchy Bridge). You enable it in two steps:
```ini
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000           # default 5000 — bump for busy systems
pg_stat_statements.track = all           # 'top' (default) misses inner statements in functions
pg_stat_statements.track_planning = on   # since PG 13 — captures planning time too
```
Restart Postgres (shared_preload_libraries only takes effect at server start), then create the extension once per database you care about:
```sql
CREATE EXTENSION pg_stat_statements;
```
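A quick sanity check that the library was preloaded and your settings took effect:

```sql
-- Confirm the library loaded and inspect the effective settings:
SHOW shared_preload_libraries;

SELECT name, setting
FROM pg_settings
WHERE name LIKE 'pg_stat_statements%';
```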
Every column, decoded
| Column | What it means | When you actually use it |
|---|---|---|
| userid / dbid | Owner and DB of the query | Multi-tenant DBs; almost always group by these |
| queryid | Stable hash of the normalized statement | The signature; join key for dashboards |
| query | Normalized statement text (pg_stat_statements rewrites literals) | Show the operator what query is slow |
| plans | Number of times the query was planned | Compare with calls to see how often plans are reused (prepared statements) |
| total_plan_time | Cumulative planning time, ms | When planning is your bottleneck (huge IN lists) |
| mean_plan_time / min_plan_time / max_plan_time / stddev_plan_time | Planning latency stats | Same |
| calls | Number of executions | Most useful column. Sort by it. |
| total_exec_time | Cumulative exec time, ms | Sort by this to find what’s burning your DB |
| mean_exec_time / min_exec_time / max_exec_time / stddev_exec_time | Execution latency stats | Mean is the misleading one — see gotchas |
| rows | Cumulative rows returned or affected | Detect 'small fast query, but it returns 100M rows' |
| shared_blks_hit / read / dirtied / written | Buffer pool stats | Cache hit ratio per signature |
| local_blks_* | Temp table buffer stats | Rare — mostly for ETL/large CTE work |
| temp_blks_read / written | Disk-spill stats | Sort/hash spilling to disk → bump work_mem |
| blk_read_time / blk_write_time | Time spent in OS read/write (track_io_timing on) | IO bottleneck detection |
| wal_records / wal_fpi / wal_bytes | WAL volume per query (PG 13+) | Hot writes that bloat your WAL |
| jit_* | JIT compilation stats (PG 15+) | Diagnose 'JIT made it slower' |
| stats_since / minmax_stats_since | When the row started accumulating (PG 17+) | Critical — see gotchas |
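Because userid and dbid are raw OIDs, a useful first step is resolving them to readable names; a minimal sketch:

```sql
-- Resolve userid/dbid OIDs to role and database names:
SELECT r.rolname,
       d.datname,
       s.queryid,
       s.calls,
       round(s.total_exec_time::numeric, 1) AS total_ms
FROM pg_stat_statements s
JOIN pg_roles    r ON r.oid = s.userid
JOIN pg_database d ON d.oid = s.dbid
ORDER BY s.total_exec_time DESC
LIMIT 10;
```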
Five queries to run today
1. Top time-burners
```sql
SELECT queryid,
       substring(query for 100) AS q,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2) AS mean_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```
Sort by total_exec_time, not mean_exec_time. A query that runs 10ms a million times costs more than a query that runs 5s ten times. Most outages live at the top of this list.
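A variant worth keeping around: the same ranking with each signature's share of all tracked execution time, computed with a window function:

```sql
-- Each query's share of all execution time tracked by the view:
SELECT queryid,
       substring(query for 60) AS q,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round((100 * total_exec_time /
              sum(total_exec_time) OVER ())::numeric, 1) AS pct_of_total
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```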
2. Cache hit ratio per signature
```sql
SELECT
  queryid,
  shared_blks_hit,
  shared_blks_read,
  round(100.0 * shared_blks_hit /
        NULLIF(shared_blks_hit + shared_blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_statements
WHERE shared_blks_hit + shared_blks_read > 1000
ORDER BY cache_hit_pct
LIMIT 20;
```

Anything below 90% wants attention; anything below 50% on a busy signature is usually a missing index.
3. Write-amplifying queries
```sql
SELECT queryid,
       substring(query for 80) AS q,
       calls,
       pg_size_pretty(wal_bytes) AS wal_bytes,
       wal_fpi
FROM pg_stat_statements
ORDER BY wal_bytes DESC
LIMIT 20;
```
High wal_fpi (full-page images) means the query keeps modifying pages for the first time since the last checkpoint, so each page goes into the WAL in full. Common culprit: a bulk UPDATE across a wide, cold table shortly after a checkpoint.
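Totals favor frequent queries, so it's worth normalizing by calls to see per-execution WAL cost; a sketch:

```sql
-- WAL generated per execution rather than in total:
SELECT queryid,
       substring(query for 60) AS q,
       calls,
       pg_size_pretty((wal_bytes / calls)::bigint) AS wal_per_call
FROM pg_stat_statements
WHERE calls > 0
ORDER BY wal_bytes / calls DESC
LIMIT 10;
```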
4. Disk-spill detection
```sql
SELECT queryid,
       substring(query for 80) AS q,
       pg_size_pretty(temp_blks_read::bigint * 8192) AS temp_read,
       pg_size_pretty(temp_blks_written::bigint * 8192) AS temp_written
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 20;
```
Anything in this list spilled a sort or hash to disk because work_mem wasn’t large enough. Either bump work_mem for that role/session, or rewrite the query.
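If the spill comes from one workload, scope the work_mem bump to its role rather than raising it globally (the role name here is hypothetical):

```sql
-- Per-role override: only sessions logging in as this role get the
-- larger work_mem; the server default stays untouched.
ALTER ROLE reporting_user SET work_mem = '256MB';
```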
5. Reset and re-measure
```sql
SELECT pg_stat_statements_reset();
```
Reset before a release, then read after — you’ll see exactly what the new code is doing instead of an average over weeks.
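If a reset would clobber someone else's dashboards (see the gotchas below), snapshot-and-diff achieves the same thing; a minimal sketch using a hypothetical `pgss_before` table:

```sql
-- Snapshot the view before the release:
CREATE TABLE pgss_before AS
SELECT * FROM pg_stat_statements;

-- After the release, diff per entry instead of resetting.
-- (Entries that first appeared after the snapshot won't match;
-- acceptable for a sketch.)
SELECT s.queryid,
       substring(s.query for 60) AS q,
       s.calls - b.calls AS new_calls,
       round((s.total_exec_time - b.total_exec_time)::numeric, 1) AS new_ms
FROM pg_stat_statements s
JOIN pgss_before b USING (userid, dbid, queryid)
ORDER BY new_ms DESC
LIMIT 20;
```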
Gotchas that bite production
- `queryid` is unstable across major versions. Postgres 14+ uses a different hash function than 13. Don't use `queryid` as a join key across an upgrade.
- `mean_exec_time` hides bimodals. A query that's 1ms when warm and 5s when cold has a mean of ~50ms — useless. Always look at `stddev_exec_time` alongside the mean. Better: store HDR histograms. (See Why your Postgres p99 lies.)
- Eviction silently drops history. Hit `pg_stat_statements.max` and the least-used entries vanish. You won't see a warning. Watch `pg_stat_statements_info.dealloc` (PG 14+) to detect it; see the query after this list.
- `track = top` misses inner SQL. The default mode skips statements inside functions. Set `track = all` if you have non-trivial PL/pgSQL.
- Cloud-provider differences. RDS exposes most columns. Aurora has a slight delay. Cloud SQL truncates the `query` column more aggressively. Verify column-by-column before you trust a dashboard.
- Reset destroys the global view. If two teams share a DB and one resets to debug, the other team's dashboards lose history. Coordinate, or use `pg_stat_statements_reset(userid, dbid, queryid)` for surgical resets (PG 12+).
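The eviction counter from the list above, exposed since PG 14:

```sql
-- dealloc counts entries evicted because pg_stat_statements.max was hit.
-- A steadily rising number means the hash table is too small.
SELECT dealloc, stats_reset FROM pg_stat_statements_info;
```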
Limits — and the 2026 alternatives
pg_stat_statements is rolling aggregate only. It can’t answer: “what was p99 of this query at 14:35 yesterday?” or “show me the actual plan from the slow execution.” That’s where DBM agents come in.
The 2026 alternatives — none replace it, all complement it:
- `pg_stat_kcache` — adds CPU and OS-level disk-IO stats per signature.
- `pg_qualstats` + `hypopg` — index recommendation built on missing-quals stats.
- `auto_explain` — captures EXPLAIN plans for slow executions, so you have plans, not just stats (a minimal config is sketched after this list).
- The new `pg_stat_io` view (PG 16+) — splits IO by backend and context, complementary to `pg_stat_statements`.
- OpenTelemetry-flavored Postgres receivers, which scrape `pg_stat_statements` and ship to Tempo / Loki / Grafana.
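For the `auto_explain` item above, a minimal configuration sketch; the thresholds are illustrative, not recommendations:

```ini
# postgresql.conf: load auto_explain alongside pg_stat_statements
shared_preload_libraries = 'pg_stat_statements,auto_explain'
auto_explain.log_min_duration = '500ms'  # log plans for executions slower than this
auto_explain.log_analyze = on            # include actual rows/timings (adds overhead)
auto_explain.log_format = 'json'         # easier for downstream tooling to parse
```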
Obsfly Query Summary is built on top of pg_stat_statements + pg_stat_kcache + auto_explain, with HDR histograms persisted in ClickHouse so you can answer time-bounded percentile questions that the in-memory view can’t.