How to Find Slow Queries in PostgreSQL

Updated Apr 3, 2026

Tags: postgresql · database · sql · performance · monitoring · query-optimization

Level: intermediate · ~14 min read · Intent: informational

Audience: backend developers, database engineers, technical teams

Prerequisites

  • basic familiarity with PostgreSQL

Key takeaways

  • The fastest way to find slow PostgreSQL queries is usually a combination of pg_stat_statements for historical cost, EXPLAIN ANALYZE for one query's real behavior, and pg_stat_activity or logs for what is happening right now.
  • Most slow-query investigations go better when you separate the problem into three buckets: expensive SQL, too many SQL statements, or waiting caused by locks, I/O, or connection pressure.

FAQ

What is the best way to find slow queries in PostgreSQL?
For most systems, start with pg_stat_statements to find expensive statements over time, then inspect the worst candidates with EXPLAIN ANALYZE, and use pg_stat_activity or logs to understand what is currently running.
Should I use log_min_duration_statement or pg_stat_statements?
Usually both. log_min_duration_statement is good for capturing slow statements in logs, while pg_stat_statements is better for seeing which queries are expensive or frequent over time.
Can pg_stat_activity show slow queries?
Yes, but it shows what is active right now. It is best for spotting currently running queries, waits, and blocking sessions, not for long-term query history.
Is EXPLAIN ANALYZE enough to find slow queries?
No. EXPLAIN ANALYZE is excellent once you already know which query to investigate, but you still need monitoring or logging to decide which queries are actually slow or expensive in the first place.

Finding slow queries in PostgreSQL is one of the most valuable skills you can learn.

It is also one of the easiest places to waste time if you use the wrong workflow.

A lot of teams go straight to:

  • changing settings
  • adding random indexes
  • blaming PostgreSQL
  • or assuming the database needs more hardware

before they have actually answered the first real question:

Which query is slow, and in what way is it slow?

That matters because not all “slow query” problems are the same.

Sometimes the issue is:

  • one expensive SQL statement

Sometimes it is:

  • one endpoint issuing 150 small queries

Sometimes it is not even query cost at all. It is:

  • lock waits
  • I/O pressure
  • bad pagination
  • long transactions
  • or connection pooling problems

This guide shows how to find slow queries in PostgreSQL using the right tools in the right order.

The Best Workflow for Finding Slow Queries

If you want the shortest practical answer, use this sequence:

  1. Use pg_stat_statements to find the most expensive or most frequently executed queries over time
  2. Use pg_stat_activity to see what is slow right now
  3. Use EXPLAIN ANALYZE on the worst candidates
  4. Use slow-query logging with log_min_duration_statement when you need query evidence in logs
  5. Use auto_explain if you want slow plans automatically logged for real application statements

That combination covers:

  • historical patterns
  • live problems
  • and root-cause analysis

1. Start With pg_stat_statements

For most systems, this is the best first tool.

The pg_stat_statements extension tracks planning and execution statistics for SQL statements executed by the server. That makes it one of the best ways to answer questions like:

  • which queries consume the most total time
  • which queries run the most often
  • which queries have high average latency
  • which queries have high planning time
  • and which queries are quietly expensive because they execute constantly

Why it matters

A query does not need to be individually dramatic to be a problem.

A 7 ms query executed 300,000 times can be a bigger system cost than one obvious 3-second report query.

That is why pg_stat_statements is such a strong starting point: it shows you cost over time, not only whatever is currently happening.

Basic setup idea

The extension must be enabled, and it depends on query identifiers being computed. PostgreSQL’s runtime statistics docs note that pg_stat_statements requires a query identifier to be computed, unless an external module is used instead.
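A minimal setup sketch (the preload change requires a server restart; values shown are the standard ones, not tuned for any particular workload):

```
# postgresql.conf — pg_stat_statements must be preloaded at server start
shared_preload_libraries = 'pg_stat_statements'
```

After the restart, run create extension if not exists pg_stat_statements; in each database where you want to query the view.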

Example query

select
  query,
  calls,
  total_exec_time,
  mean_exec_time,
  rows
from pg_stat_statements
order by total_exec_time desc
limit 20;

What to look for

Focus on:

  • high total_exec_time
  • high mean_exec_time
  • high calls
  • queries returning suspiciously many rows
  • repeated statements you did not expect to be so frequent

A useful practical pattern is:

  • first sort by total time
  • then sort by mean time
  • then sort by calls

That helps you catch:

  • expensive reports
  • slow transactional queries
  • and noisy app patterns

2. Use pg_stat_activity for What Is Slow Right Now

pg_stat_statements is great for historical patterns. pg_stat_activity is great for the present.

PostgreSQL exposes active backend information through pg_stat_activity, and the docs note that application_name is shown there as well, which is one reason setting it from the app helps so much during investigations.

Why it matters

This view helps answer:

  • what is running right now
  • which sessions are active
  • how long a query has been running
  • whether sessions are blocked
  • and whether the issue is really query work or waiting

Example query

select
  pid,
  usename,
  application_name,
  state,
  wait_event_type,
  wait_event,
  now() - query_start as runtime,
  query
from pg_stat_activity
where state <> 'idle'
order by query_start asc;

What to look for

Pay attention to:

  • long-running active queries
  • sessions in idle in transaction
  • wait events
  • repeated copies of the same statement
  • missing application_name values from your app

This is often how you discover the problem is not “query cost” but:

  • blocking
  • pooling trouble
  • stuck workers
  • or an application transaction left open too long

3. Use EXPLAIN ANALYZE Once You Know the Query

This is one of the most important distinctions in PostgreSQL troubleshooting:

EXPLAIN ANALYZE is not how you discover which query is slow first.
It is how you understand why a specific query is slow.

The PostgreSQL docs describe EXPLAIN as showing the execution plan chosen by the planner, including scan types and join algorithms, while EXPLAIN ANALYZE actually executes the statement and shows real runtime measurements.

Example

explain analyze
select id, email
from users
where lower(email) = lower('user@example.com');

Why this helps

This shows you things like:

  • sequential scan vs index scan
  • nested loop vs hash join
  • expensive sorts
  • rows estimated vs rows actually seen
  • where most of the time is spent

Important habit

Do not stop at:

  • “it used a seq scan”

A sequential scan is not always wrong.

Ask instead:

  • was the table large enough for this to matter?
  • was the predicate selective?
  • were row estimates wildly wrong?
  • is the query shape itself the real issue?

Good first things to inspect

  • actual time
  • rows
  • loops
  • sort steps
  • hash build cost
  • rows removed by filter
  • estimated rows versus actual rows

That gives you much better information than intuition.
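Many of these numbers are easier to read with the BUFFERS option, which adds shared, local, and temp block counts to each plan node (the table and column names here are illustrative, matching the earlier example):

```sql
explain (analyze, buffers)
select id, email
from users
where lower(email) = lower('user@example.com');
```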

4. Enable Slow-Query Logging With log_min_duration_statement

Sometimes you need evidence in the log, especially in production.

PostgreSQL’s logging configuration includes log_min_duration_statement, which logs statements whose duration meets or exceeds the configured threshold. The docs also note related options like log_statement, log_duration, log_min_error_statement, and log_line_prefix, which together affect how query activity appears in logs.

Why it matters

This is useful when:

  • you want production evidence
  • you need to capture sporadic slow statements
  • you want to correlate slow queries with time windows
  • you do not yet know which statement is misbehaving

Example setting

log_min_duration_statement = 500ms

That tells PostgreSQL to log any statement taking 500 ms or more.

Good practice

Choose a threshold that is:

  • low enough to catch meaningful pain
  • high enough not to flood logs

What is “right” depends on the workload. A high-throughput API may want a much lower threshold than a batch-heavy reporting system.

Add useful context

The docs note that application_name can be included in logs through log_line_prefix, and it also appears in pg_stat_activity. That makes it much easier to trace slow queries back to the part of the app responsible.
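As a sketch, a log_line_prefix that includes the application name alongside timestamp, PID, user, and database (the exact format is a matter of taste):

```
# postgresql.conf — %m timestamp, %p PID, %a application_name, %u user, %d database
log_line_prefix = '%m [%p] %a %u@%d '
```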

5. Use auto_explain for Slow Queries You Cannot Reproduce Easily

Some slow queries are hard to catch manually.

That is where auto_explain becomes valuable.

The PostgreSQL docs describe auto_explain as a module that logs execution plans of slow statements automatically, without needing to run EXPLAIN by hand. They explicitly note this is especially helpful for tracking down unoptimized queries in large applications.

Why it matters

This is useful when:

  • the application is large
  • slow queries are hard to reproduce manually
  • the problem happens only under real traffic
  • you want plans in the log for statements crossing a threshold
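A configuration sketch with sample values (tune the threshold for your workload; if you also preload pg_stat_statements, list both modules in shared_preload_libraries):

```
# postgresql.conf — sample values, not a recommendation
shared_preload_libraries = 'auto_explain'   # can also be loaded per-session with LOAD
auto_explain.log_min_duration = '500ms'     # log plans for statements at or above this
auto_explain.log_analyze = on               # include actual timings (adds overhead)
```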

What it gives you

Instead of only seeing:

  • “statement took 1700 ms”

you can also get:

  • the plan shape
  • row counts
  • join behavior
  • and more diagnostic detail

That makes production slow-query investigation much easier.

Important caution

Use it deliberately. Like many logging tools, you want enough visibility to catch the issue without creating unnecessary overhead or log noise.

6. Separate “Expensive Query” From “Too Many Queries”

This is one of the biggest troubleshooting distinctions.

Sometimes the slow part is one query. Sometimes the slow part is query count.

Case A: One expensive query

Common signs:

  • high mean_exec_time
  • a bad plan
  • expensive sort, join, or scan
  • one obvious SQL statement dominating total time

Case B: Too many queries

Common signs:

  • high calls
  • moderate per-query time
  • endpoint latency still poor
  • ORM-generated N+1 behavior
  • many tiny repeated statements
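To surface the "too many queries" pattern, one option is to rank by call count rather than total time (column names assume PostgreSQL 13 or newer, matching the earlier examples):

```sql
select
  query,
  calls,
  mean_exec_time,
  total_exec_time
from pg_stat_statements
order by calls desc
limit 20;
```

A statement with a huge calls count and a modest mean_exec_time is a strong N+1 candidate.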

Why this matters

The fix is different.

For one expensive query, you often inspect:

  • query shape
  • indexes
  • row estimates
  • sorting
  • joins

For too many queries, you often fix:

  • N+1 behavior
  • batching
  • preloading
  • endpoint query count
  • repeated lookups

This is why pg_stat_statements is so useful: it helps you see both patterns.

7. Check Waits, Not Just Query Text

A “slow query” is not always spending its time doing compute or scan work.

Sometimes it is waiting.

PostgreSQL’s monitoring views help expose wait information, and pg_stat_activity includes wait event columns that can show whether a backend is waiting on locks or other resources. The monitoring docs also point out that performance analysis should not neglect operating system tools like top, iostat, and vmstat.

Typical wait-related causes include:

  • lock contention
  • long transactions
  • I/O pressure
  • WAL-related pressure
  • connection or client-side stalls
  • temp file spill behavior during sorts or hashes
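For lock contention specifically, one way to see who is blocking whom is the built-in pg_blocking_pids() function joined back against pg_stat_activity:

```sql
select
  blocked.pid    as blocked_pid,
  blocked.query  as blocked_query,
  blocking.pid   as blocking_pid,
  blocking.query as blocking_query
from pg_stat_activity as blocked
join pg_stat_activity as blocking
  on blocking.pid = any(pg_blocking_pids(blocked.pid));
```

An empty result means no session is currently waiting on another session's locks.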

Why this matters

If a query is “slow” because it is blocked, adding an index may not help.

You need to know whether time is being spent:

  • executing or
  • waiting

That changes the fix entirely.

8. Use Logging and Monitoring Together

There is no single perfect view.

A better workflow is to combine tools.

Good combination

  • pg_stat_statements for historical query cost
  • pg_stat_activity for live sessions and waits
  • EXPLAIN ANALYZE for one query’s real behavior
  • log_min_duration_statement for production evidence
  • auto_explain when you need plans for slow application statements
  • OS tools like iostat or vmstat when the issue may be infrastructure-side

This lets you answer:

  • what is slow over time
  • what is slow right now
  • why a specific statement is slow
  • and whether the problem is SQL or broader system behavior

9. A Good First Query Set for Troubleshooting

Here is a practical starter pack.

Top expensive statements by total execution time

select
  query,
  calls,
  total_exec_time,
  mean_exec_time,
  rows
from pg_stat_statements
order by total_exec_time desc
limit 20;

Top slow statements by average execution time

select
  query,
  calls,
  mean_exec_time,
  total_exec_time
from pg_stat_statements
where calls > 10
order by mean_exec_time desc
limit 20;

Active running statements

select
  pid,
  usename,
  application_name,
  state,
  wait_event_type,
  wait_event,
  now() - query_start as runtime,
  query
from pg_stat_activity
where state <> 'idle'
order by query_start asc;

Sessions stuck in transaction

select
  pid,
  usename,
  application_name,
  state,
  now() - xact_start as xact_age,
  query
from pg_stat_activity
where state = 'idle in transaction'
order by xact_start asc;

These are usually enough to start classifying the problem.

10. Common Mistakes When Looking for Slow Queries

Mistake 1: Jumping straight to settings

Many problems are query shape or app behavior problems first.

Mistake 2: Looking only at one slow query

Sometimes the real issue is frequency, not single-query latency.

Mistake 3: Ignoring waits and locks

A blocked query can look like a slow query.

Mistake 4: Running EXPLAIN ANALYZE on everything before finding the real offender

You need a target list first.

Mistake 5: Not tagging application connections

Set application_name from the app so you can see which service or worker generated the query. PostgreSQL surfaces that name in pg_stat_activity and can include it in logs via log_line_prefix.
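Setting it is usually one line per session (the name 'checkout-worker' here is just an example):

```sql
set application_name = 'checkout-worker';
```

Most drivers also accept application_name as a connection parameter, which avoids an extra round trip per session.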

Mistake 6: Treating logs and monitoring as alternatives

They are complements, not substitutes.

11. A Practical Investigation Flow

When you suspect slow queries, use this order:

Step 1

Check pg_stat_statements to find expensive or noisy statements.

Step 2

Check pg_stat_activity to see whether the problem is happening right now.

Step 3

Pick one query and run EXPLAIN ANALYZE.

Step 4

Check whether it is:

  • scan-heavy
  • sort-heavy
  • join-heavy
  • blocked
  • or simply called too often

Step 5

Turn on or inspect slow-query logging if you need more production evidence.

Step 6

Use auto_explain if you need slow plans captured automatically.

This keeps the investigation disciplined.


Conclusion

Finding slow queries in PostgreSQL gets much easier once you stop looking for one magical tool.

The strongest workflow is usually:

  • pg_stat_statements for historical cost
  • pg_stat_activity for live visibility
  • EXPLAIN ANALYZE for root-cause analysis
  • log_min_duration_statement for log evidence
  • and auto_explain when you need real slow plans from the application

That combination helps you answer the only question that matters:

Which query is slow, why is it slow, and what kind of slow is it?

Once you know that, the fix is usually much clearer.
