How to Optimize Slow PostgreSQL Queries

Updated Apr 3, 2026

Tags: postgresql · database · sql · performance · query-optimization · backend

Level: intermediate · ~15 min read · Intent: informational

Audience: backend developers, database engineers, technical teams

Prerequisites

  • basic familiarity with PostgreSQL

Key takeaways

  • Most slow PostgreSQL queries are fixed by reducing the amount of data touched: better indexes, more selective filters, smaller result sets, and better query shape usually matter more than global tuning.
  • The fastest optimization workflow is usually: identify the real slow statement, run EXPLAIN ANALYZE, inspect scans/joins/sorts/row estimates, then apply the smallest targeted schema or query change that removes the waste.


Optimizing slow PostgreSQL queries is usually less about magic tuning and more about removing wasted work.

That wasted work often looks like:

  • scanning far more rows than you need
  • sorting too much data
  • joining in the wrong order or on the wrong shape
  • returning too many columns
  • paginating inefficiently
  • hiding important filters inside expressions or JSON
  • or issuing the same query pattern too many times from the application

That is why the best optimization mindset is not:

  • “How do I make PostgreSQL faster?”

It is:

  • “What is this query forcing PostgreSQL to do that it should not have to do?”

Once you look at it that way, most optimizations get much clearer.

1. Start With the Exact Slow Query

Do not optimize a guess. Optimize the actual statement.

That means first identifying:

  • the real SQL text
  • its parameter pattern
  • how often it runs
  • whether it is slow individually or only expensive in aggregate
  • and which endpoint, job, or service is issuing it

This is why tools like:

  • pg_stat_statements
  • slow-query logging
  • pg_stat_activity
  • and app-level query tracing

matter before any tuning work begins.

A lot of teams waste time improving a query that looked suspicious while the real bottleneck was:

  • another statement
  • the same statement called 500 times
  • or one endpoint doing N+1 lookups
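If pg_stat_statements is installed, a query along these lines surfaces the statements actually worth optimizing (column names assume PostgreSQL 13 or newer):

```sql
-- Rank statements by total execution time (requires the pg_stat_statements extension)
select queryid,
       calls,
       round(total_exec_time::numeric, 1) as total_ms,
       round(mean_exec_time::numeric, 2)  as mean_ms,
       left(query, 80)                    as query_text
from pg_stat_statements
order by total_exec_time desc
limit 10;
```

Sorting by total time rather than mean time catches the cheap statement called 500 times as well as the single expensive one.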

2. Use EXPLAIN ANALYZE, Not Guesswork

Once you have the real query, the next step is usually EXPLAIN ANALYZE.

That matters because:

  • EXPLAIN shows what PostgreSQL plans to do
  • EXPLAIN ANALYZE shows what actually happened at runtime

That difference is critical.

Example

explain analyze
select id, status, created_at
from orders
where account_id = $1
order by created_at desc
limit 20;

What to inspect first

When reading the plan, look for:

  • sequential scans on large tables
  • rows removed by filter
  • expensive sort nodes
  • bad join choices
  • estimated rows versus actual rows
  • loops multiplying work
  • total execution time concentration

You do not need to read every detail immediately. Start with:

  • where most of the time went
  • how many rows were touched
  • and whether the plan shape matches your expectation
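When it is safe to execute the statement, the BUFFERS option adds I/O detail to the same example (a literal account id stands in for the bound parameter so the statement runs as-is):

```sql
-- BUFFERS reports shared-buffer hits versus blocks read from disk per plan node
explain (analyze, buffers)
select id, status, created_at
from orders
where account_id = 42
order by created_at desc
limit 20;
```

High "read" counts relative to "hit" counts point at cold data or a plan touching far more pages than expected.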

3. Reduce the Number of Rows the Query Touches

This is one of the most reliable PostgreSQL optimization rules:

The fewer rows PostgreSQL has to touch, the faster the query usually becomes.

A lot of query tuning is really row-count reduction.

Common ways to reduce row work

  • add a better index
  • make the filter more selective
  • move filtering earlier
  • avoid offset pagination on very deep pages
  • stop returning whole-row shapes when only 4 columns are needed
  • split one huge query into a cheaper staged pattern only when it actually helps

Example

Weak query shape:

select *
from orders
where status = 'paid'
order by created_at desc
limit 50;

Better query shape when the real workload is tenant-scoped:

select id, total_cents, created_at
from orders
where account_id = $1
  and status = 'paid'
order by created_at desc
limit 50;

That second query is more selective and usually easier to support well.
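When the tenant-scoped, paid-status lookup above is a hot path, a partial index is one possible sketch (the index name, and the assumption that this exact filter dominates, are hypothetical):

```sql
-- Indexes only the 'paid' rows, so the index stays smaller and more cache-friendly
create index idx_orders_account_paid_created
on orders (account_id, created_at desc)
where status = 'paid';
```

The planner can only use it for queries whose WHERE clause implies the same status = 'paid' predicate.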

4. Fix Missing or Mismatched Indexes

Indexes are one of the most common slow-query fixes, but only when they match the query shape.

PostgreSQL’s docs emphasize that indexes improve performance by helping the server find rows much faster, but they also add overhead and should be used sensibly. B-tree remains the default general-purpose index type. (postgresql.org)

Common mistake

The query is:

select *
from orders
where account_id = $1
order by created_at desc
limit 20;

But the schema only has:

create index idx_orders_account_id on orders (account_id);
create index idx_orders_created_at on orders (created_at);

A better index is often:

create index idx_orders_account_created
on orders (account_id, created_at desc);

Why this helps

A good multicolumn B-tree index can align with:

  • the equality filter
  • the sort order
  • and the limit pattern

PostgreSQL documents that multicolumn B-tree indexes are most efficient when predicates constrain the leading (leftmost) columns. (postgresql.org)

Practical rule

Do not ask:

  • “What columns are important?”

Ask:

  • “What exact repeated query pattern needs a better access path?”

5. Make the Query Return Less Data

Sometimes a query is slow not because it finds rows badly, but because it returns too much.

Common causes:

  • select *
  • very wide rows
  • large JSON or text payloads
  • fetching 500 rows when the endpoint only displays 20
  • returning columns the app never uses

Bad

select *
from users
where id = $1;

Better

select id, email, created_at, status
from users
where id = $1;

This matters more as tables grow wider over time.

A lot of app queries become quietly slower because the table expanded, but the query never got more selective.

6. Optimize Joins by Fixing Shape First

Join performance problems often come from poor query shape before they come from engine weakness.

Common causes:

  • joining too early before filtering
  • joining large tables without selective predicates
  • joining on columns without strong supporting indexes
  • querying denormalized data badly
  • asking for far more related rows than the endpoint actually needs

Better pattern

Filter as early as possible, especially on the table that drives the result.

Example:

select o.id, o.created_at, c.name
from orders o
join customers c on c.id = o.customer_id
where o.account_id = $1
  and o.created_at >= now() - interval '30 days'
order by o.created_at desc
limit 50;

This is much easier to optimize than a join-first query that pulls wide unrelated sets and filters later.

Practical tips

  • index join keys where appropriate
  • keep join conditions explicit and clean
  • inspect whether one side of the join is much larger than expected
  • watch for bad row-estimate gaps in EXPLAIN ANALYZE

7. Watch Sorts, Aggregates, and Memory-Heavy Operations

Sorts and hash operations become expensive when they touch too much data or spill poorly.

PostgreSQL’s runtime resource docs note that work_mem controls how much memory can be used by each internal sort operation and hash table before writing temporary files to disk, and that the total memory used can be many times that value because several operations may run at once. (postgresql.org)

Common symptoms

  • slow ORDER BY
  • expensive GROUP BY
  • large temp file activity
  • report queries much slower than expected
  • sorts that dominate the plan

Fixes

  • reduce the row count before the sort
  • support common ordering with a usable index
  • avoid sorting huge result sets only to throw most of them away
  • use smaller result shapes
  • review work_mem only after understanding the query and concurrency pattern

Important caution

Do not blindly raise work_mem globally. It applies per internal sort or hash operation, so many concurrent operations can each consume that much memory; it is not a simple per-database cap.
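If one specific report genuinely needs more sort memory, scoping the change beats a global edit; this sketch assumes a hypothetical aggregate over orders:

```sql
-- Session scope: affects only this connection
set work_mem = '64MB';

-- Transaction scope: SET LOCAL reverts automatically at commit or rollback
begin;
set local work_mem = '64MB';
explain (analyze, buffers)
select account_id, count(*)
from orders
group by account_id;
commit;
```

Watch the plan for "Sort Method: external merge" turning into an in-memory sort before deciding the change is worth keeping.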

8. Use Keyset Pagination Instead of Deep OFFSET Pagination

Offset pagination is a very common source of slow queries.

Weak pattern

select id, created_at, title
from posts
order by created_at desc
offset 20000
limit 20;

That gets worse as the offset grows because PostgreSQL still has to produce and then discard every skipped row before it reaches the requested page.

Better pattern

select id, created_at, title
from posts
where created_at < $1
order by created_at desc
limit 20;

This is usually called keyset (or cursor) pagination; here $1 is the created_at value of the last row on the previous page.

Why it helps

It lets the query continue from a known point rather than forcing PostgreSQL to skip a deep window of rows first.

For feeds, timelines, activity lists, and large tables, this is often one of the cleanest query optimizations available.
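If created_at is not unique, a tie-breaking column keeps pages from skipping or repeating rows; here $1 and $2 are the created_at and id of the last row on the previous page:

```sql
-- Row comparison continues exactly after the last seen (created_at, id) pair
select id, created_at, title
from posts
where (created_at, id) < ($1, $2)
order by created_at desc, id desc
limit 20;
```

An index on (created_at desc, id desc) lets this read straight off the index.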

9. Use JSONB Carefully

PostgreSQL’s docs explain that jsonb is stored in a decomposed binary format and is significantly faster to process than json, which has to be reparsed each time. (postgresql.org)

That does not mean JSONB is always the fastest design choice overall.

Good use of JSONB

Use it when:

  • the structure is genuinely flexible
  • the shape varies
  • you need document-style operators
  • you will query inside the document with the right operators and indexes

Bad use of JSONB

Do not hide important relational fields inside JSONB if they are:

  • frequently filtered
  • joined
  • sorted
  • constrained
  • or central to business logic

A query like:

select *
from users
where profile->>'status' = 'active';

is often a sign that something important may have been modeled too loosely.

Better optimization mindset

Use relational columns for core relational facts. Use JSONB for flexible metadata.
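If you do need to filter inside a JSONB column, a GIN index on it supports containment queries; the table and index names here are illustrative:

```sql
-- GIN over the whole jsonb column supports the @> containment operator
create index idx_users_profile_gin
on users using gin (profile);

-- Written as containment, the filter can use the GIN index
select id
from users
where profile @> '{"status": "active"}';
```

The ->> comparison in the earlier example cannot use this index directly; rewriting to @>, or adding an expression index on profile->>'status', is what makes the filter indexable.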

10. Fix Statistics Problems When Estimates Are Wrong

Sometimes the SQL is reasonable and the indexes are present, but PostgreSQL still picks a poor plan.

That often happens when row estimates are badly wrong.

The planner depends heavily on table and column statistics, and PostgreSQL’s ANALYZE command collects those statistics. The docs explicitly note that statistics guide the planner in selecting efficient execution plans. (postgresql.org)

Signs of a statistics problem

In EXPLAIN ANALYZE, you see:

  • estimated rows far from actual rows
  • a plan that looks irrational for the true data shape
  • worse plans after major data distribution changes

Fixes

  • run ANALYZE
  • make sure autovacuum and autoanalyze are healthy
  • review column statistics targets only where justified
  • revisit the query if correlation or data skew is fooling the planner badly

Do not assume every bad plan means PostgreSQL is confused permanently. Sometimes it just lacks good enough statistics.
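These fixes are mostly one-liners; raising a statistics target is worth it only for genuinely skewed columns (500 is an illustrative value, the default target is 100):

```sql
-- Refresh planner statistics for one table
analyze orders;

-- Sample a skewed column in more detail, then re-collect statistics
alter table orders alter column status set statistics 500;
analyze orders;
```

Rerun EXPLAIN ANALYZE afterward and check whether the estimated-versus-actual row gap has closed.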

11. Shorten Transactions Around Slow Queries

A slow query can become much worse if it runs inside a transaction that is too broad.

Long transactions:

  • hold locks longer
  • increase contention
  • delay cleanup
  • and make the system harder to reason about under concurrency

Bad shape

  1. open transaction
  2. call another service
  3. perform app-side validation
  4. run several queries
  5. finally commit

Better shape

  1. do app-side work first
  2. open transaction
  3. execute minimal necessary database work
  4. commit quickly

Sometimes the query itself is only moderately expensive. It feels much worse because the surrounding transaction design is bad.

12. Fix N+1 Query Patterns Before Tuning One Query Forever

Sometimes you do not have one slow query. You have one endpoint issuing hundreds of ordinary queries.

That is an application-side optimization problem.

Common pattern

  • fetch 100 parent rows
  • fetch each related child row separately
  • fetch each count separately
  • fetch each metadata set separately

Fixes

  • eager load appropriately
  • batch related lookups
  • join where it makes sense
  • precompute or cache hot summary patterns only when justified
  • measure query count per request

The most important optimization may be deleting 140 queries, not making one query 10 percent faster.
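Batching usually means collecting the parent ids in the application and issuing one array-bound query instead of a loop; the table and column names are illustrative:

```sql
-- One round trip replaces 100 per-parent lookups ($1 is an id array bound by the app)
select order_id, id, amount_cents
from order_items
where order_id = any($1::bigint[]);
```

The application then groups the returned rows by order_id in memory.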

13. Use Covering and Expression Indexes Where They Truly Help

Sometimes ordinary indexes are not enough.

Covering indexes

PostgreSQL supports index-only scans and covering indexes with INCLUDE. The docs note that B-tree indexes always support index-only scans and that INCLUDE lets you store extra payload columns in the index. (postgresql.org)

This can help if a query repeatedly needs:

  • a small set of selected columns
  • from rows filtered by indexed key columns
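A sketch for the recurring orders example, assuming the endpoint only ever reads status and total_cents (names are illustrative):

```sql
-- Key columns drive the search and sort; INCLUDE columns ride along as payload,
-- letting an index-only scan skip the heap entirely when pages are all-visible
create index idx_orders_account_covering
on orders (account_id, created_at desc)
include (status, total_cents);
```

The trade-off is a larger index and more write overhead, which is why this belongs on proven hot paths only.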

Expression indexes

If your query repeatedly uses an expression such as:

where lower(email) = lower($1)

then an expression index like:

create index idx_users_lower_email
on users (lower(email));

can be much better than hoping PostgreSQL optimizes around the expression automatically.

Important caution

These are high-leverage, but more specialized. Use them for repeated hot paths, not by default everywhere.

14. Keep Optimization Changes Targeted and Verifiable

A lot of query tuning goes wrong because teams make too many changes at once.

That makes it hard to know what actually helped.

Better workflow

  1. identify the slow query
  2. run EXPLAIN ANALYZE
  3. form one concrete hypothesis
  4. make one targeted change
  5. rerun and compare
  6. keep the change only if it actually improves the result

Good targets

  • one index
  • one query rewrite
  • one result-shape reduction
  • one pagination change
  • one transaction-scope fix
  • one application batching improvement

The closer you stay to one change at a time, the better your tuning work usually becomes.

Practical Slow Query Optimization Checklist

When optimizing a slow PostgreSQL query, walk through this in order:

1. Is it the right query?

Confirm the real SQL and its frequency.

2. What does EXPLAIN ANALYZE show?

Find the expensive step, not the suspicious guess.

3. Is it touching too many rows?

If yes, improve selectivity or indexing.

4. Is the index wrong, missing, or mismatched?

Match the index to the actual filter and sort shape.

5. Is it returning too much data?

Reduce row count and selected columns.

6. Is sorting or hashing too expensive?

Reduce intermediate rows and support ordering better.

7. Is pagination inefficient?

Use keyset pagination where it fits.

8. Are estimates badly wrong?

Check statistics and analyze behavior.

9. Is the problem really query count, not one query?

Check for N+1 or overly chatty endpoints.

10. Is the query waiting on something?

Inspect locks, waits, and transaction behavior.
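A quick way to see that is a snapshot of active sessions and their wait events:

```sql
-- Active sessions, what they are waiting on, and how long they have been running
select pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start as running_for,
       left(query, 60)     as query_text
from pg_stat_activity
where state <> 'idle'
order by query_start;
```

Rows stuck with wait_event_type = 'Lock' usually mean the slow query is a victim of contention, not a bad plan.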

FAQ

What is the first thing to do when a PostgreSQL query is slow?

Start with EXPLAIN ANALYZE on the real query and inspect whether the main problem is scanning too many rows, sorting too much data, joining inefficiently, or waiting on something outside pure execution.

Do slow PostgreSQL queries always need a new index?

No. Many slow queries are caused by bad result shape, poor pagination, too many repeated queries, stale statistics, or application-side N+1 behavior rather than missing indexes alone.

Should I use EXPLAIN or EXPLAIN ANALYZE?

Use EXPLAIN ANALYZE when it is safe to execute the query, because it shows what actually happened at runtime rather than only what the planner predicted.

Is PostgreSQL query optimization mostly about settings?

Usually no. Query shape, indexes, row counts, joins, and application behavior usually matter before database-wide tuning settings.

Conclusion

Optimizing slow PostgreSQL queries is mostly about removing unnecessary work.

That means:

  • touching fewer rows
  • returning less data
  • using better indexes
  • sorting less
  • paginating better
  • keeping statistics healthy
  • and fixing application-side query patterns when they are the real culprit

The biggest optimization mistake is usually trying to tune PostgreSQL globally before understanding the specific query locally.

Find the real query. Run EXPLAIN ANALYZE. Identify the waste. Remove it.

That is where most meaningful PostgreSQL query speedups come from.
