How to Find Slow SQL Queries

Updated Apr 4, 2026
Tags: sql · database · query-language · performance · sql-optimization · query-tuning

Level: intermediate · ~18 min read · Intent: informational

Audience: backend developers, data analysts, data engineers, technical teams, database administrators, operations teams

Prerequisites

  • basic familiarity with databases
  • basic understanding of SQL queries and indexes

Key takeaways

  • The best way to find slow SQL queries is to combine query timing, execution plans, frequency data, and wait analysis rather than relying on one metric alone.
  • A query is not a problem only because it runs slowly once. Many of the worst SQL problems are medium-slow queries that run thousands of times, poorly indexed queries that scan too much data, or blocked queries that are waiting on locks.


Finding slow SQL queries is one of the most useful database skills you can build.

It matters because most database performance problems do not begin with:

  • total database failure
  • obvious crashes
  • or one dramatic incident

They usually begin with smaller signals:

  • one endpoint feels inconsistent
  • dashboards load slowly
  • batch jobs stretch longer every day
  • CPU stays high
  • the connection pool fills up
  • or a database that used to feel fast now feels unpredictable

In many of those cases, the root problem is not “the database is slow.” It is:

  • a few bad queries
  • too many repeated queries
  • a blocked query waiting on locks
  • poor indexing
  • stale statistics
  • or a medium-slow query executed so often that it dominates total load

That is why the real skill is not just tuning SQL. It is finding the right SQL queries to tune first.

This guide explains how to find slow SQL queries in a practical way, which tools to use, what signals matter most, and how to avoid chasing the wrong bottleneck.

Why finding slow SQL queries is harder than it looks

At first, this sounds easy:

  • check which query took the longest
  • fix it

But real systems are rarely that simple.

A query can be a problem because:

  • it is very slow once
  • it is moderately slow but runs constantly
  • it returns too many rows
  • it scans too much data
  • it spills to disk during sorts or hashes
  • it blocks other work
  • it waits on locks
  • it runs fast in isolation but badly under concurrency
  • or the application calls it too many times per request

That means “slow” can mean different things depending on the workload.

For example:

  • one 12-second reporting query run once per night may be harmless
  • one 250-millisecond query run 30,000 times per hour may be a major problem
  • one query may look slow but is actually just blocked by another transaction
  • one query may be fine in development and terrible in production because the data size changed

This is why finding slow SQL queries is not just about measuring duration. It is about understanding impact.

The most important rule

Before looking at tools, remember this:

The worst SQL query is not always the one with the longest runtime. It is often the query causing the most total damage to the system.

That damage may come from:

  • total execution time across many calls
  • lock contention
  • row explosion in joins
  • repeated scans
  • memory spills
  • high-frequency repetition
  • or bad concurrency behavior

That is why the best investigation process always asks:

  • how long does the query take?
  • how often does it run?
  • how much data does it touch?
  • what waits is it experiencing?
  • and what else does it slow down?

If you only look at runtime in isolation, you often optimize the wrong thing.

Step 1: Define what “slow” means in your system

Before you start digging into logs and execution plans, define what slow means for your environment.

That threshold depends on the workload.

Examples:

  • an interactive web request query may feel slow above 100 to 300 ms
  • a dashboard query may be acceptable at 1 to 3 seconds
  • an overnight reporting query may be acceptable at 20 seconds
  • an internal admin export may tolerate more delay than a user-facing feed

It is also important to separate:

  • slow per execution
  • slow in total system impact
  • slow because of waiting
  • slow because of actual CPU, I/O, or scan work

A good first step is to define a few working categories such as:

Latency-sensitive slow queries

Queries that hurt user-facing response time.

High-total-cost queries

Queries that may not be individually terrible, but dominate total DB time because they run often.

Blocking or waiting queries

Queries that appear slow mainly because they are waiting on locks or resources.

Batch and analytics slow queries

Queries that are expected to be heavier but still need to be watched.

That framing makes the rest of the investigation much clearer.

Step 2: Start with symptoms from the application layer

A lot of slow-query work begins outside the database.

Useful app-layer clues include:

  • slow API endpoints
  • timeouts
  • background job delays
  • queue buildup
  • request traces with long DB spans
  • pool exhaustion
  • retry storms
  • or endpoints that get slower as traffic increases

This matters because the application often tells you where the pain is first.

Good questions to ask:

  • which endpoints are slow?
  • which background jobs got worse recently?
  • which user workflows trigger the database most heavily?
  • do failures happen at a certain time of day?
  • do only certain tenants, customers, or reports suffer?

These clues help narrow which SQL paths actually matter before you start combing through everything.

Step 3: Use slow query logs or database query statistics

The next step is to look at database-level query visibility.

This is one of the most reliable ways to find real slow queries because it shows what the database is actually doing rather than what the app guessed.

Different databases provide different tools, but the main idea is the same:

  • capture query text
  • capture execution time
  • capture frequency
  • capture resource cost where possible

You usually want one or both of these:

  • slow query logging
  • query statistics views or tracking extensions

These tools help answer:

  • which queries are slow?
  • how often do they run?
  • which queries consume the most total time?
  • are there repeated patterns or one-off outliers?

Step 4: In PostgreSQL, use pg_stat_statements

If you are on PostgreSQL, one of the best tools for slow query discovery is pg_stat_statements.

It aggregates query statistics so you can see:

  • total execution time
  • average execution time
  • number of calls
  • rows returned
  • and which queries are costing the most overall
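
If the extension is not already available in your database, it has to be enabled first. Note that it must also be preloaded at server start via shared_preload_libraries in postgresql.conf, which requires a restart:

```sql
-- Requires pg_stat_statements in shared_preload_libraries (postgresql.conf),
-- then the extension is created once per database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```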

A common starting query looks like this:

SELECT
    query,
    calls,
    total_exec_time,
    mean_exec_time,
    rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

This is powerful because it does not only show the slowest single execution. It helps you find the queries with the highest total impact. (On PostgreSQL versions before 13, these columns are named total_time and mean_time rather than total_exec_time and mean_exec_time.)

A second useful angle is to sort by average execution time:

SELECT
    query,
    calls,
    total_exec_time,
    mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;

That can reveal queries that are individually expensive even if they run less often.

For PostgreSQL teams, this is usually one of the best first tools to reach for.

Step 5: In MySQL, use the slow query log and performance views

In MySQL, a common starting point is the slow query log.

This helps capture queries above a chosen threshold so you can inspect:

  • which queries exceed the threshold
  • how often they appear
  • and whether they correlate with real system pain

For ongoing investigation, MySQL performance instrumentation and statement summary views can also help reveal:

  • repeated expensive statements
  • long-running patterns
  • and statements that cause large cumulative cost
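
A minimal sketch of both approaches, assuming sufficient privileges and performance_schema enabled. The 0.5-second threshold is only an example; pick one that matches your own definition of "slow":

```sql
-- Enable the slow query log at runtime
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.5;  -- log anything slower than 500 ms

-- Top statement digests by cumulative time
-- (SUM_TIMER_WAIT is measured in picoseconds, hence the division)
SELECT DIGEST_TEXT,
       COUNT_STAR AS calls,
       ROUND(SUM_TIMER_WAIT / 1e12, 3) AS total_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;
```

The digest view is often more useful than the raw slow log because it aggregates near-identical statements into one row, which surfaces the high-frequency offenders.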

The key idea is the same: you want to find both:

  • slow individual queries
  • and queries that hurt the system through repetition

If you only sample one-off outliers, you may miss the true source of load.

Step 6: In SQL Server, use Query Store and execution stats

If you are working with SQL Server, Query Store is often the first place to look.

It helps track:

  • query text
  • execution history
  • runtime patterns over time
  • plan changes
  • regressions
  • and which statements got more expensive

SQL Server also provides dynamic management views that help identify:

  • expensive statements
  • frequently executed statements
  • wait-heavy workloads
  • and changes in execution characteristics
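
As one common starting point, the plan-cache statistics DMV can rank cached statements by cumulative cost. Times are reported in microseconds, so the division below converts to milliseconds:

```sql
-- Top cached batches by total elapsed time
SELECT TOP 20
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.total_worker_time  / 1000 AS total_cpu_ms,
    st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```

Keep in mind this view only reflects plans currently in the cache, which is one reason Query Store's persisted history is usually preferable for trend analysis.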

This is especially useful when a query became slow recently. You can compare:

  • old plan behavior
  • new plan behavior
  • runtime changes
  • and whether a regression or statistics shift may be involved

For SQL Server teams, Query Store is often one of the most valuable tools in this whole process.

Step 7: Look for the highest total cost, not just the highest duration

Once you have query statistics, do not immediately jump to the single longest query.

Instead, ask which queries are expensive in total.

A query that takes:

  • 80 ms
  • but runs 200,000 times per day

may matter far more than a query that takes:

  • 9 seconds
  • and runs twice a week

This is one of the most common tuning mistakes: teams optimize an impressive-looking outlier and ignore the constant medium-cost query that is actually dominating load. (In the example above, 80 ms × 200,000 executions is about 16,000 seconds, roughly 4.4 hours of database time every day, while 9 seconds twice a week totals 18 seconds.)

A good ranking approach is to review:

  • total execution time
  • average execution time
  • frequency
  • maximum time
  • and often rows scanned or returned when available

That gives a much more realistic picture of what deserves attention first.

Step 8: Separate slow execution from waiting and blocking

A query may look slow, but not because the SQL itself is bad.

It may be waiting on:

  • row locks
  • table locks
  • transactions
  • I/O
  • concurrency limits
  • memory pressure
  • or another query holding up progress

That is why slow-query analysis should always include wait or blocking analysis where possible.

A query can appear to take 15 seconds because:

  • the actual plan needed 200 ms of work
  • and the remaining 14.8 seconds were spent waiting on a lock

If you tune the SQL without fixing the blocking transaction, nothing meaningful improves.

So whenever a query looks slow, ask:

  • was it actually executing slowly?
  • or was it mostly waiting?

That distinction changes the solution completely.

Step 9: Capture the execution plan

Once you know which query is a problem, the next step is to inspect the execution plan.

This is where performance troubleshooting becomes concrete.

The execution plan tells you how the database chose to run the query.

That includes things like:

  • sequential scan or index scan
  • join type
  • sort operations
  • hash operations
  • row estimates
  • aggregation strategy
  • and where the real work is happening

A slow query without a plan is mostly guesswork. A slow query with a plan becomes diagnosable.

This is why you almost always want to move from:

  • “this query is slow” to
  • “this query is slow because it scans 5 million rows, sorts too much data, and joins in the wrong order”

That is the real diagnostic jump.

Step 10: Use EXPLAIN and EXPLAIN ANALYZE

For many databases, especially PostgreSQL, one of the most important tools is EXPLAIN.

Example:

EXPLAIN
SELECT id, created_at, status
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 20;

And for deeper investigation:

EXPLAIN ANALYZE
SELECT id, created_at, status
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 20;

The difference matters:

EXPLAIN

Shows the planned execution path.

EXPLAIN ANALYZE

Runs the query and shows what actually happened.

That helps answer:

  • did PostgreSQL use an index?
  • did it scan too many rows?
  • were estimates accurate?
  • did a sort spill to disk?
  • was the slow part the join, the filter, or the sort?

For performance work, this is one of the most important diagnostic steps.
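
In PostgreSQL, it is also worth adding the BUFFERS option, which reports how many data pages the query read from cache and from disk:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, created_at, status
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 20;
```

A query that reads far more buffers than its result size suggests is usually a scanning or indexing problem, even when the timing alone looks acceptable.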

Step 11: Compare estimated rows versus actual rows

One of the most valuable things to inspect in a plan is the gap between:

  • estimated row counts
  • actual row counts

If the database expected:

  • 20 rows

but actually processed:

  • 500,000 rows

that is a major clue.

It often means:

  • stale statistics
  • skewed data
  • poor cardinality estimation
  • missing extended statistics
  • or query patterns that the planner is struggling to model accurately

Bad row estimates can lead to:

  • wrong join strategies
  • wrong scan choices
  • unnecessary sorts
  • memory spills
  • and generally poor plans

So when reading a plan, always ask:

  • where were the estimates badly wrong?

That often points directly to the root cause.
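
In PostgreSQL, the usual first responses are refreshing statistics and, for correlated columns, creating extended statistics. The statistics target value and column pairing below are illustrative:

```sql
-- Refresh planner statistics for one table
ANALYZE orders;

-- Increase sampling detail for a skewed column, then re-analyze
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- Extended statistics for correlated columns (PostgreSQL 10+)
CREATE STATISTICS orders_customer_status_stats (dependencies)
    ON customer_id, status FROM orders;
ANALYZE orders;
```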

Step 12: Watch for scans that touch far too much data

One of the most common slow-query patterns is simple: the query touches far more rows than it returns.

Example:

  • scans 2 million rows
  • returns 40

That often points to:

  • missing indexes
  • weak composite indexes
  • functions wrapped around indexed columns
  • poor selectivity
  • or filters that are not aligned with the existing schema and indexes

Useful warning signs include:

  • sequential scans on very large hot tables
  • index scans that still fetch huge numbers of rows
  • bitmap scans over much larger working sets than expected
  • and queries where filtering happens too late

This is one of the clearest ways to recognize a query that needs indexing or shape improvements.
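
For the earlier orders example, a composite index matching both the filter and the sort order is the typical fix. This is a sketch, not a recommendation to create it blindly; verify with the execution plan first:

```sql
-- Hypothetical composite index covering the filter + sort
-- (CONCURRENTLY avoids blocking writes, but cannot run inside a transaction)
CREATE INDEX CONCURRENTLY idx_orders_customer_created
    ON orders (customer_id, created_at DESC);
```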

Step 13: Look for expensive sorts and hash spills

A query may be slow not because of scanning, but because of sorting or hashing too much data.

That often shows up in:

  • ORDER BY
  • GROUP BY
  • DISTINCT
  • hash joins
  • hash aggregation

Common issues include:

  • sort spills to disk
  • hash operations exceeding memory
  • large working sets being sorted unnecessarily
  • or grouping done after too much data has already been joined

This is especially important when the query:

  • returns a small result
  • but builds a very large intermediate result first

If you see large sorts or spills, ask:

  • could an index reduce the need to sort?
  • could fewer rows reach the sort?
  • could the query shape be changed so aggregation happens earlier?

That is often where the real fix lives.
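
In PostgreSQL, spills are visible directly in EXPLAIN ANALYZE output on each sort node. The sizes below are illustrative:

```
Sort Method: quicksort  Memory: 1024kB        -- sort fit in memory
Sort Method: external merge  Disk: 51200kB    -- sort spilled to disk
```

Seeing "external merge" on a hot query is a strong signal to either reduce the rows reaching the sort or let an index provide the ordering.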

Step 14: Check joins carefully

A surprising number of slow queries are really join problems.

These often include:

  • missing indexes on join keys
  • wrong join order
  • row explosion from one-to-many relationships
  • poorly selective filters applied too late
  • nested loops over unexpectedly large outer sets
  • and joins that multiply data far more than intended

You should inspect:

  • what join type was chosen
  • how many rows flow into the join
  • how many rows come out
  • whether estimates are accurate
  • whether foreign key and filter indexes support the path

Sometimes the query text looks harmless until you realize one join step turned:

  • 50,000 rows into
  • 4 million intermediate rows

That is the kind of pattern an execution plan reveals quickly.

Step 15: Review query frequency and repetition patterns

Slow-query work should never stop at individual statements. You also need to understand repetition.

Important questions:

  • does this query run once per request?
  • five times per request?
  • once per row in a loop?
  • once per page element?
  • repeatedly due to background polling?
  • from many workers at once?

This is where app behavior and DB behavior intersect.

A query may be technically acceptable on its own, but terrible as part of:

  • an N+1 pattern
  • a chatty API endpoint
  • repeated dashboard widgets
  • queue polling loops
  • or an ORM access pattern that issues many near-identical statements

That is why application traces and database query statistics together are so valuable. They show not just what is slow, but also how often the app is asking for it.
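
The N+1 pattern is easiest to see as raw SQL. The order_items table here is hypothetical:

```sql
-- N+1 pattern: one query per parent row, often generated by an ORM loop
SELECT * FROM order_items WHERE order_id = 101;
SELECT * FROM order_items WHERE order_id = 102;
SELECT * FROM order_items WHERE order_id = 103;
-- ...one statement per order on the page

-- Set-based alternative: a single query for the whole page of orders
SELECT *
FROM order_items
WHERE order_id IN (101, 102, 103);
```

Each individual statement may run in a few milliseconds, which is exactly why this pattern rarely appears in a slow query log and usually has to be found through traces or frequency statistics.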

Step 16: Use APM and tracing if you have them

If your stack includes APM tools or distributed tracing, use them.

These tools help connect:

  • slow endpoints
  • slow jobs
  • and slow database spans

That is extremely useful because database performance problems often show up first as:

  • slow API routes
  • queue stalls
  • or request timeouts

APM and tracing tools help answer:

  • which request triggered the query?
  • how many queries happened inside the request?
  • which span was slowest?
  • was the delay in the DB or in the app?
  • did one page make 80 small SQL calls?

This is often how teams discover:

  • N+1 issues
  • repeated identical queries
  • serialization bottlenecks after the query
  • or endpoints that look database-heavy but are really blocked elsewhere

Tracing does not replace execution plans. It helps you find which query paths deserve plan analysis first.

Step 17: Check for lock waits and blocking sessions

Sometimes the right question is not:

  • which query is slow?

It is:

  • which query is blocking everything else?

A blocking session can make many ordinary queries look slow.

Common scenarios include:

  • long-running transactions
  • update conflicts
  • migration statements
  • bulk writes
  • accidental idle-in-transaction sessions
  • application code that keeps transactions open too long

Look for:

  • blocked sessions
  • waiting sessions
  • long transactions
  • repeated lock waits
  • deadlocks
  • and the session or query at the root of the queue
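
As a concrete example, PostgreSQL can list lock-waiting sessions together with the sessions blocking them (pg_blocking_pids is available from version 9.6):

```sql
-- Sessions currently waiting on a lock, and who is blocking them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       wait_event_type,
       wait_event,
       now() - xact_start AS transaction_age,
       query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```

Following the blocked_by chain back to its root usually identifies the one transaction that needs fixing, rather than the many queries that merely look slow.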

In many production incidents, the slow-query problem is actually a concurrency-control problem.

Step 18: Do not ignore medium-slow queries

Very slow queries get attention because they are dramatic.

Medium-slow queries are often more dangerous because they hide in plain sight.

Examples:

  • 120 ms query repeated thousands of times
  • 250 ms query in the most common endpoint
  • 400 ms query inside queue polling
  • 180 ms query triggered by every dashboard card

These queries do not always show up as obvious incidents, but they can dominate:

  • total DB time
  • CPU load
  • memory pressure
  • pool contention
  • and end-user latency

This is why average execution time alone is not enough. You need to pair it with frequency.

A medium-slow query multiplied by heavy traffic becomes a major system problem.

Step 19: Capture before-and-after evidence

Once you identify a slow query, capture a baseline before changing anything.

Useful before-and-after evidence includes:

  • average execution time
  • calls per minute or hour
  • total execution time
  • rows scanned
  • plan shape
  • wait behavior
  • endpoint latency
  • CPU or I/O impact if visible

This matters because query tuning should be evidence-driven. Without a baseline, you may:

  • fix the wrong thing
  • fail to prove improvement
  • or accidentally move cost somewhere else

A strong workflow is:

  1. identify
  2. measure
  3. explain
  4. change
  5. measure again

That is how performance work becomes reliable instead of anecdotal.

Step 20: Build a repeatable slow-query workflow

If you only investigate slow queries when everything is already on fire, you will always be reactive.

A better pattern is to build a repeatable workflow.

A practical one looks like this:

  1. Use application symptoms and database metrics to identify pain.
  2. Use query statistics or slow-query logging to find the worst candidates.
  3. Rank by total execution time, average execution time, frequency, and operational importance.
  4. Capture the execution plan.
  5. Check scans, estimates, joins, sorts, spills, blocking, and concurrency behavior.
  6. Fix the likely cause: indexing, query shape, repeated execution pattern, transaction behavior, data distribution issues, or lock contention.
  7. Measure again.

This workflow works much better than jumping straight to random index creation or configuration changes.

Common mistakes when trying to find slow SQL queries

Looking only at the slowest single execution

This often misses the queries doing the most total damage.

Ignoring frequency

A medium-slow query can dominate total cost if it runs constantly.

Tuning without looking at the plan

Without an execution plan, you are mostly guessing.

Confusing waiting with execution

A query blocked by locks is a very different problem from a bad index.

Investigating only inside the database

Sometimes the real clue starts in the application trace or endpoint behavior.

Focusing only on query text and not on row counts

A harmless-looking query can still scan millions of rows.

Failing to fix the root cause

Patching over one symptom without changing indexes, transaction behavior, or app repetition patterns usually leads to repeat incidents.

A practical checklist for finding slow SQL queries

Use this checklist when a system feels slow:

  • Which endpoint, job, or report is slow?
  • Is the slowness consistent or bursty?
  • What does the database say are the most expensive queries by total time?
  • What queries are the most expensive by average time?
  • How often do those queries run?
  • Are they actually executing slowly or mainly waiting?
  • What does the execution plan show?
  • Are row estimates badly wrong?
  • Are there full scans or large row scans?
  • Are there sorts or hash spills?
  • Are joins multiplying rows too much?
  • Are queries repeated in loops or N+1 patterns?
  • Are locks or long transactions involved?
  • Can the improvement be measured after the fix?

That checklist alone will prevent a lot of wasted investigation time.

FAQ

What is the easiest way to find slow SQL queries?

The easiest way is usually to start with your database's slow query logging or query statistics feature, then inspect the worst queries with an execution plan.

What is better: query logs or execution plans?

You usually need both. Logs and query statistics tell you which queries are slow or expensive, while execution plans help explain why they are slow.

Should I focus on the single slowest SQL query first?

Not always. A query that runs in 300 ms but executes 20,000 times per hour can be more damaging than one query that takes 8 seconds once a day.

How do I know if a query is slow because of indexing or locking?

Use the execution plan and wait analysis together. Bad indexing usually shows scans, poor estimates, and large row counts, while locking problems show blocked sessions, wait time, and delays caused by other transactions.

Final thoughts

Finding slow SQL queries is really about finding expensive database behavior.

Sometimes that behavior is:

  • one obviously slow report
  • one bad join
  • one missing index

But very often it is something subtler:

  • a medium-slow query repeated constantly
  • an ORM pattern causing N+1 calls
  • a sort spilling to disk
  • a lock wait making normal queries look slow
  • or an execution plan that became bad because the data changed

That is why the best approach is not to hunt only for “slow queries.” It is to build a workflow that combines:

  • query statistics
  • execution plans
  • frequency
  • blocking analysis
  • and application context

Once you do that consistently, slow-query troubleshooting becomes much more predictable.

And that is usually the difference between reactive database firefighting and real performance engineering.
