Best PostgreSQL Indexes for Performance
Level: intermediate · ~15 min read · Intent: informational
Audience: backend developers, database engineers, technical teams
Prerequisites
- basic familiarity with PostgreSQL
Key takeaways
- For most workloads, B-tree is still the best PostgreSQL index because it handles equality, range filters, sorting, and many common lookup patterns.
- The fastest index is not the most advanced one. It is the one that matches the query shape, data distribution, and write workload without creating unnecessary maintenance overhead.
FAQ
- What is the best PostgreSQL index for most queries?
- B-tree is the best default index for most PostgreSQL queries because it works well for equality filters, range filters, ORDER BY patterns, and many ordinary joins.
- When should I use GIN instead of B-tree in PostgreSQL?
- Use GIN when a column contains many component values, such as arrays, tsvector full-text data, or jsonb documents you query with containment-style operators. B-tree is usually better for ordinary scalar columns.
- Are multicolumn indexes always better than separate indexes?
- No. Multicolumn indexes are best when your real queries repeatedly filter or sort on the same column combination in the same order. Separate indexes are often better when access patterns vary.
- What is the biggest indexing mistake in PostgreSQL?
- The biggest mistake is creating indexes without checking real query patterns. Overindexing increases storage, slows writes, and often leaves you with indexes that look useful but are rarely chosen by the planner.
Indexes are one of the most powerful PostgreSQL performance tools.
They are also one of the easiest ways to make a database slower if you use them carelessly.
That is the trade-off.
A good index can turn a painful full-table scan into a fast index scan, reduce sort work, and make joins dramatically cheaper. A bad index can:
- waste disk
- slow inserts and updates
- increase vacuum and maintenance cost
- and never actually get used for the queries you care about
That is why the best PostgreSQL index is not a single index type. It is the index that matches:
- your query shape
- your data distribution
- your write volume
- and your actual access patterns
This guide explains which PostgreSQL indexes are best for performance, when each one makes sense, and how to avoid building indexes that look smart but perform badly in practice.
Quick Answer
If you want the shortest possible answer, start here:
| Query Pattern | Best Index Choice | Why |
|---|---|---|
| Equality lookups on scalar columns | B-tree | Best general-purpose default |
| Range filters like <, >, BETWEEN | B-tree | Excellent for sortable scalar data |
| ORDER BY on indexed scalar columns | B-tree | Can support ordered retrieval |
| JSONB containment or array membership | GIN | Built for multi-valued data |
| Full-text search | GIN | Best common default for tsvector search |
| Geometric, range, nearest-neighbor, special operator classes | GiST | Flexible framework for specialized searches |
| Specialized partitioned search structures | SP-GiST | Good for niche data distributions |
| Huge append-heavy tables ordered by time or ID | BRIN | Tiny and efficient when data order correlates physically |
| Queries on a subset of rows | Partial index | Smaller, cheaper, and more targeted |
| Repeated queries on computed expressions | Expression index | Makes computed predicates indexable |
| Queries needing only indexed columns | Covering index with INCLUDE | Can help enable index-only scans |
That table is the overview. The real value comes from understanding when each choice actually works.
The Most Important Rule About PostgreSQL Indexes
Indexes improve reads by trading off write cost and storage.
That matters because every extra index means PostgreSQL has more work to do when rows are:
- inserted
- updated
- deleted
- vacuumed
- or rewritten
So before asking:
- “Which index type is fastest?”
ask:
- “Which index solves the real query problem with the least extra cost?”
That question usually leads to much better indexing decisions.
1. B-Tree Is Still the Best Index for Most PostgreSQL Queries
If you remember only one thing from this article, remember this:
B-tree is the default for a reason.
In PostgreSQL, B-tree is the index type created by default, and it fits the most common situations. It handles:
- equality lookups
- range filters
- BETWEEN and IN
- IS NULL and IS NOT NULL
- and many ordered access patterns
That makes it the best index for most ordinary application tables.
Best use cases for B-tree
Use B-tree for:
- primary keys
- foreign-key lookup columns
- email lookups
- usernames
- slugs
- status + date filters
- timestamps
- prices
- counts
- most joins on ordinary scalar values
Example
create index idx_orders_customer_id on orders (customer_id);
create index idx_orders_created_at on orders (created_at);
create index idx_users_email on users (email);
Why B-tree wins so often
Because most app queries are still things like:
select *
from users
where email = 'a@example.com';
select *
from orders
where created_at >= now() - interval '7 days';
select *
from events
where account_id = 42
order by created_at desc
limit 50;
Those are classic B-tree problems.
Practical rule
If the column is:
- scalar
- sortable
- and commonly filtered or sorted
start with B-tree unless you have a strong reason not to.
2. Hash Indexes Are Niche, Not the Default
Hash indexes only support simple equality comparisons.
That makes them much narrower than B-tree.
When hash indexes can make sense
Use a hash index only when:
- the query is truly equality-only
- there is no need for range access
- and you have a reason to prefer it over B-tree in a very specific workload
Example
create index idx_sessions_token_hash on sessions using hash (token);
Why they are usually not the first choice
Because B-tree already handles equality well, and also supports much more:
- range filtering
- ordering
- broader planner flexibility
That means hash is usually a specialized choice, not the best starting point.
Practical rule
If you are not sure whether you need a hash index, you probably do not.
3. GIN Indexes Are Best for JSONB, Arrays, and Full-Text Search
GIN stands for Generalized Inverted Index.
It is designed for values that contain multiple component values, not just one scalar value.
That is why GIN shines for:
- jsonb
- arrays
- full-text search (tsvector)
- and other “contains many things inside one column” use cases
Best use cases for GIN
Use GIN for:
- jsonb containment queries with @>
- array containment queries
- full-text search
- tag arrays
- flexible document-style filtering
JSONB example
create index idx_products_attributes_gin
on products using gin (attributes);
Then queries like this can benefit:
select *
from products
where attributes @> '{"color":"black"}';
Array example
create index idx_posts_tags_gin
on posts using gin (tags);
Then queries like:
select *
from posts
where tags @> array['postgresql'];
become much more realistic at scale.
Full-text example
create index idx_articles_search
on articles using gin (to_tsvector('english', title || ' ' || body));
Important trade-off
GIN is excellent for read patterns like containment and membership tests, but it is not a general replacement for B-tree.
It is also not the right choice for:
- ordinary equality on scalar columns
- simple timestamp filters
- typical numeric range searches
Practical rule
If the column contains many internal values and the query asks:
- “Does this contain X?”
GIN is often the right answer.
4. GiST Is Best for Specialized Search Patterns
GiST is not one single behavior. It is a framework that supports different operator classes.
That makes it powerful, but also more specialized than B-tree.
Best use cases for GiST
GiST is a strong fit for:
- geometric data
- range types
- nearest-neighbor searches
- PostGIS-style workloads
- specialized operator classes
Example range use case
create index idx_bookings_room_timerange
on bookings using gist (room_id, during);
This kind of design is useful when you need queries like:
- overlapping ranges
- exclusion logic
- time-window conflict detection
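For the conflict-detection case, the overlap rule can be enforced directly with an exclusion constraint, which builds its own GiST index. A minimal sketch, assuming `during` is a range column such as `tsrange` and that the `btree_gist` extension is available to give GiST plain equality support on `room_id`:

```sql
-- btree_gist lets a GiST index handle ordinary equality on room_id
create extension if not exists btree_gist;

alter table bookings
  add constraint bookings_no_overlap
  exclude using gist (room_id with =, during with &&);
-- Inserting two bookings for the same room with overlapping
-- ranges now fails with a constraint violation.
```

The constraint does double duty: it rejects conflicting rows and gives the planner a GiST index for overlap queries.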
Example nearest-neighbor pattern
GiST can support distance-style searches depending on the operator class, which makes it useful for:
- location search
- geometric proximity
- specialized ranking behavior
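As a sketch of the nearest-neighbor pattern, assuming a hypothetical `places` table with a `point`-typed `location` column:

```sql
create index idx_places_location_gist
  on places using gist (location);

-- <-> is the distance operator; a GiST index can return rows
-- in distance order, so this avoids sorting the whole table.
select name
from places
order by location <-> point '(52.52, 13.40)'
limit 5;
```

The same ORDER BY ... LIMIT shape applies to PostGIS geometries with their own operator classes.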
Practical rule
If your problem involves:
- overlap
- proximity
- geometry
- ranges
- or specialized search operators
GiST is worth evaluating.
If your problem is ordinary app filtering, B-tree is still usually better.
5. SP-GiST Is a Niche Performance Tool
SP-GiST is even more specialized.
It supports partitioned search structures such as:
- quadtrees
- k-d trees
- radix trees (tries)
That means it can perform very well for the right operator class and data shape, but it is not something most application teams need every day.
Best use cases for SP-GiST
SP-GiST is worth considering for:
- certain geometric searches
- trie-like or prefix-style data structures
- workloads where the underlying operator class matches a partitioned search space well
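As one concrete sketch: the default SP-GiST operator class for text builds a radix tree, which suits prefix-style lookups. Assuming a hypothetical `usernames` table, and noting that the `^@` (starts-with) operator requires PostgreSQL 11 or later:

```sql
create index idx_usernames_name_spgist
  on usernames using spgist (name);

-- Prefix lookups can descend the radix tree instead of scanning
select name
from usernames
where name ^@ 'ann';
```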
Practical rule
SP-GiST is usually not a first-line answer. Use it when you know the operator class and data structure match the problem.
6. BRIN Is One of the Best Indexes for Huge Append-Heavy Tables
BRIN is one of the most underused PostgreSQL performance tools.
It stores summaries for block ranges rather than indexing every row like B-tree does. That makes BRIN extremely small and efficient when the indexed column is well correlated with the physical row order.
Best use cases for BRIN
Use BRIN for:
- very large tables
- append-heavy event tables
- logs
- metrics
- time-series style data
- monotonically increasing IDs
- created_at on naturally ordered insert-heavy tables
Example
create index idx_events_created_at_brin
on events using brin (created_at);
Why BRIN can be amazing
On very large tables, a BRIN index can be tiny compared with B-tree. That means:
- less disk
- less maintenance
- and still enough pruning power to make huge scans much cheaper
When BRIN is a bad choice
BRIN performs badly when the values are not physically correlated with row order.
If the table has:
- frequent rewrites
- randomly distributed values
- no physical order relationship
then BRIN loses much of its value.
Practical rule
If the table is huge and naturally ordered by time or sequence, BRIN is often one of the best PostgreSQL indexes for performance.
7. Multicolumn Indexes Are Powerful, But Order Matters
Multicolumn indexes are one of the biggest sources of both performance wins and design mistakes.
PostgreSQL supports multicolumn indexes for B-tree, GiST, GIN, and BRIN. For B-tree specifically, they are most efficient when the query constrains the leading (leftmost) columns, and equality constraints on those leading columns matter most.
Example
create index idx_orders_account_status_created
on orders (account_id, status, created_at desc);
This can be very strong for queries like:
select *
from orders
where account_id = 42
and status = 'paid'
order by created_at desc
limit 20;
Why column order matters
An index on:
(account_id, status, created_at)
is not the same as:
(status, account_id, created_at)
The best order is driven by the query pattern:
- equality filters first
- then range or sort columns
- then optional payload considerations
Practical rule
Design multicolumn indexes from real repeated query patterns, not from “these are important columns.”
8. Partial Indexes Are Often Better Than Full Indexes
A partial index stores entries only for rows matching a predicate.
That makes them very useful when:
- the table is large
- only a subset of rows is interesting
- and most queries focus on that subset
Example
create index idx_orders_unbilled
on orders (order_nr)
where billed is not true;
This is a classic use case:
- most rows are billed
- unbilled rows are fewer
- but queries care disproportionately about unbilled rows
Why partial indexes are powerful
They can:
- reduce index size
- reduce write cost
- improve cache density
- and sometimes outperform a full-table index by a lot
The important limitation
The planner can only use a partial index when it can recognize that the query’s WHERE condition implies the index predicate.
That means partial indexes work best when:
- the predicate is stable
- the query shape is consistent
- and the application uses a matching condition clearly
Practical rule
If only a small, predictable slice of rows matters for a common query path, a partial index can be one of the best PostgreSQL performance moves you can make.
9. Covering Indexes Can Enable Index-Only Scans
PostgreSQL supports index-only scans when:
- the index type supports them
- the query references only columns stored in the index
- the visibility map shows that heap access can be skipped often enough to pay off
B-tree always supports index-only scans. GIN does not.
Example with INCLUDE
create index idx_orders_account_created_include_total
on orders (account_id, created_at desc)
include (total_cents, status);
A query like:
select created_at, total_cents, status
from orders
where account_id = 42
order by created_at desc
limit 50;
may benefit because the query can get everything it needs directly from the index.
Important warning
Covering indexes are not free.
Adding payload columns:
- makes indexes larger
- increases write cost
- and only pays off when the table is read-heavy enough that index-only scans are actually likely
This is especially useful for:
- read-heavy dashboards
- recent-activity lists
- hot API endpoints
- stable lookup tables
Practical rule
Use INCLUDE carefully for queries that repeat constantly and only need a few extra columns.
10. Expression Indexes Are Great for Computed Predicates
Expression indexes let you index a computation, not just a raw column.
This is extremely useful when the query filters on the same expression repeatedly.
Example
create index idx_users_lower_email
on users (lower(email));
Then this query becomes indexable:
select *
from users
where lower(email) = 'a@example.com';
Common use cases
Expression indexes are great for:
- case-insensitive search
- normalized lookups
- concatenated fields
- derived date buckets
- computed search values
Important trade-off
Expression indexes cost more on writes because PostgreSQL must compute the expression for inserts and non-HOT updates.
That means they are strongest when:
- reads matter a lot
- the expression is repeated constantly
- and avoiding function work during lookup is worth the write overhead
Practical rule
If the application repeatedly writes queries like lower(col) or another deterministic expression in the WHERE clause, an expression index is often better than hoping the planner can optimize around it.
11. Unique Indexes Matter for More Than Integrity
PostgreSQL automatically creates unique B-tree indexes for primary keys and unique constraints.
That matters for performance because those indexes also serve:
- ordinary lookups
- joins
- existence checks
- and uniqueness enforcement
Example
create table users (
id bigint primary key,
email text unique
);
That schema creates:
- a unique B-tree index on id
- a unique B-tree index on email
Practical rule
Use constraints first when the rule is really data integrity. PostgreSQL will build the supporting index.
For performance-only uniqueness over a subset of rows, a unique partial index can be a strong option.
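A minimal sketch of that pattern, assuming a soft-delete style `deleted_at` column: uniqueness is enforced only among live rows, so the email of a deleted account can be reused.

```sql
-- Unique partial index: only non-deleted rows participate
create unique index idx_users_live_email
  on users (email)
  where deleted_at is null;
```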
Practical Index Recipes by Query Type
Case 1: Most common app lookup
Query:
select *
from users
where email = $1;
Best index:
create unique index idx_users_email on users (email);
Case 2: Activity feed by tenant
Query:
select *
from events
where account_id = $1
order by created_at desc
limit 100;
Best index:
create index idx_events_account_created
on events (account_id, created_at desc);
Case 3: Querying recent unprocessed jobs
Query:
select *
from jobs
where processed_at is null
order by created_at
limit 100;
Best index:
create index idx_jobs_unprocessed_created
on jobs (created_at)
where processed_at is null;
Case 4: JSONB attribute filter
Query:
select *
from products
where attributes @> '{"category":"laptop"}';
Best index:
create index idx_products_attributes_gin
on products using gin (attributes);
Case 5: Huge append-only metrics table
Query:
select *
from metrics
where recorded_at >= now() - interval '1 day';
Best index for huge ordered tables:
create index idx_metrics_recorded_at_brin
on metrics using brin (recorded_at);
Common Indexing Mistakes That Hurt Performance
Mistake 1: Indexing every column “just in case”
This creates:
- write slowdown
- bigger vacuum cost
- more disk usage
- and index clutter the planner may ignore
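One way to spot that clutter is the statistics collector. A query against the standard pg_stat_user_indexes view lists indexes that have not been scanned since statistics were last reset:

```sql
select indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where idx_scan = 0
order by pg_relation_size(indexrelid) desc;
-- Zero-scan indexes are drop candidates, but check replicas and
-- rare-but-critical queries before removing anything.
```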
Mistake 2: Using the wrong index type for the query shape
Examples:
- B-tree on JSON containment
- GIN on ordinary scalar equality
- BRIN on random-value columns
Mistake 3: Building multicolumn indexes in the wrong order
Column order is not cosmetic. It changes how useful the index really is.
Mistake 4: Creating both a full index and a partial index without a reason
That often increases maintenance cost without enough extra benefit.
Mistake 5: Assuming an index-only scan is automatic
Even if an index can support it, workload churn and visibility rules may still force heap access often enough that the gain is smaller than expected.
Mistake 6: Ignoring write workload
Every index helps reads by making writes more expensive.
That trade-off should be deliberate.
A Better Way to Choose the Best PostgreSQL Index
Use this order:
Step 1: Identify the real query
Not the imagined one. The real slow repeated query.
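EXPLAIN is the standard tool for this step. Running the real query with timing and buffer counts shows whether the planner uses an index at all, and at what cost:

```sql
explain (analyze, buffers)
select *
from orders
where account_id = 42
order by created_at desc
limit 20;
-- Compare "Index Scan" vs "Seq Scan" nodes, estimated vs actual
-- row counts, and "Heap Fetches" on index-only scans.
```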
Step 2: Classify the query shape
Is it:
- scalar equality
- scalar range
- sort-heavy
- subset filtering
- JSON/array containment
- expression-based
- huge table pruning
Step 3: Pick the simplest index that matches that shape
Usually:
- B-tree first
- then partial, multicolumn, expression, or covering variations
- then specialized types like GIN, GiST, or BRIN where appropriate
Step 4: Check write cost and index count
A good index is one you can afford to maintain.
Conclusion
The best PostgreSQL indexes for performance are usually not the fanciest ones.
They are the ones that match the real access pattern with the least unnecessary overhead.
That usually means:
- B-tree for most scalar lookups, joins, and ordering
- GIN for JSONB, arrays, and full-text search
- GiST for ranges, geometry, and nearest-neighbor style workloads
- SP-GiST for specialized partitioned search structures
- BRIN for huge append-heavy tables with naturally ordered values
- partial indexes when only a subset of rows matters
- expression indexes when the query filters on a computation
- covering indexes when index-only scans are realistically achievable
If you build indexes from actual query patterns instead of database superstition, PostgreSQL indexing gets much easier.
And much faster.