PostgreSQL Memory Settings Explained: shared_buffers and work_mem
Level: intermediate · ~15 min read · Intent: informational
Audience: backend developers, database engineers, technical teams
Prerequisites
- basic familiarity with PostgreSQL
Key takeaways
- The most important PostgreSQL memory mistake is treating each memory setting in isolation. Real memory usage depends on shared memory, per-session memory, per-operation memory, autovacuum workers, and connection count all at once.
- shared_buffers and work_mem do very different jobs. shared_buffers helps PostgreSQL cache table and index pages globally, while work_mem is a per-operation limit for sorts, hashes, and similar executor work that can multiply quickly under concurrency.
PostgreSQL memory tuning is one of the most misunderstood parts of database performance.
That is mostly because people treat memory settings like isolated dials:
- raise shared_buffers
- raise work_mem
- raise maintenance_work_mem
and expect the server to get faster.
Sometimes it does. Sometimes it gets less stable instead.
The reason is simple: PostgreSQL memory usage is not one pool.
It is a mix of:
- shared memory used by the server as a whole
- per-session memory
- per-operation memory
- autovacuum worker memory
- replication-related memory
- and operating-system page cache outside PostgreSQL itself
That is why memory tuning works best when you understand what each setting is actually for.
This guide focuses on the settings most developers and operators need to understand first:
- shared_buffers
- work_mem
- maintenance_work_mem
- autovacuum_work_mem
- temp_buffers
- effective_cache_size
- and how max_connections quietly affects the whole picture
The Most Important Memory Rule
Before going setting by setting, remember this:
PostgreSQL memory settings are not additive in a simple way.
You cannot safely think:
- shared_buffers = 8GB
- work_mem = 64MB
- therefore memory usage is roughly 8GB plus a bit
That is not how it works.
Why?
Because:
- shared_buffers is shared across the whole server
- work_mem is per operation
- a single query can use several work_mem-limited operations at once
- many sessions can do that concurrently
- autovacuum workers can each allocate maintenance memory
- and the OS page cache still matters too
This is the main reason memory tuning is easy to get wrong.
1. shared_buffers: PostgreSQL's Shared Cache
shared_buffers controls how much memory PostgreSQL uses for shared memory buffers.
This is PostgreSQL’s own main cache for table and index pages.
What it does
When queries read or modify data, PostgreSQL works heavily through shared buffers. If hot data stays in this cache, PostgreSQL can avoid more trips to disk.
What the docs say
PostgreSQL documents that on a dedicated database server with 1GB or more of RAM, a reasonable starting point for shared_buffers is 25% of system memory. The docs also note that because PostgreSQL relies on the operating system cache too, it is unlikely that allocating more than 40% of RAM to shared_buffers will work better than a smaller amount.
Practical example
If a server has 16GB of RAM, a sensible starting point might be around:
- shared_buffers = 4GB
Not because 4GB is universally perfect, but because it is a sane first benchmark point.
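To make that concrete, here is a minimal sketch of applying that starting point with ALTER SYSTEM. The 4GB figure is the illustrative value from above, not a recommendation for every workload:

```sql
-- Apply the documented ~25% starting point on a hypothetical dedicated 16GB server.
ALTER SYSTEM SET shared_buffers = '4GB';
-- shared_buffers only changes at server start, so restart PostgreSQL, then verify:
SHOW shared_buffers;
```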
Bigger is not automatically better
One of the biggest shared-buffers mistakes is assuming:
- more cache = more speed
That is too simplistic.
If you make shared_buffers too large, you can:
- starve the OS cache
- increase checkpoint pressure
- worsen write smoothing if related WAL/checkpoint settings do not keep up
- and make the server less balanced overall
Important companion setting
PostgreSQL’s docs explicitly note that larger shared_buffers settings usually require a corresponding increase in max_wal_size, so large amounts of changed data can be written out more gradually.
That is a very important operational detail.
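If you raise shared_buffers, it is worth adjusting the write path in the same change. A hedged sketch, with illustrative values rather than recommendations:

```sql
-- Raise the shared cache and give checkpoints more room to spread writes out.
ALTER SYSTEM SET shared_buffers = '8GB';  -- requires a server restart
ALTER SYSTEM SET max_wal_size = '4GB';    -- reloadable, no restart needed
SELECT pg_reload_conf();                  -- apply the reloadable change now
```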
2. work_mem: Per Operation, Not Per Query, Not Per Server
work_mem is probably the most misunderstood PostgreSQL memory setting.
It sets the base maximum amount of memory used by one query operation before PostgreSQL starts writing temporary files.
What counts as a query operation?
PostgreSQL’s docs specifically call out operations like:
- sorts
- hash tables
- ORDER BY and DISTINCT processing
- merge joins
- hash joins
- hash-based aggregation
- memoize nodes
- and hash-based IN subquery processing
Why this matters
A single query can use several work_mem-limited nodes at once.
And several sessions can do that concurrently.
So if you set:
work_mem = 64MB
that does not mean:
- each query gets 64MB total
It means:
- each qualifying sort/hash operation can use roughly that much before spilling
That is why total memory usage can become much larger than expected.
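One practical consequence: when a single reporting query genuinely needs more sort memory, scoping the change beats raising the global default. A minimal sketch, where the orders table and its columns are hypothetical:

```sql
-- SET LOCAL confines the larger limit to this transaction only.
BEGIN;
SET LOCAL work_mem = '256MB';     -- reverts automatically at COMMIT/ROLLBACK
SELECT customer_id, sum(amount)   -- hypothetical reporting query
FROM orders
GROUP BY customer_id
ORDER BY sum(amount) DESC;
COMMIT;
```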
Default value
PostgreSQL’s current docs say the default is 4MB.
Practical example
Imagine one complex query does:
- one sort
- one hash join
- one hash aggregate
That one query may already hold several work_mem-sized allocations at the same time.
Now multiply that by:
- 30 concurrent sessions
and you can see why aggressive work_mem values can create memory pressure fast.
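You can put a rough ceiling on that exposure directly from the catalog. This is a back-of-envelope estimate, not memory PostgreSQL actually reserves; the 30 sessions and 3 operations per query are the assumptions from the example above:

```sql
-- pg_settings reports work_mem in kB; multiply out a pessimistic worst case.
-- With work_mem = 64MB this reports roughly 5.6GB of potential executor memory.
SELECT pg_size_pretty(30 * 3 * setting::bigint * 1024) AS rough_worst_case
FROM pg_settings
WHERE name = 'work_mem';
```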
3. hash_mem_multiplier: Why Hash Operations Can Use More Than work_mem
Hash-based operations are often more sensitive to memory than sorts.
PostgreSQL handles that by using hash_mem_multiplier.
What it does
The memory limit for a hash table is computed by multiplying:
work_mem by hash_mem_multiplier
Default value
The docs say the default is 2.0, meaning hash-based operations can use twice the base work_mem amount.
Why this matters
If you thought:
work_mem = 32MB
meant hash operations cap around 32MB, that is not always true.
With the default multiplier:
- a hash operation may go to around 64MB
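You can read both knobs off a live server to see the effective hash ceiling:

```sql
-- Effective hash limit ≈ work_mem × hash_mem_multiplier.
SELECT current_setting('work_mem')            AS work_mem,
       current_setting('hash_mem_multiplier') AS hash_mem_multiplier;
-- e.g. work_mem = 32MB with the default 2.0 means ~64MB per hash node.
```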
When it can help to raise it
PostgreSQL’s docs say raising hash_mem_multiplier can help environments where spills happen regularly, especially when raising work_mem directly causes broader memory pressure.
Practical lesson
If your system is:
- spilling hashes a lot
- but already struggling with broad memory pressure
then raising hash_mem_multiplier selectively may be a cleaner move than blindly pushing work_mem higher.
4. maintenance_work_mem: For Maintenance, Not Ordinary Queries
maintenance_work_mem is separate from work_mem.
It controls memory for maintenance operations such as:
- VACUUM
- CREATE INDEX
- ALTER TABLE ADD FOREIGN KEY
Default value
PostgreSQL documents the default as 64MB.
Why it is often set larger than work_mem
The docs note that since only one such maintenance operation is normally executed at a time by a session, and there usually are not many of them running concurrently, it is safe to set maintenance_work_mem significantly larger than work_mem.
That makes sense:
- query memory is multiplied by concurrency
- maintenance memory is usually not
When it helps most
Larger maintenance_work_mem can improve:
- vacuuming
- index builds
- restore operations
- certain schema maintenance tasks
Practical example
If:
work_mem = 16MB
it may still be totally reasonable for:
maintenance_work_mem = 256MB or more,
depending on the system and maintenance workload.
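A hedged sketch of that pattern, using a session-local override so the server-wide default stays untouched; the table and index names are hypothetical:

```sql
-- Give one expensive index build extra memory for this session only.
SET maintenance_work_mem = '256MB';
CREATE INDEX idx_orders_created ON orders (created_at);
RESET maintenance_work_mem;  -- return to the configured default
```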
5. autovacuum_work_mem: The Hidden Multiplier
This setting matters because autovacuum is concurrent.
What it does
autovacuum_work_mem sets the maximum memory used by each autovacuum worker.
Default behavior
The docs say it defaults to -1, which means:
- use maintenance_work_mem instead
Why this matters
PostgreSQL also warns that when autovacuum runs, up to:
autovacuum_max_workers times the maintenance memory amount may be allocated.
So if you set:
maintenance_work_mem = 1GB
and autovacuum inherits it, that could become a surprisingly large memory commitment once several workers are active.
Practical lesson
If you want:
- strong manual maintenance memory
but do not want every autovacuum worker inheriting that size,
set autovacuum_work_mem separately.
That is one of the cleanest PostgreSQL memory-safety practices on busy production systems.
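A minimal sketch of that practice, with illustrative values:

```sql
-- Generous memory for manual VACUUM and index builds...
ALTER SYSTEM SET maintenance_work_mem = '1GB';
-- ...but an explicit, smaller cap for each autovacuum worker.
ALTER SYSTEM SET autovacuum_work_mem = '256MB';
SELECT pg_reload_conf();  -- both take effect without a restart
```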
6. temp_buffers: Session-Local Memory for Temporary Tables
temp_buffers is not about sorts and hashes.
It is about temporary tables.
What it does
It controls the maximum amount of memory used for temporary buffers within each session, and those buffers are used only for access to temporary tables.
Default value
PostgreSQL documents the default as 8MB.
Important detail
This setting can be changed per session, but only before the first use of temporary tables in that session. After temp tables have been used, changing it has no effect for that session.
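That ordering constraint is easy to trip over. A minimal sketch; the staging table is hypothetical:

```sql
-- Must run before the session touches any temporary table;
-- afterwards, changing temp_buffers is silently ignored for this session.
SET temp_buffers = '64MB';
CREATE TEMP TABLE staging_rows (id bigint, payload jsonb);
```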
Practical lesson
For many ordinary web applications, temp_buffers is not the first memory setting to tune.
It matters more when you have workloads that use:
- session temp tables
- ETL-style session staging
- or special query workflows built around temporary relations
7. effective_cache_size: Planner Hint, Not Real Allocation
This is one of the easiest settings to misunderstand.
effective_cache_size does not allocate memory.
It tells PostgreSQL’s planner how much cache is likely available to a single query.
What it influences
PostgreSQL’s docs say:
- a higher effective_cache_size makes index scans more likely
- a lower one makes sequential scans more likely
What to include mentally
The docs say you should consider:
- PostgreSQL shared buffers
- plus the part of the OS page cache likely used for PostgreSQL data
- minus the fact that concurrent queries must share that cache
Default value
The current docs list the default as 4GB.
Practical lesson
This setting is about planner realism.
If you set it too low, PostgreSQL may underestimate cache availability and prefer more sequential scans than it should.
If you set it absurdly high, PostgreSQL may become too optimistic about index access.
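As a hedged illustration: on a dedicated 32GB host with shared_buffers = 8GB, a defensible value might sit somewhere around half to three quarters of RAM. The number below is illustrative, not a formula:

```sql
-- Planner hint only: nothing is allocated by this setting.
ALTER SYSTEM SET effective_cache_size = '24GB';  -- illustrative value
SELECT pg_reload_conf();
```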
8. max_connections: Quietly a Memory Setting Too
A lot of people think of max_connections only as a connection-limit setting.
It is also a memory-shaping setting.
PostgreSQL’s docs explicitly say that increasing max_connections leads to higher allocation of certain resources, including shared memory.
Why this matters
Higher connection counts do not just mean:
- more clients
They also often mean:
- more backend processes
- more potential concurrent work_mem usage
- more internal shared resource allocation
- and more general coordination overhead
Practical lesson
If your first instinct is:
- “the app has more traffic, so raise max_connections”
be careful.
Often the better answer is:
- use connection pooling
- keep real concurrent active backends saner
- and avoid multiplying memory pressure through backend count
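Before touching max_connections at all, it helps to measure how many backends are genuinely active at once; pooled applications often need far fewer server connections than client connections:

```sql
-- Count client backends by state; a large 'idle' pile suggests pooling,
-- not a higher max_connections, is the right lever.
SELECT state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY count(*) DESC;
```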
9. How These Settings Interact in Real Life
Here is the part that matters most operationally.
Imagine a server with:
- 32GB RAM
- shared_buffers = 8GB
- work_mem = 64MB
- hash_mem_multiplier = 2.0
- maintenance_work_mem = 512MB
- autovacuum_max_workers = 3
- max_connections = 300
That might look fine on paper.
But now imagine:
- many concurrent sessions each doing several sort/hash operations
- hash nodes allowed roughly 128MB each
- autovacuum workers inheriting large maintenance memory
- and the OS still needing space for page cache and everything else
This is how “we only changed one memory setting” incidents happen.
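A deliberately pessimistic back-of-envelope for that configuration makes the risk obvious. None of this is reserved up front, and real usage is normally far lower, but it is the exposure those settings permit:

```sql
-- Theoretical ceiling in GB for the example server above.
SELECT 8                        -- shared_buffers
     + 300 * 2 * 128 / 1024.0   -- 300 backends × 2 hash nodes × ~128MB each
     + 3 * 0.5                  -- 3 autovacuum workers × 512MB
     AS worst_case_gb;          -- ≈ 84.5 on a 32GB machine
```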
Practical rule
Always think in terms of:
- worst-case or near-worst-case concurrency
- not ideal single-query behavior
10. Good Tuning Order
If you are tuning PostgreSQL memory, a practical order is:
First
Understand the workload:
- many small OLTP queries?
- large analytical queries?
- mixed workload?
- heavy index builds and maintenance?
- many temp tables?
- lots of hash spills?
Second
Set a sane shared_buffers baseline.
On a dedicated DB server, PostgreSQL’s documented 25% starting point is a very good place to begin.
Third
Set effective_cache_size honestly.
Do not treat it as fantasy cache. Treat it as realistic cache likely available to one query.
Fourth
Tune work_mem conservatively.
If you raise it, remember:
- it multiplies across operations
- and across sessions
Fifth
Tune maintenance_work_mem and autovacuum_work_mem together.
That is how you improve maintenance speed without accidentally creating background memory pressure.
Sixth
Only then revisit the overall connection model.
If connection count is high, pooling may be a better fix than pushing memory settings harder.
11. Common Mistakes
Mistake 1: Setting work_mem based on one query
This ignores concurrency and per-operation multiplication.
Mistake 2: Treating effective_cache_size like a real cache allocation
It is just a planner estimate.
Mistake 3: Increasing shared_buffers without thinking about WAL/checkpoints
Larger shared buffers often need other write-path settings to keep up.
Mistake 4: Forgetting autovacuum inherits maintenance memory by default
This can multiply memory pressure through workers.
Mistake 5: Solving memory problems with more max_connections
This often makes the overall situation worse.
Mistake 6: Tuning without checking spills
If sorts and hashes are spilling, investigate with logs and EXPLAIN (ANALYZE, BUFFERS) rather than guessing.
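Both checks are standard PostgreSQL features. A minimal sketch, with a hypothetical query standing in for your real suspect:

```sql
-- Log every temporary file larger than 10MB so spills show up in the logs.
ALTER SYSTEM SET log_temp_files = '10MB';
SELECT pg_reload_conf();

-- Then look for "Sort Method: external merge  Disk: ..." or hash
-- "Batches" greater than 1 in the actual plan:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders ORDER BY created_at;  -- hypothetical query
```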
12. A Practical Mental Model
Use this simple model:
shared_buffers
Global PostgreSQL cache.
work_mem
Per executor operation, multiplied by query complexity and concurrency.
hash_mem_multiplier
Extra multiplier for hash-based operations.
maintenance_work_mem
Per maintenance operation.
autovacuum_work_mem
Per autovacuum worker.
temp_buffers
Per session, only for temp tables.
effective_cache_size
Planner hint only.
max_connections
A hidden amplifier of many other memory and resource costs.
If you remember that model, you will avoid most PostgreSQL memory-tuning mistakes.
FAQ
What is the best shared_buffers setting in PostgreSQL?
On a dedicated database server with 1GB or more of RAM, PostgreSQL documents 25% of RAM as a reasonable starting point, and notes that allocating more than 40% of RAM is unlikely to work better than a smaller amount.
Why is work_mem dangerous to raise too high?
Because work_mem is not per server. It is a base limit per sort or hash operation, and a complex query can use several of them at once across many concurrent sessions.
What does effective_cache_size do in PostgreSQL?
It does not allocate memory. It tells the planner how much cache is likely available to a single query, which affects whether PostgreSQL prefers index scans or sequential scans.
Should maintenance_work_mem be larger than work_mem?
Usually yes. PostgreSQL's docs say it is safe to set maintenance_work_mem significantly larger than work_mem because maintenance operations are less numerous, but you still need to account for autovacuum workers.
Conclusion
PostgreSQL memory tuning gets much easier once you stop thinking in terms of one big memory bucket.
The most important settings each do different jobs:
- shared_buffers helps PostgreSQL cache data globally
- work_mem controls sort and hash work per operation
- maintenance_work_mem speeds up maintenance tasks
- autovacuum_work_mem controls background worker memory more safely
- temp_buffers matters for temp tables
- effective_cache_size teaches the planner what cache probably exists
- and max_connections quietly influences total memory pressure more than many teams expect
That is why the best PostgreSQL memory tuning is not aggressive tuning.
It is coherent tuning.
If the settings make sense together, the server usually behaves much better than if each one was tuned in isolation.