PG Stuff

PG Optimized Configuration

*** SQLEXEC LLC is not responsible for recommendations taken from this page.

PG v14

Parameter
Default
Recommended
Configuration Notes
archive_command
 
'cp -i %p $PGDATA/archives/%f'
The recommended value is just a typical example; there are various commands you can use depending on your backup process. Best practice is to offload these archived WAL files in case the server crashes. In our case we archive them to another folder under the PostgreSQL data directory (archives), and a separate operations backup process preserves them elsewhere on the network. Make sure you create this archive directory under the PGDATA folder as the postgres user. archive_mode=on with archive_command defined is effectively log shipping. Since archive_mode changes require a server restart, it is often recommended to keep archive_mode on but use a dummy archive command such as archive_command = 'exit 0' or '/bin/true'. Since 9.5, archive_command is also active on standby servers. Here is another flavor of the command: archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

Not applicable on AWS Aurora/RDS.
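The test-and-copy flavor of archive_command can be exercised locally before putting it in postgresql.conf. A minimal sketch, with temporary directories standing in for the real WAL and backup paths (all paths here are stand-ins, not your actual layout):

```shell
# Emulates: archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
SRC_DIR=$(mktemp -d)        # stands in for the pg_wal directory
ARCHIVE_DIR=$(mktemp -d)    # stands in for /backup/wal
P="$SRC_DIR/000000010000000000000001"   # %p: path to the WAL segment
F=$(basename "$P")                      # %f: the file name only
echo "wal segment contents" > "$P"

archive_one() {
  # Returns non-zero if the target already exists, so the archiver retries
  # later instead of silently overwriting an already-archived segment.
  test ! -f "$ARCHIVE_DIR/$F" && cp "$P" "$ARCHIVE_DIR/$F"
}

archive_one && echo "archived $F"
archive_one || echo "refused to overwrite $F"
```

The non-zero exit on an existing target is the important property: PostgreSQL treats a failing archive_command as "try again later" and keeps the segment in pg_wal.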
archive_mode
off
on
off during migration

Not applicable on AWS Aurora/RDS.
autovacuum_analyze_scale_factor
0.1
0.01
Defaults are too conservative, decrease according to your workload
autovacuum_naptime
60
15
Almost never increase from the default value, but may have to decrease it to keep up with bloat (5-15 secs).
autovacuum_vacuum_insert_scale_factor
0.2
0.02
Defaults are too conservative, decrease according to your workload
autovacuum_vacuum_scale_factor
0.2
0.02
Defaults are too conservative, decrease according to your workload
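To see why the default scale factors are too conservative on large tables, the trigger formula (autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples) can be worked out for a hypothetical 10-million-row table:

```shell
# How many dead tuples accumulate before autovacuum fires on a table of
# 10,000,000 rows (hypothetical size), using the default threshold of 50.
RELTUPLES=10000000
DEFAULT_TRIGGER=$(awk -v n="$RELTUPLES" 'BEGIN { printf "%d", 50 + 0.2  * n }')
TUNED_TRIGGER=$(awk -v n="$RELTUPLES"   'BEGIN { printf "%d", 50 + 0.02 * n }')
echo "default (0.2):  vacuum after $DEFAULT_TRIGGER dead tuples"
echo "tuned   (0.02): vacuum after $TUNED_TRIGGER dead tuples"
```

At the default, 2 million rows of bloat accumulate before a vacuum even starts; the tuned value fires ten times sooner.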
checkpoint_completion_target
0.9
0.9
default is adequate
checkpoint_timeout
300
300
default is adequate in most cases
default_statistics_target
100
300
Adjusts the sampling size of data page values used by the optimizer. Larger values increase the time needed to run ANALYZE, but may improve the quality of the planner's estimates. To populate the statistics for a column, the database draws a sample of 300 * default_statistics_target rows (30000 rows at the default of 100). Increase the target if you have irregular data distributions:
SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE '<your table name>%'; It is also useful to look at pg_statistic and pg_stats to get a better idea.
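As an alternative to raising the global target for every column, the sample target can be raised only where the distribution is irregular. A sketch with hypothetical table and column names:

```sql
-- Raise the sample target for one skewed column only (names are hypothetical)
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- Inspect what the planner now knows about the column
SELECT attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'orders' AND attname = 'customer_id';
```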
effective_cache_size
4GB
75-90% of total memory
This value represents all memory available for caching, whether used by the database or other applications, and should be set equal to or higher than shared_buffers. The optimizer also uses it to decide whether a large index is likely to fit in memory. You can estimate a proper value by executing free -m and adding the free and cached values. The higher the value, the more likely the optimizer is to choose index scans for data it expects to reside in memory most of the time. In short: if PostgreSQL knows a lot of RAM is around, it can safely assume that fewer pages have to come from disk and more data will come from the cache, which allows the optimizer to make indexes cheaper relative to a sequential scan. This effect can only be observed if your database is sufficiently large; on fairly small databases you will not observe any changes in execution plans.
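A rough sizing sketch for the 75-90% rule. This assumes a Linux /proc/meminfo; on other systems it falls back to a hypothetical 32GB machine:

```shell
# Compute ~75% of physical RAM as a starting effective_cache_size.
TOTAL_KB=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo 2>/dev/null || echo 33554432)
TOTAL_KB=${TOTAL_KB:-33554432}          # fallback: hypothetical 32GB box
ECS_MB=$(( TOTAL_KB * 75 / 100 / 1024 ))
echo "effective_cache_size = ${ECS_MB}MB   # ~75% of RAM"
```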
effective_io_concurrency
0
4-256
Sets the number of concurrent disk I/O operations attempted in parallel; as of PG v14 it only affects bitmap heap scans. A value of zero disables it. Increase in multiples of 4 per Shaun Thomas; values as high as 256 are sometimes recommended when working with SSDs.
fsync
on
on
Set to off temporarily to make bulk loading faster. Note: you may not be able to recover your database after a crash with fsync off. In December 2015, the Amazon RDS team benchmarked fsync=off against synchronous_commit=off and found that in some cases you get better performance turning synchronous_commit off and leaving fsync alone; in other cases the difference isn't enough to be worth it. There are known problems with fsync(); remedies did not appear until PG v12 on kernel 4.13 or higher. See the section below on fsync() for more information.
full_page_writes
on
on
If set to off, database corruption could occur during recovery; the savings in performance and WAL size are not worth it. See Impact of Full Page Writes. On Aurora you don't have the option of changing this parameter, since Aurora doesn't have the torn-page problem and can safely use partial page writes. It might be helpful to turn it off during DML bulk loading or restore operations. If turned off, wal_compression buys you nothing. It must also be on when using pg_rewind.
See https://www.2ndquadrant.com/en/blog/on-the-impact-of-full-page-writes/
hot_standby_feedback
off
off
Specifies whether or not a hot standby will send feedback to the primary or upstream standby about queries currently executing on the standby. This parameter can be used to eliminate query cancels caused by cleanup records, but can cause database bloat on the primary for some workloads. Feedback messages will not be sent more frequently than once per wal_receiver_status_interval. The default value is off. Practically, this has the effect of stopping cleanup of rows during vacuum operations. You might see messages like: DETAIL: 194817 dead row versions cannot be removed yet.
idle_in_transaction_session_timeout
0
900000
Wait 15 minutes before terminating an idle connection with an open transaction. A value of zero (the default) disables the timeout.

This option can be used to ensure that idle sessions do not hold locks for an unreasonable amount of time. Even when no significant locks are held, an open transaction prevents vacuuming away recently-dead tuples that may be visible only to this transaction; so remaining idle for a long time can contribute to table bloat.
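Since this is a regular GUC, it can be scoped to the connections that need it instead of set globally. A sketch (the role name is hypothetical):

```sql
-- Per-role: only the application role gets the 15-minute limit
ALTER ROLE app_user SET idle_in_transaction_session_timeout = '15min';

-- Or cluster-wide, matching the recommended 900000 ms
ALTER SYSTEM SET idle_in_transaction_session_timeout = '15min';
SELECT pg_reload_conf();
```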
jit
on
on
Introduced in v11, JIT compiles parts of the execution plan into machine code. It mostly benefits OLAP-type queries that take at least a few seconds to run. Use SET jit = on; then EXPLAIN your SQL to see if there is any performance gain, especially for CPU-bound queries (where I/O time is not the bottleneck). It can give a big benefit for server-side prepared statements if you are willing to sacrifice the planning hit on the first execution.
log_autovacuum_min_duration
-1
-1
Leave at -1 to disable, or set to 0 (or a millisecond threshold) to capture autovacuum statistics that help tune vacuuming better.
log_checkpoints
off
on
Nice way to tune checkpointing (max_wal_size, formerly checkpoint_segments) by watching for any warnings that surface in the log.
log_connections
off
off
Turn on to evaluate the need for connection pooling, or for reporting tools like pgBadger.
log_disconnections
off
off
Turn on to evaluate the need for connection pooling, or for reporting tools like pgBadger.
log_duration
off
off
Better to use log_min_duration_statement and leave this and log_statement off; pgBadger requires both of those parameters to be off.
log_filename
postgresql-%Y-%m-%d_%H%M%S.log
postgresql-%Y-%m-%d_%H.log
Use a coarser time unit in the file name to make working with log files easier.
logging_collector
on
on
Always capture log information.
log_hostname
off
on
Only turn on if you need host names in the logs and can accept the non-negligible performance penalty of the reverse DNS lookups.
log_line_prefix
'%m [%p] '
'%t [%p]: [%l-1] db=%d, user=%u, app=%a, client=%h, state=%e, host=%r, tid=%x '
Acceptable by PGBadger and captures the most stuff that helps in grepping log files. AWS Aurora has a predefined value that cannot be changed.
log_lock_waits
off
on
Always turn this on to help in identifying and fixing lock waits.
log_min_duration_statement
-1
3000
3 seconds is a common value to keep log file noise to a minimum. Lower it closer to 0 when you want to get full robust PGBadger reports.
log_statement
none
none
Perhaps set to 'ddl' to capture DDL changes.
log_temp_files
-1
0
Always specify 0 to capture all query temporary writes and reads from disk.
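Pulled together, the logging rows above amount to the following postgresql.conf fragment. These are the recommendations from this table, not mandatory values:

```ini
# pgBadger-friendly logging setup assembled from the rows above
logging_collector = on
log_checkpoints = on
log_lock_waits = on
log_temp_files = 0
log_min_duration_statement = 3000
log_filename = 'postgresql-%Y-%m-%d_%H.log'
log_line_prefix = '%t [%p]: [%l-1] db=%d, user=%u, app=%a, client=%h, state=%e, host=%r, tid=%x '
log_statement = 'none'
log_duration = off
```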
maintenance_work_mem
65536
minimally, 2% of total memory or at least 256MB whichever one is higher
Useful for DDL operations such as CREATE INDEX, and also used by autovacuum workers when autovacuum_work_mem = -1. Raise it substantially during migrations where indexes are being created. Be careful setting this value in the context of autovacuum_max_workers, since each worker can use up to maintenance_work_mem; total memory usage is really autovacuum_max_workers * maintenance_work_mem. Very large work_mem or maintenance_work_mem settings are unlikely to help unless they result in a fully internal sort. There is evidence that the heap used for tuple sorting benefits from *lower* settings, sometimes as low as 64MB.
max_connections
100
500+
There is a performance hit if you set max_connections too high, since the allocations for it are done up front. You may also need to increase max_files_per_process. PostgreSQL will allocate slightly more than max_connections x track_activity_query_size bytes at startup. It is important to synchronize max_connections with the Linux postgres user's max-processes limit. The following command shows the current value:
ulimit -a | grep processes --> max user processes (-u) 1024
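The startup-allocation note above is easy to work out. An illustrative calculation using the recommended values from this table (the arithmetic is an approximation, per the "slightly more than" caveat):

```shell
# Approximate startup allocation for query-text tracking:
# max_connections * track_activity_query_size bytes
MAX_CONNECTIONS=500
TRACK_ACTIVITY_QUERY_SIZE=8192   # bytes; the recommended value in this table
BYTES=$(( MAX_CONNECTIONS * TRACK_ACTIVITY_QUERY_SIZE ))
echo "about $(( BYTES / 1024 ))KB reserved at startup for pg_stat_activity query text"
```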
max_parallel_maintenance_workers
2
25% of max_parallel_workers
Use the default unless you have more max_worker_processes to work with. Parallel maintenance workers are taken from the pool defined by max_worker_processes, limited by max_parallel_workers.
max_parallel_workers
8
8+
Use the default unless you have more max_worker_processes to work with. Parallel workers are taken from the pool defined by max_worker_processes.
max_parallel_workers_per_gather
2
25% of max_parallel_workers
Use the default unless you have more max_worker_processes to work with. Parallel workers are taken from the pool defined by max_worker_processes, limited by max_parallel_workers.
max_standby_archive_delay
30000
30000+
Use defaults unless queries are timing out on secondary.
max_standby_streaming_delay
30000
30000+
Use defaults unless queries are timing out on secondary.
max_wal_size
1024
1GB (minimum)
max_wal_size/min_wal_size replace checkpoint_segments. Increase this value if checkpoint warnings show up in the PG log, similar to how we increased checkpoint_segments (obsolete) in earlier versions. Rule of thumb: max_wal_size = (3 * checkpoint_segments) * 16MB; for example, with checkpoint_segments = 40, 3 * 40 * 16MB = ~2GB. The real size of the transaction log is now max_wal_size + (16MB * wal_keep_segments). Shaun Thomas recommends sizing from observed WAL generation, e.g. pg_wal_lsn_diff() between checkpoints * 2.
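The rule of thumb in the notes works out as follows (checkpoint_segments = 40 is just the example value from the notes, not a recommendation):

```shell
# max_wal_size = (3 * checkpoint_segments) * 16MB, using the example value 40
CHECKPOINT_SEGMENTS=40
MAX_WAL_SIZE_MB=$(( 3 * CHECKPOINT_SEGMENTS * 16 ))
echo "max_wal_size = ${MAX_WAL_SIZE_MB}MB   # ~2GB"
```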
max_worker_processes
8
8+
Based on number of CPUs available since each worker consumes a cpu. Sets the maximum number of background processes that the system can support. This parameter can only be set at server start. It does not include wal senders/receivers.
min_parallel_index_scan_size
64
64+
8kB units - Sets the minimum amount of index data that must be scanned in order for a parallel scan to be considered. Note that a parallel index scan typically won't touch the entire index; it is the number of pages which the planner believes will actually be touched by the scan which is relevant. Consider increasing this significantly so that parallel workers can be more devoted to big tables and not smaller ones.
min_parallel_table_scan_size
1024
1024+
8kB units - Sets the minimum amount of table data that must be scanned in order for a parallel scan to be considered. For a parallel sequential scan, the amount of table data scanned is always equal to the size of the table, but when indexes are used the amount of table data scanned will normally be less. Consider increasing this significantly so that parallel workers can be more devoted to big tables and not smaller ones.
min_wal_size
80
512MB (minimum)
max_wal_size/min_wal_size replaces checkpoint_segments.
random_page_cost
4
1.1
Sets the planner's estimate of the cost of a non-sequentially-fetched disk page. The default is 4.0. Higher values push the optimizer to use table scans, as opposed to index scans. 2-3 might be a good value for faster disks, and as low as 1.1 for SSD or flash disk drives. General rule of thumb: set to 3.0 for RAID 10; 2.0 for a SAN; 1.1 for Amazon EBS. Going with a conservative value (default=4) is particularly important when using high latency storage, such as spinning disks. Otherwise you might want to equalize random_page_cost to seq_page_cost. The default values for seq_page_cost and random_page_cost are not optimized for SSDs, where you should lower the random_page_cost closer to the seq_page_cost.
shared_buffers
16384
20-25% of RAM
shared_buffers can make things worse if it is set too low or too high. A good rule of thumb: 2GB RAM: 20%; 64GB RAM: 25%; above 64GB RAM: 16GB. pg_buffercache provides real-time examination of what's happening in the shared buffer cache. Reasons for setting it relatively low compared to available operating system memory:
1. PostgreSQL only uses direct I/O for writing to the WAL; everything else goes through the regular OS buffer cache.
2. shared_buffers memory has to be reconciled with disk at every checkpoint, while OS buffers do not.
3. The OS may simply be more efficient at buffering: it knows more about the underlying hardware, and the clock-sweep method used internally by PostgreSQL to simulate an LRU cache is not very sophisticated.

Also, make sure kernel.shmmax is set properly to work with the shared_buffers setting you set.
For Aurora/RDS, this value is computed based on the instance class. Leave it alone in that case.
shared_preload_libraries
 
pg_stat_statements (minimally)
Extensions that require library loading when Postgres starts.
synchronous_commit
on
on
"on" is the safe setting; "off" gives better performance while maintaining data consistency, but you may lose the last couple of seconds of committed transactions in a crash. When synchronous_commit is off, the server still issues fsync calls, but not necessarily at commit time for each transaction; they may be delayed after the commit by up to 3 * wal_writer_delay (600ms with the default configuration). This reduces the number of fsync calls, especially for workloads with lots of small transactions. After a server crash, transactions not yet synced in the last 600ms may be lost, but the state of the database is always consistent. Note: when turned off, lock pressure increases on WALInsertLock, which protects the WAL buffer. Since this can be set at the transaction level, you might want to turn it off for work you don't care too much about losing. With it off, consider lowering wal_writer_delay to write WAL more frequently, and increase wal_buffers to a multiple of wal_segment_size. For Aurora, you might want to change this to off, since an Aurora log write (the substitute for a WAL write) writes to 4 disks.
temp_buffers
1024
Increase if lot of temp table usage
Sets the maximum number of temporary buffers used by each database session for temporary objects (CREATE TEMP TABLE). With the default of 1024 8KB pages, that is 8MB max per session. Once an 8KB page is used, it is never released for the life of the connection; each unused page still costs a 64-byte header. work_mem is still required in addition to temp_buffers for sorts and hashes. The allocation is done on an as-needed basis, but once allocated it is not released until the connection ends. If the buffer maximum is surpassed, temporary table rows are written to and read from disk. log_temp_files only applies to work_mem spills, so the only way to monitor this is to keep a close eye on the operating system's available memory.
track_activity_query_size
1024
8192
increases buffer to capture entire SQL statement in pg_stat_activity. Note: PostgreSQL will allocate slightly more than "max_connections x track_activity_query_size" bytes at startup.
track_io_timing
off
on
Helps EXPLAIN ANALYZE output by including I/O timings.
wal_keep_size
0
64
Useful to help a secondary catch up, unless more WAL is needed than is being kept. Rely on archiving, not this setting, to keep the secondary running.
wal_level
replica
replica
minimal is good for bulk loading, but archive/replica is needed to restore from backup and WAL files. Set to hot_standby/replica for synchronous replication; set to logical for logical decoding. 9.6 introduced replica, replacing archive and hot_standby.

At the logical level, the same information is logged as with replica, plus the information needed to extract logical change sets from the WAL (logical decoding). A level of logical will increase WAL volume, particularly if many tables are configured with REPLICA IDENTITY FULL and many UPDATE and DELETE statements are executed.

ALTER TABLE ... REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING } changes the information written to the WAL to identify rows that are updated or deleted. It has no effect except when logical replication is in use. DEFAULT (the default for non-system tables) records the old values of the primary key columns, if any. USING INDEX records the old values of the columns covered by the named index, which must be unique, not partial, not deferrable, and include only columns marked NOT NULL. FULL records the old values of all columns in the row and is the most resource-consuming under heavy update activity. NOTHING records no information about the old row (the default for system tables). In all cases, no old values are logged unless at least one of the columns that would be logged differs between the old and new versions of the row.

REPLICA IDENTITY should be chosen carefully for each table of a given application: FULL generates an extra amount of WAL that may not be necessary, while NOTHING may omit essential information. In most cases DEFAULT provides good coverage. It is recommended to set wal_level = logical: it adds more information to the WAL, but not significantly, and it is necessary for logical replication and related utilities.
Also, turn on wal_log_hints. This isn't necessary if your database was initialized with checksums; it is ignored in that case.
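The REPLICA IDENTITY choices described above look like this in DDL. Table and index names are hypothetical:

```sql
-- Primary-key columns only (the default for user tables)
ALTER TABLE orders REPLICA IDENTITY DEFAULT;

-- Columns of a unique, non-partial, NOT NULL index
ALTER TABLE audit_log REPLICA IDENTITY USING INDEX audit_log_uniq_idx;

-- All columns; generates the most WAL under heavy update activity
ALTER TABLE small_lookup REPLICA IDENTITY FULL;
```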
work_mem
4MB
64MB+
Involves in-memory sorts and hashes. It is more than a per-session value: it is allocated per sort or hash operation, so a single complex query can consume a multiple of it, and a separate work_mem is allocated for every parallel worker doing a hash join or sort. Since this memory consumption is in addition to shared_buffers, be careful how high you set it; 2-10% of total memory, or 64MB, is a good starting point.
"top -c" will show you what is going on with work_mem. Greg Smith recommends a 2GB maximum. If log_temp_files shows temp-file activity in the log, increase work_mem to reduce temp file creation: temp files are created when a sort or hash does not fit in memory, so increasing work_mem may let that work stay cached in memory. Frost recommends 64MB. Temp files are usually located in $PGDATA/base/pgsql_tmp. A general rule of thumb is to make work_mem 2-3 times larger than the largest temp file. If query plans show multiple external sort nodes, you need to increase work_mem. Note: prior to PostgreSQL 9.4 there is a hard limit on the number of tuples that can be sorted in memory. It is also the case that very large work_mem or maintenance_work_mem settings are unlikely to help unless they result in a fully internal sort; there is evidence that the heap used for tuple sorting benefits from *lower* settings, sometimes as low as 64MB. This might be improved in 9.6, but until then don't expect large values for work_mem and maintenance_work_mem to accomplish what you expect. On EXPLAIN output, "Sort Method: external merge Disk: 2072kB" means the sort spilled to disk after work_mem + 2072kB was requested.
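A worst-case sketch of the multiplication effect described above. All plan shapes and counts here are hypothetical, chosen only to show the arithmetic:

```shell
# One complex query: each sort/hash node gets its own work_mem, and each
# parallel worker (plus the leader) gets its own copy per node.
WORK_MEM_MB=64
SORT_NODES=3            # hypothetical: sorts/hashes in one plan
PARALLEL_WORKERS=4      # hypothetical: workers launched for the query
PER_QUERY_MB=$(( WORK_MEM_MB * SORT_NODES * (PARALLEL_WORKERS + 1) ))  # +1 = leader
echo "one query could use up to ${PER_QUERY_MB}MB on top of shared_buffers"
```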
Recommended Short List for GUCs Table

 



Questions or problems regarding this web site should be directed to Support@sqlexec.com.
Copyright 2022 SQLEXEC LLC. All rights reserved.
Last modified: 11/18/22.