17.4. Managing Kernel Resources

Postgres Pro can sometimes exhaust various operating system resource limits, especially when multiple copies of the server are running on the same system, or in very large installations. This section explains the kernel resources used by Postgres Pro and the steps you can take to resolve problems related to kernel resource consumption.

17.4.1. Shared Memory and Semaphores

Shared memory and semaphores are collectively referred to as System V IPC (together with message queues, which are not relevant for Postgres Pro). Except on Windows, where Postgres Pro provides its own replacement implementation of these facilities, these facilities are required in order to run Postgres Pro.

The complete lack of these facilities is usually manifested by an Illegal system call error upon server start. In that case there is no alternative but to reconfigure your kernel. Postgres Pro won't work without them. This situation is rare, however, among modern operating systems.

When Postgres Pro exceeds one of the various hard IPC limits, the server will refuse to start and should leave an instructive error message describing the problem and what to do about it. (See also Section 17.3.1.) The relevant kernel parameters are named consistently across different systems; Table 17.1 gives an overview. The methods to set them, however, vary. Suggestions for some platforms are given below.

Note

Prior to PostgreSQL 9.3, the amount of System V shared memory required to start the server was much larger. If you are running an older version of the server, please consult the documentation for your server version.

Table 17.1. System V IPC Parameters

Name    Description                                               Reasonable values
SHMMAX  Maximum size of shared memory segment (bytes)             at least 1 kB (more if running many copies of the server)
SHMMIN  Minimum size of shared memory segment (bytes)             1
SHMALL  Total amount of shared memory available (bytes or pages)  if bytes, same as SHMMAX; if pages, ceil(SHMMAX/PAGE_SIZE)
SHMSEG  Maximum number of shared memory segments per process      only 1 segment is needed, but the default is much higher
SHMMNI  Maximum number of shared memory segments system-wide      like SHMSEG plus room for other applications
SEMMNI  Maximum number of semaphore identifiers (i.e., sets)      at least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16)
SEMMNS  Maximum number of semaphores system-wide                  ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) * 17 plus room for other applications
SEMMSL  Maximum number of semaphores per set                      at least 17
SEMMAP  Number of entries in semaphore map                        see text
SEMVMX  Maximum value of semaphore                                at least 1000 (the default is often 32767; do not change unless necessary)

Postgres Pro requires a few bytes of System V shared memory (typically 48 bytes, on 64-bit platforms) for each copy of the server. On most modern operating systems, this amount can easily be allocated. However, if you are running many copies of the server, or if other applications are also using System V shared memory, it may be necessary to increase SHMMAX, the maximum size in bytes of a shared memory segment, or SHMALL, the total amount of System V shared memory system-wide. Note that SHMALL is measured in pages rather than bytes on many systems.

Less likely to cause problems is the minimum size for shared memory segments (SHMMIN), which should be at most approximately 32 bytes for Postgres Pro (it is usually just 1). The maximum number of segments system-wide (SHMMNI) or per-process (SHMSEG) are unlikely to cause a problem unless your system has them set to zero.

Postgres Pro uses one semaphore per allowed connection (max_connections), allowed autovacuum worker process (autovacuum_max_workers) and allowed background process (max_worker_processes), in sets of 16. Each such set will also contain a 17th semaphore which contains a magic number, to detect collision with semaphore sets used by other applications. The maximum number of semaphores in the system is set by SEMMNS, which consequently must be at least as high as max_connections plus autovacuum_max_workers plus max_worker_processes, plus one extra for each 16 allowed connections plus workers (see the formula in Table 17.1). The parameter SEMMNI determines the limit on the number of semaphore sets that can exist on the system at one time. Hence this parameter must be at least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16). Lowering the number of allowed connections is a temporary workaround for failures, which are usually confusingly worded No space left on device, from the function semget.
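As a worked example, the formula can be evaluated with shell arithmetic. The configuration values used here (max_connections = 100, autovacuum_max_workers = 3, max_worker_processes = 8) are the stock defaults and serve only as an illustration; substitute your own settings:

```shell
procs=$(( 100 + 3 + 8 + 5 ))     # max_connections + autovacuum_max_workers + max_worker_processes + 5 = 116
semmni=$(( (procs + 15) / 16 ))  # integer form of ceil(116 / 16) = 8 semaphore sets
semmns=$(( semmni * 17 ))        # 8 sets of 16 semaphores plus one magic-number semaphore each = 136
echo "SEMMNI >= $semmni, SEMMNS >= $semmns"
```

With these defaults, SEMMNI must be at least 8 and SEMMNS at least 136, plus room for other applications.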

In some cases it might also be necessary to increase SEMMAP to be at least on the order of SEMMNS. If the system has this parameter (many do not), it defines the size of the semaphore resource map, in which each contiguous block of available semaphores needs an entry. When a semaphore set is freed, it is either added to an existing entry that is adjacent to the freed block or it is registered under a new map entry. If the map is full, the freed semaphores get lost (until reboot). Fragmentation of the semaphore space could over time lead to fewer available semaphores than there should be.

The SEMMSL parameter, which determines how many semaphores can be in a set, must be at least 17 for Postgres Pro.

Various other settings related to semaphore undo, such as SEMMNU and SEMUME, do not affect Postgres Pro.

AIX

At least as of version 5.1, it should not be necessary to do any special configuration for such parameters as SHMMAX, as it appears this is configured to allow all memory to be used as shared memory. That is the sort of configuration commonly used for other databases such as DB/2.

It might, however, be necessary to modify the global ulimit information in /etc/security/limits, as the default hard limits for file sizes (fsize) and numbers of files (nofiles) might be too low.

FreeBSD

The default IPC settings can be changed using the sysctl or loader interfaces. The following parameters can be set using sysctl:

# sysctl kern.ipc.shmall=32768
# sysctl kern.ipc.shmmax=134217728

To make these settings persist over reboots, modify /etc/sysctl.conf.
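Note that kern.ipc.shmall is counted in pages on FreeBSD, so the two example values above are consistent with each other (assuming the common 4096-byte page):

```shell
echo $(( 32768 * 4096 ))   # 134217728 bytes, matching the kern.ipc.shmmax value above
```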

These semaphore-related settings are read-only as far as sysctl is concerned, but can be set in /boot/loader.conf:

kern.ipc.semmni=256
kern.ipc.semmns=512

After modifying that file, a reboot is required for the new settings to take effect.

You might also want to configure your kernel to lock shared memory into RAM and prevent it from being paged out to swap. This can be accomplished using the sysctl setting kern.ipc.shm_use_phys.

If running in FreeBSD jails by enabling sysctl's security.jail.sysvipc_allowed, postmasters running in different jails should be run by different operating system users. This improves security because it prevents non-root users from interfering with shared memory or semaphores in different jails, and it allows the Postgres Pro IPC cleanup code to function properly. (In FreeBSD 6.0 and later the IPC cleanup code does not properly detect processes in other jails, preventing the running of postmasters on the same port in different jails.)

FreeBSD versions before 4.0 work like old OpenBSD (see below).

NetBSD

In NetBSD 5.0 and later, IPC parameters can be adjusted using sysctl, for example:

# sysctl -w kern.ipc.semmni=100

To make these settings persist over reboots, modify /etc/sysctl.conf.

You will usually want to increase kern.ipc.semmni and kern.ipc.semmns, as NetBSD's default settings for these are uncomfortably small.

You might also want to configure your kernel to lock shared memory into RAM and prevent it from being paged out to swap. This can be accomplished using the sysctl setting kern.ipc.shm_use_phys.

NetBSD versions before 5.0 work like old OpenBSD (see below), except that kernel parameters should be set with the keyword options not option.

OpenBSD

In OpenBSD 3.3 and later, IPC parameters can be adjusted using sysctl, for example:

# sysctl kern.seminfo.semmni=100

To make these settings persist over reboots, modify /etc/sysctl.conf.

You will usually want to increase kern.seminfo.semmni and kern.seminfo.semmns, as OpenBSD's default settings for these are uncomfortably small.

In older OpenBSD versions, you will need to build a custom kernel to change the IPC parameters. Make sure that the options SYSVSHM and SYSVSEM are enabled, too. (They are by default.) The following shows an example of how to set the various parameters in the kernel configuration file:

option        SYSVSHM
option        SHMMAXPGS=4096
option        SHMSEG=256
option        SYSVSEM
option        SEMMNI=256
option        SEMMNS=512
option        SEMMNU=256

HP-UX

The default settings tend to suffice for normal installations. On HP-UX 10, the factory default for SEMMNS is 128, which might be too low for larger database sites.

IPC parameters can be set in the System Administration Manager (SAM) under Kernel Configuration → Configurable Parameters. Choose Create A New Kernel when you're done.

Linux

The default maximum segment size is 32 MB, and the default maximum total size is 2097152 pages. A page is almost always 4096 bytes except in unusual kernel configurations with huge pages (use getconf PAGE_SIZE to verify).
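Taken together, those defaults imply a total shared memory ceiling that is easy to check with arithmetic (assuming the usual 4096-byte page size):

```shell
getconf PAGE_SIZE            # typically prints 4096
echo $(( 2097152 * 4096 ))   # default total in bytes: 8589934592, i.e. 8 GB
```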

The shared memory size settings can be changed via the sysctl interface. For example, to allow 16 GB:

$ sysctl -w kernel.shmmax=17179869184
$ sysctl -w kernel.shmall=4194304

In addition these settings can be preserved between reboots in the file /etc/sysctl.conf. Doing that is highly recommended.
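The two values in the 16 GB example are related by the page size: kernel.shmall is the byte count divided by 4096 (assuming 4 kB pages):

```shell
echo $(( 17179869184 / 4096 ))   # 4194304 pages, the kernel.shmall value above
```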

Ancient distributions might not have the sysctl program, but equivalent changes can be made by manipulating the /proc file system:

$ echo 17179869184 >/proc/sys/kernel/shmmax
$ echo 4194304 >/proc/sys/kernel/shmall

The remaining defaults are quite generously sized, and usually do not require changes.

OS X

The recommended method for configuring shared memory in OS X is to create a file named /etc/sysctl.conf, containing variable assignments such as:

kern.sysv.shmmax=4194304
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=1024

Note that in some OS X versions, all five shared-memory parameters must be set in /etc/sysctl.conf, else the values will be ignored.

Beware that recent releases of OS X ignore attempts to set SHMMAX to a value that isn't an exact multiple of 4096.

SHMALL is measured in 4 kB pages on this platform.
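The sample settings above are therefore internally consistent: 1024 pages of 4 kB each cover exactly the 4194304-byte SHMMAX:

```shell
echo $(( 1024 * 4096 ))   # 4194304 bytes, matching kern.sysv.shmmax above
```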

In older OS X versions, you will need to reboot to have changes in the shared memory parameters take effect. As of 10.5 it is possible to change all but SHMMNI on the fly, using sysctl. But it's still best to set up your preferred values via /etc/sysctl.conf, so that the values will be kept across reboots.

The file /etc/sysctl.conf is only honored in OS X 10.3.9 and later. If you are running a previous 10.3.x release, you must edit the file /etc/rc and change the values in the following commands:

sysctl -w kern.sysv.shmmax
sysctl -w kern.sysv.shmmin
sysctl -w kern.sysv.shmmni
sysctl -w kern.sysv.shmseg
sysctl -w kern.sysv.shmall

Note that /etc/rc is usually overwritten by OS X system updates, so you should expect to have to redo these edits after each update.

In OS X 10.2 and earlier, instead edit these commands in the file /System/Library/StartupItems/SystemTuning/SystemTuning.

SCO OpenServer

In the default configuration, only 512 kB of shared memory per segment is allowed. To increase the setting, first change to the directory /etc/conf/cf.d. To display the current value of SHMMAX, run:

./configure -y SHMMAX

To set a new value for SHMMAX, run:

./configure SHMMAX=value

where value is the new value you want to use (in bytes). After setting SHMMAX, rebuild the kernel:

./link_unix

and reboot.

Solaris 2.6 to 2.9 (Solaris 6 to Solaris 9)

The relevant settings can be changed in /etc/system, for example:

set shmsys:shminfo_shmmax=0x2000000
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32

You need to reboot for the changes to take effect. See also http://sunsite.uakom.sk/sunworldonline/swol-09-1997/swol-09-insidesolaris.html for information on shared memory under older versions of Solaris.

Solaris 2.10 (Solaris 10) and later
OpenSolaris

In Solaris 10 and later, and OpenSolaris, the default shared memory and semaphore settings are good enough for most Postgres Pro applications. Solaris now defaults to a SHMMAX of one-quarter of system RAM. To further adjust this setting, use a project setting associated with the postgres user. For example, run the following as root:

projadd -c "Postgres Pro DB User" -K "project.max-shm-memory=(privileged,8GB,deny)" -U postgres -G postgres user.postgres

This command adds the user.postgres project and sets the shared memory maximum for the postgres user to 8GB, and takes effect the next time that user logs in, or when you restart Postgres Pro (not reload). The above assumes that Postgres Pro is run by the postgres user in the postgres group. No server reboot is required.

Other recommended kernel setting changes for database servers which will have a large number of connections are:

project.max-shm-ids=(priv,32768,deny)
project.max-sem-ids=(priv,4096,deny)
project.max-msg-ids=(priv,4096,deny)

Additionally, if you are running Postgres Pro inside a zone, you may need to raise the zone resource usage limits as well. See "Chapter 2: Projects and Tasks" in the System Administrator's Guide for more information on projects and prctl.

UnixWare

On UnixWare 7, the maximum size for shared memory segments is 512 kB in the default configuration. To display the current value of SHMMAX, run:

/etc/conf/bin/idtune -g SHMMAX

which displays the current, default, minimum, and maximum values. To set a new value for SHMMAX, run:

/etc/conf/bin/idtune SHMMAX value

where value is the new value you want to use (in bytes). After setting SHMMAX, rebuild the kernel:

/etc/conf/bin/idbuild -B

and reboot.

17.4.2. systemd RemoveIPC

If systemd is in use, some care must be taken that IPC resources (shared memory and semaphores) are not prematurely removed by the operating system. This is especially of concern when installing Postgres Pro from source. Users of distribution packages of Postgres Pro are less likely to be affected, as the postgres user is then normally created as a system user.

The setting RemoveIPC in logind.conf controls whether IPC objects are removed when a user fully logs out. System users are exempt. This setting defaults to on in stock systemd, but some operating system distributions default it to off.

A typical observed effect when this setting is on is that the semaphore objects used by a Postgres Pro server are removed at apparently random times, leading to the server crashing with log messages like

LOG: semctl(1234567890, 0, IPC_RMID, ...) failed: Invalid argument

Different types of IPC objects (shared memory vs. semaphores, System V vs. POSIX) are treated slightly differently by systemd, so one might observe that some IPC resources are not removed in the same way as others. But it is not advisable to rely on these subtle differences.

A user logging out might happen as part of a maintenance job or manually when an administrator logs in as the postgres user or something similar, so it is hard to prevent in general.

What is a system user is determined at systemd compile time from the SYS_UID_MAX setting in /etc/login.defs.

Packaging and deployment scripts should be careful to create the postgres user as a system user by using useradd -r, adduser --system, or equivalent.

Alternatively, if the user account was created incorrectly or cannot be changed, it is recommended to set

RemoveIPC=no

in /etc/systemd/logind.conf or another appropriate configuration file.

Caution

At least one of these two things has to be ensured, or the Postgres Pro server will be very unreliable.

17.4.3. Resource Limits

Unix-like operating systems enforce various kinds of resource limits that might interfere with the operation of your Postgres Pro server. Of particular importance are limits on the number of processes per user, the number of open files per process, and the amount of memory available to each process. Each of these has a hard and a soft limit. The soft limit is what actually counts, but it can be changed by the user up to the hard limit. The hard limit can only be changed by the root user. The system call setrlimit is responsible for setting these parameters. The shell's built-in command ulimit (Bourne shells) or limit (csh) is used to control the resource limits from the command line. On BSD-derived systems the file /etc/login.conf controls the various resource limits set during login. See the operating system documentation for details. The relevant parameters are maxproc, openfiles, and datasize. For example:

default:\
...
        :datasize-cur=256M:\
        :maxproc-cur=256:\
        :openfiles-cur=256:\
...

(-cur is the soft limit. Append -max to set the hard limit.)
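From a Bourne-style shell, the two limits on open files can be inspected directly; ulimit -Sn with a value can then raise the soft limit, up to the hard limit:

```shell
ulimit -Hn   # hard limit on open files for this shell
ulimit -Sn   # soft limit, the one that is actually enforced
```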

Kernels can also have system-wide limits on some resources.

  • On Linux, /proc/sys/fs/file-max determines the maximum number of open files that the kernel will support. It can be changed by writing a different number into the file or by adding an assignment in /etc/sysctl.conf. The maximum limit of files per process is fixed at the time the kernel is compiled; see /usr/src/linux/Documentation/proc.txt for more information.

The Postgres Pro server uses one process per connection, so you should provide for at least as many processes as allowed connections, in addition to what you need for the rest of your system. This is usually not a problem, but if you run several servers on one machine things might get tight.

The factory default limit on open files is often set to socially friendly values that allow many users to coexist on a machine without using an inappropriate fraction of the system resources. If you run many servers on a machine this is perhaps what you want, but on dedicated servers you might want to raise this limit.

On the other side of the coin, some systems allow individual processes to open large numbers of files; if more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the system-wide limit, you can set Postgres Pro's max_files_per_process configuration parameter to limit the consumption of open files.

17.4.4. Linux Memory Overcommit

In Linux 2.4 and later, the default virtual memory behavior is not optimal for Postgres Pro. Because of the way that the kernel implements memory overcommit, the kernel might terminate the Postgres Pro postmaster (the master server process) if the memory demands of either Postgres Pro or another process cause the system to run out of virtual memory.

If this happens, you will see a kernel message that looks like this (consult your system documentation and configuration on where to look for such a message):

Out of Memory: Killed process 12345 (postgres).

This indicates that the postgres process has been terminated due to memory pressure. Although existing database connections will continue to function normally, no new connections will be accepted. To recover, Postgres Pro will need to be restarted.

One way to avoid this problem is to run Postgres Pro on a machine where you can be sure that other processes will not run the machine out of memory. If memory is tight, increasing the swap space of the operating system can help avoid the problem, because the out-of-memory (OOM) killer is invoked only when physical memory and swap space are exhausted.

If Postgres Pro itself is the cause of the system running out of memory, you can avoid the problem by changing your configuration. In some cases, it may help to lower memory-related configuration parameters, particularly shared_buffers and work_mem. In other cases, the problem may be caused by allowing too many connections to the database server itself. In many cases, it may be better to reduce max_connections and instead make use of external connection-pooling software.

On Linux 2.6 and later, it is possible to modify the kernel's behavior so that it will not overcommit memory. Although this setting will not prevent the OOM killer from being invoked altogether, it will lower the chances significantly and will therefore lead to more robust system behavior. This is done by selecting strict overcommit mode via sysctl:

sysctl -w vm.overcommit_memory=2

or placing an equivalent entry in /etc/sysctl.conf. You might also wish to modify the related setting vm.overcommit_ratio. For details see the kernel documentation file Documentation/vm/overcommit-accounting.
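The current mode can be read back without root privileges; the meaning of the three values is fixed by the kernel:

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
```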

Another approach, which can be used with or without altering vm.overcommit_memory, is to set the process-specific OOM score adjustment value for the postmaster process to -1000, thereby guaranteeing it will not be targeted by the OOM killer. The simplest way to do this is to execute

echo -1000 > /proc/self/oom_score_adj

in the postmaster's startup script just before invoking the postmaster. Note that this action must be done as root, or it will have no effect; so a root-owned startup script is the easiest place to do it. If you do this, you should also set these environment variables in the startup script before invoking the postmaster:

export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
export PG_OOM_ADJUST_VALUE=0

These settings will cause postmaster child processes to run with the normal OOM score adjustment of zero, so that the OOM killer can still target them at need. You could use some other value for PG_OOM_ADJUST_VALUE if you want the child processes to run with some other OOM score adjustment. (PG_OOM_ADJUST_VALUE can also be omitted, in which case it defaults to zero.) If you do not set PG_OOM_ADJUST_FILE, the child processes will run with the same OOM score adjustment as the postmaster, which is unwise since the whole point is to ensure that the postmaster has a preferential setting.

Older Linux kernels do not offer /proc/self/oom_score_adj, but may have a previous version of the same functionality called /proc/self/oom_adj. This works the same except the disable value is -17, not -1000.

Note

Some vendors' Linux 2.4 kernels are reported to have early versions of the 2.6 overcommit sysctl parameter. However, setting vm.overcommit_memory to 2 on a 2.4 kernel that does not have the relevant code will make things worse, not better. It is recommended that you inspect the actual kernel source code (see the function vm_enough_memory in the file mm/mmap.c) to verify what is supported in your kernel before you try this in a 2.4 installation. The presence of the overcommit-accounting documentation file should not be taken as evidence that the feature is there. If in any doubt, consult a kernel expert or your kernel vendor.

17.4.5. Linux Huge Pages

Using huge pages reduces overhead when using large contiguous chunks of memory, as Postgres Pro does, particularly when using large values of shared_buffers. To use this feature in Postgres Pro you need a kernel with CONFIG_HUGETLBFS=y and CONFIG_HUGETLB_PAGE=y. You will also have to adjust the kernel setting vm.nr_hugepages. To estimate the number of huge pages needed, start Postgres Pro without huge pages enabled and check the postmaster's VmPeak value, as well as the system's huge page size, using the /proc file system. This might look like:

$ head -1 $PGDATA/postmaster.pid
4170
$ grep ^VmPeak /proc/4170/status
VmPeak:  6490428 kB
$ grep ^Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB

6490428 / 2048 gives approximately 3169.154, so in this example we need at least 3170 huge pages, which we can set with:

$ sysctl -w vm.nr_hugepages=3170
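The rounding-up in this calculation can be reproduced with integer arithmetic, using the identity ceil(a/b) = (a + b - 1) / b:

```shell
kb=6490428   # VmPeak from the example above, in kB
hp=2048      # Hugepagesize, in kB
echo $(( (kb + hp - 1) / hp ))   # 3170 huge pages
```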

A larger setting would be appropriate if other programs on the machine also need huge pages. Don't forget to add this setting to /etc/sysctl.conf so that it will be reapplied after reboots.

Sometimes the kernel is not able to allocate the desired number of huge pages immediately, so it might be necessary to repeat the command or to reboot. (Immediately after a reboot, most of the machine's memory should be available to convert into huge pages.) To verify the huge page allocation situation, use:

$ grep Huge /proc/meminfo

It may also be necessary to give the database server's operating system user permission to use huge pages by setting vm.hugetlb_shm_group via sysctl, and/or give permission to lock memory with ulimit -l.
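Both settings can be inspected without root privileges (Linux-specific paths; changing them requires root):

```shell
cat /proc/sys/vm/hugetlb_shm_group   # group id permitted to use SysV huge pages (0 by default)
ulimit -l                            # locked-memory limit for this shell, in kB
```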

The default behavior for huge pages in Postgres Pro is to use them when possible and to fall back to normal pages when failing. To enforce the use of huge pages, you can set huge_pages to on in postgresql.conf. Note that with this setting Postgres Pro will fail to start if not enough huge pages are available.

For a detailed description of the Linux huge pages feature have a look at https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt.

