24.3. Log File Maintenance

It is a good idea to save the database server's log output somewhere, rather than just discarding it via /dev/null. The log output is invaluable when diagnosing problems.

Note

The server log can contain sensitive information and needs to be protected, no matter how or where it is stored, or the destination to which it is routed. For example, some DDL statements might contain plaintext passwords or other authentication details. Statements logged at the ERROR level might show the SQL source code for applications and might also contain some parts of data rows. Recording data, events and related information is the intended function of this facility, so this is not a leakage or a bug. Please ensure the server logs are visible only to appropriately authorized people.

Log output tends to be voluminous (especially at higher debug levels) so you won't want to save it indefinitely. You need to rotate the log files so that new log files are started and old ones removed after a reasonable period of time.

If you simply direct the stderr of postgres into a file, you will have log output, but the only way to truncate the log file is to stop and restart the server. This might be acceptable if you are using PostgreSQL in a development environment, but few production servers would find this behavior acceptable.
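
In practice this amounts to starting the server with something like the following (a minimal sketch; the data directory and log file path shown are only examples):

postgres -D /usr/local/pgsql/data >/var/log/postgresql.log 2>&1 &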

A better approach is to send the server's stderr output to some type of log rotation program. There is a built-in log rotation facility, which you can use by setting the configuration parameter logging_collector to true in postgresql.conf. The control parameters for this program are described in Section 19.8.1. You can also use this approach to capture the log data in machine readable CSV (comma-separated values) format.
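
For illustration, such a setup in postgresql.conf might look like the following sketch; the directory, file name pattern, and rotation thresholds are merely example values:

logging_collector = on
log_directory = 'log'            # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d            # start a new log file once a day
log_rotation_size = 10MB         # ... or when the current file reaches this size
#log_destination = 'csvlog'      # uncomment for machine-readable CSV output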

Alternatively, you might prefer to use an external log rotation program if you have one that you are already using with other server software. For example, the rotatelogs tool included in the Apache distribution can be used with PostgreSQL. One way to do this is to pipe the server's stderr output to the desired program. If you start the server with pg_ctl, then stderr is already redirected to stdout, so you just need a pipe command, for example:

pg_ctl start | rotatelogs /var/log/pgsql_log 86400

You can combine these approaches by setting up logrotate to collect log files produced by the PostgreSQL built-in logging collector. In this case, the logging collector defines the names and location of the log files, while logrotate periodically archives these files. When initiating log rotation, logrotate must ensure that the application sends further output to the new file. This is commonly done with a postrotate script that sends a SIGHUP signal to the application, which then reopens the log file. In PostgreSQL, you can run pg_ctl with the logrotate option instead. When the server receives this command, it either switches to a new log file or reopens the existing file, depending on the logging configuration (see Section 19.8.1).
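
A corresponding logrotate configuration might look roughly like this sketch; the log directory, the retention settings, and the way pg_ctl is invoked as the postgres user are assumptions to adapt to your installation:

/var/lib/postgresql/data/log/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        # ask the running server to switch to a new log file (or reopen the current one)
        su - postgres -c "pg_ctl logrotate -D /var/lib/postgresql/data"
    endscript
}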

Note

When using static log file names, the server might fail to reopen the log file if the max open file limit is reached or a file table overflow occurs. In this case, log messages are sent to the old log file until a successful log rotation. If logrotate is configured to compress the log file and delete it, the server may lose the messages logged in this time frame. To avoid this issue, you can configure the logging collector to dynamically assign log file names and use a prerotate script to ignore open log files.
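
One way to implement such a prerotate check is sketched below; it assumes logrotate's default non-shared scripts (where the log file name is passed as the first argument) and uses fuser merely as one illustrative way to detect files that are still open:

prerotate
    # hypothetical check: skip rotation of any file the server still has open
    if fuser -s "$1"; then
        exit 1
    fi
endscript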

Another production-grade approach to managing log output is to send it to syslog and let syslog deal with file rotation. To do this, set the configuration parameter log_destination to syslog (to log to syslog only) in postgresql.conf. Then you can send a SIGHUP signal to the syslog daemon whenever you want to force it to start writing a new log file. If you want to automate log rotation, the logrotate program can be configured to work with log files from syslog.
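
The corresponding postgresql.conf settings might look like this; the facility and program name shown are the usual defaults and appear here only for illustration:

log_destination = 'syslog'
syslog_facility = 'LOCAL0'       # which syslog facility to use
syslog_ident = 'postgres'        # program name used to tag the messages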

On many systems, however, syslog is not very reliable, particularly with large log messages; it might truncate or drop messages just when you need them the most. Also, on Linux, syslog will flush each message to disk, yielding poor performance. (You can use a "-" at the start of the file name in the syslog configuration file to disable syncing.)
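
With a traditional syslogd, such a configuration line might look like this sketch (the facility matches the example above, and the target file name is only an example; rsyslog ignores the leading "-" because it does not sync by default):

local0.*    -/var/log/postgresql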

Note that all the solutions described above take care of starting new log files at configurable intervals, but they do not handle deletion of old, no-longer-useful log files. You will probably want to set up a batch job to periodically delete old log files. Another possibility is to configure the rotation program so that old log files are overwritten cyclically.
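
As a sketch, a daily cron job along the following lines could handle the deletion; the log directory and the 30-day retention period are only examples:

find /var/lib/postgresql/data/log -name '*.log' -mtime +30 -delete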

pgBadger is an external project that does sophisticated log file analysis. check_postgres provides Nagios alerts when important messages appear in the log files, as well as detection of many other extraordinary conditions.

