This PEP describes a proposed logging package for Python's standard library.

Basically the system involves the user creating one or more logger objects on which methods are called to log debugging notes, general information, warnings, errors etc.  Different logging 'levels' can be used to distinguish important messages from less important ones.
A registry of named singleton logger objects is maintained so that

    1) different logical logging streams (or 'channels') exist
       (say, one for 'zope.zodb' stuff and another for
       'mywebsite'-specific stuff)

    2) one does not have to pass logger object references around.
The system is configurable at runtime.  This configuration mechanism allows one to tune the level and type of logging done while not touching the application itself.

If a single logging mechanism is enshrined in the standard library, 1) logging is more likely to be done 'well', and 2) multiple libraries will be able to be integrated into larger applications which can be logged reasonably coherently.

This proposal was put together after having studied the following logging packages:

This shows a very simple example of how the logging package can be used to generate simple logging output on stderr.
    --------- mymodule.py -------------------------------
    import logging
    log = logging.getLogger("MyModule")

    def doIt():
        log.debug("Doin' stuff...")
        # do stuff...
        raise TypeError, "Bogus type error for testing"
    -----------------------------------------------------
    --------- myapp.py ----------------------------------
    import mymodule, logging

    logging.basicConfig()
    log = logging.getLogger("MyApp")

    log.info("Starting my app")

    try:
        mymodule.doIt()
    except Exception, e:
        log.exception("There was a problem.")
    log.info("Ending my app")
    -----------------------------------------------------
    $ python myapp.py
    INFO:MyApp: Starting my app
    DEBUG:MyModule: Doin' stuff...
    ERROR:MyApp: There was a problem.
    Traceback (most recent call last):
      File "myapp.py", line 9, in ?
        mymodule.doIt()
      File "mymodule.py", line 7, in doIt
        raise TypeError, "Bogus type error for testing"
    TypeError: Bogus type error for testing
    INFO:MyApp: Ending my app
The above example shows the default output format.  All aspects of the output format should be configurable, so that you could have output formatted like this:

    2002-04-19 07:56:58,174 MyModule   DEBUG - Doin' stuff...

or just

    Doin' stuff...
Applications make logging calls on Logger objects.  Loggers are organized in a hierarchical namespace and child Loggers inherit some logging properties from their parents in the namespace.

Logger names fit into a "dotted name" namespace, with dots (periods) indicating sub-namespaces.  The namespace of logger objects therefore corresponds to a single tree data structure.

    ""          is the root of the namespace
    "Zope"      would be a child node of the root
    "Zope.ZODB" would be a child node of "Zope"

These Logger objects create LogRecord objects which are passed to Handler objects for output.  Both Loggers and Handlers may use logging levels and (optionally) Filters to decide if they are interested in a particular LogRecord.  When it is necessary to output a LogRecord externally, a Handler can (optionally) use a Formatter to localize and format the message before sending it to an I/O stream.

Each Logger keeps track of a set of output Handlers.  By default all Loggers also send their output to all Handlers of their ancestor Loggers.  Loggers may, however, also be configured to ignore Handlers higher up the tree.
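A minimal sketch of this behaviour, assuming an attribute like propagate (as exposed by the logging module that grew out of this proposal); the exact mechanism is an implementation detail rather than something this PEP mandates:

    import logging

    logging.basicConfig()              # installs a handler on the root logger

    child = logging.getLogger("Zope.ZODB")
    child.warn("handled by the root logger's handler via the ancestor chain")

    # Give this logger its own handler and stop records travelling further up.
    child.addHandler(logging.StreamHandler())
    child.propagate = 0
    child.warn("now handled only by the handler attached to 'Zope.ZODB'")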
The APIs are structured so that calls on the Logger APIs can be cheap when logging is disabled.  If logging is disabled for a given log level, then the Logger can make a cheap comparison test and return.  If logging is enabled for a given log level, the Logger is still careful to minimize costs before passing the LogRecord into the Handlers.  In particular, localization and formatting (which are relatively expensive) are deferred until the Handler requests them.

The overall Logger hierarchy can also have a level associated with it, which takes precedence over the levels of individual Loggers.  This is done through a module-level function:
    def disable(lvl):
        """
        Do not generate any LogRecords for requests with a severity
        less than 'lvl'.
        """
        ...
The logging levels, in increasing order of importance, are:

    DEBUG
    INFO
    WARN
    ERROR
    CRITICAL
The term CRITICAL is used in preference to FATAL, which is used by log4j.  The levels are conceptually the same - that of a serious, or very serious, error.  However, FATAL implies death, which in Python implies a raised and uncaught exception, traceback, and exit.  Since the logging module does not enforce such an outcome from a FATAL-level log entry, it makes sense to use CRITICAL in preference to FATAL.

These are just integer constants, to allow simple comparison of importance.  Experience has shown that too many levels can be confusing, as they lead to subjective interpretation of which level should be applied to any particular log request.

Although the above levels are strongly recommended, the logging system should not be prescriptive.  Users may define their own levels, as well as the textual representation of any levels.  User-defined levels must, however, obey the constraints that they are all positive integers and that they increase in order of increasing severity.

User-defined logging levels are supported through two module-level functions:
    def getLevelName(lvl):
        """Return the text for level 'lvl'."""
        ...

    def addLevelName(lvl, lvlName):
        """
        Add the level 'lvl' with associated text 'lvlName', or set
        the textual representation of existing level 'lvl' to be
        'lvlName'.
        """
        ...
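For example, a user-defined level might be registered and used like this (the "NOTICE" name and the numeric value 25, assumed to fall between INFO and WARN, are purely illustrative):

    import logging

    NOTICE = 25                     # hypothetical level between INFO and WARN
    logging.addLevelName(NOTICE, "NOTICE")

    log = logging.getLogger("MyApp")
    log.log(NOTICE, "Worth noting, but not a warning")
    assert logging.getLevelName(NOTICE) == "NOTICE"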
Each Logger object keeps track of a log level (or threshold) that it is interested in, and discards log requests below that level.

A Manager class instance maintains the hierarchical namespace of named Logger objects.  Generations are denoted with dot-separated names: Logger "foo" is the parent of Loggers "foo.bar" and "foo.baz".

The Manager class instance is a singleton and is not directly exposed to users, who interact with it using various module-level functions.
The general logging method is:
    class Logger:
        def log(self, lvl, msg, *args, **kwargs):
            """Log 'str(msg) % args' at logging level 'lvl'."""
            ...
However, convenience functions are defined for each logging level:
    class Logger:
        def debug(self, msg, *args, **kwargs):
            ...
        def info(self, msg, *args, **kwargs):
            ...
        def warn(self, msg, *args, **kwargs):
            ...
        def error(self, msg, *args, **kwargs):
            ...
        def critical(self, msg, *args, **kwargs):
            ...

Only one keyword argument is recognized at present - "exc_info".  If true, the caller wants exception information to be provided in the logging output.  This mechanism is only needed if exception information needs to be provided at any logging level.  In the more common case, where exception information needs to be added to the log only when errors occur, i.e. at the ERROR level, then another convenience method is provided:
    class Logger:
        def exception(self, msg, *args):
            ...

This should only be called in the context of an exception handler, and is the preferred way of indicating a desire for exception information in the log.  The other convenience methods are intended to be called with exc_info only in the unusual situation where you might want to provide exception information in the context of an INFO message, for example.
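A brief sketch of both idioms (the handler bodies and logger name are illustrative only):

    import logging

    log = logging.getLogger("MyApp")

    try:
        1 / 0
    except ZeroDivisionError:
        # Preferred: log at ERROR level, with the current traceback attached.
        log.exception("Calculation failed")

    try:
        1 / 0
    except ZeroDivisionError:
        # Unusual case: attach traceback information to an INFO message.
        log.info("Recovered from an expected error", exc_info=1)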
The "msg" argument shown above will normally be a format string; however, it can be any object x for which str(x) returns the format string.  This facilitates, for example, the use of an object which fetches a locale-specific message for an internationalized/localized application, perhaps using the standard gettext module.  An outline example:
    class Message:
        """Represents a message"""
        def __init__(self, id):
            """Initialize with the message ID"""

        def __str__(self):
            """Return an appropriate localized message text"""

    ...

    logger.info(Message("abc"), ...)
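A slightly more concrete sketch of the same idea, using the standard gettext module (the message ID, logger name and Message class details are made up for illustration; with no translation catalogue installed, gettext simply returns the original text):

    import gettext
    import logging

    _ = gettext.gettext    # identity translation unless a catalogue is installed

    class Message:
        """Deferred, localized message - only rendered if the record is emitted."""
        def __init__(self, id, *args):
            self.id = id
            self.args = args

        def __str__(self):
            # Look up the (possibly translated) template and merge the arguments.
            return _(self.id) % self.args

    logging.basicConfig()
    log = logging.getLogger("MyApp")
    log.warn(Message("user %s logged in", "alice"))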
Gathering and formatting data for a log message may be expensive, and a waste if the logger was going to discard the message anyway.  To see if a request will be honoured by the logger, the isEnabledFor() method can be used:

    class Logger:
        def isEnabledFor(self, lvl):
            """
            Return true if requests at level 'lvl' will NOT be
            discarded.
            """
            ...

so instead of this expensive and possibly wasteful DOM to XML conversion:

    ...
    hamletStr = hamletDom.toxml()
    log.info(hamletStr)
    ...
one can do this:
    if log.isEnabledFor(logging.INFO):
        hamletStr = hamletDom.toxml()
        log.info(hamletStr)
When new loggers are created, they are initialized with a level which signifies "no level".  A level can be set explicitly using the setLevel() method:

    class Logger:
        def setLevel(self, lvl):
            ...

If a logger's level is not set, the system consults all its ancestors, walking up the hierarchy until an explicitly set level is found.  That is regarded as the "effective level" of the logger, and can be queried via the getEffectiveLevel() method:

    def getEffectiveLevel(self):
        ...
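For example (the logger names are chosen only for illustration):

    import logging

    parent = logging.getLogger("Zope")
    child = logging.getLogger("Zope.ZODB")

    parent.setLevel(logging.ERROR)

    # The child has no level of its own, so the hierarchy is consulted
    # and the parent's explicitly set level applies.
    assert child.getEffectiveLevel() == logging.ERROR
    assert not child.isEnabledFor(logging.INFO)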
Loggers are never instantiated directly.  Instead, a module-level function is used:

    def getLogger(name=None):
        ...

If no name is specified, the root logger is returned.  Otherwise, if a logger with that name exists, it is returned.  If not, a new logger is initialized and returned.  Here, "name" is synonymous with "channel name".

Users can specify a custom subclass of Logger to be used by the system when instantiating new loggers:

    def setLoggerClass(klass):
        ...

The passed class should be a subclass of Logger, and its __init__ method should call Logger.__init__.
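A minimal sketch of such a subclass (the class name and the extra behaviour are invented purely for illustration):

    import logging

    class CountingLogger(logging.Logger):
        """Hypothetical Logger subclass that counts the records it handles."""
        def __init__(self, name):
            logging.Logger.__init__(self, name)
            self.record_count = 0

        def handle(self, record):
            self.record_count += 1
            logging.Logger.handle(self, record)

    logging.setLoggerClass(CountingLogger)
    log = logging.getLogger("MyApp.stats")   # newly created loggers use the subclass
    assert isinstance(log, CountingLogger)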
Handlers are responsible for doing something useful with a given LogRecord.  The following core Handlers will be implemented:

    StreamHandler: A handler for writing to a file-like object.

    FileHandler: A handler for writing to a single file or set of
    rotating files.

    SocketHandler: A handler for writing to remote TCP ports.

    DatagramHandler: A handler for writing to UDP sockets, for
    low-cost logging.  Jeff Bauer already had such a system [5].

    MemoryHandler: A handler that buffers log records in memory
    until the buffer is full or a particular condition occurs [1].

    SMTPHandler: A handler for sending to email addresses via SMTP.

    SysLogHandler: A handler for writing to Unix syslog via UDP.

    NTEventLogHandler: A handler for writing to event logs on
    Windows NT, 2000 and XP.

    HTTPHandler: A handler for writing to a Web server with either
    GET or POST semantics.

Handlers can also have levels set for them using the setLevel() method:

    def setLevel(self, lvl):
        ...
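A sketch of attaching handlers with their own thresholds to a logger (the file name is illustrative; addHandler() is the association method provided by the logging module):

    import logging
    import sys

    log = logging.getLogger("MyApp")
    log.setLevel(logging.DEBUG)

    # Everything at DEBUG and above goes to stderr...
    console = logging.StreamHandler(sys.stderr)
    console.setLevel(logging.DEBUG)
    log.addHandler(console)

    # ...but only ERROR and above is written to the error log file.
    errors = logging.FileHandler("errors.log")
    errors.setLevel(logging.ERROR)
    log.addHandler(errors)

    log.debug("appears on stderr only")
    log.error("appears on stderr and in errors.log")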
The FileHandler can be set up to create a rotating set of log files.  In this case, the file name passed to the constructor is taken as a "base" file name.  Additional file names for the rotation are created by appending .1, .2, etc. to the base file name, up to a maximum as specified when rollover is requested.  The setRollover method is used to specify a maximum size for a log file and a maximum number of backup files in the rotation.

    def setRollover(maxBytes, backupCount):
        ...

If maxBytes is specified as zero, no rollover ever occurs and the log file grows indefinitely.  If a non-zero size is specified, when that size is about to be exceeded, rollover occurs.  The rollover method ensures that the base file name is always the most recent, .1 is the next most recent, .2 the next most recent after that, and so on.
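A usage sketch of the interface as proposed above (the base file name and limits are illustrative; this follows the setRollover() signature given in this PEP rather than any particular shipped implementation):

    import logging

    # "app.log" is the base file name; backups become app.log.1, app.log.2, ...
    fh = logging.FileHandler("app.log")

    # Roll over once the file would exceed roughly 1 MB, keeping at most 5 backups.
    fh.setRollover(1024 * 1024, 5)

    log = logging.getLogger("MyApp")
    log.addHandler(fh)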
There are many additional handlers implemented in the test/example scripts provided with [6] - for example, XMLHandler and SOAPHandler.
A LogRecord acts as a receptacle for information about a logging event.  It is little more than a dictionary, though it does define a getMessage method which merges a message with optional runtime arguments.
A Formatter is responsible for converting a LogRecord to a string representation.  A Handler may call its Formatter before writing a record.  The following core Formatters will be implemented:

    Formatter: Provide printf-like formatting, using the % operator.

    BufferingFormatter: Provide formatting for multiple messages,
    with header and trailer formatting support.

Formatters are associated with Handlers by calling setFormatter() on a handler:

    def setFormatter(self, form):
        ...

Formatters use the % operator to format the logging message.  The format string should contain %(name)x and the attribute dictionary of the LogRecord is used to obtain message-specific data.  The following attributes are provided:
    %(name)s            | Name of the logger (logging channel)
    %(levelno)s         | Numeric logging level for the message
                        | (DEBUG, INFO, WARN, ERROR, CRITICAL)
    %(levelname)s       | Text logging level for the message ("DEBUG",
                        | "INFO", "WARN", "ERROR", "CRITICAL")
    %(pathname)s        | Full pathname of the source file where the
                        | logging call was issued (if available)
    %(filename)s        | Filename portion of pathname
    %(module)s          | Module from which logging call was made
    %(lineno)d          | Source line number where the logging call was
                        | issued (if available)
    %(created)f         | Time when the LogRecord was created (time.time()
                        | return value)
    %(asctime)s         | Textual time when the LogRecord was created
    %(msecs)d           | Millisecond portion of the creation time
    %(relativeCreated)d | Time in milliseconds when the LogRecord was
                        | created, relative to the time the logging module
                        | was loaded (typically at application startup time)
    %(thread)d          | Thread ID (if available)
    %(message)s         | The result of record.getMessage(), computed just
                        | as the record is emitted
If a formatter sees that the format string includes "%(asctime)s", the creation time is formatted into the LogRecord's asctime attribute.  To allow flexibility in formatting dates, Formatters are initialized with a format string for the message as a whole, and a separate format string for date/time.  The date/time format string should be in time.strftime format.  The default value for the message format is "%(message)s".  The default date/time format is ISO8601.
The formatter uses a class attribute, "converter", to indicate how to convert a time from seconds to a tuple.  By default, the value of "converter" is "time.localtime".  If needed, a different converter (e.g. "time.gmtime") can be set on an individual formatter instance, or the class attribute changed to affect all formatter instances.
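Putting these pieces together, a sketch of configuring a custom format (the format strings and handler choice are illustrative):

    import logging
    import sys
    import time

    handler = logging.StreamHandler(sys.stderr)
    formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s - %(message)s",
                                  "%Y-%m-%d %H:%M:%S")
    formatter.converter = time.gmtime    # report times in UTC rather than local time
    handler.setFormatter(formatter)

    log = logging.getLogger("MyModule")
    log.addHandler(handler)
    log.error("Something went wrong")
    # produces a line like: 2002-04-19 07:56:58 MyModule ERROR - Something went wrong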
When level-based filtering is insufficient, a Filter can be called by a Logger or Handler to decide if a LogRecord should be output.  Loggers and Handlers can have multiple filters installed, and any one of them can veto a LogRecord being output.

    class Filter:
        def filter(self, record):
            """
            Return a value indicating true if the record is to be
            processed.  Possibly modify the record, if deemed
            appropriate by the filter.
            """

The default behaviour allows a Filter to be initialized with a Logger name.  This will only allow through events which are generated using the named logger or any of its children.  For example, a filter initialized with "A.B" will allow events logged by loggers "A.B", "A.B.C", "A.B.C.D", "A.B.D" etc. but not "A.BB", "B.A.B" etc.  If initialized with the empty string, all events are passed by the Filter.  This filter behaviour is useful when it is desired to focus attention on one particular area of an application; the focus can be changed simply by changing a filter attached to the root logger.
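For instance, a name-based filter can be attached to a handler so that only one subtree of loggers reaches it (the logger names are illustrative; addFilter() is the attachment method provided by the logging module):

    import logging
    import sys

    handler = logging.StreamHandler(sys.stderr)
    handler.addFilter(logging.Filter("A.B"))    # only the "A.B" subtree passes

    root = logging.getLogger("")
    root.addHandler(handler)

    logging.getLogger("A.B.C").warn("this record is output")
    logging.getLogger("A.BB").warn("this record is filtered out")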
There are many examples of Filters provided in [6].

The main benefit of a logging system like this is that one can control how much and what logging output one gets from an application without changing that application's source code.  Therefore, although configuration can be performed through the logging API, it must also be possible to change the logging configuration without changing an application at all.  For long-running programs like Zope, it should be possible to change the logging configuration while the program is running.
Configuration includes the following:

    - What logging level a logger or handler should be interested in.

    - What handlers should be attached to which loggers.

    - What filters should be attached to which handlers and loggers.

    - Specifying attributes specific to certain handlers and filters.
In general each application will have its own requirements for how a user may configure logging output.  However, each application will specify the required configuration to the logging system through a standard mechanism.

The most simple configuration is that of a single handler, writing to stderr, attached to the root logger.  This configuration is set up by calling the basicConfig() function once the logging module has been imported.

    def basicConfig():
        ...
For more sophisticated configurations, this PEP makes no specific proposals, for the following reasons:

The reference implementation [6] has a working configuration file format, implemented for the purpose of proving the concept and suggesting one possible alternative.  It may be that separate extension modules, not part of the core Python distribution, are created for logging configuration and log viewing, supplemental handlers and other features which are not of interest to the bulk of the community.

The logging system should support thread-safe operation without any special action needing to be taken by its users.

To support use of the logging mechanism in short scripts and small applications, module-level functions debug(), info(), warn(), error(), critical() and exception() are provided.  These work in the same way as the correspondingly named methods of Logger - in fact they delegate to the corresponding methods on the root logger.  A further convenience provided by these functions is that if no configuration has been done, basicConfig() is automatically called.
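For a throwaway script, this means logging can be a one-liner per message, with no explicit configuration:

    import logging

    # The first call below triggers basicConfig() automatically, so output
    # goes to stderr via a handler attached to the root logger.
    logging.warn("Disk space is low")
    logging.error("Could not open configuration file")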
At application exit, all handlers can be flushed by calling the function:
    def shutdown():
        ...
This will flush and close all handlers.
The reference implementation is Vinay Sajip's logging module [6].

The reference implementation is implemented as a single module.  This offers the simplest interface - all users have to do is "import logging" and they are in a position to use all the functionality available.
This document has been placed in the public domain.