Logging Cookbook
Release 3.13.0
Contents
10 Use of contextvars
15 Customizing LogRecord
17 Subclassing QueueHandler and QueueListener - a pynng example
17.1 Subclass QueueListener
17.2 Subclass QueueHandler
36 Patterns to avoid
36.1 Opening the same log file multiple times
36.2 Using loggers as attributes in a class or passing them as parameters
36.3 Adding handlers other than NullHandler to a logger in a library
36.4 Creating a lot of loggers
37 Other resources
Index
Author
Vinay Sajip <vinay_sajip at red-dove dot com>
This page contains a number of recipes related to logging, which have been found useful in the past. For links to
tutorial and reference information, please see Other resources.
1 Using logging in multiple modules
Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true
not only within the same module, but also across modules as long as it is in the same Python interpreter process. It is
true for references to the same object; additionally, application code can define and configure a parent logger in one
module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass
up to the parent. Here is a main module:
import logging
import auxiliary_module

# create logger with 'spam_application'; the file handler logs even debug
# messages, while the console handler only shows errors
logger = logging.getLogger('spam_application')
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

Here is the auxiliary module:
import logging

# create logger
module_logger = logging.getLogger('spam_application.auxiliary')

class Auxiliary:
    def __init__(self):
        self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
        self.logger.info('creating an instance of Auxiliary')

    def do_something(self):
        self.logger.info('doing something')
        a = 1 + 1
        self.logger.info('done doing something')

def some_function():
    module_logger.info('received a call to "some_function"')
2 Logging from multiple threads
Logging from multiple threads requires no special effort. The following example shows logging from the main (initial) thread and another thread:
import logging
import threading
import time

def worker(arg):
    while not arg['stop']:
        logging.debug('Hi from myfunc')
        time.sleep(0.5)

def main():
    logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
    info = {'stop': False}
    thread = threading.Thread(target=worker, args=(info,))
    thread.start()
    while True:
        try:
            logging.debug('Hello from main')
            time.sleep(0.75)
        except KeyboardInterrupt:
            info['stop'] = True
            break
    thread.join()

if __name__ == '__main__':
    main()
When run, the script should print something like the following:
0 Thread-1 Hi from myfunc
3 MainThread Hello from main
505 Thread-1 Hi from myfunc
755 MainThread Hello from main
1007 Thread-1 Hi from myfunc
1507 MainThread Hello from main
1508 Thread-1 Hi from myfunc
2010 Thread-1 Hi from myfunc
2258 MainThread Hello from main
2512 Thread-1 Hi from myfunc
3009 MainThread Hello from main
3013 Thread-1 Hi from myfunc
3515 Thread-1 Hi from myfunc
3761 MainThread Hello from main
4017 Thread-1 Hi from myfunc
4513 MainThread Hello from main
4518 Thread-1 Hi from myfunc
This shows the logging output interspersed as one might expect. This approach works for more threads than shown
here, of course.
3 Multiple handlers and formatters
Loggers are plain Python objects. The addHandler() method has no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. To set this up, simply configure the appropriate handlers. The logging calls in the application code will remain unchanged. Here is a slight modification to the previous simple module-based configuration example:
import logging
logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')
Notice that the ‘application’ code does not care about multiple handlers. All that changed was the addition and
configuration of a new handler named fh.
The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and test-
ing an application. Instead of using many print statements for debugging, use logger.debug: Unlike the print
statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the
source code and remain dormant until you need them again. At that time, the only change that needs to happen is to
modify the severity level of the logger and/or handler to debug.
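For example, here is a minimal sketch of the pattern (the logger name is illustrative): the debug call stays in the source and only produces output once the level is lowered:
import logging

logging.basicConfig(level=logging.WARNING)   # change to logging.DEBUG to re-activate
logger = logging.getLogger('simple_example')
logger.debug('intermediate value: %s', 42)   # dormant until the level is DEBUG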
4 Logging to multiple destinations
Let's say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. Let's also assume that the file should contain timestamps, but the console messages should not. Here's how you can achieve this:
import logging

# set up logging to file - see previous section for more details
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/tmp/myapp.log',
                    filemode='w')
# define a Handler which writes INFO messages or higher to sys.stderr,
# with a simpler format for console use
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s'))
# add the handler to the root logger
logging.getLogger('').addHandler(console)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:
logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyr blow, vex daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')
10-22 22:19 myapp.area2 WARNING Jail zesty vixen who grabbed pay from quack.
10-22 22:19 myapp.area2 ERROR The five boxing wizards jump quickly.
As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations.
This example uses console and file handlers, but you can use any number and combination of handlers you choose.
Note that the above choice of log filename /tmp/myapp.log implies use of a standard location for temporary files
on POSIX systems. On Windows, you may need to choose a different directory name for the log - just ensure that
the directory exists and that you have the permissions to create and update files in it.
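A portable way to pick such a directory (a small sketch, not part of the original example) is to use the tempfile module:
import os
import tempfile

logfile = os.path.join(tempfile.gettempdir(), 'myapp.log')  # e.g. /tmp or %TEMP%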
5 Custom handling of levels
Sometimes, you might want to do something slightly different from the standard handling of levels in handlers, where all levels above a threshold get processed by a handler. To do this, you need to use filters. Let's look at a scenario where you want to arrange things as follows: send messages of severity INFO and WARNING to sys.stdout; send messages of severity ERROR and above to sys.stderr; and send messages of severity DEBUG and above to the file app.log. Suppose you configure logging with the following JSON:
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {
            "format": "%(levelname)-8s - %(message)s"
        }
    },
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "simple",
            "stream": "ext://sys.stdout"
        },
        "stderr": {
            "class": "logging.StreamHandler",
            "level": "ERROR",
            "formatter": "simple",
            "stream": "ext://sys.stderr"
        },
        "file": {
            "class": "logging.FileHandler",
            "formatter": "simple",
            "filename": "app.log",
            "mode": "w"
        }
    },
    "root": {
        "level": "DEBUG",
        "handlers": [
            "stderr",
            "stdout",
            "file"
        ]
    }
}
This configuration does almost what we want, except that sys.stdout would show messages of severity ERROR and above as well as INFO and WARNING messages. To prevent this, we
can set up a filter which excludes those messages and add it to the relevant handler. This can be configured by adding
a filters section parallel to formatters and handlers:
{
    "filters": {
        "warnings_and_below": {
            "()" : "__main__.filter_maker",
            "level": "WARNING"
        }
    }
}
and changing the section on the stdout handler to add it:
{
    "stdout": {
        "class": "logging.StreamHandler",
        "level": "INFO",
        "formatter": "simple",
        "stream": "ext://sys.stdout",
        "filters": ["warnings_and_below"]
    }
}
A filter is just a function, so we can define the filter_maker (a factory function) as follows:
def filter_maker(level):
    level = getattr(logging, level)

    def filter(record):
        return record.levelno <= level

    return filter
This converts the string argument passed in to a numeric level, and returns a function which only returns True if
the level of the passed in record is at or below the specified level. Note that in this example I have defined the
filter_maker in a test script main.py that I run from the command line, so its module will be __main__ - hence
the __main__.filter_maker in the filter configuration. You will need to change that if you define it in a different
module.
With the filter added, we can run main.py, which in full is:
import json
import logging
import logging.config

CONFIG = '''
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {
            "format": "%(levelname)-8s - %(message)s"
        }
    },
    "filters": {
        "warnings_and_below": {
            "()" : "__main__.filter_maker",
            "level": "WARNING"
        }
    },
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
            "filters": ["warnings_and_below"]
        },
        "stderr": {
            "class": "logging.StreamHandler",
            "level": "ERROR",
            "formatter": "simple",
            "stream": "ext://sys.stderr"
        },
        "file": {
            "class": "logging.FileHandler",
            "formatter": "simple",
            "filename": "app.log",
            "mode": "w"
        }
    },
    "root": {
        "level": "DEBUG",
        "handlers": [
            "stderr",
            "stdout",
            "file"
        ]
    }
}
'''

def filter_maker(level):
    level = getattr(logging, level)

    def filter(record):
        return record.levelno <= level

    return filter

logging.config.dictConfig(json.loads(CONFIG))
logging.debug('A DEBUG message')
logging.info('An INFO message')
logging.warning('A WARNING message')
logging.error('An ERROR message')
logging.critical('A CRITICAL message')
And after running it like this:
$ python main.py 2>stderr.log >stdout.log
we can see the results are as we might expect:
$ more *.log
::::::::::::::
app.log
::::::::::::::
DEBUG - A DEBUG message
INFO - An INFO message
WARNING - A WARNING message
ERROR - An ERROR message
CRITICAL - A CRITICAL message
::::::::::::::
stderr.log
::::::::::::::
ERROR - An ERROR message
CRITICAL - A CRITICAL message
::::::::::::::
stdout.log
::::::::::::::
INFO - An INFO message
WARNING - A WARNING message
6 Configuration server example
Here is an example of a module using the logging configuration server:
import logging
import logging.config
import time
import os

# read initial config file
logging.config.fileConfig('logging.conf')

# create and start listener on port 9999
t = logging.config.listen(9999)
t.start()

logger = logging.getLogger('simpleExample')

try:
    # loop through logging calls to see the difference
    # new configurations make, until Ctrl+C is pressed
    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warning('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(5)
except KeyboardInterrupt:
    # cleanup
    logging.config.stopListening()
    t.join()
And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded
length, as the new logging configuration:
#!/usr/bin/env python
import socket, sys, struct

with open(sys.argv[1], 'rb') as f:
    data_to_send = f.read()

HOST = 'localhost'
PORT = 9999
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('connecting...')
s.connect((HOST, PORT))
print('sending config...')
s.send(struct.pack('>L', len(data_to_send)))
s.send(data_to_send)
s.close()
print('complete')
7 Dealing with handlers that block
Sometimes you have to get your logging handlers to do their work without blocking the thread you're logging from. A common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time. The solution is to use the QueueHandler and QueueListener classes: the handlers which do slow work run on the listener's internal thread, away from the threads which produce the events. An example of the use of these two classes follows (imports omitted):
que = queue.Queue(-1)  # no limit on size
queue_handler = QueueHandler(que)
handler = logging.StreamHandler()
listener = QueueListener(que, handler)
root = logging.getLogger()
root.addHandler(queue_handler)
formatter = logging.Formatter('%(threadName)s: %(message)s')
handler.setFormatter(formatter)
listener.start()
# The log output will display the thread which generated
# the event (the main thread) rather than the internal
# thread which monitors the internal queue. This is what
# you want to happen.
root.warning('Look out!')
listener.stop()
Note
Although the earlier discussion wasn’t specifically talking about async code, but rather about slow logging handlers,
it should be noted that when logging from async code, network and even file handlers could lead to problems
(blocking the event loop) because some logging is done from asyncio internals. It might be best, if any async
code is used in an application, to use the above approach for logging, so that any blocking code runs only in the
QueueListener thread.
Changed in version 3.5: Prior to Python 3.5, the QueueListener always passed every message received from the
queue to every handler it was initialized with. (This was because it was assumed that level filtering was all done on
the other side, where the queue is filled.) From 3.5 onwards, this behaviour can be changed by passing a keyword
argument respect_handler_level=True to the listener’s constructor. When this is done, the listener compares
the level of each message with the handler’s level, and only passes a message to a handler if it’s appropriate to do so.
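To illustrate, here is a minimal sketch of passing that argument (the handler names are illustrative):
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

que = queue.Queue(-1)
console = logging.StreamHandler()
console.setLevel(logging.WARNING)
# With respect_handler_level=True, only records at WARNING or above
# are passed by the listener to the console handler.
listener = QueueListener(que, console, respect_handler_level=True)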
8 Sending and receiving logging events across a network
Let's say you want to send logging events across a network, and handle them at the receiving end. A simple way of doing this is attaching a SocketHandler instance to the root logger at the sending end:
import logging, logging.handlers

rootLogger = logging.getLogger('')
rootLogger.setLevel(logging.DEBUG)
socketHandler = logging.handlers.SocketHandler('localhost',
                    logging.handlers.DEFAULT_TCP_LOGGING_PORT)
# don't bother with a formatter, since a socket handler sends the event as
# an unformatted pickle
rootLogger.addHandler(socketHandler)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:
logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyr blow, vex daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')
At the receiving end, you can set up a receiver using the socketserver module. Here is a basic working example:
import pickle
import logging
import logging.handlers
import socketserver
import struct

class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Handler for a streaming logging request.

    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return pickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)
class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
    """
    Simple TCP socket-based logging receiver suitable for testing.
    """

    allow_reuse_address = True

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort

def main():
    logging.basicConfig(
        format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
    tcpserver = LogRecordSocketReceiver()
    print('About to start TCP server...')
    tcpserver.serve_until_stopped()

if __name__ == '__main__':
    main()
First run the server, and then the client. On the client side, nothing is printed on the console; on the server side, you should see the client's log messages printed according to the configured format.
Note that there are some security issues with pickle in some scenarios. If these affect you, you can use an alternative
serialization scheme by overriding the makePickle() method and implementing your alternative there, as well as
adapting the above script to use your alternative serialization.
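For example, a sketch of such an override might serialize with JSON instead (the class name here is hypothetical, and all record attributes must be JSON-serializable for this to work):
import json
import struct
import logging.handlers

class JSONSocketHandler(logging.handlers.SocketHandler):
    # hypothetical alternative to the default pickle-based serialization
    def makePickle(self, record):
        data = json.dumps(record.__dict__, default=str).encode('utf-8')
        # keep the same length-prefixed framing the receiver expects
        return struct.pack('>L', len(data)) + data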
8.1 Running a logging socket listener in production
To run a logging listener in production, you may need to use a process-management tool such as Supervisor. Here is a Gist which provides the bare-bones files to run the above functionality using Supervisor. It consists of the following files:

File             Purpose
prepare.sh       A Bash script to prepare the environment for testing
supervisor.conf  The Supervisor configuration file, which has entries for the listener and a multi-process web application
ensure_app.sh    A Bash script to ensure that Supervisor is running with the above configuration
log_listener.py  The socket listener program which receives log events and records them to a file
main.py          A simple web application which performs logging via a socket connected to the listener
webapp.json      A JSON configuration file for the web application
client.py        A Python script to exercise the web application
The web application uses Gunicorn, which is a popular web application server that starts multiple worker processes
to handle requests. This example setup shows how the workers can write to the same log file without conflicting with
one another — they all go through the socket listener.
To test these files, do the following in a POSIX environment:
1. Download the Gist as a ZIP archive using the Download ZIP button.
2. Unzip the above files from the archive into a scratch directory.
3. In the scratch directory, run bash prepare.sh to get things ready. This creates a run subdirectory to contain
Supervisor-related and log files, and a venv subdirectory to contain a virtual environment into which bottle,
gunicorn and supervisor are installed.
4. Run bash ensure_app.sh to ensure that Supervisor is running with the above configuration.
5. Run venv/bin/python client.py to exercise the web application, which will lead to records being written
to the log.
6. Inspect the log files in the run subdirectory. You should see the most recent log lines in files matching the
pattern app.log*. They won’t be in any particular order, since they have been handled concurrently by
different worker processes in a non-deterministic way.
7. You can shut down the listener and the web application by running venv/bin/supervisorctl -c
supervisor.conf shutdown.
You may need to tweak the configuration files in the unlikely event that the configured ports clash with something
else in your test environment.
9 Adding contextual information to your logging output
Sometimes you want logging output to contain contextual information in addition to the parameters passed to the logging call. There are a number of ways of achieving this.
9.1 Using LoggerAdapters to impart contextual information
An easy way in which you can pass contextual information to be output along with logging event information is to use the LoggerAdapter class. This class is designed to look like a Logger, so that you can call debug(), info(), warning(), error(), exception(), critical() and log(). When you create an instance of LoggerAdapter, you pass it a Logger instance and a dict-like object which contains your contextual information. When you call one of the logging methods on an instance of LoggerAdapter, it delegates the call to the underlying instance of Logger passed to its constructor, and arranges to pass the contextual information in the delegated call. Here's a snippet from the code of LoggerAdapter:
def debug(self, msg, /, *args, **kwargs):
    """
    Delegate a debug call to the underlying logger, after adding
    contextual information from this adapter instance.
    """
    msg, kwargs = self.process(msg, kwargs)
    self.logger.debug(msg, *args, **kwargs)
The process() method of LoggerAdapter is where the contextual information is added to the logging output.
It’s passed the message and keyword arguments of the logging call, and it passes back (potentially) modified versions
of these to use in the call to the underlying logger. The default implementation of this method leaves the message
alone, but inserts an ‘extra’ key in the keyword argument whose value is the dict-like object passed to the constructor.
Of course, if you had passed an ‘extra’ keyword argument in the call to the adapter, it will be silently overwritten.
The advantage of using ‘extra’ is that the values in the dict-like object are merged into the LogRecord instance’s
__dict__, allowing you to use customized strings with your Formatter instances which know about the keys of the
dict-like object. If you need a different method, e.g. if you want to prepend or append the contextual information to
the message string, you just need to subclass LoggerAdapter and override process() to do what you need. Here
is a simple example:
class CustomAdapter(logging.LoggerAdapter):
    """
    This example adapter expects the passed in dict-like object to have a
    'connid' key, whose value in brackets is prepended to the log message.
    """
    def process(self, msg, kwargs):
        return '[%s] %s' % (self.extra['connid'], msg), kwargs
logger = logging.getLogger(__name__)
adapter = CustomAdapter(logger, {'connid': some_conn_id})
Then any events that you log to the adapter will have the value of some_conn_id prepended to the log messages.
9.2 Using Filters to impart contextual information
You can also add contextual information to log output using a user-defined Filter. Filter instances are allowed to modify the LogRecords passed to them, including adding additional attributes which can then be output using a suitable format string, or if needed a custom Formatter. The following script illustrates the technique, using random data in place of real contextual information:
import logging
from random import choice

class ContextFilter(logging.Filter):
    """
    This is a filter which injects contextual information into the log.

    Rather than use actual contextual information, we just use random
    data in this demo.
    """
    USERS = ['jim', 'fred', 'sheila']
    IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

    def filter(self, record):
        record.ip = choice(ContextFilter.IPS)
        record.user = choice(ContextFilter.USERS)
        return True
if __name__ == '__main__':
    levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
    a1 = logging.getLogger('a.b.c')
    a2 = logging.getLogger('d.e.f')

    f = ContextFilter()
    a1.addFilter(f)
    a2.addFilter(f)
    a1.debug('A debug message')
    a1.info('An info message with %s', 'some parameters')
    for x in range(10):
        lvl = choice(levels)
        lvlname = logging.getLevelName(lvl)
        a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')
which, when run, produces output along these lines:
2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
10 Use of contextvars
Since Python 3.7, the contextvars module has provided context-local storage which works for both threading and asyncio processing needs. This type of storage may thus be generally preferable to thread-locals. The following example shows how, in a multi-threaded environment, logs can be populated with contextual information such as, for example, request attributes handled by web applications.
For the purposes of illustration, say that you have different web applications, each independent of the other but
running in the same Python process and using a library common to them. How can each of these applications have
their own log, where all logging messages from the library (and other request processing code) are directed to the
appropriate application’s log file, while including in the log additional contextual information such as client IP, HTTP
request method and client username?
Let’s assume that the library can be simulated by the following code:
# webapplib.py
import logging
import time

logger = logging.getLogger(__name__)

def useful():
    # Just a representative event logged from the library
    logger.debug('Hello from webapplib!')
    # Just sleep for a bit so other threads get to run
    time.sleep(0.01)
We can simulate the multiple web applications by means of two simple classes, Request and WebApp. These simulate
how real threaded web applications work - each request is handled by a thread:
# main.py
import argparse
from contextvars import ContextVar
import logging
import os
from random import choice
import threading
import webapplib
logger = logging.getLogger(__name__)
root = logging.getLogger()
root.setLevel(logging.DEBUG)
class Request:
    """
    A simple dummy request class which just holds dummy HTTP request method,
    client IP address and client username
    """
    def __init__(self, method, ip, user):
        self.method = method
        self.ip = ip
        self.user = user
# A dummy set of requests which will be used in the simulation - we'll just pick
# from this list randomly. Note that all GET requests are from 192.168.2.XXX
# addresses, whereas POST requests are from 192.168.3.XXX addresses. Three users
# are represented in the sample requests.

REQUESTS = [
    Request('GET', '192.168.2.20', 'jim'),
    Request('POST', '192.168.3.20', 'fred'),
    Request('GET', '192.168.2.21', 'sheila'),
    Request('POST', '192.168.3.21', 'jim'),
    Request('GET', '192.168.2.22', 'fred'),
    Request('POST', '192.168.3.22', 'sheila'),
]
# Note that the format string includes references to request context information
# such as HTTP method, client IP and username
formatter = logging.Formatter('%(threadName)-11s %(appName)s %(name)-9s %(user)-6s %(ip)s %(method)-4s %(message)s')

# Create our context variables. These will be filled at the start of request
# processing, and used in the logging that happens during that processing
ctx_request = ContextVar('request')
ctx_appname = ContextVar('appname')
class InjectingFilter(logging.Filter):
    """
    A filter which injects context-specific information into logs and ensures
    that only information for a specific webapp is included in its log
    """
    def __init__(self, app):
        self.app = app

    def filter(self, record):
        request = ctx_request.get()
        record.method = request.method
        record.ip = request.ip
        record.user = request.user
        record.appName = appName = ctx_appname.get()
        return appName == self.app.name
class WebApp:
    """
    A dummy web application class which has its own handler and filter for a
    webapp-specific log.
    """
    def __init__(self, name):
        self.name = name
        handler = logging.FileHandler(name + '.log', 'w')
        f = InjectingFilter(self)
        handler.setFormatter(formatter)
        handler.addFilter(f)
        root.addHandler(handler)
        self.num_requests = 0

    def process_request(self, request):
        """
        This is the dummy method for processing a request. It's called on a
        different thread for every request. We store the context information
        into the context vars before doing anything else.
        """
        ctx_request.set(request)
        ctx_appname.set(self.name)
        self.num_requests += 1
        logger.debug('Request processing started')
        webapplib.useful()
        logger.debug('Request processing finished')
def main():
    fn = os.path.splitext(os.path.basename(__file__))[0]
    adhf = argparse.ArgumentDefaultsHelpFormatter
    ap = argparse.ArgumentParser(formatter_class=adhf, prog=fn,
                                 description='Simulate a couple of web '
                                             'applications handling some '
                                             'requests, showing how request '
                                             'context can be used to '
                                             'populate logs')
    aa = ap.add_argument
    aa('--count', '-c', type=int, default=100, help='How many requests to simulate')
    options = ap.parse_args()

    # Create the dummy webapps and put them in a list which we can use to select
    # from randomly
    app1 = WebApp('app1')
    app2 = WebApp('app2')
    apps = [app1, app2]
    threads = []
    # Add a common handler which will capture all events
    handler = logging.FileHandler('app.log', 'w')
    handler.setFormatter(formatter)
    root.addHandler(handler)

    # Generate calls to process requests
    for i in range(options.count):
        try:
            # Pick an app at random and a request for it to process
            app = choice(apps)
            request = choice(REQUESTS)
            # Process the request in its own thread
            t = threading.Thread(target=app.process_request, args=(request,))
            threads.append(t)
            t.start()
        except KeyboardInterrupt:
            break

    # Wait for the threads to terminate
    for t in threads:
        t.join()

if __name__ == '__main__':
    main()
If you run the above, you should find that roughly half the requests go into app1.log and the rest into app2.log, and all the requests are logged to app.log. Each webapp-specific log will contain log entries only for that webapp, and the request information will be displayed consistently in the log (i.e. the information in each dummy request will always appear together in a log line). This is illustrated by the following shell output:
~/logging-contextual-webapp$ grep app1 app1.log | wc -l
153
~/logging-contextual-webapp$ grep app2 app2.log | wc -l
147
~/logging-contextual-webapp$ grep app1 app.log | wc -l
153
~/logging-contextual-webapp$ grep app2 app.log | wc -l
147
11 Imparting contextual information in handlers
Each Handler has its own chain of filters. If you want to add contextual information to a LogRecord without leaking it to other handlers, you can use a filter that returns a new LogRecord instead of modifying it in-place, as shown in the following script:
import copy
import logging

def filter(record: logging.LogRecord):
    record = copy.copy(record)
    record.user = 'jim'
    return record

if __name__ == '__main__':
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()
    formatter = logging.Formatter('%(message)s from %(user)-8s')
    handler.setFormatter(formatter)
    handler.addFilter(filter)
    logger.addHandler(handler)
    logger.info('A log message')
12 Logging to a single file from multiple processes
One way of handling logging from multiple processes is to use a Queue and a QueueHandler to send all logging events to one of the processes in your multi-process application. The following example script demonstrates how you can do this; in the example a separate listener process listens for events sent by other processes and logs them according to its own logging configuration. Although the example only demonstrates one way of doing it (for example, you may want to use a listener thread rather than a separate listener process - the implementation would be analogous), it does allow for completely different logging configurations for the listener and the other processes in your application, and can be used as the basis for code meeting your own specific requirements:
# You'll need these imports in your own code
import logging
import logging.handlers
import multiprocessing

# Next two import lines for this demo only
from random import choice, random
import time

#
# Because you'll want to define the logging configurations for listener and workers, the
# listener and worker process functions take a configurer parameter which is a callable
# for configuring logging for that process. These functions are also passed the queue,
# which they use for communication.
#
# In practice, you can configure the listener however you want, but note that in this
# simple example, the listener does not apply level or filter logic to received records.
# In practice, you would probably want to do this logic in the worker processes, to avoid
# sending events which would be filtered out between processes.
#
def listener_configurer():
    root = logging.getLogger()
    h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
    f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
    h.setFormatter(f)
    root.addHandler(h)
# This is the listener process top-level loop: wait for logging events
# (LogRecords) on the queue and handle them, quit when you get a None for a
# LogRecord.
def listener_process(queue, configurer):
    configurer()
    while True:
        try:
            record = queue.get()
            if record is None:  # We send this as a sentinel to tell the listener to quit.
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)  # No level or filter logic applied - just do it!
        except Exception:
            import sys, traceback
            print('Whoops! Problem:', file=sys.stderr)
            traceback.print_exc(file=sys.stderr)
# Arrays used for random selections in this demo
LOGGERS = ['a.b.c', 'd.e.f']

LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
          logging.ERROR, logging.CRITICAL]
MESSAGES = [
'Random message #1',
'Random message #2',
'Random message #3',
]
# The worker configuration is done at the start of the worker process run.
# Note that on Windows you can't rely on fork semantics, so each process
# will run the logging configuration code when it starts.
def worker_configurer(queue):
    h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
    root = logging.getLogger()
    root.addHandler(h)
    # send all messages, for demo; no other level or filter logic applied.
    root.setLevel(logging.DEBUG)
# This is the worker process top-level loop, which just logs ten events with
# random intervening delays before terminating.
# The print messages are just so you know it's doing something!
def worker_process(queue, configurer):
    configurer(queue)
    name = multiprocessing.current_process().name
    print('Worker started: %s' % name)
    for i in range(10):
        time.sleep(random())
        logger = logging.getLogger(choice(LOGGERS))
        level = choice(LEVELS)
        message = choice(MESSAGES)
        logger.log(level, message)
    print('Worker finished: %s' % name)
# Here's where the demo gets orchestrated. Create the queue, create and start
# the listener, create ten workers and start them, wait for them to finish,
# then send a None to the queue to tell the listener to finish.
def main():
    queue = multiprocessing.Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    workers = []
    for i in range(10):
        worker = multiprocessing.Process(target=worker_process,
                                         args=(queue, worker_configurer))
        workers.append(worker)
        worker.start()
    for w in workers:
        w.join()
    queue.put_nowait(None)
    listener.join()
if __name__ == '__main__':
    main()
A variant of the above script keeps the logging in the main process, in a separate thread:
import logging
import logging.config
import logging.handlers
from multiprocessing import Process, Queue
import random
import threading
import time

def logger_thread(q):
    while True:
        record = q.get()
        if record is None:
            break
        logger = logging.getLogger(record.name)
        logger.handle(record)

def worker_process(q):
    qh = logging.handlers.QueueHandler(q)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(qh)
    levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
              logging.CRITICAL]
    loggers = ['foo', 'foo.bar', 'foo.bar.baz',
               'spam', 'spam.ham', 'spam.ham.eggs']
    for i in range(100):
        lvl = random.choice(levels)
        logger = logging.getLogger(random.choice(loggers))
        logger.log(lvl, 'Message no. %d', i)

if __name__ == '__main__':
    q = Queue()
    d = {
        'version': 1,
        'formatters': {
            'detailed': {
                'class': 'logging.Formatter',
                'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
            },
            'file': {
                'class': 'logging.FileHandler',
                'filename': 'mplog.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'foofile': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-foo.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'errors': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-errors.log',
                'mode': 'w',
                'level': 'ERROR',
                'formatter': 'detailed',
            },
        },
        'loggers': {
            'foo': {
                'handlers': ['foofile']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console', 'file', 'errors']
        },
    }
    workers = []
    for i in range(5):
        wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
        workers.append(wp)
        wp.start()
    logging.config.dictConfig(d)
    lp = threading.Thread(target=logger_thread, args=(q,))
    lp.start()
    # At this point, the main process could do some useful work of its own
    # Once it's done that, it can wait for the workers to terminate...
    for wp in workers:
        wp.join()
    # And now tell the logging thread to finish up, too
    q.put(None)
    lp.join()
This variant shows how you can e.g. apply configuration for particular loggers - e.g. the foo logger has a special
handler which stores all events in the foo subsystem in a file mplog-foo.log. This will be used by the logging
machinery in the main process (even though the logging events are generated in the worker processes) to direct the
messages to the appropriate destinations.
If you want to use concurrent.futures.ProcessPoolExecutor to start your worker processes, you need to create the queue slightly differently. Instead of

queue = multiprocessing.Queue(-1)

you should use

queue = multiprocessing.Manager().Queue(-1)  # also works with the examples above

and you can then replace the worker creation from this:

workers = []
for i in range(10):
    worker = multiprocessing.Process(target=worker_process,
                                     args=(queue, worker_configurer))
    workers.append(worker)
    worker.start()
for w in workers:
    w.join()

to this (remembering to first import concurrent.futures):

with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor:
    for i in range(10):
        executor.submit(worker_process, queue, worker_configurer)
13 Using file rotation
Sometimes you want to let a log file grow to a certain size, then open a new file and log to that. You may want to keep a certain number of these files, and when that many files have been created, rotate the files so that the number of files and the size of the files both remain bounded. For this usage pattern, the logging package provides a RotatingFileHandler:
import glob
import logging
import logging.handlers

LOG_FILENAME = 'logging_rotatingfile_example.out'

# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
              LOG_FILENAME, maxBytes=20, backupCount=5)
my_logger.addHandler(handler)

# Log some messages
for i in range(20):
    my_logger.debug('i = %d' % i)

# Check what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)
for filename in logfiles:
    print(filename)
The result should be 6 separate files, each with part of the log history for the application:
logging_rotatingfile_example.out
logging_rotatingfile_example.out.1
logging_rotatingfile_example.out.2
logging_rotatingfile_example.out.3
logging_rotatingfile_example.out.4
logging_rotatingfile_example.out.5
The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it
is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2,
etc.) and the .6 file is erased.
Obviously this example sets the log length much too small as an extreme example. You would want to set maxBytes
to an appropriate value.
14 Use of alternative formatting styles
When logging was added to the Python standard library, the only way of formatting messages with variable content was to use the %-formatting method. Since then, Python has gained two other formatting approaches: string.Template and str.format(). Logging (as of 3.2) provides improved support for these two additional formatting styles. The Formatter class has been enhanced to take an additional, optional keyword parameter named style. This defaults to '%', but other possible values are '{' and '$', which correspond to the other two formatting styles. Backwards compatibility is maintained by default (as you would expect), but by explicitly specifying a style parameter, you get the ability to specify format strings which work with str.format() or string.Template.
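Here's an example console session to show the possibilities (the timestamps shown are illustrative):
>>> import logging
>>> root = logging.getLogger()
>>> root.setLevel(logging.DEBUG)
>>> handler = logging.StreamHandler()
>>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}', style='{')
>>> handler.setFormatter(bf)
>>> root.addHandler(handler)
>>> logger = logging.getLogger('foo.bar')
>>> logger.debug('This is a DEBUG message')
2010-10-28 15:11:55,341 foo.bar DEBUG    This is a DEBUG message
>>> df = logging.Formatter('$asctime $name ${levelname} $message', style='$')
>>> handler.setFormatter(df)
>>> logger.debug('This is a DEBUG message')
2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message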
Note that the formatting of logging messages for final output to logs is completely independent of how an individual
logging message is constructed. That can still use %-formatting, as shown here:
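For instance, continuing the session above, the logging call itself still uses %-style placeholders even though the handler's formatter uses $-style:
>>> logger.error('This is an%s %s %s', 'other,', 'ERROR,', 'message')
2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message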
Logging calls (logger.debug(), logger.info() etc.) only take positional parameters for the actual logging
message itself, with keyword parameters used only for determining options for how to handle the actual logging
call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra
keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make
logging calls using str.format() or string.Template syntax, because internally the logging package uses %-
formatting to merge the format string and the variable arguments. There would be no changing this while preserving
backward compatibility, since all logging calls which are out there in existing code will be using %-format strings.
There is, however, a way that you can use {}- and $- formatting to construct your individual log messages. Recall
that for a message you can use an arbitrary object as a message format string, and that the logging package will call
str() on that object to get the actual format string. Consider the following two classes:
class BraceMessage:
    def __init__(self, fmt, /, *args, **kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return self.fmt.format(*self.args, **self.kwargs)

class DollarMessage:
    def __init__(self, fmt, /, **kwargs):
        self.fmt = fmt
        self.kwargs = kwargs

    def __str__(self):
        from string import Template
        return Template(self.fmt).substitute(**self.kwargs)
Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual
“message” part which appears in the formatted log output in place of “%(message)s” or “{message}” or “$message”.
It’s a little unwieldy to use the class names whenever you want to log something, but it’s quite palatable if you use an
alias such as __ (double underscore — not to be confused with _, the single underscore used as a synonym/alias for
gettext.gettext() or its brethren).
The above classes are not included in Python, though they’re easy enough to copy and paste into your own code. They
can be used as follows (assuming that they’re declared in a module called wherever):
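>>> from wherever import BraceMessage as __
>>> print(__('Message with {0} {1}', 2, 'placeholders'))
Message with 2 placeholders
>>> from wherever import DollarMessage as __
>>> print(__('Message with $num $what', num=2, what='placeholders'))
Message with 2 placeholders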
While the above examples use print() to show how the formatting works, you would of course use logger.
debug() or similar to actually log using this approach.
One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens
not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a
handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string
and the arguments, not just the format string. That’s because the __ notation is just syntax sugar for a constructor call
to one of the XXXMessage classes.
If you prefer, you can use a LoggerAdapter to achieve a similar effect to the above, as in the following example:
import logging

class Message:
    def __init__(self, fmt, args):
        self.fmt = fmt
        self.args = args

    def __str__(self):
        return self.fmt.format(*self.args)

class StyleAdapter(logging.LoggerAdapter):
    def log(self, level, msg, /, *args, stacklevel=1, **kwargs):
        if self.isEnabledFor(level):
            msg, kwargs = self.process(msg, kwargs)
            self.logger.log(level, Message(msg, args), **kwargs,
                            stacklevel=stacklevel+1)

logger = StyleAdapter(logging.getLogger(__name__))

def main():
    logger.debug('Hello, {}', 'world!')

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    main()
The above script should log the message Hello, world! when run with Python 3.8 or later.
15 Customizing LogRecord
Every logging event is represented by a LogRecord instance. When an event is logged and not filtered out by a logger’s
level, a LogRecord is created, populated with information about the event and then passed to the handlers for that
logger (and its ancestors, up to and including the logger where further propagation up the hierarchy is disabled).
Before Python 3.2, there were only two places where this creation was done:
• Logger.makeRecord(), which is called in the normal process of logging an event. This invoked LogRecord
directly to create an instance.
• makeLogRecord(), which is called with a dictionary containing attributes to be added to the LogRecord.
This is typically invoked when a suitable dictionary has been received over the network (e.g. in pickle form
via a SocketHandler, or in JSON form via an HTTPHandler).
This has usually meant that if you need to do anything special with a LogRecord, you’ve had to do one of the
following.
• Create your own Logger subclass, which overrides Logger.makeRecord(), and set it using
setLoggerClass() before any loggers that you care about are instantiated.
• Add a Filter to a logger or handler, which does the necessary special manipulation you need when its
filter() method is called.
The first approach would be a little unwieldy in the scenario where (say) several different libraries wanted to do
different things. Each would attempt to set its own Logger subclass, and the one which did this last would win.
The second approach works reasonably well for many cases, but does not allow you to e.g. use a specialized subclass
of LogRecord. Library developers can set a suitable filter on their loggers, but they would have to remember to do
this every time they introduced a new logger (which they would do simply by adding new packages or modules and
doing
logger = logging.getLogger(__name__)
at module level). It’s probably one too many things to think about. Developers could also add the filter to a
NullHandler attached to their top-level logger, but this would not be invoked if an application developer attached
a handler to a lower-level library logger — so output from that handler would not reflect the intentions of the library
developer.
In Python 3.2 and later, LogRecord creation is done through a factory, which you can specify. The factory is just
a callable you can set with setLogRecordFactory(), and interrogate with getLogRecordFactory(). The
factory is invoked with the same signature as the LogRecord constructor, as LogRecord is the default setting for
the factory.
This approach allows a custom factory to control all aspects of LogRecord creation. For example, you could return
a subclass, or just add some additional attributes to the record once created, using a pattern similar to this:
old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    record = old_factory(*args, **kwargs)
    record.custom_attribute = 0xdecafbad
    return record

logging.setLogRecordFactory(record_factory)
This pattern allows different libraries to chain factories together, and as long as they don’t overwrite each other’s
attributes or unintentionally overwrite the attributes provided as standard, there should be no surprises. However, it
should be borne in mind that each link in the chain adds run-time overhead to all logging operations, and the technique
should only be used when the use of a Filter does not provide the desired result.
16 Subclassing QueueHandler and QueueListener - a ZeroMQ example
16.1 Subclass QueueHandler
You can use a QueueHandler subclass to send messages to other kinds of queues, for example a ZeroMQ 'publish' socket. In the example below, the socket is created separately and passed to the handler (as its 'queue'):
import zmq   # using pyzmq, the Python binding for ZeroMQ

ctx = zmq.Context()
sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
sock.bind('tcp://*:5556')        # or wherever

class ZeroMQSocketHandler(QueueHandler):
    def enqueue(self, record):
        self.queue.send_json(record.__dict__)

handler = ZeroMQSocketHandler(sock)

Of course there are other ways of organizing this, for example passing in the data needed by the handler to create the socket:
class ZeroMQSocketHandler(QueueHandler):
    def __init__(self, uri, socktype=zmq.PUB, ctx=None):
        self.ctx = ctx or zmq.Context()
        socket = zmq.Socket(self.ctx, socktype)
        socket.bind(uri)
        super().__init__(socket)

    def enqueue(self, record):
        self.queue.send_json(record.__dict__)

    def close(self):
        self.queue.close()

16.2 Subclass QueueListener
You can also subclass QueueListener to get messages from other kinds of queues, for example a ZeroMQ 'subscribe' socket:
class ZeroMQSocketListener(QueueListener):
    def __init__(self, uri, /, *handlers, **kwargs):
        self.ctx = kwargs.get('ctx') or zmq.Context()
        socket = zmq.Socket(self.ctx, zmq.SUB)
        socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything
        socket.connect(uri)
        super().__init__(socket, *handlers, **kwargs)

    def dequeue(self):
        msg = self.queue.recv_json()
        return logging.makeLogRecord(msg)
17 Subclassing QueueHandler and QueueListener - a pynng example
In a similar way to the above section, we can implement a listener and handler using pynng, which is a Python binding to NNG, billed as a spiritual successor to ZeroMQ. The following snippets illustrate - you can test them in an environment which has pynng installed. Just for variety, we present the listener first.
17.1 Subclass QueueListener
# listener.py
import json
import logging
import logging.handlers

import pynng

DEFAULT_ADDR = "tcp://localhost:13232"

interrupted = False

class NNGSocketListener(logging.handlers.QueueListener):

    def __init__(self, uri, /, *handlers, **kwargs):
        # Have a timeout for interruptability, and open a
        # subscriber socket
        socket = pynng.Sub0(listen=uri, recv_timeout=500)
        # The b'' subscription matches all topics
        topics = kwargs.pop('topics', None) or b''
        socket.subscribe(topics)
        # We treat the socket as a queue
        super().__init__(socket, *handlers, **kwargs)

    def dequeue(self, block):
        data = None
        # Keep looping while not interrupted and no data received over the
        # socket
        while not interrupted:
            try:
                data = self.queue.recv(block=block)
                break
            except pynng.Timeout:
                pass
            except pynng.Closed:  # sometimes happens when you hit Ctrl-C
                break
        if data is None:
            return None
        # Get the logging event sent from a publisher
        event = json.loads(data.decode('utf-8'))
        return logging.makeLogRecord(event)

    def enqueue_sentinel(self):
        # Not used in this implementation, as the socket isn't really a
        # queue
        pass

logging.getLogger('pynng').propagate = False
listener = NNGSocketListener(DEFAULT_ADDR, logging.StreamHandler(), topics=b'')
listener.start()
print('Press Ctrl-C to stop.')
try:
    while True:
        pass
except KeyboardInterrupt:
    interrupted = True
finally:
    listener.stop()
17.2 Subclass QueueHandler
# sender.py
import json
import logging
import logging.handlers
import random
import time

import pynng

DEFAULT_ADDR = "tcp://localhost:13232"

class NNGSocketHandler(logging.handlers.QueueHandler):

    def __init__(self, uri):
        socket = pynng.Pub0(dial=uri, send_timeout=500)
        super().__init__(socket)

    def enqueue(self, record):
        # Send the record as UTF-8 encoded JSON
        d = dict(record.__dict__)
        data = json.dumps(d)
        self.queue.send(data.encode('utf-8'))

    def close(self):
        self.queue.close()

logging.getLogger('pynng').propagate = False
handler = NNGSocketHandler(DEFAULT_ADDR)
# Make sure the process ID is in the output
logging.basicConfig(level=logging.DEBUG,
                    handlers=[logging.StreamHandler(), handler],
                    format='%(levelname)-8s %(name)10s %(process)6s %(message)s')
levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
          logging.CRITICAL)
logger_names = ('myapp', 'myapp.lib1', 'myapp.lib2')
msgno = 1
while True:
    # Just randomly select some loggers and levels and log away
    level = random.choice(levels)
    logger = logging.getLogger(random.choice(logger_names))
    logger.log(level, 'Message no. %5d' % msgno)
    msgno += 1
    delay = random.random() * 2 + 0.5
    time.sleep(delay)
You can run the above two snippets in separate command shells. If we run the listener in one shell and run the sender
in two separate shells, we should see something like the following. In the first sender shell:
$ python sender.py
DEBUG myapp 613 Message no. 1
WARNING myapp.lib2 613 Message no. 2
CRITICAL myapp.lib2 613 Message no. 3
WARNING myapp.lib2 613 Message no. 4
CRITICAL myapp.lib1 613 Message no. 5
DEBUG myapp 613 Message no. 6
CRITICAL myapp.lib1 613 Message no. 7
INFO myapp.lib1 613 Message no. 8
(and so on)
And in the second sender shell:
$ python sender.py
INFO myapp.lib2 657 Message no. 1
CRITICAL myapp.lib2 657 Message no. 2
CRITICAL myapp 657 Message no. 3
CRITICAL myapp.lib1 657 Message no. 4
INFO myapp.lib1 657 Message no. 5
WARNING myapp.lib2 657 Message no. 6
CRITICAL myapp 657 Message no. 7
DEBUG myapp.lib1 657 Message no. 8
(and so on)
And in the listener shell:
$ python listener.py
Press Ctrl-C to stop.
DEBUG myapp 613 Message no. 1
WARNING myapp.lib2 613 Message no. 2
INFO myapp.lib2 657 Message no. 1
CRITICAL myapp.lib2 613 Message no. 3
CRITICAL myapp.lib2 657 Message no. 2
CRITICAL myapp 657 Message no. 3
WARNING myapp.lib2 613 Message no. 4
CRITICAL myapp.lib1 613 Message no. 5
CRITICAL myapp.lib1 657 Message no. 4
INFO myapp.lib1 657 Message no. 5
DEBUG myapp 613 Message no. 6
WARNING myapp.lib2 657 Message no. 6
CRITICAL myapp 657 Message no. 7
CRITICAL myapp.lib1 613 Message no. 7
INFO myapp.lib1 613 Message no. 8
DEBUG myapp.lib1 657 Message no. 8
(and so on)
As you can see, the logging from the two sender processes is interleaved in the listener’s output.
18 An example dictionary-based configuration
Below is an example of a logging configuration dictionary - it's taken from the documentation on the Django project. This dictionary is passed to dictConfig() to put the configuration into effect:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        },
    },
    'filters': {
        'special': {
            '()': 'project.logging.SpecialFilter',
            'foo': 'bar',
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['special']
        }
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'propagate': True,
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'myproject.custom': {
            'handlers': ['console', 'mail_admins'],
            'level': 'INFO',
            'filters': ['special']
        }
    }
}
For more information about this configuration, you can see the relevant section of the Django documentation.
19 Using a rotator and namer to customize log rotation processing
An example of how you can define a namer and rotator is given in the following runnable script, which shows gzip compression of the log file:
import gzip
import logging
import logging.handlers
import os
import shutil

def namer(name):
    return name + ".gz"

def rotator(source, dest):
    with open(source, 'rb') as f_in:
        with gzip.open(dest, 'wb') as f_out:
            shutil.copyfileobj(f_in, f_out)
    os.remove(source)

rh = logging.handlers.RotatingFileHandler('rotated.log', maxBytes=128, backupCount=5)
rh.rotator = rotator
rh.namer = namer

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(rh)
f = logging.Formatter('%(asctime)s %(message)s')
rh.setFormatter(f)
for i in range(1000):
    root.info(f'Message no. {i + 1}')
After running this, you will see six new files, five of which are compressed:
$ ls rotated.log*
rotated.log rotated.log.2.gz rotated.log.4.gz
rotated.log.1.gz rotated.log.3.gz rotated.log.5.gz
$ zcat rotated.log.1.gz
2023-01-20 02:28:17,767 Message no. 996
2023-01-20 02:28:17,767 Message no. 997
2023-01-20 02:28:17,767 Message no. 998
20 A more elaborate multiprocessing example
The following working example shows how logging can be used with multiprocessing using configuration files. The configurations are fairly simple, but serve to illustrate how more complex ones could be implemented in a real multiprocessing scenario. In the example, the main process spawns a listener process and some worker processes:
import logging
import logging.config
import logging.handlers
from multiprocessing import Process, Queue, Event, current_process
import os
import random
import time
class MyHandler:
    """
    A simple handler for logging events. It runs in the listener process and
    dispatches events to loggers based on the name in the received record,
    which then get dispatched, by the logging system, to the handlers
    configured for those loggers.
    """

    def handle(self, record):
        if record.name == "root":
            logger = logging.getLogger()
        else:
            logger = logging.getLogger(record.name)

        if logger.isEnabledFor(record.levelno):
            # The process name is transformed just to show that it's the listener
            # doing the logging to files and console
            record.processName = '%s (for %s)' % (current_process().name, record.processName)
            logger.handle(record)
def worker_process(config):
    """
    A number of these are spawned for the purpose of illustration. In
    practice, they could be a heterogeneous bunch of processes rather than
    ones which are identical to each other.

    This initialises logging according to the specified configuration,
    and logs a hundred messages with random levels to randomly selected
    loggers.
    """
    logging.config.dictConfig(config)
    levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
              logging.CRITICAL]
    loggers = ['foo', 'foo.bar', 'foo.bar.baz',
               'spam', 'spam.ham', 'spam.ham.eggs']
    if os.name == 'posix':
        # On POSIX, the setup logger will have been configured in the
        # parent process, but should have been disabled following the
        # dictConfig call in this process.
        logger = logging.getLogger('setup')
        logger.critical('Should not appear, because of disabled logger ...')
    for i in range(100):
        lvl = random.choice(levels)
        logger = logging.getLogger(random.choice(loggers))
        logger.log(lvl, 'Message no. %d', i)
        time.sleep(0.01)

def listener_process(q, stop_event, config):
    """
    This initialises logging according to the specified configuration,
    starts the listener and waits for the main process to signal completion
    via the event. The listener is then stopped, and the process exits.
    """
    logging.config.dictConfig(config)
    listener = logging.handlers.QueueListener(q, MyHandler())
    listener.start()
    if os.name == 'posix':
        # On POSIX, the setup logger will have been configured in the
        # parent process, but should have been disabled following the
        # dictConfig call in this process.
        logger = logging.getLogger('setup')
        logger.critical('Should not appear, because of disabled logger ...')
    stop_event.wait()
    listener.stop()
def main():
    q = Queue()
    # The main process gets a simple configuration which prints to the console.
    config_initial = {
        'version': 1,
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO'
            }
        },
        'root': {
            'handlers': ['console'],
            'level': 'DEBUG'
        }
    }
    # The worker process configuration is just a QueueHandler attached to the
    # root logger, which allows all messages to be sent to the queue.
    # We disable existing loggers to disable the "setup" logger used in the
    # parent process. This is needed on POSIX because the logger will
    # be there in the child following a fork().
    config_worker = {
        'version': 1,
        'disable_existing_loggers': True,
        'handlers': {
            'queue': {
                'class': 'logging.handlers.QueueHandler',
                'queue': q
            }
        },
        'root': {
            'handlers': ['queue'],
            'level': 'DEBUG'
        }
    }
    # The listener process configuration shows that the full flexibility of
    # logging configuration is available to dispatch events to handlers however
    # you want.
    # We disable existing loggers to disable the "setup" logger used in the
    # parent process. This is needed on POSIX because the logger will
    # be there in the child following a fork().
    config_listener = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'detailed': {
                'class': 'logging.Formatter',
                'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            },
            'simple': {
                'class': 'logging.Formatter',
                'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'formatter': 'simple',
                'level': 'INFO'
            },
            'file': {
                'class': 'logging.FileHandler',
                'filename': 'mplog.log',
                'mode': 'w',
                'formatter': 'detailed'
            },
            'foofile': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-foo.log',
                'mode': 'w',
                'formatter': 'detailed'
            },
            'errors': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-errors.log',
                'mode': 'w',
                'formatter': 'detailed',
                'level': 'ERROR'
            }
        },
        'loggers': {
            'foo': {
                'handlers': ['foofile']
            }
        },
        'root': {
            'handlers': ['console', 'file', 'errors'],
            'level': 'DEBUG'
        }
    }
    # Log some initial events, just to show that logging in the parent works
    # normally.
    logging.config.dictConfig(config_initial)
    logger = logging.getLogger('setup')
    logger.info('About to create workers ...')
    workers = []
    for i in range(5):
        wp = Process(target=worker_process, name='worker %d' % (i + 1),
                     args=(config_worker,))
        workers.append(wp)
        wp.start()
        logger.info('Started worker: %s', wp.name)
    logger.info('About to create listener ...')
    stop_event = Event()
    lp = Process(target=listener_process, name='listener',
                 args=(q, stop_event, config_listener))
    lp.start()
    logger.info('Started listener')
    # We now hang around for the workers to finish their work.
    for wp in workers:
        wp.join()
    # Workers all done, listening can now stop.
    # Logging in the parent still works normally.
    logger.info('Telling listener to stop ...')
    stop_event.set()
    lp.join()
    logger.info('All done.')

if __name__ == '__main__':
    main()
21 Inserting a BOM into messages sent to a SysLogHandler
RFC 5424 requires that a Unicode message be sent to a syslog daemon as a set of bytes which have the following structure: an optional pure-ASCII component, followed by a UTF-8 Byte Order Mark (BOM), followed by Unicode encoded using UTF-8. To produce such messages, do the following:
1. Attach a Formatter instance to your SysLogHandler instance, with a format string such as:
'ASCII section\ufeffUnicode section'
The Unicode code point U+FEFF, when encoded using UTF-8, will be encoded as a UTF-8 BOM - the byte-string b'\xef\xbb\xbf'.
2. Replace the ASCII section with whatever placeholders you like, but make sure that the data that appears in
there after substitution is always ASCII (that way, it will remain unchanged after UTF-8 encoding).
3. Replace the Unicode section with whatever placeholders you like; if the data which appears there after substi-
tution contains characters outside the ASCII range, that’s fine – it will be encoded using UTF-8.
The formatted message will be encoded using UTF-8 encoding by SysLogHandler. If you follow the above rules,
you should be able to produce RFC 5424-compliant messages. If you don’t, logging may not complain, but your
messages will not be RFC 5424-compliant, and your syslog daemon may complain.
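Putting these rules together, a minimal sketch of such a handler setup might look like this (it assumes a POSIX system where the syslog daemon listens on /dev/log):
import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address='/dev/log')
# ASCII-only placeholders before the BOM; free Unicode after it
handler.setFormatter(logging.Formatter('ASCII section\ufeffUnicode section: %(message)s'))
logging.getLogger().addHandler(handler)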
22 Implementing structured logging
Although most logging messages are intended for reading by humans, and thus not readily machine-parseable, there might be circumstances where you want to output messages in a structured format which is capable of being parsed by a program. This is straightforward to achieve using the logging package. There are a number of ways in which this could be achieved, but the following is a simple approach which uses JSON to serialise the event in a machine-parseable manner:
import json
import logging

class StructuredMessage:
    def __init__(self, message, /, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

_ = StructuredMessage   # optional, to improve readability

logging.basicConfig(level=logging.INFO, format='%(message)s')
logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints:
message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}
Note that the order of items might be different according to the version of Python used.
If you need more specialised processing, you can use a custom JSON encoder, as in the following complete example:
import json
import logging

class Encoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, set):
            return tuple(o)
        elif isinstance(o, str):
            return o.encode('unicode_escape').decode('ascii')
        return super().default(o)

class StructuredMessage:
    def __init__(self, message, /, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        s = Encoder().encode(self.kwargs)
        return '%s >>> %s' % (self.message, s)

_ = StructuredMessage   # optional, to improve readability

def main():
    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603'))

if __name__ == '__main__':
    main()

When the above script is run, it prints:
message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}
Note that the order of items might be different according to the version of Python used.
23 Customizing handlers with dictConfig()
There are times when you want to customize logging handlers in particular ways, and if you use dictConfig() you may be able to do this without subclassing. As an example, consider that you may want to set the ownership of a log file. On POSIX, this is easily done using shutil.chown(), but the file handlers in the stdlib don't offer built-in support. You can customize handler creation using a plain function such as:

def owned_file_handler(filename, mode='a', encoding=None, owner=None):
    if owner:
        if not os.path.exists(filename):
            open(filename, 'a').close()
        shutil.chown(filename, *owner)
    return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to dictConfig(), that a logging handler be created by calling this function:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {
            'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
        },
    },
    'handlers': {
        'file': {
            # The values below are popped from this dictionary and
            # used to create the handler, set the handler's level and
            # its formatter.
            '()': owned_file_handler,
            'level': 'DEBUG',
            'formatter': 'default',
            # The values below are passed to the handler creator callable
            # as keyword arguments.
            'owner': ['pulse', 'pulse'],
            'filename': 'chowntest.log',
            'mode': 'w',
            'encoding': 'utf-8',
        },
    },
    'root': {
        'handlers': ['file'],
        'level': 'DEBUG',
    },
}
In this example I am setting the ownership using the pulse user and group, just for the purposes of illustration.
Putting it together into a working script, chowntest.py:
import logging, logging.config, os, shutil

def owned_file_handler(filename, mode='a', encoding=None, owner=None):
    if owner:
        if not os.path.exists(filename):
            open(filename, 'a').close()
        shutil.chown(filename, *owner)
    return logging.FileHandler(filename, mode, encoding)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {
            'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
        },
    },
    'handlers': {
        'file': {
            # The values below are popped from this dictionary and
            # used to create the handler, set the handler's level and
            # its formatter.
            '()': owned_file_handler,
            'level': 'DEBUG',
            'formatter': 'default',
            # The values below are passed to the handler creator callable
            # as keyword arguments.
            'owner': ['pulse', 'pulse'],
            'filename': 'chowntest.log',
            'mode': 'w',
            'encoding': 'utf-8',
        },
    },
    'root': {
        'handlers': ['file'],
        'level': 'DEBUG',
    },
}
logging.config.dictConfig(LOGGING)
logger = logging.getLogger('mylogger')
logger.debug('A debug message')
Note that this example uses Python 3.3 because that’s where shutil.chown() makes an appearance. This approach
should work with any Python version that supports dictConfig() - namely, Python 2.7, 3.2 or later. With pre-3.3
versions, you would need to implement the actual ownership change using e.g. os.chown().
In practice, the handler-creating function may be in a utility module somewhere in your project. Instead of the line
in the configuration:
'()': owned_file_handler,

you could use e.g.:

'()': 'ext://project.util.owned_file_handler',
where project.util can be replaced with the actual name of the package where the function resides. In the
above working script, using 'ext://__main__.owned_file_handler' should work. Here, the actual callable
is resolved by dictConfig() from the ext:// specification.
This example hopefully also points the way to how you could implement other types of file change - e.g. setting
specific POSIX permission bits - in the same way, using os.chmod().
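A sketch of that idea (the function name and permission bits here are illustrative, not from the original example):
import logging
import os

def secure_file_handler(filename, mode='a', encoding=None, perms=0o600):
    # create the handler first so the file exists, then tighten its permissions
    handler = logging.FileHandler(filename, mode, encoding)
    os.chmod(filename, perms)
    return handler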
Of course, the approach could also be extended to types of handler other than a FileHandler - for example, one
of the rotating file handlers, or a different type of handler altogether.
Recall the BraceMessage and DollarMessage classes shown earlier, which allow {}- and $-formatting to be used to build individual log messages:
class BraceMessage:
    def __init__(self, fmt, /, *args, **kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return self.fmt.format(*self.args, **self.kwargs)

class DollarMessage:
    def __init__(self, fmt, /, **kwargs):
        self.fmt = fmt
        self.kwargs = kwargs

    def __str__(self):
        from string import Template
        return Template(self.fmt).substitute(**self.kwargs)
Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual
“message” part which appears in the formatted log output in place of “%(message)s” or “{message}” or “$message”.
If you find it a little unwieldy to use the class names whenever you want to log something, you can make it more
palatable if you use an alias such as M or _ for the message (or perhaps __, if you are using _ for localization).
Examples of this approach are given below. Firstly, formatting with str.format():
>>> __ = BraceMessage
>>> print(__('Message with {0} {1}', 2, 'placeholders'))
Message with 2 placeholders
>>> class Point: pass
...
>>> p = Point()
>>> p.x = 0.5
>>> p.y = 0.5
>>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p))
Message with coordinates: (0.50, 0.50)
Secondly, formatting with string.Template:
>>> __ = DollarMessage
>>> print(__('Message with $num $what', num=2, what='placeholders'))
Message with 2 placeholders
>>>
One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens
not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a
handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string
and the arguments, not just the format string. That’s because the __ notation is just syntax sugar for a constructor call
to one of the XXXMessage classes shown above.
25 Configuring filters with dictConfig()
You can configure filters using dictConfig(), though it might not be obvious at first glance how to do it (hence this recipe). Since Filter is the only filter class included in the standard library, and it is unlikely to cater to many requirements (it's only there as a base class), you will typically need to define your own Filter subclass with an overridden filter() method. To do this, specify the '()' key in the configuration dictionary for the filter, giving a callable which will be used to create the filter. Here is a complete example:
import logging
import logging.config
import sys

class MyFilter(logging.Filter):
    def __init__(self, param=None):
        self.param = param

    def filter(self, record):
        if self.param is None:
            allow = True
        else:
            allow = self.param not in record.msg
        if allow:
            record.msg = 'changed: ' + record.msg
        return allow
LOGGING = {
'version': 1,
'filters': {
'myfilter': {
'()': MyFilter,
'param': 'noshow',
}
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'filters': ['myfilter']
}
},
'root': {
'level': 'DEBUG',
'handlers': ['console']
},
}
if __name__ == '__main__':
logging.config.dictConfig(LOGGING)
logging.debug('hello')
logging.debug('hello - noshow')
This example shows how you can pass configuration data to the callable which constructs the instance, in the form of
keyword parameters. When run, the above script will print:
changed: hello
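If you are not using dictConfig(), the same filter can be attached imperatively; a minimal sketch:

import logging

handler = logging.StreamHandler()
handler.addFilter(MyFilter(param='noshow'))
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(handler)
root.debug('hello')           # printed as 'changed: hello'
root.debug('hello - noshow')  # suppressed by the filter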
import logging
class OneLineExceptionFormatter(logging.Formatter):
def formatException(self, exc_info):
"""
Format an exception so that it prints on a single line.
"""
result = super().formatException(exc_info)
return repr(result) # or format into one line however you want to
def configure_logging():
fh = logging.FileHandler('output.txt', 'w')
f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
'%d/%m/%Y %H:%M:%S')
fh.setFormatter(f)
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(fh)
def main():
configure_logging()
logging.info('Sample message')
try:
x = 1 / 0
except ZeroDivisionError as e:
logging.exception('ZeroDivisionError: %s', e)
if __name__ == '__main__':
main()
While the above treatment is simplistic, it points the way to how exception information can be formatted to your
liking. The traceback module may be helpful for more specialized needs.
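For instance, if you preferred the individual traceback lines joined by a delimiter rather than a repr() of the whole string, a sketch using traceback might look like this (the class name is an illustrative assumption):

import logging
import traceback

class PipeDelimitedExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        # Join the traceback lines with '|' so the record stays on one line
        lines = traceback.format_exception(*exc_info)
        return '|'.join(line.rstrip('\n') for line in lines)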
import logging
import subprocess
import sys
class TTSHandler(logging.Handler):
def emit(self, record):
msg = self.format(record)
# Speak slowly in a female English voice
cmd = ['espeak', '-s150', '-ven+f3', msg]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
# wait for the program to finish
p.communicate()
def configure_logging():
h = TTSHandler()
root = logging.getLogger()
root.addHandler(h)
# the default formatter just returns the message
root.setLevel(logging.DEBUG)
def main():
logging.info('Hello')
logging.debug('Goodbye')
if __name__ == '__main__':
configure_logging()
sys.exit(main())
When run, this script should say “Hello” and then “Goodbye” in a female voice.
The above approach can, of course, be adapted to other TTS systems, and even to other systems altogether that can process messages via external programs run from a command line.
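For example, on macOS you might swap in the built-in say command; an untested sketch of that adaptation:

import logging
import subprocess

class SayHandler(logging.Handler):
    def emit(self, record):
        # 'say' is the macOS speech synthesizer; arguments other than the
        # text itself are omitted for simplicity
        subprocess.run(['say', self.format(record)])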
Here’s the script:
import logging
from logging.handlers import MemoryHandler
import sys
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
    if target_handler is None:
        target_handler = logging.StreamHandler()
    if flush_level is None:
        flush_level = logging.ERROR
    if capacity is None:
        capacity = 100
    handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

    def decorator(fn):
def wrapper(*args, **kwargs):
logger.addHandler(handler)
try:
return fn(*args, **kwargs)
except Exception:
logger.exception('call failed')
raise
finally:
super(MemoryHandler, handler).flush()
logger.removeHandler(handler)
return wrapper
return decorator
def write_line(s):
sys.stderr.write('%s\n' % s)
def foo(fail=False):
write_line('about to log at DEBUG ...')
logger.debug('Actually logged at DEBUG')
write_line('about to log at INFO ...')
logger.info('Actually logged at INFO')
write_line('about to log at WARNING ...')
logger.warning('Actually logged at WARNING')
if fail:
write_line('about to log at ERROR ...')
logger.error('Actually logged at ERROR')
write_line('about to log at CRITICAL ...')
logger.critical('Actually logged at CRITICAL')
return fail
decorated_foo = log_if_errors(logger)(foo)
if __name__ == '__main__':
logger.setLevel(logging.DEBUG)
write_line('Calling undecorated foo with False')
assert not foo(False)
write_line('Calling undecorated foo with True')
assert foo(True)
write_line('Calling decorated foo with False')
assert not decorated_foo(False)
write_line('Calling decorated foo with True')
assert decorated_foo(True)
As you can see, actual logging output only occurs when an event is logged whose severity is ERROR or greater, but
in that case, any previous events at lower severities are also logged.
You can of course use the conventional means of decoration:
@log_if_errors(logger)
def foo(fail=False):
...
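If you want the errors to end up somewhere other than the console, pass a different target handler - for instance (a sketch, with an assumed filename):

import logging

@log_if_errors(logger, target_handler=logging.FileHandler('errors.log'))
def risky_operation():
    ...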
import logging
import logging.handlers
import smtplib
class BufferingSMTPHandler(logging.handlers.BufferingHandler):
def __init__(self, mailhost, port, username, password, fromaddr, toaddrs,
subject, capacity):
logging.handlers.BufferingHandler.__init__(self, capacity)
self.mailhost = mailhost
self.mailport = port
self.username = username
self.password = password
self.fromaddr = fromaddr
if isinstance(toaddrs, str):
toaddrs = [toaddrs]
self.toaddrs = toaddrs
self.subject = subject
        self.setFormatter(logging.Formatter("%(asctime)s %(levelname)-5s %(message)s"))
def flush(self):
if len(self.buffer) > 0:
try:
smtp = smtplib.SMTP(self.mailhost, self.mailport)
smtp.starttls()
smtp.login(self.username, self.password)
msg = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (self.fromaddr,
,→ ','.join(self.toaddrs), self.subject)
for record in self.buffer:
s = self.format(record)
msg = msg + s + "\r\n"
smtp.sendmail(self.fromaddr, self.toaddrs, msg)
smtp.quit()
except Exception:
if logging.raiseExceptions:
raise
self.buffer = []
if __name__ == '__main__':
import argparse
ap = argparse.ArgumentParser()
aa = ap.add_argument
aa('host', metavar='HOST', help='SMTP server')
aa('--port', '-p', type=int, default=587, help='SMTP port')
aa('user', metavar='USER', help='SMTP username')
aa('password', metavar='PASSWORD', help='SMTP password')
aa('to', metavar='TO', help='Addressee for emails')
aa('sender', metavar='SENDER', help='Sender email address')
aa('--subject', '-s',
default='Test Logging email from Python logging module (buffering)',
help='Subject of email')
options = ap.parse_args()
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
h = BufferingSMTPHandler(options.host, options.port, options.user,
options.password, options.sender,
options.to, options.subject, 10)
logger.addHandler(h)
for i in range(102):
logger.info("Info index = %d", i)
h.flush()
h.close()
If you run this script and your SMTP server is correctly set up, you should find that it sends eleven emails to the
addressee you specify. The first ten emails will each have ten log messages, and the eleventh will have two messages.
That makes up 102 messages as specified in the script.
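For example, you might invoke the script along these lines (the script name, host and credentials are placeholders; the positional arguments follow the parser defined above):

$ python buffered_smtp.py smtp.example.com myuser mypassword to@example.com from@example.com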
Sometimes you want to format times using UTC, which can be done using a class such as UTCFormatter, shown below:

import logging
import time
class UTCFormatter(logging.Formatter):
converter = time.gmtime
and you can then use the UTCFormatter in your code instead of Formatter. If you want to do that via configura-
tion, you can use the dictConfig() API with an approach illustrated by the following complete example:
import logging
import logging.config
import time
class UTCFormatter(logging.Formatter):
converter = time.gmtime
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'utc': {
'()': UTCFormatter,
'format': '%(asctime)s %(message)s',
},
'local': {
'format': '%(asctime)s %(message)s',
}
},
'handlers': {
'console1': {
'class': 'logging.StreamHandler',
'formatter': 'utc',
},
'console2': {
'class': 'logging.StreamHandler',
'formatter': 'local',
},
},
'root': {
'handlers': ['console1', 'console2'],
}
}
if __name__ == '__main__':
logging.config.dictConfig(LOGGING)
logging.warning('The local time is %s', time.asctime())
When this script is run, the warning is printed once via each handler, showing how the time is formatted both as local time and as UTC.
import logging
import sys
class LoggingContext:
def __init__(self, logger, level=None, handler=None, close=True):
self.logger = logger
self.level = level
self.handler = handler
self.close = close
def __enter__(self):
if self.level is not None:
self.old_level = self.logger.level
self.logger.setLevel(self.level)
if self.handler:
            self.logger.addHandler(self.handler)

    def __exit__(self, et, ev, tb):
        if self.level is not None:
            self.logger.setLevel(self.old_level)
        if self.handler:
            self.logger.removeHandler(self.handler)
        if self.handler and self.close:
            self.handler.close()
        # implicit return of None => don't swallow exceptions
If you specify a level value, the logger’s level is set to that value in the scope of the with block covered by the context
manager. If you specify a handler, it is added to the logger on entry to the block and removed on exit from the block.
You can also ask the manager to close the handler for you on block exit - you could do this if you don’t need the
handler any more.
To illustrate how it works, we can add the following block of code to the above:
if __name__ == '__main__':
logger = logging.getLogger('foo')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
logger.info('1. This should appear just once on stderr.')
logger.debug('2. This should not appear.')
with LoggingContext(logger, level=logging.DEBUG):
logger.debug('3. This should appear once on stderr.')
logger.debug('4. This should not appear.')
    h = logging.StreamHandler(sys.stdout)
    with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
        logger.debug('5. This should appear twice - once on stderr and once on stdout.')
    logger.info('6. This should appear just once on stderr.')
    logger.debug('7. This should not appear.')
We initially set the logger’s level to INFO, so message #1 appears and message #2 doesn’t. We then change the level
to DEBUG temporarily in the following with block, and so message #3 appears. After the block exits, the logger’s
level is restored to INFO and so message #4 doesn’t appear. In the next with block, we set the level to DEBUG again
but also add a handler writing to sys.stdout. Thus, message #5 appears twice on the console (once via stderr
and once via stdout). After the with statement’s completion, the status is as it was before so message #6 appears
(like message #1) whereas message #7 doesn’t (just like message #2).
If we run the resulting script, the result is as follows:
$ python logctx.py
1. This should appear just once on stderr.
3. This should appear once on stderr.
5. This should appear twice - once on stderr and once on stdout.
5. This should appear twice - once on stderr and once on stdout.
6. This should appear just once on stderr.
If we run it again, but pipe stderr to /dev/null, we see the following, which is the only message written to
stdout:
$ python logctx.py 2>/dev/null
5. This should appear twice - once on stderr and once on stdout.
import argparse
import importlib
import logging
import os
import sys
def main(args=None):
scriptname = os.path.basename(__file__)
parser = argparse.ArgumentParser(scriptname)
levels = ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL')
parser.add_argument('--log-level', default='INFO', choices=levels)
subparsers = parser.add_subparsers(dest='command',
help='Available commands:')
start_cmd = subparsers.add_parser('start', help='Start a service')
start_cmd.add_argument('name', metavar='NAME',
help='Name of service to start')
stop_cmd = subparsers.add_parser('stop',
help='Stop one or more services')
stop_cmd.add_argument('names', metavar='NAME', nargs='+',
help='Name of service to stop')
restart_cmd = subparsers.add_parser('restart',
help='Restart one or more services')
restart_cmd.add_argument('names', metavar='NAME', nargs='+',
help='Name of service to restart')
options = parser.parse_args()
# the code to dispatch commands could all be in this file. For the purposes
# of illustration only, we implement each command in a separate module.
try:
mod = importlib.import_module(options.command)
cmd = getattr(mod, 'command')
except (ImportError, AttributeError):
print('Unable to find the code for command \'%s\'' % options.command)
return 1
# Could get fancy here and load configuration from file or dictionary
logging.basicConfig(level=options.log_level,
format='%(levelname)s %(name)s %(message)s')
cmd(options)
if __name__ == '__main__':
sys.exit(main())
And the start, stop and restart commands can be implemented in separate modules, like so for starting:
# start.py
import logging
logger = logging.getLogger(__name__)
def command(options):
logger.debug('About to start %s', options.name)
# actually do the command processing here ...
logger.info('Started the \'%s\' service.', options.name)
# stop.py
import logging
logger = logging.getLogger(__name__)
def command(options):
n = len(options.names)
if n == 1:
plural = ''
services = '\'%s\'' % options.names[0]
else:
plural = 's'
services = ', '.join('\'%s\'' % name for name in options.names)
i = services.rfind(', ')
services = services[:i] + ' and ' + services[i + 2:]
logger.debug('About to stop %s', services)
# actually do the command processing here ...
logger.info('Stopped the %s service%s.', services, plural)
# restart.py
import logging
logger = logging.getLogger(__name__)
def command(options):
n = len(options.names)
if n == 1:
plural = ''
services = '\'%s\'' % options.names[0]
else:
plural = 's'
services = ', '.join('\'%s\'' % name for name in options.names)
i = services.rfind(', ')
services = services[:i] + ' and ' + services[i + 2:]
logger.debug('About to restart %s', services)
# actually do the command processing here ...
logger.info('Restarted the %s service%s.', services, plural)
If we run this application with the default log level, we get output like this:

$ python app.py start foo
INFO start Started the 'foo' service.
The first word is the logging level, and the second word is the module or package name of the place where the event
was logged.
If we change the logging level, then we can change the information sent to the log. For example, if we want more
information:
$ python app.py --log-level DEBUG start foo
DEBUG start About to start foo
INFO start Started the 'foo' service.

And if we want less:

$ python app.py --log-level WARNING start foo
In this case, the commands don’t print anything to the console, since nothing at WARNING level or above is logged by
them.
import datetime
import logging
import random
import sys
import time
# Deal with minor differences between the Qt bindings
try:
    from PySide6 import QtCore, QtGui, QtWidgets
    Signal = QtCore.Signal
    Slot = QtCore.Slot
except ImportError:
from PyQt5 import QtCore, QtGui, QtWidgets
Signal = QtCore.pyqtSignal
Slot = QtCore.pyqtSlot
logger = logging.getLogger(__name__)
#
# Signals need to be contained in a QObject or subclass in order to be correctly
# initialized.
#
class Signaller(QtCore.QObject):
signal = Signal(str, logging.LogRecord)
#
# Output to a Qt GUI is only supposed to happen on the main thread. So, this
# handler is designed to take a slot function which is set up to run in the main
# thread. In this example, the function takes a string argument which is a
# formatted log message, and the log record which generated it. The formatted
# string is just a convenience - you could format a string for output any way
# you like in the slot function itself.
#
# You specify the slot function to do whatever GUI updates you want. The handler
# doesn't know or care about specific UI elements.
#
class QtHandler(logging.Handler):
def __init__(self, slotfunc, *args, **kwargs):
super().__init__(*args, **kwargs)
self.signaller = Signaller()
        self.signaller.signal.connect(slotfunc)

    def emit(self, record):
        s = self.format(record)
        self.signaller.signal.emit(s, record)
#
# This example uses QThreads, which means that the threads at the Python level
# are named something like "Dummy-1". The function below gets the Qt name of the
# current thread.
#
def ctname():
return QtCore.QThread.currentThread().objectName()
#
# Used to generate random levels for logging.
#
LEVELS = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
logging.CRITICAL)
#
# This worker class represents work that is done in a thread separate to the
# main thread. The way the thread is kicked off to do work is via a button press
# that connects to a slot in the worker.
#
# Because the default threadName value in the LogRecord isn't much use, we add
# a qThreadName which contains the QThread name as computed above, and pass that
# value in an "extra" dictionary which is used to update the LogRecord with the
# QThread name.
#
# This example worker just outputs messages sequentially, interspersed with
# random delays of the order of a few seconds.
#
class Worker(QtCore.QObject):
@Slot()
def start(self):
extra = {'qThreadName': ctname() }
logger.debug('Started work', extra=extra)
i = 1
# Let the thread run until interrupted. This allows reasonably clean
# thread termination.
while not QtCore.QThread.currentThread().isInterruptionRequested():
delay = 0.5 + random.random() * 2
time.sleep(delay)
try:
if random.random() < 0.1:
raise ValueError('Exception raised: %d' % i)
else:
level = random.choice(LEVELS)
                    logger.log(level, 'Message after delay of %3.1f: %d', delay, i, extra=extra)
except ValueError as e:
logger.exception('Failed: %s', e, extra=extra)
i += 1
#
# Implement a simple UI for this cookbook example. This contains:
#
# * A read-only text edit window which holds formatted log messages
# * A button to start work and log stuff in a separate thread
# * A button to log something from the main thread
# * A button to clear the log window
#
class Window(QtWidgets.QWidget):
COLORS = {
logging.DEBUG: 'black',
logging.INFO: 'blue',
logging.WARNING: 'orange',
logging.ERROR: 'red',
logging.CRITICAL: 'purple',
}
    def __init__(self, app):
        super().__init__()
        self.app = app
        self.textedit = te = QtWidgets.QPlainTextEdit(self)
        # Set whatever the default monospace font is for the platform
        f = QtGui.QFont('nosuchfont')
        if hasattr(f, 'Monospace'):
            f.setStyleHint(f.Monospace)
        else:
            f.setStyleHint(f.StyleHint.Monospace)  # for Qt6
te.setFont(f)
te.setReadOnly(True)
PB = QtWidgets.QPushButton
self.work_button = PB('Start background work', self)
self.log_button = PB('Log a message at a random level', self)
self.clear_button = PB('Clear log window', self)
self.handler = h = QtHandler(self.update_status)
# Remember to use qThreadName rather than threadName in the format string.
fs = '%(asctime)s %(qThreadName)-12s %(levelname)-8s %(message)s'
formatter = logging.Formatter(fs)
h.setFormatter(formatter)
logger.addHandler(h)
# Set up to terminate the QThread when we exit
        app.aboutToQuit.connect(self.force_quit)
        # Lay out all the widgets
        layout = QtWidgets.QVBoxLayout(self)
        layout.addWidget(te)
        layout.addWidget(self.work_button)
        layout.addWidget(self.log_button)
        layout.addWidget(self.clear_button)
        self.setFixedSize(900, 400)
        # Connect the non-worker slots and signals
        self.log_button.clicked.connect(self.manual_update)
        self.clear_button.clicked.connect(self.clear_display)
# Start a new worker thread and connect the slots for the worker
self.start_thread()
self.work_button.clicked.connect(self.worker.start)
# Once started, the button should be disabled
        self.work_button.clicked.connect(lambda : self.work_button.setEnabled(False))
def start_thread(self):
self.worker = Worker()
self.worker_thread = QtCore.QThread()
self.worker.setObjectName('Worker')
self.worker_thread.setObjectName('WorkerThread') # for qThreadName
self.worker.moveToThread(self.worker_thread)
# This will start an event loop in the worker thread
self.worker_thread.start()
def kill_thread(self):
# Just tell the worker to stop, then tell it to quit and wait for that
# to happen
self.worker_thread.requestInterruption()
if self.worker_thread.isRunning():
self.worker_thread.quit()
self.worker_thread.wait()
else:
print('worker has already exited.')
def force_quit(self):
# For use when the window is closed
if self.worker_thread.isRunning():
self.kill_thread()
# The functions below update the UI and run in the main thread because
# that's where the slots are set up
@Slot(str, logging.LogRecord)
def update_status(self, status, record):
color = self.COLORS.get(record.levelno, 'black')
s = '<pre><font color="%s">%s</font></pre>' % (color, status)
self.textedit.appendHtml(s)
@Slot()
def manual_update(self):
# This function uses the formatted message passed in, but also uses
# information from the record to format the message in an appropriate
# color according to its severity (level).
level = random.choice(LEVELS)
extra = {'qThreadName': ctname() }
logger.log(level, 'Manually logged!', extra=extra)
@Slot()
def clear_display(self):
self.textedit.clear()
def main():
QtCore.QThread.currentThread().setObjectName('MainThread')
logging.getLogger().setLevel(logging.DEBUG)
app = QtWidgets.QApplication(sys.argv)
example = Window(app)
example.show()
if hasattr(app, 'exec'):
rc = app.exec()
else:
rc = app.exec_()
sys.exit(rc)
if __name__=='__main__':
main()
import datetime
import logging.handlers
import re
import socket
import time

class SysLogHandler5424(logging.handlers.SysLogHandler):

    tz_offset = re.compile(r'([+-]\d{2})(\d{2})$')
    escaped = re.compile(r'([\]"\\])')

    def format(self, record):
        version = 1
        asctime = datetime.datetime.fromtimestamp(record.created).isoformat()
        m = self.tz_offset.match(time.strftime('%z'))
        if m:
            hrs, mins = m.groups()
            asctime += f'{hrs}:{mins}'
        hostname = socket.gethostname()
        appname = record.name
        procid = record.process
        msgid = '-'
        msg = super().format(record)
        sdata = '-'
        if hasattr(record, 'structured_data'):
            sd = record.structured_data
            # This should be a dict where the keys are SD-ID and each value is a
            # dict mapping PARAM-NAME to PARAM-VALUE (refer to the RFC for what these
            # mean)
            # There's no error checking here - it's purely for illustration, and you
            # can adapt this code for use in production environments
            def replacer(m):
                g = m.groups()
                return '\\' + g[0]

            parts = []
            for sdid, dv in sd.items():
                part = f'[{sdid}'
                for k, v in dv.items():
                    s = self.escaped.sub(replacer, str(v))
                    part += f' {k}="{s}"'
                part += ']'
                parts.append(part)
            sdata = ''.join(parts)
        return f'{version} {asctime} {hostname} {appname} {procid} {msgid} {sdata} {msg}'
You’ll need to be familiar with RFC 5424 to fully understand the above code, and it may be that you have slightly different needs (e.g. for how you pass structured data to the log). Nevertheless, the above should be adaptable to your specific needs. With the above handler, you’d pass structured data using something like this:
sd = {
'foo@12345': {'bar': 'baz', 'baz': 'bozz', 'fizz': r'buzz'},
'foo@54321': {'rab': 'baz', 'zab': 'bozz', 'zzif': r'buzz'}
}
extra = {'structured_data': sd}
i = 1
logger.debug('Message %d', i, extra=extra)
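Hooking the handler up works just as for the stock SysLogHandler; for example (the address is an assumption - point it at your own syslog daemon):

import logging

logger = logging.getLogger('mylogger')
logger.setLevel(logging.DEBUG)
logger.addHandler(SysLogHandler5424(address=('localhost', 514)))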
import logging
class LoggerWriter:
def __init__(self, logger, level):
self.logger = logger
        self.level = level

    def write(self, message):
        if message != '\n':  # avoid printing bare newlines, if you like
            self.logger.log(self.level, message)
def flush(self):
# doesn't actually do anything, but might be expected of a file-like
# object - so optional depending on your situation
pass
def close(self):
# doesn't actually do anything, but might be expected of a file-like
# object - so optional depending on your situation. You might want
# to set a flag so that later calls to write raise an exception
pass
def main():
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger('demo')
info_fp = LoggerWriter(logger, logging.INFO)
debug_fp = LoggerWriter(logger, logging.DEBUG)
print('An INFO message', file=info_fp)
print('A DEBUG message', file=debug_fp)
if __name__ == "__main__":
main()
When this script is run, it prints:

INFO:demo:An INFO message
DEBUG:demo:A DEBUG message
You could also use LoggerWriter to redirect sys.stdout and sys.stderr by doing something like this:
import sys

sys.stdout = LoggerWriter(logger, logging.INFO)
sys.stderr = LoggerWriter(logger, logging.WARNING)
You should do this after configuring logging for your needs. In the above example, the basicConfig() call does
this (using the sys.stderr value before it is overwritten by a LoggerWriter instance). Then, you’d get this kind
of result:
>>> print('Foo')
INFO:demo:Foo
>>> print('Bar', file=sys.stderr)
WARNING:demo:Bar
>>>
Of course, the examples above show output according to the format used by basicConfig(), but you can use a
different formatter when you configure logging.
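For example, a sketch which timestamps the redirected output:

import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s %(message)s')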
Note that with the above scheme, you are somewhat at the mercy of buffering and the sequence of write calls which
you are intercepting. For example, with the definition of LoggerWriter above, if you have the snippet

sys.stderr = LoggerWriter(logger, logging.WARNING)
1 / 0

then running the script results in

WARNING:demo:Traceback (most recent call last):
WARNING:demo:
WARNING:demo:main()
WARNING:demo: File "/home/runner/cookbook-loggerwriter/test.py", line 49, in main
WARNING:demo:
WARNING:demo:1 / 0
WARNING:demo:ZeroDivisionError
WARNING:demo::
WARNING:demo:division by zero
As you can see, this output isn’t ideal. That’s because the underlying code which writes to sys.stderr makes
multiple writes, each of which results in a separate logged line (for example, the last three lines above). To get
around this problem, you need to buffer things and only output log lines when newlines are seen. Let’s use a slightly
better implementation of LoggerWriter:
class BufferingLoggerWriter(LoggerWriter):
def __init__(self, logger, level):
super().__init__(logger, level)
self.buffer = ''
    def write(self, message):
        if '\n' not in message:
self.buffer += message
else:
parts = message.split('\n')
if self.buffer:
s = self.buffer + parts.pop(0)
self.logger.log(self.level, s)
self.buffer = parts.pop()
for part in parts:
self.logger.log(self.level, part)
This just buffers up stuff until a newline is seen, and then logs complete lines. With this approach, you get better
output:
WARNING:demo: main()
WARNING:demo: File "/home/runner/cookbook-loggerwriter/main.py", line 52, in main
WARNING:demo: 1/0
WARNING:demo:ZeroDivisionError: division by zero
36 Patterns to avoid
Although the preceding sections have described ways of doing things you might need to do or deal with, it is worth
mentioning some usage patterns which are unhelpful, and which should therefore be avoided in most cases. The
following sections are in no particular order.
36.2 Using loggers as attributes in a class or passing them as parameters
While there might be unusual cases where you’ll need to do this, in general there is no point because loggers are
singletons. Code can always access a given logger instance by name using logging.getLogger(name), so passing
instances around and holding them as instance attributes is pointless. Note that in other languages such as Java and
C#, loggers are often static class attributes. However, this pattern doesn’t make sense in Python, where the module
(and not the class) is the unit of software decomposition.
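In other words, the idiomatic pattern is simply a module-level logger looked up by name:

import logging

# One logger per module, retrieved by name - no need to store logger
# instances on classes or pass them as parameters
logger = logging.getLogger(__name__)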
37 Other resources
See also
Module logging
API reference for the logging module.
Module logging.config
Configuration API for the logging module.
Module logging.handlers
Useful handlers included with the logging module.
Basic Tutorial
Advanced Tutorial