aiohttp Documentation
Release 3.7.4.post0
aiohttp contributors
Contents

1 Key Features
2 Library Installation
  2.1 Installing speedups altogether
3 Getting Started
  3.1 Client example
  3.2 Server example
5 Tutorial
6 Source code
7 Dependencies
8 Communication channels
9 Contributing
12 Table Of Contents
  12.1 Client
  12.2 Server
  12.3 Utilities
  12.4 FAQ
  12.5 Miscellaneous
  12.6 Who uses aiohttp?
  12.7 Contributing
Index
CHAPTER ONE
KEY FEATURES
CHAPTER TWO
LIBRARY INSTALLATION
You may want to install the optional cchardet library as a faster replacement for chardet:
To speed up DNS resolving by the client API, you may also install aiodns. This option is highly recommended:
The following will get you aiohttp along with chardet, aiodns and brotlipy in one bundle. No need to type separate commands anymore!
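In a typical pip-based setup, the commands referenced above are (package names as published on PyPI for the 3.7 series; the quotes protect the extras syntax from the shell):

```shell
pip install cchardet            # optional: faster character detection
pip install aiodns              # optional: faster DNS resolving
pip install 'aiohttp[speedups]' # aiohttp plus cchardet, aiodns and brotlipy
```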
CHAPTER THREE
GETTING STARTED
import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://python.org') as response:
            print("Status:", response.status)
            print("Content-type:", response.headers['content-type'])
            html = await response.text()
            print("Body:", html[:15], "...")

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
This prints:
Status: 200
Content-type: text/html; charset=utf-8
Body: <!doctype html> ...
from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    return web.Response(text="Hello, " + name)

app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/{name}', handle)])

web.run_app(app)
CHAPTER FOUR
Go to the What's new in aiohttp 3.0 page for the aiohttp 3.0 major release changes.
CHAPTER FIVE
TUTORIAL
Polls tutorial
CHAPTER SIX
SOURCE CODE
CHAPTER SEVEN
DEPENDENCIES
• Python 3.6+
• async_timeout
• attrs
• chardet
• multidict
• yarl
• Optional cchardet as a faster replacement for chardet. Install it explicitly via pip install cchardet.
• Optional aiodns for fast DNS resolving. This library is highly recommended.
CHAPTER EIGHT
COMMUNICATION CHANNELS
CHAPTER NINE
CONTRIBUTING
Please read the instructions for contributors before making a Pull Request.
CHAPTER TEN
The aiohttp package is written mostly by Nikolay Kim and Andrew Svetlov.
It’s Apache 2 licensed and freely available.
Feel free to improve this package and send a pull request to GitHub.
CHAPTER ELEVEN
CHAPTER TWELVE
TABLE OF CONTENTS
12.1 Client
Eager to get started? This page gives a good introduction to the aiohttp client API.
First, make sure that aiohttp is installed and up-to-date.
Let’s get started with some simple examples.
Make a Request

Now, let's try to get a web-page. For example, let's query http://httpbin.org/get:

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://httpbin.org/get') as resp:
            print(resp.status)
            print(await resp.text())

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Now, we have a ClientSession called session and a ClientResponse object called resp. We can get all the information we need from the response. The mandatory parameter of the ClientSession.get() coroutine is an HTTP URL (str or yarl.URL instance).
In order to make an HTTP POST request use ClientSession.post() coroutine:
session.post('http://httpbin.org/post', data=b'data')
Other HTTP methods are available as well:
session.put('http://httpbin.org/put', data=b'data')
session.delete('http://httpbin.org/delete')
session.head('http://httpbin.org/get')
session.options('http://httpbin.org/get')
session.patch('http://httpbin.org/patch', data=b'data')
Note: Don't create a session per request. Most likely you need a session per application which performs all requests together.
More complex cases may require a session per site, e.g. one for GitHub and another one for Facebook APIs. Either way, creating a session for every request is a very bad idea.
A session contains a connection pool inside. Connection reuse and keep-alives (both are on by default) may improve overall performance.
Using the session as a context manager is not mandatory, but then await session.close() should be called explicitly, e.g.:
session = aiohttp.ClientSession()
async with session.get('...'):
    # ...
await session.close()
You often want to send some sort of data in the URL's query string. If you were constructing the URL by hand, this data would be given as key/value pairs in the URL after a question mark, e.g. httpbin.org/get?key=val. aiohttp allows you to provide these arguments as a dict, using the params keyword argument. As an example, if you wanted to pass key1=value1 and key2=value2 to httpbin.org/get, you would use the following code:
You can see that the URL has been correctly encoded by printing the URL.
For sending data with multiple values for the same key, MultiDict may be used; the library supports the nested-list alternative ({'key': ['value1', 'value2']}) as well.
It is also possible to pass a list of 2-item tuples as parameters; in that case you can specify multiple values for each key:
You can also pass str content as param, but beware – content is not encoded by the library. Note that + is not encoded:
await session.get(
URL('http://example.com/%30', encoded=True))
We can read the content of the server's response and its status code. Consider the GitHub time-line again:

async with session.get('https://api.github.com/events') as resp:
    print(resp.status)
    print(await resp.text())

This prints something like:

200
'[{"created_at":"2015-06-12T14:06:22Z","public":true,"actor":{...
aiohttp automatically decodes the content from the server. You can specify custom encoding for the text()
method:
await resp.text(encoding='windows-1251')
You can also access the response body as bytes, for non-text requests:
print(await resp.read())
b'[{"created_at":"2015-06-12T14:06:22Z","public":true,"actor":{...
The gzip and deflate transfer-encodings are automatically decoded for you.
You can enable brotli transfer-encodings support, just install brotlipy.
JSON Request
By default session uses python's standard json module for serialization. But it is possible to use a different serializer. ClientSession accepts a json_serialize parameter:

import ujson

async with aiohttp.ClientSession(
        json_serialize=ujson.dumps) as session:
    await session.post(url, json={'test': 'object'})
Note: ujson library is faster than standard json but slightly incompatible.
There’s also a built-in JSON decoder, in case you’re dealing with JSON data:
async with session.get('https://api.github.com/events') as resp:
print(await resp.json())
If JSON decoding fails, json() will raise an exception. It is possible to specify custom encoding and decoder functions for the json() call.
Note: The methods above read the whole response body into memory. If you are planning on reading lots of data, consider using the streaming response method documented below.
While methods read(), json() and text() are very convenient, you should use them carefully. All these methods load the whole response in memory. For example, if you want to download several gigabyte-sized files, these methods will load all the data in memory. Instead you can use the content attribute. It is an instance of the aiohttp.StreamReader class. The gzip and deflate transfer-encodings are automatically decoded for you:
async with session.get('https://api.github.com/events') as resp:
    await resp.content.read(10)
In general, however, you should use a pattern like this to save what is being streamed to a file:

with open(filename, 'wb') as fd:
    while True:
        chunk = await resp.content.read(chunk_size)
        if not chunk:
            break
        fd.write(chunk)
It is not possible to use read(), json() and text() after explicit reading from content.
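The loop above can be exercised without any network access against asyncio's own StreamReader, which offers a compatible read() coroutine (the buffer and sizes below are made up for illustration):

```python
import asyncio
import io

async def save_stream(reader, fd, chunk_size=1024):
    # The same read-until-empty pattern shown above
    while True:
        chunk = await reader.read(chunk_size)
        if not chunk:
            break
        fd.write(chunk)

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b'x' * 3000)   # pretend this arrived from the network
    reader.feed_eof()
    buf = io.BytesIO()              # stands in for an open file
    await save_stream(reader, buf)
    return buf.getvalue()

data = asyncio.get_event_loop().run_until_complete(demo())
print(len(data))  # 3000
```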
Typically, you want to send some form-encoded data, much like an HTML form. To do this, simply pass a dictionary to the data argument. Your dictionary of data will automatically be form-encoded when the request is made:

payload = {'key1': 'value1', 'key2': 'value2'}
async with session.post('http://httpbin.org/post', data=payload) as resp:
    print(await resp.text())
{
...
"form": {
"key2": "value2",
"key1": "value1"
},
...
}
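To see the round trip end-to-end without depending on httpbin.org, the same exchange can be run against a throwaway local aiohttp server; the handler below merely echoes the decoded form back, mimicking httpbin's "form" field (handler name and routing are illustrative only):

```python
import asyncio
import aiohttp
from aiohttp import web

async def echo_form(request):
    # Echo the decoded form back, mimicking httpbin.org/post's "form" field
    form = await request.post()
    return web.json_response({'form': dict(form)})

async def main():
    app = web.Application()
    app.add_routes([web.post('/post', echo_form)])
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, '127.0.0.1', 0)   # port 0: pick a free port
    await site.start()
    host, port = runner.addresses[0][:2]
    try:
        async with aiohttp.ClientSession() as session:
            async with session.post(
                    'http://{}:{}/post'.format(host, port),
                    data={'key1': 'value1', 'key2': 'value2'}) as resp:
                return await resp.json()
    finally:
        await runner.cleanup()

result = asyncio.get_event_loop().run_until_complete(main())
print(result['form'])  # {'key1': 'value1', 'key2': 'value2'}
```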
If you want to send data that is not form-encoded you can do it by passing bytes instead of a dict. This data will be posted directly and the content-type set to 'application/octet-stream' by default:
url = 'http://httpbin.org/post'
files = {'file': open('report.xls', 'rb')}

await session.post(url, data=files)

You can set the filename and content_type explicitly:

url = 'http://httpbin.org/post'
data = FormData()
data.add_field('file',
               open('report.xls', 'rb'),
               filename='report.xls',
               content_type='application/vnd.ms-excel')

await session.post(url, data=data)
If you pass a file object as data parameter, aiohttp will stream it to the server automatically. Check StreamReader
for supported format information.
See also:
Working with Multipart
Streaming uploads
aiohttp supports multiple types of streaming uploads, which allows you to send large files without reading them
into memory.
As a simple case, simply provide a file-like object for your body:

with open('massive-body', 'rb') as f:
    await session.post('http://httpbin.org/post', data=f)
Because the content attribute is a StreamReader (provides async iterator protocol), you can chain get and post
requests together:
Note: Python 3.5 has no native support for asynchronous generators; use the async_generator library as a workaround.
Deprecated since version 3.1: aiohttp still supports the aiohttp.streamer decorator but this approach is deprecated in favor of asynchronous generators as shown above.
WebSockets
You must use a single task for reading from the WebSocket (e.g. await ws.receive() or async for msg in ws:), but you may have multiple writer tasks, which can only send data asynchronously (by await ws.send_str('data'), for example).
Timeouts
timeout = aiohttp.ClientTimeout(total=60)
async with aiohttp.ClientSession(timeout=timeout) as session:
...
sock_read is the maximal number of seconds allowed between reading new data portions from a peer.
All fields are floats; None or 0 disables a particular timeout check. See the ClientTimeout reference for defaults and additional details.
Thus the default timeout is:
aiohttp.ClientTimeout(total=5*60, connect=None,
sock_connect=None, sock_read=None)
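ClientTimeout is a simple data structure, so the defaults are easy to construct and inspect directly (a minimal sketch, requiring only aiohttp itself):

```python
import aiohttp

# The default timeout spelled out explicitly, as above
timeout = aiohttp.ClientTimeout(total=5 * 60, connect=None,
                                sock_connect=None, sock_read=None)
print(timeout.total)  # 300
```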
Note: aiohttp ceils the timeout if the value is equal to or greater than 5 seconds. The timeout expires at the next integer second greater than current_time + timeout.
The ceiling is done for the sake of optimization, when many concurrent tasks are scheduled to wake up at almost the same but slightly different absolute times. That leads to very many event loop wakeups, which kills performance.
The optimization shifts absolute wakeup times by scheduling them to exactly the same time as their neighbors; the loop wakes up once per second for timeout expiration.
Smaller timeouts are not rounded, to help testing; in real life network timeouts are usually greater than tens of seconds.
Client Session
ClientSession is the heart and the main entry point for all client API operations.
Create the session first, use the instance for performing HTTP requests and initiating WebSocket connections.
The session contains a cookie storage and connection pool, thus cookies and connections are shared between HTTP
requests sent by the same session.
If you need to add HTTP headers to a request, pass them in a dict to the headers parameter.
For example, if you want to specify the content-type directly:
url = 'http://example.com/image'
payload = (b'GIF89a\x01\x00\x01\x00\x00\xff\x00,\x00\x00'
           b'\x00\x00\x01\x00\x01\x00\x00\x02\x00;')
headers = {'content-type': 'image/gif'}

await session.post(url,
                   data=payload,
                   headers=headers)
You also can set default headers for all session requests:
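The snippet that belongs here is elided; it presumably passes headers to the ClientSession constructor. ClientSession exposes its default headers via the read-only headers property, so the effect can be sketched locally (the Authorization value is an arbitrary example):

```python
import asyncio
import aiohttp

async def main():
    async with aiohttp.ClientSession(
            headers={'Authorization': 'Basic bG9naW46cGFzcw=='}) as session:
        # every request made by this session now sends the header by default
        return session.headers['Authorization']

auth = asyncio.get_event_loop().run_until_complete(main())
print(auth)  # Basic bG9naW46cGFzcw==
```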
A typical use case is sending a JSON body. You can specify the content type directly as shown above, but it is more convenient to use the special keyword json:

await session.post(url, json={'test': 'object'})

For text/plain:

await session.post(url, data='Привет, Мир!')
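The json keyword is roughly shorthand for serializing the object yourself and setting the header; a sketch of the equivalence using only the standard library (not aiohttp's exact internal code):

```python
import json

payload = {'test': 'object'}
# json=payload is roughly shorthand for:
data = json.dumps(payload)
headers = {'Content-Type': 'application/json'}
print(data)  # {"test": "object"}
```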
Custom Cookies
To send your own cookies to the server, you can use the cookies parameter of ClientSession constructor:
url = 'http://httpbin.org/cookies'
cookies = {'cookies_are': 'working'}
async with ClientSession(cookies=cookies) as session:
async with session.get(url) as resp:
assert await resp.json() == {
"cookies": {"cookies_are": "working"}}
Note: httpbin.org/cookies endpoint returns request cookies in JSON-encoded body. To access session
cookies see ClientSession.cookie_jar.
We can view the server's response headers using the headers property:

assert resp.headers == {
    'ACCESS-CONTROL-ALLOW-ORIGIN': '*',
    'CONTENT-TYPE': 'application/json',
    'DATE': 'Tue, 15 Jul 2014 16:49:51 GMT',
    'SERVER': 'gunicorn/18.0',
    'CONTENT-LENGTH': '331',
    'CONNECTION': 'keep-alive'}
The dictionary is special, though: it’s made just for HTTP headers. According to RFC 7230, HTTP Header names are
case-insensitive. It also supports multiple values for the same key as HTTP protocol does.
So, we can access the headers using any capitalization we want:
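That case-insensitive, multi-valued behavior comes from the multidict package (another hard dependency of aiohttp) and can be sketched without any HTTP traffic:

```python
from multidict import CIMultiDict  # multidict is a core aiohttp dependency

headers = CIMultiDict()
headers['Content-Type'] = 'application/json'
headers.add('Set-Cookie', 'a=1')
headers.add('Set-Cookie', 'b=2')

print(headers['content-type'])       # lookup ignores case
print(headers.getall('Set-Cookie'))  # every value for a repeated key
```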
All headers are converted from binary data using UTF-8 with the surrogateescape option. That works fine in most cases, but sometimes unconverted data is needed if a server uses a nonstandard encoding. While such headers are malformed from the RFC 7230 perspective, they may be retrieved by using the ClientResponse.raw_headers property:
assert resp.raw_headers == (
    (b'SERVER', b'nginx'),
    (b'DATE', b'Sat, 09 Jan 2016 20:28:40 GMT'),
    (b'CONTENT-TYPE', b'text/html; charset=utf-8'),
    (b'CONTENT-LENGTH', b'12150'),
    (b'CONNECTION', b'keep-alive'))
If a response contains some HTTP Cookies, you can quickly access them:
url = 'http://example.com/some/cookie/setting/url'
async with session.get(url) as resp:
print(resp.cookies['example_cookie_name'])
Note: Response cookies contain only values that were in Set-Cookie headers of the last request in the redirection chain. To gather cookies across all redirection requests please use the aiohttp.ClientSession object.
Redirection History
If a request was redirected, it is possible to view previous responses using the history attribute:
Cookie Jar
Cookie Safety
By default ClientSession uses the strict version of aiohttp.CookieJar. RFC 2109 explicitly forbids accepting cookies from URLs with an IP address instead of a DNS name (e.g. http://127.0.0.1:80/cookie).
That is good, but sometimes for testing we need to enable support for such cookies. It can be done by passing unsafe=True to the aiohttp.CookieJar constructor:
jar = aiohttp.CookieJar(unsafe=True)
session = aiohttp.ClientSession(cookie_jar=jar)
The client uses SimpleCookie quoting routines that conform to RFC 2109, which in turn references the character definitions from RFC 2068. They provide a two-way quoting algorithm where any non-text character is translated into a 4-character sequence: a backslash followed by the three-digit octal equivalent of the character. Any \ or " is quoted with a preceding \ slash. Because of the way browsers really handle cookies (as opposed to what the RFC says), we also encode , and ;.
Some backend systems do not support quoted cookies. You can skip this quoting routine by passing quote_cookie=False to the CookieJar constructor:
jar = aiohttp.CookieJar(quote_cookie=False)
session = aiohttp.ClientSession(cookie_jar=jar)
Sometimes cookie processing is not desirable. For this purpose it’s possible to pass aiohttp.DummyCookieJar
instance into client session:
jar = aiohttp.DummyCookieJar()
session = aiohttp.ClientSession(cookie_jar=jar)
To upload data that is already compressed before passing it to aiohttp, call the request function with the compression algorithm name (usually deflate or gzip) as the value of the Content-Encoding header:
The standard explicitly restricts JSON Content-Type HTTP header to application/json or any extended
form, e.g. application/vnd.custom-type+json. Unfortunately, some servers send a wrong type, like
text/html.
This can be worked around in two ways:
1. Pass the expected type explicitly (in this case checking will be strict, without the extended form support, so
custom/xxx+type won’t be accepted):
await resp.json(content_type='custom/type').
2. Disable the check entirely:
await resp.json(content_type=None).
Client Tracing
The execution flow of a specific request can be followed by attaching listener coroutines to the signals provided by a TraceConfig instance. This instance is passed as a parameter to the ClientSession constructor, with the result that the client triggers the different signals supported by the TraceConfig. By default any instance of the ClientSession class comes with the signals ability disabled. The following snippet shows how the start and end signals of a request flow can be followed:
async def on_request_start(
        session, trace_config_ctx, params):
    print("Starting request")

async def on_request_end(
        session, trace_config_ctx, params):
    print("Ending request")

trace_config = aiohttp.TraceConfig()
trace_config.on_request_start.append(on_request_start)
trace_config.on_request_end.append(on_request_end)

async with aiohttp.ClientSession(
        trace_configs=[trace_config]) as client:
    await client.get('http://example.com/some/redirect/')
trace_configs is a list that can contain instances of the TraceConfig class, allowing the signal handlers coming from different TraceConfig instances to run. The following example shows how two TraceConfig instances of a different nature are installed to perform their job in each signal handler:
from mylib.traceconfig import AuditRequest
from mylib.traceconfig import XRay
All signals take as parameters, first, the ClientSession instance used by the specific request related to that signal, and second, a SimpleNamespace instance called trace_config_ctx. The trace_config_ctx object can be used to share state between the different signals that belong to the same request and the same TraceConfig class, for example:
async def on_request_start(
        session, trace_config_ctx, params):
    trace_config_ctx.start = asyncio.get_event_loop().time()

async def on_request_end(
        session, trace_config_ctx, params):
    elapsed = asyncio.get_event_loop().time() - trace_config_ctx.start
    print("Request took {}".format(elapsed))

The trace_config_ctx param is initialized at the beginning of the request flow; per-request state can additionally be passed via the trace_request_ctx keyword:

session.get('http://example.com/some/redirect/',
            trace_request_ctx={'foo': 'bar'})
See also:
Tracing Reference section for more information about the different signals supported.
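The wiring can be verified without any network traffic; the signal containers behave like plain lists of listeners until the session freezes them (a minimal sketch):

```python
import aiohttp

async def on_request_start(session, trace_config_ctx, params):
    print("Starting request")

trace_config = aiohttp.TraceConfig()
trace_config.on_request_start.append(on_request_start)

# signal containers behave like plain lists until the session freezes them
print(len(trace_config.on_request_start))  # 1
```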
Connectors
To tweak or change transport layer of requests you can pass a custom connector to ClientSession and family.
For example:
conn = aiohttp.TCPConnector()
session = aiohttp.ClientSession(connector=conn)
Note: By default the session object takes ownership of the connector, which among other things means closing the connections once the session is closed. If you are keen on sharing the same connector through different session instances you must set the connector_owner parameter to False for each session instance.
See also:
Connectors section for more information about different connector types and configuration options.
To limit the amount of simultaneously opened connections you can pass a limit parameter to the connector:

conn = aiohttp.TCPConnector(limit=30)

To disable the limit, pass 0:

conn = aiohttp.TCPConnector(limit=0)

To limit the amount of simultaneously opened connections to the same endpoint ((host, port, is_ssl) triple) you can pass a limit_per_host parameter to the connector:

conn = aiohttp.TCPConnector(limit_per_host=30)
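The limits are exposed as read-only properties, so the configuration can be sanity-checked locally (a sketch; handing the connector to a session lets the session close it):

```python
import asyncio
import aiohttp

async def main():
    conn = aiohttp.TCPConnector(limit=30, limit_per_host=10)
    # the session owns the connector by default and closes it on exit
    async with aiohttp.ClientSession(connector=conn):
        return conn.limit, conn.limit_per_host

limits = asyncio.get_event_loop().run_until_complete(main())
print(limits)  # (30, 10)
```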
12.1. Client 37
aiohttp Documentation, Release 3.7.4.post0
By default TCPConnector comes with the DNS cache table enabled, and resolutions are cached for 10 seconds. This behavior can be changed either by adjusting the TTL for a resolution, as can be seen in the following example:

conn = aiohttp.TCPConnector(ttl_dns_cache=300)

or by disabling the use of the DNS cache table, meaning that all requests will end up making a DNS resolution, as the following example shows:

conn = aiohttp.TCPConnector(use_dns_cache=False)

In order to specify the nameservers to use when resolving hostnames, aiodns is required:
If your HTTP server uses UNIX domain sockets you can use UnixConnector:
conn = aiohttp.UnixConnector(path='/path/to/socket')
session = aiohttp.ClientSession(connector=conn)
If your HTTP server uses Named pipes you can use NamedPipeConnector:
conn = aiohttp.NamedPipeConnector(path=r'\\.\pipe\<name-of-pipe>')
session = aiohttp.ClientSession(connector=conn)
By default aiohttp uses strict checks for the HTTPS protocol. Certification checks can be relaxed by setting ssl to False:

r = await session.get('https://example.com', ssl=False)
If you need to set up custom SSL parameters (for example, use your own certification files) you can create an ssl.SSLContext instance and pass it into the proper ClientSession method:
sslcontext = ssl.create_default_context(
cafile='/path/to/ca-bundle.crt')
r = await session.get('https://example.com', ssl=sslcontext)
If you need to verify self-signed certificates, you can do the same thing as the previous example, but add another call
to ssl.SSLContext.load_cert_chain() with the key pair:
sslcontext = ssl.create_default_context(
cafile='/path/to/ca-bundle.crt')
sslcontext.load_cert_chain('/path/to/client/public/device.pem',
'/path/to/client/private/device.key')
r = await session.get('https://example.com', ssl=sslcontext)
aiohttp.ClientConnectorSSLError:

try:
    await session.get('https://expired.badssl.com/')
except aiohttp.ClientConnectorSSLError as e:
    assert isinstance(e, ssl.SSLError)

aiohttp.ClientConnectorCertificateError:

try:
    await session.get('https://wrong.host.badssl.com/')
except aiohttp.ClientConnectorCertificateError as e:
    assert isinstance(e, ssl.CertificateError)

Or you can handle both errors via the aiohttp.ClientSSLError base class:

try:
    await session.get('https://expired.badssl.com/')
except aiohttp.ClientSSLError as e:
    assert isinstance(e, ssl.SSLError)

try:
    await session.get('https://wrong.host.badssl.com/')
except aiohttp.ClientSSLError as e:
    assert isinstance(e, ssl.CertificateError)
Note that this is the fingerprint of the DER-encoded certificate. If you have the certificate in PEM format, you can convert it to DER with, e.g.:
openssl x509 -in crt.pem -inform PEM -outform DER > crt.der
Note: Tip: to convert from a hexadecimal digest to a binary byte-string, you can use binascii.unhexlify().
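A quick sketch of that conversion (the digest value below is a placeholder, not a real certificate fingerprint):

```python
import binascii

# 64 hex characters = the 32 bytes of a SHA-256 digest (placeholder value)
hex_digest = 'aa' * 32
fingerprint = binascii.unhexlify(hex_digest)
print(len(fingerprint))  # 32
```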
The ssl parameter may be passed to TCPConnector as a default; the value given to ClientSession.get() and other methods overrides that default.
Proxy support
aiohttp supports plain HTTP proxies and HTTP proxies that can be upgraded to HTTPS via the HTTP CONNECT
method. aiohttp does not support proxies that must be connected to via https://. To connect, use the proxy
parameter:
session.get("http://python.org",
proxy="http://user:[email protected]")
Contrary to the requests library, it won't read environment variables by default. But you can do so by passing trust_env=True to the aiohttp.ClientSession constructor to extract proxy configuration from the HTTP_PROXY or HTTPS_PROXY environment variables (both are case insensitive):
Proxy credentials are given from ~/.netrc file if present (see aiohttp.ClientSession for more details).
Graceful Shutdown
When ClientSession closes at the end of an async with block (or through a direct ClientSession.
close() call), the underlying connection remains open due to asyncio internal details. In practice, the underlying
connection will close after a short while. However, if the event loop is stopped before the underlying connection is
closed, a ResourceWarning: unclosed transport warning is emitted (when warnings are enabled).
To avoid this situation, a small delay must be added before closing the event loop to allow any open underlying
connections to close.
For a ClientSession without SSL, a simple zero-sleep (await asyncio.sleep(0)) will suffice:
loop = asyncio.get_event_loop()
loop.run_until_complete(read_website())
# Zero-sleep to allow underlying connections to close
loop.run_until_complete(asyncio.sleep(0))
loop.close()
For a ClientSession with SSL, the application must wait a short duration before closing:
...
# Wait 250 ms for the underlying SSL connections to close
loop.run_until_complete(asyncio.sleep(0.250))
loop.close()
Note that the appropriate amount of time to wait will vary from application to application.
All of this will eventually become obsolete when the asyncio internals are changed so that aiohttp itself can wait on the underlying connection to close. Please follow issue #1925 for progress on this.
Client Session

Usage example:

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://httpbin.org/get') as resp:
            print(resp.status)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

The client session supports the context manager protocol for self closing.
client_session = aiohttp.ClientSession(raise_for_status=True)
resp = await client_session.get(url, raise_for_status=False)
async with resp:
    assert resp.status == 200

Set the parameter to True if you need raise_for_status for most of the cases but override raise_for_status for those requests where you need to handle responses with status 400 or higher.
• timeout –
a ClientTimeout settings structure, 300 seconds (5min) total timeout by default.
New in version 3.3.
• read_timeout (float) – Request operations timeout. read_timeout is cumulative
for all request operations (request, redirects, responses, data consuming). By default, the
read timeout is 5*60 seconds. Use None or 0 to disable timeout checks.
Deprecated since version 3.3: Use timeout parameter instead.
• conn_timeout (float) – timeout for connection establishing (optional). Values 0 or
None mean no timeout.
Deprecated since version 3.3: Use timeout parameter instead.
• connector_owner (bool) – Close connector instance on session closing.
Setting the parameter to False allows sharing the connection pool between sessions without sharing session state: cookies etc.
• auto_decompress (bool) –
Automatically decompress response body, True by default
New in version 2.3.
• read_bufsize (int) –
Size of the read buffer (ClientResponse.content). 64 KiB by default.
New in version 3.7.
• trust_env (bool) – Get proxies information from HTTP_PROXY / HTTPS_PROXY en-
vironment variables if the parameter is True (False by default).
Get proxy credentials from ~/.netrc file if present.
See also:
.netrc documentation: https://www.gnu.org/software/inetutils/manual/html_node/The-_002enetrc-file.html
New in version 2.3.
Changed in version 3.0: Added support for ~/.netrc file.
• requote_redirect_url (bool) –
Apply URL requoting for redirection URLs if automatic redirection is enabled (True by
default).
New in version 3.5.
• trace_configs – A list of TraceConfig instances used for client tracing. None
(default) is used for request tracing disabling. See Tracing Reference for more information.
closed
True if the session has been closed, False otherwise.
A read-only property.
12.1. Client 43
aiohttp Documentation, Release 3.7.4.post0
connector
aiohttp.BaseConnector derived instance used for the session.
A read-only property.
cookie_jar
The session cookies, AbstractCookieJar instance.
Gives access to cookie jar’s content and modifiers.
A read-only property.
requote_redirect_url
aiohttp re-quotes redirect URLs by default, but some servers require the exact URL from the Location header. To disable re-quoting set the requote_redirect_url attribute to False.
New in version 2.1.
connector_owner
Should connector be closed on session closing
bool (optional)
New in version 3.7.
raise_for_status
Should ClientResponse.raise_for_status() be called for each response
Either bool or callable
New in version 3.7.
auto_decompress
Should the body response be automatically decompressed
bool default is True
New in version 3.7.
trust_env
Should get proxies information from HTTP_PROXY / HTTPS_PROXY environment variables or ~/.netrc
file if present
bool default is False
New in version 3.7.
trace_config
A list of TraceConfig instances used for client tracing. None (default) is used for request tracing
disabling. See Tracing Reference for more information.
New in version 3.7.
coroutine async-with request(method, url, *, params=None, data=None, json=None, cookies=None, headers=None, skip_auto_headers=None, auth=None, allow_redirects=True, max_redirects=10, compress=None, chunked=None, expect100=False, raise_for_status=None, read_until_eof=True, read_bufsize=None, proxy=None, proxy_auth=None, timeout=sentinel, ssl=None, verify_ssl=None, fingerprint=None, ssl_context=None, proxy_headers=None)
Performs an asynchronous HTTP request. Returns a response object.
Parameters
• method (str) – HTTP method
• url – Request URL, str or URL.
• params – Mapping, iterable of tuple of key/value pairs or string to be sent as parameters in
the query string of the new request. Ignored for subsequent redirected requests (optional)
Allowed values are:
– collections.abc.Mapping e.g. dict, aiohttp.MultiDict or aiohttp.MultiDictProxy
– collections.abc.Iterable e.g. tuple or list
– str with preferably url-encoded content (Warning: content will not be encoded by aiohttp)
• data – The data to send in the body of the request. This can be a FormData object or
anything that can be passed into FormData, e.g. a dictionary, bytes, or file-like object.
(optional)
• json – Any json compatible python object (optional). The json and data parameters cannot be used at the same time.
• cookies (dict) –
HTTP Cookies to send with the request (optional)
Global session cookies and the explicitly set cookies will be merged when sending the
request.
New in version 3.5.
• headers (dict) – HTTP Headers to send with the request (optional)
• skip_auto_headers – set of headers for which autogeneration should be skipped.
aiohttp autogenerates headers like User-Agent or Content-Type if these headers
are not explicitly passed. Using skip_auto_headers parameter allows to skip that
generation.
Iterable of str or istr (optional)
• auth (aiohttp.BasicAuth) – an object that represents HTTP Basic Authorization
(optional)
• allow_redirects (bool) – If set to False, do not follow redirects. True by default
(optional).
• max_redirects (int) – Maximum number of redirects to follow. 10 by default.
• compress (bool) – Set to True if the request has to be compressed with deflate encoding. compress can not be combined with Content-Encoding and Content-Length headers. None by default (optional).
• chunked (int) – Enable chunked transfer encoding. It is up to the developer to decide
how to chunk data streams. If chunking is enabled, aiohttp encodes the provided chunks
in the “Transfer-encoding: chunked” format. If chunked is set, then the Transfer-encoding
and content-length headers are disallowed. None by default (optional).
• expect100 (bool) – Expect 100-continue response from server. False by default
(optional).
• raise_for_status (bool) –
Automatically call ClientResponse.raise_for_status() for response if set
to True. If set to None value from ClientSession will be used. None by de-
fault (optional).
New in version 3.4.
• read_until_eof (bool) – Read response until EOF if response does not have
Content-Length header. True by default (optional).
• read_bufsize (int) –
Size of the read buffer (ClientResponse.content). None by default, which means
that the session global value is used.
New in version 3.7.
• proxy – Proxy URL, str or URL (optional)
12.1. Client 47
aiohttp Documentation, Release 3.7.4.post0
• verify_ssl (bool) – Perform SSL certificate validation for HTTPS requests (enabled
by default). May be disabled to skip validation for sites with invalid certificates.
New in version 2.3.
Deprecated since version 3.0: Use ssl=False
• fingerprint (bytes) – Pass the SHA256 digest of the expected certificate in DER
format to verify that the certificate the server presents matches. Useful for certificate
pinning.
Note: use of MD5 or SHA1 digests is insecure and deprecated.
New in version 2.3.
Deprecated since version 3.0: Use ssl=aiohttp.Fingerprint(digest)
• ssl_context (ssl.SSLContext) – ssl context used for processing HTTPS requests
(optional).
ssl_context may be used for configuring the certificate authority chain, supported SSL
options, etc.
New in version 2.3.
Deprecated since version 3.0: Use ssl=ssl_context
• proxy_headers (dict) – HTTP headers to send to the proxy if the parameter proxy
has been provided.
New in version 2.3.
• compress (int) –
Enable Per-Message Compress Extension support. 0 to disable, 9 to 15 for window
bit support. Default value is 0.
New in version 2.3.
• max_msg_size (int) –
maximum size of read websocket message, 4 MB by default. To disable the size limit
use 0.
New in version 3.3.
• method (str) –
HTTP method to establish WebSocket connection, 'GET' by default.
New in version 3.5.
coroutine close()
Close underlying connector.
Release all acquired resources.
detach()
Detach the connector from the session without closing it.
The session is switched to the closed state anyway.
Basic API
While we encourage ClientSession usage, we also provide simple coroutines for making HTTP requests.
The Basic API is good for performing simple HTTP requests without keep-alive, cookies, and complex connection
details like properly configured SSL certification chaining.
async-with aiohttp.request(method, url, *, params=None, data=None, json=None, headers=None, cookies=None, auth=None, allow_redirects=True, max_redirects=10, encoding='utf-8', version=HttpVersion(major=1, minor=1), compress=None, chunked=None, expect100=False, raise_for_status=False, read_bufsize=None, connector=None, loop=None, read_until_eof=True, timeout=sentinel)
Asynchronous context manager for performing an asynchronous HTTP request. Returns a ClientResponse
response object.
Parameters
• method (str) – HTTP method
• url – Requested URL, str or URL
• params (dict) – Parameters to be sent in the query string of the new request (optional)
• data – The data to send in the body of the request. This can be a FormData object or
anything that can be passed into FormData, e.g. a dictionary, bytes, or file-like object.
(optional)
• json – Any JSON-compatible Python object (optional). The json and data parameters cannot
be used at the same time.
• headers (dict) – HTTP Headers to send with the request (optional)
• cookies (dict) – Cookies to send with the request (optional)
• auth (aiohttp.BasicAuth) – an object that represents HTTP Basic Authorization
(optional)
• allow_redirects (bool) – If set to False, do not follow redirects. True by default
(optional).
• version (aiohttp.protocol.HttpVersion) – Request HTTP version (optional)
• compress (bool) – Set to True if request has to be compressed with deflate encoding.
False instructs aiohttp to not compress data. None by default (optional).
• chunked (int) – Enables chunked transfer encoding. None by default (optional).
• expect100 (bool) – Expect 100-continue response from server. False by default
(optional).
• raise_for_status (bool) –
Automatically call ClientResponse.raise_for_status() for the response if set to
True. If set to None, the value from ClientSession will be used. None by default
(optional).
New in version 3.4.
• connector (aiohttp.BaseConnector) – BaseConnector sub-class instance to support
connection pooling.
• read_until_eof (bool) – Read response until EOF if response does not have
Content-Length header. True by default (optional).
• read_bufsize (int) –
Size of the read buffer (ClientResponse.content). None by default, which means
that the session global value is used.
New in version 3.7.
• timeout – a ClientTimeout settings structure, 300 seconds (5min) total timeout by
default.
• loop – event loop used for processing HTTP requests. If the param is None,
asyncio.get_event_loop() is used for getting the default event loop.
Deprecated since version 2.0.
Return ClientResponse a client response object.
Usage:
import aiohttp
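A self-contained sketch of the Basic API (hedged: the handler, route, and ephemeral-port choice below are illustrative, not part of the original example; it starts a local aiohttp server so no external network is needed):

```python
import asyncio

import aiohttp
from aiohttp import web

async def handler(request):
    # Trivial endpoint for the illustration
    return web.Response(text="hello")

async def main():
    # Start a local server so the request below has something to hit
    app = web.Application()
    app.add_routes([web.get("/", handler)])
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 0)  # port 0: pick a free port
    await site.start()
    host, port = runner.addresses[0][:2]
    try:
        # The Basic API: an async context manager yielding a ClientResponse
        async with aiohttp.request("GET", f"http://{host}:{port}/") as resp:
            assert resp.status == 200
            return await resp.text()
    finally:
        await runner.cleanup()

print(asyncio.run(main()))
```

Note that each aiohttp.request() call opens and closes its own connection; for repeated requests a ClientSession is preferable.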
Connectors
BaseConnector
TCPConnector
Parameters
• verify_ssl (bool) – perform SSL certificate validation for HTTPS requests (enabled
by default). May be disabled to skip validation for sites with invalid certificates.
Deprecated since version 2.3: Pass verify_ssl to ClientSession.get() etc.
• fingerprint (bytes) – pass the SHA256 digest of the expected certificate in DER for-
mat to verify that the certificate the server presents matches. Useful for certificate pinning.
Note: use of MD5 or SHA1 digests is insecure and deprecated.
Deprecated since version 2.3: Pass verify_ssl to ClientSession.get() etc.
• use_dns_cache (bool) – use internal cache for DNS lookups, True by default.
Enabling this option may speed up connection establishment a bit but may also introduce
some side effects.
• ttl_dns_cache (int) – expire DNS entries after this many seconds; None means
cached forever. By default 10 seconds (optional).
In some environments the IP addresses related to a specific HOST can change after a specific
time. Use this option to keep the DNS cache updated, refreshing each entry after N seconds.
• limit (int) – total number of simultaneous connections. If limit is None the connector has
no limit (default: 100).
• limit_per_host (int) – limit simultaneous connections to the same endpoint. End-
points are the same if they have an equal (host, port, is_ssl) triple. If limit is 0
the connector has no limit (default: 0).
• resolver (aiohttp.abc.AbstractResolver) – custom resolver instance to use.
aiohttp.DefaultResolver by default (asynchronous if aiodns>=1.1 is installed).
Custom resolvers allow resolving hostnames differently from the way the host is configured.
The resolver is aiohttp.ThreadedResolver by default; the asynchronous version is
pretty robust but might fail in very rare cases.
• family (int) – TCP socket family, both IPv4 and IPv6 by default. For IPv4 only use
socket.AF_INET, for IPv6 only – socket.AF_INET6.
family is 0 by default, that means both IPv4 and IPv6 are accepted. To specify only concrete
version please pass socket.AF_INET or socket.AF_INET6 explicitly.
• ssl_context (ssl.SSLContext) – SSL context used for processing HTTPS requests
(optional).
ssl_context may be used for configuring the certificate authority chain, supported SSL
options, etc.
• local_addr (tuple) – tuple of (local_host, local_port) used to bind socket
locally if specified.
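The limit and limit_per_host parameters above can be sketched as follows (hedged: the values 30 and 10 are illustrative; the connector is handed to a session, which owns and closes it):

```python
import asyncio

import aiohttp

async def main():
    # Cap the pool at 30 total connections, at most 10 to any one
    # (host, port, is_ssl) endpoint.
    conn = aiohttp.TCPConnector(limit=30, limit_per_host=10)
    async with aiohttp.ClientSession(connector=conn) as session:
        # The session owns the connector and closes it on exit
        return conn.limit, conn.limit_per_host

print(asyncio.run(main()))
```

The defaults, per the parameter list above, are limit=100 and limit_per_host=0 (unlimited per host).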
family
TCP socket family e.g. socket.AF_INET or socket.AF_INET6
Read-only property.
dns_cache
Use quick lookup in internal DNS cache for host names if True.
Read-only bool property.
cached_hosts
The cache of resolved hosts if dns_cache is enabled.
Read-only types.MappingProxyType property.
clear_dns_cache(self, host=None, port=None)
Clear internal DNS cache.
Remove a specific entry if both host and port are specified; otherwise clear the whole cache.
UnixConnector
conn = UnixConnector(path='/path/to/socket')
session = ClientSession(connector=conn)
async with session.get('http://python.org') as resp:
...
Constructor accepts all parameters suitable for BaseConnector plus UNIX-specific one:
Parameters path (str) – Unix socket path
path
Path to UNIX socket, read-only str property.
Connection
class aiohttp.Connection
Encapsulates single connection in connector object.
End users should never create Connection instances manually but get them from the
BaseConnector.connect() coroutine.
closed
bool read-only property, True if connection was closed, released or detached.
loop
Event loop used for connection
Deprecated since version 3.5.
transport
Connection transport
close()
Close connection with forcibly closing underlying socket.
release()
Release connection back to connector.
The underlying socket is not closed; the connection may be reused later if the timeout (30 seconds by
default) for the connection has not expired.
Response object
class aiohttp.ClientResponse
Client response returned by ClientSession.request() and family.
Users never create instances of the ClientResponse class but get them from API calls.
ClientResponse supports the async context manager protocol; after exiting the async with block
the response object is released (see the release() coroutine).
version
Response’s version, HttpVersion instance.
status
HTTP status code of response (int), e.g. 200.
reason
HTTP status reason of response (str), e.g. "OK".
ok
Boolean representation of HTTP status code (bool). True if status is less than 400; otherwise,
False.
method
Request’s method (str).
url
URL of request (URL).
real_url
Unmodified URL of request with URL fragment unstripped (URL).
New in version 3.2.
connection
Connection used for handling response.
content
Payload stream, which contains the response’s BODY (StreamReader). It supports various reading
methods depending on the expected format. When chunked transfer encoding is used by the server, it allows
retrieving the actual HTTP chunks.
Reading from the stream may raise aiohttp.ClientPayloadError if the response object is closed
before the response receives all data, or in case of any transfer-encoding related errors like malformed
chunked encoding or broken compression data.
cookies
HTTP cookies of response (Set-Cookie HTTP header, SimpleCookie).
headers
A case-insensitive multidict proxy with HTTP headers of response, CIMultiDictProxy.
raw_headers
Unmodified HTTP headers of response as unconverted bytes, a sequence of (key, value) pairs.
links
Link HTTP header parsed into a MultiDictProxy.
For each link, key is link param rel when it exists, or link url as str otherwise, and value is
MultiDictProxy of link params and url at key url as URL instance.
New in version 3.2.
content_type
Read-only property with content part of Content-Type header.
charset
Read-only property that specifies the encoding for the request’s BODY.
The value is parsed from the Content-Type HTTP header.
Returns str like 'utf-8' or None if no Content-Type header present in HTTP headers or it has no
charset information.
content_disposition
Read-only property that specifies the Content-Disposition HTTP header.
Instance of ContentDisposition or None if no Content-Disposition header is present in the HTTP
headers.
history
A Sequence of ClientResponse objects of preceding requests (earliest request first) if there were
redirects, an empty sequence otherwise.
close()
Close response and underlying connection.
Note: If the response has no charset info in the Content-Type HTTP header, cchardet / chardet is used
for content encoding autodetection.
It may hurt performance. If the page encoding is known, passing an explicit encoding parameter might help:
await resp.text('ISO-8859-1')
• encoding (str) – text encoding used for BODY decoding, or None for encoding
autodetection (default).
By the standard, JSON encoding should be UTF-8, but practice beats purity: some servers
return non-UTF responses. Autodetection works pretty well anyway.
• loads (callable) – callable() used for loading JSON data, json.loads() by
default.
• content_type (str) – specify the response’s content-type; if the content type does not
match, aiohttp.ClientResponseError is raised. To disable the content-type check,
pass None as the value. (default: application/json).
Returns BODY as JSON data parsed by the loads parameter, or None if BODY is empty or
contains only whitespace.
request_info
A namedtuple with request URL and headers from the ClientRequest object, an
aiohttp.RequestInfo instance.
get_encoding()
Automatically detect content encoding using the charset info in the Content-Type HTTP header. If this
info does not exist or there are no appropriate codecs for the encoding, then cchardet / chardet is used.
Beware that it is not always safe to use the result of this function to decode a response. Some encodings
detected by cchardet are not known by Python (e.g. VISCII).
Raises RuntimeError – if called before the body has been read, for cchardet usage
New in version 3.0.
ClientWebSocketResponse
Parameters message – optional payload of pong message, str (converted to UTF-8 encoded
bytes) or bytes.
Changed in version 3.0: The method is converted into coroutine
coroutine send_str(data, compress=None)
Send data to peer as TEXT message.
Parameters
• data (str) – data to send.
• compress (int) – sets specific level of compression for single message, None for not
overriding per-socket setting.
Raises TypeError – if data is not str
Changed in version 3.0: The method is converted into coroutine, compress parameter added.
coroutine send_bytes(data, compress=None)
Send data to peer as BINARY message.
Parameters
• data – data to send.
• compress (int) – sets specific level of compression for single message, None for not
overriding per-socket setting.
Raises TypeError – if data is not bytes, bytearray or memoryview.
Changed in version 3.0: The method is converted into coroutine, compress parameter added.
coroutine send_json(data, compress=None, *, dumps=json.dumps)
Send data to peer as JSON string.
Parameters
• data – data to send.
• compress (int) – sets specific level of compression for single message, None for not
overriding per-socket setting.
• dumps (callable) – any callable that accepts an object and returns a JSON string
(json.dumps() by default).
Raises
• RuntimeError – if connection is not started or closing
• ValueError – if data is not serializable object
• TypeError – if value returned by dumps(data) is not str
Changed in version 3.0: The method is converted into coroutine, compress parameter added.
coroutine close(*, code=1000, message=b'')
A coroutine that initiates closing handshake by sending CLOSE message. It waits for close response from
server. To add a timeout to close() call just wrap the call with asyncio.wait() or asyncio.wait_for().
Parameters
• code (int) – closing code
• message – optional payload of close message, str (converted to UTF-8 encoded bytes)
or bytes.
coroutine receive()
A coroutine that waits for an upcoming data message from the peer and returns it.
The coroutine implicitly handles PING, PONG and CLOSE without returning the message.
It processes the ping-pong game and performs the closing handshake internally.
Returns WSMessage
coroutine receive_str()
A coroutine that calls receive() but also asserts the message type is TEXT.
Return str peer’s message content.
Raises TypeError – if message is BINARY.
coroutine receive_bytes()
A coroutine that calls receive() but also asserts the message type is BINARY.
Return bytes peer’s message content.
Raises TypeError – if message is TEXT.
coroutine receive_json(*, loads=json.loads)
A coroutine that calls receive_str() and loads the JSON string to a Python dict.
Parameters loads (callable) – any callable that accepts str and returns dict with
parsed JSON (json.loads() by default).
Return dict loaded JSON content
Raises
• TypeError – if message is BINARY.
• ValueError – if message is not valid JSON.
Utilities
ClientTimeout
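A brief hedged sketch of constructing the ClientTimeout settings structure referenced by the timeout parameter above (the total and connect fields are part of the aiohttp API; the values here are illustrative):

```python
import aiohttp

# ClientTimeout is a simple settings structure; `total` is the overall
# timeout in seconds (the request() docs above cite a 300 s default),
# `connect` caps connection establishment.
timeout = aiohttp.ClientTimeout(total=60, connect=10)
print(timeout.total, timeout.connect)
```

Pass the object as the timeout argument to ClientSession or aiohttp.request().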
RequestInfo
class aiohttp.RequestInfo
A data class with request URL and headers from the ClientRequest object, available as the
ClientResponse.request_info attribute.
url
Requested url, yarl.URL instance.
method
Request HTTP method like 'GET' or 'POST', str.
headers
HTTP headers for request, multidict.CIMultiDict instance.
real_url
Requested url with URL fragment unstripped, yarl.URL instance.
New in version 3.2.
BasicAuth
encode()
Encode credentials into string suitable for Authorization header etc.
Returns encoded authentication data, str.
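The encode() behavior can be sketched as follows (credentials are illustrative; the encoded form is standard base64 of "user:pass"):

```python
import aiohttp

# Build Basic Auth credentials and encode them for an Authorization header
auth = aiohttp.BasicAuth("user", "pass")
print(auth.encode())  # → Basic dXNlcjpwYXNz
```

The same object can be passed as the auth parameter of ClientSession or individual requests.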
CookieJar
len(session.cookie_jar)
save(file_path)
Write a pickled representation of cookies into the file at provided path.
Parameters file_path – Path to file where cookies will be serialized, str or
pathlib.Path instance.
load(file_path)
Load a pickled representation of cookies from the file at provided path.
Parameters file_path – Path to file from where cookies will be imported, str or
pathlib.Path instance.
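The save()/load() round trip can be sketched as follows (hedged: the temporary path is illustrative, and the jar here is empty so the round trip restores zero cookies):

```python
import asyncio
import pathlib
import tempfile

import aiohttp

async def main():
    # Round-trip a jar through save()/load(); path is illustrative
    path = pathlib.Path(tempfile.mkdtemp()) / "cookies.pickle"
    jar = aiohttp.CookieJar()
    jar.save(path)
    restored = aiohttp.CookieJar()
    restored.load(path)
    return len(restored)

print(asyncio.run(main()))
```

In practice the jar would be a live session's cookie_jar, persisted between program runs.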
class aiohttp.DummyCookieJar(*, loop=None)
Dummy cookie jar which does not store cookies but ignores them.
Could be useful e.g. for web crawlers iterating over the Internet without blowing up with saved cookies.
To install dummy cookie jar pass it into session instance:
jar = aiohttp.DummyCookieJar()
session = aiohttp.ClientSession(cookie_jar=jar)
class aiohttp.Fingerprint(digest)
Fingerprint helper for checking SSL certificates by SHA256 digest.
Parameters digest (bytes) – SHA256 digest for certificate in DER-encoded binary form (see
ssl.SSLSocket.getpeercert()).
To check fingerprint pass the object into ClientSession.get() call, e.g.:
import hashlib
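A hedged sketch of building the digest (the certificate bytes below are a stand-in; in practice they come from the server's certificate in DER form):

```python
import hashlib

# Stand-in for a server certificate in DER-encoded binary form;
# normally obtained from the certificate file or the TLS handshake.
der_cert_bytes = b"\x30\x82\x01\x0a-stand-in-certificate-bytes"
digest = hashlib.sha256(der_cert_bytes).digest()
assert len(digest) == 32  # aiohttp.Fingerprint expects a 32-byte SHA256 digest

# With aiohttp (not executed here):
# async with session.get(url, ssl=aiohttp.Fingerprint(digest)) as resp:
#     ...
```

If the server presents a certificate whose digest differs, the request fails with ServerFingerprintMismatch.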
FormData
A FormData object contains the form data and also handles encoding it into a body that is either
multipart/form-data or application/x-www-form-urlencoded. multipart/form-data is used if at least one
field is an io.IOBase object or was added with at least one optional argument to add_field (content_type,
filename, or content_transfer_encoding). Otherwise, application/x-www-form-urlencoded
is used.
FormData instances are callable and return a Payload on being called.
class aiohttp.FormData(fields, quote_fields=True, charset=None)
Helper class for multipart/form-data and application/x-www-form-urlencoded body generation.
Parameters fields – A container for the key/value pairs of this form.
Possible types are:
• dict
• tuple or list
• io.IOBase, e.g. a file-like object
• multidict.MultiDict or multidict.MultiDictProxy
If it is a tuple or list, it must be a valid argument for add_fields.
For dict, multidict.MultiDict, and multidict.MultiDictProxy, the keys and
values must be valid name and value arguments to add_field, respectively.
add_field(name, value, content_type=None, filename=None, content_transfer_encoding=None)
Add a field to the form.
Parameters
• name (str) – Name of the field
• value – Value of the field
Possible types are:
– str
– bytes, bytearray, or memoryview
– io.IOBase, e.g. a file-like object
• content_type (str) – The field’s content-type header (optional)
• filename (str) – The field’s filename (optional)
If this is not set and value is a bytes, bytearray, or memoryview object, the
name argument is used as the filename unless content_transfer_encoding is
specified.
If filename is not set and value is an io.IOBase object, the filename is extracted
from the object if possible.
• content_transfer_encoding (str) – The field’s content-transfer-encoding
header (optional)
add_fields(fields)
Add one or more fields to the form.
Parameters fields – An iterable containing:
• io.IOBase, e.g. a file-like object
• multidict.MultiDict or multidict.MultiDictProxy
• tuple or list of length two, containing a name-value pair
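The encoding rules described above can be sketched as follows (hedged: field names and the PNG stand-in bytes are illustrative):

```python
import aiohttp

# Only plain string fields: encodes as application/x-www-form-urlencoded
plain = aiohttp.FormData()
plain.add_field("user", "alice")
ct_plain = plain().content_type.split(";")[0]  # FormData is callable → Payload

# A field added with filename/content_type forces multipart/form-data
upload = aiohttp.FormData()
upload.add_field("avatar", b"\x89PNG-stand-in", filename="a.png",
                 content_type="image/png")
ct_upload = upload().content_type.split(";")[0]

print(ct_plain)
print(ct_upload)
```

Either object can be passed directly as the data argument of a request.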
Client exceptions
The exception hierarchy was significantly modified in version 2.0. aiohttp defines only exceptions that cover
connection handling and server response misbehaviors. For developer-specific mistakes, aiohttp uses standard
Python exceptions like ValueError or TypeError.
Reading a response content may raise a ClientPayloadError exception. This exception indicates errors specific
to the payload encoding, such as invalid compressed data, malformed chunked-encoded chunks, or not enough data
to satisfy the Content-Length header.
All exceptions are available as members of aiohttp module.
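Because every client exception derives from aiohttp.ClientError (and some also from standard exceptions), broad and narrow handling can be mixed; a small sketch:

```python
import aiohttp

# InvalidURL derives from both ClientError and ValueError (see below),
# so either base class can be used in an except clause.
try:
    raise aiohttp.InvalidURL("no-host")
except aiohttp.ClientError as exc:
    caught = type(exc).__name__
    is_value_error = isinstance(exc, ValueError)

print(caught, is_value_error)
```

In real code the exception would be raised by a request call rather than explicitly.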
exception aiohttp.ClientError
Base class for all client specific exceptions.
Derived from Exception
class aiohttp.ClientPayloadError
This exception can only be raised while reading the response payload if one of these errors occurs:
1. invalid compression
2. malformed chunked encoding
3. not enough data to satisfy the Content-Length HTTP header.
Derived from ClientError
exception aiohttp.InvalidURL
URL used for fetching is malformed, e.g. it does not contain host part.
Derived from ClientError and ValueError
url
Invalid URL, yarl.URL instance.
class aiohttp.ContentDisposition
Represent Content-Disposition header
value
A str instance. Value of Content-Disposition header itself, e.g. attachment.
filename
A str instance. Content filename extracted from parameters. May be None.
parameters
Read-only mapping containing all parameters.
Response errors
exception aiohttp.ClientResponseError
These exceptions can happen after we get a response from the server.
Derived from ClientError
request_info
Instance of RequestInfo object, contains information about request.
status
HTTP status code of response (int), e.g. 400.
message
Message of response (str), e.g. "OK".
headers
Headers in response, a list of pairs.
history
History from failed response, if available, else empty tuple.
A tuple of ClientResponse objects used for handling redirection responses.
code
HTTP status code of response (int), e.g. 400.
Deprecated since version 3.1.
class aiohttp.WSServerHandshakeError
Web socket server response error.
Derived from ClientResponseError
class aiohttp.ContentTypeError
Invalid content type.
Derived from ClientResponseError
New in version 2.3.
class aiohttp.TooManyRedirects
Client was redirected too many times.
Maximum number of redirects can be configured by using parameter max_redirects in request.
Derived from ClientResponseError
New in version 3.2.
Connection errors
class aiohttp.ClientConnectionError
These exceptions are related to low-level connection problems.
Derived from ClientError
class aiohttp.ClientOSError
Subset of connection errors that are initiated by an OSError exception.
Derived from ClientConnectionError and OSError
class aiohttp.ClientConnectorError
Connector related exceptions.
Derived from ClientOSError
class aiohttp.ClientProxyConnectionError
Derived from ClientConnectorError
class aiohttp.ServerConnectionError
Derived from ClientConnectionError
class aiohttp.ClientSSLError
Derived from ClientConnectorError
class aiohttp.ClientConnectorSSLError
Response ssl error.
Derived from ClientSSLError and ssl.SSLError
class aiohttp.ClientConnectorCertificateError
Response certificate error.
Derived from ClientSSLError and ssl.CertificateError
class aiohttp.ServerDisconnectedError
Server disconnected.
Derived from ServerConnectionError
message
Partially parsed HTTP message (optional).
class aiohttp.ServerTimeoutError
Server operation timeout: read timeout, etc.
Derived from ServerConnectionError and asyncio.TimeoutError
class aiohttp.ServerFingerprintMismatch
Server fingerprint mismatch.
Derived from ServerConnectionError
Hierarchy of exceptions
• ClientError
– ClientResponseError
* ContentTypeError
* WSServerHandshakeError
* ClientHttpProxyError
– ClientConnectionError
  * ClientOSError
    · ClientConnectorError
      · ClientSSLError
        · ClientConnectorCertificateError
        · ClientConnectorSSLError
      · ClientProxyConnectionError
  * ServerConnectionError
    · ServerDisconnectedError
    · ServerTimeoutError
    · ServerFingerprintMismatch
– ClientPayloadError
– InvalidURL
Overview
Name Description
start on_request_start
redirect on_request_redirect
acquire_connection Connection acquiring
headers_received
exception on_request_exception
end on_request_end
headers_sent
chunk_sent on_request_chunk_sent
chunk_received on_response_chunk_received
Connection acquiring
Name Description
begin
end
queued_start on_connection_queued_start
create_start on_connection_create_start
reuseconn on_connection_reuseconn
queued_end on_connection_queued_end
create_end on_connection_create_end
exception Exception raised
resolve_dns DNS resolving
sock_connect Connection establishment
DNS resolving
Name Description
begin
end
exception Exception raised
resolve_end on_dns_resolvehost_end
resolve_start on_dns_resolvehost_start
cache_hit on_dns_cache_hit
cache_miss on_dns_cache_miss
TraceConfig
class aiohttp.TraceConfig(trace_config_ctx_factory=SimpleNamespace)
Trace config is the configuration object used to trace requests launched by a ClientSession object using
different events related to different parts of the request flow.
Parameters trace_config_ctx_factory – factory used to create trace contexts; the
default class used is types.SimpleNamespace.
trace_config_ctx(trace_request_ctx=None)
Build a new trace context from the config.
Parameters trace_request_ctx – will be passed as a keyword argument to the
trace_config_ctx_factory.
Every signal handler should have the following signature:
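The required shape can be sketched with standard-library stand-ins (the session and params objects here replace the aiohttp types; only the three-argument async signature matters):

```python
import asyncio
from types import SimpleNamespace

# Every tracing signal handler is an async callable taking
# (session, trace_config_ctx, params).
async def on_request_start(session, trace_config_ctx, params):
    # trace_config_ctx is shared across the signals of one request
    trace_config_ctx.started = True

async def main():
    ctx = SimpleNamespace()  # what the default trace_config_ctx_factory builds
    params = SimpleNamespace(method="GET")  # stand-in for the Params class
    await on_request_start(object(), ctx, params)
    return ctx.started

print(asyncio.run(main()))  # → True
```

With aiohttp, such a handler is appended to the matching signal list, e.g. trace_config.on_request_start.append(on_request_start), and the TraceConfig is passed to the session via trace_configs.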
on_connection_queued_end
Property that gives access to the signals that will be executed when a request that was queued already has
an available connection.
params is aiohttp.TraceConnectionQueuedEndParams instance.
on_connection_create_start
Property that gives access to the signals that will be executed when a request creates a new connection.
params is aiohttp.TraceConnectionCreateStartParams instance.
on_connection_create_end
Property that gives access to the signals that will be executed when a request that created a new connection
finishes its creation.
params is aiohttp.TraceConnectionCreateEndParams instance.
on_connection_reuseconn
Property that gives access to the signals that will be executed when a request reuses a connection.
params is aiohttp.TraceConnectionReuseconnParams instance.
on_dns_resolvehost_start
Property that gives access to the signals that will be executed when a request starts resolving the domain
related to the request.
params is aiohttp.TraceDnsResolveHostStartParams instance.
on_dns_resolvehost_end
Property that gives access to the signals that will be executed when a request finishes resolving the domain
related to the request.
params is aiohttp.TraceDnsResolveHostEndParams instance.
on_dns_cache_hit
Property that gives access to the signals that will be executed when a request was able to use a cached
DNS resolution for the domain related to the request.
params is aiohttp.TraceDnsCacheHitParams instance.
on_dns_cache_miss
Property that gives access to the signals that will be executed when a request was not able to use a cached
DNS resolution for the domain related to the request.
params is aiohttp.TraceDnsCacheMissParams instance.
TraceRequestStartParams
class aiohttp.TraceRequestStartParams
See TraceConfig.on_request_start for details.
method
Method that will be used to make the request.
url
URL that will be used for the request.
headers
Headers that will be used for the request, can be mutated.
TraceRequestChunkSentParams
class aiohttp.TraceRequestChunkSentParams
New in version 3.1.
See TraceConfig.on_request_chunk_sent for details.
method
Method that will be used to make the request.
url
URL that will be used for the request.
chunk
Bytes of chunk sent
TraceResponseChunkReceivedParams
class aiohttp.TraceResponseChunkReceivedParams
New in version 3.1.
See TraceConfig.on_response_chunk_received for details.
method
Method that will be used to make the request.
url
URL that will be used for the request.
chunk
Bytes of chunk received
TraceRequestEndParams
class aiohttp.TraceRequestEndParams
See TraceConfig.on_request_end for details.
method
Method used to make the request.
url
URL used for the request.
headers
Headers used for the request.
response
Response ClientResponse.
TraceRequestExceptionParams
class aiohttp.TraceRequestExceptionParams
See TraceConfig.on_request_exception for details.
method
Method used to make the request.
url
URL used for the request.
headers
Headers used for the request.
exception
Exception raised during the request.
TraceRequestRedirectParams
class aiohttp.TraceRequestRedirectParams
See TraceConfig.on_request_redirect for details.
method
Method used to get this redirect request.
url
URL used for this redirect request.
headers
Headers used for this redirect.
response
Response ClientResponse got from the redirect.
TraceConnectionQueuedStartParams
class aiohttp.TraceConnectionQueuedStartParams
See TraceConfig.on_connection_queued_start for details.
There are no attributes right now.
TraceConnectionQueuedEndParams
class aiohttp.TraceConnectionQueuedEndParams
See TraceConfig.on_connection_queued_end for details.
There are no attributes right now.
TraceConnectionCreateStartParams
class aiohttp.TraceConnectionCreateStartParams
See TraceConfig.on_connection_create_start for details.
There are no attributes right now.
TraceConnectionCreateEndParams
class aiohttp.TraceConnectionCreateEndParams
See TraceConfig.on_connection_create_end for details.
There are no attributes right now.
TraceConnectionReuseconnParams
class aiohttp.TraceConnectionReuseconnParams
See TraceConfig.on_connection_reuseconn for details.
There are no attributes right now.
TraceDnsResolveHostStartParams
class aiohttp.TraceDnsResolveHostStartParams
See TraceConfig.on_dns_resolvehost_start for details.
host
Host that will be resolved.
TraceDnsResolveHostEndParams
class aiohttp.TraceDnsResolveHostEndParams
See TraceConfig.on_dns_resolvehost_end for details.
host
Host that has been resolved.
TraceDnsCacheHitParams
class aiohttp.TraceDnsCacheHitParams
See TraceConfig.on_dns_cache_hit for details.
host
Host found in the cache.
TraceDnsCacheMissParams
class aiohttp.TraceDnsCacheMissParams
See TraceConfig.on_dns_cache_miss for details.
host
Host that was not found in the cache.
The first time you use aiohttp, you’ll notice that a simple HTTP request is performed not in one, but in up to three
steps: create a client session, perform the request inside it, and read the response body.
It’s especially unexpected when coming from other libraries such as the very popular requests, where the “hello world”
looks like this:
response = requests.get('http://python.org')
print(response.text)
The requests library does in fact also provide a session system (requests.Session); it’s just not the default
behavior, nor is it advertised early in the documentation. Because of this, most users take a hit
in performance, but can quickly start hacking. And for requests, it’s an understandable trade-off, since its goal is to be
“HTTP for humans” and simplicity has always been more important than performance in this context.
However, if one uses aiohttp, one chooses asynchronous programming, a paradigm that makes the opposite trade-
off: more verbosity for better performance. And so the library default behavior reflects this, encouraging you to use
performant best practices from the start.
By default the aiohttp.ClientSession object will hold a connector with a maximum of 100 connections,
putting the rest in a queue. This is quite a big number; it means you must be connected to a hundred different servers
(not pages!) concurrently before even having to consider whether your task needs resource adjustment.
In fact, you can picture the session object as a user starting and closing a browser: it wouldn’t make sense to do that
every time you want to load a new tab.
So you are expected to reuse a session object and make many requests from it. For most scripts and average-sized
software, this means you can create a single session, and reuse it for the entire execution of the program. You can even
pass the session around as a parameter in functions. For example, the typical “hello world”:
import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://python.org') as response:
            print(await response.text())

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
On more complex code bases, you can even create a central registry to hold the session object from anywhere in the
code, or a higher level Client class that holds a reference to it.
When to create more than one session object then? It arises when you want more granularity with your resources
management:
• you want to group connections by a common configuration. e.g: sessions can set cookies, headers, timeout
values, etc. that are shared for all connections they hold.
• you need several threads and want to avoid sharing a mutable object between them.
• you want several connection pools to benefit from different queues and assign priorities. e.g. one session never
uses the queue and serves high-priority requests, while the other one has a small concurrency limit and a very long
queue, for non-important requests.
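A sketch of such a split into two sessions with different configurations (the timeout and limit values are placeholders):

```python
import asyncio
import aiohttp

async def main():
    # One tightly limited pool for bulk, low-priority work...
    bulk = aiohttp.ClientSession(
        connector=aiohttp.TCPConnector(limit=10),
        timeout=aiohttp.ClientTimeout(total=300))
    # ...and a separate session with a short timeout for
    # latency-sensitive requests; each keeps its own cookies and pool.
    fast = aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=5))
    try:
        ...  # issue requests through whichever session fits the job
    finally:
        await bulk.close()
        await fast.close()
    return bulk is not fast
```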
12.2 Server
Next, create an Application instance and register the request handler on a particular HTTP method and path:
app = web.Application()
app.add_routes([web.get('/', hello)])
web.run_app(app)
routes = web.RouteTableDef()

@routes.get('/')
async def hello(request):
    return web.Response(text="Hello, world")

app = web.Application()
app.add_routes(routes)
web.run_app(app)
Both ways do essentially the same work; the difference is only a matter of taste: whether you prefer the Django style with its famous
urls.py or Flask with its shiny route decorators.
The aiohttp server documentation uses both ways in code snippets to emphasize their equivalence; switching from one style to
the other is trivial.
See also:
Graceful shutdown section explains what run_app() does and how to implement complex server initialization/finalization from scratch.
Application runners for handling more complex cases, such as asynchronous web application serving and multiple-host support.
aiohttp.web implements a basic CLI for quickly serving an Application in development over TCP/IP:
package.module:init_func should be an importable callable that accepts a list of any non-parsed command-
line arguments and returns an Application instance after setting it up:
def init_func(argv):
    app = web.Application()
    app.router.add_get("/", index_handler)
    return app
Handler
A request handler must be a coroutine that accepts a Request instance as its only argument and returns a
StreamResponse derived (e.g. Response) instance:
Handlers are set up to handle requests by registering them with Application.add_routes() on a particular
route (HTTP method and path pair) using helpers like get() and post():
app.add_routes([web.get('/', handler),
web.post('/post', post_handler),
web.put('/put', put_handler)])
routes = web.RouteTableDef()

@routes.get('/')
async def get_handler(request):
    ...

@routes.post('/post')
async def post_handler(request):
    ...

@routes.put('/put')
async def put_handler(request):
    ...

app.add_routes(routes)
Wildcard HTTP method is also supported by route() or RouteTableDef.route(), allowing a handler to serve
incoming requests on a path having any HTTP method:
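For instance, a handler registered with the wildcard method looks like this (a minimal sketch; all_handler is a placeholder name):

```python
from aiohttp import web

async def all_handler(request):
    # Any verb (GET, POST, PUT, ...) on /path lands here.
    return web.Response(text=request.method)

app = web.Application()
app.add_routes([web.route('*', '/path', all_handler)])
```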
The HTTP method can be queried later in the request handler using the Request.method property.
By default endpoints added with GET method will accept HEAD requests and return the same response headers as they
would for a GET request. You can also deny HEAD requests on a route:
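This is done with the allow_head argument of get(); a minimal sketch:

```python
from aiohttp import web

async def handler(request):
    return web.Response(text="Hello")

app = web.Application()
# HEAD requests to '/' now get 405 Method Not Allowed.
app.add_routes([web.get('/', handler, allow_head=False)])
```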
Here handler won’t be called on HEAD request and the server will respond with 405: Method Not Allowed.
app.add_routes([web.get('/path1', get_1),
                web.post('/path1', post_1),
                web.get('/path2', get_2),
                web.post('/path2', post_2)])
and:
app.add_routes([web.get('/path1', get_1),
                web.get('/path2', get_2),
                web.post('/path2', post_2),
                web.post('/path1', post_1)])
Variable Resources
A resource may also have a variable path. For instance, a resource with the path '/a/{name}/c' would match all
incoming requests with paths such as '/a/b/c', '/a/1/c', and '/a/etc/c'.
A variable part is specified in the form {identifier}, where the identifier can be used later in a request
handler to access the matched value for that part. This is done by looking up the identifier in the
Request.match_info mapping:
@routes.get('/{name}')
async def variable_handler(request):
    return web.Response(
        text="Hello, {}".format(request.match_info['name']))
web.get(r'/{name:\d+}', handler)
@routes.get('/root', name='root')
async def handler(request):
    ...
Which can then be used to access and build a URL for that resource later (e.g. in a request handler):
app.router.add_resource(r'/{user}/info', name='user-info')
In this case you can also pass in the parts of the route:
url = request.app.router['user-info'].url_for(user='john_doe')
url_with_qs = url.with_query("a=b")
assert url_with_qs == '/john_doe/info?a=b'
app.router.add_get('/', hello)
But sometimes it’s convenient to group logically similar handlers into a Python class.
Since aiohttp.web does not dictate any implementation details, application developers can organize handlers in
classes if they so wish:
class Handler:

    async def handle_intro(self, request):
        ...

    async def handle_greeting(self, request):
        ...

handler = Handler()
app.add_routes([web.get('/intro', handler.handle_intro),
                web.get('/greet/{name}', handler.handle_greeting)])
class MyView(web.View):
    async def get(self):
        return await get_resp(self.request)
Handlers should be coroutines accepting self only and returning a response object, just like a regular web handler. The request
object can be retrieved via the View.request property.
After implementing the view, it (MyView from the example above) should be registered in the application's router:
web.view('/path/to', MyView)
or:
@routes.view('/path/to')
class MyView(web.View):
    ...
The example will process GET and POST requests for /path/to but will raise a 405 Method Not Allowed exception for unimple-
mented HTTP methods.
Resource Views
All registered resources in a router can be viewed using the UrlDispatcher.resources() method:
A subset of the resources that were registered with a name can be viewed using the
UrlDispatcher.named_resources() method:
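A short sketch of both calls (the 'index' route name is illustrative):

```python
from aiohttp import web

async def handler(request):
    return web.Response()

app = web.Application()
app.router.add_get('/', handler, name='index')

# All resources, named or not:
for resource in app.router.resources():
    print(resource)

# Only named ones, as a name -> resource mapping:
for name, resource in app.router.named_resources().items():
    print(name, resource)
```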
Code examples shown above use imperative style for adding new routes: they call app.router.add_get(...)
etc.
There are two alternatives: route tables and route decorators.
Route tables look like Django way:
app.router.add_routes([web.get('/get', handle_get),
                       web.post('/post', handle_post)])
The snippet calls add_routes() to register a list of route definitions (aiohttp.web.RouteDef instances)
created by aiohttp.web.get() or aiohttp.web.post() functions.
See also:
RouteDef and StaticDef reference.
Route decorators are closer to Flask approach:
routes = web.RouteTableDef()

@routes.get('/get')
async def handle_get(request):
    ...

@routes.post('/post')
async def handle_post(request):
    ...

app.router.add_routes(routes)
routes = web.RouteTableDef()

@routes.view("/view")
class MyView(web.View):
    async def get(self):
        ...

app.router.add_routes(routes)
JSON Response
Returning JSON data in a response is a common case, so aiohttp.web provides a shortcut for it –
aiohttp.web.json_response():
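A minimal sketch of the shortcut in a handler:

```python
from aiohttp import web

async def handler(request):
    data = {'some': 'data'}
    # Serializes the dict and sets Content-Type: application/json.
    return web.json_response(data)
```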
The shortcut method returns aiohttp.web.Response instance so you can for example set cookies before return-
ing it from handler.
User Sessions
Often you need a container for storing user data across requests. The concept is usually called a session.
aiohttp.web has no built-in concept of a session, however, there is a third-party library, aiohttp_session,
that adds session support:
import asyncio
import time
import base64
from cryptography import fernet
from aiohttp import web
from aiohttp_session import setup, get_session, session_middleware
from aiohttp_session.cookie_storage import EncryptedCookieStorage
web.run_app(make_app())
HTTP Forms
<label for="login">Login</label>
<input id="login" name="login" type="text" value="" autofocus/>
<label for="password">Password</label>
<input id="password" name="password" type="password" value=""/>
File Uploads
aiohttp.web has built-in support for handling files uploaded from the browser.
First, make sure that the HTML <form> element has its enctype attribute set to enctype="multipart/form-data". As an example, here is a form that accepts an MP3 file:
<form action="/store/mp3" method="post" accept-charset="utf-8"
enctype="multipart/form-data">
<label for="mp3">Mp3</label>
<input id="mp3" name="mp3" type="file" value=""/>
Then, in the request handler you can access the file input field as a FileField instance. FileField is simply a
container for the file as well as some of its metadata:
async def store_mp3_handler(request):
    # WARNING: don't do that if you plan to receive large files!
    data = await request.post()

    mp3 = data['mp3']

    # .file contains the actual file data that needs to be stored somewhere.
    mp3_file = data['mp3'].file

    content = mp3_file.read()

    return web.Response(body=content,
                        headers=MultiDict(
                            {'CONTENT-DISPOSITION': mp3_file}))
You might have noticed a big warning in the example above. The general issue is that Request.post() reads the
whole payload in memory, resulting in possible OOM (Out Of Memory) errors. To avoid this, for multipart uploads,
you should use Request.multipart() which returns a multipart reader:
async def store_mp3_handler(request):
    reader = await request.multipart()

    # reader.next() returns the first field (None if exhausted);
    # don't forget to validate your inputs.
    field = await reader.next()
    assert field.name == 'mp3'

    # You cannot rely on Content-Length if the transfer is chunked.
    size = 0
    while True:
        chunk = await field.read_chunk()  # 8192 bytes by default
        if not chunk:
            break
        size += len(chunk)

    return web.Response(text='{} bytes successfully received'.format(size))
WebSockets
async def websocket_handler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    async for msg in ws:
        ...  # handle incoming messages

    return ws
Redirects
To redirect a user to another endpoint, raise HTTPFound with an absolute URL, relative URL or view name (the
argument from the router):
raise web.HTTPFound('/redirect')
router.add_get('/handler', handler)
router.add_get('/login', login_handler, name='login')
@aiohttp_jinja2.template('login.html')
async def login(request):
    if request.method == 'POST':
        form = await request.post()
        error = validate_login(form)
        if error:
            return {'error': error}
        else:
            # login form is valid
            location = request.app.router['index'].url_for()
            raise web.HTTPFound(location=location)
    return {}
Exceptions
Warning: Returning HTTPException or its subclasses is deprecated and will be removed in subsequent aiohttp
versions.
Each exception class has a status code according to RFC 2068: codes in the 100-300 range are not really errors; 400s are
client errors, and 500s are server errors.
HTTP Exception hierarchy chart:
Exception
  HTTPException
    HTTPSuccessful
      * 200 - HTTPOk
      * 201 - HTTPCreated
      * 202 - HTTPAccepted
      * 203 - HTTPNonAuthoritativeInformation
      * 204 - HTTPNoContent
      * 205 - HTTPResetContent
      * 206 - HTTPPartialContent
    HTTPRedirection
If not directly specified, headers will be added to the default response headers.
Classes HTTPMultipleChoices, HTTPMovedPermanently, HTTPFound, HTTPSeeOther,
HTTPUseProxy, HTTPTemporaryRedirect have the following constructor signature:
HTTPMethodNotAllowed(method, allowed_methods, *,
                     headers=None, reason=None,
                     body=None, text=None, content_type=None)
Unicode support
Peer disconnection
When a client peer is gone, a subsequent read or write raises OSError or a more specific exception such as
ConnectionResetError.
The reason for the disconnection varies; it can be a network issue, or the peer closing the socket explicitly without
reading the whole server response.
aiohttp handles disconnection properly but you can handle it explicitly, e.g.:
run_app() accepts either an application instance or a coroutine for making an application. The coroutine-based
approach allows performing async IO before making an app:
web.run_app(app_factory())
Gunicorn worker supports a factory as well. For Gunicorn the factory should accept zero parameters:
Start gunicorn:
Sometimes you need to register handlers on more complex criteria than simply an HTTP method and path pair.
Although UrlDispatcher does not support any extra criteria, routing based on custom conditions can be accom-
plished by implementing a second layer of routing in your application.
The following example shows custom routing based on the HTTP Accept header:
class AcceptChooser:

    def __init__(self):
        self._accepts = {}

    def reg_acceptor(self, accept, handler):
        self._accepts[accept] = handler

    async def do_route(self, request):
        for accept in request.headers.getall('ACCEPT', []):
            acceptor = self._accepts.get(accept)
            if acceptor is not None:
                return await acceptor(request)
        raise web.HTTPNotAcceptable()

chooser = AcceptChooser()
app.add_routes([web.get('/', chooser.do_route)])

chooser.reg_acceptor('application/json', handle_json)
chooser.reg_acceptor('application/xml', handle_xml)
The best way to handle static files (images, JavaScript, CSS files, etc.) is by using a reverse proxy like nginx, or CDN
services.
But during development it's very convenient to let the aiohttp server itself handle static files.
To do it, just register a new static route with RouteTableDef.static() or static():
app.add_routes([web.static('/prefix', path_to_static_folder)])
routes.static('/prefix', path_to_static_folder)
When a directory is accessed within a static route, the server responds to the client with HTTP/403 Forbidden
by default. Displaying a folder index instead can be enabled by setting the show_index parameter to True:
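A sketch (path_to_static_folder is a placeholder for your real static directory):

```python
from aiohttp import web

path_to_static_folder = '.'  # placeholder; point at your static dir

app = web.Application()
# Directory requests under /prefix now render a folder index.
app.add_routes([web.static('/prefix', path_to_static_folder,
                           show_index=True)])
```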
When a symlink inside the static directory is accessed, the server responds to the client with HTTP/404 Not Found
by default. To allow the server to follow symlinks, the follow_symlinks parameter should be set to True:
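A sketch (path_to_static_folder is again a placeholder):

```python
from aiohttp import web

path_to_static_folder = '.'  # placeholder; point at your static dir

app = web.Application()
# Symlinks inside the directory are now followed.
app.add_routes([web.static('/prefix', path_to_static_folder,
                           follow_symlinks=True)])
```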
When you want to enable cache busting, the append_version parameter can be set to True.
Cache busting is the process of appending some form of file version hash to the filenames of resources like JavaScript
and CSS files. The performance advantage of doing this is that we can tell the browser to cache these files indefinitely
without worrying about the client not getting the latest version when the file changes:
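A sketch of the registration (path_to_static_folder is a placeholder):

```python
from aiohttp import web

path_to_static_folder = '.'  # placeholder; point at your static dir

app = web.Application()
# URLs built via url_for() will carry a version-hash query parameter.
app.add_routes([web.static('/prefix', path_to_static_folder,
                           append_version=True)])
```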
Template Rendering
app = web.Application()
aiohttp_jinja2.setup(app,
loader=jinja2.FileSystemLoader('/path/to/templates/folder'))
After that you may use the template engine in your handlers. The most convenient way is to simply wrap your handlers
with the aiohttp_jinja2.template() decorator:
@aiohttp_jinja2.template('tmpl.jinja2')
async def handler(request):
    return {'name': 'Andrew', 'surname': 'Svetlov'}
If you prefer the Mako template engine, please take a look at the aiohttp_mako library.
@routes.get('/path')
@aiohttp_jinja2.template('tmpl.jinja2')
async def handler(request):
    return {'name': 'Andrew', 'surname': 'Svetlov'}
Reading from the WebSocket (await ws.receive()) must only be done inside the request handler task; however,
writing (ws.send_str(...)) to the WebSocket, closing (await ws.close()) and canceling the handler task
may be delegated to other tasks. See also FAQ section.
aiohttp.web creates an implicit asyncio.Task for handling every incoming request.
Note: While aiohttp.web itself only supports WebSockets without downgrading to LONG-POLLING, etc., our
team supports SockJS, an aiohttp-based library for implementing SockJS-compatible server code.
Warning: Parallel reads from the websocket are forbidden; there is no possibility to call
WebSocketResponse.receive() from two tasks.
See the FAQ section for instructions on how to solve the problem.
aiohttp.web discourages the use of global variables, aka singletons. Every variable should have its own context
that is not global.
So, Application and Request support a collections.abc.MutableMapping interface (i.e. they are
dict-like objects), allowing them to be used as data stores.
Application’s config
For storing global-like variables, feel free to save them in an Application instance:
app['my_private_key'] = data
In the case of nested applications the desired lookup strategy could be the following:
1. Search for the key in the current nested application.
2. If the key is not found, continue searching in the parent application(s).
For this, please use the Request.config_dict read-only property:
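A sketch of the lookup from inside a sub-application handler ('my_private_key' matches the earlier storage example):

```python
from aiohttp import web

async def handler(request):
    # config_dict walks from the current (nested) app up through its
    # parents, so a key stored on the root app is found here too.
    value = request.config_dict['my_private_key']
    return web.Response(text=str(value))

app = web.Application()
app['my_private_key'] = "data"

admin = web.Application()
admin.add_routes([web.get('/', handler)])
app.add_subapp('/admin/', admin)
```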
Request’s storage
Variables that are only needed for the lifetime of a Request, can be stored in a Request:
async def handler(request):
    request['my_private_key'] = "data"
    ...
This is mostly useful for Middlewares and Signals handlers to store data for further processing by the next handlers in
the chain.
Response’s storage
Naming hint
To avoid clashing with other aiohttp users and third-party libraries, please choose a unique key name for storing data.
If your code is published on PyPI, then the project name is most likely unique and safe to use as the key. Otherwise,
something based on your company name/url would be satisfactory (i.e. org.company.app).
ContextVars support
Starting from Python 3.7, asyncio has Context Variables as a context-local storage (a generalization of the thread-
local concept that also works with asyncio tasks).
aiohttp server supports it in the following way:
• A server inherits the current task’s context used when creating it. aiohttp.web.run_app() runs a task for
handling all underlying jobs running the app, but alternatively Application runners can be used.
• Application initialization / finalization events (Application.cleanup_ctx, Application.on_startup and
Application.on_shutdown, Application.on_cleanup) are executed inside the same context.
E.g. all context modifications made on application startup are visible on teardown.
• On every request handling, aiohttp creates a context copy. The web handler sees all the variables installed at the
initialization stage, but context modifications made by a handler or middleware are invisible to other HTTP request
handling calls.
An example of context vars usage:
from contextvars import ContextVar

from aiohttp import web

VAR = ContextVar('VAR', default='default')

async def on_startup(app):
    print('on_startup', VAR.get())
    VAR.set('on_startup')

async def on_cleanup(app):
    print('on_cleanup', VAR.get())
    VAR.set('on_cleanup')

async def init():
    print('init', VAR.get())
    VAR.set('init')
    app = web.Application()
    app.on_startup.append(on_startup)
    app.on_cleanup.append(on_cleanup)
    return app

web.run_app(init())
print('done', VAR.get())
Middlewares
aiohttp.web provides a powerful mechanism for customizing request handlers via middlewares.
A middleware is a coroutine that can modify either the request or response. For example, here’s a simple middleware
which appends ' wink' to the response:
@middleware
async def middleware(request, handler):
    resp = await handler(request)
    resp.text = resp.text + ' wink'
    return resp
Every middleware should accept two parameters, a request instance and a handler, and return the response
or raise an exception. If the exception is not an instance of HTTPException it is converted to 500
HTTPInternalServerError after processing the middlewares chain.
When creating an Application, these middlewares are passed to the keyword-only middlewares parameter:
app = web.Application(middlewares=[middleware_1,
middleware_2])
Internally, a single request handler is constructed by applying the middleware chain to the original handler in reverse
order, and it is called by the RequestHandler as a regular handler.
Since middlewares are themselves coroutines, they may perform extra await calls while handling a request, e.g. to
query a database.
Middlewares usually call the handler, but they may choose to ignore it, e.g. displaying 403 Forbidden page or raising
HTTPForbidden exception if the user does not have permissions to access the underlying resource. They may also
render errors raised by the handler, perform some pre- or post-processing like handling CORS and so on.
The following code demonstrates middlewares execution order:
from aiohttp import web

@web.middleware
async def middleware1(request, handler):
    print('Middleware 1 called')
    response = await handler(request)
    print('Middleware 1 finished')
    return response

@web.middleware
async def middleware2(request, handler):
    print('Middleware 2 called')
    response = await handler(request)
    print('Middleware 2 finished')
    return response
Produced output:

Middleware 1 called
Middleware 2 called
Handler function called
Middleware 2 finished
Middleware 1 finished
Example
A common use of middlewares is to implement custom error pages. The following example will render 404 errors
using a JSON response, as might be appropriate for a JSON REST service:
@web.middleware
async def error_middleware(request, handler):
    try:
        response = await handler(request)
        if response.status != 404:
            return response
        message = response.message
    except web.HTTPException as ex:
        if ex.status != 404:
            raise
        message = ex.reason
    return web.json_response({'error': message})

app = web.Application(middlewares=[error_middleware])
Middleware Factory
A middleware factory is a function that creates a middleware with passed arguments. For example, here’s a trivial
middleware factory:
def middleware_factory(text):
    @middleware
    async def sample_middleware(request, handler):
        resp = await handler(request)
        resp.text = resp.text + text
        return resp
    return sample_middleware
Remember that, contrary to regular middlewares, you need the result of a middleware factory, not the factory function itself. So
when passing a middleware factory to an app you actually need to call it:
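A sketch of the registration, reusing the factory from above (the ' wink' argument is illustrative):

```python
from aiohttp import web
from aiohttp.web import middleware

def middleware_factory(text):
    @middleware
    async def sample_middleware(request, handler):
        resp = await handler(request)
        resp.text = resp.text + text
        return resp
    return sample_middleware

# Note the call: the factory's *result* is passed, not the factory.
app = web.Application(middlewares=[middleware_factory(' wink')])
```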
Signals
Although middlewares can customize request handlers before or after a Response has been prepared, they can’t
customize a Response while it’s being prepared. For this aiohttp.web provides signals.
For example, a middleware can only change HTTP headers for unprepared responses (see StreamResponse.
prepare()), but sometimes we need a hook for changing HTTP headers for streamed responses and WebSockets.
This can be accomplished by subscribing to the Application.on_response_prepare signal, which is called
after default headers have been computed and directly before headers are sent:
async def on_prepare(request, response):
    response.headers['My-Header'] = 'value'

app.on_response_prepare.append(on_prepare)
app.on_startup.append(create_aiopg)
app.on_cleanup.append(dispose_aiopg)
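A sketch of such a startup/cleanup pair; the resource here is a stand-in object, where real code would create e.g. an aiopg engine:

```python
from aiohttp import web

async def create_engine(app):
    app['engine'] = object()  # acquire the real resource here

async def dispose_engine(app):
    app.pop('engine', None)   # release it here

app = web.Application()
app.on_startup.append(create_engine)
app.on_cleanup.append(dispose_engine)
```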
Signal handlers should not return a value but may modify incoming mutable parameters.
Signal handlers will be run sequentially, in the order they were added. All handlers must be asynchronous since aiohttp
3.0.
Cleanup Context
The bare Application.on_startup / Application.on_cleanup pair still has a pitfall: signal handlers are
independent of each other.
E.g. say we have [create_pg, create_redis] in the startup signal and [dispose_pg, dispose_redis] in
cleanup.
If, for example, the create_pg(app) call fails, create_redis(app) is not called. But on application cleanup
both dispose_pg(app) and dispose_redis(app) are still called: the cleanup signal has no knowledge of
startup/cleanup pairs and their execution state.
The solution is Application.cleanup_ctx usage:
app.cleanup_ctx.append(pg_engine)
The attribute is a list of asynchronous generators: the code before the yield is the initialization stage (called on startup), and the
code after the yield is executed on cleanup. The generator must have only one yield.
aiohttp guarantees that cleanup code is called if and only if startup code finished successfully.
Asynchronous generators are supported natively on Python 3.6+; on Python 3.5 please use the async_generator library.
New in version 3.1.
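A sketch of such a generator; pg_engine matches the snippet above, but the resource is a stand-in object rather than a real database engine:

```python
from aiohttp import web

async def pg_engine(app):
    app['pg_engine'] = object()   # before yield: startup
    yield
    app.pop('pg_engine', None)    # after yield: cleanup

app = web.Application()
app.cleanup_ctx.append(pg_engine)
```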
Nested applications
Sub-applications are designed for solving the problem of a big monolithic code base. Let's assume we have a project
with its own business logic, plus tools like an administration panel and a debug toolbar.
The administration panel is a separate application by its own nature, but all its URLs are served under a prefix like /admin.
Thus we'll create a totally separate application named admin and connect it to the main app with a prefix via
Application.add_subapp():
admin = web.Application()
# setup admin routes, signals and middlewares
app.add_subapp('/admin/', admin)
admin = web.Application()
admin.add_routes([web.get('/resource', handler, name='name')])
url = admin.router['name'].url_for()
admin = web.Application()
admin.add_routes([web.get('/resource', handler, name='name')])
app.add_subapp('/admin/', admin)
app['admin'] = admin
Expect Header
aiohttp.web supports the Expect header. By default it sends an HTTP/1.1 100 Continue line to the client, or raises
HTTPExpectationFailed if the header value is not equal to "100-continue". It is possible to specify a custom Expect
header handler on a per-route basis. This handler gets called after receiving all headers, if the Expect header exists in the request,
and before processing the application's Middlewares and route handler. The handler can return None, in which case the request
processing continues as usual. If the handler returns an instance of class StreamResponse, the request handler uses it as the
response. The handler can also raise a subclass of HTTPException; in this case all further processing stops
and the client receives the appropriate HTTP response.
Note: A server that does not understand or is unable to comply with any of the expectation values in the Expect field
of a request MUST respond with appropriate error status. The server MUST respond with a 417 (Expectation Failed)
status if any of the expectations cannot be met or, if there are other problems with the request, some other 4xx status.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.20
If all checks pass, the custom handler must write a HTTP/1.1 100 Continue status code before returning.
The following example shows how to setup a custom handler for the Expect header:
expect = request.headers.get('EXPECT')
if expect != '100-continue':
    raise HTTPExpectationFailed(text="Unknown Expect: %s" % expect)
if request.headers.get('AUTHORIZATION') is None:
    raise HTTPForbidden()
Application runners
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, 'localhost', 8080)
await site.start()

while True:
    await asyncio.sleep(3600)  # sleep forever
Graceful shutdown
Stopping an aiohttp web server by just closing all connections is not always satisfactory.
The problem: if the application supports websockets or data streaming, it most likely has open connections at server
shutdown time.
The library has no knowledge of how to close them gracefully, but the developer can help by registering an
Application.on_shutdown signal handler; the signal is called on web server closing.
The developer should keep a list of opened connections (the Application instance is a good candidate for storing it).
The following snippet shows an example websocket handler:
from aiohttp import web, WSCloseCode
import weakref

app = web.Application()
app['websockets'] = weakref.WeakSet()

async def websocket_handler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    request.app['websockets'].add(ws)
    try:
        async for msg in ws:
            ...
    finally:
        request.app['websockets'].discard(ws)
    return ws

async def on_shutdown(app):
    for ws in set(app['websockets']):
        await ws.close(code=WSCloseCode.GOING_AWAY,
                       message='Server shutdown')

app.on_shutdown.append(on_shutdown)
Background tasks
Sometimes there's a need to perform some asynchronous operations just after application start-up.
Moreover, in some sophisticated systems there may be a need to run background tasks in the event loop along
with the application's request handler, such as listening to a message queue or other network message/event sources
(e.g. ZeroMQ, Redis Pub/Sub, AMQP, etc.) in order to react to received messages within the application.
For example, a background task could listen to ZeroMQ on a zmq.SUB socket, then process and forward retrieved
messages to clients connected via WebSocket that are stored somewhere in the application (e.g. in the
application['websockets'] list).
To run such short and long running background tasks, aiohttp provides the ability to register
Application.on_startup signal handler(s) that will run along with the application's request handler.
For example, suppose we need to run one quick task and two long running tasks that will live as long as the application is alive.
The appropriate background tasks can be registered as Application.on_startup signal handlers as shown
in the example below:
app = web.Application()
app.on_startup.append(start_background_tasks)
app.on_cleanup.append(cleanup_background_tasks)
web.run_app(app)
The task listen_to_redis() will run forever. To shut it down correctly, the Application.on_cleanup signal
handler may be used to send a cancellation to it.
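The overall pattern can be sketched like this; listen_to_redis() here is a stand-in loop rather than a real Redis subscription:

```python
import asyncio
from aiohttp import web

async def listen_to_redis(app):
    try:
        while True:
            await asyncio.sleep(1)  # replace with real message reads
    except asyncio.CancelledError:
        pass  # graceful exit when cancelled on cleanup

async def start_background_tasks(app):
    app['redis_listener'] = asyncio.get_event_loop().create_task(
        listen_to_redis(app))

async def cleanup_background_tasks(app):
    app['redis_listener'].cancel()
    await app['redis_listener']

app = web.Application()
app.on_startup.append(start_background_tasks)
app.on_cleanup.append(cleanup_background_tasks)
```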
Pages like 404 Not Found and 500 Internal Error can be handled by custom middleware; see the polls demo for an example.
As discussed in Server Deployment, the preferable way for production usage is deploying the aiohttp web server behind a
reverse proxy server like nginx.
In this setup, properties like BaseRequest.scheme, BaseRequest.host and BaseRequest.remote are
incorrect.
The real values should be given by the proxy server, usually via either the Forwarded header or the old-fashioned
X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Proto HTTP headers.
aiohttp does not take forwarded headers into account by default because doing so would produce a security issue: an HTTP client
might add these headers too, pushing non-trusted data values.
That's why the aiohttp server should set up forwarded headers in custom middleware, in tight conjunction with the reverse
proxy configuration.
For changing BaseRequest.scheme, BaseRequest.host and BaseRequest.remote, the middleware
can use BaseRequest.clone().
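A minimal sketch of such a middleware, assuming the reverse proxy is trusted to set X-Forwarded-*; untrusted deployments should use aiohttp-remotes instead:

```python
from aiohttp import web

@web.middleware
async def forwarded_middleware(request, handler):
    proto = request.headers.get('X-Forwarded-Proto')
    host = request.headers.get('X-Forwarded-Host')
    if proto or host:
        # clone() returns a new request with the overridden values.
        request = request.clone(scheme=proto or request.scheme,
                                host=host or request.host)
    return await handler(request)

app = web.Application(middlewares=[forwarded_middleware])
```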
See also:
https://github.com/aio-libs/aiohttp-remotes provides secure helpers for modifying scheme, host and remote attributes
according to Forwarded and X-Forwarded-* HTTP headers.
Swagger support
aiohttp-swagger is a library that allows adding Swagger documentation and embedding the Swagger-UI into your
aiohttp.web project.
CORS support
aiohttp.web itself does not support Cross-Origin Resource Sharing, but there is an aiohttp plugin for it: aio-
http_cors.
Debug Toolbar
aiohttp-debugtoolbar is a very useful library that provides a debugging toolbar while you’re developing an aiohttp.
web application.
Install it with pip:
import aiohttp_debugtoolbar
from aiohttp_debugtoolbar import toolbar_middleware_factory
app = web.Application()
aiohttp_debugtoolbar.setup(app)
Dev Tools
• runserver provides a development server with auto-reload, live-reload, static file serving and aiohttp-
debugtoolbar integration.
• start is a cookiecutter command which does the donkey work of creating new aiohttp.web applications.
Documentation and a complete tutorial for creating and running an app locally are available at aiohttp-devtools.
Abstract
Sometimes a user doesn't need the high-level concepts introduced in Server: applications, routers, middlewares and signals.
All that is needed is an asynchronous callable which accepts a request and returns a response object.
This is done by introducing the aiohttp.web.Server class, which serves a protocol-factory role for
asyncio.AbstractEventLoop.create_server() and bridges the data stream to a web handler, sending the result back.
A low-level web handler should accept a single BaseRequest parameter and perform one of the following actions:
1. Return a Response with the whole HTTP body stored in memory.
import asyncio
from aiohttp import web

async def handler(request):
    return web.Response(text="OK")

async def main():
    server = web.Server(handler)
    runner = web.ServerRunner(server)
    await runner.setup()
    site = web.TCPSite(runner, 'localhost', 8080)
    await site.start()

    print("======= Serving on http://127.0.0.1:8080/ ======")

    # pause here for very long time by serving HTTP requests and
    # waiting for keyboard interruption
    await asyncio.sleep(100*3600)

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
except KeyboardInterrupt:
    pass
loop.close()
In the snippet we have a handler which returns a regular Response with "OK" in the body.
This handler is processed by the server (Server, which acts as a protocol factory). Network communication is created
by the runners API to serve http://127.0.0.1:8080/.
The handler should process every request for every path and method, e.g. GET, POST, WebSocket.
The example is very basic: it always returns a 200 OK response; real-life code is usually much more complex.
The Request object contains all the information about an incoming HTTP request.
BaseRequest is used for Low-Level Servers (which have no applications, routers, signals or middlewares).
Request has Request.app and Request.match_info attributes.
BaseRequest / Request are dict-like objects, allowing them to be used for sharing data among Middlewares
and Signals handlers.
class aiohttp.web.BaseRequest
version
HTTP version of request, Read-only property.
Returns aiohttp.protocol.HttpVersion instance.
method
HTTP method, read-only property.
The value is upper-cased str like "GET", "POST", "PUT" etc.
url
A URL instance with absolute URL to resource (scheme, host and port are included).
Note: In case of malformed request (e.g. without "HOST" HTTP header) the absolute url may be
unavailable.
rel_url
A URL instance with relative URL to resource (contains path, query and fragment parts only, scheme, host
and port are excluded).
The property is equal to .url.relative() but is always present.
See also:
A note from url.
scheme
A string representing the scheme of the request.
The scheme is 'https' if transport for request handling is SSL, 'http' otherwise.
The value could be overridden by clone().
Read-only str property.
Changed in version 2.3: Forwarded and X-Forwarded-Proto are not used anymore.
Call .clone(scheme=new_scheme) for setting up the value explicitly.
See also:
Deploying behind a Proxy
secure
Shorthand for request.url.scheme == 'https'
Read-only bool property.
See also:
scheme
forwarded
A tuple containing all parsed Forwarded header(s).
Makes an effort to parse Forwarded headers as specified by RFC 7239:
• It adds one (immutable) dictionary per Forwarded field-value, i.e. per proxy. The first element
corresponds to the data in the Forwarded field-value added by the first proxy encountered by
the client. Each subsequent item corresponds to those added by later proxies.
• It checks that every value has valid syntax in general as specified in RFC 7239#section-4: either a
token or a quoted-string.
• It un-escapes quoted-pairs.
• It does NOT validate ‘by’ and ‘for’ contents as specified in RFC 7239#section-6.
• It does NOT validate host contents (Host ABNF).
• It does NOT validate proto contents for valid URI scheme names.
Returns a tuple containing one or more MappingProxy objects.
See also:
scheme
See also:
host
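As an illustration of the parsing rules above, here is a much-simplified sketch that splits a Forwarded header into one mapping per proxy. The helper is hypothetical; aiohttp's real parser additionally enforces RFC 7239 token/quoted-string syntax and un-escapes quoted-pairs:

```python
# Simplified Forwarded parser: one dict per field-value, i.e. per proxy.
def parse_forwarded(value):
    elements = []
    for element in value.split(','):        # one element per proxy
        pairs = {}
        for pair in element.split(';'):     # name=value pairs within one element
            name, _, val = pair.strip().partition('=')
            pairs[name.lower()] = val.strip('"')
        elements.append(pairs)
    return tuple(elements)

fwd = parse_forwarded('for=192.0.2.60;proto=http, for=198.51.100.17')
assert fwd[0]['proto'] == 'http'
assert fwd[1]['for'] == '198.51.100.17'
```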
host
Host name of the request, resolved in this order:
• Overridden value by clone() call.
• Host HTTP header
• socket.getfqdn()
Read-only str property.
Changed in version 2.3: Forwarded and X-Forwarded-Host are not used anymore.
Call .clone(host=new_host) for setting up the value explicitly.
See also:
Deploying behind a Proxy
remote
Originating IP address of a client-initiated HTTP request.
The IP is resolved in this order:
• Overridden value by clone() call.
• Peer name of opened socket.
Read-only str property.
Call .clone(remote=new_remote) for setting up the value explicitly.
New in version 2.3.
See also:
Deploying behind a Proxy
path_qs
The URL including PATH_INFO and the query string. e.g., /app/blog?id=10
Read-only str property.
path
The URL including PATH_INFO without the host or scheme. e.g., /app/blog. The path is URL-decoded. For raw path info see raw_path.
transport
A transport used to process the request, read-only property.
The property can be used, for example, for getting the IP address of the client's peer:
peername = request.transport.get_extra_info('peername')
if peername is not None:
    host, port = peername
loop
An event loop instance used by HTTP request handling.
Read-only asyncio.AbstractEventLoop property.
Deprecated since version 3.5.
cookies
A multidict of all request’s cookies.
Read-only MultiDictProxy lazy property.
content
A StreamReader instance, input stream for reading request’s BODY.
Read-only property.
body_exists
Return True if request has HTTP BODY, False otherwise.
Read-only bool property.
charset
Read-only property that specifies the encoding for the request’s BODY.
The value is parsed from the Content-Type HTTP header.
Returns str like 'utf-8' or None if Content-Type has no charset information.
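The parsing this property performs can be approximated with the standard library, e.g. extracting the charset parameter from a Content-Type value (a sketch, not aiohttp's actual implementation):

```python
from email.message import Message

# Parse the charset parameter out of a Content-Type header value,
# roughly what the charset property does with the request headers.
msg = Message()
msg['Content-Type'] = 'text/html; charset=UTF-8'
assert msg.get_content_charset() == 'utf-8'  # normalized to lower case
```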
content_length
Read-only property that returns length of the request’s BODY.
The value is parsed from the Content-Length HTTP header.
Returns int or None if Content-Length is absent.
http_range
Read-only property that returns information about Range HTTP header.
Returns a slice where .start is left inclusive bound, .stop is right exclusive bound and .step is
1.
The property might be used in two manners:
1. Attribute-access style (example assumes that both left and right borders are set, the real logic for
case of open bounds is more complex):
rng = request.http_range
with open(filename, 'rb') as f:
f.seek(rng.start)
return f.read(rng.stop-rng.start)
2. Slice-style:
return buffer[request.http_range]
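A self-contained sketch of the slice semantics, with the slice built by hand instead of parsed from a real Range header:

```python
# The slice is built by hand here; in aiohttp it would come from a
# header such as "Range: bytes=3-7" via request.http_range.
rng = slice(3, 8, 1)        # start inclusive, stop exclusive, step 1
buffer = b"0123456789"
assert buffer[rng] == b"34567"                   # slice-style access
assert buffer[rng.start:rng.stop] == b"34567"    # attribute-access style
```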
if_modified_since
Read-only property that returns the date specified in the If-Modified-Since header.
Returns datetime.datetime or None if If-Modified-Since header is absent or is not a valid HTTP
date.
if_unmodified_since
Read-only property that returns the date specified in the If-Unmodified-Since header.
Returns datetime.datetime or None if If-Unmodified-Since header is absent or is not a valid HTTP
date.
New in version 3.1.
if_range
Read-only property that returns the date specified in the If-Range header.
Returns datetime.datetime or None if If-Range header is absent or is not a valid HTTP date.
New in version 3.1.
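The HTTP-date parsing behind these three properties can be illustrated with the standard library (a sketch; aiohttp's own parser may differ in edge cases):

```python
from email.utils import parsedate_to_datetime

# A valid HTTP date (IMF-fixdate per RFC 7231) parses to a datetime;
# for a malformed value aiohttp returns None instead.
dt = parsedate_to_datetime('Wed, 21 Oct 2015 07:28:00 GMT')
assert (dt.year, dt.month, dt.day) == (2015, 10, 21)
```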
clone(*, method=..., rel_url=..., headers=...)
Clone itself, replacing some attributes.
Creates and returns a new instance of Request object. If no parameters are given, an exact copy is returned.
If a parameter is not passed, it will reuse the one from the current request object.
Parameters
• method (str) – http method
• rel_url – url to use, str or URL
• headers – CIMultiDict or compatible headers container.
Returns a cloned Request instance.
get_extra_info(name, default=None)
Reads extra information from the protocol’s transport. If no value associated with name is found,
default is returned.
Parameters
• name (str) – The key to look up in the transport extra information.
• default – Default value to be used when no value for name is found (default
is None).
New in version 3.7.
coroutine read()
Read request body, returns bytes object with body content.
Note: The method caches the read data internally; a subsequent read() call will return the same value.
coroutine text()
Read request body, decode it using charset encoding or UTF-8 if no encoding was specified in MIME-
type.
Returns str with body content.
Note: The method caches the read data internally; a subsequent text() call will return the same value.
coroutine json(*, loads=json.loads)
Read request body decoded as JSON.
Parameters loads (callable) – any callable that accepts str and returns dict with
parsed JSON (json.loads() by default).
Note: The method caches the read data internally; a subsequent json() call will return the same value.
coroutine multipart()
Returns aiohttp.multipart.MultipartReader which processes incoming multipart request.
The method is just a boilerplate coroutine implemented as:
This method is a coroutine for consistency with the other reader methods.
Warning: The method does not store read data internally. That means once you exhaust the multipart
reader, you cannot get the request payload one more time.
See also:
Working with Multipart
Changed in version 3.4: Dropped reader parameter.
coroutine post()
A coroutine that reads POST parameters from request body.
Returns MultiDictProxy instance filled with parsed data.
If the method is not POST, PUT, PATCH, TRACE or DELETE, or content_type is not empty,
application/x-www-form-urlencoded, or multipart/form-data, it returns an empty multidict.
Note: The method caches the read data internally; a subsequent post() call will return the same value.
coroutine release()
Release request.
Eat unread part of HTTP BODY if present.
Note: User code does not need to call release(); all required work will be processed by aiohttp.web
internal machinery.
class aiohttp.web.Request
A request used for receiving request's information by web handler.
Every handler accepts a request instance as the first positional parameter.
The class is derived from BaseRequest, shares all parent's attributes and methods but has a couple of additional properties:
match_info
Read-only property with AbstractMatchInfo instance for result of route resolving.
Note: Exact type of property depends on used router. If app.router is UrlDispatcher the
property contains UrlMappingMatchInfo instance.
app
An Application instance used to call request handler, Read-only property.
config_dict
An aiohttp.ChainMapProxy instance for mapping all properties from the current application returned by the app property and all its parents.
See also:
Application’s config
New in version 3.2.
Note: You should never create the Request instance manually – aiohttp.web does it for you. But
clone() may be used for creating a modified request copy with changed path, method etc.
Response classes
For now, aiohttp.web has three classes for the HTTP response: StreamResponse, Response and
FileResponse.
Usually you need to use the second one. StreamResponse is intended for streaming data, while Response
contains the HTTP BODY as an attribute and sends its own content as a single piece with the correct Content-Length HTTP
header.
By design, Response is derived from the StreamResponse parent class.
The response supports keep-alive handling out-of-the-box if request supports it.
You can disable keep-alive by force_close() though.
The common case for sending an answer from a web-handler is returning a Response instance:
Response classes are dict-like objects, allowing them to be used for sharing data among Middlewares and Signals
handlers:
resp['key'] = value
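For example (assumes aiohttp is installed; the key name is arbitrary):

```python
from aiohttp import web

# Stash arbitrary data on the response object itself (dict-like),
# e.g. for a later on_response_prepare signal handler to read.
resp = web.Response(text="Hello")
resp['key'] = 'value'
assert resp['key'] == 'value'
assert resp.text == "Hello"
```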
StreamResponse
See also:
compression
chunked
Read-only property, indicates if chunked encoding is on.
Can be enabled by enable_chunked_encoding() call.
See also:
enable_chunked_encoding
enable_chunked_encoding()
Enables chunked encoding for the response. There is no way to disable it afterwards. With chunked
encoding enabled, each write() operation is encoded in a separate chunk.
See also:
chunked
headers
CIMultiDict instance for outgoing HTTP headers.
cookies
An instance of http.cookies.SimpleCookie for outgoing cookies.
Warning: Setting up the Set-Cookie header directly may be overwritten by explicit calls to cookie manipulation.
We encourage using cookies and set_cookie(), del_cookie() for cookie manipulations.
• secure (bool) – attribute (with no value) directs the user agent to use only
(unspecified) secure means to contact the origin server whenever it sends back
this cookie. The user agent (possibly under the user’s control) may determine
what level of security it considers appropriate for “secure” cookies. The secure
should be considered security advice from the server to the user agent, indicating
that it is in the session’s interest to protect the cookie contents. (optional)
• httponly (bool) – True if the cookie is HTTP-only (optional)
• version (int) – a decimal integer, identifies to which version of the state
management specification the cookie conforms. (Optional, version=1 by default)
• samesite (str) – Asserts that a cookie must not be sent with cross-origin re-
quests, providing some protection against cross-site request forgery attacks. Gen-
erally the value should be one of: None, Lax or Strict. (optional)
New in version 3.7.
Warning: In HTTP version 1.1, expires was deprecated and replaced with the easier-to-use
max-age, but Internet Explorer (IE6, IE7, and IE8) does not support max-age.
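The cookies property exposes a standard http.cookies.SimpleCookie, so the attributes above map onto Morsel keys; a standard-library sketch:

```python
from http.cookies import SimpleCookie

# Rough equivalent of what set_cookie() manages underneath.
cookies = SimpleCookie()
cookies['session'] = 'abc123'
cookies['session']['httponly'] = True
cookies['session']['samesite'] = 'Lax'
header = cookies.output(header='Set-Cookie:')
assert 'HttpOnly' in header
assert 'SameSite=Lax' in header
```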
await resp.write(data)
Response
text
Read-write attribute for storing response’s content, represented as string, str.
Setting text also recalculates content_length value and body value
Resetting text (assigning None) sets content_length to None too, dropping Content-Length
HTTP header.
WebSocketResponse
ws = web.WebSocketResponse()
await ws.prepare(request)
coroutine prepare(request)
Starts websocket. After the call you can use websocket methods.
Parameters request (aiohttp.web.Request) – HTTP request object, that the re-
sponse answers.
Raises HTTPException – if websocket handshake has failed.
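A minimal echo-handler sketch using prepare() together with the documented receive/send API (not wired to a running server here):

```python
from aiohttp import WSMsgType, web

async def websocket_handler(request):
    # Upgrade the connection and start the websocket.
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    # Iterate over incoming messages and echo TEXT frames back.
    async for msg in ws:
        if msg.type == WSMsgType.TEXT:
            await ws.send_str("echo: " + msg.data)
    return ws
```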
can_prepare(request)
Performs checks for request data to figure out if websocket can be started on the request.
If the can_prepare() call succeeds then the prepare() call will succeed too.
closed
Read-only property, True if the connection has been closed or is in the process of closing, i.e. a CLOSE message has
been received from the peer.
close_code
Read-only property, close code from peer. It is set to None on opened connection.
ws_protocol
Websocket subprotocol chosen after start() call.
May be None if server and client protocols are not overlapping.
exception()
Returns last occurred exception or None.
coroutine ping(message=b'')
Send PING to peer.
Parameters message – optional payload of ping message, str (converted to UTF-8 en-
coded bytes) or bytes.
Raises RuntimeError – if the connection is not started or is closing.
Changed in version 3.0: The method is converted into coroutine
coroutine pong(message=b'')
Send unsolicited PONG to peer.
Parameters message – optional payload of pong message, str (converted to UTF-8 en-
coded bytes) or bytes.
Raises RuntimeError – if the connection is not started or is closing.
Changed in version 3.0: The method is converted into coroutine
coroutine send_str(data, compress=None)
Send data to peer as TEXT message.
Parameters
• data (str) – data to send.
• compress (int) – sets specific level of compression for single message, None
for not overriding per-socket setting.
Raises
• RuntimeError – if connection is not started or closing
• TypeError – if data is not str
Changed in version 3.0: The method is converted into coroutine, compress parameter added.
coroutine receive_json(*, loads=json.loads, timeout=None)
A coroutine that receives a TEXT message, then decodes it as JSON.
Parameters
• loads (callable) – any callable that accepts str and returns dict with
parsed JSON (json.loads() by default).
• timeout – timeout for receive operation.
The timeout value overrides the response's receive_timeout attribute.
Return dict loaded JSON content
Raises
• TypeError – if message is BINARY.
• ValueError – if message is not valid JSON.
See also:
WebSockets handling
WebSocketReady
class aiohttp.web.WebSocketReady
A named tuple for returning result from WebSocketResponse.can_prepare().
Has bool check implemented, so a WebSocketReady instance can be tested directly in an if statement.
ok
True if websocket connection can be established, False otherwise.
protocol
str representing the selected websocket sub-protocol.
See also:
WebSocketResponse.can_prepare()
json_response
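json_response() builds a Response with a JSON-serialized body and the application/json content type, e.g.:

```python
from aiohttp import web

# Build a JSON response; the payload is serialized and the content
# type is set automatically.
resp = web.json_response({'status': 'ok'})
assert resp.content_type == 'application/json'
assert '"status"' in resp.text
```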
HTTP Exceptions
Errors can also be returned by raising a HTTP exception instance from within the handler.
class aiohttp.web.HTTPException(*, headers=None, reason=None, text=None, content_type=None)
Low-level HTTP failure.
Parameters
• headers (dict or multidict.CIMultiDict) – headers for the response
• reason (str) – reason included in the response
• text (str) – response’s body
• content_type (str) – response’s content type. This is passed through to the
Response initializer.
Sub-classes of HTTPException exist for the standard HTTP response codes as described in Exceptions and
the expected usage is to simply raise the appropriate exception type to respond with a specific HTTP response
code.
Since HTTPException is a sub-class of Response, it contains the methods and properties that allow you to
directly manipulate details of the response.
status_code
HTTP status code for this exception class. This attribute is usually defined at the class level. self.
status_code is passed to the Response initializer.
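For example, raising a subclass from a handler responds with its class-level status code:

```python
from aiohttp import web

# A handler that answers 404 by raising the exception type.
async def handler(request):
    raise web.HTTPNotFound(text="no such page")

# The status code is defined at the class level.
assert web.HTTPNotFound.status_code == 404
```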
Application
app = Application()
app['database'] = await aiopg.create_engine(**db_config)
Although Application is a dict-like object, it can’t be duplicated like one using Application.copy().
class aiohttp.web.Application(*, logger=<default>, router=None, middlewares=(), handler_args=None, client_max_size=1024**2, loop=None, debug=...)
The class inherits dict.
Parameters
• logger – logging.Logger instance for storing application logs.
By default the value is logging.getLogger("aiohttp.web")
• router –
aiohttp.abc.AbstractRouter instance, the system creates
UrlDispatcher by default if router is None.
Deprecated since version 3.3: The custom routers support is deprecated, the parameter
will be removed in 4.0.
• middlewares – list of middleware factories, see Middlewares for details.
• handler_args – dict-like object that overrides keyword arguments of
Application.make_handler()
• client_max_size – client’s maximum size in a request, in bytes. If a POST re-
quest exceeds this value, it raises an HTTPRequestEntityTooLarge exception.
• loop – event loop
Deprecated since version 2.0: The parameter is deprecated. The loop is set during the
freeze stage.
• debug – Switches debug mode.
Deprecated since version 3.5: Use asyncio Debug Mode instead.
router
Read-only property that returns router instance.
logger
logging.Logger instance for storing application logs.
loop
event loop used for processing HTTP requests.
Deprecated since version 3.5.
debug
Boolean value indicating whether the debug mode is turned on or off.
Deprecated since version 3.5: Use asyncio Debug Mode instead.
on_response_prepare
A Signal that is fired near the end of StreamResponse.prepare() with parameters request and
response. It can be used, for example, to add custom headers to each response, or to modify the default
headers computed by the application, directly before sending the headers to the client.
Signal handlers should have the following signature:
on_startup
A Signal that is fired on application start-up.
Subscribers may use the signal to run background tasks in the event loop along with the application’s
request handler just after the application start-up.
Signal handlers should have the following signature:
See also:
Signals.
on_shutdown
A Signal that is fired on application shutdown.
Subscribers may use the signal for gracefully closing long running connections, e.g. websockets and data
streaming.
Signal handlers should have the following signature:
It’s up to the end user to figure out which web-handlers are still alive and how to finish them properly.
We suggest keeping a list of long running handlers in the Application dictionary.
See also:
Graceful shutdown and on_cleanup.
on_cleanup
A Signal that is fired on application cleanup.
Subscribers may use the signal for gracefully closing connections to database server etc.
Signal handlers should have the following signature:
See also:
Signals and on_shutdown.
cleanup_ctx
A list of context generators for startup/cleanup handling.
Signal handlers should have the following signature:
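The cleanup_ctx protocol can be sketched without aiohttp: code before the yield runs at startup, code after it at cleanup (here a plain dict stands in for the Application):

```python
import asyncio

# Hypothetical context generator following the cleanup_ctx shape.
async def resource_ctx(app):
    app['resource'] = 'open'      # startup part
    yield
    app['resource'] = 'closed'    # cleanup part

async def demo():
    app = {}
    gen = resource_ctx(app)
    await gen.__anext__()          # drive the startup part
    assert app['resource'] == 'open'
    try:
        await gen.__anext__()      # drive the cleanup part
    except StopAsyncIteration:
        pass
    return app

app = asyncio.run(demo())
assert app['resource'] == 'closed'
```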
make_handler(loop=None, **kwargs)
Creates an HTTP protocol factory for handling requests. You should pass the result of the method as
protocol_factory to loop.create_server(), e.g.:
loop = asyncio.get_event_loop()
app = Application()
await loop.create_server(app.make_handler(),
                         '0.0.0.0', 8080)
Deprecated since version 3.2: The method is deprecated and will be removed in future aiohttp versions.
Please use Application runners instead.
coroutine startup()
A coroutine that will be called along with the application’s request handler.
The purpose of the method is calling on_startup signal handlers.
coroutine shutdown()
A coroutine that should be called on server stopping but before cleanup().
The purpose of the method is calling on_shutdown signal handlers.
coroutine cleanup()
A coroutine that should be called on server stopping but after shutdown().
The purpose of the method is calling on_cleanup signal handlers.
Note: Application object has router attribute but has no add_route() method. The reason is: we want to
support different router implementations (even maybe not url-matching based but traversal ones).
For that reason we have a very trivial ABC for AbstractRouter: it should have only the
AbstractRouter.resolve() coroutine.
There are no methods for adding routes or route reversing (getting URL by route name). All those are router implementation details (but, sure, you need to deal with those methods after choosing the router for your application).
Server
Router
Warning: Use add_static() for development only. In production, static content should be
processed by web servers like nginx or apache.
Parameters
• prefix (str) – URL path prefix for handled static files
• path – path to the folder in file system that contains handled static files, str or
pathlib.Path.
• name (str) – optional route name.
• expect_handler (coroutine) – optional expect header handler.
• chunk_size (int) – size of a single chunk for file downloading, 256 KB by default.
Increasing the chunk_size parameter to, say, 1 MB may increase file download
speed but consumes more memory.
• show_index (bool) – flag for allowing to show indexes of a directory, by
default it’s not allowed and HTTP/403 will be returned on directory access.
• follow_symlinks (bool) – flag for allowing to follow symlinks from a di-
rectory, by default it’s not allowed and HTTP/404 will be returned on access.
• append_version (bool) – flag for adding a file version (hash) to the URL query
string, this value will be used as default when you call the StaticRoute.
url() and StaticRoute.url_for() methods.
Returns new StaticRoute instance.
coroutine resolve(request)
A coroutine that returns AbstractMatchInfo for request.
The method never raises an exception, but returns an AbstractMatchInfo instance with:
1. http_exception assigned to HTTPException instance.
2. handler which raises HTTPNotFound or HTTPMethodNotAllowed on handler’s execution
if there is no registered route for request.
Middlewares can process those exceptions to render a pretty-looking error page, for example.
Used by internal machinery; end users are unlikely to need to call this method.
Note: The method uses Request.raw_path for pattern matching against registered routes.
resources()
The method returns a view for all registered resources.
The view is an object that allows to:
1. Get the size of the router table:
len(app.router.resources())
2. Iterate over registered resources:
for resource in app.router.resources():
    print(resource)
3. Check if a resource is registered in the router table:
route in app.router.resources()
routes()
The method returns a view for all registered routes.
named_resources()
Returns a dict-like types.MappingProxyType view over all named resources.
The view maps every named resource’s name to the BaseResource instance. It supports the usual
dict-like operations, except for any mutable operations (i.e. it’s read-only):
len(app.router.named_resources())
"name" in app.router.named_resources()
app.router.named_resources()["name"]
Resource
router.add_resource(path).add_route(method, handler)
Resource with a name is called named resource. The main purpose of named resource is constructing URL by route
name for passing it into template engine for example:
AbstractResource
Resource
PlainResource
DynamicResource
StaticResource
class aiohttp.web.AbstractResource
A base class for all resources.
Inherited from collections.abc.Sized and collections.abc.Iterable.
len(resource) returns the number of routes belonging to the resource, for route in resource allows
iterating over these routes.
name
Read-only name of resource or None.
canonical
Read-only canonical path associated with the resource. For example /path/to or /path/{to}
New in version 3.3.
coroutine resolve(request)
Resolve resource by finding appropriate web-handler for (method, path) combination.
Returns
(match_info, allowed_methods) pair.
allowed_methods is a set of HTTP methods accepted by the resource.
match_info is either UrlMappingMatchInfo if request is resolved or None if no
route is found.
get_info()
A resource description, e.g. {'path': '/path/to'} or {'formatter': '/path/{to}',
'pattern': re.compile(r'^/path/(?P<to>[a-zA-Z][_a-zA-Z0-9]+)$')}
url_for(*args, **kwargs)
Construct a URL for the route with additional params.
args and kwargs depend on a parameters list accepted by inherited resource class.
Returns URL – resulting URL instance.
class aiohttp.web.Resource
A base class for new-style resources, inherits AbstractResource.
add_route(method, handler, *, expect_handler=None)
Add a web-handler to resource.
Parameters
• method (str) – HTTP method for route. Should be one of 'GET', 'POST',
'PUT', 'DELETE', 'PATCH', 'HEAD', 'OPTIONS' or '*' for any method.
The parameter is case-insensitive, e.g. you can pass 'get' as well as 'GET'.
The method should be unique for the resource.
• handler (callable) – route handler.
• expect_handler (coroutine) – optional expect header handler.
Returns new ResourceRoute instance.
class aiohttp.web.PlainResource
A resource, inherited from Resource.
The class corresponds to resources with plain-text matching, '/path/to' for example.
canonical
Read-only canonical path associated with the resource. Returns the path used to create the PlainResource.
For example /path/to
New in version 3.3.
url_for()
Returns a URL for the resource.
class aiohttp.web.DynamicResource
A resource, inherited from Resource.
The class corresponds to resources with variable matching, e.g. '/path/{to}/{param}' etc.
canonical
Read-only canonical path associated with the resource. Returns the formatter obtained from the path
used to create the DynamicResource. For example, from a path /get/{num:^\d+}, it returns /get/
{num}
New in version 3.3.
url_for(**params)
Returns a URL for the resource.
Parameters params – variable substitutions for the dynamic resource.
E.g. for the '/path/{to}/{param}' pattern the method should be called as
resource.url_for(to='val1', param='val2')
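For example, with a named dynamic resource (assumes aiohttp is installed):

```python
from aiohttp import web

async def handler(request):
    return web.Response()

app = web.Application()
# Register a dynamic resource and give it a name for reversing.
app.router.add_get('/path/{to}/{param}', handler, name='dyn')
# url_for() substitutes the variable parts of the pattern.
url = app.router['dyn'].url_for(to='val1', param='val2')
assert str(url) == '/path/val1/val2'
```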
class aiohttp.web.StaticResource
A resource, inherited from Resource.
The class corresponds to resources for static file serving.
canonical
Read-only canonical path associated with the resource. Returns the prefix used to create the StaticResource. For example /prefix
New in version 3.3.
url_for(filename, append_version=None)
Returns a URL for file path under resource prefix.
Parameters
• filename – a file name substitution for the static file handler.
Accepts both str and pathlib.Path.
E.g. a URL for '/prefix/dir/file.txt' should be generated as
resource.url_for(filename='dir/file.txt')
• append_version (bool) – a flag for adding a file version (hash) to the URL query string for cache boosting.
By default takes the value from the constructor (False by default). When set to True,
a v=FILE_HASH query string parameter is added; when set to False it has no
effect.
If the file is not found, it has no effect.
class aiohttp.web.PrefixedSubAppResource
A resource for serving nested applications. The class instance is returned by add_subapp call.
canonical
Read-only canonical path associated with the resource. Returns the prefix used to create the PrefixedSubAppResource. For example /prefix
New in version 3.3.
url_for(**kwargs)
The call is not allowed, it raises RuntimeError.
Route
A route has an HTTP method (wildcard '*' is an option), a web-handler and an optional expect handler.
Every route belongs to some resource.
Route classes hierarchy:
AbstractRoute
ResourceRoute
SystemRoute
ResourceRoute is the route used for resources, SystemRoute serves URL resolving errors like 404 Not Found
and 405 Method Not Allowed.
class aiohttp.web.AbstractRoute
Base class for routes served by UrlDispatcher.
method
HTTP method handled by the route, e.g. GET, POST etc.
handler
handler that processes the route.
name
Name of the route, always equals the name of the resource which owns the route.
resource
Resource instance which holds the route, None for SystemRoute.
url_for(*args, **kwargs)
Abstract method for constructing url handled by the route.
Actually it’s a shortcut for route.resource.url_for(...).
coroutine handle_expect_header(request)
100-continue handler.
class aiohttp.web.ResourceRoute
The route class for handling different HTTP methods for Resource.
class aiohttp.web.SystemRoute
The route class for handling URL resolution errors like 404 Not Found and 405 Method Not Allowed.
status
HTTP status code
reason
HTTP status reason
app.router.add_routes([web.get('/get', handle_get),
                       web.post('/post', handle_post)])
class aiohttp.web.AbstractRouteDef
A base class for route definitions.
Inherited from abc.ABC.
New in version 3.1.
register(router)
Register itself into UrlDispatcher.
Abstract method, should be overridden by subclasses.
Returns list of registered AbstractRoute objects.
Changed in version 3.7: Return value updated from None to list of AbstractRoute instances.
class aiohttp.web.RouteDef
A definition of a route that is not registered yet.
Implements AbstractRouteDef.
New in version 2.3.
Changed in version 3.1: The class implements AbstractRouteDef interface.
method
HTTP method (GET, POST etc.) (str).
path
Path to resource, e.g. /path/to. Could contain {} brackets for variable resources (str).
handler
An async function to handle HTTP request.
kwargs
A dict of additional arguments.
class aiohttp.web.StaticDef
A definition of static file resource.
Implements AbstractRouteDef.
New in version 3.1.
prefix
A prefix used for static file handling, e.g. /static.
path
File system directory to serve, str or pathlib.Path (e.g. '/home/web-service/path/to/static').
kwargs
A dict of additional arguments, see UrlDispatcher.add_static() for a list of supported op-
tions.
aiohttp.web.get(path, handler, *, name=None, allow_head=True, expect_handler=None)
Return RouteDef for processing GET requests. See UrlDispatcher.add_get() for information about
parameters.
New in version 2.3.
aiohttp.web.post(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing POST requests. See UrlDispatcher.add_post() for information
about parameters.
New in version 2.3.
aiohttp.web.head(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing HEAD requests. See UrlDispatcher.add_head() for information
about parameters.
New in version 2.3.
aiohttp.web.put(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing PUT requests. See UrlDispatcher.add_put() for information about
parameters.
New in version 2.3.
aiohttp.web.patch(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing PATCH requests. See UrlDispatcher.add_patch() for information
about parameters.
New in version 2.3.
aiohttp.web.delete(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing DELETE requests. See UrlDispatcher.add_delete() for informa-
tion about parameters.
New in version 2.3.
aiohttp.web.view(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing ANY requests. See UrlDispatcher.add_view() for information about
parameters.
New in version 3.0.
RouteTableDef
A routes table definition used for describing routes by decorators (Flask style):
routes = web.RouteTableDef()
@routes.get('/get')
async def handle_get(request):
...
@routes.post('/post')
async def handle_post(request):
...
app.router.add_routes(routes)
@routes.view("/view")
class MyView(web.View):
async def get(self):
...
class aiohttp.web.RouteTableDef
A sequence of RouteDef instances (implements the collections.abc.Sequence protocol).
In addition to all standard list methods the class also provides methods like get() and post() for adding
new route definitions.
New in version 2.3.
@get(path, *, allow_head=True, name=None, expect_handler=None)
Add a new RouteDef item for registering GET web-handler.
See UrlDispatcher.add_get() for information about parameters.
@post(path, *, name=None, expect_handler=None)
Add a new RouteDef item for registering POST web-handler.
See UrlDispatcher.add_post() for information about parameters.
MatchInfo
View
class aiohttp.web.View(request)
Inherited from AbstractView.
Base class for class based views. Implementations should derive from View and override methods for handling
HTTP verbs like get() or post():
class MyView(View):
    async def get(self):
        return await get_resp(self.request)

app.router.add_view('/view', MyView)
The view raises 405 Method Not Allowed (HTTPMethodNotAllowed) if the requested web verb is not supported.
Parameters request – instance of Request that has initiated a view processing.
request
Request sent to view’s constructor, read-only property.
Overridable coroutine methods: connect(), delete(), get(), head(), options(), patch(),
post(), put(), trace().
See also:
Class Based Views
Running Applications
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, 'localhost', 8080)
await site.start()
# wait for finish signal
await runner.cleanup()
New in version 3.0: AppRunner / ServerRunner and TCPSite / UnixSite / SockSite are added in aiohttp
3.0
class aiohttp.web.BaseRunner
A base class for runners. Use AppRunner for serving Application, ServerRunner for low-level
Server.
server
Low-level web Server for handling HTTP requests, read-only attribute.
addresses
A list of served sockets addresses.
See socket.getsockname() for items type.
Utilities
class aiohttp.web.FileField
A namedtuple instance that is returned as a multidict value by Request.POST() if the field is an uploaded file.
name
Field name
filename
File name as specified by uploading (client) side.
file
An io.IOBase instance with content of uploaded file.
content_type
MIME type of uploaded file, 'text/plain' by default.
See also:
File Uploads
aiohttp.web.run_app(app, *, host=None, port=None, path=None, sock=None, shutdown_timeout=60.0, ssl_context=None, print=print, backlog=128, access_log_class=aiohttp.helpers.AccessLogger, access_log_format=aiohttp.helpers.AccessLogger.LOG_FORMAT, access_log=aiohttp.log.access_logger, handle_signals=True, reuse_address=None, reuse_port=None)
A utility function for running an application, serving it until keyboard interrupt and performing a Graceful
shutdown.
Suitable as a handy tool for scaffolding aiohttp-based projects. Perhaps a production config will use a more sophisticated runner, but it is good enough at least at the very beginning stage.
The server will listen on any host or Unix domain socket path you supply. If no hosts or paths are supplied, or
only a port is supplied, a TCP server listening on 0.0.0.0 (all hosts) will be launched.
Distributing HTTP traffic to multiple hosts or paths on the same application process provides no performance
benefit as the requests are handled on the same event loop. See Server Deployment for ways of distributing work
for increased performance.
Parameters
• app – Application instance to run or a coroutine that returns an application.
• host (str) – TCP/IP host or a sequence of hosts for the HTTP server. Default is '0.0.0.0' if port has been specified or if path is not supplied.
• port (int) – TCP/IP port for HTTP server. Default is 8080 for plain text HTTP and
8443 for HTTP via SSL (when ssl_context parameter is specified).
• path (str) – file system path for HTTP server Unix domain socket. A sequence
of file system paths can be used to bind multiple domain sockets. Listening on Unix
domain sockets is not supported by all operating systems.
• sock (socket) – a preexisting socket object to accept connections on. A sequence
of socket objects can be passed.
• shutdown_timeout (int) – a delay to wait for graceful server shutdown before forcibly disconnecting all open client sockets.
A system with Graceful shutdown properly implemented never waits for this timeout but closes the server in a few milliseconds.
• ssl_context – ssl.SSLContext for HTTPS server, None for HTTP connec-
tion.
• print – a callable compatible with print(). May be used to override STDOUT
output or suppress it. Passing None disables output.
• backlog (int) – the number of unaccepted connections that the system will allow
before refusing new connections (128 by default).
• access_log_class – class for access_logger. Default: aiohttp.helpers.AccessLogger. Must be a subclass of aiohttp.abc.AbstractAccessLogger.
• access_log – logging.Logger instance used for saving access logs. Use None to disable logging for the sake of speedup.
Constants
class aiohttp.web.ContentCoding
An enum.Enum class of available Content Codings.
deflate
DEFLATE compression
gzip
GZIP compression
identity
no compression
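As a sketch (not from the original text; the handler is illustrative), a response can be opted into a specific coding via Response.enable_compression():

```python
from aiohttp import web

async def handler(request):
    resp = web.Response(text='a fairly long payload ' * 100)
    # Negotiation normally picks a coding from the Accept-Encoding header;
    # passing ContentCoding.gzip forces gzip for this response.
    resp.enable_compression(web.ContentCoding.gzip)
    return resp
```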
Middlewares
If merge_slashes is True, merge multiple consecutive slashes in the path into one.
New in version 3.4: Support for remove_slash
12.2.5 Logging
import logging
from aiohttp import web
app = web.Application()
logging.basicConfig(level=logging.DEBUG)
web.run_app(app, port=5000)
Access logs
Access logs are enabled by default. If the debug flag is set, and the default logger 'aiohttp.access' is used,
access logs will be output to stderr if no handlers are attached. Furthermore, if the default logger has no log level
set, the log level will be set to logging.DEBUG.
This logging may be controlled by aiohttp.web.AppRunner() and aiohttp.web.run_app().
To override the default logger, pass your own instance of logging.Logger.
Format specification
The library provides a custom micro-language for specifying information about the request and response:
Option Meaning
%% The percent sign
%a Remote IP-address (IP-address of proxy if using reverse proxy)
%t Time when the request was started to process
%P The process ID of the child that serviced the request
%r First line of request
%s Response status code
%b Size of response in bytes, including HTTP headers
%T The time taken to serve the request, in seconds
%Tf The time taken to serve the request, in seconds with fraction in %.06f format
%D The time taken to serve the request, in microseconds
%{FOO}i request.headers['FOO']
%{FOO}o response.headers['FOO']
class AccessLogger(AbstractAccessLogger):

    def log(self, request, response, time):
        self.logger.info(f'{request.remote} '
                         f'"{request.method} {request.path} '
                         f'done in {time}s: {response.status}')
When Gunicorn is used for deployment, its default access log format will be automatically replaced with the default
aiohttp’s access log format.
If Gunicorn’s option access_log_format is specified explicitly, it should use aiohttp’s format specification.
Gunicorn’s access log works only if accesslog is specified explicitly in your config or as a command line option.
This configuration can be either a path or '-'. If the application uses a custom logging setup intercepting the
'gunicorn.access' logger, accesslog should be set to '-' to prevent Gunicorn from creating an empty access log
file upon every startup.
Error logs
aiohttp.web uses a logger named 'aiohttp.server' to report errors encountered during web request handling.
This log is enabled by default.
To use a different logger name, pass logger (logging.Logger instance) to the aiohttp.web.AppRunner()
constructor.
12.2.6 Testing
aiohttp provides a pytest plugin that makes writing web server tests extremely easy. It also provides test framework
agnostic utilities for testing with other frameworks such as unittest.
Before starting to write your tests, you may also be interested in reading how to write testable services that interact
with the loop.
To use the pytest plugin, install the pytest-aiohttp library:
If you don’t want to install pytest-aiohttp for some reason, you may instead insert the line pytest_plugins = 'aiohttp.pytest_plugin' into conftest.py for the same functionality.
Provisional Status
aiohttp test utils provides a scaffolding for testing aiohttp-based web servers.
They consist of two parts: running a test server and making HTTP requests to this server.
TestServer runs an aiohttp.web.Application based server, RawTestServer starts an aiohttp.web.WebServer low-level server.
For performing HTTP requests to these servers you have to create a test client: TestClient instance.
The client encapsulates aiohttp.ClientSession by providing proxy methods to the client for common operations
such as ws_connect, get, post, etc.
Pytest
The aiohttp_client fixture available from pytest-aiohttp plugin allows you to create a client to make requests to
test your app.
A simple example would be:
It also provides access to the app instance allowing tests to check the state of the app. Tests can be made even more
succinct with a fixture to create an app test client:
import pytest
from aiohttp import web

@pytest.fixture
def cli(loop, aiohttp_client):
    app = web.Application()
    app.router.add_get('/', previous)
    app.router.add_post('/', previous)
    return loop.run_until_complete(aiohttp_client(app))
handler should be a coroutine which accepts a request and returns a response, e.g.
port is optional; it is the port the server runs on. If not provided, a random unused port is used.
New in version 3.0.
pytest_aiohttp.aiohttp_unused_port
Function to return an unused port number for IPv4 TCP protocol:
Changed in version 3.0: The fixture was renamed from unused_port to aiohttp_unused_port.
Unittest
To test applications with the standard library’s unittest or unittest-based functionality, the AioHTTPTestCase is pro-
vided:
class MyAppTestCase(AioHTTPTestCase):

    async def get_application(self):
        app = web.Application()
        app.router.add_get('/', hello)
        return app
    # a vanilla example
    def test_example_vanilla(self):
        async def test_get_route():
            url = "/"
            resp = await self.client.request("GET", url)
            assert resp.status == 200
            text = await resp.text()
            assert "Hello, world" in text

        self.loop.run_until_complete(test_get_route())
class aiohttp.test_utils.AioHTTPTestCase
A base class that allows unittest-style testing of web applications built with aiohttp.
Derived from unittest.TestCase
Note: The TestClient’s methods are asynchronous: you have to execute functions on the test client using
asynchronous methods.
A basic test class wraps every test method by unittest_run_loop() decorator:
class TestA(AioHTTPTestCase):
@unittest_run_loop
unittest_run_loop:
A decorator dedicated for use with the asynchronous methods of an AioHTTPTestCase.
It handles executing an asynchronous function using the loop of the AioHTTPTestCase.
aiohttp provides a test utility for creating fake aiohttp.web.Request objects: aiohttp.test_utils.
make_mocked_request(). It can be useful for simple unit tests, such as handler tests, or for simulating error
conditions that are hard to reproduce on a real server:
def handler(request):
    assert request.headers.get('token') == 'x'
    return web.Response(body=b'data')

def test_handler():
    req = make_mocked_request('GET', '/', headers={'token': 'x'})
    resp = handler(req)
    assert resp.body == b'data'
• closing (bool) – flag indicating that the connection should be closed after the response.
• app (aiohttp.web.Application) – the aiohttp.web application attached for
fake request
• writer (aiohttp.StreamWriter) – object for managing outgoing data
• transport (asyncio.transports.Transport) – asyncio transport instance
• payload (aiohttp.StreamReader) – raw payload reader object
• sslcontext (ssl.SSLContext) – ssl.SSLContext object, for HTTPS connection
• loop (asyncio.AbstractEventLoop) – An event loop instance, mocked loop
by default.
Returns aiohttp.web.Request object.
New in version 2.3: match_info parameter.
If it’s preferred to handle the creation / teardown on a more granular basis, the TestClient object can be used directly:
loop.run_until_complete(test_get_route())
loop.run_until_complete(client.close())
A full list of the utilities provided can be found at the API reference.
Test server
• host (str) – a host for TCP socket, IPv4 local host ('127.0.0.1') by default.
• port (int) – optional port for TCP socket, if not provided a random unused port is
used.
New in version 3.0.
class aiohttp.test_utils.TestServer(app, *, scheme='http', host='127.0.0.1')
Test server (derived from BaseTestServer) for starting Application.
Parameters
• app – aiohttp.web.Application instance to run.
• scheme (str) – HTTP scheme, non-protected "http" by default.
• host (str) – a host for TCP socket, IPv4 local host ('127.0.0.1') by default.
• port (int) – optional port for TCP socket, if not provided a random unused port is
used.
New in version 3.0.
app
aiohttp.web.Application instance to run.
Test Client
Unlike the methods on the TestClient, client session requests do not automatically include the host in
the url queried, and will require an absolute path to the resource.
coroutine start_server(**kwargs)
Start a test server.
coroutine close()
Stop and finish executed test server.
make_url(path)
Return an absolute URL for given path.
coroutine request(method, path, *args, **kwargs)
Routes a request to the tested HTTP server.
The interface is identical to aiohttp.ClientSession.request(), except the loop kwarg is over-
ridden by the instance used by the test server.
coroutine get(path, *args, **kwargs)
Perform an HTTP GET request.
coroutine post(path, *args, **kwargs)
Perform an HTTP POST request.
coroutine options(path, *args, **kwargs)
Perform an HTTP OPTIONS request.
coroutine head(path, *args, **kwargs)
Perform an HTTP HEAD request.
coroutine put(path, *args, **kwargs)
Perform an HTTP PUT request.
coroutine patch(path, *args, **kwargs)
Perform an HTTP PATCH request.
coroutine delete(path, *args, **kwargs)
Perform an HTTP DELETE request.
coroutine ws_connect(path, *args, **kwargs)
Initiate websocket connection.
The API corresponds to aiohttp.ClientSession.ws_connect().
Utilities
aiohttp.test_utils.make_mocked_coro(return_value)
Creates a coroutine mock.
Behaves like a coroutine which returns return_value. But it is also a mock object, so you can make assertions on it
as with a usual Mock:
mocked = make_mocked_coro(1)
assert 1 == await mocked(1, 2)
mocked.assert_called_with(1, 2)
Parameters return_value – A value that the mock object will return when called.
Returns A mock object that behaves as a coroutine which returns return_value when called.
aiohttp.test_utils.unused_port()
Return an unused port number for IPv4 TCP protocol.
Return int ephemeral port number which could be reused by test server.
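For instance, a trivial sketch:

```python
from aiohttp.test_utils import unused_port

# an ephemeral port number that was free at call time,
# suitable for binding a test server to
port = unused_port()
```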
aiohttp.test_utils.loop_context(loop_factory=<function asyncio.new_event_loop>)
A contextmanager that creates an event_loop, for test purposes.
Handles the creation and cleanup of a test loop.
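A minimal sketch of using the context manager:

```python
import asyncio

from aiohttp.test_utils import loop_context

with loop_context() as loop:
    # a fresh event loop, installed as the default for the duration
    result = loop.run_until_complete(asyncio.sleep(0, result='done'))
# on exit the loop is torn down and closed
```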
aiohttp.test_utils.setup_test_loop(loop_factory=<function asyncio.new_event_loop>)
Create and return an asyncio.AbstractEventLoop instance.
The caller should also call teardown_test_loop, once they are done with the loop.
Note: As a side effect, the function changes the asyncio default loop with an asyncio.set_event_loop() call.
The previous default loop is not restored.
It should not be a problem for test suite: every test expects a new test loop instance anyway.
Changed in version 3.1: The function installs a created event loop as default.
aiohttp.test_utils.teardown_test_loop(loop)
Teardown and cleanup an event_loop created by setup_test_loop.
Parameters loop (asyncio.AbstractEventLoop) – the loop to teardown
Standalone
Nginx+supervisord
Nginx configuration
Here is a short overview of writing an Nginx configuration file. It does not cover all available Nginx options.
For full reference read Nginx tutorial and official Nginx documentation.
First configure HTTP server itself:
http {
  server {
    listen 80;
    client_max_body_size 4G;

    server_name example.com;

    location / {
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_redirect off;
      proxy_buffering off;
      proxy_pass http://aiohttp;
    }

    location /static {
      # path for static files
      root /path/to/app/static;
    }
  }
}
This config listens on port 80 for the server named example.com and redirects everything to the aiohttp backend group.
It also serves static files from the /path/to/app/static path as example.com/static.
Next we need to configure aiohttp upstream group:
http {
  upstream aiohttp {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response

    # Unix domain sockets are used in this example due to their high performance,
    # but TCP/IP sockets could be used instead:
    # server 127.0.0.1:8081 fail_timeout=0;
    # server 127.0.0.1:8082 fail_timeout=0;
    # server 127.0.0.1:8083 fail_timeout=0;
    # server 127.0.0.1:8084 fail_timeout=0;

    server unix:/tmp/example1.sock fail_timeout=0;
    server unix:/tmp/example2.sock fail_timeout=0;
    server unix:/tmp/example3.sock fail_timeout=0;
    server unix:/tmp/example4.sock fail_timeout=0;
  }
}
All HTTP requests for http://example.com except those for http://example.com/static will be redirected
to the example1.sock, example2.sock, example3.sock or example4.sock backend servers. By default, Nginx
uses a round-robin algorithm for backend selection.
Note: Nginx is not the only existing reverse proxy server, but it is the most popular one. Alternatives like HAProxy may
be used as well.
Supervisord
After configuring Nginx we need to start our aiohttp backends. It is better to use some tool that starts them automatically
after a system reboot or backend crash.
There are many ways to do it: Supervisord, Upstart, Systemd, Gaffer, Circus, Runit etc.
Here we’ll use Supervisord for example:
[program:aiohttp]
numprocs = 4
numprocs_start = 1
process_name = example_%(process_num)s

; Unix socket paths are specified by the command line
command=/path/to/aiohttp_example.py --path=/tmp/example%(process_num)s.sock

user=nobody
autostart=true
autorestart=true
aiohttp server
The last step is preparing aiohttp server for working with supervisord.
Assuming we have a properly configured aiohttp.web.Application and the port is specified on the command line, the
task is trivial:
# aiohttp_example.py
import argparse

from aiohttp import web

parser = argparse.ArgumentParser(description="aiohttp server example")
parser.add_argument('--path')
parser.add_argument('--port')

if __name__ == '__main__':
    app = web.Application()
    # configure app

    args = parser.parse_args()
    web.run_app(app, path=args.path, port=args.port)
For real use cases we might need to configure other things like logging, but that is out of scope for this topic.
Nginx+Gunicorn
aiohttp can be deployed using Gunicorn, which is based on a pre-fork worker model. Gunicorn launches your app as
worker processes for handling incoming requests.
In contrast to deployment with bare Nginx, this solution does not require manually running several aiohttp processes and
using a tool like supervisord to monitor them. But nothing is free: running an aiohttp application under gunicorn is
slightly slower.
Prepare environment
You first need to set up your deployment environment. This example is based on Ubuntu 16.04.
Create a directory for your application:
Now that the virtual environment is ready, we’ll proceed to install aiohttp and gunicorn:
Application
Let’s write a simple application and save it to a file named my_app_module.py:

from aiohttp import web

async def index(request):
    return web.Response(text="Welcome home!")

my_web_app = web.Application()
my_web_app.router.add_get('/', index)
Application factory
Alternatively, the entry point can be a coroutine that accepts no parameters and returns an application instance:
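A minimal sketch of such a factory (handlers would be registered inside it):

```python
from aiohttp import web

async def my_web_app():
    # Gunicorn awaits this zero-argument coroutine to obtain the app
    app = web.Application()
    # register handlers here, e.g. app.router.add_get('/', index)
    return app
```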
Start Gunicorn
When running Gunicorn, you provide the name of the module, i.e. my_app_module, and the name of the app or
application factory, i.e. my_web_app, along with other Gunicorn settings provided as command line flags or in your
config file.
In this case, we will use:
• the --bind flag to set the server’s socket address;
• the --worker-class flag to tell Gunicorn that we want to use a custom worker subclass instead of one of
the Gunicorn default worker types;
• you may also want to use the --workers flag to tell Gunicorn how many worker processes to use for handling
requests. (See the documentation for recommendations on How Many Workers?)
• you may also want to use the --accesslog flag to enable the access log to be populated. (See logging for
more information.)
The custom worker subclass is defined in aiohttp.GunicornWebWorker:
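Putting the flags together, the launch command looks something like this (the bind address and module names are illustrative):

```shell
gunicorn my_app_module:my_web_app --bind localhost:8080 --worker-class aiohttp.GunicornWebWorker
```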
[2015-03-11 18:27:21 +0000] [1253] [INFO] Booting worker with pid: 1253
Gunicorn is now running and ready to serve requests to your app’s worker processes.
Note: If you want to use an alternative asyncio event loop uvloop, you can use the aiohttp.
GunicornUVLoopWebWorker worker class.
We can proxy our gunicorn workers through NGINX with a configuration like this:
worker_processes 1;
user nobody nogroup;
events {
worker_connections 1024;
}
http {
## Main Server Block
server {
## Open by default.
listen 80 default_server;
server_name main;
client_max_body_size 200M;
(continues on next page)
Since gunicorn listens for requests at our localhost address on port 8080, we can use the proxy_pass directive to send
web traffic to our workers. If everything is configured correctly, we should be able to reach our application at the IP
address of our web server.
worker_processes 1;
user nobody nogroup;
events {
worker_connections 1024;
}
http {
## SSL Redirect
server {
listen 80 default;
return 301 https://$host$request_uri;
}
ssl_certificate /etc/secrets/cert.pem;
ssl_certificate_key /etc/secrets/key.pem;
The first server block accepts regular HTTP connections on port 80 and redirects them to our secure SSL connection.
The second block matches our previous example, except that we need to change the open port to the HTTPS port and
specify where our SSL certificates are stored with the ssl_certificate and ssl_certificate_key directives.
During development, you may want to create your own self-signed certificates for testing purposes and use another
service like Let’s Encrypt when you are ready to move to production.
More information
See the official documentation for more information about suggested nginx configuration. You can also find out more
about configuring for secure https connections as well.
Logging configuration
aiohttp and gunicorn use different formats for specifying the access log.
By default aiohttp uses its own defaults:
For more information please read Format Specification for Access Log.
Issues have been reported when using Apache2 in front of an aiohttp server: #2687 Intermittent 502 proxy errors when
running behind Apache (https://github.com/aio-libs/aiohttp/issues/2687).
12.3 Utilities
Abstract routing
coroutine resolve(request)
Performs URL resolving. It is an abstract method that should be overridden in the router implementation.
Parameters request – aiohttp.web.Request instance for resolving, the request has
aiohttp.web.Request.match_info equals to None at resolving stage.
Returns AbstractMatchInfo instance.
class aiohttp.abc.AbstractMatchInfo
Abstract match info, returned by AbstractRouter.resolve() call.
http_exception
aiohttp.web.HTTPException if no match was found, None otherwise.
coroutine handler(request)
Abstract method performing web-handler processing.
Parameters request – aiohttp.web.Request instance for resolving, the request has
aiohttp.web.Request.match_info equals to None at resolving stage.
Returns aiohttp.web.StreamResponse or descendants.
Raise aiohttp.web.HTTPException on error
coroutine expect_handler(request)
Abstract method for handling 100-continue processing.
For class based view support aiohttp has the abstract AbstractView class, which is awaitable (it may be used like await
Cls() or yield from Cls()) and has the request as an attribute.
class aiohttp.abc.AbstractView
An abstract class, base for all class based views implementations.
Methods __iter__ and __await__ should be overridden.
request
aiohttp.web.Request instance for performing the request.
class aiohttp.abc.AbstractCookieJar
The cookie jar instance is available as ClientSession.cookie_jar.
The jar contains Morsel items for storing internal cookie data.
API provides a count of saved cookies:
len(session.cookie_jar)
Parameters
• cookies – a collections.abc.Mapping (e.g. dict, SimpleCookie)
or iterable of pairs with cookies returned by server’s response.
• response_url (str) – URL of response, None for shared cookies. Regular
cookies are coupled with server’s URL and are sent only to this server, shared
ones are sent in every client request.
filter_cookies(request_url)
Return the jar’s cookies acceptable for the given URL and available in the Cookie header for sending client requests to
that URL.
Parameters request_url (str) – request’s URL for which the cookies are requested.
Returns http.cookies.SimpleCookie with the filtered cookies for the given URL.
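As a concrete sketch using the default CookieJar implementation (the URLs and cookie values are illustrative):

```python
import asyncio

from aiohttp import CookieJar
from yarl import URL

async def main():
    jar = CookieJar()
    # store a cookie as if it arrived in a response from example.com
    jar.update_cookies({'token': 'x'}, URL('http://example.com'))
    # len() gives the count of saved cookies
    assert len(jar) == 1
    # only cookies acceptable for the request URL are returned
    return jar.filter_cookies(URL('http://example.com/path'))

filtered = asyncio.new_event_loop().run_until_complete(main())
```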
class aiohttp.abc.AbstractAccessLogger
An abstract class, base for all RequestHandler access_logger implementations.
The method log() should be overridden.
log(request, response, time)
Parameters
• request – aiohttp.web.Request object.
• response – aiohttp.web.Response object.
• time (float) – Time taken to serve the request.
aiohttp supports a full-featured multipart reader and writer. Both are designed with streaming processing in mind
to avoid an unwanted memory footprint, which may be significant if you’re dealing with large payloads. This also means
that most I/O operations can only be executed once.
Assume you made a request, as usual, and want to process the response multipart data:
First, you need to wrap the response with MultipartReader.from_response(). This keeps the
implementation of MultipartReader separate from the response and the connection routines, which makes it
more portable:
reader = aiohttp.MultipartReader.from_response(resp)
Let’s assume that with this response you received some JSON document and multiple files for it, but you don’t need all
of them, just a specific one.
So first you need to enter into a loop where the multipart body will be processed:
metadata = None
filedata = None
while True:
    part = await reader.next()
The returned type depends on what the next part is: if it’s a simple body part then you’ll get a BodyPartReader
instance here, otherwise it will be another MultipartReader instance for the nested multipart. Remember that the
multipart format is recursive and supports multiple levels of nested body parts. When there are no more parts left to
fetch, None is returned - that’s the signal to break the loop:
if part is None:
    break
Both BodyPartReader and MultipartReader provide access to the body part headers: this allows you to filter
parts by their attributes:
if part.headers[aiohttp.hdrs.CONTENT_TYPE] == 'application/json':
    metadata = await part.json()
    continue
Neither BodyPartReader nor MultipartReader instances read the whole body part data unless explicitly
asked to. BodyPartReader provides a set of helper methods to fetch popular content types in a friendly way:
• BodyPartReader.text() for plain text data;
• BodyPartReader.json() for JSON;
• BodyPartReader.form() for application/x-www-form-urlencoded
Each of these methods automatically recognizes if the content is compressed using the gzip or deflate encoding (while it
respects the identity one), or if the transfer encoding is base64 or quoted-printable - in each case the result will be automati-
cally decoded. But if you need access to the raw binary data as it is, there are the BodyPartReader.read() and
BodyPartReader.read_chunk() coroutine methods to read the raw binary data all in a single shot
or chunk by chunk respectively.
When you have to deal with multipart files, the BodyPartReader.filename property comes to the rescue. It is a very
smart helper which handles the Content-Disposition header correctly and extracts the right filename attribute from it:
if part.filename != 'secret.txt':
    continue
If the current body part does not match your expectations and you want to skip it, just continue the loop to start its next
iteration. Here is where the magic happens. Before fetching the next body part, await reader.next() ensures
that the previous one was read completely. If it was not, all of its content is discarded in order to fetch the next part.
So you don’t have to care about cleanup routines while you’re within the loop.
Once you’ve found the part for the file you were searching for, just read it. Let’s handle it as it is, without applying any
decoding magic:
Later you may decide to decode the data. It’s still simple and possible to do:
filedata = part.decode(filedata)
Once you are done with multipart processing, just break a loop:
break
MultipartWriter provides an interface to build a multipart payload from Python data and serialize it into a
chunked binary stream. Since the multipart format is recursive and supports deep nesting, you can use the with statement
to design your multipart data in a way that is close to how it will be sent:
MultipartWriter.append() is used to join new body parts into a single stream. It accepts various inputs
and determines which default headers should be used for each.
For text data default Content-Type is text/plain; charset=utf-8:
mpwriter.append('hello')
mpwriter.append(b'aiohttp')
You can always override these defaults by passing your own headers as the second argument:
mpwriter.append(io.BytesIO(b'GIF89a...'),
{'CONTENT-TYPE': 'image/gif'})
For file objects the Content-Type will be determined by using Python’s mimetypes module, and additionally the Content-
Disposition header will include the file’s basename:
If you want to send a file with a different name, just take the Payload instance which MultipartWriter.
append() always returns and set Content-Disposition explicitly by using the Payload.
set_content_disposition() helper:
part.set_content_disposition('attachment', filename='secret.txt')
part.headers[aiohttp.hdrs.CONTENT_ID] = 'X-12345'
If you set Content-Encoding, it will be automatically applied to the data on serialization (see below):
part.headers[aiohttp.hdrs.CONTENT_ENCODING] = 'gzip'
mpwriter.append_json({'test': 'passed'})
mpwriter.append_form([('key', 'value')])
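Collected together, a small sketch of building a writer (values illustrative):

```python
import io

import aiohttp

with aiohttp.MultipartWriter('mixed') as mpwriter:
    mpwriter.append('hello')                    # text/plain; charset=utf-8
    mpwriter.append_json({'test': 'passed'})    # application/json
    # append() returns the Payload, so headers can be inspected or tweaked
    part = mpwriter.append(io.BytesIO(b'GIF89a...'),
                           {'Content-Type': 'image/gif'})
```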
When it’s done, to make a request just pass the root MultipartWriter instance as the data argument of
aiohttp.ClientSession.request():
Behind the scenes MultipartWriter.write() will yield chunks of every part, and if a body part has Content-
Encoding or Content-Transfer-Encoding they will be applied to the streamed content.
Please note that on MultipartWriter.write() all the file objects will be read until the end and there is no way
to repeat the request without rewinding their pointers to the start.
Example: MJPEG streaming with multipart/x-mixed-replace. By default MultipartWriter.write() ap-
pends a closing --boundary-- and breaks your content. Providing close_boundary=False prevents this:
my_boundary = 'some-boundary'

response = web.StreamResponse(
    status=200,
    reason='OK',
    headers={
        'Content-Type': 'multipart/x-mixed-replace;boundary={}'.format(my_boundary)
    }
)

while True:
    frame = get_jpeg_frame()
    with MultipartWriter('image/jpeg', boundary=my_boundary) as mpwriter:
        mpwriter.append(frame, {
            'Content-Type': 'image/jpeg'
        })
        await mpwriter.write(response, close_boundary=False)
    await response.drain()
Hacking Multipart
The Internet is full of terror and sometimes you may find a server which implements multipart support in strange ways,
where an obvious solution does not work.
For instance, if the server uses cgi.FieldStorage then you have to ensure that no body part contains a Content-
Length header:
On the other hand, some servers may require you to specify Content-Length for the whole multipart request. aiohttp does
not do that since it sends multipart using chunked transfer encoding by default. To overcome this issue, you have to
serialize the MultipartWriter yourself in a way that lets you calculate its size:
class Writer:
    def __init__(self):
        self.buffer = bytearray()

    async def write(self, data):
        self.buffer.extend(data)

writer = Writer()
await mpwriter.write(writer)
await session.post('http://example.com',
                   data=writer.buffer, headers=mpwriter.headers)
Sometimes the server response may not be well formed: it may or may not contain nested parts. For instance, suppose we
request a resource which returns JSON documents with files attached to them. If a document has any attachments,
they are returned as a nested multipart. If it has none, it responds with plain body parts:
--:
CONTENT-TYPE: application/json
{"_id": "foo"}
--:
CONTENT-TYPE: multipart/related; boundary=----:
----:
CONTENT-TYPE: application/json
{"_id": "bar"}
----:
CONTENT-TYPE: text/plain
CONTENT-DISPOSITION: attachment; filename=bar.txt
{"_id": "boo"}
--:
CONTENT-TYPE: multipart/related; boundary=----:
----:
CONTENT-TYPE: application/json
{"_id": "baz"}
----:
CONTENT-TYPE: text/plain
CONTENT-DISPOSITION: attachment; filename=baz.txt
Reading this kind of data in a single stream is possible, but not clean at all:
result = []
while True:
    part = await reader.next()
    if part is None:
        break
    if isinstance(part, aiohttp.MultipartReader):
        # Fetching files
        while True:
            filepart = await part.next()
            if filepart is None:
                break
            result[-1].append((await filepart.read()))
    else:
        # Fetching document
        result.append([(await part.json())])
Let’s hack a reader in the way to return pairs of document and reader of the related files on each iteration:
class PairsMultipartReader(aiohttp.MultipartReader):

    # keep a reference to the reader class used for nested parts
    multipart_reader_cls = aiohttp.MultipartReader

    async def next(self):
        """Emits a tuple of document object (dict) and multipart reader
        of the related attachments (if any).

        :rtype: tuple
        """
        reader = await super().next()

        if self._at_eof:
            return None, None

        if isinstance(reader, self.multipart_reader_cls):
            part = await reader.next()
            doc = await part.json()
        else:
            doc = await reader.json()

        return doc, reader

reader = PairsMultipartReader.from_response(resp)
result = []
while True:
    doc, files_reader = await reader.next()
    if doc is None:
        break

    files = []
    while True:
        filepart = await files_reader.next()
        if filepart is None:
            break
        files.append((await filepart.read()))

    result.append((doc, files))
See also:
Multipart reference
Changed in version 3.0: Property type was changed from bytes to str.
append(obj, headers=None)
Append an object to writer.
append_payload(payload)
Adds a new body part to multipart writer.
append_json(obj, headers=None)
Helper to append JSON part.
append_form(obj, headers=None)
Helper to append form urlencoded part.
size
Size of the payload.
coroutine write(writer, close_boundary=True)
Write body.
Parameters close_boundary (bool) – Whether to emit the closing boundary.
You may want to disable this when streaming (multipart/x-mixed-replace).
New in version 3.4: Support close_boundary argument.
Reading Methods
coroutine StreamReader.read(n=-1)
Read up to n bytes. If n is not provided, or set to -1, read until EOF and return all read bytes.
If the EOF was received and the internal buffer is empty, return an empty bytes object.
Parameters n (int) – how many bytes to read, -1 for the whole stream.
Return bytes the given data
coroutine StreamReader.readany()
Read next data portion for the stream.
Returns immediately if the internal buffer has data.
Return bytes the given data
coroutine StreamReader.readexactly(n)
Read exactly n bytes.
Raises asyncio.IncompleteReadError if the end of the stream is reached before n bytes can be read; the
asyncio.IncompleteReadError.partial attribute of the exception contains the partially read bytes.
Parameters n (int) – how many bytes to read.
Return bytes the given data
coroutine StreamReader.readline()
Read one line, where “line” is a sequence of bytes ending with \n.
If EOF is received, and \n was not found, the method will return the partial read bytes.
If the EOF was received and the internal buffer is empty, return an empty bytes object.
Return bytes the given line
coroutine StreamReader.readchunk()
Read a chunk of data as it was received by the server.
Returns a tuple of (data, end_of_HTTP_chunk).
When chunked transfer encoding is used, end_of_HTTP_chunk is a bool indicating if the end of the data
corresponds to the end of an HTTP chunk, otherwise it is always False.
Return tuple[bytes, bool] a chunk of data and a bool that is True when the end of the returned
chunk corresponds to the end of an HTTP chunk.
There are also methods for iterating over data chunks with a maximum size limit and over any available data.
async-for StreamReader.iter_chunked(n)
Iterates over data chunks with maximum size limit:
async for data in response.content.iter_chunked(1024):
    print(data)
async-for StreamReader.iter_any()
Iterates over data chunks in the order they were received into the stream:
async for data in response.content.iter_any():
    print(data)
async-for StreamReader.iter_chunks()
Iterates over data chunks as received from the server:
If chunked transfer encoding is used, the original HTTP chunk formatting can be retrieved by reading the second
element of the returned tuples:
buffer = b""
async for data, end_of_http_chunk in response.content.iter_chunks():
    buffer += data
    if end_of_http_chunk:
        print(buffer)
        buffer = b""
Helpers
StreamReader.exception()
Get the exception that occurred during data reading.
StreamReader.is_eof()
Return True if EOF was reached.
The internal buffer may not be empty at the moment.
See also:
StreamReader.at_eof()
StreamReader.at_eof()
Return True if the buffer is empty and EOF was reached.
StreamReader.read_nowait(n=None)
Returns data from the internal buffer if any, an empty bytes object otherwise.
Raises RuntimeError if another coroutine is waiting for the stream.
Parameters n (int) – how many bytes to read, -1 for the whole internal buffer.
Return bytes the given data
StreamReader.unread_data(data)
Rolls back reading of some data from the stream, inserting it at the buffer head.
Parameters data (bytes) – data to push back into the stream.
coroutine StreamReader.wait_eof()
Wait for EOF. The given data may be accessible by upcoming read calls.
12.3.5 Signals
freeze()
Freeze the list. After the call any content modification is forbidden.
FrozenList
ChainMapProxy
An immutable version of collections.ChainMap. Internally the proxy is a list of mappings (dictionaries); if the
requested key is not present in the first mapping, the second is looked up, and so on.
The class supports collections.abc.Mapping interface.
class aiohttp.ChainMapProxy(maps)
Create a new chained mapping proxy from a list of mappings (maps).
New in version 3.2.
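Since the lookup order matches collections.ChainMap, it can be illustrated with the stdlib class itself (a sketch; ChainMapProxy behaves the same way but is immutable):

```python
from collections import ChainMap

defaults = {'debug': False, 'host': 'localhost'}
overrides = {'debug': True}

# the first mapping that contains the key wins
chain = ChainMap(overrides, defaults)
assert chain['debug'] is True          # found in the first mapping
assert chain['host'] == 'localhost'    # falls through to the second
```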
class aiohttp.WSCloseCode
An IntEnum for keeping close message code.
OK
A normal closure, meaning that the purpose for which the connection was established has been fulfilled.
GOING_AWAY
An endpoint is “going away”, such as a server going down or a browser having navigated away from a
page.
PROTOCOL_ERROR
An endpoint is terminating the connection due to a protocol error.
UNSUPPORTED_DATA
An endpoint is terminating the connection because it has received a type of data it cannot accept (e.g., an
endpoint that understands only text data MAY send this if it receives a binary message).
INVALID_TEXT
An endpoint is terminating the connection because it has received data within a message that was not
consistent with the type of the message (e.g., non-UTF-8 RFC 3629 data within a text message).
POLICY_VIOLATION
An endpoint is terminating the connection because it has received a message that violates its policy.
This is a generic status code that can be returned when there is no other more suitable status code (e.g.,
unsupported_data or message_too_big) or if there is a need to hide specific details about the
policy.
MESSAGE_TOO_BIG
An endpoint is terminating the connection because it has received a message that is too big for it to
process.
MANDATORY_EXTENSION
An endpoint (client) is terminating the connection because it expected the server to negotiate one or
more extensions, but the server did not return them in the response message of the WebSocket handshake.
The list of extensions that are needed should appear in the /reason/ part of the Close frame. Note that this
status code is not used by the server, because it can fail the WebSocket handshake instead.
INTERNAL_ERROR
A server is terminating the connection because it encountered an unexpected condition that prevented it
from fulfilling the request.
SERVICE_RESTART
The service is restarted. A client may reconnect, and if it chooses to do so, should reconnect using a
randomized delay of 5-30 seconds.
TRY_AGAIN_LATER
The service is experiencing overload. A client should only connect to a different IP (when there are
multiple for the target) or reconnect to the same IP upon user action.
class aiohttp.WSMsgType
An IntEnum for describing WSMessage type.
CONTINUATION
A mark for a continuation frame; the user will never get a message with this type.
TEXT
Text message, the value has str type.
BINARY
Binary message, the value has bytes type.
PING
Ping frame (sent by client peer).
PONG
Pong frame, answer on ping. Sent by server peer.
CLOSE
Close frame.
CLOSED
Not actually a frame but a flag indicating that the websocket was closed.
ERROR
Not actually a frame but a flag indicating that an error was received on the websocket.
class aiohttp.WSMessage
Websocket message, returned by .receive() calls.
type
Message type, WSMsgType instance.
data
Message payload.
1. str for WSMsgType.TEXT messages.
2. bytes for WSMsgType.BINARY messages.
3. WSCloseCode for WSMsgType.CLOSE messages.
4. bytes for WSMsgType.PING messages.
5. bytes for WSMsgType.PONG messages.
extra
Additional info, str.
Makes sense only for WSMsgType.CLOSE messages, contains optional message description.
json(*, loads=json.loads)
Returns parsed JSON data.
Parameters loads – optional JSON decoder function.
12.4 FAQ
As of aiohttp 2.3, RouteTableDef provides an API similar to Flask’s @app.route. See Alternative ways for
registering routes.
Unlike Flask’s @app.route, RouteTableDef does not require an app in the module namespace (which often
leads to circular imports).
Instead, a RouteTableDef is decoupled from an application instance:
routes = web.RouteTableDef()
@routes.get('/get')
async def handle_get(request):
...
@routes.post('/post')
async def handle_post(request):
...
app.router.add_routes(routes)
12.4.2 Does aiohttp have a concept like Flask’s “blueprint” or Django’s “app”?
If you’re writing a large application, you may want to consider using nested applications, which are similar to Flask’s
“blueprints” or Django’s “apps”.
See: Nested applications.
12.4.3 How do I create a route that matches urls with a given prefix?
The first argument, *, matches any HTTP method (GET, POST, OPTIONS, etc.). The second argument matches URLs
with the desired prefix. The third argument is the handler function.
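Concretely, such a registration can be sketched as app.router.add_route('*', '/path/{tail:.*}', sink_handler), where sink_handler is a hypothetical handler; the {tail:.*} pattern behaves roughly like this regex:

```python
import re

# aiohttp compiles '/path/{tail:.*}' into a pattern along these lines,
# so every URL under /path/ is captured into the 'tail' variable
pattern = re.compile(r'^/path/(?P<tail>.*)$')

match = pattern.match('/path/to/something')
assert match is not None
assert match.group('tail') == 'to/something'
```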
aiohttp.web.Application object supports the dict interface and provides a place to store your database
connections or any other resource you want to share between handlers.
12.4.5 How can middleware store data for web handlers to use?
12.4.6 Can a handler receive incoming events from different sources in parallel?
Yes.
As an example, we may have two event sources:
1. WebSocket for events from an end user
2. Redis PubSub for events from other parts of the application
The most native way to handle this is to create a separate task for PubSub handling.
Parallel aiohttp.web.WebSocketResponse.receive() calls are forbidden; a single task should perform
WebSocket reading. However, other tasks may use the same WebSocket object for sending data to peers.
ws = web.WebSocketResponse()
await ws.prepare(request)
task = request.app.loop.create_task(
read_subscription(ws,
request.app['redis']))
try:
async for msg in ws:
# handle incoming messages
# use ws.send_str() to send data back
...
finally:
task.cancel()
async def read_subscription(ws, redis):
    channel, = await redis.subscribe('channel:1')
    try:
        async for msg in channel.iter():
            answer = process_the_message(msg)  # your function here
            await ws.send_str(answer)
    finally:
        await redis.unsubscribe('channel:1')
ws = web.WebSocketResponse()
user_id = authenticate_user(request)
await ws.prepare(request)
request.app['websockets'][user_id].add(ws)
try:
async for msg in ws:
await ws.send_str(msg.data)
finally:
request.app['websockets'][user_id].remove(ws)
return ws
user_id = authenticate_user(request)
ws_closers = [ws.close()
for ws in request.app['websockets'][user_id]
if not ws.closed]
# this keeps us from returning the response until all are closed
ws_closers and await asyncio.gather(*ws_closers)
return web.Response(text='OK')
def main():
loop = asyncio.get_event_loop()
app = web.Application(loop=loop)
app.router.add_route('GET', '/echo', echo_handler)
app.router.add_route('POST', '/logout', logout_handler)
app['websockets'] = defaultdict(set)
web.run_app(app, host='localhost', port=8080)
If your system has several IP interfaces, you may choose the one that will be used to bind a socket locally:
See also:
aiohttp.TCPConnector and local_addr parameter.
It’s impossible. Choosing what to compress and what not to compress is is a tricky matter.
If you need global compression, write a custom middleware. Or enable compression in NGINX (you are deploying
aiohttp behind reverse proxy, right?).
aiohttp.ClientSession should be created once for the lifetime of the server in order to benefit from connection
pooling.
Sessions save cookies internally. If you don't need cookie processing, use aiohttp.DummyCookieJar. If
you need separate cookies for different HTTP calls but process them in logical chains, use a single
aiohttp.TCPConnector with separate client sessions and connector_owner=False.
Restricting access from subapplication to main (or outer) app is a deliberate choice.
A subapplication is an isolated unit by design. If you need to share a database object, do it explicitly:
subapp['db'] = mainapp['db']
mainapp.add_subapp('/prefix', subapp)
12.4.13 How do I perform operations in a request handler after sending the re-
sponse?
Middlewares can be written to handle post-response operations, but they run after every request. You can explicitly
send the response by calling aiohttp.web.Response.write_eof(), which starts sending before the handler
returns, giving you a chance to execute follow-up operations:
async def ping_handler(request):
    """Send PONG and increase DB counter."""
    # explicitly send the response
    resp = web.json_response({'message': 'PONG'})
    await resp.prepare(request)
    await resp.write_eof()
    # increase pong count
    APP['db'].inc_pong()
    return resp
An aiohttp.web.Response object must be returned. This is required by aiohttp web contracts, even though the
response has already been sent.
12.4.14 How do I make sure my custom middleware response will behave cor-
rectly?
Sometimes your middleware handlers might need to send a custom response. This is just fine as long as you always
create a new aiohttp.web.Response object when required.
The response object is a Finite State Machine. Once it has been dispatched by the server, it will reach its final state
and cannot be used again.
The following middleware will make the server hang, once it serves the second response:
def misbehaved_middleware():
# don't do this!
cached = web.Response(status=200, text='Hi, I am cached!')
@web.middleware
async def middleware(request, handler):
# ignoring response for the sake of this example
_res = await handler(request)
return cached
return middleware
The short answer is: the life-cycle of all asyncio objects should be shorter than the life-cycle of the event loop.
The full explanation is longer. All asyncio objects should be correctly finished/disconnected/closed before event
loop shutdown. Otherwise the user can get unexpected behavior: in the best case a warning about an unclosed
resource, in the worst case the program just hangs because an awaited coroutine is never resumed.
Consider the following code from mod.py:
import aiohttp

session = aiohttp.ClientSession()

async def fetch(url):
    async with session.get(url) as resp:
        return await resp.text()
The session grabs current event loop instance and stores it in a private variable.
The main module imports the module and installs uvloop (an alternative fast event loop implementation).
main.py:
import asyncio
import uvloop
import mod
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
asyncio.run(main())
The code is broken: the session is bound to the default asyncio loop at import time, but the loop is changed
after the import by asyncio.set_event_loop_policy(). As a result the fetch() call hangs.
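A minimal illustration of the mismatch, using pure asyncio (no aiohttp required):

```python
import asyncio

# a loop created (or grabbed) at import time...
import_time_loop = asyncio.new_event_loop()

async def main():
    # ...is not the loop that asyncio.run() actually executes on
    return asyncio.get_running_loop()

running_loop = asyncio.run(main())
assert running_loop is not import_time_loop
import_time_loop.close()
```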
To avoid this import dependency hell, aiohttp encourages creating the ClientSession from inside an async
function. The same policy works for web.Application too.
Another use case is writing unit tests. Many test libraries (aiohttp test tools first of all) create a new loop
instance for every test function execution. This is done for the sake of test isolation. Otherwise pending activity
(timers, network packets, etc.) from a previous test may interfere with the current one, producing very cryptic
and unstable test failures.
Note: class variables are actually hidden globals. The following code has the same problem as the mod.py example;
the session class variable is a hidden global object:
class A:
session = aiohttp.ClientSession()
12.5 Miscellaneous
Helpful pages.
12.5.1 Essays
Rationale
The first generation (v1) of the router mapped a (method, path) pair to a web-handler. The mapping is called
a route. Routes could have unique names.
The main design mistake was coupling the route to the (method, path) pair, while URL construction really
operates on resources (location is a synonym). The HTTP method is not part of the URI; it is applied only when
sending the HTTP request.
Having different route names for the same path is confusing. Moreover, named routes constructed for the same
path should have unique non-overlapping names, which is cumbersome in certain situations.
On the other hand, sometimes it's desirable to bind several HTTP methods to the same web handler. With the v1
router this can be solved by passing '*' as the HTTP method. Class-based views usually require the '*' method
as well.
Implementation
Users may still use a wildcard for accepting all HTTP methods (maybe we will add something like
resource.add_wildcard(handler) later).
Since names now belong to resources, app.router['name'] returns a resource instance instead of
aiohttp.web.Route.
A resource has a .url() method, so app.router['name'].url(parts={'a': 'b'}, query={'arg':
'param'}) still works as usual.
The change allows rewriting static file handling and implementing nested applications as well.
Decoupling of HTTP location and HTTP method makes life easier.
Backward compatibility
Since aiohttp 1.1 the library uses yarl for URL processing.
New API
URL encoding
app.router.add_get('/', handler)
and:
app.router.add_get('/%D0%BF%D1%83%D1%82%D1%8C', handler)
Sub-Applications
Sub-applications are designed for solving the problem of a big monolithic code base. Let's assume we have a
project with its own business logic and tools like an administration panel and a debug toolbar.
The administration panel is a separate application by its own nature, but all its URLs are served under a prefix
like /admin. Thus we'll create a totally separate application named admin and connect it to the main app with a
prefix:
admin = web.Application()
# setup admin routes, signals and middlewares
app.add_subapp('/admin/', admin)
URL reversing
URL reversing for sub-applications should generate URLs with the proper prefix.
But to get a URL, the sub-application's router should be used:
admin = web.Application()
admin.add_get('/resource', handler, name='name')
app.add_subapp('/admin/', admin)
(continues on next page)
url = admin.router['name'].url_for()
Application freezing
An application can be used either as a main app (app.make_handler()) or as a sub-application, but not both at
the same time.
After connecting an application by an .add_subapp() call, or starting to serve it as a top-level application, the
application is frozen.
This means that registering new routes, signals and middlewares is forbidden. Changing the state (app['name'] =
'value') of a frozen application is deprecated and will eventually be removed.
Migration to 2.x
Client
chunking
aiohttp does not support custom chunking sizes. It is up to the developer to decide how to chunk data streams. If
chunking is enabled, aiohttp encodes the provided chunks in the “Transfer-encoding: chunked” format.
aiohttp does not enable chunked encoding automatically even if a transfer-encoding header is supplied: chunked has
to be set explicitly. If chunked is set, then the Transfer-encoding and content-length headers are disallowed.
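For reference, the chunked wire format described above can be sketched as follows (each chunk is its hex length, CRLF, the data, CRLF, terminated by a zero-length chunk):

```python
def encode_chunked(chunks):
    """Encode byte chunks in the Transfer-Encoding: chunked wire format."""
    out = b""
    for chunk in chunks:
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    return out + b"0\r\n\r\n"  # zero-length chunk ends the body

assert encode_chunked([b"hello", b" world"]) == b"5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n"
```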
compression
Compression has to be enabled explicitly with the compress parameter. If compression is enabled, adding a content-
encoding header is not allowed. Compression also enables the chunked transfer-encoding. Compression can not be
combined with a Content-Length header.
Client Connector
1. By default a connector object manages the total number of concurrent connections. This limit was a per-host rule
in version 1.x. In 2.x, the limit parameter defines how many concurrent connections the connector can open, and a
new limit_per_host parameter defines the limit per host. By default there is no per-host limit.
2. BaseConnector.close is now a normal function as opposed to coroutine in version 1.x
3. BaseConnector.conn_timeout was moved to ClientSession
ClientResponse.release
The internal implementation was significantly redesigned. It is no longer required to call release on the response
object. When the client fully receives the payload, the underlying connection is automatically returned to the
pool. If the payload is not fully read, the connection is closed.
Client exceptions
The exception hierarchy has been significantly modified. aiohttp now defines only exceptions that cover connection
handling and server response misbehaviors. For developer-specific mistakes, aiohttp uses standard Python exceptions
like ValueError or TypeError.
Reading a response content may raise a ClientPayloadError exception. This exception indicates errors specific to
the payload encoding, such as invalid compressed data, malformed chunked-encoded chunks, or not enough data to
satisfy the content-length header.
All exceptions are moved from aiohttp.errors module to top level aiohttp module.
New hierarchy of exceptions:
• ClientError - Base class for all client specific exceptions
– ClientResponseError - exceptions that could happen after we get response from server
To unify form-data/payload handling a new Payload system was introduced. It provides customized handling for
existing types and an implementation hook for user-defined types.
1. FormData.__call__ does not take an encoding argument anymore, and its return value changes from an iterator or
bytes to a Payload instance. aiohttp provides payload adapters for some standard types like str, bytes, io.IOBase,
StreamReader and DataQueue.
2. A generator is not supported as a data provider anymore; streamer can be used instead. For example, to upload
data from a file:
@aiohttp.streamer
def file_sender(writer, file_name=None):
with open(file_name, 'rb') as f:
chunk = f.read(2**16)
while chunk:
yield from writer.write(chunk)
chunk = f.read(2**16)
Various
1. The encoding parameter is deprecated in ClientSession.request(). Payload encoding is controlled at the payload
level; it is possible to specify an encoding for each payload instance.
2. The version parameter is removed from ClientSession.request(); the client version can be specified in the
ClientSession constructor.
3. aiohttp.MsgType dropped, use aiohttp.WSMsgType instead.
4. ClientResponse.url is an instance of yarl.URL class (url_obj is deprecated)
5. ClientResponse.raise_for_status() raises aiohttp.ClientResponseError exception
6. ClientResponse.json() is strict about the response's content type. If the content type does not match, it raises
an aiohttp.ClientResponseError exception. To disable the content type check you can pass None as the
content_type parameter.
Server
The internal implementation was significantly redesigned to provide better performance and support HTTP pipelining.
ServerHttpProtocol is dropped and its implementation is merged with RequestHandler; a lot of low-level APIs are dropped.
Application
1. The constructor parameter loop is deprecated. The loop gets configured by the application runner: the run_app
function or any of the gunicorn workers.
2. Application.router.add_subapp is dropped, use Application.add_subapp instead
3. Application.finished is dropped, use Application.cleanup instead
1. The GET and POST attributes no longer exist. Use the query attribute instead of GET.
2. Custom chunking size is not supported by WebResponse.chunked; the developer is responsible for actual chunking.
3. Payloads are supported as body, so it is possible to use a client response's content object as the body
parameter for WebResponse.
4. The FileSender API is dropped; it is replaced with the more general FileResponse class.
RequestPayloadError
Reading request’s payload may raise a RequestPayloadError exception. The behavior is similar to ClientPayloadError.
WSGI
WSGI support has been dropped, as well as gunicorn wsgi support. We still provide default and uvloop gunicorn
workers for web.Application
async/await everywhere
The main change is dropping yield from support and using async/await everywhere. Farewell, Python 3.4.
The minimal supported Python version is 3.5.3 now.
Why not 3.5.0? Because 3.5.3 has a crucial change: asyncio.get_event_loop() returns the running loop
instead of default, which may be different, e.g.:
loop = asyncio.new_event_loop()
loop.run_until_complete(f())
Note that asyncio.set_event_loop() was not called, so the default loop is not equal to the one actually executing.
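The 3.5.3 behavior can be checked directly (a sketch using only the stdlib):

```python
import asyncio

async def f():
    # since Python 3.5.3, get_event_loop() inside a coroutine
    # returns the loop that is actually running
    return asyncio.get_event_loop() is asyncio.get_running_loop()

loop = asyncio.new_event_loop()
same = loop.run_until_complete(f())
loop.close()
assert same  # the running loop is returned, not the default one
```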
Application Runners
People constantly asked about the ability to run aiohttp servers together with other asyncio code, but
aiohttp.web.run_app() is a blocking synchronous call.
aiohttp had support for starting the application without run_app, but the API was very low-level and cumbersome.
Now application runners solve the task in a few lines of code; see Application runners for details.
Client Tracing
Another long-awaited feature is tracing the client request life cycle, to figure out when and why a client request
spends time waiting for connection establishment, getting server response headers, and so on.
It is now possible by registering special signal handlers on every request processing stage. Client Tracing
provides more info about the feature.
HTTPS support
Unfortunately asyncio has a bug with checking SSL certificates for non-ASCII site DNS names, e.g.
https://историк.рф or https://雜草工作室.香港.
The bug has been fixed only in the upcoming Python 3.7 (the change requires breaking backward compatibility in
the ssl API).
aiohttp installs a fix for older Python versions (3.5 and 3.6).
A switch to a new major version is a great chance to drop already-deprecated features.
The release dropped a lot; see the Changelog for details.
All removals were already marked as deprecated or related to very low-level implementation details.
If user code did not raise DeprecationWarning, it is most likely compatible with aiohttp 3.0.
Summary
12.5.2 Glossary
resource A concept that reflects the HTTP path; every resource corresponds to a URI.
May have a unique name.
Contains routes for different HTTP methods.
route A part of a resource: the resource's path coupled with an HTTP method.
web-handler An endpoint that returns HTTP response.
websocket A protocol providing full-duplex communication channels over a single TCP connection. The WebSocket
protocol was standardized by the IETF as RFC 6455
yarl A library for operating with URL objects.
https://pypi.python.org/pypi/yarl
12.5.3 Changelog
3.7.4.post0 (2021-03-06)
Misc
• Bumped upper bound of the chardet runtime dependency to allow their v4.0 version stream. #5366
3.7.4 (2021-02-25)
Bugfixes
3.7.3 (2020-11-18)
Features
Bugfixes
• Raise a ClientResponseError instead of an AssertionError for a blank HTTP Reason Phrase. #3532
• Fix web_middlewares.normalize_path_middleware behavior for patch without slash. #3669
• Fix overshadowing of overlapped sub-applications prefixes. #3701
• Make BaseConnector.close() a coroutine and wait until the client closes all connections. Drop deprecated “with
Connector():” syntax. #3736
• Reset the sock_read timeout each time data is received for an aiohttp client response. #3808
• Fixed type annotation for add_view method of UrlDispatcher to accept any subclass of View #3880
• Fixed querying the address families from DNS that the current host supports. #5156
• Change return type of MultipartReader.__aiter__() and BodyPartReader.__aiter__() to AsyncIterator. #5163
• Provide x86 Windows wheels. #5230
Improved Documentation
Misc
• #4102
3.7.2 (2020-10-27)
Bugfixes
• Fixed static files handling for loops without .sendfile() support #5149
3.7.1 (2020-10-25)
Bugfixes
3.7.0 (2020-10-24)
Features
• Response headers are now prepared prior to running on_response_prepare hooks, directly before headers
are sent to the client. #1958
• Add a quote_cookie option to CookieJar, a way to skip quotation wrapping of cookies containing special
characters. #2571
• Call AccessLogger.log with the current exception available from sys.exc_info(). #3557
• web.UrlDispatcher.add_routes and web.Application.add_routes return a list of registered AbstractRoute
instances. AbstractRouteDef.register (and all subclasses) returns a list of registered resources.
#3866
• Added properties of default ClientSession params to ClientSession class so it is available for introspection #3882
• Don’t cancel web handler on peer disconnection, raise OSError on reading/writing instead. #4080
• Implement BaseRequest.get_extra_info() to access a protocol transports’ extra info. #4189
• Added ClientSession.timeout property. #4191
• allow use of SameSite in cookies. #4224
• Use loop.sendfile() instead of custom implementation if available. #4269
• Apply SO_REUSEADDR to test server’s socket. #4393
• Use .raw_host instead of slower .host in client API #4402
• Allow configuring the buffer size of input stream by passing read_bufsize argument. #4453
• Pass tests on Python 3.8 for Windows. #4513
• Add method and url attributes to TraceRequestChunkSentParams and TraceResponseChunkReceivedParams.
#4674
• Add ClientResponse.ok property for checking status code under 400. #4711
• Don’t ceil timeouts that are smaller than 5 seconds. #4850
• TCPSite now listens by default on all interfaces instead of just IPv4 when None is passed in as the host. #4894
• Bump http_parser to 2.9.4 #5070
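Two of these client-facing changes have simple, self-contained semantics. The helpers below are illustrative sketches of the documented behavior; the names is_ok and ceil_timeout are ours, not aiohttp internals:

```python
import math

def is_ok(status: int) -> bool:
    # ClientResponse.ok semantics (#4711): truthy for any status code
    # under 400, i.e. informational, success and redirect codes.
    return status < 400

def ceil_timeout(seconds: float, threshold: float = 5.0) -> float:
    # Timeout-ceiling heuristic (#4850): timeouts are rounded up to whole
    # seconds so many timers can share coarse slots, but timeouts smaller
    # than the threshold are kept exact, since ceiling 0.1 s up to 1 s
    # would distort it by an order of magnitude.
    if seconds < threshold:
        return seconds
    return float(math.ceil(seconds))

print(is_ok(204), is_ok(404))                 # True False
print(ceil_timeout(0.1), ceil_timeout(7.3))   # 0.1 8.0
```

In real code you would simply check resp.ok on a ClientResponse; the sketch only spells out what that property means.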
Bugfixes
Improved Documentation
Misc
3.6.3 (2020-10-12)
Bugfixes
• Pin yarl to <1.6.0 to avoid buggy behavior that will be fixed by the next aiohttp release.
3.6.2 (2019-10-09)
Features
• Made exceptions pickleable. Also changed the repr of some exceptions. #4077
• Use Iterable type hint instead of Sequence for Application middleware parameter. #4125
Bugfixes
• Reset the sock_read timeout each time data is received for an aiohttp.ClientResponse. #3808
• Fix handling of expired cookies so they are not stored in CookieJar. #4063
• Fix misleading message in the string representation of ClientConnectorError; self.ssl == None
means default SSL context, not SSL disabled #4097
• Don’t clobber HTTP status when using FileResponse. #4106
Improved Documentation
Misc
• #4102
3.6.1 (2019-09-19)
Features
Bugfixes
Improved Documentation
3.6.0 (2019-09-06)
Features
• Add support for Named Pipes (Site and Connector) under Windows. This feature requires Proactor event loop
to work. #3629
• Removed Transfer-Encoding: chunked header from websocket responses to be compatible with
more http proxy servers. #3798
• Accept non-GET request for starting websocket handshake on server side. #3980
Bugfixes
• Raise a ClientResponseError instead of an AssertionError for a blank HTTP Reason Phrase. #3532
• Fix an issue where cookies would sometimes not be set during a redirect. #3576
• Change normalize_path_middleware to use 308 redirect instead of 301. This behavior should prevent clients from being unable to use PUT/POST methods on endpoints that are redirected because of a trailing slash. #3579
• Drop the processed task from the all_tasks() list early. It prevents logging about a task with an unhandled exception when the server is used in conjunction with asyncio.run(). #3587
• Signal type annotation changed from Signal[Callable[['TraceConfig'], Awaitable[None]]] to Signal[Callable[ClientSession, SimpleNamespace, ...]]. #3595
• Use sanitized URL as Location header in redirects #3614
• Improve typing annotations for multipart.py along with changes required by mypy in files that reference multipart.py. #3621
• Close session created inside aiohttp.request when unhandled exception occurs #3628
• Cleanup per-chunk data in generic data read. Memory leak fixed. #3631
• Use correct type for add_view and family #3633
• Fix _keepalive field in __slots__ of RequestHandler. #3644
• Properly handle ConnectionResetError, to silence the “Cannot write to closing transport” exception when clients
disconnect uncleanly. #3648
• Suppress pytest warnings due to test_utils classes #3660
• Fix overshadowing of overlapped sub-application prefixes. #3701
• Fixed return type annotation for WSMessage.json() #3720
• Properly expose TooManyRedirects publicly as documented. #3818
• Fix missing brackets for IPv6 in proxy CONNECT request #3841
• Make the signature of aiohttp.test_utils.TestClient.request match asyncio.
ClientSession.request according to the docs #3852
• Use correct style for re-exported imports, makes mypy --strict mode happy. #3868
• Fixed type annotation for add_view method of UrlDispatcher to accept any subclass of View #3880
• Made cython HTTP parser set Reason-Phrase of the response to an empty string if it is missing. #3906
• Add URL to the string representation of ClientResponseError. #3959
• Accept istr keys in LooseHeaders type hints. #3976
• Fixed race conditions in _resolve_host caching and throttling when tracing is enabled. #4013
• For URLs like “unix://localhost/…” set Host HTTP header to “localhost” instead of “localhost:None”. #4039
Improved Documentation
• Modify documentation for Background Tasks to remove deprecated usage of event loop. #3526
• use if __name__ == '__main__': in server examples. #3775
• Update documentation reference to the default access logger. #3783
• Improve documentation for web.BaseRequest.path and web.BaseRequest.raw_path. #3791
• Removed deprecation warning in tracing example docs #3964
3.5.4 (2019-01-12)
Bugfixes
• Fix stream .read() / .readany() / .iter_any() which used to return only partial content in the case of compressed content #3525
3.5.3 (2019-01-10)
Bugfixes
3.5.2 (2019-01-08)
Features
Bugfixes
• Preserve MultipartWriter parts headers on write. Refactor how Payload.headers are handled. Payload instances now always have headers and Content-Type defined. Fix Payload Content-Disposition header reset after initial creation. #3035
• Log suppressed exceptions in GunicornWebWorker. #3464
• Remove wildcard imports. #3468
• Use the same task for app initialization and web server handling in gunicorn workers. This allows using Python 3.7 context variables smoothly. #3471
• Fix handling of chunked+gzipped response when first chunk does not give uncompressed data #3477
• Replace collections.MutableMapping with collections.abc.MutableMapping to avoid a
deprecation warning. #3480
• Payload.size type annotation changed from Optional[float] to Optional[int]. #3484
• Ignore done tasks when cancelling pending activities on web.run_app finalization. #3497
Improved Documentation
Misc
• #3487
3.5.1 (2018-12-24)
3.5.0 (2018-12-22)
Features
Bugfixes
• Fix forward compatibility with Python 3.8: importing ABCs directly from the collections module will not be
supported anymore. (#3273)
• Keep the query string by normalize_path_middleware. (#3278)
• Fix missing parameter raise_for_status for aiohttp.request() (#3290)
• Bracket IPv6 addresses in the HOST header (#3304)
• Fix default message for server ping and pong frames. (#3308)
• Fix tests/test_connector.py typo and tests/autobahn/server.py duplicate loop def. (#3337)
• Fix false-negative indicator end_of_HTTP_chunk in StreamReader.readchunk function (#3361)
• Release HTTP response before raising status exception (#3364)
• Fix task cancellation when sendfile() syscall is used by static file handling. (#3383)
• Fix stack trace for asyncio.TimeoutError which was not logged, when it is caught in the handler. (#3414)
Improved Documentation
Misc
• #3341, #3351
3.4.4 (2018-09-05)
• Fix installation from sources when compiling toolkit is not available (#3241)
3.4.3 (2018-09-04)
3.4.2 (2018-09-01)
3.4.1 (2018-08-28)
3.4.0 (2018-08-25)
Features
Bugfixes
• Prevent double closing when client connection is released before the last data_received() callback.
(#3031)
• Make redirect with normalize_path_middleware work when using url encoded paths. (#3051)
• Postpone web task creation to connection establishment. (#3052)
• Fix sock_read timeout. (#3053)
• When using a server-request body as the data= argument of a client request, iterate over the content with readany
instead of readline to avoid Line too long errors. (#3054)
• Fix UrlDispatcher has no attribute add_options; add web.options (#3062)
• Correct filename in Content-Disposition with multipart body (#3064)
• Many HTTP proxies have buggy keepalive support. Don’t reuse the connection; close it after processing every response. (#3070)
• Raise 413 “Payload Too Large” rather than ValueError in request.post(). Add a helpful debug message to 413 responses. (#3087)
• Fix StreamResponse equality, now that they are MutableMapping objects. (#3100)
• Fix server request objects comparison (#3116)
• Do not hang on 206 Partial Content response with Content-Encoding: gzip (#3123)
• Fix timeout precondition checkers (#3145)
Improved Documentation
• Add a new FAQ entry that clarifies that you should not reuse response objects in middleware functions. (#3020)
• Add FAQ section “Why is creating a ClientSession outside of an event loop dangerous?” (#3072)
• Fix link to Rambler (#3115)
• Fix TCPSite documentation on the Server Reference page. (#3146)
• Fix documentation build configuration file for Windows. (#3147)
• Remove no longer existing lingering_timeout parameter of Application.make_handler from documentation.
(#3151)
• Mention that app.make_handler is deprecated, recommend to use runners API instead. (#3157)
3.3.2 (2018-06-12)
• Many HTTP proxies have buggy keepalive support. Don’t reuse the connection; close it after processing every response. (#3070)
• Provide vendor source files in tarball (#3076)
3.3.1 (2018-06-05)
3.3.0 (2018-06-01)
Features
Bugfixes
• Don’t reuse a connection with the same URL but different proxy/TLS settings (#2981)
• When parsing the Forwarded header, the optional port number is now preserved. (#3009)
Improved Documentation
Misc
• #3008, #3011
3.2.1 (2018-05-10)
• Don’t reuse a connection with the same URL but different proxy/TLS settings (#2981)
3.2.0 (2018-05-06)
Features
• Raise TooManyRedirects exception when client gets redirected too many times instead of returning last
response. (#2631)
• Extract route definitions into separate web_routedef.py file (#2876)
• Raise an exception on request body reading after sending response. (#2895)
• ClientResponse and RequestInfo now have real_url property, which is request url without fragment part being
stripped (#2925)
• Speed up connector limiting (#2937)
• Added links property for the ClientResponse object (#2948)
• Add request.config_dict for exposing nested applications data. (#2949)
• Speed up HTTP headers serialization, server micro-benchmark runs 5% faster now. (#2957)
• Apply assertions in debug mode only (#2966)
Bugfixes
Improved Documentation
• Drop async_timeout usage from documentation for client API in favor of timeout parameter. (#2865)
• Improve Gunicorn logging documentation (#2921)
• Replace multipart writer .serialize() method with .write() in documentation. (#2965)
Misc
• #2958
3.1.3 (2018-04-12)
3.1.2 (2018-04-05)
• Make LineTooLong exception more detailed about actual data size (#2863)
• Call on_chunk_sent when write_eof takes as a param the last chunk (#2909)
3.1.1 (2018-03-27)
• Support asynchronous iterators (and asynchronous generators as well) in both client and server API as request
/ response BODY payloads. (#2802)
3.1.0 (2018-03-21)
    from aiohttp import web

    routes = web.RouteTableDef()

    @routes.get('/')
    async def hello(request):
        return web.Response(text="Hello, world")

    app = web.Application()
    app.add_routes(routes)

The same registration can be done without the decorator:

    app.add_routes([web.get('/', hello)])

Both ways are equivalent; choose whichever fits your code style.
There are also many minor features, bug fixes, and documentation updates; see below.
Features
Bugfixes
Improved Documentation
• Document behavior when cchardet detects encodings that are unknown to Python. (#2732)
• Add diagrams for tracing request life style. (#2748)
• Drop removed functionality for passing StreamReader as data at client side. (#2793)
3.0.9 (2018-03-14)
3.0.8 (2018-03-12)
3.0.7 (2018-03-08)
3.0.6 (2018-03-05)
3.0.5 (2018-02-27)
3.0.4 (2018-02-26)
3.0.3 (2018-02-25)
• Relax the attrs dependency to the minimal actually supported version 17.0.3. The change avoids version conflicts with currently existing test tools.
3.0.2 (2018-02-23)
Security Fix
• Prevent Windows absolute URLs in static files. Paths like /static/D:\path and /static/\\hostname\drive\path are forbidden.
3.0.1
3.0.0 (2018-02-12)
Features
Bugfixes
• Correctly process upgrade requests from server to HTTP/2. aiohttp does not support HTTP/2 yet; the protocol is not upgraded, but the response is handled correctly. (#2277)
• Fix ClientConnectorSSLError and ClientProxyConnectionError for proxy connector (#2408)
• Fix connector to convert OSError to ClientConnectorError (#2423)
• Fix connection attempts for multiple DNS hosts (#2424)
• Fix writing to closed transport by raising asyncio.CancelledError (#2499)
• Fix warning in ClientSession.__del__ by stopping to try to close it. (#2523)
• Fixed race-condition for iterating addresses from the DNSCache. (#2620)
• Fix default value of access_log_format argument in web.run_app (#2649)
• Freeze sub-application on adding to parent app (#2656)
• Do percent encoding for .url_for() parameters (#2668)
• Correctly process request start time and multiple request/response headers in access log extra (#2641)
Improved Documentation
• Improve tutorial docs, using literalinclude to link to the actual files. (#2396)
• Small improvement docs: better example for file uploads. (#2401)
• Rename from_env to trust_env in client reference. (#2451)
• Fixed a typo in the Proxy Support section where the trust_env parameter was passed to session.get(“http://python.org”, trust_env=True) instead of the aiohttp.ClientSession constructor, as follows: aiohttp.ClientSession(trust_env=True). (#2688)
• Fix issue with unittest example not compiling in testing docs. (#2717)
Misc
• #2552
2.3.10 (2018-02-02)
• Fix 100% CPU usage on HTTP GET and websocket connection just after it (#1955)
• Patch broken ssl.match_hostname() on Python<3.7 (#2674)
2.3.9 (2018-01-16)
2.3.8 (2018-01-15)
• Do not use yarl.unquote internal function in aiohttp. Fix incorrectly unquoted path part in URL dispatcher
(#2662)
• Fix compatibility with yarl==1.0.0 (#2662)
2.3.7 (2017-12-27)
2.3.6 (2017-12-04)
2.3.5 (2017-11-30)
2.3.4 (2017-11-29)
• Make request.app point to proper application instance when using nested applications (with middlewares).
(#2550)
• Change base class of ClientConnectorSSLError to ClientSSLError from ClientConnectorError. (#2563)
• Return client connection back to free pool on error in connector.connect(). (#2567)
2.3.3 (2017-11-17)
• Having a ; in Response content type does not assume it contains a charset anymore. (#2197)
• Use getattr(asyncio, ‘async’) for keeping compatibility with Python 3.7. (#2476)
• Ignore NotImplementedError raised by set_child_watcher from uvloop. (#2491)
• Fix warning in ClientSession.__del__ by stopping to try to close it. (#2523)
• Fixed typos in the Third-party libraries page and added async-v20 to the list (#2510)
2.3.2 (2017-11-01)
• Fix ValueError for AF_INET6 sockets when a preexisting INET6 socket is passed to the aiohttp.web.run_app function. (#2431)
• _SessionRequestContextManager closes the session properly now. (#2441)
• Rename from_env to trust_env in client reference. (#2451)
2.3.1 (2017-10-18)
2.3.0 (2017-10-18)
Features
• Fix parsing the Forwarded header: commas and semicolons are allowed inside quoted-strings; empty forwarded-pairs (as in for=_1;;by=_2) are allowed; non-standard parameters are allowed (although this alone could be easily done in the previous parser). (#2173)
• Don’t require the ssl module to run. aiohttp does not require SSL to function; the code paths involved with SSL will only be hit upon SSL usage. Raise RuntimeError if the HTTPS protocol is required but the ssl module is not present. (#2221)
• Accept coroutine fixtures in pytest plugin (#2223)
• Call shutdown_asyncgens before event loop closing on Python 3.6. (#2227)
• Speed up Signals when there are no receivers (#2229)
• Raise InvalidURL instead of ValueError on fetches with invalid URL. (#2241)
• Move DummyCookieJar into cookiejar.py (#2242)
• run_app: Make print=None disable printing (#2260)
• Support brotli encoding (generic-purpose lossless compression algorithm) (#2270)
• Add server support for WebSockets Per-Message Deflate. Add client option to add deflate compress header
in WebSockets request header. If calling ClientSession.ws_connect() with compress=15 the client will support
deflate compress negotiation. (#2273)
• Support verify_ssl, fingerprint, ssl_context and proxy_headers by client.ws_connect. (#2292)
• Added aiohttp.ClientConnectorSSLError, raised when a connection fails due to ssl.SSLError (#2294)
• aiohttp.web.Application.make_handler support access_log_class (#2315)
• Build HTTP parser extension in non-strict mode by default. (#2332)
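The compress=15 value passed to ClientSession.ws_connect() is the LZ77 window size advertised in the permessage-deflate extension offer defined by RFC 7692. The helper below is a hypothetical illustration of what that offer looks like on the wire, not aiohttp’s actual header-building code:

```python
def permessage_deflate_offer(window_bits: int = 15) -> str:
    """Illustrative sketch: build the Sec-WebSocket-Extensions offer value
    corresponding to ws_connect(..., compress=window_bits).  RFC 7692
    allows window sizes of 8-15, but zlib effectively supports 9-15,
    so we restrict to that range here."""
    if not 9 <= window_bits <= 15:
        raise ValueError("window bits must be in 9..15")
    return f"permessage-deflate; client_max_window_bits={window_bits}"

print(permessage_deflate_offer())  # permessage-deflate; client_max_window_bits=15
```

In practice you never build this header yourself; passing compress=15 to ws_connect lets the client negotiate deflate compression with the server.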
Bugfixes
• Raise OSError on async DNS lookup if resolved domain is an alias for another one, which does not have an A
or CNAME record. (#2231)
• Fix incorrect warning in StreamReader. (#2251)
• Properly clone state of web request (#2284)
• Fix C HTTP parser for cases when status line is split into different TCP packets. (#2311)
• Fix web.FileResponse overriding user supplied Content-Type (#2317)
Improved Documentation
• Add a note about possible performance degradation in await resp.text() if charset was not provided by the Content-Type HTTP header. Pass an explicit encoding to solve it. (#1811)
• Drop disqus widget from documentation pages. (#2018)
• Add a graceful shutdown section to the client usage documentation. (#2039)
• Document connector_owner parameter. (#2072)
• Update the doc of web.Application (#2081)
• Fix mistake about access log disabling. (#2085)
• Add example usage of on_startup and on_shutdown signals by creating and disposing an aiopg connection
engine. (#2131)
• Document encoded=True for yarl.URL, it disables all yarl transformations. (#2198)
• Document that all app’s middleware factories are run for every request. (#2225)
• Reflect the fact that default resolver is threaded one starting from aiohttp 1.1 (#2228)
Misc
• #2013, #2014, #2048, #2094, #2149, #2187, #2214, #2225, #2243, #2248
2.2.5 (2017-08-03)
2.2.4 (2017-08-02)
• Fix issue with synchronous session closing when using ClientSession as an asynchronous context manager.
(#2063)
2.2.3 (2017-07-04)
2.2.2 (2017-07-03)
2.2.1 (2017-07-02)
2.2.0 (2017-06-20)
2.1.0 (2017-05-26)
• Added support for experimental async-tokio event loop written in Rust https://github.com/PyO3/tokio
• Write \r\n to the transport before closing after a keepalive timeout; otherwise the client cannot detect socket disconnection. (#1883)
• Only call loop.close in run_app if the user did not supply a loop. Useful for allowing clients to specify their
own cleanup before closing the asyncio loop if they wish to tightly control loop behavior
• Content disposition with semicolon in filename (#917)
• Added request_info to response object and ClientResponseError. (#1733)
• Added history to ClientResponseError. (#1741)
• Allow disabling redirect URL re-quoting (#1474)
• Handle RuntimeError from transport (#1790)
• Dropped “%O” in access logger (#1673)
• Added args and kwargs to unittest_run_loop. Useful with other decorators, for example @patch. (#1803)
• Added iter_chunks to response.content object. (#1805)
• Avoid creating TimerContext when there is no timeout to allow compatibility with Tornado. (#1817) (#1180)
• Add proxy_from_env to ClientRequest to read from environment variables. (#1791)
• Add DummyCookieJar helper. (#1830)
• Fix assertion errors in Python 3.4 from noop helper. (#1847)
• Do not unquote + in match_info values (#1816)
• Use Forwarded, X-Forwarded-Scheme and X-Forwarded-Host for better scheme and host resolution. (#1134)
• Fix sub-application middlewares resolution order (#1853)
• Fix applications comparison (#1866)
• Fix static location in index when prefix is used (#1662)
• Make test server more reliable (#1896)
• Extend list of web exceptions, add HTTPUnprocessableEntity, HTTPFailedDependency, HTTPInsufficientStorage status codes (#1920)
2.0.7 (2017-04-12)
2.0.6 (2017-04-04)
2.0.5 (2017-03-29)
2.0.4 (2017-03-27)
2.0.3 (2017-03-24)
2.0.2 (2017-03-21)
2.0.1 (2017-03-21)
2.0.0 (2017-03-20)
2.0.0rc1 (2017-03-15)
1.3.5 (2017-03-16)
1.3.4 (2017-03-14)
1.3.3 (2017-02-19)
1.3.2 (2017-02-16)
1.3.1 (2017-02-09)
1.3.0 (2017-02-08)
• Multipart writer validates the data on append instead of on a request send (#920)
• Multipart reader accepts multipart messages with or without their epilogue to consistently handle valid and
legacy behaviors (#1526) (#1581)
• Separate read + connect + request timeouts (#1523)
• Do not swallow Upgrade header (#1587)
• Fix polls demo run application (#1487)
• Ignore unknown 1XX status codes in client (#1353)
• Fix sub-Multipart messages missing their headers on serialization (#1525)
• Do not use readline when reading the content of a part in the multipart reader (#1535)
1.2.0 (2016-12-17)
• Extract BaseRequest from web.Request, introduce web.Server (former RequestHandlerFactory), introduce new
low-level web server which is not coupled with web.Application and routing (#1362)
• Make TestServer.make_url compatible with yarl.URL (#1389)
• Implement range requests for static files (#1382)
• Support task attribute for StreamResponse (#1410)
• Drop TestClient.app property, use TestClient.server.app instead (BACKWARD INCOMPATIBLE)
• Drop TestClient.handler property, use TestClient.server.handler instead (BACKWARD INCOMPATIBLE)
• TestClient.server property returns a test server instance, was asyncio.AbstractServer (BACKWARD INCOMPATIBLE)
• Follow gunicorn’s signal semantics in Gunicorn[UVLoop]WebWorker (#1201)
• Call worker_int and worker_abort callbacks in Gunicorn[UVLoop]WebWorker (#1202)
• Add functional tests for client proxy (#1218)
• Fix bugs with client proxy target path and proxy host with port (#1413)
• Fix bugs related to the use of unicode hostnames (#1444)
• Preserve cookie quoting/escaping (#1453)
• FileSender will send gzipped response if gzip version available (#1426)
• Don’t override Content-Length header in web.Response if no body was set (#1400)
• Introduce router.post_init() for solving (#1373)
• Fix raising an error on multiple calls of TimeService.stop()
• Allow to raise web exceptions on router resolving stage (#1460)
• Add a warning for session creation outside of coroutine (#1468)
• Avoid a race when application might start accepting incoming requests but startup signals are not processed yet
e98e8c6
• Raise a RuntimeError when trying to change the status of the HTTP response after the headers have been sent
(#1480)
• Fix bug with https proxy acquired cleanup (#1340)
• Use UTF-8 as the default encoding for multipart text parts (#1484)
1.1.6 (2016-11-28)
• Fix BodyPartReader.read_chunk bug where it returned zero bytes before EOF (#1428)
1.1.5 (2016-11-16)
1.1.4 (2016-11-14)
1.1.3 (2016-11-10)
1.1.2 (2016-11-08)
1.1.1 (2016-11-04)
1.1.0 (2016-11-03)
1.0.5 (2016-10-11)
• Fix StreamReader._read_nowait to return all available data up to the requested amount (#1297)
1.0.4 (2016-09-22)
1.0.2 (2016-09-22)
1.0.1 (2016-09-16)
• Restore aiohttp.web.MsgType alias for aiohttp.WSMsgType for sake of backward compatibility (#1178)
• Tune alabaster schema.
• Use text/html content type for displaying index pages by static file handler.
• Fix AssertionError in static file handling (#1177)
• Fix access log formats %O and %b for static file handling
• Remove debug setting of GunicornWorker, use app.debug to control its debug-mode instead
1.0.0 (2016-09-16)
• Change default size for client session’s connection pool from unlimited to 20 (#977)
• Add IE support for cookie deletion. (#994)
• Remove deprecated WebSocketResponse.wait_closed method (BACKWARD INCOMPATIBLE)
• Remove deprecated force parameter for ClientResponse.close method (BACKWARD INCOMPATIBLE)
• Avoid using of mutable CIMultiDict kw param in make_mocked_request (#997)
• Make WebSocketResponse.close a little bit faster by avoiding new task creation just for timeout measurement
• Add proxy and proxy_auth params to client.get() and family, deprecate ProxyConnector (#998)
• Add support for websocket send_json and receive_json, synchronize server and client API for websockets (#984)
• Implement router shortcuts for most useful HTTP methods: use app.router.add_get(), app.router.add_post() etc. instead of app.router.add_route() (#986)
• Support SSL connections for gunicorn worker (#1003)
• Move obsolete examples to legacy folder
• Switch to multidict 2.0 and title-cased strings (#1015)
• {FOO}e logger format is case-sensitive now
• Fix logger report for unix socket 8e8469b
• Rename aiohttp.websocket to aiohttp._ws_impl
• Rename aiohttp.MsgType to aiohttp.WSMsgType
• Introduce aiohttp.WSMessage officially
• Rename Message -> WSMessage
• Remove deprecated decode param from resp.read(decode=True)
• Use 5min default client timeout (#1028)
0.22.5 (08-02-2016)
0.22.3 (07-26-2016)
0.22.2 (07-23-2016)
0.22.1 (07-16-2016)
• Large cookie expiration/max-age no longer breaks the event loop (fixes #967)
0.22.0 (07-15-2016)
0.21.6 (05-05-2016)
0.21.5 (03-22-2016)
0.21.4 (03-12-2016)
• Fix ResourceAdapter: don’t add method to allowed if resource does not match (#826)
• Fix Resource: append found method to returned allowed methods
0.21.2 (02-16-2016)
• Fix a regression: support for handling ~/path in static file routes was broken (#782)
0.21.1 (02-10-2016)
0.21.0 (02-04-2016)
• Add version parameter to ClientSession constructor, deprecate it for session.request() and family (#736)
• Enable access log by default (#735)
• Deprecate app.router.register_route() (the method was not documented intentionally BTW).
• Deprecate app.router.named_routes() in favor of app.router.named_resources()
• route.add_static accepts pathlib.Path now (#743)
• Add command line support: $ python -m aiohttp.web package.main (#740)
• FAQ section was added to docs. Enjoy and feel free to contribute new topics
• Add async context manager support to ClientSession
• Document ClientResponse’s host, method, url properties
• Use CORK/NODELAY in client API (#748)
• ClientSession.close and Connector.close are coroutines now
• Close client connection on exception in ClientResponse.release()
• Allow reading multipart parts without Content-Length specified (#750)
• Add support for unix domain sockets to gunicorn worker (#470)
• Add test for default Expect handler (#601)
• Add the first demo project
• Rename loader keyword argument in web.Request.json method. (#646)
• Add local socket binding for TCPConnector (#678)
0.20.2 (01-07-2016)
0.20.1 (12-30-2015)
• Raise RuntimeError if the Timeout context manager was used outside of a task context.
• Add number of bytes to stream.read_nowait (#700)
• Use X-FORWARDED-PROTO for wsgi.url_scheme when available
0.20.0 (12-28-2015)
0.19.0 (11-25-2015)
0.18.4 (13-11-2015)
• Relax rule for router names again by adding dash to allowed characters: they may contain identifiers, dashes, dots and colons
0.18.3 (25-10-2015)
0.18.2 (22-10-2015)
0.18.1 (20-10-2015)
• Relax rule for router names: they may contain dots and colons starting from now
0.18.0 (19-10-2015)
0.17.4 (09-29-2015)
0.17.3 (08-28-2015)
0.17.2 (08-11-2015)
0.17.1 (08-10-2015)
0.17.0 (08-04-2015)
0.16.6 (07-15-2015)
0.16.5 (06-13-2015)
0.16.4 (06-13-2015)
0.16.3 (05-30-2015)
0.16.2 (05-27-2015)
• Update Python version required for __del__ usage: it’s actually 3.4.1 instead of 3.4.0
• Add check for presence of the loop.is_closed() method before calling it (#378)
0.16.1 (05-27-2015)
0.16.0 (05-26-2015)
• ProxyConnector does not support keep-alive requests by default starting from now (#368)
• Add connector.force_close property
• Add ws_connect to ClientSession (#374)
• Support optional chunk_size parameter in router.add_static()
0.15.3 (04-22-2015)
0.15.2 (04-19-2015)
0.15.1 (03-31-2015)
0.15.0 (03-27-2015)
0.14.4 (01-29-2015)
• Fix issue with error during constructing of url with regex parts (#264)
0.14.3 (01-28-2015)
0.14.2 (01-23-2015)
0.14.1 (01-15-2015)
0.13.1 (12-31-2014)
0.13.0 (12-29-2014)
0.12.0 (12-12-2014)
0.11.0 (11-29-2014)
0.10.2 (11-19-2014)
0.10.1 (11-17-2014)
• aiohttp.web.HTTPException and descendants now fill the response body with a string like 404: NotFound
• Fix multidict __iter__: the method should iterate over keys, not (key, value) pairs.
0.10.0 (11-13-2014)
0.9.3 (10-30-2014)
0.9.2 (10-16-2014)
0.9.1 (08-30-2014)
• Added MultiDict support for client request params and data (#114).
• Fixed parameter type for IncompleteRead exception (#118).
• Strictly require ASCII header names and values (#137)
• Keep port in ProxyConnector (#128).
• Python 3.4.1 compatibility (#131).
0.9.0 (07-08-2014)
0.8.4 (07-04-2014)
0.8.3 (07-03-2014)
0.8.2 (06-22-2014)
0.8.1 (06-18-2014)
0.8.0 (06-06-2014)
• Exceptions refactoring
0.7.3 (05-20-2014)
0.7.2 (05-14-2014)
0.7.1 (04-28-2014)
0.7.0 (04-16-2014)
0.6.5 (03-29-2014)
0.6.4 (02-27-2014)
• Log content-length missing warning only for put and post requests.
0.6.3 (02-27-2014)
0.6.2 (02-18-2014)
0.6.1 (02-17-2014)
0.6.0 (02-12-2014)
0.5.0 (01-29-2014)
0.4.4 (11-15-2013)
• Resolve only AF_INET family, because it is not clear how to pass extra info to asyncio.
0.4.3 (11-15-2013)
0.4.2 (11-14-2013)
0.4.1 (11-12-2013)
0.4 (11-06-2013)
0.3 (11-04-2013)
• Added PortMapperWorker
• Added HttpClient
• Added TCP connection timeout to HTTP client
• Better client connection errors handling
• Gracefully handle process exit
0.2
• Fix packaging
The list of aiohttp users: libraries, big projects, and web sites.
Please don’t hesitate to add your awesome project to the list by making a Pull Request on GitHub.
If you like the project, please go to GitHub and press the Star button!
aiohttp is not only a library for making HTTP requests and creating web servers.
It is the foundation for libraries built on top of aiohttp.
This page is a list of these tools.
Please feel free to add your open-sourced library, if it’s not listed yet, by making a Pull Request to https://github.com/aio-libs/aiohttp/
• Why might you want to include your awesome library in the list?
• Because the list increases your library’s visibility; people will have an easy way to find it.
Officially supported
This list contains libraries which are supported by aio-libs team and located on https://github.com/aio-libs
aiohttp extensions
Database drivers
Other tools
These libraries are not part of aio-libs, but they have proven to be very well written and are highly recommended for usage.
• uvloop Ultra fast implementation of asyncio event loop on top of libuv.
We highly recommend using it instead of the standard asyncio loop.
Database drivers
• asyncpg Another PostgreSQL async driver. It’s much faster than aiopg, but it is not a drop-in replacement: the API is different. Please take a look at it anyway; the driver is incredibly fast.
Others
The list of libraries which exist but are not listed in the former categories.
They may be perfect or not; we don’t know.
Please add your library reference here first, and after some time ask to raise its status.
• aiohttp-cache A cache system for aiohttp server.
• aiocache Caching for asyncio with multiple backends (framework agnostic)
• gain Web crawling framework based on asyncio for everyone.
• aiohttp-swagger Swagger API Documentation builder for aiohttp server.
• aiohttp-swaggerify Library to automatically generate swagger2.0 definition for aiohttp endpoints.
• aiohttp-validate Simple library that helps you validate your API endpoints requests/responses with json schema.
• aiohttp-pydantic An aiohttp.View to validate the HTTP request’s body, query-string, and headers regarding
function annotations and generate Open API doc. Python 3.8+ required.
• raven-aiohttp An aiohttp transport for raven-python (Sentry client).
• webargs A friendly library for parsing HTTP request arguments, with built-in support for popular web frameworks, including Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp.
• aioauth-client OAuth client for aiohttp.
• aiohttpretty A simple asyncio compatible httpretty mock using aiohttp.
• aioresponses a helper for mock/fake web requests in python aiohttp package.
• aiohttp-transmute A transmute implementation for aiohttp.
• aiohttp_apiset Package to build routes using swagger specification.
• aiohttp-login Registration and authorization (including social) for aiohttp applications.
• aiohttp_utils Handy utilities for building aiohttp.web applications.
• aiohttpproxy Simple aiohttp HTTP proxy.
• aiohttp_traversal Traversal based router for aiohttp.web.
• aiohttp_autoreload Makes aiohttp server auto-reload on source code change.
• gidgethub An async GitHub API library for Python.
• aiohttp_jrpc aiohttp JSON-RPC service.
• fbemissary A bot framework for the Facebook Messenger platform, built on asyncio and aiohttp.
• aioslacker slacker wrapper for asyncio.
• aioreloader Port of tornado reloader to asyncio.
• aiohttp_babel Babel localization support for aiohttp.
• python-mocket a socket mock framework - for all kinds of socket animals, web-clients included.
aiohttp is used to build useful libraries on top of it, and there's a page dedicated to listing them: Third-Party libraries.
There are also projects that leverage the power of aiohttp to provide end-user tools, like command-line programs or software with full user interfaces.
This page aims to list those projects. If you are using aiohttp in your software and it plays a central role, you can add it to this list.
You can also add a Built with aiohttp link somewhere in your project, pointing to https://github.com/aio-libs/aiohttp.
• Molotov Load testing tool.
• Arsenic Async WebDriver.
• Home Assistant Home Automation Platform.
• Backend.AI Code execution API service.
• doh-proxy DNS Over HTTPS Proxy.
• Mariner Command-line torrent searcher.
• DEEPaaS API REST API for Machine learning, Deep learning and artificial intelligence applications.
12.7 Contributing
To make a clone of the GitHub repo: open the link and press the "Fork" button in the upper-right menu of the web page.
We hope everybody knows how to work with git and GitHub nowadays :)
The workflow is pretty straightforward:
0. Make sure you are reading the latest version of this document. It can be found in the GitHub repo in the docs subdirectory.
1. Clone the GitHub repo using the --recurse-submodules argument.
2. Set up your machine with the required dev environment.
3. Make a change.
4. Make sure all tests pass.
5. Add a file to the CHANGES folder (see Changelog update for how).
6. Commit the changes to your own aiohttp clone.
7. Make a pull request from the GitHub page of your clone against the master branch.
8. Optionally make backport Pull Request(s) to land a bug fix in released aiohttp versions.
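As a rough sketch, the commit-side of the workflow (steps 3 and 6) boils down to standard git usage. The snippet below only simulates it in a throwaway local repository; in practice you would work inside a clone of your own aiohttp fork, and the fork/clone/PR steps need a real GitHub account:

```shell
# Illustrative sketch only: steps 3 and 6, simulated in a scratch repo.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo
cd demo
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"          # placeholder identity
# Step 3: make a change
echo "a demo change" > demo.txt
# Step 6: commit the change to your own clone
git add demo.txt
git commit -q -m "Describe the change here"
git log --oneline
```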
Note: The project uses a Squash-and-Merge strategy for the GitHub Merge button.
Basically this means there is no need to rebase a Pull Request against the master branch. Just git merge master into your working copy (a fork) if needed. The Pull Request is automatically squashed into a single commit once it is accepted.
Note: GitHub issue and pull request threads are automatically locked when there has not been any recent activity for one year. Please open a new issue for related bugs.
If you feel there are important points in the locked discussions, please include those excerpts in the new issue.
If you like to use virtualenv:
$ cd aiohttp
$ virtualenv --python=`which python3` venv
$ . venv/bin/activate
For the builtin venv module:
$ cd aiohttp
$ python3 -m venv venv
$ . venv/bin/activate
For virtualenvwrapper:
$ cd aiohttp
$ mkvirtualenv --python=`which python3` aiohttp
There are other tools like pyvenv, but you know the rule of thumb now: create a python3 virtual environment and activate it.
After that, please install the libraries required for development:
Note: For now, the development tooling depends on make and assumes a Unix OS. If you wish to contribute to aiohttp from a Windows machine, the easiest way is probably to configure WSL so you can use the same instructions. If that is not possible for you or if it doesn't work, please contact us so we can find a solution together.
Warning: If you plan to use temporary print(), pdb or ipdb within the test suite, execute it with -s:
$ pytest tests -s
The project uses the black and isort formatters to keep the source code style consistent. Please run make fmt after every change before starting tests.
$ make fmt
After all the preconditions are met, you can run the tests by typing the following command:
$ make test
The command will first run the linters (sorry, we don't accept pull requests with pyflakes, black, isort, or mypy errors).
If linting succeeds, the tests will be run.
Please take a look at the produced output.
Any extra output (print statements and so on) should be removed.
We are trying hard to have good test coverage; please don’t make it worse.
Use:
$ make cov-dev
to run the test suite and collect coverage information. Once the command has finished, check your coverage at the file that appears in the last line of the output: open file:///.../aiohttp/htmlcov/index.html
Please go to the link and make sure that your code change is covered.
The project uses codecov.io for storing coverage results. Visit https://codecov.io/gh/aio-libs/aiohttp to look at the coverage of the master branch, history, pull requests, etc.
The browser extension https://docs.codecov.io/docs/browser-extension is highly recommended for analyzing coverage right in the Files Changed tab on the GitHub Pull Request review page.
12.7.6 Documentation
$ make doc
Once it finishes, it will output the index html page: open file:///.../aiohttp/docs/_build/html/index.html.
Go to the link and make sure your doc changes look good.
We use pyenchant and sphinxcontrib-spelling to run a spell checker on the documentation:
$ make doc-spelling
The CHANGES.rst file is managed using the towncrier tool, and all non-trivial changes must be accompanied by a news entry.
To add an entry to the news file, first you need to have created an issue describing the change you want to make. A
Pull Request itself may function as such, but it is preferred to have a dedicated issue (for example, in case the PR ends
up rejected due to code quality reasons).
Once you have an issue or pull request, take its number and create a file inside the CHANGES/ directory named after that issue number, with an extension of .removal, .feature, .bugfix, or .doc. Thus if your issue or PR number is 1234 and the change fixes a bug, you would create a file CHANGES/1234.bugfix. PRs can span multiple categories by creating multiple files (for instance, if you added a feature and deprecated/removed the old feature at the same time, you would create CHANGES/NNNN.feature and CHANGES/NNNN.removal). Likewise, if a PR touches multiple issues/PRs, you may create a file for each of them with the exact same contents and towncrier will deduplicate them.
The contents of this file are reStructuredText formatted text that will be used as the content of the news file entry. You
do not need to reference the issue or PR numbers here as towncrier will automatically add a reference to all of the
affected issues when rendering the news file.
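For instance (the issue numbers below are made up for illustration), a set of plausible news entries could be created like this, run from the repository root:

```shell
# Hypothetical issue numbers; file names follow the
# <issue number>.<category> convention described above.
mkdir -p CHANGES
# A bug fix for issue 1234:
echo "Fixed a race in connection pooling." > CHANGES/1234.bugfix
# One PR that adds a feature and removes the old one (two categories):
echo "Added frobnication support."          > CHANGES/5678.feature
echo "Removed the legacy frobnication API." > CHANGES/5678.removal
# A PR touching two issues: one file per issue with identical contents,
# which towncrier deduplicates when rendering the news file.
echo "Fixed handling of empty headers." > CHANGES/5679.bugfix
cp CHANGES/5679.bugfix CHANGES/5680.bugfix
ls CHANGES
```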
After finishing all steps, make a GitHub Pull Request with master as the base branch.
12.7.10 Backporting
Contribute!
The easiest way is providing Pull Requests for issues in our bug tracker. But if you have a great idea for improving the library, please make an issue and a Pull Request.
The rules for committers are simple:
1. No wild commits! Everything should go through PRs.
2. Take part in reviews. It's a very important part of a maintainer's activity.
3. Pick up issues created by others, especially if they are simple.
4. Keep the test suite comprehensive. In practice this means leveling up coverage. 97% is not bad, but we wish to have 100% someday. Well, 99% is a good target too.
5. Don't hesitate to improve our docs. Documentation is a very important thing; it's the key to the project's success. The documentation should not only cover our public API but also help newbies start using the project and shed light on non-obvious gotchas.
After a positive answer, an aiohttp committer creates an issue on GitHub with the proposal for nomination. If the proposal collects only positive votes and no strong objections, you'll be a new member of our team.
a
aiohttp, 175
aiohttp.abc, 162
aiohttp.web, 79

I
identity (aiohttp.web.ContentCoding attribute), 143
IDNA, 193
if_modified_since (aiohttp.web.BaseRequest attribute), 110
if_range (aiohttp.web.BaseRequest attribute), 111
if_unmodified_since (aiohttp.web.BaseRequest attribute), 110
INTERNAL_ERROR (aiohttp.WSCloseCode attribute), 176
INVALID_TEXT (aiohttp.WSCloseCode attribute), 175
InvalidURL, 66
is_eof() (in module aiohttp), 174

M
make_handler() (aiohttp.web.Application method), 125
make_mocked_coro() (in module aiohttp.test_utils), 155
make_mocked_request() (in module aiohttp.test_utils), 151
make_url() (aiohttp.test_utils.BaseTestServer method), 153
make_url() (aiohttp.test_utils.TestClient method), 155
MANDATORY_EXTENSION (aiohttp.WSCloseCode attribute), 176
match_info (aiohttp.web.Request attribute), 112

Y
yarl, 194