aiohttp Documentation
Release 3.0.0-a0
aiohttp contributors
1 Key Features
2 Library Installation
3 Getting Started
4 Tutorial
5 Source code
6 Dependencies
7 Communication channels
8 Contributing
11 Table Of Contents
    11.1 Client
    11.2 Server
    11.3 Utilities
    11.4 FAQ
    11.5 Miscellaneous
    11.6 Who uses aiohttp?
    11.7 Contributing
CHAPTER 1
Key Features
• Supports both client and server side of the HTTP protocol.
• Supports both client and server WebSockets out-of-the-box, without the callback hell.
• Provides a web server with middlewares, signals and pluggable routing.
CHAPTER 2
Library Installation

You may want to install the optional cchardet library as a faster replacement for chardet.
For speeding up DNS resolving, the client API may also use aiodns. This option is highly recommended.
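All of these install with pip:

$ pip install aiohttp
$ pip install cchardet
$ pip install aiodns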
CHAPTER 3
Getting Started
Client example:
import aiohttp
import asyncio
import async_timeout
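# The body of the example; a minimal sketch of the coroutines driven below.
async def fetch(session, url):
    async with async_timeout.timeout(10):
        async with session.get(url) as response:
            return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'http://python.org')
        print(html)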
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Server example:
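from aiohttp import web

# a minimal handler for this example (sketch)
async def handle(request):
    name = request.match_info.get('name', 'Anonymous')
    return web.Response(text='Hello, ' + name)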
app = web.Application()
app.router.add_get('/', handle)
app.router.add_get('/{name}', handle)
web.run_app(app)
CHAPTER 4
Tutorial
Polls tutorial
CHAPTER 5
Source code
The project is hosted on GitHub: https://github.com/aio-libs/aiohttp
CHAPTER 6
Dependencies
• Python 3.5.3+
• chardet
• multidict
• async_timeout
• yarl
• Optional cchardet as a faster replacement for chardet. Install it explicitly (see below).
• Optional aiodns for fast DNS resolving. The library is highly recommended.
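Both optional extras install with pip:

$ pip install cchardet aiodns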
CHAPTER 7
Communication channels
• aio-libs google group: https://groups.google.com/forum/#!forum/aio-libs — feel free to post your questions and ideas there.
• gitter chat: https://gitter.im/aio-libs/Lobby
• We also support Stack Overflow; add the aiohttp tag to your question there.
CHAPTER 8
Contributing
Please read the instructions for contributors before making a Pull Request.
CHAPTER 9
Authors and License
The aiohttp package is written mostly by Nikolay Kim and Andrew Svetlov.
It’s Apache 2 licensed and freely available.
Feel free to improve this package and send a pull request to GitHub.
CHAPTER 11
Table Of Contents
11.1 Client
Eager to get started? This page gives a good introduction to the aiohttp client API.
First, make sure that aiohttp is installed and up-to-date.
Let’s get started with some simple examples.
Make a Request

Begin by importing the aiohttp module:

import aiohttp
Now, let’s try to get a web-page. For example let’s get GitHub’s public time-line:
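# a sketch, assuming GitHub's public events endpoint
async with aiohttp.ClientSession() as session:
    async with session.get('https://api.github.com/events') as resp:
        print(resp.status)
        print(await resp.text())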
Now we have a ClientSession called session and a ClientResponse object called resp. We can get all the information we need from the response. The mandatory parameter of the ClientSession.get() coroutine is an HTTP URL.
In order to make an HTTP POST request use ClientSession.post() coroutine:
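# a sketch against httpbin.org
async with session.post('http://httpbin.org/post', data=b'data') as resp:
    print(resp.status)

The other HTTP methods are available as well: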
session.post('http://httpbin.org/post', data=b'data')
session.put('http://httpbin.org/put', data=b'data')
session.delete('http://httpbin.org/delete')
session.head('http://httpbin.org/get')
session.options('http://httpbin.org/get')
session.patch('http://httpbin.org/patch', data=b'data')
Note: Don’t create a session per request. Most likely you need a session per application which performs all requests
altogether.
A session contains a connection pool inside. Connection reuse and keep-alives (both on by default) may improve overall performance.
Passing Parameters In URLs

You often want to send some sort of data in the URL's query string. If you were constructing the URL by hand, this data would be given as key/value pairs in the URL after a question mark, e.g. httpbin.org/get?key=val. aiohttp allows you to provide these arguments as a dict, using the params keyword argument. As an example, if you wanted to pass key1=value1 and key2=value2 to httpbin.org/get, you would use the following code:
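# a sketch; the ordering of the encoded pairs may differ
params = {'key1': 'value1', 'key2': 'value2'}
async with session.get('http://httpbin.org/get',
                       params=params) as resp:
    assert str(resp.url) == \
        'http://httpbin.org/get?key1=value1&key2=value2'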
You can see that the URL has been correctly encoded by printing the URL.
For sending data with multiple values for the same key MultiDict may be used as well.
It is also possible to pass a list of 2 item tuples as parameters, in that case you can specify multiple values for each
key:
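# a sketch
params = [('key', 'value1'), ('key', 'value2')]
async with session.get('http://httpbin.org/get',
                       params=params) as resp:
    print(resp.url)  # http://httpbin.org/get?key=value1&key=value2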
You can also pass str content as the param, but beware: content is not encoded by the library. Note that + is not encoded:
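# a sketch
async with session.get('http://httpbin.org/get',
                       params='key=value+1') as resp:
    assert str(resp.url) == 'http://httpbin.org/get?key=value+1'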
Response Content

We can read the content of the server's response and its status code. Consider the GitHub time-line again:
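# a sketch producing the output below
async with session.get('https://api.github.com/events') as resp:
    print(resp.status)
    print(await resp.text())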
200
'[{"created_at":"2015-06-12T14:06:22Z","public":true,"actor":{...
aiohttp automatically decodes the content from the server. You can specify custom encoding for the text()
method:
await resp.text(encoding='windows-1251')
You can also access the response body as bytes, for non-text requests:
print(await resp.read())
b'[{"created_at":"2015-06-12T14:06:22Z","public":true,"actor":{...
The gzip and deflate transfer-encodings are automatically decoded for you. You can also enable brotli support; just install brotlipy.
JSON Request
By default the session uses Python's standard json module for serialization, but it is possible to use a different serializer. ClientSession accepts a json_serialize parameter:
import ujson

session = aiohttp.ClientSession(json_serialize=ujson.dumps)
Note: the ujson library is faster than the standard json module but slightly incompatible with it.
There’s also a built-in JSON decoder, in case you’re dealing with JSON data:
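# a sketch
async with session.get('https://api.github.com/events') as resp:
    print(await resp.json())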
In case that JSON decoding fails, json() will raise an exception. It is possible to specify custom encoding and
decoder functions for the json() call.
Note: The methods above read the whole response body into memory. If you are planning on reading lots of data, consider using the streaming response method documented below.
Streaming Response Content

While the read(), json() and text() methods are very convenient, you should use them carefully. All these methods load the whole response into memory. For example, if you want to download several gigabyte-sized files, these methods will load all the data into memory. Instead you can use the content attribute. It is an instance of the aiohttp.StreamReader class. The gzip and deflate transfer-encodings are automatically decoded for you:
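# a sketch
async with session.get('https://api.github.com/events') as resp:
    await resp.content.read(10)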
In general, however, you should use a pattern like this to save what is being streamed to a file:
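# a sketch, assuming filename and chunk_size are defined
with open(filename, 'wb') as fd:
    while True:
        chunk = await resp.content.read(chunk_size)
        if not chunk:
            break
        fd.write(chunk)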
It is not possible to use read(), json() and text() after explicit reading from content.
More complicated POST requests

Typically, you want to send some form-encoded data, much like an HTML form. To do this, simply pass a dictionary to the data argument. Your dictionary of data will automatically be form-encoded when the request is made:
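# a sketch producing output like the snippet below
payload = {'key1': 'value1', 'key2': 'value2'}
async with session.post('http://httpbin.org/post',
                        data=payload) as resp:
    print(await resp.text())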
{
...
"form": {
"key2": "value2",
"key1": "value1"
},
...
}
If you want to send data that is not form-encoded, you can do it by passing bytes instead of a dict. The data will be posted directly and the Content-Type header will be set to application/octet-stream by default:
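# a sketch
async with session.post('http://httpbin.org/post',
                        data=b'\x00Binary-data\x00') as resp:
    print(resp.status)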
To upload multipart-encoded files:

url = 'http://httpbin.org/post'
files = {'file': open('report.xls', 'rb')}

await session.post(url, data=files)

You can set the filename and content_type explicitly:

url = 'http://httpbin.org/post'
data = FormData()
data.add_field('file',
               open('report.xls', 'rb'),
               filename='report.xls',
               content_type='application/vnd.ms-excel')

await session.post(url, data=data)
If you pass a file object as the data parameter, aiohttp will stream it to the server automatically. Check StreamReader for supported format information.
See also:
Working with Multipart
Streaming uploads
aiohttp supports multiple types of streaming uploads, which allows you to send large files without reading them
into memory.
As a simple case, simply provide a file-like object for your body:
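# a sketch
with open('massive-body', 'rb') as f:
    await session.post('http://httpbin.org/post', data=f)

Or you can use the aiohttp.streamer decorator: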
@aiohttp.streamer
def file_sender(writer, file_name=None):
with open(file_name, 'rb') as f:
chunk = f.read(2**16)
while chunk:
yield from writer.write(chunk)
chunk = f.read(2**16)
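Then file_sender can be used as a data provider; a sketch, assuming a local huge_file:

async with session.post('http://example.com/path',
                        data=file_sender(file_name='huge_file')) as resp:
    print(await resp.text())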
It is also possible to use a StreamReader object. Let's say we want to upload a file from another request and calculate the file's SHA-1 hash:

import hashlib

async def feed_stream(resp, stream):
    h = hashlib.sha1()
    while True:
        chunk = await resp.content.readany()
        if not chunk:
            break
        h.update(chunk)
        stream.feed_data(chunk)
    return h.hexdigest()

resp = session.get('http://httpbin.org/post')
stream = StreamReader()
loop.create_task(session.post('http://httpbin.org/post', data=stream))

file_hash = await feed_stream(resp, stream)
Because the response content attribute is a StreamReader, you can chain get and post requests together:
r = await session.get('http://python.org')
await session.post('http://httpbin.org/post',
data=r.content)
WebSockets

aiohttp works with client websockets out-of-the-box. Use the ClientSession.ws_connect() coroutine; it accepts a url as a first parameter and returns a ClientWebSocketResponse, whose methods are used to communicate with the websocket server:

session = aiohttp.ClientSession()
async with session.ws_connect('http://example.org/websocket') as ws:
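    # a sketch of the read loop (assumes aiohttp.WSMsgType)
    async for msg in ws:
        if msg.type == aiohttp.WSMsgType.TEXT:
            if msg.data == 'close cmd':
                await ws.close()
                break
            else:
                await ws.send_str(msg.data + '/answer')
        elif msg.type == aiohttp.WSMsgType.CLOSED:
            break
        elif msg.type == aiohttp.WSMsgType.ERROR:
            break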
You must use a single task for reading from the websocket (e.g. await ws.receive() or async for msg in ws:), but you may have multiple writer tasks which can send data asynchronously (e.g. via await ws.send_str('data')).
Timeouts
By default all IO operations have a five-minute timeout. The timeout may be overridden by passing a timeout parameter into ClientSession.get() and family:
async with session.get('https://github.com', timeout=60) as r:
...
The async_timeout module may also be used to wrap a call with an overall timeout:

with async_timeout.timeout(0.001):
    async with session.get('https://github.com') as r:
        await r.text()
Note: The timeout is cumulative; it includes all operations like sending the request, following redirects, response parsing, consuming the response, etc.
Client Session
ClientSession is the heart and the main entry point for all client API operations.
Create the session first, use the instance for performing HTTP requests and initiating WebSocket connections.
The session contains a cookie storage and connection pool, thus cookies and connections are shared between HTTP
requests sent by the same session.
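A typical lifecycle, as a sketch:

async with aiohttp.ClientSession() as session:
    async with session.get('http://httpbin.org/get') as resp:
        print(resp.status)
        print(await resp.text())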
If you need to add HTTP headers to a request, pass them in a dict to the headers parameter.
url = 'http://example.com/image'
payload = b'GIF89a\x01\x00\x01\x00\x00\xff\x00,\x00\x00' \
          b'\x00\x00\x01\x00\x01\x00\x00\x02\x00;'
headers = {'content-type': 'image/gif'}
await session.post(url,
data=payload,
headers=headers)
You can also set default headers for all session requests:
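# a sketch against httpbin.org
headers = {'Authorization': 'Basic bG9naW46cGFzcw=='}
async with aiohttp.ClientSession(headers=headers) as session:
    async with session.get('http://httpbin.org/headers') as r:
        json_body = await r.json()
        assert json_body['headers']['Authorization'] == \
            'Basic bG9naW46cGFzcw=='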
A typical use case is sending a JSON body. You can specify the content type directly as shown above, but it is more convenient to use the special json keyword:
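# a sketch, reusing the url from above
await session.post(url, json={'example': 'text'})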
Custom Cookies
To send your own cookies to the server, you can use the cookies parameter of ClientSession constructor:
url = 'http://httpbin.org/cookies'
cookies = {'cookies_are': 'working'}
async with ClientSession(cookies=cookies) as session:
async with session.get(url) as resp:
assert await resp.json() == {
"cookies": {"cookies_are": "working"}}
Note: httpbin.org/cookies endpoint returns request cookies in JSON-encoded body. To access session
cookies see ClientSession.cookie_jar.
Response Headers and Cookies

We can view the server's response headers using the headers attribute:

>>> resp.headers
{'ACCESS-CONTROL-ALLOW-ORIGIN': '*',
'CONTENT-TYPE': 'application/json',
'DATE': 'Tue, 15 Jul 2014 16:49:51 GMT',
'SERVER': 'gunicorn/18.0',
'CONTENT-LENGTH': '331',
'CONNECTION': 'keep-alive'}
The dictionary is special, though: it's made just for HTTP headers. According to RFC 7230, HTTP header names are case-insensitive. It also supports multiple values for the same key, as the HTTP protocol does.
So, we can access the headers using any capitalization we want:
>>> resp.headers['Content-Type']
'application/json'
>>> resp.headers.get('content-type')
'application/json'
All headers are converted from binary data using UTF-8 with the surrogateescape option. That works fine in most cases, but sometimes the unconverted data is needed if a server uses a nonstandard encoding. While these headers are malformed from the RFC 7230 perspective, they may be retrieved via the ClientResponse.raw_headers property:
>>> resp.raw_headers
((b'SERVER', b'nginx'),
(b'DATE', b'Sat, 09 Jan 2016 20:28:40 GMT'),
(b'CONTENT-TYPE', b'text/html; charset=utf-8'),
(b'CONTENT-LENGTH', b'12150'),
(b'CONNECTION', b'keep-alive'))
If a response contains some HTTP Cookies, you can quickly access them:
url = 'http://example.com/some/cookie/setting/url'
async with session.get(url) as resp:
print(resp.cookies['example_cookie_name'])
Note: Response cookies contain only the values that were in the Set-Cookie headers of the last request in the redirection chain. To gather cookies across all redirection requests please use an aiohttp.ClientSession object.
Redirection History
If a request was redirected, it is possible to view previous responses using the history attribute:
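A sketch:

>>> resp = await session.get('http://example.com/some/redirect/')
>>> resp
<ClientResponse(http://example.com/some/other/url/) [200]>
>>> resp.history
(<ClientResponse(http://example.com/some/redirect/) [301]>,)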
Cookie Jar
Cookie Safety
By default ClientSession uses the strict version of aiohttp.CookieJar. RFC 2109 explicitly forbids accepting cookies from URLs with an IP address instead of a DNS name (e.g. http://127.0.0.1:80/cookie).
That is good, but sometimes, for testing, we need to enable support for such cookies. It can be done by passing unsafe=True to the aiohttp.CookieJar constructor:
jar = aiohttp.CookieJar(unsafe=True)
session = aiohttp.ClientSession(cookie_jar=jar)
Sometimes cookie processing is not desirable. For this purpose it's possible to pass an aiohttp.DummyCookieJar instance into the client session:
jar = aiohttp.DummyCookieJar()
session = aiohttp.ClientSession(cookie_jar=jar)
Uploading pre-compressed data

To upload data that is already compressed before passing it to aiohttp, call the request function with the name of the compression algorithm used (usually deflate or gzip) as the value of the Content-Encoding header:
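A sketch using zlib for deflate compression (the helper name is hypothetical):

import zlib

async def post_compressed(session, raw_data):
    # compress the payload up front and declare the encoding
    headers = {'Content-Encoding': 'deflate'}
    data = zlib.compress(raw_data)
    async with session.post('http://httpbin.org/post',
                            data=data, headers=headers) as resp:
        return resp.status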
Client tracing

The execution flow of a specific request can be followed by attaching listener coroutines to the signals provided by a TraceConfig instance. The instance is passed to the ClientSession constructor, and the resulting client triggers the different signals supported by the TraceConfig. By default any ClientSession instance comes with signals disabled. The following snippet shows how the start and end signals of a request flow can be followed:
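The listeners themselves are plain coroutines; a minimal sketch, with parameters as described for on_request_start and on_request_end below:

async def on_request_start(session, trace_config_ctx,
                           method, url, headers):
    print('Starting request')

async def on_request_end(session, trace_config_ctx,
                         method, url, headers, resp):
    print('Ending request')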
trace_config = aiohttp.TraceConfig()
trace_config.on_request_start.append(on_request_start)
trace_config.on_request_end.append(on_request_end)
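The config is then installed into the session via the trace_configs list; a sketch:

async with aiohttp.ClientSession(
        trace_configs=[trace_config]) as session:
    await session.get('http://example.com')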
The trace_configs argument is a list of TraceConfig instances, which allows running signal handlers coming from different TraceConfig instances, each performing its own job at each signal.
All signals take as parameters, first, the ClientSession instance used by the specific request related to that signal and, second, a SimpleNamespace instance called trace_config_ctx. The trace_config_ctx object can be used to share state between the different signals that belong to the same request and the same TraceConfig instance. The state can be seeded per request by passing a trace_request_ctx argument:
session.get('http://example.com/some/redirect/',
trace_request_ctx={'foo': 'bar'})
See also:
Tracing Reference section for more information about the different signals supported.
Connectors
To tweak or change transport layer of requests you can pass a custom connector to ClientSession and family.
For example:
conn = aiohttp.TCPConnector()
session = aiohttp.ClientSession(connector=conn)
Note: By default the session object takes ownership of the connector; among other things, it closes the connections once the session is closed. If you want to share the same connector across different session instances you must pass connector_owner=False to each of them.
See also:
Connectors section for more information about different connector types and configuration options.
To limit the number of simultaneously opened connections you can pass a limit parameter to the connector:

conn = aiohttp.TCPConnector(limit=30)

If you explicitly want the connector to have no limit, pass 0:

conn = aiohttp.TCPConnector(limit=0)
To limit the number of simultaneously opened connections to the same endpoint (a (host, port, is_ssl) triple) you can pass a limit_per_host parameter to the connector:
conn = aiohttp.TCPConnector(limit_per_host=30)
By default TCPConnector comes with the DNS cache table enabled, and resolutions are cached for 10 seconds. You can change the TTL for a resolution, as in the following example:
conn = aiohttp.TCPConnector(ttl_dns_cache=300)
or disabling the use of the DNS cache table, meaning that all requests will end up making a DNS resolution, as the
following example shows:
conn = aiohttp.TCPConnector(use_dns_cache=False)
In order to specify the nameservers used when resolving hostnames, aiodns is required:
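# a sketch
from aiohttp.resolver import AsyncResolver

resolver = AsyncResolver(nameservers=['8.8.8.8', '8.8.4.4'])
conn = aiohttp.TCPConnector(resolver=resolver)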
If your HTTP server uses UNIX domain sockets you can use UnixConnector:
conn = aiohttp.UnixConnector(path='/path/to/socket')
session = aiohttp.ClientSession(connector=conn)
SSL control for TCP sockets

By default aiohttp uses strict checks for the HTTPS protocol. Certificate checks can be relaxed by setting ssl to False:
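r = await session.get('https://example.com', ssl=False)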
If you need to set up custom SSL parameters (for example, to use your own certificate files), you can create an ssl.SSLContext instance and pass it into the proper ClientSession method:
sslcontext = ssl.create_default_context(
cafile='/path/to/ca-bundle.crt')
r = await session.get('https://example.com', ssl=sslcontext)
If you need to verify self-signed certificates, you can do the same thing as the previous example, but add another call
to ssl.SSLContext.load_cert_chain() with the key pair:
sslcontext = ssl.create_default_context(
cafile='/path/to/ca-bundle.crt')
sslcontext.load_cert_chain('/path/to/client/public/device.pem',
                           '/path/to/client/private/device.key')
r = await session.get('https://example.com', ssl=sslcontext)
SSL verification failures raise explicit errors; for aiohttp.ClientConnectorSSLError:

try:
await session.get('https://expired.badssl.com/')
except aiohttp.ClientConnectorSSLError as e:
assert isinstance(e, ssl.SSLError)
And for aiohttp.ClientConnectorCertificateError:
try:
await session.get('https://wrong.host.badssl.com/')
except aiohttp.ClientConnectorCertificateError as e:
assert isinstance(e, ssl.CertificateError)
If you need to handle both SSL-related errors, aiohttp.ClientSSLError may be used:

try:
await session.get('https://expired.badssl.com/')
except aiohttp.ClientSSLError as e:
assert isinstance(e, ssl.SSLError)
try:
await session.get('https://wrong.host.badssl.com/')
except aiohttp.ClientSSLError as e:
assert isinstance(e, ssl.CertificateError)
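Certificates may also be pinned by their SHA-256 fingerprint; a sketch, assuming fp_digest holds the digest of the server's DER-encoded certificate:

async with session.get('https://www.python.org',
                       ssl=aiohttp.Fingerprint(fp_digest)) as resp:
    ...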
Note that this is the fingerprint of the DER-encoded certificate. If you have the certificate in PEM format, you can convert it to DER with, e.g.:
openssl x509 -in crt.pem -inform PEM -outform DER > crt.der
Note: Tip: to convert from a hexadecimal digest to a binary byte-string, you can use binascii.unhexlify().
The ssl parameter may also be passed to a TCPConnector as a default; values passed to ClientSession.get() and friends override the default.
Proxy support

aiohttp supports plain HTTP proxies and HTTP proxies that can be upgraded to HTTPS via the HTTP CONNECT method. Proxy credentials may be given in the proxy URL:

session.get("http://python.org",
            proxy="http://user:[email protected]")
Contrary to the requests library, aiohttp won't read proxy settings from environment variables by default. But you can do so by passing trust_env=True to the aiohttp.ClientSession constructor, which extracts the proxy configuration from the HTTP_PROXY or HTTPS_PROXY environment variables (both are case insensitive):
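# a sketch
async with aiohttp.ClientSession(trust_env=True) as session:
    async with session.get('http://python.org') as resp:
        print(resp.status)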
Proxy credentials are read from the ~/.netrc file if present (see aiohttp.ClientSession for more details).
Graceful Shutdown
When a ClientSession closes at the end of an async with block (or through a direct ClientSession.close() call), the underlying connection remains open due to asyncio internal details. In practice, the underlying connection will close after a short while. However, if the event loop is stopped before the underlying connection is closed, a ResourceWarning: unclosed transport warning is emitted (when warnings are enabled).
To avoid this situation, a small delay must be added before closing the event loop to allow any open underlying
connections to close.
For a ClientSession without SSL, a simple zero-sleep (await asyncio.sleep(0)) will suffice:
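A sketch, assuming a read_website() coroutine such as:

async def read_website():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://example.org/') as resp:
            await resp.read()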
loop = asyncio.get_event_loop()
loop.run_until_complete(read_website())
# Zero-sleep to allow underlying connections to close
loop.run_until_complete(asyncio.sleep(0))
loop.close()
For a ClientSession with SSL, the application must wait a short duration before closing:
...
# Wait 250 ms for the underlying SSL connections to close
loop.run_until_complete(asyncio.sleep(0.250))
loop.close()
Note that the appropriate amount of time to wait will vary from application to application.
All of this will eventually become obsolete when the asyncio internals are changed so that aiohttp itself can wait on the underlying connection to close. Please follow issue #1925 for progress on this.
Client Reference

Client Session
import aiohttp
import asyncio
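# a sketch of the coroutine driven below
async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://python.org') as resp:
            print(await resp.text())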
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
The client session supports the context manager protocol for self closing.
class aiohttp.ClientSession(*, connector=None, loop=None, cookies=None, headers=None,
skip_auto_headers=None, auth=None, json_serialize=json.dumps,
version=aiohttp.HttpVersion11, cookie_jar=None,
read_timeout=None, conn_timeout=None, raise_for_status=False,
connector_owner=True, auto_decompress=True, proxies=None)
The class for creating client sessions and making requests.
Parameters
• connector (aiohttp.connector.BaseConnector) – BaseConnector sub-class
instance to support connection pooling.
• loop – event loop used for processing HTTP requests.
If loop is None the constructor borrows it from connector if specified.
asyncio.get_event_loop() is used for getting default event loop otherwise.
Deprecated since version 2.0.
• cookies (dict) – Cookies to send with the request (optional)
• headers – HTTP Headers to send with every request (optional).
May be either iterable of key-value pairs or Mapping (e.g. dict, CIMultiDict).
• skip_auto_headers – set of headers for which autogeneration should be skipped.
aiohttp autogenerates headers like User-Agent or Content-Type if these headers are not explicitly passed. Using the skip_auto_headers parameter allows skipping that generation. Note that Content-Length autogeneration can't be skipped.
Iterable of str or istr (optional)
• auth (aiohttp.BasicAuth) – an object that represents HTTP Basic Authorization
(optional)
• version – supported HTTP version, HTTP 1.1 by default.
• cookie_jar – Cookie Jar, AbstractCookieJar instance.
By default every session instance has its own private cookie jar for automatic cookie processing, but you may redefine this behavior by providing your own jar implementation.
One example is not processing cookies at all when working in proxy mode.
cookie_jar
The session cookies, AbstractCookieJar instance.
Gives access to cookie jar’s content and modifiers.
A read-only property.
requote_redirect_url
aiohttp re-quotes redirect URLs by default, but some servers require the exact URL from the Location header. To disable re-quoting, set the requote_redirect_url attribute to False.
New in version 2.1.
loop
A loop instance used for session creation.
A read-only property.
coroutine async-with request(method, url, *, params=None, data=None, json=None, headers=None, skip_auto_headers=None, auth=None, allow_redirects=True, max_redirects=10, compress=None, chunked=None, expect100=False, read_until_eof=True, proxy=None, proxy_auth=None, timeout=5*60, ssl=None, verify_ssl=None, fingerprint=None, ssl_context=None, proxy_headers=None)
Performs an asynchronous HTTP request. Returns a response object.
Parameters
• method (str) – HTTP method
• url – Request URL, str or URL.
• params – Mapping, iterable of tuple of key/value pairs or string to be sent as parameters in
the query string of the new request. Ignored for subsequent redirected requests (optional)
Allowed values are:
– collections.abc.Mapping e.g. dict, aiohttp.MultiDict or aiohttp.MultiDictProxy
– collections.abc.Iterable e.g. tuple or list
– str with preferably url-encoded content (Warning: content will not be encoded by aiohttp)
• data – Dictionary, bytes, or file-like object to send in the body of the request (optional)
• json – Any JSON-compatible Python object (optional). The json and data parameters can not be used at the same time.
• headers (dict) – HTTP Headers to send with the request (optional)
• skip_auto_headers – set of headers for which autogeneration should be skipped.
aiohttp autogenerates headers like User-Agent or Content-Type if these headers are not explicitly passed. Using the skip_auto_headers parameter allows skipping that generation.
Iterable of str or istr (optional)
• auth (aiohttp.BasicAuth) – an object that represents HTTP Basic Authorization
(optional)
• allow_redirects (bool) – If set to False, do not follow redirects. True by default
(optional).
• compress (bool) – Set to True if the request has to be compressed with deflate encoding. compress can not be combined with Content-Encoding and Content-Length headers. None by default (optional).
• chunked (int) – Enable chunked transfer encoding. It is up to the developer to decide
how to chunk data streams. If chunking is enabled, aiohttp encodes the provided chunks
in the “Transfer-encoding: chunked” format. If chunked is set, then the Transfer-encoding
and content-length headers are disallowed. None by default (optional).
• expect100 (bool) – Expect 100-continue response from server. False by default
(optional).
• read_until_eof (bool) – Read response until EOF if response does not have
Content-Length header. True by default (optional).
• proxy – Proxy URL, str or URL (optional)
• proxy_auth (aiohttp.BasicAuth) – an object that represents proxy HTTP Basic
Authorization (optional)
• timeout (int) – override the session’s timeout (read_timeout) for IO operations.
• ssl – SSL validation mode. None for default SSL check (ssl.create_default_context() is used), False to skip SSL certificate validation, aiohttp.Fingerprint for fingerprint validation, ssl.SSLContext for custom SSL certificate validation.
Supersedes verify_ssl, ssl_context and fingerprint parameters.
New in version 3.0.
• verify_ssl (bool) – Perform SSL certificate validation for HTTPS requests (enabled
by default). May be disabled to skip validation for sites with invalid certificates.
New in version 2.3.
Deprecated since version 3.0: Use ssl=False
• fingerprint (bytes) – Pass the SHA256 digest of the expected certificate in DER
format to verify that the certificate the server presents matches. Useful for certificate
pinning.
Warning: use of MD5 or SHA1 digests is insecure and removed.
New in version 2.3.
Deprecated since version 3.0: Use ssl=aiohttp.Fingerprint(digest)
• ssl_context (ssl.SSLContext) – ssl context used for processing HTTPS requests
(optional).
ssl_context may be used for configuring certification authority channel, supported SSL
options etc.
New in version 2.3.
Deprecated since version 3.0: Use ssl=ssl_context
• proxy_headers (abc.Mapping) – HTTP headers to send to the proxy if the param-
eter proxy has been provided.
New in version 2.3.
Basic API
While we encourage ClientSession usage, we also provide simple coroutines for making HTTP requests.
The Basic API is good for performing simple HTTP requests without keep-alive, cookies and complex connection machinery like properly configured SSL certificate chains.
import aiohttp
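# a sketch using the module-level request() helper
async def fetch():
    async with aiohttp.request('GET', 'http://python.org/') as resp:
        assert resp.status == 200
        print(await resp.text())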
Connectors
BaseConnector
Endpoints are the same if they have an equal (host, port, is_ssl) triple.
If limit_per_host is None the connector has no limit per host.
Read-only property.
close()
Close all opened connections.
New in version 2.0.
coroutine connect(request)
Get a free connection from the pool or create a new one if the pool has none available.
The call may be paused, if the limit is exhausted, until a used connection returns to the pool.
Parameters request (aiohttp.ClientRequest) – request object which is connection
initiator.
Returns Connection object.
coroutine _create_connection(req)
Abstract method for actual connection establishing, should be overridden in subclasses.
TCPConnector
Parameters
• verify_ssl (bool) – perform SSL certificate validation for HTTPS requests (enabled
by default). May be disabled to skip validation for sites with invalid certificates.
Deprecated since version 2.3: Pass verify_ssl to ClientSession.get() etc.
• fingerprint (bytes) – pass the SHA256 digest of the expected certificate in DER for-
mat to verify that the certificate the server presents matches. Useful for certificate pinning.
Note: use of MD5 or SHA1 digests is insecure and deprecated.
Deprecated since version 2.3: Pass verify_ssl to ClientSession.get() etc.
• use_dns_cache (bool) – use internal cache for DNS lookups, True by default.
Enabling the option may speed up connection establishment a bit but may also introduce side effects.
• ttl_dns_cache (int) – expire DNS cache entries after the given number of seconds; None means cache forever. 10 seconds by default.
In some environments the IP addresses related to a specific HOST can change after a specific time. Use this option to keep the DNS cache updated, refreshing each entry after N seconds.
New in version 2.0.8.
• limit (int) – total number of simultaneous connections. If limit is None the connector has no limit (default: 100).
• limit_per_host (int) – limit simultaneous connections to the same endpoint. Endpoints are the same if they have an equal (host, port, is_ssl) triple. If limit_per_host is 0 the connector has no limit per host (default: 0).
• resolver (aiohttp.abc.AbstractResolver) – custom resolver instance to use, aiohttp.DefaultResolver by default (asynchronous if aiodns>=1.1 is installed).
Custom resolvers allow resolving hostnames differently than the way the host is configured.
The resolver is aiohttp.ThreadedResolver by default; the asynchronous version is pretty robust but might fail in very rare cases.
• family (int) – TCP socket family, both IPv4 and IPv6 by default. For IPv4 only use
socket.AF_INET, for IPv6 only – socket.AF_INET6.
family is 0 by default, that means both IPv4 and IPv6 are accepted. To specify only concrete
version please pass socket.AF_INET or socket.AF_INET6 explicitly.
• ssl_context (ssl.SSLContext) – SSL context used for processing HTTPS requests
(optional).
ssl_context may be used for configuring certification authority channel, supported SSL op-
tions etc.
• local_addr (tuple) – tuple of (local_host, local_port) used to bind socket
locally if specified.
• force_close (bool) – close underlying sockets after connection releasing (optional).
• enable_cleanup_closed (bool) – Some SSL servers do not properly complete the SSL shutdown process, in which case asyncio leaks SSL connections. If this parameter is set to True, aiohttp additionally aborts the underlying transport after 2 seconds. It is off by default.
family
TCP socket family e.g. socket.AF_INET or socket.AF_INET6
Read-only property.
dns_cache
Use quick lookup in internal DNS cache for host names if True.
Read-only bool property.
cached_hosts
The cache of resolved hosts if dns_cache is enabled.
UnixConnector
conn = UnixConnector(path='/path/to/socket')
session = ClientSession(connector=conn)
async with session.get('http://python.org') as resp:
...
Constructor accepts all parameters suitable for BaseConnector plus UNIX-specific one:
Parameters path (str) – Unix socket path
path
Path to UNIX socket, read-only str property.
Connection
class aiohttp.Connection
Encapsulates single connection in connector object.
End user should never create Connection instances manually but get it by BaseConnector.connect()
coroutine.
closed
bool read-only property, True if connection was closed, released or detached.
loop
Event loop used for connection
transport
Connection transport
close()
Close the connection by forcibly closing the underlying socket.
release()
Release connection back to connector.
The underlying socket is not closed; the connection may be reused later if the connection timeout (30 seconds by default) has not expired.
detach()
Detach underlying socket from connection.
Underlying socket is not closed, next close() or release() calls don’t return socket to free pool.
Response object
class aiohttp.ClientResponse
Client response object, returned by ClientSession.request() and family.
The user never creates ClientResponse instances directly but gets them from API calls.
ClientResponse supports async context manager protocol, e.g.:
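# a sketch
async with session.get(url) as resp:
    data = await resp.read()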
After exiting the async with block the response object will be released (see the release() coroutine).
version
Response’s version, HttpVersion instance.
status
HTTP status code of response (int), e.g. 200.
reason
HTTP status reason of response (str), e.g. "OK".
method
Request’s method (str).
url
URL of request (URL).
connection
Connection used for handling response.
content
Payload stream, which contains response’s BODY (StreamReader). It supports various reading meth-
ods depending on the expected format. When chunked transfer encoding is used by the server, allows
retrieving the actual http chunks.
Reading from the stream may raise aiohttp.ClientPayloadError if the response object is closed before all data is received, or in case of any transfer-encoding related error such as malformed chunked encoding or broken compression data.
cookies
HTTP cookies of response (Set-Cookie HTTP header, SimpleCookie).
headers
A case-insensitive multidict proxy with HTTP headers of response, CIMultiDictProxy.
raw_headers
Unmodified HTTP headers of response as unconverted bytes, a sequence of (key, value) pairs.
content_type
Read-only property with content part of Content-Type header.
charset
Read-only property that specifies the encoding for the request’s BODY.
The value is parsed from the Content-Type HTTP header.
Returns str like 'utf-8' or None if no Content-Type header present in HTTP headers or it has no
charset information.
content_disposition
Read-only property that specifies the Content-Disposition HTTP header.
An instance of ContentDisposition, or None if no Content-Disposition header is present in the HTTP headers.
history
A Sequence of ClientResponse objects of preceding requests (earliest request first) if there were
redirects, an empty sequence otherwise.
close()
Close response and underlying connection.
For keep-alive support see release().
coroutine read()
Read the whole response’s body as bytes.
Close underlying connection if data reading gets an error, release connection otherwise.
Raise an aiohttp.ClientResponseError if the data can’t be read.
Return bytes read BODY.
See also:
close(), release().
coroutine release()
It is not required to call release on the response object. When the client fully receives the payload, the underlying connection is automatically returned to the pool. If the payload is not fully read, the connection is closed.
raise_for_status()
Raise an aiohttp.ClientResponseError if the response status is 400 or higher.
Do nothing for success responses (less than 400).
coroutine text(encoding=None)
Read the response's body and return it as a decoded str using the specified encoding parameter.
If encoding is None the content encoding is detected automatically from the Content-Type HTTP header, with the chardet tool used if the header is not provided by the server.
cchardet is used, with a fallback to chardet if cchardet is not available.
Closes the underlying connection if reading the data raises an error; releases the connection otherwise.
Parameters encoding (str) – text encoding used for BODY decoding, or None for encoding autodetection (default).
Note: If response has no charset info in Content-Type HTTP header cchardet / chardet is used for
content encoding autodetection.
It may hurt performance. If page encoding is known passing explicit encoding parameter might help:
await resp.text('ISO-8859-1')
ClientWebSocketResponse
get_extra_info(name, default=None)
Reads extra info from connection’s transport
exception()
Returns exception if any occurs or returns None.
coroutine ping(message=b'')
Send PING to peer.
Parameters message – optional payload of ping message, str (converted to UTF-8 encoded bytes) or bytes.
Changed in version 3.0: The method is converted into a coroutine.
coroutine pong(message=b'')
Send PONG to peer.
Parameters message – optional payload of pong message, str (converted to UTF-8 encoded bytes) or bytes.
Changed in version 3.0: The method is converted into a coroutine.
coroutine send_str(data, compress=None)
Send data to peer as TEXT message.
Parameters
• data (str) – data to send.
• compress (int) – sets specific level of compression for single message, None for not
overriding per-socket setting.
Raises TypeError – if data is not str
Changed in version 3.0: The method is converted into coroutine, compress parameter added.
coroutine send_bytes(data, compress=None)
Send data to peer as BINARY message.
Parameters
• data – data to send.
• compress (int) – sets specific level of compression for single message, None for not
overriding per-socket setting.
Raises TypeError – if data is not bytes, bytearray or memoryview.
Changed in version 3.0: The method is converted into coroutine, compress parameter added.
coroutine send_json(data, compress=None, *, dumps=json.dumps)
Send data to peer as JSON string.
Parameters
• data – data to send.
• compress (int) – sets specific level of compression for single message, None for not
overriding per-socket setting.
• dumps (callable) – any callable that accepts an object and returns a JSON string
(json.dumps() by default).
Raises
• RuntimeError – if connection is not started or closing
Utilities
RequestInfo
class aiohttp.RequestInfo
A namedtuple with request URL and headers from ClientRequest object, available as
ClientResponse.request_info attribute.
url
Requested url, yarl.URL instance.
method
Request HTTP method like 'GET' or 'POST', str.
headers
HTTP headers for request, multidict.CIMultiDict instance.
BasicAuth

class aiohttp.BasicAuth(login, password='', encoding='latin1')
HTTP basic authentication helper.
CookieJar

The number of cookies currently stored in the session's jar may be inspected with len():

len(session.cookie_jar)
jar = aiohttp.DummyCookieJar()
session = aiohttp.ClientSession(cookie_jar=jar)
class aiohttp.Fingerprint(digest)
Fingerprint helper for checking SSL certificates by SHA256 digest.
Parameters digest (bytes) – SHA256 digest for certificate in DER-encoded binary form (see
ssl.SSLSocket.getpeercert()).
To check fingerprint pass the object into ClientSession.get() call, e.g.:
import hashlib
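# a sketch of fingerprint pinning, assuming a DER certificate on disk
with open('certificate.der', 'rb') as f:
    digest = hashlib.sha256(f.read()).digest()

r = await session.get('https://example.com',
                      ssl=aiohttp.Fingerprint(digest))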
Client exceptions
The exception hierarchy was significantly modified in version 2.0. aiohttp defines only exceptions that cover connection handling and server response misbehaviors. For developer-specific mistakes, aiohttp uses standard Python exceptions like ValueError or TypeError.
Reading a response's content may raise a ClientPayloadError exception. This exception indicates errors specific to the payload encoding, such as invalid compressed data, malformed chunked-encoded chunks, or not enough data to satisfy the Content-Length header.
All exceptions are available as members of aiohttp module.
exception aiohttp.ClientError
Base class for all client specific exceptions.
Derived from Exception
exception aiohttp.ClientPayloadError
This exception can only be raised while reading the response payload if one of these errors occurs:
1. invalid compression
2. malformed chunked encoding
3. not enough data to satisfy the Content-Length HTTP header.
Derived from ClientError
exception aiohttp.InvalidURL
URL used for fetching is malformed, e.g. it does not contain host part.
Derived from ClientError and ValueError
url
Invalid URL, yarl.URL instance.
class aiohttp.ContentDisposition
Represent Content-Disposition header
value
A str instance. Value of Content-Disposition header itself, e.g. attachment.
filename
A str instance. Content filename extracted from parameters. May be None.
parameters
Read-only mapping contains all parameters.
11.1. Client 57
aiohttp Documentation, Release 3.0.0-a0
Response errors
exception aiohttp.ClientResponseError
These exceptions can happen after we get a response from the server.
Derived from ClientError
request_info
Instance of RequestInfo object, contains information about request.
code
HTTP status code of response (int), e.g. 200.
message
Message of response (str), e.g. "OK".
headers
Headers in response, a list of pairs.
history
History from the failed response, if available; an empty tuple otherwise.
A tuple of ClientResponse objects used to handle redirection responses.
exception aiohttp.WSServerHandshakeError
Web socket server response error.
Derived from ClientResponseError
exception aiohttp.ContentTypeError
Invalid content type.
Derived from ClientResponseError
New in version 2.3.
Connection errors
exception aiohttp.ClientConnectionError
These exceptions relate to low-level connection problems.
Derived from ClientError
exception aiohttp.ClientOSError
Subset of connection errors that are initiated by an OSError exception.
Derived from ClientConnectionError and OSError
exception aiohttp.ClientConnectorError
Connector related exceptions.
Derived from ClientOSError
exception aiohttp.ClientProxyConnectionError
Derived from ClientConnectorError
exception aiohttp.ServerConnectionError
Derived from ClientConnectionError
exception aiohttp.ClientSSLError
Derived from ClientConnectorError
exception aiohttp.ClientConnectorSSLError
Response ssl error.
Derived from ClientSSLError and ssl.SSLError
exception aiohttp.ClientConnectorCertificateError
Response certificate error.
Derived from ClientSSLError and ssl.CertificateError
exception aiohttp.ServerDisconnectedError
Server disconnected.
Derived from ServerConnectionError
message
Partially parsed HTTP message (optional).
exception aiohttp.ServerTimeoutError
Server operation timeout: read timeout, etc.
Derived from ServerConnectionError and asyncio.TimeoutError
exception aiohttp.ServerFingerprintMismatch
Server fingerprint mismatch.
Derived from ServerConnectionError
Hierarchy of exceptions
• ClientError
  – ClientResponseError
    * ContentTypeError
    * WSServerHandshakeError
    * ClientHttpProxyError
  – ClientConnectionError
    * ClientOSError
      · ClientConnectorError
        · ClientSSLError
          · ClientConnectorCertificateError
          · ClientConnectorSSLError
        · ClientProxyConnectionError
    * ServerConnectionError
      · ServerDisconnectedError
      · ServerTimeoutError
    * ServerFingerprintMismatch
  – ClientPayloadError
  – InvalidURL
TraceConfig
The trace config is the configuration object used to trace requests launched by a client session, using events tied to the different parts of the request flow.
class aiohttp.TraceConfig(trace_config_ctx_factory=SimpleNamespace)
Parameters trace_config_ctx_factory – factory used to create trace contexts, SimpleNamespace by default.
trace_config_ctx(trace_request_ctx=None)
Parameters trace_request_ctx – passed as a keyword argument to the trace_config_ctx_factory.
Returns a new trace context.
on_request_start
Property that gives access to the signals that will be executed when a request starts, based on the Signal
implementation.
The coroutines listening will receive as a param the session, trace_config_ctx, method, url
and headers.
New in version 3.0.
on_request_redirect
Property that gives access to the signals that will be executed when a redirect happens during a request
flow.
The coroutines that are listening will receive the session, trace_config_ctx, method, url,
headers and resp params.
New in version 3.0.
on_request_end
Property that gives access to the signals that will be executed when a request ends.
The coroutines that are listening will receive the session, trace_config_ctx, method, url,
headers and resp params
New in version 3.0.
on_request_exception
Property that gives access to the signals that will be executed when a request finishes with an exception.
The coroutines listening will receive the session, trace_config_ctx, method, url, headers
and exception params.
New in version 3.0.
on_connection_queued_start
Property that gives access to the signals that will be executed when a request has been queued waiting for
an available connection.
The coroutines that are listening will receive the session and trace_config_ctx params.
New in version 3.0.
on_connection_queued_end
Property that gives access to the signals that will be executed when a request that was queued already has
an available connection.
The coroutines that are listening will receive the session and trace_config_ctx params.
New in version 3.0.
on_connection_create_start
Property that gives access to the signals that will be executed when a request creates a new connection.
The coroutines listening will receive the session and trace_config_ctx params.
New in version 3.0.
on_connection_create_end
Property that gives access to the signals that will be executed when a request that created a new connection
finishes its creation.
The coroutines listening will receive the session and trace_config_ctx params.
New in version 3.0.
on_connection_reuseconn
Property that gives access to the signals that will be executed when a request reuses a connection.
The coroutines listening will receive the session and trace_config_ctx params.
New in version 3.0.
on_dns_resolvehost_start
Property that gives access to the signals that will be executed when a request starts resolving the domain related to the request.
The coroutines listening will receive the session and trace_config_ctx params.
New in version 3.0.
on_dns_resolvehost_end
Property that gives access to the signals that will be executed when a request finishes resolving the domain related to the request.
The coroutines listening will receive the session and trace_config_ctx params.
New in version 3.0.
on_dns_cache_hit
Property that gives access to the signals that will be executed when a request was able to use a cached DNS resolution for the domain related to the request.
The coroutines listening will receive the session and trace_config_ctx params.
New in version 3.0.
on_dns_cache_miss
Property that gives access to the signals that will be executed when a request was not able to use a cached DNS resolution for the domain related to the request.
The coroutines listening will receive the session and trace_config_ctx params.
New in version 3.0.
11.2 Server

Are you going to learn aiohttp but don't know where to start? We have an example for you. The Polls application is a great example for getting started with aiohttp.
If you want the full source code in advance or for comparison, check out the demo source.
$ python -V
Python 3.5.0
The project structure looks very similar to other Python-based web projects:

.
├── README.rst
└── polls
    ├── Makefile
    ├── README.rst
    ├── aiohttpdemo_polls
    │   ├── __init__.py
    │   ├── __main__.py
    │   ├── db.py
    │   ├── main.py
    │   ├── routes.py
    │   ├── templates
    │   ├── utils.py
    │   └── views.py
    ├── config
    │   └── polls.yaml
    ├── images
    │   └── example.png
    ├── setup.py
    ├── sql
    │   ├── create_tables.sql
    │   ├── install.sh
    │   └── sample_data.sql
    └── static
        └── style.css
Application

Every aiohttp server is built around an aiohttp.web.Application instance. It is used for registering startup/cleanup signals, connecting routes, etc.
The following code creates an application:
The following code creates an application:
from aiohttp import web

app = web.Application()
web.run_app(app, host='127.0.0.1', port=8080)
Run it with:

$ python3 main.py

Alas, whether you open the page in a browser or fetch it with curl, for now the server returns only 404: Not Found. To show something more meaningful, let's create a route and a view.
Views

Let's start with the first view. Create the file aiohttpdemo_polls/views.py with the following:
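# a sketch matching the demo
from aiohttp import web

async def index(request):
    return web.Response(text='Hello Aiohttp!')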
This is the simplest view possible in aiohttp. Now we should create a route for this index view. Put this into aiohttpdemo_polls/routes.py (it is good practice to separate views, routes, models, etc. You'll have more of each, and it is nice to have them in different places):
def setup_routes(app):
app.router.add_get('/', index)
Also, we should call the setup_routes function somewhere; the best place is in main.py:
app = web.Application()
setup_routes(app)
web.run_app(app, host='127.0.0.1', port=8080)
Success! For now your working directory should look like this:

.
└── polls
    └── aiohttpdemo_polls
        ├── main.py
        ├── routes.py
        └── views.py
Configuration files
aiohttp is configuration agnostic. That means the library does not require any particular configuration approach and does not have built-in support for any config schema.
But please take these facts into account:
1. 99% of servers have configuration files.
2. Most products (except Python-based solutions like Django and Flask) do not store config files as part of the source code. For example, Nginx keeps its configuration files under /etc/nginx by default; MongoDB pushes its config to /etc/mongodb.conf.
3. Config file validation is a good idea; strong checks may prevent silly errors during product deployment.
Thus we suggest the following approach:
1. Push configs as yaml files (json or ini are also fine, but yaml works best).
2. Load the yaml config from a list of predefined locations, e.g. ./config/app_cfg.yaml, /etc/app_cfg.yaml.
3. Keep the ability to override the config file with a command line parameter, e.g. ./run_app --config=/opt/config/app_cfg.yaml.
4. Apply strict validation checks to the loaded dict. trafaret, colander or JSON schema are good candidates for the job.
Load config and push into application:
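A sketch using PyYAML (the helper name is an assumption):

import yaml

def get_config(path):
    # read the yaml file and return the parsed dict
    with open(path) as f:
        return yaml.safe_load(f)

app['config'] = get_config('./config/polls.yaml')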
Database
Setup
In this tutorial we will use the latest PostgreSQL database. You can install PostgreSQL following the instructions at http://www.postgresql.org/download/
Database schema
We use SQLAlchemy to describe database schemas. For this tutorial we can use two simple models question and
choice:
import sqlalchemy as sa
meta = sa.MetaData()
question = sa.Table(
'question', meta,
sa.Column('id', sa.Integer, nullable=False),
sa.Column('question_text', sa.String(200), nullable=False),
sa.Column('pub_date', sa.Date, nullable=False),
# Indexes #
sa.PrimaryKeyConstraint('id', name='question_id_pkey'))
choice = sa.Table(
'choice', meta,
sa.Column('id', sa.Integer, nullable=False),
sa.Column('question_id', sa.Integer, nullable=False),
sa.Column('choice_text', sa.String(200), nullable=False),
sa.Column('votes', sa.Integer, server_default="0", nullable=False),
# Indexes #
sa.PrimaryKeyConstraint('id', name='choice_id_pkey'),
sa.ForeignKeyConstraint(['question_id'], [question.c.id],
name='choice_question_id_fkey',
ondelete='CASCADE'),
)
The resulting tables:

question:
  • id
  • question_text
  • pub_date

choice:
  • id
  • choice_text
  • votes
  • question_id
For making DB queries we need an engine instance. Assuming conf is a dict with configuration info, a Postgres connection can be created by the following coroutine:
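# a sketch using aiopg.sa; the config keys are assumptions mirroring the demo
import aiopg.sa

async def init_pg(app):
    conf = app['config']['postgres']
    engine = await aiopg.sa.create_engine(
        database=conf['database'],
        user=conf['user'],
        password=conf['password'],
        host=conf['host'],
        port=conf['port'])
    app['db'] = engine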
app.on_startup.append(init_pg)
Graceful shutdown

The database engine should be closed properly on server shutdown:
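# a sketch
async def close_pg(app):
    app['db'].close()
    await app['db'].wait_closed()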
app.on_cleanup.append(close_pg)
Templates
@aiohttp_jinja2.template('detail.html')
async def poll(request):
async with request.app['db'].acquire() as conn:
question_id = request.match_info['question_id']
try:
question, choices = await db.get_question(conn,
question_id)
except db.RecordNotFound as e:
raise web.HTTPNotFound(text=str(e))
return {
'question': question,
'choices': choices
}
Templates are a very convenient way to write web pages. We return a dict with the page content, and the aiohttp_jinja2.template decorator processes it with the jinja2 template renderer. The renderer must be set up first, e.g. in main.py:
import aiohttp_jinja2
import jinja2
aiohttp_jinja2.setup(
app, loader=jinja2.PackageLoader('aiohttpdemo_polls', 'templates'))
Static files

Any web site has static files: images, JavaScript sources, CSS files, etc.
The best way to handle static files in production is to set up a reverse proxy like NGINX or use a CDN service.
But for development, handling static files with the aiohttp server is very convenient, and it can be done easily with a single call:
def setup_static_routes(app):
app.router.add_static('/static/',
path=PROJECT_ROOT / 'static',
name='static')
Middlewares

Middlewares are stacked around every web-handler. They are called before the handler, for pre-processing the request, and after getting the response back, for post-processing the given response.
Here we'll add a simple middleware for displaying pretty looking pages for 404 Not Found and 500 Internal Error.
Middlewares are registered in the app by adding them to the app.middlewares list:

def setup_middlewares(app):
    error_middleware = error_pages({404: handle_404,
                                    500: handle_500})
    app.middlewares.append(error_middleware)

A middleware itself is a factory which accepts the application and the next handler (the following middleware, or the web-handler in the case of the last middleware in the list).
The factory returns a middleware handler which has the same signature as a regular web-handler: it accepts a request and returns a response.
Middleware for processing HTTP exceptions:
def error_pages(overrides):
async def middleware(app, handler):
async def middleware_handler(request):
try:
response = await handler(request)
override = overrides.get(response.status)
if override is None:
return response
else:
return await override(request, response)
except web.HTTPException as ex:
override = overrides.get(ex.status)
if override is None:
raise
else:
return await override(request, ex)
return middleware_handler
return middleware
See also:
Middlewares
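A request handler must be a coroutine that accepts a Request instance as its only parameter and returns a Response instance; a minimal sketch:

from aiohttp import web

async def hello(request):
    return web.Response(text="Hello, world")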
Next, create an Application instance and register the request handler with the application’s router on a partic-
ular HTTP method and path:
app = web.Application()
app.router.add_get('/', hello)
web.run_app(app)
aiohttp.web implements a basic CLI for quickly serving an Application in development over TCP/IP:
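$ python -m aiohttp.web -H localhost -P 8080 package.module:init_func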
package.module:init_func should be an importable callable that accepts a list of any non-parsed command-
line arguments and returns an Application instance after setting it up:
def init_func(argv):
app = web.Application()
app.router.add_get("/", index_handler)
return app
Handler
A request handler can be any callable that accepts a Request instance as its only argument and returns a
StreamResponse derived (e.g. Response) instance:
def handler(request):
return web.Response()
A handler may also be a coroutine, in which case aiohttp.web will await the handler:
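async def handler(request):
    return web.Response()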
Handlers are set up to handle requests by registering them with the Application.router on a particular route (HTTP method and path pair) using methods like UrlDispatcher.add_get and UrlDispatcher.add_post:
app.router.add_get('/', handler)
app.router.add_post('/post', post_handler)
app.router.add_put('/put', put_handler)
add_route() also supports the wildcard HTTP method, allowing a handler to serve incoming requests on a path
having any HTTP method:
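# a sketch; all_handler is any handler coroutine
app.router.add_route('*', '/path', all_handler)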
The HTTP method can be queried later in the request handler using the Request.method property.
By default endpoints added with add_get() will accept HEAD requests and return the same response headers as
they would for a GET request. You can also deny HEAD requests on a route:
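app.router.add_get('/', handler, allow_head=False)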
Here the handler won't be called and the server will respond with 405.
Note: This is a change as of aiohttp v2.0, to act in accordance with RFC 7231. Previous versions always returned 405 for HEAD requests to routes added with add_get().
If you have handlers which perform lots of processing to write the response body you may wish to improve per-
formance by skipping that processing in the case of HEAD requests while still taking care to respond with the same
headers as with GET requests.
See also:
Router refactoring in 0.21 for more details
Variable Resources
A resource may also have a variable path. For instance, a resource with the path '/a/{name}/c' would match all incoming requests with paths such as '/a/b/c', '/a/1/c', and '/a/etc/c'.
A variable part is specified in the form {identifier}, where the identifier can be used later in a request
handler to access the matched value for that part. This is done by looking up the identifier in the Request.
match_info mapping:
resource = app.router.add_resource('/{name}')
resource.add_route('GET', variable_handler)

You can also specify a custom regex in the form {identifier:regex}:

resource = app.router.add_resource(r'/{name:\d+}')
Note: The regex should match against the percent-encoded URL (request.raw_path). E.g. the space character is encoded as %20.
According to RFC 3986, the symbols allowed in the path are:
A resource can also be given a name, which can then be used to access and build a URL for that resource later (e.g. in
a request handler):
app.router.add_resource(r'/{user}/info', name='user-info')
In this case you can also pass in the parts of the route:
>>> request.app.router['user-info'].url_for(user='john_doe')\
... .with_query("a=b")
'/john_doe/info?a=b'
As discussed above, handlers can be first-class functions or coroutines:

app.router.add_get('/', hello)
But sometimes it’s convenient to group logically similar handlers into a Python class.
Since aiohttp.web does not dictate any implementation details, application developers can organize handlers in
classes if they so wish:
class Handler:
    def __init__(self):
        pass

    async def handle_intro(self, request):
        return web.Response(text="Hello, world")

    async def handle_greeting(self, request):
        name = request.match_info.get('name', "Anonymous")
        return web.Response(text="Hello, {}".format(name))

handler = Handler()
app.router.add_get('/intro', handler.handle_intro)
app.router.add_get('/greet/{name}', handler.handle_greeting)
Class Based Views
aiohttp.web also supports class based views. Derive from View and define a method for each HTTP verb the
view should handle:

class MyView(web.View):
async def get(self):
return await get_resp(self.request)
Handlers should be coroutines that accept only self and return a response object, just like a regular web-handler. The
request object can be retrieved via the View.request property.
After implementing the view, it (MyView in the example above) should be registered with the application's router:
app.router.add_view('/path/to', MyView)
The view will process requests for /path/to with the HTTP methods it implements (GET in the example above), and
raise a 405 Method Not Allowed exception for the rest.
Resource Views
All registered resources in a router can be viewed using the UrlDispatcher.resources() method:
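for resource in app.router.resources():
    print(resource)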
Similarly, a subset of the resources that were registered with a name can be viewed using the UrlDispatcher.
named_resources() method:
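for name, resource in app.router.named_resources().items():
    print(name, resource)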
Code examples shown above use imperative style for adding new routes: they call app.router.add_get(...)
etc.
There are two alternatives: route tables and route decorators.
Route tables resemble the Django approach:
app.router.add_routes([web.get('/get', handle_get),
                       web.post('/post', handle_post)])
The snippet calls add_routes() to register a list of route definitions (aiohttp.web.RouteDef instances)
created by aiohttp.web.get() or aiohttp.web.post() functions.
See also:
RouteDef reference.
Route decorators are closer to the Flask approach:
routes = web.RouteTableDef()
@routes.get('/get')
async def handle_get(request):
...
@routes.post('/post')
async def handle_post(request):
...
app.router.add_routes(routes)
Class based views can be registered through the same decorator style:

routes = web.RouteTableDef()
@routes.view("/view")
class MyView(web.View):
async def get(self):
...
app.router.add_routes(routes)
See also:
RouteTableDef reference.
All three ways (imperative calls, route tables and decorators) are equivalent: use whichever you prefer, or even
mix them as you wish.
New in version 2.3.
JSON Response
Returning JSON data in a response is a common case, so aiohttp.web provides a shortcut for it –
aiohttp.web.json_response():
def handler(request):
data = {'some': 'data'}
return web.json_response(data)
The shortcut method returns an aiohttp.web.Response instance, so you can, for example, set cookies before returning
it from the handler.
User Sessions
Often you need a container for storing user data across requests. The concept is usually called a session.
aiohttp.web has no built-in concept of a session, however, there is a third-party library, aiohttp_session,
that adds session support:
import asyncio
import time
import base64
from cryptography import fernet
from aiohttp import web
from aiohttp_session import setup, get_session, session_middleware
from aiohttp_session.cookie_storage import EncryptedCookieStorage
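async def handler(request):
    # example handler: read the session and record the time of this visit
    session = await get_session(request)
    last_visit = session.get('last_visit')
    session['last_visit'] = time.time()
    return web.Response(text='Last visited: {}'.format(last_visit))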
def make_app():
app = web.Application()
# secret_key must be 32 url-safe base64-encoded bytes
fernet_key = fernet.Fernet.generate_key()
secret_key = base64.urlsafe_b64decode(fernet_key)
setup(app, EncryptedCookieStorage(secret_key))
app.router.add_route('GET', '/', handler)
return app
web.run_app(make_app())
HTTP Forms
If the form's method is "GET" (<form method="get">), use Request.query to get the form data.
To access form data sent with the "POST" method, use Request.post() or Request.multipart().
Request.post() accepts both 'application/x-www-form-urlencoded' and 'multipart/form-data'
form data encodings (e.g. <form enctype="multipart/form-data">). It stores file data in a temporary
directory. If the payload exceeds client_max_size, a ValueError is raised. For efficiency, use
Request.multipart(); it is especially effective for uploading large files (see File Uploads).
Values submitted by the following form:
<label for="login">Login</label>
<input id="login" name="login" type="text" value="" autofocus/>
<label for="password">Password</label>
<input id="password" name="password" type="password" value=""/>
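can be read in a request handler like this (the field names match the form above):

data = await request.post()
login = data['login']
password = data['password']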
File Uploads
aiohttp.web has built-in support for handling files uploaded from the browser.
First, make sure that the HTML <form> element has its enctype attribute set to enctype="multipart/
form-data". As an example, here is a form that accepts an MP3 file:
<label for="mp3">Mp3</label>
<input id="mp3" name="mp3" type="file" value=""/>
Then, in the request handler you can access the file input field as a FileField instance. FileField is simply a
container for the file as well as some of its metadata:
async def store_mp3_handler(request):
    # WARNING: don't do that if you plan to receive large files!
    data = await request.post()

    mp3 = data['mp3']

    # .filename contains the name of the file in string format.
    filename = mp3.filename

    # .file contains the actual file data that needs to be stored somewhere.
    mp3_file = data['mp3'].file

    content = mp3_file.read()

    return web.Response(body=content,
                        headers=MultiDict(
                            {'CONTENT-DISPOSITION': 'attachment; filename="%s"' % filename}))
You might have noticed a big warning in the example above. The general issue is that Request.post() reads the
whole payload in memory, resulting in possible OOM (Out Of Memory) errors. To avoid this, for multipart uploads,
you should use Request.multipart() which returns a multipart reader:
async def store_mp3_handler(request):
    reader = await request.multipart()

    # /!\ Don't forget to validate your inputs /!\
    # reader.next() will yield the fields of the form in order
    field = await reader.next()
    assert field.name == 'mp3'
    filename = field.filename

    # You cannot rely on Content-Length if transfer is chunked.
    size = 0
    with open(filename, 'wb') as f:
        while True:
            chunk = await field.read_chunk()  # 8192 bytes by default
            if not chunk:
                break
            size += len(chunk)
            f.write(chunk)

    return web.Response(text='{} sized of {} successfully stored'.format(filename, size))
WebSockets
aiohttp.web supports WebSockets out-of-the-box. To set up a WebSocket, create a WebSocketResponse in
the request handler and then use it for communicating with the peer:

async def websocket_handler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    async for msg in ws:
        if msg.type == aiohttp.WSMsgType.TEXT:
            if msg.data == 'close':
                await ws.close()
            else:
                await ws.send_str(msg.data + '/answer')
        elif msg.type == aiohttp.WSMsgType.ERROR:
            print('ws connection closed with exception %s' %
                  ws.exception())

    return ws
app.router.add_get('/ws', websocket_handler)
Exceptions
aiohttp.web defines a set of exceptions for every HTTP status code. Each exception is a subclass of
HTTPException and relates to a single HTTP status code.

Warning: Returning HTTPException or its subclasses is deprecated and will be removed in subsequent aiohttp
versions; raise them instead.

Each exception class has a status code according to RFC 2068: codes with 100-300 are not really errors; 400s are
client errors, and 500s are server errors.
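For example, a handler may redirect the client by raising an exception (a minimal sketch):

async def handler(request):
    raise web.HTTPFound('/redirect')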
HTTP Exception hierarchy chart:
Exception
HTTPException
HTTPSuccessful
* 200 - HTTPOk
* 201 - HTTPCreated
* 202 - HTTPAccepted
* 203 - HTTPNonAuthoritativeInformation
* 204 - HTTPNoContent
* 205 - HTTPResetContent
* 206 - HTTPPartialContent
HTTPRedirection
* 300 - HTTPMultipleChoices
* 301 - HTTPMovedPermanently
* 302 - HTTPFound
* 303 - HTTPSeeOther
* 304 - HTTPNotModified
* 305 - HTTPUseProxy
* 307 - HTTPTemporaryRedirect
* 308 - HTTPPermanentRedirect
HTTPError
HTTPClientError
* 400 - HTTPBadRequest
* 401 - HTTPUnauthorized
* 402 - HTTPPaymentRequired
* 403 - HTTPForbidden
* 404 - HTTPNotFound
* 405 - HTTPMethodNotAllowed
* 406 - HTTPNotAcceptable
* 407 - HTTPProxyAuthenticationRequired
* 408 - HTTPRequestTimeout
* 409 - HTTPConflict
* 410 - HTTPGone
* 411 - HTTPLengthRequired
* 412 - HTTPPreconditionFailed
* 413 - HTTPRequestEntityTooLarge
* 414 - HTTPRequestURITooLong
* 415 - HTTPUnsupportedMediaType
* 416 - HTTPRequestRangeNotSatisfiable
* 417 - HTTPExpectationFailed
* 421 - HTTPMisdirectedRequest
* 422 - HTTPUnprocessableEntity
* 424 - HTTPFailedDependency
* 426 - HTTPUpgradeRequired
* 428 - HTTPPreconditionRequired
* 429 - HTTPTooManyRequests
* 431 - HTTPRequestHeaderFieldsTooLarge
* 451 - HTTPUnavailableForLegalReasons
HTTPServerError
* 500 - HTTPInternalServerError
* 501 - HTTPNotImplemented
* 502 - HTTPBadGateway
* 503 - HTTPServiceUnavailable
* 504 - HTTPGatewayTimeout
* 505 - HTTPVersionNotSupported
* 506 - HTTPVariantAlsoNegotiates
* 507 - HTTPInsufficientStorage
* 510 - HTTPNotExtended
* 511 - HTTPNetworkAuthenticationRequired
If not directly specified, headers will be added to the default response headers.
Classes HTTPMultipleChoices, HTTPMovedPermanently, HTTPFound, HTTPSeeOther,
HTTPUseProxy and HTTPTemporaryRedirect have the following constructor signature, where location is
the value for the Location HTTP header:

HTTPFound(location, *, headers=None, reason=None,
          body=None, text=None, content_type=None)

HTTPMethodNotAllowed is constructed by providing the incoming unsupported method and a list of allowed
methods:

HTTPMethodNotAllowed(method, allowed_methods, *,
                     headers=None, reason=None,
                     body=None, text=None, content_type=None)
Warning: A web-handler's execution can be canceled at every await if the client drops the connection without
reading the entire response BODY.
This behavior is very different from classic WSGI frameworks like Flask and Django.
Sometimes this is desirable: while processing a GET request the code might fetch data from a database or another web
resource, and that fetching is potentially slow.
Canceling such a fetch is good: the peer has already dropped the connection, so there is no reason to waste time and
resources (memory etc.) by getting data from the DB with no chance to send it back to the peer.
But sometimes cancellation is bad: a POST request very often needs to save data to the DB regardless of the peer
closing the connection.
Cancellation prevention could be implemented in several ways:
• Applying asyncio.shield() to coroutine that saves data into DB.
• Spawning a new task for DB saving
• Using aiojobs or other third party library.
asyncio.shield() works pretty well. The only disadvantage is that you need to split the web handler into exactly two
async functions: one for the handler itself and another for the protected code.
For example the following snippet is not safe:
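async def handler(request):
    await asyncio.shield(write_to_redis(request))
    await asyncio.shield(write_to_postgres(request))
    return web.Response(text='OK')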
Cancellation might occur just after saving data to Redis; write_to_postgres would then never be called.
Spawning a new task is much worse: there is no place to await spawned tasks:
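async def handler(request):
    request.loop.create_task(write_to_redis(request))
    return web.Response(text='OK')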
In this case errors from write_to_redis are never retrieved, which leads to many asyncio log messages Future exception
was never retrieved and Task was destroyed but it is pending!.
Moreover, during the Graceful shutdown phase aiohttp doesn't wait for these tasks, so you have a good chance of losing
very important data.
On the other hand, aiojobs provides an API for spawning new jobs, awaiting their results etc. It stores all scheduled
activity in internal data structures and can terminate them gracefully:
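from aiojobs.aiohttp import setup, spawn

async def handler(request):
    await spawn(request, write_to_redis(request))
    return web.Response()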
app = web.Application()
setup(app)
app.router.add_get('/', handler)
Also, the aiojobs.aiohttp.atomic decorator can protect a whole handler from cancellation:

from aiojobs.aiohttp import atomic

@atomic
async def handler(request):
await write_to_db()
return web.Response()
app = web.Application()
setup(app)
app.router.add_post('/', handler)
It protects the whole handler async function from cancellation: write_to_db will never be interrupted.
Sometimes you need to register handlers on more complex criteria than simply an HTTP method and path pair.
Although UrlDispatcher does not support any extra criteria, routing based on custom conditions can be accom-
plished by implementing a second layer of routing in your application.
The following example shows custom routing based on the HTTP Accept header:
class AcceptChooser:
    def __init__(self):
        self._accepts = {}

    async def do_route(self, request):
        for accept in request.headers.getall('ACCEPT', []):
            acceptor = self._accepts.get(accept)
            if acceptor is not None:
                return (await acceptor(request))
        raise web.HTTPNotAcceptable()

    def reg_acceptor(self, accept, handler):
        self._accepts[accept] = handler

async def handle_json(request): ...

async def handle_xml(request): ...
chooser = AcceptChooser()
app.router.add_get('/', chooser.do_route)
chooser.reg_acceptor('application/json', handle_json)
chooser.reg_acceptor('application/xml', handle_xml)
The best way to handle static files (images, JavaScript, CSS files etc.) is to use a reverse proxy like nginx, or CDN
services.
But during development it's very convenient to let the aiohttp server itself handle static files.
To do so, just register a new static route with a UrlDispatcher.add_static() call:
app.router.add_static('/prefix', path_to_static_folder)
When a directory is accessed within a static route, the server responds to the client with HTTP/403 Forbidden
by default. Displaying a folder index instead can be enabled by setting the show_index parameter to True:
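app.router.add_static('/prefix', path_to_static_folder, show_index=True)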
When a symlink inside the static directory is accessed, the server responds to the client with HTTP/404 Not Found
by default. To allow the server to follow symlinks, the follow_symlinks parameter should be set to True:
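app.router.add_static('/prefix', path_to_static_folder, follow_symlinks=True)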
When you want to enable cache busting, set the append_version parameter to True.
Cache busting is the process of appending some form of file version hash to the filenames of resources like JavaScript
and CSS files. The performance advantage of doing this is that we can tell the browser to cache these files indefinitely
without worrying about the client not getting the latest version when a file changes:
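app.router.add_static('/prefix', path_to_static_folder, append_version=True)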
Template Rendering
aiohttp.web does not support template rendering out-of-the-box, but there is a third-party library, aiohttp_jinja2,
for the jinja2 template engine. Set up the environment first:

import jinja2
import aiohttp_jinja2

app = web.Application()
aiohttp_jinja2.setup(app,
    loader=jinja2.FileSystemLoader('/path/to/templates/folder'))
After that you may use the template engine in your handlers. The most convenient way is to simply wrap your handlers
with the aiohttp_jinja2.template() decorator:
@aiohttp_jinja2.template('tmpl.jinja2')
def handler(request):
return {'name': 'Andrew', 'surname': 'Svetlov'}
If you prefer the Mako template engine, please take a look at the aiohttp_mako library.
Reading from the WebSocket (await ws.receive()) must only be done inside the request handler task; however,
writing (ws.send_str(...)) to the WebSocket, closing (await ws.close()) and canceling the handler task
may be delegated to other tasks. See also FAQ section.
aiohttp.web creates an implicit asyncio.Task for handling every incoming request.
Note: While aiohttp.web itself only supports WebSockets without downgrading to LONG-POLLING, etc., our
team supports SockJS, an aiohttp-based library for implementing SockJS-compatible server code.
Warning: Parallel reads from a websocket are forbidden; there is no way to call aiohttp.web.
WebSocketResponse.receive() from two tasks.
See the FAQ section for instructions on how to solve the problem.
aiohttp.web discourages the use of global variables, aka singletons. Every variable should have its own context
that is not global.
So, aiohttp.web.Application and aiohttp.web.Request support a collections.abc.
MutableMapping interface (i.e. they are dict-like objects), allowing them to be used as data stores.
For storing global-like variables, feel free to save them in an Application instance:
app['my_private_key'] = data
Variables that are only needed for the lifetime of a Request can be stored in a Request:
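request['my_private_key'] = 'data'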
This is mostly useful for Middlewares and Signals handlers to store data for further processing by the next handlers in
the chain.
aiohttp.web.StreamResponse and aiohttp.web.Response objects also support collections.
abc.MutableMapping interface. This is useful when you want to share data with signals and middlewares once
all the work in the handler is done:
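resp['free_text'] = 'value'  # hypothetical key, read later by a signal handler or middleware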
To avoid clashing with other aiohttp users and third-party libraries, please choose a unique key name for storing data.
If your code is published on PyPI, then the project name is most likely unique and safe to use as the key. Otherwise,
something based on your company name/url would be satisfactory (e.g. org.company.app).
Middlewares
aiohttp.web provides a powerful mechanism for customizing request handlers via middlewares.
A middleware is a coroutine that can modify either the request or response. For example, here’s a simple middleware
which appends ' wink' to the response:
from aiohttp.web import middleware
@middleware
async def middleware(request, handler):
resp = await handler(request)
resp.text = resp.text + ' wink'
return resp
Internally, a single request handler is constructed by applying the middleware chain to the original handler in reverse
order; it is then called by the RequestHandler as a regular handler.
Since middlewares are themselves coroutines, they may perform extra await calls while processing a request, e.g.
call a database etc.
Middlewares usually call the handler, but they may choose to ignore it, e.g. displaying 403 Forbidden page or raising
HTTPForbidden exception if the user does not have permissions to access the underlying resource. They may also
render errors raised by the handler, perform some pre- or post-processing like handling CORS and so on.
The following code demonstrates middlewares execution order:
from aiohttp import web
def test(request):
print('Handler function called')
return web.Response(text="Hello")
@web.middleware
async def middleware1(request, handler):
print('Middleware 1 called')
response = await handler(request)
print('Middleware 1 finished')
return response
@web.middleware
async def middleware2(request, handler):
print('Middleware 2 called')
response = await handler(request)
print('Middleware 2 finished')
return response
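app = web.Application(middlewares=[middleware1, middleware2])
app.router.add_get('/', test)
web.run_app(app)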
Produced output:
Middleware 1 called
Middleware 2 called
Handler function called
Middleware 2 finished
Middleware 1 finished
Example
A common use of middlewares is to implement custom error pages. The following example will render 404 errors
using a JSON response, as might be appropriate for a JSON REST service:
from aiohttp import web
@web.middleware
async def error_middleware(request, handler):
try:
response = await handler(request)
if response.status != 404:
return response
message = response.message
except web.HTTPException as ex:
if ex.status != 404:
raise
message = ex.reason
return web.json_response({'error': message})
app = web.Application(middlewares=[error_middleware])
Deprecated since version 2.3: Prior to v2.3 middleware required an outer middleware factory which returned the
middleware coroutine. Since v2.3 this is not required; instead the @middleware decorator should be used.
Old style middleware (with an outer factory and no @middleware decorator) is still supported. Furthermore, old
and new style middleware can be mixed.
A middleware factory is simply a coroutine that implements the logic of a middleware. For example, here's a trivial
middleware factory:

async def middleware_factory(app, handler):
    async def middleware_handler(request):
        resp = await handler(request)
        resp.text = resp.text + ' wink'
        return resp
    return middleware_handler
A middleware factory should accept two parameters, an app instance and a handler, and return a new handler.
Note: Both the outer middleware_factory coroutine and the inner middleware_handler coroutine are called for every
request handled.
Middleware factories should return a new handler that has the same signature as a request handler. That is, it should
accept a single Request instance and return a Response, or raise an exception.
Signals
Although middlewares can customize request handlers before or after a Response has been prepared, they can’t
customize a Response while it’s being prepared. For this aiohttp.web provides signals.
For example, a middleware can only change HTTP headers for unprepared responses (see prepare()), but some-
times we need a hook for changing HTTP headers for streamed responses and WebSockets. This can be accomplished
by subscribing to the on_response_prepare signal:
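async def on_prepare(request, response):
    response.headers['My-Header'] = 'value'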
app.on_response_prepare.append(on_prepare)
Additionally, the on_startup and on_cleanup signals can be subscribed to for application component setup and
tear down accordingly.
The following example will properly initialize and dispose an aiopg connection engine:
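import aiopg

async def create_aiopg(app):
    # connection parameters below are placeholders
    app['pg_engine'] = await aiopg.create_engine(
        user='postgres', database='mydb', host='localhost', password='')

async def dispose_aiopg(app):
    app['pg_engine'].close()
    await app['pg_engine'].wait_closed()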
app.on_startup.append(create_aiopg)
app.on_cleanup.append(dispose_aiopg)
Signal handlers should not return a value but may modify incoming mutable parameters.
Signal handlers are run sequentially, in the order they were added. If a handler is asynchronous, it is awaited before
the next one is called.
Warning: Signals API has provisional status, meaning it may be changed in future releases.
Signal subscription and sending will most likely be the same, but signal object creation is subject to change. As
long as you are not creating new signals, but simply reusing existing ones, you will not be affected.
Nested applications
Sub-applications are designed to solve the problem of a big monolithic code base. Let's assume we have a project
with its own business logic plus tools like an administration panel and a debug toolbar.
The administration panel is a separate application by its own nature, but all its URLs are served under a prefix like
/admin.
Thus we'll create a totally separate application named admin and connect it to the main app under that prefix with
add_subapp():
admin = web.Application()
# setup admin routes, signals and middlewares
app.add_subapp('/admin/', admin)
URL reversing for sub-applications generates URLs with the proper prefix; the sub-application's router should be
used for it:

admin = web.Application()
admin.router.add_get('/resource', handler, name='name')
app.add_subapp('/admin/', admin)
url = admin.router['name'].url_for()
The generated url here is '/admin/resource', with the proper prefix applied.
If the main application needs to do URL reversing for a sub-application, the following explicit technique can be used:

admin = web.Application()
admin.router.add_get('/resource', handler, name='name')
app.add_subapp('/admin/', admin)
app['admin'] = admin
Expect Header
aiohttp.web supports the Expect header. By default it sends an HTTP/1.1 100 Continue line to the client, or raises
HTTPExpectationFailed if the header value is not equal to "100-continue". It is possible to specify a custom Expect
header handler on a per-route basis. This handler gets called, if the Expect header is present in the request, after
receiving all headers and before processing the application's Middlewares and route handler. The handler can return
None, in which case request processing continues as usual. It can also return an instance of StreamResponse,
which the request handler then uses as the response. Alternatively, the handler can raise a subclass of
HTTPException; in that case all further processing stops and the client receives the appropriate HTTP response.
Note: A server that does not understand or is unable to comply with any of the expectation values in the Expect field
of a request MUST respond with appropriate error status. The server MUST respond with a 417 (Expectation Failed)
status if any of the expectations cannot be met or, if there are other problems with the request, some other 4xx status.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.20
If all checks pass, the custom handler must write a HTTP/1.1 100 Continue status code before returning.
The following example shows how to setup a custom handler for the Expect header:
async def check_auth(request):
    expect = request.headers.get('EXPECT')
    if expect != '100-continue':
        raise HTTPExpectationFailed(text="Unknown Expect: %s" % expect)

    if request.headers.get('AUTHORIZATION') is None:
        raise HTTPForbidden()

    request.transport.write(b"HTTP/1.1 100 Continue\r\n\r\n")
app = web.Application()
app.router.add_get('/', hello, expect_handler=check_auth)
Application runners
For finer control over serving an application than web.run_app() provides, an AppRunner can be combined
with sites:
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, 'localhost', 8080)
await site.start()
await runner.cleanup()
Graceful shutdown
Stopping an aiohttp web server by just closing all connections is not always satisfactory.
The problem: if the application supports websockets or data streaming, it most likely has open connections at server
shutdown time.
The library has no knowledge of how to close them gracefully, but the developer can help by registering an
Application.on_shutdown signal handler; the signal is called on web server closing.
The developer should keep a list of opened connections (the Application instance is a good candidate for storing it).
The following snippet shows an example websocket handler together with the matching on_shutdown signal handler:
app = web.Application()
app['websockets'] = []

async def websocket_handler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    request.app['websockets'].append(ws)
    try:
        async for msg in ws:
            ...
    finally:
        request.app['websockets'].remove(ws)

    return ws

async def on_shutdown(app):
    # WSCloseCode is importable from aiohttp
    for ws in app['websockets']:
        await ws.close(code=WSCloseCode.GOING_AWAY,
                       message='Server shutdown')

app.on_shutdown.append(on_shutdown)
Background tasks
Sometimes there's a need to perform some asynchronous operations just after application start-up.
Moreover, in some sophisticated systems it may be necessary to run background tasks in the event loop along with
the application's request handler, e.g. listening to a message queue or other network message/event sources (ZeroMQ,
Redis Pub/Sub, AMQP, etc.) in order to react to received messages within the application.
For example the background task could listen to ZeroMQ on zmq.SUB socket, process and forward re-
trieved messages to clients connected via WebSocket that are stored somewhere in the application (e.g. in the
application['websockets'] list).
To run such short and long running background tasks, aiohttp provides the ability to register Application.
on_startup signal handler(s) that will run along with the application's request handler.
For example, suppose we need to run one quick task and two long running tasks that live as long as the application is
alive. The appropriate background tasks can be registered as Application.on_startup signal handlers as shown
in the example below:
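A minimal sketch of the start and cleanup handlers, assuming listen_to_redis() is the long running coroutine
discussed below:

async def start_background_tasks(app):
    app['redis_listener'] = app.loop.create_task(listen_to_redis(app))

async def cleanup_background_tasks(app):
    # listen_to_redis() is expected to handle asyncio.CancelledError itself
    app['redis_listener'].cancel()
    await app['redis_listener']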
app = web.Application()
app.on_startup.append(start_background_tasks)
app.on_cleanup.append(cleanup_background_tasks)
web.run_app(app)
The task listen_to_redis() will run forever. To shut it down correctly, an Application.on_cleanup signal
handler may be used to send a cancellation to it.
Pages like 404 Not Found and 500 Internal Error could be handled by custom middleware, see Middlewares for details.
As discussed in Server Deployment, the preferable way for production usage is deploying the aiohttp web server
behind a reverse proxy server like nginx.
In this setup, properties like scheme, host and remote are incorrect.
The real values should be given by the proxy server; usually either the Forwarded header or the old-fashioned
X-Forwarded-For, X-Forwarded-Host and X-Forwarded-Proto HTTP headers are used.
aiohttp does not take forwarded headers into account by default because doing so is a security issue: an HTTP client
might add these headers too, pushing untrusted data values.
That's why an aiohttp server should set up forwarded headers in custom middleware, in tight conjunction with the
reverse proxy configuration.
For changing scheme, host and remote, the middleware might use clone().
See also:
https://github.com/aio-libs/aiohttp-remotes provides secure helpers for modifying scheme, host and remote attributes
according to Forwarded and X-Forwarded-* HTTP headers.
Swagger support
aiohttp-swagger is a library that allows adding Swagger documentation and embedding the Swagger-UI into your
aiohttp.web project.
CORS support
aiohttp.web itself does not support Cross-Origin Resource Sharing, but there is an aiohttp plugin for it:
aiohttp_cors.
Debug Toolbar
aiohttp-debugtoolbar is a very useful library that provides a debugging toolbar while you’re developing an aiohttp.
web application.
Install it via pip:
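pip install aiohttp_debugtoolbar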
After that attach the aiohttp_debugtoolbar middleware to your aiohttp.web.Application and call
aiohttp_debugtoolbar.setup():
import aiohttp_debugtoolbar
from aiohttp_debugtoolbar import toolbar_middleware_factory
app = web.Application(middlewares=[toolbar_middleware_factory])
aiohttp_debugtoolbar.setup(app)
Dev Tools
Documentation and a complete tutorial of creating and running an app locally are available at aiohttp-devtools.
Abstract
Sometimes a user doesn't need the high-level concepts introduced in Server: applications, routers, middlewares and
signals.
All that is needed is support for an asynchronous callable which accepts a request and returns a response object.
This is done by the aiohttp.web.Server class, which plays the protocol factory role for asyncio.
AbstractEventLoop.create_server(), bridges the data stream to the web handler and sends the result back.
A low-level web handler should accept a single BaseRequest parameter and perform one of the following actions:
1. Return a Response with the whole HTTP body stored in memory.
2. Create a StreamResponse, send headers by the StreamResponse.prepare() call, send data chunks by
StreamResponse.write() and return the finished response.
3. Raise HTTPException derived exception (see Exceptions section).
All other exceptions, not derived from HTTPException, lead to a 500 Internal Server Error response.
4. Initiate and process a Web-Socket connection by using WebSocketResponse (see WebSockets).
import asyncio
from aiohttp import web

async def handler(request):
    return web.Response(text="OK")

async def main(loop):
    server = web.Server(handler)
    await loop.create_server(server, "127.0.0.1", 8080)
    print("======= Serving on http://127.0.0.1:8080/ ======")

    # pause here for very long time by serving HTTP requests and
    # waiting for keyboard interruption
    await asyncio.sleep(100*3600)

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main(loop))
except KeyboardInterrupt:
    pass
loop.close()
In the snippet we have a handler which returns a regular Response with "OK" in the BODY.
This handler is processed by a server (a Server instance, which acts as protocol factory). Network communication
is set up by the loop.create_server() call to serve http://127.0.0.1:8080/.
The handler should process every request: GET, POST, Web-Socket, for every path.
The example is very basic: it always returns a 200 OK response; real life code is usually much more complex.
The Request object contains all the information about an incoming HTTP request.
BaseRequest is used for Low-Level Servers (which have no applications, routers, signals or middlewares).
Request additionally has Request.app and Request.match_info attributes.
BaseRequest and Request are dict-like objects, allowing them to be used for sharing data among Middlewares
and Signals handlers.
class aiohttp.web.BaseRequest
version
HTTP version of request, Read-only property.
Returns aiohttp.protocol.HttpVersion instance.
method
HTTP method, read-only property.
The value is upper-cased str like "GET", "POST", "PUT" etc.
url
A URL instance with absolute URL to resource (scheme, host and port are included).
Note: In case of malformed request (e.g. without "HOST" HTTP header) the absolute url
may be unavailable.
rel_url
A URL instance with relative URL to resource (contains path, query and fragment parts only,
scheme, host and port are excluded).
The property is equal to .url.relative() but is always present.
See also:
A note from url.
scheme
A string representing the scheme of the request.
The scheme is 'https' if the transport for request handling is SSL, 'http' otherwise.
The value could be overridden by clone().
Read-only str property.
Changed in version 2.3: Forwarded and X-Forwarded-Proto are not used anymore.
Call .clone(scheme=new_scheme) for setting up the value explicitly.
See also:
Deploying behind a Reverse Proxy (above).
transport
An optional transport used for request processing, read-only property.
The property can be used, for example, to get the peer's address:

peername = request.transport.get_extra_info('peername')
if peername is not None:
    host, port = peername
loop
An event loop instance used by HTTP request handling.
charset
Read-only property that specifies the encoding for the request’s BODY.
The value is parsed from the Content-Type HTTP header.
Returns str like 'utf-8' or None if Content-Type has no charset information.
content_length
Read-only property that returns length of the request’s BODY.
The value is parsed from the Content-Length HTTP header.
Returns int or None if Content-Length is absent.
http_range
Read-only property that returns information about Range HTTP header.
Returns a slice where .start is left inclusive bound, .stop is right exclusive bound and
.step is 1.
The property might be used in two manners:
1. Attribute-access style (example assumes that both left and right borders are set, the real logic
for case of open bounds is more complex):
rng = request.http_range
with open(filename, 'rb') as f:
f.seek(rng.start)
return f.read(rng.stop-rng.start)
2. Slice-style:
return buffer[request.http_range]
if_modified_since
Read-only property that returns the date specified in the If-Modified-Since header.
Returns datetime.datetime or None if If-Modified-Since header is absent or is not a valid
HTTP date.
clone(*, method=..., rel_url=..., headers=...)
Clone itself, replacing some attributes.
Creates and returns a new instance of the Request object. If no parameters are given, an exact copy is
returned. If a parameter is not passed, the value from the current request object is reused.
Parameters
• method (str) – http method
• rel_url – url to use, str or URL
• headers – CIMultiDict or compatible headers container.
Returns a cloned Request instance.
coroutine read()
Read request body, returns bytes object with body content.
Note: The method stores the read data internally; a subsequent read() call will return the same
value.
coroutine text()
Read request body, decode it using charset encoding or UTF-8 if no encoding was specified in
MIME-type.
Returns str with body content.
Note: The method stores the read data internally; a subsequent text() call will return the same
value.
coroutine json(*, loads=json.loads)
Read the request body decoded as JSON.
Parameters loads (callable) – any callable that accepts str and returns dict
with parsed JSON (json.loads() by default).
Note: The method stores the read data internally; a subsequent json() call will return the same
value.
coroutine multipart()
Returns an aiohttp.multipart.MultipartReader which processes the request body as multipart.
The method is a coroutine for consistency with the other reader methods.
Warning: The method does not store read data internally. That means once you exhaust the
multipart reader, you cannot get the request payload one more time.
See also:
Working with Multipart
coroutine post()
A coroutine that reads POST parameters from request body.
Returns MultiDictProxy instance filled with parsed data.
If the method is not POST, PUT, PATCH, TRACE or DELETE, or the content type is neither empty,
application/x-www-form-urlencoded nor multipart/form-data, an empty multidict is returned.
Note: The method stores the read data internally; a subsequent post() call will return the same
value.
coroutine release()
Release request.
Eat unread part of HTTP BODY if present.
Note: User code may never need to call release(); all required work will be processed by aiohttp.
web internal machinery.
class aiohttp.web.Request
A request object used by web handlers to receive an incoming request's information.
Every handler accepts a request instance as its first positional parameter.
The class is derived from BaseRequest and shares all of the parent's attributes and methods, but has a couple
of additional properties:
match_info
Read-only property with AbstractMatchInfo instance for result of route resolving.
Note: Exact type of property depends on used router. If app.router is UrlDispatcher the
property contains UrlMappingMatchInfo instance.
app
An Application instance used to call request handler, Read-only property.
Note: You should never create a Request instance manually – aiohttp.web does it for you. But
clone() may be used for creating a modified request copy with a changed path, method etc.
Response classes
For now, aiohttp.web has three classes for the HTTP response: StreamResponse, Response and
FileResponse.
Usually you need to use the second one. StreamResponse is intended for streaming data, while Response
contains the HTTP BODY as an attribute and sends its own content as a single piece with the correct Content-Length
HTTP header.
By design, Response is derived from the StreamResponse parent class.
The response supports keep-alive handling out-of-the-box if the request supports it.
You can disable keep-alive by force_close() though.
The common case for sending an answer from a web-handler is returning a Response instance:

def handler(request):
    return web.Response(text="All right!")
Response classes are dict like objects, allowing them to be used for sharing data among Middlewares and Signals
handlers:
resp['key'] = value
StreamResponse

class aiohttp.web.StreamResponse(*, status=200, reason=None)
The base class for HTTP response handling.
Parameters
• status (int) – HTTP status code, 200 by default.
• reason (str) – HTTP reason. If the param is None, the reason will be calculated based on the
status parameter. Otherwise pass a str with an arbitrary status explanation.
prepared
Read-only bool property, True if prepare() has been called, False otherwise.
task
A task that serves HTTP request handling.
May be useful for graceful shutdown of long-running requests (streaming, long polling or web-socket).
status
Read-only property for HTTP response status code, int.
200 (OK) by default.
reason
Read-only property for HTTP response reason, str.
set_status(status, reason=None)
Set status and reason.
reason value is auto calculated if not specified (None).
keep_alive
Read-only property, copy of Request.keep_alive by default.
Can be switched to False by force_close() call.
force_close()
Disable keep_alive for connection. There are no ways to enable it back.
compression
Read-only bool property, True if compression is enabled.
False by default.
See also:
enable_compression()
enable_compression(force=None)
Enable compression.
When force is unset compression encoding is selected based on the request’s Accept-Encoding header.
Accept-Encoding is not checked if force is set to a ContentCoding.
See also:
compression
chunked
Read-only property, indicates if chunked encoding is on.
Can be enabled by enable_chunked_encoding() call.
See also:
enable_chunked_encoding
enable_chunked_encoding()
Enables chunked encoding for the response. There is no way to disable it afterwards. With chunked
encoding enabled, each write() operation is encoded as a separate chunk.
See also:
chunked
headers
CIMultiDict instance for outgoing HTTP headers.
cookies
An instance of http.cookies.SimpleCookie for outgoing cookies.
Warning: Setting the Set-Cookie header directly may be overwritten by explicit calls to cookie manipulation
methods.
We encourage using cookies, set_cookie() and del_cookie() for cookie manipulation.
Warning: In HTTP version 1.1, expires was deprecated and replaced with the easier-to-use
max-age, but Internet Explorer (IE6, IE7, and IE8) does not support max-age.
coroutine write(data)
Send byte-ish data as the next chunk of the response BODY:

await resp.write(data)
Response
Parameters
• body (bytes) – response’s BODY
• status (int) – HTTP status code, 200 OK by default.
• headers (collections.abc.Mapping) – HTTP headers that should be added
to response’s ones.
• text (str) – response's BODY as a string.
• content_type (str) – response's content type: 'text/plain' if text is passed,
'application/octet-stream' otherwise.
• charset (str) – response's charset: 'utf-8' if text is passed, None otherwise.
body
Read-write attribute for storing response’s content aka BODY, bytes.
Setting body also recalculates content_length value.
Resetting body (assigning None) sets content_length to None too, dropping Content-Length
HTTP header.
text
Read-write attribute for storing the response's content, represented as a string, str.
Setting text also recalculates the content_length and body values.
Resetting text (assigning None) sets content_length to None too, dropping the Content-Length
HTTP header.
WebSocketResponse
The class for handling server-side websockets. After starting (by a prepare() call) the response can send
and receive WebSocket messages:
ws = web.WebSocketResponse()
await ws.prepare(request)
coroutine prepare(request)
Starts websocket. After the call you can use websocket methods.
Parameters request (aiohttp.web.Request) – HTTP request object, that the re-
sponse answers.
Raises HTTPException – if websocket handshake has failed.
can_prepare(request)
Performs checks on the request to figure out whether a websocket can be started on it.
If the can_prepare() call succeeds then prepare() will succeed too.
Parameters request (aiohttp.web.Request) – HTTP request object, that the re-
sponse answers.
Returns
WebSocketReady instance.
WebSocketReady.ok is True on success, WebSocketReady.protocol is
websocket subprotocol which is passed by client and accepted by server (one of proto-
cols sequence from WebSocketResponse ctor). WebSocketReady.protocol
may be None if client and server subprotocols are not overlapping.
closed
Read-only property, True if the connection has been closed or is in the process of closing, i.e. a CLOSE
message has been received from the peer.
close_code
Read-only property, close code from peer. It is set to None on opened connection.
protocol
WebSocket subprotocol chosen after the prepare() call.
May be None if the server and client subprotocols do not overlap.
exception()
Returns last occurred exception or None.
coroutine ping(message=b'')
Send PING to peer.
Parameters message – optional payload of ping message, str (converted to UTF-8 en-
coded bytes) or bytes.
Raises RuntimeError – if the connection is not started or is closing.
Changed in version 3.0: The method is converted into coroutine
coroutine pong(message=b'')
Send unsolicited PONG to peer.
Parameters message – optional payload of pong message, str (converted to UTF-8 en-
coded bytes) or bytes.
Raises RuntimeError – if the connection is not started or is closing.
coroutine receive_json(*, loads=json.loads, timeout=None)
A coroutine that waits for an upcoming DATA message and converts it to JSON.
Parameters
• loads (callable) – any callable that accepts str and returns dict with
parsed JSON (json.loads() by default).
• timeout – timeout for the receive operation.
The timeout value overrides the response's receive_timeout attribute.
WebSocketReady
class aiohttp.web.WebSocketReady
A named tuple for returning result from WebSocketResponse.can_prepare().
Has bool check implemented, e.g.:
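if not ws.can_prepare(request):
    ...  # the websocket cannot be started on this request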
ok
True if websocket connection can be established, False otherwise.
protocol
str representing the selected websocket sub-protocol.
See also:
WebSocketResponse.can_prepare()
json_response

aiohttp.web.json_response(data=sentinel, *, text=None, body=None, status=200, reason=None, headers=None,
content_type='application/json', dumps=json.dumps)
Return a Response with the predefined 'application/json' content type and data encoded by the dumps
parameter (json.dumps() by default).
Application
app = Application()
app['database'] = await aiopg.create_engine(**db_config)
Although Application is a dict-like object, it can’t be duplicated like one using Application.copy().
class aiohttp.web.Application(*, logger=<default>, router=None, middlewares=(), handler_args=None,
client_max_size=1024**2, loop=None, debug=...)
The class inherits dict.
Parameters
• logger – logging.Logger instance for storing application logs.
By default the value is logging.getLogger("aiohttp.web")
• router – aiohttp.abc.AbstractRouter instance, the system creates
UrlDispatcher by default if router is None.
• middlewares – list of middleware factories, see Middlewares for details.
• handler_args – dict-like object that overrides keyword arguments of
Application.make_handler()
• client_max_size – client’s maximum size in a request, in bytes. If a POST re-
quest exceeds this value, it raises an HTTPRequestEntityTooLarge exception.
• loop – event loop
Deprecated since version 2.0: The parameter is deprecated; the loop is set during the
freeze stage.
• debug – Switches debug mode.
router
Read-only property that returns router instance.
logger
logging.Logger instance for storing application logs.
loop
event loop used for processing HTTP requests.
debug
Boolean value indicating whether the debug mode is turned on or off.
on_response_prepare
A Signal that is fired at the beginning of StreamResponse.prepare() with parameters request
and response. It can be used, for example, to add custom headers to each response before sending.
Signal handlers should have the following signature:
async def on_prepare(request, response):
pass
on_startup
A Signal that is fired on application start-up.
Subscribers may use the signal to run background tasks in the event loop along with the application’s
request handler just after the application start-up.
Signal handlers should have the following signature:
async def on_startup(app):
pass
See also:
Background tasks.
on_shutdown
A Signal that is fired on application shutdown.
Subscribers may use the signal for gracefully closing long running connections, e.g. websockets and data
streaming.
Signal handlers should have the following signature:
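async def on_shutdown(app):
    pass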
It's up to the end user to figure out which web-handlers are still alive and how to finish them properly.
We suggest keeping a list of long running handlers in the Application dictionary.
See also:
Graceful shutdown and on_cleanup.
on_cleanup
A Signal that is fired on application cleanup.
Subscribers may use the signal for gracefully closing connections to database server etc.
Signal handlers should have the following signature:
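async def on_cleanup(app):
    pass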
See also:
Graceful shutdown and on_shutdown.
make_handler(loop=None, **kwargs)
Creates HTTP protocol factory for handling requests.
Parameters
• loop –
event loop used for processing HTTP requests.
If param is None asyncio.get_event_loop() used for getting de-
fault event loop.
Deprecated since version 2.0.
• tcp_keepalive (bool) – Enable TCP Keep-Alive. Default: True.
• keepalive_timeout (int) – Number of seconds before closing Keep-Alive
connection. Default: 75 seconds (NGINX’s default value).
• logger – Custom logger object. Default: aiohttp.log.server_logger.
• access_log – Custom logging object. Default: aiohttp.log.access_logger.
• access_log_class – class for access_logger. Default: aiohttp.helpers.AccessLogger.
Must be a subclass of aiohttp.abc.AbstractAccessLogger.
loop = asyncio.get_event_loop()
app = Application()
await loop.create_server(app.make_handler(),
'0.0.0.0', 8080)
coroutine startup()
A coroutine that will be called along with the application’s request handler.
The purpose of the method is calling on_startup signal handlers.
coroutine shutdown()
A coroutine that should be called on server stopping but before cleanup().
The purpose of the method is calling on_shutdown signal handlers.
coroutine cleanup()
A coroutine that should be called on server stopping but after shutdown().
The purpose of the method is calling on_cleanup signal handlers.
Note: The Application object has a router attribute but no add_route() method. The reason: we want to
support different router implementations (maybe even not url-matching based but traversal ones).
For that reason we have a very trivial ABC for AbstractRouter: it should have only an
AbstractRouter.resolve() coroutine.
There are no methods for adding routes or route reversing (getting a URL by route name). All those are router
implementation details (but, sure, you will need to deal with those methods after choosing the router for your
application).
Server
Server.connections
List of all currently opened connections.
Server.requests_count
Amount of processed requests.
coroutine Server.shutdown(timeout)
A coroutine that should be called to close all opened connections.
Router

UrlDispatcher.add_static(prefix, path, *, name=None, expect_handler=None, chunk_size=256*1024,
show_index=False, follow_symlinks=False, append_version=False)
Adds a router and a handler for returning static files.
Warning: Use add_static() for development only. In production, static content should be
processed by web servers like nginx or apache.
Parameters
• prefix (str) – URL path prefix for handled static files
• path – path to the folder in file system that contains handled static files, str or
pathlib.Path.
• name (str) – optional route name.
• expect_handler (coroutine) – optional expect header handler.
• chunk_size (int) – size of single chunk for file downloading, 256Kb by de-
fault.
Increasing chunk_size parameter to, say, 1Mb may increase file downloading
speed but consumes more memory.
• show_index (bool) – flag for allowing to show indexes of a directory, by
default it’s not allowed and HTTP/403 will be returned on directory access.
• follow_symlinks (bool) – flag for allowing to follow symlinks from a di-
rectory, by default it’s not allowed and HTTP/404 will be returned on access.
• append_version (bool) – flag for adding a file version (hash) to the URL query
string; this value will be used as the default when you call the StaticRoute.
url() and StaticRoute.url_for() methods.
Returns new StaticRoute instance.
add_subapp(prefix, subapp)
Register nested sub-application under given path prefix.
During resolution, if the request's path starts with prefix, then further resolving is passed to subapp.
Parameters
• prefix (str) – path’s prefix for the resource.
• subapp (Application) – nested application attached under prefix.
Returns a PrefixedSubAppResource instance.
coroutine resolve(request)
A coroutine that returns AbstractMatchInfo for request.
The method never raises an exception, but returns an AbstractMatchInfo instance with:
1. http_exception assigned to an HTTPException instance.
2. handler which raises HTTPNotFound or HTTPMethodNotAllowed on handler's execution
if there is no registered route for the request.
Middlewares can process these exceptions to render a pretty-looking error page, for example.
Used by internal machinery; end users are unlikely to need to call the method.
Note: The method uses Request.raw_path for pattern matching against registered routes.
resources()
The method returns a view of all registered resources.
The view is an object that allows to:
1. Get the size of the router table:

len(app.router.resources())

2. Iterate over registered resources:

for resource in app.router.resources():
    print(resource)

3. Check if a resource is registered in the router table:

resource in app.router.resources()
routes()
The method returns a view for all registered routes.
named_resources()
Returns a dict-like types.MappingProxyType view over all named resources.
The view maps every named resource's name to the BaseResource instance. It supports the usual
dict-like operations, except for any mutating operations (i.e. it's read-only):
len(app.router.named_resources())
"name" in app.router.named_resources()
app.router.named_resources()["name"]
Resource

A resource is an entry in the router table; routes for particular HTTP methods are attached to it:

router.add_resource(path).add_route(method, handler)

A resource with a name is called a named resource. The main purpose of a named resource is constructing a URL by
route name, e.g. for passing it into a template engine:
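url = app.router['name'].url_for().with_query({'a': 1, 'b': 2})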
Resource classes hierarchy:

AbstractResource
  Resource
    PlainResource
    DynamicResource
    StaticResource
class aiohttp.web.AbstractResource
A base class for all resources.
Inherited from collections.abc.Sized and collections.abc.Iterable.
len(resource) returns the amount of routes belonging to the resource; for route in resource allows
iterating over these routes.
name
Read-only name of resource or None.
coroutine resolve(method, path)
Resolve resource by finding appropriate web-handler for (method, path) combination.
Parameters
• method (str) – requested HTTP method.
• path (str) – path part of request.
Returns
(match_info, allowed_methods) pair.
allowed_methods is a set of HTTP methods accepted by the resource.
match_info is either a UrlMappingMatchInfo if the request is resolved, or None if no
route is found.
get_info()
A resource description, e.g. {'path': '/path/to'} or {'formatter': '/path/{to}',
'pattern': re.compile(r'^/path/(?P<to>[a-zA-Z][_a-zA-Z0-9]+)$')}
url_for(*args, **kwargs)
Construct a URL for the route with additional params.
args and kwargs depend on the parameter list accepted by the inherited resource class.
Returns URL – resulting URL instance.
class aiohttp.web.Resource
A base class for new-style resources, inherits AbstractResource.
add_route(method, handler, *, expect_handler=None)
Add a web-handler to resource.
Parameters
• method (str) – HTTP method for route. Should be one of 'GET', 'POST',
'PUT', 'DELETE', 'PATCH', 'HEAD', 'OPTIONS' or '*' for any method.
The parameter is case-insensitive, e.g. you can push 'get' as well as 'GET'.
The method should be unique for resource.
• handler (callable) – route handler.
• expect_handler (coroutine) – optional expect header handler.
Returns new ResourceRoute instance.
class aiohttp.web.PlainResource
A resource, inherited from Resource.
The class corresponds to resources with plain-text matching, '/path/to' for example.
url_for()
Returns a URL for the resource.
class aiohttp.web.DynamicResource
A resource, inherited from Resource.
The class corresponds to resources with variable matching, e.g. '/path/{to}/{param}' etc.
url_for(**params)
Returns a URL for the resource.
Parameters params – variable substitutions for the dynamic resource.
E.g. for the '/path/{to}/{param}' pattern the method should be called as
resource.url_for(to='val1', param='val2')
class aiohttp.web.StaticResource
A resource, inherited from Resource.
The class corresponds to resources for static file serving.
url_for(filename, append_version=None)
Returns a URL for file path under resource prefix.
Parameters
• filename – a file name substitution for the static file handler.
Accepts both str and pathlib.Path.
E.g. a URL for '/prefix/dir/file.txt' should be generated as
resource.url_for(filename='dir/file.txt')
• append_version (bool) – a flag for adding a file version (hash) to the URL query
string for cache boosting.
By default it takes the value from the constructor (False by default). When set to True,
a v=FILE_HASH query string parameter is added; when set to False it has no impact.
If the file is not found, it has no impact either.
class aiohttp.web.PrefixedSubAppResource
A resource for serving nested applications. The class instance is returned by add_subapp call.
url_for(**kwargs)
The call is not allowed, it raises RuntimeError.
Route
A route has an HTTP method (the wildcard '*' is an option), a web-handler and an optional expect handler.
Every route belongs to some resource.
Route classes hierarchy:

AbstractRoute
  ResourceRoute
  SystemRoute
ResourceRoute is the route used for resources, SystemRoute serves URL resolving errors like 404 Not Found
and 405 Method Not Allowed.
class aiohttp.web.AbstractRoute
Base class for routes served by UrlDispatcher.
method
HTTP method handled by the route, e.g. GET, POST etc.
handler
handler that processes the route.
name
Name of the route, always equals to name of resource which owns the route.
resource
Resource instance which holds the route, None for SystemRoute.
url_for(*args, **kwargs)
Abstract method for constructing url handled by the route.
Actually it’s a shortcut for route.resource.url_for(...).
coroutine handle_expect_header(request)
100-continue handler.
class aiohttp.web.ResourceRoute
The route class for handling different HTTP methods for Resource.
class aiohttp.web.SystemRoute
The route class for handling URL resolution errors like 404 Not Found and 405 Method Not Allowed.
status
HTTP status code
reason
HTTP status reason
RouteDef
app.router.add_routes([web.get('/get', handle_get),
                       web.post('/post', handle_post)])
class aiohttp.web.RouteDef
A definition of a route not yet added to the router table.
method
HTTP method (GET, POST etc.) (str).
path
Path to resource, e.g. /path/to. Could contain {} brackets for variable resources (str).
handler
An async function to handle HTTP request.
kwargs
A dict of additional arguments.
New in version 2.3.
aiohttp.web.get(path, handler, *, name=None, allow_head=True, expect_handler=None)
Return RouteDef for processing GET requests. See UrlDispatcher.add_get() for information about
parameters.
New in version 2.3.
aiohttp.web.post(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing POST requests. See UrlDispatcher.add_post() for information
about parameters.
New in version 2.3.
aiohttp.web.head(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing HEAD requests. See UrlDispatcher.add_head() for information
about parameters.
New in version 2.3.
aiohttp.web.put(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing PUT requests. See UrlDispatcher.add_put() for information about
parameters.
New in version 2.3.
aiohttp.web.patch(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing PATCH requests. See UrlDispatcher.add_patch() for information
about parameters.
New in version 2.3.
aiohttp.web.delete(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing DELETE requests. See UrlDispatcher.add_delete() for informa-
tion about parameters.
New in version 2.3.
aiohttp.web.view(path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing ANY requests. See UrlDispatcher.add_view() for information about
parameters.
New in version 3.0.
aiohttp.web.route(method, path, handler, *, name=None, expect_handler=None)
Return RouteDef for processing requests with the given HTTP method. See UrlDispatcher.add_route()
for information about parameters.
New in version 2.3.
RouteTableDef
A routes table definition used for describing routes by decorators (Flask style):
routes = web.RouteTableDef()
@routes.get('/get')
async def handle_get(request):
...
@routes.post('/post')
async def handle_post(request):
...
app.router.add_routes(routes)
@routes.view("/view")
class MyView(web.View):
async def get(self):
...
class aiohttp.web.RouteTableDef
A sequence of RouteDef instances (implements the collections.abc.Sequence protocol).
In addition to all standard list methods, the class also provides methods like get() and post() for adding
new route definitions.
@get(path, *, allow_head=True, name=None, expect_handler=None)
Add a new RouteDef item for registering GET web-handler.
See UrlDispatcher.add_get() for information about parameters.
@post(path, *, name=None, expect_handler=None)
Add a new RouteDef item for registering POST web-handler.
See UrlDispatcher.add_post() for information about parameters.
@head(path, *, name=None, expect_handler=None)
Add a new RouteDef item for registering HEAD web-handler.
See UrlDispatcher.add_head() for information about parameters.
@put(path, *, name=None, expect_handler=None)
Add a new RouteDef item for registering PUT web-handler.
See UrlDispatcher.add_put() for information about parameters.
@patch(path, *, name=None, expect_handler=None)
Add a new RouteDef item for registering PATCH web-handler.
See UrlDispatcher.add_patch() for information about parameters.
@delete(path, *, name=None, expect_handler=None)
Add a new RouteDef item for registering DELETE web-handler.
MatchInfo
View
class aiohttp.web.View(request)
Inherited from AbstractView.
Base class for class based views. Implementations should derive from View and override methods for handling
HTTP verbs like get() or post():
class MyView(View):
    async def get(self):
        return await get_resp(self.request)

    async def post(self):
        return await post_resp(self.request)

app.router.add_view('/view', MyView)
The view raises 405 Method Not Allowed (HTTPMethodNotAllowed) if the requested web verb is not supported.
Parameters request – instance of Request that has initiated the view processing.
request
Request sent to view’s constructor, read-only property.
Overridable coroutine methods: connect(), delete(), get(), head(), options(), patch(),
post(), put(), trace().
See also:
Class Based Views
Running Applications
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, 'localhost', 8080)
await site.start()
# wait for finish signal
await runner.cleanup()
New in version 3.0: AppRunner and TCPSite / UnixSite / SockSite are added in aiohttp 3.0
class aiohttp.web.AppRunner(app, *, handle_signals=False, **kwargs)
A runner for Application. Used with conjunction with sites to serve on specific port.
Parameters
• app (Application) – web application instance to serve.
• handle_signals (bool) – add signal handlers for signal.SIGINT and
signal.SIGTERM (False by default).
• kwargs – named parameters to pass into Application.make_handler().
app
Read-only attribute for accessing the served Application instance.
server
Low-level web Server for handling HTTP requests, read-only attribute.
sites
A read-only set of served sites (TCPSite / UnixSite / SockSite instances).
coroutine setup()
Initialize application. Should be called before adding sites.
The method calls the signals registered in Application.on_startup.
coroutine cleanup()
Stop handling all registered sites and cleanup used resources.
Application.on_shutdown and Application.on_cleanup signals are called internally.
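For reference, a minimal runnable sketch (the hello handler and the one-hour sleep loop are illustrative; a real application would wait on its own shutdown signal instead):

import asyncio
from aiohttp import web

async def hello(request):
    return web.Response(text='Hello')

app = web.Application()
app.router.add_get('/', hello)

async def main():
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, 'localhost', 8080)
    await site.start()
    try:
        # serve until cancelled; stands in for "wait for finish signal"
        while True:
            await asyncio.sleep(3600)
    finally:
        await runner.cleanup()

asyncio.get_event_loop().run_until_complete(main())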
class aiohttp.web.BaseSite
An abstract class for handled sites.
name
An identifier for the site, read-only str property. Could be a handled URL or a UNIX socket path.
coroutine start()
Start handling a site.
coroutine stop()
Stop handling a site.
class aiohttp.web.TCPSite(runner, host=None, port=None, *, shutdown_timeout=60.0, ssl_context=None,
backlog=128, reuse_address=None, reuse_port=None)
Serve a runner on TCP socket.
Parameters
• runner – a runner to serve.
• host (str) – HOST to listen on, '0.0.0.0' if None (default).
• port (int) – PORT to listen on, 8080 if None (default).
• shutdown_timeout (float) – a timeout for closing opened connections on
BaseSite.stop() call.
• ssl_context – a ssl.SSLContext instance for serving SSL/TLS secure server,
None for plain HTTP server (default).
• backlog (int) – a number of unaccepted connections that the system will allow
before refusing new connections, 128 by default. See socket.listen() for details.
• reuse_address (bool) – tells the kernel to reuse a local socket in TIME_WAIT
state, without waiting for its natural timeout to expire. If not specified will automati-
cally be set to True on UNIX.
• reuse_port (bool) – tells the kernel to allow this endpoint to be bound to the same
port as other existing endpoints are bound to, so long as they all set this flag when being
created. This option is not supported on Windows.
class aiohttp.web.UnixSite(runner, path, *, shutdown_timeout=60.0, ssl_context=None, back-
log=128)
Serve a runner on UNIX socket.
Parameters
• runner – a runner to serve.
• path (str) – PATH to UNIX socket to listen.
• shutdown_timeout (float) – a timeout for closing opened connections on
BaseSite.stop() call.
• ssl_context – a ssl.SSLContext instance for serving SSL/TLS secure server,
None for plain HTTP server (default).
• backlog (int) – a number of unaccepted connections that the system will allow
before refusing new connections, 128 by default. See socket.listen() for details.
class aiohttp.web.SockSite(runner, sock, *, shutdown_timeout=60.0, ssl_context=None, back-
log=128)
Serve a runner on an already-created socket instance.
Parameters
• runner – a runner to serve.
• sock – a socket.socket instance to listen on.
• shutdown_timeout (float) – a timeout for closing opened connections on
BaseSite.stop() call.
Utilities
class aiohttp.web.FileField
A namedtuple instance that is returned as a multidict value by Request.POST() if the field is an uploaded file.
name
Field name
filename
File name as specified by uploading (client) side.
file
An io.IOBase instance with content of uploaded file.
content_type
MIME type of uploaded file, 'text/plain' by default.
See also:
File Uploads
aiohttp.web.run_app(app, *, host=None, port=None, path=None, sock=None, shut-
down_timeout=60.0, ssl_context=None, print=print, back-
log=128, access_log_class=aiohttp.helpers.AccessLogger, ac-
cess_log_format=aiohttp.helpers.AccessLogger.LOG_FORMAT,
access_log=aiohttp.log.access_logger, handle_signals=True,
reuse_address=None, reuse_port=None)
A utility function for running an application, serving it until keyboard interrupt and performing a Graceful
shutdown.
Suitable as a handy tool for scaffolding aiohttp-based projects. A production configuration will perhaps use a more
sophisticated runner, but run_app is good enough at the very beginning stage.
The server will listen on any host or Unix domain socket path you supply. If no hosts or paths are supplied, or
only a port is supplied, a TCP server listening on 0.0.0.0 (all hosts) will be launched.
Distributing HTTP traffic to multiple hosts or paths on the same application process provides no performance
benefit as the requests are handled on the same event loop. See Server Deployment for ways of distributing work
for increased performance.
Parameters
• app – Application instance to run
• host (str) – TCP/IP host or a sequence of hosts for HTTP server. Default is '0.0.
0.0' if port has been specified or if path is not supplied.
• port (int) – TCP/IP port for HTTP server. Default is 8080 for plain text HTTP and
8443 for HTTP via SSL (when ssl_context parameter is specified).
• path (str) – file system path for HTTP server Unix domain socket. A sequence
of file system paths can be used to bind multiple domain sockets. Listening on Unix
domain sockets is not supported by all operating systems.
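For example, a minimal sketch (the handler and route are illustrative):

from aiohttp import web

async def handler(request):
    return web.Response(text='OK')

app = web.Application()
app.router.add_get('/', handler)
web.run_app(app)  # serves on 0.0.0.0:8080 until keyboard interrupt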
Constants
class aiohttp.web.ContentCoding
An enum.Enum class of available Content Codings.
deflate
DEFLATE compression
gzip
GZIP compression
aiohttp.web.identity
no compression
Middlewares
11.2.6 Logging
Access logs
Another parameter, access_log_format, may be used for specifying the log format (see below).
Format specification
The library provides a custom micro-language for specifying info about the request and response:
Option Meaning
%% The percent sign
%a Remote IP-address (IP-address of proxy if using reverse proxy)
%t Time when the request was started to process
%P The process ID of the child that serviced the request
%r First line of request
%s Response status code
%b Size of response in bytes, excluding HTTP headers
%T The time taken to serve the request, in seconds
%Tf The time taken to serve the request, in seconds with fraction in %.06f format
%D The time taken to serve the request, in microseconds
%{FOO}i request.headers['FOO']
%{FOO}o response.headers['FOO']
A drop-in replacement logger can be written by subclassing AbstractAccessLogger, e.g.:

class AccessLogger(AbstractAccessLogger):

    def log(self, request, response, time):
        self.logger.info(f'{request.remote} '
                         f'"{request.method} {request.path} '
                         f'done in {time}s: {response.status}')
Note: When Gunicorn is used for deployment, its default access log format will be automatically replaced with aiohttp's
default access log format.
If Gunicorn's access_logformat option is specified explicitly, it should use aiohttp's format specification.
Error logs
aiohttp.web uses a logger named 'aiohttp.server' to log errors raised during web request handling.
The log is enabled by default.
To use a different logger name, please specify the logger parameter (a logging.Logger instance) when performing
the aiohttp.web.Application.make_handler() call.
11.2.7 Testing
aiohttp provides a pytest plugin that makes writing web server tests extremely easy; it also provides framework-agnostic
utilities for testing with other frameworks such as unittest.
Before starting to write your tests, you may also be interested in reading how to write testable services that interact
with the loop.
To use the pytest plugin please install the pytest-aiohttp library:
$ pip install pytest-aiohttp
If you don’t want to install pytest-aiohttp for some reason you may insert pytest_plugins = 'aiohttp.
pytest_plugin' line into conftest.py instead for the same functionality.
Provisional Status
aiohttp test utils provides a scaffolding for testing aiohttp-based web servers.
It consists of two parts: running a test server and making HTTP requests to this server.
TestServer runs an aiohttp.web.Application based server, while RawTestServer starts a low-level
aiohttp.web.WebServer.
For performing HTTP requests to these servers you have to create a test client: TestClient instance.
The client encapsulates aiohttp.ClientSession by providing proxy methods to the client for common operations
such as ws_connect, get, post, etc.
Pytest
The test_client fixture available from pytest-aiohttp plugin allows you to create a client to make requests to test
your app.
A simple example would be:

from aiohttp import web

async def hello(request):
    return web.Response(text='Hello, world')

async def test_hello(test_client):
    app = web.Application()
    app.router.add_get('/', hello)
    client = await test_client(app)
    resp = await client.get('/')
    assert resp.status == 200
    text = await resp.text()
    assert 'Hello, world' in text
It also provides access to the app instance allowing tests to check the state of the app. Tests can be made even more
succinct with a fixture to create an app test client:
import pytest
from aiohttp import web

async def previous(request):
    if request.method == 'POST':
        request.app['value'] = (await request.post())['value']
        return web.Response(body=b'thanks for the data')
    return web.Response(
        body='value: {}'.format(request.app['value']).encode('utf-8'))

@pytest.fixture
def cli(loop, test_client):
    app = web.Application()
    app.router.add_get('/', previous)
    app.router.add_post('/', previous)
    return loop.run_until_complete(test_client(app))
aiohttp.test_utils.test_client(raw_server, **kwargs)
A fixture factory that creates a TestClient for access to the tested server:

async def test_f(test_client):
    app = web.Application()
    # fill route table
    client = await test_client(app)
    resp = await client.get('/')

For a raw server, the handler should be a coroutine which accepts a request and returns a response.
port is optional: the port the server is run at; if not provided a random unused port is used.
New in version 3.0.
aiohttp.test_utils.unused_port
Function to return an unused port number for IPv4 TCP protocol:

async def test_f(test_client, unused_port):
    port = unused_port()    # reserve a free port
    app = web.Application()
    # fill route table
    client = await test_client(app, server_kwargs={'port': port})
Unittest
To test applications with the standard library’s unittest or unittest-based functionality, the AioHTTPTestCase is pro-
vided:
from aiohttp.test_utils import AioHTTPTestCase, unittest_run_loop
from aiohttp import web
class MyAppTestCase(AioHTTPTestCase):

    async def get_application(self):
        """
        Override the get_app method to return your application.
        """
        async def hello(request):
            return web.Response(text='Hello, world')

        app = web.Application()
        app.router.add_get('/', hello)
        return app

    # a vanilla example
    def test_example(self):
        async def test_get_route():
            url = '/'
            resp = await self.client.request('GET', url)
            assert resp.status == 200
            text = await resp.text()
            assert 'Hello, world' in text

        self.loop.run_until_complete(test_get_route())
class aiohttp.test_utils.AioHTTPTestCase
A base class to allow for unittest web applications using aiohttp.
Derived from unittest.TestCase
Provides the following:
client
an aiohttp test client, TestClient instance.
server
an aiohttp test server, TestServer instance.
New in version 2.3.
loop
The event loop in which the application and server are running.
app
The application returned by get_app() (aiohttp.web.Application instance).
coroutine get_client()
This async method can be overridden to return the TestClient object used in the test.
Returns TestClient instance.
New in version 2.3.
coroutine get_server()
This async method can be overridden to return the TestServer object used in the test.
Returns TestServer instance.
New in version 2.3.
coroutine get_application()
This async method should be overridden to return the aiohttp.web.Application ob-
ject to test.
Returns aiohttp.web.Application instance.
coroutine setUpAsync()
This async method does nothing by default and can be overridden to execute asynchronous
code during the setUp stage of the TestCase.
New in version 2.3.
coroutine tearDownAsync()
This async method does nothing by default and can be overridden to execute asynchronous
code during the tearDown stage of the TestCase.
New in version 2.3.
setUp()
Standard test initialization method.
tearDown()
Standard test finalization method.
Note: The TestClient's methods are asynchronous: you have to execute functions on the test client using
asynchronous methods.
A basic test class wraps every test method by unittest_run_loop() decorator:
class TestA(AioHTTPTestCase):
@unittest_run_loop
async def test_f(self):
resp = await self.client.get('/')
aiohttp.test_utils.unittest_run_loop
A decorator dedicated to use with asynchronous methods of an AioHTTPTestCase.
Handles executing an asynchronous function, using the AioHTTPTestCase.loop of the
AioHTTPTestCase.
aiohttp provides a test utility for creating fake aiohttp.web.Request objects: aiohttp.test_utils.
make_mocked_request(). It can be useful for simple unit tests, like handler tests, or to simulate error
conditions that are hard to reproduce on a real server:
def handler(request):
assert request.headers.get('token') == 'x'
return web.Response(body=b'data')
def test_handler():
req = make_mocked_request('GET', '/', headers={'token': 'x'})
resp = handler(req)
assert resp.body == b'data'
If it’s preferred to handle the creation / teardown on a more granular basis, the TestClient object can be used directly:
from aiohttp.test_utils import TestClient, loop_context

with loop_context() as loop:
    client = TestClient(app, loop=loop)
    loop.run_until_complete(client.start_server())
    loop.run_until_complete(test_get_route())
    loop.run_until_complete(client.close())
A full list of the utilities provided can be found at the API reference.
Some libraries like motor, aioes and others depend on the asyncio loop for executing the code. When running your
normal program, these libraries pick the main event loop by doing asyncio.get_event_loop. The problem
during testing is that there is no main loop assigned because an independent loop for each test is created without
assigning it as the main one.
This raises a problem when those libraries try to find it. Luckily, the ones that are well written, allow passing the loop
explicitly. Let’s have a look at the aioes client signature:
def __init__(self, endpoints, *, loop=None, **kwargs)
As you can see, there is an optional loop kwarg. Of course, we are not going to test the aioes client directly, but
our service that depends on it will be. So, if we want our AioESService to be easily testable, we should define it as
follows:
import asyncio
from aioes import Elasticsearch

class AioESService:

    def __init__(self, loop=None):
        self.es = Elasticsearch(['http://localhost:9200'], loop=loop)

    async def get_info(self):
        cluster_info = await self.es.info()
        print(cluster_info)

if __name__ == "__main__":
    client = AioESService()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(client.get_info())
Note that it accepts an optional loop kwarg. This does not affect the normal flow of execution, because we can still
call the service without passing the loop explicitly while a main loop is available. The problem comes when you try to
write a test like:
import pytest

from main import AioESService


class TestAioESService:

    async def test_get_info(self):
        cluster_info = await AioESService().get_info()
        assert 'version' in cluster_info
If you try to run the test, it will fail with a similar error:
...
RuntimeError: There is no current event loop in thread 'MainThread'.
If you check the stack trace, you will see aioes is complaining that there is no current event loop in the main thread.
Pass an explicit loop to solve it.
If you rely on code which works with implicit loops only you may try to use hackish approach from FAQ.
Test server
scheme
A scheme for the tested application: 'http' for a plain server and 'https' for a TLS-encrypted one.
host
host used to start a test server.
port
port used to start the test server.
handler
aiohttp.web.WebServer used for serving HTTP requests.
server
asyncio.AbstractServer used for managing accepted connections.
coroutine start_server(loop=None, **kwargs)
Start a test server.
Parameters loop (asyncio.AbstractEventLoop) – the event loop to use
coroutine close()
Stop the running test server and clean it up.
make_url(path)
Return an absolute URL for given path.
class aiohttp.test_utils.RawTestServer(handler, *, scheme="http", host='127.0.0.1')
Low-level test server (derived from BaseTestServer).
Parameters
• handler – a coroutine for handling web requests. The handler should
accept aiohttp.web.BaseRequest and return a response instance, e.g.
StreamResponse or Response.
The handler could raise HTTPException as a signal for non-200 HTTP response.
• scheme (str) – HTTP scheme, non-protected "http" by default.
• host (str) – a host for TCP socket, IPv4 local host ('127.0.0.1') by default.
• port (int) – optional port for TCP socket, if not provided a random unused port is
used.
New in version 3.0.
class aiohttp.test_utils.TestServer(app, *, scheme="http", host='127.0.0.1')
Test server (derived from BaseTestServer) for starting Application.
Parameters
• app – aiohttp.web.Application instance to run.
• scheme (str) – HTTP scheme, non-protected "http" by default.
• host (str) – a host for TCP socket, IPv4 local host ('127.0.0.1') by default.
• port (int) – optional port for TCP socket, if not provided a random unused port is
used.
New in version 3.0.
app
aiohttp.web.Application instance to run.
Test Client
Utilities
aiohttp.test_utils.make_mocked_coro(return_value)
Creates a coroutine mock.
Behaves like a coroutine which returns return_value. But it is also a mock object, so you can make assertions
on it as with a usual Mock:
mocked = make_mocked_coro(1)
assert 1 == await mocked(1, 2)
mocked.assert_called_with(1, 2)
Parameters return_value – A value that the mock object will return when called.
Returns A mock object that behaves as a coroutine which returns return_value when called.
aiohttp.test_utils.unused_port()
Return an unused port number for IPv4 TCP protocol.
Return int ephemeral port number which could be reused by test server.
aiohttp.test_utils.loop_context(loop_factory=<function asyncio.new_event_loop>)
A contextmanager that creates an event_loop, for test purposes.
Handles the creation and cleanup of a test loop.
aiohttp.test_utils.setup_test_loop(loop_factory=<function asyncio.new_event_loop>)
Create and return an asyncio.AbstractEventLoop instance.
The caller should also call teardown_test_loop, once they are done with the loop.
aiohttp.test_utils.teardown_test_loop(loop)
Teardown and cleanup an event_loop created by setup_test_loop.
Parameters loop (asyncio.AbstractEventLoop) – the loop to teardown
Standalone
Nginx+supervisord
Nginx configuration
Here is a short extract about writing an Nginx configuration file. It does not cover all available Nginx options.
For the full reference, read the Nginx tutorial and the official Nginx documentation.
First configure HTTP server itself:
http {
server {
listen 80;
client_max_body_size 4G;
server_name example.com;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
proxy_buffering off;
proxy_pass http://aiohttp;
}
location /static {
# path for static files
root /path/to/app/static;
}
}
}
This config listens on port 80 for a server named example.com and redirects everything to the aiohttp backend group.
It also serves static files from the /path/to/app/static path as example.com/static.
Next we need to configure the aiohttp upstream group:

http {
  upstream aiohttp {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response

    # Unix domain sockets are used in this example due to their high performance,
    # but TCP/IP sockets could be used instead:
    # server 127.0.0.1:8081 fail_timeout=0;
    # server 127.0.0.1:8082 fail_timeout=0;
    # server 127.0.0.1:8083 fail_timeout=0;
    # server 127.0.0.1:8084 fail_timeout=0;

    server unix:/tmp/example_1.sock fail_timeout=0;
    server unix:/tmp/example_2.sock fail_timeout=0;
    server unix:/tmp/example_3.sock fail_timeout=0;
    server unix:/tmp/example_4.sock fail_timeout=0;
  }
}
}
All HTTP requests for http://example.com except ones for http://example.com/static will be redirected
to the example_1.sock, example_2.sock, example_3.sock or example_4.sock backend servers. By
default, Nginx uses the round-robin algorithm for backend selection.
Note: Nginx is not the only existing reverse proxy server, but it is the most popular one. Alternatives like HAProxy may
be used as well.
Supervisord
After configuring Nginx we need to start our aiohttp backends. It is better to use some tool to start them automatically
after a system reboot or a backend crash.
There are many ways to do it: Supervisord, Upstart, Systemd, Gaffer, Circus, Runit etc.
Here we’ll use Supervisord for example:
[program:aiohttp]
numprocs = 4
numprocs_start = 1
process_name = example_%(process_num)s

; Unix socket paths are specified on the command line
command=/path/to/aiohttp_example.py --path=/tmp/example_%(process_num)s.sock

user=nobody
autostart=true
autorestart=true
aiohttp server
The last step is preparing the aiohttp server to work with supervisord.
Assuming we have a properly configured aiohttp.web.Application and the socket path or port is specified on the
command line, the task is trivial:
# aiohttp_example.py
import argparse

from aiohttp import web

parser = argparse.ArgumentParser(description="aiohttp server example")
parser.add_argument('--path')
parser.add_argument('--port')


if __name__ == '__main__':
    app = web.Application()
    # configure app

    args = parser.parse_args()
    web.run_app(app, path=args.path, port=args.port)
For real use cases we may need to configure other things like logging etc., but that's out of the scope of this topic.
Nginx+Gunicorn
aiohttp can be deployed using Gunicorn, which is based on a pre-fork worker model. Gunicorn launches your app as
worker processes for handling incoming requests.
In contrast to deployment with bare Nginx, this solution does not require manually running several aiohttp processes and
using a tool like supervisord to monitor them. But nothing is for free: running an aiohttp application under gunicorn is
slightly slower.
Prepare environment
You first need to set up your deployment environment. This example is based on Ubuntu 14.04.
Create a directory for your application.
Ubuntu has a bug in pyenv, so to create a virtualenv you need to do some extra manipulation.
Once the virtual environment is ready, install aiohttp and gunicorn into it.
Application
Let's write a simple application and save it to a file. We'll name this file my_app_module.py:

# my_app_module.py
from aiohttp import web

async def index(request):
    return web.Response(text="Welcome home!")


my_web_app = web.Application()
my_web_app.router.add_get('/', index)
Start Gunicorn
When Running Gunicorn, you provide the name of the module, i.e. my_app_module, and the name of the app, i.e.
my_web_app, along with other Gunicorn Settings provided as command line flags or in your config file.
In this case, we will use:
• the '--bind' flag to set the server's socket address;
• the '--worker-class' flag to tell Gunicorn that we want to use a custom worker subclass instead of one of the
Gunicorn default worker types;
• you may also want to use the '--workers' flag to tell Gunicorn how many worker processes to use for handling
requests. (See the documentation for recommendations on How Many Workers?)
The custom worker subclass is defined in aiohttp.GunicornWebWorker and should be used instead of the gaiohttp
worker provided by Gunicorn, which supports only aiohttp.wsgi applications:

$ gunicorn my_app_module:my_web_app --bind localhost:8080 --worker-class aiohttp.GunicornWebWorker
[2015-03-11 18:27:21 +0000] [1253] [INFO] Booting worker with pid: 1253
Gunicorn is now running and ready to serve requests to your app’s worker processes.
Note: If you want to use an alternative asyncio event loop uvloop, you can use the aiohttp.
GunicornUVLoopWebWorker worker class.
More information
The Gunicorn documentation recommends deploying Gunicorn behind an Nginx proxy server. See the official docu-
mentation for more information about suggested nginx configuration.
Logging configuration
aiohttp and gunicorn use different formats for specifying the access log.
For more information please read Format Specification for Access Log.
11.3 Utilities
Abstract routing
coroutine expect_handler(request)
Abstract method for handling 100-continue processing.
For class based view support aiohttp has the abstract AbstractView class, which is awaitable (may be used like
await Cls() or yield from Cls()) and has a request as an attribute.
class aiohttp.AbstractView
An abstract class, base for all class based views implementations.
Methods __iter__ and __await__ should be overridden.
request
aiohttp.web.Request instance for performing the request.
class aiohttp.abc.AbstractCookieJar
The cookie jar instance is available as ClientSession.cookie_jar.
The jar contains Morsel items for storing internal cookie data.
API provides a count of saved cookies:
len(session.cookie_jar)
class aiohttp.abc.AbstractAccessLogger
An abstract class, base for all RequestHandler access_logger implementations
aiohttp supports a full featured multipart reader and writer. Both are designed with streaming processing in mind to
avoid an unwanted memory footprint, which may be significant if you're dealing with large payloads; but this also means
that most I/O operations can be executed only once.
Assume you made a request, as usual, and want to process the response multipart data:
First, you need to wrap the response with MultipartReader.from_response(). This keeps the
implementation of MultipartReader separate from the response and the connection routines, which makes it
more portable:
reader = aiohttp.MultipartReader.from_response(resp)
Let’s assume with this response you’d received some JSON document and multiple files for it, but you don’t need all
of them, just a specific one.
So first you need to enter into a loop where the multipart body will be processed:
metadata = None
filedata = None
while True:
part = await reader.next()
The returned type depends on what the next part is: if it's a simple body part, you'll get a BodyPartReader
instance here; otherwise, it will be another MultipartReader instance for the nested multipart. Remember that the
multipart format is recursive and supports multiple levels of nested body parts. When there are no more parts left to
fetch, None is returned - that's the signal to break the loop:
if part is None:
break
Both BodyPartReader and MultipartReader provide access to the body part headers: this allows you to filter
parts by their attributes:

if part.headers[aiohttp.hdrs.CONTENT_TYPE] == 'application/json':
    metadata = await part.json()
    continue

Neither BodyPartReader nor MultipartReader reads the whole body part data unless explicitly asked to.
BodyPartReader provides a set of helper methods to fetch popular content types in a friendly way:
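For example, a sketch of the helpers (assuming part is a BodyPartReader; remember that each part can be read only once, so pick the single helper you need):

data = await part.read(decode=True)   # bytes, decoded per Content-Encoding
text = await part.text()              # str, decoded using the part charset
obj = await part.json()               # parsed JSON document
form = await part.form()              # form URL-encoded fields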
If the current body part does not match your expectations and you want to skip it, just continue the loop to start its next
iteration. Here is where the magic happens: before fetching the next body part, await reader.next() ensures
that the previous one was read completely. If it was not, all of its content is drained in order to fetch the next part,
so you don't have to care about cleanup routines while you're within the loop.
Once you've found the part for the file you were searching for, just read it. Let's handle it as-is without applying any
decoding magic:
filedata = await part.read(decode=False)
Later you may decide to decode the data. It’s still simple and possible to do:
filedata = part.decode(filedata)
Once you are done with multipart processing, just break a loop:
break
MultipartWriter provides an interface to build multipart payload from the Python data and serialize it into
chunked binary stream. Since multipart format is recursive and supports deeply nesting, you can use with statement
to design your multipart data closer to how it will be:
with aiohttp.MultipartWriter('mixed') as mpwriter:
...
with aiohttp.MultipartWriter('related') as subwriter:
...
mpwriter.append(subwriter)
MultipartWriter.append() is used to join new body parts into a single stream. It accepts various inputs
and determines which default headers should be used for them.
For text data default Content-Type is text/plain; charset=utf-8:
mpwriter.append('hello')
mpwriter.append(b'aiohttp')
You can always override these defaults by passing your own headers as the second argument:
mpwriter.append(io.BytesIO(b'GIF89a...'),
{'CONTENT-TYPE': 'image/gif'})
For file objects the Content-Type will be determined by using Python's mimetypes module, and additionally the
Content-Disposition header will include the file's basename:
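For example (the file name is illustrative):

part = mpwriter.append(open('report.pdf', 'rb'))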
If you want to send a file with a different name, just handle the BodyPartWriter instance which
MultipartWriter.append() will always return and set Content-Disposition explicitly by using the
BodyPartWriter.set_content_disposition() helper:
part.set_content_disposition('attachment', filename='secret.txt')
part.headers[aiohttp.hdrs.CONTENT_ID] = 'X-12345'
If you set Content-Encoding, it will be automatically applied to the data on serialization (see below):
part.headers[aiohttp.hdrs.CONTENT_ENCODING] = 'gzip'
There are also append_json() and append_form() helpers, which are useful for sending JSON and form
URL-encoded data:

mpwriter.append_json({'test': 'passed'})
mpwriter.append_form([('key', 'value')])
When it's done, to make a request just pass the root MultipartWriter instance as the data argument of
aiohttp.ClientSession.request():
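For example (assuming an open ClientSession named session):

await session.post('http://example.com', data=mpwriter)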
Behind the scenes MultipartWriter.serialize() will yield chunks of every part and if body part has
Content-Encoding or Content-Transfer-Encoding they will be applied on streaming content.
Please note that on MultipartWriter.serialize() all the file objects will be read to the end, and there is
no way to repeat the request without rewinding their pointers to the start.
Hacking Multipart
The Internet is full of terror, and sometimes you may find a server which implements multipart support in strange ways,
where an obvious solution does not work.
For instance, if the server uses cgi.FieldStorage then you have to ensure that no body part contains a Content-
Length header:
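A sketch of stripping the header from each appended part (this assumes iterating the writer yields the part payloads with a headers mapping):

for part in mpwriter:
    part.headers.pop(aiohttp.hdrs.CONTENT_LENGTH, None)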
On the other hand, some servers may require you to specify Content-Length for the whole multipart request. aiohttp does
not do that since it sends multipart using chunked transfer encoding by default. To overcome this issue, you have to
serialize the MultipartWriter on your own in a way that calculates its size:
body = b''.join(mpwriter.serialize())
await session.post('http://example.com',
                   data=body, headers=mpwriter.headers)
Sometimes the server response may not be well formed: it may or may not contain nested parts. For instance, we
request a resource which returns JSON documents with files attached to them. If a document has any attachments,
they are returned as a nested multipart. If it has none, it responds with plain body parts:
--:
CONTENT-TYPE: application/json
{"_id": "foo"}
--:
CONTENT-TYPE: multipart/related; boundary=----:
----:
CONTENT-TYPE: application/json
{"_id": "bar"}
----:
CONTENT-TYPE: text/plain
CONTENT-DISPOSITION: attachment; filename=bar.txt
{"_id": "boo"}
--:
CONTENT-TYPE: multipart/related; boundary=----:
----:
CONTENT-TYPE: application/json
{"_id": "baz"}
----:
CONTENT-TYPE: text/plain
CONTENT-DISPOSITION: attachment; filename=baz.txt
Reading this kind of data in a single stream is possible, but it is not clean at all:
result = []
while True:
part = await reader.next()
if part is None:
break
if isinstance(part, aiohttp.MultipartReader):
# Fetching files
while True:
filepart = await part.next()
if filepart is None:
break
result[-1].append((await filepart.read()))
else:
# Fetching document
result.append([(await part.json())])
Let's hack a reader in the way to return pairs of document and reader of the related files on each iteration:

class PairsMultipartReader(aiohttp.MultipartReader):

    # keep a reference to the nested multipart reader class
    multipart_reader_cls = aiohttp.MultipartReader

    async def next(self):
        """Emits a tuple of document object (dict) and multipart reader
        of the related attachments (if any).

        :rtype: tuple
        """
        reader = await super().next()

        if self._at_eof:
            return None, None

        if isinstance(reader, self.multipart_reader_cls):
            part = await reader.next()
            doc = await part.json()
        else:
            doc = await reader.json()

        return doc, reader
reader = PairsMultipartReader.from_response(resp)
result = []
while True:
doc, files_reader = await reader.next()
if doc is None:
break
files = []
while True:
filepart = await files_reader.next()
if filepart is None:
break
files.append((await filepart.read()))
result.append((doc, files))
See also:
Multipart reference
boundary
The string (str) representation of the boundary.
Changed in version 3.0: Property type was changed from bytes to str.
append(obj, headers=None)
Append an object to writer.
append_payload(payload)
Adds a new body part to multipart writer.
append_json(obj, headers=None)
Helper to append JSON part.
append_form(obj, headers=None)
Helper to append form urlencoded part.
size
Size of the payload.
coroutine write(writer)
Write body.
Reading Methods
coroutine StreamReader.read(n=-1)
Read up to n bytes. If n is not provided, or set to -1, read until EOF and return all read bytes.
If the EOF was received and the internal buffer is empty, return an empty bytes object.
Parameters n (int) – how many bytes to read, -1 for the whole stream.
Return bytes the given data
coroutine StreamReader.readany()
Read next data portion for the stream.
Returns immediately if the internal buffer has data.
Return bytes the given data
coroutine StreamReader.readexactly(n)
Read exactly n bytes.
Raise an asyncio.IncompleteReadError if the end of the stream is reached before n can be read, the
asyncio.IncompleteReadError.partial attribute of the exception contains the partial read bytes.
Parameters n (int) – how many bytes to read.
Return bytes the given data
coroutine StreamReader.readline()
Read one line, where “line” is a sequence of bytes ending with \n.
If EOF is received, and \n was not found, the method will return the partial read bytes.
If the EOF was received and the internal buffer is empty, return an empty bytes object.
Return bytes the given line
coroutine StreamReader.readchunk()
Read a chunk of data as it was received by the server.
Returns a tuple of (data, end_of_HTTP_chunk).
When chunked transfer encoding is used, end_of_HTTP_chunk is a bool indicating if the end of the data
corresponds to the end of a HTTP chunk, otherwise it is always False.
Return tuple[bytes, bool] a chunk of data and a bool that is True when the end of the returned
chunk corresponds to the end of a HTTP chunk.
Also there are methods for iterating over data chunks with maximum size limit and over any available data.
async-for StreamReader.iter_chunked(n)
Iterates over data chunks with maximum size limit:
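For example (resp is a ClientResponse):

async for data in resp.content.iter_chunked(1024):
    print(data)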
async-for StreamReader.iter_any()
Iterates over data chunks in the order they arrive into the stream:
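For example:

async for data in resp.content.iter_any():
    print(data)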
async-for StreamReader.iter_chunks()
Iterates over data chunks as received from the server:

If chunked transfer encoding is used, the original HTTP chunk formatting can be retrieved by reading the second
element of the returned tuples:

buffer = b""

async for data, end_of_http_chunk in resp.content.iter_chunks():
    buffer += data
    if end_of_http_chunk:
        print(buffer)
        buffer = b""
Helpers
StreamReader.exception()
Get the exception occurred on data reading.
StreamReader.is_eof()
Return True if EOF was reached.
The internal buffer may be non-empty at the moment.
See also:
StreamReader.at_eof()
StreamReader.at_eof()
Return True if the buffer is empty and EOF was reached.
StreamReader.read_nowait(n=None)
Returns data from internal buffer if any, empty bytes object otherwise.
Raises RuntimeError if other coroutine is waiting for stream.
Parameters n (int) – how many bytes to read, -1 for the whole internal buffer.
Return bytes the given data
StreamReader.unread_data(data)
Rollback reading some data from stream, inserting it to buffer head.
Parameters data (bytes) – data to push back into the stream.
coroutine StreamReader.wait_eof()
Wait for EOF. The already received data remains accessible by upcoming read calls.
11.3.5 Signals
freeze()
Freeze the list. After the call any content modification is forbidden.
class aiohttp.WSCloseCode
An IntEnum for keeping close message code.
OK
A normal closure, meaning that the purpose for which the connection was established has been fulfilled.
GOING_AWAY
An endpoint is “going away”, such as a server going down or a browser having navigated away from a
page.
PROTOCOL_ERROR
An endpoint is terminating the connection due to a protocol error.
UNSUPPORTED_DATA
An endpoint is terminating the connection because it has received a type of data it cannot accept (e.g., an
endpoint that understands only text data MAY send this if it receives a binary message).
INVALID_TEXT
An endpoint is terminating the connection because it has received data within a message that was not
consistent with the type of the message (e.g., non-UTF-8 RFC 3629 data within a text message).
POLICY_VIOLATION
An endpoint is terminating the connection because it has received a message that violates its policy.
This is a generic status code that can be returned when there is no other more suitable status code (e.g.,
unsupported_data or message_too_big) or if there is a need to hide specific details about the
policy.
MESSAGE_TOO_BIG
An endpoint is terminating the connection because it has received a message that is too big for it to
process.
MANDATORY_EXTENSION
An endpoint (client) is terminating the connection because it has expected the server to negotiate one or
more extension, but the server did not return them in the response message of the WebSocket handshake.
The list of extensions that are needed should appear in the /reason/ part of the Close frame. Note that this
status code is not used by the server, because it can fail the WebSocket handshake instead.
INTERNAL_ERROR
A server is terminating the connection because it encountered an unexpected condition that prevented it
from fulfilling the request.
SERVICE_RESTART
The service is restarting. A client may reconnect, and if it chooses to do so, should reconnect using a random-
ized delay of 5-30 seconds.
TRY_AGAIN_LATER
The service is experiencing overload. A client should only connect to a different IP (when there are
multiple for the target) or reconnect to the same IP upon user action.
class aiohttp.WSMsgType
An IntEnum for describing WSMessage type.
CONTINUATION
A mark for continuation frame, user will never get the message with this type.
TEXT
Text message, the value has str type.
BINARY
Binary message, the value has bytes type.
PING
Ping frame (sent by client peer).
PONG
Pong frame, answer on ping. Sent by server peer.
CLOSE
Close frame.
CLOSED
Not an actual frame but a flag indicating that the websocket connection was closed.
ERROR
Not an actual frame but a flag indicating that an error was received over the websocket connection.
class aiohttp.WSMessage
Websocket message, returned by .receive() calls.
type
Message type, WSMsgType instance.
data
Message payload.
1. str for WSMsgType.TEXT messages.
2. bytes for WSMsgType.BINARY messages.
3. WSCloseCode for WSMsgType.CLOSE messages.
4. bytes for WSMsgType.PING messages.
5. bytes for WSMsgType.PONG messages.
extra
Additional info, str.
Makes sense only for WSMsgType.CLOSE messages, contains optional message description.
json(*, loads=json.loads)
Returns parsed JSON data.
Parameters loads – optional JSON decoder function.
11.4 FAQ
11.4.1 Are there any plans for @app.route decorator like in Flask?
Yes: route tables provide Flask-style decorators via RouteTableDef:
routes = web.RouteTableDef()
@routes.get('/get')
async def handle_get(request):
...
@routes.post('/post')
async def handle_post(request):
...
app.router.add_routes(routes)
If you're planning to write big applications, you may want to consider using nested applications. They act like a Flask
Blueprint or the Django application concept.
Using nested applications you can add sub-applications to the main application.
See: Nested applications.
11.4.3 How to create a route that catches URLs with a given prefix?
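A sketch (sink_handler is a hypothetical catch-all handler):

app.router.add_route('*', '/path/to/{tail:.+}', sink_handler)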
Where the first argument, star, means catching any possible method (GET, POST, OPTIONS, etc.), the second matches
URLs with the desired prefix, and the third is the handler.
The aiohttp.web.Application object supports the dict interface and is the right place to store your database
connections or any other resource you want to share between handlers. Take a look at the following example:
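A minimal sketch (create_connection stands in for your database driver's connect call):

async def handler(request):
    db = request.app['db']  # the shared resource stored below
    return web.Response(text='OK')

async def init_app():
    app = web.Application()
    app['db'] = await create_connection(user='user', password='123')
    app.router.add_get('/', handler)
    return app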
As of aiohttp v0.18.0 we dropped support for Python 3.3 up to 3.4.1. The main reason for that is the object.
__del__() method, which is fully working since Python 3.4.1 and we need it for proper resource closing.
The last Python 3.3, 3.4.0 compatible version of aiohttp is v0.17.4.
This should not be an issue for most aiohttp users (for example Ubuntu 14.04.3 LTS provides python upgraded to
3.4.3), however libraries depending on aiohttp should consider this and either freeze aiohttp version or drop Python
3.3 support as well.
As of aiohttp v1.0.0 we also dropped support for Python versions older than 3.4.2, because loop.is_closed appeared in
3.4.2.
Again, this should not be an issue as of summer 2016, since all major distributions have switched to Python 3.5 by now.
11.4.6 How can a middleware store data for a web-handler to use later?
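Since Request supports a dict-like (MutableMapping) interface, a middleware can stash data on the request and the handler can read it back. A sketch (load_user is hypothetical):

from aiohttp import web

@web.middleware
async def auth_middleware(request, handler):
    request['user'] = await load_user(request)  # hypothetical lookup
    return await handler(request)

async def profile(request):
    user = request['user']  # stored by the middleware above
    return web.Response(text=str(user))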
A related question is how to read from a websocket and push data to it in parallel. The most natural way to do this is to
create a separate task for pub/sub handling.
Parallel aiohttp.web.WebSocketResponse.receive() calls are forbidden; only a single task should per-
form websocket reading.
But other tasks may use the same websocket object for sending data to the peer:
But other tasks may use the same websocket object for sending data to peer:
ws = web.WebSocketResponse()
await ws.prepare(request)
task = request.app.loop.create_task(
read_subscription(ws,
request.app['redis']))
try:
async for msg in ws:
# handle incoming messages
# use ws.send_str() to send data back
...
finally:
task.cancel()
where read_subscription() may look like the following (process_message and the redis client are illustrative):

async def read_subscription(ws, redis):
    channel, = await redis.subscribe('channel:1')
    try:
        async for msg in channel.iter():
            answer = process_message(msg)
            ws.send_str(answer)
    finally:
        await redis.unsubscribe('channel:1')
async def echo_handler(request):
    ws = web.WebSocketResponse()
    user_id = authenticate_user(request)
    await ws.prepare(request)
    request.app['websockets'][user_id].add(ws)
    try:
        async for msg in ws:
            ws.send_str(msg.data)
    finally:
        request.app['websockets'][user_id].remove(ws)
    return ws
async def logout_handler(request):
    user_id = authenticate_user(request)
    ws_closers = [ws.close()
                  for ws in request.app['websockets'][user_id]
                  if not ws.closed]
    # Watch out, this will keep us from returning the response
    # until all are closed
    ws_closers and await asyncio.gather(*ws_closers)
    return web.Response(text='OK')
def main():
loop = asyncio.get_event_loop()
app = web.Application(loop=loop)
app.router.add_route('GET', '/echo', echo_handler)
app.router.add_route('POST', '/logout', logout_handler)
app['websockets'] = defaultdict(set)
web.run_app(app, host='localhost', port=8080)
If your system has several IP interfaces you may choose one which will be used to bind the socket locally:
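For example (the address is illustrative):

conn = aiohttp.TCPConnector(local_addr=('192.168.0.1', 0))
session = aiohttp.ClientSession(connector=conn)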
See also:
aiohttp.TCPConnector and local_addr parameter.
11.4.10 How to use aiohttp test features with code which works with implicit loop?
Passing an explicit loop everywhere is the recommended way. But sometimes, when you have many nested, not-so-well-
written services, this is impossible.
There is a technique based on monkey-patching your low-level service that depends on aioes, to inject the loop at
that level. This way, you just need your AioESService with the loop in its signature. An example would be the
following:
import pytest
from unittest.mock import patch, MagicMock

from main import AioESService, create_app


class TestAcceptance:

    async def test_get(self, test_client, loop):
        with patch("main.AioESService", MagicMock(
                side_effect=lambda *args, **kwargs: AioESService(
                    *args, **kwargs, loop=loop))):
            client = await test_client(create_app)
            resp = await client.get("/")
            assert resp.status == 200
Note how we are patching AioESService with an instance of itself but adding the explicit loop as an extra argument
(you need to load the loop fixture in your test signature).
The final code to test all this (you will need a local instance of elasticsearch running):

import asyncio

from aioes import Elasticsearch
from aiohttp import web


class AioESService:

    def __init__(self, loop=None):
        self.es = Elasticsearch(['http://localhost:9200'], loop=loop)

    async def get_info(self):
        return await self.es.info()


class MyService:

    def __init__(self):
        self.aioes_service = AioESService()

    async def get_es_info(self):
        return await self.aioes_service.get_info()


async def hello_aioes(request):
    cluster_info = await MyService().get_es_info()
    return web.Response(text=str(cluster_info))


def create_app(loop=None):
    app = web.Application(loop=loop)
    app.router.add_route('GET', '/', hello_aioes)
    return app


if __name__ == "__main__":
    web.run_app(create_app())

plus the TestAioESService and TestAcceptance test classes shown above.
Note how we are using the side_effect feature for injecting the loop into the AioESService.__init__ call.
The use of *args, **kwargs is mandatory in order to propagate the arguments being used by the caller.
11.4.12 How to enable gzip compression globally for the whole application?
It's impossible. Choosing what to compress, and where not to apply such a time-consuming operation, is a very tricky
matter.
If you need global compression, write your own custom middleware, or enable compression in NGINX (you are deploying
aiohttp behind a reverse proxy, aren't you?).
aiohttp.ClientSession should be created once for the lifetime of the server in order to benefit from connection
pooling.
A session saves cookies internally. If you don't need cookie processing, use aiohttp.DummyCookieJar. If
you need separate cookies for different HTTP calls but want to process them in logical chains, use a single aiohttp.
TCPConnector with separate client sessions and connector_owner=False.
Restricting access from a subapplication to the main (or outer) app is a deliberate choice.
A subapplication is an isolated unit by design. If you need to share a database object, please do it explicitly:
subapp['db'] = mainapp['db']
mainapp.add_subapp('/prefix', subapp)
11.5 Miscellaneous
Helpful pages.
11.5.1 Essays
Rationale
The first generation (v1) of the router mapped a (method, path) pair to a web-handler. The mapping is named a route.
Routes used to have unique names if any.
The main mistake with the design is coupling the route to the (method, path) pair, while really URL construction
operates with resources (location is a synonym). The HTTP method is not part of the URI but is applied on sending the
HTTP request only.
Having different route names for the same path is confusing. Moreover, named routes constructed for the same path
should have unique non-overlapping names, which is cumbersome in certain situations.
On the other hand, sometimes it's desirable to bind several HTTP methods to the same web handler. For the v1 router this
can be solved by passing '*' as the HTTP method. Class based views usually require the '*' method as well.
Implementation
Users still may use a wildcard for accepting all HTTP methods (maybe we will add something like resource.
add_wildcard(handler) later).
Since names now belong to resources, app.router['name'] returns a resource instance instead of aiohttp.
web.Route.
Backward compatibility
Since aiohttp 1.1 the library uses yarl for URL processing.
New API
The method returns a relative URL; an absolute URL may be constructed by a request.url.join(request.
url_for(...)) call.
URL encoding
Both human-readable and percent-encoded paths are accepted, so the following two registrations point to the same
resource:
app.router.add_get('/путь', handler)
and:
app.router.add_get('/%D0%BF%D1%83%D1%82%D1%8C', handler)
Sub-Applications
Sub-applications are designed for solving the problem of the big monolithic code base. Let's assume we have a project
with its own business logic plus tools like an administration panel and debug toolbar.
The administration panel is a separate application by its own nature, but all its URLs are served under a prefix like /admin.
Thus we'll create a totally separate application named admin and connect it to the main app with a prefix:
admin = web.Application()
# setup admin routes, signals and middlewares
app.add_subapp('/admin/', admin)
Url reversing
URL reversing for sub-applications should generate URLs with the proper prefix.
For getting the URL, the sub-application's router should be used:
admin = web.Application()
admin.router.add_get('/resource', handler, name='name')
app.add_subapp('/admin/', admin)
url = admin.router['name'].url_for()
Application freezing
An application can be used either as a main app (app.make_handler()) or as a sub-application, not both at the
same time.
After connecting an application by an .add_subapp() call or starting to serve it as a top-level application, the
application is frozen.
This means that registering new routes, signals and middlewares is forbidden. Changing the state (app['name'] =
'value') of a frozen application is deprecated and will eventually be removed.
Migration to 2.x
Client
chunking
aiohttp does not support custom chunking sizes. It is up to the developer to decide how to chunk data streams. If
chunking is enabled, aiohttp encodes the provided chunks in the “Transfer-encoding: chunked” format.
aiohttp does not enable chunked encoding automatically even if a transfer-encoding header is supplied: chunked has
to be set explicitly. If chunked is set, then the Transfer-encoding and content-length headers are disallowed.
compression
Compression has to be enabled explicitly with the compress parameter. If compression is enabled, adding a content-
encoding header is not allowed. Compression also enables the chunked transfer-encoding. Compression can not be
combined with a Content-Length header.
Client Connector
1. By default a connector object manages the total number of concurrent connections. This limit was a per-host rule
in version 1.x. In 2.x, the limit parameter defines how many concurrent connections a connector can open, and a
new limit_per_host parameter defines the limit per host. By default there is no per-host limit.
2. BaseConnector.close is now a normal function as opposed to coroutine in version 1.x
3. BaseConnector.conn_timeout was moved to ClientSession
ClientResponse.release
Internal implementation was significantly redesigned. It is not required to call release on the response object. When
the client fully receives the payload, the underlying connection automatically returns back to the pool. If the payload is
not fully read, the connection is closed.
Client exceptions
Exception hierarchy has been significantly modified. aiohttp now defines only exceptions that covers connection
handling and server response misbehaviors. For developer specific mistakes, aiohttp uses python standard exceptions
like ValueError or TypeError.
Reading a response's content may raise a ClientPayloadError exception. This exception indicates errors specific to the
payload encoding, such as invalid compressed data, malformed chunked-encoded chunks, or not enough data to
satisfy the content-length header.
All exceptions are moved from aiohttp.errors module to top level aiohttp module.
New hierarchy of exceptions:
• ClientError - Base class for all client specific exceptions
– ClientResponseError - exceptions that could happen after we get response from server
To unify form-data/payload handling a new Payload system was introduced. It provides customized handling of existing
types and an implementation for user-defined types.
1. FormData.__call__ does not take an encoding arg anymore and its return value changes from an iterator or
bytes to a Payload instance. aiohttp provides payload adapters for some standard types like str, bytes, io.IOBase,
StreamReader or DataQueue.
2. a generator is not supported as data provider anymore, streamer can be used instead. For example, to upload
data from file:
@aiohttp.streamer
def file_sender(writer, file_name=None):
    with open(file_name, 'rb') as f:
        chunk = f.read(2**16)
        while chunk:
            yield from writer.write(chunk)
            chunk = f.read(2**16)
Various
1. the encoding parameter is deprecated in ClientSession.request(). Payload encoding is controlled at the payload
level. It is possible to specify an encoding for each payload instance.
2. the version parameter is removed in ClientSession.request() client version can be specified in the ClientSession
constructor.
3. aiohttp.MsgType dropped, use aiohttp.WSMsgType instead.
4. ClientResponse.url is an instance of yarl.URL class (url_obj is deprecated)
5. ClientResponse.raise_for_status() raises aiohttp.ClientResponseError exception
6. ClientResponse.json() is strict about response’s content type. if content type does not match, it raises aiohttp.
ClientResponseError exception. To disable content type check you can pass None as content_type
parameter.
Server
Internal implementation was significantly redesigned to provide better performance and support HTTP pipelining.
ServerHttpProtocol is dropped, implementation is merged with RequestHandler a lot of low-level api’s are dropped.
Application
1. The constructor parameter loop is deprecated. The loop gets configured by the application runner: the run_app
function or any of the gunicorn workers.
2. Application.router.add_subapp is dropped, use Application.add_subapp instead
3. Application.finished is dropped, use Application.cleanup instead
1. the GET and POST attributes no longer exist. Use the query attribute instead of GET
2. Custom chunking size is not supported by WebResponse.chunked; the developer is responsible for actual chunking.
3. Payloads are supported as body. So it is possible to use client response’s content object as body parameter for
WebResponse
4. The FileSender api is dropped; it is replaced with the more general FileResponse class:
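For example (the path is illustrative):

resp = web.FileResponse('/path/to/static/data.txt')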
RequestPayloadError
Reading request’s payload may raise a RequestPayloadError exception. The behavior is similar to ClientPayloadError.
WSGI
WSGI support has been dropped, as well as gunicorn wsgi support. We still provide default and uvloop gunicorn
workers for web.Application
11.5.2 Changelog
2.3.9 (2018-01-16)
2.3.8 (2018-01-15)
• Do not use yarl.unquote internal function in aiohttp. Fix incorrectly unquoted path part in URL dispatcher
(#2662)
• Fix compatibility with yarl==1.0.0 (#2662)
2.3.7 (2017-12-27)
2.3.6 (2017-12-04)
2.3.5 (2017-11-30)
2.3.4 (2017-11-29)
• Make request.app point to proper application instance when using nested applications (with middlewares).
(#2550)
• Change base class of ClientConnectorSSLError to ClientSSLError from ClientConnectorError. (#2563)
• Return client connection back to free pool on error in connector.connect(). (#2567)
2.3.3 (2017-11-17)
• Having a ; in Response content type does not assume it contains a charset anymore. (#2197)
• Use getattr(asyncio, ‘async’) for keeping compatibility with Python 3.7. (#2476)
• Ignore NotImplementedError raised by set_child_watcher from uvloop. (#2491)
• Fix warning in ClientSession.__del__ by stopping to try to close it. (#2523)
• Fixed typos in Third-party libraries page. And added async-v20 to the list (#2510)
2.3.2 (2017-11-01)
2.3.1 (2017-10-18)
2.3.0 (2017-10-18)
Features
Bugfixes
Improved Documentation
• Add a note about possible performance degradation in await resp.text() if charset was not provided by Content-
Type HTTP header. Pass explicit encoding to solve it. (#1811)
• Drop disqus widget from documentation pages. (#2018)
• Add a graceful shutdown section to the client usage documentation. (#2039)
• Document connector_owner parameter. (#2072)
• Update the doc of web.Application (#2081)
• Fix mistake about access log disabling. (#2085)
• Add example usage of on_startup and on_shutdown signals by creating and disposing an aiopg connection
engine. (#2131)
• Document encoded=True for yarl.URL, it disables all yarl transformations. (#2198)
• Document that all app’s middleware factories are run for every request. (#2225)
• Reflect the fact that default resolver is threaded one starting from aiohttp 1.1 (#2228)
Misc
• #2013, #2014, #2048, #2094, #2149, #2187, #2214, #2225, #2243, #2248
2.2.5 (2017-08-03)
2.2.4 (2017-08-02)
• Fix issue with synchronous session closing when using ClientSession as an asynchronous context manager.
(#2063)
2.2.3 (2017-07-04)
2.2.2 (2017-07-03)
2.2.1 (2017-07-02)
2.2.0 (2017-06-20)
• Raise RuntimeError if you try to set the Content-Length and enable chunked encoding at the same time (#1941)
• Small update for unittest_run_loop
• Use CIMultiDict for ClientRequest.skip_auto_headers (#1970)
• Fix wrong startup sequence: test server and run_app() do not raise DeprecationWarning now (#1947)
• Make sure cleanup signal is sent if startup signal has been sent (#1959)
• Fixed server keep-alive handler, could cause 100% cpu utilization (#1955)
• Connection can be destroyed before response get processed if await aiohttp.request(..) is used (#1981)
• MultipartReader does not work with -OO (#1969)
• Fixed ClientPayloadError with blank Content-Encoding header (#1931)
• Support deflate encoding implemented in httpbin.org/deflate (#1918)
• Fix BadStatusLine caused by extra CRLF after POST data (#1792)
• Keep a reference to ClientSession in response object (#1985)
• Deprecate undocumented app.on_loop_available signal (#1978)
2.1.0 (2017-05-26)
• Added support for experimental async-tokio event loop written in Rust https://github.com/PyO3/tokio
• Write to transport \r\n before closing after keepalive timeout, otherwise client can not detect socket discon-
nection. (#1883)
• Only call loop.close in run_app if the user did not supply a loop. Useful for allowing clients to specify their
own cleanup before closing the asyncio loop if they wish to tightly control loop behavior
• Content disposition with semicolon in filename (#917)
• Added request_info to response object and ClientResponseError. (#1733)
• Added history to ClientResponseError. (#1741)
• Allow to disable redirect url re-quoting (#1474)
• Handle RuntimeError from transport (#1790)
• Dropped “%O” in access logger (#1673)
• Added args and kwargs to unittest_run_loop. Useful with other decorators, for example @patch. (#1803)
• Added iter_chunks to response.content object. (#1805)
• Avoid creating TimerContext when there is no timeout to allow compatibility with Tornado. (#1817) (#1180)
• Add proxy_from_env to ClientRequest to read from environment variables. (#1791)
• Add DummyCookieJar helper. (#1830)
• Fix assertion errors in Python 3.4 from noop helper. (#1847)
• Do not unquote + in match_info values (#1816)
• Use Forwarded, X-Forwarded-Scheme and X-Forwarded-Host for better scheme and host resolution. (#1134)
• Fix sub-application middlewares resolution order (#1853)
• Fix applications comparison (#1866)
• Fix static location in index when prefix is used (#1662)
2.0.7 (2017-04-12)
2.0.6 (2017-04-04)
2.0.5 (2017-03-29)
2.0.4 (2017-03-27)
2.0.3 (2017-03-24)
2.0.2 (2017-03-21)
2.0.1 (2017-03-21)
2.0.0 (2017-03-20)
2.0.0rc1 (2017-03-15)
1.3.5 (2017-03-16)
1.3.4 (2017-03-14)
1.3.3 (2017-02-19)
1.3.2 (2017-02-16)
1.3.1 (2017-02-09)
1.3.0 (2017-02-08)
• Multipart writer validates the data on append instead of on a request send (#920)
• Multipart reader accepts multipart messages with or without their epilogue to consistently handle valid and
legacy behaviors (#1526) (#1581)
• Separate read + connect + request timeouts (#1523)
• Do not swallow Upgrade header (#1587)
• Fix polls demo run application (#1487)
• Ignore unknown 1XX status codes in client (#1353)
• Fix sub-Multipart messages missing their headers on serialization (#1525)
• Do not use readline when reading the content of a part in the multipart reader (#1535)
• Add optional flag for quoting FormData fields (#916)
• 416 Range Not Satisfiable if requested range end > file size (#1588)
• Having a : or @ in a route does not work (#1552)
• Added receive_timeout timeout for websocket to receive complete message. (#1325)
• Added heartbeat parameter for websocket to automatically send ping message. (#1024) (#777)
• Remove web.Application dependency from web.UrlDispatcher (#1510)
• Accepting back-pressure from slow websocket clients (#1367)
• Do not pause transport during set_parser stage (#1211)
• Lingering close does not terminate before timeout (#1559)
• setsockopt may raise OSError exception if socket is closed already (#1595)
• Lots of CancelledError when requests are interrupted (#1565)
• Allow users to specify what should happen to decoding errors when calling a responses text() method (#1542)
• Back port std module http.cookies for python3.4.2 (#1566)
• Maintain url’s fragment in client response (#1314)
• Allow concurrently close WebSocket connection (#754)
• Gzipped responses with empty body raises ContentEncodingError (#609)
• Return 504 if the request handler raises TimeoutError.
• Refactor how we use keep-alive and close lingering timeouts.
• Close the response connection if we cannot consume the whole HTTP message during client response release
• Abort closed SSL client transports; broken servers can keep a socket open for unlimited time (#1568)
• Log a warning instead of raising RuntimeError if a websocket connection is closed.
• Deprecated: aiohttp.protocol.HttpPrefixParser will be removed in 1.4 (#1590)
• Deprecated: Server response's .started, .start() and .can_start() methods will be removed in 1.4 (#1591)
• Deprecated: Adding a sub app via app.router.add_subapp() is deprecated, use app.add_subapp() instead; will be removed in 1.4 (#1592)
• Deprecated: aiohttp.get(), aiohttp.options(), aiohttp.head(), aiohttp.post(), aiohttp.put(), aiohttp.patch(), aiohttp.delete(), and aiohttp.ws_connect() will be removed in 1.4 (#1593)
1.2.0 (2016-12-17)
• Extract BaseRequest from web.Request, introduce web.Server (former RequestHandlerFactory), introduce new
low-level web server which is not coupled with web.Application and routing (#1362)
• Make TestServer.make_url compatible with yarl.URL (#1389)
• Implement range requests for static files (#1382)
• Support task attribute for StreamResponse (#1410)
• Drop TestClient.app property, use TestClient.server.app instead (BACKWARD INCOMPATIBLE)
• Drop TestClient.handler property, use TestClient.server.handler instead (BACKWARD INCOMPATIBLE)
• TestClient.server property returns a test server instance, was asyncio.AbstractServer (BACKWARD INCOMPATIBLE)
• Follow gunicorn’s signal semantics in Gunicorn[UVLoop]WebWorker (#1201)
• Call worker_int and worker_abort callbacks in Gunicorn[UVLoop]WebWorker (#1202)
• Added functional tests for client proxy (#1218)
• Fix bugs with client proxy target path and proxy host with port (#1413)
• Fix bugs related to the use of unicode hostnames (#1444)
• Preserve cookie quoting/escaping (#1453)
• FileSender will send a gzipped response if a gzipped version is available (#1426)
• Don’t override Content-Length header in web.Response if no body was set (#1400)
• Introduce router.post_init() for solving (#1373)
• Fix error raised on multiple calls of TimeService.stop()
• Allow raising web exceptions at the router resolving stage (#1460)
• Add a warning for session creation outside of coroutine (#1468)
• Avoid a race when application might start accepting incoming requests but startup signals are not processed yet
e98e8c6
• Raise a RuntimeError when trying to change the status of the HTTP response after the headers have been sent
(#1480)
• Fix bug with https proxy acquired cleanup (#1340)
• Use UTF-8 as the default encoding for multipart text parts (#1484)
1.1.6 (2016-11-28)
• Fix BodyPartReader.read_chunk bug where it returned zero bytes before EOF (#1428)
1.1.5 (2016-11-16)
1.1.4 (2016-11-14)
1.1.3 (2016-11-10)
1.1.2 (2016-11-08)
1.1.1 (2016-11-04)
1.1.0 (2016-11-03)
1.0.5 (2016-10-11)
• Fix StreamReader._read_nowait to return all available data up to the requested amount (#1297)
1.0.4 (2016-09-22)
1.0.2 (2016-09-22)
1.0.1 (2016-09-16)
• Restore aiohttp.web.MsgType alias for aiohttp.WSMsgType for sake of backward compatibility (#1178)
• Tune alabaster schema.
• Use text/html content type for displaying index pages by static file handler.
• Fix AssertionError in static file handling (#1177)
• Fix access log formats %O and %b for static file handling
• Remove debug setting of GunicornWorker, use app.debug to control its debug-mode instead
1.0.0 (2016-09-16)
• Change default size for client session’s connection pool from unlimited to 20 (#977)
• Add IE support for cookie deletion. (#994)
• Remove deprecated WebSocketResponse.wait_closed method (BACKWARD INCOMPATIBLE)
• Remove deprecated force parameter for ClientResponse.close method (BACKWARD INCOMPATIBLE)
• Avoid using a mutable CIMultiDict keyword param in make_mocked_request (#997)
• Make WebSocketResponse.close a little bit faster by avoiding new task creation just for timeout measurement
• Add proxy and proxy_auth params to client.get() and family, deprecate ProxyConnector (#998)
• Add support for websocket send_json and receive_json, synchronize server and client API for websockets (#984)
• Implement router shortcuts for most useful HTTP methods, use app.router.add_get(), app.router.add_post() etc. instead of app.router.add_route() (#986)
• Support SSL connections for gunicorn worker (#1003)
• Move obsolete examples to legacy folder
• Switch to multidict 2.0 and title-cased strings (#1015)
• {FOO}e logger format is case-sensitive now
• Fix logger report for unix socket 8e8469b
• Rename aiohttp.websocket to aiohttp._ws_impl
• Rename aiohttp.MsgType to aiohttp.WSMsgType
• Introduce aiohttp.WSMessage officially
• Rename Message -> WSMessage
• Remove deprecated decode param from resp.read(decode=True)
• Use 5min default client timeout (#1028)
0.22.5 (08-02-2016)
0.22.3 (07-26-2016)
0.22.2 (07-23-2016)
0.22.1 (07-16-2016)
• Large cookie expiration/max-age no longer breaks the event loop (fixes #967)
0.22.0 (07-15-2016)
0.21.6 (05-05-2016)
0.21.5 (03-22-2016)
0.21.4 (03-12-2016)
• Fix ResourceAdapter: don't add method to allowed if resource does not match (#826)
• Fix Resource: append found method to returned allowed methods
0.21.2 (02-16-2016)
• Fix a regression: support for handling ~/path in static file routes was broken (#782)
0.21.1 (02-10-2016)
0.21.0 (02-04-2016)
• Add version parameter to ClientSession constructor, deprecate it for session.request() and family (#736)
• Enable access log by default (#735)
• Deprecate app.router.register_route() (the method was not documented intentionally BTW).
• Deprecate app.router.named_routes() in favor of app.router.named_resources()
• route.add_static accepts pathlib.Path now (#743)
• Add command line support: $ python -m aiohttp.web package.main (#740)
• FAQ section was added to docs. Enjoy and feel free to contribute new topics
• Add async context manager support to ClientSession
• Document ClientResponse’s host, method, url properties
• Use CORK/NODELAY in client API (#748)
• ClientSession.close and Connector.close are coroutines now
• Close client connection on exception in ClientResponse.release()
• Allow reading multipart parts without content-length specified (#750)
• Add support for unix domain sockets to gunicorn worker (#470)
• Add test for default Expect handler (#601)
• Add the first demo project
• Rename loader keyword argument in web.Request.json method. (#646)
• Add local socket binding for TCPConnector (#678)
0.20.2 (01-07-2016)
0.20.1 (12-30-2015)
• Raise RuntimeError if Timeout context manager was used outside of task context.
• Add number of bytes to stream.read_nowait (#700)
• Use X-FORWARDED-PROTO for wsgi.url_scheme when available
0.20.0 (12-28-2015)
0.19.0 (11-25-2015)
0.18.4 (13-11-2015)
• Relax rule for router names again by adding dash to allowed characters: they may contain identifiers, dashes, dots and colons
0.18.3 (25-10-2015)
0.18.2 (22-10-2015)
0.18.1 (20-10-2015)
• Relax rule for router names: they may contain dots and colons starting from now
0.18.0 (19-10-2015)
0.17.4 (09-29-2015)
0.17.3 (08-28-2015)
0.17.2 (08-11-2015)
0.17.1 (08-10-2015)
0.17.0 (08-04-2015)
0.16.6 (07-15-2015)
0.16.5 (06-13-2015)
0.16.4 (06-13-2015)
0.16.3 (05-30-2015)
0.16.2 (05-27-2015)
• Update python version required for __del__ usage: it’s actually 3.4.1 instead of 3.4.0
• Add check for presence of loop.is_closed() method before calling it (#378)
0.16.1 (05-27-2015)
0.16.0 (05-26-2015)
0.15.3 (04-22-2015)
0.15.2 (04-19-2015)
0.15.1 (03-31-2015)
0.15.0 (03-27-2015)
0.14.4 (01-29-2015)
• Fix issue with error during constructing of url with regex parts (#264)
0.14.3 (01-28-2015)
0.14.2 (01-23-2015)
0.14.1 (01-15-2015)
0.13.1 (12-31-2014)
0.13.0 (12-29-2014)
0.12.0 (12-12-2014)
0.11.0 (11-29-2014)
0.10.2 (11-19-2014)
0.10.1 (11-17-2014)
• aiohttp.web.HTTPException and descendants now fill the response body with a string like 404: NotFound
• Fix multidict __iter__, the method should iterate over keys, not (key, value) pairs.
0.10.0 (11-13-2014)
0.9.3 (10-30-2014)
0.9.2 (10-16-2014)
0.9.1 (08-30-2014)
• Added MultiDict support for client request params and data (#114).
• Fixed parameter type for IncompleteRead exception (#118).
• Strictly require ASCII headers names and values (#137)
• Keep port in ProxyConnector (#128).
• Python 3.4.1 compatibility (#131).
0.9.0 (07-08-2014)
0.8.4 (07-04-2014)
0.8.3 (07-03-2014)
0.8.2 (06-22-2014)
0.8.1 (06-18-2014)
0.8.0 (06-06-2014)
0.7.3 (05-20-2014)
0.7.2 (05-14-2014)
0.7.1 (04-28-2014)
0.7.0 (04-16-2014)
0.6.5 (03-29-2014)
0.6.4 (02-27-2014)
• Log content-length missing warning only for put and post requests.
0.6.3 (02-27-2014)
0.6.2 (02-18-2014)
0.6.1 (02-17-2014)
0.6.0 (02-12-2014)
0.5.0 (01-29-2014)
0.4.4 (11-15-2013)
• Resolve only AF_INET family, because it is not clear how to pass extra info to asyncio.
0.4.3 (11-15-2013)
0.4.2 (11-14-2013)
0.4.1 (11-12-2013)
0.4 (11-06-2013)
0.3 (11-04-2013)
• Added PortMapperWorker
• Added HttpClient
• Added TCP connection timeout to HTTP client
• Better client connection errors handling
• Gracefully handle process exit
0.2
• Fix packaging
The list of aiohttp users: libraries, big projects and web sites.
Please don't hesitate to add your awesome project to the list by making a Pull Request on GitHub.
If you like the project, please go to GitHub and press the Star button!
aiohttp is not only a library for making HTTP requests and creating web servers.
It is also the foundation for libraries built on top of aiohttp.
This page is a list of these tools.
Please feel free to add your open-sourced library, if it's not listed yet, by making a Pull Request to https://github.com/aio-libs/aiohttp/
• Why might you want to include your awesome library in the list?
• Because the list increases your library's visibility. People will have an easy way to find it.
Officially supported
This list contains libraries which are supported by the aio-libs team and located on https://github.com/aio-libs
aiohttp extensions
Database drivers
Other tools
The following libraries are not part of aio-libs but they have proven to be very well written and are highly recommended for usage.
• uvloop Ultra fast implementation of asyncio event loop on top of libuv.
We highly recommend using it instead of the standard asyncio loop.
Database drivers
• asyncpg Another PostgreSQL async driver. It's much faster than aiopg but it is not a drop-in replacement; the API is different. Anyway, please take a look at it; the driver is incredibly fast.
Others
The list of libraries which exist but are not listed in the former categories.
They may be perfect or not; we don't know.
Please add your library reference here first and after some time period ask to raise the status.
• aiohttp-cache A cache system for aiohttp server.
• aiocache Caching for asyncio with multiple backends (framework agnostic)
• gain Web crawling framework based on asyncio for everyone.
• aiohttp-swagger Swagger API Documentation builder for aiohttp server.
• aiohttp-swaggerify Library to automatically generate swagger2.0 definition for aiohttp endpoints.
• aiohttp-validate Simple library that helps you validate your API endpoints requests/responses with json schema.
• raven-aiohttp An aiohttp transport for raven-python (Sentry client).
• webargs A friendly library for parsing HTTP request arguments, with built-in support for popular web frameworks, including Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp.
• aioauth-client OAuth client for aiohttp.
• aiohttpretty A simple asyncio compatible httpretty mock using aiohttp.
• aioresponses A helper to mock/fake web requests in the python aiohttp package.
• aiohttp-transmute A transmute implementation for aiohttp.
aiohttp is used to build useful libraries on top of it, and there's a page dedicated to listing them: Third-Party libraries.
There are also projects that leverage the power of aiohttp to provide end-user tools, like command-line tools or software with full user interfaces.
This page aims to list those projects. If you are using aiohttp in your software and it plays a central role, you can add it to this list.
You can also add a Built with aiohttp link somewhere in your project, pointing to https://github.com/aio-libs/aiohttp.
• Molotov Load testing tool.
• Arsenic Async WebDriver.
• Skyscanner Hotels
• Ocean S.A.
• GNS3
• TutorCruncher socket
• Morpheus messaging microservice
• Eyepea - Custom telephony solutions
• ALLOcloud - Telephony in the cloud
• helpmanual - comprehensive help and man page database
11.7 Contributing
To make a clone of the GitHub repo: open the link and press the "Fork" button in the upper-right menu of the web page.
I hope everybody knows how to work with git and GitHub nowadays :)
Workflow is pretty straightforward:
1. Clone the GitHub repo (see the example command after this list)
2. Make a change
3. Make sure all tests pass
4. Add a file into the CHANGES folder (Changelog update, see below).
5. Commit changes to your own aiohttp clone
6. Make a pull request from the GitHub page of your clone against the master branch
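A minimal sketch of step 1, assuming the repository has already been forked and YOUR_GITHUB_USERNAME is a placeholder for your own account name:
$ git clone git@github.com:YOUR_GITHUB_USERNAME/aiohttp.git
After cloning, create and activate a python3 virtual environment using any of the following approaches.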
For virtualenv:
$ cd aiohttp
$ virtualenv --python=`which python3` venv
$ . venv/bin/activate
For the standard python venv module:
$ cd aiohttp
$ python3 -m venv venv
$ . venv/bin/activate
For virtualenvwrapper:
$ cd aiohttp
$ mkvirtualenv --python=`which python3` aiohttp
There are other tools like pyvenv but you know the rule of thumb now: create a python3 virtual environment and
activate it.
After that please install libraries required for development:
Note: If you plan to use pdb or ipdb within the test suite, execute:
$ py.test tests -s
After all the preconditions are met you can run the tests by typing the following command:
$ make test
The command will first run the flake8 tool (sorry, we don't accept pull requests with pep8 or pyflakes errors).
If flake8 succeeds, the tests will be run.
Please take a look at the produced output.
Any extra text (print statements and so on) should be removed.
We are trying hard to have good test coverage; please don’t make it worse.
Use:
$ make cov
to run the test suite and collect coverage information. Once the command has finished, check your coverage at the file that appears in the last line of the output: open file:///.../aiohttp/htmlcov/index.html
Go to the link and make sure that your code change is covered.
The project uses codecov.io for storing coverage results. Visit https://codecov.io/gh/aio-libs/aiohttp to view the coverage of the master branch, its history, pull requests, etc.
The browser extension https://docs.codecov.io/docs/browser-extension is highly recommended for analyzing coverage right in the Files Changed tab of a GitHub Pull Request review page.
11.7.5 Documentation
$ make doc
Once it finishes it will output the index HTML page: open file:///.../aiohttp/docs/_build/html/index.html.
Go to the link and make sure your doc changes look good.
We use pyenchant and sphinxcontrib-spelling to run the spell checker for the documentation:
$ make doc-spelling
The CHANGES.rst file is managed using the towncrier tool and all non-trivial changes must be accompanied by a news entry.
To add an entry to the news file, first you need to have created an issue describing the change you want to make. A
Pull Request itself may function as such, but it is preferred to have a dedicated issue (for example, in case the PR ends
up rejected due to code quality reasons).
Once you have an issue or pull request, you take the number and you create a file inside of the CHANGES/ directory
named after that issue number with an extension of .removal, .feature, .bugfix, or .doc. Thus if your issue
or PR number is 1234 and this change is fixing a bug, then you would create a file CHANGES/1234.bugfix. PRs
can span multiple categories by creating multiple files (for instance, if you added a feature and deprecated/removed
the old feature at the same time, you would create CHANGES/NNNN.feature and CHANGES/NNNN.removal).
Likewise if a PR touches multiple issues/PRs you may create a file for each of them with the exact same contents and
Towncrier will deduplicate them.
The contents of this file are reStructuredText formatted text that will be used as the content of the news file entry. You
do not need to reference the issue or PR numbers here as towncrier will automatically add a reference to all of the
affected issues when rendering the news file.
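As a concrete sketch of the convention described above, reusing the hypothetical issue number 1234 from the example (the entry text here is made up for illustration):
$ echo "Fixed a bug in the multipart reader." > CHANGES/1234.bugfix
Towncrier will later pick this file up and render its contents into CHANGES.rst when the release changelog is built.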
Contribute!
The easiest way is to provide Pull Requests for issues in our bug tracker. But if you have a great idea for improving the library, please make an issue and a Pull Request.
The rules for committers are simple:
1. No wild commits! Everything should go through PRs.
2. Take part in reviews. It's a very important part of a maintainer's activity.
3. Pick up issues created by others, especially if they are simple.
4. Keep the test suite comprehensive. In practice it means leveling up coverage. 97% is not bad but we wish to have 100% someday. Well, 99% is a good target too.
5. Don't hesitate to improve our docs. Documentation is a very important thing; it's the key to the project's success. The documentation should not only cover our public API but also help newbies to start using the project and shed light on non-obvious gotchas.
After a positive answer, an aiohttp committer creates an issue on GitHub with the proposal for nomination. If the proposal collects only positive votes and no strong objections, you'll be a new member of our team.