Asynchronous I/O (Asyncio) - SQLAlchemy 2.0 Documentation
Warning
Please read Asyncio Platform Installation Notes (Including Apple M1) for important platform installation notes for many platforms, including Apple M1 Architecture.
For the above platforms, greenlet is known to supply pre-built wheel files. For other platforms, greenlet does not install by default;
the current file listing for greenlet can be seen at Greenlet - Download Files. Note that there are many architectures omitted,
including Apple M1.
To install SQLAlchemy while ensuring the greenlet dependency is present regardless of what platform is in use, the [asyncio]
setuptools extra may be installed as follows, which will also instruct pip to install greenlet :
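pip install sqlalchemy[asyncio]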
Note that installation of greenlet on platforms that do not have a pre-built wheel file means that greenlet will be built from source,
which requires that Python’s development libraries also be present.
Synopsis - Core
For Core use, the create_async_engine() function creates an instance of AsyncEngine which then offers an async version of the
traditional Engine API. The AsyncEngine delivers an AsyncConnection via its AsyncEngine.connect() and AsyncEngine.begin()
methods which both deliver asynchronous context managers. The AsyncConnection can then invoke statements using either the
AsyncConnection.execute() method to deliver a buffered Result , or the AsyncConnection.stream() method to deliver a
streaming server-side AsyncResult :
import asyncio

from sqlalchemy import Column, MetaData, String, Table, select
from sqlalchemy.ext.asyncio import create_async_engine

meta = MetaData()
t1 = Table("t1", meta, Column("name", String(50), primary_key=True))


async def async_main() -> None:
    # the URL is a placeholder; any asyncio-enabled dialect may be used
    engine = create_async_engine("sqlite+aiosqlite://")

    async with engine.begin() as conn:
        # run the sync-style MetaData.create_all() via run_sync()
        await conn.run_sync(meta.create_all)
        await conn.execute(
            t1.insert(), [{"name": "some name 1"}, {"name": "some name 2"}]
        )
        result = await conn.execute(select(t1))
        print(result.fetchall())

    # clean up pooled connections for an engine that is going out of scope
    await engine.dispose()


asyncio.run(async_main())
Above, the AsyncConnection.run_sync() method may be used to invoke special DDL functions such as MetaData.create_all()
that don’t include an awaitable hook.
Tip
It’s advisable to invoke the AsyncEngine.dispose() method using await when using the AsyncEngine object in a scope that will go out of
context and be garbage collected, as illustrated in the async_main function in the above example. This ensures that any connections held
open by the connection pool will be properly disposed within an awaitable context. Unlike when using blocking IO, SQLAlchemy cannot
properly dispose of these connections within methods like __del__ or weakref finalizers as there is no opportunity to invoke await .
Failing to explicitly dispose of the engine when it falls out of scope may result in warnings emitted to standard out resembling the form
RuntimeError: Event loop is closed within garbage collection.
The AsyncConnection also features a “streaming” API via the AsyncConnection.stream() method that returns an AsyncResult
object. This result object uses a server-side cursor and provides an async/await API, such as an async iterator:
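For example, re-using the engine and t1 table from the example above, a brief sketch of iterating a streamed result:

async with engine.connect() as conn:
    # AsyncConnection.stream() delivers an AsyncResult backed by a server-side cursor
    async_result = await conn.stream(select(t1))
    async for row in async_result:
        print("row: %s" % (row,))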
Synopsis - ORM
Using 2.0 style querying, the AsyncSession class provides full ORM functionality.
Within the default mode of use, special care must be taken to avoid lazy loading or other expired-attribute access involving ORM
relationships and column attributes; the next section Preventing Implicit IO when Using AsyncSession details this.
Warning
A single instance of AsyncSession is not safe for use in multiple, concurrent tasks. See the sections Using
AsyncSession with Concurrent Tasks and Is the Session thread-safe? Is AsyncSession safe to share in concurrent
tasks? for background.
The example below illustrates a complete example including mapper and session configuration:
import asyncio
import datetime
from typing import List

from sqlalchemy import ForeignKey, func, select
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship, selectinload


class Base(DeclarativeBase):
    pass


class A(Base):
    __tablename__ = "a"

    id: Mapped[int] = mapped_column(primary_key=True)
    data: Mapped[str]
    create_date: Mapped[datetime.datetime] = mapped_column(server_default=func.now())
    bs: Mapped[List["B"]] = relationship()


class B(Base):
    __tablename__ = "b"

    id: Mapped[int] = mapped_column(primary_key=True)
    a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))
    data: Mapped[str]


async def insert_objects(async_session: async_sessionmaker[AsyncSession]) -> None:
    async with async_session() as session, session.begin():
        session.add_all(
            [A(bs=[B(data="b1"), B(data="b2")], data="a1"), A(bs=[], data="a2")]
        )


async def select_and_update_objects(async_session: async_sessionmaker[AsyncSession]) -> None:
    async with async_session() as session:
        # eagerly load the A.bs collections so that no lazy loading is needed
        stmt = select(A).options(selectinload(A.bs))
        result = await session.execute(stmt)
        for a1 in result.scalars():
            print(a1)
            print(f"created at: {a1.create_date}")
            for b1 in a1.bs:
                print(b1)

        result = await session.execute(select(A).order_by(A.id).limit(1))
        a1 = result.scalars().one()
        a1.data = "new data"
        await session.commit()


async def async_main() -> None:
    # the database URL is a placeholder; any asyncio-enabled dialect may be used
    engine = create_async_engine("sqlite+aiosqlite://")

    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)

    # async_sessionmaker: a factory for new AsyncSession objects
    async_session = async_sessionmaker(engine, expire_on_commit=False)

    await insert_objects(async_session)
    await select_and_update_objects(async_session)

    await engine.dispose()


asyncio.run(async_main())
In the example above, the AsyncSession is instantiated using the optional async_sessionmaker helper, which provides a factory for
new AsyncSession objects with a fixed set of parameters, which here includes associating it with an AsyncEngine against a particular
database URL. It is then passed to other methods where it may be used in a Python asynchronous context manager (i.e. async with:
statement) so that it is automatically closed at the end of the block; this is equivalent to calling the AsyncSession.close() method.
See the section Is the Session thread-safe? Is AsyncSession safe to share in concurrent tasks? for a general description of the
Session and AsyncSession with regards to how they should be used with concurrent workloads.
Attributes that are lazy-loading relationships, deferred columns or expressions, or are being accessed in expiration scenarios can
take advantage of the AsyncAttrs mixin. This mixin, when added to a specific class or more generally to the Declarative Base
superclass, provides an accessor AsyncAttrs.awaitable_attrs which delivers any attribute as an awaitable:
class Base(AsyncAttrs, DeclarativeBase):
    pass


class A(Base):
    __tablename__ = "a"

    # ... mapped columns and the A.bs relationship, as in the previous example


class B(Base):
    __tablename__ = "b"

    # ... mapped columns, as in the previous example
Accessing the A.bs collection on newly loaded instances of A when eager loading is not in use will normally use lazy loading,
which in order to succeed will usually emit IO to the database, which will fail under asyncio as no implicit IO is allowed. To access
this attribute directly under asyncio without any prior loading operations, the attribute can be accessed as an awaitable by
indicating the AsyncAttrs.awaitable_attrs prefix:
a1 = (await session.scalars(select(A))).one()
for b1 in await a1.awaitable_attrs.bs:
print(b1)
The AsyncAttrs mixin provides a succinct facade over the internal approach that’s also used by the AsyncSession.run_sync()
method.
See also
AsyncAttrs
Collections can be replaced with write only collections that will never emit IO implicitly, by using the Write Only Relationships
feature in SQLAlchemy 2.0. Using this feature, collections are never read from, only queried using explicit SQL calls. See the
example async_orm_writeonly.py in the Asyncio Integration section for an example of write-only collections used with asyncio.
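A brief sketch of such a declaration, assuming the A / B mapping from the example above:

from sqlalchemy.orm import WriteOnlyMapped


class A(Base):
    __tablename__ = "a"

    id: Mapped[int] = mapped_column(primary_key=True)
    # a write-only collection; it is never implicitly loaded, and reads
    # require an explicit statement such as A.bs.select()
    bs: WriteOnlyMapped["B"] = relationship()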
When using write only collections, the program’s behavior is simple and easy to predict regarding collections. However, the
downside is that there is not any built-in system for loading many of these collections all at once, which instead would need to be
performed manually. Therefore, many of the bullets below address specific techniques when using traditional lazy-loaded
relationships with asyncio, which requires more care.
If not using AsyncAttrs , relationships can be declared with lazy="raise" so that by default they will not attempt to emit SQL. In
order to load collections, eager loading would be used instead.
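For example, a sketch of the A.bs relationship from the example above declared with lazy="raise":

class A(Base):
    __tablename__ = "a"

    id: Mapped[int] = mapped_column(primary_key=True)
    # accessing A.bs without eager loading raises an error instead of emitting SQL
    bs: Mapped[List["B"]] = relationship(lazy="raise")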
The most useful eager loading strategy is the selectinload() eager loader, which is employed in the previous example in order
to eagerly load the A.bs collection within the scope of the await session.execute() call:
stmt = select(A).options(selectinload(A.bs))
When constructing new objects, collections are always assigned a default, empty collection, such as a list in the above example:
A(bs=[], data="a2")
This allows the .bs collection on the above A object to be present and readable when the A object is flushed; otherwise, when the
A is flushed, .bs would be unloaded and would raise an error on access.
The AsyncSession is configured using Session.expire_on_commit set to False, so that we may access attributes on an object
subsequent to a call to AsyncSession.commit() , as in the line at the end where we access an attribute:
# sessionmaker version
async_session = async_sessionmaker(engine, expire_on_commit=False)

result = await session.execute(select(A).order_by(A.id))
a1 = result.scalars().first()
await session.commit()
# attribute access subsequent to the commit succeeds, since nothing was expired
print(a1.data)
A lazy-loaded relationship can be loaded explicitly under asyncio using AsyncSession.refresh() , if the desired attribute name
is passed explicitly to Session.refresh.attribute_names , e.g.:
# load the lazy-loaded collection explicitly
await session.refresh(a_obj, ["bs"])

# collection is present
print(f"bs collection: {a_obj.bs}")
It’s of course preferable to use eager loading up front in order to have collections already set up without the need to lazy-load.
New in version 2.0.4: Added support for AsyncSession.refresh() and the underlying Session.refresh() method to
force lazy-loaded relationships to load, if they are named explicitly in the Session.refresh.attribute_names parameter.
In previous versions, the relationship would be silently skipped even if named in the parameter.
Avoid using the all cascade option documented at Cascades in favor of listing out the desired cascade features explicitly. The
all cascade option implies among others the refresh-expire setting, which means that the AsyncSession.refresh() method
will expire the attributes on related objects, but not necessarily refresh those related objects assuming eager loading is not
configured within the relationship() , leaving them in an expired state.
Appropriate loader options should be employed for deferred() columns, if used at all, in addition to that of relationship()
constructs as noted above. See Limiting which Columns Load with Column Deferral for background on deferred column loading.
The “dynamic” relationship loader strategy described at Dynamic Relationship Loaders is not compatible by default with the
asyncio approach. It can be used directly only if invoked within the AsyncSession.run_sync() method described at Running
Synchronous Methods and Functions under asyncio, or by using its .statement attribute to obtain a normal select:
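A brief sketch of the latter approach, assuming a User mapping whose addresses relationship uses lazy="dynamic":

user = (await session.scalars(select(User).limit(1))).one()

# the dynamic collection's .statement attribute yields an ordinary select()
addresses = (await session.scalars(user.addresses.statement)).all()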
The write only technique, introduced in version 2.0 of SQLAlchemy, is fully compatible with asyncio and should be preferred.
See also
“Dynamic” relationship loaders superseded by “Write Only” - notes on migration to 2.0 style
If using asyncio with a database that does not support RETURNING, such as MySQL 8, server default values such as generated
timestamps will not be available on newly flushed objects unless the Mapper.eager_defaults option is used. In SQLAlchemy 2.0,
this behavior is applied automatically to backends like PostgreSQL, SQLite and MariaDB which use RETURNING to fetch new
values when rows are INSERTed.
This approach is essentially exposing publicly the mechanism by which SQLAlchemy is able to provide the asyncio
interface in the first place. While there is no technical issue with doing so, overall the approach can probably be
considered “controversial” as it works against some of the central philosophies of the asyncio programming model,
which is essentially that any programming statement that can potentially result in IO being invoked must have an
await call, lest the program fail to make explicitly clear every line at which IO may occur. This approach does not
change that general idea, except that it allows a series of synchronous IO instructions to be exempted from this rule
within the scope of a function call, essentially bundled up into a single awaitable.
As an alternative means of integrating traditional SQLAlchemy “lazy loading” within an asyncio event loop, an optional method known
as AsyncSession.run_sync() is provided which will run any Python function inside of a greenlet, where traditional synchronous
programming concepts will be translated to use await when they reach the database driver. A hypothetical approach here is that an
asyncio-oriented application can package up database-related methods into functions that are invoked using
AsyncSession.run_sync() .
Altering the above example, if we didn’t use selectinload() for the A.bs collection, we could accomplish our treatment of these
attribute accesses within a separate function:
import asyncio

from sqlalchemy import select
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine


def fetch_and_update_objects(session):
    """run traditional sync-style ORM code in a function that will be
    invoked within an awaitable.
    """
    stmt = select(A)
    result = session.execute(stmt)
    for a1 in result.scalars():
        print(a1)

        # lazy loads
        for b1 in a1.bs:
            print(b1)

        # sync-style ORM code may modify objects as usual
        a1.data = "new data"


async def async_main() -> None:
    engine = create_async_engine("sqlite+aiosqlite://")
    async_session = async_sessionmaker(engine, expire_on_commit=False)

    async with async_session() as session:
        await session.run_sync(fetch_and_update_objects)
        await session.commit()

    await engine.dispose()


asyncio.run(async_main())
The above approach of running certain functions within a “sync” runner has some parallels to an application that runs a SQLAlchemy
application on top of an event-based programming library such as gevent . The differences are as follows:
1. unlike when using gevent , we can continue to use the standard Python asyncio event loop, or any custom event loop, without the
need to integrate into the gevent event loop.
2. There is no “monkeypatching” whatsoever. The above example makes use of a real asyncio driver and the underlying SQLAlchemy
connection pool is also using the Python built-in asyncio.Queue for pooling connections.
3. The program can freely switch between async/await code and contained functions that use sync code with virtually no
performance penalty. There is no “thread executor” or any additional waiters or synchronization in use.
4. The underlying network drivers are also using pure Python asyncio concepts; no third-party networking libraries such as those
provided by gevent and eventlet are in use.
However, as the asyncio extension surrounds the usual synchronous SQLAlchemy API, regular “synchronous” style event handlers are
freely available as they would be if asyncio were not used.
As detailed below, there are two current strategies to register events given asyncio-facing APIs:
Events can be registered at the instance level (e.g. a specific AsyncEngine instance) by associating the event with the sync
attribute that refers to the proxied object. For example to register the PoolEvents.connect() event against an AsyncEngine
instance, use its AsyncEngine.sync_engine attribute as target. Targets include:
AsyncEngine.sync_engine
AsyncConnection.sync_connection
AsyncConnection.sync_engine
AsyncSession.sync_session
To register an event at the class level, targeting all instances of the same type (e.g. all AsyncSession instances), use the
corresponding sync-style class. For example to register the SessionEvents.before_commit() event against the AsyncSession
class, use the Session class as the target.
To register at the sessionmaker level, combine an explicit sessionmaker with an async_sessionmaker using
async_sessionmaker.sync_session_class , and associate events with the sessionmaker .
When working within an event handler that is within an asyncio context, objects like the Connection continue to work in their usual
“synchronous” way without requiring await or async usage; when messages are ultimately received by the asyncio database adapter,
the calling style is transparently adapted back into the asyncio calling style. For events that are passed a DBAPI level connection, such
as PoolEvents.connect() , the object is a pep-249 compliant “connection” object which will adapt sync-style calls into the asyncio
driver.
In this example, we access the AsyncEngine.sync_engine attribute of AsyncEngine as the target for ConnectionEvents and
PoolEvents :
import asyncio

from sqlalchemy import event, text
from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine("postgresql+asyncpg://scott:tiger@localhost:5432/test")
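Registering a sync-style PoolEvents.connect() hook then uses the AsyncEngine.sync_engine attribute as the event target; the handler body and the go() coroutine below are an illustrative sketch:

@event.listens_for(engine.sync_engine, "connect")
def my_on_connect(dbapi_connection, connection_record):
    # a synchronous hook; no await is used here
    print("New DBAPI connection:", dbapi_connection)


async def go():
    async with engine.connect() as conn:
        await conn.execute(text("select 1"))
    await engine.dispose()


asyncio.run(go())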
import asyncio

from sqlalchemy import event, text
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine

engine = create_async_engine("postgresql+asyncpg://scott:tiger@localhost:5432/test")
session = AsyncSession(engine)
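The before_commit and after_commit hooks producing the output shown below are registered against the Session proxied by this AsyncSession, via its AsyncSession.sync_session attribute; the handler bodies here are an illustrative sketch consistent with that output:

@event.listens_for(session.sync_session, "before_commit")
def my_before_commit(session):
    print("before commit!")
    # sync-style API use within the hook; no await is needed
    result = session.execute(text("select 'execute from event'"))
    print(result.scalar())


@event.listens_for(session.sync_session, "after_commit")
def my_after_commit(session):
    print("after commit!")


async def go():
    await session.execute(text("select 1"))
    await session.commit()
    await session.close()
    await engine.dispose()


asyncio.run(go())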
Output:
before commit!
execute from event
after commit!
For this use case, we make a sessionmaker as the event target, then assign it to the async_sessionmaker using the
async_sessionmaker.sync_session_class parameter:
import asyncio

from sqlalchemy import event
from sqlalchemy.ext.asyncio import async_sessionmaker
from sqlalchemy.orm import sessionmaker

sync_maker = sessionmaker()
maker = async_sessionmaker(sync_session_class=sync_maker)


@event.listens_for(sync_maker, "before_commit")
def before_commit(session):
    print("before commit")


async def main():
    async_session = maker()
    await async_session.commit()


asyncio.run(main())
Output:
before commit
SQLAlchemy events by their nature take place within the interior of a particular SQLAlchemy process; that is, an event always occurs
after some particular SQLAlchemy API has been invoked by end-user code, and before some other internal aspect of that API occurs.
Contrast this to the architecture of the asyncio extension, which takes place on the exterior of SQLAlchemy’s usual flow from end-user
API to DBAPI function.
Where above, an API call always starts as asyncio, flows through the synchronous API, and ends as asyncio, before results are
propagated through this same chain in the opposite direction. In between, the message is adapted first into sync-style API use, and then
back out to async style. Event hooks then by their nature occur in the middle of the “sync-style API use”. From this it follows that the API
presented within event hooks occurs inside the process by which asyncio API requests have been adapted to sync, and outgoing
messages to the database API will be converted to asyncio transparently.
To accommodate this use case, SQLAlchemy’s AdaptedConnection class provides a method AdaptedConnection.run_async() that
allows an awaitable function to be invoked within the “synchronous” context of an event handler or other SQLAlchemy internal. This
method is directly analogous to the AsyncConnection.run_sync() method that allows a sync-style method to run under async.
AdaptedConnection.run_async() should be passed a function that will accept the innermost “driver” connection as a single
argument, and return an awaitable that will be invoked by the AdaptedConnection.run_async() method. The given function itself
does not need to be declared as async ; it’s perfectly fine for it to be a Python lambda: , as the returned awaitable will be invoked
after being returned:
from sqlalchemy import event

engine = create_async_engine(...)


@event.listens_for(engine.sync_engine, "connect")
def register_custom_types(dbapi_connection, *args):
dbapi_connection.run_async(
lambda connection: connection.set_type_codec(
"MyCustomType",
encoder,
decoder, # ...
)
)
Above, the object passed to the register_custom_types event handler is an instance of AdaptedConnection , which provides a
DBAPI-like interface to an underlying async-only driver-level connection object. The AdaptedConnection.run_async() method then
provides access to an awaitable environment where the underlying driver level connection may be acted upon.
If an AsyncEngine is to be passed from one event loop to another, the method AsyncEngine.dispose() should be called before it’s re-
used on a new event loop. Failing to do so may lead to a RuntimeError along the lines of Task <Task pending ...> got Future
attached to a different loop
If the same engine must be shared between different loops, it should be configured to disable pooling using NullPool , preventing the
Engine from using any connection more than once:
engine = create_async_engine(
"postgresql+asyncpg://user:pass@host/dbname",
poolclass=NullPool,
)
Tip
SQLAlchemy generally does not recommend the “scoped” pattern for new development as it relies upon mutable global state that must
also be explicitly torn down when work within the thread or task is complete. Particularly when using asyncio, it’s likely a better idea to
pass the AsyncSession directly to the awaitable functions that need it.
When using async_scoped_session , as there’s no “thread-local” concept in the asyncio context, the “scopefunc” parameter must be
provided to the constructor. The example below illustrates using the asyncio.current_task() function for this purpose:
from asyncio import current_task

from sqlalchemy.ext.asyncio import async_scoped_session, async_sessionmaker

async_session_factory = async_sessionmaker(
    some_async_engine,
    expire_on_commit=False,
)
AsyncScopedSession = async_scoped_session(
    async_session_factory,
    scopefunc=current_task,
)
some_async_session = AsyncScopedSession()
Warning
The “scopefunc” used by async_scoped_session is invoked an arbitrary number of times within a task, once for each
time the underlying AsyncSession is accessed. The function should therefore be idempotent and lightweight, and
should not attempt to create or mutate any state, such as establishing callbacks, etc.
Warning
Using current_task() for the “key” in the scope requires that the async_scoped_session.remove() method is called
from within the outermost awaitable, to ensure the key is removed from the registry when the task is complete,
otherwise the task handle as well as the AsyncSession will remain in memory, essentially creating a memory leak.
See the following example which illustrates the correct use of async_scoped_session.remove() .
async_scoped_session includes proxy behavior similar to that of scoped_session , which means it can be treated as a AsyncSession
directly, keeping in mind that the usual await keywords are necessary, including for the async_scoped_session.remove() method:
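For example, a brief sketch re-using the AsyncScopedSession registry from above (the A mapping is assumed from earlier examples):

some_async_session = AsyncScopedSession()

result = await some_async_session.execute(select(A))
print(result.scalars().all())

# the registry-level remove() is itself a coroutine and must be awaited
await AsyncScopedSession.remove()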
import asyncio

from sqlalchemy import inspect
from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine("postgresql+asyncpg://scott:tiger@localhost/test")


def use_inspector(conn):
    inspector = inspect(conn)
    # use the inspector
    print(inspector.get_view_names())
    # return any value to the caller
    return inspector.get_table_names()


async def async_main():
    async with engine.connect() as conn:
        tables = await conn.run_sync(use_inspector)


asyncio.run(async_main())
See also
Arguments passed to create_async_engine() are mostly identical to those passed to the create_engine() function. The
specified dialect must be an asyncio-compatible dialect such as asyncpg.
Parameters:
async_creator –
an async callable which returns a driver-level asyncio connection. If given, the function should take no arguments, and return a
new asyncio connection from the underlying asyncio database driver; the connection will be wrapped in the appropriate
structures to be used with the AsyncEngine . Note that the parameters specified in the URL are not applied here, and the
creator function should use its own connection parameters.
This parameter is the asyncio equivalent of the create_engine.creator parameter of the create_engine() function.
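A brief sketch of this parameter, assuming the asyncpg driver; the connection arguments shown are placeholders:

import asyncpg

from sqlalchemy.ext.asyncio import create_async_engine


async def custom_creator():
    # supply connection parameters directly to the driver; the URL below
    # is consulted only for the dialect name
    return await asyncpg.connect(
        user="scott", password="tiger", host="localhost", database="test"
    )


engine = create_async_engine("postgresql+asyncpg://", async_creator=custom_creator)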
This function is analogous to the engine_from_config() function in SQLAlchemy Core, except that the requested dialect must be
an asyncio-compatible dialect such as asyncpg. The argument signature of the function is identical to that of
engine_from_config() .
Arguments passed to create_async_pool_from_url() are mostly identical to those passed to the create_pool_from_url()
function. The specified dialect must be an asyncio-compatible dialect such as asyncpg.
Members
Class signature
Return a context manager which when entered will deliver an AsyncConnection with an AsyncTransaction established.
E.g.:
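For example (a sketch; async_engine and the text() statement are placeholders):

async with async_engine.begin() as conn:
    await conn.execute(text("insert into mytable (x, y) values (1, 2)"))
# the transaction is committed when the block exits without an error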
This applies only to the built-in cache that is established via the create_engine.query_cache_size parameter. It will not impact
any dictionary caches that were passed via the Connection.execution_options.query_cache parameter.
The AsyncConnection will procure a database connection from the underlying connection pool when it is entered as an async
context manager:
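For example (a sketch):

async with async_engine.connect() as conn:
    result = await conn.execute(text("select 1"))
    print(result.scalar())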
The AsyncConnection may also be started outside of a context manager by invoking its AsyncConnection.start() method.
Parameters:
close –
if left at its default of True , has the effect of fully closing all currently checked in database connections. Connections that
are still checked out will not be closed, however they will no longer be associated with this Engine , so when they are closed
individually, eventually the Pool which they are associated with will be garbage collected and they will be closed out fully, if
not already closed on checkin.
If set to False , the previous connection pool is de-referenced, and otherwise not touched in any way.
See also
Engine.dispose()
This has the effect of setting the Python logging level for the namespace of this element’s class and object reference. A value of
boolean True indicates that the loglevel logging.INFO will be set for the logger, whereas the string value debug will set the loglevel
to logging.DEBUG .
Used for legacy schemes that accept Connection / Engine objects within the same variable.
Return a new AsyncEngine that will provide AsyncConnection objects with the given execution options.
Get the non-SQL options which will take effect during execution.
See also
Engine.execution_options()
See also
See also
The given keys/values in **opt are added to the default execution options that will be used for all connections. The initial contents of
this dictionary can be sent via the execution_options parameter to create_engine() .
See also
Connection.execution_options()
Engine.execution_options()
Members
Class signature
The AsyncConnection.aclose() name is specifically to support the Python standard library @contextlib.aclosing context
manager function.
This has the effect of also rolling back the transaction if one is in place.
This method commits the current transaction if one has been started. If no transaction was started, the method has no effect,
assuming the connection is in a non-invalidated state.
A transaction is begun on a Connection automatically whenever a statement is first executed, or when the Connection.begin()
method is called.
The initial-connection time isolation level associated with the Dialect in use.
Calling this accessor does not invoke any new SQL queries.
See also
Parameters:
object –
The statement to be executed. This is always an object that is in both the ClauseElement and Executable hierarchies,
including:
Select
parameters – parameters which will be bound into the statement. This may be either a dictionary of parameter names to
values, or a mutable sequence (e.g. a list) of dictionaries. When a list of dictionaries is passed, the underlying statement
execution will make use of the DBAPI cursor.executemany() method. When a single dictionary is passed, the DBAPI
cursor.execute() method will be used.
execution_options – optional dictionary of execution options, which will be associated with the statement execution. This
dictionary can provide a subset of the options that are accepted by Connection.execution_options() .
Returns:
a Result object.
Set non-SQL options for the connection which take effect during execution.
This returns this AsyncConnection object with the new options added.
This makes use of the underlying synchronous connection’s Connection.get_nested_transaction() method to get the current
Transaction , which is then proxied in a new AsyncTransaction object.
This is a SQLAlchemy connection-pool proxied connection which then has the attribute _ConnectionFairy.driver_connection
that refers to the actual driver connection. Its _ConnectionFairy.dbapi_connection refers instead to an AdaptedConnection
instance that adapts the driver connection to the DBAPI protocol.
This makes use of the underlying synchronous connection’s Connection.get_transaction() method to get the current
Transaction , which is then proxied in a new AsyncTransaction object.
This dictionary is freely writable for user-defined state to be associated with the database connection.
This attribute is only available if the AsyncConnection is currently connected. If the AsyncConnection.closed attribute is True ,
then accessing this attribute will raise ResourceClosedError .
This method rolls back the current transaction if one has been started. If no transaction was started, the method has no effect. If a
transaction was started and the connection is in an invalidated state, the transaction is cleared using this method.
A transaction is begun on a Connection automatically whenever a statement is first executed, or when the Connection.begin()
method is called.
method async sqlalchemy.ext.asyncio.AsyncConnection.run_sync(fn: Callable[..., _T], *arg: Any, **kw: Any) → _T
Invoke the given synchronous (i.e. not async) callable, passing a synchronous-style Connection as the first argument.
This method allows traditional synchronous SQLAlchemy functions to run within the context of an asyncio application.
E.g.:
def do_something_with_core(conn: Connection, arg1: int, arg2: str) -> str:
    '''A synchronous function invoked via AsyncConnection.run_sync();
    it receives a sync-style Connection as its first argument.
    '''
    conn.execute(
        some_table.insert().values(int_col=arg1, str_col=arg2)
    )
    return "success"
This method maintains the asyncio event loop all the way through to the database connection by running the given callable in a
specially instrumented greenlet.
The most rudimentary use of AsyncConnection.run_sync() is to invoke methods such as MetaData.create_all() , given an
AsyncConnection that needs to be provided to MetaData.create_all() as a Connection object:
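For example (a sketch, assuming a MetaData collection named metadata_obj):

async with async_engine.begin() as conn:
    # MetaData.create_all() is a plain synchronous method; run_sync()
    # provides it with a sync-style Connection
    await conn.run_sync(metadata_obj.create_all)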
Note
The provided callable is invoked inline within the asyncio event loop, and will block on traditional IO calls. IO within
this callable should only call into SQLAlchemy’s asyncio database APIs which will be properly adapted to the
greenlet context.
See also
AsyncSession.run_sync()
This method is shorthand for invoking the Result.scalar() method after invoking the Connection.execute() method.
Parameters are equivalent.
Returns:
a scalar Python value representing the first column of the first row returned.
This method is shorthand for invoking the Result.scalars() method after invoking the Connection.execute() method.
Parameters are equivalent.
Returns:
a ScalarResult object.
Start this AsyncConnection object’s context outside of using a Python with: block.
E.g.:
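For example (a sketch):

conn = async_engine.connect()
await conn.start()
try:
    await conn.execute(text("select 1"))
finally:
    await conn.close()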
The AsyncConnection.stream() method supports optional context manager use against the AsyncResult object, as in:
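For example (a sketch; some_table is a placeholder):

async with conn.stream(select(some_table)) as result:
    async for row in result:
        print(row)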
In the above pattern, the AsyncResult.close() method is invoked unconditionally, even if the iterator is interrupted by an
exception throw. Context manager use remains optional, however, and the function may be called in either an async with fn(): or
await fn() style.
Returns:
See also
AsyncConnection.stream_scalars()
E.g.:
This method is shorthand for invoking the AsyncResult.scalars() method after invoking the Connection.stream() method.
Parameters are equivalent.
The AsyncConnection.stream_scalars() method supports optional context manager use against the AsyncScalarResult
object, as in:
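For example (a sketch; some_table is a placeholder):

async with conn.stream_scalars(select(some_table.c.data)) as result:
    async for value in result:
        print(value)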
In the above pattern, the AsyncScalarResult.close() method is invoked unconditionally, even if the iterator is interrupted by an
exception throw. Context manager use remains optional, however, and the function may be called in either an async with fn(): or
await fn() style.
Returns:
See also
AsyncConnection.stream()
See also
Reference to the sync-style Engine this AsyncConnection is associated with via its underlying Connection .
See also
Members
Class signature
If this transaction is the base transaction in a begin/commit nesting, the transaction will rollback(). Otherwise, the method returns.
This is used to cancel a Transaction without affecting the scope of an enclosing transaction.
Start this AsyncTransaction object’s context outside of using a Python with: block.
AsyncMappingResult A wrapper for an AsyncResult that returns dictionary values rather than Row values.
AsyncScalarResult A wrapper for an AsyncResult that returns scalar values rather than Row values.
AsyncTupleResult An AsyncResult that’s typed as returning plain Python tuples instead of rows.
The AsyncResult only applies to statement executions that use a server-side cursor. It is returned only from the
AsyncConnection.stream() and AsyncSession.stream() methods.
Note
As is the case with Result , this object is used for ORM results returned by AsyncSession.execute() , which can yield
instances of ORM mapped objects either individually or within tuple-like rows. Note that these result objects do
not deduplicate instances or rows automatically as is the case with the legacy Query object. For in-Python
de-duplication of instances or rows, use the AsyncResult.unique() modifier method.
Members
Class signature
Closes the result set after invocation. Subsequent invocations will return an empty list.
Returns:
proxies the .closed attribute of the underlying result object, if any, else raises AttributeError .
Refer to Result.columns() in the synchronous SQLAlchemy API for a complete behavioral description.
Returns:
See also
AsyncResult.partitions()
To fetch the first row of a result only, use the AsyncResult.first() method. To iterate through all rows, iterate the AsyncResult
object directly.
Returns:
Note
This method returns one row, e.g. tuple, by default. To return exactly one single scalar value, that is, the first
column of the first row, use the AsyncResult.scalar() method, or combine AsyncResult.scalars() and
AsyncResult.first() .
Additionally, in contrast to the behavior of the legacy ORM Query.first() method, no limit is applied to the SQL
query which was invoked to produce this AsyncResult ; for a DBAPI driver that buffers results in memory before
yielding rows, all rows will be sent to the Python process and all but the first row will be discarded.
See also
Returns:
See also
AsyncResult.scalar()
AsyncResult.one()
Return a callable object that will produce copies of this AsyncResult when invoked.
This is used for result set caching. The method must be called on the result when it has been unconsumed, and calling the method
will consume the result fully. When the FrozenResult is retrieved from a cache, it can be called any number of times where it will
produce a new Result object each time against its stored set of rows.
See also
Re-Executing Statements - example usage within the ORM to implement a result-set cache.
Return an iterable view which yields the string keys that would be represented by each Row .
The keys can represent the labels of the columns returned by a core statement or the names of the orm classes returned by an orm
execution.
The view also can be tested for key containment using the Python in operator, which will test both for the string keys represented
in the view, as well as for alternate keys such as column objects.
Changed in version 1.4: a key view object is returned rather than a plain list.
When this filter is applied, fetching rows will return RowMapping objects instead of Row objects.
Returns:
Raises NoResultFound if the result returns no rows, or MultipleResultsFound if multiple rows would be returned.
Note
This method returns one row, e.g. tuple, by default. To return exactly one single scalar value, that is, the first
column of the first row, use the AsyncResult.scalar_one() method, or combine AsyncResult.scalars() and
AsyncResult.one() .
Returns:
Raises:
MultipleResultsFound , NoResultFound
See also
AsyncResult.first()
AsyncResult.one_or_none()
AsyncResult.scalar_one()
Returns None if the result has no rows. Raises MultipleResultsFound if multiple rows are returned.
Returns:
Raises:
MultipleResultsFound
See also
AsyncResult.first()
AsyncResult.one()
Refer to Result.partitions() in the synchronous SQLAlchemy API for a complete behavioral description.
Fetch the first column of the first row, and close the result set.
After calling this method, the object is fully closed, e.g. the CursorResult.close() method will have been called.
Returns:
See also
AsyncResult.one()
AsyncResult.scalars()
See also
AsyncResult.one_or_none()
AsyncResult.scalars()
Return an AsyncScalarResult filtering object which will return single elements rather than Row objects.
Refer to Result.scalars() in the synchronous SQLAlchemy API for a complete behavioral description.
Parameters:
index – integer or row key indicating the column to be fetched from each row, defaults to 0 indicating the first column.
Returns:
attribute sqlalchemy.ext.asyncio.AsyncResult.t
This method returns the same AsyncResult object at runtime, however annotates as returning an AsyncTupleResult object that
will indicate to PEP 484 typing tools that plain typed Tuple instances are returned rather than rows. This allows tuple unpacking
and __getitem__ access of Row objects to be typed, for those cases where the statement invoked itself included typing
information.
Returns:
See also
Refer to Result.unique() in the synchronous SQLAlchemy API for a complete behavioral description.
The FilterResult.yield_per() method is a pass through to the Result.yield_per() method. See that method’s
documentation for usage notes.
New in version 1.4.40: - added FilterResult.yield_per() so that the method is available on all result set implementations
See also
Using Server Side Cursors (a.k.a. stream results) - describes Core behavior for Result.yield_per()
Fetching Large Result Sets with Yield Per - in the ORM Querying Guide
A wrapper for an AsyncResult that returns scalar values rather than Row values.
Refer to the ScalarResult object in the synchronous SQLAlchemy API for a complete behavioral description.
Members
Class signature
Equivalent to AsyncResult.all() except that scalar values, rather than Row objects, are returned.
proxies the .closed attribute of the underlying result object, if any, else raises AttributeError .
Equivalent to AsyncResult.first() except that scalar values, rather than Row objects, are returned.
Equivalent to AsyncResult.one() except that scalar values, rather than Row objects, are returned.
Equivalent to AsyncResult.one_or_none() except that scalar values, rather than Row objects, are returned.
Equivalent to AsyncResult.partitions() except that scalar values, rather than Row objects, are returned.
The FilterResult.yield_per() method is a pass through to the Result.yield_per() method. See that method’s
documentation for usage notes.
New in version 1.4.40: - added FilterResult.yield_per() so that the method is available on all result set implementations
See also
Using Server Side Cursors (a.k.a. stream results) - describes Core behavior for Result.yield_per()
Fetching Large Result Sets with Yield Per - in the ORM Querying Guide
A wrapper for an AsyncResult that returns dictionary values rather than Row values.
Refer to the MappingResult object in the synchronous SQLAlchemy API for a complete behavioral description.
Members
all() , close() , closed , columns() , fetchall() , fetchmany() , fetchone() , first() , keys() , one() ,
one_or_none() , partitions() , unique() , yield_per()
Class signature
Equivalent to AsyncResult.all() except that RowMapping values, rather than Row objects, are returned.
proxies the .closed attribute of the underlying result object, if any, else raises AttributeError .
Equivalent to AsyncResult.fetchmany() except that RowMapping values, rather than Row objects, are returned.
Equivalent to AsyncResult.fetchone() except that RowMapping values, rather than Row objects, are returned.
Equivalent to AsyncResult.first() except that RowMapping values, rather than Row objects, are returned.
Return an iterable view which yields the string keys that would be represented by each Row .
The keys can represent the labels of the columns returned by a core statement or the names of the orm classes returned by an orm
execution.
The view also can be tested for key containment using the Python in operator, which will test both for the string keys represented
in the view, as well as for alternate keys such as column objects.
Changed in version 1.4: a key view object is returned rather than a plain list.
Equivalent to AsyncResult.one() except that RowMapping values, rather than Row objects, are returned.
Equivalent to AsyncResult.one_or_none() except that RowMapping values, rather than Row objects, are returned.
Equivalent to AsyncResult.partitions() except that RowMapping values, rather than Row objects, are returned.
The FilterResult.yield_per() method is a pass through to the Result.yield_per() method. See that method’s
documentation for usage notes.
New in version 1.4.40: - added FilterResult.yield_per() so that the method is available on all result set implementations
See also
Using Server Side Cursors (a.k.a. stream results) - describes Core behavior for Result.yield_per()
Fetching Large Result Sets with Yield Per - in the ORM Querying Guide
Since Row acts like a tuple in every way already, this class is a typing-only class; the regular AsyncResult is still used at runtime.
Class signature
async_object_session (instance) Return the AsyncSession to which the given instance belongs.
async_session (session) Return the AsyncSession which is proxying the given Session object, if any.
AsyncAttrs Mixin class which provides an awaitable accessor for all attributes.
This function makes use of the sync-API function object_session to retrieve the Session which refers to the given instance, and
from there links it to the original AsyncSession .
If the AsyncSession has been garbage collected, the return value is None .
Parameters:
Returns:
Return the AsyncSession which is proxying the given Session object, if any.
Parameters:
Returns:
See also
close_all_sessions()
The async_sessionmaker factory works in the same way as the sessionmaker factory, to generate new AsyncSession objects
when called, creating them given the configurational arguments established here.
e.g.:
engine = create_async_engine("postgresql+asyncpg://scott:tiger@localhost/test")
async_session = async_sessionmaker(engine)

await run_some_sql(async_session)

await engine.dispose()
The async_sessionmaker is useful so that different parts of a program can create new AsyncSession objects with a fixed
configuration established up front. Note that AsyncSession objects may also be instantiated directly when not using
async_sessionmaker .
New in version 2.0: async_sessionmaker provides a sessionmaker class that’s dedicated to the AsyncSession object,
including pep-484 typing support.
See also
sessionmaker architecture
Opening and Closing a Session - introductory text on creating sessions using sessionmaker .
Members
Class signature
Produce a new AsyncSession object using the configuration established in this async_sessionmaker .
In Python, the __call__ method is invoked on an object when it is “called” in the same way as a function:
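For example, as a brief sketch (some_engine is a placeholder AsyncEngine):

AsyncSession = async_sessionmaker(some_engine)

# invokes async_sessionmaker.__call__(), producing a new AsyncSession
session = AsyncSession()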
All arguments here except for class_ correspond to arguments accepted by Session directly. See the AsyncSession.__init__()
docstring for more details on parameters.
Produce a context manager that both provides a new AsyncSession as well as a transaction that commits.
e.g.:
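A brief sketch, re-using the AsyncSession factory named above (some_object is a placeholder):

async with AsyncSession.begin() as session:
    session.add(some_object)

The transaction is committed and the AsyncSession is closed when the block exits.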
e.g.:
AsyncSession = async_sessionmaker(some_engine)
AsyncSession.configure(bind=create_async_engine('sqlite+aiosqlite://'))
See the section Using asyncio scoped session for usage details.
Members
Class signature
Return the current AsyncSession , creating it using the scoped_session.session_factory if not present.
Parameters:
**kw – Keyword arguments will be passed to the scoped_session.session_factory callable, if an existing AsyncSession
is not present. If the AsyncSession is present and keyword arguments have been passed, InvalidRequestError is raised.
Parameters:
session_factory – a factory to create new AsyncSession instances. This is usually, but not necessarily, an instance of
async_sessionmaker .
scopefunc – function which defines the current scope. A function such as asyncio.current_task may be useful here.
The AsyncSession.aclose() name is specifically to support the Python standard library @contextlib.aclosing context manager
function.
Objects that are in the transient state when passed to the Session.add() method will move to the pending state, until the next
flush, at which point they will move to the persistent state.
Objects that are in the detached state when passed to the Session.add() method will move to the persistent state directly.
If the transaction used by the Session is rolled back, objects which were transient when they were passed to Session.add() will
be moved back to the transient state, and will no longer be present within this Session .
See also
Session.add_all()
See also
Session.add()
The underlying Session will perform the “begin” action when the AsyncSessionTransaction object is entered:
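For example, a brief sketch (some_object is a placeholder):

async with session.begin():
    session.add(some_object)

The block commits the transaction on a successful exit and rolls it back if an exception is raised.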
Note that database IO will not normally occur when the session-level transaction is begun, as database transactions begin on an on-
demand basis. However, the begin block is async to accommodate for a SessionEvents.after_transaction_create() event
hook that may perform IO.
Return an AsyncSessionTransaction object which will begin a “nested” transaction, e.g. SAVEPOINT.
See also
Serializable isolation / Savepoints / Transactional DDL (asyncio version) - special workarounds required with the SQLite asyncio driver
in order for SAVEPOINT to work correctly.
Close out the transactional resources and ORM objects used by this AsyncSession .
See also
Deprecated since version 2.0: The AsyncSession.close_all() method is deprecated and will be removed in a future
release. Please refer to close_all_sessions() .
See also
See sessionmaker.configure() .
This method may also be used to establish execution options for the database connection used by the current transaction.
New in version 1.4.24: Added **kw arguments which are passed through to the underlying Session.connection() method.
See also
As this operation may need to cascade along unloaded relationships, it is awaitable to allow for those queries to take place.
See also
E.g.:
some_mapped_object in session.dirty
Instances are considered dirty when they were modified but not deleted.
Note that this ‘dirty’ calculation is ‘optimistic’; most attribute-setting or collection modification operations will mark an instance as
‘dirty’ and place it in this set, even if there is no net change to the attribute’s value. At flush time, the value of each attribute is
compared to its previously saved value, and if there’s no net change, no SQL operation will occur (this is a more expensive operation
so it’s only done at flush time).
To check if an instance has actionable net changes to its attributes, use the Session.is_modified() method.
See also
Marks the attributes of an instance as out of date. When an expired attribute is next accessed, a query will be issued to the Session
object’s current transactional context in order to load all expired attributes for the given instance. Note that a highly isolated
transaction will return the same values as were previously read in that same transaction, regardless of changes in database state
outside of that transaction.
The Session object’s default behavior is to expire all state whenever the Session.rollback() or Session.commit() methods are
called, so that new state can be loaded for the new transaction. For this reason, calling Session.expire() only makes sense for the
specific case that a non-ORM SQL statement was emitted in the current transaction.
Parameters:
attribute_names – optional list of string attribute names indicating a subset of attributes to be expired.
See also
Session.expire()
Session.refresh()
Query.populate_existing()
When any attributes on a persistent instance are next accessed, a query will be issued using the Session object’s current
transactional context in order to load all expired attributes for the given instance. Note that a highly isolated transaction will return
the same values as were previously read in that same transaction, regardless of changes in database state outside of that
transaction.
To expire individual objects and individual attributes on those objects, use Session.expire() .
The Session object’s default behavior is to expire all state whenever the Session.rollback() or Session.commit() methods are
called, so that new state can be loaded for the new transaction. For this reason, calling Session.expire_all() is not usually
needed, assuming the transaction is isolated.
See also
Session.expire()
Session.refresh()
Query.populate_existing()
This will free all internal references to the instance. Cascading will be applied according to the expunge cascade rule.
See also
Return an instance based on the given primary key identifier, or None if not found.
See also
Unlike the Session.get_bind() method, this method is currently not used by this AsyncSession in any way in order to resolve
engines for requests.
Note
This method proxies directly to the Session.get_bind() method, however is currently not useful as an override
target, in contrast to that of the Session.get_bind() method. The example below illustrates how to implement
custom Session.get_bind() schemes that work with AsyncSession and AsyncEngine .
The pattern introduced at Custom Vertical Partitioning illustrates how to apply a custom bind-lookup scheme to a Session given a
set of Engine objects. To apply a corresponding Session.get_bind() implementation for use with AsyncSession and
AsyncEngine objects, continue to subclass Session and apply it to AsyncSession using AsyncSession.sync_session_class . The
inner method must continue to return Engine instances, which can be acquired from an AsyncEngine using the
AsyncEngine.sync_engine attribute:
import random
class RoutingSession(Session):
def get_bind(self, mapper=None, clause=None, **kw):
# within get_bind(), return sync engines
if mapper and issubclass(mapper.class_, MyOtherClass):
return engines['other'].sync_engine
elif self._flushing or isinstance(clause, (Update, Delete)):
return engines['leader'].sync_engine
else:
return engines[
random.choice(['follower1','follower2'])
].sync_engine
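The custom Session subclass is then supplied as the synchronous session class for the AsyncSession factory (a sketch; the factory name is illustrative):

# apply the RoutingSession subclass via sync_session_class
AsyncSessionMaker = async_sessionmaker(sync_session_class=RoutingSession)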
The Session.get_bind() method is called in a non-asyncio, implicitly non-blocking context in the same manner as ORM event
hooks and functions that are invoked via AsyncSession.run_sync() , so routines that wish to run SQL commands inside of
Session.get_bind() can continue to do so using blocking-style code, which will be translated to implicitly async calls at the point
of invoking IO on the database drivers.
Return an instance based on the given primary key identifier, or raise an exception if not found.
New in version 2.0.22.
See also
A user-modifiable dictionary.
The initial value of this dictionary can be populated using the info argument to the Session constructor or sessionmaker
constructor or factory methods. The dictionary here is always local to this Session and can be modified independently of all other
Session objects.
Changed in version 1.4: The Session no longer begins a new transaction immediately, so this attribute will be False when the
Session is first instantiated.
“partial rollback” state typically indicates that the flush process of the Session has failed, and that the Session.rollback()
method must be emitted in order to fully roll back the transaction.
If this Session is not in a transaction at all, the Session will autobegin when it is first used, so in this case Session.is_active will
return True.
Otherwise, if this Session is within a transaction, and that transaction has not been rolled back internally, the Session.is_active
will also return True.
See also
“This Session’s transaction has been rolled back due to a previous exception during flush.” (or similar)
Session.in_transaction()
This method retrieves the history for each instrumented attribute on the instance and performs a comparison of the current value
to its previously committed value, if any.
It is in effect a more expensive and accurate version of checking for the given instance in the Session.dirty collection; a full test
for each attribute’s net “dirty” status is performed.
E.g.:
return session.is_modified(someobject)
Instances present in the Session.dirty collection may report False when tested with this method. This is because the object
may have received change events via attribute mutation, thus placing it in Session.dirty , but ultimately the state is the same
as that loaded from the database, resulting in no net change here.
Scalar attributes may not have recorded the previously set value when a new value was applied, if the attribute was not loaded,
or was expired, at the time the new value was received - in these cases, the attribute is assumed to have a change, even if there
is ultimately no net change against its database value. SQLAlchemy in most cases does not need the “old” value when a set
event occurs, so it skips the expense of a SQL call if the old value isn’t present, based on the assumption that an UPDATE of
the scalar value is usually needed, and in those few cases where it isn’t, is less expensive on average than issuing a defensive
SELECT.
The “old” value is fetched unconditionally upon set only if the attribute container has the active_history flag set to True .
This flag is set typically for primary key attributes and scalar object references that are not a simple many-to-one. To set this
flag for any arbitrary mapped column, use the active_history argument with column_property() .
Parameters:
include_collections – Indicates if multivalued collections should be included in the operation. Setting this to False is a way
to detect only local-column based properties (i.e. scalar columns or many-to-one foreign keys) that would result in an
UPDATE for this instance upon flush.
method async sqlalchemy.ext.asyncio.async_scoped_session.merge(instance: _O, *, load: bool = True, options: Sequence[ORMOption] | None = None) → _O
Copy the state of a given instance into a corresponding instance within this AsyncSession .
See also
e.g.:
with session.no_autoflush:
some_object = SomeClass()
session.add(some_object)
# won't autoflush
some_object.related_thing = session.query(SomeRelated).first()
Operations that proceed within the with: block will not be subject to flushes occurring upon query access. This is useful when
initializing a series of objects which involve existing database queries, where the uncompleted object should not yet be flushed.
A query will be issued to the database and all attributes will be refreshed with their current database value.
This is the async version of the Session.refresh() method. See that method for a complete description of all options.
See also
Unlike the scoped_session.remove() method, this method uses await in order to wait for the close() method of the AsyncSession .
Close out the transactional resources and ORM objects used by this Session , resetting the session to its initial state.
See also
See also
See also
Returns:
a ScalarResult object
See also
The session_factory provided to __init__ is stored in this attribute and may be accessed at a later time. This can be useful when a
new non-scoped AsyncSession is needed.
Returns:
an AsyncScalarResult object
See also
E.g.:
class Base(AsyncAttrs, DeclarativeBase):
    pass


class A(Base):
    __tablename__ = "a"

    id: Mapped[int] = mapped_column(primary_key=True)
    data: Mapped[str]
    bs: Mapped[List["B"]] = relationship()


class B(Base):
    __tablename__ = "b"

    id: Mapped[int] = mapped_column(primary_key=True)
    a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))
    data: Mapped[str]
In the above example, the AsyncAttrs mixin is applied to the declarative Base class where it takes effect for all subclasses. This
mixin adds a single new attribute AsyncAttrs.awaitable_attrs to all classes, which will yield the value of any attribute as an
awaitable. This allows attributes which may be subject to lazy loading or deferred / unexpiry loading to be accessed such that IO can
still be emitted:
The AsyncAttrs.awaitable_attrs performs a call against the attribute that is approximately equivalent to using the
AsyncSession.run_sync() method, e.g.:
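A sketch of that equivalence, re-using the a1 object and session from the examples above:

for b1 in await session.run_sync(lambda sess: a1.bs):
    print(b1)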
Members
awaitable_attrs
See also
e.g.:
The AsyncSession is not safe for use in concurrent tasks. See Is the Session thread-safe? Is AsyncSession safe to share in
concurrent tasks? for background.
To use an AsyncSession with custom Session implementations, see the AsyncSession.sync_session_class parameter.
Members
Class signature
The class or callable that provides the underlying Session instance for a particular AsyncSession .
At the class level, this attribute is the default value for the AsyncSession.sync_session_class parameter. Custom subclasses of
AsyncSession can override this.
At the instance level, this attribute indicates the current class or callable that was used to provide the Session instance for this
AsyncSession instance.
All parameters other than sync_session_class are passed to the sync_session_class callable directly to instantiate a new
Session . Refer to Session.__init__() for parameter documentation.
Parameters:
sync_session_class –
A Session subclass or other callable which will be used to construct the Session which will be proxied. This parameter may
be used to provide custom Session subclasses. Defaults to the AsyncSession.sync_session_class class-level attribute.
The AsyncSession.aclose() name is specifically to support the Python standard library @contextlib.aclosing context manager
function.
Objects that are in the transient state when passed to the Session.add() method will move to the pending state, until the next
flush, at which point they will move to the persistent state.
Objects that are in the detached state when passed to the Session.add() method will move to the persistent state directly.
If the transaction used by the Session is rolled back, objects which were transient when they were passed to Session.add() will
be moved back to the transient state, and will no longer be present within this Session .
See also
Session.add_all()
See also
Session.add()
The underlying Session will perform the “begin” action when the AsyncSessionTransaction object is entered:
Note that database IO will not normally occur when the session-level transaction is begun, as database transactions begin on an on-
demand basis. However, the begin block is async to accommodate for a SessionEvents.after_transaction_create() event
hook that may perform IO.
Return an AsyncSessionTransaction object which will begin a “nested” transaction, e.g. SAVEPOINT.
See also
Serializable isolation / Savepoints / Transactional DDL (asyncio version) - special workarounds required with the SQLite asyncio driver
in order for SAVEPOINT to work correctly.
Close out the transactional resources and ORM objects used by this AsyncSession .
See also
Deprecated since version 2.0: The AsyncSession.close_all() method is deprecated and will be removed in a future
release. Please refer to close_all_sessions() .
See also
This method may also be used to establish execution options for the database connection used by the current transaction.
New in version 1.4.24: Added **kw arguments which are passed through to the underlying Session.connection() method.
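For example, a sketch of passing execution options through to the connection; the isolation level shown is only illustrative and is backend-dependent:
conn = await session.connection(
    execution_options={"isolation_level": "SERIALIZABLE"}
)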
See also
As this operation may need to cascade along unloaded relationships, it is awaitable to allow for those queries to take place.
See also
E.g.:
some_mapped_object in session.dirty
Instances are considered dirty when they were modified but not deleted.
Note that this ‘dirty’ calculation is ‘optimistic’; most attribute-setting or collection modification operations will mark an instance as ‘dirty’ and place it in this set, even if there is no net change to the attribute’s value. At flush time, the value of each attribute is compared to its previously saved value, and if there’s no net change, no SQL operation will occur (this is a more expensive operation so it’s only done at flush time).
To check if an instance has actionable net changes to its attributes, use the Session.is_modified() method.
See also
Marks the attributes of an instance as out of date. When an expired attribute is next accessed, a query will be issued to the Session
object’s current transactional context in order to load all expired attributes for the given instance. Note that a highly isolated
transaction will return the same values as were previously read in that same transaction, regardless of changes in database state
outside of that transaction.
The Session object’s default behavior is to expire all state whenever the Session.rollback() or Session.commit() methods are
called, so that new state can be loaded for the new transaction. For this reason, calling Session.expire() only makes sense for the
specific case that a non-ORM SQL statement was emitted in the current transaction.
Parameters:
attribute_names – optional list of string attribute names indicating a subset of attributes to be expired.
See also
Session.expire()
Session.refresh()
Query.populate_existing()
When any attribute on a persistent instance is next accessed, a query will be issued using the Session object’s current
transactional context in order to load all expired attributes for the given instance. Note that a highly isolated transaction will return
the same values as were previously read in that same transaction, regardless of changes in database state outside of that
transaction.
To expire individual objects and individual attributes on those objects, use Session.expire() .
The Session object’s default behavior is to expire all state whenever the Session.rollback() or Session.commit() methods are
called, so that new state can be loaded for the new transaction. For this reason, calling Session.expire_all() is not usually
needed, assuming the transaction is isolated.
See also
Session.expire()
Session.refresh()
Query.populate_existing()
This will free all internal references to the instance. Cascading will be applied according to the expunge cascade rule.
See also
Return an instance based on the given primary key identifier, or None if not found.
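A minimal sketch, assuming a hypothetical mapped class User with an integer primary key:
user = await session.get(User, 5)
if user is None:
    ...  # no row exists for that primary key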
See also
Unlike the Session.get_bind() method, this method is currently not used by this AsyncSession in any way in order to resolve
engines for requests.
Note
This method proxies directly to the Session.get_bind() method; however, it is currently not useful as an override
target, in contrast to that of the Session.get_bind() method. The example below illustrates how to implement
custom Session.get_bind() schemes that work with AsyncSession and AsyncEngine .
The pattern introduced at Custom Vertical Partitioning illustrates how to apply a custom bind-lookup scheme to a Session given a
set of Engine objects. To apply a corresponding Session.get_bind() implementation for use with an AsyncSession and
AsyncEngine objects, continue to subclass Session and apply it to AsyncSession using AsyncSession.sync_session_class . The
inner method must continue to return Engine instances, which can be acquired from an AsyncEngine using the
AsyncEngine.sync_engine attribute:
import random

from sqlalchemy.orm import Session
from sqlalchemy.sql.expression import Delete, Update

# "engines" is assumed to be a dictionary of AsyncEngine objects keyed by role
# (e.g. "leader", "other", "follower1", "follower2"); MyOtherClass is an
# example mapped class that should be routed to the "other" engine.

class RoutingSession(Session):
    def get_bind(self, mapper=None, clause=None, **kw):
        # within get_bind(), return sync engines
        if mapper and issubclass(mapper.class_, MyOtherClass):
            return engines["other"].sync_engine
        elif self._flushing or isinstance(clause, (Update, Delete)):
            return engines["leader"].sync_engine
        else:
            return engines[
                random.choice(["follower1", "follower2"])
            ].sync_engine
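As a sketch of the step described above, the routing class might then be supplied to the session factory via the sync_session_class parameter; AsyncRoutingSession and do_some_work are placeholder names:
from sqlalchemy.ext.asyncio import async_sessionmaker

# apply the sync RoutingSession to the AsyncSession instances produced here
AsyncRoutingSession = async_sessionmaker(sync_session_class=RoutingSession)

async def do_some_work():
    async with AsyncRoutingSession() as session:
        ...  # statements are routed by RoutingSession.get_bind()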
The Session.get_bind() method is called in a non-asyncio, implicitly non-blocking context in the same manner as ORM event
hooks and functions that are invoked via AsyncSession.run_sync() , so routines that wish to run SQL commands inside of
Session.get_bind() can continue to do so using blocking-style code, which will be translated to implicitly async calls at the point
of invoking IO on the database drivers.
Returns:
Return an instance based on the given primary key identifier, or raise an exception if not found.
New in version 2.0.22.
See also
Returns:
Return True if this Session has begun a nested transaction, e.g. SAVEPOINT.
See also
Session.is_active
A user-modifiable dictionary.
The initial value of this dictionary can be populated using the info argument to the Session constructor or sessionmaker constructor or factory methods. The dictionary here is always local to this Session and can be modified independently of all other
Session objects.
Changed in version 1.4: The Session no longer begins a new transaction immediately, so this attribute will be False when the
Session is first instantiated.
“partial rollback” state typically indicates that the flush process of the Session has failed, and that the Session.rollback() method must be emitted in order to fully roll back the transaction.
If this Session is not in a transaction at all, the Session will autobegin when it is first used, so in this case Session.is_active will
return True.
Otherwise, if this Session is within a transaction, and that transaction has not been rolled back internally, the Session.is_active
will also return True.
See also
“This Session’s transaction has been rolled back due to a previous exception during flush.” (or similar)
Session.in_transaction()
This method retrieves the history for each instrumented attribute on the instance and performs a comparison of the current value
to its previously committed value, if any.
It is in effect a more expensive and accurate version of checking for the given instance in the Session.dirty collection; a full test
for each attribute’s net “dirty” status is performed.
E.g.:
return session.is_modified(someobject)
Instances present in the Session.dirty collection may report False when tested with this method. This is because the object
may have received change events via attribute mutation, thus placing it in Session.dirty , but ultimately the state is the same
as that loaded from the database, resulting in no net change here.
Scalar attributes may not have recorded the previously set value when a new value was applied, if the attribute was not loaded,
or was expired, at the time the new value was received - in these cases, the attribute is assumed to have a change, even if there
is ultimately no net change against its database value. SQLAlchemy in most cases does not need the “old” value when a set
event occurs, so it skips the expense of a SQL call if the old value isn’t present, based on the assumption that an UPDATE of
the scalar value is usually needed, and in those few cases where it isn’t, is less expensive on average than issuing a defensive
SELECT.
The “old” value is fetched unconditionally upon set only if the attribute container has the active_history flag set to True. This flag is set typically for primary key attributes and scalar object references that are not a simple many-to-one. To set this flag for any arbitrary mapped column, use the active_history argument with column_property().
Parameters:
include_collections – Indicates if multivalued collections should be included in the operation. Setting this to False is a way
to detect only local-column based properties (i.e. scalar columns or many-to-one foreign keys) that would result in an
UPDATE for this instance upon flush.
method async sqlalchemy.ext.asyncio.AsyncSession.merge(instance: _O, *, load: bool = True, options:
Sequence[ORMOption] | None = None) → _O
Copy the state of a given instance into a corresponding instance within this AsyncSession .
See also
e.g.:
with session.no_autoflush:
some_object = SomeClass()
session.add(some_object)
# won't autoflush
some_object.related_thing = session.query(SomeRelated).first()
Operations that proceed within the with: block will not be subject to flushes occurring upon query access. This is useful when initializing a series of objects which involve existing database queries, where the uncompleted object should not yet be flushed.
A query will be issued to the database and all attributes will be refreshed with their current database value.
This is the async version of the Session.refresh() method. See that method for a complete description of all options.
See also
Close out the transactional resources and ORM objects used by this Session , resetting the session to its initial state.
See also
See also
method async sqlalchemy.ext.asyncio.AsyncSession.run_sync(fn: Callable[[...], _T], *arg: Any, **kw: Any) → _T
Invoke the given synchronous (i.e. not async) callable, passing a synchronous-style Session as the first argument.
This method allows traditional synchronous SQLAlchemy functions to run within the context of an asyncio application.
E.g.:
def some_business_method(session, param):
    """A synchronous function that runs under AsyncSession.run_sync()."""
    session.add(MyObject(param=param))
    session.flush()
    return "success"
This method maintains the asyncio event loop all the way through to the database connection by running the given callable in a
specially instrumented greenlet.
Tip
The provided callable is invoked inline within the asyncio event loop, and will block on traditional IO calls. IO within this callable should
only call into SQLAlchemy’s asyncio database APIs which will be properly adapted to the greenlet context.
See also
AsyncAttrs - a mixin for ORM mapped classes that provides a similar feature more succinctly on a per-attribute basis
AsyncConnection.run_sync()
See also
a ScalarResult object
See also
Returns:
an AsyncScalarResult object
See also
See also
This object is provided so that a transaction-holding object for the AsyncSession.begin() may be returned.
Members
commit() , rollback()
Class signature