SQLAlchemy 2.0 Documentation
Connection Pooling

A connection pool is a standard technique used to maintain long running connections in memory for efficient re-use, as well as to provide management for the total number of connections an application might use simultaneously.

Particularly for server-side web applications, a connection pool is the standard way to maintain a “pool” of active database connections in memory which are reused across requests.

SQLAlchemy includes several connection pool implementations which integrate with the Engine. They can also be used directly for applications that want to add pooling to an otherwise plain DBAPI approach.
Connection Pool Configuration

The Engine returned by the create_engine() function in most cases has a QueuePool integrated, pre-configured with reasonable pooling defaults. If you’re reading this section only to learn how to enable pooling - congratulations! You’re already done.

The most common QueuePool tuning parameters can be passed directly to create_engine() as keyword arguments: pool_size, max_overflow, pool_recycle and pool_timeout. For example:

    engine = create_engine("postgresql+psycopg2://me@localhost/mydb", pool_size=20, max_overflow=0)
All SQLAlchemy pool implementations have in common that none of them “pre create” connections - all implementations wait until first use before creating a connection. At that point, if no additional concurrent checkout requests for more connections are made, no additional connections are created. This is why it’s perfectly fine for create_engine() to default to using a QueuePool of size five without regard to whether or not the application really needs five connections queued up - the pool would only grow to that size if the application actually used five connections concurrently, in which case the usage of a small pool is an entirely appropriate default behavior.
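As a quick illustration of this lazy behavior - a minimal sketch, assuming a reachable PostgreSQL database at the example URL - the pool reports zero held connections until the first checkout completes:

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://me@localhost/mydb", pool_size=20)

    # no DBAPI connection has been created yet
    print(engine.pool.status())  # e.g. "... Connections in pool: 0 ..."

    with engine.connect() as conn:
        conn.execute(text("select 1"))

    # the single connection used above has been returned to the pool
    print(engine.pool.status())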
Note

The QueuePool class is not compatible with asyncio. When using create_async_engine to create an instance of AsyncEngine, the AsyncAdaptedQueuePool class, which makes use of an asyncio-compatible queue implementation, is used instead.
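For example - a minimal sketch, assuming the asyncpg driver and an example database URL - the same pool tuning keyword arguments are accepted by create_async_engine(), and AsyncAdaptedQueuePool is selected automatically:

    from sqlalchemy.ext.asyncio import create_async_engine

    # AsyncAdaptedQueuePool is used automatically for the AsyncEngine
    async_engine = create_async_engine(
        "postgresql+asyncpg://me@localhost/mydb",
        pool_size=20,
        max_overflow=0,
    )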
Switching Pool Implementations

The usual way to use a different kind of pool with create_engine() is to use the poolclass argument. This argument accepts a class imported from the sqlalchemy.pool module, and handles the details of building the pool for you. A common use case here is when connection pooling is to be disabled, which can be achieved by using the NullPool implementation:

    from sqlalchemy.pool import NullPool

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test", poolclass=NullPool)
Using a Custom Connection Function

See the section Custom DBAPI connect() arguments / on-connect routines for a rundown of the various connection customization routines.
Constructing a Pool

To use a Pool by itself, the creator function is the only argument that’s required and is passed first, followed by any additional options:

    import sqlalchemy.pool as pool
    import psycopg2


    def getconn():
        c = psycopg2.connect(user="ed", host="127.0.0.1", dbname="test")
        return c


    mypool = pool.QueuePool(getconn, max_overflow=10, pool_size=5)

DBAPI connections can then be procured from the pool using the Pool.connect() function. The return value of this method is a DBAPI connection that’s contained within a transparent proxy:

    # get a connection
    conn = mypool.connect()

    # use it
    cursor_obj = conn.cursor()
    cursor_obj.execute("select foo")
The purpose of the transparent proxy is to intercept the close() call, such that instead of the DBAPI connection being closed, it is returned to the pool:

    # "close" the connection.  Returns
    # it to the pool.
    conn.close()

The proxy also returns its contained DBAPI connection to the pool when it is garbage collected, though it’s not deterministic in Python that this occurs immediately (though it is typical with CPython). This usage is not recommended however and in particular is not supported with asyncio DBAPI drivers.
Reset On Return

The pool includes “reset on return” behavior which will call the rollback() method of the DBAPI connection when the connection is returned to the pool. This is so that any existing transactional state is removed from the connection, which includes not just uncommitted data but table and row locks as well. For most DBAPIs, the call to rollback() is inexpensive, and if the DBAPI has already completed a transaction, the method should be a no-op.
Disabling Reset on Return for non-transactional connections

For very specific cases where this rollback() is not useful, such as when using a connection that is configured for autocommit or when using a database that has no ACID capabilities such as the MyISAM engine of MySQL, the reset-on-return behavior can be disabled, which is typically done for performance reasons. This can be achieved by using the Pool.reset_on_return parameter of Pool, which is also available from create_engine() as create_engine.pool_reset_on_return, passing a value of None. This is illustrated in the example below, in conjunction with the create_engine.isolation_level parameter setting of AUTOCOMMIT:

    non_acid_engine = create_engine(
        "mysql://scott:tiger@host/db",
        pool_reset_on_return=None,
        isolation_level="AUTOCOMMIT",
    )

The above engine won’t actually perform ROLLBACK when connections are returned to the pool; since AUTOCOMMIT is enabled, the driver will also not perform any BEGIN operation.
Custom Reset-on-Return Schemes

“Reset on return” consisting of a single rollback() may not be sufficient for some use cases; in particular, applications which make use of temporary tables may wish for these tables to be automatically removed on connection checkin. Some (but notably not all) backends include features that can “reset” such tables within the scope of a database connection, which may be a desirable behavior for connection pool reset. Other server resources such as prepared statement handles and server-side statement caches may persist beyond the checkin process, which may or may not be desirable, depending on specifics. Again, some (but again not all) backends may provide for a means of resetting this state. The two SQLAlchemy included dialects which are known to have such reset schemes include Microsoft SQL Server, where an undocumented but widely known stored procedure called sp_reset_connection is often used, and PostgreSQL, which has a well-documented series of commands including DISCARD, RESET, DEALLOCATE, and UNLISTEN.
The following example illustrates how to replace reset on return with the Microsoft SQL Server sp_reset_connection stored procedure, using the PoolEvents.reset() event hook. The create_engine.pool_reset_on_return parameter is set to None so that the custom scheme can replace the default behavior completely. The custom hook implementation calls .rollback() in any case, as it’s usually important that the DBAPI’s own tracking of commit/rollback will remain consistent with the state of the transaction:

    from sqlalchemy import create_engine
    from sqlalchemy import event

    mssql_engine = create_engine(
        "mssql+pyodbc://scott:tiger^5HHH@mssql2017:1433/test?driver=ODBC+Driver+17+for+SQL+Server",
        # disable default reset-on-return scheme
        pool_reset_on_return=None,
    )


    @event.listens_for(mssql_engine, "reset")
    def _reset_mssql(dbapi_connection, connection_record, reset_state):
        if not reset_state.terminate_only:
            dbapi_connection.execute("{call sys.sp_reset_connection}")

        # so that the DBAPI itself knows that the connection has been
        # reset
        dbapi_connection.rollback()
Changed in version 2.0.0b3: Added additional state arguments to the PoolEvents.reset() event and additionally ensured the event is invoked for all “reset” occurrences, so that it’s appropriate as a place for custom “reset” handlers. Previous schemes which use the PoolEvents.checkin() handler remain usable as well.

See also

Temporary Table / Resource Reset for Connection Pooling - in the Microsoft SQL Server documentation

Temporary Table / Resource Reset for Connection Pooling - in the PostgreSQL documentation
Logging reset-on-return events

Logging for pool events including reset on return can be set using the logging.DEBUG log level along with the sqlalchemy.pool logger, or by setting create_engine.echo_pool to "debug" when using create_engine():

    >>> from sqlalchemy import create_engine
    >>> engine = create_engine("postgresql://scott:tiger@localhost/test", echo_pool="debug")

The above pool will show verbose logging including reset on return:

    >>> c1 = engine.connect()
    DEBUG sqlalchemy.pool.impl.QueuePool Created new connection <connection object ...>
    DEBUG sqlalchemy.pool.impl.QueuePool Connection <connection object ...> checked out from pool
    >>> c1.close()
    DEBUG sqlalchemy.pool.impl.QueuePool Connection <connection object ...> being returned to pool
    DEBUG sqlalchemy.pool.impl.QueuePool Connection <connection object ...> rollback-on-return
Pool Events

Connection pools support an event interface that allows hooks to execute upon first connect, upon each new connection, and upon checkout and checkin of connections. See PoolEvents for details.
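As a brief illustration - a minimal sketch, assuming an engine object already created via create_engine() - listeners for the hooks described in PoolEvents can be attached to the Engine (or to a Pool instance directly):

    from sqlalchemy import event


    @event.listens_for(engine, "connect")
    def on_connect(dbapi_connection, connection_record):
        # invoked once for each new DBAPI connection created by the pool
        print("new DBAPI connection established")


    @event.listens_for(engine, "checkout")
    def on_checkout(dbapi_connection, connection_record, connection_proxy):
        # invoked each time a connection is retrieved from the pool
        print("connection checked out from pool")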
Dealing with Disconnects

The connection pool has the ability to refresh individual connections as well as its entire set of connections, setting the previously pooled connections as “invalid”. A common use case is to allow the connection pool to gracefully recover when the database server has been restarted, and all previously established connections are no longer functional. There are two approaches to this.
Disconnect Handling - Pessimistic

The pessimistic approach refers to emitting a test statement on the SQL connection at the start of each connection pool checkout, to test that the database connection is still viable. The implementation is dialect-specific, and makes use of either a DBAPI-specific ping method, or a simple SQL statement like “SELECT 1”, in order to test the connection for liveness.

The approach adds a small bit of overhead to the connection checkout process, however it is otherwise the most simple and reliable approach to completely eliminating database errors due to stale pooled connections. The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.

Pessimistic testing of connections upon checkout is achievable by using the Pool.pre_ping argument, available from create_engine() via the create_engine.pool_pre_ping argument:

    engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)
The “pre ping” feature operates on a per-dialect basis either by invoking a DBAPI-specific “ping” method, or if not available will emit SQL equivalent to “SELECT 1”, catching any errors and detecting the error as a “disconnect” situation. If the ping / error check determines that the connection is not usable, the connection will be immediately recycled, and all other pooled connections older than the current time are invalidated, so that the next time they are checked out, they will also be recycled before use.

If the database is still not available when “pre ping” runs, then the initial connect will fail and the error for failure to connect will be propagated normally. In the uncommon situation that the database is available for connections, but is not able to respond to a “ping”, the “pre_ping” will try up to three times before giving up, propagating the database error last received.

It is critical to note that the pre-ping approach does not accommodate connections dropped in the middle of transactions or other SQL operations. If the database becomes unavailable while a transaction is in progress, the transaction will be lost and the database error will be raised. While the Connection object will detect a “disconnect” situation and recycle the connection as well as invalidate the rest of the connection pool when this condition occurs, the individual operation where the exception was raised will be lost, and it’s up to the application to either abandon the operation, or retry the whole transaction again. If the engine is configured using DBAPI-level autocommit connections, as described at Setting Transaction Isolation Levels including DBAPI Autocommit, a connection may be reconnected transparently mid-operation using events. See the section How Do I “Retry” a Statement Execution Automatically? for an example.

For dialects that make use of “SELECT 1” and catch errors in order to detect disconnects, the disconnection test may be augmented for new backend-specific error messages using the DialectEvents.handle_error() hook.
Custom / Legacy Pessimistic Ping

Before create_engine.pool_pre_ping was added, the “pre-ping” approach was historically performed manually using the ConnectionEvents.engine_connect() engine event. The most common recipe for this is below, for reference purposes in case an application is already using such a recipe, or special behaviors are needed:

    from sqlalchemy import exc
    from sqlalchemy import event
    from sqlalchemy import select

    some_engine = create_engine(...)


    @event.listens_for(some_engine, "engine_connect")
    def ping_connection(connection, branch):
        if branch:
            # this parameter is always False as of SQLAlchemy 2.0,
            # but is still accepted by the event hook.  In 1.x versions
            # of SQLAlchemy, "branched" connections should be skipped.
            return

        try:
            # run a SELECT 1.  use a core select() so that
            # the SELECT of a scalar value without a table is
            # appropriately formatted for the backend
            connection.scalar(select(1))
        except exc.DBAPIError as err:
            # catch SQLAlchemy's DBAPIError, which is a wrapper
            # for the DBAPI's exception.  It includes a .connection_invalidated
            # attribute which specifies if this connection is a "disconnect"
            # condition, which is based on inspection of the original exception
            # by the dialect in use.
            if err.connection_invalidated:
                # run the same SELECT again - the connection will re-validate
                # itself and establish a new connection.  The disconnect detection
                # here also causes the whole connection pool to be invalidated
                # so that all stale connections are discarded.
                connection.scalar(select(1))
            else:
                raise
The above recipe has the advantage that we are making use of SQLAlchemy’s facilities for detecting those DBAPI exceptions that are known to indicate a “disconnect” situation, as well as the Engine object’s ability to correctly invalidate the current connection pool when this condition occurs, allowing the current Connection to re-validate onto a new DBAPI connection.
Disconnect Handling - Optimistic

When pessimistic handling is not employed, as well as when the database is shutdown and/or restarted in the middle of a connection’s period of use within a transaction, the other approach to dealing with stale / closed connections is to let SQLAlchemy handle disconnects as they occur, at which point all connections in the pool are invalidated, meaning they are assumed to be stale and will be refreshed upon next checkout. This behavior assumes the Pool is used in conjunction with an Engine. The Engine has logic which can detect disconnection events and refresh the pool automatically.

When the Connection attempts to use a DBAPI connection, and an exception is raised that corresponds to a “disconnect” event, the connection is invalidated. The Connection then calls the Pool.recreate() method, effectively invalidating all connections not currently checked out so that they are replaced with new ones upon next checkout. This flow is illustrated by the code example below:

    from sqlalchemy import create_engine, exc, text

    e = create_engine(...)
    c = e.connect()

    try:
        # suppose the database has been restarted.
        c.execute(text("SELECT * FROM table"))
        c.close()
    except exc.DBAPIError as err:
        # an exception is raised, Connection is invalidated.
        if err.connection_invalidated:
            print("Connection was invalidated!")

    # after the invalidate event, a new connection
    # starts with a new Pool
    c = e.connect()
    c.execute(text("SELECT * FROM table"))

The above example illustrates that no special intervention is needed to refresh the pool, which continues normally after a disconnection event is detected. However, one database exception is raised per each connection that is in use while the database unavailability event occurred. In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is “optimistic” in that frequent database restarts are not anticipated.
Setting Pool Recycle

An additional setting that can augment the “optimistic” approach is to set the pool recycle parameter. This parameter prevents the pool from using a particular connection that has passed a certain age, and is appropriate for database backends such as MySQL that automatically close connections that have been stale after a particular period of time:

    from sqlalchemy import create_engine

    e = create_engine("mysql+mysqldb://scott:tiger@localhost/test", pool_recycle=3600)

Above, any DBAPI connection that has been open for more than one hour will be invalidated and replaced, upon next checkout. Note that the invalidation only occurs during checkout - not on any connections that are held in a checked out state. pool_recycle is a function of the Pool itself, independent of whether or not an Engine is in use.
More on Invalidation

The Pool provides “connection invalidation” services which allow both explicit invalidation of a connection as well as automatic invalidation in response to conditions that are determined to render a connection unusable.

“Invalidation” means that a particular DBAPI connection is removed from the pool and discarded. The .close() method is called on this connection unless it is known that the connection is already closed; if this method fails, the exception is logged but the operation still proceeds.
When using an Engine, the Connection.invalidate() method is the usual entrypoint to explicit invalidation. Other conditions by which a DBAPI connection might be invalidated include:

- A DBAPI exception such as OperationalError, raised when a method like connection.execute() is called, is detected as indicating a so-called “disconnect” condition. As the Python DBAPI provides no standard system for determining the nature of an exception, all SQLAlchemy dialects include a system called is_disconnect() which will examine the contents of an exception object, including the string message and any potential error codes included with it, in order to determine if this exception indicates that the connection is no longer usable. If this is the case, the _ConnectionFairy.invalidate() method is called and the DBAPI connection is then discarded.
- When the connection is returned to the pool, and calling the connection.rollback() or connection.commit() methods, as dictated by the pool’s “reset on return” behavior, throws an exception. A final attempt at calling .close() on the connection will be made, and it is then discarded.
- When a listener implementing PoolEvents.checkout() raises the DisconnectionError exception, indicating that the connection won’t be usable and a new connection attempt needs to be made.
All invalidations which occur will invoke the PoolEvents.invalidate() event.
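A minimal sketch of explicit invalidation, assuming an existing engine object; the PoolEvents.invalidate() hook fires for both explicit and automatic invalidations:

    from sqlalchemy import event


    @event.listens_for(engine, "invalidate")
    def on_invalidate(dbapi_connection, connection_record, exception):
        # fires whenever a DBAPI connection is marked as invalidated
        print(f"connection invalidated; reason: {exception!r}")


    with engine.connect() as conn:
        # explicitly mark the underlying DBAPI connection as invalid; it is
        # discarded rather than returned to the pool when the block ends
        conn.invalidate()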
Supporting new database error codes for disconnect scenarios

SQLAlchemy dialects each include a routine called is_disconnect() that is invoked whenever a DBAPI exception is encountered. The DBAPI exception object is passed to this method, where dialect-specific heuristics will then determine if the error code received indicates that the database connection has been “disconnected”, or is in an otherwise unusable state which indicates it should be recycled. The heuristics applied here may be customized using the DialectEvents.handle_error() event hook, which is typically established via the owning Engine object. Using this hook, all errors which occur are delivered passing along a contextual object known as ExceptionContext. Custom event hooks may control whether or not a particular error should be considered a “disconnect” situation or not, as well as if this disconnect should cause the entire connection pool to be invalidated or not.

For example, to add support to consider the Oracle Database driver error codes DPY-1001 and DPY-4011 to be handled as disconnect codes, apply an event handler to the engine after creation:

    import re

    from sqlalchemy import create_engine
    from sqlalchemy import event
    from sqlalchemy.engine import ExceptionContext

    engine = create_engine(
        "oracle+oracledb://scott:tiger@localhost:1521?service_name=freepdb1"
    )


    @event.listens_for(engine, "handle_error")
    def handle_exception(context: ExceptionContext) -> None:
        if not context.is_disconnect and re.match(
            r"^(?:DPY-1001|DPY-4011)", str(context.original_exception)
        ):
            context.is_disconnect = True

        return None

The above error processing function will be invoked for all Oracle Database errors raised, including those caught when using the pool pre ping feature for those backends that rely upon disconnect error handling (new in 2.0).
Using FIFO vs. LIFO

The QueuePool class features a flag called QueuePool.use_lifo, which can also be accessed from create_engine() via the flag create_engine.pool_use_lifo. Setting this flag to True causes the pool’s “queue” behavior to instead be that of a “stack”, e.g. the last connection to be returned to the pool is the first one to be used on the next request. In contrast to the pool’s long-standing behavior of first-in-first-out, which produces a round-robin effect of using each connection in the pool in series, lifo mode allows excess connections to remain idle in the pool, allowing server-side timeout schemes to close these connections out. The difference between FIFO and LIFO is basically whether or not it’s desirable for the pool to keep a full set of connections ready to go even during idle periods:

    engine = create_engine("postgresql://", pool_use_lifo=True, pool_pre_ping=True)

Above, we also make use of the create_engine.pool_pre_ping flag so that connections which are closed from the server side are gracefully handled by the connection pool and replaced with a new connection.

Note that the flag only applies to QueuePool use.

Added in version 1.3.
Using Connection Pools with Multiprocessing or os.fork()

It’s critical that, when using a connection pool and by extension when using an Engine created via create_engine(), the pooled connections are not shared to a forked process. TCP connections are represented as file descriptors, which usually work across process boundaries, meaning this will cause concurrent access to the file descriptor on behalf of two or more entirely independent Python interpreter states.

Depending on specifics of the driver and OS, the issues that arise here range from non-working connections to socket connections that are used by multiple processes concurrently, leading to broken messaging (the latter case is typically the most common).

The SQLAlchemy Engine object refers to a connection pool of existing database connections. So when this object is replicated to a child process, the goal is to ensure that no database connections are carried over. There are four general approaches to this:
1. Disable pooling using NullPool. This is the most simplistic, one shot system that prevents the Engine from using any connection more than once:

       from sqlalchemy.pool import NullPool

       engine = create_engine("mysql+mysqldb://user:pass@host/dbname", poolclass=NullPool)

2. Call Engine.dispose() on any given Engine, passing the Engine.dispose.close parameter with a value of False, within the initialize phase of the child process. This is so that the new process will not touch any of the parent process’ connections and will instead start with new connections. This is the recommended approach:

       from multiprocessing import Pool

       engine = create_engine("mysql+mysqldb://user:pass@host/dbname")


       def run_in_process(some_data_record):
           with engine.connect() as conn:
               conn.execute(text("..."))


       def initializer():
           """ensure the parent proc's database connections are not touched
           in the new connection pool"""
           engine.dispose(close=False)


       with Pool(10, initializer=initializer) as p:
           p.map(run_in_process, data)

   Added in version 1.4.33: Added the Engine.dispose.close parameter to allow the replacement of a connection pool in a child process without interfering with the connections used by the parent process.

3. Call Engine.dispose() directly before the child process is created. This will also cause the child process to start with a new connection pool, while ensuring the parent connections are not transferred to the child process:

       engine = create_engine("mysql://user:pass@host/dbname")


       def run_in_process():
           with engine.connect() as conn:
               conn.execute(text("..."))


       # before process starts, ensure engine.dispose() is called
       engine.dispose()
       p = Process(target=run_in_process)
       p.start()

4. An event handler can be applied to the connection pool that tests for connections being shared across process boundaries, and invalidates them:

       from sqlalchemy import event
       from sqlalchemy import exc
       import os

       engine = create_engine("...")


       @event.listens_for(engine, "connect")
       def connect(dbapi_connection, connection_record):
           connection_record.info["pid"] = os.getpid()


       @event.listens_for(engine, "checkout")
       def checkout(dbapi_connection, connection_record, connection_proxy):
           pid = os.getpid()
           if connection_record.info["pid"] != pid:
               connection_record.dbapi_connection = connection_proxy.dbapi_connection = None
               raise exc.DisconnectionError(
                   "Connection record belongs to pid %s, "
                   "attempting to check out in pid %s"
                   % (connection_record.info["pid"], pid)
               )

   Above, we use an approach similar to that described in Disconnect Handling - Pessimistic to treat a DBAPI connection that originated in a different parent process as an “invalid” connection, coercing the pool to recycle the connection record to make a new connection.
The above strategies will accommodate the case of an Engine being shared among processes. The above steps alone are not sufficient for the case of sharing a specific Connection over a process boundary; prefer to keep the scope of a particular Connection local to a single process (and thread). It’s additionally not supported to share any kind of ongoing transactional state directly across a process boundary, such as an ORM Session object that’s begun a transaction and references active Connection instances; again prefer to create new Session objects in new processes.
Using a pool instance directly

A pool implementation can be used directly without an engine. This could be used in applications that just wish to use the pool behavior without all other SQLAlchemy features. In the example below the default pool for the MySQLdb dialect is obtained using create_pool_from_url():

    from sqlalchemy import create_pool_from_url

    my_pool = create_pool_from_url(
        "mysql+mysqldb://", max_overflow=5, pool_size=5, pre_ping=True
    )

    con = my_pool.connect()
    # use the connection
    ...
    # then close it
    con.close()

If the type of pool to create is not specified, the default one for the dialect will be used. To specify it directly the poolclass argument can be used, like in the following example:

    from sqlalchemy import create_pool_from_url
    from sqlalchemy import NullPool

    my_pool = create_pool_from_url("mysql+mysqldb://", poolclass=NullPool)
API Documentation - Available Pool Implementations

Object Name | Description
---|---
_ConnectionFairy | Proxies a DBAPI connection and provides return-on-dereference support.
_ConnectionRecord | Maintains a position in a connection pool which references a pooled connection.
AssertionPool | A Pool that allows at most one checked out connection at any given time.
AsyncAdaptedQueuePool | An asyncio-compatible version of QueuePool.
ConnectionPoolEntry | Interface for the object that maintains an individual database connection on behalf of a Pool instance.
ManagesConnection | Common base for the two connection-management interfaces PoolProxiedConnection and ConnectionPoolEntry.
NullPool | A Pool which does not pool connections.
Pool | Abstract base class for connection pools.
PoolProxiedConnection | A connection-like adapter for a PEP 249 DBAPI connection, which includes additional methods specific to the Pool implementation.
QueuePool | A Pool that imposes a limit on the number of open connections.
SingletonThreadPool | A Pool that maintains one connection per thread.
StaticPool | A Pool of exactly one connection, used for all requests.
- class sqlalchemy.pool.Pool

  Abstract base class for connection pools.

  Class signature

  class sqlalchemy.pool.Pool (sqlalchemy.log.Identified, sqlalchemy.event.registry.EventTarget)

  - method sqlalchemy.pool.Pool.__init__(creator: _CreatorFnType | _CreatorWRecFnType, recycle: int = -1, echo: log._EchoFlagType = None, logging_name: str | None = None, reset_on_return: _ResetStyleArgType = True, events: List[Tuple[_ListenerFnType, str]] | None = None, dialect: _ConnDialect | Dialect | None = None, pre_ping: bool = False, _dispatch: _DispatchCommon[Pool] | None = None)

    Construct a Pool.

    - Parameters:

      - creator – a callable function that returns a DB-API connection object. The function will be called with parameters.

      - recycle – If set to a value other than -1, number of seconds between connection recycling, which means upon checkout, if this timeout is surpassed the connection will be closed and replaced with a newly opened connection. Defaults to -1.

      - logging_name – String identifier which will be used within the “name” field of logging records generated within the “sqlalchemy.pool” logger. Defaults to a hexstring of the object’s id.

      - echo – if True, the connection pool will log informational output such as when connections are invalidated as well as when connections are recycled to the default log handler, which defaults to sys.stdout for output. If set to the string "debug", the logging will include pool checkouts and checkins.

        The Pool.echo parameter can also be set from the create_engine() call by using the create_engine.echo_pool parameter.

        See also

        Configuring Logging - further detail on how to configure logging.

      - reset_on_return – Determine steps to take on connections as they are returned to the pool, which were not otherwise handled by a Connection. Available from create_engine() via the create_engine.pool_reset_on_return parameter.

        Pool.reset_on_return can have any of these values:

        - "rollback" - call rollback() on the connection, to release locks and transaction resources. This is the default value. The vast majority of use cases should leave this value set.
        - "commit" - call commit() on the connection, to release locks and transaction resources. A commit here may be desirable for databases that cache query plans if a commit is emitted, such as Microsoft SQL Server. However, this value is more dangerous than ‘rollback’ because any data changes present on the transaction are committed unconditionally.
        - None - don’t do anything on the connection. This setting may be appropriate if the database / DBAPI works in pure “autocommit” mode at all times, or if a custom reset handler is established using the PoolEvents.reset() event handler.
        - True - same as ‘rollback’, this is here for backwards compatibility.
        - False - same as None, this is here for backwards compatibility.

        For further customization of reset on return, the PoolEvents.reset() event hook may be used which can perform any connection activity desired on reset.

      - events – a list of 2-tuples, each of the form (callable, target) which will be passed to listen() upon construction. Provided here so that event listeners can be assigned via create_engine() before dialect-level listeners are applied.

      - dialect – a Dialect that will handle the job of calling rollback(), close(), or commit() on DBAPI connections. If omitted, a built-in “stub” dialect is used. Applications that make use of create_engine() should not use this parameter as it is handled by the engine creation strategy.

      - pre_ping – if True, the pool will emit a “ping” (typically “SELECT 1”, but is dialect-specific) on the connection upon checkout, to test if the connection is alive or not. If not, the connection is transparently re-connected and upon success, all other pooled connections established prior to that timestamp are invalidated. Requires that a dialect is passed as well to interpret the disconnection error.

        Added in version 1.2.
  - method sqlalchemy.pool.Pool.connect() → PoolProxiedConnection

    Return a DBAPI connection from the pool.

    The connection is instrumented such that when its close() method is called, the connection will be returned to the pool.

  - method sqlalchemy.pool.Pool.dispose() → None

    Dispose of this pool.

    This method leaves the possibility of checked-out connections remaining open, as it only affects connections that are idle in the pool.

  - method sqlalchemy.pool.Pool.recreate() → Pool

    Return a new Pool, of the same class as this one and configured with identical creation arguments.

    This method is used in conjunction with dispose() to close out an entire Pool and create a new one in its place.

  - method sqlalchemy.pool.Pool.status() → str

    Returns a brief description of the state of this pool.
- class sqlalchemy.pool.QueuePool

  A Pool that imposes a limit on the number of open connections.

  QueuePool is the default pooling implementation used for all Engine objects other than SQLite with a :memory: database.

  The QueuePool class is not compatible with asyncio and create_async_engine(). The AsyncAdaptedQueuePool class is used automatically when using create_async_engine(), if no other kind of pool is specified.

  - method sqlalchemy.pool.QueuePool.__init__(creator: _CreatorFnType | _CreatorWRecFnType, pool_size: int = 5, max_overflow: int = 10, timeout: float = 30.0, use_lifo: bool = False, **kw: Any)

    Construct a QueuePool.
    - Parameters:

      - creator – a callable function that returns a DB-API connection object, same as that of Pool.creator.

      - pool_size – The size of the pool to be maintained, defaults to 5. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain. pool_size can be set to 0 to indicate no size limit; to disable pooling, use a NullPool instead.

      - max_overflow – The maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of “sleeping” connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections. Defaults to 10.

      - timeout – The number of seconds to wait before giving up on returning a connection. Defaults to 30.0. This can be a float but is subject to the limitations of Python time functions which may not be reliable in the tens of milliseconds.

      - use_lifo – use LIFO (last-in-first-out) when retrieving connections instead of FIFO (first-in-first-out). Using LIFO, a server-side timeout scheme can reduce the number of connections used during non-peak periods of use. When planning for server-side timeouts, ensure that a recycle or pre-ping strategy is in use to gracefully handle stale connections.

        Added in version 1.3.

      - **kw – Other keyword arguments including Pool.recycle, Pool.echo, Pool.reset_on_return and others are passed to the Pool constructor.
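A minimal sketch of constructing a QueuePool directly with these parameters, assuming a psycopg2 connection function as in the earlier example; recycle is passed through to the Pool constructor via **kw:

    import psycopg2
    from sqlalchemy.pool import QueuePool


    def getconn():
        return psycopg2.connect(user="ed", host="127.0.0.1", dbname="test")


    # at most 5 persistent connections, up to 10 more during bursts, waiting
    # up to 10 seconds for a free connection; connections older than one
    # hour are recycled upon checkout
    mypool = QueuePool(getconn, pool_size=5, max_overflow=10, timeout=10.0, recycle=3600)

    conn = mypool.connect()
    conn.close()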
  - method sqlalchemy.pool.QueuePool.dispose() → None

    Dispose of this pool.

    This method leaves the possibility of checked-out connections remaining open, as it only affects connections that are idle in the pool.

  - method sqlalchemy.pool.QueuePool.recreate() → QueuePool

    Return a new Pool, of the same class as this one and configured with identical creation arguments.

    This method is used in conjunction with dispose() to close out an entire Pool and create a new one in its place.

  - method sqlalchemy.pool.QueuePool.status() → str

    Returns a brief description of the state of this pool.
- class sqlalchemy.pool.AsyncAdaptedQueuePool

  An asyncio-compatible version of QueuePool.

  This pool is used by default when using AsyncEngine engines that were generated from create_async_engine(). It uses an asyncio-compatible queue implementation that does not use threading.Lock.

  The arguments and operation of AsyncAdaptedQueuePool are otherwise identical to that of QueuePool.

  Class signature

  class sqlalchemy.pool.AsyncAdaptedQueuePool (sqlalchemy.pool.impl.QueuePool)
- class sqlalchemy.pool.SingletonThreadPool

  A Pool that maintains one connection per thread.

  Maintains one connection per each thread, never moving a connection to a thread other than the one which it was created in.

  Warning

  The SingletonThreadPool will call .close() on arbitrary connections that exist beyond the size setting of pool_size, e.g. if more unique thread identities than what pool_size states are used. This cleanup is non-deterministic and not sensitive to whether or not the connections linked to those thread identities are currently in use.

  SingletonThreadPool may be improved in a future release, however in its current status it is generally used only for test scenarios using a SQLite :memory: database and is not recommended for production use.

  The SingletonThreadPool class is not compatible with asyncio and create_async_engine().

  Options are the same as those of Pool, as well as:

  - Parameters:

    - pool_size – The number of threads in which to maintain connections at once. Defaults to five.

  SingletonThreadPool is used by the SQLite dialect automatically when a memory-based database is used. See SQLite.

  Class signature

  class sqlalchemy.pool.SingletonThreadPool (sqlalchemy.pool.base.Pool)
sqlalchemy.pool.SingletonThreadPool.
connect()→PoolProxiedConnection¶ Return a DBAPI connection from the pool.
The connection is instrumented such that when its
close()
method is called, the connection will be returned tothe pool.
- method
sqlalchemy.pool.SingletonThreadPool.
dispose()→None¶ Dispose of this pool.
- method
sqlalchemy.pool.SingletonThreadPool.
recreate()→SingletonThreadPool¶ Return a new
Pool
, of the same class as this oneand configured with identical creation arguments.This method is used in conjunction with
dispose()
to close out an entirePool
and create a new one inits place.
- method
sqlalchemy.pool.SingletonThreadPool.
status()→str¶ Returns a brief description of the state of this pool.
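As noted above, this pool is selected automatically by the SQLite dialect for :memory: databases; a minimal sketch, assuming the standard library sqlite3 driver:

    from sqlalchemy import create_engine, text

    # an in-memory SQLite database; the dialect uses SingletonThreadPool
    # automatically for this kind of URL
    memory_engine = create_engine("sqlite://")

    with memory_engine.connect() as conn:
        conn.execute(text("select 1"))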
- class sqlalchemy.pool.AssertionPool

  A Pool that allows at most one checked out connection at any given time.

  This will raise an exception if more than one connection is checked out at a time. Useful for debugging code that is using more connections than desired.

  The AssertionPool class is compatible with asyncio and create_async_engine().

  Class signature

  class sqlalchemy.pool.AssertionPool (sqlalchemy.pool.base.Pool)

  - method sqlalchemy.pool.AssertionPool.dispose() → None

    Dispose of this pool.

    This method leaves the possibility of checked-out connections remaining open, as it only affects connections that are idle in the pool.

  - method sqlalchemy.pool.AssertionPool.recreate() → AssertionPool

    Return a new Pool, of the same class as this one and configured with identical creation arguments.

    This method is used in conjunction with dispose() to close out an entire Pool and create a new one in its place.

  - method sqlalchemy.pool.AssertionPool.status() → str

    Returns a brief description of the state of this pool.
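A minimal sketch of using AssertionPool to catch code that holds more connections than expected, assuming a local SQLite file database purely for illustration:

    from sqlalchemy import create_engine
    from sqlalchemy.pool import AssertionPool

    engine = create_engine("sqlite:///example.db", poolclass=AssertionPool)

    c1 = engine.connect()
    try:
        # a second concurrent checkout raises an error, pointing at the
        # code path that is using more connections than desired
        c2 = engine.connect()
    except Exception as err:
        print(f"second checkout rejected: {err}")
    finally:
        c1.close()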
- class sqlalchemy.pool.NullPool

  A Pool which does not pool connections.

  Instead it literally opens and closes the underlying DB-API connection per each connection open/close.

  Reconnect-related functions such as recycle and connection invalidation are not supported by this Pool implementation, since no connections are held persistently.

  The NullPool class is compatible with asyncio and create_async_engine().

  - method sqlalchemy.pool.NullPool.dispose() → None

    Dispose of this pool.

    This method leaves the possibility of checked-out connections remaining open, as it only affects connections that are idle in the pool.

  - method sqlalchemy.pool.NullPool.recreate() → NullPool

    Return a new Pool, of the same class as this one and configured with identical creation arguments.

    This method is used in conjunction with dispose() to close out an entire Pool and create a new one in its place.

  - method sqlalchemy.pool.NullPool.status() → str

    Returns a brief description of the state of this pool.
- class sqlalchemy.pool.StaticPool

  A Pool of exactly one connection, used for all requests.

  Reconnect-related functions such as recycle and connection invalidation (which is also used to support auto-reconnect) are only partially supported right now and may not yield good results.

  The StaticPool class is compatible with asyncio and create_async_engine().

  - method sqlalchemy.pool.StaticPool.dispose() → None

    Dispose of this pool.

    This method leaves the possibility of checked-out connections remaining open, as it only affects connections that are idle in the pool.

  - method sqlalchemy.pool.StaticPool.recreate() → StaticPool

    Return a new Pool, of the same class as this one and configured with identical creation arguments.

    This method is used in conjunction with dispose() to close out an entire Pool and create a new one in its place.

  - method sqlalchemy.pool.StaticPool.status() → str

    Returns a brief description of the state of this pool.
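One common use of StaticPool - a sketch based on the general SQLite testing recipe rather than anything stated above - is sharing a single in-memory SQLite database across threads in a test suite; the check_same_thread flag here is specific to the sqlite3 driver:

    from sqlalchemy import create_engine, text
    from sqlalchemy.pool import StaticPool

    # every checkout returns the same underlying DBAPI connection, so all
    # threads see the same in-memory database
    engine = create_engine(
        "sqlite://",
        poolclass=StaticPool,
        connect_args={"check_same_thread": False},
    )

    with engine.connect() as conn:
        conn.execute(text("select 1"))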
- class sqlalchemy.pool.ManagesConnection

  Common base for the two connection-management interfaces PoolProxiedConnection and ConnectionPoolEntry.

  These two objects are typically exposed in the public facing API via the connection pool event hooks, documented at PoolEvents.

  Added in version 2.0.

  - attribute sqlalchemy.pool.ManagesConnection.dbapi_connection: DBAPIConnection | None

    A reference to the actual DBAPI connection being tracked.

    This is a PEP 249-compliant object that for traditional sync-style dialects is provided by the third-party DBAPI implementation in use. For asyncio dialects, the implementation is typically an adapter object provided by the SQLAlchemy dialect itself; the underlying asyncio object is available via the ManagesConnection.driver_connection attribute.

    SQLAlchemy’s interface for the DBAPI connection is based on the DBAPIConnection protocol object.

  - attribute sqlalchemy.pool.ManagesConnection.driver_connection: Any | None

    The “driver level” connection object as used by the Python DBAPI or database driver.

    For traditional PEP 249 DBAPI implementations, this object will be the same object as that of ManagesConnection.dbapi_connection. For an asyncio database driver, this will be the ultimate “connection” object used by that driver, such as the asyncpg.Connection object which will not have standard pep-249 methods.

    Added in version 1.4.24.

  - attribute sqlalchemy.pool.ManagesConnection.info

    Info dictionary associated with the underlying DBAPI connection referred to by this ManagesConnection instance, allowing user-defined data to be associated with the connection.

    The data in this dictionary is persistent for the lifespan of the DBAPI connection itself, including across pool checkins and checkouts. When the connection is invalidated and replaced with a new one, this dictionary is cleared.

    For a PoolProxiedConnection instance that’s not associated with a ConnectionPoolEntry, such as if it were detached, the attribute returns a dictionary that is local to that ConnectionPoolEntry. Therefore the ManagesConnection.info attribute will always provide a Python dictionary.

  - method sqlalchemy.pool.ManagesConnection.invalidate(e: BaseException | None = None, soft: bool = False) → None

    Mark the managed connection as invalidated.

  - attribute sqlalchemy.pool.ManagesConnection.record_info

    Persistent info dictionary associated with this ManagesConnection.

    Unlike the ManagesConnection.info dictionary, the lifespan of this dictionary is that of the ConnectionPoolEntry which owns it; therefore this dictionary will persist across reconnects and connection invalidation for a particular entry in the connection pool.

    For a PoolProxiedConnection instance that’s not associated with a ConnectionPoolEntry, such as if it were detached, the attribute returns None. Contrast to the ManagesConnection.info dictionary which is never None.
- class sqlalchemy.pool.ConnectionPoolEntry

  Interface for the object that maintains an individual database connection on behalf of a Pool instance.

  The ConnectionPoolEntry object represents the long term maintenance of a particular connection for a pool, including expiring or invalidating that connection to have it replaced with a new one, which will continue to be maintained by that same ConnectionPoolEntry instance. Compared to PoolProxiedConnection, which is the short-term, per-checkout connection manager, this object lasts for the lifespan of a particular “slot” within a connection pool.

  The ConnectionPoolEntry object is mostly visible to public-facing API code when it is delivered to connection pool event hooks, such as PoolEvents.connect() and PoolEvents.checkout().

  Added in version 2.0: ConnectionPoolEntry provides the public facing interface for the _ConnectionRecord internal class.

  Class signature

  class sqlalchemy.pool.ConnectionPoolEntry (sqlalchemy.pool.base.ManagesConnection)

  - method sqlalchemy.pool.ConnectionPoolEntry.close() → None

    Close the DBAPI connection managed by this connection pool entry.

  - attribute sqlalchemy.pool.ConnectionPoolEntry.dbapi_connection: DBAPIConnection | None

    A reference to the actual DBAPI connection being tracked.

    This is a PEP 249-compliant object that for traditional sync-style dialects is provided by the third-party DBAPI implementation in use. For asyncio dialects, the implementation is typically an adapter object provided by the SQLAlchemy dialect itself; the underlying asyncio object is available via the ManagesConnection.driver_connection attribute.

    SQLAlchemy’s interface for the DBAPI connection is based on the DBAPIConnection protocol object.

  - attribute sqlalchemy.pool.ConnectionPoolEntry.driver_connection: Any | None

    The “driver level” connection object as used by the Python DBAPI or database driver.

    For traditional PEP 249 DBAPI implementations, this object will be the same object as that of ManagesConnection.dbapi_connection. For an asyncio database driver, this will be the ultimate “connection” object used by that driver, such as the asyncpg.Connection object which will not have standard pep-249 methods.

    Added in version 1.4.24.

  - attribute sqlalchemy.pool.ConnectionPoolEntry.in_use

    Return True if the connection is currently checked out.

  - attribute sqlalchemy.pool.ConnectionPoolEntry.info

    inherited from the ManagesConnection.info attribute of ManagesConnection

    Info dictionary associated with the underlying DBAPI connection referred to by this ManagesConnection instance, allowing user-defined data to be associated with the connection.

    The data in this dictionary is persistent for the lifespan of the DBAPI connection itself, including across pool checkins and checkouts. When the connection is invalidated and replaced with a new one, this dictionary is cleared.

    For a PoolProxiedConnection instance that’s not associated with a ConnectionPoolEntry, such as if it were detached, the attribute returns a dictionary that is local to that ConnectionPoolEntry. Therefore the ManagesConnection.info attribute will always provide a Python dictionary.

  - method sqlalchemy.pool.ConnectionPoolEntry.invalidate(e: BaseException | None = None, soft: bool = False) → None

    inherited from the ManagesConnection.invalidate() method of ManagesConnection

    Mark the managed connection as invalidated.

  - attribute sqlalchemy.pool.ConnectionPoolEntry.record_info

    inherited from the ManagesConnection.record_info attribute of ManagesConnection

    Persistent info dictionary associated with this ManagesConnection.

    Unlike the ManagesConnection.info dictionary, the lifespan of this dictionary is that of the ConnectionPoolEntry which owns it; therefore this dictionary will persist across reconnects and connection invalidation for a particular entry in the connection pool.

    For a PoolProxiedConnection instance that’s not associated with a ConnectionPoolEntry, such as if it were detached, the attribute returns None. Contrast to the ManagesConnection.info dictionary which is never None.
- class sqlalchemy.pool.PoolProxiedConnection

  A connection-like adapter for a PEP 249 DBAPI connection, which includes additional methods specific to the Pool implementation.

  PoolProxiedConnection is the public-facing interface for the internal _ConnectionFairy implementation object; users familiar with _ConnectionFairy can consider this object to be equivalent.

  Added in version 2.0: PoolProxiedConnection provides the public-facing interface for the _ConnectionFairy internal class.

  Members

  close(), dbapi_connection, detach(), driver_connection, info, invalidate(), is_detached, is_valid, record_info

  Class signature

  class sqlalchemy.pool.PoolProxiedConnection (sqlalchemy.pool.base.ManagesConnection)

  - method sqlalchemy.pool.PoolProxiedConnection.close() → None

    Release this connection back to the pool.

    The PoolProxiedConnection.close() method shadows the PEP 249 .close() method, altering its behavior to instead release the proxied connection back to the connection pool.

    Upon release to the pool, whether the connection stays “opened” and pooled in the Python process, versus actually closed out and removed from the Python process, is based on the pool implementation in use and its configuration and current state.

  - attribute sqlalchemy.pool.PoolProxiedConnection.dbapi_connection: DBAPIConnection | None

    A reference to the actual DBAPI connection being tracked.

    This is a PEP 249-compliant object that for traditional sync-style dialects is provided by the third-party DBAPI implementation in use. For asyncio dialects, the implementation is typically an adapter object provided by the SQLAlchemy dialect itself; the underlying asyncio object is available via the ManagesConnection.driver_connection attribute.

    SQLAlchemy’s interface for the DBAPI connection is based on the DBAPIConnection protocol object.

  - method sqlalchemy.pool.PoolProxiedConnection.detach() → None

    Separate this connection from its Pool.

    This means that the connection will no longer be returned to the pool when closed, and will instead be literally closed. The associated ConnectionPoolEntry is de-associated from this DBAPI connection.

    Note that any overall connection limiting constraints imposed by a Pool implementation may be violated after a detach, as the detached connection is removed from the pool’s knowledge and control.
  - attribute sqlalchemy.pool.PoolProxiedConnection.driver_connection: Any | None

    The “driver level” connection object as used by the Python DBAPI or database driver.

    For traditional PEP 249 DBAPI implementations, this object will be the same object as that of ManagesConnection.dbapi_connection. For an asyncio database driver, this will be the ultimate “connection” object used by that driver, such as the asyncpg.Connection object which will not have standard pep-249 methods.

    Added in version 1.4.24.

  - attribute sqlalchemy.pool.PoolProxiedConnection.info

    inherited from the ManagesConnection.info attribute of ManagesConnection

    Info dictionary associated with the underlying DBAPI connection referred to by this ManagesConnection instance, allowing user-defined data to be associated with the connection.

    The data in this dictionary is persistent for the lifespan of the DBAPI connection itself, including across pool checkins and checkouts. When the connection is invalidated and replaced with a new one, this dictionary is cleared.

    For a PoolProxiedConnection instance that’s not associated with a ConnectionPoolEntry, such as if it were detached, the attribute returns a dictionary that is local to that ConnectionPoolEntry. Therefore the ManagesConnection.info attribute will always provide a Python dictionary.

  - method sqlalchemy.pool.PoolProxiedConnection.invalidate(e: BaseException | None = None, soft: bool = False) → None

    inherited from the ManagesConnection.invalidate() method of ManagesConnection

    Mark the managed connection as invalidated.

  - attribute sqlalchemy.pool.PoolProxiedConnection.is_detached

    Return True if this PoolProxiedConnection is detached from its pool.

  - attribute sqlalchemy.pool.PoolProxiedConnection.is_valid

    Return True if this PoolProxiedConnection still refers to an active DBAPI connection.

  - attribute sqlalchemy.pool.PoolProxiedConnection.record_info

    inherited from the ManagesConnection.record_info attribute of ManagesConnection

    Persistent info dictionary associated with this ManagesConnection.

    Unlike the ManagesConnection.info dictionary, the lifespan of this dictionary is that of the ConnectionPoolEntry which owns it; therefore this dictionary will persist across reconnects and connection invalidation for a particular entry in the connection pool.

    For a PoolProxiedConnection instance that’s not associated with a ConnectionPoolEntry, such as if it were detached, the attribute returns None. Contrast to the ManagesConnection.info dictionary which is never None.
- class sqlalchemy.pool._ConnectionFairy

  Proxies a DBAPI connection and provides return-on-dereference support.

  This is an internal object used by the Pool implementation to provide context management to a DBAPI connection delivered by that Pool. The public facing interface for this class is described by the PoolProxiedConnection class. See that class for public API details.

  The name “fairy” is inspired by the fact that the _ConnectionFairy object’s lifespan is transitory, as it lasts only for the length of a specific DBAPI connection being checked out from the pool, and additionally that as a transparent proxy, it is mostly invisible.

  Class signature

  class sqlalchemy.pool._ConnectionFairy (sqlalchemy.pool.base.PoolProxiedConnection)
- class sqlalchemy.pool._ConnectionRecord

  Maintains a position in a connection pool which references a pooled connection.

  This is an internal object used by the Pool implementation to provide context management to a DBAPI connection maintained by that Pool. The public facing interface for this class is described by the ConnectionPoolEntry class. See that class for public API details.

  Class signature

  class sqlalchemy.pool._ConnectionRecord (sqlalchemy.pool.base.ConnectionPoolEntry)