
PEP 249 – Python Database API Specification v2.0

Author:
Marc-André Lemburg <mal at lemburg.com>
Discussions-To:
Db-SIG list
Status:
Final
Type:
Informational
Created:
12-Apr-1999
Post-History:

Replaces:
248


Introduction

This API has been defined to encourage similarity between the Python modules that are used to access databases. By doing this, we hope to achieve a consistency leading to more easily understood modules, code that is generally more portable across databases, and a broader reach of database connectivity from Python.

Comments and questions about this specification may be directed to the SIG for Database Interfacing with Python.

For more information on database interfacing with Python and available packages see the Database Topic Guide.

This document describes the Python Database API Specification 2.0 and a set of common optional extensions. The previous version, 1.0, is still available as reference in PEP 248. Package writers are encouraged to use this version of the specification as basis for new interfaces.

Module Interface

Constructors

Access to the database is made available through connection objects. The module must provide the following constructor for these:

connect(parameters… )
Constructor for creating a connection to the database.

Returns a Connection Object. It takes a number of parameters which are database dependent. [1]

Globals

These module globals must be defined:

apilevel
String constant stating the supported DB API level.

Currently only the strings “1.0” and “2.0” are allowed. If not given, a DB-API 1.0 level interface should be assumed.

threadsafety
Integer constant stating the level of thread safety the interface supports. Possible values are:

threadsafety  Meaning
0             Threads may not share the module.
1             Threads may share the module, but not connections.
2             Threads may share the module and connections.
3             Threads may share the module, connections and cursors.

Sharing in the above context means that two threads may use a resource without wrapping it using a mutex semaphore to implement resource locking. Note that you cannot always make external resources thread safe by managing access using a mutex: the resource may rely on global variables or other external sources that are beyond your control.
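As an illustration of how an application might act on this constant, the following sketch (assuming a hypothetical DB API 2.0 module named dbmodule and the dsn keyword from footnote [1]) decides whether a single connection may be shared between worker threads:

    import dbmodule  # hypothetical DB API 2.0 compliant module

    def open_connections(num_workers):
        # Level 2 or higher: threads may share connections.
        if dbmodule.threadsafety >= 2:
            con = dbmodule.connect(dsn='mydb')
            return [con] * num_workers
        # Levels 0 and 1: give every worker its own connection (at level 0
        # even the module itself must not be shared between threads).
        return [dbmodule.connect(dsn='mydb') for _ in range(num_workers)]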

paramstyle
String constant stating the type of parameter marker formatting expected by the interface. Possible values are [2]:

paramstyle  Meaning
qmark       Question mark style, e.g. ...WHERE name=?
numeric     Numeric, positional style, e.g. ...WHERE name=:1
named       Named style, e.g. ...WHERE name=:name
format      ANSI C printf format codes, e.g. ...WHERE name=%s
pyformat    Python extended format codes, e.g. ...WHERE name=%(name)s
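To make the differences concrete, here is a hedged sketch of the same query written for each paramstyle; the cursor cur, the paramstyle value and the people table are assumptions, not part of the specification:

    def find_person(cur, paramstyle, name):
        # Bind the same value using the marker syntax declared by the module.
        if paramstyle == 'qmark':
            cur.execute("SELECT * FROM people WHERE name = ?", (name,))
        elif paramstyle == 'numeric':
            cur.execute("SELECT * FROM people WHERE name = :1", (name,))
        elif paramstyle == 'named':
            cur.execute("SELECT * FROM people WHERE name = :name", {'name': name})
        elif paramstyle == 'format':
            cur.execute("SELECT * FROM people WHERE name = %s", (name,))
        elif paramstyle == 'pyformat':
            cur.execute("SELECT * FROM people WHERE name = %(name)s", {'name': name})
        return cur.fetchall()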

Exceptions

The module should make all error information available through these exceptions or subclasses thereof:

Warning
Exception raised for important warnings like data truncations while inserting, etc. It must be a subclass of the Python Exception class [10] [11].
Error
Exception that is the base class of all other error exceptions. You can use this to catch all errors with one single except statement. Warnings are not considered errors and thus should not use this class as base. It must be a subclass of the Python Exception class [10].
InterfaceError
Exception raised for errors that are related to the database interface rather than the database itself. It must be a subclass of Error.
DatabaseError
Exception raised for errors that are related to the database. It must be a subclass of Error.
DataError
Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range, etc. It must be a subclass of DatabaseError.
OperationalError
Exception raised for errors that are related to the database’s operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs, the data source name is not found, a transaction could not be processed, a memory allocation error occurred during processing, etc. It must be a subclass of DatabaseError.
IntegrityError
Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails. It must be a subclass of DatabaseError.
InternalError
Exception raised when the database encounters an internal error, e.g. the cursor is not valid anymore, the transaction is out of sync, etc. It must be a subclass of DatabaseError.
ProgrammingError
Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, etc. It must be a subclass of DatabaseError.
NotSupportedError
Exception raised in case a method or database API was used which is not supported by the database, e.g. requesting a .rollback() on a connection that does not support transactions or has transactions turned off. It must be a subclass of DatabaseError.

This is the exception inheritance layout [10] [11]:

Exception
|__Warning
|__Error
   |__InterfaceError
   |__DatabaseError
      |__DataError
      |__OperationalError
      |__IntegrityError
      |__InternalError
      |__ProgrammingError
      |__NotSupportedError

Note

The values of these exceptions are not defined. They should give the user a fairly good idea of what went wrong, though.
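Because all error exceptions derive from Error, a single except clause is enough to catch every database-related failure. A minimal sketch, assuming a hypothetical module dbmodule and database-specific connect() arguments:

    import dbmodule  # hypothetical DB API 2.0 compliant module

    try:
        con = dbmodule.connect(dsn='mydb')   # parameters are module-specific
        cur = con.cursor()
        cur.execute("SELECT 1")
    except dbmodule.Warning as w:
        print("database warning:", w)        # warnings are not errors
    except dbmodule.IntegrityError as e:
        print("constraint violated:", e)     # catch a specific subclass first
    except dbmodule.Error as e:
        print("database error:", e)          # catches all remaining error classes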

Connection Objects

Connection objects should respond to the following methods.

Connection methods

.close()
Close the connection now (rather than whenever .__del__() is called).

The connection will be unusable from this point forward; an Error (or subclass) exception will be raised if any operation is attempted with the connection. The same applies to all cursor objects trying to use the connection. Note that closing a connection without committing the changes first will cause an implicit rollback to be performed.

.commit()
Commit any pending transaction to the database.

Note that if the database supports an auto-commit feature, this must be initially off. An interface method may be provided to turn it back on.

Database modules that do not support transactions should implement this method with void functionality.

.rollback()
This method is optional since not all databases provide transaction support. [3]

In case a database does provide transactions this method causes the database to roll back to the start of any pending transaction. Closing a connection without committing the changes first will cause an implicit rollback to be performed.
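The usual pattern is to commit when all statements of a unit of work succeed and roll back otherwise. A minimal sketch, assuming an open connection con from a hypothetical module dbmodule and qmark paramstyle:

    import dbmodule  # hypothetical DB API 2.0 compliant module

    def transfer(con, amount, src, dst):
        cur = con.cursor()
        try:
            cur.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                        (amount, src))
            cur.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                        (amount, dst))
            con.commit()          # make both changes permanent together
        except dbmodule.Error:
            con.rollback()        # undo the pending transaction on any error
            raise
        finally:
            cur.close()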

.cursor()
Return a new Cursor Object using the connection.

If the database does not provide a direct cursor concept, the module will have to emulate cursors using other means to the extent needed by this specification. [4]

Cursor Objects

These objects represent a database cursor, which is used to manage the context of a fetch operation. Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible by the other cursors. Cursors created from different connections can or can not be isolated, depending on how the transaction support is implemented (see also the connection’s .rollback() and .commit() methods).

Cursor Objects should respond to the following methods and attributes.

Cursor attributes

.description
This read-only attribute is a sequence of 7-item sequences.

Each of these sequences contains information describing one result column:

  • name
  • type_code
  • display_size
  • internal_size
  • precision
  • scale
  • null_ok

The first two items (name and type_code) are mandatory, the other five are optional and are set to None if no meaningful values can be provided.

This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the .execute*() method yet.

The type_code can be interpreted by comparing it to the Type Objects specified in the section below.
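A short, hedged sketch of reading .description after a query; the cursor cur, the module name dbmodule and the people table are assumptions:

    cur.execute("SELECT id, name FROM people")
    for name, type_code, *rest in cur.description:   # 7 items per column
        if type_code == dbmodule.STRING:
            print(name, "is a string-based column")
        elif type_code == dbmodule.NUMBER:
            print(name, "is a numeric column")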

.rowcount
This read-only attribute specifies the number of rows that the last .execute*() produced (for DQL statements like SELECT) or affected (for DML statements like UPDATE or INSERT). [9]

The attribute is -1 in case no .execute*() has been performed on the cursor or the rowcount of the last operation cannot be determined by the interface. [7]

Note

Future versions of the DB API specification could redefine the latter case to have the object return None instead of -1.

Cursor methods

.callproc(procname [,parameters ] )
(This method is optional since not all databases provide stored procedures. [3])

Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. The result of the call is returned as a modified copy of the input sequence. Input parameters are left untouched, output and input/output parameters replaced with possibly new values.

The procedure may also provide a result set as output. This must then be made available through the standard .fetch*() methods.

.close()
Close the cursor now (rather than whenever __del__ is called).

The cursor will be unusable from this point forward; an Error (or subclass) exception will be raised if any operation is attempted with the cursor.

.execute(operation [,parameters])
Prepare and execute a database operation (query or command).

Parameters may be provided as sequence or mapping and will be bound to variables in the operation. Variables are specified in a database-specific notation (see the module’s paramstyle attribute for details). [5]

A reference to the operation will be retained by the cursor. If the same operation object is passed in again, then the cursor can optimize its behavior. This is most effective for algorithms where the same operation is used, but different parameters are bound to it (many times).

For maximum efficiency when reusing an operation, it is best to use the .setinputsizes() method to specify the parameter types and sizes ahead of time. It is legal for a parameter to not match the predefined information; the implementation should compensate, possibly with a loss of efficiency.

The parameters may also be specified as a list of tuples to e.g. insert multiple rows in a single operation, but this kind of usage is deprecated: .executemany() should be used instead.

Return values are not defined.
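A minimal sketch of the reuse pattern described above, assuming a cursor cur from a module that uses qmark paramstyle and a made-up people table:

    insert = "INSERT INTO people (name, age) VALUES (?, ?)"
    for row in [("anne", 31), ("bob", 42)]:
        # The identical operation object is passed each time, so the module
        # may prepare it once and only re-bind the parameters.
        cur.execute(insert, row)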

.executemany( operation, seq_of_parameters )
Prepare a database operation (query or command) and then execute it against all parameter sequences or mappings found in the sequence seq_of_parameters.

Modules are free to implement this method using multiple calls to the .execute() method or by using array operations to have the database process the sequence as a whole in one call.

Use of this method for an operation which produces one or more result sets constitutes undefined behavior, and the implementation is permitted (but not required) to raise an exception when it detects that a result set has been created by an invocation of the operation.

The same comments as for .execute() also apply accordingly to this method.

Return values are not defined.
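A minimal sketch, again assuming qmark paramstyle, a cursor cur and its connection con:

    rows = [("anne", 31), ("bob", 42), ("carol", 27)]
    cur.executemany("INSERT INTO people (name, age) VALUES (?, ?)", rows)
    con.commit()   # the inserts are part of the current transaction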

.fetchone()
Fetch the next row of a query result set, returning a single sequence, or None when no more data is available. [6]

An Error (or subclass) exception is raised if the previous call to .execute*() did not produce any result set or no call was issued yet.

.fetchmany([size=cursor.arraysize])
Fetch the next set of rows of a query result, returning a sequence of sequences (e.g. a list of tuples). An empty sequence is returned when no more rows are available.

The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor’s arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned.

An Error (or subclass) exception is raised if the previous call to .execute*() did not produce any result set or no call was issued yet.

Note there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the .arraysize attribute. If the size parameter is used, then it is best for it to retain the same value from one .fetchmany() call to the next.

.fetchall()
Fetch all (remaining) rows of a query result, returning them as a sequence of sequences (e.g. a list of tuples). Note that the cursor’s arraysize attribute can affect the performance of this operation.

An Error (or subclass) exception is raised if the previous call to .execute*() did not produce any result set or no call was issued yet.
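A hedged sketch of batched fetching with .fetchmany(); the cursor cur, the people table and the process() helper are placeholders:

    cur.execute("SELECT name FROM people")
    cur.arraysize = 100                 # let .fetchmany() pull 100 rows per call
    while True:
        rows = cur.fetchmany()          # uses cur.arraysize by default
        if not rows:
            break                       # empty sequence: result set exhausted
        for row in rows:
            process(row)                # placeholder for application logic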

.nextset()
(This method is optional since not all databases support multiple result sets. [3])

This method will make the cursor skip to the next available set, discarding any remaining rows from the current set.

If there are no more sets, the method returns None. Otherwise, it returns a true value and subsequent calls to the .fetch*() methods will return rows from the next result set.

An Error (or subclass) exception is raised if the previous call to .execute*() did not produce any result set or no call was issued yet.

.arraysize
This read/write attribute specifies the number of rows to fetch at a time with .fetchmany(). It defaults to 1 meaning to fetch a single row at a time.

Implementations must observe this value with respect to the .fetchmany() method, but are free to interact with the database a single row at a time. It may also be used in the implementation of .executemany().

.setinputsizes(sizes)
This can be used before a call to .execute*() to predefine memory areas for the operation’s parameters.

sizes is specified as a sequence, with one item for each input parameter. The item should be a Type Object that corresponds to the input that will be used, or it should be an integer specifying the maximum length of a string parameter. If the item is None, then no predefined memory area will be reserved for that column (this is useful to avoid predefined areas for large inputs).

This method would be used before the .execute*() method is invoked.

Implementations are free to have this method do nothing and users are free to not use it.

.setoutputsize(size [,column])
Set a column buffer size for fetches of large columns (e.g. LONGs, BLOBs, etc.). The column is specified as an index into the result sequence. Not specifying the column will set the default size for all large columns in the cursor.

This method would be used before the .execute*() method is invoked.

Implementations are free to have this method do nothing and users are free to not use it.

Type Objects and Constructors

Many databases need to have the input in a particular format for binding to an operation’s input parameters. For example, if an input is destined for a DATE column, then it must be bound to the database in a particular string format. Similar problems exist for “Row ID” columns or large binary items (e.g. blobs or RAW columns). This presents problems for Python since the parameters to the .execute*() method are untyped. When the database module sees a Python string object, it doesn’t know if it should be bound as a simple CHAR column, as a raw BINARY item, or as a DATE.

To overcome this problem, a module must provide the constructors defined below to create objects that can hold special values. When passed to the cursor methods, the module can then detect the proper type of the input parameter and bind it accordingly.

A Cursor Object’s description attribute returns information about each of the result columns of a query. The type_code must compare equal to one of the Type Objects defined below. Type Objects may be equal to more than one type code (e.g. DATETIME could be equal to the type codes for date, time and timestamp columns; see the Implementation Hints below for details).

The module exports the following constructors and singletons:

Date(year, month, day)
This function constructs an object holding a date value.
Time(hour, minute, second)
This function constructs an object holding a time value.
Timestamp(year, month, day, hour, minute, second)
This function constructs an object holding a time stamp value.
DateFromTicks(ticks)
This function constructs an object holding a date value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).
TimeFromTicks(ticks)
This function constructs an object holding a time value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).
TimestampFromTicks(ticks)
This function constructs an object holding a time stamp value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).
Binary(string)
This function constructs an object capable of holding a binary (long) string value.
STRING type
This type object is used to describe columns in a database that are string-based (e.g. CHAR).
BINARY type
This type object is used to describe (long) binary columns in a database (e.g. LONG, RAW, BLOBs).
NUMBER type
This type object is used to describe numeric columns in a database.
DATETIME type
This type object is used to describe date/time columns in a database.
ROWID type
This type object is used to describe the “Row ID” column in a database.

SQL NULL values are represented by the Python None singleton on input and output.

Note

Usage of Unix ticks for database interfacing can cause troubles because of the limited date range they cover.
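A minimal sketch of passing these objects as parameters, assuming a hypothetical module dbmodule, a cursor cur using qmark paramstyle and a made-up staff table:

    hired = dbmodule.Date(1999, 4, 12)    # marks the value as a date
    cur.execute("INSERT INTO staff (name, hired, notes) VALUES (?, ?, ?)",
                ("marc", hired, None))    # None is bound as SQL NULL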

Implementation Hints for Module Authors

  • Date/time objects can be implemented as Python datetime module objects (available since Python 2.3, with a C API since 2.4) or using the mxDateTime package (available for all Python versions since 1.5.2). They both provide all necessary constructors and methods at Python and C level.
  • Here is a sample implementation of the Unix ticks based constructors for date/time delegating work to the generic constructors:
    import time

    def DateFromTicks(ticks):
        return Date(*time.localtime(ticks)[:3])

    def TimeFromTicks(ticks):
        return Time(*time.localtime(ticks)[3:6])

    def TimestampFromTicks(ticks):
        return Timestamp(*time.localtime(ticks)[:6])
  • The preferred object types for Binary objects are the buffer types available in standard Python starting with version 1.5.2. Please see the Python documentation for details. For information about the C interface have a look at Include/bufferobject.h and Objects/bufferobject.c in the Python source distribution.
  • This Python class allows implementing the above type objects even though the description type code field yields multiple values for one type object (a Python 3 oriented variant with example usage is sketched at the end of this list):
    class DBAPITypeObject:
        def __init__(self, *values):
            self.values = values
        def __cmp__(self, other):
            if other in self.values:
                return 0
            if other < self.values:
                return 1
            else:
                return -1

    The resulting type object compares equal to all values passed to the constructor.

  • Here is a snippet of Python code that implements the exception hierarchy defined above [10]:
    class Error(Exception):
        pass
    class Warning(Exception):
        pass
    class InterfaceError(Error):
        pass
    class DatabaseError(Error):
        pass
    class InternalError(DatabaseError):
        pass
    class OperationalError(DatabaseError):
        pass
    class ProgrammingError(DatabaseError):
        pass
    class IntegrityError(DatabaseError):
        pass
    class DataError(DatabaseError):
        pass
    class NotSupportedError(DatabaseError):
        pass

    In C you can use the PyErr_NewException(fullname, base, NULL) API to create the exception objects.
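  • Since the __cmp__ based class above is Python 2 specific, the following sketch (not part of the original specification) shows an equivalent helper for Python 3 together with example usage; the type code strings are made up for illustration:
    class DBAPITypeObject:
        def __init__(self, *values):
            self.values = values
        def __eq__(self, other):
            return other in self.values
        def __ne__(self, other):
            return other not in self.values

    # Module-level type objects built from (made-up) database type codes:
    STRING = DBAPITypeObject('CHAR', 'VARCHAR', 'TEXT')
    NUMBER = DBAPITypeObject('INTEGER', 'FLOAT', 'NUMERIC')

    assert STRING == 'VARCHAR'   # a description type_code of 'VARCHAR' matches STRING
    assert NUMBER != 'VARCHAR'   # but not NUMBER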

Optional DB API Extensions

During the lifetime of DB API 2.0, module authors have often extended their implementations beyond what is required by this DB API specification. To enhance compatibility and to provide a clean upgrade path to possible future versions of the specification, this section defines a set of common extensions to the core DB API 2.0 specification.

As with all DB API optional features, the database module authors are free to not implement these additional attributes and methods (using them will then result in an AttributeError) or to raise a NotSupportedError in case the availability can only be checked at run-time.

It has been proposed to make usage of these extensions optionally visible to the programmer by issuing Python warnings through the Python warning framework. To make this feature useful, the warning messages must be standardized in order to be able to mask them. These standard messages are referred to below as Warning Message.

Cursor.rownumber
This read-only attribute should provide the current 0-based index of the cursor in the result set or None if the index cannot be determined.

The index can be seen as index of the cursor in a sequence (the result set). The next fetch operation will fetch the row indexed by .rownumber in that sequence.

Warning Message: “DB-API extension cursor.rownumber used”

Connection.Error, Connection.ProgrammingError, etc.
All exception classes defined by the DB API standard should be exposed on the Connection objects as attributes (in addition to being available at module scope).

These attributes simplify error handling in multi-connection environments.

Warning Message: “DB-API extension connection.<exception> used”

Cursor.connection
This read-only attribute returns a reference to the Connection object on which the cursor was created.

The attribute simplifies writing polymorphic code in multi-connection environments.

Warning Message: “DB-API extension cursor.connection used”

Cursor.scroll(value [, mode='relative'])
Scroll the cursor in the result set to a new position according to mode.

If mode is relative (default), value is taken as an offset to the current position in the result set; if set to absolute, value states an absolute target position.

An IndexError should be raised in case a scroll operation would leave the result set. In this case, the cursor position is left undefined (ideal would be to not move the cursor at all).

Note

This method should use native scrollable cursors, if available, or revert to an emulation for forward-only scrollable cursors. The method may raise NotSupportedError to signal that a specific operation is not supported by the database (e.g. backward scrolling).

Warning Message: “DB-API extension cursor.scroll() used”

Cursor.messages
This is a Python list object to which the interface appends tuples (exception class, exception value) for all messages which the interface receives from the underlying database for this cursor.

The list is cleared automatically by all standard cursor method calls (prior to executing the call), except for the .fetch*() calls, to avoid excessive memory usage; it can also be cleared by executing del cursor.messages[:].

All error and warning messages generated by the database are placed into this list, so checking the list allows the user to verify correct operation of the method calls.

The aim of this attribute is to eliminate the need for a Warning exception which often causes problems (some warnings really only have informational character).

Warning Message: “DB-API extension cursor.messages used”

Connection.messages
Same as Cursor.messages except that the messages in the list are connection oriented.

The list is cleared automatically by all standard connection method calls (prior to executing the call) to avoid excessive memory usage and can also be cleared by executing del connection.messages[:].

Warning Message: “DB-API extension connection.messages used”

Cursor.next()
Return the next row from the currently executing SQL statement using the same semantics as .fetchone(). A StopIteration exception is raised when the result set is exhausted for Python versions 2.2 and later. Previous versions don’t have the StopIteration exception and so the method should raise an IndexError instead.

Warning Message: “DB-API extension cursor.next() used”

Cursor.__iter__()
Return self to make cursors compatible with the iteration protocol [8].

Warning Message: “DB-API extension cursor.__iter__() used”
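A hedged sketch that uses the iteration extension when present and falls back to .fetchone() otherwise; the cursor cur and the people table are placeholders:

    cur.execute("SELECT name FROM people")
    if hasattr(cur, '__iter__'):
        for row in cur:              # uses cursor.__iter__()/.next()
            print(row)
    else:
        row = cur.fetchone()
        while row is not None:
            print(row)
            row = cur.fetchone()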

Cursor.lastrowid
This read-only attribute provides the rowid of the last modified row (most databases return a rowid only when a single INSERT operation is performed). If the operation does not set a rowid or if the database does not support rowids, this attribute should be set to None.

The semantics of .lastrowid are undefined in case the last executed statement modified more than one row, e.g. when using INSERT with .executemany().

Warning Message: “DB-API extension cursor.lastrowid used”

Connection.autocommit
Attribute to query and set the autocommit mode of the connection.

Return True if the connection is operating in autocommit (non-transactional) mode. Return False if the connection is operating in manual commit (transactional) mode.

Setting the attribute to True or False adjusts the connection’s mode accordingly.

Changing the setting from True to False (disabling autocommit) will have the database leave autocommit mode and start a new transaction. Changing from False to True (enabling autocommit) has database dependent semantics with respect to how pending transactions are handled. [12]

Deprecation notice: Even though several database modules implement both the read and write nature of this attribute, setting the autocommit mode by writing to the attribute is deprecated, since this may result in I/O and related exceptions, making it difficult to implement in an async context. [13]

Warning Message: “DB-API extension connection.autocommit used”

Optional Error Handling Extensions

The core DB API specification only introduces a set of exceptions which can be raised to report errors to the user. In some cases, exceptions may be too disruptive for the flow of a program or even render execution impossible.

For these cases and in order to simplify error handling when dealing with databases, database module authors may choose to implement user definable error handlers. This section describes a standard way of defining these error handlers.

Connection.errorhandler, Cursor.errorhandler
Read/write attribute which references an error handler to call in case an error condition is met.

The handler must be a Python callable taking the following arguments:

errorhandler(connection, cursor, errorclass, errorvalue)

where connection is a reference to the connection on which the cursor operates, cursor a reference to the cursor (or None in case the error does not apply to a cursor), and errorclass is an error class to instantiate using errorvalue as the construction argument.

The standard error handler should add the error information to the appropriate .messages attribute (Connection.messages or Cursor.messages) and raise the exception defined by the given errorclass and errorvalue parameters.

If no .errorhandler is set (the attribute is None), the standard error handling scheme as outlined above should be applied.

Warning Message: “DB-API extension .errorhandler used”

Cursors should inherit the .errorhandler setting from their connection objects at cursor creation time.
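A minimal sketch of a user-defined error handler that records warnings instead of raising them, assuming a hypothetical module dbmodule whose connections implement the .messages and .errorhandler extensions:

    import logging
    import dbmodule  # hypothetical DB API 2.0 compliant module

    def errorhandler(connection, cursor, errorclass, errorvalue):
        target = cursor if cursor is not None else connection
        target.messages.append((errorclass, errorvalue))   # keep the standard record
        if issubclass(errorclass, dbmodule.Warning):
            logging.warning("database warning: %s", errorvalue)
            return                                         # swallow the warning
        raise errorclass(errorvalue)                       # default behaviour for errors

    con = dbmodule.connect(dsn='mydb')
    con.errorhandler = errorhandler    # cursors created later inherit this handler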

Optional Two-Phase Commit Extensions

Many databases have support for two-phase commit (TPC) which allows managing transactions across multiple database connections and other resources.

If a database backend provides support for two-phase commit and the database module author wishes to expose this support, the following API should be implemented. NotSupportedError should be raised if the database backend support for two-phase commit can only be checked at run-time.

TPC Transaction IDs

As many databases follow the XA specification, transaction IDs are formed from three components:

  • a format ID
  • a global transaction ID
  • a branch qualifier

For a particular global transaction, the first two components should be the same for all resources. Each resource in the global transaction should be assigned a different branch qualifier.

The various components must satisfy the following criteria:

  • format ID: a non-negative 32-bit integer.
  • global transaction ID and branch qualifier: byte strings no longer than 64 characters.

Transaction IDs are created with the .xid() Connection method:

.xid(format_id, global_transaction_id, branch_qualifier)
Returns a transaction ID object suitable for passing to the .tpc_*() methods of this connection.

If the database connection does not support TPC, a NotSupportedError is raised.

The type of the object returned by .xid() is not defined, but it must provide sequence behaviour, allowing access to the three components. A conforming database module could choose to represent transaction IDs with tuples rather than a custom object.

TPC Connection Methods

.tpc_begin(xid)
Begins a TPC transaction with the given transaction ID xid.

This method should be called outside of a transaction (i.e. nothing may have executed since the last .commit() or .rollback()).

Furthermore, it is an error to call .commit() or .rollback() within the TPC transaction. A ProgrammingError is raised if the application calls .commit() or .rollback() during an active TPC transaction.

If the database connection does not support TPC, a NotSupportedError is raised.

.tpc_prepare()
Performs the first phase of a transaction started with .tpc_begin(). A ProgrammingError should be raised if this method is called outside of a TPC transaction.

After calling .tpc_prepare(), no statements can be executed until .tpc_commit() or .tpc_rollback() have been called.

.tpc_commit([xid ])
When called with no arguments, .tpc_commit() commits a TPC transaction previously prepared with .tpc_prepare().

If .tpc_commit() is called prior to .tpc_prepare(), a single phase commit is performed. A transaction manager may choose to do this if only a single resource is participating in the global transaction.

When called with a transaction ID xid, the database commits the given transaction. If an invalid transaction ID is provided, a ProgrammingError will be raised. This form should be called outside of a transaction, and is intended for use in recovery.

On return, the TPC transaction is ended.

.tpc_rollback([xid ])
When called with no arguments, .tpc_rollback() rolls back a TPC transaction. It may be called before or after .tpc_prepare().

When called with a transaction ID xid, it rolls back the given transaction. If an invalid transaction ID is provided, a ProgrammingError is raised. This form should be called outside of a transaction, and is intended for use in recovery.

On return, the TPC transaction is ended.

.tpc_recover()
Returns a list of pending transaction IDs suitable for use with .tpc_commit(xid) or .tpc_rollback(xid).

If the database does not support transaction recovery, it may return an empty list or raise NotSupportedError.
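A hedged sketch of the full TPC flow across two connections (con1, con2) from hypothetical modules implementing this extension; the format ID, transaction names and tables are made up:

    xid1 = con1.xid(42, 'global-tx-1', 'branch-1')
    xid2 = con2.xid(42, 'global-tx-1', 'branch-2')   # same global ID, different branch

    con1.tpc_begin(xid1)
    con2.tpc_begin(xid2)
    try:
        con1.cursor().execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
        con2.cursor().execute("UPDATE mirror SET balance = balance + 10 WHERE id = 1")
        con1.tpc_prepare()
        con2.tpc_prepare()       # phase one on every participating resource
        con1.tpc_commit()
        con2.tpc_commit()        # phase two: commit the prepared work
    except Exception:
        con1.tpc_rollback()
        con2.tpc_rollback()      # abandon the global transaction on any failure
        raise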

Frequently Asked Questions

The database SIG often sees recurring questions about the DB API specification. This section covers some of the issues people sometimes have with the specification.

Question:

How can I construct a dictionary out of the tuples returned by .fetch*()?

Answer:

There are several existing tools available which provide helpers for this task. Most of them use the approach of using the column names defined in the cursor attribute .description as the basis for the keys in the row dictionary.
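One such helper can be sketched in a few lines; the cursor cur and the people table are placeholders:

    cur.execute("SELECT id, name FROM people")
    columns = [col[0] for col in cur.description]                  # column names
    rows_as_dicts = [dict(zip(columns, row)) for row in cur.fetchall()]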

Note that the reason for not extending the DB API specification to also support dictionary return values for the .fetch*() methods is that this approach has several drawbacks:

  • Some databases don’t support case-sensitive column names or auto-convert them to all lowercase or all uppercase characters.
  • Columns in the result set which are generated by the query (e.g. using SQL functions) don’t map to table column names and databases usually generate names for these columns in a very database specific way.

As a result, accessing the columns through dictionary keys varies between databases and makes writing portable code impossible.

Major Changes from Version 1.0 to Version 2.0

The Python Database API 2.0 introduces a few major changes compared to the 1.0 version. Because some of these changes will cause existing DB API 1.0 based scripts to break, the major version number was adjusted to reflect this change.

These are the most important changes from 1.0 to 2.0:

  • The need for a separate dbi module was dropped and the functionality merged into the module interface itself.
  • New constructors and Type Objects were added for date/time values, and the RAW Type Object was renamed to BINARY. The resulting set should cover all basic data types commonly found in modern SQL databases.
  • New constants (apilevel, threadsafety, paramstyle) and methods (.executemany(), .nextset()) were added to provide better database bindings.
  • The semantics of .callproc() needed to call stored procedures are now clearly defined.
  • The definition of the .execute() return value changed. Previously, the return value was based on the SQL statement type (which was hard to implement right); it is undefined now. Use the more flexible .rowcount attribute instead. Modules are free to return the old style return values, but these are no longer mandated by the specification and should be considered database interface dependent.
  • Class based exceptions were incorporated into the specification. Module implementors are free to extend the exception layout defined in this specification by subclassing the defined exception classes.

Post-publishing additions to the DB API 2.0 specification:

  • Additional optional DB API extensions to the set of core functionality were specified.

Open Issues

Although the version 2.0 specification clarifies a lot of questions that were left open in the 1.0 version, there are still some remaining issues which should be addressed in future versions:

  • Define a useful return value for .nextset() for the case where a new result set is available.
  • Integrate the decimal module Decimal object for use as loss-less monetary and decimal interchange format.

Footnotes

[1]
As a guideline the connection constructor parameters should be implemented as keyword parameters for more intuitive use and follow this order of parameters:

Parameter  Meaning
dsn        Data source name as string
user       User name as string (optional)
password   Password as string (optional)
host       Hostname (optional)
database   Database name (optional)

E.g. a connect could look like this:

connect(dsn='myhost:MYDB', user='guido', password='234$')

Also see [13] regarding planned future additions to this list.

[2]
Module implementors should prefer numeric, named or pyformat over the other formats because these offer more clarity and flexibility.
[3]
If the database does not support the functionality required by the method, the interface should throw an exception in case the method is used.

The preferred approach is to not implement the method and thus have Python generate an AttributeError in case the method is requested. This allows the programmer to check for database capabilities using the standard hasattr() function.

For some dynamically configured interfaces it may not be appropriate to require dynamically making the method available. These interfaces should then raise a NotSupportedError to indicate the non-ability to perform the roll back when the method is invoked.

[4]
A database interface may choose to support named cursors by allowing a string argument to the method. This feature is not part of the specification, since it complicates semantics of the .fetch*() methods.
[5]
The module will use the __getitem__ method of the parameters object to map either positions (integers) or names (strings) to parameter values. This allows for both sequences and mappings to be used as input.

The term bound refers to the process of binding an input value to a database execution buffer. In practical terms, this means that the input value is directly used as a value in the operation. The client should not be required to “escape” the value so that it can be used; the value should be equal to the actual database value.

[6]
Note that the interface may implement row fetching using arrays and other optimizations. It is not guaranteed that a call to this method will only move the associated cursor forward by one row.
[7]
The rowcount attribute may be coded in a way that updates its value dynamically. This can be useful for databases that return usable rowcount values only after the first call to a .fetch*() method.
[8]
Implementation Note: Python C extensions will have to implement the tp_iter slot on the cursor object instead of the .__iter__() method.
[9]
The term number of affected rows generally refers to the number of rows deleted, updated or inserted by the last statement run on the database cursor. Most databases will return the total number of rows that were found by the corresponding WHERE clause of the statement. Some databases use a different interpretation for UPDATEs and only return the number of rows that were changed by the UPDATE, even though the WHERE clause of the statement may have found more matching rows. Database module authors should try to implement the more common interpretation of returning the total number of rows found by the WHERE clause, or clearly document a different interpretation of the .rowcount attribute.
[10]
In Python 2 and earlier versions of this PEP, StandardError was used as the base class for all DB-API exceptions. Since StandardError was removed in Python 3, database modules targeting Python 3 should use Exception as base class instead. The PEP was updated to use Exception throughout the text, to avoid confusion. The change should not affect existing modules or uses of those modules, since all DB-API error exception classes are still rooted at the Error or Warning classes.
[11]
In a future revision of the DB-API, the base class for Warning will likely change to the builtin Warning class. At the time of writing of the DB-API 2.0 in 1999, the warning framework in Python did not yet exist.
[12]
Many database modules implementing the autocommit attribute will automatically commit any pending transaction and then enter autocommit mode. It is generally recommended to explicitly .commit() or .rollback() transactions prior to changing the autocommit setting, since this is portable across database modules.
[13]
In a future revision of the DB-API, we are going to introduce a new method .setautocommit(value), which will allow setting the autocommit mode, and make .autocommit a read-only attribute. Additionally, we are considering adding a new standard keyword parameter autocommit to the Connection constructor. Module authors are encouraged to add these changes in preparation for this change.

Acknowledgements

Many thanks go to Andrew Kuchling who converted the Python Database API Specification 2.0 from the original HTML format into the PEP format in 2001.

Many thanks to James Henstridge for leading the discussion which led to the standardization of the two-phase commit API extensions in 2008.

Many thanks to Daniele Varrazzo for converting the specification from text PEP format to ReST PEP format, which allows linking to various parts, in 2012.

Copyright

This document has been placed in the Public Domain.

