Programming FAQ¶
General Questions¶
Is there a source code level debugger with breakpoints, single-stepping, etc.?¶
Yes.
Several debuggers for Python are described below, and the built-in function breakpoint() allows you to drop into any of them.
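For instance, calling breakpoint() drops you into pdb by default (or into the debugger named by the PYTHONBREAKPOINT environment variable); a minimal sketch, using a hypothetical buggy_sum function:

def buggy_sum(values):
    total = 0
    for v in values:
        breakpoint()   # pauses here and starts pdb (or the debugger chosen via PYTHONBREAKPOINT)
        total += v
    return total

buggy_sum([1, 2, 3])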
The pdb module is a simple but adequate console-mode debugger for Python. It is part of the standard Python library, and is documented in the Library Reference Manual. You can also write your own debugger by using the code for pdb as an example.
The IDLE interactive development environment, which is part of the standard Python distribution (normally available as Tools/scripts/idle3), includes a graphical debugger.
PythonWin is a Python IDE that includes a GUI debugger based on pdb. The PythonWin debugger colors breakpoints and has quite a few cool features such as debugging non-PythonWin programs. PythonWin is available as part of the pywin32 project and as a part of the ActivePython distribution.
Eric is an IDE built on PyQt and the Scintilla editing component.
trepan3k is a gdb-like debugger.
Visual Studio Code is an IDE with debugging tools that integrates with version-control software.
There are also a number of commercial Python IDEs that include graphical debuggers.
Are there tools to help find bugs or perform static analysis?¶
Yes.
Pylint and Pyflakes do basic checking that will help you catch bugs sooner.
Static type checkers such as Mypy, Pyre, and Pytype can check type hints in Python source code.
How can I create a stand-alone binary from a Python script?¶
You don’t need the ability to compile Python to C code if all you want is a stand-alone program that users can download and run without having to install the Python distribution first. There are a number of tools that determine the set of modules required by a program and bind these modules together with a Python binary to produce a single executable.
One is to use the freeze tool, which is included in the Python source tree as Tools/freeze. It converts Python byte code to C arrays; with a C compiler you can embed all your modules into a new program, which is then linked with the standard Python modules.
It works by scanning your source recursively for import statements (in both forms) and looking for the modules in the standard Python path as well as in the source directory (for built-in modules). It then turns the bytecode for modules written in Python into C code (array initializers that can be turned into code objects using the marshal module) and creates a custom-made config file that only contains those built-in modules which are actually used in the program. It then compiles the generated C code and links it with the rest of the Python interpreter to form a self-contained binary which acts exactly like your script.
The following packages can help with the creation of console and GUI executables:
Nuitka (Cross-platform)
PyInstaller (Cross-platform)
PyOxidizer (Cross-platform)
cx_Freeze (Cross-platform)
py2app (macOS only)
py2exe (Windows only)
Are there coding standards or a style guide for Python programs?¶
Yes. The coding style required for standard library modules is documented as PEP 8.
Core Language¶
Why am I getting an UnboundLocalError when the variable has a value?¶
It can be a surprise to get the UnboundLocalError in previously working code when it is modified by adding an assignment statement somewhere in the body of a function.
This code:
>>> x = 10
>>> def bar():
...     print(x)
...
>>> bar()
10
works, but this code:
>>> x = 10
>>> def foo():
...     print(x)
...     x += 1

results in an UnboundLocalError:

>>> foo()
Traceback (most recent call last):
  ...
UnboundLocalError: local variable 'x' referenced before assignment
This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope. Since the last statement in foo assigns a new value to x, the compiler recognizes it as a local variable. Consequently, when the earlier print(x) attempts to print the uninitialized local variable, an error results.
In the example above you can access the outer scope variable by declaring it global:

>>> x = 10
>>> def foobar():
...     global x
...     print(x)
...     x += 1
...
>>> foobar()
10
This explicit declaration is required in order to remind you that (unlike the superficially analogous situation with class and instance variables) you are actually modifying the value of the variable in the outer scope:

>>> print(x)
11
You can do a similar thing in a nested scope using the nonlocal keyword:

>>> def foo():
...     x = 10
...     def bar():
...         nonlocal x
...         print(x)
...         x += 1
...     bar()
...     print(x)
...
>>> foo()
10
11
What are the rules for local and global variables in Python?¶
In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a value anywhere within the function’s body, it’s assumed to be a local unless explicitly declared as global.
Though a bit surprising at first, a moment’s consideration explains this. On one hand, requiring global for assigned variables provides a bar against unintended side-effects. On the other hand, if global was required for all global references, you’d be using global all the time. You’d have to declare as global every reference to a built-in function or to a component of an imported module. This clutter would defeat the usefulness of the global declaration for identifying side-effects.
Why do lambdas defined in a loop with different values all return the same result?¶
Assume you use a for loop to define a few different lambdas (or even plain functions), e.g.:

>>> squares = []
>>> for x in range(5):
...     squares.append(lambda: x**2)
This gives you a list that contains 5 lambdas that calculate x**2. You might expect that, when called, they would return, respectively, 0, 1, 4, 9, and 16. However, when you actually try you will see that they all return 16:

>>> squares[2]()
16
>>> squares[4]()
16
This happens because x is not local to the lambdas, but is defined in the outer scope, and it is accessed when the lambda is called, not when it is defined. At the end of the loop, the value of x is 4, so all the functions now return 4**2, i.e. 16. You can also verify this by changing the value of x and see how the results of the lambdas change:

>>> x = 8
>>> squares[2]()
64
In order to avoid this, you need to save the values in variables local to the lambdas, so that they don’t rely on the value of the global x:

>>> squares = []
>>> for x in range(5):
...     squares.append(lambda n=x: n**2)
Here, n=x creates a new variable n local to the lambda and computed when the lambda is defined so that it has the same value that x had at that point in the loop. This means that the value of n will be 0 in the first lambda, 1 in the second, 2 in the third, and so on. Therefore each lambda will now return the correct result:

>>> squares[2]()
4
>>> squares[4]()
16
Note that this behaviour is not peculiar to lambdas, but applies to regular functions too.
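For instance, the same late binding shows up with def, and the same default-argument fix applies; a short sketch with throwaway names:

>>> funcs = []
>>> for x in range(5):
...     def f():
...         return x**3   # x is looked up when f() is called, not when it is defined
...     funcs.append(f)
...
>>> funcs[0](), funcs[4]()   # both see the final value of x
(64, 64)
>>> funcs = [lambda n=x: n**3 for x in range(5)]
>>> funcs[0](), funcs[4]()   # the default argument captures each value of x
(0, 64)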
How do I share global variables across modules?¶
The canonical way to share information across modules within a single program is to create a special module (often called config or cfg). Just import the config module in all modules of your application; the module then becomes available as a global name. Because there is only one instance of each module, any changes made to the module object get reflected everywhere. For example:
config.py:
x = 0   # Default value of the 'x' configuration setting
mod.py:
import config
config.x = 1
main.py:
import config
import mod
print(config.x)
Note that using a module is also the basis for implementing the singleton design pattern, for the same reason.
What are the “best practices” for using import in a module?¶
In general, don’t use from modulename import *. Doing so clutters the importer’s namespace, and makes it much harder for linters to detect undefined names.
Import modules at the top of a file. Doing so makes it clear what other modules your code requires and avoids questions of whether the module name is in scope. Using one import per line makes it easy to add and delete module imports, but using multiple imports per line uses less screen space.
It’s good practice to import modules in the following order:
standard library modules – e.g. sys, os, argparse, re
third-party library modules (anything installed in Python’s site-packages directory) – e.g. dateutil, requests, PIL.Image
locally developed modules
It is sometimes necessary to move imports to a function or class to avoid problems with circular imports. Gordon McMillan says:
Circular imports are fine where both modules use the “import <module>” form of import. They fail when the 2nd module wants to grab a name out of the first (“from module import name”) and the import is at the top level. That’s because names in the 1st are not yet available, because the first module is busy importing the 2nd.
In this case, if the second module is only used in one function, then the import can easily be moved into that function. By the time the import is called, the first module will have finished initializing, and the second module can do its import.
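As a sketch (the module names a.py and b.py and their functions are hypothetical), moving the from ... import into the function that needs it breaks the cycle:

# a.py
import b

def eggs():
    return "eggs"

def use_b():
    return b.spam()

# b.py
def spam():
    from a import eggs   # deferred: by the time spam() runs, module a is fully initialized
    return "spam and " + eggs()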
It may also be necessary to move imports out of the top level of code if some of the modules are platform-specific. In that case, it may not even be possible to import all of the modules at the top of the file. In this case, importing the correct modules in the corresponding platform-specific code is a good option.
Only move imports into a local scope, such as inside a function definition, if it’s necessary to solve a problem such as avoiding a circular import or trying to reduce the initialization time of a module. This technique is especially helpful if many of the imports are unnecessary depending on how the program executes. You may also want to move imports into a function if the modules are only ever used in that function. Note that loading a module the first time may be expensive because of the one time initialization of the module, but loading a module multiple times is virtually free, costing only a couple of dictionary lookups. Even if the module name has gone out of scope, the module is probably available in sys.modules.
Why are default values shared between objects?¶
This type of bug commonly bites neophyte programmers. Consider this function:
def foo(mydict={}):    # Danger: shared reference to one dict for all calls
    ... compute something ...
    mydict[key] = value
    return mydict
The first time you call this function, mydict contains a single item. The second time, mydict contains two items because when foo() begins executing, mydict starts out with an item already in it.
It is often expected that a function call creates new objects for default values. This is not what happens. Default values are created exactly once, when the function is defined. If that object is changed, like the dictionary in this example, subsequent calls to the function will refer to this changed object.
By definition, immutable objects such as numbers, strings, tuples, and None are safe from change. Changes to mutable objects such as dictionaries, lists, and class instances can lead to confusion.
Because of this feature, it is good programming practice to not use mutable objects as default values. Instead, use None as the default value and inside the function, check if the parameter is None and create a new list/dictionary/whatever if it is. For example, don’t write:
def foo(mydict={}):
    ...
but:
def foo(mydict=None):
    if mydict is None:
        mydict = {}        # create a new dict for local namespace
This feature can be useful. When you have a function that’s time-consuming to compute, a common technique is to cache the parameters and the resulting value of each call to the function, and return the cached value if the same value is requested again. This is called “memoizing”, and can be implemented like this:

# Callers can only provide two parameters and optionally pass _cache by keyword
def expensive(arg1, arg2, *, _cache={}):
    if (arg1, arg2) in _cache:
        return _cache[(arg1, arg2)]

    # Calculate the value
    result = ... expensive computation ...
    _cache[(arg1, arg2)] = result    # Store result in the cache
    return result

You could use a global variable containing a dictionary instead of the default value; it’s a matter of taste.
How can I pass optional or keyword parameters from one function to another?¶
Collect the arguments using the * and ** specifiers in the function’s parameter list; this gives you the positional arguments as a tuple and the keyword arguments as a dictionary. You can then pass these arguments when calling another function by using * and **:

def f(x, *args, **kwargs):
    ...
    kwargs['width'] = '14.3c'
    ...
    g(x, *args, **kwargs)
What is the difference between arguments and parameters?¶
Parameters are defined by the names that appear in a function definition, whereas arguments are the values actually passed to a function when calling it. Parameters define what kind of arguments a function can accept. For example, given the function definition:

def func(foo, bar=None, **kwargs):
    pass

foo, bar and kwargs are parameters of func. However, when calling func, for example:

func(42, bar=314, extra=somevar)

the values 42, 314, and somevar are arguments.
Why did changing list ‘y’ also change list ‘x’?¶
If you wrote code like:
>>> x = []
>>> y = x
>>> y.append(10)
>>> y
[10]
>>> x
[10]

you might be wondering why appending an element to y changed x too.
There are two factors that produce this result:
Variables are simply names that refer to objects. Doing y = x doesn’t create a copy of the list – it creates a new variable y that refers to the same object x refers to. This means that there is only one object (the list), and both x and y refer to it.
Lists are mutable, which means that you can change their content. After the call to append(), the content of the mutable object has changed from [] to [10]. Since both the variables refer to the same object, using either name accesses the modified value [10].
If we instead assign an immutable object to x:

>>> x = 5  # ints are immutable
>>> y = x
>>> x = x + 1  # 5 can't be mutated, we are creating a new object here
>>> x
6
>>> y
5
we can see that in this case x and y are not equal anymore. This is because integers are immutable, and when we do x = x + 1 we are not mutating the int 5 by incrementing its value; instead, we are creating a new object (the int 6) and assigning it to x (that is, changing which object x refers to). After this assignment we have two objects (the ints 6 and 5) and two variables that refer to them (x now refers to 6 but y still refers to 5).
Some operations (for example y.append(10) and y.sort()) mutate the object, whereas superficially similar operations (for example y = y + [10] and sorted(y)) create a new object. In general in Python (and in all cases in the standard library) a method that mutates an object will return None to help avoid getting the two types of operations confused. So if you mistakenly write y.sort() thinking it will give you a sorted copy of y, you’ll instead end up with None, which will likely cause your program to generate an easily diagnosed error.
However, there is one class of operations where the same operation sometimes has different behaviors with different types: the augmented assignment operators. For example, += mutates lists but not tuples or ints (a_list += [1, 2, 3] is equivalent to a_list.extend([1, 2, 3]) and mutates a_list, whereas some_tuple += (1, 2, 3) and some_int += 1 create new objects).
In other words:
If we have a mutable object (list, dict, set, etc.), we can use some specific operations to mutate it and all the variables that refer to it will see the change.
If we have an immutable object (str, int, tuple, etc.), all the variables that refer to it will always see the same value, but operations that transform that value into a new value always return a new object.
If you want to know if two variables refer to the same object or not, you can use the is operator, or the built-in function id().
How do I write a function with output parameters (call by reference)?¶
Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there’s no alias between an argument name in the caller and callee, and so no call-by-reference per se. You can achieve the desired effect in a number of ways.
By returning a tuple of the results:
>>> def func1(a, b):
...     a = 'new-value'        # a and b are local names
...     b = b + 1              # assigned to new objects
...     return a, b            # return new values
...
>>> x, y = 'old-value', 99
>>> func1(x, y)
('new-value', 100)
This is almost always the clearest solution.
By using global variables. This isn’t thread-safe, and is not recommended.
By passing a mutable (changeable in-place) object:
>>> def func2(a):
...     a[0] = 'new-value'     # 'a' references a mutable list
...     a[1] = a[1] + 1        # changes a shared object
...
>>> args = ['old-value', 99]
>>> func2(args)
>>> args
['new-value', 100]
By passing in a dictionary that gets mutated:
>>> def func3(args):
...     args['a'] = 'new-value'      # args is a mutable dictionary
...     args['b'] = args['b'] + 1    # change it in-place
...
>>> args = {'a': 'old-value', 'b': 99}
>>> func3(args)
>>> args
{'a': 'new-value', 'b': 100}
Or bundle up values in a class instance:
>>> class Namespace:
...     def __init__(self, /, **args):
...         for key, value in args.items():
...             setattr(self, key, value)
...
>>> def func4(args):
...     args.a = 'new-value'        # args is a mutable Namespace
...     args.b = args.b + 1         # change object in-place
...
>>> args = Namespace(a='old-value', b=99)
>>> func4(args)
>>> vars(args)
{'a': 'new-value', 'b': 100}
There’s almost never a good reason to get this complicated.
Your best choice is to return a tuple containing the multiple results.
How do you make a higher order function in Python?¶
You have two choices: you can use nested scopes or you can use callable objects. For example, suppose you wanted to define linear(a, b) which returns a function f(x) that computes the value a*x+b. Using nested scopes:

def linear(a, b):
    def result(x):
        return a * x + b
    return result
Or using a callable object:
class linear:

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __call__(self, x):
        return self.a * x + self.b

In both cases, taxes = linear(0.3, 2) gives a callable object where taxes(10e6) == 0.3 * 10e6 + 2.
The callable object approach has the disadvantage that it is a bit slower and results in slightly longer code. However, note that a collection of callables can share their signature via inheritance:

class exponential(linear):
    # __init__ inherited
    def __call__(self, x):
        return self.a * (x ** self.b)
Objects can encapsulate state for several methods:

class counter:

    value = 0

    def set(self, x):
        self.value = x

    def up(self):
        self.value = self.value + 1

    def down(self):
        self.value = self.value - 1

count = counter()
inc, dec, reset = count.up, count.down, count.set

Here inc(), dec() and reset() act like functions which share the same counting variable.
How do I copy an object in Python?¶
In general, try copy.copy() or copy.deepcopy() for the general case. Not all objects can be copied, but most can.
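A short illustration of the difference between the two, using a nested list:

>>> import copy
>>> original = [[1, 2], [3, 4]]
>>> shallow = copy.copy(original)      # new outer list, same inner lists
>>> deep = copy.deepcopy(original)     # new outer list and new inner lists
>>> original[0].append(99)
>>> shallow[0]
[1, 2, 99]
>>> deep[0]
[1, 2]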
Some objects can be copied more easily. Dictionaries have a copy() method:

newdict = olddict.copy()

Sequences can be copied by slicing:

new_l = l[:]
How can I find the methods or attributes of an object?¶
For an instance x of a user-defined class, dir(x) returns an alphabetized list of the names containing the instance attributes and methods and attributes defined by its class.
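For example, with a small throwaway class (filtering out the underscore names that dir() also reports):

>>> class Widget:
...     def __init__(self):
...         self.size = 3
...     def resize(self, n):
...         self.size = n
...
>>> w = Widget()
>>> [name for name in dir(w) if not name.startswith('_')]
['resize', 'size']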
How can my code discover the name of an object?¶
Generally speaking, it can’t, because objects don’t really have names. Essentially, assignment always binds a name to a value; the same is true of def and class statements, but in that case the value is a callable. Consider the following code:

>>> class A:
...     pass
...
>>> B = A
>>> a = B()
>>> b = a
>>> print(b)
<__main__.A object at 0x16D07CC>
>>> print(a)
<__main__.A object at 0x16D07CC>
Arguably the class has a name: even though it is bound to two names and invoked through the name B the created instance is still reported as an instance of class A. However, it is impossible to say whether the instance’s name is a or b, since both names are bound to the same value.
Generally speaking it should not be necessary for your code to “know the names” of particular values. Unless you are deliberately writing introspective programs, this is usually an indication that a change of approach might be beneficial.
In comp.lang.python, Fredrik Lundh once gave an excellent analogy in answer to this question:
The same way as you get the name of that cat you found on your porch: the cat (object) itself cannot tell you its name, and it doesn’t really care – so the only way to find out what it’s called is to ask all your neighbours (namespaces) if it’s their cat (object)…
….and don’t be surprised if you’ll find that it’s known by many names, or no name at all!
What’s up with the comma operator’s precedence?¶
Comma is not an operator in Python. Consider this session:
>>>"a"in"b","a"(False, 'a')
Since the comma is not an operator, but a separator between expressions theabove is evaluated as if you had entered:
("a"in"b"),"a"
not:
"a"in("b","a")
The same is true of the various assignment operators (=
,+=
etc). Theyare not truly operators but syntactic delimiters in assignment statements.
Is there an equivalent of C’s “?:” ternary operator?¶
Yes, there is. The syntax is as follows:
[on_true] if [expression] else [on_false]

x, y = 50, 25
small = x if x < y else y

Before this syntax was introduced in Python 2.5, a common idiom was to use logical operators:

[expression] and [on_true] or [on_false]

However, this idiom is unsafe, as it can give wrong results when on_true has a false boolean value. Therefore, it is always better to use the ... if ... else ... form.
Is it possible to write obfuscated one-liners in Python?¶
Yes. Usually this is done by nesting lambda within lambda. See the following three examples, slightly adapted from Ulf Bartelt:

from functools import reduce

# Primes < 1000
print(list(filter(None, map(lambda y: y * reduce(lambda x, y: x * y != 0,
    map(lambda x, y=y: y % x, range(2, int(pow(y, 0.5) + 1))), 1), range(2, 1000)))))

# First 10 Fibonacci numbers
print(list(map(lambda x, f=lambda x, f: (f(x - 1, f) + f(x - 2, f)) if x > 1 else 1:
    f(x, f), range(10))))

# Mandelbrot set
print((lambda Ru, Ro, Iu, Io, IM, Sx, Sy: reduce(lambda x, y: x + '\n' + y, map(lambda y,
    Iu=Iu, Io=Io, Ru=Ru, Ro=Ro, Sy=Sy, L=lambda yc, Iu=Iu, Io=Io, Ru=Ru, Ro=Ro, i=IM,
    Sx=Sx, Sy=Sy: reduce(lambda x, y: x + y, map(lambda x, xc=Ru, yc=yc, Ru=Ru, Ro=Ro,
    i=i, Sx=Sx, F=lambda xc, yc, x, y, k, f=lambda xc, yc, x, y, k, f: (k <= 0) or (x * x + y * y >= 4.0)
    or 1 + f(xc, yc, x * x - y * y + xc, 2.0 * x * y + yc, k - 1, f): f(xc, yc, x, y, k, f): chr(
    64 + F(Ru + x * (Ro - Ru) / Sx, yc, 0, 0, i)), range(Sx))): L(Iu + y * (Io - Iu) / Sy), range(Sy
    ))))(-2.1, 0.7, -1.2, 1.2, 30, 80, 24))
#        \___ ___/      \___ ___/      |    |   |__ lines on screen
#            V              V          |    |______ columns on screen
#            |              |          |___________ maximum of "iterations"
#            |              |______________________ range on y axis
#            |_____________________________________ range on x axis
Don’t try this at home, kids!
What does the slash (/) in the parameter list of a function mean?¶
A slash in the argument list of a function denotes that the parameters prior to it are positional-only. Positional-only parameters are the ones without an externally usable name. Upon calling a function that accepts positional-only parameters, arguments are mapped to parameters based solely on their position. For example, divmod() is a function that accepts positional-only parameters. Its documentation looks like this:

>>> help(divmod)
Help on built-in function divmod in module builtins:

divmod(x, y, /)
    Return the tuple (x//y, x%y).  Invariant: div*y + mod == x.

The slash at the end of the parameter list means that both parameters are positional-only. Thus, calling divmod() with keyword arguments would lead to an error:

>>> divmod(x=3, y=4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: divmod() takes no keyword arguments
Numbers and strings¶
How do I specify hexadecimal and octal integers?¶
To specify an octal digit, precede the octal value with a zero, and then a lower or uppercase “o”. For example, to set the variable “a” to the octal value “10” (8 in decimal), type:

>>> a = 0o10
>>> a
8

Hexadecimal is just as easy. Simply precede the hexadecimal number with a zero, and then a lower or uppercase “x”. Hexadecimal digits can be specified in lower or uppercase. For example, in the Python interpreter:

>>> a = 0xa5
>>> a
165
>>> b = 0XB2
>>> b
178
Why does -22 // 10 return -3?¶
It’s primarily driven by the desire that i % j have the same sign as j. If you want that, and also want:

i == (i // j) * j + (i % j)

then integer division has to return the floor. C also requires that identity to hold, and then compilers that truncate i // j need to make i % j have the same sign as i.
There are few real use cases for i % j when j is negative. When j is positive, there are many, and in virtually all of them it’s more useful for i % j to be >= 0. If the clock says 10 now, what did it say 200 hours ago? -190 % 12 == 2 is useful; -190 % 12 == -10 is a bug waiting to bite.
How do I get int literal attribute instead of SyntaxError?¶
Trying to lookup an int literal attribute in the normal manner gives a SyntaxError because the period is seen as a decimal point:

>>> 1.__class__
  File "<stdin>", line 1
    1.__class__
     ^
SyntaxError: invalid decimal literal

The solution is to separate the literal from the period with either a space or parentheses.

>>> 1 .__class__
<class 'int'>
>>> (1).__class__
<class 'int'>
How do I convert a string to a number?¶
For integers, use the built-in int() type constructor, e.g. int('144') == 144. Similarly, float() converts to a floating-point number, e.g. float('144') == 144.0.
By default, these interpret the number as decimal, so that int('0144') == 144 holds true, and int('0x144') raises ValueError. int(string, base) takes the base to convert from as a second optional argument, so int('0x144', 16) == 324. If the base is specified as 0, the number is interpreted using Python’s rules: a leading ‘0o’ indicates octal, and ‘0x’ indicates a hex number.
Do not use the built-in function eval() if all you need is to convert strings to numbers. eval() will be significantly slower and it presents a security risk: someone could pass you a Python expression that might have unwanted side effects. For example, someone could pass __import__('os').system("rm -rf $HOME") which would erase your home directory.
eval() also has the effect of interpreting numbers as Python expressions, so that e.g. eval('09') gives a syntax error because Python does not allow leading ‘0’ in a decimal number (except ‘0’).
How do I convert a number to a string?¶
To convert, e.g., the number 144 to the string '144', use the built-in type constructor str(). If you want a hexadecimal or octal representation, use the built-in functions hex() or oct(). For fancy formatting, see the f-strings and Format String Syntax sections, e.g. "{:04d}".format(144) yields '0144' and "{:.3f}".format(1.0/3.0) yields '0.333'.
How do I modify a string in place?¶
You can’t, because strings are immutable. In most situations, you should simply construct a new string from the various parts you want to assemble it from. However, if you need an object with the ability to modify in-place unicode data, try using an io.StringIO object or the array module:

>>> import io
>>> s = "Hello, world"
>>> sio = io.StringIO(s)
>>> sio.getvalue()
'Hello, world'
>>> sio.seek(7)
7
>>> sio.write("there!")
6
>>> sio.getvalue()
'Hello, there!'

>>> import array
>>> a = array.array('w', s)
>>> print(a)
array('w', 'Hello, world')
>>> a[0] = 'y'
>>> print(a)
array('w', 'yello, world')
>>> a.tounicode()
'yello, world'
How do I use strings to call functions/methods?¶
There are various techniques.
The best is to use a dictionary that maps strings to functions. The primary advantage of this technique is that the strings do not need to match the names of the functions. This is also the primary technique used to emulate a case construct:

def a():
    pass

def b():
    pass

dispatch = {'go': a, 'stop': b}  # Note lack of parens for funcs

dispatch[get_input()]()  # Note trailing parens to call function
Use the built-in function getattr():

import foo
getattr(foo, 'bar')()
Note that getattr() works on any object, including classes, class instances, modules, and so on.
This is used in several places in the standard library, like this:

class Foo:
    def do_foo(self):
        ...

    def do_bar(self):
        ...

f = getattr(foo_instance, 'do_' + opname)
f()
Use locals() to resolve the function name:

def myFunc():
    print("hello")

fname = "myFunc"

f = locals()[fname]
f()
Is there an equivalent to Perl’s chomp() for removing trailing newlines from strings?¶
You can use S.rstrip("\r\n") to remove all occurrences of any line terminator from the end of the string S without removing other trailing whitespace. If the string S represents more than one line, with several empty lines at the end, the line terminators for all the blank lines will be removed:

>>> lines = ("line 1 \r\n"
...          "\r\n"
...          "\r\n")
>>> lines.rstrip("\n\r")
'line 1 '
Since this is typically only desired when reading text one line at a time, using S.rstrip() this way works well.
Is there a scanf() or sscanf() equivalent?¶
Not as such.
For simple input parsing, the easiest approach is usually to split the line into whitespace-delimited words using the split() method of string objects and then convert decimal strings to numeric values using int() or float(). split() supports an optional “sep” parameter which is useful if the line uses something other than whitespace as a separator.
For more complicated input parsing, regular expressions are more powerful than C’s sscanf and better suited for the task.
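As a sketch, parsing a line such as "eggs 4 2.5" into a name, an int, and a float (the line format here is invented for illustration):

>>> line = "eggs 4 2.5"
>>> name, count, price = line.split()
>>> name, int(count), float(price)
('eggs', 4, 2.5)
>>> import re
>>> m = re.match(r"(\w+)\s+(\d+)\s+([\d.]+)", line)
>>> m.group(1), int(m.group(2)), float(m.group(3))
('eggs', 4, 2.5)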
What does UnicodeDecodeError or UnicodeEncodeError error mean?¶
See the Unicode HOWTO.
Can I end a raw string with an odd number of backslashes?¶
A raw string ending with an odd number of backslashes will escape the string’s quote:
>>> r'C:\this\will\not\work\'
  File "<stdin>", line 1
    r'C:\this\will\not\work\'
                             ^
SyntaxError: unterminated string literal (detected at line 1)

There are several workarounds for this. One is to use regular strings and double the backslashes:

>>> 'C:\\this\\will\\work\\'
'C:\\this\\will\\work\\'

Another is to concatenate a regular string containing an escaped backslash to the raw string:

>>> r'C:\this\will\work' '\\'
'C:\\this\\will\\work\\'

It is also possible to use os.path.join() to append a backslash on Windows:

>>> os.path.join(r'C:\this\will\work', '')
'C:\\this\\will\\work\\'

Note that while a backslash will “escape” a quote for the purposes of determining where the raw string ends, no escaping occurs when interpreting the value of the raw string. That is, the backslash remains present in the value of the raw string:

>>> r'backslash\'preserved'
"backslash\\'preserved"

Also see the specification in the language reference.
Performance¶
My program is too slow. How do I speed it up?¶
That’s a tough one, in general. First, here is a list of things to remember before diving further:
Performance characteristics vary across Python implementations. This FAQ focuses on CPython.
Behaviour can vary across operating systems, especially when talking about I/O or multi-threading.
You should always find the hot spots in your program before attempting to optimize any code (see the profile module).
Writing benchmark scripts will allow you to iterate quickly when searching for improvements (see the timeit module).
It is highly recommended to have good code coverage (through unit testing or any other technique) before potentially introducing regressions hidden in sophisticated optimizations.
That being said, there are many tricks to speed up Python code. Here are some general principles which go a long way towards reaching acceptable performance levels:
Making your algorithms faster (or changing to faster ones) can yield much larger benefits than trying to sprinkle micro-optimization tricks all over your code.
Use the right data structures. Study documentation for the Built-in Types and the collections module.
When the standard library provides a primitive for doing something, it is likely (although not guaranteed) to be faster than any alternative you may come up with. This is doubly true for primitives written in C, such as builtins and some extension types. For example, be sure to use either the list.sort() built-in method or the related sorted() function to do sorting (and see the Sorting Techniques for examples of moderately advanced usage).
Abstractions tend to create indirections and force the interpreter to work more. If the levels of indirection outweigh the amount of useful work done, your program will be slower. You should avoid excessive abstraction, especially under the form of tiny functions or methods (which are also often detrimental to readability).
If you have reached the limit of what pure Python can allow, there are tools to take you further. For example, Cython can compile a slightly modified version of Python code into a C extension, and can be used on many different platforms. Cython can take advantage of compilation (and optional type annotations) to make your code significantly faster than when interpreted. If you are confident in your C programming skills, you can also write a C extension module yourself.
See also
The wiki page devoted to performance tips.
What is the most efficient way to concatenate many strings together?¶
str and bytes objects are immutable, therefore concatenating many strings together is inefficient as each concatenation creates a new object. In the general case, the total runtime cost is quadratic in the total string length.
To accumulate many str objects, the recommended idiom is to place them into a list and call str.join() at the end:

chunks = []
for s in my_strings:
    chunks.append(s)
result = ''.join(chunks)

(another reasonably efficient idiom is to use io.StringIO)
To accumulate many bytes objects, the recommended idiom is to extend a bytearray object using in-place concatenation (the += operator):

result = bytearray()
for b in my_bytes_objects:
    result += b
Sequences (Tuples/Lists)¶
How do I convert between tuples and lists?¶
The type constructor tuple(seq) converts any sequence (actually, any iterable) into a tuple with the same items in the same order.
For example, tuple([1, 2, 3]) yields (1, 2, 3) and tuple('abc') yields ('a', 'b', 'c'). If the argument is a tuple, it does not make a copy but returns the same object, so it is cheap to call tuple() when you aren’t sure that an object is already a tuple.
The type constructor list(seq) converts any sequence or iterable into a list with the same items in the same order. For example, list((1, 2, 3)) yields [1, 2, 3] and list('abc') yields ['a', 'b', 'c']. If the argument is a list, it makes a copy just like seq[:] would.
What’s a negative index?¶
Python sequences are indexed with positive numbers and negative numbers. For positive numbers, 0 is the first index, 1 is the second index, and so forth. For negative indices, -1 is the last index, -2 is the penultimate (next to last) index, and so forth. Think of seq[-n] as the same as seq[len(seq)-n].
Using negative indices can be very convenient. For example S[:-1] is all of the string except for its last character, which is useful for removing the trailing newline from a string.
How do I iterate over a sequence in reverse order?¶
Use the reversed() built-in function:

for x in reversed(sequence):
    ...  # do something with x ...
This won’t modify your original sequence; it returns an iterator that yields the items in reverse order.
How do you remove duplicates from a list?¶
See the Python Cookbook for a long discussion of many ways to do this:
If you don’t mind reordering the list, sort it and then scan from the end of the list, deleting duplicates as you go:

if mylist:
    mylist.sort()
    last = mylist[-1]
    for i in range(len(mylist)-2, -1, -1):
        if last == mylist[i]:
            del mylist[i]
        else:
            last = mylist[i]

If all elements of the list may be used as set keys (i.e. they are all hashable) this is often faster:

mylist = list(set(mylist))

This converts the list into a set, thereby removing duplicates, and then back into a list.
How do you remove multiple items from a list¶
As with removing duplicates, explicitly iterating in reverse with a delete condition is one possibility. However, it is easier and faster to use slice replacement with an implicit or explicit forward iteration. Here are three variations:

mylist[:] = filter(keep_function, mylist)
mylist[:] = (x for x in mylist if keep_condition)
mylist[:] = [x for x in mylist if keep_condition]
The list comprehension may be fastest.
How do you make an array in Python?¶
Use a list:
["this",1,"is","an","array"]
Lists are equivalent to C or Pascal arrays in their time complexity; the primarydifference is that a Python list can contain objects of many different types.
Thearray
module also provides methods for creating arrays of fixed typeswith compact representations, but they are slower to index than lists. Alsonote thatNumPyand other third party packages define array-like structures withvarious characteristics as well.
To get Lisp-style linked lists, you can emulate cons cells using tuples:

lisp_list = ("like",  ("this",  ("example", None) ) )

If mutability is desired, you could use lists instead of tuples. Here the analogue of a Lisp car is lisp_list[0] and the analogue of cdr is lisp_list[1]. Only do this if you’re sure you really need to, because it’s usually a lot slower than using Python lists.
How do I create a multidimensional list?¶
You probably tried to make a multidimensional array like this:
>>> A = [[None] * 2] * 3
This looks correct if you print it:
>>> A
[[None, None], [None, None], [None, None]]
But when you assign a value, it shows up in multiple places:
>>> A[0][0] = 5
>>> A
[[5, None], [5, None], [5, None]]

The reason is that replicating a list with * doesn’t create copies, it only creates references to the existing objects. The *3 creates a list containing 3 references to the same list of length two. Changes to one row will show in all rows, which is almost certainly not what you want.
The suggested approach is to create a list of the desired length first and then fill in each element with a newly created list:

A = [None] * 3
for i in range(3):
    A[i] = [None] * 2

This generates a list containing 3 different lists of length two. You can also use a list comprehension:

w, h = 2, 3
A = [[None] * w for i in range(h)]

Or, you can use an extension that provides a matrix datatype; NumPy is the best known.
How do I apply a method or function to a sequence of objects?¶
To call a method or function and accumulate the return values in a list, a list comprehension is an elegant solution:

result = [obj.method() for obj in mylist]

result = [function(obj) for obj in mylist]
To just run the method or function without saving the return values, a plain for loop will suffice:

for obj in mylist:
    obj.method()

for obj in mylist:
    function(obj)
Why does a_tuple[i] += ['item'] raise an exception when the addition works?¶
This is because of a combination of the fact that augmented assignment operators are assignment operators, and the difference between mutable and immutable objects in Python.
This discussion applies in general when augmented assignment operators are applied to elements of a tuple that point to mutable objects, but we’ll use a list and += as our exemplar.
If you wrote:
>>> a_tuple = (1, 2)
>>> a_tuple[0] += 1
Traceback (most recent call last):
   ...
TypeError: 'tuple' object does not support item assignment

The reason for the exception should be immediately clear: 1 is added to the object a_tuple[0] points to (1), producing the result object, 2, but when we attempt to assign the result of the computation, 2, to element 0 of the tuple, we get an error because we can’t change what an element of a tuple points to.
Under the covers, what this augmented assignment statement is doing is approximately this:

>>> result = a_tuple[0] + 1
>>> a_tuple[0] = result
Traceback (most recent call last):
   ...
TypeError: 'tuple' object does not support item assignment

It is the assignment part of the operation that produces the error, since a tuple is immutable.
When you write something like:
>>> a_tuple = (['foo'], 'bar')
>>> a_tuple[0] += ['item']
Traceback (most recent call last):
   ...
TypeError: 'tuple' object does not support item assignment

The exception is a bit more surprising, and even more surprising is the fact that even though there was an error, the append worked:

>>> a_tuple[0]
['foo', 'item']
To see why this happens, you need to know that (a) if an object implements an __iadd__() magic method, it gets called when the += augmented assignment is executed, and its return value is what gets used in the assignment statement; and (b) for lists, __iadd__() is equivalent to calling extend() on the list and returning the list. That’s why we say that for lists, += is a “shorthand” for list.extend():

>>> a_list = []
>>> a_list += [1]
>>> a_list
[1]
This is equivalent to:
>>> result = a_list.__iadd__([1])
>>> a_list = result

The object pointed to by a_list has been mutated, and the pointer to the mutated object is assigned back to a_list. The end result of the assignment is a no-op, since it is a pointer to the same object that a_list was previously pointing to, but the assignment still happens.
Thus, in our tuple example what is happening is equivalent to:
>>> result = a_tuple[0].__iadd__(['item'])
>>> a_tuple[0] = result
Traceback (most recent call last):
   ...
TypeError: 'tuple' object does not support item assignment

The __iadd__() succeeds, and thus the list is extended, but even though result points to the same object that a_tuple[0] already points to, that final assignment still results in an error, because tuples are immutable.
I want to do a complicated sort: can you do a Schwartzian Transform in Python?¶
The technique, attributed to Randal Schwartz of the Perl community, sorts the elements of a list by a metric which maps each element to its “sort value”. In Python, use the key argument for the list.sort() method:

Isorted = L[:]
Isorted.sort(key=lambda s: int(s[10:15]))
How can I sort one list by values from another list?¶
Merge them into an iterator of tuples, sort the resulting list, and then pick out the element you want.

>>> list1 = ["what", "I'm", "sorting", "by"]
>>> list2 = ["something", "else", "to", "sort"]
>>> pairs = zip(list1, list2)
>>> pairs = sorted(pairs)
>>> pairs
[("I'm", 'else'), ('by', 'sort'), ('sorting', 'to'), ('what', 'something')]
>>> result = [x[1] for x in pairs]
>>> result
['else', 'sort', 'to', 'something']
Objects¶
What is a class?¶
A class is the particular object type created by executing a class statement. Class objects are used as templates to create instance objects, which embody both the data (attributes) and code (methods) specific to a datatype.
A class can be based on one or more other classes, called its base class(es). It then inherits the attributes and methods of its base classes. This allows an object model to be successively refined by inheritance. You might have a generic Mailbox class that provides basic accessor methods for a mailbox, and subclasses such as MboxMailbox, MaildirMailbox, OutlookMailbox that handle various specific mailbox formats.
What is a method?¶
A method is a function on some object x that you normally call as x.name(arguments...). Methods are defined as functions inside the class definition:

class C:
    def meth(self, arg):
        return arg * 2 + self.attribute
What is self?¶
Self is merely a conventional name for the first argument of a method. A method defined as meth(self, a, b, c) should be called as x.meth(a, b, c) for some instance x of the class in which the definition occurs; the called method will think it is called as meth(x, a, b, c).
See also Why must ‘self’ be used explicitly in method definitions and calls?
How do I check if an object is an instance of a given class or of a subclass of it?¶
Use the built-in function isinstance(obj, cls). You can check if an object is an instance of any of a number of classes by providing a tuple instead of a single class, e.g. isinstance(obj, (class1, class2, ...)), and can also check whether an object is one of Python’s built-in types, e.g. isinstance(obj, str) or isinstance(obj, (int, float, complex)).
Note that isinstance() also checks for virtual inheritance from an abstract base class. So, the test will return True for a registered class even if it hasn’t directly or indirectly inherited from it. To test for “true inheritance”, scan the MRO of the class:

from collections.abc import Mapping

class P:
    pass

class C(P):
    pass

Mapping.register(P)

>>> c = C()
>>> isinstance(c, C)        # direct
True
>>> isinstance(c, P)        # indirect
True
>>> isinstance(c, Mapping)  # virtual
True

# Actual inheritance chain
>>> type(c).__mro__
(<class 'C'>, <class 'P'>, <class 'object'>)

# Test for "true inheritance"
>>> Mapping in type(c).__mro__
False
Note that most programs do not use isinstance() on user-defined classes very often. If you are developing the classes yourself, a more proper object-oriented style is to define methods on the classes that encapsulate a particular behaviour, instead of checking the object’s class and doing a different thing based on what class it is. For example, if you have a function that does something:

def search(obj):
    if isinstance(obj, Mailbox):
        ...  # code to search a mailbox
    elif isinstance(obj, Document):
        ...  # code to search a document
    elif ...
A better approach is to define a search() method on all the classes and just call it:

class Mailbox:
    def search(self):
        ...  # code to search a mailbox

class Document:
    def search(self):
        ...  # code to search a document

obj.search()
What is delegation?¶
Delegation is an object oriented technique (also called a design pattern). Let’s say you have an object x and want to change the behaviour of just one of its methods. You can create a new class that provides a new implementation of the method you’re interested in changing and delegates all other methods to the corresponding method of x.
Python programmers can easily implement delegation. For example, the following class implements a class that behaves like a file but converts all written data to uppercase:

class UpperOut:

    def __init__(self, outfile):
        self._outfile = outfile

    def write(self, s):
        self._outfile.write(s.upper())

    def __getattr__(self, name):
        return getattr(self._outfile, name)
Here the UpperOut class redefines the write() method to convert the argument string to uppercase before calling the underlying self._outfile.write() method. All other methods are delegated to the underlying self._outfile object. The delegation is accomplished via the __getattr__() method; consult the language reference for more information about controlling attribute access.
Note that for more general cases delegation can get trickier. When attributes must be set as well as retrieved, the class must define a __setattr__() method too, and it must do so carefully. The basic implementation of __setattr__() is roughly equivalent to the following:

class X:
    ...
    def __setattr__(self, name, value):
        self.__dict__[name] = value
    ...
Many __setattr__() implementations call object.__setattr__() to set an attribute on self without causing infinite recursion:

class X:
    def __setattr__(self, name, value):
        # Custom logic here...
        object.__setattr__(self, name, value)
Alternatively, it is possible to set attributes by inserting entries into self.__dict__ directly.
How do I call a method defined in a base class from a derived class that extends it?¶
Use the built-in super() function:

class Derived(Base):
    def meth(self):
        super().meth()  # calls Base.meth
In the example, super() will automatically determine the instance from which it was called (the self value), look up the method resolution order (MRO) with type(self).__mro__, and return the next in line after Derived in the MRO: Base.
How can I organize my code to make it easier to change the base class?¶
You could assign the base class to an alias and derive from the alias. Then all you have to change is the value assigned to the alias. Incidentally, this trick is also handy if you want to decide dynamically (e.g. depending on availability of resources) which base class to use. Example:

class Base:
    ...

BaseAlias = Base

class Derived(BaseAlias):
    ...
How do I create static class data and static class methods?¶
Both static data and static methods (in the sense of C++ or Java) are supported in Python.
For static data, simply define a class attribute. To assign a new value to the attribute, you have to explicitly use the class name in the assignment:

class C:
    count = 0   # number of times C.__init__ called

    def __init__(self):
        C.count = C.count + 1

    def getcount(self):
        return C.count  # or return self.count
c.count also refers to C.count for any c such that isinstance(c, C) holds, unless overridden by c itself or by some class on the base-class search path from c.__class__ back to C.
Caution: within a method of C, an assignment like self.count = 42 creates a new and unrelated instance named “count” in self’s own dict. Rebinding of a class-static data name must always specify the class whether inside a method or not:

C.count = 314
Static methods are possible:
class C:
    @staticmethod
    def static(arg1, arg2, arg3):
        # No 'self' parameter!
        ...

However, a far more straightforward way to get the effect of a static method is via a simple module-level function:

def getcount():
    return C.count

If your code is structured so as to define one class (or tightly related class hierarchy) per module, this supplies the desired encapsulation.
How can I overload constructors (or methods) in Python?¶
This answer actually applies to all methods, but the question usually comes up first in the context of constructors.
In C++ you’d write
class C {
    C() { cout << "No arguments\n"; }
    C(int i) { cout << "Argument is " << i << "\n"; }
}

In Python you have to write a single constructor that catches all cases using default arguments. For example:

class C:
    def __init__(self, i=None):
        if i is None:
            print("No arguments")
        else:
            print("Argument is", i)
This is not entirely equivalent, but close enough in practice.
You could also try a variable-length argument list, e.g.
def __init__(self, *args):
    ...
The same approach works for all method definitions.
I try to use __spam and I get an error about _SomeClassName__spam.¶
Variable names with double leading underscores are “mangled” to provide a simple but effective way to define class private variables. Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with any leading underscores stripped.
The identifier can be used unchanged within the class, but to access it outside the class, the mangled name must be used:

class A:
    def __one(self):
        return 1

    def two(self):
        return 2 * self.__one()

class B(A):
    def three(self):
        return 3 * self._A__one()

four = 4 * A()._A__one()

In particular, this does not guarantee privacy since an outside user can still deliberately access the private attribute; many Python programmers never bother to use private variable names at all.
See also
The private name mangling specifications for details and special cases.
My class defines __del__ but it is not called when I delete the object.¶
There are several possible reasons for this.
The del statement does not necessarily call __del__() – it simply decrements the object’s reference count, and if this reaches zero __del__() is called.
If your data structures contain circular links (e.g. a tree where each child has a parent reference and each parent has a list of children) the reference counts will never go back to zero. Once in a while Python runs an algorithm to detect such cycles, but the garbage collector might run some time after the last reference to your data structure vanishes, so your __del__() method may be called at an inconvenient and random time. This is inconvenient if you’re trying to reproduce a problem. Worse, the order in which objects’ __del__() methods are executed is arbitrary. You can run gc.collect() to force a collection, but there are pathological cases where objects will never be collected.
Despite the cycle collector, it’s still a good idea to define an explicit close() method on objects to be called whenever you’re done with them. The close() method can then remove attributes that refer to subobjects. Don’t call __del__() directly – __del__() should call close() and close() should make sure that it can be called more than once for the same object.
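A minimal sketch of that pattern (the handle attribute here is a hypothetical subobject):

class Resource:
    def __init__(self, handle):
        self.handle = handle        # hypothetical reference to a subobject

    def close(self):
        # Safe to call more than once
        if getattr(self, 'handle', None) is not None:
            self.handle = None      # drop the reference to the subobject

    def __del__(self):
        self.close()                # __del__ is only a safety net

Callers should still call close() explicitly (or wrap the object with contextlib.closing() in a with statement); relying on __del__() alone leaves the cleanup timing unpredictable.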
Another way to avoid cyclical references is to use the weakref module, which allows you to point to objects without incrementing their reference count. Tree data structures, for instance, should use weak references for their parent and sibling references (if they need them!).
Finally, if your __del__() method raises an exception, a warning message is printed to sys.stderr.
How do I get a list of all instances of a given class?¶
Python does not keep track of all instances of a class (or of a built-in type). You can program the class’s constructor to keep track of all instances by keeping a list of weak references to each instance.
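A minimal sketch of that idea, using a weakref.WeakSet so the registry itself does not keep instances alive (the class and method names here are illustrative):

import weakref

class TrackedWidget:
    _instances = weakref.WeakSet()   # weak references only; instances can still be collected

    def __init__(self, name):
        self.name = name
        TrackedWidget._instances.add(self)

    @classmethod
    def live_instances(cls):
        return list(cls._instances)  # snapshot of the instances that are still alive

a = TrackedWidget("a")
b = TrackedWidget("b")
print(len(TrackedWidget.live_instances()))   # 2
del b                                        # once collected, only 'a' remains in the registry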
Why does the result of id() appear to be not unique?¶
The id() builtin returns an integer that is guaranteed to be unique during the lifetime of the object. Since in CPython, this is the object’s memory address, it happens frequently that after an object is deleted from memory, the next freshly created object is allocated at the same position in memory. This is illustrated by this example:

>>> id(1000)
13901272
>>> id(2000)
13901272
The two ids belong to different integer objects that are created before, and deleted immediately after execution of the id() call. To be sure that objects whose id you want to examine are still alive, create another reference to the object:

>>> a = 1000; b = 2000
>>> id(a)
13901272
>>> id(b)
13891296
When can I rely on identity tests with the is operator?¶
The is operator tests for object identity. The test a is b is equivalent to id(a) == id(b).
The most important property of an identity test is that an object is always identical to itself, a is a always returns True. Identity tests are usually faster than equality tests. And unlike equality tests, identity tests are guaranteed to return a boolean True or False.
However, identity tests can only be substituted for equality tests when object identity is assured. Generally, there are three circumstances where identity is guaranteed:
Assignments create new names but do not change object identity. After the assignment new = old, it is guaranteed that new is old.
Putting an object in a container that stores object references does not change object identity. After the list assignment s[0] = x, it is guaranteed that s[0] is x.
If an object is a singleton, it means that only one instance of that object can exist. After the assignments a = None and b = None, it is guaranteed that a is b because None is a singleton.
In most other circumstances, identity tests are inadvisable and equality tests are preferred. In particular, identity tests should not be used to check constants such as int and str which aren’t guaranteed to be singletons:

>>> a = 1000
>>> b = 500
>>> c = b + 500
>>> a is c
False

>>> a = 'Python'
>>> b = 'Py'
>>> c = b + 'thon'
>>> a is c
False
Likewise, new instances of mutable containers are never identical:
>>> a = []
>>> b = []
>>> a is b
False

In the standard library code, you will see several common patterns for correctly using identity tests:
As recommended by PEP 8, an identity test is the preferred way to check for None. This reads like plain English in code and avoids confusion with other objects that may have boolean values that evaluate to false.
Detecting optional arguments can be tricky when None is a valid input value. In those situations, you can create a singleton sentinel object guaranteed to be distinct from other objects. For example, here is how to implement a method that behaves like dict.pop():

_sentinel = object()

def pop(self, key, default=_sentinel):
    if key in self:
        value = self[key]
        del self[key]
        return value
    if default is _sentinel:
        raise KeyError(key)
    return default
Container implementations sometimes need to augment equality tests with identity tests. This prevents the code from being confused by objects such as float('NaN') that are not equal to themselves.
For example, here is the implementation of collections.abc.Sequence.__contains__():

def __contains__(self, value):
    for v in self:
        if v is value or v == value:
            return True
    return False
How can a subclass control what data is stored in an immutable instance?¶
When subclassing an immutable type, override the __new__() method instead of the __init__() method. The latter only runs after an instance is created, which is too late to alter data in an immutable instance.
All of these immutable classes have a different signature than their parent class:

from datetime import date

class FirstOfMonthDate(date):
    "Always choose the first day of the month"
    def __new__(cls, year, month, day):
        return super().__new__(cls, year, month, 1)

class NamedInt(int):
    "Allow text names for some numbers"
    xlat = {'zero': 0, 'one': 1, 'ten': 10}
    def __new__(cls, value):
        value = cls.xlat.get(value, value)
        return super().__new__(cls, value)

class TitleStr(str):
    "Convert str to name suitable for a URL path"
    def __new__(cls, s):
        s = s.lower().replace(' ', '-')
        s = ''.join([c for c in s if c.isalnum() or c == '-'])
        return super().__new__(cls, s)
The classes can be used like this:
>>> FirstOfMonthDate(2012, 2, 14)
FirstOfMonthDate(2012, 2, 1)
>>> NamedInt('ten')
10
>>> NamedInt(20)
20
>>> TitleStr('Blog: Why Python Rocks')
'blog-why-python-rocks'
How do I cache method calls?¶
The two principal tools for caching methods are functools.cached_property() and functools.lru_cache(). The former stores results at the instance level and the latter at the class level.
The cached_property approach only works with methods that do not take any arguments. It does not create a reference to the instance. The cached method result will be kept only as long as the instance is alive.
The advantage is that when an instance is no longer used, the cached method result will be released right away. The disadvantage is that if instances accumulate, so too will the accumulated method results. They can grow without bound.
The lru_cache approach works with methods that have hashable arguments. It creates a reference to the instance unless special efforts are made to pass in weak references.
The advantage of the least recently used algorithm is that the cache is bounded by the specified maxsize. The disadvantage is that instances are kept alive until they age out of the cache or until the cache is cleared.
This example shows the various techniques:
from functools import cached_property, lru_cache

class Weather:
    "Lookup weather information on a government website"

    def __init__(self, station_id):
        self._station_id = station_id
        # The _station_id is private and immutable

    def current_temperature(self):
        "Latest hourly observation"
        # Do not cache this because old results
        # can be out of date.

    @cached_property
    def location(self):
        "Return the longitude/latitude coordinates of the station"
        # Result only depends on the station_id

    @lru_cache(maxsize=20)
    def historic_rainfall(self, date, units='mm'):
        "Rainfall on a given date"
        # Depends on the station_id, date, and units.
The above example assumes that the station_id never changes. If the relevant instance attributes are mutable, the cached_property approach can’t be made to work because it cannot detect changes to the attributes.
To make the lru_cache approach work when the station_id is mutable, the class needs to define the __eq__() and __hash__() methods so that the cache can detect relevant attribute updates:
from functools import lru_cache

class Weather:
    "Example with a mutable station identifier"

    def __init__(self, station_id):
        self.station_id = station_id

    def change_station(self, station_id):
        self.station_id = station_id

    def __eq__(self, other):
        return self.station_id == other.station_id

    def __hash__(self):
        return hash(self.station_id)

    @lru_cache(maxsize=20)
    def historic_rainfall(self, date, units='cm'):
        'Rainfall on a given date'
        # Depends on the station_id, date, and units.
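A brief usage sketch (the station identifiers and date below are invented, and the methods above are only stubs, so the calls merely illustrate how the cache keys behave):

w = Weather('station-a')            # hypothetical station identifier
w.historic_rainfall('2021-10-01')   # result cached under the current hash/eq key
w.change_station('station-b')       # the object now hashes and compares differently
w.historic_rainfall('2021-10-01')   # cache miss, so stale results for the old station are not reused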
Modules¶
How do I create a .pyc file?¶
When a module is imported for the first time (or when the source file has changed since the current compiled file was created) a .pyc file containing the compiled code should be created in a __pycache__ subdirectory of the directory containing the .py file. The .pyc file will have a filename that starts with the same name as the .py file, and ends with .pyc, with a middle component that depends on the particular python binary that created it. (See PEP 3147 for details.)
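If you want to know where the compiled file for a given source file would be placed, importlib.util.cache_from_source() can compute the path for you; a minimal sketch (the tag shown in the example output depends on the interpreter that created it):

>>> import importlib.util
>>> importlib.util.cache_from_source('foo.py')   # exact tag varies by interpreter
'__pycache__/foo.cpython-312.pyc'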
One reason that a .pyc file may not be created is a permissions problem with the directory containing the source file, meaning that the __pycache__ subdirectory cannot be created. This can happen, for example, if you develop as one user but run as another, such as if you are testing with a web server.
Unless the PYTHONDONTWRITEBYTECODE environment variable is set, creation of a .pyc file is automatic if you’re importing a module and Python has the ability (permissions, free space, etc…) to create a __pycache__ subdirectory and write the compiled module to that subdirectory.
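The same switch is also exposed inside the interpreter as sys.dont_write_bytecode, so bytecode writing can be suppressed programmatically; a small sketch (the imported module name is hypothetical):

import sys

sys.dont_write_bytecode = True   # no .pyc files are written for imports after this point
import some_module               # hypothetical module; imported but not cached on disk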
Running Python on a top level script is not considered an import and no .pyc will be created. For example, if you have a top-level module foo.py that imports another module xyz.py, when you run foo (by typing python foo.py as a shell command), a .pyc will be created for xyz because xyz is imported, but no .pyc file will be created for foo since foo.py isn’t being imported.
If you need to create a .pyc file for foo – that is, to create a .pyc file for a module that is not imported – you can, using the py_compile and compileall modules.
The py_compile module can manually compile any module. One way is to use the compile() function in that module interactively:
>>> import py_compile
>>> py_compile.compile('foo.py')
This will write the .pyc to a __pycache__ subdirectory in the same location as foo.py (or you can override that with the optional parameter cfile).
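For instance, a sketch of using that optional parameter to direct the compiled file elsewhere (the output path here is purely illustrative):

>>> import py_compile
>>> py_compile.compile('foo.py', cfile='compiled/foo.pyc')   # hypothetical output location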
You can also automatically compile all files in a directory or directories using the compileall module. You can do it from the shell prompt by running compileall.py and providing the path of a directory containing Python files to compile:
python -m compileall .
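The same can be done from within a program; a minimal sketch using compileall.compile_dir() (the directory name is just an example):

import compileall

# Recursively byte-compile every .py file under the given directory tree
compileall.compile_dir('src', quiet=1)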
How do I find the current module name?¶
A module can find out its own module name by looking at the predefined global variable __name__. If this has the value '__main__', the program is running as a script. Many modules that are usually used by importing them also provide a command-line interface or a self-test, and only execute this code after checking __name__:
def main():
    print('Running test...')
    ...

if __name__ == '__main__':
    main()
How can I have modules that mutually import each other?¶
Suppose you have the following modules:
foo.py:
from bar import bar_var
foo_var = 1
bar.py:
from foo import foo_var
bar_var = 2
The problem is that the interpreter will perform the following steps:
1. main imports foo
2. Empty globals for foo are created
3. foo is compiled and starts executing
4. foo imports bar
5. Empty globals for bar are created
6. bar is compiled and starts executing
7. bar imports foo (which is a no-op since there already is a module named foo)
8. The import mechanism tries to read foo_var from foo globals, to set bar.foo_var = foo.foo_var
The last step fails, because Python isn’t done with interpreting foo yet and the global symbol dictionary for foo is still empty.
The same thing happens when you use import foo, and then try to access foo.foo_var in global code.
There are (at least) three possible workarounds for this problem.
Guido van Rossum recommends avoiding all uses of from <module> import ..., and placing all code inside functions. Initializations of global variables and class variables should use constants or built-in functions only. This means everything from an imported module is referenced as <module>.<name>.
Jim Roskind suggests performing steps in the following order in each module:
1. exports (globals, functions, and classes that don’t need imported base classes)
2. import statements
3. active code (including globals that are initialized from imported values).
Van Rossum doesn’t like this approach much because the imports appear in a strange place, but it does work.
Matthias Urlichs recommends restructuring your code so that the recursive import is not necessary in the first place.
These solutions are not mutually exclusive.
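As an illustration of the restructuring idea, here is a hedged sketch of how foo.py from the example above could defer its import into a function, so the circular import is resolved only when the value is actually needed (the helper name is invented for illustration):

# foo.py -- sketch: defer the import so that bar is fully initialized before use
foo_var = 1

def get_bar_var():
    import bar          # deferred import; no longer runs at module load time
    return bar.bar_var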
__import__(‘x.y.z’) returns <module ‘x’>; how do I get z?¶
Consider using the convenience function import_module() from importlib instead:
import importlib

z = importlib.import_module('x.y.z')
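If you do need to stay with __import__() itself, one common workaround (a sketch, not the only option) is to look the fully qualified submodule up in sys.modules after the import:

import sys

__import__('x.y.z')          # returns the top-level package 'x'
z = sys.modules['x.y.z']     # the submodule is registered here once imported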
When I edit an imported module and reimport it, the changes don’t show up. Why does this happen?¶
For reasons of efficiency as well as consistency, Python only reads the module file the first time a module is imported. If it didn’t, in a program consisting of many modules where each one imports the same basic module, the basic module would be parsed and re-parsed many times. To force re-reading of a changed module, do this:
import importlib
import modname
importlib.reload(modname)
Warning: this technique is not 100% fool-proof. In particular, modules containing statements like

from modname import some_objects

will continue to work with the old version of the imported objects. If the module contains class definitions, existing class instances will not be updated to use the new class definition. This can result in the following paradoxical behaviour:
>>> import importlib
>>> import cls
>>> c = cls.C()                # Create an instance of C
>>> importlib.reload(cls)
<module 'cls' from 'cls.py'>
>>> isinstance(c, cls.C)       # isinstance is false?!?
False
The nature of the problem is made clear if you print out the “identity” of the class objects:
>>> hex(id(c.__class__))
'0x7352a0'
>>> hex(id(cls.C))
'0x4198d0'
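One way to cope, sketched here rather than taken from the original answer, is to re-bind any names you had imported from the module after the reload and to re-create instances so they pick up the new class object:

import importlib
import cls

importlib.reload(cls)

# Names bound earlier with "from cls import C" still refer to the old class,
# so bind them again and build fresh instances from the reloaded module.
from cls import C
c = C()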