pythonprofilers/memory_profiler
Note: This package is no longer actively maintained. I won't be actively responding to issues.
This is a python module for monitoring memory consumption of a process as well as line-by-line analysis of memory consumption for python programs. It is a pure python module which depends on the psutil module.
Install via pip:
$ pip install -U memory_profiler
The package is also available on conda-forge.
To install from source, download the package, extract and type:
$ pip install .
Use mprof to generate a full memory usage report of your executable and to plot it.
```
mprof run executable
mprof plot
```
The plot would be something like this:
The line-by-line memory usage mode is used much in the same way as line_profiler: first decorate the function you would like to profile with @profile and then run the script with a special script (in this case with specific arguments to the Python interpreter).
In the following example, we create a simple function my_func that allocates lists a, b and then deletes b:
```python
@profile
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a

if __name__ == '__main__':
    my_func()
```
Execute the code passing the option -m memory_profiler to the python interpreter to load the memory_profiler module and print to stdout the line-by-line analysis. If the file name was example.py, this would result in:
$ python -m memory_profiler example.py
Output will follow:
```
Line #    Mem usage    Increment  Occurrences   Line Contents
============================================================
     3   38.816 MiB   38.816 MiB           1   @profile
     4                                         def my_func():
     5   46.492 MiB    7.676 MiB           1       a = [1] * (10 ** 6)
     6  199.117 MiB  152.625 MiB           1       b = [2] * (2 * 10 ** 7)
     7   46.629 MiB -152.488 MiB           1       del b
     8   46.629 MiB    0.000 MiB           1       return a
```
The first column represents the line number of the code that has been profiled, the second column (Mem usage) the memory usage of the Python interpreter after that line has been executed. The third column (Increment) represents the difference in memory of the current line with respect to the last one. The fourth column (Occurrences) shows the number of times that profiler has executed each line. The last column (Line Contents) prints the code that has been profiled.
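The Increment column is just the difference between two successive memory readings. A minimal, stdlib-only sketch of that idea uses the tracemalloc module (which memory_profiler also supports as a backend); the numbers it prints are illustrative, not the ones from the table above:

```python
import tracemalloc

# Measure allocated memory before and after a statement; the difference
# approximates what the "Increment" column reports for that line.
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
a = [1] * (10 ** 6)   # one million references (~8 MiB on 64-bit CPython)
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

increment_mib = (after - before) / 2 ** 20
print(f"increment: {increment_mib:.1f} MiB")
```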
A function decorator is also available. Use as follows:
```python
from memory_profiler import profile

@profile
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a
```
In this case the script can be run without specifying -m memory_profiler in the command line.
In the function decorator, you can specify the precision as an argument to the decorator function. Use as follows:
```python
from memory_profiler import profile

@profile(precision=4)
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a
```
If a python script with decorator @profile is called using -m memory_profiler in the command line, the precision parameter is ignored.
Sometimes it is useful to have full memory usage reports as a function of time (not line-by-line) of external processes (be it Python scripts or not). In this case the executable mprof might be useful. Use it like:
```
mprof run <executable>
mprof plot
```
The first line runs the executable and records memory usage over time, in a file written in the current directory. Once it's done, a graph can be obtained using the second line. The recorded file contains timestamps, which allows several profiles to be kept at the same time.
Help on each mprof subcommand can be obtained with the -h flag,e.g. mprof run -h.
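Conceptually, mprof run is just a sampling loop. A hedged, Unix-only sketch of that loop using the standard library's resource module (the real tool samples via psutil and writes the pairs to a data file in the current directory):

```python
import resource
import time

def peak_rss_mib():
    # ru_maxrss is the peak resident set size: KiB on Linux, bytes on
    # macOS. This sketch assumes Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

# Sample memory at a fixed interval, keeping (MiB, timestamp) pairs,
# much like the records mprof writes to its output file.
samples = []
deadline = time.monotonic() + 0.5
while time.monotonic() < deadline:
    samples.append((peak_rss_mib(), time.time()))
    time.sleep(0.1)

print(f"collected {len(samples)} samples, last = {samples[-1][0]:.1f} MiB")
```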
In the case of a Python script, using the previous command does not give you any information on which function is executed at a given time. Depending on the case, it can be difficult to identify the part of the code that is causing the highest memory usage.
Adding the profile decorator to a function (ensure no from memory_profiler import profile statement) and running the Python script with
mprof run --python python <script>
will record timestamps when entering/leaving the profiled function. Running
mprof plot
afterward will plot the result, making plots (using matplotlib) similar to these:
or, with mprof plot --flame (the function and timestamp names will appear on hover):
A discussion of these capabilities can be found here.
Warning
If your Python file imports the memory profiler (from memory_profiler import profile), these timestamps will not be recorded. Comment out the import, leave your functions decorated, and re-run.
The available commands for mprof are:
- mprof run: running an executable, recording memory usage
- mprof plot: plotting one of the recorded memory usage files (by default, the last one)
- mprof list: listing all recorded memory usage files in a user-friendly way
- mprof clean: removing all recorded memory usage files
- mprof rm: removing specific recorded memory usage files
In a multiprocessing context the main process will spawn child processes whose system resources are allocated separately from the parent process. This can lead to an inaccurate report of memory usage since by default only the parent process is being tracked. The mprof utility provides two mechanisms to track the usage of child processes: sum the memory of all children into the parent's usage, or track each child individually.
To create a report that combines memory usage of all the children and the parent, use the include-children flag in either the profile decorator or as a command line argument to mprof:
mprof run --include-children <script>
The second method tracks each child independently of the main process, serializing child rows by index to the output stream. Use the multiprocess flag and plot as follows:
```
mprof run --multiprocess <script>
mprof plot
```
This will create a plot using matplotlib similar to this:

You can combine both the include-children and multiprocess flags to show the total memory of the program as well as each child individually. If using the API directly, note that the return from memory_usage will include the child memory in a nested list along with the main process memory.
By default, the command line call is set as the graph title. If you wish to customize it, you can use the -t option to manually set the figure title.
mprof plot -t 'Recorded memory usage'
You can also hide the function timestamps using the -n flag, such as
mprof plot -n
Trend lines and their numeric slopes can be plotted using the -s flag, such as
mprof plot -s
The intended usage of the -s switch is to check the trend lines' numerical slope over a significant time period:
- >0: it might mean a memory leak.
- ~0: if 0 or near 0, the memory usage may be considered stable.
- <0: to be interpreted depending on the expected process memory usage patterns; it might also mean that the sampling period is too small.
The trend lines are for illustrative purposes and are plotted as (very) small dashed lines.
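For illustration, a trend line's slope over (timestamp, memory) samples can be computed with an ordinary least-squares fit; this is a sketch of that computation, not mprof's actual code:

```python
def slope(times, mems):
    # Ordinary least-squares slope of mems against times.
    n = len(times)
    mean_t = sum(times) / n
    mean_m = sum(mems) / n
    num = sum((t - mean_t) * (m - mean_m) for t, m in zip(times, mems))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Memory growing by 2 MiB per second gives a clearly positive slope,
# the ">0, might mean a memory leak" case.
print(slope([0.0, 1.0, 2.0, 3.0], [10.0, 12.0, 14.0, 16.0]))  # → 2.0
```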
It is possible to set breakpoints depending on the amount of memory used. That is, you can specify a threshold and as soon as the program uses more memory than what is specified in the threshold it will stop execution and run into the pdb debugger. To use it, you will have to decorate the function as done in the previous section with @profile and then run your script with the option -m memory_profiler --pdb-mmem=X, where X is a number representing the memory threshold in MB. For example:
$ python -m memory_profiler --pdb-mmem=100 my_script.py
will run my_script.py and step into the pdb debugger as soon as the code uses more than 100 MB in the decorated function.
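The mechanism behind --pdb-mmem amounts to a threshold check after each profiled line. In this hedged, stdlib-only sketch we record the event instead of calling pdb.set_trace() so it stays runnable, and we use tracemalloc rather than the profiler's real backend:

```python
import tracemalloc

THRESHOLD_MB = 5
tracemalloc.start()

tripped = False
data = [0] * (2 * 10 ** 6)   # allocate well over the 5 MB threshold
current, _ = tracemalloc.get_traced_memory()
if current / 10 ** 6 > THRESHOLD_MB:
    tripped = True           # the real profiler would call pdb.set_trace() here
tracemalloc.stop()

print("threshold exceeded:", tripped)
```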
memory_profiler exposes a number of functions to be used in third-party code.
memory_usage(proc=-1, interval=.1, timeout=None) returns the memory usage over a time interval. The first argument, proc, represents what should be monitored. This can either be the PID of a process (not necessarily a Python program), a string containing some python code to be evaluated or a tuple (f, args, kw) containing a function and its arguments to be evaluated as f(*args, **kw). For example,
```python
>>> from memory_profiler import memory_usage
>>> mem_usage = memory_usage(-1, interval=.2, timeout=1)
>>> print(mem_usage)
[7.296875, 7.296875, 7.296875, 7.296875, 7.296875]
```
Here I've told memory_profiler to get the memory consumption of the current process over a period of 1 second with a time interval of 0.2 seconds. As PID I've given it -1, which is a special number (PIDs are usually positive) that means current process, that is, I'm getting the memory usage of the current Python interpreter. Thus I'm getting around 7MB of memory usage from a plain python interpreter. If I try the same thing on IPython (console) I get 29MB, and if I try the same thing on the IPython notebook it scales up to 44MB.
If you'd like to get the memory consumption of a Python function, then you should specify the function and its arguments in the tuple (f, args, kw). For example:
```python
>>> # define a simple function
>>> def f(a, n=100):
...     import time
...     time.sleep(2)
...     b = [a] * n
...     time.sleep(1)
...     return b
...
>>> from memory_profiler import memory_usage
>>> memory_usage((f, (1,), {'n': int(1e6)}))
```
This will execute the code f(1, n=int(1e6)) and return the memoryconsumption during this execution.
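The (f, args, kw) tuple is simply a deferred call: memory_usage invokes f(*args, **kw) while it samples memory. The calling convention itself can be shown with a tiny hypothetical helper (call_spec is for illustration only, not part of the API):

```python
def call_spec(spec):
    # Unpack a (function, args, kwargs) tuple and perform the call,
    # exactly the f(*args, **kw) convention memory_usage expects.
    f, args, kw = spec
    return f(*args, **kw)

def f(a, n=100):
    return [a] * n

print(call_spec((f, (1,), {'n': 5})))  # → [1, 1, 1, 1, 1]
```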
The output can be redirected to a log file by passing an IO stream as a parameter to the decorator, like @profile(stream=fp):
```python
>>> fp = open('memory_profiler.log', 'w+')
>>> @profile(stream=fp)
... def my_func():
...     a = [1] * (10 ** 6)
...     b = [2] * (2 * 10 ** 7)
...     del b
...     return a
```
For details, refer to examples/reporting_file.py.
Reporting via logger Module:
Sometimes it is very convenient to use the logging module, especially when we need to use a RotatingFileHandler.
The output can be redirected to the logging module by simply making use of the LogFile class of the memory_profiler module.
```python
>>> from memory_profiler import LogFile
>>> import sys
>>> sys.stdout = LogFile('memory_profile_log')
```
Customized reporting:
Sending everything to the log file while running the memory_profiler could be cumbersome, so one can choose to keep only entries with increments by passing True to reportIncrementFlag, a parameter of the LogFile class of the memory_profiler module.
```python
>>> from memory_profiler import LogFile
>>> import sys
>>> sys.stdout = LogFile('memory_profile_log', reportIncrementFlag=False)
```
For details, refer to examples/reporting_logger.py.
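The idea behind LogFile, redirecting sys.stdout through the logging machinery, can be sketched with the standard library alone (LoggerWriter and ListHandler are hypothetical names for this sketch, not memory_profiler's API):

```python
import logging
import sys

class LoggerWriter:
    """File-like object that forwards writes to a logger."""
    def __init__(self, logger):
        self.logger = logger
    def write(self, msg):
        if msg.strip():              # skip the bare newlines print() emits
            self.logger.info(msg.rstrip())
    def flush(self):                 # required by the file-like protocol
        pass

class ListHandler(logging.Handler):
    """Collects messages in a list (a RotatingFileHandler would go here instead)."""
    def __init__(self, store):
        super().__init__()
        self.store = store
    def emit(self, record):
        self.store.append(record.getMessage())

captured = []
log = logging.getLogger('memory_profile_log')
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(ListHandler(captured))

sys.stdout = LoggerWriter(log)
print("profiling output line")       # routed through the logger
sys.stdout = sys.__stdout__

print(captured)  # → ['profiling output line']
```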
After installing the module, if you use IPython, you can use the %mprun, %%mprun, %memit and %%memit magics.
For IPython 0.11+, you can use the module directly as an extension, with %load_ext memory_profiler
To activate it whenever you start IPython, edit the configuration file for yourIPython profile, ~/.ipython/profile_default/ipython_config.py, to register theextension like this (If you already have other extensions, just add this one tothe list):
```python
c.InteractiveShellApp.extensions = ['memory_profiler']
```
(If the config file doesn't already exist, run ipython profile create in a terminal.)
It then can be used directly from IPython to obtain a line-by-linereport using the %mprun or %%mprun magic command. In this case, you can skipthe @profile decorator and instead use the -f parameter, likethis. Note however that function my_func must be defined in a file(cannot have been defined interactively in the Python interpreter):
```
In [1]: from example import my_func, my_func_2
In [2]: %mprun -f my_func my_func()
```
or in cell mode:
```
In [3]: %%mprun -f my_func -f my_func_2
   ...: my_func()
   ...: my_func_2()
```
Another useful magic that we define is %memit, which is analogous to %timeit. It can be used as follows:
```
In [1]: %memit range(10000)
peak memory: 21.42 MiB, increment: 0.41 MiB

In [2]: %memit range(1000000)
peak memory: 52.10 MiB, increment: 31.08 MiB
```
or in cell mode (with setup code):
```
In [3]: %%memit l = range(1000000)
   ...: len(l)
   ...:
peak memory: 52.14 MiB, increment: 0.08 MiB
```
For more details, see the docstrings of the magics.
For IPython 0.10, you can install it by editing the IPython configuration file ~/.ipython/ipy_user_conf.py to add the following lines:
```python
# These two lines are standard and probably already there.
import IPython.ipapi
ip = IPython.ipapi.get()

# These two are the important ones.
import memory_profiler
memory_profiler.load_ipython_extension(ip)
```
memory_profiler supports different memory tracking backends including: 'psutil', 'psutil_pss', 'psutil_uss', 'posix', 'tracemalloc'. If no specific backend is specified the default is to use 'psutil', which measures RSS, aka "Resident Set Size". In some cases (particularly when tracking child processes) RSS may overestimate memory usage (see example/example_psutil_memory_full_info.py for an example). For more information on 'psutil_pss' (measuring PSS) and 'psutil_uss' please refer to: https://psutil.readthedocs.io/en/latest/index.html?highlight=memory_info#psutil.Process.memory_full_info
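As a hedged, Linux-only illustration of what RSS is, the figure the default psutil backend reports can also be read straight from the kernel's /proc interface:

```python
def rss_mib():
    # VmRSS in /proc/self/status is the process's resident set size in KiB.
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1]) / 1024
    return None

print(f"RSS: {rss_mib():.1f} MiB")
```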
Currently, the backend can be set via the CLI
$ python -m memory_profiler --backend psutil my_script.py
and is exposed by the API
```python
>>> from memory_profiler import memory_usage
>>> mem_usage = memory_usage(-1, interval=.2, timeout=1, backend="psutil")
```
- Q: How accurate are the results?
- A: This module gets the memory consumption by querying the operating system kernel about the amount of memory the current process has allocated, which might be slightly different from the amount of memory that is actually used by the Python interpreter. Also, because of how the garbage collector works in Python the result might be different between platforms and even between runs.
- Q: Does it work under Windows?
- A: Yes, thanks to the psutil module.
For support, please ask your question on stack overflow and add the *memory-profiling* tag. Send issues, proposals, etc. to github's issue tracker.
If you've got questions regarding development, you can email me directly at f@bianp.net
Latest sources are available from github:
https://github.com/pythonprofilers/memory_profiler
PySpeedIT (uses a reduced version of memory_profiler)
pydio-sync (uses custom wrapper on top of memory_profiler)
This module was written by Fabian Pedregosa and Philippe Gervais, inspired by Robert Kern's line profiler.
Tom added windows support and speed improvements via the psutil module.
Victor added python3 support, bugfixes and general cleanup.
Vlad Niculae added the %mprun and %memit IPython magics.
Thomas Kluyver added the IPython extension.
Sagar UDAY KUMAR added Report generation feature and examples.
Dmitriy Novozhilov and Sergei Lebedev added support for tracemalloc.
Benjamin Bengfort added support for tracking the usage of individual child processes and plotting them.
Muhammad Haseeb Tariq fixed issue #152, which made the whole interpreter hang on functions that launched an exception.
Juan Luis Cano modernized the infrastructure and helped with various things.
Martin Becker added PSS and USS tracking via the psutil backend.
BSD License, see file COPYING for full text.