FIX: close mem leak for repeated draw #11972


Merged

Conversation

jklymak
Member

PR Summary

Closes #11956

See the test in #11956, but repeated drawing kept growing the size of transform._parents ad infinitum with dead weak refs. These aren't too big, but they add up if you run for quite a while...

PR Checklist

  • Has Pytest style unit tests
  • Code is Flake 8 compliant
  • New features are documented, with examples if plot related
  • Documentation is sphinx and numpydoc compliant
  • Added an entry to doc/users/next_whats_new/ if major new feature (follow instructions in README.rst there)
  • Documented in doc/api/api_changes.rst if API changed in a backward-incompatible way

@anntzer
Contributor

anntzer commented Aug 29, 2018

May be worth checking whether this helps #9141 (unlikely, but also a matter of dead weakref leakage)...

@QuLogic
Member

I believe WeakSetDictionary was explicitly removed due to performance reasons: #5664. I'm not sure whether this is still a concern.

@anntzer
Contributor

anntzer commented Aug 29, 2018

I can repro the performance issue

from io import BytesIO
import time

import matplotlib; matplotlib.use("agg"); matplotlib.rcdefaults()
from matplotlib import pyplot as plt
import numpy as np

dts = []
for _ in range(10):
    start = time.perf_counter()
    fig, ax = plt.subplots(8, 8)
    fig.savefig(BytesIO())
    dt = time.perf_counter() - start
    dts.append(dt)
    print("dt", dt)
    plt.close("all")
print("median", np.median(dts))

goes from ~1.6s to ~2.0s (apparently we spend a lot of time fiddling with transforms...), so that's quite significant.

Perhaps we can use the callback arg to weakref.ref (https://docs.python.org/3/library/weakref.html#weakref.ref) to manually prune the dict ourselves when the weakref is about to be deleted (you need to key on the id() of the weakref, not of the object itself, as it is not available anymore when the callback is called).
Of course then the question (if that is indeed faster) is where the overhead of WeakValueDictionary comes from...
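
A minimal standalone sketch of that idea (plain Python, not matplotlib code; the Node class and register helper are just for illustration):

import weakref

class Node:
    pass

parents = {}

def register(parent):
    # Key on the id() of the weakref itself; the callback receives the
    # (now-dead) weakref and prunes the matching entry.
    r = weakref.ref(parent, lambda r: parents.pop(id(r), None))
    parents[id(r)] = r

p = Node()
register(p)
print(len(parents))  # 1
del p                # on CPython the refcount hits zero immediately
print(len(parents))  # 0: the callback has removed the entry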

@jklymak
Member (Author)

Ha, figures.

I spent a bit of time trying to figure out why there were so many unique transforms being created, but gave up. That whole machinery is opaque, at least to me.

@anntzer
Contributor

I have a grand plan to rewrite the whole thing in C++ at some point :p

@tacaswell added this to the v3.1 milestone on Aug 29, 2018
@jklymak
Member (Author)

jklymak commented Aug 29, 2018

So if I do

for child in children:
    for key in child._parents.keys():
        if child._parents[key] is None:
            child._parents.pop(key)
    child._parents[id(self)] = weakref.ref(self)

Then I get median dt = 0.814 s.

If I do it without, I get the memory leak and dt = 0.764 s, so only a 7.5% increase in execution time, versus the WeakValueDictionary, which has a 30% increase (I got 1.012 s).

I think it's pretty crazy that a bit of transform bookkeeping can add so much to the draw time.

But is the above modest increase in bookkeeping time OK?

@WeatherGod
Member

WeatherGod commented Aug 29, 2018 via email

Would `del child._parents[key]` be slightly more efficient? It is a bit more explicit about the intent.

@anntzer
Contributor

My proposal above was to do something along the lines (untested) of

ref = weakref.ref(self, lambda ref: child._parents.pop(id(ref)))
child._parents[id(ref)] = ref

which should auto-remove the dead weakrefs.
On the other hand, this means that there's a reference loop from child back to itself (via the closure in the lambda), so we're just dependent on the GC at that point.
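
A rough illustration of that concern, with a throwaway Node class standing in for a transform: the lambda's closure holds child, and child._parents holds the weakref that holds the lambda, so the whole chain can only be reclaimed by the cyclic garbage collector rather than by refcounting alone.

import gc
import weakref

class Node:
    def __init__(self):
        self._parents = {}

parent, child = Node(), Node()
r = weakref.ref(parent, lambda ref: child._parents.pop(id(ref), None))
child._parents[id(r)] = r

gc.collect()         # clear any pre-existing garbage first
del child, r
print(gc.collect())  # > 0: the child/dict/weakref/lambda cycle needed the GC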

@jklymak
Member (Author)

@WeatherGod Actually, the code above still has the memory leak in it. It needs to be longer:

for child in children:
    badkeys = []
    for key in child._parents.keys():
        if child._parents[key]() is None:
            badkeys += [key]
    for key in badkeys:
        child._parents.pop(key)
    child._parents[id(self)] = weakref.ref(self)

Which gives a mildly longer run-time: 0.8348 s

@anntzer, your solution just gives a key error, and I don't follow it well enough to know how to fix it.

ref = weakref.ref(self, lambda ref: child._parents.pop(id(ref)))
KeyError: (4581245960,)

@tacaswell
Member

How about:

ref = weakref.ref(
    self,
    lambda ref, sid=id(self), target=child._parents: target.pop(sid))
child._parents[id(self)] = ref
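
As a standalone sketch (using a trivial stand-in class rather than a real transform node): sid=id(self) and target=child._parents are bound as lambda defaults at definition time, so the callback pops exactly the key that was stored and holds no strong reference back to self.

import weakref

class T:  # minimal stand-in for a transform node
    def __init__(self):
        self._parents = {}

def set_children(self, *children):
    for child in children:
        ref = weakref.ref(
            self,
            lambda ref, sid=id(self), target=child._parents: target.pop(sid))
        child._parents[id(self)] = ref

child, parent = T(), T()
set_children(parent, child)
print(len(child._parents))  # 1
del parent                  # callback fires (CPython refcounting) and prunes the entry
print(len(child._parents))  # 0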

@jklymak
Member (Author)

@tacaswell that seems to work, and has dt = 0.782 s, so it's only imperceptibly slower than without that change.

@jklymak force-pushed the fix-mem-leak-repeated=draw branch 3 times, most recently from 3882397 to 2f0a96c on August 29, 2018 23:03
@jklymak
Member (Author)

This doesn't fix #9141 unfortunately, but maybe someone had a similar misapprehension about what happens to weakrefs that are put into a dictionary...

@efiring
Member

This seems reasonable, but I think it leaves pickling/unpickling incomplete, because the new deletion behavior is lost. This would not be the case if a WeakValueDictionary were used throughout. WeakValueDictionary lookups are indeed slow, though. Here is a comparison between a WeakValueDictionary, yy, and a normal dictionary, xx, each with a single entry:

In [42]: %timeit yy['a']
479 ns ± 37.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [43]: %timeit xx['a']
45.7 ns ± 2.08 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

In [44]: xx == yy
True

The numbers are essentially unchanged for 100-entry dictionaries.
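
For reference, a setup along these lines reproduces the comparison (names chosen to match the snippet above; the value must be weak-referenceable, hence the small class, and absolute timings will differ by machine):

import weakref

class Obj:
    pass

val = Obj()
xx = {'a': val}                     # plain dict
yy = weakref.WeakValueDictionary()  # weak-valued dict
yy['a'] = val

# In IPython:
#   %timeit yy['a']
#   %timeit xx['a']
# xx == yy is True while val is alive, since both map 'a' to the same object.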

@efiring
Member

Looking at the code for WeakValueDictionary gives me a mild suspicion that doing without it involves a risk of extremely rare errors. Here is an excerpt from the __init__:

        def remove(wr, selfref=ref(self), _atomic_removal=_remove_dead_weakref):
            self = selfref()
            if self is not None:
                if self._iterating:
                    self._pending_removals.append(wr.key)
                else:
                    # Atomic removal is necessary since this function
                    # can be called asynchronously by the GC
                    _atomic_removal(d, wr.key)
        self._remove = remove

@jklymak
Member (Author)

For my edification, though, why do we keep getting so many child/parent pairs for these transforms? It seems like something we could readily deal with explicitly when the axes is cleared or destroyed. But maybe I'm being naive...

@efiring
Member

I've never understood the transform framework well and I still don't, but here is the way it looks to me right now. First, the "parent-child" terminology is misleading; "parents" are transforms that depend on their "children", so that if a "child" is modified, the immediate parents, and their parents, etc. must be marked invalid, because the net result of the chain to that point will be modified and must be recalculated. Second, in the chain of transform steps taking a line in data coordinates to its representation in display coordinates, there is a series of links that does not need to change for each new line. The Axes box and the canvas dimensions aren't changing, for example. Therefore, there are links in the transform chain that are reused, and one end of the reused chain becomes a "child" that gets a new "parent" (starting a fresh sequence of links) each time a new line is added or an old line is replaced. Unless the old, no-longer-used "parents" are deleted from the _parents dictionary of that "child", the dictionary keeps growing.

Although I have described the transform sequence as a chain, it can be more complicated. In particular, two chains, one for x and one for y, can be combined into a blended transform (a "parent"). Each of the respective "child" ends of the x and y chains then has a _parents dictionary with an entry pointing to the same parent, the node representing the merger of the two chains.
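
For example (an illustration only; _parents is a private attribute, inspected here just to show the bookkeeping), a blended transform registers itself as a parent of both of its children:

from matplotlib.transforms import Affine2D, blended_transform_factory

tx, ty = Affine2D(), Affine2D()
blend = blended_transform_factory(tx, ty)
print(id(blend) in tx._parents, id(blend) in ty._parents)  # True True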

@jklymak
Member (Author)

I still think there is a root cause here that I don't understand.

If I check the transform for the line, which is the only thing that is changing each draw in the test code, then I get the same transform each time. It's not being invalidated, and it's not changing. So its children never get set.

So some other transform keeps resetting its own children in this case (or making a new "parent"). My strong suspicion is that it is the data cursor (or whatever we call the data display in the bottom left corner). I'm not sure where to get its transform info, but it doesn't seem like it should be calling transforms.set_children all the time either... I suspect somewhere it keeps needlessly making new versions of its transform information.

@efiring
Member

What is the test code that you are using?

@efiring
Member

I suspect the problem will turn out to be pervasive, not restricted to one little plot element. I can trigger massive transform node generation, and get a glimpse of one source, with the following procedure.

First, on master, edit transforms.py, inserting 2 lines after line 170 so that the body of set_children is

for child in children:
    if len(child._parents) > 99:
        raise Exception
    child._parents[id(self)] = weakref.ref(self)

Next, in ipython, execute

import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.cla()
ax.cla()

That should be enough to raise the exception. Then you can see from the long traceback the chain of function calls that is involved in the transform node generation, and you can use the %debug magic to see the child at which the Exception was raised, count the number of live references in _parents (16 in this case), etc.

The number "99" is obviously arbitrary. If you use a number like 50, you only need oneax.cla(). Presumably one could go through a long sequence of numbers to find all of the call chains that lead to large numbers of parents.

This is not just a rabbit hole, it is a rabbit metropolis.


@anntzer
Contributor

Re: #11972 (comment), I think you mean the use of _atomic_removal? Looks like it went in in python/cpython@e10ca3a re: one thread's removing dead weakrefs incorrectly killing a new entry set by another thread.
I don't think we have ever made any guarantees regarding the multithreading safety of matplotlib (even outside of the event loop integration, let's say pure agg)(?), so while we should keep an eye on the issue I think this PR is still an improvement over the current situation.

@efiring
Member

efiring left a comment

Almost ready--but I think the _parents construction in __setstate__ needs a similar treatment to include the callback.

The comment below refers to this code comment in the diff:
# pass a weak reference. The second arg is a callback for
# when the weak ref is garbage collected to also
# remove the dictionary element, otherwise child._parents
# keeps growing for multiple draws.
@efiring
Member

Alternative comment: "Use weak references so this dictionary won't keep obsolete nodes alive; the callback deletes the dictionary entry. This is a performance improvement over using WeakValueDictionary."

@jklymak
Member (Author)

WRT __setstate__, no prob - I'll do tonight...

@jklymak
Member (Author)

jklymak commented Sep 4, 2018

@efiring agreed that it's a bit of a warren.

I understand why cla might make a bunch of new children. What I don't understand is why

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
l = ax.plot(np.arange(10))
plt.show(block=False)
for i in range(1, 5000):
    l[0].set_ydata(np.arange(0, 10 * i, i))
    plt.pause(0.0001)

does, which is what the OP was doing.

It's something with plt.pause, because I can't get the error if I just call fig.canvas.draw_idle() (but nor can I get the figure to display in nbagg or qt5agg). So while I think the fix here is appropriate, maybe the other thing to say in #11956 is that plt.pause() is not meant for "industrial-grade" plots that are supposed to run for many days. Though exactly what is better to use is not at the tip of my brain.
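
One commonly suggested pattern for long-running live updates (not from this thread, and details vary by backend) is to drive the event loop explicitly instead of calling plt.pause():

import numpy as np
import matplotlib.pyplot as plt

plt.ion()
fig, ax = plt.subplots()
l, = ax.plot(np.arange(10))
fig.canvas.draw()
for i in range(1, 5000):
    l.set_ydata(np.arange(0, 10 * i, i))
    fig.canvas.draw_idle()     # mark the figure for redraw
    fig.canvas.flush_events()  # let the GUI process pending events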

@efiring
Member

Right, without the pause it won't draw. I agree that logically, in the example you give above, there should be no need for transform generation with each loop, because the axes limits, ticks, etc. are all constant. But everything is redrawn with each loop. Axes.draw is called. There is a lot of getting of transforms and transform parts, and evidently new transforms are being generated each time. It would be interesting to track down where and why this is happening.

@tacaswell
Member

If we can track down why repeated drawing is creating new transforms, that may yield some nice performance improvements as well...

@jklymak
Member (Author)

... is there somewhere that all the stale stuff is documented? I don't see that most draw methods care if the artist is stale or not. I modified some of them to not draw if stale=False, but that just meant that on the next draw those artists disappeared. So I assume that there is a cla that gets called on each draw. But frustratingly I can't readily find it. It seems that we don't want to cla on every pause, but rather just update elements that are stale, but again, I'm probably not understanding what stale is supposed to do.

@jklymak
Member (Author)

... actually, sorry, I see now - unless you use blitting and animation, of course the whole image needs to be recomposed on subsequent draws, because otherwise it doesn't know what the "background" looks like and you can't erase the "old" state cleanly.

Still not sure why that means new transforms need to be made each draw, so that is still a mystery...

@tacaswell
Member

Stale keeps track of whether there has been a change to the artist that would require re-rendering the whole figure. In principle it could be used for 'auto-blitting' (which requires tracking which bounding boxes are stale and which artists overlap with them, which makes the stale box bigger, figuring out where the updated artists will land, and then blanking just those regions and re-drawing just the things that changed), but currently it is just used to decide if we should call draw_idle on the figure.

cla nukes our internal data structures, so if you call it you lose all of the artists. The line in Agg that clears the canvas at the start of each draw is

def draw(self):
    """
    Draw the figure using the renderer.
    """
    self.renderer = self.get_renderer(cleared=True)
    # acquire a lock on the shared font cache
    RendererAgg.lock.acquire()

    toolbar = self.toolbar
    try:
        self.figure.draw(self.renderer)
        # A GUI class may be need to update a window using this draw, so
        # don't forget to call the superclass.
        super().draw()
    finally:
        RendererAgg.lock.release()


@jklymak force-pushed the fix-mem-leak-repeated=draw branch from 2f0a96c to f23d891 on September 5, 2018 22:01
@jklymak force-pushed the fix-mem-leak-repeated=draw branch from f23d891 to 3325bde on September 5, 2018 22:03
@jklymak
Member (Author)

@efiring, I think I fixed the setstate properly. OTOH, I don't have a lot invested in squashing this bug for folks who pickle their plotting environment (a feature I think is over the top in requiring us to jump through hoops).

@tacaswell
Member

a feature I think is over the top in requiring us to jump through hoops

It is something we support and cannot break. The biggest use case is using multiprocessing to build figures where generating (but not drawing) the artists is very expensive.

Why is set_children being called so many times?

@jklymak
Member (Author)

Why is set_children being called so many times?

It's still unclear to me!

@jklymak
Member (Author)

def set_children(self, *children):
    """
    Set the children of the transform, to let the invalidation
    system know which transforms can invalidate this transform.

    Should be called from the constructor of any transforms that
    depend on other transforms.
    """
    # Parents are stored as weak references, so that if the
    # parents are destroyed, references from the children won't
    # keep them alive.
    print('set children!', id(self))
    for child in children:
        # Use weak references so this dictionary won't keep obsolete nodes
        # alive; the callback deletes the dictionary entry. This is a
        # performance improvement over using WeakValueDictionary.
        ref = weakref.ref(
            self,
            lambda ref, sid=id(self), target=child._parents: target.pop(sid))
        child._parents[id(self)] = ref

and then run:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
l = ax.plot(np.arange(10))
plt.show()

and wiggle the cursor around - set_children gets called continuously... (this doesn't mem-leak because it just keeps setting the same parents)...

@jklymak
Member (Author)

More digging around - if you __add__ transforms, it creates a new transform. I suspect we have lots of transA + transB running around, and that calls set_children on both transA and transB each time it's called.
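
A quick way to check that hypothesis (illustration only; again poking at the private _parents attribute just to count entries):

from matplotlib.transforms import Affine2D

transA, transB = Affine2D(), Affine2D()
before = len(transA._parents)
composites = [transA + transB for _ in range(3)]  # hold references so nothing is collected
print(len(transA._parents) - before)  # grows by one per composite created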

@efiring merged commit 4754b58 into matplotlib:master on Sep 6, 2018
@Bibushka

Bibushka commented Sep 7, 2018

@jklymak sorry to barge in. Have you tried the fix you gave in the commit? I still get a 0.1-0.3 MB increase in memory use with every call to draw. I made the changes yesterday, left the app running for 15 hours, and memory usage still went up by 2.2 GB (redraw every 30 seconds). Is the fix still in progress?

@jklymak
Member (Author)

@Bibushka I tested w/ the code below and get results like those below for as long as I want. If you are testing w/ different code, or have a different setup, perhaps there is another memory leak, or there is something wrong with the test?

Total allocated size: 1246.4 KiB
Total allocated size: 1314.6 KiB
Total allocated size: 1382.5 KiB
Total allocated size: 1452.1 KiB
Total allocated size: 1523.4 KiB
Total allocated size: 1592.6 KiB
Total allocated size: 1662.1 KiB
Total allocated size: 1245.8 KiB
Total allocated size: 1311.5 KiB
Total allocated size: 1379.6 KiB
Total allocated size: 1448.9 KiB
Total allocated size: 1518.5 KiB
Total allocated size: 1587.7 KiB
Total allocated size: 1657.2 KiB
Total allocated size: 1315.6 KiB
Total allocated size: 1381.1 KiB
Total allocated size: 1380.8 KiB
Total allocated size: 1450.2 KiB
Total allocated size: 1521.5 KiB
Total allocated size: 1573.0 KiB
Total allocated size: 1661.8 KiB
Total allocated size: 1250.8 KiB
Total allocated size: 1316.2 KiB
Total allocated size: 1384.3 KiB
Total allocated size: 1459.2 KiB
Total allocated size: 1512.2 KiB
Total allocated size: 1601.0 KiB
Total allocated size: 1662.2 KiB
Total allocated size: 1019.5 KiB
Total allocated size: 1055.0 KiB
Total allocated size: 1088.9 KiB
Total allocated size: 1123.7 KiB
Total allocated size: 1159.1 KiB
Total allocated size: 1194.6 KiB
Total allocated size: 1230.2 KiB
Total allocated size: 1265.9 KiB
Total allocated size: 1301.3 KiB
Total allocated size: 1339.3 KiB
Total allocated size: 989.7 KiB
Total allocated size: 1021.2 KiB
import matplotlib.pyplot as plt
import numpy as np
import os
import linecache
import sys
import tracemalloc
import time


def display_top(snapshot, key_type='lineno', limit=2):
    '''
    function for pretty printing tracemalloc output
    '''
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))


tracemalloc.start()

y = np.random.rand(100)
x = range(len(y))

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, 'b-')
plt.show(block=False)

t0 = time.time()
while True:
    try:
        ax.clear()
        ax.plot(x, np.random.rand(100), 'b-')
        plt.pause(0.0001)
        snapshot = tracemalloc.take_snapshot()
        display_top(snapshot)
        time.sleep(0.05)
    except KeyboardInterrupt:
        break

@Bibushka

Bibushka commented Sep 10, 2018

I'm kind of a noob and my setup is not as fancy. I use memory_profiler's profile to track the memory changes; see results below:

from PyQt5 import QtCore, QtWidgets
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT
from matplotlib.figure import Figure
import numpy
import random
import sys
from gc import collect
from memory_profiler import profile

x = []
y = []


class Ui_main_window(QtWidgets.QMainWindow):
    def __init__(self):
        super(Ui_main_window, self).__init__()
        self.setObjectName("main_window")
        self.centralwidget = QtWidgets.QWidget(self)
        self.centralwidget.setObjectName("centralwidget")
        self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
        self.gridLayout.setObjectName("gridLayout")
        self.chart_canvas = MyCanvas(self.centralwidget, width=6, height=3, dpi=100)
        self.gridLayout.addWidget(self.chart_canvas, 3, 0, 2, 6)
        self.toolbar = NavigationToolbar2QT(self.chart_canvas, self.centralwidget)
        self.toolbar.update()
        self.gridLayout.addWidget(self.toolbar, 2, 0, 1, 4)
        self.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(self)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 539, 21))
        self.menubar.setObjectName("menubar")
        self.retranslateUi(self)
        QtCore.QMetaObject.connectSlotsByName(self)
        self.show()

    def retranslateUi(self, main_window):
        _translate = QtCore.QCoreApplication.translate
        main_window.setWindowTitle(_translate("main_window", "Main Window"))


class MyCanvas(FigureCanvas):
    def __init__(self, parent=None, width=6, height=3, dpi=100):
        self.fig = Figure(figsize=(width, height), dpi=dpi)
        self.axes = self.fig.add_subplot(111, frame_on=True)
        FigureCanvas.__init__(self, self.fig)
        self.setParent(parent)
        self.lines = []
        self.labels = []
        timer = QtCore.QTimer(self)
        timer.timeout.connect(self.update_figure)
        timer.start(3000)

    @profile()
    def update_figure(self):
        print("update_figure")
        (processed_time_values, processed_numeric_values) = self.value_processing()
        self.axes.cla()
        self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
        self.draw()

    def value_processing(self):
        global x, y
        x.append(random.randint(0, 20))
        y.append(random.randint(0, 20))
        return x, y


app = QtWidgets.QApplication(sys.argv)
GUI_main_window = QtWidgets.QMainWindow()
main_window = Ui_main_window()
app.exec_()

Results:

update_figure

Line # Mem usage Increment Line Contents
55 65.2 MiB 65.2 MiB @profile()
56 def update_figure(self):
57 65.2 MiB 0.0 MiB print("update_figure")
58 65.2 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.2 MiB 0.0 MiB self.axes.cla()
60 65.2 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.3 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.3 MiB 65.3 MiB @profile()
56 def update_figure(self):
57 65.3 MiB 0.0 MiB print("update_figure")
58 65.3 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.3 MiB 0.0 MiB self.axes.cla()
60 65.3 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.4 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.4 MiB 65.4 MiB @profile()
56 def update_figure(self):
57 65.4 MiB 0.0 MiB print("update_figure")
58 65.4 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.4 MiB 0.0 MiB self.axes.cla()
60 65.4 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.5 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.5 MiB 65.5 MiB @profile()
56 def update_figure(self):
57 65.5 MiB 0.0 MiB print("update_figure")
58 65.5 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.5 MiB 0.0 MiB self.axes.cla()
60 65.5 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.5 MiB 0.0 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.5 MiB 65.5 MiB @profile()
56 def update_figure(self):
57 65.5 MiB 0.0 MiB print("update_figure")
58 65.5 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.5 MiB 0.0 MiB self.axes.cla()
60 65.5 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.5 MiB 0.0 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.5 MiB 65.5 MiB @profile()
56 def update_figure(self):
57 65.5 MiB 0.0 MiB print("update_figure")
58 65.5 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.5 MiB 0.0 MiB self.axes.cla()
60 65.5 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.6 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.6 MiB 65.6 MiB @profile()
56 def update_figure(self):
57 65.6 MiB 0.0 MiB print("update_figure")
58 65.6 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.6 MiB 0.0 MiB self.axes.cla()
60 65.6 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.6 MiB 0.0 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.6 MiB 65.6 MiB @profile()
56 def update_figure(self):
57 65.6 MiB 0.0 MiB print("update_figure")
58 65.6 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.6 MiB 0.0 MiB self.axes.cla()
60 65.6 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.6 MiB 0.0 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.6 MiB 65.6 MiB @profile()
56 def update_figure(self):
57 65.6 MiB 0.0 MiB print("update_figure")
58 65.6 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.6 MiB 0.0 MiB self.axes.cla()
60 65.6 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.8 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.8 MiB 65.8 MiB @profile()
56 def update_figure(self):
57 65.8 MiB 0.0 MiB print("update_figure")
58 65.8 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.8 MiB 0.0 MiB self.axes.cla()
60 65.8 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.8 MiB 0.0 MiB self.draw()

@tacaswell
Member

@Bibushka The x and y lists are growing without bound in your example. While your example is closer to your actual use case, the Qt GUI and timer make it much more complex. Can you break up creating the numpy arrays and calling plot into two lines?

@jklymak Are you sure that loop is actually causing re-draws? I think you need a draw_idle or draw call in there.

@jklymak
Member (Author)

@tacaswell plt.pause does the redraw, I think. It definitely updates the plot ;-)

@tacaswell
Member

Ah, 🐑 nvm

@Bibushka

@tacaswell I don't think it's the timer, nor the lists, that cause the problem. The problem is that one extra point in the plot shouldn't make the memory jump by 0.1 MB with each call of self.draw(), and the fact that this memory isn't cleared when I use self.axes.cla().

@Bibushka

I have managed to switch to PyQt5 instead of PySide2 and it seems to have stopped the memory leak; I never would've imagined a library could cause such a huge problem.

@tacaswell
Member

See #12089 and the follow-on PRs.

This should be fixed in 3.0 and will be fixed in 2.2.4

Reviewers

@efiring approved these changes
@anntzer approved these changes

Milestone

v3.1.0

Successfully merging this pull request may close these issues:

apparent memory leak with live plotting

7 participants

@jklymak @anntzer @QuLogic @WeatherGod @tacaswell @efiring @Bibushka
