GH-108362: Incremental GC implementation #108038

Merged: markshannon merged 17 commits into python:main from faster-cpython:incremental-gc, Feb 5, 2024

Conversation

@markshannon (Member) commented Aug 16, 2023 (edited):

Implements incremental cyclic GC.
Instead of traversing one generation on each collection, we traverse the young generation and the oldest part of the old generation. By traversing the old generation a chunk at a time, we keep pause times down a lot.

See faster-cpython/ideas#613 for the idea and algorithm.
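For anyone wanting to gauge the effect on pauses from Python code, here is a minimal, rough sketch using the standard gc.callbacks hook (wall-clock timing only; it is not part of this PR, and the stats linked below are the real comparison):

    import gc
    import time

    # Rough pause-time probe using the documented gc.callbacks hook.
    pauses = []
    start = None

    def gc_timer(phase, info):
        global start
        if phase == "start":
            start = time.perf_counter()
        elif phase == "stop" and start is not None:
            pauses.append((info["generation"], time.perf_counter() - start))

    gc.callbacks.append(gc_timer)
    # ... run a workload, then inspect max(p for _, p in pauses) for the longest pause.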

@markshannon changed the title from "Incremental GC" to "GH-108362: Incremental GC implementation" on Aug 23, 2023
@bedevere-bot mentioned this pull request on Aug 23, 2023
@markshannon (Member, Author):

Numbers for a recent commit, which is not tuned for performance beyond using incremental collection.

Speedup: 3-4%
Relative pause time estimates, using objects visited as a proxy:

Collection   | Main | This PR
Young        | 1    | --
Incremental  | --   | 3-4
Aging        | 9    | --
Old          | 80   | --

Shortest pauses go up, but no one cares about those.
It is throughput and longest pause times that matter.

Throughput is a few percent better, but that can also be achieved by increasing thresholds or only using one generation.
It is the longest pause time that is important, and that is improved a lot.

The above numbers are from the pyperformance suite.

Stats: https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20230813-3.13.0a0-328cfd4/bm-20230813-azure-x86_64-faster%252dcpython-incremental_gc-3.13.0a0-328cfd4-pystats.md

@markshannon (Member, Author):

@pablogsal @nascheme want to take a look?

@pablogsal (Member) commented Aug 28, 2023 (edited):

> @pablogsal @nascheme want to take a look?

I can take a look this Thursday 👍

@markshannon (Member, Author):

@pablogsal?

@pablogsal (Member) commented Sep 8, 2023 (edited):

> @pablogsal?

Hey Mark, sorry for the lack of review, but unfortunately I had an accident last week where I broke my finger and needed some surgery. Currently, I am recovering from the surgery and the injury. I will try to review it ASAP, but it may take a bit more time. Apologies for the delay.


@markshannon (Member, Author):

Take care.
There's no rush, we've got plenty of time before feature freeze.

@nascheme (Member):

My impression is this is a good idea. The long pause you can get from the current full collections could be quite undesirable, depending on your app. Regarding the statement that "It is guaranteed to collect all cycles eventually", I have some concern about what the worst case might be. E.g. if it collects eventually but takes 1 million runs of the GC to do it, that's not so great. This property sounds similar to what you get with the "Train Algorithm" for M&S style collection.

I suppose we don't want to provide an option to switch between the current "collect everything" and incremental approaches. We could probably turn on the incremental by default and then let people turn it off if they run into trouble. I guess the other solution would be to downgrade to an older Python version.

@markshannon (Member, Author):

> Regarding the statement that "It is guaranteed to collect all cycles eventually", I have some concern about what the worst case might be. E.g. if it collects eventually but takes 1 million runs of the GC to do it, that's not so great.

All garbage cycles present in the old generation will be collected in a single traversal of the old generation.
This is true because (ignoring the issue of finalizers):

  • Cycles are unreachable, so will never be modified during a traversal, regardless of how many increments it takes.
  • If an object is part of a cycle and that object is visited by an incremental collection, that cycle will be collected.
  • We visit all objects in the old generation before starting the next traversal.

Obviously how many incremental collections it takes to traverse the whole old generation depends on how big the old generation is, and how big the increments are.

@nascheme (Member):

If there is a garbage cycle with more than objects_per_collection objects contained in it, I don't see how it ever gets collected. A reference to an object from outside the collected set (e.g. not part of work) will make the object look alive to the GC. clear_cycles_and_restore_refcounts() gets called at the end of the incremental collection, so I don't see how the cycle ever gets collected.

@markshannon (Member, Author):

> If there is a garbage cycle with more than objects_per_collection objects contained in it, I don't see how it ever gets collected.

Choosing an increment is done depth first, so if part of a cycle is in an increment, all of it must be.
objects_per_collection is a guideline, not a hard limit; see
faster-cpython/ideas#613 (comment)
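A rough Python model of that increment-selection idea (purely illustrative; the real implementation is the C code in this PR and the names below are invented):

    # Illustrative model only: an increment is grown depth first from the next
    # unscanned old-generation object, so any cycle touched by the increment is
    # pulled in whole, even if that overshoots the target size.
    def choose_increment(unscanned, referents, target_size):
        increment, seen = [], set()
        while unscanned and len(increment) < target_size:
            stack = [unscanned.pop()]
            while stack:                      # depth-first transitive closure
                obj = stack.pop()
                if obj in seen:
                    continue
                seen.add(obj)
                increment.append(obj)         # may exceed target_size
                stack.extend(referents(obj))
        return increment

Because the inner walk never stops mid-closure, the target size only controls when we stop starting new roots, which is why objects_per_collection is a guideline rather than a hard limit.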

@kumaraditya303 removed their request for review on September 12, 2023 17:44
@nascheme (Member):

> Choosing an increment is done depth first, so if part of a cycle is in an increment, all of it must be. objects_per_collection is a guideline, not a hard limit.

Oh, I see. In that case, if the collector encounters an object with many references, many more objects could be included in the collection. E.g. if you encounter sys.modules, you might examine basically all living objects. That's no worse than what's currently done with full collections, but I do wonder how much this incremental GC helps in practice. My guess would be that most of the time you are only working on a subgraph, but occasionally you will look at nearly all objects. Running some tests on real applications with big working sets could be informative.

@markshannon (Member, Author):

In the worst case, where all the objects form a giant cycle, there is no way to avoid visiting all objects.
I doubt that happens in practice, but if it does we are no worse off than doing a full GC.

We can get long pauses if a large number of objects are reachable from a single object that isn't part of a cycle.
This is more likely to be a problem, but it is also no worse than doing a full GC.

Because we track the number of objects seen, if we end up doing a large collection then we take a larger pause until the next one, so we do no more work. It is just done in bigger chunks.
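A tiny sketch of that pacing idea, with invented names and numbers (the real accounting lives in the C collector):

    # Sketch only: the allocation gap before the next incremental collection
    # scales with the work just done, so an oversized increment is followed by
    # a proportionally longer gap and the total work stays roughly constant.
    def allocations_until_next_collection(objects_scanned, base_gap=2000,
                                          expected_increment=5000):
        return base_gap * max(1, objects_scanned // expected_increment)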

Possible mitigations (for another PR)

At the start of the full cycle (after swapping the pending and scanning spaces) we could do a marking stage to mark live objects.
Marking requires less work per object than tentative deletion, so should lower the overhead.

Scanning the roots on the stack probably isn't a good idea, as many of those could soon become garbage, but scanning sys.modules is probably a good idea.
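A hedged sketch of what such a marking pre-pass might look like at the Python level (this is the proposed follow-up, not something in this PR; a real version would live in the C collector and use GC header flags rather than a set of ids):

    import gc
    import sys

    # Mark everything reachable from a long-lived root such as sys.modules,
    # so a later scan could skip the more expensive tentative-deletion work
    # for those objects. Sketch only; ids stand in for GC header flag bits.
    def mark_live_from(root):
        live, stack = set(), [root]
        while stack:
            obj = stack.pop()
            if id(obj) in live:
                continue
            live.add(id(obj))
            stack.extend(gc.get_referents(obj))
        return live

    live_ids = mark_live_from(sys.modules)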

@DinoV (Contributor):

I want to try one more thing (which is to simulate a large app with lots of modules, classes, functions...) just to see how that interacts with sys.modules and behaves with the transitive walk.

Review thread on the following code from the PR:

    uintptr_t aging = cf->aging_space;
    if (_PyObject_IS_GC(op)) {
        PyGC_Head *gc = AS_GC(op);
        if (_PyObject_GC_IS_TRACKED(op) &&
@DinoV (Contributor) commented on this code:

I think including objects from the other space can lead to some problematic behavior. If you have a large object graph which is referenced from smaller, more frequently allocated objects, this will continuously pull that large object graph in and blow the budget. This is basically what I was concerned with regarding sys.modules: you can have a module which imports sys, you can be creating lambdas in that module which are short-lived but become cyclic trash, and to collect a lambda you need to traverse the world.

I also think that, given this behavior, the two different lists of objects aren't really necessary: you could instead just move the objects to the end of the collecting space and we'd get back to them when we can, and I think the behavior would be identical to the existing algorithm (except maybe we'd pick up some extra objects when we flip the spaces).

I think another potential problem with this behavior is that you're not eagerly adding objects in the current space to this list transitively. That means that if we visit a large object graph and blow the budget, other transitively referenced objects that are in the space we're collecting from may never get added to the container, and therefore, despite collecting a huge object graph, we still won't have collected enough to clear the cycle.

Below is a program that seems to grow unbounded. I had briefly experimented with using the _PyGC_PREV_MASK_COLLECTING flag here to mark objects we want to include instead of using the aging space, but that also didn't work (I would have expected that on some collections we get a bunch of these little cycles, and then after a flip we need to walk the large object graph once), so I'm not certain what exactly needs to be done to fix this.

    class LinkedList:
        def __init__(self, next=None, prev=None):
            self.next = next
            if next is not None:
                next.prev = self
            self.prev = prev
            if prev is not None:
                prev.next = self

    def make_ll(depth):
        head = LinkedList()
        for i in range(depth):
            head = LinkedList(head, head.prev)
        return head

    head = make_ll(10000)

    while True:
        newhead = make_ll(200)
        newhead.surprise = head

@markshannon (Member, Author) replied:

Any cycle present in the old generation at the start of the full scan (when we flip spaces) will be collected during that scan (before we flip spaces again). See #108038 (comment).

There will always be object graphs that perform badly, but all cycles will be collected eventually (assuming the program runs for long enough).

It looks like most cycles created by this program will be handled in the young collection, so I don't see a problem there; but if we increase depth so that the cycles outlive the young collection, then it might take a while to collect them, and the first increment will likely have to visit all the nodes reachable from the global variable head.

@DinoV (Contributor) replied:

I guess I wasn't clear enough (and maybe you didn't run it), because this program actually runs out of memory :) You're right that most cycles should be collected in young, but I'm guessing one survives every young collection, and those build up and are uncollectible. I think we probably only successfully collect one of the non-long-lived objects every collection, because we repeatedly re-process the long-lived linked list.

If I modify the program to hold onto 100 of these at a time before clearing them out, it runs out of memory even faster.

@markshannon (Member, Author) replied:

I'm seeing the memory use grow slowly, although seemingly without bounds. So something is wrong.

There are two spaces in the old generation: aging and oldest.
After a young or incremental collection we add survivors to the end of aging.
We only collect cycles within the oldest space. After a flip, all objects will be in the oldest space, so if there are any cycles they will be collected, not moved back to the aging space.

I modified your program to include gc.set_threshold(2000, 2, 0), which makes the incremental collector process objects five times as fast, in which case the memory appears to stay bounded.

I was hoping to merge this and then play with thresholds, but it looks like we will need some sort of adaptive threshold before merging.

@DinoV (Contributor) replied:

You only collect cycles in the oldest space, but the reason I placed this comment here is that you do gather the transitive closure from the aging space. Therefore I think the statement "We only collect cycles within the oldest space" is incorrect given this code: once you've included a single object from aging, you will consider its transitive closure as well.

But including these objects seems unnecessary: once you flip, you'll reconsider those objects and their transitive closure.

And as I said before, I think this basically eliminates any usefulness of the two spaces... you may as well just be moving the objects to the end of the oldest space if you're willing to suck them into the transitive closure.

@markshannon (Member, Author) replied:

Why do you think we collect objects from the aging space?
That is not the intention, and I don't see where that happens.

@markshannon (Member, Author) commented Oct 13, 2023 (edited):

With this code:

    class LinkedList:
        def __init__(self, next=None, prev=None):
            self.next = next
            if next is not None:
                next.prev = self
            self.prev = prev
            if prev is not None:
                prev.next = self

    def make_ll(depth):
        head = LinkedList()
        for i in range(depth):
            head = LinkedList(head, head.prev)
        return head

    import gc
    #gc.set_threshold(2000, 2, 0)

    M = 10_000
    N = 5_000
    head = make_ll(M)
    count = M
    next_count = 1_000_000

    while True:
        newhead = make_ll(N)
        newhead.surprise = head
        count += N
        if count >= next_count:
            print(f"Count = {int(count/1_000_000)}M")
            print(gc.get_stats()[:2])
            next_count += 1_000_000

I've upped the size of the lists so that they aren't collected by the young collection.
The memory grows unless the gc.set_threshold(2000, 2, 0) line is uncommented, in which case memory stays bounded, as the incremental collector is able to keep up.

@DinoV (Contributor) commented Oct 13, 2023 (edited):

FWIW, this variation grows unbounded even with a greatly increased threshold (although slowly; I killed it after it reached 10 GB), but maybe there's some amount of auto-tuning where it would keep up. On a short run it also seems to be spending ~25% of its time in gc_collect_region, per perf record on Linux:

    import gc

    gc.set_threshold(200000, 2, 0)

    class LinkedList:
        def __init__(self, next=None, prev=None):
            self.next = next
            if next is not None:
                next.prev = self
            self.prev = prev
            if prev is not None:
                prev.next = self

    def make_ll(depth):
        head = LinkedList()
        for i in range(depth):
            head = LinkedList(head, head.prev)
        return head

    head = make_ll(10000)
    olds = []

    while True:
        newhead = make_ll(200)
        newhead.surprise = head
        olds.append(newhead)
        if len(olds) == 100:
            print('clearing')
            del olds[:]

@markshannon (Member, Author):

The first threshold just determines how often a collection is done; it shouldn't really affect whether the collector can keep up.
It is the second threshold that matters: if it is too high, the collector might not be able to keep up. It should always be able to keep up when set to 2. I'll try to investigate.
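For reference, a small example of setting those thresholds from Python; the reading of the second value is taken from the comment above and is not documented API semantics:

    import gc

    # First value: net allocations that trigger a young collection.
    # Second value (per the discussion above): how aggressively the incremental
    # scan of the old generation advances; lower values keep up better.
    gc.set_threshold(2000, 2, 0)
    print(gc.get_threshold())   # (2000, 2, 0)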

Since this program is doing nothing but producing cyclic garbage, I'm not surprised that it spends a lot of time in GC. Is it worse than the current GC?

@markshannon (Member, Author):

I am seeing much the same behavior on 3.11 and main in terms of the count of objects being collected.
Have you tried your test program on 3.11 or main?
It is possible that we are getting worse fragmentation.

@DinoV (Contributor):

> Since this program is doing nothing but producing cyclic garbage, I'm not surprised that it spends a lot of time in GC. Is it worse than the current GC?

Ahh, I hadn't compared the CPU time, and indeed the baseline GC is spending as much time in GC, so never mind about the time spent :)

@DinoV (Contributor):

> I am seeing much the same behavior on 3.11 and main in terms of the count of objects being collected. Have you tried your test program on 3.11 or main? It is possible that we are getting worse fragmentation.

I haven't been looking at the collection statistics but rather at memory usage. On the most recent program I see main staying at around 15 MB resident and the incremental GC version growing unbounded (it reached 1 GB after ~2.5 minutes, 2 GB after ~5 minutes, and over 3 GB after 10 minutes).

@markshannon (Member, Author):

With your latest example, the stats show the leak as well.
I've no idea why as yet, but I will investigate.


@DinoV (Contributor) left a review:

LGTM!

@markshannon merged commit 36518e6 into python:main on Feb 5, 2024
@markshannon deleted the incremental-gc branch on February 5, 2024 18:28
@bedevere-bot:

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot s390x Debian 3.x has failed when building commit 36518e6.

What do you need to do:

  1. Don't panic.
  2. Check the buildbot page in the devguide if you don't know what the buildbots are or how they work.
  3. Go to the page of the buildbot that failed (https://buildbot.python.org/all/#builders/49/builds/7912) and take a look at the build logs.
  4. Check if the failure is related to this commit (36518e6) or if it is a false positive.
  5. If the failure is related to this commit, please, reflect that on the issue and make a new Pull Request with a fix.

You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/49/builds/7912

Failed tests:

  • test.test_multiprocessing_spawn.test_processes
  • test.test_multiprocessing_forkserver.test_processes
  • test.test_multiprocessing_fork.test_processes

Summary of the results of the build (if available):

==

Traceback logs trimmed: git checkout output followed by "make: *** [Makefile:2099: buildbottest] Error 2".

@bedevere-bot:

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot s390x SLES 3.x has failed when building commit 36518e6.

What do you need to do:

  1. Don't panic.
  2. Check the buildbot page in the devguide if you don't know what the buildbots are or how they work.
  3. Go to the page of the buildbot that failed (https://buildbot.python.org/all/#builders/540/builds/7870) and take a look at the build logs.
  4. Check if the failure is related to this commit (36518e6) or if it is a false positive.
  5. If the failure is related to this commit, please, reflect that on the issue and make a new Pull Request with a fix.

You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/540/builds/7870

Failed tests:

  • test.test_multiprocessing_forkserver.test_processes
  • test.test_multiprocessing_fork.test_processes

Summary of the results of the build (if available):

==

Traceback logs trimmed: git checkout output followed by "make: *** [Makefile:2096: buildbottest] Error 2".

@bedevere-bot:

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot s390x RHEL7 3.x has failed when building commit 36518e6.

What do you need to do:

  1. Don't panic.
  2. Check the buildbot page in the devguide if you don't know what the buildbots are or how they work.
  3. Go to the page of the buildbot that failed (https://buildbot.python.org/all/#builders/179/builds/6519) and take a look at the build logs.
  4. Check if the failure is related to this commit (36518e6) or if it is a false positive.
  5. If the failure is related to this commit, please, reflect that on the issue and make a new Pull Request with a fix.

You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/179/builds/6519

Failed tests:

  • test.test_multiprocessing_spawn.test_processes
  • test.test_multiprocessing_forkserver.test_processes
  • test.test_multiprocessing_fork.test_processes

Summary of the results of the build (if available):

==

Traceback logs trimmed: git checkout output and several compiler warnings (-Wmaybe-uninitialized in Objects/unicodeobject.c, -Wmissing-braces in Python/instrumentation.c and Modules/_xxinterpchannelsmodule.c), ending with "make: *** [buildbottest] Error 2".

@bedevere-bot:

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot AMD64 Debian root 3.x has failed when building commit 36518e6.

What do you need to do:

  1. Don't panic.
  2. Check the buildbot page in the devguide if you don't know what the buildbots are or how they work.
  3. Go to the page of the buildbot that failed (https://buildbot.python.org/all/#builders/345/builds/7026) and take a look at the build logs.
  4. Check if the failure is related to this commit (36518e6) or if it is a false positive.
  5. If the failure is related to this commit, please, reflect that on the issue and make a new Pull Request with a fix.

You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/345/builds/7026

Failed tests:

  • test.test_multiprocessing_spawn.test_processes
  • test.test_multiprocessing_forkserver.test_processes

Summary of the results of the build (if available):

==

Traceback logs trimmed: git checkout output, a configure warning that pkg-config is missing, and "make: *** [Makefile:2095: buildbottest] Error 2".

@bedevere-bot:

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot AMD64 FreeBSD 3.x has failed when building commit 36518e6.

What do you need to do:

  1. Don't panic.
  2. Check the buildbot page in the devguide if you don't know what the buildbots are or how they work.
  3. Go to the page of the buildbot that failed (https://buildbot.python.org/all/#builders/1223/builds/1847) and take a look at the build logs.
  4. Check if the failure is related to this commit (36518e6) or if it is a false positive.
  5. If the failure is related to this commit, please, reflect that on the issue and make a new Pull Request with a fix.

You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/1223/builds/1847

Failed tests:

  • test.test_multiprocessing_forkserver.test_processes
  • test.test_multiprocessing_spawn.test_processes
  • test.test_multiprocessing_fork.test_processes

Summary of the results of the build (if available):

==

Traceback logs trimmed: git checkout output only.

@markshannon restored the incremental-gc branch on February 6, 2024 18:20
@markshannon deleted the incremental-gc branch on February 6, 2024 18:20
@vstinner (Member):

See issue gh-115124: AMD64 Windows11 Bigmem 3.x: test_bigmem failed with a !_Py_IsImmortal(FROM_GC(gc)) assertion error. PR #114931 or PR #108038 caused a regression.

@vstinner (Member):

See issue gh-115127: multiprocessing test_thread_safety() fails with a "gc_list_is_empty(to) || gc_old_space(to_tail) == gc_old_space(from_tail)" assertion error.

markshannon added a commit to faster-cpython/cpython that referenced this pull request on Feb 7, 2024
@markshannon restored the incremental-gc branch on February 7, 2024 09:55
pablogsal pushed a commit that referenced this pull request on Feb 7, 2024
fsc-eriker pushed a commit to fsc-eriker/cpython that referenced this pull request on Feb 14, 2024
fsc-eriker pushed a commit to fsc-eriker/cpython that referenced this pull request on Feb 14, 2024