Description
PR #31696 attempts to fix the "leak" of file descriptors when the iterator is not exhausted. That PR silences the warning, but not the underlying issue: the files aren't closed until the next tracing garbage-collection cycle.
Note that there isn't truly a leak of file descriptors: they are eventually closed when the file object is finalized during cyclic garbage collection. The point of the ResourceWarning (in my understanding) is that waiting until the next garbage-collection cycle means you may temporarily have many unwanted open file descriptors, which could exhaust the per-process limit or prevent successful writes to those files on Windows.
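The behavior above can be observed directly. This is a small demo (my own sketch, not part of the original report) that counts this process's open descriptors via the Linux-specific `/proc/self/fd` to show that abandoned iterators hold the file open until the cycle collector runs:

```python
import gc
import os
import tempfile
import xml.etree.ElementTree as ET

def open_fds():
    # Linux-specific: count this process's open file descriptors.
    # (An assumption used only for this demo.)
    return len(os.listdir("/proc/self/fd"))

gc.disable()
try:
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write("<document />")
        name = f.name

    before = open_fds()
    # Each call opens the file immediately, even before iteration starts.
    iters = [ET.iterparse(name) for _ in range(5)]
    leaked = open_fds() - before   # descriptors held by the live iterators
    del iters                      # abandon them without exhausting them
    gc.collect()                   # cyclic GC finalizes the file objects
    after = open_fds()
finally:
    gc.enable()
    os.unlink(name)
```

With the collector disabled, `leaked` is positive while the iterators are alive, and only after `gc.collect()` does the descriptor count return to its baseline.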
```python
# run with ulimit -Sn 1000
import xml.etree.ElementTree as ET
import tempfile
import gc

gc.disable()

def run():
    with tempfile.NamedTemporaryFile("w") as f:
        f.write("<document />junk")
        for i in range(10000):
            it = ET.iterparse(f.name)
            del it

run()
```
On my system, after lowering the file descriptor limit to 1000 (via `ulimit -Sn 1000`) I get:
```
OSError: [Errno 24] Too many open files: '/tmp/tmpwwmd9gp6'
```
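One way to sidestep the problem today (my suggestion, not from the report) is to pass `iterparse` an already-open binary file object instead of a path name. In that case `iterparse` does not open the file itself, so the descriptor's lifetime is controlled by the caller's `with` block rather than by the garbage collector:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write("<document />")
    name = f.name

with open(name, "rb") as source:
    it = ET.iterparse(source)
    del it  # abandoning the iterator is fine: the with block closes the fd

os.unlink(name)
```

This keeps descriptor cleanup deterministic even when the iterator is never exhausted.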