[Serializer] Prevent "Cannot traverse an already closed generator" error by materializing Traversable input #60260
Conversation
mtarld left a comment

Hey @santysisi, thanks for this PR! Indeed, I'm a bit worried about the memory usage. Could you give it a try with more complex and filled rows? At least, I think this behavior should be opt-in (using a context option) to let applications that need to be really memory-efficient keep working properly.
santysisi left a comment

Hi @mtarld, thanks for your comment 😄, I really appreciate it! I agree with your concern, and I think it's a really good idea. Yes, I can give it a try with more complex and filled rows. I'd be happy to do that if you want! 😄
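For reference, the opt-in behavior suggested above might be used roughly like this; the context option name is invented for illustration and is not part of any released API:

```php
use Symfony\Component\Serializer\Encoder\CsvEncoder;

$encoder = new CsvEncoder();

$rows = (static function (): \Generator {
    yield ['id' => 1];
    yield ['id' => 2];
})();

// Hypothetical context option (name invented for illustration): only
// materialize Traversable input when the caller explicitly opts in.
$csv = $encoder->encode($rows, 'csv', [
    'materialize_traversable' => true,
]);
```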
GromNaN left a comment (edited)
LGTM after some minor changes.
> Indeed, I'm a bit worried about the memory usage. Could you give it a try with more complex and filled rows?
> At least, I think this behavior should be opt-in (using a context option) to let applications that need to be really memory-efficient keep working properly.
@mtarld the `$data` is iterated recursively multiple times during the encoding, so all the data must be in memory to find the column names. If the iterator were able to re-generate each item on demand, the memory usage could be lower, but the CPU or network would be more stressed.
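To make the two-pass problem concrete, here is a minimal sketch with toy data (not the CsvEncoder source): header discovery consumes the generator, so the second pass throws.

```php
// Toy illustration: CSV headers are the union of keys across all rows, so
// the input is walked once to discover columns and again to write values.
$data = (static function (): \Generator {
    yield ['a' => 1, 'b' => 2];
    yield ['a' => 3, 'c' => 4];
})();

$headers = [];
foreach ($data as $row) { // first pass: discovers columns a, b, c
    $headers += array_flip(array_keys($row));
}

foreach ($data as $row) {
    // second pass: the generator is already consumed, so the foreach
    // throws "Cannot traverse an already closed generator"
}
```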
src/Symfony/Component/Serializer/Tests/Encoder/CsvEncoderTest.php (outdated; resolved)
santysisi force-pushed the branch from c6d056d to a159cbd.
@GromNaN, indeed you're right; I didn't see that about the column names. Then LGTM after a minor suggestion.
```php
        CSV, $this->encoder->encode($data, 'csv'));
    }

    public function provideIterable()
```
Suggested change:

```diff
-    public function provideIterable()
+    public static function provideIterable(): iterable
```

(Recent PHPUnit versions require data providers to be static, and the `iterable` return type makes the provider's contract explicit.)
santysisi force-pushed the branch from a159cbd to c7206a9.
Thank you @santysisi.

Merged 1fb9d17 into symfony:6.4.
✅ Pull Request description:
This PR addresses the issue reported in the linked ticket, specifically the error:
Cannot traverse an already closed generator
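A minimal reproduction (row data invented for illustration):

```php
use Symfony\Component\Serializer\Encoder\CsvEncoder;

$encoder = new CsvEncoder();

$rows = (static function (): \Generator {
    yield ['name' => 'foo'];
    yield ['name' => 'bar'];
})();

// Before this fix, the encoder traverses $rows more than once internally,
// so the call fails with "Cannot traverse an already closed generator".
echo $encoder->encode($rows, 'csv');
```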
The fix involves converting `Traversable` input into an array using `iterator_to_array($data)`, in order to allow multiple traversals of generator-based inputs.

At first glance, it might seem that this approach significantly increases memory usage, since all generator values are stored in memory. However, this is not the case: in the current logic, the following loop already forces all values from the generator into memory:
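The loop itself is not reproduced in this excerpt; paraphrased from `CsvEncoder::encode()` (not the verbatim source), it flattens every row up front:

```php
// Paraphrase, not the verbatim Symfony code: $keySeparator and
// $escapeFormulas come from the encoding context. Every row is flattened
// and kept, so the whole input ends up in memory either way.
$flattenedRows = [];
foreach ($data as $row) {
    $flattened = [];
    $this->flatten($row, $flattened, $keySeparator, '', $escapeFormulas);
    $flattenedRows[] = $flattened;
}
```

Given that, the fix amounts to materializing the input up front, along these lines:

```php
if ($data instanceof \Traversable) {
    $data = iterator_to_array($data);
}
```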
Therefore, materializing the generator with `iterator_to_array()` does not increase peak memory usage in any meaningful way.

As an example, here's the comparison of peak memory usage (measured with `memory_get_peak_usage()`) when processing an array of 1 million integers:
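(The measured comparison itself is not reproduced in this excerpt. A harness along these lines, with invented names and row shape, could take the measurement; run each mode in a separate process so the peaks are not shared:)

```php
use Symfony\Component\Serializer\Encoder\CsvEncoder;

// Usage: php bench.php array
//        php bench.php generator   (requires the patched encoder)
$mode = $argv[1] ?? 'array';

$makeRows = static function (): \Generator {
    for ($i = 1; $i <= 1_000_000; ++$i) {
        yield ['value' => $i];
    }
};

$input = 'generator' === $mode ? $makeRows() : iterator_to_array($makeRows());

(new CsvEncoder())->encode($input, 'csv');

printf("%s input: %d bytes peak\n", $mode, memory_get_peak_usage(true));
```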
The difference is negligible, confirming that the fix does not introduce a meaningful performance cost in terms of memory.