ci: pytest artifact upload #19466
Conversation
Compare d49e887 to ef37dc3

henryiii commented Feb 8, 2021
@andrzejnovak implemented this in mplhep and it is really useful to be able to see what failed!
I like the idea, but instead of writing it in bash hidden in the workflow, I would modify
I see how that would work, but that script seems to be really centred on something else. It would actually be very convenient if GH allowed uploading the HTML file directly as an artifact, but since it seems to always produce zips, having the plots in a web page would just be extra steps. I think I find it a bit confusing since
@QuLogic did you want those changes definitely, or was that just an idea? Gathering the failed images seems helpful to me.
We already have tools to determine which files are relevant in
So I disagree that filtering is necessary, because once downloaded, we can run the above tools on them to whittle them down to the important stuff. I'd rather we just upload the entire directory.
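Uploading the entire directory as suggested could be done as a plain workflow step rather than bash hidden in the workflow. A minimal sketch (step name and artifact name are illustrative, not taken from this PR) using the standard actions/upload-artifact action:

```yaml
# Runs only when an earlier step in the job has failed,
# and uploads the whole result_images directory unfiltered.
- name: Upload result images on failure
  if: failure()
  uses: actions/upload-artifact@v4
  with:
    name: result-images
    path: result_images/
```

GitHub Actions always wraps artifacts in a zip for download, which is the behaviour discussed above.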
andrzejnovak commented May 21, 2021 (edited)
@QuLogic I think that's somewhat of a different use case. I might want to run the tools you linked if I ran the tests locally, and I suppose one could want to cross-check remotes by downloading the outputs from GHA and visualizing them as one would local tests. On the other hand, publishing only the failed images lets one quickly (because it's only the affected files) download the failed result and inspect what's wrong by eye, without requiring any extra setup. I suppose you'll notice something broke next time there's a PR with a failed image comparison test (which seems to happen often enough :) ).
I guess I'm somewhat ambivalent about this. If I know a test fails, I usually need to debug it locally anyway, so getting the failed diff remotely is rarely useful. Occasionally there is something I can't reproduce locally, though...
@andrzejnovak Thanks a lot for this. If the tests fail, we now upload all the artifacts to GitHub Actions artifacts, so this PR has been superseded. Sorry we didn't catch that when it happened! Thanks...
@jklymak no worries, glad the functionality made it in one way or another :)
Collects result_images from failed tests and uploads them as artifacts on GHA. Filtering was necessary because the full set is ~50MB, which is not practical for downloading and inspecting anyway. https://github.com/matplotlib/matplotlib/actions/runs/543223044
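The filtering step described above could be sketched roughly as follows. This is a hypothetical helper, not the PR's actual implementation; it assumes matplotlib's image-comparison layout, where a failing test leaves a `*-failed-diff.png` beside the actual and expected images in `result_images/`.

```python
"""Sketch: copy only the images belonging to failed comparison tests.

Assumption: a failed test produces ``<name>-failed-diff.png`` next to
``<name>.png`` (actual) and ``<name>-expected.png`` in result_images/.
Function and directory names here are illustrative.
"""
import shutil
from pathlib import Path


def collect_failed(src="result_images", dest="failed_images"):
    """Copy the diff, actual, and expected images of each failed test
    from *src* into *dest*, preserving the directory layout."""
    src, dest = Path(src), Path(dest)
    copied = []
    for diff in src.rglob("*-failed-diff.png"):
        stem = diff.name[: -len("-failed-diff.png")]
        # Grab every file sharing the failed test's stem
        # (diff + actual + expected; a prefix match is good
        # enough for a sketch, though it could over-match).
        for f in diff.parent.glob(stem + "*"):
            target = dest / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(target)
    return copied
```

Copying only these files keeps the artifact small, which is the point of the filtering: the full result_images tree is ~50MB, while the failed subset is typically a handful of images.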