ci: pytest artifact upload #19466


Closed

Conversation

@andrzejnovak (Contributor) commented Feb 6, 2021 (edited)

Collects result_images from failed tests and uploads them as artifacts on GHA. Filtering was necessary because the full set is ~50 MB, which is not practical for downloading and inspecting anyway.

https://github.com/matplotlib/matplotlib/actions/runs/543223044

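For illustration, a minimal Python sketch of the filtering idea described above (the PR itself does this with a bash step in the workflow; the "-failed-diff.png" suffix follows matplotlib's image-comparison naming, while the artifact directory name here is made up):

```python
# Sketch: copy only the images belonging to failed comparisons out of
# result_images so they can be uploaded as a small CI artifact.
# Assumes matplotlib's convention of writing "<name>-failed-diff.png"
# alongside the actual and expected images when a comparison fails;
# "failed_result_images" is an arbitrary name for the artifact directory.
import shutil
from pathlib import Path

SRC = Path("result_images")
DST = Path("failed_result_images")

def collect_failed(src: Path = SRC, dst: Path = DST) -> int:
    copied = 0
    for diff in src.rglob("*-failed-diff.png"):
        stem = diff.name[: -len("-failed-diff.png")]
        # Grab the diff plus the actual/expected images for the same test.
        for candidate in diff.parent.glob(stem + "*"):
            target = dst / candidate.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(candidate, target)
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"copied {collect_failed()} files")
```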
andrzejnovak force-pushed the pytestart branch 6 times, most recently from d49e887 to ef37dc3 on February 6, 2021 at 14:49
@henryiii commented

@andrzejnovak implemented this in mplhep and it is really useful to be able to see what failed!


@QuLogic (Member) commented

I like the idea, but instead of writing it in bash hidden in the workflow, I would modify tools/visualize_tests.py. You could add an --only-failed option to only include the failed images, and a --output option that would copy the relevant files into a subdirectory that could be used as the artifact.
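
A rough sketch of what those proposed options might look like; the --only-failed and --output flags are only QuLogic's suggestion here, not existing visualize_tests.py arguments, and this simplified version only copies files rather than building the HTML overview the real script produces:

```python
# Hypothetical CLI additions to tools/visualize_tests.py, per the
# suggestion above. Neither flag existed in the actual script; this
# only demonstrates the proposed subsetting, not the HTML generation.
import argparse
import shutil
from pathlib import Path

def main() -> None:
    parser = argparse.ArgumentParser(description="Visualize test result images")
    parser.add_argument("--only-failed", action="store_true",
                        help="include only images from failed comparisons")
    parser.add_argument("--output", type=Path, default=None,
                        help="copy the selected images into this directory, "
                             "e.g. to upload it as a CI artifact")
    args = parser.parse_args()

    root = Path("result_images")
    # matplotlib writes "<name>-failed-diff.png" when a comparison fails.
    pattern = "*-failed-diff.png" if args.only_failed else "*.png"
    if args.output is not None:
        for img in root.rglob(pattern):
            target = args.output / img.relative_to(root)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(img, target)

if __name__ == "__main__":
    main()
```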

@andrzejnovak (Contributor, Author) commented

I see how that would work, but that script seems to be centred on something else. It would actually be very convenient if GH allowed uploading the HTML file directly as an artifact, but since it seems to always produce zips, having the plots in a web page would just add extra steps.

I find it a bit confusing, since pytest-mpl has a very convenient --mpl-results-path option, but I wasn't able to pass it here.

@jklymak (Member) commented

@QuLogic did you want those changes definitively, or was that just an idea? Gathering the failed images seems helpful to me.


jklymak requested a review from QuLogic on May 15, 2021 at 15:25
@QuLogic (Member) commented

We already have tools to determine which files are relevant in tools/triage_tests.py and tools/visualize_tests.py. I'd rather not have a third copy of this subsetting that may fall out of sync if things change, because no one is going to notice that the CI version is wrong until a test result is broken, which may be long after said change. Even the scripts in tools break periodically because no one runs them as often as the tests.

> Filtering was necessary because the full set is ~50 MB, which is not practical for downloading and inspecting anyway.

So I disagree that filtering is necessary, because once downloaded, we can run the above tools on them to whittle them down to the important stuff. I'd rather we just upload the entire directory.

@andrzejnovak (Contributor, Author) commented May 21, 2021 (edited)

@QuLogic I think that's a somewhat different use case. I might want to run the tools you linked if I ran the tests locally, and I suppose one could also cross-check remotes by downloading the outputs from GHA and visualizing them as one would local tests.

On the other hand, publishing only the failed images lets one quickly download the failed results (because it's only the affected files) and inspect what's wrong by eye, without requiring any extra setup.

I suppose you'll notice something broke the next time there's a PR with a failed image comparison test (which seems to happen often enough :) ).

@jklymak (Member) commented

I guess I'm somewhat ambivalent about this. If I know a test fails, I usually need to debug it locally anyway, so getting the failed diff remotely is rarely useful. Occasionally there is something I can't repro locally, though...

@jklymak (Member) commented

@andrzejnovak Thanks a lot for this. If the tests fail, we now upload all the artifacts to GitHub Actions, so this PR has been superseded. Sorry we didn't catch that when it happened! Thanks...

@andrzejnovak (Contributor, Author) commented

@jklymak no worries, glad the functionality made it in one way or another :)

Reviewers: @QuLogic (awaiting requested review)

Participants: @andrzejnovak, @henryiii, @QuLogic, @jklymak
