tests: drop dependency on external mock module for newer python #574


Merged
ambv merged 3 commits into html5lib:master from eli-schwartz:tests-mock-dep on Jan 10, 2024

Conversation

eli-schwartz
Contributor

Starting with Python 3.3, this is in the stdlib.

Fixes #541
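
For context, a minimal sketch of the kind of change involved, assuming a test module that previously imported Mock from the external package; the test below is illustrative and not taken from the PR diff:

    # Previously: "from mock import Mock", a third-party dependency.
    # unittest.mock provides the same class in the standard library since Python 3.3.
    from unittest.mock import Mock

    def test_uses_a_mock():
        tree = Mock()
        tree.getDocument.return_value = "document"
        assert tree.getDocument() == "document"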

@cclauss
Contributor

Please rebase so the tests pass.

@eli-schwartz
Contributor (Author)

All things considered, I'd rather wait until I have core maintainer review and then rebase once, instead of potentially many times.

@cclauss
Contributor

If you were a core maintainer, would you feel more comfortable about reviewing a PR with failing tests (current state) or passing tests? https://ci.appveyor.com/project/gsnedders/html5lib-python/history

@cclauss (Contributor) commented on the diff, Jan 10, 2024 (edited):

    try:
        from unittest.mock import Mock
    except ImportError:
        from mock import Mock
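
For reference, whichever of the two imports above succeeds, the resulting Mock class behaves the same, so the fallback is only needed while interpreters without unittest.mock (pre-3.3) must still be supported. A small illustrative example of typical Mock usage in a test; the serializer name is hypothetical and not taken from the html5lib test suite:

    from unittest.mock import Mock

    def test_mock_records_calls():
        # A Mock returns its configured value and remembers how it was called.
        serializer = Mock()
        serializer.render.return_value = "<p>hi</p>"

        assert serializer.render("fragment") == "<p>hi</p>"
        serializer.render.assert_called_once_with("fragment")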

@eli-schwartz
Contributor (Author)

If I were a maintainer, and:

  • my tests don't pass because the test runner claims there are fatal errors in the Windows registry
  • the issue has been ongoing for a very long time

I would not appreciate external drive-by contributors pressuring other external contributors into feeling guilty about my project, which doesn't have passing tests in the first place.

But also, as it happens, yes, I absolutely would review PRs with failing tests! Even if the failing test is a flaw in the contribution itself.

I do this all the time. Then I offer suggestions to the contributors, who may not be fully aware of deeply hidden quirks of the codebase that require specialized knowledge to handle correctly, knowledge which I can provide in order to aid those contributors in getting their PR into a mergeable state.

Sometimes I, gasp, merge PRs with failing CI because the PR fixes one CI error but not all of them. CI will never be fixed if you're not willing to fix issues one by one; if you expect it all to be fixed in one fell swoop, the task becomes so intimidating that everyone just gives up and it never gets fixed.

Sometimes I review a PR and say "lgtm as soon as CI passes, please rebase", so that contributors can feel confident that they're not being ignored, and that if they rebase the PR now then they won't be asked to rebase it once a month for the next year only to have it never merged.

In short, yes, I review PRs without regard to whether they have passing CI, because I do not see what passing CI has to do with anything at the review stage.

@ambv merged commit 82c2599 into html5lib:master on Jan 10, 2024
Reviewers

@ambv approved these changes

@cclauss approved these changes

Assignees
No one assigned
Labels
None yet
Projects
None yet
Milestone
No milestone
Development

Successfully merging this pull request may close these issues.

use unittest.mock instead of mock
3 participants
@eli-schwartz, @cclauss, @ambv
