Make network based tests as concurrent as possible #4271


Closed
harshil21 wants to merge 6 commits into master from overhaul-tests

Conversation

harshil21
Member

@harshil21 commented May 22, 2024 (edited)

Ideas implemented:

  • ... WIP

Benchmarks:

  • WIP

@harshil21 added the ⚙️ tests (affected functionality: tests) label on May 22, 2024
@harshil21 marked this pull request as draft on May 22, 2024 21:03
@Bibo-Joshi
Member

  • Introduce pytest-socket and completely disable network access for *WithoutRequest tests

Just a question for my understanding: is this intended to surface any tests that should instead go to *WithRequest?
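
For context, a minimal sketch of how pytest-socket could enforce that split from conftest.py; the class-name check is only an assumption about how *WithoutRequest tests would be detected, not the wiring used in this PR:

import pytest
from pytest_socket import disable_socket, enable_socket


def pytest_runtest_setup(item):
    # Assumption: the network-free half of the split lives in Test*WithoutRequest classes.
    test_class = item.getparent(pytest.Class)
    if test_class is not None and test_class.name.endswith("WithoutRequest"):
        disable_socket()  # any socket use in these tests now raises SocketBlockedError
    else:
        enable_socket()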

  • Extend that naming scheme to telegram.ext tests

Instead of splitting test classes for tg.ext, I would rather just disable network access for the tg.ext tests completely and mock every interaction with TG.
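
As a sketch of that mocking approach (not how the test suite actually does it), a Bot API call can be stubbed at the class level so an ext test never opens a connection; the fake User and the choice of get_me are placeholders:

import pytest
from telegram import Bot, User


@pytest.fixture
def offline_bot(monkeypatch):
    # Patch the class so no HTTP request is ever sent; the returned User is made up.
    async def fake_get_me(self, *args, **kwargs):
        return User(id=1, first_name="TestBot", is_bot=True, username="test_bot")

    monkeypatch.setattr(Bot, "get_me", fake_get_me)
    return Bot(token="12345:TEST")  # dummy token, never used for a real request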

  • Introduce pyfakefs to the test suite

Which problem does this solve?

  • Check if using uvloop in tests provides any measurable speed-up.

Sounds like an interesting thing to check out; however, I see great value in testing on the default setups on the different OSes.

  • Change CI's pytest --dist to worksteal.
  • Isolate tests from other tests
  • Continue above point and introduce pytest-randomly (need to check if pytest-xdist effectively does the job of pytest-randomly)

Not sure if I see a notable benefit in this, but I have no objection if you can manage to work around test cases like sticker_set that currently rely on the order :D
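
To sketch what isolating such a case could look like (the fixture below and the two helper coroutines are hypothetical, not existing test-suite code): each test creates and tears down its own state instead of relying on an earlier test having created it, so random ordering or work stealing cannot break the chain.

import pytest


@pytest.fixture
async def own_sticker_set(bot):
    # Hypothetical helpers: every test gets a freshly created set and cleans it up,
    # so no test depends on what another test left behind.
    sticker_set = await create_throwaway_sticker_set(bot)
    yield sticker_set
    await delete_throwaway_sticker_set(bot, sticker_set)


async def test_add_sticker_to_set(bot, own_sticker_set):
    ...  # operate on own_sticker_set regardless of which tests ran before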

  • Profile all tests and see what clogs up and fix that.
  • solve the TODO in test_official.exceptions
  • dirty-equals sounds like it could be quite useful for us!

sounds interesting :) feel free to try it out!

@harshil21
Member (Author)

question for my understanding: is this intended to surface any tests that should instead go to *WithRequest?

Yes, and also there are currently some tests in there which don't access the network, but whose fixtures do access the network.

Instead of splitting test classes for tg.ext, I would rather just disable network access for the tg.ext tests completely and mock every interaction with TG.

Good idea. Ideally one should be able to run the tests locally without a network connection (<1 min with pytest-xdist) and be sure that the ones which do use the network pass.

Which problem does this solve?

Hm, it is supposed to be faster (since there's no disk I/O) and safer, but we don't do much I/O, so it's probably not worth it.
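
For reference, this is the kind of usage pyfakefs enables: its pytest plugin exposes an fs fixture that swaps the real filesystem for an in-memory fake, so file-based tests never touch the disk (illustrative example, not code from this PR):

def test_writes_file(fs):  # 'fs' is provided by the pyfakefs pytest plugin
    fs.create_dir("/fake/dir")
    path = "/fake/dir/data.txt"
    with open(path, "w") as f:
        f.write("hello")
    with open(path) as f:
        assert f.read() == "hello"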

Not sure if I see a notable benefit in this, but I have no objection if you can manage to work around test cases like sticker_set that currently rely on the order :D

it is one of the possible reasons for the flakiness, and it is good practice to keep tests isolated from each other

@Bibo-Joshi
Member

question for my understanding: is this intended to surface any tests that should instead go to *WithRequest?

Yes, and also there are currently some tests in there which don't access the network, but whose fixtures do access the network.

👍

Instead of splitting test classes for tg.ext, I would rather just disable network access for the tg.ext tests completely and mock every interaction with TG.

Good idea. Ideally one should be able to run the tests locally without a network connection (<1 min with pytest-xdist) and be sure that the ones which do use the network pass.

👍

Which problem does this solve?

Hm, it is supposed to be faster (since there's no disk I/O) and safer, but we don't do much I/O, so it's probably not worth it.

I was at least wondering, since we e.g. already use pytest's tmp_path fixture.

Not sure if I see a notable benefit in this, but I have no objection if you can manage to work around test cases like sticker_set that currently rely on the order :D

it is one of the possible reasons for the flakiness, and it is good practice to keep tests isolated from each other

👍

Member

@Bibo-Joshi left a comment


Hey Harshil, I had a look at the changes that are here so far.

What would you think of converting the wishlist in the comments into an issue and addressing the points in multiple separate PRs? That would give us the benefit of the individual points earlier, probably make reviewing easier, and would also make it easier to distribute the open points onto different people :)

@@ -180,6 +180,23 @@ async def tz_bot(timezone, bot_info):
    return default_bot


@pytest.fixture(scope="session")
async def tz_bots(tzinfos, bot_info):
Member


  1. I recommend adding a comment in this fixture about why we use a list here instead of properly parametrizing (speed)
  2. do we still need the tz_bot fixture for anything else, or can it be removed, then?
  3. Can we make sure that the tests at least point out the timezone on failure?
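
For illustration, points 1 and 3 could look roughly like this; the Defaults/make_bot construction mirrors the existing tz_bot fixture, while the consuming test, do_something_with, and expected_for are hypothetical placeholders:

import pytest
from telegram.ext import Defaults


@pytest.fixture(scope="session")
async def tz_bots(tzinfos, bot_info):
    # One bot per timezone, built in a single list instead of parametrizing the
    # fixture: parametrization would multiply the bot setup cost, while a plain
    # list keeps it to one pass. (make_bot is the test suite's existing bot factory.)
    return [make_bot(bot_info, defaults=Defaults(tzinfo=tz)) for tz in tzinfos]


async def test_something_with_all_timezones(tz_bots, tzinfos):
    for tz, tz_bot in zip(tzinfos, tz_bots):
        result = do_something_with(tz_bot)  # hypothetical call under test
        # Include the timezone so a failure still identifies which case broke:
        assert result == expected_for(tz), f"failed for timezone {tz}"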

Member (Author)

@harshil21 commented Jun 18, 2024 (edited)


do we still need the tz_bot fixture for anything else, or can it be removed, then?

we do still use it in the NonRequest tests, where the time penalty is negligible.

Can we make sure that the tests at least point out the timezone on failure?

that would be hard to do without making it even more complex, but let me have another look

@@ -309,6 +326,18 @@ def tzinfo(request):
    return BasicTimezone(offset=datetime.timedelta(hours=hours_offset), name=request.param)


@pytest.fixture(scope="session")
def tzinfos():
Member


what's the benefit of having this as a fixture? IIUC it's used only in tz_bots and could also just be hardcoded there?

Member (Author)


mm yeah, a bit of a premature optimization at this stage of the PR, but yes, I'll keep it in mind

@harshil21
Member (Author)

What would you think of converting the wishlist in the comments into an issue and addressing the points in multiple separate PRs? That would give us the benefit of the individual points earlier, probably make reviewing easier, and would also make it easier to distribute the open points onto different people :)

Okay, though some of the points might be better done sequentially rather than concurrently; otherwise one would have to rewrite and resolve many conflicts. I constantly have to test and rewrite many times before I get the iteration right. Anyway, it's probably for the best that more than one person works on this.


@harshil21 changed the title from "Improve tests structure, performance, and flakiness" to "Make network based tests as concurrent as possible" on Jun 26, 2024
@harshil21
Member (Author)

Created a new issue for the other points. I will have to rebase this branch and remove the commits which are not relevant to the purpose of this PR.

@Bibo-Joshi
Member

@harshil21 I'd like to close this PR. #4488, #4489 and #4491 covered most of it.
What's not covered is converting the tz_bot fixture to a list. Since #4491 and #4493 take away much of the performance penalty of that fixture, I'd prefer to keep the tests well separated.

@harshil21
Member (Author)

fair enough.

Sorry I couldn't complete it over several months.

@github-actions bot locked and limited conversation to collaborators on Oct 3, 2024
Reviewers

@Bibo-Joshi left review comments

Assignees
No one assigned
Labels
⚙️ tests (affected functionality: tests)
Projects
None yet
Milestone
No milestone
Development

2 participants
@harshil21, @Bibo-Joshi
