Skipping MCAD CPU Preemption Test #696


Open

Fiona-Waters wants to merge 29 commits into project-codeflare:main from Fiona-Waters:skip-flaky-e2e

Conversation

@Fiona-Waters (Contributor)

Skipping the MCAD CPU Preemption Test, which is failing intermittently on PRs, so that we can get some outstanding PRs merged.

@openshift-ci

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign anishasthana for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ronensc

For future reference, the root cause analysis of the test's failure has been conducted by @dgrove-oss, and it can be found here:
#691 (comment)

@Fiona-Waters (Contributor, Author)

> For future reference, the root cause analysis of the test's failure has been conducted by @dgrove-oss, and it can be found here: #691 (comment)

Thanks @ronensc, that's good to know!

@dgrove-oss

I don't think it's worth backporting, but I did redo these test cases for MCAD v2 to be robust against different cluster sizes in project-codeflare/mcad#83.

@Fiona-Waters (Contributor, Author)

More investigation is required as to why these tests are failing. Closing this PR.

@asm582 removed the request for review from metalcycling on December 17, 2023, 20:42
@KPostOffice (Contributor) left a comment

Looks good. I like the move to more generic tests. One question.

```go
// aw := createDeploymentAWwith550CPU(context, appendRandomString("aw-deployment-2-550cpu"))
cap := getClusterCapacitycontext(context)
resource := cpuDemand(cap, 0.275).String()
aw := createGenericDeploymentCustomPodResourcesWithCPUAW(
```
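For context, the body of `cpuDemand` is not shown in this excerpt. A minimal sketch of what such a helper could look like, assuming MCAD's `clusterstateapi.Resource` exposes its aggregate CPU as a `MilliCPU` float (the import path and field name are assumptions, not confirmed by this diff):

```go
import (
	"k8s.io/apimachinery/pkg/api/resource"

	clusterstateapi "github.com/project-codeflare/multi-cluster-app-dispatcher/pkg/controller/clusterstate/api"
)

// cpuDemand scales the cluster's aggregate CPU capacity by a fraction and
// returns it as a Kubernetes resource quantity (sketch; the real helper in
// this PR may differ).
func cpuDemand(capacity *clusterstateapi.Resource, fraction float64) *resource.Quantity {
	milli := int64(fraction * capacity.MilliCPU) // assumes MilliCPU holds millicores as float64
	return resource.NewMilliQuantity(milli, resource.DecimalSI)
}
```

Sizing each AppWrapper at a fixed fraction of capacity (0.275 here) keeps the preemption scenario proportional to whatever cluster the e2e suite runs on, which is the "more generic tests" improvement noted above.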
@KPostOffice (Contributor) commented on Dec 19, 2023 (edited)

What happens if the cluster has many smaller nodes, resulting in a high `cap` but an inability to schedule AppWrappers because they do not fit on the individual nodes? Do we care about that at all in this test case?

Member

From a test case perspective, the cluster is assumed to have homogeneous nodes, and it requests deployments that fit on a node in the cluster in the CPU dimension.
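If node heterogeneity ever did matter, the test could also assert that the per-pod demand fits on at least one individual node rather than only within aggregate capacity. A hypothetical guard, not part of this PR:

```go
import v1 "k8s.io/api/core/v1"

// fitsOnSomeNode reports whether a pod demanding cpuMilli millicores fits
// within the allocatable CPU of at least one schedulable node. Hypothetical
// helper; this PR instead assumes homogeneous nodes.
func fitsOnSomeNode(nodes []v1.Node, cpuMilli int64) bool {
	for _, node := range nodes {
		if node.Spec.Unschedulable {
			continue // skip cordoned nodes
		}
		if node.Status.Allocatable.Cpu().MilliValue() >= cpuMilli {
			return true
		}
	}
	return false
}
```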

@Fiona-Waters (Contributor, Author) left a comment

This looks great, so I'm happy to move forward with this improvement. Just a couple of small comments.


```go
func getClusterCapacitycontext(context *context) *clusterstateapi.Resource {
	capacity := clusterstateapi.EmptyResource()
	nodes, _ := context.kubeclient.CoreV1().Nodes().List(context.ctx, metav1.ListOptions{})
```
@Fiona-Waters (Contributor, Author)

We should handle the error here.
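One way the error could be surfaced is with the Gomega assertion style used elsewhere in these e2e tests. A sketch; the node-summing body is truncated in the excerpt above, so the loop here assumes MCAD's `clusterstateapi.NewResource` and `Resource.Add` helpers:

```go
func getClusterCapacitycontext(context *context) *clusterstateapi.Resource {
	capacity := clusterstateapi.EmptyResource()
	nodes, err := context.kubeclient.CoreV1().Nodes().List(context.ctx, metav1.ListOptions{})
	Expect(err).NotTo(HaveOccurred()) // fail fast instead of discarding the error
	for _, node := range nodes.Items {
		if node.Spec.Unschedulable {
			continue // exclude nodes that cannot accept pods
		}
		capacity.Add(clusterstateapi.NewResource(node.Status.Allocatable))
	}
	return capacity
}
```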

```go
podList, err := context.kubeclient.CoreV1().Pods("").List(context.ctx, metav1.ListOptions{FieldSelector: labelSelector})
// TODO: when no pods are listed, do we send entire node capacity as available?
// This will cause a false positive dispatch.
if err != nil {
```
@Fiona-Waters (Contributor, Author)

Should the error be caught like this instead?

Suggested change:

```diff
-if err != nil {
+Expect(err).NotTo(HaveOccurred())
```
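With the Gomega form, the explicit `if err != nil` branch becomes unnecessary, since a failed assertion aborts the test immediately (sketch):

```go
podList, err := context.kubeclient.CoreV1().Pods("").List(context.ctx, metav1.ListOptions{FieldSelector: labelSelector})
Expect(err).NotTo(HaveOccurred()) // replaces the manual error branch
```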


Reviewers

@asm582 left review comments
@KPostOffice left review comments


5 participants

@Fiona-Waters @ronensc @dgrove-oss @asm582 @KPostOffice
