
feat(version): add MANUAL_VERSION, --next and --patch to version comm… #1724

Open

bearomorphism wants to merge 1 commit into commitizen-tools:v4-11-0 from bearomorphism:feat-next-version

Conversation

@bearomorphism (Collaborator) commented Dec 12, 2025 (edited)

…and remove type alias

Closes #1679

Manually tested and added test cases; the result LGTM.

@bearomorphism (Collaborator, Author)

@Lee-W I already ran `pytest tests/commands/test_version_command.py -n auto --regen-all` to regenerate the test files, but the pipeline still fails. (The tests passed on my machine.)

Do you have any idea why this happens?

Review comment on:

```python
version = f"{version_scheme.minor}"
out.write(version.major)
return
if self.arguments.get("minor"):
```
@woile (Member)

I realize now that this creates problems with non-monotonic kinds of versions (and possibly non-semver ones). I'm not sure what to do about it.

I think for now it's fine that if you diverge too much from semver in your custom version scheme, then you won't get the full range of features.
(👍 bearomorphism)
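For context on the concern above, a minimal illustrative sketch (not code from this PR): commitizen lets users plug in custom version schemes, and nothing forces such a scheme to expose semver-style components, so component-reading flags cannot be supported uniformly.

```python
# Hypothetical custom scheme illustrating the concern: a calendar-style
# version is valid and monotonic, yet has no meaningful major/minor/patch
# components for flags like --major or --minor to read.
class CalendarVersion:
    def __init__(self, version: str) -> None:
        self.year, self.month, self.day = map(int, version.split("."))

    def __str__(self) -> str:
        return f"{self.year}.{self.month:02d}.{self.day:02d}"

    # No .major / .minor / .patch attributes here, so a command that
    # assumes those exist cannot support this scheme.
```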
@woile (Member)

There was a way to run all the pre-commit hooks, but I don't remember how it works 😅
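(For reference, assuming the repository's standard pre-commit setup: the stock command to run every hook against the entire tree is `pre-commit run --all-files`.)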



Review comment on:

```python
@pytest.mark.parametrize(
    "next_version, current_version, expected_version",
```
@woile (Member) commented Dec 12, 2025 (edited)

Maybe rename `next_version` to `next_increment`? In the user-facing CLI it makes sense semantically: `--next=MAJOR` (give me the next major).

Review comment on:

```python
    assert expected_version in captured.out


def test_next_version_invalid_version(config, capsys):
```
@woile (Member) commented Dec 12, 2025 (edited)

What about having a parametrize of the combinations that should fail?

```python
@pytest.mark.parametrize(
    "args",
    [
        # incomplete list:
        {"next"},
        # ...
        {"--verbose", "next"},
    ],
)
```
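Spelled out, that suggestion might look like the sketch below. It assumes the `config` and `capsys` fixtures already used by this test module and the `(config, arguments)` construction visible in the command code; the argument combinations and the asserted message are placeholders, not the PR's actual validation rules.

```python
import pytest

from commitizen import commands


@pytest.mark.parametrize(
    "arguments",
    [
        # placeholder combinations that should be rejected; the real
        # (incomplete) list would be filled in per the review comment
        {"next": "not-an-increment"},
        {"next": "MAJOR", "manual_version": "1.2.3"},
    ],
)
def test_version_rejects_invalid_arguments(config, capsys, arguments):
    commands.Version(config, arguments)()
    captured = capsys.readouterr()
    # the command reports via out.error(...) and returns, so the message
    # lands in the captured streams rather than raising an exception
    assert "Invalid version" in captured.out + captured.err
```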

Review comment on:

```python
out.error("Invalid version.")
return

if next_str := self.arguments.get("next"):
```
@woile (Member)

What would happen here if I run `cz version --project --next`? I would expect that to work.

@bearomorphism (Collaborator, Author)

That also works.

@bearomorphism (Collaborator, Author)

The behavior is the same as `cz version --next=*`.

@woile (Member)

But what would be the output?

@bearomorphism (Collaborator, Author)

Oh, I misunderstood your question. The output should depend on the commit history, right? I forgot to implement that.

@bearomorphism (Collaborator, Author)

I will do #1678 in another PR since it's a bit complicated (we need to consider `major_version_zero`, `bump_map`, etc. for this feature, and I prefer not to just copy and paste code; some refactoring is needed).

(👍 woile)
@woile (Member)

Indeed, that logic should be core and it should be possible to re-use it across commands.
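As a very rough sketch of the shape such shared core logic could take (all names hypothetical; the real refactor is deferred to the follow-up PR), both `cz bump` and `cz version --next` would then call one function instead of duplicating increment rules:

```python
# Hypothetical shared helper: one place that knows about settings such as
# major_version_zero (and, eventually, bump_map) so every command that
# computes "the next version" agrees on the answer.
def get_next_version(current_version, increment, *, major_version_zero=False):
    if major_version_zero and increment == "MAJOR":
        # pre-1.0 projects demote breaking changes to a MINOR bump
        increment = "MINOR"
    # assumes the scheme's version object exposes a bump() method
    return current_version.bump(increment)
```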

@woile (Member)

Nice to see this going forward 🎉

@codecov codecov bot commented Dec 13, 2025 (edited)

Codecov Report

❌ Patch coverage is 93.15068% with 5 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (v4-11-0@2072f8e). Learn more about missing BASE report.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| commitizen/version_increment.py | 81.25% | 3 Missing ⚠️ |
| commitizen/commands/version.py | 94.28% | 2 Missing ⚠️ |

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             v4-11-0    #1724   +/-   ##
==========================================
  Coverage           ?   98.24%
==========================================
  Files              ?       61
  Lines              ?     2619
  Branches           ?        0
==========================================
  Hits               ?     2573
  Misses             ?       46
  Partials           ?        0
```

| Flag | Coverage Δ |
| --- | --- |
| unittests | 98.24% <93.15%> (?) |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@bearomorphism (Collaborator, Author)

> @Lee-W I already ran `pytest tests/commands/test_version_command.py -n auto --regen-all` to regenerate the test files, but the pipeline still fails. (The tests passed on my machine.)
>
> Do you have any idea why this happens?

I see why the test failure happens. Please see my other PR #1726.

@bearomorphism (Collaborator, Author)

I will rebase this branch after #1726 is merged. The test failure should be resolved then.

@bearomorphism force-pushed the feat-next-version branch 2 times, most recently from 33d0f56 to 8d938d2 (December 14, 2025, 11:57).

Reviewers

@woile left review comments

@Lee-W awaiting requested review (code owner)

@noirbizarre awaiting requested review (code owner)

Assignees

No one assigned

Labels

None yet

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

2 participants

@bearomorphism, @woile
