Feature/bug fix of thinking llm in vllm #3510

Merged

parshvadaftari merged 3 commits into mem0ai:main from dog-last:feature/bug-fix-of-thinking-llm-in-vllm on Oct 3, 2025

Conversation

@dog-last (Contributor)

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration

Please delete options that are not relevant.

  • Unit Test

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules
  • I have checked my code and corrected any misspellings

Maintainer Checklist

  • closes #xxxx (Replace xxxx with the GitHub issue number)
  • Made sure Checks passed

@CLAassistant commented Sep 27, 2025 (edited)

CLA assistant check
All committers have signed the CLA.

@parshvadaftari (Contributor) left a comment


Hey, thanks for raising the PR! Can you please incorporate the requested changes?

    )
    try:
        response = remove_code_blocks(response)
        if '</think>' in response:
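
For context on the bug being fixed: a "thinking" model served through vLLM prefixes its answer with a <think>…</think> block, so the raw response is not valid JSON and parsing it raises. A minimal repro (the response string here is illustrative, not taken from the PR):

    import json

    # Typical output shape from a thinking model: reasoning first, JSON after.
    raw = '<think>step-by-step reasoning</think>{"facts": ["User likes tea"]}'
    try:
        json.loads(raw)  # fails: the <think> prefix makes the payload invalid JSON
    except json.JSONDecodeError as err:
        print(f"parse failed: {err}")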
Contributor

Hey, instead of creating a new util for removing "think" blocks, it would be better to incorporate it into remove_code_blocks. This will reduce code redundancy and latency.
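
A consolidated helper might look like the sketch below; the exact signature and patterns of mem0's remove_code_blocks are assumptions here, not copied from the repo:

    import re

    def remove_code_blocks(content: str) -> str:
        """Strip a <think>...</think> reasoning block and any markdown code fence."""
        # Drop the reasoning block emitted by thinking models before JSON parsing.
        content = re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()
        # Unwrap a ```json ... ``` style fence if the model wrapped its answer in one.
        match = re.match(r"^```(?:\w+)?\s*(.*?)\s*```$", content, flags=re.DOTALL)
        return match.group(1).strip() if match else content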

@dog-last (Author)

Hi, thanks for the review!
I've removed the remove_thinking_tags function and adjusted remove_code_blocks.
Is that OK?

@@ -0,0 +1,135 @@
from unittest.mock import MagicMock, patch
Contributor

A bit skeptical about an entire separate test for removing the "thinking" tags. Can we add this to test_vllm?

@dog-last (Author)

Moved the test to test_vllm.py.
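
The relocated test might look roughly like this (a sketch: the import path and the exact assertion are assumptions, not copied from the PR):

    # test_vllm.py -- checks that think tags are stripped before JSON parsing
    from mem0.memory.utils import remove_code_blocks

    def test_remove_code_blocks_strips_thinking_tags():
        raw = '<think>internal reasoning</think>```json\n{"facts": []}\n```'
        assert remove_code_blocks(raw) == '{"facts": []}'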

bug__fix.md Outdated
@@ -0,0 +1,64 @@
# Ensure when using vllm, the <think></think> tags are handled instead of throw an error
Contributor

Could you please remove this?

@dog-last force-pushed the feature/bug-fix-of-thinking-llm-in-vllm branch from 3204212 to 0dad8d3 on September 30, 2025 07:21
@dog-last (Author)

The latest change makes sure:

  • All feedback is addressed.
  • It follows the project style guide.
  • Tests pass.

Thanks again for your attention 😊




def create_mocked_memory():
Contributor

This test is not required and also doesn't cover any vllm aspect.



def create_mocked_async_memory():
"""Create a fully mocked AsyncMemory instance for testing."""
Contributor

This test is not required and also doesn't cover any vllm aspect.



@pytest.mark.asyncio
async def test_async_thinking_tags_in_add_to_vector_store():
Contributor

A bit of nitpicking here, but the test function name could be better; it shouldn't read like a long sentence.

return memory, mock_llm, mock_vector_store


def test_thinking_tags_in_add_to_vector_store():
Contributor

The name of the function seems to be a bit too long.

…hink></think> tags output by the LLM, instead of raise a JSON format error.
…the test to test_vllm.py instead of a entire separate test.
@dog-last force-pushed the feature/bug-fix-of-thinking-llm-in-vllm branch from 0dad8d3 to 4b671a9 on October 2, 2025 08:39
@dog-last (Author)

Thank you for the valuable feedback! I've made the following changes as suggested:

  • Removed the test function that was recommended for deletion.
  • Renamed the two test functions to shorter names.

A new commit has been submitted with these updates. Please let me know if this meets the requirements or if further adjustments are needed.

@parshvadaftari (Contributor) left a comment

Looks good to me.

@parshvadaftari merged commit 51ce6f1 into mem0ai:main on Oct 3, 2025
6 of 7 checks passed
@parshvadaftari (Contributor)

Thanks for contributing to mem0.

@dog-last deleted the feature/bug-fix-of-thinking-llm-in-vllm branch on January 1, 2026 07:02
garciaba79 pushed a commit to garciaba79/mem0 that referenced this pull request on Feb 12, 2026

Reviewers: @parshvadaftari (approved these changes)
Assignees: none
Labels: none
Projects: none
Milestone: none
Participants: @dog-last, @CLAassistant, @parshvadaftari
