fix multi-modality apply chat template issue #3258
base: main
Conversation
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
@sywangyi Can you add a description of the issue and how this PR solves it, please?
sywangyi commented Jun 12, 2025 (edited)
#3257 describes it. I just provide a workaround (WA): skipping the chat-template application for VLM models such as llava and qwenvl.
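For context, a minimal sketch of what such a workaround could look like; the model-type set and helper name here are illustrative, not the PR's actual diff (the real change lives in the TGI server/router code):

```python
# Hypothetical sketch of the workaround described above: skip the chat
# template for known VLM model types and pass the raw messages through.
VLM_MODEL_TYPES = {"llava", "llava_next", "qwen2_vl"}  # illustrative list

def maybe_apply_chat_template(tokenizer, model_type: str, messages: list):
    if model_type in VLM_MODEL_TYPES:
        # Workaround: VLM message contents are lists of text/image parts,
        # which a template written for string contents mishandles, so the
        # messages are passed through untouched here.
        return messages
    # Plain-LLM path: render the messages with the tokenizer's template.
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
```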
Gently pinging @Narsil for review.
This is most likely a wrong fix. We cannot just skip using templates on some models because the answer is not to our liking. We need to fix the template instead, no? Overriding like this is most likely purely wrong.
VLM chat-template input looks like the example below, which is different from LLM input, so we may need to reconstruct the message for VLMs before applying the chat template. Also, there are models that have no chat template at all; we need to handle that case as well.
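To make the difference concrete, here is a sketch of the two message shapes (this follows the OpenAI-style multimodal format that Hugging Face processors accept; the model name in the comment is only an example):

```python
# Plain-LLM chat message: "content" is a single string.
llm_messages = [
    {"role": "user", "content": "What is shown in this image?"},
]

# VLM chat message: "content" is a list of typed parts (image + text),
# so a chat template that expects a plain string breaks on it.
vlm_messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]

# A VLM's processor-side template understands the list-of-parts shape,
# e.g. with transformers (model name is just an example):
#   from transformers import AutoProcessor
#   processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
#   prompt = processor.apply_chat_template(vlm_messages, add_generation_prompt=True)
```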
What does this PR do?
Fixes # (issue)
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.