gguf : general usability improvements #3409


Merged

Conversation

@cebtenzzre (Collaborator)

  • avoid copy-pasted tensor names in MODEL_TENSOR_NAMES
  • accept str for path in SpecialVocab.__init__

Resolves the discussion at #2842 (review)
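The second bullet ("accept str for path in `SpecialVocab.__init__`") can be sketched as follows. This is a hypothetical reconstruction, not the actual gguf-py class: the idea is simply to normalize at the boundary so callers may pass either a `str` or a `Path`.

```python
from pathlib import Path
from typing import Union

class SpecialVocab:
    # Hypothetical sketch (not the real gguf-py definition): accept either
    # a str or a Path for the model directory and normalize once in the
    # constructor, so callers do not have to pre-wrap their paths.
    def __init__(self, path: Union[str, Path], load_merges: bool = False):
        self.path = Path(path)  # Path(Path(...)) is a no-op, so both types work
        self.load_merges = load_merges

v = SpecialVocab("/tmp/model")  # a plain str is now accepted
assert isinstance(v.path, Path)
```

Normalizing once in the constructor keeps the rest of the class free of `isinstance` checks on the path argument.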

@cebtenzzre (Collaborator, Author)

@ggerganov I updated the PR to remove MODEL_TENSOR_NAMES, since it generally makes more sense to use MODEL_TENSORS and TENSOR_NAMES separately. But this is a breaking change. What do you think?
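The split described above can be illustrated with a minimal sketch. The identifiers mirror the gguf-py names, but the contents here are illustrative, not the actual definitions: `TENSOR_NAMES` holds the architecture-independent standard name of each tensor, while `MODEL_TENSORS` records which tensors a given architecture uses.

```python
from enum import IntEnum, auto

class MODEL_TENSOR(IntEnum):
    TOKEN_EMBD = auto()
    ATTN_Q = auto()

# Standard GGUF name of each tensor, independent of any model architecture
TENSOR_NAMES = {
    MODEL_TENSOR.TOKEN_EMBD: "token_embd",
    MODEL_TENSOR.ATTN_Q: "blk.{bid}.attn_q",
}

# Which tensors each architecture actually uses
MODEL_TENSORS = {
    "llama": [MODEL_TENSOR.TOKEN_EMBD, MODEL_TENSOR.ATTN_Q],
    "gpt2": [MODEL_TENSOR.TOKEN_EMBD],
}

def tensor_names_for(arch: str) -> dict:
    """Recover the old combined per-arch mapping on demand, without
    copy-pasting names into every architecture's table."""
    return {t: TENSOR_NAMES[t] for t in MODEL_TENSORS[arch]}
```

Any script that previously read the combined `MODEL_TENSOR_NAMES` table can derive the same mapping from the two structures, which is what makes the removal a (mild) breaking change.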

@cebtenzzre mentioned this pull request on Oct 1, 2023
@ggerganov (Member) left a comment


Hm, why is it a breaking change? If it is just that python scripts have to switch to using gguf.TENSOR_NAMES then I think it's fine

@cebtenzzre merged commit 0fe3210 into ggml-org:master on Oct 2, 2023
for tensor, keys in self.mappings_cfg.items():
    tensor_name = tensor_names.get(tensor)
    if tensor_name is None:
    if tensor not in MODEL_TENSORS[arch]:
@KerfuffleV2 (Contributor)

It's kind of late to say anything, but I'm curious why you'd make these changes. There's no usability improvement from the user's perspective except it'll be twice as slow now since it requires the key to get hashed and the dictionary searched twice compared to using dict.get().
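For context on the performance point: a membership test followed by indexing hashes the key and probes the table twice, while `dict.get` does it once. A small self-contained illustration (not the PR's code) using a key that counts its own hash calls:

```python
class CountingKey:
    """A dict key that counts how many times it is hashed."""
    hashes = 0

    def __hash__(self):
        CountingKey.hashes += 1
        return 42

    def __eq__(self, other):
        return isinstance(other, CountingKey)

k = CountingKey()
d = {k: "value"}           # one hash during construction

CountingKey.hashes = 0
if k in d:                 # hash computation #1
    _ = d[k]               # hash computation #2
two_lookups = CountingKey.hashes

CountingKey.hashes = 0
_ = d.get(k)               # a single hash computation
one_lookup = CountingKey.hashes

print(two_lookups, one_lookup)  # 2 1
```

In practice the constant factor rarely matters outside hot loops, which is essentially the author's reply below.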

@cebtenzzre (Collaborator, Author) · Oct 2, 2023 (edited)

I think the code makes more sense this way - the standard name of the tensor is never dependent on the model it is from, so we should represent that fact by using separate data structures. You shouldn't have to know what model architecture you are working with to find the name of a standard tensor, either. I don't think this part of the code is performance-critical, so I didn't bother optimizing it.
This has confused developers at least once, which is what I mean by usability. See #3417 (comment)
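The arch-independence argument can be seen in a toy comparison of the two layouts (contents are illustrative, not the actual gguf-py tables):

```python
# Old combined layout (illustrative): looking up a standard name requires
# picking an architecture first, even though the name never varies by model
MODEL_TENSOR_NAMES = {
    "llama": {"token_embd": "token_embd"},
    "gpt2":  {"token_embd": "token_embd"},  # same name, copy-pasted per arch
}
old_name = MODEL_TENSOR_NAMES["llama"]["token_embd"]

# New layout: the standard name stands alone, no architecture needed
TENSOR_NAMES = {"token_embd": "token_embd"}
new_name = TENSOR_NAMES["token_embd"]

assert old_name == new_name
```

The copy-pasted inner entries in the old layout are exactly what the first bullet of the PR description set out to remove.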

joelkuiper added a commit to vortext/llama.cpp that referenced this pull request on Oct 5, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp: (24 commits)
  convert : fix Baichuan2 models by using vocab size in config.json (ggml-org#3299)
  readme : add project status link
  ggml : fix build after ggml-org#3329
  llm : add Refact model (ggml-org#3329)
  sync : ggml (conv 1d + 2d updates, UB fixes) (ggml-org#3468)
  finetune : readme fix typo (ggml-org#3465)
  ggml : add RISC-V Vector Support for K-Quants and improved the existing intrinsics (ggml-org#3453)
  main : consistent prefix/suffix coloring (ggml-org#3425)
  llama : fix session saving/loading (ggml-org#3400)
  llama : expose model's rope_freq_scale in the API (ggml-org#3418)
  metal : alibi for arbitrary number of heads (ggml-org#3426)
  cmake : make LLAMA_NATIVE flag actually use the instructions supported by the processor (ggml-org#3273)
  Work on the BPE tokenizer (ggml-org#3252)
  convert : fix vocab size when not defined in hparams (ggml-org#3421)
  cmake : increase minimum version for add_link_options (ggml-org#3444)
  CLBlast: Add broadcast support for matrix multiplication (ggml-org#3402)
  gguf : add BERT, MPT, and GPT-J arch info (ggml-org#3408)
  gguf : general usability improvements (ggml-org#3409)
  cmake : make CUDA flags more similar to the Makefile (ggml-org#3420)
  finetune : fix ggml-org#3404 (ggml-org#3437)
  ...
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request on Oct 7, 2023

Reviewers

@ggerganov approved these changes (+1 more reviewer)
@KerfuffleV2 left review comments

3 participants: @cebtenzzre, @ggerganov, @KerfuffleV2
