Disclaimer:
The model is reproduced based on the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* (github and arXiv).
The model itself is sourced from a community release.
It is intended only for experimental purposes.
Users are responsible for any consequences arising from the use of this model.
Note:
The PPL test results are for reference only and were collected using the GPTQ testing script.
| Context length | WikiText-2 PPL |
|---|---|
| 2048 | 6.8713483810424805 |
| 4096 | 6.460692882537842 |
| 8192 | 6.278013229370117 |
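For context, perplexity is the exponential of the mean per-token negative log-likelihood over the evaluation corpus. The sketch below (not the actual testing script; the per-token values are hypothetical) shows how the reported numbers are derived once per-token NLLs have been gathered from model logits over WikiText-2 at a fixed context length (2048/4096/8192 as in the table above).

```python
import math

def perplexity(nlls):
    """Perplexity = exp(mean per-token negative log-likelihood in nats)."""
    return math.exp(sum(nlls) / len(nlls))

# Hypothetical per-token NLLs; a real evaluation would collect one value
# per predicted token from the model's logits over the test set.
nlls = [1.9, 1.8, 2.0, 1.85]
print(perplexity(nlls))
```

Longer contexts give the model more conditioning history per prediction, which is why the reported PPL decreases from ctx 2048 to ctx 8192.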
Model tree for VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-0-woft
- Base model: meta-llama/Llama-3.1-70B
- Finetuned from: meta-llama/Llama-3.1-70B-Instruct