switch-large-128_qmoe
This is the google/switch-large-128 model, quantized to ternary precision with the QMoE framework and stored in the custom, further-compressed QMoE format.
Please see the QMoE repository for instructions on how to use this model.
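Because the weights are stored in the compressed QMoE format rather than a standard checkpoint, the decompression code from the QMoE repository is needed to run the model; a plain `from_pretrained` call will generally not decode it. As a minimal sketch (assuming `huggingface_hub` is installed; the repo id below is a placeholder for this model's actual id on the Hub), the compressed files can be fetched locally before handing them to the QMoE tooling:

```python
# Sketch: download the compressed QMoE checkpoint files from the Hugging Face Hub.
# Replace REPO_ID with the id shown at the top of this model page (placeholder here).
from huggingface_hub import snapshot_download

REPO_ID = "<org>/switch-large-128_qmoe"  # placeholder: substitute the real repo id

local_dir = snapshot_download(repo_id=REPO_ID)
print(f"Compressed QMoE checkpoint downloaded to: {local_dir}")
```

From there, follow the loading and inference steps documented in the QMoE repository, which provides the kernels for decoding the ternary, compressed weights.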