Neural Magic
Pinned
- deepsparse (Public archive): Sparsity-aware deep learning inference runtime for CPUs

Repositories
- research
- vllm (Public, forked from vllm-project/vllm): A high-throughput and memory-efficient inference and serving engine for LLMs
- lighteval (Public, forked from huggingface/lighteval): Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends
- model-validation-configs (Public)
- pytorch (Public, forked from pytorch/pytorch): Tensors and Dynamic neural networks in Python with strong GPU acceleration
- lmms-eval (Public, forked from EvolvingLMMs-Lab/lmms-eval): Accelerating the development of large multimodal models (LMMs) with one-click evaluation
- lm-evaluation-harness (Public, forked from EleutherAI/lm-evaluation-harness): A framework for few-shot evaluation of language models
- tpu-inference (Public, forked from vllm-project/tpu-inference): TPU inference for vLLM, with unified JAX and PyTorch support
- axolotl
- nm-vllm (Public archive, forked from vllm-project/vllm): A high-throughput and memory-efficient inference and serving engine for LLMs