mlperf-inference
Here are 9 public repositories matching this topic...
Collective Knowledge (CK), Collective Mind (CM/CMX) and MLPerf automations: community-driven projects to facilitate collaborative and reproducible research and to learn how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using MLPerf methodology and benchmarks
- Updated Mar 20, 2025 - Python
AML's goal is to make benchmarking of various AI architectures on Ampere CPUs a pleasurable experience :)
- Updated Mar 11, 2025 - Python
TinyNS: Platform-Aware Neurosymbolic Auto Tiny Machine Learning
- Updated Jun 2, 2023 - C
Automated KRAI X workflows for Google Cloud Platform
- Updated Mar 14, 2024 - Python
A benchmark suite used to compare the performance of various models optimized by Adlik.
- Updated Aug 27, 2022 - Python
This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.
- Updated Mar 29, 2025 - Python
Development version of CodeReefied portable CK workflows for image classification and object detection. Stable "live" versions are available at the CodeReef portal:
- Updated Nov 15, 2020 - C++
MLPerf explorer beta
- Updated Jan 14, 2025 - PHP
Automated test submissions for validating the MLPerf inference workflows.
- Updated Feb 13, 2025 - Mermaid