Documentation | FAQ | Release Notes | Roadmap | MACE Model Zoo | Demo | Join Us | 中文
Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices. The design focuses on the following targets:
- Performance
  - Runtime is optimized with NEON, OpenCL and Hexagon, and the Winograd algorithm is introduced to speed up convolution operations (see the sketch after this list). Initialization is also optimized to be faster.
- Power consumption
  - Chip-dependent power options like big.LITTLE scheduling and Adreno GPU hints are included as advanced APIs.
- Responsiveness
  - Guaranteeing UI responsiveness is sometimes obligatory when running a model. Mechanisms like automatically breaking OpenCL kernels into small units are introduced to allow better preemption for the UI rendering task.
- Memory usage and library footprint
  - Graph-level memory allocation optimization and buffer reuse are supported. The core library keeps external dependencies to a minimum to keep the library footprint small.
- Model protection
  - Model protection has been the highest priority since the beginning of the design. Various techniques are introduced, such as converting models to C++ code and literal obfuscation.
- Platform coverage
  - Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based chips. The CPU runtime supports Android, iOS and Linux.
- Rich model formats support
  - TensorFlow, Caffe and ONNX model formats are supported.
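As a small illustration of the Winograd speed-up mentioned in the Performance item (these are the standard minimal-filtering identities, not MACE-specific code), the F(2,3) form computes two outputs of a 3-tap convolution y = d * g with four multiplications instead of six:

```math
m_1 = (d_0 - d_2)\,g_0,\qquad
m_2 = (d_1 + d_2)\,\frac{g_0 + g_1 + g_2}{2},\qquad
m_3 = (d_2 - d_1)\,\frac{g_0 - g_1 + g_2}{2},\qquad
m_4 = (d_1 - d_3)\,g_2
```

```math
y_0 = m_1 + m_2 + m_3,\qquad y_1 = m_2 - m_3 - m_4
```

The filter-dependent factors can be precomputed once per filter, and the nested 2D form F(2×2, 3×3) reduces the per-tile multiplication count from 36 to 16, which is where the convolution speed-up comes from.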
MACE Model Zoo contains several common neural networks and models which are built daily against a list of mobile phones. The benchmark results can be found on the CI result page (choose the latest passed pipeline and click the release step to see the benchmark results). For comparisons with other frameworks, take a look at the MobileAIBench project.
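Models from the Model Zoo (or your own converted models) are driven through MACE's C++ engine API. Below is a minimal sketch of that flow, assuming the documented CreateMaceEngineFromProto / MaceEngine::Run entry points; the engine-configuration details and exact signatures vary between MACE releases, and the model bytes, tensor names and shapes used here are placeholders, so treat it as an outline rather than a drop-in example.

```cpp
// Minimal sketch: load a converted model and run one inference with MACE's
// C++ API. Signatures and configuration details vary across MACE releases;
// all model data, tensor names and shapes below are placeholders.
#include <cstdint>
#include <functional>
#include <map>
#include <memory>
#include <numeric>
#include <string>
#include <vector>

#include "mace/public/mace.h"

int main() {
  // Placeholder buffers: in a real application these hold the bytes of the
  // graph definition and weights produced by the MACE model converter.
  std::vector<unsigned char> model_graph_proto;   // graph bytes (placeholder)
  std::vector<unsigned char> model_weights_data;  // weight bytes (placeholder)

  const std::vector<std::string> input_names{"input"};    // placeholder name
  const std::vector<std::string> output_names{"output"};  // placeholder name
  const std::vector<int64_t> input_shape{1, 224, 224, 3};
  const std::vector<int64_t> output_shape{1, 1000};

  // Engine configuration; constructor/setter details differ between releases.
  mace::MaceEngineConfig config(mace::DeviceType::CPU);

  // Create the engine from the converted model.
  std::shared_ptr<mace::MaceEngine> engine;
  mace::MaceStatus status = mace::CreateMaceEngineFromProto(
      model_graph_proto.data(), model_graph_proto.size(),
      model_weights_data.data(), model_weights_data.size(),
      input_names, output_names, config, &engine);
  if (status != mace::MaceStatus::MACE_SUCCESS) return 1;

  // Wrap input/output buffers as MaceTensor objects keyed by tensor name.
  auto numel = [](const std::vector<int64_t> &s) {
    return std::accumulate(s.begin(), s.end(), int64_t{1},
                           std::multiplies<int64_t>());
  };
  auto in_buf = std::shared_ptr<float>(new float[numel(input_shape)],
                                       std::default_delete<float[]>());
  auto out_buf = std::shared_ptr<float>(new float[numel(output_shape)],
                                        std::default_delete<float[]>());
  std::map<std::string, mace::MaceTensor> inputs{
      {input_names[0], mace::MaceTensor(input_shape, in_buf)}};
  std::map<std::string, mace::MaceTensor> outputs{
      {output_names[0], mace::MaceTensor(output_shape, out_buf)}};

  // Run one inference; results are written into the output buffer.
  engine->Run(inputs, &outputs);
  return 0;
}
```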
Questions and discussions are welcome through the following channels:
- GitHub issues: bug reports, usage issues, feature requests
- Slack: mace-users.slack.com
- QQ group: 756046893
Any kind of contribution is welcome. For bug reports and feature requests, please just open an issue without hesitation. For code contributions, it is strongly suggested to open an issue for discussion first. For more details, please refer to the contribution guide.
MACE depends on several open source projects located in the third_party directory. In particular, we learned a lot from the following projects during development:
- Qualcomm Hexagon NN Offload Framework: the Hexagon DSP runtime depends on this library.
- TensorFlow, Caffe, SNPE, ARM Compute Library, ncnn, ONNX and many others: we learned many best practices from these projects.
Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for their help.