Speed up AI development using Intel®-optimized software on the latest Intel® Core™ Ultra processor, Intel® Xeon® processor, Intel® Gaudi® AI Accelerator, and GPU compute. You can get started right away on the Intel® Tiber™ AI Cloud for free.
As a participant in the open source software community since 1989, Intel uses industry collaboration, co-engineering, and open source contributions to deliver a steady stream of code and optimizations that work across multiple platforms and use cases. We push our contributions upstream so that developers always get the most current, optimized, and secure software.
Check out the following repositories to jumpstart your development work on Intel:
- OPEA GenAI Examples - Examples, such as ChatQnA, that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project
- AI PC Notebooks - A collection of notebooks designed to showcase generative AI workloads on AI PC
- Open3D - A modern library for 3D data processing
- Optimum Intel - Accelerate inference with Intel optimization tools (see the inference sketch after this list)
- Optimum Habana - Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPUs)
- Intel Neural Compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime (see the quantization sketch after this list)
- OpenVINO Notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
- SetFit - Efficient few-shot learning with Sentence Transformers (see the few-shot sketch after this list)
- fastRAG - Efficient retrieval-augmented generation (RAG) framework
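To make a few of these concrete, here is a minimal Optimum Intel inference sketch. It assumes the `optimum[openvino]` extra is installed and uses `gpt2` as a small stand-in model ID; any Hugging Face causal LM should work the same way.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

# export=True converts the PyTorch checkpoint to OpenVINO IR on load;
# gpt2 is just a small stand-in model ID.
model = OVModelForCausalLM.from_pretrained("gpt2", export=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# The OpenVINO model drops into the standard transformers pipeline.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Intel-optimized inference is", max_new_tokens=20)[0]["generated_text"])
```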
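Next, a minimal post-training quantization sketch with Intel Neural Compressor's 2.x API. The toy model and random calibration data are placeholders for a real network and a representative dataset.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Toy FP32 model and random calibration data: placeholders only.
fp32_model = torch.nn.Sequential(
    torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2)
)
calib_loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.zeros(64, dtype=torch.long)),
    batch_size=8,
)

# fit() calibrates on the dataloader and returns an INT8-quantized model.
q_model = fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(),
    calib_dataloader=calib_loader,
)
q_model.save("./quantized_model")
```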
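And a few-shot sketch with SetFit, assuming setfit >= 1.0 (where `Trainer` replaces the older `SetFitTrainer`); the eight hand-written sentences stand in for a real labeled dataset.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer

# Eight toy sentiment examples stand in for a real few-shot dataset.
train_ds = Dataset.from_dict({
    "text": ["great film", "loved every minute", "a true classic", "wonderful cast",
             "terribly boring", "a waste of time", "weak plot", "fell asleep"],
    "label": [1, 1, 1, 1, 0, 0, 0, 0],
})

# Fine-tunes a Sentence Transformers body plus a classification head.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
trainer = Trainer(model=model, train_dataset=train_ds)
trainer.train()

print(model.predict(["what a fantastic movie", "utterly dull"]))
```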
Join us on the Intel DevHub Discord server to chat with other developers in channels like #dev-projects, #gaudi, and #large-language-models.
Visit open.intel.com to find out more, or follow us on X or LinkedIn!
Pinned
- cve-bin-tool - The CVE Binary Tool helps you determine if your system includes known vulnerabilities. You can scan binaries for over 350 common, vulnerable components (openssl, libpng, libxml2, expat, and others),…
- intel-extension-for-pytorch - A Python package that extends the official PyTorch with optimizations for extra performance on Intel platforms (see the sketch after this list)
- neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
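As a quick illustration, a minimal Intel Extension for PyTorch sketch; the tiny network is a placeholder, and `ipex.optimize` accepts any `torch.nn.Module` in eval mode.

```python
import torch
import intel_extension_for_pytorch as ipex

# A tiny placeholder network; any torch.nn.Module in eval mode works.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
).eval()

# ipex.optimize applies operator fusion, weight prepacking, and other
# CPU-side optimizations, returning an optimized module.
model = ipex.optimize(model)

with torch.no_grad():
    out = model(torch.randn(4, 128))
print(out.shape)
```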
Repositories
- torch-xpu-ops
- neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
- auto-round - Advanced quantization algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA, and HPU. Seamlessly integrated with Torchao, Transformers, and vLLM (see the sketch after this list)
- xFasterTransformer
- edge-developer-kit-reference-scripts - Developer kit reference setup scripts for various Intel platforms and GPUs
- Bootcamp-Materials (archived)
- onnxruntime (forked from microsoft/onnxruntime) - ONNX Runtime: a cross-platform, high-performance scoring engine for ML models
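To make the auto-round entry concrete, here is a minimal 4-bit weight-only quantization sketch. The model ID `facebook/opt-125m` is just a small stand-in, and the `bits`/`group_size` values mirror the defaults shown in the project's documentation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# A small stand-in model; any Hugging Face causal LM works in principle.
model_name = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit weight-only quantization via AutoRound's sign-gradient tuning.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128)
autoround.quantize()
autoround.save_quantized("./opt-125m-auto_round")
```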