Speed up AI development using Intel®-optimized software on the latest Intel® Core™ Ultra processors, Intel® Xeon® processors, Intel® Gaudi® AI accelerators, and GPU compute. You can get started right away on the Intel® Tiber™ AI Cloud for free.
As a participant in the open source software community since 1989, Intel uses industry collaboration, co-engineering, and open source contributions to deliver a steady stream of code and optimizations that work across multiple platforms and use cases. We push our contributions upstream so developers get the most current, optimized, and secure software.
Check out the following repositories to jumpstart your development work on Intel:
- OPEA GenAI Examples - Examples, such as ChatQnA, that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
- AI PC Notebooks - A collection of notebooks designed to showcase generative AI workloads on AI PCs
- Open3D - A modern library for 3D data processing
- Optimum Intel - Accelerate inference with Intel optimization tools (a minimal usage sketch follows this list)
- Optimum Habana - Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPUs)
- Intel Neural Compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
- OpenVINO Notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
- SetFit - Efficient few-shot learning with Sentence Transformers
- FastRAG - Efficient retrieval-augmented generation (RAG) framework
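To give a feel for how these libraries fit together, here is a minimal sketch of running a Hugging Face model through Optimum Intel's OpenVINO backend. It assumes the `optimum[openvino]` and `transformers` packages are installed; the model ID and input text are illustrative placeholders, not an Intel recommendation.

```python
# Minimal Optimum Intel sketch: export a Hugging Face checkpoint to OpenVINO IR
# and run it through a standard transformers pipeline. The model ID below is an
# illustrative placeholder (assumes `pip install optimum[openvino] transformers`).
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

# export=True converts the PyTorch weights to OpenVINO IR at load time
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Intel-optimized inference keeps latency low."))
```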
Join us on the Intel DevHub Discord server to chat with other developers in channels like #dev-projects, #gaudi, and #large-language-models.
Visit open.intel.com to find out more, or follow us on X or LinkedIn!
Pinned
- cve-bin-tool - The CVE Binary Tool helps you determine if your system includes known vulnerabilities. You can scan binaries for over 350 common, vulnerable components (openssl, libpng, libxml2, expat, and others)…
- intel-extension-for-pytorch - A Python package that extends the official PyTorch to deliver additional performance on Intel platforms (a usage sketch follows this list)
- neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
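As a rough illustration of the pinned intel-extension-for-pytorch project, the sketch below shows the typical pattern of optimizing an eager-mode model for Intel CPUs. The toy model and the bfloat16 setting are assumptions made for the example, not a benchmark configuration.

```python
# Minimal intel-extension-for-pytorch sketch: the toy model is a stand-in
# (assumes `pip install torch intel-extension-for-pytorch`).
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
).eval()

# ipex.optimize applies weight prepacking and operator/layout optimizations
# tuned for Intel hardware; dtype=torch.bfloat16 enables mixed precision
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = model(torch.randn(1, 128))
print(out.shape)
```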
Repositories
- llvm-ci-perf-results
- intel-graphics-compiler
- edge-ai-sizing-tool - The Edge AI Sizing Tool is designed to showcase the scalability and performance of AI use cases on edge devices.
- edge-ai-tuning-kit - The Edge AI Tuning Kit is a comprehensive solution for creating, tailoring, and deploying AI models on edge platforms.
- neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
- AI-Playground - AI PC starter app for AI image creation, image stylizing, and chatbots on a PC powered by an Intel® Arc™ GPU.