Speed up AI development using Intel®-optimized software on the latest Intel® Core™ Ultra processor, Intel® Xeon® processor, Intel® Gaudi® AI Accelerator, and GPU compute. You can get started right away on the Intel® Tiber™ AI Cloud for free.
As a participant in the open source software community since 1989, Intel uses industry collaboration, co-engineering, and open source contributions to deliver a steady stream of code and optimizations that work across multiple platforms and use cases. We push our contributions upstream so developers always get the most current, optimized, and secure software.
Check out the following repositories to jumpstart your development work on Intel:
- OPEA GenAI Examples - Examples, such as ChatQnA, that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project
- AI PC Notebooks - A collection of notebooks showcasing generative AI workloads on AI PCs
- Open3D - A modern library for 3D data processing
- Optimum Intel - Accelerate inference with Intel optimization tools
- Optimum Habana - Easy, lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU)
- Intel Neural Compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
- OpenVINO Notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
- SetFit - Efficient few-shot learning with Sentence Transformers
- FastRAG - Efficient retrieval-augmented generation (RAG) framework
Join us on the Intel DevHub Discord server to chat with other developers in channels like #dev-projects, #gaudi, and #large-language-models.
Visit open.intel.com to find out more, or follow us on X or LinkedIn!
Pinned
- cve-bin-tool - The CVE Binary Tool helps you determine if your system includes known vulnerabilities. You can scan binaries for over 350 common, vulnerable components (openssl, libpng, libxml2, expat and others),…
- intel-extension-for-pytorch - A Python package that extends the official PyTorch to easily obtain performance gains on Intel platforms
- neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
Repositories
- cartwheel-gstreamer - Intel developer staging area for unmerged upstream patch contributions to the gstreamer monorepo
- torch-xpu-ops
- intel-graphics-compiler
- llvm-ci-perf-results
- mlir-extensions - Intel® Extension for MLIR. A staging ground for MLIR dialects and tools for Intel devices using the MLIR toolchain.
- memory-usage-analyzer