mixtral-8x7b
Here are 88 public repositories matching this topic...
TeleChat: 🤖️ an AI chat Telegram bot with web search, powered by GPT-3.5/4/4 Turbo/4o, DALL·E 3, Groq, Gemini 1.5 Pro/Flash, and the official Claude 2.1/3/3.5 API, written in Python and deployable on Zeabur, fly.io, and Replit.
- Updated Mar 18, 2025 - Python
Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B)
- Updated Aug 17, 2024 - Python
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
- Updated Mar 13, 2024 - Rust
The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"
- Updated May 9, 2024 - Python
[ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration
- Updated Nov 18, 2024 - Python
Build LLM-powered robots in your garage with MachinaScript For Robots!
- Updated Sep 28, 2024 - Python
Examples of RAG using LlamaIndex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B (a minimal sketch follows this entry).
- Updated Feb 25, 2024 - Jupyter Notebook
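As a rough companion to the LlamaIndex entry above, here is a minimal local-LLM RAG sketch. It assumes llama-index >= 0.10 with the llama-index-llms-ollama and llama-index-embeddings-huggingface integrations installed, plus a local Ollama server hosting a mixtral model; the data/ directory, model tags, and example question are placeholders, not taken from the repo.

```python
# Minimal LlamaIndex RAG sketch with a local LLM served by Ollama.
# Assumes: pip install llama-index llama-index-llms-ollama llama-index-embeddings-huggingface
# and an Ollama server running a "mixtral" model locally (placeholder names).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Route all LLM calls to the local model and use a local embedding model,
# so no hosted API is required.
Settings.llm = Ollama(model="mixtral", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load documents from a local folder, embed them, and build an in-memory index.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Ask a question; retrieved chunks are stuffed into the prompt for the local LLM.
query_engine = index.as_query_engine()
print(query_engine.query("What does Mixtral 8x7B's router do?"))
```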
Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline parallelism; faster than ZeRO/ZeRO++/FSDP.
- Updated Feb 5, 2024 - Python
A free OpenAI-compatible API for interacting with models such as GPT-4o, Claude 3 Haiku, Mixtral 8x7B, and Llama 3 70B through DuckDuckGo's AI Chat (see the sketch after this entry).
- Updated Feb 23, 2025 - Python
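Because the entry above exposes an OpenAI-compatible API, it can typically be consumed with the official openai Python client by overriding base_url. The endpoint address, API key, and model identifier below are placeholders; the actual values depend on how the proxy from that repo is deployed.

```python
# Sketch of consuming an OpenAI-compatible endpoint with the official openai client
# (openai >= 1.0). The base_url, api_key, and model name are placeholders; the
# server exposed by the repo above may use different values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # hypothetical local proxy address
    api_key="not-needed",                 # many free proxies ignore the key
)

response = client.chat.completions.create(
    model="mixtral-8x7b",                 # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize the Mixtral 8x7B architecture."}],
)
print(response.choices[0].message.content)
```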
A Python project that integrates AI-driven agents for Agile software development, combining large language models with collaborative task automation.
- Updated Nov 10, 2024 - Python
An unofficial C#/.NET SDK for accessing the Mistral AI API
- Updated Feb 21, 2025 - C#
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
- Updated Jan 20, 2024 - Jupyter Notebook
A project showing how to use Spring AI with OpenAI to chat with the documents in a library. Documents are stored in a conventional or vector database; the AI creates embeddings from the stored documents, the vector database is queried for the nearest document, and that document is passed to the AI to generate the answer (a language-agnostic sketch of this flow follows the entry).
- Updated Mar 2, 2025 - Java
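The entry above walks through a standard retrieval-augmented flow: embed documents, store the vectors, find the nearest document for a query, and hand it to the model as context. Below is a minimal, framework-free Python sketch of that flow; the toy bag-of-words embedding and the llm_answer placeholder stand in for a real embedding model and LLM call, and none of it is the project's actual Java/Spring AI code.

```python
# Framework-free sketch of the retrieval flow described above:
# embed documents, store the vectors, retrieve the nearest document for a query,
# and pass that document to an LLM as context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a bag-of-words count vector (a real system would call an
    # embedding model here).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def llm_answer(question: str, context: str) -> str:
    # Placeholder for the generation step; a real system would prompt an LLM
    # with the retrieved document as context.
    return f"Answering {question!r} using context: {context!r}"

# 1. Store documents alongside their embeddings (the "vector database").
documents = ["Mixtral 8x7B routes each token to two of eight experts.",
             "Spring AI can store document embeddings in a vector store."]
store = [(doc, embed(doc)) for doc in documents]

# 2. Embed the question and retrieve the nearest document.
question = "How does Mixtral pick experts?"
nearest_doc, _ = max(store, key=lambda item: cosine(embed(question), item[1]))

# 3. Generate the answer from the retrieved document.
print(llm_answer(question, nearest_doc))
```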
Reference implementation of the Mistral AI 7B v0.1 model.
- Updated Dec 25, 2023 - Python
📤 Email Classification and Automatic Re-routing with the power of LLMs and Distributed Task Queues. 🏆 Winner at Barclays Hack-O-Hire 2024!
- Updated Feb 14, 2025 - Python
DuckDuckGo AI to OpenAI API
- Updated Mar 17, 2025 - Rust
A CLI and Python wrapper for Groq's LPU Inference Engine. Streamlines chatbot creation and dynamic text generation at speeds of up to 800 tokens/sec.
- Updated Sep 12, 2024 - Python
A Mistral AI API wrapper for Delphi that uses Mistral's models to provide chat interactions, string embeddings, code generation with Codestral, batch processing, and moderation.
- Updated Jan 6, 2025 - Pascal
LLM prompt augmentation with RAG, integrating external custom data from a variety of sources so you can chat with those documents.
- Updated Jul 22, 2024 - Python
Notes on the Mistral AI model
- Updated Dec 27, 2023 - Jupyter Notebook