# semantic-cache

Here are 15 public repositories matching this topic...

An LLM semantic caching system aiming to enhance user experience by reducing response time via cached query-result pairs.

  • Updated Jun 30, 2025
  • Python
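The cached query-result-pair idea behind these projects can be sketched in a few lines: store each query's embedding alongside its response, and on lookup return the cached response whose embedding is similar enough to the new query. This is a minimal illustration, not any of the listed libraries' actual APIs; the bag-of-words `embed` function is a toy stand-in for a real neural embedding model.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding (hypothetical; production semantic
    # caches use dense vectors from a neural embedding model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Cache LLM responses keyed by query similarity, not exact text."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, query):
        # Return the best-matching cached response above the threshold,
        # or None on a cache miss (caller then falls through to the LLM).
        qv = embed(query)
        best, best_sim = None, 0.0
        for ev, response in self.entries:
            sim = cosine(qv, ev)
            if sim > best_sim:
                best, best_sim = response, sim
        return best if best_sim >= self.threshold else None

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache(threshold=0.8)
cache.put("what is the capital of france", "Paris")
# A near-duplicate phrasing still hits the cache:
print(cache.get("what is the capital of france?"))  # → Paris
```

The `threshold` is the key tuning knob: too low and unrelated queries get stale answers, too high and paraphrases miss the cache and incur a full LLM call.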

Redis Vector Library (RedisVL) -- the AI-native Python client for Redis.

  • Updated Nov 25, 2025
  • Python

Semantic caching layer for your LLM applications. Reuse responses and reduce token usage.

  • Updated Jun 21, 2025
  • Rust

A RAG-based chatbot incorporating a semantic cache and guardrails.

  • Updated Nov 11, 2024
  • HTML

Redis Vector Library (RedisVL) -- the AI-native Java client for Redis.

  • Updated Oct 23, 2025
  • Java

This repository contains sample code demonstrating how to implement a verified semantic cache using Amazon Bedrock Knowledge Bases to prevent hallucinations in Large Language Model (LLM) responses while improving latency and reducing costs.

  • Updated Apr 3, 2025
  • Jupyter Notebook

Enhance LLM retrieval performance with Azure Cosmos DB Semantic Cache. Learn how to integrate and optimize caching strategies in real-world web applications.

  • Updated Mar 22, 2024
  • Python

Redis Vector Similarity Search, Semantic Caching, Recommendation Systems and RAG

  • Updated Apr 3, 2024
  • Python

A chatbot using Redis Vector Similarity Search that can recommend blogs based on a user prompt.

  • Updated Sep 30, 2023
  • Python

Optimized RAG Retrieval with Indexing, Quantization, Hybrid Search and Caching

  • Updated Nov 6, 2024
  • Python

Redis offers a unique capability to keep your data fresh while serving it through an LLM chatbot.

  • Updated Jul 16, 2024
  • Python

Semantic cache for your LLM apps in Go!

  • Updated May 17, 2024
  • Go

🔍 Optimize RAG systems by exploring Lexical, Semantic, and Hybrid Search methods for better context retrieval and improved LLM responses.

  • Updated Nov 29, 2025
  • Jupyter Notebook
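The lexical/semantic/hybrid distinction in the last entry can be illustrated with a toy ranker: score each document with both a keyword-overlap (lexical) score and an embedding-similarity (semantic) score, then blend them with a weight. Everything here is a simplified sketch; real systems use BM25 for the lexical side and dense neural embeddings for the semantic side.

```python
import math
from collections import Counter

def lexical_score(query, doc):
    # Fraction of query tokens present in the document
    # (a toy stand-in for BM25).
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def semantic_score(query, doc):
    # Cosine similarity over bag-of-words vectors (a toy stand-in
    # for dense-embedding similarity).
    qv, dv = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(qv[t] * dv[t] for t in qv if t in dv)
    nq = math.sqrt(sum(v * v for v in qv.values()))
    nd = math.sqrt(sum(v * v for v in dv.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def hybrid_search(query, docs, alpha=0.5):
    # alpha weights the lexical score; (1 - alpha) weights the semantic one.
    scored = [
        (alpha * lexical_score(query, d) + (1 - alpha) * semantic_score(query, d), d)
        for d in docs
    ]
    return [d for _, d in sorted(scored, reverse=True)]

docs = [
    "redis vector similarity search for caching",
    "baking sourdough bread at home",
]
print(hybrid_search("vector search caching", docs)[0])
```

Hybrid retrieval helps because the two signals fail differently: lexical matching misses paraphrases, while pure semantic matching can surface topically related but keyword-irrelevant passages.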


