
Redis Vector Library (RedisVL) -- the AI-native Python client for Redis.

redis/redis-vl-python

Redis 🔥 Vector Library

The *AI-native* Redis Python client


Home | Documentation | Recipes

Introduction

Redis Vector Library is the ultimate Python client designed for AI-native applications harnessing the power of Redis.

redisvl is your go-to client for:

  • Lightning-fast information retrieval & vector similarity search
  • Real-time RAG pipelines
  • Agentic memory structures
  • Smart recommendation engines

💪 Getting Started

Installation

Install redisvl into your Python (>=3.8) environment using pip:

```shell
pip install redisvl
```

For more detailed instructions, visit the installation guide.

Setting up Redis

Choose from multiple Redis deployment options:

  1. Redis Cloud: Managed cloud database (free tier available)
  2. Redis Stack: Docker image for development
    docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
  3. Redis Enterprise: Commercial, self-hosted database
  4. Azure Managed Redis: Fully managed Redis Enterprise on Azure

Enhance your experience and observability with the free Redis Insight GUI.

Overview

🗃️ Redis Index Management

  1. Design a schema for your use case that models your dataset with built-in Redis and indexable fields (e.g. text, tags, numerics, geo, and vectors). Load a schema from a YAML file:

    ```yaml
    index:
      name: user-idx
      prefix: user
      storage_type: json

    fields:
      - name: user
        type: tag
      - name: credit_score
        type: tag
      - name: embedding
        type: vector
        attrs:
          algorithm: flat
          dims: 4
          distance_metric: cosine
          datatype: float32
    ```

    ```python
    from redisvl.schema import IndexSchema

    schema = IndexSchema.from_yaml("schemas/schema.yaml")
    ```

    Or load directly from a Python dictionary:

    ```python
    schema = IndexSchema.from_dict({
        "index": {
            "name": "user-idx",
            "prefix": "user",
            "storage_type": "json",
        },
        "fields": [
            {"name": "user", "type": "tag"},
            {"name": "credit_score", "type": "tag"},
            {
                "name": "embedding",
                "type": "vector",
                "attrs": {
                    "algorithm": "flat",
                    "datatype": "float32",
                    "dims": 4,
                    "distance_metric": "cosine",
                },
            },
        ],
    })
    ```
  2. Create a SearchIndex class with an input schema to perform admin and search operations on your index in Redis:

    ```python
    from redis import Redis
    from redisvl.index import SearchIndex

    # Define the index
    index = SearchIndex(schema, redis_url="redis://localhost:6379")

    # Create the index in Redis
    index.create()
    ```

    An async-compatible index class is also available: AsyncSearchIndex.

  3. Load and fetch data to/from your Redis instance:

    ```python
    data = {
        "user": "john",
        "credit_score": "high",
        "embedding": [0.23, 0.49, -0.18, 0.95],
    }

    # load list of dictionaries, specify the "id" field
    index.load([data], id_field="user")

    # fetch by "id"
    john = index.fetch("john")
    ```
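Each loaded record lands at a Redis key built from the schema's prefix and the chosen id field; a minimal sketch of that convention, assuming the `user` prefix from the schema above (illustrative, not the library's code):

```python
# Hypothetical illustration of the key convention: with prefix "user"
# and id_field "user", the record above is addressable as "user:john".
prefix, id_value = "user", "john"
key = f"{prefix}:{id_value}"
print(key)  # user:john
```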

🔍 Retrieval

Define queries and perform advanced searches over your indices, including the combination of vectors, metadata filters, and more.

  • VectorQuery - Flexible vector queries with customizable filters enabling semantic search:

    ```python
    from redisvl.query import VectorQuery

    query = VectorQuery(
        vector=[0.16, -0.34, 0.98, 0.23],
        vector_field_name="embedding",
        num_results=3,
    )

    # run the vector search query against the embedding field
    results = index.query(query)
    ```

    Incorporate complex metadata filters on your queries:

    ```python
    from redisvl.query.filter import Tag

    # define a tag match filter
    tag_filter = Tag("user") == "john"

    # update query definition
    query.set_filter(tag_filter)

    # execute query
    results = index.query(query)
    ```
  • RangeQuery - Vector search within a defined range paired with customizable filters

  • FilterQuery - Standard search using filters and full-text search

  • CountQuery - Count the number of indexed records given attributes
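The query types above differ mainly in how they constrain results. As an illustrative sketch (plain Python, not the redisvl API), a range query's core semantics amount to keeping only the candidates whose vector distance falls within a radius:

```python
# Toy candidates as (id, precomputed vector distance) pairs; in redisvl
# the distances come from the vector index, these values are made up.
candidates = [("doc1", 0.12), ("doc2", 0.35), ("doc3", 0.61)]

radius = 0.4  # the range query's distance threshold
within_range = [doc for doc, dist in candidates if dist <= radius]
print(within_range)  # ['doc1', 'doc2']
```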

Read more about building advanced Redis queries.

🔧 Utilities

Vectorizers

Integrate with popular embedding providers to greatly simplify the process of vectorizing unstructured data for your index and queries:

```python
from redisvl.utils.vectorize import CohereTextVectorizer

# set COHERE_API_KEY in your environment
co = CohereTextVectorizer()

embedding = co.embed(
    text="What is the capital city of France?",
    input_type="search_query",
)

embeddings = co.embed_many(
    texts=["my document chunk content", "my other document chunk content"],
    input_type="search_document",
)
```

Learn more about using vectorizers in your embedding workflows.
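When testing embedding workflows without provider credentials, a stand-in that mimics the same embed/embed_many call shape can be handy. This `FakeTextVectorizer` is a hypothetical sketch, not part of redisvl; the hashing trick yields deterministic fixed-width vectors, not semantically meaningful embeddings:

```python
import hashlib
from typing import List

class FakeTextVectorizer:
    """Hypothetical stand-in for a redisvl vectorizer -- tests only."""

    def __init__(self, dims: int = 4):
        self.dims = dims

    def embed(self, text: str, **kwargs) -> List[float]:
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        # map the first `dims` bytes of the hash to floats in [0, 1)
        return [b / 256 for b in digest[: self.dims]]

    def embed_many(self, texts: List[str], **kwargs) -> List[List[float]]:
        return [self.embed(t, **kwargs) for t in texts]

vec = FakeTextVectorizer(dims=4)
print(len(vec.embed("What is the capital city of France?")))  # 4
print(len(vec.embed_many(["chunk one", "chunk two"])))        # 2
```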

Rerankers

Integrate with popular reranking providers to improve the relevancy of the initial search results from Redis.

💫 Extensions

We're excited to announce support for RedisVL Extensions. These modules implement interfaces exposing best practices and design patterns for working with LLM memory and agents. We've taken the best of what we've learned from our users (that's you) as well as bleeding-edge customers, and packaged it up.

Have an idea for another extension? Open a PR or reach out to us at applied.ai@redis.com. We're always open to feedback.

LLM Semantic Caching

Increase application throughput and reduce the cost of using LLM models in production by leveraging previously generated knowledge with the SemanticCache.

```python
from redisvl.extensions.llmcache import SemanticCache

# init cache with TTL and semantic distance threshold
llmcache = SemanticCache(
    name="llmcache",
    ttl=360,
    redis_url="redis://localhost:6379",
    distance_threshold=0.1,
)

# store user queries and LLM responses in the semantic cache
llmcache.store(
    prompt="What is the capital city of France?",
    response="Paris",
)

# quickly check the cache with a slightly different prompt (before invoking an LLM)
response = llmcache.check(prompt="What is France's capital city?")
print(response[0]["response"])
```
>>> Paris
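The distance_threshold above governs when a cached answer is reused: a hit requires the cosine distance between the incoming prompt's embedding and a stored prompt's embedding to fall below the threshold. A self-contained sketch of that comparison, using made-up vectors rather than real embeddings:

```python
import math

def cosine_distance(a, b):
    # cosine distance = 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# made-up embeddings for a stored prompt and a slightly different incoming prompt
stored = [0.12, 0.45, 0.31, 0.80]
incoming = [0.10, 0.47, 0.30, 0.79]

distance = cosine_distance(stored, incoming)
print(distance < 0.1)  # True: close enough to count as a cache hit
```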

Learn more about semantic caching for LLMs.

LLM Session Management

Improve personalization and accuracy of LLM responses by providing user chat history as context. Manage access to the session data using recency or relevancy, powered by vector search with the SemanticSessionManager.

```python
from redisvl.extensions.session_manager import SemanticSessionManager

session = SemanticSessionManager(
    name="my-session",
    redis_url="redis://localhost:6379",
    distance_threshold=0.7,
)

session.add_messages([
    {"role": "user", "content": "hello, how are you?"},
    {"role": "assistant", "content": "I'm doing fine, thanks."},
    {"role": "user", "content": "what is the weather going to be today?"},
    {"role": "assistant", "content": "I don't know"},
])

Get recent chat history:

session.get_recent(top_k=1)
>>> [{"role": "assistant", "content": "I don't know"}]

Get relevant chat history (powered by vector search):

session.get_relevant("weather", top_k=1)
>>> [{"role": "user", "content": "what is the weather going to be today?"}]
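The two retrieval modes can be contrasted in plain Python. This sketch uses token overlap as a stand-in for the vector similarity redisvl actually computes, and is illustrative only:

```python
messages = [
    {"role": "user", "content": "hello, how are you?"},
    {"role": "assistant", "content": "I'm doing fine, thanks."},
    {"role": "user", "content": "what is the weather going to be today?"},
    {"role": "assistant", "content": "I don't know"},
]

def get_recent(msgs, top_k):
    # recency: simply take the last k messages
    return msgs[-top_k:]

def get_relevant(msgs, query, top_k):
    # relevance: rank by similarity to the query (token overlap here,
    # vector similarity in the real library)
    query_tokens = set(query.lower().split())
    def score(msg):
        return len(query_tokens & set(msg["content"].lower().split()))
    return sorted(msgs, key=score, reverse=True)[:top_k]

print(get_recent(messages, 1)[0]["content"])               # I don't know
print(get_relevant(messages, "weather", 1)[0]["content"])  # the weather question
```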

Learn more about LLM session management.

LLM Semantic Routing

Build fast decision models that run directly in Redis and route user queries to the nearest "route" or "topic".

```python
from redisvl.extensions.router import Route, SemanticRouter

routes = [
    Route(
        name="greeting",
        references=["hello", "hi"],
        metadata={"type": "greeting"},
        distance_threshold=0.3,
    ),
    Route(
        name="farewell",
        references=["bye", "goodbye"],
        metadata={"type": "farewell"},
        distance_threshold=0.3,
    ),
]

# build semantic router from routes
router = SemanticRouter(
    name="topic-router",
    routes=routes,
    redis_url="redis://localhost:6379",
)

router("Hi, good morning")
```
>>> RouteMatch(name='greeting', distance=0.273891836405)
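Conceptually, the router embeds the query, finds the route whose reference is nearest, and accepts it only if the distance clears that route's threshold. A toy sketch of that decision rule, with a character-overlap distance standing in for embedding distance (not the library's implementation):

```python
# Toy routes mirroring the example above; distance_threshold bounds how
# far a query may be from a reference and still match the route.
routes = {
    "greeting": {"references": ["hello", "hi"], "distance_threshold": 0.3},
    "farewell": {"references": ["bye", "goodbye"], "distance_threshold": 0.3},
}

def toy_distance(a: str, b: str) -> float:
    # Jaccard distance over characters -- a crude stand-in for
    # embedding distance, used only to make the sketch runnable
    sa, sb = set(a.lower()), set(b.lower())
    return 1.0 - len(sa & sb) / len(sa | sb)

def route_query(query: str):
    best_name, best_dist = None, float("inf")
    for name, cfg in routes.items():
        # distance to the route = distance to its nearest reference
        dist = min(toy_distance(query, ref) for ref in cfg["references"])
        if dist < best_dist:
            best_name, best_dist = name, dist
    # accept only when the nearest route clears its own threshold
    if best_name is not None and best_dist <= routes[best_name]["distance_threshold"]:
        return best_name, best_dist
    return None, best_dist

print(route_query("hi"))  # ('greeting', 0.0)
```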

Learn more about semantic routing.

🖥️ Command Line Interface

Create, destroy, and manage Redis index configurations from a purpose-built CLI interface:rvl.

```shell
$ rvl -h

usage: rvl <command> [<args>]

Commands:
        index       Index manipulation (create, delete, etc.)
        version     Obtain the version of RedisVL
        stats       Obtain statistics about an index
```

Read more about using the CLI.

🚀 Why RedisVL?

In the age of GenAI, vector databases and LLMs are transforming information retrieval systems. With emerging and popular frameworks like LangChain and LlamaIndex, innovation is rapid. Yet, many organizations face the challenge of delivering AI solutions quickly and at scale.

Enter Redis – a cornerstone of the NoSQL world, renowned for its versatile data structures and processing engines. Redis excels in real-time workloads like caching, session management, and search. It's also a powerhouse as a vector database for RAG, an LLM cache, and a chat session memory store for conversational AI.

The Redis Vector Library bridges the gap between the AI-native developer ecosystem and Redis's robust capabilities. With a lightweight, elegant, and intuitive interface, RedisVL makes it easy to leverage Redis's power. Built on the Redis Python client, redisvl transforms Redis's features into a grammar perfectly aligned with the needs of today's AI/ML Engineers and Data Scientists.

😁 Helpful Links

For additional help, check out the following resources:

🫱🏼‍🫲🏽 Contributing

Please help us by contributing PRs, opening GitHub issues for bugs or new feature ideas, improving documentation, or increasing test coverage. Read more about how to contribute!

🚧 Maintenance

This project is supported by Redis, Inc. on a good faith effort basis. To report bugs, request features, or receive assistance, please file an issue.

