
CocoIndex

Data transformation for AI


Ultra-performant data transformation framework for AI, with its core engine written in Rust. It supports incremental processing and data lineage out of the box. Exceptional developer velocity. Production-ready from day 0.

⭐ Drop a star to help us grow!


CocoIndex Transformation


CocoIndex makes it super easy to transform data with AI workloads and keeps source data and targets in sync effortlessly.


CocoIndex Features


Whether you are creating embeddings, building knowledge graphs, or running any other data transformation, CocoIndex goes beyond traditional SQL.

Exceptional velocity

Just declare transformations as a dataflow in ~100 lines of Python:

# import
data['content'] = flow_builder.add_source(...)

# transform
data['out'] = (
    data['content']
    .transform(...)
    .transform(...)
)

# collect data
collector.collect(...)

# export to db, vector db, graph db ...
collector.export(...)

CocoIndex follows the idea of the dataflow programming model. Each transformation creates a new field solely from its input fields, without hidden state or value mutation. All data before and after each transformation is observable, with lineage out of the box.

In particular, developers never explicitly mutate data by creating, updating, or deleting records; they simply define transformations (formulas) over a set of source data, as illustrated below.
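For example, here is a minimal sketch using the chunking step from the full flow further down (the parameters are illustrative):

# Each .transform() derives a new field from its inputs; nothing is mutated in place.
with data_scope["documents"].row() as doc:
    # `doc["content"]` stays untouched; `doc["chunks"]` is a new, derived field
    # whose lineage (content -> chunks) is tracked by the engine.
    doc["chunks"] = doc["content"].transform(
        cocoindex.functions.SplitRecursively(),
        language="markdown", chunk_size=2000, chunk_overlap=500)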

Build like LEGO

Native built-ins for different sources, targets, and transformations. Standardized interfaces make switching between components a one-line code change, as sketched below.
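For instance, a hedged sketch of swapping the export target (the Qdrant target and its collection_name parameter are assumptions; verify them against the docs):

# Same collector, different target: only the target argument changes.
doc_embeddings.export(
    "doc_embeddings",
    cocoindex.targets.Postgres(),
    # cocoindex.targets.Qdrant(collection_name="doc_embeddings"),  # swap the target in one line (parameter name assumed)
    primary_key_fields=["filename", "location"])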

CocoIndex Features

Data Freshness

CocoIndex keeps source data and targets in sync effortlessly.

Incremental Processing

It has out-of-the-box support for incremental indexing (see the sketch after this list):

  • minimal recomputation when the source data or the transformation logic changes
  • only the necessary portions are (re-)processed; cached results are reused whenever possible
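As a rough sketch of what this looks like in practice (cocoindex.init() and the setup()/update() methods on the flow object follow the CocoIndex docs, but verify the names against your installed version):

import cocoindex

# `text_embedding_flow` is the flow defined in the Quick Start below.
cocoindex.init()                      # load settings (e.g. the Postgres URL) from the environment

text_embedding_flow.setup()           # create target tables/indexes if they don't exist yet
stats = text_embedding_flow.update()  # one-off pass: only new or changed source rows are reprocessed
print(stats)                          # summary of rows added / updated / reused from cache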

Quick Start:

If you're new to CocoIndex, we recommend checking out the Quick Start Guide.

Setup

  1. Install the CocoIndex Python library:
pip install -U cocoindex
  2. Install Postgres if you don't have one. CocoIndex uses it for incremental processing (a connection sketch follows).
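A minimal sketch for pointing CocoIndex at that Postgres instance, assuming the connection is read from the COCOINDEX_DATABASE_URL environment variable (variable name per the docs; the URL below is a placeholder):

import os

# Placeholder connection string; point it at your own Postgres instance.
os.environ["COCOINDEX_DATABASE_URL"] = "postgres://cocoindex:cocoindex@localhost:5432/cocoindex"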

Define data flow

Follow the Quick Start Guide to define your first indexing flow. An example flow looks like this:

@cocoindex.flow_def(name="TextEmbedding")
def text_embedding_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    # Add a data source to read files from a directory
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="markdown_files"))

    # Add a collector for data to be exported to the vector index
    doc_embeddings = data_scope.add_collector()

    # Transform data of each document
    with data_scope["documents"].row() as doc:
        # Split the document into chunks, put into `chunks` field
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=2000, chunk_overlap=500)

        # Transform data of each chunk
        with doc["chunks"].row() as chunk:
            # Embed the chunk, put into `embedding` field
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))

            # Collect the chunk into the collector.
            doc_embeddings.collect(
                filename=doc["filename"], location=chunk["location"],
                text=chunk["text"], embedding=chunk["embedding"])

    # Export collected data to a vector index.
    doc_embeddings.export(
        "doc_embeddings",
        cocoindex.targets.Postgres(),
        primary_key_fields=["filename", "location"],
        vector_indexes=[
            cocoindex.VectorIndexDef(
                field_name="embedding",
                metric=cocoindex.VectorSimilarityMetric.COSINE_SIMILARITY)])

It defines an index flow like this:

Data Flow
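Once the flow has run, the exported table can be queried like any pgvector-backed index. A hedged sketch follows: the connection string and table name are placeholders (CocoIndex derives the table name from the flow and export names, so check your database for the actual one), and it assumes the psycopg, pgvector, and sentence-transformers packages are installed.

import psycopg
from pgvector.psycopg import register_vector
from sentence_transformers import SentenceTransformer

# Embed the query with the same model the flow used for indexing.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query_vec = model.encode("how does incremental processing work?")

with psycopg.connect("postgres://cocoindex:cocoindex@localhost:5432/cocoindex") as conn:
    register_vector(conn)  # let psycopg send numpy arrays as pgvector values
    rows = conn.execute(
        "SELECT filename, text, embedding <=> %s AS distance "
        "FROM textembedding__doc_embeddings "  # hypothetical table name
        "ORDER BY distance LIMIT 5",
        (query_vec,),
    ).fetchall()
    for filename, text, distance in rows:
        print(f"{distance:.3f}  {filename}: {text[:80]}")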

🚀 Examples and demo

Example | Description
Text Embedding | Index text documents with embeddings for semantic search
Code Embedding | Index code embeddings for semantic search
PDF Embedding | Parse PDFs and index text embeddings for semantic search
Manuals LLM Extraction | Extract structured information from a manual using an LLM
Amazon S3 Embedding | Index text documents from Amazon S3
Azure Blob Storage Embedding | Index text documents from Azure Blob Storage
Google Drive Text Embedding | Index text documents from Google Drive
Docs to Knowledge Graph | Extract relationships from Markdown documents and build a knowledge graph
Embeddings to Qdrant | Index documents in a Qdrant collection for semantic search
FastAPI Server with Docker | Run the semantic search server in a Dockerized FastAPI setup
Product Recommendation | Build real-time product recommendations with an LLM and a graph database
Image Search with Vision API | Generate detailed captions for images with a vision model, embed them, and enable live-updating semantic search via FastAPI with a React frontend
Paper Metadata | Index papers in PDF files and build metadata tables for each paper

More are coming, so stay tuned 👀!

📖 Documentation

For detailed documentation, visit the CocoIndex Documentation, including the Quickstart guide.

🤝 Contributing

We love contributions from our community ❤️. For details on contributing or running the project for development, check out our contributing guide.

👥 Community

Welcome, with a huge coconut hug 🥥⋆。˚🤗. We are excited about community contributions of all kinds: code improvements, documentation updates, issue reports, feature requests, and discussions in our Discord.

Join our community here:

Support us:

We are constantly improving, and more features and examples are coming soon. If you love this project, please drop us a star ⭐ on the GitHub repo to stay tuned and help us grow.

License

CocoIndex is Apache 2.0 licensed.

