Ultra performant data transformation framework for AI, with the core engine written in Rust. Supports incremental processing and data lineage out of the box. Exceptional developer velocity. Production-ready from day 0.
⭐ Drop a star to help us grow!
CocoIndex makes it effortless to transform data with AI and keep source data and targets in sync. Whether you're building a vector index, creating knowledge graphs for context engineering, or performing any custom data transformation, CocoIndex goes beyond SQL.

Just declare the transformation as a dataflow, in ~100 lines of Python:
```python
# import
data['content'] = flow_builder.add_source(...)

# transform
data['out'] = (data['content']
    .transform(...)
    .transform(...))

# collect data
collector.collect(...)

# export to db, vector db, graph db ...
collector.export(...)
```
CocoIndex follows the idea of the dataflow programming model. Each transformation creates a new field solely based on input fields, without hidden states or value mutation. All data before and after each transformation is observable, with lineage out of the box.

In particular, developers don't explicitly mutate data by creating, updating, and deleting records; they just define transformations/formulas for a set of source data.
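For instance, a custom transformation is just a pure function of its inputs. Below is a minimal sketch, assuming CocoIndex's custom-function decorator `cocoindex.op.function()`; the `word_count` function and the field names in the comment are hypothetical, not part of the library:

```python
import cocoindex

# Hypothetical custom transformation: it derives a new value purely from its
# input, with no hidden state and no mutation of existing fields.
@cocoindex.op.function()
def word_count(text: str) -> int:
    return len(text.split())

# Inside a flow it composes like any builtin function (sketch only):
#   doc["n_words"] = doc["content"].transform(word_count)
```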
Native builtins for different sources, targets, and transformations. The standardized interface makes switching between components a one-line code change (sketched after the example flow below), as easy as assembling building blocks.
CocoIndex keeps source data and targets in sync effortlessly.

It has out-of-the-box support for incremental indexing:

- minimal recomputation on source or logic changes;
- (re-)processing only the necessary portions, reusing cached results where possible (sketched below).
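As a rough sketch of what this looks like in practice (the `setup()`/`update()` method names and the `my_flow` object are assumptions here; see the Quickstart for the exact run commands):

```python
import cocoindex

cocoindex.init()   # connect to the Postgres-backed internal storage

# `my_flow` stands for a flow object defined with @cocoindex.flow_def (hypothetical name).
my_flow.setup()    # create target tables / indexes if they don't exist yet
my_flow.update()   # first run: processes all source rows

# ... some source files change, or one transformation's parameters are tweaked ...

my_flow.update()   # re-run: only the affected rows and fields are recomputed;
                   # unchanged work is reused from cache
```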
If you're new to CocoIndex, we recommend checking out the documentation and the Quickstart guide. To get started:
- Install the CocoIndex Python library:

```sh
pip install -U cocoindex
```

- Install Postgres if you don't have one. CocoIndex uses it for incremental processing.
- (Optional) Install the Claude Code skill for an enhanced development experience. Run these commands in Claude Code:
```sh
/plugin marketplace add cocoindex-io/cocoindex-claude
/plugin install cocoindex-skills@cocoindex
```

Follow the Quick Start Guide to define your first indexing flow. An example flow looks like:
```python
import cocoindex

@cocoindex.flow_def(name="TextEmbedding")
def text_embedding_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    # Add a data source to read files from a directory
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="markdown_files"))

    # Add a collector for data to be exported to the vector index
    doc_embeddings = data_scope.add_collector()

    # Transform data of each document
    with data_scope["documents"].row() as doc:
        # Split the document into chunks, put into `chunks` field
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=2000, chunk_overlap=500)

        # Transform data of each chunk
        with doc["chunks"].row() as chunk:
            # Embed the chunk, put into `embedding` field
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))

            # Collect the chunk into the collector.
            doc_embeddings.collect(
                filename=doc["filename"], location=chunk["location"],
                text=chunk["text"], embedding=chunk["embedding"])

    # Export collected data to a vector index.
    doc_embeddings.export(
        "doc_embeddings",
        cocoindex.targets.Postgres(),
        primary_key_fields=["filename", "location"],
        vector_indexes=[cocoindex.VectorIndexDef(
            field_name="embedding",
            metric=cocoindex.VectorSimilarityMetric.COSINE_SIMILARITY)])
```
The flow above reads markdown files from a local directory, splits each document into chunks, embeds each chunk, and exports the embeddings to a Postgres vector index.
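To illustrate the one-line component switch mentioned earlier: sending the same embeddings to a different vector store only changes the target argument of the `export` call. This is a sketch; the Qdrant target's parameters below are assumptions, so check the Embeddings to Qdrant example for the exact form.

```python
# In the flow above, only the target component line changes (sketch):
doc_embeddings.export(
    "doc_embeddings",
    # cocoindex.targets.Postgres(),                              # before
    cocoindex.targets.Qdrant(collection_name="doc_embeddings"),  # after (parameters assumed)
    primary_key_fields=["filename", "location"],
    # remaining arguments elided; target-specific options may differ
)
```

You can find more examples below: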
| Example | Description |
|---|---|
| Text Embedding | Index text documents with embeddings for semantic search |
| Code Embedding | Index code embeddings for semantic search |
| PDF Embedding | Parse PDF and index text embeddings for semantic search |
| PDF Elements Embedding | Extract text and images from PDFs; embed text with SentenceTransformers and images with CLIP; store in Qdrant for multimodal search |
| Manuals LLM Extraction | Extract structured information from a manual using an LLM |
| Amazon S3 Embedding | Index text documents from Amazon S3 |
| Azure Blob Storage Embedding | Index text documents from Azure Blob Storage |
| Google Drive Text Embedding | Index text documents from Google Drive |
| Meeting Notes to Knowledge Graph | Extract structured meeting info from Google Drive and build a knowledge graph |
| Docs to Knowledge Graph | Extract relationships from Markdown documents and build a knowledge graph |
| Embeddings to Qdrant | Index documents in a Qdrant collection for semantic search |
| Embeddings to LanceDB | Index documents in a LanceDB collection for semantic search |
| FastAPI Server with Docker | Run the semantic search server in a Dockerized FastAPI setup |
| Product Recommendation | Build real-time product recommendations with an LLM and a graph database |
| Image Search with Vision API | Generate detailed captions for images with a vision model, embed them, and enable live-updating semantic search via FastAPI, served on a React frontend |
| Face Recognition | Recognize faces in images and build embedding index |
| Paper Metadata | Index papers in PDF files, and build metadata tables for each paper |
| Multi Format Indexing | Build visual document index from PDFs and images with ColPali for semantic search |
| Custom Source HackerNews | Index HackerNews threads and comments, using CocoIndex Custom Source |
| Custom Output Files | Convert markdown files to HTML files and save them to a local directory, using CocoIndex Custom Targets |
| Patient Intake Form Extraction | Use an LLM to extract structured data from patient intake forms in different formats |
| HackerNews Trending Topics | Extract trending topics from HackerNews threads and comments, using CocoIndex Custom Source and an LLM |
| Patient Intake Form Extraction with BAML | Extract structured data from patient intake forms using BAML |
| Patient Intake Form Extraction with DSPy | Extract structured data from patient intake forms using DSPy |
More examples are coming, so stay tuned 👀!
For detailed documentation, visit CocoIndex Documentation, including a Quickstart guide.
We love contributions from our community ❤️. For details on contributing or running the project for development, check out our contributing guide.

Welcome with a huge coconut hug 🥥⋆。˚🤗. We are super excited about community contributions of all kinds, whether it's code improvements, documentation updates, issue reports, feature requests, or discussions in our Discord.
Join our community on Discord. We are constantly improving, and more features and examples are coming soon. If you love this project, please drop us a star ⭐ on the GitHub repo to stay tuned and help us grow.
CocoIndex is Apache 2.0 licensed.