Sync API

Reproducibility is critical for AI. For code, it's easy to keep track of changes using GitHub or GitLab. For data, it's not as easy. Most of the time, we end up manually writing complicated data-tracking code, wrestling with an external tool, and dealing with expensive duplicate snapshot copies with low granularity.

With most other vector databases, if we load in the wrong data (or make any other such mistake), we have to blow away the index, correct the mistake, and then completely rebuild it. It's really difficult to roll back to an earlier state, and any such corrective action destroys historical data and evidence, which may be useful down the line to debug and diagnose issues.

To our knowledge, LanceDB is the first and only vector database that supports full reproducibility and rollbacks natively. Taking advantage of the Lance columnar data format, LanceDB supports:

  • Automatic versioning
  • Instant rollback
  • Appends, updates, deletions
  • Schema evolution

This makes auditing, tracking, and reproducibility a breeze!

Let's see how this all works.

Pickle Rick!

Let's first prepare the data. We will be using a CSV file with a bunch of quotes from Rick and Morty:

In [1]:
!wget http://vectordb-recipes.s3.us-west-2.amazonaws.com/rick_and_morty_quotes.csv
!head rick_and_morty_quotes.csv
--2024-12-17 11:54:43--  http://vectordb-recipes.s3.us-west-2.amazonaws.com/rick_and_morty_quotes.csv
Resolving vectordb-recipes.s3.us-west-2.amazonaws.com (vectordb-recipes.s3.us-west-2.amazonaws.com)... 52.92.138.34, 3.5.82.160, 52.218.236.161, ...
Connecting to vectordb-recipes.s3.us-west-2.amazonaws.com (vectordb-recipes.s3.us-west-2.amazonaws.com)|52.92.138.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8236 (8.0K) [text/csv]
Saving to: ‘rick_and_morty_quotes.csv.1’

rick_and_morty_quot 100%[===================>]   8.04K  --.-KB/s    in 0s

2024-12-17 11:54:43 (77.8 MB/s) - ‘rick_and_morty_quotes.csv.1’ saved [8236/8236]

id,author,quote
1,Rick," Morty, you got to come on. You got to come with me."
2,Morty," Rick, what’s going on?"
3,Rick," I got a surprise for you, Morty."
4,Morty," It’s the middle of the night. What are you talking about?"
5,Rick," I got a surprise for you."
6,Morty," Ow! Ow! You’re tugging me too hard."
7,Rick," I got a surprise for you, Morty."
8,Rick," What do you think of this flying vehicle, Morty? I built it out of stuff I found in the garage."
9,Morty," Yeah, Rick, it’s great. Is this the surprise?"

Let's load this into a pandas dataframe.

It's got three columns: a quote id, the author's first name, and the quote itself:

In [2]:
import pandas as pd

df = pd.read_csv("rick_and_morty_quotes.csv")
df.head()
Out[2]:
   id author                                              quote
0   1   Rick  Morty, you got to come on. You got to come wi...
1   2  Morty                             Rick, what’s going on?
2   3   Rick                   I got a surprise for you, Morty.
3   4  Morty  It’s the middle of the night. What are you ta...
4   5   Rick                          I got a surprise for you.

We'll start with a local LanceDB connection:

In [3]:
!pip install lancedb -q
In [ ]:
import lancedb

db = lancedb.connect("~/.lancedb")

Creating a LanceDB table from a pandas dataframe is straightforward using create_table:

In [5]:
db.drop_table("rick_and_morty", ignore_missing=True)
table = db.create_table("rick_and_morty", df)
table.head().to_pandas()
Out[5]:
   id author                                              quote
0   1   Rick  Morty, you got to come on. You got to come wi...
1   2  Morty                             Rick, what’s going on?
2   3   Rick                   I got a surprise for you, Morty.
3   4  Morty  It’s the middle of the night. What are you ta...
4   5   Rick                          I got a surprise for you.

Updates

Now, since Rick is the smartest man in the multiverse, he deserves to have his quotes attributed to his full name: Richard Daniel Sanchez.

This can be done via LanceTable.update. It needs two arguments:

  1. A where string filter (SQL syntax) to determine the rows to update
  2. A dict of values where the keys are the column names to update and the values are the new values
In [6]:
table.update(where="author='Rick'", values={"author": "Richard Daniel Sanchez"})
table.to_pandas()
Out[6]:
    id                  author                                              quote
0    2                   Morty                             Rick, what’s going on?
1    4                   Morty  It’s the middle of the night. What are you ta...
2    6                   Morty                Ow! Ow! You’re tugging me too hard.
3    9                   Morty      Yeah, Rick, it’s great. Is this the surprise?
4   11                   Morty                                    What?! A bomb?!
..  ...                    ...                                                ...
94  80  Richard Daniel Sanchez  There you are, Morty. Listen to me. I got an ...
95  82  Richard Daniel Sanchez  It’s pretty obvious, Morty. I froze him. Now ...
96  84  Richard Daniel Sanchez  Do you have any concept of how much higher th...
97  86  Richard Daniel Sanchez  I’ll do it later, Morty. He’ll be fine. Let’s...
98  97  Richard Daniel Sanchez  There she is. All right. Come on, Morty. Let’...

99 rows × 3 columns

Schema evolution

OK, so this is a vector database, which means we need actual vectors. We'll use Sentence Transformers here to avoid having to deal with API keys.

Let's create a basic model using the "all-MiniLM-L6-v2" model and embed the quotes:

In [7]:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")
vectors = model.encode(
    df.quote.values.tolist(),
    convert_to_numpy=True,
    normalize_embeddings=True,
).tolist()

We can then convert the vectors into a pyarrow Table and merge it into the LanceDB Table.

For the merge to work successfully, we need to have an overlapping column. Here the natural choice is to use the id column:

In [8]:
from lance.vector import vec_to_table
import numpy as np
import pyarrow as pa
In [9]:
embeddings = vec_to_table(vectors)
embeddings = embeddings.append_column("id", pa.array(np.arange(len(table)) + 1))
embeddings.to_pandas().head()
Out[9]:
                                              vector  id
0  [-0.10369808, -0.038807657, -0.07471153, -0.05...   1
1  [-0.11813704, -0.0533092, 0.025554786, -0.0242...   2
2  [-0.09807682, -0.035231438, -0.04206024, -0.06...   3
3  [0.032292824, 0.038136397, 0.013615396, 0.0335...   4
4  [-0.050369408, -0.0043397923, 0.013419108, -0....   5

And now we'll use the LanceTable.merge function to add the vector column into the LanceTable:

In [10]:
table.merge(embeddings, left_on="id")
table.head().to_pandas()
Out[10]:
   id author                                              quote                                             vector
0   2  Morty                             Rick, what’s going on?  [-0.11813704, -0.0533092, 0.025554786, -0.0242...
1   4  Morty  It’s the middle of the night. What are you ta...  [0.032292824, 0.038136397, 0.013615396, 0.0335...
2   6  Morty                Ow! Ow! You’re tugging me too hard.  [-0.035019904, -0.070963725, 0.003859435, -0.0...
3   9  Morty      Yeah, Rick, it’s great. Is this the surprise?  [-0.12578955, -0.019364933, 0.01606114, -0.082...
4  11  Morty                                    What?! A bomb?!  [0.0018287548, 0.07033146, -0.023754105, 0.047...

If we look at the schema, we see that all-MiniLM-L6-v2 produces 384-dimensional vectors:

In [11]:
table.schema
Out[11]:
id: int64
author: string
quote: string
vector: fixed_size_list<item: float>[384]
  child 0, item: float

Rollback

Suppose we used the table and found that the all-MiniLM-L6-v2 model doesn't produce ideal results. Instead, we want to try a larger model. How do we use the new embeddings without losing the change history?

First, note that major operations are automatically versioned in LanceDB. Version 1 is the table creation, with the initial insertion of data. Versions 2 and 3 represent the update (a deletion followed by an append). Version 4 is the addition of the new vector column.

In [12]:
table.list_versions()
Out[12]:
[{'version': 1,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 613932),
  'metadata': {}},
 {'version': 2,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 626525),
  'metadata': {}},
 {'version': 3,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 27, 91378),
  'metadata': {}},
 {'version': 4,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 4, 513085),
  'metadata': {}}]

We can restore version 3, from before we added the vector column:

In [13]:
table.restore(3)
table.head().to_pandas()
Out[13]:
   id author                                              quote
0   2  Morty                             Rick, what’s going on?
1   4  Morty  It’s the middle of the night. What are you ta...
2   6  Morty                Ow! Ow! You’re tugging me too hard.
3   9  Morty      Yeah, Rick, it’s great. Is this the surprise?
4  11  Morty                                    What?! A bomb?!

Notice that we now have one more version, not one fewer. When we restore an old version, we're not deleting the version history; we're creating a new version whose schema and data are equivalent to the restored old version. This way, we keep track of all of the changes and can always roll back to a previous state.
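If you only want to inspect an old version without adding a new entry to the history, lancedb also exposes a read-only time-travel API. Here is a minimal sketch, assuming your lancedb version provides checkout and checkout_latest:

# Read-only time travel: view the table as of version 3
# (unlike restore, this does not create a new version)
table.checkout(3)
print(len(table))

# Return to the latest version before doing any further writes
table.checkout_latest()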

In [14]:
table.list_versions()
Out[14]:
[{'version': 1,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 613932),
  'metadata': {}},
 {'version': 2,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 626525),
  'metadata': {}},
 {'version': 3,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 27, 91378),
  'metadata': {}},
 {'version': 4,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 4, 513085),
  'metadata': {}},
 {'version': 5,
  'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 27, 153807),
  'metadata': {}}]

Switching Models

Now we'll switch to the all-mpnet-base-v2 model and add the vectors to the restored dataset again. Note that this step can take a couple of minutes.

In [ ]:
model = SentenceTransformer("all-mpnet-base-v2", device="cpu")
vectors = model.encode(
    df.quote.values.tolist(),
    convert_to_numpy=True,
    normalize_embeddings=True,
).tolist()
embeddings = vec_to_table(vectors)
embeddings = embeddings.append_column("id", pa.array(np.arange(len(table)) + 1))
table.merge(embeddings, left_on="id")
In [16]:
table.schema
Out[16]:
id: int64
author: string
quote: string
vector: fixed_size_list<item: float>[768]
  child 0, item: float
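As a quick sanity check on the new 768-dimensional embeddings, we can run a similarity search against the table. This is a minimal sketch; the query string is an arbitrary example, and it assumes model is still the all-mpnet-base-v2 instance from above:

# Embed an example query with the same model used for the table's vectors
query = model.encode(
    "I got a surprise for you",
    convert_to_numpy=True,
    normalize_embeddings=True,
).tolist()

# Retrieve the three nearest quotes
table.search(query).limit(3).to_pandas()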

Deletion

What if the whole show was just Rick-isms? Let's delete any quote not said by Rick:

In [17]:
table.delete("author != 'Richard Daniel Sanchez'")

We can see that the number of rows has been reduced to 28:

In [18]:
len(table)
Out[18]:
28

OK, we've had our fun; let's get back to the full quote set:

In [20]:
table.restore(6)
In [21]:
len(table)
Out[21]:
99

History

We now have 8 versions in the data. We can review the operations that correspond to each version below:

In [22]:
table.version
Out[22]:
8

Versions:

  • 1 - Create and append
  • 2 - Update (deletion)
  • 3 - Update (append)
  • 4 - Merge (vector column)
  • 5 - Restore (3)
  • 6 - Merge (new vector column)
  • 7 - Deletion
  • 8 - Restore (6)

Summary

We never had to explicitly manage the versioning, and we never had to create expensive and slow snapshots. LanceDB automatically tracks the full history of operations and supports fast rollbacks. In production, this is critical for debugging issues and minimizing downtime by rolling back to a previously successful state in seconds.
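One practical note: because every version is retained, storage grows with the history. If you eventually want to reclaim space, lancedb exposes maintenance helpers for this. The sketch below is illustrative; it assumes your lancedb version provides cleanup_old_versions and compact_files, and pruned versions can no longer be restored:

from datetime import timedelta

# Drop versions older than two weeks; they can no longer be rolled back to
table.cleanup_old_versions(older_than=timedelta(weeks=2))

# Merge small data files left behind by many incremental writes
table.compact_files()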

