Updated Docs #949

Merged

SilasMarvin merged 1 commit into master from silas-js-documentation-fixes on Aug 25, 2023
132 changes: 66 additions & 66 deletions pgml-sdks/rust/pgml/javascript/README.md
@@ -86,9 +86,9 @@ const main = async () => {
Continuing within `const main`

```javascript
-model = pgml.newModel();
-splitter = pgml.newSplitter();
-pipeline = pgml.Pipeline("my_javascript_pipeline", model, splitter);
+const model = pgml.newModel();
+const splitter = pgml.newSplitter();
+const pipeline = pgml.newPipeline("my_javascript_pipeline", model, splitter);
await collection.add_pipeline(pipeline);
```

@@ -213,7 +213,7 @@ Documents are dictionaries with two required keys: `id` and `text`. All other ke

**Upsert documents with metadata**
```javascript
-documents = [
+const documents = [
  {
    id: "Document 1",
    text: "Here are the contents of Document 1",
@@ -225,7 +225,7 @@ documents = [
    random_key: "this will be metadata for the document"
  }
]
-collection = Collection("test_collection")
+const collection = pgml.newCollection("test_collection")
await collection.upsert_documents(documents)
```

@@ -237,16 +237,16 @@ Pipelines are required to perform search. See the [Pipelines Section](#pipelines

**Basic vector search**
```javascript
-collection = pgml.newCollection("test_collection")
-pipeline = pgml.newPipeline("test_pipeline")
-results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).fetch_all()
+const collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).fetch_all()
```

**Vector search with custom limit**
```javascript
-collection = pgml.newCollection("test_collection")
-pipeline = pgml.newPipeline("test_pipeline")
-results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).limit(10).fetch_all()
+const collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).limit(10).fetch_all()
```

#### Metadata Filtering
@@ -255,15 +255,15 @@ We provide powerful and flexible arbitrarily nested metadata filtering based off

**Vector search with $eq metadata filtering**
```javascript
-collection = pgml.newCollection("test_collection")
-pipeline = pgml.newPipeline("test_pipeline")
-results = await collection.query()
+const collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const results = await collection.query()
  .vector_recall("Here is some query", pipeline)
  .limit(10)
  .filter({
-    "metadata": {
-      "uuid": {
-        "$eq": 1
+    metadata: {
+      uuid: {
+        $eq: 1
      }
    }
  })
@@ -274,15 +274,15 @@ The above query would filter out all documents that do not contain a key `uuid`

**Vector search with $gte metadata filtering**
```javascript
-collection = pgml.newCollection("test_collection")
-pipeline = pgml.newPipeline("test_pipeline")
-results = await collection.query()
+const collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const results = await collection.query()
  .vector_recall("Here is some query", pipeline)
  .limit(10)
  .filter({
-    "metadata": {
-      "index": {
-        "$gte": 3
+    metadata: {
+      index: {
+        $gte: 3
      }
    }
  })
@@ -294,31 +294,31 @@ The above query would filter out all documents that do not contain a key `index`

**Vector search with $or and $and metadata filtering**
```javascript
-collection = pgml.newCollection("test_collection")
-pipeline = pgml.newPipeline("test_pipeline")
-results = await collection.query()
+const collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const results = await collection.query()
  .vector_recall("Here is some query", pipeline)
  .limit(10)
  .filter({
-    "metadata": {
-      "$or": [
+    metadata: {
+      $or: [
        {
-          "$and": [
+          $and: [
            {
-              "$eq": {
-                "uuid": 1
+              uuid: {
+                $eq: 1
              }
            },
            {
-              "$lt": {
-                "index": 100
+              index: {
+                $lt: 100
              }
            }
          ]
        },
        {
-          "special": {
-            "$ne": True
+          special: {
+            $ne: true
          }
        }
      ]
@@ -334,15 +334,15 @@ The above query would filter out all documents that do not have a key `special`
If full text search is enabled for the associated Pipeline, documents can be first filtered by full text search and then recalled by embedding similarity.

```javascript
-collection = pgml.newCollection("test_collection")
-pipeline = pgml.newPipeline("test_pipeline")
-results = await collection.query()
+const collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const results = await collection.query()
  .vector_recall("Here is some query", pipeline)
  .limit(10)
  .filter({
-    "full_text": {
-      "configuration": "english",
-      "text": "Match Me"
+    full_text: {
+      configuration: "english",
+      text: "Match Me"
    }
  })
  .fetch_all()
@@ -362,20 +362,20 @@ Models are used for embedding chunked documents. We support most every open sou

**Create a default Model "intfloat/e5-small" with default parameters: {}**
```javascript
-model = pgml.newModel()
+const model = pgml.newModel()
```

**Create a Model with custom parameters**
```javascript
-model = pgml.newModel(
-  name="hkunlp/instructor-base",
-  parameters={instruction: "Represent the Wikipedia document for retrieval: "}
+const model = pgml.newModel(
+  "hkunlp/instructor-base",
+  {instruction: "Represent the Wikipedia document for retrieval: "}
)
```

**Use an OpenAI model**
```javascript
-model = pgml.newModel(name="text-embedding-ada-002", source="openai")
+const model = pgml.newModel(name="text-embedding-ada-002", source="openai")
```

### Splitters
@@ -384,14 +384,14 @@ Splitters are used to split documents into chunks before embedding them. We supp

**Create a default Splitter "recursive_character" with default parameters: {}**
```javascript
-splitter = pgml.newSplitter()
+const splitter = pgml.newSplitter()
```

**Create a Splitter with custom parameters**
```javascript
-splitter = pgml.newSplitter(
-  name="recursive_character",
-  parameters={chunk_size: 1500, chunk_overlap: 40}
+const splitter = pgml.newSplitter(
+  "recursive_character",
+  {chunk_size: 1500, chunk_overlap: 40}
)
```

@@ -402,9 +402,9 @@ When adding a Pipeline to a collection it is required that Pipeline has a Model
The first time a Pipeline is added to a Collection it will automatically chunk and embed any documents already in that Collection.

```javascript
-model = pgml.newModel()
-splitter = pgml.newSplitter()
-pipeline = pgml.newPipeline("test_pipeline", model, splitter)
+const model = pgml.newModel()
+const splitter = pgml.newSplitter()
+const pipeline = pgml.newPipeline("test_pipeline", model, splitter)
await collection.add_pipeline(pipeline)
```
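Because the first `add_pipeline` call also processes documents already stored in the Collection, upserting before the Pipeline exists works too. A minimal sketch of that ordering, assembled only from calls shown elsewhere in this diff (the document contents are placeholders):

```javascript
// Sketch: documents upserted before the Pipeline exists are still
// chunked and embedded when the Pipeline is first added.
const collection = pgml.newCollection("test_collection")
await collection.upsert_documents([
  { id: "Document 1", text: "Here are the contents of Document 1" },
])

const model = pgml.newModel()
const splitter = pgml.newSplitter()
const pipeline = pgml.newPipeline("test_pipeline", model, splitter)

// Chunks and embeds "Document 1" as part of this call.
await collection.add_pipeline(pipeline)
```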

@@ -415,9 +415,9 @@ Pipelines can take additional arguments enabling full text search. When full tex
For more information on full text search please see: [Postgres Full Text Search](https://www.postgresql.org/docs/15/textsearch.html).

```javascript
-model = pgml.newModel()
-splitter = pgml.newSplitter()
-pipeline = pgml.newPipeline("test_pipeline", model, splitter, {
+const model = pgml.newModel()
+const splitter = pgml.newSplitter()
+const pipeline = pgml.newPipeline("test_pipeline", model, splitter, {
  "full_text_search": {
    active: True,
    configuration: "english"
@@ -431,9 +431,9 @@
Pipelines are a required argument when performing vector search. After a Pipeline has been added to a Collection, the Model and Splitter can be omitted when instantiating it.

```javascript
-pipeline = pgml.newPipeline("test_pipeline")
-collection = pgml.newCollection("test_collection")
-results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).fetch_all()
+const pipeline = pgml.newPipeline("test_pipeline")
+const collection = pgml.newCollection("test_collection")
+const results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).fetch_all()
```

### Enabling, Disabling, and Removing Pipelines
@@ -442,26 +442,26 @@ Pipelines can be disabled or removed to prevent them from running automatically

**Disable a Pipeline**
```javascript
-pipeline = pgml.newPipeline("test_pipeline")
-collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const collection = pgml.newCollection("test_collection")
await collection.disable_pipeline(pipeline)
```

Disabling a Pipeline prevents it from running automatically, but leaves all chunks and embeddings already created by that Pipeline in the database.

**Enable a Pipeline**
```javascript
-pipeline = pgml.newPipeline("test_pipeline")
-collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const collection = pgml.newCollection("test_collection")
await collection.enable_pipeline(pipeline)
```

Enabling a Pipeline will cause it to automatically run and chunk and embed all documents it may have missed while disabled.
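A short sketch of that disable/enable lifecycle, using only the calls documented above (the document contents are placeholders): documents upserted while the Pipeline is disabled are stored but not processed until it is enabled again.

```javascript
const collection = pgml.newCollection("test_collection")
const pipeline = pgml.newPipeline("test_pipeline")

// Existing chunks and embeddings are kept, but the Pipeline stops running.
await collection.disable_pipeline(pipeline)

// This document is stored but not chunked or embedded yet.
await collection.upsert_documents([
  { id: "Document 3", text: "Added while the pipeline was disabled" },
])

// Re-enabling catches up on everything the Pipeline missed.
await collection.enable_pipeline(pipeline)
```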

**Remove a Pipeline**
```javascript
-pipeline = pgml.newPipeline("test_pipeline")
-collection = pgml.newCollection("test_collection")
+const pipeline = pgml.newPipeline("test_pipeline")
+const collection = pgml.newCollection("test_collection")
await collection.remove_pipeline(pipeline)
```

@@ -478,4 +478,4 @@ This javascript library is generated from our core rust-sdk. Please check [rust-
- [x] `hybrid_search` functionality that does a combination of `vector_search` and `text_search`. [Issue](https://github.com/postgresml/postgresml/issues/665)
- [x] Ability to call and manage OpenAI embeddings for comparison purposes. [Issue](https://github.com/postgresml/postgresml/issues/666)
- [x] Perform chunking on the DB with multiple langchain splitters. [Issue](https://github.com/postgresml/postgresml/issues/668)
- [ ] Save `vector_search` history for downstream monitoring of model performance. [Issue](https://github.com/postgresml/postgresml/issues/667)
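Putting the updated snippets together, a minimal end-to-end sketch of the post-PR API might look like the following. The `require("pgml")` import and the dotenv-based environment setup are assumptions based on the `dotenv` and `pgml` dependencies in the package.json below, not something this diff shows.

```javascript
// Assumed setup: the database connection string is read from the environment.
require("dotenv").config()
const pgml = require("pgml")

const main = async () => {
  const collection = pgml.newCollection("test_collection")

  // Create a default Model and Splitter, then register the Pipeline.
  const model = pgml.newModel()
  const splitter = pgml.newSplitter()
  const pipeline = pgml.newPipeline("test_pipeline", model, splitter)
  await collection.add_pipeline(pipeline)

  // Upsert a document; extra keys become metadata.
  await collection.upsert_documents([
    {
      id: "Document 1",
      text: "Here are the contents of Document 1",
      random_key: "this will be metadata for the document",
    },
  ])

  // Vector search, limited to 10 results and filtered on metadata.
  const results = await collection
    .query()
    .vector_recall("Why is PostgresML the best?", pipeline)
    .limit(10)
    .filter({
      metadata: {
        random_key: { $eq: "this will be metadata for the document" },
      },
    })
    .fetch_all()

  console.log(results)
}

main()
```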
@@ -10,6 +10,6 @@
  "license": "ISC",
  "dependencies": {
    "dotenv": "^16.3.1",
-    "pgml": "^0.1.6"
+    "pgml": "^0.9.0"
  }
}
