Update openai switch kit blog and guide #1556

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to ourterms of service andprivacy statement. We’ll occasionally send you account related emails.

Already on GitHub?Sign in to your account

Merged

SilasMarvin merged 3 commits into master from silas-update-openai-switch-kit on Jul 12, 2024
@@ -41,8 +41,8 @@ The Switch Kit is an open-source AI SDK that provides a drop in replacement for
{% tabs %}
{% tab title="JavaScript" %}
```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
const results = client.chat_completions_create(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -62,8 +62,8 @@ console.log(results);

{% tab title="Python" %}
```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
results = client.chat_completions_create(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -117,17 +117,15 @@ The above is an example using our open-source AI SDK with Meta-Llama-3-8B-Instru

Notice there is a near one-to-one relation between the parameters and return type of OpenAI’s `chat.completions.create` and our `chat_completions_create`.
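That near one-to-one mapping can be made concrete with plain Python. The adapter below is purely illustrative and hypothetical (neither SDK ships such a helper); it only shows how OpenAI-style keyword arguments line up with the positional `(model, messages)` shape used by `chat_completions_create`:

```python
# Hypothetical adapter: translate OpenAI-style keyword arguments into the
# positional (model, messages) shape used by chat_completions_create.
# Illustrative only -- neither the openai nor the korvus SDK ships this.
def to_chat_completions_create_args(**kwargs):
    model = kwargs.pop("model")
    messages = kwargs.pop("messages")
    # Remaining keyword arguments (temperature, max_tokens, ...) pass through.
    return (model, messages), kwargs

args, extra = to_chat_completions_create_args(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.7,
)
print(args[0])  # meta-llama/Meta-Llama-3-8B-Instruct
print(extra)    # {'temperature': 0.7}
```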

-The best part of using open-source AI is the flexibility with models. Unlike OpenAI, we are not restricted to using a few censored models, but have access to almost any model out there.
-
-Here is an example of streaming with the popular Mythalion model, an uncensored MythoMax variant designed for chatting.
+Here is an example of streaming:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
const it = client.chat_completions_create_stream(
-"PygmalionAI/mythalion-13b",
+"meta-llama/Meta-Llama-3-8B-Instruct",
[
{
role: "system",
@@ -149,10 +147,10 @@ while (!result.done) {

{% tab title="Python" %}
```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
results = client.chat_completions_create_stream(
-"PygmalionAI/mythalion-13b",
+"meta-llama/Meta-Llama-3-8B-Instruct",
[
{
"role": "system",
@@ -184,7 +182,7 @@ for c in results:
],
"created": 1701296792,
"id": "62a817f5-549b-43e0-8f0c-a7cb204ab897",
-"model": "PygmalionAI/mythalion-13b",
+"model": "meta-llama/Meta-Llama-3-8B-Instruct",
"object": "chat.completion.chunk",
"system_fingerprint": "f366d657-75f9-9c33-8e57-1e6be2cf62f3"
}
@@ -200,7 +198,7 @@ for c in results:
],
"created": 1701296792,
"id": "62a817f5-549b-43e0-8f0c-a7cb204ab897",
-"model": "PygmalionAI/mythalion-13b",
+"model": "meta-llama/Meta-Llama-3-8B-Instruct",
"object": "chat.completion.chunk",
"system_fingerprint": "f366d657-75f9-9c33-8e57-1e6be2cf62f3"
}
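The streamed chunks shown above can be stitched back into the full completion text. Below is a minimal sketch in plain Python; it assumes each chunk follows the OpenAI-compatible shape with `choices[0]["delta"]["content"]` (that field is truncated out of the output above, so treat it as an assumption), and the hard-coded chunks stand in for what the stream iterator would yield:

```python
# Reassemble a streamed completion from chat.completion.chunk-style objects.
# Assumes the OpenAI-compatible delta shape; chunks below are stand-ins for
# what the real stream iterator would yield.
def collect_stream(chunks):
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0].get("delta", {})
        content = delta.get("content")
        if content is not None:
            parts.append(content)
    return "".join(parts)

chunks = [
    {"choices": [{"delta": {"role": "assistant", "content": "Hello"}}],
     "object": "chat.completion.chunk"},
    {"choices": [{"delta": {"content": ", world"}}],
     "object": "chat.completion.chunk"},
]
print(collect_stream(chunks))  # Hello, world
```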
@@ -212,15 +210,15 @@ We have truncated the output to two items

!!!

-We also have asynchronous versions of the create and `create_stream` functions, respectively named `create_async` and `create_stream_async`. Check out [our documentation](https://postgresml.org/docs/introduction/machine-learning/sdks/opensourceai) for a complete guide to the open-source AI SDK, including guides on how to specify custom models.
+We also have asynchronous versions of the create and `create_stream` functions, respectively named `create_async` and `create_stream_async`. Check out [our documentation](https://postgresml.org/docs/guides/opensourceai) for a complete guide to the open-source AI SDK, including guides on how to specify custom models.
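The async variants follow the standard Python `await` pattern. A runnable sketch with a stubbed client standing in for `korvus.OpenSourceAI` (the stub class is hypothetical; only the `_async` method naming comes from the text above):

```python
import asyncio

# Stub standing in for korvus.OpenSourceAI -- the real client queries the
# database; this one just echoes, so the control flow runs anywhere.
class StubOpenSourceAI:
    async def chat_completions_create_async(self, model, messages):
        await asyncio.sleep(0)  # yield control, as a real network call would
        return {
            "model": model,
            "choices": [{"message": {"role": "assistant", "content": "ok"}}],
        }

async def main():
    client = StubOpenSourceAI()
    result = await client.chat_completions_create_async(
        "meta-llama/Meta-Llama-3-8B-Instruct",
        [{"role": "user", "content": "Hello"}],
    )
    print(result["choices"][0]["message"]["content"])  # ok

asyncio.run(main())
```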

-PostgresML is free and open source. To run the above examples yourself, [create an account](https://postgresml.org/signup), install pgml, and get running!
+PostgresML is free and open source. To run the above examples yourself, [create an account](https://postgresml.org/signup), install korvus, and get running!

### Why use open-source models on PostgresML?

PostgresML is a complete MLOps platform in a simple PostgreSQL extension. It’s the tool our team wished they’d had while scaling MLOps at Instacart during its peak years of growth. You can host your database with us or locally. However you want to engage, we know from experience that it’s better to bring your ML workload to the database than to bring the data to the codebase.

-Fundamentally, PostgresML enables PostgreSQL to act as a GPU-powered AI application database — where you can both save models and index data. That eliminates the need for the myriad of separate services you have to tie together for your ML workflow. Pgml + pgvector create a complete ML platform (vector DB, model store, inference service, open-source LLMs) all within open-source extensions for PostgreSQL. That takes a lot of the complexity out of your infra, and it's ultimately faster for your users.
+Fundamentally, PostgresML enables PostgreSQL to act as a GPU-powered AI application database — where you can both save models and index data. That eliminates the need for the myriad of separate services you have to tie together for your ML workflow. pgml + pgvector create a complete ML platform (vector DB, model store, inference service, open-source LLMs) all within open-source extensions for PostgreSQL. That takes a lot of the complexity out of your infra, and it's ultimately faster for your users.

We're bullish on the power of in-database and open-source ML/AI, and we’re excited for you to see the power of this approach yourself. You can try it out in our serverless database for $0, with usage based billing starting at just five cents an hour per GB GPU cache. You can even mess with it for free on our homepage.

pgml-cms/docs/guides/opensourceai.md: 44 changes (22 additions, 22 deletions)
@@ -6,26 +6,26 @@ OpenSourceAI is a drop-in replacement for OpenAI's chat completion endpoint.

Follow the installation section in [getting-started.md](../api/client-sdk/getting-started.md "mention")

-When done, set the environment variable `DATABASE_URL` to your PostgresML database URL.
+When done, set the environment variable `KORVUS_DATABASE_URL` to your PostgresML database URL.

```bash
-export DATABASE_URL=postgres://user:pass@.db.cloud.postgresml.org:6432/pgml
+export KORVUS_DATABASE_URL=postgres://user:pass@.db.cloud.postgresml.org:6432/pgml
```

Note that an alternative to setting the environment variable is passing the URL to the constructor of `OpenSourceAI`:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI(YOUR_DATABASE_URL);
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI(YOUR_DATABASE_URL);
```
{% endtab %}

{% tab title="Python" %}
```python
-import pgml
-client = pgml.OpenSourceAI(YOUR_DATABASE_URL)
+import korvus
+client = korvus.OpenSourceAI(YOUR_DATABASE_URL)
```
{% endtab %}
{% endtabs %}
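The precedence described above (an explicit constructor argument wins, otherwise the environment variable is used) can be sketched as follows. `resolve_database_url` is a hypothetical helper written for illustration, not part of korvus:

```python
import os

def resolve_database_url(explicit_url=None):
    """Hypothetical sketch of the constructor's fallback: prefer an explicit
    URL argument, otherwise read KORVUS_DATABASE_URL from the environment."""
    if explicit_url is not None:
        return explicit_url
    url = os.environ.get("KORVUS_DATABASE_URL")
    if url is None:
        raise ValueError("set KORVUS_DATABASE_URL or pass a database URL")
    return url

os.environ["KORVUS_DATABASE_URL"] = "postgres://user:pass@host:6432/korvus"
print(resolve_database_url())                       # falls back to the env var
print(resolve_database_url("postgres://explicit"))  # explicit argument wins
```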
@@ -59,8 +59,8 @@ Here is a simple example using zephyr-7b-beta, one of the best 7 billion paramet
{% tabs %}
{% tab title="JavaScript" %}
```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
const results = client.chat_completions_create(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -80,8 +80,8 @@ console.log(results);

{% tab title="Python" %}
```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
results = client.chat_completions_create(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -138,8 +138,8 @@ Here is an example of streaming with the popular `meta-llama/Meta-Llama-3-8B-Ins
{% tabs %}
{% tab title="JavaScript" %}
```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
const it = client.chat_completions_create_stream(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -163,8 +163,8 @@ while (!result.done) {

{% tab title="Python" %}
```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
results = client.chat_completions_create_stream(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -231,8 +231,8 @@ We also have asynchronous versions of the `chat_completions_create` and `chat_co
{% tabs %}
{% tab title="JavaScript" %}
```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
const results = await client.chat_completions_create_async(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -252,8 +252,8 @@ console.log(results);

{% tab title="Python" %}
```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
results = await client.chat_completions_create_async(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -300,8 +300,8 @@ Notice the return types for the sync and async variations are the same.
{% tabs %}
{% tab title="JavaScript" %}
```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
const it = await client.chat_completions_create_stream_async(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
@@ -325,8 +325,8 @@ while (!result.done) {

{% tab title="Python" %}
```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
results = await client.chat_completions_create_stream_async(
"meta-llama/Meta-Llama-3-8B-Instruct",
[
