Commit 4ac13a0

Lev Kokotov authored and gitbook-bot committed

GITBOOK-82: Update screenshots for new signup

1 parent: 8e60653 · commit: 4ac13a0

15 files changed: +18 −32 lines
(Binary image assets updated: 102 KB and 226 KB)

‎pgml-cms/docs/benchmarks/million-requests-per-second.md

Lines changed: 3 additions & 13 deletions

```diff
@@ -1,7 +1,5 @@
 # Million Requests per Second
 
-
-
 The question "Does it Scale?" has become somewhat of a meme in software engineering. There is a good reason for it though, because most businesses plan for success. If your app, online store, or SaaS becomes popular, you want to be sure that the system powering it can serve all your new customers.
 
 At PostgresML, we are very concerned with scale. Our engineering background took us through scaling PostgreSQL to 100 TB+, so we're certain that it scales, but could we scale machine learning alongside it?
@@ -12,18 +10,14 @@ If you missed our previous post and are wondering why someone would combine mach
 
 ## Architecture Overview
 
-If you're familiar with how one runs PostgreSQL at scale, you can skip straight to the [results](broken-reference).
+If you're familiar with how one runs PostgreSQL at scale, you can skip straight to the [results](broken-reference/).
 
 Part of our thesis, and the reason why we chose Postgres as our host for machine learning, is that scaling machine learning inference is very similar to scaling read queries in a typical database cluster.
 
 Inference speed varies based on the model complexity (e.g. `n_estimators` for XGBoost) and the size of the dataset (how many features the model uses), which is analogous to query complexity and table size in the database world and, as we'll demonstrate further on, scaling the latter is mostly a solved problem.
 
-
-
 <figure><img src="../.gitbook/assets/scaling-postgresml-3.svg" alt=""><figcaption><p><em>System Architecture</em></p></figcaption></figure>
 
-
-
 | Component | Description |
 | --------- | ----------- |
 | Clients   | Regular Postgres clients |
@@ -73,8 +67,6 @@ Scaling XGBoost predictions is a little bit more interesting. XGBoost cannot ser
 
 PostgresML bypasses that limitation because of how Postgres itself handles concurrency:
 
-
-
 <figure><img src="../.gitbook/assets/postgres-multiprocess-2.png" alt=""><figcaption></figcaption></figure>
 
 _PostgresML concurrency_
@@ -89,8 +81,6 @@ One of the tests we ran used 1,000 clients, which were connected to 1, 2, and 5
 
 ### Linear Scaling
 
-
-
 <div>
 
 <figure><img src="../.gitbook/assets/1M-RPS-latency.png" alt=""><figcaption><p>Latency</p></figcaption></figure>
@@ -131,11 +121,11 @@ If batching did not work at all, we would see a linear increase in latency and a
 
 <div>
 
-<figure><img src="../.gitbook/assets/1M-RPS-batching-latency%20(1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../.gitbook/assets/1M-RPS-batching-latency (1)(1).png" alt=""><figcaption></figcaption></figure>
 
 
 
-<figure><img src="../.gitbook/assets/1M-RPS-batching-throughput%20(1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../.gitbook/assets/1M-RPS-batching-throughput (1)(1).png" alt=""><figcaption></figcaption></figure>
 
 </div>
 
```

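The batching claim in the hunk context above ("If batching did not work at all, we would see a linear increase in latency") can be illustrated with a small, self-contained simulation. This is an illustrative sketch, not PostgresML code: the fixed per-request overhead and per-item cost below are made-up constants chosen only to show the amortization effect.

```python
# Illustrative sketch (not PostgresML code): if each round trip carries a
# fixed overhead plus a small per-item cost, batching N items into one
# request amortizes the overhead, so per-item latency falls as N grows.

FIXED_OVERHEAD_MS = 5.0  # hypothetical per-request cost (parse, plan, network)
PER_ITEM_MS = 0.1        # hypothetical per-prediction cost

def per_item_latency_ms(batch_size: int) -> float:
    """Total request latency divided by the number of items in the batch."""
    total = FIXED_OVERHEAD_MS + PER_ITEM_MS * batch_size
    return total / batch_size

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        print(n, round(per_item_latency_ms(n), 3))
```

If batching did not work, per-item latency would stay flat (or total latency would grow linearly with no throughput gain); here it drops from the fixed overhead toward the per-item cost as the batch grows, which is the sub-linear behavior the benchmark text describes.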
‎pgml-cms/docs/benchmarks/mindsdb-vs-postgresml.md

Lines changed: 4 additions & 7 deletions

```diff
@@ -40,10 +40,9 @@ Another difference is that PostgresML also supports embedding models, and closel
 
 The architectural implementations for these projects is significantly different. PostgresML takes a data centric approach with Postgres as the provider for both storage _and_ compute. To provide horizontal scalability for inference, the PostgresML team has also created [PgCat](https://github.com/postgresml/pgcat) to distribute workloads across many Postgres databases. On the other hand, MindsDB takes a service oriented approach that connects to various databases over the network.
 
-\
-
+\\
 
-<figure><img src="../.gitbook/assets/mindsdb-architecture.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../.gitbook/assets/mindsdb-architecture (1).png" alt=""><figcaption></figcaption></figure>
 
 | | MindsDB | PostgresML |
 | ------------- | ------------- | ---------- |
@@ -56,8 +55,7 @@ The architectural implementations for these projects is significantly different.
 | On Premise | | |
 | Web UI | | |
 
-\
-
+\\
 
 The difference in architecture leads to different tradeoffs and challenges. There are already hundreds of ways to get data into and out of a Postgres database, from just about every other service, language and platform that makes PostgresML highly compatible with other application workflows. On the other hand, the MindsDB Python service accepts connections from specifically supported clients like `psql` and provides a pseudo-SQL interface to the functionality. The service will parse incoming MindsDB commands that look similar to SQL (but are not), for tasks like configuring database connections, or doing actual machine learning. These commands typically have what looks like a sub-select, that will actually fetch data over the wire from configured databases for Machine Learning training and inference.
 
@@ -285,8 +283,7 @@ PostgresML is the clear winner in terms of performance. It seems to me that it c
 | translation\_en\_to\_es | t5-base | 1573 | 1148 | 294 |
 | summarization | sshleifer/distilbart-cnn-12-6 | 4289 | 3450 | 479 |
 
-\
-
+\\
 
 There is a general trend, the larger and slower the model is, the more work is spent inside libtorch, the less the performance of the rest matters, but for interactive models and use cases there is a significant difference. We've tried to cover the most generous use case we could between these two. If we were to compare XGBoost or other classical algorithms, that can have sub millisecond prediction times in PostgresML, the 20ms Python service overhead of MindsDB just to parse the incoming query would be hundreds of times slower.
```

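The closing comparison in the hunk above rests on simple arithmetic: a fixed ~20 ms per-query service overhead dominates when the prediction itself is sub-millisecond. A quick back-of-the-envelope check of the "hundreds of times slower" wording, where the 0.1 ms prediction time is an assumed stand-in for "sub-millisecond" (not a figure from the benchmark):

```python
# Back-of-the-envelope check of the "hundreds of times slower" claim.
# 0.1 ms is an assumed stand-in for a sub-millisecond XGBoost prediction;
# 20 ms is the Python service parsing overhead cited in the text.
PREDICTION_MS = 0.1
SERVICE_OVERHEAD_MS = 20.0

slowdown = (PREDICTION_MS + SERVICE_OVERHEAD_MS) / PREDICTION_MS
print(f"{slowdown:.0f}x slower")  # roughly 200x with these assumed numbers
```

With these assumptions the end-to-end query is about two hundred times slower than the bare prediction, consistent with the text's "hundreds of times" order of magnitude.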