Commit 03bf339: Merge branch 'master' into ryan-image-updates
2 parents: 66a1ee4 + 6c0a09d

File tree: 11 files changed (+887, -7 lines)

pgml-cms/docs/.gitbook/assets/Chatbots_Flow-Diagram.svg: 281 additions, 0 deletions
pgml-cms/docs/.gitbook/assets/Chatbots_King-Diagram.svg: 78 additions, 0 deletions
pgml-cms/docs/.gitbook/assets/Chatbots_Limitations-Diagram.svg: 275 additions, 0 deletions
pgml-cms/docs/.gitbook/assets/Chatbots_Tokens-Diagram.svg: 238 additions, 0 deletions
Binary file (-107 KB): not shown
Binary file (-14.7 KB): not shown
Binary file: not shown

pgml-cms/docs/guides/chatbots/README.md: 4 additions, 4 deletions

@@ -30,7 +30,7 @@ Here is an example flowing from:
 
 text -> tokens -> LLM -> probability distribution -> predicted token -> text
 
-<figure><img src="https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FrvfCoPdoQeoovZiqNG90%2Fuploads%2FPzJzmVS3uNhbvseiJbgi%2FScreenshot%20from%202023-12-13%2013-19-33.png?alt=media&#x26;token=11d57b2a-6aa3-4374-b26c-afc6f531d2f3" alt=""><figcaption><p>The flow of inputs through an LLM. In this case the inputs are "What is Baldur's Gate 3?" and the output token "14" maps to the word "I"</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/Chatbots_Limitations-Diagram.svg" alt=""><figcaption><p>The flow of inputs through an LLM. In this case the inputs are "What is Baldur's Gate 3?" and the output token "14" maps to the word "I"</p></figcaption></figure>
 
 {% hint style="info" %}
 We have simplified the tokenization process. Words do not always map directly to tokens. For instance, the word "Baldur's" may actually map to multiple tokens. For more information on tokenization checkout [HuggingFace's summary](https://huggingface.co/docs/transformers/tokenizer\_summary).
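The text -> tokens -> text round trip described in the hunk above can be sketched with a toy vocabulary. Every token ID below is invented for illustration (including the "14" -> "I" mapping from the figure caption); real tokenizers learn subword vocabularies and may split one word into several tokens:

```python
# Toy sketch of text -> tokens -> text with an invented vocabulary.
# Real tokenizers learn subword vocabularies, so one word may map to
# several tokens; these IDs are made up for illustration only.
vocab = {"What": 3, "is": 7, "Baldur": 21, "'s": 22, "Gate": 9, "3": 4, "?": 5, "I": 14}
inverse = {token_id: word for word, token_id in vocab.items()}

def tokenize(words):
    return [vocab[w] for w in words]

def detokenize(token_ids):
    return [inverse[t] for t in token_ids]

ids = tokenize(["What", "is", "Baldur", "'s", "Gate", "3", "?"])
print(ids)               # [3, 7, 21, 22, 9, 4, 5]
print(detokenize([14]))  # ['I'] -- the predicted token maps back to text
```

The LLM itself sits between the two halves: it consumes the ID sequence and emits a probability distribution over the vocabulary, from which the next ID is picked and detokenized.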
@@ -108,11 +108,11 @@ What does an `embedding` look like? `Embeddings` are just vectors (for our use c
 
 embedding_1 = embed("King")  # embed returns something like [0.11, -0.32, 0.46, ...]
 ```
 
-<figure><img src="../../.gitbook/assets/embedding_king.png" alt=""><figcaption><p>The flow of word -> token -> embedding</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/Chatbots_King-Diagram.svg" alt=""><figcaption><p>The flow of word -> token -> embedding</p></figcaption></figure>
 
 `Embeddings` aren't limited to words, we have models that can embed entire sentences.
 
-<figure><img src="../../.gitbook/assets/embeddings_tokens.png" alt=""><figcaption><p>The flow of sentence -> tokens -> embedding</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/Chatbots_Tokens-Diagram.svg" alt=""><figcaption><p>The flow of sentence -> tokens -> embedding</p></figcaption></figure>
 
 Why do we care about `embeddings`? `Embeddings` have a very interesting property. Words and sentences that have close [semantic similarity](https://en.wikipedia.org/wiki/Semantic\_similarity) sit closer to one another in vector space than words and sentences that do not have close semantic similarity.
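The semantic-similarity property mentioned in this hunk is usually measured with cosine similarity. The three-dimensional vectors below are made-up stand-ins for real embeddings (which have hundreds or thousands of dimensions), chosen only so that related words land near each other:

```python
import math

# Made-up 3-dimensional "embeddings" for illustration; real embedding
# models emit far higher-dimensional vectors.
king  = [0.11, -0.32, 0.46]
queen = [0.12, -0.30, 0.44]
pizza = [0.90, 0.41, -0.20]

def cosine_similarity(a, b):
    # Dot product of a and b divided by the product of their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Semantically related words score higher than unrelated ones.
assert cosine_similarity(king, queen) > cosine_similarity(king, pizza)
```

Cosine similarity ranges from -1 to 1 and depends only on the angle between the vectors, which is why "closer in vector space" translates directly into a higher score.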

@@ -157,7 +157,7 @@ print(context)
 
 There is a lot going on with this, let's check out this diagram and step through it.
 
-<figure><img src="../../.gitbook/assets/chatbot_flow.png" alt=""><figcaption><p>The flow of taking a document, splitting it into chunks, embedding those chunks, and then retrieving a chunk based off of a users query</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/Chatbots_Flow-Diagram.svg" alt=""><figcaption><p>The flow of taking a document, splitting it into chunks, embedding those chunks, and then retrieving a chunk based off of a users query</p></figcaption></figure>
 
 Step 1: We take the document and split it into chunks. Chunks are typically a paragraph or two in size. There are many ways to split documents into chunks, for more information check out [this guide](https://www.pinecone.io/learn/chunking-strategies/).
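The chunk -> embed -> retrieve flow that the new diagram illustrates can be sketched end to end. Everything here is a hypothetical stand-in: `embed` is a bag-of-words counter rather than a real embedding model, and `similarity` is a raw dot product rather than cosine distance, but the shape of the pipeline is the same:

```python
# Minimal sketch of the chunk-and-retrieve flow: split a document into
# paragraph chunks, "embed" each chunk, and return the chunk closest to
# the query. embed() is a hypothetical bag-of-words stand-in for a real
# embedding model.
def embed(text):
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def similarity(a, b):
    # Dot product over the shared vocabulary of the two count vectors.
    return sum(a[w] * b[w] for w in a if w in b)

def retrieve(document, query):
    # Chunking strategy: one chunk per blank-line-separated paragraph.
    chunks = [c.strip() for c in document.split("\n\n") if c.strip()]
    q = embed(query)
    return max(chunks, key=lambda c: similarity(embed(c), q))

doc = "Baldur's Gate 3 is a role playing game.\n\nPostgres is a database."
print(retrieve(doc, "What is Baldur's Gate 3?"))
# -> Baldur's Gate 3 is a role playing game.
```

In a real system the chunk embeddings would be computed once, stored in a vector index, and searched with cosine or inner-product distance instead of being re-embedded per query.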

pgml-dashboard/src/components/dropdown/mod.rs: 1 addition, 1 deletion

@@ -72,7 +72,7 @@ pub struct Dropdown {
     /// Position of the dropdown menu.
     offset: String,
 
-    /// Whether or not the dropdown is collapsable.
+    /// Whether or not the dropdown responds to horizontal collapse, i.e. in product left nav.
     collapsable: bool,
     offset_collapsed: String,

pgml-dashboard/src/components/inputs/text/search/search/search_controller.js: 7 additions, 0 deletions

@@ -30,4 +30,11 @@ export default class extends Controller {
   search(id, url) {
     this.element.querySelector(`turbo-frame[id=${id}]`).src = url;
   }
+
+  // Hide the dropdown if the user clicks outside of it.
+  hideDropdown(e) {
+    if (!this.element.contains(e.target)) {
+      this.endSearch();
+    }
+  }
 }

Comments: 0
