Commit a0b0c25

Merge pull request ex3ndr#17 from fuad00/patch-1

README.md: Typo fixed

2 parents 241d47b + 84f3334, commit a0b0c25

File tree: 1 file changed (+2, -2 lines)

README.md

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ Llama Coder is a better and self-hosted Github Copilot replacement for VS Studio
 
 Minimum required RAM: 16GB is a minimum, more is better since even smallest model takes 5GB of RAM.
 The best way: dedicated machine with RTX 4090. Install [Ollama](https://ollama.ai) on this machine and configure endpoint in extension settings to offload to this machine.
-Second best way: run on MacBooc M1/M2/M3 with enougth RAM (more == better, but 10gb extra would be enougth).
+Second best way: run on MacBook M1/M2/M3 with enougth RAM (more == better, but 10gb extra would be enougth).
 For windows notebooks: it runs good with decent GPU, but dedicated machine with a good GPU is recommended. Perfect if you have a dedicated gaming PC.
 
 ## Local Installation
@@ -69,4 +69,4 @@ Most of the problems could be seen in output of a plugin in VS Code extension ou
 
 ## [0.0.4]
 
-- Initial release of Llama Coder
+- Initial release of Llama Coder
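As an aside, the README passage touched by this change describes offloading inference to a dedicated machine running [Ollama](https://ollama.ai). A minimal sketch of that setup, assuming Ollama's default port and a hypothetical settings key (check the Llama Coder extension settings for the actual option name), might look like:

# On the dedicated GPU machine: let Ollama accept remote connections
# (by default it listens only on 127.0.0.1:11434)
OLLAMA_HOST=0.0.0.0 ollama serve

# In VS Code settings.json on the client machine, point the extension at that host.
# The setting key below is an assumption, not confirmed by this commit:
"inference.endpoint": "http://<gpu-machine-ip>:11434"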
