
David-Kunz/gen.nvim

Generate text using LLMs with customizable prompts


Video

Local LLMs in Neovim: gen.nvim

Requires

  • Ollama with a suitable model, e.g. mistral (the service's host, port, and curl-based command appear in the configuration below)
  • curl

Install

Install with your favorite plugin manager, e.g. lazy.nvim

Example with Lazy

```lua
-- Minimal configuration
{ "David-Kunz/gen.nvim" },
```

```lua
-- Custom Parameters (with defaults)
{
  "David-Kunz/gen.nvim",
  opts = {
    model = "mistral", -- The default model to use.
    quit_map = "q", -- set keymap to close the response window
    retry_map = "<c-r>", -- set keymap to re-send the current prompt
    accept_map = "<c-cr>", -- set keymap to replace the previous selection with the last result
    host = "localhost", -- The host running the Ollama service.
    port = "11434", -- The port on which the Ollama service is listening.
    display_mode = "float", -- The display mode. Can be "float" or "split" or "horizontal-split".
    show_prompt = false, -- Shows the prompt submitted to Ollama. Can be true (3 lines) or "full".
    show_model = false, -- Displays which model you are using at the beginning of your chat session.
    no_auto_close = false, -- Never closes the window automatically.
    file = false, -- Write the payload to a temporary file to keep the command short.
    hidden = false, -- Hide the generation window (if true, will implicitly set `prompt.replace = true`), requires Neovim >= 0.10
    init = function(options)
      pcall(io.popen, "ollama serve > /dev/null 2>&1 &")
    end, -- Function to initialize Ollama
    command = function(options)
      local body = { model = options.model, stream = true }
      return "curl --silent --no-buffer -X POST http://" .. options.host .. ":" .. options.port .. "/api/chat -d $body"
    end,
    -- The command for the Ollama service. You can use placeholders $prompt, $model and $body (shellescaped).
    -- This can also be a command string.
    -- The executed command must return a JSON object with { response, context }
    -- (context property is optional).
    -- list_models = '<omitted lua function>', -- Retrieves a list of model names
    result_filetype = "markdown", -- Configure filetype of the result buffer
    debug = false -- Prints errors and the command which is run.
  }
},
```

Here are all available models.
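If the configured model is not yet present locally, it can be fetched with the Ollama CLI first (this assumes the ollama binary is installed and is not specific to gen.nvim):

```
# Download the model used by the default configuration above.
ollama pull mistral
```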

Alternatively, you can call the setup function:

```lua
require('gen').setup({
  -- same as above
})
```

Usage

Use command Gen to generate text based on predefined and customizable prompts.

Example key maps:

```lua
vim.keymap.set({ 'n', 'v' }, '<leader>]', ':Gen<CR>')
```

You can also directly invoke it with one of the predefined prompts or your custom prompts:

```lua
vim.keymap.set('v', '<leader>]', ':Gen Enhance_Grammar_Spelling<CR>')
```

After a conversation begins, the entire context is sent to the LLM. That allows you to ask follow-up questions with

:Gen Chat

and once the window is closed, you start with a fresh conversation.

For prompts which don't automatically replace the previously selected text (replace = false), you can replace the selected text with the generated output by pressing <c-cr>.

You can select a model from a list of all installed models with

```lua
require('gen').select_model()
```

Custom Prompts

All prompts are defined in require('gen').prompts; you can enhance or modify them.

Example:

```lua
require('gen').prompts['Elaborate_Text'] = {
  prompt = "Elaborate the following text:\n$text",
  replace = true
}
require('gen').prompts['Fix_Code'] = {
  prompt = "Fix the following code. Only output the result in format ```$filetype\n...\n```:\n```$filetype\n$text\n```",
  replace = true,
  extract = "```$filetype\n(.-)```"
}
```

You can use the following properties per prompt:

  • prompt: (string | function) Prompt either as a string or a function which should return a string. The result can use the following placeholders:
    • $text: Visually selected text or the content of the current buffer
    • $filetype: File type of the buffer (e.g. javascript)
    • $input: Additional user input
    • $register: Value of the unnamed register (yanked text)
  • replace: true if the selected text shall be replaced with the generated output
  • extract: Regular expression used to extract the generated result
  • model: The model to use, default: mistral
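Putting these properties together, here is a minimal sketch of a custom prompt using the function variant. The prompt name 'Summarize_Code' and its wording are made up for illustration; the function simply returns the prompt string, and gen.nvim substitutes the placeholders afterwards:

```lua
-- Hypothetical custom prompt (name and text are illustrative).
require('gen').prompts['Summarize_Code'] = {
  prompt = function()
    -- $filetype and $text are gen.nvim placeholders, filled in on invocation.
    return "Summarize the following $filetype code:\n$text"
  end,
  replace = false,  -- keep the selection; show the result in the output window
  model = "mistral" -- per-prompt model override
}
```

It can then be invoked like any predefined prompt, e.g. :Gen Summarize_Code on a visual selection.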

Tip

User selections can be delegated to Telescope with telescope-ui-select.
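A minimal sketch of wiring this up (assumes telescope.nvim and telescope-ui-select.nvim are installed; this is the standard telescope-ui-select setup, nothing gen.nvim-specific):

```lua
-- Route vim.ui.select (used by pickers such as select_model) through Telescope.
require('telescope').setup({
  extensions = {
    ['ui-select'] = require('telescope.themes').get_dropdown({})
  }
})
require('telescope').load_extension('ui-select')
```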
