# mcp-server-commands

Model Context Protocol server to run commands.
## Tools

Tools are for LLMs to request. Claude Sonnet 3.5 intelligently uses `run_command`. And initial testing shows promising results with Groq Desktop with MCP and llama4 models.
Currently, just one command to rule them all!
- `run_command` - run a command, i.e. `hostname` or `ls -al` or `echo "hello world"`, etc.
  - Returns `STDOUT` and `STDERR` as text
  - Optional `stdin` parameter means your LLM can
    - pass code in `stdin` to commands like `fish`, `bash`, `zsh`, `python`
    - create files with `cat >> foo/bar.txt` from the text in `stdin`
> [!WARNING]
> Be careful what you ask this server to run! In the Claude Desktop app, use `Approve Once` (not `Allow for This Chat`) so you can review each command; use `Deny` if you don't trust the command. Permissions are dictated by the user that runs the server. DO NOT run with `sudo`.
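For intuition, the `stdin` parameter behaves like an ordinary shell pipe. The following is a hypothetical local sketch (not the server's actual implementation) of what two `run_command` calls amount to:

```shell
# Sketch: command="bash", stdin="echo hello world"
# The server returns STDOUT/STDERR as text; here it just prints.
echo 'echo hello world' | bash
# → hello world

# Sketch: creating a file from the text in stdin via `cat >>`
mkdir -p /tmp/mcp-demo
printf 'first line\n' | bash -c 'cat >> /tmp/mcp-demo/bar.txt'
```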
## Prompts

Prompts are for users to include in chat history, i.e. via Zed's slash commands (in its AI Chat panel).

- `run_command` - generate a prompt message with the command output
## Development

Install dependencies:

```bash
npm install
```

Build the server:

```bash
npm run build
```

For development with auto-rebuild:

```bash
npm run watch
```
## Installation

To use with Claude Desktop, add the server config:

- On MacOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`

Groq Desktop (beta, macOS) uses `~/Library/Application Support/groq-desktop-app/settings.json`
Published to npm as `mcp-server-commands` using this workflow:
```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "npx",
      "args": ["mcp-server-commands"]
    }
  }
}
```

Make sure to run `npm run build`:
```jsonc
{
  "mcpServers": {
    "mcp-server-commands": {
      // works b/c of shebang in index.js
      "command": "/path/to/mcp-server-commands/build/index.js"
    }
  }
}
```

- Most models are trained such that they don't think they can run commands for you.
- Sometimes they use tools w/o hesitation; other times I have to coax them.
- Use a system prompt or prompt template to instruct them to follow user requests, including using `run_command` without double checking.
- Ollama is a great way to run a model locally (w/ Open-WebUI)
```bash
# NOTE: make sure to review variants and sizes, so the model fits in your VRAM to perform well!
# Probably the best so far is OpenHands LM:
#   https://www.all-hands.dev/blog/introducing-openhands-lm-32b----a-strong-open-coding-agent-model
ollama pull https://huggingface.co/lmstudio-community/openhands-lm-32b-v0.1-GGUF
# https://ollama.com/library/devstral
ollama pull devstral
# Qwen2.5-Coder has tool use but you have to coax it
ollama pull qwen2.5-coder
```
The server is implemented with the STDIO transport. For HTTP, use mcpo for an OpenAPI-compatible web server interface. This works with Open-WebUI.
```bash
uvx mcpo --port 3010 --api-key "supersecret" -- npx mcp-server-commands
# uvx runs mcpo => mcpo runs npx => npx runs mcp-server-commands
# then, mcpo bridges STDIO <=> HTTP
```
> [!WARNING]
> I only briefly used mcpo with open-webui; make sure to vet it for security concerns.
## Logging

Claude Desktop app writes logs to `~/Library/Logs/Claude/mcp-server-mcp-server-commands.log`
By default, only important messages (i.e. errors) are logged. If you want to see more messages, add `--verbose` to the `args` when configuring the server.
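For example, the npx-based Claude Desktop config from above with verbose logging enabled (same shape, just the extra flag in `args`):

```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "npx",
      "args": ["mcp-server-commands", "--verbose"]
    }
  }
}
```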
By the way, logs are written to STDERR because that is what Claude Desktop routes to the log files. In the future, I expect well-formatted log messages to be written over the STDIO transport to the MCP client (note: not the Claude Desktop app).
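A quick illustration of why STDERR is the right channel: with the STDIO transport, STDOUT (fd 1) carries the MCP JSON-RPC stream, so a server keeps diagnostics on STDERR (fd 2), which the host can redirect to a log file. A minimal sketch:

```shell
# fd 1 (STDOUT) is reserved for the protocol; fd 2 (STDERR) is for logs.
# Discarding fd 2 shows only the "protocol" line remains on STDOUT.
sh -c 'echo "protocol message"; echo "log message" >&2' 2>/dev/null
# → protocol message
```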
## Debugging

Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:

```bash
npm run inspector
```
The Inspector will provide a URL to access debugging tools in your browser.