
npc-python logo

npcpy

npcpy is a flexible agent framework for building AI applications and conducting research with LLMs. It supports local and cloud providers, multi-agent teams, tool calling, image/audio/video generation, knowledge graphs, fine-tuning, and more.

pip install npcpy

Quick Examples

Agent with persona

from npcpy.npc_compiler import NPC

simon = NPC(
    name='Simon Bolivar',
    primary_directive='Liberate South America from the Spanish Royalists.',
    model='gemma3:4b',
    provider='ollama'
)

response = simon.get_llm_response(
    "What is the most important territory to retain in the Andes?"
)
print(response['response'])

Direct LLM call

from npcpy.llm_funcs import get_llm_response

response = get_llm_response(
    "Who was the celtic messenger god?",
    model='qwen3:4b',
    provider='ollama'
)
print(response['response'])

Agent with tools

import os

from npcpy.npc_compiler import NPC


def list_files(directory: str = ".") -> list:
    """List all files in a directory."""
    return os.listdir(directory)


def read_file(filepath: str) -> str:
    """Read and return the contents of a file."""
    with open(filepath, 'r') as f:
        return f.read()


assistant = NPC(
    name='File Assistant',
    primary_directive='You help users explore files.',
    model='llama3.2',
    provider='ollama',
    tools=[list_files, read_file],
)

response = assistant.get_llm_response("List the files in the current directory.")
print(response['response'])

# Access individual tool results
for result in response.get('tool_results', []):
    print(f"{result['tool_name']}: {result['result']}")

Streaming responses

from npcpy.llm_funcs import get_llm_response

response = get_llm_response(
    "Tell me about the history of the Inca Empire.",
    model='llama3.2',
    provider='ollama',
    stream=True
)

for chunk in response['response']:
    msg = chunk.get('message', {})
    print(msg.get('content', ''), end='', flush=True)

JSON output

from npcpy.llm_funcs import get_llm_response

response = get_llm_response(
    "List 3 planets with their distances from the sun in AU.",
    model='llama3.2',
    provider='ollama',
    format='json'
)
print(response['response'])

Multi-agent team orchestration

from npcpy.npc_compiler import NPC, Team

# Create specialist agents
coordinator = NPC(
    name='coordinator',
    primary_directive='''You coordinate a team of specialists.
    Delegate tasks by mentioning @analyst for data questions or @writer for content.
    Synthesize their responses into a final answer.''',
    model='llama3.2',
    provider='ollama'
)

analyst = NPC(
    name='analyst',
    primary_directive='You analyze data and provide insights with specific numbers.',
    model='~/models/mistral-7b-instruct-v0.2.Q4_K_M.gguf',
    provider='llamacpp'
)

writer = NPC(
    name='writer',
    primary_directive='You write clear, engaging summaries and reports.',
    model='gemini-2.5-flash',
    provider='gemini'
)

# Create the team - the coordinator (forenpc) automatically delegates via @mentions
team = Team(npcs=[coordinator, analyst, writer], forenpc='coordinator')

# Orchestrate a request - the coordinator decides who to involve
result = team.orchestrate("What are the trends in renewable energy adoption?")
print(result['output'])

Initialize a team

Installing npcpy also installs two command-line tools:

  • npc — CLI for project management and one-off commands
  • npcsh — Interactive shell for chatting with agents and running jinxs
# Using the npc CLI
npc init ./my_project

# Using npcsh (interactive)
npcsh
📁 ~/projects 🤖 npcsh | llama3.2
> /init directory=./my_project
> what files are in the current directory?

This creates:

my_project/
├── npc_team/
│   ├── forenpc.npc      # Default coordinator
│   ├── jinxs/           # Workflows
│   │   └── skills/      # Knowledge skills
│   ├── tools/           # Custom tools
│   └── triggers/        # Event triggers
├── images/
├── models/
└── mcp_servers/

Then add your agents:

# Add team context
cat > my_project/npc_team/team.ctx << 'EOF'
context: Research and analysis team
forenpc: lead
model: llama3.2
provider: ollama
EOF

# Add agents
cat > my_project/npc_team/lead.npc << 'EOF'
name: lead
primary_directive: |
  You lead the team. Delegate to @researcher for data
  and @writer for content. Synthesize their output.
EOF

cat > my_project/npc_team/researcher.npc << 'EOF'
name: researcher
primary_directive: You research topics and provide detailed findings.
model: gemini-2.5-flash
provider: gemini
EOF

cat > my_project/npc_team/writer.npc << 'EOF'
name: writer
primary_directive: You write clear, engaging content.
model: qwen3:8b
provider: ollama
EOF

Team directory structure

npc_team/
├── team.ctx           # Team configuration
├── coordinator.npc    # Coordinator agent
├── analyst.npc        # Specialist agent
├── writer.npc         # Specialist agent
└── jinxs/             # Optional workflows
    └── research.jinx

team.ctx - Team configuration:

context: |
  A research team that analyzes topics and produces reports.
  The coordinator delegates to specialists as needed.
forenpc: coordinator
model: llama3.2
provider: ollama
mcp_servers:
  - ~/.npcsh/mcp_server.py

coordinator.npc - Agent definition:

name: coordinator
primary_directive: |
  You coordinate research tasks. Delegate to @analyst for data
  analysis and @writer for content creation. Synthesize results.
model: llama3.2
provider: ollama

analyst.npc - Specialist agent:

name: analyst
primary_directive: |
  You analyze data and provide insights with specific numbers and trends.
model: qwen3:8b
provider: ollama

Team from directory

from npcpy.npc_compiler import Team

# Load a team from a directory containing .npc files and team.ctx
team = Team(team_path='./npc_team')

# Orchestrate through the forenpc (set in team.ctx)
result = team.orchestrate("Analyze the sales data and write a summary")
print(result['output'])

Agent with skills

Skills are knowledge-content jinxs that provide instructional sections to agents on demand.

1. Create a skill file (npc_team/jinxs/skills/code-review/SKILL.md):

---
name: code-review
description: Use when reviewing code for quality, security, and best practices.
---

# Code Review Skill

## checklist
- Check for security vulnerabilities (SQL injection, XSS, etc.)
- Verify error handling and edge cases
- Review naming conventions and code clarity

## security
Focus on OWASP top 10 vulnerabilities...

2. Reference it in your NPC (npc_team/reviewer.npc):

name: reviewer
primary_directive: You review code for quality and security issues.
model: llama3.2
provider: ollama
jinxs:
  - skills/code-review

3. Use the NPC:

from npcpy.npc_compiler import NPC

# Load the NPC from file - skills are automatically available as callable jinxs
reviewer = NPC(file='./npc_team/reviewer.npc')

response = reviewer.get_llm_response("Review this function: def login(user, pwd): ...")
print(response['response'])

Skills let the agent request specific knowledge sections (like checklist or security) as needed during responses.

Agent with MCP server

Connect any MCP server to an NPC and its tools become available for agentic tool calling:

from npcpy.npc_compiler import NPC
from npcpy.serve import MCPClientNPC

# Connect to your MCP server
mcp = MCPClientNPC()
mcp.connect_sync('./my_mcp_server.py')

# Create an NPC
assistant = NPC(
    name='Assistant',
    primary_directive='You help users with tasks using available tools.',
    model='llama3.2',
    provider='ollama'
)

# Pass MCP tools to get_llm_response - the agent handles tool calls automatically
response = assistant.get_llm_response(
    "Search the database for recent orders",
    tools=mcp.available_tools_llm,
    tool_map=mcp.tool_map
)
print(response['response'])

# Clean up when done
mcp.disconnect_sync()

Example MCP server (my_mcp_server.py):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My Tools")


@mcp.tool()
def search_database(query: str) -> str:
    """Search the database for records matching the query."""
    return f"Found results for: {query}"


@mcp.tool()
def send_notification(message: str, channel: str = "general") -> str:
    """Send a notification to a channel."""
    return f"Sent '{message}' to #{channel}"


if __name__ == "__main__":
    mcp.run()

MCPClientNPC methods (a short usage sketch follows this list):

  • connect_sync(server_path) — Connect to an MCP server script
  • disconnect_sync() — Disconnect from the server
  • available_tools_llm — Tool schemas for LLM consumption
  • tool_map — Dict mapping tool names to callable functions
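A minimal sketch of using these members directly, assuming the example my_mcp_server.py above is connected and that tool_map stores plain callables:

from npcpy.serve import MCPClientNPC

mcp = MCPClientNPC()
mcp.connect_sync('./my_mcp_server.py')

# Inspect the tool schemas the LLM will see
for tool_schema in mcp.available_tools_llm:
    print(tool_schema)

# tool_map maps tool names to callables, so a tool can also be invoked directly
# ('search_database' comes from the example server above)
print(mcp.tool_map['search_database']('recent orders'))

mcp.disconnect_sync()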

Image generation

from npcpy.llm_funcs import gen_image

images = gen_image(
    "A sunset over the mountains",
    model='sdxl',
    provider='diffusers'
)
images[0].save("sunset.png")

Features

  • Agents (NPCs) — Agents with personas, directives, and tool calling
  • Multi-Agent Teams — Team orchestration with a coordinator (forenpc)
  • Jinx Workflows — Jinja Execution templates for multi-step prompt pipelines
  • Skills — Knowledge-content jinxs that serve instructional sections to agents on demand
  • NPCArray — NumPy-like vectorized operations over model populations
  • Image, Audio & Video — Generation via Ollama, diffusers, OpenAI, Gemini
  • Knowledge Graphs — Build and evolve knowledge graphs from text
  • Fine-Tuning & Evolution — SFT, RL, diffusion, genetic algorithms
  • Serving — Flask server for deploying teams via REST API
  • ML Functions — Scikit-learn grid search, ensemble prediction, PyTorch training
  • Streaming & JSON — Streaming responses, structured JSON output, message history

Providers

Works with all major LLM providers through LiteLLM: ollama, openai, anthropic, gemini, deepseek, airllm, openai-like, and more.
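For example, the same call can target a local or a hosted model just by changing model and provider (a sketch using values shown elsewhere in this README; hosted providers need their API keys set):

from npcpy.llm_funcs import get_llm_response

prompt = "Summarize the benefits of local inference."

# Local model served by Ollama
local = get_llm_response(prompt, model='llama3.2', provider='ollama')

# Hosted model via an API provider (requires GEMINI_API_KEY)
hosted = get_llm_response(prompt, model='gemini-2.5-flash', provider='gemini')

print(local['response'])
print(hosted['response'])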

Installation

pip install npcpy          # base
pip install npcpy[lite]    # + API provider libraries
pip install npcpy[local]   # + ollama, diffusers, transformers, airllm
pip install npcpy[yap]     # + TTS/STT
pip install npcpy[all]     # everything

System dependencies

Linux:

sudo apt-get install espeak portaudio19-dev python3-pyaudio ffmpeg libcairo2-dev libgirepository1.0-dev
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2

macOS:

brew install portaudio ffmpeg pygobject3 ollama
brew services start ollama
ollama pull llama3.2

Windows: Install Ollama and ffmpeg, then ollama pull llama3.2.

API keys go in a .env file:

export OPENAI_API_KEY="your_key"
export ANTHROPIC_API_KEY="your_key"
export GEMINI_API_KEY="your_key"
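A sketch of using those keys from Python, assuming python-dotenv is used to load the file (npcpy may also pick the variables up from the environment on its own):

from dotenv import load_dotenv  # assumes python-dotenv is installed

from npcpy.llm_funcs import get_llm_response

load_dotenv()  # exposes OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, ...

response = get_llm_response(
    "Hello from a hosted model",
    model='gemini-2.5-flash',
    provider='gemini'
)
print(response['response'])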

Read the Docs

Full documentation, guides, and API reference at npcpy.readthedocs.io.

Inference Capabilities

Works with local and cloud providers through LiteLLM (Ollama, OpenAI, Anthropic, Gemini, Deepseek, and more) with support for text, image, audio, and video generation.
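As a sketch of the multimodal side, gen_image (shown above with diffusers) can in principle be pointed at a hosted image provider as well; the model name here is an assumption based on the provider list above, not a value confirmed by this README:

from npcpy.llm_funcs import gen_image

# Hypothetical hosted-image call: 'dall-e-3' / 'openai' are assumptions,
# not values documented in this README.
images = gen_image(
    "A watercolor map of the Andes",
    model='dall-e-3',
    provider='openai'
)
images[0].save("andes_map.png")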


Research

  • Quantum-like nature of natural language interpretation: arxiv, accepted at QNLP 2025
  • Simulating hormonal cycles for AI: arxiv

Has your research benefited from npcpy? Let us know!

Support

Monthly donation | Merch | Consulting: info@npcworldwi.de

Contributing

Contributions welcome! Submit issues and pull requests on the GitHub repository.

License

MIT License.

Star History

Star History Chart
