Auto-Deep-Research:
Your Fully-Automated and Cost-Effective Personal AI Assistant

Credits · Slack community · Discord community · Documentation · Paper · Evaluation Benchmark Score

Welcome to Auto-Deep-Research! Auto-Deep-Research is an open-source and cost-efficient alternative to OpenAI's Deep Research, built on the AutoAgent framework.

✨ Key Features

  • 🏆 High Performance: Ranks #1 among open-source methods, delivering performance comparable to OpenAI's Deep Research.
  • 🌐 Universal LLM Support: Seamlessly integrates with a wide range of LLMs (e.g., OpenAI, Anthropic, Deepseek, vLLM, Grok, Huggingface, ...).
  • 🔀 Flexible Interaction: Supports both function-calling and non-function-calling LLMs.
  • 💰 Cost-Efficient: An open-source alternative to Deep Research's $200/month subscription, using your own pay-as-you-go LLM API keys.
  • 📁 File Support: Handles file uploads for enhanced data interaction.
  • 🚀 One-Click Launch: Get started instantly with a simple auto deep-research command. Zero configuration needed, a truly out-of-the-box experience.

🚀 Own your personal assistant at a much lower cost. Try 🔥Auto-Deep-Research🔥 now!

🔥 News

  • [2025, Feb 16]: 🎉🎉 We've cleaned up the AutoAgent codebase, removed the parts not relevant to Auto-Deep-Research, and released the first version of Auto-Deep-Research.

📑 Table of Contents

  • 🧐 Why release Auto-Deep-Research?
  • ⚡ Quick Start
  • ☑️ Todo List
  • 📖 Documentation
  • 🤝 Join the Community
  • 🙏 Acknowledgements
  • 🌟 Cite

🧐 Why release Auto-Deep-Research?

In the week since releasing AutoAgent (previously known as MetaChain), we've observed three compelling reasons to introduce Auto-Deep-Research:

  1. Community Interest
    We noticed significant community interest in our Deep Research alternative functionality. In response, we've streamlined the codebase by removing non-Deep-Research related components to create a more focused tool.

  2. Framework Extensibility
    Auto-Deep-Research serves as the first ready-to-use product built on AutoAgent, demonstrating how quickly and easily you can create powerful Agent Apps using our framework.

  3. Community-Driven Improvements
    We've incorporated valuable community feedback from the first week, introducing features like one-click launch and enhanced LLM compatibility to make the tool more accessible and versatile.

Auto-Deep-Research reflects both our commitment to the community's needs and our intent to demonstrate AutoAgent's potential as a foundation for building practical AI applications.

⚡ Quick Start

Installation

Auto-Deep-Research Installation

conda create -n auto_deep_research python=3.10
conda activate auto_deep_research
git clone https://github.com/HKUDS/Auto-Deep-Research.git
cd Auto-Deep-Research
pip install -e .

Docker Installation

We use Docker to containerize the agent-interactive environment, so please install Docker first. You don't need to manually pull the pre-built image; Auto-Deep-Research automatically pulls the pre-built image that matches your machine's architecture.
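As a quick sanity check that Docker is installed and the daemon is running, you can use the standard Docker CLI (these commands are generic and not specific to Auto-Deep-Research):

docker --version
docker info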

API Keys Setup

Create an environment variable file, following the format of .env.template, and set the API keys for the LLMs you want to use. Not every LLM API key is required; only set the ones you need.
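For example, assuming .env.template sits at the repository root, you can copy it and then fill in only the keys you plan to use:

cp .env.template .env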

Start Auto-Deep-Research

Command Options:

You can run auto deep-research to start Auto-Deep-Research. The main configuration options for this command are listed below; a combined example follows the list.

  • --container_name: Name of the Docker container (default: deepresearch)
  • --port: Port for the container (default: 12346)
  • COMPLETION_MODEL: The LLM model to use; follow LiteLLM's naming convention for the model name (default: claude-3-5-sonnet-20241022)
  • DEBUG: Enable debug mode for detailed logs (default: False)
  • API_BASE_URL: The base URL for the LLM provider (default: None)
  • FN_CALL: Enable function calling (default: None). Most of the time you can ignore this option, because the default is already set based on the model name.
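For instance, a single invocation combining several of these options could look like the line below (the values are illustrative; upper-case options are passed as environment variables, as in the provider examples that follow, and anything you omit falls back to its default):

DEBUG=True COMPLETION_MODEL=claude-3-5-sonnet-20241022 auto deep-research --container_name deepresearch --port 12346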

Different LLM Providers

We will show you how easy it is to start Auto-Deep-Research with different LLM providers.

Anthropic
  • Set the ANTHROPIC_API_KEY in the .env file.
ANTHROPIC_API_KEY=your_anthropic_api_key
  • Run the following command to start Auto-Deep-Research.
auto deep-research  # the default model is claude-3-5-sonnet-20241022
OpenAI
  • Set the OPENAI_API_KEY in the .env file.
OPENAI_API_KEY=your_openai_api_key
  • Run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=gpt-4o auto deep-research
Mistral
  • Set the MISTRAL_API_KEY in the .env file.
MISTRAL_API_KEY=your_mistral_api_key
  • Run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=mistral/mistral-large-2407 auto deep-research
Gemini - Google AI Studio
  • Set the GEMINI_API_KEY in the .env file.
GEMINI_API_KEY=your_gemini_api_key
  • Run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=gemini/gemini-2.0-flash auto deep-research
Huggingface
  • Set the HUGGINGFACE_API_KEY in the .env file.
HUGGINGFACE_API_KEY=your_huggingface_api_key
  • Run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=huggingface/meta-llama/Llama-3.3-70B-Instruct auto deep-research
Groq
  • Set the GROQ_API_KEY in the .env file.
GROQ_API_KEY=your_groq_api_key
  • Run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=groq/deepseek-r1-distill-llama-70b auto deep-research
OpenAI-Compatible Endpoints (e.g., Grok)
  • Set the OPENAI_API_KEY in the .env file.
OPENAI_API_KEY=your_api_key_for_openai_compatible_endpoints
  • Run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=openai/grok-2-latest API_BASE_URL=https://api.x.ai/v1 auto deep-research
OpenRouter (e.g., DeepSeek-R1)

For now, we recommend using OpenRouter as the LLM provider for DeepSeek-R1, because the official DeepSeek-R1 API cannot yet be used efficiently.

  • Set the OPENROUTER_API_KEY in the .env file.
OPENROUTER_API_KEY=your_openrouter_api_key
  • Run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=openrouter/deepseek/deepseek-r1 auto deep-research
DeepSeek
  • Set the DEEPSEEK_API_KEY in the .env file.
DEEPSEEK_API_KEY=your_deepseek_api_key
  • Run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=deepseek/deepseek-chat auto deep-research
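In general, switching providers follows the same pattern: set that provider's API key in the .env file and pass a LiteLLM-style model name via COMPLETION_MODEL (the placeholder below is illustrative; substitute a real model string like those shown above, and add API_BASE_URL only when targeting an OpenAI-compatible endpoint):

COMPLETION_MODEL=<provider>/<model-name> auto deep-research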

Tips

Import browser cookies to browser environment

You can import your browser cookies into the browser environment so that the agent can better access certain websites. For more details, please refer to the cookies folder.

More features coming soon! 🚀 A Web GUI interface is under development.

☑️ Todo List

Auto-Deep-Research is continuously evolving! Here's what's coming:

  • 🖥️ GUI Agent: Supporting Computer-Use agents with GUI interaction
  • 🏗️ Code Sandboxes: Supporting additional environments like E2B
  • 🎨 Web Interface: Developing a comprehensive GUI for a better user experience

Have ideas or suggestions? Feel free to open an issue! Stay tuned for more exciting updates! 🚀

📖 Documentation

More detailed documentation is coming soon 🚀, and we will keep the Documentation page updated.

🤝 Join the Community

If you find Auto-Deep-Research helpful, you can join our community on Slack or Discord.

🙏 Acknowledgements

Rome wasn't built in a day. Auto-Deep-Research is built on the AutoAgent framework. We extend our sincere gratitude to all the pioneering works that have shaped AutoAgent, including OpenAI Swarm for framework architecture inspiration, Magentic-one for the three-agent design insights, OpenHands for documentation structure, and many other excellent projects that contributed to agent-environment interaction design. Your innovations have been instrumental in making both AutoAgent and Auto-Deep-Research possible.

🌟 Cite

@misc{AutoAgent,
  title={{AutoAgent: A Fully-Automated and Zero-Code Framework for LLM Agents}},
  author={Jiabin Tang and Tianyu Fan and Chao Huang},
  year={2025},
  eprint={2502.05957},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2502.05957},
}
