# Skill Seekers
Automatically convert documentation websites, GitHub repositories, and PDFs into Claude AI skills in minutes.
📋 View Development Roadmap & Tasks - 134 tasks across 10 categories, pick any to contribute!
Skill Seeker is an automated tool that transforms documentation websites, GitHub repositories, and PDF files into production-ready Claude AI skills. Instead of manually reading and summarizing documentation, Skill Seeker:
- Scrapes multiple sources (docs, GitHub repos, PDFs) automatically
- Analyzes code repositories with deep AST parsing
- Detects conflicts between documentation and code implementation
- Organizes content into categorized reference files
- Enhances with AI to extract best examples and key concepts
- Packages everything into an uploadable `.zip` file for Claude
Result: Get comprehensive Claude skills for any framework, API, or tool in 20-40 minutes instead of hours of manual work.
- 🎯 For Developers: Create skills from documentation + GitHub repos with conflict detection
- 🎮 For Game Devs: Generate skills for game engines (Godot docs + GitHub, Unity, etc.)
- 🔧 For Teams: Combine internal docs + code repositories into single source of truth
- 📚 For Learners: Build comprehensive skills from docs, code examples, and PDFs
- 🔍 For Open Source: Analyze repos to find documentation gaps and outdated examples
- ✅ llms.txt Support - Automatically detects and uses LLM-ready documentation files (10x faster)
- ✅ Universal Scraper - Works with ANY documentation website
- ✅ Smart Categorization - Automatically organizes content by topic
- ✅ Code Language Detection - Recognizes Python, JavaScript, C++, GDScript, etc.
- ✅ 8 Ready-to-Use Presets - Godot, React, Vue, Django, FastAPI, and more
- ✅ Basic PDF Extraction - Extract text, code, and images from PDF files
- ✅ OCR for Scanned PDFs - Extract text from scanned documents
- ✅ Password-Protected PDFs - Handle encrypted PDFs
- ✅ Table Extraction - Extract complex tables from PDFs
- ✅ Parallel Processing - 3x faster for large PDFs
- ✅ Intelligent Caching - 50% faster on re-runs
- ✅ Deep Code Analysis - AST parsing for Python, JavaScript, TypeScript, Java, C++, Go (see the sketch after this list)
- ✅ API Extraction - Functions, classes, methods with parameters and types
- ✅ Repository Metadata - README, file tree, language breakdown, stars/forks
- ✅ GitHub Issues & PRs - Fetch open/closed issues with labels and milestones
- ✅ CHANGELOG & Releases - Automatically extract version history
- ✅ Conflict Detection - Compare documented APIs vs actual code implementation
- ✅ MCP Integration - Natural language: "Scrape GitHub repo facebook/react"
- ✅ Combine Multiple Sources - Mix documentation + GitHub + PDF in one skill
- ✅ Conflict Detection - Automatically finds discrepancies between docs and code
- ✅ Intelligent Merging - Rule-based or AI-powered conflict resolution
- ✅ Transparent Reporting - Side-by-side comparison with ⚠️ warnings
- ✅ Documentation Gap Analysis - Identifies outdated docs and undocumented features
- ✅ Single Source of Truth - One skill showing both intent (docs) and reality (code)
- ✅ Backward Compatible - Legacy single-source configs still work
- ✅ AI-Powered Enhancement - Transforms basic templates into comprehensive guides
- ✅ No API Costs - FREE local enhancement using Claude Code Max
- ✅ MCP Server for Claude Code - Use directly from Claude Code with natural language
- ✅ Async Mode - 2-3x faster scraping with async/await (use `--async` flag)
- ✅ Large Documentation Support - Handle 10K-40K+ page docs with intelligent splitting
- ✅ Router/Hub Skills - Intelligent routing to specialized sub-skills
- ✅ Parallel Scraping - Process multiple skills simultaneously
- ✅ Checkpoint/Resume - Never lose progress on long scrapes
- ✅ Caching System - Scrape once, rebuild instantly
- ✅ Fully Tested - 299 tests with 100% pass rate
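For intuition on what AST-based API extraction involves, here is a minimal Python sketch using the standard `ast` module. The function name `extract_api` is illustrative, not the project's actual internals:

```python
import ast

def extract_api(source: str) -> list[str]:
    """Collect function signatures from Python source (illustrative only)."""
    signatures = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            params = ", ".join(arg.arg for arg in node.args.args)
            signatures.append(f"{node.name}({params})")
    return signatures

print(extract_api("def move_local_x(delta, snap=False): pass"))
# ['move_local_x(delta, snap)']
```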
```bash
# One-time setup (5 minutes)
./setup_mcp.sh

# Then in Claude Code, just ask:
"Generate a React skill from https://react.dev/"
"Scrape PDF at docs/manual.pdf and create skill"
```
Time: Automated | Quality: Production-ready | Cost: Free
```bash
# Install dependencies (2 pip packages)
pip3 install requests beautifulsoup4

# Generate a React skill in one command
python3 cli/doc_scraper.py --config configs/react.json --enhance-local

# Upload output/react.zip to Claude - Done!
```
Time: ~25 minutes | Quality: Production-ready | Cost: Free
```bash
# Install PDF support
pip3 install PyMuPDF

# Basic PDF extraction
python3 cli/pdf_scraper.py --pdf docs/manual.pdf --name myskill

# Advanced features:
#   --extract-tables  Extract tables
#   --parallel        Fast parallel processing
#   --workers 8       Use 8 CPU cores
python3 cli/pdf_scraper.py --pdf docs/manual.pdf --name myskill \
    --extract-tables --parallel --workers 8

# Scanned PDFs (requires: pip install pytesseract Pillow)
python3 cli/pdf_scraper.py --pdf docs/scanned.pdf --name myskill --ocr

# Password-protected PDFs
python3 cli/pdf_scraper.py --pdf docs/encrypted.pdf --name myskill --password mypassword

# Upload output/myskill.zip to Claude - Done!
```
Time: ~5-15 minutes (or 2-5 minutes with parallel) | Quality: Production-ready | Cost: Free
Advanced Features:
- ✅ OCR for scanned PDFs (requires pytesseract)
- ✅ Password-protected PDF support
- ✅ Table extraction
- ✅ Parallel processing (3x faster)
- ✅ Intelligent caching
```bash
# Install GitHub support
pip3 install PyGithub

# Basic repository scraping
python3 cli/github_scraper.py --repo facebook/react

# Using a config file
python3 cli/github_scraper.py --config configs/react_github.json

# With authentication (higher rate limits)
export GITHUB_TOKEN=ghp_your_token_here
python3 cli/github_scraper.py --repo facebook/react

# Customize what to include:
#   --include-issues      Extract GitHub Issues
#   --max-issues 100      Limit issue count
#   --include-changelog   Extract CHANGELOG.md
#   --include-releases    Extract GitHub Releases
python3 cli/github_scraper.py --repo django/django \
    --include-issues --max-issues 100 --include-changelog --include-releases

# MCP usage in Claude Code
"Scrape GitHub repository facebook/react"

# Upload output/react.zip to Claude - Done!
```
Time: ~5-10 minutes | Quality: Production-ready | Cost: Free
What Gets Extracted:
- ✅ README.md and documentation files
- ✅ GitHub Issues (open/closed, labels, milestones)
- ✅ CHANGELOG.md and version history
- ✅ GitHub Releases with release notes
- ✅ Repository metadata (stars, language, topics)
- ✅ File structure and language breakdown
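For a sense of what this extraction looks like, here is a minimal PyGithub sketch (illustrative only; the actual `github_scraper.py` may work differently):

```python
from github import Github

gh = Github()  # pass a token string for higher rate limits
repo = gh.get_repo("facebook/react")

# Repository metadata
print(repo.stargazers_count, repo.language, repo.get_topics())

# README contents
readme = repo.get_readme().decoded_content.decode()

# First few closed issues with their labels
for issue in repo.get_issues(state="closed")[:5]:
    print(issue.number, issue.title, [label.name for label in issue.labels])
```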
The Problem: Documentation and code often drift apart. Docs might be outdated, missing features that exist in code, or documenting features that were removed.
The Solution: Combine documentation + GitHub + PDF into one unified skill that shows BOTH what's documented AND what actually exists, with clear warnings about discrepancies.
```bash
# Create unified config (mix documentation + GitHub)
cat > configs/myframework_unified.json << 'EOF'
{
  "name": "myframework",
  "description": "Complete framework knowledge from docs + code",
  "merge_mode": "rule-based",
  "sources": [
    {
      "type": "documentation",
      "base_url": "https://docs.myframework.com/",
      "extract_api": true,
      "max_pages": 200
    },
    {
      "type": "github",
      "repo": "owner/myframework",
      "include_code": true,
      "code_analysis_depth": "surface"
    }
  ]
}
EOF

# Run unified scraper
python3 cli/unified_scraper.py --config configs/myframework_unified.json

# Upload output/myframework.zip to Claude - Done!
```
Time: ~30-45 minutes | Quality: Production-ready with conflict detection | Cost: Free
What Makes It Special:
✅ Conflict Detection - Automatically finds 4 types of discrepancies (sketched below):

- 🔴 Missing in code (high): Documented but not implemented
- 🟡 Missing in docs (medium): Implemented but not documented
- ⚠️ Signature mismatch: Different parameters/types
- ℹ️ Description mismatch: Different explanations
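A minimal sketch of how a pair of extracted signatures might map to these conflict types. The names here are illustrative, not the tool's actual merge logic:

```python
def classify_conflict(doc_sig: str | None, code_sig: str | None) -> str | None:
    """Map a (documented, implemented) signature pair to a conflict type."""
    if doc_sig and not code_sig:
        return "missing_in_code"       # 🔴 documented but not implemented
    if code_sig and not doc_sig:
        return "missing_in_docs"       # 🟡 implemented but not documented
    if doc_sig != code_sig:
        return "signature_mismatch"    # ⚠️ different parameters/types
    return None                        # no conflict

print(classify_conflict("move_local_x(delta)", "move_local_x(delta, snap)"))
# signature_mismatch
```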
✅ Transparent Reporting - Shows both versions side-by-side:
#### `move_local_x(delta: float)`

⚠️ **Conflict**: Documentation signature differs from implementation

**Documentation says:**

```python
def move_local_x(delta: float)
```

**Code implementation:**

```python
def move_local_x(delta: float, snap: bool = False) -> None
```

✅ **Advantages:**

- **Identifies documentation gaps** - Find outdated or missing docs automatically
- **Catches code changes** - Know when APIs change without docs being updated
- **Single source of truth** - One skill showing intent (docs) AND reality (code)
- **Actionable insights** - Get suggestions for fixing each conflict
- **Development aid** - See what's actually in the codebase vs what's documented

**Example Unified Configs:**

- `configs/react_unified.json` - React docs + GitHub repo
- `configs/django_unified.json` - Django docs + GitHub repo
- `configs/fastapi_unified.json` - FastAPI docs + GitHub repo

**Full Guide:** See [docs/UNIFIED_SCRAPING.md](docs/UNIFIED_SCRAPING.md) for complete documentation.

## How It Works

```mermaid
graph LR
    A[Documentation Website] --> B[Skill Seeker]
    B --> C[Scraper]
    B --> D[AI Enhancement]
    B --> E[Packager]
    C --> F[Organized References]
    D --> F
    F --> E
    E --> G[Claude Skill .zip]
    G --> H[Upload to Claude AI]
```

- Detect llms.txt: Checks for llms-full.txt, llms.txt, llms-small.txt first
- Scrape: Extracts all pages from documentation
- Categorize: Organizes content into topics (API, guides, tutorials, etc.)
- Enhance: AI analyzes docs and creates comprehensive SKILL.md with examples
- Package: Bundles everything into a Claude-ready `.zip` file
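The llms.txt step amounts to probing a few well-known paths before falling back to a full crawl. A minimal sketch (illustrative, not the scraper's actual code):

```python
import requests

CANDIDATES = ["llms-full.txt", "llms.txt", "llms-small.txt"]

def find_llms_file(base_url: str) -> str | None:
    """Return the first LLM-ready documentation file the site serves, if any."""
    for name in CANDIDATES:
        url = base_url.rstrip("/") + "/" + name
        resp = requests.head(url, timeout=10, allow_redirects=True)
        if resp.status_code == 200:
            return url
    return None

print(find_llms_file("https://docs.example.com"))
```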
Before you start, make sure you have:
- Python 3.10 or higher - Download | Check: `python3 --version`
- Git - Download | Check: `git --version`
- 15-30 minutes for first-time setup
First time user? → Start Here: Bulletproof Quick Start Guide 🎯
This guide walks you through EVERYTHING step-by-step (Python install, git clone, first skill creation).
Use Skill Seeker directly from Claude Code with natural language!
```bash
# Clone repository
git clone https://github.com/yusufkaraaslan/Skill_Seekers.git
cd Skill_Seekers

# One-time setup (5 minutes)
./setup_mcp.sh

# Restart Claude Code, then just ask:
```
In Claude Code:
```
List all available configs
Generate config for Tailwind at https://tailwindcss.com/docs
Scrape docs using configs/react.json
Package skill at output/react/
```

Benefits:
- ✅ No manual CLI commands
- ✅ Natural language interface
- ✅ Integrated with your workflow
- ✅ 9 tools available instantly (includes automatic upload!)
- ✅ Tested and working in production
Full guides:
- 📘 MCP Setup Guide - Complete installation instructions
- 🧪 MCP Testing Guide - Test all 9 tools
- 📦 Large Documentation Guide - Handle 10K-40K+ pages
- 📤 Upload Guide - How to upload skills to Claude
```bash
# Clone repository
git clone https://github.com/yusufkaraaslan/Skill_Seekers.git
cd Skill_Seekers

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate   # macOS/Linux
# OR on Windows: venv\Scripts\activate

# Install dependencies
pip install requests beautifulsoup4 pytest

# Save dependencies
pip freeze > requirements.txt

# Optional: Install anthropic for API-based enhancement (not needed for LOCAL enhancement)
# pip install anthropic
```
Always activate the virtual environment before using Skill Seeker:
```bash
source venv/bin/activate   # Run this each time you start a new terminal session
```
```bash
# Make sure venv is activated (you should see (venv) in your prompt)
source venv/bin/activate

# Optional: Estimate pages first (fast, 1-2 minutes)
python3 cli/estimate_pages.py configs/godot.json

# Use Godot preset
python3 cli/doc_scraper.py --config configs/godot.json

# Use React preset
python3 cli/doc_scraper.py --config configs/react.json

# See all presets
ls configs/
```
```bash
python3 cli/doc_scraper.py --interactive
```
```bash
python3 cli/doc_scraper.py \
    --name react \
    --url https://react.dev/ \
    --description "React framework for UIs"
```

Once your skill is packaged, you need to upload it to Claude:
```bash
# Set your API key (one-time)
export ANTHROPIC_API_KEY=sk-ant-...

# Package and upload automatically
python3 cli/package_skill.py output/react/ --upload

# OR upload existing .zip
python3 cli/upload_skill.py output/react.zip
```
Benefits:
- ✅ Fully automatic
- ✅ No manual steps
- ✅ Works from command line
Requirements:
- Anthropic API key (get from https://console.anthropic.com/)
```bash
# Package skill
python3 cli/package_skill.py output/react/

# This will:
# 1. Create output/react.zip
# 2. Open the output/ folder automatically
# 3. Show upload instructions

# Then manually upload:
# - Go to https://claude.ai/skills
# - Click "Upload Skill"
# - Select output/react.zip
# - Done!
```
Benefits:
- ✅ No API key needed
- ✅ Works for everyone
- ✅ Folder opens automatically
In Claude Code, just ask:

```
"Package and upload the React skill"

# With API key set:
# - Packages the skill
# - Uploads to Claude automatically
# - Done! ✅

# Without API key:
# - Packages the skill
# - Shows where to find the .zip
# - Provides manual upload instructions
```

Benefits:
- ✅ Natural language
- ✅ Smart auto-detection (uploads if API key available)
- ✅ Works with or without API key
- ✅ No errors or failures
```
doc-to-skill/
├── cli/
│   ├── doc_scraper.py      # Main scraping tool
│   ├── package_skill.py    # Package to .zip
│   ├── upload_skill.py     # Auto-upload (API)
│   └── enhance_skill.py    # AI enhancement
├── mcp/                    # MCP server for Claude Code
│   └── server.py           # 9 MCP tools
├── configs/                # Preset configurations
│   ├── godot.json          # Godot Engine
│   ├── react.json          # React
│   ├── vue.json            # Vue.js
│   ├── django.json         # Django
│   └── fastapi.json        # FastAPI
└── output/                 # All output (auto-created)
    ├── godot_data/         # Scraped data
    ├── godot/              # Built skill
    └── godot.zip           # Packaged skill
```

Estimate pages before scraping:

```bash
python3 cli/estimate_pages.py configs/react.json

# Output:
# 📊 ESTIMATION RESULTS
# ✅ Pages Discovered: 180
# 📈 Estimated Total: 230
# ⏱️ Time Elapsed: 1.2 minutes
# 💡 Recommended max_pages: 280
```

Benefits:
- Know page count BEFORE scraping (saves time) - see the sketch after this list
- Validates URL patterns work correctly
- Estimates total scraping time
- Recommends optimal `max_pages` setting
- Fast (1-2 minutes vs 20-40 minutes full scrape)
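Page estimation boils down to breadth-first link discovery within the documentation domain. A rough sketch using the same requests/beautifulsoup4 dependencies (not the actual `estimate_pages.py` implementation):

```python
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def discover_pages(start_url: str, limit: int = 200) -> set[str]:
    """Breadth-first crawl that only counts same-domain links."""
    domain = urlparse(start_url).netloc
    seen, queue = {start_url}, [start_url]
    while queue and len(seen) < limit:
        page = queue.pop(0)
        html = requests.get(page, timeout=10).text
        for a in BeautifulSoup(html, "html.parser").select("a[href]"):
            url = urljoin(page, a["href"]).split("#")[0]
            if urlparse(url).netloc == domain and url not in seen:
                seen.add(url)
                queue.append(url)
    return seen
```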
```bash
python3 cli/doc_scraper.py --config configs/godot.json

# If data exists:
# ✓ Found existing data: 245 pages
# Use existing data? (y/n): y
# ⏭️ Skipping scrape, using existing data
```
Automatic pattern extraction:
- Extracts common code patterns from docs
- Detects programming language
- Creates quick reference with real examples
- Smarter categorization with scoring
Enhanced SKILL.md:
- Real code examples from documentation
- Language-annotated code blocks
- Common patterns section
- Quick reference from actual usage examples
Automatically infers categories from:
- URL structure
- Page titles
- Content keywords
- Scoring for better accuracy (a minimal sketch follows this list)
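A minimal sketch of keyword scoring across those signals. The weights are assumptions for illustration, not the tool's actual values:

```python
CATEGORY_KEYWORDS = {
    "getting_started": ["intro", "quickstart", "install"],
    "api": ["api", "reference", "class"],
}

def categorize(url: str, title: str, body: str) -> str:
    """Score each category; URL hits weigh more than title or body hits."""
    scores = {}
    for category, keywords in CATEGORY_KEYWORDS.items():
        score = 0
        for kw in keywords:
            score += 3 if kw in url.lower() else 0
            score += 2 if kw in title.lower() else 0
            score += min(body.lower().count(kw), 5)   # cap body influence
        scores[category] = score
    return max(scores, key=scores.get)

print(categorize("/docs/api/reference", "API Reference", "class Foo ..."))
# api
```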
```bash
# Automatically detects:
# - Python (def, import, from)
# - JavaScript (const, let, =>)
# - GDScript (func, var, extends)
# - C++ (#include, int main)
# - And more...
```
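Detection like this can be approximated with a few signature patterns per language. A hedged sketch, not the actual detector:

```python
import re

SIGNATURES = {
    "python": [r"\bdef \w+\(", r"\bimport \w+"],
    "javascript": [r"\bconst \w+", r"=>"],
    "gdscript": [r"\bfunc \w+", r"\bextends \w+"],
    "cpp": [r"#include\s*<", r"\bint main\s*\("],
}

def detect_language(code: str) -> str:
    """Count how many signature patterns match; pick the best-scoring language."""
    scores = {lang: sum(bool(re.search(p, code)) for p in patterns)
              for lang, patterns in SIGNATURES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_language("def hello():\n    import os"))   # python
```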
```bash
# Scrape once
python3 cli/doc_scraper.py --config configs/react.json

# Later, just rebuild (instant)
python3 cli/doc_scraper.py --config configs/react.json --skip-scrape
```
```bash
# Enable async mode with 8 workers (recommended for large docs)
python3 cli/doc_scraper.py --config configs/react.json --async --workers 8

# Small docs (~100-500 pages)
python3 cli/doc_scraper.py --config configs/mydocs.json --async --workers 4

# Large docs (2000+ pages) with no rate limiting
python3 cli/doc_scraper.py --config configs/largedocs.json --async --workers 8 --no-rate-limit
```
Performance Comparison:
- Sync mode (threads): ~18 pages/sec, 120 MB memory
- Async mode: ~55 pages/sec, 40 MB memory
- Result: 3x faster, 66% less memory!
When to use:
- ✅ Large documentation (500+ pages)
- ✅ Network latency is high
- ✅ Memory is constrained
- ❌ Small docs (< 100 pages) - overhead not worth it
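For intuition, concurrent page fetching with a worker cap might look like the following aiohttp sketch. This is an assumption about the approach; the project's actual async scraper may differ:

```python
import asyncio
import aiohttp

async def fetch_all(urls: list[str], workers: int = 8) -> dict[str, str]:
    """Fetch pages concurrently, never more than `workers` at a time."""
    sem = asyncio.Semaphore(workers)
    async with aiohttp.ClientSession() as session:
        async def fetch(url: str) -> tuple[str, str]:
            async with sem:
                async with session.get(url) as resp:
                    return url, await resp.text()
        pairs = await asyncio.gather(*(fetch(u) for u in urls))
    return dict(pairs)

# pages = asyncio.run(fetch_all(["https://react.dev/", "https://react.dev/learn"]))
```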
See full guide: ASYNC_SUPPORT.md
```bash
# Option 1: During scraping (API-based, requires API key)
pip3 install anthropic
export ANTHROPIC_API_KEY=sk-ant-...
python3 cli/doc_scraper.py --config configs/react.json --enhance

# Option 2: During scraping (LOCAL, no API key - uses Claude Code Max)
python3 cli/doc_scraper.py --config configs/react.json --enhance-local

# Option 3: After scraping (API-based, standalone)
python3 cli/enhance_skill.py output/react/

# Option 4: After scraping (LOCAL, no API key, standalone)
python3 cli/enhance_skill_local.py output/react/
```
What it does:
- Reads your reference documentation
- Uses Claude to generate an excellent SKILL.md
- Extracts best code examples (5-10 practical examples)
- Creates comprehensive quick reference
- Adds domain-specific key concepts
- Provides navigation guidance for different skill levels
- Automatically backs up original
- Quality: Transforms 75-line templates into 500+ line comprehensive guides
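The API-based path presumably reduces to a call like this sketch using the anthropic SDK. The model name and prompt are placeholder assumptions, not the tool's actual values:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def enhance_skill_md(references: str) -> str:
    """Ask Claude to rewrite a SKILL.md from scraped reference docs (sketch)."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",   # placeholder; pick any current model
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"Write a comprehensive SKILL.md from these docs:\n{references}",
        }],
    )
    return message.content[0].text
```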
LOCAL Enhancement (Recommended):
- Uses your Claude Code Max plan (no API costs)
- Opens new terminal with Claude Code
- Analyzes reference files automatically
- Takes 30-60 seconds
- Quality: 9/10 (comparable to API version)
For massive documentation sites like Godot (40K pages), AWS, or Microsoft Docs:
```bash
# 1. Estimate first (discover page count)
python3 cli/estimate_pages.py configs/godot.json

# 2. Auto-split into focused sub-skills
python3 cli/split_config.py configs/godot.json --strategy router
# Creates:
# - godot-scripting.json (5K pages)
# - godot-2d.json (8K pages)
# - godot-3d.json (10K pages)
# - godot-physics.json (6K pages)
# - godot-shaders.json (11K pages)

# 3. Scrape all in parallel (4-8 hours instead of 20-40!)
for config in configs/godot-*.json; do
    python3 cli/doc_scraper.py --config $config &
done
wait

# 4. Generate intelligent router/hub skill
python3 cli/generate_router.py configs/godot-*.json

# 5. Package all skills
python3 cli/package_multi.py output/godot*/

# 6. Upload all .zip files to Claude
# Users just ask questions naturally!
# Router automatically directs to the right sub-skill!
```
Split Strategies:
- auto - Intelligently detects best strategy based on page count
- category - Split by documentation categories (scripting, 2d, 3d, etc.)
- router - Create hub skill + specialized sub-skills (RECOMMENDED)
- size - Split every N pages (for docs without clear categories)
Benefits:
- ✅ Faster scraping (parallel execution)
- ✅ More focused skills (better Claude performance)
- ✅ Easier maintenance (update one topic at a time)
- ✅ Natural user experience (router handles routing)
- ✅ Avoids context window limits
Configuration:
{"name":"godot","max_pages":40000,"split_strategy":"router","split_config": {"target_pages_per_skill":5000,"create_router":true,"split_by_categories": ["scripting","2d","3d","physics"] }}Full Guide:Large Documentation Guide
Never lose progress on long-running scrapes:
Enable in config (saves a checkpoint every 1000 pages):

```json
{
  "checkpoint": {
    "enabled": true,
    "interval": 1000
  }
}
```

```bash
# If scrape is interrupted (Ctrl+C or crash), resume from last checkpoint
python3 cli/doc_scraper.py --config configs/godot.json --resume
# ✅ Resuming from checkpoint (12,450 pages scraped)
# ⏭️ Skipping 12,450 already-scraped pages
# 🔄 Continuing from where we left off...

# Start fresh (clear checkpoint)
python3 cli/doc_scraper.py --config configs/godot.json --fresh
```
Benefits:
- ✅ Auto-saves every 1000 pages (configurable)
- ✅ Saves on interruption (Ctrl+C)
- ✅ Resume with `--resume` flag (a minimal sketch follows this list)
- ✅ Never lose hours of scraping progress
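Conceptually, a checkpoint is just the set of already-scraped URLs persisted to disk. A minimal sketch with a hypothetical file name and format:

```python
import json
from pathlib import Path

CHECKPOINT = Path("output/godot_data/checkpoint.json")  # hypothetical location

def save_checkpoint(scraped: set[str]) -> None:
    """Persist the URLs scraped so far."""
    CHECKPOINT.parent.mkdir(parents=True, exist_ok=True)
    CHECKPOINT.write_text(json.dumps(sorted(scraped)))

def load_checkpoint() -> set[str]:
    """Return previously scraped URLs, or an empty set on a fresh run."""
    return set(json.loads(CHECKPOINT.read_text())) if CHECKPOINT.exists() else set()
```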
```bash
# 1. Scrape + Build + AI Enhancement (LOCAL, no API key)
python3 cli/doc_scraper.py --config configs/godot.json --enhance-local

# 2. Wait for new terminal to close (enhancement completes)
# Check the enhanced SKILL.md:
cat output/godot/SKILL.md

# 3. Package
python3 cli/package_skill.py output/godot/

# 4. Done! You have godot.zip with excellent SKILL.md
```
Time: 20-40 minutes (scraping) + 60 seconds (enhancement) = ~21-41 minutes
```bash
# 1. Use cached data + Local Enhancement
python3 cli/doc_scraper.py --config configs/godot.json --skip-scrape
python3 cli/enhance_skill_local.py output/godot/

# 2. Package
python3 cli/package_skill.py output/godot/

# 3. Done!
```
Time: 1-3 minutes (build) + 60 seconds (enhancement) = ~2-4 minutes total
```bash
# 1. Scrape + Build (no enhancement)
python3 cli/doc_scraper.py --config configs/godot.json

# 2. Package
python3 cli/package_skill.py output/godot/

# 3. Done! (SKILL.md will be basic template)
```
Time: 20-40 minutes. Note: SKILL.md will be generic - enhancement strongly recommended!
| Config | Framework | Description |
|---|---|---|
| godot.json | Godot Engine | Game development |
| react.json | React | UI framework |
| vue.json | Vue.js | Progressive framework |
| django.json | Django | Python web framework |
| fastapi.json | FastAPI | Modern Python API |
| ansible-core.json | Ansible Core 2.19 | Automation & configuration |
```bash
# Godot
python3 cli/doc_scraper.py --config configs/godot.json

# React
python3 cli/doc_scraper.py --config configs/react.json

# Vue
python3 cli/doc_scraper.py --config configs/vue.json

# Django
python3 cli/doc_scraper.py --config configs/django.json

# FastAPI
python3 cli/doc_scraper.py --config configs/fastapi.json

# Ansible
python3 cli/doc_scraper.py --config configs/ansible-core.json
```
```bash
python3 cli/doc_scraper.py --interactive
# Follow prompts, it will create the config for you

# Copy a preset
cp configs/react.json configs/myframework.json

# Edit it
nano configs/myframework.json

# Use it
python3 cli/doc_scraper.py --config configs/myframework.json
```
{"name":"myframework","description":"When to use this skill","base_url":"https://docs.myframework.com/","selectors": {"main_content":"article","title":"h1","code_blocks":"pre code" },"url_patterns": {"include": ["/docs","/guide"],"exclude": ["/blog","/about"] },"categories": {"getting_started": ["intro","quickstart"],"api": ["api","reference"] },"rate_limit":0.5,"max_pages":500}output/├── godot_data/ # Scraped raw data│ ├── pages/ # JSON files (one per page)│ └── summary.json # Overview│└── godot/ # The skill ├── SKILL.md # Enhanced with real examples ├── references/ # Categorized docs │ ├── index.md │ ├── getting_started.md │ ├── scripting.md │ └── ... ├── scripts/ # Empty (add your own) └── assets/ # Empty (add your own)# Interactive modepython3 cli/doc_scraper.py --interactive# Use config filepython3 cli/doc_scraper.py --config configs/godot.json# Quick modepython3 cli/doc_scraper.py --name react --url https://react.dev/# Skip scraping (use existing data)python3 cli/doc_scraper.py --config configs/godot.json --skip-scrape# With descriptionpython3 cli/doc_scraper.py \ --name react \ --url https://react.dev/ \ --description"React framework for building UIs"
Edit `max_pages` in config to test:
{"max_pages":20// Test with just 20 pages}# Scrape oncepython3 cli/doc_scraper.py --config configs/react.json# Rebuild multiple times (instant)python3 cli/doc_scraper.py --config configs/react.json --skip-scrapepython3 cli/doc_scraper.py --config configs/react.json --skip-scrape
```python
# Test in Python
from bs4 import BeautifulSoup
import requests

url = "https://docs.example.com/page"
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

# Try different selectors
print(soup.select_one('article'))
print(soup.select_one('main'))
print(soup.select_one('div[role="main"]'))
```
```bash
# After building, check:
cat output/godot/SKILL.md               # Should have real examples
cat output/godot/references/index.md    # Categories
```
- Check your `main_content` selector
- Try: `article`, `main`, `div[role="main"]`
```bash
# Force re-scrape
rm -rf output/myframework_data/
python3 cli/doc_scraper.py --config configs/myframework.json
```

Edit the config `categories` section with better keywords.
```bash
# Delete old data
rm -rf output/godot_data/

# Re-scrape
python3 cli/doc_scraper.py --config configs/godot.json
```
| Task | Time | Notes |
|---|---|---|
| Scraping (sync) | 15-45 min | First time only, thread-based |
| Scraping (async) | 5-15 min | 2-3x faster with --async flag |
| Building | 1-3 min | Fast! |
| Re-building | <1 min | With --skip-scrape |
| Packaging | 5-10 sec | Final zip |
One tool does everything:
- ✅ Scrapes documentation
- ✅ Auto-detects existing data
- ✅ Generates better knowledge
- ✅ Creates enhanced skills
- ✅ Works with presets or custom configs
- ✅ Supports skip-scraping for fast iteration
Simple structure:
- `doc_scraper.py` - The tool
- `configs/` - Presets
- `output/` - Everything else
Better output:
- Real code examples with language detection
- Common patterns extracted from docs
- Smart categorization
- Enhanced SKILL.md with actual examples
- BULLETPROOF_QUICKSTART.md - 🎯 START HERE if you're new!
- QUICKSTART.md - Quick start for experienced users
- TROUBLESHOOTING.md - Common issues and solutions
- docs/LARGE_DOCUMENTATION.md - Handle 10K-40K+ page docs
- ASYNC_SUPPORT.md - Async mode guide (2-3x faster scraping)
- docs/ENHANCEMENT.md - AI enhancement guide
- docs/UPLOAD_GUIDE.md - How to upload skills to Claude
- docs/MCP_SETUP.md - MCP integration setup
- docs/CLAUDE.md - Technical architecture
- STRUCTURE.md - Repository structure
```bash
# Try Godot
python3 cli/doc_scraper.py --config configs/godot.json

# Try React
python3 cli/doc_scraper.py --config configs/react.json

# Or go interactive
python3 cli/doc_scraper.py --interactive
```
MIT License - see LICENSE file for details
Happy skill building! 🚀