Mini website crawler to make sitemap from a website.
Simple script to crawl websites and create a sitemap.xml of all public links in it.

Warning: this script only works with Python 3.
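For illustration, the core idea (fetch pages, follow same-domain links, write the visited URLs into a urlset) can be sketched in a few lines of Python. This is a minimal sketch of the general approach, not the actual implementation in main.py; the domain, crawl limit, and output name are placeholders:

```python
# Minimal sketch of the crawl-and-sitemap idea (illustrative, not main.py itself).
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, limit=100):
    """Breadth-first crawl of same-domain pages; returns the set of visited URLs."""
    domain = urlparse(start_url).netloc
    to_visit, seen = [start_url], set()
    while to_visit and len(seen) < limit:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="ignore")
        except OSError:
            continue  # skip pages that fail to load
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == domain and absolute not in seen:
                to_visit.append(absolute)
    return seen


if __name__ == "__main__":
    urls = crawl("http://blog.lesite.us")  # placeholder domain from the examples below
    with open("sitemap.xml", "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
        for url in sorted(urls):
            f.write(f"  <url><loc>{url}</loc></url>\n")
        f.write("</urlset>\n")
```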
Simple usage:

>>> python main.py --domain http://blog.lesite.us --output sitemap.xml

Read a config file to set parameters. You can override (or, for lists, add to) any parameter defined in config.json:

>>> python main.py --config config/config.json

Enable debug:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --debug

Enable verbose output:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --verbose

Disable output sorting:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --no-sort

Enable the image sitemap (more information: https://support.google.com/webmasters/answer/178636?hl=en):
$ python main.py --domain https://blog.lesite.us --output sitemap.xml --images

Also fetch iframes:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --fetch-iframes

Enable a summary report of the crawl:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --report

Skip URLs by extension (here, pdf and xml):

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --skipext pdf --skipext xml

Drop a part of the URL via a regular expression:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --drop "id=[0-9]{5}"

Exclude URLs containing a given pattern:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --exclude "action=edit"

Parse robots.txt to skip disallowed URLs:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --parserobots

The same, with a specific user agent:

$ python main.py --domain https://blog.lesite.us --output sitemap.xml --parserobots --user-agent Googlebot

Human-readable output, piped through xmllint:

$ python3 main.py --domain https://blog.lesite.us --images --parserobots | xmllint --format -

Multithreaded crawl with four workers:

$ python3 main.py --domain https://blog.lesite.us --num-workers 4

Use HTTP Basic Auth. You need to configure username and password in your config.py before:

$ python3 main.py --domain https://blog.lesite.us --auth
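For reference, a minimal config.py could look like the sketch below; the variable names are assumed from the note above, so check the shipped config.py for the exact ones:

```python
# config.py -- credentials read when --auth is passed (sketch; variable names
# are an assumption based on the note above, not verified against the script).
username = "my-user"      # HTTP Basic Auth username
password = "my-password"  # HTTP Basic Auth password
```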
Sitemaps with over 50,000 URLs should be split into an index file that points to sitemap files that each contain 50,000 URLs or fewer. Outputting as an index requires specifying an output file. An index will only be output if a crawl has more than 50,000 URLs:

$ python3 main.py --domain https://blog.lesite.us --as-index --output sitemap.xml
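To make the index idea concrete, here is a rough Python sketch of splitting a URL list into sitemap files of at most 50,000 entries plus an index that points at them. The file names and layout are assumptions for illustration, not necessarily what --as-index produces:

```python
# Sketch of a sitemap index split (illustrative only).
SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
MAX_URLS = 50_000


def write_sitemap_index(urls, prefix="sitemap"):
    """Write sitemap-1.xml, sitemap-2.xml, ... plus an index file pointing at them."""
    chunks = [urls[i:i + MAX_URLS] for i in range(0, len(urls), MAX_URLS)]
    names = []
    for n, chunk in enumerate(chunks, start=1):
        name = f"{prefix}-{n}.xml"
        names.append(name)
        with open(name, "w") as f:
            f.write(f'<urlset xmlns="{SITEMAP_NS}">\n')
            for url in chunk:
                f.write(f"  <url><loc>{url}</loc></url>\n")
            f.write("</urlset>\n")
    # The index file lists each sitemap file instead of individual URLs.
    with open(f"{prefix}.xml", "w") as f:
        f.write(f'<sitemapindex xmlns="{SITEMAP_NS}">\n')
        for name in names:
            f.write(f"  <sitemap><loc>https://blog.lesite.us/{name}</loc></sitemap>\n")
        f.write("</sitemapindex>\n")
```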
Docker usage. Build the image:

$ docker build -t python-sitemap:latest .

Run the container:

$ docker run -it python-sitemap

Run the container against a specific domain:

$ docker run -it python-sitemap --domain https://www.graylog.fr

Run the container with a config file (you need to configure config.json first):

$ docker run -it -v `pwd`/config/:/config/ -v `pwd`:/home/python-sitemap/ python-sitemap --config config/config.json