A Python module to scrape several search engines (like Google, Yandex, Bing, Duckduckgo, ...), including asynchronous networking support.
Scrapeulous.com - Scraping Service
GoogleScraper is an open source tool and will remain an open source tool in the future.
Also the modern successor of GoogleScraper, the general purpose crawling infrastructure, will remain open source and free.
Some people, however, want to quickly have a service that lets them scrape some data from Google or any other search engine. For this reason, I created the web service scrapeulous.com.
Last State: February 2019
The successor of GoogleScraper can be found here
This means that I won't maintain this project anymore. All new development goes into the above project.
There are several reasons why I won't continue to put much effort into this project.
- Python is not the language/framework for modern scraping. Node/Javascript is. The reason is puppeteer. puppeteer is the de-facto standard for controlling and automating web browsers (especially Chrome). This project uses Selenium. Selenium is kind of old and outdated.
- Scraping in 2019 is almost completely reduced to controlling web browsers. There is no more need to scrape directly on the HTTP protocol level. It's too buggy and too easy to fend off with anti-bot mechanisms. And this project still supports raw http requests.
- Scraping should be parallelized in the cloud or among a set of dedicated machines. GoogleScraper cannot handle such use cases without significant effort.
- This project is extremely buggy.
For this reason I am going to continue developing a scraping library named se-scraper in Javascript, which runs on top of puppeteer.
You can download the app here: https://www.npmjs.com/package/se-scraper
It supports a wide range of different search engines and is much more efficient than GoogleScraper. The code base is also much less complex without threading/queueing and complex logging capabilities.
For questions you can contact me on my webpage and write me an email there.
This project is back to life after two years of abandonment. In the coming weeks, I will take some time to update all functionality to the most recent developments. This encompasses updating all regexes and adapting to changes in search engine behavior. After a couple of weeks, you can expect this project to work again as documented here.
GoogleScraper is written in Python 3. You should install at least Python 3.6. The last major development was all done with Python 3.7. So when using Ubuntu 16.04 and Python 3.7 for instance, please install Python 3 from the official packages. I use the Anaconda Python distribution, which does work very well for me.
Furthermore, you need to install the Chrome Browser and also the ChromeDriver for Selenium mode. Alternatively install the Firefox Browser and the geckodriver for Selenium Mode. See instructions below.
You can also install GoogleScraper comfortably with pip:
virtualenv --python python3 env
source env/bin/activate
pip install GoogleScraper
Right now (September 2018) this is discouraged. Please install from the latest GitHub sources.
Sometimes the newest and most awesome stuff is not available in the cheeseshop (that's how they call https://pypi.python.org/pypi/pip). Therefore you may want to install GoogleScraper from the latest source that resides in this Github repository. You can do so like this:
virtualenv --python python3 env
source env/bin/activate
pip install git+git://github.com/NikolaiT/GoogleScraper/
Please note that some features and examples might not work as expected. I also don't guarantee that the app even runs. I only guarantee (to a certain degree at least) that installing from pip will yield a usable version.
Download the latest chromedriver from here: https://sites.google.com/a/chromium.org/chromedriver/downloads
Unzip the driver, save it somewhere, and then update the chromedriver_path in the GoogleScraper configuration file scrape_config.py to the path where you saved the driver:
chromedriver_path = 'Drivers/chromedriver'
Download the latest geckodriver from here: https://github.com/mozilla/geckodriver/releases
Unzip the driver, save it somewhere, and then update the geckodriver_path in the GoogleScraper configuration file scrape_config.py to the path where you saved the driver:
geckodriver_path = 'Drivers/geckodriver'
Update the following settings in the GoogleScraper configuration file scrape_config.py to your values.
# chrome driver executable path
# get chrome drivers here: https://chromedriver.storage.googleapis.com/index.html?path=2.41/
chromedriver_path = 'Drivers/chromedriver'

# geckodriver executable path
# get gecko drivers here: https://github.com/mozilla/geckodriver/releases
geckodriver_path = 'Drivers/geckodriver'

# path to firefox binary
firefox_binary_path = '/home/nikolai/firefox/firefox'

# path to chromium browser binary
chrome_binary_path = '/usr/bin/chromium-browser'
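Before running GoogleScraper, it can be worth checking that Selenium actually finds the configured driver. The snippet below is a standalone sanity check (not part of GoogleScraper) and assumes the Selenium 3 executable_path argument that was current when this project was last maintained.

```python
# Standalone sanity check for the configured driver path (not part of GoogleScraper).
# Assumes Selenium 3.x, where the driver path is passed via executable_path.
from selenium import webdriver

driver = webdriver.Chrome(executable_path='Drivers/chromedriver')
try:
    driver.get('https://www.example.org')
    print(driver.title)  # should print "Example Domain"
finally:
    driver.quit()
```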
Install as described above. Make sure that you have the selenium drivers for chrome/firefox if you want to use GoogleScraper in selenium mode.
See all options
GoogleScraper -h
Scrape the single keyword "apple" with http mode:
GoogleScraper -m http --keyword "apple" -v info
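The same single-keyword scrape can also be started from Python code instead of the command line. The sketch below assumes the scrape_with_config entry point exported by the GoogleScraper package; the config keys and result attributes (serps, links, title, link) mirror the CLI flags and the database model, but may differ between versions.

```python
# Minimal library usage sketch; config keys and result attributes are
# assumptions that may vary between GoogleScraper versions.
from GoogleScraper import scrape_with_config, GoogleSearchError

config = {
    'keyword': 'apple',
    'search_engines': ['google'],
    'scrape_method': 'http',
    'num_pages_for_keyword': 1,
    'use_own_ip': True,
}

try:
    search = scrape_with_config(config)
except GoogleSearchError as e:
    print(e)
else:
    for serp in search.serps:
        print(serp)
        for link in serp.links:
            print(link.title, link.link)
```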
Scrape all keywords that are in the file SearchData/5words in selenium mode using chrome in headless mode:
GoogleScraper -m selenium --sel-browser chrome --browser-mode headless --keyword-file SearchData/5words -v info
Scrape all keywords that are in
- the keyword file SearchData/some_words.txt
- with http mode
- using 5 threads
- scrape in the search engines bing and yahoo
- store the output in a JSON file
- increase verbosity to the debug level
GoogleScraper -m http --keyword-file SearchData/some_words.txt --num-workers 5 --search-engines "bing,yahoo" --output-filename threaded-results.json -v debug
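The resulting JSON file can then be post-processed with plain Python. The field names used below ('query', 'results', 'title', 'link') are assumptions about the output schema and may need adjusting to the file your version actually writes.

```python
# Hypothetical post-processing of the JSON output file created above.
# Adjust the field names to what your GoogleScraper version actually emits.
import json

with open('threaded-results.json', encoding='utf-8') as f:
    serps = json.load(f)

for serp in serps:
    print(serp.get('query'))
    for result in serp.get('results', []):
        print('   ', result.get('title'), '->', result.get('link'))
```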
Do an image search for the keyword "K2 mountain" on google:
GoogleScraper -s "google" -q "K2 mountain" -t image -v info
This is probably the most awesome feature of GoogleScraper. You can scrape with thousands of requests per second if either
- The search engine doesn't block you (Bing didn't block me when requesting 100 keywords / second)
- You have enough proxies
Example for Asynchronous mode:
Search the keywords in the keyword file SearchData/marketing-models-brands.txt on bing and yahoo. By default, asynchronous mode spawns 100 requests at the same time. This means around 100 requests per second (depending on the actual connection...).
GoogleScraper -s "bing,yahoo" --keyword-file SearchData/marketing-models-brands.txt -m http-async -v info -o marketing.json
The results (partial results, because there were too many keywords for one IP address) can be inspected in the file Outputs/marketing.json.
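For illustration only, the sketch below shows the general technique behind http-async mode: many concurrent requests issued from a single event loop with aiohttp. It is not GoogleScraper's internal code, and the URL and keywords are placeholders.

```python
# Illustration of the idea behind http-async mode (not GoogleScraper internals):
# issue many search requests concurrently from a single event loop.
import asyncio
import aiohttp

KEYWORDS = ['marketing mix', 'brand equity', 'positioning']  # stand-in keywords

async def fetch(session, keyword):
    async with session.get('https://www.bing.com/search', params={'q': keyword}) as resp:
        body = await resp.text()
        return keyword, resp.status, len(body)

async def main():
    connector = aiohttp.TCPConnector(limit=100)  # allow ~100 concurrent requests
    async with aiohttp.ClientSession(connector=connector) as session:
        results = await asyncio.gather(*(fetch(session, kw) for kw in KEYWORDS))
        for keyword, status, size in results:
            print(keyword, status, size)

asyncio.run(main())
```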
GoogleScraper is hugely complex. Because GoogleScraper supports many search engines and the HTML and Javascript of those search providers changes frequently, it is often the case that GoogleScraper ceases to function for some search engine. To spot this, you can run functional tests.
For example the test below runs a scraping session for Google and Bing and tests that the gathered data looks more or less okay.
python -m pytest Tests/functional_tests.py::GoogleScraperMinimalFunctionalTestCase
GoogleScraper parses Google search engine results (and those of many other search engines) easily and in a fast way. It allows you to extract all found links and their titles and descriptions programmatically, which enables you to process scraped data further.
There are unlimited usage scenarios:
- Quickly harvest masses of google dorks.
- Use it as a SEO tool.
- Discover trends.
- Compile lists of sites to feed your own database.
- Many more use cases...
- Quite easily extendable since the code is well documented
First of all you need to understand that GoogleScraper uses two completely different scraping approaches:
- Scraping with low level http libraries such as the urllib.request or requests modules. This simulates the http packets sent by real browsers.
- Scraping by controlling a real browser with the selenium framework.
Whereas the former approach was implemented first, the latter approach looks much more promising in comparison, because search engines have no easy way of detecting it.
GoogleScraper is implemented with the following techniques/software:
- Written in Python 3.7
- Uses multithreading/asynchronous IO.
- Supports parallel scraping with multiple IP addresses.
- Provides proxy support using socksipy and built-in browser proxies:
- Socks5
- Socks4
- HttpProxy
- Support for alternative search modes like news/image/video search.
Currently the following search engines are supported:
- Google
- Bing
- Yahoo
- Yandex
- Baidu
- Duckduckgo
Scraping is a critical and highly complex subject. Google and other search engine giants have a strong inclination to make the scraper's life as hard as possible. There are several ways for the search engine providers to detect that a robot is using their search engine:
- The User-Agent is not one of a browser.
- The search params are not identical to the ones that a browser used by a human would set.
- Javascript generates challenges dynamically on the client side. This might include heuristics that try to detect human behaviour. Example: Only humans move their mice and hover over the interesting search results.
- Robots have a strict requests pattern (very fast requests, without a random time between the sent packets).
- Dorks are heavily used
- No pictures/ads/css/javascript are loaded (as a normal browser does), which in turn won't trigger certain javascript events
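As a rough illustration of the first and fourth points, the sketch below sends requests with a browser-like User-Agent and random pauses between queries. This is not what GoogleScraper does internally, just a demonstration of the idea; the URL and keywords are placeholders.

```python
# Illustration: a browser-like User-Agent plus non-robotic timing between requests.
import random
import time
import requests

HEADERS = {
    # a User-Agent string that looks like a real browser
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/71.0.3578.98 Safari/537.36'),
    'Accept-Language': 'en-US,en;q=0.9',
}

for keyword in ['apple', 'banana', 'cherry']:
    resp = requests.get('https://www.bing.com/search',
                        params={'q': keyword}, headers=HEADERS)
    print(keyword, resp.status_code)
    # sleep a random, human-ish amount of time between requests
    time.sleep(random.uniform(2.0, 7.0))
```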
So the biggest hurdle to tackle is the javascript detection algorithms. I don't know what Google does in their javascript, but I will soon investigate it further and then decide whether it's not better to change strategies and switch to an approach that scrapes by simulating browsers in a browser-like environment that can execute javascript. The networking of each of these virtual browsers is proxified and manipulated such that it behaves like a real physical user agent. I am pretty sure that it must be possible to handle 20 such browser sessions in parallel without stressing resources too much. The real problem is, as always, the lack of good proxies...
As mentioned above, there are several drawbacks when scraping with the urllib.request or requests modules and doing the networking on my own:
Browsers are ENORMOUSLY complex software systems. Chrome has around 8 million lines of code and Firefox even 10 million. Huge companies invest a lot of money to push technology forward (HTML5, CSS3, new standards) and each browser has a unique behaviour. Therefore it's almost impossible to simulate such a browser manually with HTTP requests. This means Google has numerous ways to detect anomalies and inconsistencies in the browsing usage. The dynamic nature of Javascript alone makes it impossible to scrape undetected.
This cries for an alternative approach that automates a real browser with Python. It would be best to control the Chrome browser, since Google has the least incentive to restrict capabilities for their own native browser. Hence I need a way to automate Chrome with Python and to control several independent instances with different proxies set. Then the scraping output grows linearly with the number of proxies used...
Some interesting technologies/software to do so:
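One obvious candidate is Selenium itself, which this project already uses. Below is a minimal sketch (assuming Selenium 3 and ChromeDriver) of driving several independent Chrome instances, each with a different proxy configured via Chrome's --proxy-server switch; the proxy addresses are placeholders.

```python
# Sketch of the idea: one Chrome instance per proxy, driven by Selenium.
# Assumes Selenium 3.x, ChromeDriver, and example proxy addresses.
from selenium import webdriver

proxies = ['socks5://127.0.0.1:1080', 'socks5://127.0.0.1:1081']  # placeholders

drivers = []
for proxy in proxies:
    options = webdriver.ChromeOptions()
    options.add_argument('--proxy-server={}'.format(proxy))
    drivers.append(webdriver.Chrome(executable_path='Drivers/chromedriver',
                                    options=options))

for driver, proxy in zip(drivers, proxies):
    driver.get('https://www.google.com/search?q=ip+address')
    print(proxy, driver.title)

for driver in drivers:
    driver.quit()
```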
Probably the best way to use GoogleScraper is to use it from the command line and fire a command such as the following:
GoogleScraper --keyword-file /tmp/keywords --search-engine bing --num-pages-for-keyword 3 --scrape-method selenium
Here --scrape-method selenium marks the scraping mode as 'selenium'. This means GoogleScraper.py scrapes with real browsers. This is pretty powerful, since you can scrape for a long time and across a lot of sites (Google has a hard time blocking real browsers). The argument of the flag --keyword-file must be a file with keywords separated by newlines. So: one line for every google query. Easy, isn't it?
Furthermore, the option --num-pages-for-keyword means that GoogleScraper will fetch 3 consecutive pages for each keyword.
Example keyword-file:
keyword number one
how to become a good rapper
inurl:"index.php?sl=43"
filetype:.cfg
allintext:"You have a Mysql Error in your"
intitle:"admin config"
Best brothels in atlanta
After the scraping you'll automatically have a new sqlite3 database named google_scraper.db in the same directory. You can open and inspect the database with the command:
GoogleScraper --shell
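Alternatively, you can query the database directly with Python's sqlite3 module. The table and column names below ('link', 'title') are assumptions about GoogleScraper's schema; the first query lists the real tables so you can verify them before relying on this.

```python
# Inspect the scraper database directly with sqlite3 (alternative to --shell).
# Table/column names ('link', 'title') are assumptions; verify the schema first.
import sqlite3

conn = sqlite3.connect('google_scraper.db')
cur = conn.cursor()

# list all tables so you can check the actual schema
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(cur.fetchall())

# example query, assuming a 'link' table with 'title' and 'link' columns
cur.execute("SELECT title, link FROM link LIMIT 10")
for title, url in cur.fetchall():
    print(title, url)

conn.close()
```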
It shouldn't be a problem to scrape 10'000 keywords in 2 hours. If you are really crazy, set the maximal browsers in the config a little bit higher (in the top of the script file).
If you want, you can specify the flag --proxy-file. As argument you need to pass a file with proxies in it, with the following format:
protocol proxyhost:proxyport username:password
(...)
Example:
socks5 127.0.0.1:1080 blabla:12345
socks4 77.66.55.44:9999 elite:js@fkVA3(Va3)
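If you want to reuse such a proxy list in your own scripts, parsing the format is straightforward. The helper below is hypothetical and not part of GoogleScraper; it only shows how the fields split.

```python
# Hypothetical parser for the proxy file format shown above
# (protocol host:port [username:password]); not part of GoogleScraper.
def parse_proxy_file(path):
    proxies = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue  # skip blank lines
            protocol, hostport = parts[0], parts[1]
            host, port = hostport.split(':')
            username = password = None
            if len(parts) > 2:
                username, password = parts[2].split(':', 1)
            proxies.append({'protocol': protocol, 'host': host,
                            'port': int(port), 'username': username,
                            'password': password})
    return proxies

print(parse_proxy_file('proxies.txt'))
```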
In case you want to use GoogleScraper.py in http mode (which means that raw http headers are sent), use it as follows:
GoogleScraper -m http -p 1 -n 25 -q "white light"
If you feel like contacting me, do so and send me a mail. You can find my contact information on my blog.