oxylabs/scrape-google-python
In this tutorial, we showcase how to scrape public Google data with Python and Oxylabs SERP Scraper API (a part of Web Scraper API), which requires a subscription or a free trial.
- What is a Google SERP?
- Is it legal to scrape Google results?
- Scraping public Google data with Python and Oxylabs Scraper API
- Set up a payload and send a POST request
- Location query parameters
Upon any discussion of scraping Google search results, you’ll likely run into the “SERP” abbreviation. SERP stands for Search Engine Results Page; it’s the page you get after entering a query into the search bar. SERPs contain various features and elements, such as:
- Featured snippets
- Paid ads
- Video carousel
- People also ask
- Local pack
- Related searches
The legality of scraping Google search data is largely discussed in the scraping field. As a matter of fact, scraping publicly available data on the internet – including Google SERP data – is legal. However, it may vary from one situation to another, so it’s best to seek legal advice about your specific case.
Install required Python libraries

To follow this guide on scraping Google search results, you’ll need the following:
- Credentials for Oxylabs' SERP Scraper API – you can get a 7-day free trial by registering on the dashboard;
- Python;
- The Requests library.
First, sign up for Oxylabs' Google Search Results API and save your `username` and `password`.
Then, download and install Python 3.8 or above from the python.org website. Finally, install the Requests library using the following command:
```bash
$ python3 -m pip install requests
```
If you’re using Windows, use python instead of python3. The rest of the command remains the same:
```bash
d:\amazon>python -m pip install requests
```
Create a new file and enter the following code:
```python
import requests
from pprint import pprint

payload = {
    'source': 'google',
    'url': 'https://www.google.com/search?hl=en&q=newton'  # search for newton
}

response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

pprint(response.json())
```
Here’s what the result should look like:
```json
{
  "results": [
    {
      "content": "<!doctype html><html>...</html>",
      "created_at": "YYYY-DD-MM HH:MM:SS",
      "updated_at": "YYYY-DD-MM HH:MM:SS",
      "page": 1,
      "url": "https://www.google.com/search?hl=en&q=newton",
      "job_id": "1234567890123456789",
      "status_code": 200
    }
  ]
}
```
Notice how the `url` in the payload dictionary is a Google search results page. In this example, the keyword is `newton`.
As you can see, the query is executed and the resulting page HTML is returned in the `content` key of the response.
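As a minimal illustration, here's one way to pull the HTML out of that response structure. This sketch works on a sample dictionary shaped like the JSON above rather than a live API call; in a real run you'd pass `response.json()`:

```python
def extract_html(api_response):
    """Return the HTML content of the first result, or None if there are none."""
    results = api_response.get('results', [])
    return results[0].get('content') if results else None

# Sample response mirroring the structure shown above.
sample = {
    'results': [
        {
            'content': '<!doctype html><html>...</html>',
            'url': 'https://www.google.com/search?hl=en&q=newton',
            'status_code': 200,
        }
    ]
}

html = extract_html(sample)
print(html[:15])  # -> <!doctype html>
```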
Let's review the payload dictionary from the above example for scraping Google search data.
```python
payload = {
    'source': 'google',
    'url': 'https://www.google.com/search?hl=en&q=newton'
}
```
The dictionary keys are parameters used to inform Google Scraper API about required customization.
The first parameter is `source`, which is important because it sets the scraper we’re going to use. The default value is `google` – when you use it, you can set the `url` to any Google search page, and all the other parameters will be extracted from the URL.
Although in this guide we’ll be using the `google_search` parameter, there are many others: `google_ads`, `google_hotels`, `google_images`, `google_suggest`, and more (full list here).
Keep in mind that if you set the source as `google_search`, you cannot use the `url` parameter. Luckily, you can use several different parameters to acquire public Google SERP data without having to create multiple URLs (more on that in the next section).
We’ll build the payload by adding the parameters one by one. First, begin by setting the source to `google_search`.
```python
payload = {
    'source': 'google_search',
}
```
Now, let’s add `query` – a crucial parameter that determines what search results you’ll be retrieving. In our example, we’ll use `newton` as our search query. At this stage, the payload dictionary looks like this:
```python
payload = {
    'source': 'google_search',
    'query': 'newton',
}
```
That said, `google_search` and `query` are the two essential parameters for scraping public Google search data. If you want the API to return Google search results at this stage, you can send this payload as it is. Now, let’s move on to the next parameter.
You can work with the `domain` parameter if you want to use a localized domain – for example, `'domain': 'de'` will fetch results from google.de. If you want to see the results as served in Germany, use the `geo_location` parameter: `'geo_location': 'Germany'`. See the documentation for the `geo_location` parameter to learn more about the correct values.
Also, here’s what changing the `locale` parameter looks like:
```python
payload = {
    'source': 'google_search',
    'query': 'newton',
    'domain': 'de',
    'geo_location': 'Germany',
    'locale': 'en-us',
}
```
To learn more about the possible values of the `locale` parameter, check the documentation as well.
If you send the above payload, you’ll receive search results in American English from google.de, just like anyone physically located in Germany would.
By default, you’ll see the first ten results from the first page. If you want to customize this, you can use these parameters: `start_page`, `pages`, and `limit`.
The `start_page` parameter determines which page of search results to return first. The `pages` parameter specifies the number of pages to retrieve. Finally, the `limit` parameter sets the number of results on each page.
For example, the following set of parameters fetches results from pages 11 and 12 of the search engine results, with 20 results on each page:
```python
payload = {
    'start_page': 11,
    'pages': 2,
    'limit': 20,
    ...  # other parameters
}
```
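As a quick sanity check, a tiny helper (ours, not part of the API) can show which result pages a given `start_page`/`pages` combination will request:

```python
def requested_pages(start_page, pages):
    """Return the list of result-page numbers a payload will request."""
    return list(range(start_page, start_page + pages))

# start_page=11 with pages=2 covers pages 11 and 12, as described above.
print(requested_pages(11, 2))  # -> [11, 12]
```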
Apart from the search parameters we’ve covered so far, there are a few more you can use to fine-tune your results – see our documentation on collecting public Google Search data.
Now, let’s put together everything we’ve learned so far – here’s what the final script with the `shoes` keyword looks like:
```python
import requests
from pprint import pprint

payload = {
    'source': 'google_search',
    'query': 'shoes',
    'domain': 'de',
    'geo_location': 'Germany',
    'locale': 'en-us',
    'parse': True,
    'start_page': 1,
    'pages': 5,
    'limit': 10,
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

if response.status_code != 200:
    print("Error - ", response.json())
    exit(-1)

pprint(response.json())
```
One of the best Google Scraper API features is the ability to parse an HTML page into JSON. For that, you don't need BeautifulSoup or any other parsing library – just send the `parse` parameter as `True`.
Here is a sample payload:
```python
payload = {
    'source': 'google_search',
    'query': 'adidas',
    'parse': True,
}
```
When sent to the Google Scraper API, this payload will return the results in JSON. To see the detailed JSON data structure, see our documentation.
The key highlights:
- The results are in a dedicated `results` list, where each page gets its own entry.
- Each entry holds the page content in a dictionary key named `content`.
- The actual parsed results are in the `results` key within `content`.
Note that there’s a `job_id` in the results.
The easiest way to save the data is by using the Pandas library, since it can normalize JSON quite effectively.
```python
import pandas as pd

...

data = response.json()
df = pd.json_normalize(data['results'])
df.to_csv('export.csv', index=False)
```
Alternatively, you can take note of the `job_id` and send a GET request to the following URL, along with your credentials:
```
http://data.oxylabs.io/v1/queries/{job_id}/results/normalized?format=csv
```
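For example, a small sketch of building and sending that request – the helper name is ours, and the `job_id` placeholder must come from your earlier response:

```python
def results_url(job_id, fmt='csv'):
    """Build the normalized-results URL for a finished job."""
    return f"http://data.oxylabs.io/v1/queries/{job_id}/results/normalized?format={fmt}"

url = results_url('1234567890123456789')
print(url)

# To download with your API credentials:
# import requests
# response = requests.get(url, auth=('USERNAME', 'PASSWORD'))
# with open('export.csv', 'wb') as f:
#     f.write(response.content)
```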
When scraping Google, you can run into several challenges: network issues, invalid query parameters, or API quota limitations.
To handle these, you can use try-except blocks in your code. For example, if an error occurs when sending the API request, you can catch the exception and print an error message:
```python
try:
    response = requests.request(
        'POST',
        'https://realtime.oxylabs.io/v1/queries',
        auth=('USERNAME', 'PASSWORD'),
        json=payload,
    )
except requests.exceptions.RequestException as e:
    print("Error:", e)
```
If you send an invalid parameter, Google Scraper API will return a 400 response code.
To catch these errors, check the status code:
```python
if response.status_code != 200:
    print("Error - ", response.json())
```
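The two checks can be combined into a small retry helper for transient network failures – a sketch under our own naming, not an official API pattern:

```python
import time

def with_retries(send, attempts=3, delay=1.0):
    """Call `send` until it returns without raising, retrying on failure."""
    for attempt in range(1, attempts + 1):
        try:
            return send()
        except Exception as exc:
            if attempt == attempts:
                raise
            print(f"Attempt {attempt} failed ({exc}); retrying...")
            time.sleep(delay)

# Schematic usage with the request above:
# response = with_retries(lambda: requests.request(
#     'POST', 'https://realtime.oxylabs.io/v1/queries',
#     auth=('USERNAME', 'PASSWORD'), json=payload,
# ))
```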
Looking to scrape data from other Google sources? Check out these guides: Google Sheets for Basic Web Scraping, How to Scrape Google Shopping Results, Google Play Scraper, How To Scrape Google Jobs, Google News Scraper, How to Scrape Google Scholar, How to Scrape Google Flights with Python, Scrape Google Search Results, Scrape Google Trends.