apify/apify-sdk-python

 
 

The Apify SDK for Python is the official library to create Apify Actors in Python. It provides useful features like Actor lifecycle management, local storage emulation, and Actor event handling.

If you just need to access the Apify API from your Python applications, check out the Apify Client for Python instead.

Installation

The Apify SDK for Python is available on PyPI as the apify package. For a default installation using pip, run the following:

pip install apify

For users interested in integrating Apify with Scrapy, we provide a package extra called scrapy. To install Apify with the scrapy extra, use the following command:

pip install apify[scrapy]

Documentation

For usage instructions, check the documentation on Apify Docs.

Examples

Below are a few examples demonstrating how to use the Apify SDK with some web-scraping-related libraries.

Apify SDK with HTTPX and BeautifulSoup

This example illustrates how to integrate the Apify SDK with HTTPX and Beautiful Soup to scrape data from web pages.

```python
from bs4 import BeautifulSoup
from httpx import AsyncClient

from apify import Actor


async def main() -> None:
    async with Actor:
        # Retrieve the Actor input, and use default values if not provided.
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])

        # Open the default request queue for handling URLs to be processed.
        request_queue = await Actor.open_request_queue()

        # Enqueue the start URLs.
        for start_url in start_urls:
            url = start_url.get('url')
            await request_queue.add_request(url)

        # Process the URLs from the request queue.
        while request := await request_queue.fetch_next_request():
            Actor.log.info(f'Scraping {request.url} ...')

            # Fetch the HTTP response from the specified URL using HTTPX.
            async with AsyncClient() as client:
                response = await client.get(request.url)

            # Parse the HTML content using Beautiful Soup.
            soup = BeautifulSoup(response.content, 'html.parser')

            # Extract the desired data.
            data = {
                'url': request.url,
                'title': soup.title.string,
                'h1s': [h1.text for h1 in soup.find_all('h1')],
                'h2s': [h2.text for h2 in soup.find_all('h2')],
                'h3s': [h3.text for h3 in soup.find_all('h3')],
            }

            # Store the extracted data to the default dataset.
            await Actor.push_data(data)
```

Apify SDK with PlaywrightCrawler from Crawlee

This example demonstrates how to use the Apify SDK alongside PlaywrightCrawler from Crawlee to perform web scraping.

```python
from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext

from apify import Actor


async def main() -> None:
    async with Actor:
        # Retrieve the Actor input, and use default values if not provided.
        actor_input = await Actor.get_input() or {}
        start_urls = [
            url.get('url')
            for url in actor_input.get('start_urls', [{'url': 'https://apify.com'}])
        ]

        # Exit if no start URLs are provided.
        if not start_urls:
            Actor.log.info('No start URLs specified in Actor input, exiting...')
            await Actor.exit()

        # Create a crawler.
        crawler = PlaywrightCrawler(
            # Limit the crawl to max requests. Remove or increase it for crawling all links.
            max_requests_per_crawl=50,
            headless=True,
        )

        # Define a request handler, which will be called for every request.
        @crawler.router.default_handler
        async def request_handler(context: PlaywrightCrawlingContext) -> None:
            url = context.request.url
            Actor.log.info(f'Scraping {url} ...')

            # Extract the desired data.
            data = {
                'url': context.request.url,
                'title': await context.page.title(),
                'h1s': [await h1.text_content() for h1 in await context.page.locator('h1').all()],
                'h2s': [await h2.text_content() for h2 in await context.page.locator('h2').all()],
                'h3s': [await h3.text_content() for h3 in await context.page.locator('h3').all()],
            }

            # Store the extracted data to the default dataset.
            await context.push_data(data)

            # Enqueue additional links found on the current page.
            await context.enqueue_links()

        # Run the crawler with the starting URLs.
        await crawler.run(start_urls)
```
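The decorator-based routing used above, where @crawler.router.default_handler registers the function that every request is dispatched to, can be sketched with a minimal stand-in. The Router class below is hypothetical and for illustration only; Crawlee's real router also supports per-label handlers and async dispatch:

```python
class Router:
    """Hypothetical stand-in for a crawler router: registers a default
    handler via a decorator and dispatches every request to it."""

    def __init__(self) -> None:
        self._default = None

    def default_handler(self, func):
        # Used as a decorator: remember the handler, return it unchanged.
        self._default = func
        return func

    def dispatch(self, request: str):
        # Route the request to the registered default handler.
        if self._default is None:
            raise RuntimeError('No default handler registered')
        return self._default(request)


router = Router()


@router.default_handler
def handle(request: str) -> str:
    return f'handled {request}'


print(router.dispatch('https://apify.com'))  # handled https://apify.com
```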

What are Actors?

Actors are serverless cloud programs that can do almost anything a human can do in a web browser. They can do anything from small tasks such as filling in forms or unsubscribing from online services, all the way up to scraping and processing vast numbers of web pages.

They can be run either locally, or on the Apify platform, where you can run them at scale, monitor them, schedule them, or publish and monetize them.

If you're new to Apify, learn what Apify is in the Apify platform documentation.

Creating Actors

To create and run Actors through Apify Console, see the Console documentation.

To create and run Python Actors locally, check the documentation for how to create and run Python Actors locally.

Guides

To see how you can use the Apify SDK with other popular libraries used for web scraping, check out our guides for using Requests and HTTPX, Beautiful Soup, Playwright, Selenium, or Scrapy.

Usage concepts

To learn more about the features of the Apify SDK and how to use them, check out the Usage Concepts section in the sidebar, particularly the guides for the Actor lifecycle, working with storages, handling Actor events, or how to use proxies.
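As an illustration of the lifecycle concept, the `async with Actor:` block used in the examples behaves like an async context manager: entering it initializes the Actor, and leaving it exits cleanly (or fails, if an exception escaped). A minimal, hypothetical sketch of that pattern in plain Python, with no Apify dependency:

```python
import asyncio


class LifecycleManaged:
    """Hypothetical illustration of the `async with Actor:` pattern:
    initialization on entry, clean exit (or failure) on leave."""

    def __init__(self) -> None:
        self.events: list[str] = []

    async def __aenter__(self):
        # Corresponds to initialization (e.g. preparing storages, event handling).
        self.events.append('init')
        return self

    async def __aexit__(self, exc_type, exc, tb) -> bool:
        # Corresponds to a clean exit, or a failure if an exception escaped.
        self.events.append('fail' if exc_type else 'exit')
        return False  # do not swallow exceptions


async def main() -> list[str]:
    actor = LifecycleManaged()
    async with actor:
        actor.events.append('work')
    return actor.events


print(asyncio.run(main()))  # ['init', 'work', 'exit']
```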
