21.10. urllib.robotparser — Parser for robots.txt
Source code: Lib/urllib/robotparser.py
This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the Web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.
class urllib.robotparser.RobotFileParser(url='')

    This class provides methods to read, parse and answer questions about the robots.txt file at url.

    set_url(url)
        Sets the URL referring to a robots.txt file.

    read()
        Reads the robots.txt URL and feeds it to the parser.

    parse(lines)
        Parses the lines argument.

    can_fetch(useragent, url)
        Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.

    mtime()
        Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.

    modified()
        Sets the time the robots.txt file was last fetched to the current time.
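The parse() method also allows the rules to be supplied directly, without fetching anything over the network. The following is a minimal sketch, assuming a hypothetical set of robots.txt lines and example.com URLs chosen only for illustration:

>>> import urllib.robotparser
>>> # hypothetical robots.txt rules passed in as a list of lines
>>> lines = ["User-agent: *", "Disallow: /cgi-bin/", ""]
>>> rp = urllib.robotparser.RobotFileParser()
>>> rp.parse(lines)
>>> rp.can_fetch("*", "http://example.com/cgi-bin/search")
False
>>> rp.can_fetch("*", "http://example.com/index.html")
True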
The following example demonstrates basic use of the RobotFileParser class.
>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True
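For a long-running spider, mtime() and modified() can support a simple refresh policy. The following is a minimal sketch, assuming a hypothetical one-hour refresh interval (the module does not mandate one) and the same example site as above:

import time
import urllib.robotparser

REFRESH_SECONDS = 3600  # hypothetical refresh interval chosen for this sketch

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.musi-cal.com/robots.txt")
rp.read()
rp.modified()  # record when the rules were fetched

def allowed(url, useragent="*"):
    # Re-fetch and re-timestamp the rules once the cached copy is older
    # than the chosen interval, then answer from the parsed rules.
    if time.time() - rp.mtime() > REFRESH_SECONDS:
        rp.read()
        rp.modified()
    return rp.can_fetch(useragent, url)

print(allowed("http://www.musi-cal.com/"))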
