This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the Web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.
class urllib.robotparser.RobotFileParser(url='')
This class provides methods to read, parse and answer questions about the robots.txt file at url.
set_url(url)
Sets the URL referring to a robots.txt file.
read()
Reads the robots.txt URL and feeds it to the parser.
parse(lines)
Parses the lines argument.
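Because parse() takes an iterable of lines, rules obtained by other means (for example, a robots.txt that was already fetched and cached locally) can be fed to the parser directly. A minimal sketch; the host and rules shown are made up for illustration:

>>> import urllib.robotparser
>>> rules = [
...     "User-agent: *",        # hypothetical rules for illustration
...     "Disallow: /private/",
... ]
>>> rp = urllib.robotparser.RobotFileParser()
>>> rp.parse(rules)
>>> rp.can_fetch("*", "http://example.com/private/page.html")
False
>>> rp.can_fetch("*", "http://example.com/page.html")
True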
can_fetch(useragent, url)
Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.
mtime()
Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.
modified()
Sets the time the robots.txt file was last fetched to the current time.
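For a long-running spider, mtime() and modified() can be combined to refresh the parsed rules once they reach a chosen age. A minimal sketch, assuming a one-hour refresh interval; the interval, the allowed() helper and the URL are illustrative, not part of the module:

import time
import urllib.robotparser

REFRESH_SECONDS = 3600  # assumed refresh interval for this sketch

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.musi-cal.com/robots.txt")
rp.read()
rp.modified()  # record when the rules were fetched

def allowed(useragent, url):
    # Re-read robots.txt when the cached copy is older than the interval.
    if time.time() - rp.mtime() > REFRESH_SECONDS:
        rp.read()
        rp.modified()
    return rp.can_fetch(useragent, url)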
The following example demonstrates basic use of the RobotFileParser class.
>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True