urllib.parse --- Parse URLs into components

Source code: Lib/urllib/parse.py


This module defines a standard interface to break Uniform Resource Locator (URL) strings up in components (addressing scheme, network location, path etc.), to combine the components back into a URL string, and to convert a "relative URL" to an absolute URL given a "base URL."

The module has been designed to match the internet RFC on Relative Uniform Resource Locators. It supports the following URL schemes: file, ftp, gopher, hdl, http, https, imap, itms-services, mailto, mms, news, nntp, prospero, rsync, rtsp, rtsps, rtspu, sftp, shttp, sip, sips, snews, svn, svn+ssh, telnet, wais, ws, wss.

CPython implementation detail: The inclusion of the itms-services URL scheme can prevent an app from passing Apple's App Store review process for the macOS and iOS App Stores. Handling for the itms-services scheme is always removed on iOS; on macOS, it may be removed if CPython has been built with the --with-app-store-compliance option.

The urllib.parse module defines functions that fall into two broad categories: URL parsing and URL quoting. These are covered in detail in the following sections.

This module's functions use the deprecated term netloc (or net_loc), which was introduced in RFC 1808. However, this term has been obsoleted by RFC 3986, which introduced the term authority as its replacement. The use of netloc is continued for backward compatibility.

URL Parsing

The URL parsing functions focus on splitting a URL string into its components, or on combining URL components into a URL string.

urllib.parse.urlparse(urlstring, scheme='', allow_fragments=True)

Parse a URL into six components, returning a 6-item named tuple. This corresponds to the general structure of a URL: scheme://netloc/path;parameters?query#fragment. Each tuple item is a string, possibly empty. The components are not broken up into smaller parts (for example, the network location is a single string), and % escapes are not expanded. The delimiters as shown above are not part of the result, except for a leading slash in the path component, which is retained if present. For example:

>>> from urllib.parse import urlparse
>>> urlparse("scheme://netloc/path;parameters?query#fragment")
ParseResult(scheme='scheme', netloc='netloc', path='/path;parameters', params='',
            query='query', fragment='fragment')
>>> o = urlparse("http://docs.python.org:80/3/library/urllib.parse.html?"
...              "highlight=params#url-parsing")
>>> o
ParseResult(scheme='http', netloc='docs.python.org:80',
            path='/3/library/urllib.parse.html', params='',
            query='highlight=params', fragment='url-parsing')
>>> o.scheme
'http'
>>> o.netloc
'docs.python.org:80'
>>> o.hostname
'docs.python.org'
>>> o.port
80
>>> o._replace(fragment="").geturl()
'http://docs.python.org:80/3/library/urllib.parse.html?highlight=params'

Following the syntax specifications in RFC 1808, urlparse recognizes a netloc only if it is properly introduced by '//'. Otherwise the input is presumed to be a relative URL and thus to start with a path component.

>>> from urllib.parse import urlparse
>>> urlparse('//www.cwi.nl:80/%7Eguido/Python.html')
ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
            params='', query='', fragment='')
>>> urlparse('www.cwi.nl/%7Eguido/Python.html')
ParseResult(scheme='', netloc='', path='www.cwi.nl/%7Eguido/Python.html',
            params='', query='', fragment='')
>>> urlparse('help/Python.html')
ParseResult(scheme='', netloc='', path='help/Python.html', params='',
            query='', fragment='')

The scheme argument gives the default addressing scheme, to be used only if the URL does not specify one. It should be the same type (text or bytes) as urlstring, except that the default value '' is always allowed, and is automatically converted to b'' if appropriate.

If the allow_fragments argument is false, fragment identifiers are not recognized. Instead, they are parsed as part of the path, parameters or query component, and fragment is set to the empty string in the return value.

The return value is a named tuple, which means that its items can be accessed by index or as named attributes, which are:

Attribute   Index   Value                                 Value if not present
---------   -----   -----------------------------------   --------------------
scheme      0       URL scheme specifier                  scheme parameter
netloc      1       Network location part                 empty string
path        2       Hierarchical path                     empty string
params      3       Parameters for last path element      empty string
query       4       Query component                       empty string
fragment    5       Fragment identifier                   empty string
username            User name                             None
password            Password                              None
hostname            Host name (lower case)                None
port                Port number as integer, if present    None

Reading the port attribute will raise a ValueError if an invalid port is specified in the URL. See section Structured Parse Results for more information on the result object.

Unmatched square brackets in the netloc attribute will raise a ValueError.

Characters in the netloc attribute that decompose under NFKC normalization (as used by the IDNA encoding) into any of /, ?, #, @, or : will raise a ValueError. If the URL is decomposed before parsing, no error will be raised.

As is the case with all named tuples, the subclass has a few additional methods and attributes that are particularly useful. One such method is _replace(). The _replace() method will return a new ParseResult object replacing specified fields with new values.

>>> from urllib.parse import urlparse
>>> u = urlparse('//www.cwi.nl:80/%7Eguido/Python.html')
>>> u
ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
            params='', query='', fragment='')
>>> u._replace(scheme='http')
ParseResult(scheme='http', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
            params='', query='', fragment='')

Warning

urlparse() does not perform validation. See URL parsing security for details.

Changed in version 3.2: Added IPv6 URL parsing capabilities.

Changed in version 3.3: The fragment is now parsed for all URL schemes (unless allow_fragments is false), in accordance with RFC 3986. Previously, an allowlist of schemes that support fragments existed.

Changed in version 3.6: Out-of-range port numbers now raise ValueError, instead of returning None.

Changed in version 3.8: Characters that affect netloc parsing under NFKC normalization will now raise ValueError.

urllib.parse.parse_qs(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&')

Parse a query string given as a string argument (data of type application/x-www-form-urlencoded). Data are returned as a dictionary. The dictionary keys are the unique query variable names and the values are lists of values for each name.

The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included.

The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception.

The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method.

The optional argument max_num_fields is the maximum number of fields to read. If set, a ValueError is raised if more than max_num_fields fields are read.

The optional argument separator is the symbol to use for separating the query arguments. It defaults to &.

Use the urllib.parse.urlencode() function (with the doseq parameter set to True) to convert such dictionaries into query strings.
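For illustration, a minimal interactive sketch of parse_qs() (the query string is made up; with keep_blank_values left at its default, the blank value is dropped):

>>> from urllib.parse import parse_qs
>>> parse_qs('key=value&key=value2&empty=')
{'key': ['value', 'value2']}
>>> parse_qs('key=value&key=value2&empty=', keep_blank_values=True)
{'key': ['value', 'value2'], 'empty': ['']}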

Changed in version 3.2: Added encoding and errors parameters.

Changed in version 3.8: Added max_num_fields parameter.

Changed in version 3.10: Added separator parameter with the default value of &. Python versions earlier than Python 3.10 allowed using both ; and & as query parameter separator. This has been changed to allow only a single separator key, with & as the default separator.

urllib.parse.parse_qsl(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&')

Parse a query string given as a string argument (data of type application/x-www-form-urlencoded). Data are returned as a list of name, value pairs.

The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included.

The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception.

The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method.

The optional argument max_num_fields is the maximum number of fields to read. If set, a ValueError is raised if more than max_num_fields fields are read.

The optional argument separator is the symbol to use for separating the query arguments. It defaults to &.

Use the urllib.parse.urlencode() function to convert such lists of pairs into query strings.
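For illustration, a minimal sketch of parse_qsl() on a made-up query string; unlike parse_qs(), duplicate keys are kept as separate pairs in order:

>>> from urllib.parse import parse_qsl
>>> parse_qsl('key=value&key=value2')
[('key', 'value'), ('key', 'value2')]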

Changed in version 3.2: Added encoding and errors parameters.

Changed in version 3.8: Added max_num_fields parameter.

Changed in version 3.10: Added separator parameter with the default value of &. Python versions earlier than Python 3.10 allowed using both ; and & as query parameter separator. This has been changed to allow only a single separator key, with & as the default separator.

urllib.parse.urlunparse(parts)

Construct a URL from a tuple as returned by urlparse(). The parts argument can be any six-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent).
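As a small sketch of the round trip through urlparse() and urlunparse() (the URL is illustrative and has no unnecessary delimiters, so it is reproduced exactly):

>>> from urllib.parse import urlparse, urlunparse
>>> parts = urlparse('http://docs.python.org:80/3/library/urllib.parse.html?highlight=params')
>>> urlunparse(parts)
'http://docs.python.org:80/3/library/urllib.parse.html?highlight=params'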

urllib.parse.urlsplit(urlstring, scheme='', allow_fragments=True)

This is similar to urlparse(), but does not split the params from the URL. This should generally be used instead of urlparse() if the more recent URL syntax allowing parameters to be applied to each segment of the path portion of the URL (see RFC 2396) is wanted. A separate function is needed to separate the path segments and parameters. This function returns a 5-item named tuple:

(addressing scheme, network location, path, query, fragment identifier).
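For example (a brief sketch with a made-up URL; note that, unlike with urlparse(), the ;params portion stays attached to the path):

>>> from urllib.parse import urlsplit
>>> urlsplit('http://www.example.com/path;params?query#frag')
SplitResult(scheme='http', netloc='www.example.com', path='/path;params', query='query', fragment='frag')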

The return value is a named tuple, its items can be accessed by index or as named attributes:

Attribute   Index   Value                                 Value if not present
---------   -----   -----------------------------------   --------------------
scheme      0       URL scheme specifier                  scheme parameter
netloc      1       Network location part                 empty string
path        2       Hierarchical path                     empty string
query       3       Query component                       empty string
fragment    4       Fragment identifier                   empty string
username            User name                             None
password            Password                              None
hostname            Host name (lower case)                None
port                Port number as integer, if present    None

Reading the port attribute will raise a ValueError if an invalid port is specified in the URL. See section Structured Parse Results for more information on the result object.

Unmatched square brackets in the netloc attribute will raise a ValueError.

Characters in the netloc attribute that decompose under NFKC normalization (as used by the IDNA encoding) into any of /, ?, #, @, or : will raise a ValueError. If the URL is decomposed before parsing, no error will be raised.

Following some of the WHATWG spec that updates RFC 3986, leading C0 control and space characters are stripped from the URL. \n, \r and tab \t characters are removed from the URL at any position.

Warning

urlsplit() does not perform validation. See URL parsing security for details.

Changed in version 3.6: Out-of-range port numbers now raise ValueError, instead of returning None.

Changed in version 3.8: Characters that affect netloc parsing under NFKC normalization will now raise ValueError.

Changed in version 3.10: ASCII newline and tab characters are stripped from the URL.

Changed in version 3.12: Leading WHATWG C0 control and space characters are stripped from the URL.

urllib.parse.urlunsplit(parts)

Combine the elements of a tuple as returned by urlsplit() into a complete URL as a string. The parts argument can be any five-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent).
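A minimal sketch of that equivalence (the URL is made up; the trailing ? with an empty query does not survive the round trip):

>>> from urllib.parse import urlsplit, urlunsplit
>>> urlunsplit(urlsplit('http://www.example.com/path?'))
'http://www.example.com/path'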

urllib.parse.urljoin(base, url, allow_fragments=True)

Construct a full ("absolute") URL by combining a "base URL" (base) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. For example:

>>> from urllib.parse import urljoin
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html', 'FAQ.html')
'http://www.cwi.nl/%7Eguido/FAQ.html'

The allow_fragments argument has the same meaning and default as for urlparse().

Note

If url is an absolute URL (that is, it starts with // or scheme://), the url's hostname and/or scheme will be present in the result. For example:

>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html',
...         '//www.python.org/%7Eguido')
'http://www.python.org/%7Eguido'

If you do not want that behavior, preprocess the url with urlsplit() and urlunsplit(), removing possible scheme and netloc parts.

Warning

Because an absolute URL may be passed as the url parameter, it is generally not secure to use urljoin with an attacker-controlled url. For example, in urljoin("https://website.com/users/", username), if username can contain an absolute URL, the result of urljoin will be the absolute URL.
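A brief sketch of that failure mode (the username values and the evil.example host are invented for illustration):

>>> from urllib.parse import urljoin
>>> urljoin("https://website.com/users/", "alice")
'https://website.com/users/alice'
>>> urljoin("https://website.com/users/", "https://evil.example/phish")
'https://evil.example/phish'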

Changed in version 3.5: Behavior updated to match the semantics defined in RFC 3986.

urllib.parse.urldefrag(url)

If url contains a fragment identifier, return a modified version of url with no fragment identifier, and the fragment identifier as a separate string. If there is no fragment identifier in url, return url unmodified and an empty string.

The return value is a named tuple, its items can be accessed by index or as named attributes:

Attribute   Index   Value                  Value if not present
---------   -----   --------------------   --------------------
url         0       URL with no fragment   empty string
fragment    1       Fragment identifier    empty string

See section Structured Parse Results for more information on the result object.
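For illustration, a minimal sketch with a made-up URL:

>>> from urllib.parse import urldefrag
>>> urldefrag('http://www.example.com/index.html#section')
DefragResult(url='http://www.example.com/index.html', fragment='section')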

Changed in version 3.2: Result is a structured object rather than a simple 2-tuple.

urllib.parse.unwrap(url)

Extract the URL from a wrapped URL (that is, a string formatted as <URL:scheme://host/path>, <scheme://host/path>, URL:scheme://host/path or scheme://host/path). If url is not a wrapped URL, it is returned without changes.
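For example (a minimal sketch):

>>> from urllib.parse import unwrap
>>> unwrap('<URL:https://www.python.org>')
'https://www.python.org'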

URL parsing security

The urlsplit() and urlparse() APIs do not perform validation of inputs. They may not raise errors on inputs that other applications consider invalid. They may also succeed on some inputs that might not be considered URLs elsewhere. Their purpose is for practical functionality rather than purity.

Instead of raising an exception on unusual input, they may instead return some component parts as empty strings. Or components may contain more than perhaps they should.

We recommend that users of these APIs where the values may be used anywhere with security implications code defensively. Do some verification within your code before trusting a returned component part. Does that scheme make sense? Is that a sensible path? Is there anything strange about that hostname? etc.
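As one possible sketch of such defensive checks (the set of allowed schemes and the specific checks are assumptions to adapt to your own application, not a complete validator):

from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"http", "https"}  # assumption: only web URLs are acceptable here

def looks_sane(url):
    """Return True only if the parsed URL passes a few basic sanity checks."""
    parts = urlsplit(url)
    if parts.scheme not in ALLOWED_SCHEMES:
        return False
    if not parts.hostname:      # missing or empty host is suspicious
        return False
    try:
        parts.port              # reading port raises ValueError for invalid ports
    except ValueError:
        return False
    return True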

What constitutes a URL is not universally well defined. Different applications have different needs and desired constraints. For instance, the living WHATWG spec describes what user-facing web clients such as a web browser require, while RFC 3986 is more general. These functions incorporate some aspects of both, but cannot be claimed compliant with either. The APIs and existing user code with expectations on specific behaviors predate both standards, leading us to be very cautious about making API behavior changes.

Parsing ASCII Encoded Bytes

The URL parsing functions were originally designed to operate on character strings only. In practice, it is useful to be able to manipulate properly quoted and encoded URLs as sequences of ASCII bytes. Accordingly, the URL parsing functions in this module all operate on bytes and bytearray objects in addition to str objects.

If str data is passed in, the result will also contain only str data. If bytes or bytearray data is passed in, the result will contain only bytes data.

Attempting to mix str data with bytes or bytearray in a single function call will result in a TypeError being raised, while attempting to pass in non-ASCII byte values will trigger UnicodeDecodeError.

To support easier conversion of result objects between str and bytes, all return values from URL parsing functions provide either an encode() method (when the result contains str data) or a decode() method (when the result contains bytes data). The signatures of these methods match those of the corresponding str and bytes methods (except that the default encoding is 'ascii' rather than 'utf-8'). Each produces a value of a corresponding type that contains either bytes data (for encode() methods) or str data (for decode() methods).
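A short sketch of the bytes-in, bytes-out behaviour and of decode() (the URL is illustrative):

>>> from urllib.parse import urlsplit
>>> r = urlsplit(b'http://www.example.com/path')
>>> r.scheme
b'http'
>>> r.decode().scheme
'http'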

Applications that need to operate on potentially improperly quoted URLs that may contain non-ASCII data will need to do their own decoding from bytes to characters before invoking the URL parsing methods.

The behaviour described in this section applies only to the URL parsing functions. The URL quoting functions use their own rules when producing or consuming byte sequences as detailed in the documentation of the individual URL quoting functions.

Changed in version 3.2: URL parsing functions now accept ASCII encoded byte sequences.

Structured Parse Results

The result objects from the urlparse(), urlsplit() and urldefrag() functions are subclasses of the tuple type. These subclasses add the attributes listed in the documentation for those functions, the encoding and decoding support described in the previous section, as well as an additional method:

urllib.parse.SplitResult.geturl()

Return the re-combined version of the original URL as a string. This may differ from the original URL in that the scheme may be normalized to lower case and empty components may be dropped. Specifically, empty parameters, queries, and fragment identifiers will be removed.

For urldefrag() results, only empty fragment identifiers will be removed. For urlsplit() and urlparse() results, all noted changes will be made to the URL returned by this method.

The result of this method remains unchanged if passed back through the original parsing function:

>>> from urllib.parse import urlsplit
>>> url = 'HTTP://www.Python.org/doc/#'
>>> r1 = urlsplit(url)
>>> r1.geturl()
'http://www.Python.org/doc/'
>>> r2 = urlsplit(r1.geturl())
>>> r2.geturl()
'http://www.Python.org/doc/'

The following classes provide the implementations of the structured parse results when operating on str objects:

class urllib.parse.DefragResult(url, fragment)

Concrete class for urldefrag() results containing str data. The encode() method returns a DefragResultBytes instance.

Added in version 3.2.

class urllib.parse.ParseResult(scheme, netloc, path, params, query, fragment)

Concrete class for urlparse() results containing str data. The encode() method returns a ParseResultBytes instance.

class urllib.parse.SplitResult(scheme, netloc, path, query, fragment)

Concrete class for urlsplit() results containing str data. The encode() method returns a SplitResultBytes instance.

The following classes provide the implementations of the parse results when operating on bytes or bytearray objects:

class urllib.parse.DefragResultBytes(url, fragment)

Concrete class for urldefrag() results containing bytes data. The decode() method returns a DefragResult instance.

Added in version 3.2.

class urllib.parse.ParseResultBytes(scheme, netloc, path, params, query, fragment)

Concrete class for urlparse() results containing bytes data. The decode() method returns a ParseResult instance.

Added in version 3.2.

class urllib.parse.SplitResultBytes(scheme, netloc, path, query, fragment)

Concrete class for urlsplit() results containing bytes data. The decode() method returns a SplitResult instance.

Added in version 3.2.

URL Quoting

The URL quoting functions focus on taking program data and making it safe for use as URL components by quoting special characters and appropriately encoding non-ASCII text. They also support reversing these operations to recreate the original data from the contents of a URL component if that task isn't already covered by the URL parsing functions above.

urllib.parse.quote(string, safe='/', encoding=None, errors=None)

Replace special characters in string using the %xx escape. Letters, digits, and the characters '_.-~' are never quoted. By default, this function is intended for quoting the path section of a URL. The optional safe parameter specifies additional ASCII characters that should not be quoted --- its default value is '/'.

string may be either a str or a bytes object.

Changed in version 3.7: Moved from RFC 2396 to RFC 3986 for quoting URL strings. "~" is now included in the set of unreserved characters.

The optional encoding and errors parameters specify how to deal with non-ASCII characters, as accepted by the str.encode() method. encoding defaults to 'utf-8'. errors defaults to 'strict', meaning unsupported characters raise a UnicodeEncodeError. encoding and errors must not be supplied if string is a bytes, or a TypeError is raised.

Note that quote(string, safe, encoding, errors) is equivalent to quote_from_bytes(string.encode(encoding, errors), safe).

Example: quote('/El Niño/') yields '/El%20Ni%C3%B1o/'.

urllib.parse.quote_plus(string, safe='', encoding=None, errors=None)

Like quote(), but also replace spaces with plus signs, as required for quoting HTML form values when building up a query string to go into a URL. Plus signs in the original string are escaped unless they are included in safe. It also does not have safe default to '/'.

Example: quote_plus('/El Niño/') yields '%2FEl+Ni%C3%B1o%2F'.

urllib.parse.quote_from_bytes(bytes, safe='/')

Like quote(), but accepts a bytes object rather than a str, and does not perform string-to-bytes encoding.

Example: quote_from_bytes(b'a&\xef') yields 'a%26%EF'.

urllib.parse.unquote(string, encoding='utf-8', errors='replace')

Replace %xx escapes with their single-character equivalent. The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method.

string may be either a str or a bytes object.

encoding defaults to 'utf-8'. errors defaults to 'replace', meaning invalid sequences are replaced by a placeholder character.

Example: unquote('/El%20Ni%C3%B1o/') yields '/El Niño/'.

Changed in version 3.9: string parameter supports bytes and str objects (previously only str).

urllib.parse.unquote_plus(string, encoding='utf-8', errors='replace')

Like unquote(), but also replace plus signs with spaces, as required for unquoting HTML form values.

string must be a str.

Example: unquote_plus('/El+Ni%C3%B1o/') yields '/El Niño/'.

urllib.parse.unquote_to_bytes(string)

Replace %xx escapes with their single-octet equivalent, and return a bytes object.

string may be either a str or a bytes object.

If it is a str, unescaped non-ASCII characters in string are encoded into UTF-8 bytes.

Example: unquote_to_bytes('a%26%EF') yields b'a&\xef'.

urllib.parse.urlencode(query, doseq=False, safe='', encoding=None, errors=None, quote_via=quote_plus)

Convert a mapping object or a sequence of two-element tuples, which may contain str or bytes objects, to a percent-encoded ASCII text string. If the resultant string is to be used as data for a POST operation with the urlopen() function, then it should be encoded to bytes, otherwise it would result in a TypeError.

The resulting string is a series of key=value pairs separated by '&' characters, where both key and value are quoted using the quote_via function. By default, quote_plus() is used to quote the values, which means spaces are quoted as a '+' character and '/' characters are encoded as %2F, which follows the standard for GET requests (application/x-www-form-urlencoded). An alternate function that can be passed as quote_via is quote(), which will encode spaces as %20 and not encode '/' characters. For maximum control of what is quoted, use quote and specify a value for safe.

When a sequence of two-element tuples is used as the query argument, the first element of each tuple is a key and the second is a value. The value element in itself can be a sequence and in that case, if the optional parameter doseq evaluates to True, individual key=value pairs separated by '&' are generated for each element of the value sequence for the key. The order of parameters in the encoded string will match the order of parameter tuples in the sequence.

The safe, encoding, and errors parameters are passed down to quote_via (the encoding and errors parameters are only passed when a query element is a str).

To reverse this encoding process, parse_qs() and parse_qsl() are provided in this module to parse query strings into Python data structures.

Refer to urllib examples to find out how the urllib.parse.urlencode() method can be used for generating the query string of a URL or data for a POST request.
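For illustration, a short sketch of urlencode() with the default quote_plus() quoting and with quote() plus an explicit safe value (the keys and values are made up):

>>> from urllib.parse import urlencode, quote
>>> urlencode({'name': 'El Niño', 'tags': ['a', 'b']}, doseq=True)
'name=El+Ni%C3%B1o&tags=a&tags=b'
>>> urlencode({'path': '/tmp/a b'}, quote_via=quote, safe='/')
'path=/tmp/a%20b'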

Changed in version 3.2: query supports bytes and string objects.

Changed in version 3.5: Added quote_via parameter.

See also

WHATWG - URL Living standard

Working Group for the URL Standard that defines URLs, domains, IP addresses, the application/x-www-form-urlencoded format, and their API.

RFC 3986 - Uniform Resource Identifiers

This is the current standard (STD66). Any changes to the urllib.parse module should conform to this. Certain deviations could be observed, which are mostly for backward compatibility purposes and for certain de-facto parsing requirements as commonly observed in major browsers.

RFC 2732 - Format for Literal IPv6 Addresses in URL's.

This specifies the parsing requirements of IPv6 URLs.

RFC 2396 - Uniform Resource Identifiers (URI): Generic Syntax

Document describing the generic syntactic requirements for both Uniform Resource Names (URNs) and Uniform Resource Locators (URLs).

RFC 2368 - The mailto URL scheme.

Parsing requirements for mailto URL schemes.

RFC 1808 - Relative Uniform Resource Locators

This Request For Comments includes the rules for joining an absolute and a relative URL, including a fair number of "Abnormal Examples" which govern the treatment of border cases.

RFC 1738 - Uniform Resource Locators (URL)

This specifies the formal syntax and semantics of absolute URLs.