Commit 5c87206

Deploy preview for PR 1148 🛫

1 parent 97b57b0 · commit 5c87206

574 files changed: +1345 −1079 lines changed

pr-preview/pr-1148/_sources/library/csv.rst.txt

Lines changed: 5 additions & 4 deletions

@@ -113,7 +113,7 @@ The :mod:`csv` module defines the following functions:
       spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])


-.. function:: register_dialect(name[, dialect[, **fmtparams]])
+.. function:: register_dialect(name, /, dialect='excel', **fmtparams)

    Associate *dialect* with *name*. *name* must be a string. The
    dialect can be specified either by passing a sub-class of :class:`Dialect`, or
@@ -139,7 +139,8 @@ The :mod:`csv` module defines the following functions:
    Return the names of all registered dialects.


-.. function:: field_size_limit([new_limit])
+.. function:: field_size_limit()
+              field_size_limit(new_limit)

    Returns the current maximum field size allowed by the parser. If *new_limit* is
    given, this becomes the new limit.
@@ -527,7 +528,7 @@ out surrounded by parens. This may cause some problems for other programs which
    read CSV files (assuming they support complex numbers at all).


-.. method:: csvwriter.writerow(row)
+.. method:: csvwriter.writerow(row, /)

    Write the *row* parameter to the writer's file object, formatted according
    to the current :class:`Dialect`. Return the return value of the call to the
@@ -536,7 +537,7 @@ read CSV files (assuming they support complex numbers at all).
    .. versionchanged:: 3.5
       Added support of arbitrary iterables.

-.. method:: csvwriter.writerows(rows)
+.. method:: csvwriter.writerows(rows, /)

    Write all elements in *rows* (an iterable of *row* objects as described
    above) to the writer's file object, formatted according to the current
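For context on how the signatures documented above are used, here is a minimal Python sketch (the dialect name 'pipes', the file name, and the new limit value are illustrative, not part of the PR):

    import csv

    # Format parameters for a registered dialect are passed as keyword arguments.
    csv.register_dialect('pipes', delimiter='|', quoting=csv.QUOTE_MINIMAL)

    # With no argument, field_size_limit() returns the current limit; with an
    # argument it installs a new limit and returns the previous one.
    old_limit = csv.field_size_limit()
    csv.field_size_limit(1_000_000)

    with open('spam.csv', 'w', newline='') as f:
        writer = csv.writer(f, dialect='pipes')
        # *row* and *rows* are positional-only in the updated signatures.
        writer.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
        writer.writerows([['a', 1], ['b', 2]])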

pr-preview/pr-1148/_sources/library/subprocess.rst.txt

Lines changed: 2 additions & 2 deletions

@@ -649,7 +649,7 @@ functions.

    If specified, *env* must provide any variables required for the program to
    execute. On Windows, in order to run a `side-by-side assembly`_ the
-   specified *env* **must** include a valid :envvar:`SystemRoot`.
+   specified *env* **must** include a valid ``%SystemRoot%``.

    .. _side-by-side assembly: https://en.wikipedia.org/wiki/Side-by-Side_Assembly

@@ -1473,7 +1473,7 @@ handling consistency are valid for these functions.

    Return ``(exitcode, output)`` of executing *cmd* in a shell.

-   Execute the string *cmd* in a shell with :meth:`Popen.check_output` and
+   Execute the string *cmd* in a shell with :func:`check_output` and
    return a 2-tuple ``(exitcode, output)``.
    *encoding* and *errors* are used to decode output;
    see the notes on :ref:`frequently-used-arguments` for more details.
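A brief sketch of the behaviour these two passages describe ('myprog.exe' and 'MYFLAG' are hypothetical placeholders, and the *env* part assumes a Windows host):

    import os
    import subprocess

    # When passing a trimmed-down environment on Windows, %SystemRoot% must be
    # kept so that side-by-side assemblies can still be located.
    env = {'SystemRoot': os.environ['SystemRoot'], 'MYFLAG': '1'}
    subprocess.run(['myprog.exe'], env=env)

    # getstatusoutput() runs *cmd* through the shell and returns the documented
    # 2-tuple (exitcode, output).
    exitcode, output = subprocess.getstatusoutput('echo hello')
    print(exitcode, output)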

pr-preview/pr-1148/_sources/reference/lexical_analysis.rst.txt

Lines changed: 71 additions & 5 deletions

@@ -10,12 +10,76 @@ Lexical analysis
 A Python program is read by a *parser*. Input to the parser is a stream of
 :term:`tokens <token>`, generated by the *lexical analyzer* (also known as
 the *tokenizer*).
-This chapter describes how the lexical analyzer breaks a file into tokens.
+This chapter describes how the lexical analyzer produces these tokens.

-Python reads program text as Unicode code points; the encoding of a source file
-can be given by an encoding declaration and defaults to UTF-8, see :pep:`3120`
-for details. If the source file cannot be decoded, a :exc:`SyntaxError` is
-raised.
+The lexical analyzer determines the program text's :ref:`encoding <encodings>`
+(UTF-8 by default), and decodes the text into
+:ref:`source characters <lexical-source-character>`.
+If the text cannot be decoded, a :exc:`SyntaxError` is raised.
+
+Next, the lexical analyzer uses the source characters to generate a stream of tokens.
+The type of a generated token generally depends on the next source character to
+be processed. Similarly, other special behavior of the analyzer depends on
+the first source character that hasn't yet been processed.
+The following table gives a quick summary of these source characters,
+with links to sections that contain more information.
+
+.. list-table::
+   :header-rows: 1
+
+   * - Character
+     - Next token (or other relevant documentation)
+
+   * - * space
+       * tab
+       * formfeed
+     - * :ref:`Whitespace <whitespace>`
+
+   * - * CR, LF
+     - * :ref:`New line <line-structure>`
+       * :ref:`Indentation <indentation>`
+
+   * - * backslash (``\``)
+     - * :ref:`Explicit line joining <explicit-joining>`
+       * (Also significant in :ref:`string escape sequences <escape-sequences>`)
+
+   * - * hash (``#``)
+     - * :ref:`Comment <comments>`
+
+   * - * quote (``'``, ``"``)
+     - * :ref:`String literal <strings>`
+
+   * - * ASCII letter (``a``-``z``, ``A``-``Z``)
+       * non-ASCII character
+     - * :ref:`Name <identifiers>`
+       * Prefixed :ref:`string or bytes literal <strings>`
+
+   * - * underscore (``_``)
+     - * :ref:`Name <identifiers>`
+       * (Can also be part of :ref:`numeric literals <numbers>`)
+
+   * - * number (``0``-``9``)
+     - * :ref:`Numeric literal <numbers>`
+
+   * - * dot (``.``)
+     - * :ref:`Numeric literal <numbers>`
+       * :ref:`Operator <operators>`
+
+   * - * question mark (``?``)
+       * dollar (``$``)
+       *
+         .. (the following uses zero-width space characters to render
+         .. a literal backquote)
+
+         backquote (``​`​``)
+       * control character
+     - * Error (outside string literals and comments)
+
+   * - * other printing character
+     - * :ref:`Operator or delimiter <operators>`
+
+   * - * end of file
+     - * :ref:`End marker <endmarker-token>`


 .. _line-structure:
@@ -120,6 +184,8 @@ If an encoding is declared, the encoding name must be recognized by Python
 encoding is used for all lexical analysis, including string literals, comments
 and identifiers.

+.. _lexical-source-character:
+
 All lexical analysis, including string literals, comments
 and identifiers, works on Unicode text decoded using the source encoding.
 Any Unicode code point, except the NUL control character, can appear in
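The rewritten introduction explains that tokens are generated from decoded source characters. That stream can be observed with the standard tokenize module, whose pure-Python tokenizer closely mirrors the analyzer described here (a small illustrative sketch, not part of the PR):

    import io
    import tokenize

    source = "x = 1  # a comment\n"
    # generate_tokens() yields a TokenInfo tuple for each token the analyzer
    # produces, ending with an ENDMARKER token at end of file.
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))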
