[3.8] closes bpo-37966: Fully implement the UAX #15 quick-check algorithm. (GH-15558) #15671


Merged

Conversation

@miss-islington (Contributor) commented Sep 4, 2019 (edited by bedevere-bot)

The purpose of the `unicodedata.is_normalized` function is to answer
the question `str == unicodedata.normalize(form, str)` more
efficiently than writing just that, by using the "quick check"
optimization described in the Unicode standard in UAX #15.
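Concretely, the two expressions always agree; a quick illustration with the standard library (Python 3.8+, where `is_normalized` was added):

```python
import unicodedata

s = "A\u0301"  # 'A' followed by U+0301 COMBINING ACUTE ACCENT

# is_normalized(form, s) answers the same question as normalize-and-compare:
for form in ("NFC", "NFD", "NFKC", "NFKD"):
    assert unicodedata.is_normalized(form, s) == (s == unicodedata.normalize(form, s))

# This particular string is already decomposed, but not composed:
assert unicodedata.is_normalized("NFD", s)       # already in NFD
assert not unicodedata.is_normalized("NFC", s)   # NFC form is "\u00c1" ('Á')
```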

However, it turns out the code doesn't implement the full algorithm
from the standard, and as a result we often miss the optimization and
end up having to compute the whole normalized string after all.

Implement the standard's algorithm. This greatly speeds up
unicodedata.is_normalized in many cases where our partial variant
of quick-check had been returning MAYBE and the standard algorithm
returns NO.
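The standard's per-string algorithm can be sketched in Python. The per-character quick-check values (e.g. the NFD_QC property) live in the Unicode database, which the stdlib does not expose, so `qc_property` below is a caller-supplied lookup (hypothetical); the combining classes come from the real `unicodedata.combining` API. This is a sketch of the technique, not CPython's C code:

```python
import unicodedata

YES, NO, MAYBE = "YES", "NO", "MAYBE"

def quick_check_string(s, qc_property):
    """UAX #15 quick-check skeleton.

    qc_property(ch) must return YES/NO/MAYBE for the target
    normalization form (values that come from the Unicode database).
    """
    last_ccc = 0
    result = YES
    for ch in s:
        ccc = unicodedata.combining(ch)  # canonical combining class
        if ccc != 0 and last_ccc > ccc:
            return NO            # marks out of canonical order: not normalized
        qc = qc_property(ch)
        if qc == NO:
            return NO            # full algorithm: a single NO decides it
        if qc == MAYBE:
            result = MAYBE       # remember, but keep scanning for a NO
        last_ccc = ccc
    return result
```

The speedup described above comes from the NO branch: for a character like U+F900, which has a singleton canonical decomposition and hence NFD_QC = No, the full algorithm returns a definite NO where a partial variant would give up with MAYBE and fall back to normalize-and-compare.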

At a quick test on my desktop, the existing code takes about 4.4 ms/MB
(so 4.4 ns per byte) when the partial quick-check returns MAYBE and it
has to do the slow normalize-and-compare:

  $ build.base/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' \
      -- 'unicodedata.is_normalized("NFD", s)'
  50 loops, best of 5: 4.39 msec per loop

With this patch, it gets the answer instantly (58 ns) on the same 1 MB
string:

  $ build.dev/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' \
      -- 'unicodedata.is_normalized("NFD", s)'
  5000000 loops, best of 5: 58.2 nsec per loop

This restores a small optimization that the original version of this
code had for the `unicodedata.normalize` use case.

With this, that case is actually faster than in master!

  $ build.base/python -m timeit -s 'import unicodedata; s = "\u0338"*500000' \
      -- 'unicodedata.normalize("NFD", s)'
  500 loops, best of 5: 561 usec per loop

  $ build.dev/python -m timeit -s 'import unicodedata; s = "\u0338"*500000' \
      -- 'unicodedata.normalize("NFD", s)'
  500 loops, best of 5: 512 usec per loop
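That small optimization amounts to a fast path in `normalize`: when the quick check answers YES, hand back the input string unchanged instead of rebuilding it. A sketch using the public `is_normalized` as a stand-in for the internal quick check (not the actual C implementation):

```python
import unicodedata

def normalize_with_fast_path(form, s):
    # Fast path: the quick check already proves s is normalized,
    # so return the original string without rebuilding it.
    if unicodedata.is_normalized(form, s):
        return s
    # Slow path: compute the normalized copy.
    return unicodedata.normalize(form, s)
```

Returning the input object itself also avoids allocating a copy, which is where the extra headroom over master in the benchmark above comes from.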
(cherry picked from commit 2f09413)

Co-authored-by: Greg Price <gnprice@gmail.com>

https://bugs.python.org/issue37966

@miss-islington (Contributor, Author) commented:

@gnprice and @benjaminp: Status check is done, and it's a success ✅.

@miss-islington merged commit 4dd1c9d into python:3.8 on Sep 4, 2019
@miss-islington deleted the backport-2f09413-3.8 branch on September 4, 2019 03:03

Reviewers

@benjaminp approved these changes

5 participants

@miss-islington, @benjaminp, @the-knights-who-say-ni, @bedevere-bot, @gnprice
