[3.8] closes bpo-37966: Fully implement the UAX #15 quick-check algorithm. (GH-15558) #15671
Merged
miss-islington merged 1 commit into python:3.8 on Sep 4, 2019
Conversation
benjaminp approved these changes on Sep 4, 2019
Contributor, Author
miss-islington commented on Sep 4, 2019
@gnprice and @benjaminp: Status check is done, and it's a success ✅
The purpose of the `unicodedata.is_normalized` function is to answer
the question `str == unicodedata.normalize(form, str)` more
efficiently than writing just that, by using the "quick check"
optimization described in the Unicode standard in UAX #15.
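To make the equivalence concrete, here is a small demonstration (requires Python 3.8+, where `unicodedata.is_normalized` was added) that the function agrees with the normalize-and-compare expression:

```python
import unicodedata

# A precomposed character: already in NFC form.
s = "\u00e9"            # é as a single code point
# The same text decomposed: "e" followed by a combining acute accent.
t = "e\u0301"

# is_normalized is specified to give the same answer as
# normalize-and-compare, just (potentially) much faster.
assert unicodedata.is_normalized("NFC", s) == (s == unicodedata.normalize("NFC", s))
assert unicodedata.is_normalized("NFC", t) == (t == unicodedata.normalize("NFC", t))

assert unicodedata.is_normalized("NFC", s)      # precomposed: yes
assert not unicodedata.is_normalized("NFC", t)  # decomposed: no
assert unicodedata.normalize("NFC", t) == s     # they normalize to the same string
```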
However, it turns out the code doesn't implement the full algorithm
from the standard, and as a result we often miss the optimization and
end up having to compute the whole normalized string after all.
Implement the standard's algorithm. This greatly speeds up
`unicodedata.is_normalized` in many cases where our partial variant
of quick-check had been returning MAYBE and the standard algorithm
returns NO.
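The shape of the full algorithm from UAX #15 can be sketched as follows. This is an illustrative Python sketch, not the C implementation: the real code consults the Quick_Check property from the Unicode Character Database, and the `qc` dict here is a toy stand-in for that lookup.

```python
import unicodedata

def quick_check(s, qc):
    """Sketch of the UAX #15 quick-check loop.

    `qc` maps a character to "YES", "NO", or "MAYBE" (defaulting to
    "YES"), standing in for the per-form Quick_Check property.
    """
    result = "YES"
    last_ccc = 0
    for ch in s:
        ccc = unicodedata.combining(ch)  # canonical combining class
        if ccc != 0 and last_ccc > ccc:
            return "NO"                  # combining marks out of canonical order
        answer = qc.get(ch, "YES")
        if answer == "NO":
            return "NO"                  # definitely not normalized
        if answer == "MAYBE":
            result = "MAYBE"             # can't tell without normalizing
        last_ccc = ccc
    return result

# U+F900 has NFD_Quick_Check=No, so the full algorithm answers NO
# immediately instead of falling back to normalize-and-compare.
print(quick_check("\uf900" * 5, {"\uf900": "NO"}))  # prints NO
```

The key point of the fix is the "NO" branch: a partial implementation that only distinguished YES from not-YES would report MAYBE here and pay for a full normalization, whereas the standard algorithm returns NO on the first character.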
At a quick test on my desktop, the existing code takes about 4.4 ms/MB
(so 4.4 ns per byte) when the partial quick-check returns MAYBE and it
has to do the slow normalize-and-compare:
    $ build.base/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' \
        -- 'unicodedata.is_normalized("NFD", s)'
50 loops, best of 5: 4.39 msec per loop
With this patch, it gets the answer instantly (58 ns) on the same 1 MB
string:
    $ build.dev/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' \
        -- 'unicodedata.is_normalized("NFD", s)'
5000000 loops, best of 5: 58.2 nsec per loop
This restores a small optimization that the original version of this
code had for the `unicodedata.normalize` use case.
With this, that case is actually faster than in master!
    $ build.base/python -m timeit -s 'import unicodedata; s = "\u0338"*500000' \
        -- 'unicodedata.normalize("NFD", s)'
500 loops, best of 5: 561 usec per loop
    $ build.dev/python -m timeit -s 'import unicodedata; s = "\u0338"*500000' \
        -- 'unicodedata.normalize("NFD", s)'
500 loops, best of 5: 512 usec per loop
(cherry picked from commit 2f09413)
Co-authored-by: Greg Price <gnprice@gmail.com>
https://bugs.python.org/issue37966