gh-124951: Optimize base64 encode & decode for an easy 2-3x speedup [no SIMD] #143262


Merged
gpshead merged 13 commits into python:main from gpshead:opt-base64-cpu
Jan 2, 2026

Conversation

@gpshead (Member) commented Dec 29, 2025 (edited)

Optimize base64 encoding/decoding by eliminating loop-carried dependencies. Key changes:

  • Add base64_encode_trio() and base64_decode_quad() helper functions that process complete groups independently (see the sketch after this list)
  • Add base64_encode_fast() and base64_decode_fast() wrappers
  • Update b2a_base64 and a2b_base64 to use the fast path for complete groups
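To make the per-group structure concrete, here is a minimal sketch of the two helpers described above. It is illustrative only, not the code in Modules/binascii.c: the table contents, the error convention (returning -1 to mean "defer to the slow path"), and the dec_table parameter are assumptions for the sketch.

```c
/* Illustrative sketch only -- the real helpers live in Modules/binascii.c
   and differ in naming, tables, and error handling. */

static const unsigned char enc_table[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode one complete 3-byte group into 4 base64 characters.  No state is
   carried from one group to the next, so consecutive groups can be
   overlapped by the CPU's pipelines -- the point of the optimization. */
static inline void
base64_encode_trio(const unsigned char *in, unsigned char *out)
{
    unsigned int v = (unsigned int)((in[0] << 16) | (in[1] << 8) | in[2]);
    out[0] = enc_table[(v >> 18) & 0x3f];
    out[1] = enc_table[(v >> 12) & 0x3f];
    out[2] = enc_table[(v >> 6) & 0x3f];
    out[3] = enc_table[v & 0x3f];
}

/* Decode one complete group of 4 base64 characters into 3 bytes.
   dec_table maps valid characters to 0..63 and everything else to a value
   with a bit in 0xc0 set, so one OR-and-mask test flags any problem. */
static inline int
base64_decode_quad(const unsigned char *in, unsigned char *out,
                   const unsigned char *dec_table)
{
    unsigned int v0 = dec_table[in[0]], v1 = dec_table[in[1]];
    unsigned int v2 = dec_table[in[2]], v3 = dec_table[in[3]];
    if ((v0 | v1 | v2 | v3) & 0xc0) {
        return -1;                       /* defer to the slow path */
    }
    unsigned int v = (v0 << 18) | (v1 << 12) | (v2 << 6) | v3;
    out[0] = (unsigned char)(v >> 16);
    out[1] = (unsigned char)(v >> 8);
    out[2] = (unsigned char)v;
    return 0;
}
```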

The binasciibench benchmark I used to measure base64 encoding/decoding throughput is included in the commit history, but I pulled it out of the PR in favor of adding it to pyperformance.

Performance gains (encode/decode speedup vs main, PGO builds):

             64 bytes    64K        1M
  Zen2:      1.2x/1.8x   1.7x/2.8x  1.5x/2.8x
  Zen4:      1.2x/1.7x   1.6x/3.0x  1.5x/3.0x  [old data, likely faster]
  M4:        1.3x/1.9x   2.3x/2.8x  2.4x/2.9x  [old data, likely faster]
  RPi5-32:   1.2x/1.2x   2.4x/2.4x  2.0x/2.1x
  RPi4-64:   1.3x/2.0x   2.4x/5.0x  1.8x/5.0x

Additional SIMD implementations (NEON, AVX-512 VBMI) can achieve +50% (M4) to +1500% (!! Zen4) further gains and are planned for follow-on work if deemed simple to maintain.

Widely used third-party libraries contain industry-canonical SIMD-accelerated variants, such as simdutf (unfortunately C++-based), so the decision of whether, how, and when to link against and use those is best kept separate.

This PR's gains come purely from making better use of modern CPU functional-unit pipelining, so they make sense regardless.

Based on my exploratory work done in main...gpshead:cpython:claude/vectorize-base64-c-S7Hku

Add Tools/binasciibench/binasciibench.py benchmark for measuring base64 encoding/decoding throughput.

Optimize base64 encoding/decoding by eliminating loop-carried dependencies. Key changes:
- Add base64_encode_trio() and base64_decode_quad() helper functions that process complete groups independently
- Add base64_encode_fast() and base64_decode_fast() wrappers
- Update b2a_base64 and a2b_base64 to use fast path for complete groups

Performance gains (encode/decode speedup vs main, PGO builds):

             64 bytes    64K        1M
  Zen2:      1.1x/1.6x   1.6x/2.4x  1.4x/2.4x
  Zen4:      1.2x/1.7x   1.6x/3.0x  1.5x/3.0x
  M4:        1.3x/1.9x   2.3x/2.8x  2.4x/2.9x
  RPi5-32:   1.4x/1.4x   2.4x/2.0x  2.0x/1.9x

Additional SIMD implementations (NEON, AVX-512 VBMI) can achieve +50% to +1500% further gains and are planned for follow-on work.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
gpshead changed the title from "Optimize base64 encode and decode for an easy 2-3x performance win" to "gh-124951: Optimize base64 encode and decode for an easy 2-3x performance win [no SIMD required]" on Dec 29, 2025
gpshead added the performance (Performance or resource usage) label on Dec 29, 2025
gpshead changed the title from "gh-124951: Optimize base64 encode and decode for an easy 2-3x performance win [no SIMD required]" to "gh-124951: Optimize base64 encode and decode for an easy 2-3x speedup [no SIMD required]" on Dec 29, 2025
gpshead changed the title from "gh-124951: Optimize base64 encode and decode for an easy 2-3x speedup [no SIMD required]" to "gh-124951: Optimize base64 encode & decode for an easy 2-3x speedup [no SIMD]" on Dec 29, 2025
gpshead and others added 3 commits December 29, 2025 00:30
MSVC doesn't support forward declarations of arrays without explicit size. Move the table definition before the inline functions that use it, eliminating the need for a forward declaration.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
gpshead marked this pull request as ready for review December 29, 2025 01:08
gpshead self-assigned this Dec 29, 2025
gpshead and others added 3 commits December 29, 2025 01:40
Add Py_ALIGNED(64) to both lookup tables to ensure each fits within a single L1 cache line, reducing potential cache misses during encoding/decoding loops.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
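For reference, a hedged sketch of what that alignment change looks like. Py_ALIGNED is CPython's portable alignment macro from pyport.h; the table name and exact placement here are assumptions, not a copy of the actual declaration.

```c
/* Sketch: align the encode table to a 64-byte (cache-line) boundary.
   Py_ALIGNED expands to __attribute__((aligned(...))) or
   __declspec(align(...)) depending on the compiler. */
static Py_ALIGNED(64) const unsigned char table_b2a_base64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
```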
Replace hardcoded '=' characters with the BASE64_PAD macro for consistency with the rest of the codebase.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
@serhiy-storchaka (Member) left a comment:

Looks pretty simple with large benefit.

BTW, I'm going to add support for ignorechars in the decoder, so it could support multiline input without ignoring all other errors. The decoder will return to the fast path for each line.

gpshead and others added 4 commits January 2, 2026 04:28
Address review feedback from serhiy-storchaka: the fast path was doing two checks per group - an explicit PAD comparison and the invalid char check in base64_decode_quad().

Change PAD's table entry from 0 to 64 so the existing (v0|v1|v2|v3)&0xc0 check catches it, eliminating 4 comparisons per group.

The slow path is unaffected since it checks for the PAD character before the table lookup.

Decode is ~16% faster at 64K (1.62 GB/s → 1.88 GB/s).

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
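A small sketch of why a table value of 64 works for '='. The identifier name is illustrative, not the one in binascii.c, and the 0xff value for invalid bytes is an assumption.

```c
/* Valid base64 characters decode to 0..63, so their top two bits are clear.
   Giving '=' the value 64 (0x40) and invalid bytes a value such as 0xff
   means both trip the same mask, so the fast path needs no separate
   per-character '=' comparisons. */
static inline int
quad_is_clean(unsigned int v0, unsigned int v1,
              unsigned int v2, unsigned int v3)
{
    /* Nonzero only when all four values are in 0..63. */
    return ((v0 | v1 | v2 | v3) & 0xc0) == 0;
}
```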
Suggested by serhiy-storchaka: replace index math (in + i*3, out + i*4) with pointer increments. Encode is ~7% faster at 64K (2.11 → 2.25 GB/s).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
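Roughly the shape of that change, as a sketch rather than the actual diff. It reuses the base64_encode_trio shape from the earlier sketch; the wrapper function names and size_t group counts are assumptions.

```c
/* Before (sketch): addresses recomputed from the index every iteration. */
static void
encode_groups_indexed(const unsigned char *in, unsigned char *out,
                      size_t ngroups)
{
    for (size_t i = 0; i < ngroups; i++) {
        base64_encode_trio(in + i * 3, out + i * 4);
    }
}

/* After (sketch): bump the pointers instead, avoiding the per-iteration
   multiplications in the address computation. */
static void
encode_groups_ptr(const unsigned char *in, unsigned char *out,
                  size_t ngroups)
{
    for (size_t i = 0; i < ngroups; i++) {
        base64_encode_trio(in, out);
        in += 3;
        out += 4;
    }
}
```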
gpshead merged commit 61fc72a into python:main on Jan 2, 2026
46 checks passed

Reviewers

@serhiy-storchaka left review comments

@picnixz left review comments

@AA-Turner: awaiting requested review (AA-Turner is a code owner)

Assignees

@gpshead

Labels

performance (Performance or resource usage)

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

3 participants

@gpshead, @serhiy-storchaka, @picnixz
