...as discussed in faster-cpython/ideas#462.
Basically, we can simplify the quickening process by quickening code objects as they are created. This works by ditching our per-code-object warmup counter, and instead using our adaptive backoff counters for more-granular "warming".
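To make the idea concrete, here is a minimal toy sketch of quickening at creation time. It is not CPython's actual code or data layout; all names (`toy_code_object`, `toy_instr`, `toy_code_new`, etc.) are hypothetical and only illustrate seeding per-instruction adaptive counters when the code object is built, instead of gating on a separate per-code-object warmup counter.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy model of an adaptive instruction: an opcode plus an inline
   counter. Purely illustrative, not CPython's real structures. */
typedef struct {
    uint8_t  opcode;
    uint16_t counter;   /* adaptive backoff counter */
} toy_instr;

typedef struct {
    toy_instr *code;
    size_t     len;
} toy_code_object;

enum { TOY_ADAPTIVE = 1, TOY_OTHER = 0 };

/* Before: a separate per-code-object warmup counter had to reach a
   threshold before the whole code object was quickened.
   After: the adaptive counters are written as the code object is
   created, so there is no separate warmup phase at all. */
static toy_code_object *
toy_code_new(const uint8_t *opcodes, size_t len, uint16_t initial_counter)
{
    toy_code_object *co = malloc(sizeof(*co));
    co->code = calloc(len, sizeof(toy_instr));
    co->len = len;
    for (size_t i = 0; i < len; i++) {
        co->code[i].opcode = opcodes[i];
        /* "Quicken" at creation: seed every instruction's counter now. */
        co->code[i].counter = initial_counter;
    }
    return co;
}

int main(void)
{
    uint8_t ops[] = { TOY_ADAPTIVE, TOY_OTHER, TOY_ADAPTIVE };
    toy_code_object *co = toy_code_new(ops, 3, 1);
    for (size_t i = 0; i < co->len; i++) {
        printf("instr %zu: opcode=%u counter=%u\n",
               i, (unsigned)co->code[i].opcode, (unsigned)co->code[i].counter);
    }
    free(co->code);
    free(co);
    return 0;
}
```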
I have a branch that implements this change, resulting in a consistent ~1% performance improvement. I imagine that this comes from several places:
- A lower specialization threshold. I tried a wide range of initial adaptive counter values (ranging from 0 to 4095), and a value of 1 worked best. With this value, adaptive instructions specialize the second time they are run (see the sketch after this list).
- A simpler interpreter loop. `RESUME_QUICK` and `JUMP_BACKWARD_QUICK` are no longer needed.
- Superinstructions everywhere. All code objects have superinstructions now.
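The sketch below models why an initial counter value of 1 means specialization on the second execution. Each run decrements the counter; when it has already reached zero, the instruction attempts to specialize. This is a toy illustration under that assumption, not the interpreter's actual dispatch code, and the names (`toy_adaptive_instr`, `toy_execute`) are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy adaptive backoff counter. With counter = 1, the first run
   decrements it to 0 and the second run triggers specialization. */
typedef struct {
    uint16_t counter;
    int      specialized;
} toy_adaptive_instr;

static void
toy_execute(toy_adaptive_instr *instr, int run)
{
    if (instr->specialized) {
        printf("run %d: executing specialized form\n", run);
        return;
    }
    if (instr->counter == 0) {
        instr->specialized = 1;   /* specialization attempt */
        printf("run %d: specializing now\n", run);
    }
    else {
        instr->counter--;
        printf("run %d: still adaptive (counter -> %u)\n",
               run, (unsigned)instr->counter);
    }
}

int main(void)
{
    toy_adaptive_instr instr = { .counter = 1, .specialized = 0 };
    for (int run = 1; run <= 4; run++) {
        toy_execute(&instr, run);
    }
    return 0;
}
```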
The next step will be to simplify the unmarshal / code creation process as part of this same work.