September 10th, 2025

Performance Improvements in .NET 10

Stephen Toub - MSFT
Partner Software Engineer

My kids love “Frozen”. They can sing every word, re-enact every scene, and provide detailed notes on the proper sparkle of Elsa’s ice dress. I’ve seen the movie more times than I can count, to the point where, if you’ve seen me do any live coding, you’ve probably seen my subconscious incorporate an Arendelle reference or two. After so many viewings, I began paying closer attention to the details, like how at the very beginning of the film the ice harvesters are singing a song that subtly foreshadows the story’s central conflicts, the characters’ journeys, and even the key to resolving the climax. I’m slightly ashamed to admit I didn’t comprehend this connection until viewing number ten or so, at which point I also realized I had no idea if this ice harvesting was actually “a thing” or if it was just a clever vehicle for Disney to spin a yarn. Turns out, as I subsequently researched, it’s quite real.

In the 19th century, before refrigeration, ice was an incredibly valuable commodity. Winters in the northern United States turned ponds and lakes into seasonal gold mines. The most successful operations ran with precision: workers cleared snow from the surface so the ice would grow thicker and stronger, and they scored the surface into perfect rectangles using horse-drawn plows, turning the lake into a frozen checkerboard. Once the grid was cut, teams with long saws worked to free uniform blocks weighing several hundred pounds each. These blocks were floated along channels of open water toward the shore, at which point men with poles levered the blocks up ramps and hauled them into storage. Basically, what the movie shows.

The storage itself was an art. Massive wooden ice houses, sometimes holding tens of thousands of tons, were lined with insulation, typically straw. Done well, this insulation could keep the ice solid for months, even through summer heat. Done poorly, you would open the doors to slush. And for those moving ice over long distances, typically by ship, every degree, every crack in the insulation, every extra day in transit meant more melting and more loss.

Enter Frederic Tudor, the “Ice King” of Boston. He was obsessed with systemic efficiency. Where competitors saw unavoidable loss, Tudor saw a solvable problem. After experimenting with different insulators, he leaned on cheap sawdust, a lumber mill byproduct that outperformed straw, packing it densely around the ice to cut melt losses significantly. For harvesting efficiency, his operations adopted Nathaniel Jarvis Wyeth’s grid-scoring system, which produced uniform blocks that could be packed tightly, minimizing air gaps that would otherwise increase exposure in a ship’s hold. And to shorten the critical time between shore and ship, Tudor built out port infrastructure and depots near docks, allowing ships to load and unload much faster. Each change, from tools to ice house design to logistics, amplified the last, turning a risky local harvest into a reliable global trade. With Tudor’s enhancements, he had solid ice arriving in places like Havana, Rio de Janeiro, and even Calcutta (a voyage of four months in the 1830s). His performance gains allowed the product to survive journeys that were previously unthinkable.

What made Tudor’s ice last halfway around the world wasn’t one big idea. It was a plethora of small improvements, each multiplying the effect of the last. In software development, the same principle holds: big leaps forward in performance rarely come from a single sweeping change, rather from hundreds or thousands of targeted optimizations that compound into something transformative. .NET 10’s performance story isn’t about one Disney-esque magical idea; it’s about carefully shaving off nanoseconds here and tens of bytes there, streamlining operations that are executed trillions of times.

In the rest of this post, just as we did in Performance Improvements in .NET 9, .NET 8, .NET 7, .NET 6, .NET 5, .NET Core 3.0, .NET Core 2.1, and .NET Core 2.0, we’ll dig into hundreds of the small but meaningful and compounding performance improvements since .NET 9 that make up .NET 10’s story (if you instead stay on LTS releases and thus are upgrading from .NET 8 instead of from .NET 9, you’ll see even more improvements based on the aggregation from all the improvements in .NET 9 as well). So, without further ado, go grab a cup of your favorite hot beverage (or, given my intro, maybe something a bit more frosty), sit back, relax, and “Let It Go”!

Or, hmm, maybe, let’s push performance “Into the Unknown”?

Let .NET 10 performance “Show Yourself”?

“Do You Want To Build a Snowman Fast Service?”

I’ll see myself out.

Benchmarking Setup

As in previous posts, this tour is chock full of micro-benchmarks intended to showcase various performance improvements. Most of these benchmarks are implemented using BenchmarkDotNet 0.15.2, with a simple setup for each.

To follow along, make sure you have .NET 9 and .NET 10 installed, as most of the benchmarks compare the same test running on each. Then, create a new C# project in a new benchmarks directory:

dotnet new console -o benchmarks
cd benchmarks

That will produce two files in the benchmarks directory: benchmarks.csproj, which is the project file with information about how the application should be compiled, and Program.cs, which contains the code for the application. Finally, replace everything in benchmarks.csproj with this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFrameworks>net10.0;net9.0</TargetFrameworks>
    <LangVersion>Preview</LangVersion>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <ServerGarbageCollection>true</ServerGarbageCollection>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="BenchmarkDotNet" Version="0.15.2" />
  </ItemGroup>

</Project>

With that, we’re good to go. Unless otherwise noted, I’ve tried to make each benchmark standalone; just copy/paste its whole contents into the Program.cs file, overwriting everything that’s there, and then run the benchmarks. Each test includes at its top a comment for the dotnet command to use to run the benchmark. It’s typically something like this:

dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

which will run the benchmark in release on both .NET 9 and .NET 10 and show the compared results. The other common variation, used when the benchmark should only be run on .NET 10 (typically because it’s comparing two approaches rather than comparing one thing on two versions), is the following:

dotnet run -c Release -f net10.0 --filter "*"

Throughout the post, I’ve shown many benchmarks and the results I received from running them. Unless otherwise stated (e.g. because I’m demonstrating an OS-specific improvement), the results shown are from running them on Linux (Ubuntu 24.04.1) on an x64 processor.

BenchmarkDotNet v0.15.2, Linux Ubuntu 24.04.1 LTS (Noble Numbat)
11th Gen Intel Core i9-11950H 2.60GHz, 1 CPU, 16 logical and 8 physical cores
.NET SDK 10.0.100-rc.1.25451.107
  [Host]     : .NET 9.0.9 (9.0.925.41916), X64 RyuJIT AVX-512F+CD+BW+DQ+VL+VBMI

As always, a quick disclaimer: these are micro-benchmarks, timing operations so short you’d miss them by blinking (but when such operations run millions of times, the savings really add up). The exact numbers you get will depend on your hardware, your operating system, what else your machine is juggling at the moment, how much coffee you’ve had since breakfast, and perhaps whether Mercury is in retrograde. In other words, don’t expect your results to match mine exactly, but I’ve picked tests that should still be reasonably reproducible in the real world.

Now, let’s start at the bottom of the stack. Code generation.

JIT

Among all areas of .NET, the Just-In-Time (JIT) compiler stands out as one of the most impactful. Every .NET application, whether a small console tool or a large-scale enterprise service, ultimately relies on the JIT to turn intermediate language (IL) code into optimized machine code. Any enhancement to the JIT’s generated code quality has a ripple effect, improving performance across the entire ecosystem without requiring developers to change any of their own code or even recompile their C#. And with .NET 10, there’s no shortage of these improvements.

Deabstraction

As with many languages, .NET historically has had an “abstraction penalty,” those extra allocations and indirections that can occur when using high-level language features like interfaces, iterators, and delegates. Each year, the JIT gets better and better at optimizing away layers of abstraction, so that developers get to write simple code and still get great performance. .NET 10 continues this tradition. The result is that idiomatic C# (using interfaces, foreach loops, lambdas, etc.) runs even closer to the raw speed of meticulously crafted and hand-tuned code.

Object Stack Allocation

One of the most exciting areas of deabstraction progress in .NET 10 is the expanded use of escape analysis to enable stack allocation of objects. Escape analysis is a compiler technique to determine whether an object allocated in a method escapes that method, that is, whether the object is reachable after the method returns (for example, by being stored in a field or returned to the caller) or is used in some way the runtime can’t track within the method (like being passed to an unknown callee). If the compiler can prove an object doesn’t escape, then that object’s lifetime is bounded by the method, and it can be allocated on the stack instead of on the heap. Stack allocation is much cheaper (just pointer bumping for allocation and automatic freeing when the method exits) and reduces GC pressure because, well, the object doesn’t need to be tracked by the GC. .NET 9 had already introduced some limited escape analysis and stack allocation support; .NET 10 takes this significantly further.
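To make that distinction concrete, here’s a minimal sketch (mine, not code from the runtime) of the three outcomes escape analysis distinguishes:

class Point { public int X, Y; }

class Example
{
    private Point? _stash;

    // Does not escape: the Point is only used within the method, so its
    // lifetime is bounded by the method and it's a stack-allocation candidate.
    public int SumLocal()
    {
        Point p = new() { X = 1, Y = 2 };
        return p.X + p.Y;
    }

    // Escapes: the Point is stored into a field, so it's reachable after
    // the method returns and must be heap allocated.
    public void Stash() => _stash = new Point { X = 1, Y = 2 };

    // Escapes (conservatively): the Point is passed to a callee the JIT may
    // not be able to analyze; unless that callee is inlined or known to be
    // non-escaping, the JIT must assume the reference outlives the method.
    public void Pass() => Unknown(new Point());

    public virtual void Unknown(Point p) { }
}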

dotnet/runtime#115172 teaches the JIT how to perform escape analysis related to delegates, and in particular that a delegate’s Invoke method (which is implemented by the runtime) does not stash away the this reference. Then, if escape analysis can prove that the delegate’s object reference is something that otherwise hasn’t escaped, the delegate can effectively evaporate. Consider this benchmark:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "y")]
public partial class Tests
{
    [Benchmark]
    [Arguments(42)]
    public int Sum(int y)
    {
        Func<int, int> addY = x => x + y;
        return DoubleResult(addY, y);
    }

    private int DoubleResult(Func<int, int> func, int arg)
    {
        int result = func(arg);
        return result + result;
    }
}

If we just run this benchmark and compare .NET 9 and .NET 10, we can immediately tell something interesting is happening.

| Method | Runtime   | Mean      | Ratio | Code Size | Allocated | Alloc Ratio |
|------- |---------- |----------:|------:|----------:|----------:|------------:|
| Sum    | .NET 9.0  | 19.530 ns |  1.00 |     118 B |      88 B |        1.00 |
| Sum    | .NET 10.0 |  6.685 ns |  0.34 |      32 B |      24 B |        0.27 |

The C# code for Sum belies complicated code generation by the C# compiler. It needs to create a Func<int, int>, which is “closing over” the “local” y. That means the compiler needs to “lift” y to no longer be an actual local, and instead live as a field on an object; the delegate can then point to a method on that object, giving it access to y. This is approximately what the IL generated by the C# compiler looks like when decompiled to C#:

public int Sum(int y)
{
    <>c__DisplayClass0_0 c = new();
    c.y = y;
    Func<int, int> func = new(c.<Sum>b__0);
    return DoubleResult(func, c.y);
}

private sealed class <>c__DisplayClass0_0
{
    public int y;

    internal int <Sum>b__0(int x) => x + y;
}

From that, we can see the closure is resulting in two allocations: an allocation for the “display class” (what the C# compiler calls these closure types) and an allocation for the delegate that points to the <Sum>b__0 method on that display class instance. That’s what’s accounting for the 88 bytes of allocation in the .NET 9 results: the display class is 24 bytes, and the delegate is 64 bytes. In the .NET 10 version, though, we only see a 24-byte allocation; that’s because the JIT has successfully elided the delegate allocation. Here is the resulting assembly code:

; .NET 9
; Tests.Sum(Int32)
       push      rbp
       push      r15
       push      rbx
       lea       rbp,[rsp+10]
       mov       ebx,esi
       mov       rdi,offset MT_Tests+<>c__DisplayClass0_0
       call      CORINFO_HELP_NEWSFAST
       mov       r15,rax
       mov       [r15+8],ebx
       mov       rdi,offset MT_System.Func<System.Int32, System.Int32>
       call      CORINFO_HELP_NEWSFAST
       mov       rbx,rax
       lea       rdi,[rbx+8]
       mov       rsi,r15
       call      CORINFO_HELP_ASSIGN_REF
       mov       rax,offset Tests+<>c__DisplayClass0_0.<Sum>b__0(Int32)
       mov       [rbx+18],rax
       mov       esi,[r15+8]
       cmp       [rbx+18],rax
       jne       short M00_L01
       mov       rax,[rbx+8]
       add       esi,[rax+8]
       mov       eax,esi
M00_L00:
       add       eax,eax
       pop       rbx
       pop       r15
       pop       rbp
       ret
M00_L01:
       mov       rdi,[rbx+8]
       call      qword ptr [rbx+18]
       jmp       short M00_L00
; Total bytes of code 112

; .NET 10
; Tests.Sum(Int32)
       push      rbx
       mov       ebx,esi
       mov       rdi,offset MT_Tests+<>c__DisplayClass0_0
       call      CORINFO_HELP_NEWSFAST
       mov       [rax+8],ebx
       mov       eax,[rax+8]
       mov       ecx,eax
       add       eax,ecx
       add       eax,eax
       pop       rbx
       ret
; Total bytes of code 32

In both .NET 9 and .NET 10, the JIT successfully inlined DoubleResult, such that the delegate doesn’t escape; in .NET 10, the JIT is then able to stack allocate the delegate. Woo hoo! There’s obviously still future opportunity here, as the JIT doesn’t elide the allocation of the closure object, but that should be addressable with some more effort, hopefully in the near future.

dotnet/runtime#104906 from @hez2010 and dotnet/runtime#112250 extend this kind of analysis and stack allocation to arrays. How many times have you written code like this?

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public void Test()
    {
        Process(new string[] { "a", "b", "c" });

        static void Process(string[] inputs)
        {
            foreach (string input in inputs)
            {
                Use(input);
            }

            [MethodImpl(MethodImplOptions.NoInlining)]
            static void Use(string input) { }
        }
    }
}

Some method I want to call accepts an array of inputs and does something for each input. I need to allocate an array to pass my inputs in, either explicitly, or maybe implicitly due to using params or a collection expression. Ideally, moving forward, there would be an overload of such a Process method that accepted a ReadOnlySpan<string> instead of a string[], and I could then avoid the allocation by construction. But for all of these cases where I’m forced to create an array, .NET 10 comes to the rescue.

| Method | Runtime   | Mean      | Ratio | Allocated | Alloc Ratio |
|------- |---------- |----------:|------:|----------:|------------:|
| Test   | .NET 9.0  | 11.580 ns |  1.00 |      48 B |        1.00 |
| Test   | .NET 10.0 |  3.960 ns |  0.34 |         – |        0.00 |

The JIT was able to inline Process, see that the array never leaves the frame, and stack allocate it.
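As an aside, if you’re writing such a Process method yourself today, the span-based overload alluded to above is expressible directly. A sketch, assuming C# 13’s params collections (the names here are mine, for illustration):

using System;

class ParamsSketch
{
    // Hypothetical overload pair: given both, for a call like
    // Process("a", "b", "c") the compiler prefers the span version,
    // avoiding the array allocation by construction.
    static void Process(params string[] inputs) { /* ... */ }

    static void Process(params ReadOnlySpan<string> inputs)
    {
        foreach (string input in inputs)
        {
            Console.WriteLine(input);
        }
    }

    static void Main() => Process("a", "b", "c"); // binds to the span overload
}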

Of course, now that we’re able to stack allocate arrays, we also want to be able to deal with a common way those arrays are used: via spans. dotnet/runtime#113977 and dotnet/runtime#116124 teach escape analysis to reason about the fields in structs, which includes Span<T>, as it’s “just” a struct that stores a ref T field and an int length field.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _buffer = new byte[3];

    [Benchmark]
    public void Test() => Copy3Bytes(0x12345678, _buffer);

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void Copy3Bytes(int value, Span<byte> dest) =>
        BitConverter.GetBytes(value).AsSpan(0, 3).CopyTo(dest);
}

Here, we’re using BitConverter.GetBytes, which allocates a byte[] containing the bytes from the input (in this case, it’ll be a four-byte array for the int), then we slice off three of the four bytes, and we copy them to the destination span.

| Method | Runtime   | Mean      | Ratio | Allocated | Alloc Ratio |
|------- |---------- |----------:|------:|----------:|------------:|
| Test   | .NET 9.0  | 9.7717 ns |  1.04 |      32 B |        1.00 |
| Test   | .NET 10.0 | 0.8718 ns |  0.09 |         – |        0.00 |

In .NET 9, we get the 32-byte allocation we’d expect for the byte[] in GetBytes (every object on 64-bit is at least 24 bytes, which will include the four bytes for the array’s length; the four bytes of data then live in bytes 24-27, and the size is padded up to the next word boundary, for 32). In .NET 10, with GetBytes and AsSpan inlined, the JIT can see that the array doesn’t escape, and a stack-allocated version of it can be used to seed the span, just as if it were created from any other stack allocation (like stackalloc). (This case also needed a little help from dotnet/runtime#113093, which taught the JIT that certain span operations, like the Memmove used internally by CopyTo, are non-escaping.)

Devirtualization

Interfaces and virtual methods are a critical aspect of .NET and the abstractions it enables. Being able to unwind these abstractions and “devirtualize” is then an important job for the JIT, which has taken notable leaps in capabilities here in .NET 10.

While arrays are one of the most central features provided by C# and .NET, and while the JIT expends a lot of energy and does a great job optimizing many aspects of arrays, one area in particular has caused it pain: an array’s interface implementations. The runtime manufactures a bunch of interface implementations for T[], and because they’re implemented differently from literally every other interface implementation in .NET, the JIT hasn’t been able to apply the same devirtualization capabilities it’s applied elsewhere. And, for anyone who’s dived deep into micro-benchmarks, this can lead to some odd observations. Here’s a performance comparison between iterating over a ReadOnlyCollection<T> using a foreach loop (going through its enumerator) and using a for loop (indexing into each element).

// dotnet run -c Release -f net9.0 --filter "*"
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.ObjectModel;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private ReadOnlyCollection<int> _list = new(Enumerable.Range(1, 1000).ToArray());

    [Benchmark]
    public int SumEnumerable()
    {
        int sum = 0;
        foreach (var item in _list)
        {
            sum += item;
        }
        return sum;
    }

    [Benchmark]
    public int SumForLoop()
    {
        ReadOnlyCollection<int> list = _list;
        int sum = 0;
        int count = list.Count;
        for (int i = 0; i < count; i++)
        {
            sum += _list[i];
        }
        return sum;
    }
}

When asked “which of these will be faster?”, the obvious answer is “SumForLoop”. After all, SumEnumerable is going to allocate an enumerator and has to make twice the number of interface calls (MoveNext+Current per iteration vs this[int] per iteration). As it turns out, the obvious answer is also wrong. Here are the timings on my machine for .NET 9:

| Method        | Mean       |
|-------------- |-----------:|
| SumEnumerable |   949.5 ns |
| SumForLoop    | 1,932.7 ns |

What the what?? If I change the ToArray to instead be ToList, however, the numbers are much more in line with our expectations.

| Method        | Mean       |
|-------------- |-----------:|
| SumEnumerable | 1,542.0 ns |
| SumForLoop    |   894.1 ns |

So what’s going on here? It’s super subtle. First, it’s necessary to know that ReadOnlyCollection<T> just wraps an arbitrary IList<T>, that ReadOnlyCollection<T>’s GetEnumerator() returns _list.GetEnumerator() (I’m ignoring for this discussion the special case where the list is empty), and that ReadOnlyCollection<T>’s indexer just indexes into the IList<T>’s indexer. So far, presumably, this all sounds like what you’d expect. But where things get interesting is around what the JIT is able to devirtualize. In .NET 9, it struggles to devirtualize calls to the interface implementations specifically on T[], so it won’t devirtualize either the _list.GetEnumerator() call or the _list[index] call. However, the enumerator that’s returned is just a normal type that implements IEnumerator<T>, and the JIT has no problem devirtualizing its MoveNext and Current members. That means we’re actually paying a lot more going through the indexer: for N elements, we have to make N interface calls, whereas with the enumerator, we only need the one GetEnumerator interface call and then no more after that.
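Conceptually, and ignoring details like argument validation and the empty-list special case, ReadOnlyCollection<T> behaves like this sketch (mine, simplified; the real type lives in System.Collections.ObjectModel):

using System.Collections;
using System.Collections.Generic;

public class SketchReadOnlyCollection<T> : IReadOnlyList<T>
{
    private readonly IList<T> _list;

    public SketchReadOnlyCollection(IList<T> list) => _list = list;

    // The indexer is an interface call into whatever IList<T> was wrapped;
    // when that's a T[], .NET 9 could not devirtualize it.
    public T this[int index] => _list[index];

    public int Count => _list.Count;

    // GetEnumerator returns the wrapped list's enumerator; for T[], that
    // enumerator is an ordinary class whose MoveNext/Current the JIT can
    // devirtualize once the one GetEnumerator interface call is made.
    public IEnumerator<T> GetEnumerator() => _list.GetEnumerator();

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}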

Thankfully, this is now addressed in .NET 10. dotnet/runtime#108153, dotnet/runtime#109209, dotnet/runtime#109237, and dotnet/runtime#116771 all make it possible for the JIT to devirtualize an array’s interface method implementations. Now when we run the same benchmark (reverted back to using ToArray), we get results much more in line with our expectations, with both benchmarks improving from .NET 9 to .NET 10, and with SumForLoop on .NET 10 being the fastest.

| Method        | Runtime   | Mean       | Ratio |
|-------------- |---------- |-----------:|------:|
| SumEnumerable | .NET 9.0  |   968.5 ns |  1.00 |
| SumEnumerable | .NET 10.0 |   775.5 ns |  0.80 |
| SumForLoop    | .NET 9.0  | 1,960.5 ns |  1.00 |
| SumForLoop    | .NET 10.0 |   624.6 ns |  0.32 |

One of the really interesting things about this is how many libraries are implemented on the premise that it’s faster to use an IList<T>’s indexer for iteration than it is to use its IEnumerable<T>, and that includes System.Linq. All these years, LINQ has had specialized code paths for working with IList<T> when possible; in many cases that’s been a welcome optimization, but in some cases (such as when the concrete type is a ReadOnlyCollection<T>), it’s actually been a deoptimization.
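The library pattern in question looks roughly like the following (a sketch of the strategy, not LINQ’s actual code):

using System.Collections.Generic;

static class SketchEnumerable
{
    // The common library strategy: use the indexer when the source is an
    // IList<T>, on the assumption that indexing beats enumerating. When the
    // IList<T> is a ReadOnlyCollection<T> wrapping a T[] on .NET 9, each
    // list[i] is an un-devirtualized interface call, making this "fast path"
    // slower than the enumerator path.
    public static int SumSketch(this IEnumerable<int> source)
    {
        int sum = 0;
        if (source is IList<int> list)
        {
            for (int i = 0; i < list.Count; i++)
            {
                sum += list[i];
            }
        }
        else
        {
            foreach (int item in source)
            {
                sum += item;
            }
        }
        return sum;
    }
}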

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.ObjectModel;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private ReadOnlyCollection<int> _list = new(Enumerable.Range(1, 1000).ToArray());

    [Benchmark]
    public int SkipTakeSum() => _list.Skip(100).Take(800).Sum();
}

| Method      | Runtime   | Mean     | Ratio |
|------------ |---------- |---------:|------:|
| SkipTakeSum | .NET 9.0  | 3.525 us |  1.00 |
| SkipTakeSum | .NET 10.0 | 1.773 us |  0.50 |

Fixing devirtualization for array’s interface implementation then also has this transitive effect on LINQ.

Guarded Devirtualization (GDV) is also improved in .NET 10, such as by dotnet/runtime#116453 and dotnet/runtime#109256. With dynamic PGO, the JIT is able to instrument a method’s compilation and then use the resulting profiling data as part of emitting an optimized version of the method. One of the things it can profile is which types are used in a virtual dispatch. If one type dominates, the JIT can special-case that type in the code gen and emit a customized implementation specific to that type. That then enables devirtualization in that dedicated path, which is “guarded” by the relevant type check, hence “GDV”. In some cases, however, such as when a virtual call was being made in a shared generic context, GDV would not kick in. Now it will.
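The shape of the transformation GDV performs looks roughly like this hand-written equivalent (a sketch; the JIT does this itself based on profile data, and the types here are mine):

using System;

abstract class Shape
{
    public abstract double Area();
}

sealed class Circle : Shape
{
    public double Radius = 1;
    public override double Area() => Math.PI * Radius * Radius;
}

static class GdvSketch
{
    static double GuardedArea(Shape shape)
    {
        // The "guard": if the profiled dominant type matches, take the
        // devirtualized (and inlinable) fast path...
        if (shape.GetType() == typeof(Circle))
        {
            return ((Circle)shape).Area(); // direct call to a sealed type
        }

        // ...otherwise fall back to ordinary virtual dispatch.
        return shape.Area();
    }

    static void Main() => Console.WriteLine(GuardedArea(new Circle()));
}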

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public bool Test() => GenericEquals("abc", "abc");

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static bool GenericEquals<T>(T a, T b) => EqualityComparer<T>.Default.Equals(a, b);
}

| Method | Runtime   | Mean     | Ratio |
|------- |---------- |---------:|------:|
| Test   | .NET 9.0  | 2.816 ns |  1.00 |
| Test   | .NET 10.0 | 1.511 ns |  0.54 |

dotnet/runtime#110827 from @hez2010 also helps more methods to be inlined by doing another pass looking for opportunities after later phases of devirtualization. The JIT’s optimizations are split up into multiple phases; each phase can make improvements, and those improvements can expose additional opportunities. If those opportunities would only be capitalized on by a phase that already ran, they can be missed. But for phases that are relatively cheap to perform, such as doing a pass looking for additional inlining opportunities, those phases can be repeated once enough other optimization has happened that it’s likely productive to do so again.

Bounds Checking

C# is a memory-safe language, an important aspect of modern programming languages. A key component of this is the inability to walk off the beginning or end of an array, string, or span. The runtime ensures that any such invalid attempt produces an exception, rather than being allowed to perform the invalid memory access. We can see what this looks like with a small benchmark:

// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _array = new int[3];

    [Benchmark]
    public int Read() => _array[2];
}

This is a valid access: the _array contains three elements, and the Read method is reading its last element. However, the JIT can’t be 100% certain that this access is in bounds (something could have changed what’s in the _array field to be a shorter array), and thus it needs to emit a check to ensure we’re not walking off the end of the array. Here’s what the generated assembly code for Read looks like:

; .NET 10
; Tests.Read()
       push      rax
       mov       rax,[rdi+8]
       cmp       dword ptr [rax+8],2
       jbe       short M00_L00
       mov       eax,[rax+18]
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 25

The this reference is passed into the Read instance method in the rdi register, and the _array field is at offset 8, so the mov rax,[rdi+8] instruction is loading the address of the array into the rax register. Then the cmp is comparing against the value at offset 8 from that address; it so happens that’s where the length of the array is stored in the array object. So, this cmp instruction is the bounds check; it’s comparing 2 against that length to ensure the access is in bounds. If the array were too short for this access, the next jbe instruction would branch to the M00_L00 label, which calls the CORINFO_HELP_RNGCHKFAIL helper function that throws an IndexOutOfRangeException. Any time you see this pair of call CORINFO_HELP_RNGCHKFAIL/int 3 at the end of a method, there was at least one bounds check emitted by the JIT in that method.
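In C# terms, the check the JIT emits is morally equivalent to the following sketch (written out by hand for illustration; the real check is generated by the JIT, not present in source):

public class BoundsCheckSketch
{
    private int[] _array = new int[3];

    // What the JIT-emitted code for `_array[2]` amounts to. A single
    // unsigned comparison covers both "index < 0" and "index >= Length",
    // because a negative index cast to uint wraps to a huge value.
    public int Read()
    {
        int[] array = _array;
        if ((uint)2 >= (uint)array.Length)
        {
            throw new IndexOutOfRangeException();
        }
        return array[2]; // access proven in bounds at this point
    }
}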

Of course, we not only want safety, we also want great performance, and it’d be terrible for performance if every single read from an array (or string or span) incurred such an additional check. As such, the JIT strives to avoid emitting these checks when they’d be redundant, when it can prove by construction that the accesses are safe. For example, let me tweak my benchmark slightly, moving the array from an instance field into a static readonly field:

// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly int[] s_array = new int[3];

    [Benchmark]
    public int Read() => s_array[2];
}

We now get this assembly:

; .NET 10
; Tests.Read()
       mov       rax,705D5419FA20
       mov       eax,[rax+18]
       ret
; Total bytes of code 14

The static readonly field is immutable, arrays can’t be resized, and the JIT can guarantee that the field is initialized prior to generating the code for Read. Therefore, when generating the code for Read, it can know with certainty that the array is of length three, and we’re accessing the element at index two. Therefore, the specified array index is guaranteed to be within bounds, and there’s no need for a bounds check. We simply get two movs: the first mov to load the address of the array (which, thanks to improvements in previous releases, is allocated on a heap that doesn’t need to be compacted, such that the array lives at a fixed address), and the second mov to read the int value at the location of index two (these are ints, so index two lives 2 * sizeof(int) = 8 bytes from the start of the array’s data, which itself on 64-bit is offset 16 bytes from the start of the array reference, for a total offset of 24 bytes, or 0x18 in hex, hence the rax+18 in the disassembly).
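Spelling out that arithmetic, based on the 64-bit array layout just described:

// Assumed 64-bit layout of an int[] (the object reference points at offset 0):
//   0x00: method table pointer (8 bytes)
//   0x08: length (4 bytes, padded to 8)
//   0x10: element 0, with element i at 0x10 + 4*i
//
// So element 2 lives at 0x10 + 2 * sizeof(int) = 0x10 + 8 = 0x18,
// exactly the [rax+18] in the disassembly above.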

Every release of .NET, more and more opportunities are found and implemented to eschew bounds checks that were previously being generated. .NET 10 continues this trend.

Our first example comes from dotnet/runtime#109900, which was inspired by the implementation of BitOperations.Log2. The operation has intrinsic hardware support on many architectures, and generally BitOperations.Log2 will use one of the hardware intrinsics available to it for a very efficient implementation (e.g. Lzcnt.LeadingZeroCount, ArmBase.LeadingZeroCount, or X86Base.BitScanReverse); however, as a fallback implementation it uses a lookup table. The lookup table has 32 elements, and the operation involves computing a uint value and then shifting it down by 27 in order to get the top 5 bits. Any possible result is guaranteed to be a non-negative number less than 32, but indexing into the span with that result still produced a bounds check, and, as this is a critical path, “unsafe” code (meaning code that eschews the guardrails the runtime supplies by default) was then used to avoid the bounds check.
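That unsafe workaround looked roughly like this sketch (my illustration of the pattern, not the exact runtime code):

using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

static class UnsafeLookupSketch
{
    private static ReadOnlySpan<byte> Log2DeBruijn =>
    [
        00, 09, 01, 10, 13, 21, 02, 29, 11, 14, 16, 18, 22, 25, 03, 30,
        08, 12, 20, 28, 15, 17, 24, 07, 19, 27, 23, 06, 26, 05, 04, 31
    ];

    public static int Log2Fallback(uint value)
    {
        value |= value >> 1;
        value |= value >> 2;
        value |= value >> 4;
        value |= value >> 8;
        value |= value >> 16;

        // Unsafe.Add skips the span's bounds check; this is only safe because
        // (value * 0x07C4ACDD) >> 27 provably yields a value in 0..31.
        return Unsafe.Add(
            ref MemoryMarshal.GetReference(Log2DeBruijn),
            (int)((value * 0x07C4ACDDu) >> 27));
    }
}

The benchmark below measures the safe, bounds-checked version of that same fallback.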

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "value")]
public partial class Tests
{
    [Benchmark]
    [Arguments(42)]
    public int Log2SoftwareFallback2(uint value)
    {
        ReadOnlySpan<byte> Log2DeBruijn =
        [
            00, 09, 01, 10, 13, 21, 02, 29,
            11, 14, 16, 18, 22, 25, 03, 30,
            08, 12, 20, 28, 15, 17, 24, 07,
            19, 27, 23, 06, 26, 05, 04, 31
        ];

        value |= value >> 01;
        value |= value >> 02;
        value |= value >> 04;
        value |= value >> 08;
        value |= value >> 16;

        return Log2DeBruijn[(int)((value * 0x07C4ACDDu) >> 27)];
    }
}

Now in .NET 10, the bounds check is gone (note the presence of the call CORINFO_HELP_RNGCHKFAIL in the .NET 9 assembly and the lack of it in the .NET 10 assembly).

; .NET 9
; Tests.Log2SoftwareFallback2(UInt32)
       push      rax
       mov       eax,esi
       shr       eax,1
       or        esi,eax
       mov       eax,esi
       shr       eax,2
       or        esi,eax
       mov       eax,esi
       shr       eax,4
       or        esi,eax
       mov       eax,esi
       shr       eax,8
       or        esi,eax
       mov       eax,esi
       shr       eax,10
       or        eax,esi
       imul      eax,7C4ACDD
       shr       eax,1B
       cmp       eax,20
       jae       short M00_L00
       mov       rcx,7913CA812E10
       movzx     eax,byte ptr [rax+rcx]
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 74

; .NET 10
; Tests.Log2SoftwareFallback2(UInt32)
       mov       eax,esi
       shr       eax,1
       or        esi,eax
       mov       eax,esi
       shr       eax,2
       or        esi,eax
       mov       eax,esi
       shr       eax,4
       or        esi,eax
       mov       eax,esi
       shr       eax,8
       or        esi,eax
       mov       eax,esi
       shr       eax,10
       or        eax,esi
       imul      eax,7C4ACDD
       shr       eax,1B
       mov       rcx,7CA298325E10
       movzx     eax,byte ptr [rcx+rax]
       ret
; Total bytes of code 58

This improvement then enabled dotnet/runtime#118560 to simplify the code in the real Log2SoftwareFallback, avoiding the manual use of unsafe constructs.

dotnet/runtime#113790 implements a similar case, where the result of a mathematical operation is guaranteed to be in bounds. In this case, it’s the result of Log2. The change teaches the JIT to understand the maximum possible value that Log2 could produce, and if that maximum is in bounds, then any result is guaranteed to be in bounds as well.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "value")]
public partial class Tests
{
    [Benchmark]
    [Arguments(12345)]
    public nint CountDigits(ulong value)
    {
        ReadOnlySpan<byte> log2ToPow10 =
        [
            1,  1,  1,  2,  2,  2,  3,  3,  3,  4,  4,  4,  4,  5,  5,  5,
            6,  6,  6,  7,  7,  7,  7,  8,  8,  8,  9,  9,  9,  10, 10, 10,
            10, 11, 11, 11, 12, 12, 12, 13, 13, 13, 13, 14, 14, 14, 15, 15,
            15, 16, 16, 16, 16, 17, 17, 17, 18, 18, 18, 19, 19, 19, 19, 20
        ];

        return log2ToPow10[(int)ulong.Log2(value)];
    }
}

We can see the bounds check present in the .NET 9 output and absent in the .NET 10 output:

; .NET 9
; Tests.CountDigits(UInt64)
       push      rax
       or        rsi,1
       xor       eax,eax
       lzcnt     rax,rsi
       xor       eax,3F
       cmp       eax,40
       jae       short M00_L00
       mov       rcx,7C2D0A213DF8
       movzx     eax,byte ptr [rax+rcx]
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 45

; .NET 10
; Tests.CountDigits(UInt64)
       or        rsi,1
       xor       eax,eax
       lzcnt     rax,rsi
       xor       eax,3F
       mov       rcx,71EFA9400DF8
       movzx     eax,byte ptr [rcx+rax]
       ret
; Total bytes of code 29

My choice of benchmark in this case was not coincidental. This pattern shows up in the FormattingHelpers.CountDigits internal method that’s used by the core primitive types in their ToString and TryFormat implementations, in order to determine how much space will be needed to store the rendered digits for a number. As with the previous example, this routine is considered core enough that it was using unsafe code to avoid the bounds check. With this fix, the code was able to be changed back to using a simple span access, and even with the simpler code, it’s now also faster.

Now, consider this code:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "ids")]
public partial class Tests
{
    public IEnumerable<int[]> Ids { get; } = [[1, 2, 3, 4, 5, 1]];

    [Benchmark]
    [ArgumentsSource(nameof(Ids))]
    public bool StartAndEndAreSame(int[] ids) => ids[0] == ids[^1];
}

I have a method that’s accepting an int[] and checking to see whether it starts and ends with the same value. The JIT has no way of knowing whether the int[] is empty or not, so it does need a bounds check; otherwise, accessing ids[0] could walk off the end of the array. However, this is what we see on .NET 9:

; .NET 9
; Tests.StartAndEndAreSame(Int32[])
       push      rax
       mov       eax,[rsi+8]
       test      eax,eax
       je        short M00_L00
       mov       ecx,[rsi+10]
       lea       edx,[rax-1]
       cmp       edx,eax
       jae       short M00_L00
       mov       eax,edx
       cmp       ecx,[rsi+rax*4+10]
       sete      al
       movzx     eax,al
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 41

Note there are two jumps to the M00_L00 label that handles failed bounds checks… that’s because there are two bounds checks here, one for the start access and one for the end access. But that shouldn’t be necessary. ids[^1] is the same as ids[ids.Length - 1]. If the code has successfully accessed ids[0], that means the array is at least one element in length, and if it’s at least one element in length, ids[ids.Length - 1] will always be in bounds. Thus, the second bounds check shouldn’t be needed. Indeed, thanks to dotnet/runtime#116105, this is what we now get on .NET 10 (one branch to M00_L00 instead of two):

; .NET 10
; Tests.StartAndEndAreSame(Int32[])
       push      rax
       mov       eax,[rsi+8]
       test      eax,eax
       je        short M00_L00
       mov       ecx,[rsi+10]
       dec       eax
       cmp       ecx,[rsi+rax*4+10]
       sete      al
       movzx     eax,al
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 34

What’s really interesting to me here is the knock-on effect of having removed the bounds check. It didn’t just eliminate the cmp/jae pair of instructions that’s typical of a bounds check. The .NET 9 version of the code had this:

lea       edx,[rax-1]
cmp       edx,eax
jae       short M00_L00
mov       eax,edx

At this point in the assembly, the rax register is storing the length of the array. The code is calculating ids.Length - 1 and storing the result into edx, and then checking to see whether ids.Length - 1 is in bounds of ids.Length (the only way it wouldn’t be is if the array were empty, such that ids.Length - 1 wrapped around to uint.MaxValue); if it’s not, it jumps to the fail handler, and if it is, it stores the already-computed ids.Length - 1 into eax. By removing the bounds check, we get rid of those two intervening instructions, leaving these:

lea       edx,[rax-1]
mov       eax,edx

which is a little silly, as this sequence is just computing a decrement, and as long as it’s ok that flags get modified, it could instead just be:

dec eax

which, as you can see in the .NET 10 output, is exactly what .NET 10 now does.

dotnet/runtime#115980 addresses another case. Let’s say I have this method:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "start", "text")]
public partial class Tests
{
    [Benchmark]
    [Arguments("abc", "abc.")]
    public bool IsFollowedByPeriod(string start, string text) =>
        start.Length < text.Length && text[start.Length] == '.';
}

We’re validating that one input’s length is less than the other’s, and then checking to see what comes immediately after it in the other. We know that string.Length is immutable, so a bounds check here is redundant, but until .NET 10, the JIT couldn’t see that.

; .NET 9
; Tests.IsFollowedByPeriod(System.String, System.String)
       push      rbp
       mov       rbp,rsp
       mov       eax,[rsi+8]
       mov       ecx,[rdx+8]
       cmp       eax,ecx
       jge       short M00_L00
       cmp       eax,ecx
       jae       short M00_L01
       cmp       word ptr [rdx+rax*2+0C],2E
       sete      al
       movzx     eax,al
       pop       rbp
       ret
M00_L00:
       xor       eax,eax
       pop       rbp
       ret
M00_L01:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 42

; .NET 10
; Tests.IsFollowedByPeriod(System.String, System.String)
       mov       eax,[rsi+8]
       mov       ecx,[rdx+8]
       cmp       eax,ecx
       jge       short M00_L00
       cmp       word ptr [rdx+rax*2+0C],2E
       sete      al
       movzx     eax,al
       ret
M00_L00:
       xor       eax,eax
       ret
; Total bytes of code 26

The removal of the bounds check almost halves the size of the function. If we don’t need to do a bounds check, we get to elide the cmp/jae. Without that branch, nothing is targeting M00_L01, and we can remove the call/int pair that was only necessary to support a bounds check. Then, without the call in M00_L01, which was the only call in the whole method, the prologue and epilogue can be elided, meaning we also don’t need the opening and closing push and pop instructions.

dotnet/runtime#113233 improved the handling of “assertions” (facts the JIT establishes and then uses to drive optimizations) to be less order-dependent. In .NET 9, this code:

static bool Test(ReadOnlySpan<char> span, int pos) =>
    pos > 0 &&
    pos <= span.Length - 42 &&
    span[pos - 1] != '\n';

was successfully removing the bounds check on the span access, but the following variant, which just switches the order of the first two conditions, was still incurring the bounds check.

static bool Test(ReadOnlySpan<char> span, int pos) =>
    pos <= span.Length - 42 &&
    pos > 0 &&
    span[pos - 1] != '\n';

Note that both conditions contribute assertions (facts) that need to be merged in order to know that the bounds check can be avoided. Now in .NET 10, the bounds check is elided, regardless of the order.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _s = new string('s', 100);
    private int _pos = 10;

    [Benchmark]
    public bool Test()
    {
        string s = _s;
        int pos = _pos;
        return
            pos <= s.Length - 42 &&
            pos > 0 &&
            s[pos - 1] != '\n';
    }
}

; .NET 9
; Tests.Test()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rdi+10]
       mov       edx,[rax+8]
       lea       edi,[rdx-2A]
       cmp       edi,ecx
       jl        short M00_L00
       test      ecx,ecx
       jle       short M00_L00
       dec       ecx
       cmp       ecx,edx
       jae       short M00_L01
       cmp       word ptr [rax+rcx*2+0C],0A
       setne     al
       movzx     eax,al
       pop       rbp
       ret
M00_L00:
       xor       eax,eax
       pop       rbp
       ret
M00_L01:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 55

; .NET 10
; Tests.Test()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rdi+10]
       mov       edx,[rax+8]
       add       edx,0FFFFFFD6
       cmp       edx,ecx
       jl        short M00_L00
       test      ecx,ecx
       jle       short M00_L00
       dec       ecx
       cmp       word ptr [rax+rcx*2+0C],0A
       setne     al
       movzx     eax,al
       pop       rbp
       ret
M00_L00:
       xor       eax,eax
       pop       rbp
       ret
; Total bytes of code 45

dotnet/runtime#113862 addresses a similar case where assertions weren’t being handled as precisely as they could have been. Consider this code:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _arr = Enumerable.Range(0, 10).ToArray();

    [Benchmark]
    public int Sum()
    {
        int[] arr = _arr;
        int sum = 0;

        int i;
        for (i = 0; i < arr.Length - 3; i += 4)
        {
            sum += arr[i + 0];
            sum += arr[i + 1];
            sum += arr[i + 2];
            sum += arr[i + 3];
        }

        for (; i < arr.Length; i++)
        {
            sum += arr[i];
        }

        return sum;
    }
}

The Sum method is trying to do manual loop unrolling. Rather than incurring a branch on each element, it’s handling four elements per iteration. Then, for the case where the length of the input isn’t evenly divisible by four, it’s handling the remaining elements in a separate loop. In .NET 9, the JIT successfully elides the bounds checks in the main unrolled loop:

; .NET 9
; Tests.Sum()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       xor       ecx,ecx
       xor       edx,edx
       mov       edi,[rax+8]
       lea       esi,[rdi-3]
       test      esi,esi
       jle       short M00_L02
M00_L00:
       mov       r8d,edx
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+1]
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+2]
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+3]
       add       ecx,[rax+r8*4+10]
       add       edx,4
       cmp       esi,edx
       jg        short M00_L00
       jmp       short M00_L02
M00_L01:
       cmp       edx,edi
       jae       short M00_L03
       mov       esi,edx
       add       ecx,[rax+rsi*4+10]
       inc       edx
M00_L02:
       cmp       edi,edx
       jg        short M00_L01
       mov       eax,ecx
       pop       rbp
       ret
M00_L03:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 92

You can see this in the M00_L00 section, which has the five add instructions (four for the summed elements, and one for the index). However, we still see the CORINFO_HELP_RNGCHKFAIL at the end, indicating this method has a bounds check. That’s coming from the final loop, due to the JIT losing track of the fact that i is guaranteed to be non-negative. Now in .NET 10, that bounds check is removed as well (again, just look for the lack of the CORINFO_HELP_RNGCHKFAIL call).

; .NET 10
; Tests.Sum()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       xor       ecx,ecx
       xor       edx,edx
       mov       edi,[rax+8]
       lea       esi,[rdi-3]
       test      esi,esi
       jle       short M00_L01
M00_L00:
       mov       r8d,edx
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+1]
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+2]
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+3]
       add       ecx,[rax+r8*4+10]
       add       edx,4
       cmp       esi,edx
       jg        short M00_L00
M00_L01:
       cmp       edi,edx
       jle       short M00_L03
       test      edx,edx
       jl        short M00_L04
M00_L02:
       mov       esi,edx
       add       ecx,[rax+rsi*4+10]
       inc       edx
       cmp       edi,edx
       jg        short M00_L02
M00_L03:
       mov       eax,ecx
       pop       rbp
       ret
M00_L04:
       mov       esi,edx
       add       ecx,[rax+rsi*4+10]
       inc       edx
       cmp       edi,edx
       jg        short M00_L04
       jmp       short M00_L03
; Total bytes of code 102

Another nice improvement comes from dotnet/runtime#112824, which teaches the JIT to turn facts it already learned from earlier checks into concrete numeric ranges, and then use those ranges to fold away later relational tests and bounds checks. Consider this example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _array = new int[10];

    [Benchmark]
    public void Test() => SetAndSlice(_array);

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static Span<int> SetAndSlice(Span<int> src)
    {
        src[5] = 42;
        return src.Slice(4);
    }
}

We have to incur a bounds check for the src[5] write, as the JIT has no evidence that src is at least six elements long. However, by the time we get to the Slice call, we know the span has a length of at least six, or else writing into src[5] would have failed. We can use that knowledge to remove the length check from within the Slice call (note the removal of the call qword ptr [7F8DDB3A7810]/int 3 sequence, which is the manual length check and call to a throw helper method in Slice).

; .NET 9
; Tests.SetAndSlice(System.Span`1<Int32>)
       push      rbp
       mov       rbp,rsp
       cmp       esi,5
       jbe       short M01_L01
       mov       dword ptr [rdi+14],2A
       cmp       esi,4
       jb        short M01_L00
       add       rdi,10
       mov       rax,rdi
       add       esi,0FFFFFFFC
       mov       edx,esi
       pop       rbp
       ret
M01_L00:
       call      qword ptr [7F8DDB3A7810]
       int       3
M01_L01:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 48

; .NET 10
; Tests.SetAndSlice(System.Span`1<Int32>)
       push      rax
       cmp       esi,5
       jbe       short M01_L00
       mov       dword ptr [rdi+14],2A
       lea       rax,[rdi+10]
       lea       edx,[rsi-4]
       add       rsp,8
       ret
M01_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 31
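For reference, the check being folded away is conceptually the following (a sketch of the guard inside Span<T>.Slice(int), not the actual runtime code):

using System;

static class SliceGuardSketch
{
    // Conceptual model of Slice's argument validation. After the src[5]
    // write above, the JIT knows src.Length >= 6, so for start == 4 the
    // (uint)start > (uint)src.Length test is provably false and the branch
    // (and its throw helper call) folds away.
    public static Span<int> SliceSketch(Span<int> src, int start)
    {
        if ((uint)start > (uint)src.Length)
        {
            throw new ArgumentOutOfRangeException(nameof(start));
        }

        return src.Slice(start); // the real Slice; only its guard is modeled here
    }
}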

Let’s look at one more, which has a very nice impact on bounds checking, even though technically the optimization is broader than just that. dotnet/runtime#113998 creates assertions from switch targets. This means that the body of a switch case statement inherits facts about what was switched over based on what the case was; e.g. in a case 3 for a switch (x), the body of that case will now “know” that x is three. This is great for very popular patterns with arrays, strings, and spans, where developers switch over the length and then index into available indices in the appropriate branches. Consider this:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _array = [1, 2];

    [Benchmark]
    public int SumArray() => Sum(_array);

    [MethodImpl(MethodImplOptions.NoInlining)]
    public int Sum(ReadOnlySpan<int> span)
    {
        switch (span.Length)
        {
            case 0: return 0;
            case 1: return span[0];
            case 2: return span[0] + span[1];
            case 3: return span[0] + span[1] + span[2];
            default: return -1;
        }
    }
}

On .NET 9, each of those six span dereferences ends up with a bounds check:

; .NET 9
; Tests.Sum(System.ReadOnlySpan`1<Int32>)
       push      rbp
       mov       rbp,rsp
M01_L00:
       cmp       edx,2
       jne       short M01_L02
       test      edx,edx
       je        short M01_L04
       mov       eax,[rsi]
       cmp       edx,1
       jbe       short M01_L04
       add       eax,[rsi+4]
M01_L01:
       pop       rbp
       ret
M01_L02:
       cmp       edx,3
       ja        short M01_L03
       mov       eax,edx
       lea       rcx,[783DA42091B8]
       mov       ecx,[rcx+rax*4]
       lea       rdi,[M01_L00]
       add       rcx,rdi
       jmp       rcx
M01_L03:
       mov       eax,0FFFFFFFF
       pop       rbp
       ret
       test      edx,edx
       je        short M01_L04
       mov       eax,[rsi]
       cmp       edx,1
       jbe       short M01_L04
       add       eax,[rsi+4]
       cmp       edx,2
       jbe       short M01_L04
       add       eax,[rsi+8]
       jmp       short M01_L01
       test      edx,edx
       je        short M01_L04
       mov       eax,[rsi]
       jmp       short M01_L01
       xor       eax,eax
       pop       rbp
       ret
M01_L04:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 103

You can see the tell-tale bounds check sign (CORINFO_HELP_RNGCHKFAIL) under M01_L04, and no fewer than six jumps targeting that label, one for each span[...] access. But on .NET 10, we get this:

; .NET 10
; Tests.Sum(System.ReadOnlySpan`1<Int32>)
       push      rbp
       mov       rbp,rsp
M01_L00:
       cmp       edx,2
       jne       short M01_L02
       mov       eax,[rsi]
       add       eax,[rsi+4]
M01_L01:
       pop       rbp
       ret
M01_L02:
       cmp       edx,3
       ja        short M01_L03
       mov       eax,edx
       lea       rcx,[72C15C0F8FD8]
       mov       ecx,[rcx+rax*4]
       lea       rdx,[M01_L00]
       add       rcx,rdx
       jmp       rcx
M01_L03:
       mov       eax,0FFFFFFFF
       pop       rbp
       ret
       xor       eax,eax
       pop       rbp
       ret
       mov       eax,[rsi]
       jmp       short M01_L01
       mov       eax,[rsi]
       add       eax,[rsi+4]
       add       eax,[rsi+8]
       jmp       short M01_L01
; Total bytes of code 70

The CORINFO_HELP_RNGCHKFAIL and all the jumps to it have evaporated.

Cloning

There are other ways the JIT can remove bounds checking even when it can’t prove statically that every individual access is safe. Consider this method:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _arr = new int[16];

    [Benchmark]
    public void Test()
    {
        int[] arr = _arr;
        arr[0] = 2;
        arr[1] = 3;
        arr[2] = 5;
        arr[3] = 8;
        arr[4] = 13;
        arr[5] = 21;
        arr[6] = 34;
        arr[7] = 55;
    }
}

Here’s the assembly code generated on .NET 9:

; .NET 9
; Tests.Test()
       push      rax
       mov       rax,[rdi+8]
       mov       ecx,[rax+8]
       test      ecx,ecx
       je        short M00_L00
       mov       dword ptr [rax+10],2
       cmp       ecx,1
       jbe       short M00_L00
       mov       dword ptr [rax+14],3
       cmp       ecx,2
       jbe       short M00_L00
       mov       dword ptr [rax+18],5
       cmp       ecx,3
       jbe       short M00_L00
       mov       dword ptr [rax+1C],8
       cmp       ecx,4
       jbe       short M00_L00
       mov       dword ptr [rax+20],0D
       cmp       ecx,5
       jbe       short M00_L00
       mov       dword ptr [rax+24],15
       cmp       ecx,6
       jbe       short M00_L00
       mov       dword ptr [rax+28],22
       cmp       ecx,7
       jbe       short M00_L00
       mov       dword ptr [rax+2C],37
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 114

Even if you’re not proficient at reading assembly, the pattern should still be obvious. In the C# code, we have eight writes into the array, and in the assembly code, we have eight repetitions of the same pattern: cmp ecx,LENGTH to compare the length of the array against the required LENGTH, jbe short M00_L00 to jump to the CORINFO_HELP_RNGCHKFAIL helper if the bounds check fails, and mov dword ptr [rax+OFFSET],VALUE to store VALUE into the array at byte offset OFFSET. Inside the Test method, the JIT can’t know how long _arr is, so it must include bounds checks. Moreover, it must include all of the bounds checks, rather than coalescing them, because it is forbidden from introducing behavioral changes as part of optimizations. Imagine instead if it chose to coalesce all of the bounds checks into a single check, and emitted this method as if it were the equivalent of the following:

if (arr.Length >= 8)
{
    arr[0] = 2;
    arr[1] = 3;
    arr[2] = 5;
    arr[3] = 8;
    arr[4] = 13;
    arr[5] = 21;
    arr[6] = 34;
    arr[7] = 55;
}
else
{
    throw new IndexOutOfRangeException();
}

Now, let’s say the array was actually of length four. The original program would have filled the array with the values [2, 3, 5, 8] before throwing an exception, but this transformed code wouldn’t (there wouldn’t be any writes to the array). That’s an observable behavioral change. An enterprising developer could of course choose to rewrite their code to avoid some of these checks, e.g.

arr[7] = 55;
arr[0] = 2;
arr[1] = 3;
arr[2] = 5;
arr[3] = 8;
arr[4] = 13;
arr[5] = 21;
arr[6] = 34;

By moving the last store to the beginning, the developer has given the JIT extra knowledge. The JIT can now see that if the first store succeeds, the rest are guaranteed to succeed as well, and the JIT will emit a single bounds check. But, again, that’s the developer choosing to change their program in a way the JIT must not. However, there are other things the JIT can do. Imagine the JIT chose to rewrite the method like this instead:

if (arr.Length >= 8)
{
    arr[0] = 2;
    arr[1] = 3;
    arr[2] = 5;
    arr[3] = 8;
    arr[4] = 13;
    arr[5] = 21;
    arr[6] = 34;
    arr[7] = 55;
}
else
{
    arr[0] = 2;
    arr[1] = 3;
    arr[2] = 5;
    arr[3] = 8;
    arr[4] = 13;
    arr[5] = 21;
    arr[6] = 34;
    arr[7] = 55;
}

To our C# sensibilities, that looks unnecessarily complicated; the if and the else block contain exactly the same C# code. But, knowing what we now know about how the JIT can use known length information to elide bounds checks, it starts to make a bit more sense. Here’s what the JIT emits for this variant on .NET 9:

; .NET 9
; Tests.Test()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rax+8]
       cmp       ecx,8
       jl        short M00_L00
       mov       rcx,300000002
       mov       [rax+10],rcx
       mov       rcx,800000005
       mov       [rax+18],rcx
       mov       rcx,150000000D
       mov       [rax+20],rcx
       mov       rcx,3700000022
       mov       [rax+28],rcx
       pop       rbp
       ret
M00_L00:
       test      ecx,ecx
       je        short M00_L01
       mov       dword ptr [rax+10],2
       cmp       ecx,1
       jbe       short M00_L01
       mov       dword ptr [rax+14],3
       cmp       ecx,2
       jbe       short M00_L01
       mov       dword ptr [rax+18],5
       cmp       ecx,3
       jbe       short M00_L01
       mov       dword ptr [rax+1C],8
       cmp       ecx,4
       jbe       short M00_L01
       mov       dword ptr [rax+20],0D
       cmp       ecx,5
       jbe       short M00_L01
       mov       dword ptr [rax+24],15
       cmp       ecx,6
       jbe       short M00_L01
       mov       dword ptr [rax+28],22
       cmp       ecx,7
       jbe       short M00_L01
       mov       dword ptr [rax+2C],37
       pop       rbp
       ret
M00_L01:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 177

The else block is compiled to the M00_L00 label, which contains those same eight repeated blocks we saw earlier. But the if block (above the M00_L00 label) is interesting. The only branch there is the initial array.Length >= 8 check I wrote in the C# code, emitted as the cmp ecx,8/jl short M00_L00 pair of instructions. The rest of the block is just mov instructions (and you can see there are only four writes into the array rather than eight… the JIT has optimized the eight four-byte writes into four eight-byte writes). In our rewrite, we’ve manually cloned the code, so that in what we expect to be the vast, vast, vast majority case (presumably we wouldn’t have written the array writes in the first place if we thought they’d fail), we only incur the single length check, and then we have our “hopefully this is never needed” fallback case for the rare situation where it is. Of course, you shouldn’t (and shouldn’t need to) do such manual cloning. But, the JIT can do such cloning for you, and does.

“Cloning” is an optimization long employed by the JIT, where it will do this kind of code duplication, typically of loops, when it believes that in doing so, it can heavily optimize a common case. Now in .NET 10, thanks to dotnet/runtime#112595, it can employ this same technique for these kinds of sequences of writes. Going back to our original benchmark, here’s what we now get on .NET 10:

; .NET 10
; Tests.Test()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rax+8]
       mov       edx,ecx
       cmp       edx,7
       jle       short M00_L01
       mov       rdx,300000002
       mov       [rax+10],rdx
       mov       rcx,800000005
       mov       [rax+18],rcx
       mov       rcx,150000000D
       mov       [rax+20],rcx
       mov       rcx,3700000022
       mov       [rax+28],rcx
M00_L00:
       pop       rbp
       ret
M00_L01:
       test      edx,edx
       je        short M00_L02
       mov       dword ptr [rax+10],2
       cmp       ecx,1
       jbe       short M00_L02
       mov       dword ptr [rax+14],3
       cmp       ecx,2
       jbe       short M00_L02
       mov       dword ptr [rax+18],5
       cmp       ecx,3
       jbe       short M00_L02
       mov       dword ptr [rax+1C],8
       cmp       ecx,4
       jbe       short M00_L02
       mov       dword ptr [rax+20],0D
       cmp       ecx,5
       jbe       short M00_L02
       mov       dword ptr [rax+24],15
       cmp       ecx,6
       jbe       short M00_L02
       mov       dword ptr [rax+28],22
       cmp       ecx,7
       jbe       short M00_L02
       mov       dword ptr [rax+2C],37
       jmp       short M00_L00
M00_L02:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 179

This structure looks almost identical to what we got when we manually cloned: the JIT has emitted the same code twice, except in one case, there are no bounds checks, and in the other case, there are all the bounds checks, and a single length check determines which path to follow. Pretty neat.

As noted, the JIT has been doing cloning for years, in particular for loops over arrays. However, more and more code is being written against spans instead of arrays, and unfortunately this valuable optimization didn’t apply to spans. Now with dotnet/runtime#113575, it does! We can see this with a basic looping example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _arr = new int[16];
    private int _count = 8;

    [Benchmark]
    public void WithSpan()
    {
        Span<int> span = _arr;
        int count = _count;
        for (int i = 0; i < count; i++)
        {
            span[i] = i;
        }
    }

    [Benchmark]
    public void WithArray()
    {
        int[] arr = _arr;
        int count = _count;
        for (int i = 0; i < count; i++)
        {
            arr[i] = i;
        }
    }
}

In both WithArray and WithSpan, we have the same loop, iterating from 0 to a _count with an unknown relationship to the length of _arr, so there has to be some kind of bounds checking emitted. Here’s what we get on .NET 9 for WithSpan:

; .NET 9
; Tests.WithSpan()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       test      rax,rax
       je        short M00_L03
       lea       rcx,[rax+10]
       mov       eax,[rax+8]
M00_L00:
       mov       edi,[rdi+10]
       xor       edx,edx
       test      edi,edi
       jle       short M00_L02
       nop       dword ptr [rax]
M00_L01:
       cmp       edx,eax
       jae       short M00_L04
       mov       [rcx+rdx*4],edx
       inc       edx
       cmp       edx,edi
       jl        short M00_L01
M00_L02:
       pop       rbp
       ret
M00_L03:
       xor       ecx,ecx
       xor       eax,eax
       jmp       short M00_L00
M00_L04:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 59

There’s some upfront assembly here associated with loading _arr into a span, loading _count, and checking to see whether the count is 0 (in which case the whole loop can be skipped). Then the core of the loop is at M00_L01, which is repeatedly checking edx (which contains i) against the length of the span (in eax), jumping to CORINFO_HELP_RNGCHKFAIL if it’s an out-of-bounds access, writing edx (i) into the span at the next position, bumping up i, and then jumping back to M00_L01 to keep iterating if i is still less than count (stored in edi). In other words, we have two checks per iteration: is i still within the bounds of the span, and is i still less than count. Now here’s what we get on .NET 9 for WithArray:

; .NET 9
; Tests.WithArray()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rdi+10]
       xor       edx,edx
       test      ecx,ecx
       jle       short M00_L01
       test      rax,rax
       je        short M00_L02
       cmp       [rax+8],ecx
       jl        short M00_L02
       nop       dword ptr [rax+rax]
M00_L00:
       mov       edi,edx
       mov       [rax+rdi*4+10],edx
       inc       edx
       cmp       edx,ecx
       jl        short M00_L00
M00_L01:
       pop       rbp
       ret
M00_L02:
       cmp       edx,[rax+8]
       jae       short M00_L03
       mov       edi,edx
       mov       [rax+rdi*4+10],edx
       inc       edx
       cmp       edx,ecx
       jl        short M00_L02
       jmp       short M00_L01
M00_L03:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 71

Here, label M00_L02 looks very similar to the loop we just saw in WithSpan, incurring both the check against count and the bounds check on every iteration. But note section M00_L00: it’s a clone of the same loop, still with the cmp edx,ecx that checks i against count on each iteration, but no additional bounds checking in sight. The JIT has cloned the loop, specializing one to not have bounds checks, and then in the upfront section, it determines which path to follow based on a single check against the array’s length (cmp [rax+8],ecx/jl short M00_L02). Now in .NET 10, here’s what we get for WithSpan:

; .NET 10
; Tests.WithSpan()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       test      rax,rax
       je        short M00_L04
       lea       rcx,[rax+10]
       mov       eax,[rax+8]
M00_L00:
       mov       edx,[rdi+10]
       xor       edi,edi
       test      edx,edx
       jle       short M00_L02
       cmp       edx,eax
       jg        short M00_L03
M00_L01:
       mov       eax,edi
       mov       [rcx+rax*4],edi
       inc       edi
       cmp       edi,edx
       jl        short M00_L01
M00_L02:
       pop       rbp
       ret
M00_L03:
       cmp       edi,eax
       jae       short M00_L05
       mov       esi,edi
       mov       [rcx+rsi*4],edi
       inc       edi
       cmp       edi,edx
       jl        short M00_L03
       jmp       short M00_L02
M00_L04:
       xor       ecx,ecx
       xor       eax,eax
       jmp       short M00_L00
M00_L05:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 75

As with WithArray in .NET 9, WithSpan for .NET 10 has the loop cloned, with the M00_L03 block containing the bounds check on each iteration, and the M00_L01 block eliding the bounds check on each iteration.

The JIT gains more cloning abilities in .NET 10, as well. dotnet/runtime#110020, dotnet/runtime#108604, and dotnet/runtime#110483 make it possible for the JIT to clone try/finally blocks, whereas previously it would immediately bail out of cloning any regions containing such constructs. This might seem niche, but it’s actually quite valuable when you consider that foreach’ing over an enumerable typically involves a hidden try/finally, with the finally calling the enumerator’s Dispose.
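
To see that hidden try/finally, here’s roughly what the compiler lowers a foreach over an IEnumerable<int> into (a sketch; the real lowering differs in small details):

static int Sum(IEnumerable<int> values)
{
    int sum = 0;
    IEnumerator<int> e = values.GetEnumerator();
    try
    {
        while (e.MoveNext())
        {
            sum += e.Current;
        }
    }
    finally
    {
        e.Dispose(); // the hidden try/finally that previously blocked cloning
    }
    return sum;
}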

Many of these different optimizations interact with each other. Dynamic PGO triggers a form of cloning, as part of the guarded devirtualization (GDV) mentioned earlier: if the instrumentation data reveals that a particular virtual call is generally performed on an instance of a specific type, the JIT can clone the resulting code into one path specific to that type and another path that handles any type. That then enables the specific-type code path to devirtualize the call and possibly inline it. And if it inlines it, that then provides more opportunities for the JIT to see that an object doesn’t escape, and potentially stack allocate it. dotnet/runtime#111473, dotnet/runtime#116978, dotnet/runtime#116992, dotnet/runtime#117222, and dotnet/runtime#117295 enable that, enhancing escape analysis to determine if an object only escapes when such a generated type test fails (when the target object isn’t of the expected common type).
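
Expressed in C#, the guarded shape GDV produces looks roughly like the following sketch (the guard and both paths are synthesized by the JIT rather than written by hand):

static int Sum(IEnumerable<int> values)
{
    int sum = 0;
    if (values is int[] array) // guard synthesized from profile data
    {
        // Fast path: calls devirtualized and inlined; with conditional escape
        // analysis, enumerator state needn't be heap allocated here.
        foreach (int value in array) sum += value;
    }
    else
    {
        // Fallback path: normal interface dispatch for any other IEnumerable<int>.
        foreach (int value in values) sum += value;
    }
    return sum;
}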

I want to pause for a moment, because my words thus far aren’t nearly enthusiastic enough to highlight the magnitude of what this enables. The dotnet/runtime repo uses an automated performance analysis system which flags when benchmarks significantly improve or regress and ties those changes back to the responsible PR. This is what it looked like for this PR:

[Image: conditional escape analysis triggering many benchmark improvements]

We can see why this is so good from a simple example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _values = Enumerable.Range(1, 100).ToArray();

    [Benchmark]
    public int Sum() => Sum(_values);

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static int Sum(IEnumerable<int> values)
    {
        int sum = 0;
        foreach (int value in values)
        {
            sum += value;
        }
        return sum;
    }
}

With dynamic PGO, the instrumented code for Sum will see that values is generally an int[], and it’ll be able to emit a specialized code path in the optimized Sum implementation for when it is. And then with this ability to do conditional escape analysis, for the common path the JIT can see that the resulting GetEnumerator produces an IEnumerator<int> that never escapes, such that along with all of the relevant methods being devirtualized and inlined, the enumerator can be stack allocated.

| Method | Runtime   | Mean      | Ratio | Allocated | Alloc Ratio |
|--------|-----------|-----------|-------|-----------|-------------|
| Sum    | .NET 9.0  | 109.86 ns | 1.00  | 32 B      | 1.00        |
| Sum    | .NET 10.0 | 35.45 ns  | 0.32  | –         | 0.00        |

Just think about how many places in your apps and services you enumerate collections like this, and you can see why it’s such an exciting improvement. Note that these cases don’t always even require PGO. Consider a case like this:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly IEnumerable<int> s_values = new int[] { 1, 2, 3, 4, 5 };

    [Benchmark]
    public int Sum()
    {
        int sum = 0;
        foreach (int value in s_values)
        {
            sum += value;
        }
        return sum;
    }
}

Here, the JIT can see that even though s_values is typed as IEnumerable<int>, it’s always actually an int[]. In that case, dotnet/runtime#111948 enables the return type to be retyped in the JIT as int[] and the enumerator can be stack allocated.

| Method | Runtime   | Mean      | Ratio | Allocated | Alloc Ratio |
|--------|-----------|-----------|-------|-----------|-------------|
| Sum    | .NET 9.0  | 16.341 ns | 1.00  | 32 B      | 1.00        |
| Sum    | .NET 10.0 | 2.059 ns  | 0.13  | –         | 0.00        |

Of course, too much cloning can be a bad thing, in particular as it increases code size. dotnet/runtime#108771 employs a heuristic to determine whether loops that can be cloned should be cloned; the larger the loop, the less likely it is to be cloned.

Inlining

“Inlining”, which replaces a call to a function with a copy of that function’s implementation, has always been a critically important optimization. It’s easy to think about the benefits of inlining as just being about avoiding the overhead of a call, and while that can be meaningful (especially when considering security mechanisms like Intel’s Control-Flow Enforcement Technology, which slightly increases the cost of calls), generally the most benefit from inlining comes from knock-on benefits. Just as a simple example, if you have code like:

int i = Divide(10, 5);

static int Divide(int n, int d) => n / d;

if Divide doesn’t get inlined, then when Divide is called, it’ll need to perform the actual idiv, which is a relatively expensive operation. In contrast, if Divide is inlined, then the call site becomes:

int i = 10 / 5;

which can be evaluated at compile time and becomes just:

int i = 2;

More compelling examples were already seen throughout the discussion of escape analysis and stack allocation, which depend heavily on the ability to inline methods. Given the increased importance of inlining, it’s gotten even more focus in .NET 10.

Some of the .NET work related to inlining is about enabling more kinds of things to be inlined. Historically, a variety of constructs present in a method would prevent that method from even being considered for inlining. Arguably the most well known of these is exception handling: methods with exception handling clauses, e.g. try/catch or try/finally, would not be inlined. Even a simple method like M in this example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private readonly object _o = new();

    [Benchmark]
    public int Test()
    {
        M(_o);
        return 42;
    }

    private static void M(object o)
    {
        Monitor.Enter(o);
        try
        {
        }
        finally
        {
            Monitor.Exit(o);
        }
    }
}

does not get inlined on .NET 9:

; .NET 9
; Tests.Test()
       push      rax
       mov       rdi,[rdi+8]
       call      qword ptr [78F199864EE8]; Tests.M(System.Object)
       mov       eax,2A
       add       rsp,8
       ret
; Total bytes of code 21

But with a plethora of PRs, in particular dotnet/runtime#112968, dotnet/runtime#113023, dotnet/runtime#113497, and dotnet/runtime#112998, methods containing try/finally are no longer blocked from inlining (try/catch regions are still a challenge). For the same benchmark on .NET 10, we now get this assembly:

; .NET 10
; Tests.Test()
       push      rbp
       push      rbx
       push      rax
       lea       rbp,[rsp+10]
       mov       rbx,[rdi+8]
       test      rbx,rbx
       je        short M00_L03
       mov       rdi,rbx
       call      00007920A0EE65E0
       test      eax,eax
       je        short M00_L02
M00_L00:
       mov       rdi,rbx
       call      00007920A0EE6D50
       test      eax,eax
       jne       short M00_L04
M00_L01:
       mov       eax,2A
       add       rsp,8
       pop       rbx
       pop       rbp
       ret
M00_L02:
       mov       rdi,rbx
       call      qword ptr [79202393C1F8]
       jmp       short M00_L00
M00_L03:
       xor       edi,edi
       call      qword ptr [79202393C1C8]
       int       3
M00_L04:
       mov       edi,eax
       mov       rsi,rbx
       call      qword ptr [79202393C1E0]
       jmp       short M00_L01
; Total bytes of code 86

The details of the assembly don’t matter, other than that it’s a whole lot more than was there before, because we’re now looking in large part at the implementation of M. In addition to methods with try/finally now being inlineable, other improvements have also been made around exception handling. For example, dotnet/runtime#110273 and dotnet/runtime#110464 enable the removal of try/catch and try/fault blocks if the JIT can prove the try block can’t possibly throw. Consider this:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "i")]
public partial class Tests
{
    [Benchmark]
    [Arguments(42)]
    public int Test(int i)
    {
        try
        {
            i++;
        }
        catch
        {
            Console.WriteLine("Exception caught");
        }
        return i;
    }
}

There’s nothing the try block here can do that will result in an exception being thrown (assuming the developer hasn’t enabled checked arithmetic, in which case it could possibly throw an OverflowException), yet on .NET 9 we get this assembly:

; .NET 9
; Tests.Test(Int32)
       push      rbp
       sub       rsp,10
       lea       rbp,[rsp+10]
       mov       [rbp-10],rsp
       mov       [rbp-4],esi
       mov       eax,[rbp-4]
       inc       eax
       mov       [rbp-4],eax
M00_L00:
       mov       eax,[rbp-4]
       add       rsp,10
       pop       rbp
       ret
       push      rbp
       sub       rsp,10
       mov       rbp,[rdi]
       mov       [rsp],rbp
       lea       rbp,[rbp+10]
       mov       rdi,784B08950018
       call      qword ptr [784B0DE44EE8]
       lea       rax,[M00_L00]
       add       rsp,10
       pop       rbp
       ret
; Total bytes of code 79

Now on .NET 10, the JIT is able to elide the catch and remove all ceremony related to the try, because it can see that ceremony is pointless overhead.

; .NET 10
; Tests.Test(Int32)
       lea       eax,[rsi+1]
       ret
; Total bytes of code 4

That’s true even when the code in the try calls into other methods that are then inlined, exposing their contents to the JIT’s analysis.

(As an aside, the JIT was already able to remove a try/finally when the finally was empty, but dotnet/runtime#108003 catches even more cases, checking for empty finallys again after most other optimizations have run, in case they revealed additional empty blocks.)
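
As a contrived sketch of that: once Cleanup below is inlined, the finally body becomes empty, at which point the whole try/finally region can be dropped:

static int WithCleanup(int i)
{
    try
    {
        return i + 1;
    }
    finally
    {
        Cleanup(); // inlines to nothing, leaving an empty finally behind
    }
}

static void Cleanup() { } // intentionally empty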

Another example is “GVM”. Previously, any method that called a GVM, or generic virtual method (a virtual method with a generic type parameter), would be blocked from being inlined.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Base _base = new();

    [Benchmark]
    public int Test()
    {
        M();
        return 42;
    }

    private void M() => _base.M<object>();
}

class Base
{
    public virtual void M<T>() { }
}

On .NET 9, the above results in this assembly:

; .NET 9
; Tests.Test()
       push      rax
       call      qword ptr [728ED5664FD8]; Tests.M()
       mov       eax,2A
       add       rsp,8
       ret
; Total bytes of code 17

Now on .NET 10, with dotnet/runtime#116773, M can be inlined.

; .NET 10
; Tests.Test()
       push      rbp
       push      rbx
       push      rax
       lea       rbp,[rsp+10]
       mov       rbx,[rdi+8]
       mov       rdi,rbx
       mov       rsi,offset MT_Base
       mov       rdx,78034C95D2A0
       call      System.Runtime.CompilerServices.VirtualDispatchHelpers.VirtualFunctionPointer(System.Object, IntPtr, IntPtr)
       mov       rdi,rbx
       call      rax
       mov       eax,2A
       add       rsp,8
       pop       rbx
       pop       rbp
       ret
; Total bytes of code 57

Another area of investment with inlining has to do with the heuristics around when methods should be inlined. Just inlining everything would be bad; inlining copies code, which results in more code, which can have significant negative repercussions. For example, inlining’s increased code size puts more pressure on caches. Processors have an instruction cache, a small amount of super fast memory in a CPU that stores recently used instructions, making them really fast to access again the next time they’re needed (such as the next iteration through a loop, or the next time that same function is called). Consider a method M, and 100 call sites to M that are all being accessed. If all of those share the same instructions for M, because the 100 call sites are all actually calling M, the instruction cache will only need to load M’s instructions once. If all of those 100 call sites each have their own copy of M’s instructions, then all 100 copies will separately be loaded through the cache, fighting with each other and other instructions for residence. The less likely it is that instructions are in the cache, the more likely it is that the CPU will stall waiting for the instructions to be loaded from memory.

For this reason, the JIT needs to be careful about what it inlines. It tries hard to avoid inlining anything that won’t benefit (e.g. a larger method whose instructions won’t be materially influenced by the caller’s context), while also trying hard to inline anything that will materially benefit (e.g. small functions where the code required to call the function is similar in size to the contents of the function, functions with instructions that could be materially impacted by information from the call site, etc.). As part of these heuristics, the JIT has the notion of “boosts,” where observations it makes about what a method does boost the chances of that method being inlined. dotnet/runtime#114806 gives a boost to methods that appear to be returning new arrays of a small, fixed length; if those arrays can instead be allocated in the caller’s frame, the JIT might then be able to discover they don’t escape and enable them to be stack allocated. dotnet/runtime#110596 similarly looks for boxing, as the caller could possibly instead avoid the box entirely.
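
As an illustrative sketch of the first of those boosts (the method names here are hypothetical): once GetDefaults is inlined into SumDefaults, escape analysis can see that the small, fixed-length array never leaves the method, making it a candidate for stack allocation:

static int[] GetDefaults() => new int[] { 1, 2, 3, 4 }; // small, fixed-length array

static int SumDefaults()
{
    int sum = 0;
    foreach (int d in GetDefaults()) // after inlining, the array provably doesn't escape
    {
        sum += d;
    }
    return sum;
}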

For the same purpose (and also just to minimize time spent performing compilation), the JIT also maintains a budget for how much it allows to be inlined into a method compilation… once it hits that budget, it might stop inlining anything. The budgeting scheme overall works OK; however, in certain circumstances it can run out of budget at very inopportune times, for example doing a lot of inlining at top-level call sites but then running out of budget by the time it gets to small methods that are critically important to inline for good performance. To help mitigate these scenarios, dotnet/runtime#114191 and dotnet/runtime#118641 more than double the JIT’s default inlining budget.

The JIT also pays a lot of attention to the number of local variables (e.g. parameters/locals explicitly in the IL, JIT-created temporary locals, promoted struct fields, etc.) it tracks. To avoid creating too many, the JIT would stop inlining once it was already tracking 512. But as other changes have made inlining more aggressive, this (strangely hardcoded) limit gets hit more often, leaving very valuable inlinees out in the cold. dotnet/runtime#118515 removed this fixed limit and instead ties it to a large percentage of the number of locals the JIT is allowed to track (by default, this ends up almost doubling the limit used by the inliner).

Constant Folding

Constant folding is a compiler’s ability to perform operations, typically math, at compile-time rather than at run-time: given multiple constants and an expressed relationship between them, the compiler can “fold” those constants together into a new constant. So, if you have the C# code int M(int i) => i + 2 * 3;, the C# compiler does constant folding and emits that into your compilation as if you’d written int M(int i) => i + 6;. The JIT can and does also do constant folding, which is valuable especially when it’s based on information not available to the C# compiler. For example, the JIT can treat static readonly fields or IntPtr.Size or Vector128<T>.Count as constants. And the JIT can do folding across inlines. For example, if you have:

int M1(int i) => i + M2(2 * 3);
int M2(int j) => j * Environment.ProcessorCount;

the C# compiler will only be able to fold the 2 * 3, and will emit the equivalent of:

int M1(int i) => i + M2(6);
int M2(int j) => j * Environment.ProcessorCount;

but when compiling M1, the JIT can inline M2 and treat ProcessorCount as a constant (on my machine it’s 16), and produce the following assembly code for M1:

// dotnet run -c Release -f net9.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "i")]
public partial class Tests
{
    [Benchmark]
    [Arguments(42)]
    public int M1(int i) => i + M2(6);

    private int M2(int j) => j * Environment.ProcessorCount;
}
; .NET 9
; Tests.M1(Int32)
       lea       eax,[rsi+60]
       ret
; Total bytes of code 4

That’s as if the code for M1 had been public int M1(int i) => i + 96; (the displayed assembly renders hexadecimal, so the 60 is hexadecimal 0x60 and thus decimal 96).
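
static readonly fields work similarly. In a sketch like the following (names illustrative), once the containing type is initialized and the method is compiled at a higher tier, the JIT can treat s_useFastPath as the constant true and fold the branch away entirely:

static readonly bool s_useFastPath = true; // imagine this computed at startup

public static int Compute(int i) =>
    s_useFastPath ? i << 1      // with folding, the method compiles down to just this
                  : SlowPath(i);

static int SlowPath(int i) => i * 2;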

Or consider:

string M() => GetString() ?? throw new Exception();

static string GetString() => "test";

The JIT will be able to inline GetString, at which point it can see that the result is non-null and can fold away the check against the null constant, at which point it can also dead-code eliminate the throw. Constant folding is useful on its own in avoiding unnecessary work, but it also often unlocks other optimizations, like dead-code elimination and bounds-check elimination. The JIT is already quite good at finding constant folding opportunities, and gets better in .NET 10. Consider this benchmark:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "s")]
public partial class Tests
{
    [Benchmark]
    [Arguments("test")]
    public ReadOnlySpan<char> Test(string s)
    {
        s ??= "";
        return s.AsSpan();
    }
}

Here’s the assembly that gets produced for .NET 9:

; .NET 9
; Tests.Test(System.String)
       push      rbp
       mov       rbp,rsp
       mov       rax,75B5D6200008
       test      rsi,rsi
       cmove     rsi,rax
       test      rsi,rsi
       jne       short M00_L01
       xor       eax,eax
       xor       edx,edx
M00_L00:
       pop       rbp
       ret
M00_L01:
       lea       rax,[rsi+0C]
       mov       edx,[rsi+8]
       jmp       short M00_L00
; Total bytes of code 41

Of particular note are those two test rsi,rsi instructions, which are null checks. The assembly starts by loading a value into rax; that value is the address of the "" string literal. It then uses test rsi,rsi to check whether the s parameter, which was passed into this instance method in the rsi register, is null. If it is null, the cmove rsi,rax instruction sets it to the address of the "" literal. And then… it does test rsi,rsi again? That second test is the null check at the beginning of AsSpan, which looks like this:

public static ReadOnlySpan<char> AsSpan(this string? text)
{
    if (text is null) return default;

    return new ReadOnlySpan<char>(ref text.GetRawStringData(), text.Length);
}

Now with dotnet/runtime#111985, that second null check, along with others, can be folded, resulting in this:

; .NET 10
; Tests.Test(System.String)
       mov       rax,7C01C4600008
       test      rsi,rsi
       cmove     rsi,rax
       lea       rax,[rsi+0C]
       mov       edx,[rsi+8]
       ret
; Total bytes of code 25

Similar impact comes from dotnet/runtime#108420, which is also able to fold a different class of null checks.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "condition")]
public partial class Tests
{
    [Benchmark]
    [Arguments(true)]
    public bool Test(bool condition)
    {
        string tmp = condition ? GetString1() : GetString2();
        return tmp is not null;
    }

    private static string GetString1() => "Hello";
    private static string GetString2() => "World";
}

In this benchmark, we can see that neither GetString1 nor GetString2 returns null, and thus the is not null check shouldn’t be necessary. The JIT in .NET 9 couldn’t see that, but its improved .NET 10 self can.

; .NET 9
; Tests.Test(Boolean)
       mov       rax,7407F000A018
       mov       rcx,7407F000A050
       test      sil,sil
       cmove     rax,rcx
       test      rax,rax
       setne     al
       movzx     eax,al
       ret
; Total bytes of code 37

; .NET 10
; Tests.Test(Boolean)
       mov       eax,1
       ret
; Total bytes of code 6

Constant folding also applies to SIMD (Single Instruction Multiple Data), instructions that enable processing multiple pieces of data at once rather than only one element at a time. dotnet/runtime#117099 and dotnet/runtime#117572 both enable more SIMD comparison operations to participate in folding.

Code Layout

When the JIT compiler generates assembly from the IL emitted by the C# compiler, it organizes that code into “basic blocks”: sequences of instructions with one entry point and one exit point, no jumps inside, no branches out except at the end. These blocks can then be moved around as a unit, and the order in which these blocks are placed in memory is referred to as “code layout” or “basic block layout.” This ordering can have a significant performance impact because modern CPUs rely heavily on an instruction cache and on branch prediction to keep things moving fast. If frequently executed (“hot”) blocks are close together and follow a common execution path, the CPU can execute them with fewer cache misses and fewer mispredicted jumps. If the layout is poor, where the hot code is split into pieces far apart from each other, or where rarely executed (“cold”) code sits in between, the CPU can spend more time jumping around and refilling caches than doing actual work.

Consider a tight loop executed millions of times. A good layout keeps the loop entry, body, and backward edge (the jump back to the beginning of the body to do the next iteration) right next to each other, letting the CPU fetch them straight from the cache. In a bad layout, that loop might be interwoven with unrelated cold blocks (say, a catch block for a try in the loop), forcing the CPU to load instructions from different places and disrupting the flow. Similarly, for an if block, the likely path should generally be the next block so no jump is required, with the unlikely branch behind a short jump away, as that better aligns with the sensibilities of branch predictors. Code layout heuristics control how that happens, and as a result, how efficiently the resulting code executes.
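
To make that last point concrete, here’s a small sketch; a good layout lets the likely path fall through and pushes the cold throwing path out of line:

static void Increment(int[] data, int i)
{
    if ((uint)i < (uint)data.Length) // likely: falls through into the hot block
    {
        data[i]++;                   // hot: laid out immediately after the branch
    }
    else
    {
        ThrowOutOfRange();           // cold: placed behind a forward jump, off the hot path
    }
}

static void ThrowOutOfRange() => throw new ArgumentOutOfRangeException();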

When determining the starting layout of the blocks (before additional optimizations are done for the layout), dotnet/runtime#108903 employs a “loop-aware reverse post-order” traversal. A reverse post-order traversal is an algorithm for visiting the nodes in a control flow graph such that each block appears after its predecessors. The “loop aware” part means the traversal recognizes loops as units, effectively creating a block around the whole loop, and tries to keep the whole loop together as the layout algorithm moves things around. The intent here is to start the larger layout optimizations from a more sensible place, reducing the amount of later reshuffling and situations where loop bodies get broken up.
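
For the curious, here’s a minimal sketch of a plain (non-loop-aware) reverse post-order traversal over a toy control flow graph; the Block type here is hypothetical, purely for illustration:

class Block
{
    public List<Block> Successors { get; } = new();
}

static List<Block> ReversePostOrder(Block entry)
{
    var visited = new HashSet<Block>();
    var postOrder = new List<Block>();

    void Visit(Block b)
    {
        if (!visited.Add(b)) return;               // don't revisit (handles cycles)
        foreach (Block s in b.Successors) Visit(s);
        postOrder.Add(b);                          // record after all successors (post-order)
    }

    Visit(entry);
    postOrder.Reverse(); // reversed, each block now precedes its successors on forward edges
    return postOrder;
}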

In the extreme, layout is essentially the traveling salesman problem. The JIT must decide the order of basic blocks so that control transfers follow short, predictable paths and make efficient use of instruction cache and branch prediction. Just like the salesman visiting cities with minimal total travel distance, the compiler is trying to arrange blocks so that the “distance” between blocks, which might be measured in bytes or instruction fetch cost or something similar, is minimized. For any meaningfully-sized set of blocks, this is prohibitively expensive to compute optimally, as the number of possible orderings grows factorially with the number of blocks. Thus, the JIT has to rely on approximations rather than attempting an exact solution. One such approximation it employs now as of dotnet/runtime#103450 (and then tweaked further in dotnet/runtime#109741 and dotnet/runtime#109835) is a “3-opt,” which really just means that rather than considering all blocks together, it looks at only three and tries to produce an optimal ordering amongst those (there are only eight possible orderings to be checked). The JIT can choose to iterate through sets of three blocks until either it doesn’t see any more improvements or hits a self-imposed limit. Specifically when handling backward jumps, with dotnet/runtime#110277, it expands this “3-opt” to “4-opt” (four blocks).

.NET 10 also does a better job of factoring PGO data into layout. With dynamic PGO, the JIT is able to gather instrumentation data from an initial compilation and then use the results of that profiling to impact an optimized re-compilation. That data can lead to conclusions about which blocks are hot or cold, and which direction branches take, all information that’s valuable for layout optimization. However, data can sometimes be missing from these profiles, so the JIT has a “profile synthesis” algorithm that makes realistic guesses in order to fill in the gaps (if you’ve read or seen “Jurassic Park,” this is the JIT equivalent of filling in gaps in the dinosaur DNA sequences with DNA from present-day frogs). With dotnet/runtime#111915, that repairing of the profile data is now performed just before layout, so that layout has a more complete picture.

Let’s take a concrete example of all this. Here I’ve extracted the core function from MemoryExtensions.BinarySearch:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _values = Enumerable.Range(0, 512).ToArray();

    [Benchmark]
    public int BinarySearch()
    {
        int[] values = _values;
        return BinarySearch(ref values[0], values.Length, 256);
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static int BinarySearch<T, TComparable>(
        ref T spanStart, int length, TComparable comparable)
        where TComparable : IComparable<T>, allows ref struct
    {
        int lo = 0;
        int hi = length - 1;
        while (lo <= hi)
        {
            int i = (int)(((uint)hi + (uint)lo) >> 1);

            int c = comparable.CompareTo(Unsafe.Add(ref spanStart, i));
            if (c == 0)
            {
                return i;
            }
            else if (c > 0)
            {
                lo = i + 1;
            }
            else
            {
                hi = i - 1;
            }
        }

        return ~lo;
    }
}

And here’s the assembly we get for .NET 9 and .NET 10, diff’d from the former to the latter:

; Tests.BinarySearch[[System.Int32, System.Private.CoreLib],[System.Int32, System.Private.CoreLib]](Int32 ByRef, Int32, Int32)
       push      rbp
       mov       rbp,rsp
       xor       ecx,ecx
       dec       esi
       js        short M01_L07
+      jmp       short M01_L03
M01_L00:
-      lea       eax,[rsi+rcx]
-      shr       eax,1
-      movsxd    r8,eax
-      mov       r8d,[rdi+r8*4]
-      cmp       edx,r8d
-      jge       short M01_L03
       mov       r9d,0FFFFFFFF
M01_L01:
       test      r9d,r9d
       je        short M01_L06
       test      r9d,r9d
       jg        short M01_L05
       lea       esi,[rax-1]
M01_L02:
       cmp       ecx,esi
-      jle       short M01_L00
-      jmp       short M01_L07
+      jg        short M01_L07
M01_L03:
+      lea       eax,[rsi+rcx]
+      shr       eax,1
+      movsxd    r8,eax
+      mov       r8d,[rdi+r8*4]
       cmp       edx,r8d
-      jg        short M01_L04
-      xor       r9d,r9d
+      jl        short M01_L00
+      cmp       edx,r8d
+      jle       short M01_L04
+      mov       r9d,1
       jmp       short M01_L01
M01_L04:
-      mov       r9d,1
+      xor       r9d,r9d
       jmp       short M01_L01
M01_L05:
       lea       ecx,[rax+1]
       jmp       short M01_L02
M01_L06:
       pop       rbp
       ret
M01_L07:
       mov       eax,ecx
       not       eax
       pop       rbp
       ret
; Total bytes of code 83

We can see that the main change here is a block that’s moved (the bulk of M01_L00 moving down to M01_L03). In .NET 9, the lo <= hi “stay in the loop” check (cmp ecx,esi) is a backward conditional branch (jle short M01_L00), where every iteration of the loop except for the last jumps back to the top (M01_L00). In .NET 10, it instead does a forward branch to exit the loop only in the rarer case, otherwise falling through to the body of the loop in the common case, and then unconditionally branching back.

GC Write Barriers

The .NET garbage collector (GC) works on a generational model, organizing the managed heap according to how long objects have been alive. The newest allocations land in “generation 0” (gen0), objects that have survived at least one collection are promoted to “generation 1” (gen1), and those that have been around for longer end up in “generation 2” (gen2). This is based on the premise that most objects are temporary, and that once an object has been around for a while, it’s likely to stick around for a while longer. Splitting up the heap into generations makes it possible to collect gen0 quickly, by only scanning the gen0 heap for remaining references to an object. The expectation is that all, or at least the vast majority, of references to a gen0 object are also in gen0. Of course, if a reference to a gen0 object snuck into gen1 or gen2, not scanning gen1 or gen2 during a gen0 collection could be, well, bad. To avoid that case, the JIT collaborates with the GC to track references from older to younger generations. Whenever there’s a reference write that could cross a generation, the JIT emits a call to a helper that tracks the information in a “card table,” and when the GC runs, it consults this table to see if it needs to scan a portion of the higher generations. That helper is referred to as a “GC write barrier.” Since a write barrier is potentially employed on every reference write, it must be super fast, and in fact the runtime has several different variations of write barriers so that the JIT can pick one optimized for the given situation. Of course, the fastest write barrier is one that doesn’t need to exist at all, so as with bounds checks, the JIT also exerts energy to try to prove when write barriers aren’t needed, eliding them when it can. And it can do so even more in .NET 10.
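
As a rough mental model (a simulation with plain integers, not the runtime’s actual implementation, which is hand-tuned assembly operating on real heap addresses), a checked write barrier does something like this:

const uint BytesPerCard = 512;                          // illustrative card granularity
static readonly byte[] s_cardTable = new byte[1 << 20]; // one byte per "card" of heap
static nuint s_ephemeralLow, s_ephemeralHigh;           // gen0/gen1 bounds, maintained by the GC

static void CheckedWriteBarrier(nuint fieldAddress, nuint valueAddress)
{
    // The reference itself has already been stored; the barrier just records it.
    // Only stores of references *to* the ephemeral generations matter: if an
    // older-generation field now points at a young object, mark the card covering
    // that field so an ephemeral GC knows to scan this region of the older heap.
    if (valueAddress >= s_ephemeralLow && valueAddress < s_ephemeralHigh)
    {
        s_cardTable[(fieldAddress / BytesPerCard) % (nuint)s_cardTable.Length] = 0xFF;
    }
}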

ref structs, referred to in runtime vernacular as “byref-like types,” can never live on the heap, which means any reference fields in them will similarly never live on the heap. As such, if the JIT can prove that a reference write is targeting a field of a ref struct, it can elide the write barrier. Consider this example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private object _object = new();

    [Benchmark]
    public MyRefStruct Test() => new MyRefStruct() { Obj1 = _object, Obj2 = _object, Obj3 = _object };

    public ref struct MyRefStruct
    {
        public object Obj1;
        public object Obj2;
        public object Obj3;
    }
}

In the .NET 9 assembly, we can see three write barriers (CORINFO_HELP_CHECKED_ASSIGN_REF) corresponding to the three fields in MyRefStruct in the benchmark:

; .NET 9
; Tests.Test()
       push      r15
       push      r14
       push      rbx
       mov       rbx,rsi
       mov       r15,[rdi+8]
       mov       rsi,r15
       mov       r14,r15
       mov       rdi,rbx
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+8]
       mov       rsi,r14
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+10]
       mov       rsi,r15
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       mov       rax,rbx
       pop       rbx
       pop       r14
       pop       r15
       ret
; Total bytes of code 59

With dotnet/runtime#111576 and dotnet/runtime#111733 in .NET 10, all of those write barriers are elided:

; .NET 10
; Tests.Test()
       mov       rax,[rdi+8]
       mov       rcx,rax
       mov       rdx,rax
       mov       [rsi],rcx
       mov       [rsi+8],rdx
       mov       [rsi+10],rax
       mov       rax,rsi
       ret
; Total bytes of code 25

Much more impactful, however, are dotnet/runtime#112060 and dotnet/runtime#112227, which have to do with “return buffers.” When a .NET method is typed to return a value, the runtime has to decide how that value gets from the callee back to the caller. For small types, like integers, floating-point numbers, pointers, or object references, the answer is simple: the value can be passed back via one or more CPU registers reserved for return values, making the operation essentially free. But not all values fit neatly into registers. Larger value types, such as structs with multiple fields, require a different strategy. In these cases, the caller allocates a “return buffer,” a block of memory, typically in the caller’s stack frame, and passes a pointer to that buffer as a hidden argument to the method. The method then writes the return value directly into that buffer in order to provide the caller with the data. When it comes to write barriers, the challenge here is that there historically hasn’t been a requirement that the return buffer be on the stack; it’s technically feasible it could have been allocated on the heap, even if it rarely or never is. And since the callee doesn’t know where the buffer lives, any object reference writes needed to be tracked with GC write barriers. We can see that with a simple benchmark:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _firstName = "Jane", _lastName = "Smith", _address = "123 Main St", _city = "Anytown";

    [Benchmark]
    public Person GetPerson() => new(_firstName, _lastName, _address, _city);

    public record struct Person(string FirstName, string LastName, string Address, string City);
}

On .NET 9, each field of the returned value type is incurring a CORINFO_HELP_CHECKED_ASSIGN_REF write barrier:

; .NET 9
; Tests.GetPerson()
       push      r15
       push      r14
       push      r13
       push      rbx
       mov       rbx,rsi
       mov       rsi,[rdi+8]
       mov       r15,[rdi+10]
       mov       r14,[rdi+18]
       mov       r13,[rdi+20]
       mov       rdi,rbx
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+8]
       mov       rsi,r15
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+10]
       mov       rsi,r14
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+18]
       mov       rsi,r13
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       mov       rax,rbx
       pop       rbx
       pop       r13
       pop       r14
       pop       r15
       ret
; Total bytes of code 81

Now in .NET 10, the calling convention has been updated to require that the return buffer live on the stack (if the caller wants the data somewhere else, it’s responsible for subsequently doing that copy). And because the return buffer is now guaranteed to be on the stack, the JIT can elide all GC write barriers as part of returning values.

; .NET 10
; Tests.GetPerson()
       mov       rax,[rdi+8]
       mov       rcx,[rdi+10]
       mov       rdx,[rdi+18]
       mov       rdi,[rdi+20]
       mov       [rsi],rax
       mov       [rsi+8],rcx
       mov       [rsi+10],rdx
       mov       [rsi+18],rdi
       mov       rax,rsi
       ret
; Total bytes of code 35

dotnet/runtime#111636 from @a74nh is also interesting from a performance perspective because, as is common in optimization, it trades off one thing for another. Prior to this change, Arm64 had one universal write barrier helper for all GC modes. This change brings Arm64 in line with x64 by routing through a WriteBarrierManager that selects among multiple JIT_WriteBarrier variants based on runtime configuration. In doing so, it makes each Arm64 write barrier a bit more expensive, by adding region checks and moving to a region-aware card marking scheme, but in exchange it lets the GC do less work: fewer cards in the card table get marked, and the GC can scan more precisely. dotnet/runtime#106191 also helps reduce the cost of write barriers on Arm64 by tightening the hot-path comparisons and eliminating some avoidable saves and restores.

Instruction Sets

.NET continues to see meaningful optimizations and improvements across all supported architectures, along with various architecture-specific improvements. Here are a handful of examples.

Arm SVE

APIs for Arm SVE were introduced in .NET 9. As noted in the Arm SVE section of last year’s post, enabling SVE is a multi-year effort, and in .NET 10, support is still considered experimental. However, the support has continued to be improved and extended, with PRs like dotnet/runtime#115775 from @snickolls-arm adding BitwiseSelect methods, dotnet/runtime#117711 from @jacob-crawley adding MaxPairwise and MinPairwise methods, and dotnet/runtime#117051 from @jonathandavies-arm adding VectorTableLookup methods.

Arm64

dotnet/runtime#111893 from @jonathandavies-arm, dotnet/runtime#111904 from @jonathandavies-arm, dotnet/runtime#111452 from @jonathandavies-arm, dotnet/runtime#112235 from @jonathandavies-arm, and dotnet/runtime#111797 from @snickolls-arm all improved .NET’s support for utilizing Arm64’s multi-operation compound instructions. For example, when implementing a compare and branch, rather than emitting a cmp against 0 followed by a beq instruction, the JIT may now emit a cbz (“Compare and Branch on Zero”) instruction.

APX

Intel’s Advanced Performance Extensions (APX) was announced in 2023 as an extension of the x86/x64 instruction set. It expands the number of general-purpose registers from 16 to 32 and adds new instructions such as conditional operations designed to reduce memory traffic, improve performance, and lower power consumption. dotnet/runtime#106557 from @Ruihan-Yin, dotnet/runtime#108796 from @Ruihan-Yin, and dotnet/runtime#113237 from @Ruihan-Yin essentially teach the JIT how to speak the new dialect of assembly code (the REX and expanded EVEX encodings), while dotnet/runtime#108799 from @DeepakRajendrakumaran updates the JIT to be able to use the expanded set of registers, and dotnet/runtime#116035 from @DeepakRajendrakumaran enables new push and pop instructions for working with such registers. The most impactful new instructions in APX are around conditional compares (ccmp), a concept the JIT already supports from targeting other instruction sets, and dotnet/runtime#111072 from @anthonycanino, dotnet/runtime#112153 from @anthonycanino, and dotnet/runtime#116445 from @khushal1996 all teach the JIT how to make good use of these new instructions with APX.

AVX512

.NET 8 added broad support for AVX512, and .NET 9 significantly improved its handling and adoption throughout the core libraries. .NET 10 includes a plethora of additional related optimizations:

  • dotnet/runtime#109258 from @saucecontrol and dotnet/runtime#109267 from @saucecontrol expand the number of places the JIT is able to use EVEX embedded broadcasts, a feature that lets vector instructions read a single scalar element from memory and implicitly replicate it to all the lanes of the vector, without needing a separate broadcast or shuffle operation.
  • dotnet/runtime#108824 removes a redundant sign extension from broadcasts.
  • dotnet/runtime#116117 from @alexcovington improves the code generated for Vector.Max and Vector.Min when AVX512 is supported.
  • dotnet/runtime#109474 from @saucecontrol improves “containment” (where an instruction can be eliminated by having its behaviors fully encapsulated by another instruction) for AVX512 widening intrinsics (similar containment-related improvements were made in dotnet/runtime#110736 from @saucecontrol and dotnet/runtime#111778 from @saucecontrol).
  • dotnet/runtime#111853 from @saucecontrol improves Vector128/256/512.Dot to be better accelerated with AVX512.
  • dotnet/runtime#110195, dotnet/runtime#110307, and dotnet/runtime#117118 all improve how vector masks are handled. In AVX512, masks are special registers that can be included as part of various instructions to control which subset of vector elements should be utilized (each bit in a mask corresponds to one element in the vector). This enables operating on only part of a vector without needing extra branching or shuffling.
  • dotnet/runtime#115981 improves zeroing (where the JIT emits instructions to zero out memory, often as part of initializing a stack frame) on AVX512. After zeroing as much as it can with 64-byte instructions, it was falling back to using 16-byte instructions, when it could have used 32-byte instructions.
  • dotnet/runtime#110662 improves the code generated for ExtractMostSignificantBits (which is used by many of the searching algorithms in the core libraries) when working with short and ushort (and char, as most of those core library implementations reinterpret cast char as one of the others) by using EVEX mask support.
  • dotnet/runtime#113864 from @saucecontrol improves the code generated for ConditionalSelect when not used with mask registers.

AVX10.2

.NET 9 added support and intrinsics for the AVX10.1 instruction set. With dotnet/runtime#111209 from @khushal1996, .NET 10 adds support and intrinsics for the AVX10.2 instruction set. dotnet/runtime#112535 from @khushal1996 optimizes floating-point min/max operations with AVX10.2 instructions, while dotnet/runtime#111775 from @khushal1996 enables floating-point conversions to utilize AVX10.2.

GFNI

dotnet/runtime#109537 from @saucecontrol adds intrinsics for the GFNI (Galois Field New Instructions) instruction set, which can be used for accelerating operations over the Galois field GF(2^8). These are common in cryptography, error correction, and data encoding.

VPCLMULQDQ

VPCLMULQDQ is an x86 instruction set extension that adds vector support to the older PCLMULQDQ instruction, which performs carry-less multiplication over 64-bit integers. dotnet/runtime#109137 from @saucecontrol adds new intrinsic APIs for VPCLMULQDQ.

Miscellaneous

Many more PRs than the ones I’ve already called out have gone into the JIT this release. Here are a few more:

  • Eliminating some covariance checks. Writing into arrays of reference types can require “covariance checks.” Imagine you have a class Base and two derived types Derived1 : Base and Derived2 : Base. Since arrays in .NET are covariant, I can have a Derived1[] and cast it successfully to a Base[], but under the covers that’s still a Derived1[]. That means, for example, that any attempt to store a Derived2 into that array should fail at runtime, even if it compiles. To achieve that, the JIT needs to insert such covariance checks when writing into arrays, but just like with bounds checking and write barriers, the JIT can elide those checks when it can prove statically that they’re not necessary. Such an example is with sealed types. If the JIT sees an array of type T[] and T is known to be sealed, T[] must exactly be a T[] and not some DerivedT[], because there can’t be a DerivedT. So with a benchmark like this:
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private List<string> _list = new() { "hello" };

        [Benchmark]
        public void Set() => _list[0] = "world";
    }

    as long as the JIT can see that the array underlying the List<string> is a string[] (string is sealed), it shouldn’t need a covariance check. In .NET 9, we get this:

    ; .NET 9
    ; Tests.Set()
           push      rbx
           mov       rbx,[rdi+8]
           cmp       dword ptr [rbx+10],0
           je        short M00_L00
           mov       rdi,[rbx+8]
           xor       esi,esi
           mov       rdx,78914920A038
           call      System.Runtime.CompilerServices.CastHelpers.StelemRef(System.Object[], IntPtr, System.Object)
           inc       dword ptr [rbx+14]
           pop       rbx
           ret
    M00_L00:
           call      qword ptr [78D1F80558A8]
           int       3
    ; Total bytes of code 44

    Note that CastHelpers.StelemRef call… that’s the helper that performs the write with the covariance check. But now in .NET 10, thanks to dotnet/runtime#107116 (which teaches the JIT how to resolve the exact type for the field of the closed generic), we get this:

    ; .NET 10
    ; Tests.Set()
           push      rbp
           mov       rbp,rsp
           mov       rax,[rdi+8]
           cmp       dword ptr [rax+10],0
           je        short M00_L00
           mov       rcx,[rax+8]
           mov       edx,[rcx+8]
           test      rdx,rdx
           je        short M00_L01
           mov       rdx,75E2B9009A40
           mov       [rcx+10],rdx
           inc       dword ptr [rax+14]
           pop       rbp
           ret
    M00_L00:
           call      qword ptr [762368116760]
           int       3
    M00_L01:
           call      CORINFO_HELP_RNGCHKFAIL
           int       3
    ; Total bytes of code 58

    No covariance check, thank you very much.

  • More strength reduction. “Strength reduction” is a classic compiler optimization that replaces more expensive operations, like multiplications, with cheaper ones, like additions. In .NET 9, this was used to transform indexed loops that used multiplied offsets (e.g. index * elementSize) into loops that simply incremented a pointer-like offset (e.g. offset += elementSize), cutting down on arithmetic overhead and improving performance. In .NET 10, strength reduction has been extended, in particular with dotnet/runtime#110222. This enables the JIT to detect multiple loop induction variables with different step sizes and strength-reduce them by leveraging their greatest common divisor (GCD). Essentially, it creates a single primary induction variable based on the GCD of the varying step sizes, and then recovers each original induction variable by appropriately scaling. Consider this example:
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "numbers")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments("128514801826028643102849196099776734920914944609068831724328541639470403818631040")]
        public int[] Parse(string numbers)
        {
            int[] results = new int[numbers.Length];
            for (int i = 0; i < numbers.Length; i++)
            {
                results[i] = numbers[i] - '0';
            }
            return results;
        }
    }

    In this benchmark, we’re iterating through an input string, which is a collection of 2-byte char elements, and we’re storing the results into an array of 4-byte int elements. The core loop in the .NET 9 assembly looks like this:

    ; .NET 9
    M00_L00:
           mov       edx,ecx
           movzx     edi,word ptr [rbx+rdx*2+0C]
           add       edi,0FFFFFFD0
           mov       [rax+rdx*4+10],edi
           inc       ecx
           cmp       r15d,ecx
           jg        short M00_L00

    The movzx edi,word ptr [rbx+rdx*2+0C] is the read of numbers[i], and the mov [rax+rdx*4+10],edi is the assignment to results[i]. rdx here is i, so each iteration is effectively having to do i*2 to compute the byte offset of the char at index i, and similarly do i*4 to compute the byte offset of the int at index i. Now here’s the .NET 10 assembly:

    ; .NET 10
    M00_L00:
           movzx     edx,word ptr [rbx+rcx+0C]
           add       edx,0FFFFFFD0
           mov       [rax+rcx*2+10],edx
           add       rcx,2
           dec       r15d
           jne       short M00_L00

    The multiplication in the numbers[i] read is gone. Instead, the code can just increment rcx by 2 on each iteration, treating that as the byte offset of the ith char, and then instead of multiplying by 4 to compute the int offset, it just multiplies rcx by 2.

  • CSE integration with SSA. As with most compilers, the JIT employs common subexpression elimination (CSE) to find identical computations and avoid doing them repeatedly. dotnet/runtime#106637 teaches the JIT how to do so in a more consistent manner by more fully integrating CSE with its Static Single Assignment (SSA) representation. This in turn allows for more optimizations to kick in, e.g. some of the strength reduction done around loop induction variables in .NET 9 wasn’t applying as much as it should have, and now it will.
  • return someCondition ? true : false. There are often multiple ways to represent the same thing, but it often happens in compilers that certain patterns will be recognized during optimization while other equivalent ones won’t, and it can therefore behoove the compiler to first normalize the representations to all use the better recognized one. There’s a really common and interesting case of this with return someCondition, where, for reasons relating to the JIT’s internal representation, the JIT is better able to optimize with the equivalent return someCondition ? true : false. dotnet/runtime#107499 normalizes to the latter. As an example of this, consider this benchmark:
    // dotnet run -c Release -f net9.0 --filter "*"
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "i")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(42)]
        public bool Test1(int i)
        {
            if (i > 10 && i < 20) return true;
            return false;
        }

        [Benchmark]
        [Arguments(42)]
        public bool Test2(int i) => i > 10 && i < 20;
    }

    On .NET 9, that results in this assembly code for Test1:

    ; .NET 9
    ; Tests.Test1(Int32)
           sub       esi,0B
           cmp       esi,8
           setbe     al
           movzx     eax,al
           ret
    ; Total bytes of code 13

    The JIT has successfully recognized that it can change the two comparisons to instead be a subtraction and a single comparison, as if the i > 10 && i < 20 were instead written as (uint)(i - 11) <= 8. But for Test2, .NET 9 produces this:

    ; .NET 9
    ; Tests.Test2(Int32)
           xor       eax,eax
           cmp       esi,14
           setl      cl
           movzx     ecx,cl
           cmp       esi,0A
           cmovg     eax,ecx
           ret
    ; Total bytes of code 18

    Because of how the return condition is being represented internally by the JIT, it’s missing this particular optimization, and the assembly code more directly reflects what was written in the C#. But in .NET 10, because of this normalization, we get this for Test2, exactly what we got for Test1:

    ; .NET 10
    ; Tests.Test2(Int32)
           sub       esi,0B
           cmp       esi,8
           setbe     al
           movzx     eax,al
           ret
    ; Total bytes of code 13
  • Bit tests. The C# compiler has a lot of flexibility in how it emits switch and is expressions. Consider a case like this: c is ' ' or '\t' or '\r' or '\n'. It could emit that as the equivalent of a series of cascading if/else branches, as an IL switch instruction, as a bit test, or as combinations of those. The C# compiler, though, doesn’t have all of the information the JIT has, such as whether the process is 32-bit or 64-bit, or knowledge of what instructions cost on given hardware. With dotnet/runtime#107831, the JIT will now recognize more such expressions that can be implemented as a bit test and generate the code accordingly.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using System.Runtime.CompilerServices;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "c")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments('s')]
        public void Test(char c)
        {
            if (c is ' ' or '\t' or '\r' or '\n' or '.')
            {
                Handle(c);
            }

            [MethodImpl(MethodImplOptions.NoInlining)]
            static void Handle(char c) { }
        }
    }
    | Method | Runtime   | Mean      | Ratio | Code Size |
    |--------|-----------|-----------|-------|-----------|
    | Test   | .NET 9.0  | 0.4537 ns | 1.02  | 58 B      |
    | Test   | .NET 10.0 | 0.1304 ns | 0.29  | 44 B      |

    It’s also common to see bit tests implemented in C# against shifted values; a constant mask is created with bits set at various indices, and then an incoming value to check is tested by shifting a bit to the corresponding index and seeing whether it aligns with one in the mask. For example, here is how Regex tests to see whether a provided UnicodeCategory is one of those that composes the “word” class (`\w`):

    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using System.Globalization;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "uc")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(UnicodeCategory.DashPunctuation)]
        public bool Test(UnicodeCategory uc) => (WordCategoriesMask & (1 << (int)uc)) != 0;

        private const int WordCategoriesMask =
            1 << (int)UnicodeCategory.UppercaseLetter |
            1 << (int)UnicodeCategory.LowercaseLetter |
            1 << (int)UnicodeCategory.TitlecaseLetter |
            1 << (int)UnicodeCategory.ModifierLetter |
            1 << (int)UnicodeCategory.OtherLetter |
            1 << (int)UnicodeCategory.NonSpacingMark |
            1 << (int)UnicodeCategory.DecimalDigitNumber |
            1 << (int)UnicodeCategory.ConnectorPunctuation;
    }

    Previously, the JIT would end up emitting that similar to how it’s written: a shift followed by a test. Now with dotnet/runtime#111979 from @varelen, it can emit it as a bit test.

    ; .NET 9
    ; Tests.Test(System.Globalization.UnicodeCategory)
           mov       eax,1
           shlx      eax,eax,esi
           test      eax,4013F
           setne     al
           movzx     eax,al
           ret
    ; Total bytes of code 22

    ; .NET 10
    ; Tests.Test(System.Globalization.UnicodeCategory)
           mov       eax,4013F
           bt        eax,esi
           setb      al
           movzx     eax,al
           ret
    ; Total bytes of code 15
  • Redundant sign extensions. With dotnet/runtime#111305, the JIT can now remove more redundant sign extensions (a sign extension takes a smaller type, e.g. int, and converts it to a larger type, e.g. long, while preserving the value’s sign). For example, with a test like this: public ulong Test(int x) => (uint)x < 10 ? (ulong)x << 60 : 0, the JIT can now emit a mov (just copy the bits) instead of movsxd (move with sign extension), since it knows from the first comparison that the shift will only ever be performed with a non-negative x. A harness for that expression is shown below.
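    In the style of the other benchmarks in this post, a minimal harness for that expression might look like this (the [Arguments] value is arbitrary):

    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "x")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(5)]
        public ulong Test(int x) => (uint)x < 10 ? (ulong)x << 60 : 0;
    }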
  • Better division with BMI2. If the BMI2 instruction set is available, with dotnet/runtime#116198 from @Daniel-Svensson the JIT can now use the mulx instruction (“Unsigned Multiply Without Affecting Flags”) to implement integer division, e.g.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "value")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(12345)]
        public ulong Div10(ulong value) => value / 10;
    }

    results in:

    ; .NET 9
    ; Tests.Div10(UInt64)
           mov       rdx,0CCCCCCCCCCCCCCCD
           mov       rax,rsi
           mul       rdx
           mov       rax,rdx
           shr       rax,3
           ret
    ; Total bytes of code 24

    ; .NET 10
    ; Tests.Div10(UInt64)
           mov       rdx,0CCCCCCCCCCCCCCCD
           mulx      rax,rax,rsi
           shr       rax,3
           ret
    ; Total bytes of code 20
  • Better range comparison. When comparing a ulong expression against uint.MaxValue, rather than being emitted as a comparison, with dotnet/runtime#113037 from @shunkino it can be handled more efficiently as a shift.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "x")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(12345)]
        public bool Test(ulong x) => x <= uint.MaxValue;
    }

    resulting in:

    ; .NET 9
    ; Tests.Test(UInt64)
           mov       eax,0FFFFFFFF
           cmp       rsi,rax
           setbe     al
           movzx     eax,al
           ret
    ; Total bytes of code 15

    ; .NET 10
    ; Tests.Test(UInt64)
           shr       rsi,20
           sete      al
           movzx     eax,al
           ret
    ; Total bytes of code 11
  • Better dead branch elimination. The JIT’s branch optimizer is already able to use implications from comparisons to statically determine the outcome of other branches. For example, if I have this:
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "x")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(42)]
        public void Test(int x)
        {
            if (x > 100)
            {
                if (x > 10)
                {
                    Console.WriteLine();
                }
            }
        }
    }

    the JIT generates this on .NET 9:

    ; .NET 9
    ; Tests.Test(Int32)
           cmp       esi,64
           jg        short M00_L00
           ret
    M00_L00:
           jmp       qword ptr [7766D3E64FA8]
    ; Total bytes of code 12

    Note there’s only a single comparison against 100 (0x64), with the comparison against 10 elided (as it’s implied by the previous comparison). However, there are many variations to this, and not all of them were handled equally well. Consider this:

    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "x")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(42)]
        public void Test(int x)
        {
            if (x < 16)
                return;

            if (x < 8)
                Console.WriteLine();
        }
    }

    Here, the Console.WriteLine ideally wouldn’t appear in the emitted assembly at all, as it’s never reachable. Alas, on .NET 9, we get this (the jmp instruction here is a tail call to WriteLine):

    ; .NET 9
    ; Tests.Test(Int32)
           push      rbp
           mov       rbp,rsp
           cmp       esi,10
           jl        short M00_L00
           cmp       esi,8
           jge       short M00_L00
           pop       rbp
           jmp       qword ptr [731ED8054FA8]
    M00_L00:
           pop       rbp
           ret
    ; Total bytes of code 23

    With dotnet/runtime#111766 on .NET 10, the JIT successfully recognizes that by the time it gets to the x < 8, that condition will always be false, and it can be eliminated. And once it’s eliminated, the initial branch is also unnecessary. So the whole method reduces to this:

    ; .NET 10
    ; Tests.Test(Int32)
           ret
    ; Total bytes of code 1
  • Better floating-point conversion. dotnet/runtime#114410 from @saucecontrol, dotnet/runtime#114597 from @saucecontrol, and dotnet/runtime#111595 from @saucecontrol all speed up conversions between floating-point and integer types, such as by using vcvtusi2ss when AVX512 is available, or, when it isn’t, avoiding an intermediate double conversion.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "i")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(42)]
        public float Compute(uint i) => i;
    }
    ; .NET 9
    ; Tests.Compute(UInt32)
           mov       eax,esi
           vxorps    xmm0,xmm0,xmm0
           vcvtsi2sd xmm0,xmm0,rax
           vcvtsd2ss xmm0,xmm0,xmm0
           ret
    ; Total bytes of code 16

    ; .NET 10
    ; Tests.Compute(UInt32)
           vxorps    xmm0,xmm0,xmm0
           vcvtusi2ss xmm0,xmm0,esi
           ret
    ; Total bytes of code 11
  • Unrolling. When using CopyTo (or other “memmove”-based operations) with a constant source, dotnet/runtime#108576 reduces costs by avoiding a redundant memory load. dotnet/runtime#109036 unblocks more unrolling on Arm64 for Equals/StartsWith/EndsWith. And dotnet/runtime#110893 enables unrolling non-zero fills (unrolling already happened for zero fills).
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private char[] _chars = new char[100];

        [Benchmark]
        public void Fill() => _chars.AsSpan(0, 16).Fill('x');
    }
    ; .NET 9
    ; Tests.Fill()
           push      rbp
           mov       rbp,rsp
           mov       rdi,[rdi+8]
           test      rdi,rdi
           je        short M00_L00
           cmp       dword ptr [rdi+8],10
           jb        short M00_L00
           add       rdi,10
           mov       esi,10
           mov       edx,78
           call      qword ptr [7F3093FBF1F8]; System.SpanHelpers.Fill[[System.Char, System.Private.CoreLib]](Char ByRef, UIntPtr, Char)
           nop
           pop       rbp
           ret
    M00_L00:
           call      qword ptr [7F3093787810]
           int       3
    ; Total bytes of code 49

    ; .NET 10
    ; Tests.Fill()
           push      rbp
           mov       rbp,rsp
           mov       rax,[rdi+8]
           test      rax,rax
           je        short M00_L00
           cmp       dword ptr [rax+8],10
           jl        short M00_L00
           add       rax,10
           vbroadcastss ymm0,dword ptr [78EFC70C9340]
           vmovups   [rax],ymm0
           vzeroupper
           pop       rbp
           ret
    M00_L00:
           call      qword ptr [78EFC7447B88]
           int       3
    ; Total bytes of code 48

    Note the call to SpanHelpers.Fill in the .NET 9 assembly and the absence of it in the .NET 10 assembly.

Native AOT

Native AOT is the ability for a .NET application to be compiled directly to assembly code at build-time. The JIT is still used for code generation, but only at build time; the JIT isn’t part of the shipping app at all, and no code generation is performed at run-time. As such, most of the optimizations to the JIT already discussed, as well as optimizations throughout the rest of this post, apply to Native AOT equally. Native AOT presents some unique opportunities and challenges, however.

One super power of the Native AOT tool chain is the ability to interpret (some) code at build-time and use the results of that execution rather than performing the operation at run-time. This is particularly relevant for static constructors, where the constructor’s code can be interpreted to initialize various static readonly fields, and then the contents of those fields can be persisted into the generated assembly; at run-time, the contents need only be rehydrated from the assembly rather than recomputed. This also potentially helps to make more code redundant and removable, if for example the static constructor and anything it (and only it) referenced are no longer needed. Of course, it would be dangerous and problematic if arbitrary code could run during the build, so instead there’s a very filtered allow list and specialized support for the most common and appropriate constructs. dotnet/runtime#107575 augments this “preinitialization” capability to support spans sourced from arrays, such that using methods like .AsSpan() doesn’t cause preinitialization to bail out. dotnet/runtime#114374 also improved preinitialization, removing restrictions around accessing static fields of other types, calling methods on other types that have their own static constructors, and dereferencing pointers.
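As a rough illustration (a hypothetical CRC lookup table, not code from the runtime), consider a type whose static initialization is pure computation. Whether any particular constructor qualifies depends on the interpreter’s allow list, but work of this general shape is the target, and dotnet/runtime#107575 means the .AsSpan() access pattern no longer disqualifies it:

using System;

internal static class Crc32Table
{
    // Pure computation in a static initializer: a candidate for Native AOT
    // preinitialization, which can evaluate this at build time and persist
    // the resulting array into the generated binary.
    private static readonly uint[] s_table = Build();

    // Exposing the array as a span no longer causes preinitialization to bail out.
    public static ReadOnlySpan<uint> Table => s_table.AsSpan();

    private static uint[] Build()
    {
        var table = new uint[256];
        for (uint i = 0; i < table.Length; i++)
        {
            uint c = i;
            for (int k = 0; k < 8; k++)
            {
                c = (c & 1) != 0 ? 0xEDB88320 ^ (c >> 1) : c >> 1;
            }
            table[i] = c;
        }
        return table;
    }
}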

Conversely, Native AOT has its own challenges, specifically that size really matters and is harder to control. With a JIT available at run-time, code generation for only exactly what’s needed can be deferred until run-time. With Native AOT, all assembly code generation needs to be done at build-time, which means the Native AOT tool chain needs to work hard to determine the least amount of code it needs to emit to support everything the app might need to do at run-time. Most of the effort on Native AOT in any given release ends up being about helping it to further decrease the size of generated code. For example:

  • dotnet/runtime#117411 enables folding the bodies of generic instantiations of the same method, essentially avoiding duplication by using the same code for identical copies of a method where possible.
  • dotnet/runtime#117080 similarly helps improve the existing method body deduplication logic.
  • dotnet/runtime#117345 from @huoyaoyuan tweaks a bit of code in reflection that would previously artificially force code to be preserved for all enumerators of all generic instantiations of every collection type.
  • dotnet/runtime#112782 adds the same distinction that already existed for MethodTables for non-generic methods (“is this method table visible to user code or not”) to generic methods, allowing more metadata for the non-user-visible ones to be optimized away.
  • dotnet/runtime#118718 and dotnet/runtime#118832 enable size reductions related to boxed enums. The former tweaks a few methods in Thread, GC, and CultureInfo to avoid boxing some enums, which means the code for those boxes needn’t be generated. The latter tweaks the implementation of RuntimeHelpers.CreateSpan, which is used by the C# compiler as part of creating spans with constructs like collection expressions. CreateSpan is a generic method, and the Native AOT toolchain’s whole-program analysis would end up treating the generic type parameter as being “reflected on,” meaning the compiler had to assume any type parameter would be used with reflection and thus had to preserve relevant metadata. When used with enums, it would need to ensure support for boxed enums was kept around, and System.Console has such a use with an enum. That in turn meant that a simple “hello, world” console app couldn’t trim away that boxed enum reflection support; now it can.

VM

The .NET runtime offers a wide range of services to managed applications, most obviously the garbage collector and the JIT compiler, but it also encompasses a host of other capabilities: assembly and type loading, exception handling, virtual method dispatch, interoperability support, stub generation, and so on. Collectively, all of these features are referred to as being a part of the .NET Virtual Machine (VM).

dotnet/runtime#108167 and dotnet/runtime#109135 rewrote various runtime helpers from C in the runtime to C# in System.Private.CoreLib, including the “unboxing” helpers, which are used to unbox objects to value types in niche scenarios. This rewrite avoids overheads associated with transitioning between native and managed code and also gives the JIT an opportunity to optimize in the context of callers, such as with inlining. Note that these unboxing helpers are only used in obscure situations, so it requires a bit of a complicated benchmark to demonstrate the impact:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser(0)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private object[] _objects = [new GenericStruct<MyStruct, object>()];

    [Benchmark]
    public void Unbox() => Unbox<GenericStruct<MyStruct, object>>(_objects[0]);

    private void Unbox<T>(object o) where T : struct, IStaticMethod<T>
    {
        T? local = (T?)o;
        if (local.HasValue)
        {
            T localCopy = local.Value;
            T.Method(ref localCopy);
        }
    }

    public interface IStaticMethod<T>
    {
        public static abstract void Method(ref T param);
    }

    struct MyStruct : IStaticMethod<MyStruct>
    {
        public static void Method(ref MyStruct param) { }
    }

    struct GenericStruct<T, V> : IStaticMethod<GenericStruct<T, V>> where T : IStaticMethod<T>
    {
        public T Value;

        [MethodImpl(MethodImplOptions.NoInlining)]
        public static void Method(ref GenericStruct<T, V> value) => T.Method(ref value.Value);
    }
}
| Method | Runtime   | Mean     | Ratio | Code Size |
|--------|-----------|----------|-------|-----------|
| Unbox  | .NET 9.0  | 1.626 ns | 1.00  | 148 B     |
| Unbox  | .NET 10.0 | 1.379 ns | 0.85  | 148 B     |

What it means to move the implementation from native to managed is most easily seen just by looking at the generated assembly. Other than uninteresting and non-impactful changes in which registers happen to get assigned, the only real difference between .NET 9 and .NET 10 is a single instruction:

-      call      CORINFO_HELP_UNBOX_NULLABLE
+      call      System.Runtime.CompilerServices.CastHelpers.Unbox_Nullable(Byte ByRef, System.Runtime.CompilerServices.MethodTable*, System.Object)

dotnet/runtime#115284 streamlines how the runtime sets up and tears down the little code blocks (“funclets”) the runtime uses to implement catch/finally on x64. Historically, these funclets acted a lot like tiny functions, saving and restoring non-volatile CPU registers on entry and exit (a “non-volatile” register is effectively one where the caller can expect it to contain the same value after a function call as it did before the call). This PR changes the contract so that funclets no longer need to preserve those registers themselves; instead, the runtime takes care of preserving them. That shrinks the prologs and epilogs the JIT emits for funclets, reduces instruction count and code size, and lowers the cost of entering and exiting exception handlers.

With dotnet/runtime#114462, the runtime now uses a single shared “template” for many of the small executable “stubs” it needs at runtime; stubs are tiny chunks of machine code that act as jump points, call counters, or patchable trampolines. Previously, each memory allocation for stubs would regenerate the same instructions over and over. The new approach builds one copy of the stub code in a read-only page and then maps that same physical page into every place it’s needed, while giving each allocation its own writable page for the per-stub data that changes at runtime. This lets hundreds of virtual stub pages all point to one physical code page, cutting memory use, reducing startup work, and improving instruction cache locality.

Also interesting are dotnet/runtime#117218 and dotnet/runtime#116031, which together help optimize the generation of stack traces in large, heavily multi-threaded applications when being profiled.

Threading

The ThreadPool underlies most work in most .NET apps and services. It’s a critical-path component that has to be able to deal with all manner of workloads efficiently.

dotnet/runtime#109841 implemented an opt-in feature that dotnet/runtime#112796 then enabled by default for .NET 10. The idea behind it is fairly straightforward, but to understand it, we first need to examine how the thread pool queues work items. The thread pool has multiple queues, typically one “global” queue and then one “local” queue per thread in the pool. When threads outside of the pool queue work, that work goes to the global queue, and when a thread pool thread queues work, especially a Task or work related to an await, that work item typically goes to that thread’s local queue. Then when a thread pool thread finishes whatever it was doing and goes in search of more work, it first checks its own local queue (treating its local queue as highest priority), then if that’s empty it checks the global queue, and then if that’s empty it goes and helps out the other threads in the pool by searching their queues for work to be done. This is all in an attempt to a) minimize contention on the global queue (if threads are mainly queueing and dequeueing from their own local queues, they’re not contending with each other), and b) prioritize work that’s logically part of already-started work (the only way for work to get into a local queue is if that thread was processing a work item that created it). Generally, this works out well, but sometimes we get into degenerate scenarios, typically when an app does something that goes against best practices… like blocking.

Blocking a thread pool thread means that thread can’t service other work coming into the pool. If the blocking is brief, it’s generally fine, and if it’s longer, the thread pool tries to accommodate it by injecting more threads and finding a steady state at which things hum along. But a certain kind of blocking can be really problematic: “sync over async”. With “sync over async”, one thread blocks while waiting for an asynchronous operation to complete, and if that operation needs to do something on the thread pool in order to complete, you now have one thread pool thread blocked waiting for another thread pool thread to pick up a particular work item and process it. This can quickly lead to the whole pool getting into a jam… especially with the thread-local queues. If a thread is blocked on an operation that depends on work items in that thread’s local queue getting processed, those work items being picked up now depends on the global queue being exhausted and another thread coming along and stealing the work from this thread’s queue. If there’s a steady stream of incoming work into the global queue, though, that will never happen; essentially, the highest priority work item has become the lowest priority work item.

So, back to these PRs. The idea is fairly simple: when a thread is about to block, and in particular when it’s about to block waiting on a Task, it first dumps its entire local queue into the global queue. That way, the work that was highest priority for the blocked thread has a fairer chance of being processed by other threads, rather than being the lowest priority work for everyone. We can try to see the impact of this with a specifically-crafted workload:

// dotnet run -c Release -f net9.0 --filter "*"
// dotnet run -c Release -f net10.0 --filter "*"
using System.Diagnostics;

int numThreads = Environment.ProcessorCount;
ThreadPool.SetMaxThreads(numThreads, 1);

ManualResetEventSlim start = new();
CountdownEvent allDone = new(numThreads);

new Thread(() =>
{
    while (true)
    {
        for (int i = 0; i < 10_000; i++)
        {
            ThreadPool.QueueUserWorkItem(_ => Thread.SpinWait(1));
        }
        Thread.Yield();
    }
}) { IsBackground = true }.Start();

for (int i = 0; i < numThreads; i++)
{
    ThreadPool.QueueUserWorkItem(_ =>
    {
        start.Wait();

        TaskCompletionSource tcs = new();
        const int LocalItemsPerThread = 4;
        var remaining = LocalItemsPerThread;
        for (int j = 0; j < LocalItemsPerThread; j++)
        {
            Task.Run(() =>
            {
                Thread.SpinWait(100);
                if (Interlocked.Decrement(ref remaining) == 0)
                {
                    tcs.SetResult();
                }
            });
        }

        tcs.Task.Wait();
        allDone.Signal();
    });
}

var sw = Stopwatch.StartNew();
start.Set();
Console.WriteLine(allDone.Wait(20_000) ?
    $"Completed: {sw.ElapsedMilliseconds}ms" :
    $"Timed out after {sw.ElapsedMilliseconds}ms");

This is:

  • creating a noise thread that tries to keep the global queue inundated with new work
  • queuing Environment.ProcessorCount work items, each of which queues four more work items to its local queue (each doing a little work) and then blocks on a Task until those four complete
  • waiting for those Environment.ProcessorCount work items to complete

When I run this on .NET 9, it hangs: with so much work in the global queue, no threads are able to process the sub-work items that are necessary to unblock the main work items:

Timed out after 20002ms

On .NET 10, it generally completes almost instantly:

Completed: 4ms

Some other tweaks were made to the pool as well:

  • dotnet/runtime#115402 reduced the amount of spin-waiting done on Arm processors, bringing it more in line with x64.
  • dotnet/runtime#112789 reduced the frequency at which the thread pool checks CPU utilization, as in some circumstances that check was adding noticeable overhead, and made the frequency configurable.
  • dotnet/runtime#108135 from @AlanLiu90 removed a bit of lock contention that could happen under load when starting new thread pool threads.

On the subject of locking, and only for developers who find themselves with a strong need to do really low-level, low-lock development, dotnet/runtime#107843 from @hamarb123 adds two new methods to the Volatile class: ReadBarrier and WriteBarrier. A read barrier has “load acquire” semantics and is sometimes referred to as a “downward fence”: it prevents instructions from being reordered in such a way that memory accesses below/after the barrier move to above/before it. In contrast, a write barrier has “store release” semantics and is sometimes referred to as an “upward fence”: it prevents instructions from being reordered in such a way that memory accesses above/before the barrier move to below/after it. I find it helps to think about this with regard to a lock:

A;
lock (...)
{
    B;
}
C;

While in practice the implementation may provide stronger fences, by specification entering a lock has acquire semantics and exiting a lock has release semantics. Imagine if the instructions in the above code could be reordered like this:

A;
B;
lock (...)
{
}
C;

or like this:

A;
lock (...)
{
}
B;
C;

Both of those would be really bad. Thankfully, the barriers help us here. The acquire semantics of entering the lock act as a downward fence: logically, the brace that starts the lock puts downward pressure on everything inside the lock, preventing it from moving to before the lock. And the release semantics of exiting the lock act as an upward fence: the brace that ends the lock puts upward pressure on everything inside the lock, preventing it from moving to after the lock. Interestingly, nothing about the semantics of these barriers prevents this from happening:

lock (...)
{
    A;
    B;
    C;
}

These barriers are referred to as “half fences”; the read barrier prevents later things from moving earlier, but not the other way around, and the write barrier prevents earlier things from moving later, but not the other way around. (As it happens, though, while not required by specification, today the implementation of lock does use a full barrier on both enter and exit, so nothing before or after a lock will move into it.)
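To make that concrete, here’s a minimal sketch (hypothetical type and fields) of the classic publish/consume pattern expressed with the new methods; most code should continue to prefer Volatile.Read/Volatile.Write, Interlocked, or lock:

using System.Threading;

internal sealed class Publisher
{
    private int _data;
    private bool _ready;

    public void Publish(int value)
    {
        _data = value;
        Volatile.WriteBarrier(); // store release: the _data write can't move below this
        _ready = true;
    }

    public bool TryConsume(out int value)
    {
        if (_ready)
        {
            Volatile.ReadBarrier(); // load acquire: the _data read can't move above this
            value = _data;
            return true;
        }

        value = 0;
        return false;
    }
}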

For Task in .NET 10, Task.WhenAll has a few changes to improve its performance. dotnet/runtime#110536 avoids a temporary collection allocation when needing to buffer up tasks from an IEnumerable<Task>.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public Task WhenAllAlloc()
    {
        AsyncTaskMethodBuilder t = default;
        Task whenAll = Task.WhenAll(from i in Enumerable.Range(0, 2) select t.Task);
        t.SetResult();
        return whenAll;
    }
}
| Method       | Runtime   | Mean     | Ratio | Allocated | Alloc Ratio |
|--------------|-----------|----------|-------|-----------|-------------|
| WhenAllAlloc | .NET 9.0  | 216.8 ns | 1.00  | 496 B     | 1.00        |
| WhenAllAlloc | .NET 10.0 | 181.9 ns | 0.84  | 408 B     | 0.82        |

And dotnet/runtime#117715 from @CuteLeon avoids the overhead of Task.WhenAll altogether when the input ends up being just a single task, in which case it simply returns that task instance.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public Task WhenAllAlloc()
    {
        AsyncTaskMethodBuilder t = default;
        Task whenAll = Task.WhenAll([t.Task]);
        t.SetResult();
        return whenAll;
    }
}
| Method       | Runtime   | Mean     | Ratio | Allocated | Alloc Ratio |
|--------------|-----------|----------|-------|-----------|-------------|
| WhenAllAlloc | .NET 9.0  | 72.73 ns | 1.00  | 144 B     | 1.00        |
| WhenAllAlloc | .NET 10.0 | 33.06 ns | 0.45  | 72 B      | 0.50        |

System.Threading.Channels is one of the lesser-known but quite useful areas of threading in .NET (you can watch Yet Another “Highly Technical Talk” with Hanselman and Toub from Build 2025 to learn more about it). If you find yourself needing a queue to hand off some data between a producer and a consumer, you should likely look into Channel<T>. The library was introduced in .NET Core 3.0 as a small, robust, and fast producer/consumer queueing mechanism; it’s evolved since, such as gaining a ReadAllAsync method for consuming the contents of a channel as an IAsyncEnumerable<T> and a PeekAsync method for peeking at its contents without consuming. The original release supported Channel.CreateUnbounded and Channel.CreateBounded methods, and .NET 9 augmented those with Channel.CreateUnboundedPrioritized. .NET 10 continues to expand on channels, both with functional improvements (such as with dotnet/runtime#116097, which adds an unbuffered channel implementation) and with performance improvements.
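If you haven’t used the library before, here’s a minimal producer/consumer sketch (unrelated to any particular PR) showing the typical shape of channel code:

// One producer, one consumer, connected by an unbounded channel.
using System.Threading.Channels;

Channel<int> channel = Channel.CreateUnbounded<int>();

Task producer = Task.Run(async () =>
{
    for (int i = 0; i < 10; i++)
    {
        await channel.Writer.WriteAsync(i); // never waits for an unbounded channel
    }
    channel.Writer.Complete(); // signal that no more items are coming
});

await foreach (int item in channel.Reader.ReadAllAsync())
{
    Console.WriteLine(item);
}

await producer;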

.NET 10 helps to reduce the overall memory consumption of an application using channels. One of the cross-cutting features channels support is cancellation: you can cancel pretty much any interaction with a channel, which sports asynchronous methods for both producing and consuming data. When a reader or writer needs to pend, it creates (or reuses a pooled instance of) an AsyncOperation object that gets added to a queue; a later writer or reader that’s then able to satisfy a pending reader or writer dequeues one and marks it as completed. These queues were implemented with arrays, which makes it challenging to remove an entry from the middle of the queue if the associated operation gets canceled. So, rather than trying, the implementation simply left the canceled object in the queue, and when it would eventually get dequeued, it was just thrown away and the dequeuer tried again. The theory was that, at steady state, any canceled operations would quickly be dequeued, and it’d be better not to exert a lot of effort trying to remove them more quickly. As it turns out, that assumption was problematic for some scenarios where the workload wasn’t balanced, e.g. lots of readers would pend and time out due to lack of writers, and each of those timed-out readers would leave behind a canceled item in the queue. The next time a writer came along, yes, all those canceled readers would get cleared out, but in the meantime, they would manifest as a notable increase in working set.

dotnet/runtime#116021 addresses that by switching from array-backed queues to linked-list-based queues. The waiter objects themselves double as the nodes in the linked lists, so the only additional memory overhead is a couple of fields for the previous and next nodes in the list. But even that modest increase is undesirable, so as part of the PR, it also finds compensating optimizations to balance things out. It’s able to remove a field from Channel<T>’s custom implementation of IValueTaskSource<T> by applying a similar optimization to one made to ManualResetValueTaskSourceCore<T> in a previous release: it’s incredibly rare for an awaiter to supply an ExecutionContext (via use of the awaiter’s OnCompleted rather than UnsafeOnCompleted method), and even more so for that to happen when there’s also a non-default TaskScheduler or SynchronizationContext that needs to be stored, so rather than using two fields for those concepts, they just get grouped into one (which means that in the super duper rare case where both are needed, it incurs an extra allocation). Another field is removed for storing a CancellationToken on the instance, which on .NET Core can be retrieved from other available state. These changes actually result in the size of the AsyncOperation waiter instance decreasing rather than increasing. Win-win. It’s hard to see the impact of this change on throughput; it’s easier to see the impact on working set in the degenerate case where canceled operations are never removed. If I run this code:

// dotnet run -c Release -f net9.0 --filter "*"
// dotnet run -c Release -f net10.0 --filter "*"
using System.Threading.Channels;

Channel<int> c = Channel.CreateUnbounded<int>();
for (int i = 0; ; i++)
{
    CancellationTokenSource cts = new();
    var vt = c.Reader.ReadAsync(cts.Token);
    cts.Cancel();
    await ((Task)vt.AsTask()).ConfigureAwait(ConfigureAwaitOptions.SuppressThrowing);

    if (i % 100_000 == 0)
    {
        Console.WriteLine($"Working set: {Environment.WorkingSet:N0}b");
    }
}

in .NET 9 I get output like this, with an ever increasing working set:

Working set: 31,588,352b
Working set: 164,884,480b
Working set: 210,698,240b
Working set: 293,711,872b
Working set: 385,495,040b
Working set: 478,158,848b
Working set: 553,385,984b
Working set: 608,206,848b
Working set: 699,695,104b
Working set: 793,034,752b
Working set: 885,309,440b
Working set: 986,103,808b
Working set: 1,094,234,112b
Working set: 1,156,239,360b
Working set: 1,255,198,720b
Working set: 1,347,604,480b
Working set: 1,439,879,168b
Working set: 1,532,284,928b

and in .NET 10, I get output like this, with a nice, level steady-state working set:

Working set: 33,030,144b
Working set: 44,826,624b
Working set: 45,481,984b
Working set: 45,613,056b
Working set: 45,875,200b
Working set: 45,875,200b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b

Reflection

.NET 8 added the [UnsafeAccessor] attribute, which enables a developer to write an extern method that matches up with some non-visible member the developer wants to be able to use, and the runtime fixes up the accesses to be just as if the target member were being used directly. .NET 9 then extended it with generic support.

// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Reflection;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private List<int> _list = new List<int>(16);
    private FieldInfo _itemsField = typeof(List<int>).GetField("_items", BindingFlags.NonPublic | BindingFlags.Instance)!;

    private static class Accessors<T>
    {
        [UnsafeAccessor(UnsafeAccessorKind.Field, Name = "_items")]
        public static extern ref T[] GetItems(List<T> list);
    }

    [Benchmark]
    public int[] WithReflection() => (int[])_itemsField.GetValue(_list)!;

    [Benchmark]
    public int[] WithUnsafeAccessor() => Accessors<int>.GetItems(_list);
}
| Method             | Mean      |
|--------------------|-----------|
| WithReflection     | 2.6397 ns |
| WithUnsafeAccessor | 0.7300 ns |

But there are still gaps in that story. The signature of the UnsafeAccessor member needs to align with the signature of the target member, but what if that target member has parameters whose types aren’t visible to the code writing the UnsafeAccessor? Or, what if the target member is static? There’s no way for the developer to express in the UnsafeAccessor on which type the target member exists.

For these scenarios, dotnet/runtime#114881 augments the story with the [UnsafeAccessorType] attribute. The UnsafeAccessor method can type the relevant parameters as object but then adorn them with an [UnsafeAccessorType("...")] that provides a fully-qualified name of the target type. There are a bunch of examples of this being used in dotnet/runtime#115583, which replaces most of the cross-library reflection done between libraries in .NET itself with use of [UnsafeAccessor]. An example of where this is handy is with a cyclic relationship between System.Net.Http and System.Security.Cryptography. System.Net.Http sits above System.Security.Cryptography, referencing it for critical features like X509Certificate. But System.Security.Cryptography needs to be able to make HTTP requests in order to download OCSP information, and with System.Net.Http referencing System.Security.Cryptography, System.Security.Cryptography can’t in turn explicitly reference System.Net.Http. It can, however, use reflection or [UnsafeAccessor] and [UnsafeAccessorType] to do so, and it does: it used to use reflection, and now in .NET 10 it uses [UnsafeAccessor].
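As a rough sketch of the shape of the feature (the type and member names here are hypothetical, and the exact declaration details are best checked against the PR), a static method on an inaccessible type might be exposed like this, with the first parameter identifying the declaring type and callers passing null for it:

using System.Runtime.CompilerServices;

internal static class HiddenAccessor
{
    // Hypothetical target: internal static int Compute(int x) on an internal
    // type SecretLib.Hidden in assembly SecretLib. The attributed parameter
    // designates the declaring type; pass null at call sites.
    [UnsafeAccessor(UnsafeAccessorKind.StaticMethod, Name = "Compute")]
    public static extern int Compute(
        [UnsafeAccessorType("SecretLib.Hidden, SecretLib")] object? target,
        int x);
}

// usage: int result = HiddenAccessor.Compute(null, 42);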

There are a few other nice improvements in and around reflection. dotnet/runtime#105814 from @huoyaoyuan updates ActivatorUtilities.CreateFactory to remove a layer of delegates. CreateFactory returns an ObjectFactory delegate, but under the covers the implementation was creating a Func<...> and then creating an ObjectFactory delegate for that Func<...>’s Invoke method. The PR changes it to just create the ObjectFactory directly, which means every invocation avoids one layer of delegate invocation.

// dotnet run -c Release -f net9.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using Microsoft.Extensions.DependencyInjection;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("Microsoft.Extensions.DependencyInjection.Abstractions", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("Microsoft.Extensions.DependencyInjection.Abstractions", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private IServiceProvider _sp = new ServiceCollection().BuildServiceProvider();
    private ObjectFactory _factory = ActivatorUtilities.CreateFactory(typeof(object), Type.EmptyTypes);

    [Benchmark]
    public object CreateInstance() => _factory(_sp, null);
}
| Method         | Runtime   | Mean     | Ratio |
|----------------|-----------|----------|-------|
| CreateInstance | .NET 9.0  | 8.136 ns | 1.00  |
| CreateInstance | .NET 10.0 | 6.676 ns | 0.82  |

dotnet/runtime#112350 reduces some overheads and allocations as part of parsing and rendering TypeNames.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Reflection.Metadata;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "t")]
public partial class Tests
{
    [Benchmark]
    [Arguments(typeof(Dictionary<List<int[]>[,], List<int?[][][,]>>[]))]
    public string ParseAndGetName(Type t) => TypeName.Parse(t.FullName).FullName;
}
| Method          | Runtime   | Mean     | Ratio | Allocated | Alloc Ratio |
|-----------------|-----------|----------|-------|-----------|-------------|
| ParseAndGetName | .NET 9.0  | 5.930 us | 1.00  | 12.25 KB  | 1.00        |
| ParseAndGetName | .NET 10.0 | 4.305 us | 0.73  | 5.75 KB   | 0.47        |

And dotnet/runtime#113803 from @teo-tsirpanis improves how DebugDirectoryBuilder in System.Reflection.Metadata uses DeflateStream to embed a PDB. The code was previously buffering the compressed output into an intermediate MemoryStream, and then that MemoryStream was being written to the BlobBuilder. With this change, the DeflateStream is wrapped directly around the BlobBuilder, enabling the compressed data to be propagated directly to builder.WriteBytes.
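The general shape of that change (a simplified sketch, not the library’s actual code) is to stop staging compressed bytes in a temporary stream:

using System.IO;
using System.IO.Compression;

static void EmbedCompressed(Stream destination, byte[] pdbBytes)
{
    // Before: compress into an intermediate MemoryStream, then copy it over.
    // using var ms = new MemoryStream();
    // using (var deflate = new DeflateStream(ms, CompressionLevel.Optimal, leaveOpen: true))
    //     deflate.Write(pdbBytes);
    // ms.Position = 0;
    // ms.CopyTo(destination);

    // After: wrap the DeflateStream directly around the destination, so
    // compressed bytes flow straight through without the extra buffer.
    using var deflate = new DeflateStream(destination, CompressionLevel.Optimal, leaveOpen: true);
    deflate.Write(pdbBytes);
}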

Primitives and Numerics

Every time I write one of these “Performance Improvements in .NET” posts, a part of me thinks “how could there possibly be more next time.” That’s especially true for core data types, which have received so much scrutiny over the years. Yet, here we are, with more to look at for .NET 10.

DateTime and DateTimeOffset get some love in dotnet/runtime#111112, in particular with micro-optimizations around how instances are initialized. Similar tweaks show up in dotnet/runtime#111244 for DateOnly, TimeOnly, and ISOWeek.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private DateTimeOffset _dto = new DateTimeOffset(2025, 9, 10, 0, 0, 0, TimeSpan.Zero);

    [Benchmark]
    public DateTimeOffset GetFutureTime() => _dto + TimeSpan.FromDays(1);
}
| Method        | Runtime   | Mean     | Ratio |
|---------------|-----------|----------|-------|
| GetFutureTime | .NET 9.0  | 6.012 ns | 1.00  |
| GetFutureTime | .NET 10.0 | 1.029 ns | 0.17  |

Guid gets several notable performance improvements in .NET 10. dotnet/runtime#105654 from @SirCxyrtyx imbues Guid with an implementation of IUtf8SpanParsable. This not only allows Guid to be used in places where a generic parameter is constrained to IUtf8SpanParsable, it gives Guid overloads of Parse and TryParse that operate on UTF8 bytes. This means if you have UTF8 data, you don’t first need to transcode it to UTF16 in order to parse it, nor use Utf8Parser.TryParse, which isn’t as optimized as Guid.TryParse (but which does enable parsing a Guid out of the beginning of a larger input).

// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers.Text;
using System.Text;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _utf8 = Encoding.UTF8.GetBytes(Guid.NewGuid().ToString("N"));

    [Benchmark(Baseline = true)]
    public Guid TranscodeParse()
    {
        Span<char> scratch = stackalloc char[64];
        ReadOnlySpan<char> input = Encoding.UTF8.TryGetChars(_utf8, scratch, out int charsWritten) ?
            scratch.Slice(0, charsWritten) :
            Encoding.UTF8.GetString(_utf8);
        return Guid.Parse(input);
    }

    [Benchmark]
    public Guid Utf8ParserParse() => Utf8Parser.TryParse(_utf8, out Guid result, out _, 'N') ? result : Guid.Empty;

    [Benchmark]
    public Guid GuidParse() => Guid.Parse(_utf8);
}
| Method          | Mean     | Ratio |
|-----------------|----------|-------|
| TranscodeParse  | 24.72 ns | 1.00  |
| Utf8ParserParse | 19.34 ns | 0.78  |
| GuidParse       | 16.47 ns | 0.67  |

Char, Rune, and Version also gained IUtf8SpanParsable implementations, in dotnet/runtime#105773 from @lilinus and dotnet/runtime#109252 from @lilinus. There’s not much of a performance benefit here for char and Rune; implementing the interface mainly yields consistency and the ability to use these types with generic routines parameterized on that interface. But Version gains the same kinds of performance (and usability) benefits as did Guid: it now sports support for parsing directly from UTF8, rather than needing to transcode first to UTF16 and then parse that.

// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _utf8 = Encoding.UTF8.GetBytes(new Version("123.456.789.10").ToString());

    [Benchmark(Baseline = true)]
    public Version TranscodeParse()
    {
        Span<char> scratch = stackalloc char[64];
        ReadOnlySpan<char> input = Encoding.UTF8.TryGetChars(_utf8, scratch, out int charsWritten) ?
            scratch.Slice(0, charsWritten) :
            Encoding.UTF8.GetString(_utf8);
        return Version.Parse(input);
    }

    [Benchmark]
    public Version VersionParse() => Version.Parse(_utf8);
}
| Method         | Mean     | Ratio |
|----------------|----------|-------|
| TranscodeParse | 46.48 ns | 1.00  |
| VersionParse   | 35.75 ns | 0.77  |

Sometimes performance improvements come about as a side-effect of other work. dotnet/runtime#110923 was intended to remove some pointer use from Guid’s formatting implementation, but in doing so, it ended up also slightly improving throughput of the (admittedly rarely used) “X” format.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private char[] _dest = new char[64];
    private Guid _g = Guid.NewGuid();

    [Benchmark]
    public void FormatX() => _g.TryFormat(_dest, out int charsWritten, "X");
}
| Method  | Runtime   | Mean      | Ratio |
|---------|-----------|-----------|-------|
| FormatX | .NET 9.0  | 3.0584 ns | 1.00  |
| FormatX | .NET 10.0 | 0.7873 ns | 0.26  |

Random (and its cryptographically-secure counterpart RandomNumberGenerator) continues to improve in .NET 10, with new methods (such as Random.GetString and Random.GetHexString from dotnet/runtime#112162) for usability, but also, importantly, with performance improvements to existing methods. Both Random and RandomNumberGenerator were given a handy GetItems method in .NET 8; this method allows a caller to supply a set of choices and the number of items desired, allowing Random{NumberGenerator} to perform “sampling with replacement”, selecting an item from the set that number of times. In .NET 9, these implementations were optimized to special-case a power-of-2 number of choices that’s less than or equal to 256. In such a case, we can avoid many trips to the underlying source of randomness by requesting bytes in bulk, rather than requesting an int per element. With the power-of-2 choice count, we can simply mask each byte to produce the index into the choices without introducing bias. In .NET 10, dotnet/runtime#107988 extends this to apply to non-power-of-2 cases as well. We can’t just mask off bits as in the power-of-2 case, but we can do “rejection sampling,” which is just a fancy way of saying “if you randomly get a value outside of the allowed range, try again”.
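Here’s a minimal sketch of the idea (a hypothetical helper, not the actual implementation): mask each random byte down to the next power of two and retry any value that still falls outside the valid range, so accepted values remain uniformly distributed.

using System.Numerics;

internal static class RejectionSampling
{
    // Returns a uniformly distributed index in [0, numChoices);
    // assumes 1 <= numChoices <= 256.
    internal static int NextIndex(Random random, int numChoices)
    {
        // Smallest power-of-two mask that covers [0, numChoices).
        int mask = (int)BitOperations.RoundUpToPowerOf2((uint)numChoices) - 1;
        Span<byte> b = stackalloc byte[1];
        while (true)
        {
            random.NextBytes(b);
            int candidate = b[0] & mask;
            if (candidate < numChoices)
            {
                return candidate; // in range: accept
            }
            // out of range: reject and try again
        }
    }
}

The benchmark below shows the impact of the change: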

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Security.Cryptography;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private const string Base58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

    [Params(30)]
    public int Length { get; set; }

    [Benchmark]
    public char[] WithRandom() => Random.Shared.GetItems<char>(Base58, Length);

    [Benchmark]
    public char[] WithRandomNumberGenerator() => RandomNumberGenerator.GetItems<char>(Base58, Length);
}
| Method                    | Runtime   | Length | Mean         | Ratio |
|---------------------------|-----------|--------|--------------|-------|
| WithRandom                | .NET 9.0  | 30     | 144.42 ns    | 1.00  |
| WithRandom                | .NET 10.0 | 30     | 73.68 ns     | 0.51  |
| WithRandomNumberGenerator | .NET 9.0  | 30     | 23,179.73 ns | 1.00  |
| WithRandomNumberGenerator | .NET 10.0 | 30     | 853.47 ns    | 0.04  |

decimal operations, specifically multiplication and division, get a performance bump thanks to dotnet/runtime#99212 from @Daniel-Svensson.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private decimal _n = 9.87654321m;
    private decimal _d = 1.23456789m;

    [Benchmark]
    public decimal Divide() => _n / _d;
}
| Method | Runtime   | Mean     | Ratio |
|--------|-----------|----------|-------|
| Divide | .NET 9.0  | 27.09 ns | 1.00  |
| Divide | .NET 10.0 | 23.68 ns | 0.87  |

UInt128 division similarly gets some assistance in dotnet/runtime#99747 from @Daniel-Svensson, utilizing the x86 DivRem hardware intrinsic (X86Base.DivRem) when dividing a value that’s larger than a ulong by a value that fits in a ulong.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private UInt128 _n = new UInt128(123, 456);
    private UInt128 _d = new UInt128(0, 789);

    [Benchmark]
    public UInt128 Divide() => _n / _d;
}
| Method | Runtime   | Mean       | Ratio |
|--------|-----------|------------|-------|
| Divide | .NET 9.0  | 27.3112 ns | 1.00  |
| Divide | .NET 10.0 | 0.5522 ns  | 0.02  |

BigInteger gets a few improvements as well. dotnet/runtime#115445 from @Rob-Hague augments its TryWriteBytes method to use a direct memory copy when viable, namely when the number is non-negative such that it doesn’t need two’s-complement tweaks.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Numerics;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private BigInteger _value = BigInteger.Parse(string.Concat(Enumerable.Repeat("1234567890", 20)));
    private byte[] _bytes = new byte[256];

    [Benchmark]
    public bool TryWriteBytes() => _value.TryWriteBytes(_bytes, out _);
}
| Method        | Runtime   | Mean      | Ratio |
|---------------|-----------|-----------|-------|
| TryWriteBytes | .NET 9.0  | 27.814 ns | 1.00  |
| TryWriteBytes | .NET 10.0 | 5.743 ns  | 0.21  |

Also rare but fun: if you tried using BigInteger.Parse with exactly the string representation of int.MinValue, you’d end up allocating unnecessarily. That’s addressed by dotnet/runtime#104666 from @kzrnm, which tweaks the handling of this corner case so that it’s appropriately recognized as a case that can be represented using the singleton for int.MinValue (the singleton already existed, it just wasn’t applied in this case).

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Numerics;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _int32min = int.MinValue.ToString();

    [Benchmark]
    public BigInteger ParseInt32Min() => BigInteger.Parse(_int32min);
}
| Method | Runtime | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|---|
| ParseInt32Min | .NET 9.0 | 80.54 ns | 1.00 | 32 B | 1.00 |
| ParseInt32Min | .NET 10.0 | 71.59 ns | 0.89 | – | 0.00 |

One area that got a lot of attention in .NET 10 is System.Numerics.Tensors. The System.Numerics.Tensors library was introduced in .NET 8, focusing on a TensorPrimitives class that provided various numerical routines on spans of float. .NET 9 then expanded TensorPrimitives with more operations and generic versions of them. Now in .NET 10, TensorPrimitives gains even more operations, with many of the existing ones also made faster for various scenarios.

To start, dotnet/runtime#112933 adds over 70 new overloads to TensorPrimitives, including operations like StdDev, Average, Clamp, DivRem, IsNaN, IsPow2, Remainder, and many more. The majority of these operations are also vectorized, using shared implementations that are parameterized with generic operators. For example, the entirety of the Decrement<T> implementation is:

public static void Decrement<T>(ReadOnlySpan<T> x, Span<T> destination) where T : IDecrementOperators<T> =>
    InvokeSpanIntoSpan<T, DecrementOperator<T>>(x, destination);

where InvokeSpanIntoSpan is a shared routine used by almost 60 methods, each of which supplies its own operator that's then used in the heavily-optimized routine. In this case, the DecrementOperator<T> is simply this:

private readonly struct DecrementOperator<T> : IUnaryOperator<T, T> where T : IDecrementOperators<T>
{
    public static bool Vectorizable => true;
    public static T Invoke(T x) => --x;
    public static Vector128<T> Invoke(Vector128<T> x) => x - Vector128<T>.One;
    public static Vector256<T> Invoke(Vector256<T> x) => x - Vector256<T>.One;
    public static Vector512<T> Invoke(Vector512<T> x) => x - Vector512<T>.One;
}

With that minimal implementation, which provides a decrement implementation for vectorized widths of 128, 256, and 512 bits as well as for scalars, the workhorse routine is able to provide a very efficient implementation.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private float[] _src = Enumerable.Range(0, 1000).Select(i => (float)i).ToArray();
    private float[] _dest = new float[1000];

    [Benchmark(Baseline = true)]
    public void DecrementManual()
    {
        ReadOnlySpan<float> src = _src;
        Span<float> dest = _dest;
        for (int i = 0; i < src.Length; i++)
        {
            dest[i] = src[i] - 1f;
        }
    }

    [Benchmark]
    public void DecrementTP() => TensorPrimitives.Decrement(_src, _dest);
}
| Method | Mean | Ratio |
|---|---|---|
| DecrementManual | 288.80 ns | 1.00 |
| DecrementTP | 22.46 ns | 0.08 |

Wherever possible, these methods also utilize APIs on the underlying Vector128, Vector256, and Vector512 types, including new corresponding methods introduced in dotnet/runtime#111179 and dotnet/runtime#115525, such as IsNaN.

Existing methods are also improved. dotnet/runtime#111615 from @BarionLP improves TensorPrimitives.SoftMax by avoiding unnecessary recomputation of T.Exp. The softmax function involves computing exp for every element and summing them all together; the output for an element with value x is then exp(x) divided by that sum. The previous implementation followed that outline literally, computing exp twice for each element. We can instead compute exp just once for each element, caching the results temporarily in the destination while creating the sum, and then reuse those cached values for the subsequent division, overwriting each with the actual result.
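In code, the idea amounts to something like the following scalar sketch (the actual implementation is vectorized and structured differently):

static void SoftMaxSingleExp(ReadOnlySpan<float> x, Span<float> destination)
{
    // First pass: compute exp once per element, caching the results in the destination.
    float sum = 0;
    for (int i = 0; i < x.Length; i++)
    {
        destination[i] = float.Exp(x[i]);
        sum += destination[i];
    }

    // Second pass: reuse the cached exp values, overwriting each with the final result.
    for (int i = 0; i < x.Length; i++)
    {
        destination[i] /= sum;
    }
}

The net result is close to doubling the throughput: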

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private float[] _src, _dst;

    [GlobalSetup]
    public void Setup()
    {
        Random r = new(42);
        _src = Enumerable.Range(0, 1000).Select(_ => r.NextSingle()).ToArray();
        _dst = new float[_src.Length];
    }

    [Benchmark]
    public void SoftMax() => TensorPrimitives.SoftMax(_src, _dst);
}
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| SoftMax | .NET 9.0 | 1,047.9 ns | 1.00 |
| SoftMax | .NET 10.0 | 649.8 ns | 0.62 |

dotnet/runtime#111505 from @alexcovington enables TensorPrimitives.Divide<T> to be vectorized for int. The operation already supported vectorization for float and double, for which there's SIMD hardware-accelerated division support, but it didn't support int, which lacks such support. This PR teaches the JIT how to emulate SIMD integer division, by converting the ints to doubles, doing double division, and then converting back.
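The shape of that emulation looks roughly like this sketch (not the JIT's actual codegen, and ignoring edge cases like division by zero and int.MinValue / -1 that the real implementation must respect):

using System.Runtime.Intrinsics;

static Vector256<int> DivideViaDouble(Vector256<int> n, Vector256<int> d)
{
    // Every int is exactly representable as a double, so widen, divide, and narrow.
    (Vector256<long> nLo, Vector256<long> nHi) = Vector256.Widen(n);
    (Vector256<long> dLo, Vector256<long> dHi) = Vector256.Widen(d);

    Vector256<double> qLo = Vector256.ConvertToDouble(nLo) / Vector256.ConvertToDouble(dLo);
    Vector256<double> qHi = Vector256.ConvertToDouble(nHi) / Vector256.ConvertToDouble(dHi);

    // Converting back truncates toward zero, matching integer division semantics.
    return Vector256.Narrow(Vector256.ConvertToInt64(qLo), Vector256.ConvertToInt64(qHi));
}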

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private int[] _n, _d, _dst;

    [GlobalSetup]
    public void Setup()
    {
        Random r = new(42);
        _n = Enumerable.Range(0, 1000).Select(_ => r.Next(1000, int.MaxValue)).ToArray();
        _d = Enumerable.Range(0, 1000).Select(_ => r.Next(1, 1000)).ToArray();
        _dst = new int[_n.Length];
    }

    [Benchmark]
    public void Divide() => TensorPrimitives.Divide(_n, _d, _dst);
}
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| Divide | .NET 9.0 | 1,293.9 ns | 1.00 |
| Divide | .NET 10.0 | 458.4 ns | 0.35 |

dotnet/runtime#116945 further updates TensorPrimitives.Divide (as well as TensorPrimitives.Sign and TensorPrimitives.ConvertToInteger) to be vectorizable when used with nint or nuint. nint can be treated identically to int when in a 32-bit process and to long when in a 64-bit process; same for nuint with uint and ulong, respectively. So anywhere we're successfully vectorizing for int/uint on 32-bit or long/ulong on 64-bit, we can also successfully vectorize for nint/nuint. dotnet/runtime#116895 also enables vectorizing TensorPrimitives.ConvertTruncating when used to convert float to int or uint and double to long or ulong. Vectorization hadn't previously been enabled because the underlying operations used had some undefined behavior; that behavior was fixed late in the .NET 9 cycle, such that this vectorization can now be enabled.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private float[] _src;
    private int[] _dst;

    [GlobalSetup]
    public void Setup()
    {
        Random r = new(42);
        _src = Enumerable.Range(0, 1000).Select(_ => r.NextSingle() * 1000).ToArray();
        _dst = new int[_src.Length];
    }

    [Benchmark]
    public void ConvertTruncating() => TensorPrimitives.ConvertTruncating(_src, _dst);
}
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| ConvertTruncating | .NET 9.0 | 933.86 ns | 1.00 |
| ConvertTruncating | .NET 10.0 | 41.99 ns | 0.04 |

Not to be left out, TensorPrimitives.LeadingZeroCount is also improved in dotnet/runtime#110333 from @alexcovington. When AVX512 is available, the change utilizes AVX512 instructions like PermuteVar16x8x2 to vectorize LeadingZeroCount for all types supported by Vector512<T>.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private byte[] _src, _dst;

    [GlobalSetup]
    public void Setup()
    {
        _src = new byte[1000];
        _dst = new byte[_src.Length];
        new Random(42).NextBytes(_src);
    }

    [Benchmark]
    public void LeadingZeroCount() => TensorPrimitives.LeadingZeroCount(_src, _dst);
}
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| LeadingZeroCount | .NET 9.0 | 401.60 ns | 1.00 |
| LeadingZeroCount | .NET 10.0 | 12.33 ns | 0.03 |

In terms of changes that affected the most operations, dotnet/runtime#116898 and dotnet/runtime#116934 take the cake. Together, these PRs extend vectorization for almost 60 distinct operations to also accelerate for Half: Abs, Add, AddMultiply, BitwiseAnd, BitwiseOr, Ceiling, Clamp, CopySign, Cos, CosPi, Cosh, CosineSimilarity, Decrement, DegreesToRadians, Divide, Exp, Exp10, Exp10M1, Exp2, Exp2M1, ExpM1, Floor, FusedAddMultiply, Hypot, Increment, Lerp, Log, Log10, Log10P1, Log2, Log2P1, LogP1, Max, MaxMagnitude, MaxMagnitudeNumber, MaxNumber, Min, MinMagnitude, MinMagnitudeNumber, MinNumber, Multiply, MultiplyAdd, MultiplyAddEstimate, Negate, OnesComplement, Reciprocal, Remainder, Round, Sigmoid, Sin, SinPi, Sinh, Sqrt, Subtract, Tan, TanPi, Tanh, Truncate, and Xor. The challenge here is that Half doesn't have accelerated hardware support, and today is not even supported by the vector types. In fact, even for its scalar operations, Half is manipulated internally by converting it to a float, performing the relevant operation as float, and then casting back, e.g. here's the implementation of the Half multiplication operator:

public static Half operator *(Half left, Half right) => (Half)((float)left * (float)right);

For all of these TensorPrimitives operations, Half was previously treated like any other unaccelerated type, with a scalar loop performing the operation on each Half. That means for each element, we're converting it to float, performing the operation, and converting it back. As luck would have it, though, TensorPrimitives already defines the ConvertToSingle and ConvertToHalf methods, which are accelerated. We can reuse those methods to do the same thing that's already done for scalar operations, but vectorized: take a vector of Halfs, convert them all to floats, process all the floats, and convert them all back to Halfs. Of course, I already stated that the vector types don't support Half, so how can we "take a vector of Half"? By reinterpret-casting the Span<Half> to Span<short> (or Span<ushort>), which allows us to smuggle the Halfs through. And, as it turns out, even for scalar, the very first thing Half's float cast operator does is convert it to a short.

The net result is that a ton of operations can now be accelerated for Half.
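Conceptually, each such operation now follows a pattern like this sketch (the real code processes the data in vector-sized blocks, in place, without allocating temporary arrays):

using System.Numerics.Tensors;

static void AddHalves(ReadOnlySpan<Half> x, ReadOnlySpan<Half> y, Span<Half> destination)
{
    // Widen both inputs to float using the already-vectorized conversion routines...
    float[] fx = new float[x.Length], fy = new float[y.Length];
    TensorPrimitives.ConvertToSingle(x, fx);
    TensorPrimitives.ConvertToSingle(y, fy);

    // ...do the actual math as float, fully vectorized...
    TensorPrimitives.Add(fx, fy, fx);

    // ...and narrow the results back to Half, also vectorized.
    TensorPrimitives.ConvertToHalf(fx, destination);
}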

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private Half[] _x, _y, _dest;

    [GlobalSetup]
    public void Setup()
    {
        _x = new Half[1000];
        _y = new Half[_x.Length];
        _dest = new Half[_x.Length];

        var random = new Random(42);
        for (int i = 0; i < _x.Length; i++)
        {
            _x[i] = (Half)random.NextSingle();
            _y[i] = (Half)random.NextSingle();
        }
    }

    [Benchmark]
    public void Add() => TensorPrimitives.Add(_x, _y, _dest);
}
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| Add | .NET 9.0 | 5,984.3 ns | 1.00 |
| Add | .NET 10.0 | 481.7 ns | 0.08 |

The System.Numerics.Tensors library in .NET 10 now also includes stable APIs for tensor types (which use TensorPrimitives in their implementations). This includes a Tensor<T>, ITensor<,>, TensorSpan<T>, and ReadOnlyTensorSpan<T>. One of the really interesting things about these types is that they take advantage of the new C# 14 compound operators feature, and do so for a significant performance benefit. In previous versions of C#, you're able to write custom operators, for example an addition operator:

public class C
{
    public int Value;

    public static C operator +(C left, C right) => new() { Value = left.Value + right.Value };
}

With that type, I can write code like:

C a = new() { Value = 42 };
C b = new() { Value = 84 };
C c = a + b;
Console.WriteLine(c.Value);

which will print out 126. I can also change the code to use a compound operator, +=, like this:

C a = new() { Value = 42 };
C b = new() { Value = 84 };
a += b;
Console.WriteLine(a.Value);

which will also print out 126, because the a += b is always identical to a = a + b… or, at least it was. Now with C# 14, it's possible for a type to not only define a + operator, it can also define a += operator. If a type defines a += operator, it will be used rather than expanding a += b as shorthand for a = a + b. And that has performance ramifications.

A tensor is basically a multidimensional array, and as with arrays, these can be big… really big. If I have a sequence of operations:

Tensor<int> t1 = ...;
Tensor<int> t2 = ...;

for (int i = 0; i < 3; i++)
{
    t1 += t2;
}

and each of those t1 += t2s expands into t1 = t1 + t2, then for each I'm allocating a brand new tensor. If they're big, that gets expensive right quick. But C# 14's new user-defined compound operators, as initially added to the compiler in dotnet/roslyn#78400, enable mutation of the target.

public class C
{
    public int Value;

    public static C operator +(C left, C right) => new() { Value = left.Value + right.Value };

    // C# 14 user-defined compound assignment: an instance operator that mutates the target in place.
    public void operator +=(C other) => Value += other.Value;
}

And that means that such compound operators on the tensor types can just update the target tensor in place rather than allocating a whole new (possibly very large) data structure for each computation. dotnet/runtime#117997 adds all of these compound operators for the tensor types. (Not only are these using C# 14 user-defined compound operators, they're doing so as extension operators, using the new C# 14 extension members feature. Fun!)

Collections

Handling collections of data is the lifeblood of any application, and as such every .NET release tries to eke out even more performance from collections and collection processing.

Enumeration

Iterating through collections is one of the most common things developers do. To make this as efficient as possible, the most prominent collection types in .NET (e.g. List<T>) expose struct-based enumerators (e.g. List<T>.Enumerator), which their public GetEnumerator() methods then return in a strongly-typed manner:

public Enumerator GetEnumerator() => new Enumerator(this);

This is in addition to their IEnumerable<T>.GetEnumerator() implementation, which ends up being implemented via an "explicit" interface implementation ("explicit" means the relevant method provides the interface method implementation but does not show up as a public method on the type itself), e.g. List<T>'s implementation:

IEnumerator<T> IEnumerable<T>.GetEnumerator() =>
    Count == 0 ? SZGenericArrayEnumerator<T>.Empty :
    GetEnumerator();

Directly foreach'ing over the collection allows the C# compiler to bind to the struct-based enumerator, avoiding the enumerator allocation and enabling direct access to the non-virtual methods on the enumerator, rather than working with an IEnumerator<T> and the interface dispatch required to invoke methods on it. That, however, falls apart once the collection is used polymorphically as an IEnumerable<T>; at that point, the IEnumerable<T>.GetEnumerator() is used, which is forced to allocate a new enumerator instance (except for special cases, such as how List<T>'s implementation shown above returns a singleton enumerator when the collection is empty).

Thankfully, as noted earlier in the JIT section, the JIT has been gaining super powers around dynamic PGO, escape analysis, and stack allocation. This means that in many situations, the JIT is now able to see that the most common concrete type for a given call site is a specific enumerator type and generate code specific to when it is that type, devirtualizing the calls, possibly inlining them, and then, if it's able to do so sufficiently, stack allocating the enumerator. With the progress that's been made in .NET 10, this now happens very frequently for arrays and List<T>. While the JIT is able to do this in general regardless of an object's type, the ubiquity of enumeration makes it all that much more important for IEnumerator<T>, so dotnet/runtime#116978 marks IEnumerator<T> as an [Intrinsic], giving the JIT the ability to better reason about it.

However, some enumerators still needed a bit of help. Besides T[], List<T> is the most popular collection type in .NET, and with the JIT changes, many foreachs over an IEnumerable<T> that is actually a List<T> will successfully have the enumerator stack allocated. Awesome. That awesomeness dwindled, however, when trying out different sized lists. Here is a benchmark that enumerates a List<T> typed as IEnumerable<T>, with different lengths, along with benchmark results from early August 2025 (around .NET 10 Preview 7):

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _enumerable;

    [Params(500, 5000, 15000)]
    public int Count { get; set; }

    [GlobalSetup]
    public void Setup() => _enumerable = Enumerable.Range(0, Count).ToList();

    [Benchmark]
    public int Sum()
    {
        int sum = 0;
        foreach (int item in _enumerable) sum += item;
        return sum;
    }
}
| Method | Count | Mean | Allocated |
|---|---|---|---|
| Sum | 500 | 214.1 ns | – |
| Sum | 5000 | 4,767.1 ns | 40 B |
| Sum | 15000 | 13,824.4 ns | 40 B |

Note that for the 500-element List<T>, the allocation column shows that nothing was allocated on the heap, as the enumerator was successfully stack allocated. Fabulous. But then just increasing the size of the list caused it to no longer be stack allocated. Why? The reason for the allocation in the jump from 500 to 5000 has to do with dynamic PGO combined with how List<T>'s enumerator was written oh so many years ago.

List<T>'s enumerator's MoveNext was structured like this:

public bool MoveNext()
{
    if (_version == _list._version && ((uint)_index < (uint)_list._size))
    {
        ... // handle successfully getting next element
        return true;
    }

    return MoveNextRare();
}

private bool MoveNextRare()
{
    ... // handle version mismatch and/or returning false for completed enumeration
}

The Rare in the name gives a hint as to why it's split like this. The MoveNext method was kept as thin as possible for the common case of invoking MoveNext, namely all successful calls that return true; the only time MoveNextRare is needed, other than when the enumerator is misused, is for the final call after all elements have been yielded. That streamlining was done to make MoveNext inlineable. However, a lot has changed since this code was written, making it less important, and the separating out of MoveNextRare has a really interesting interaction with dynamic PGO. One of the things dynamic PGO looks for is whether code is considered hot (used a lot) or cold (used rarely), and that data influences whether a method should be considered for inlining. For shorter lists, dynamic PGO will see MoveNextRare invoked a reasonable number of times and will consider it for inlining. And if all of the calls to the enumerator are inlined, the enumerator instance can avoid escaping the call frame and can then be stack allocated. But once the list length grows to a much larger amount, that MoveNextRare method will start to look really cold, will struggle to be inlined, and will then allow the enumerator instance to escape, preventing it from being stack allocated. dotnet/runtime#118425 recognizes that times have changed since this enumerator was written, with many changes to inlining heuristics and PGO and the like; it undoes the separating out of MoveNextRare and simplifies the enumerator. With how the system works today, the re-combined MoveNext is still inlineable, with or without PGO, and we're able to stack allocate at the larger size.

| Method | Count | Mean | Allocated |
|---|---|---|---|
| Sum | 500 | 221.2 ns | – |
| Sum | 5000 | 2,153.6 ns | – |
| Sum | 15000 | 14,724.9 ns | 40 B |

With that fix, we still had an issue, though. We're now avoiding the allocation at lengths 500 and 5000, but at 15,000 we still see the enumerator being allocated. Now why? This has to do with OSR (on-stack replacement), which was introduced in .NET 7 as a key enabler for allowing tiered compilation to be used with methods containing loops. OSR allows for a method to be recompiled with optimizations even while it's executing, and for an invocation of the method to jump from the unoptimized code for the method to the corresponding location in the newly optimized method. While OSR is awesome, it unfortunately causes some complications here. Once the list gets long enough, an invocation of the tier 0 (unoptimized) method will transition to the OSR optimized method… but OSR methods don't contain dynamic PGO instrumentation (they used to, but it was removed because it led to problems if the instrumented code never got recompiled again and thus suffered regressions due to forever-more running with the instrumentation probes in place). Without the instrumentation, and in particular without the instrumentation for the tail portion of the method (where the enumerator's Dispose method is invoked), even though the enumerator's Dispose is a nop, the JIT may not be able to do the guarded devirtualization that enables the IEnumerator<T>.Dispose to be devirtualized and inlined. Meaning, ironically, that the nop Dispose causes escape analysis to see the enumerator instance escape, such that it can't be stack allocated. Whew.

Thankfully, dotnet/runtime#118461 addresses that in the JIT. Specifically for enumerators, this PR enables dynamic PGO to infer the missing instrumentation based on the earlier probes used with the other enumerator methods, which then enables it to successfully devirtualize and inline Dispose. So, for .NET 10 and the same benchmark, we end up with this lovely sight:

| Method | Count | Mean | Allocated |
|---|---|---|---|
| Sum | 500 | 216.5 ns | – |
| Sum | 5000 | 2,082.4 ns | – |
| Sum | 15000 | 6,525.3 ns | – |

Other types needed a bit of help as well. dotnet/runtime#118467 addresses PriorityQueue<TElement, TPriority>'s enumerator; its enumerator was a port of List<T>'s, and so was changed similarly.

Separately, dotnet/runtime#117328 streamlines Stack<T>'s enumerator type, removing around half the lines of code that previously composed it. The previous enumerator's MoveNext incurred five branches on the way to grabbing most next elements:

  • It first did a version check, comparing the stack’s version number against the enumerator’s captured version number, to ensure the stack hadn’t been mutated since the time the enumerator was grabbed.
  • It then checked to see whether this was the first call to the enumerator, taking one path that lazily initialized some state if it was, and another path assuming already-initialized state if not.
  • Assuming this wasn’t the first call, it then checked whether enumeration had previously ended.
  • Assuming it hadn’t, it then checked whether there’s anything left to enumerate.
  • And finally, it dereferenced the underlying array, incurring a bounds check.

The new implementation cuts that in half. It relies on the enumerator's constructor initializing the current index to the length of the stack, such that each MoveNext call just decrements this value. When the data is exhausted, the index will go negative. This means that we can combine a whole bunch of these checks into a single check:

if ((uint)index < (uint)array.Length)

and we're left with just two branches on the way to reading any element: the version check and whether the index is in bounds. That reduction not only means there's less code to process and fewer branches that might be improperly predicted, it also shrinks the size of the members to the point where they're much more likely to be inlined, which in turn makes it much more likely that the enumerator object can be stack allocated.
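Put together, the new MoveNext has roughly this shape (a simplified sketch rather than the exact code, with hypothetical field names):

public bool MoveNext()
{
    if (_version != _stack._version)
    {
        ThrowInvalidOperationException(); // the stack was mutated during enumeration
    }

    int index = _index - 1; // count down from the top of the stack
    T[] array = _stack._array;
    if ((uint)index < (uint)array.Length) // one check: both in bounds and not exhausted
    {
        _current = array[index];
        _index = index;
        return true;
    }

    _index = -1; // exhausted; subsequent calls keep returning false
    return false;
}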

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Stack<int> _direct = new Stack<int>(Enumerable.Range(0, 10));
    private IEnumerable<int> _enumerable = new Stack<int>(Enumerable.Range(0, 10));

    [Benchmark]
    public int SumDirect()
    {
        int sum = 0;
        foreach (int item in _direct) sum += item;
        return sum;
    }

    [Benchmark]
    public int SumEnumerable()
    {
        int sum = 0;
        foreach (int item in _enumerable) sum += item;
        return sum;
    }
}
| Method | Runtime | Mean | Ratio | Code Size | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|
| SumDirect | .NET 9.0 | 23.317 ns | 1.00 | 331 B | – | NA |
| SumDirect | .NET 10.0 | 4.502 ns | 0.19 | 55 B | – | NA |
| SumEnumerable | .NET 9.0 | 30.893 ns | 1.00 | 642 B | 40 B | 1.00 |
| SumEnumerable | .NET 10.0 | 7.906 ns | 0.26 | 381 B | – | 0.00 |

dotnet/runtime#117341 does something similar, but for Queue<T>. Queue<T> has an interesting complication when compared to Stack<T>, which is that it can wrap around the length of the underlying array. Whereas with Stack<T> we can always start at a particular index and just count down to 0, using that index as the offset into the array, with Queue<T> the starting index can be anywhere in the array, and when walking from that index to the last element, we might need to wrap around back to the beginning. Such wrapping can be accomplished using % array.Length (which is what Queue<T> does on .NET Framework), but such a division operation can be relatively costly. An alternative, since we know the count can never be more than the array's length, is to check whether we've already walked past the end of the array, and if we have, then subtract the array's length to get to the corresponding location from the start of the array. The existing implementation in .NET 9 did just that:

if (index >= array.Length)
{
    index -= array.Length; // wrap around if needed
}

_currentElement = array[index];

That is two branches, one for the check against the array length, and one for the bounds check. The bounds check can’t be eliminated here because the JIT hasn’t seen proof that the index is actually in-bounds and thus needs to be defensive. Instead, we can write it like this:

if ((uint)index < (uint)array.Length)
{
    _currentElement = array[index];
}
else
{
    index -= array.Length;
    _currentElement = array[index];
}

An enumeration of a queue can logically be split into two parts: the elements from the head index to the end of the array, and the elements from the beginning of the array to the tail. All of the former now fall into the first block, which incurs only one branch because the JIT can use the knowledge gleaned from the comparison to eliminate the bounds check. It only incurs a bounds check when in the second portion of the enumeration.

We can more easily visualize the branch savings by using BenchmarkDotNet's HardwareCounters diagnoser, asking it to track HardwareCounter.BranchInstructions (this diagnoser only works on Windows). Note here, as well, that the changes not only improve throughput, they also enable the boxed enumerator to be stack allocated.

// This benchmark was run on Windows for the HardwareCounters diagnoser to work.
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using BenchmarkDotNet.Diagnosers;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HardwareCounters(HardwareCounter.BranchInstructions)]
[MemoryDiagnoser(displayGenColumns: false)]
[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Queue<int> _direct;
    private IEnumerable<int> _enumerable;

    [GlobalSetup]
    public void Setup()
    {
        _direct = new Queue<int>(Enumerable.Range(0, 10));
        for (int i = 0; i < 5; i++)
        {
            _direct.Enqueue(_direct.Dequeue());
        }

        _enumerable = _direct;
    }

    [Benchmark]
    public int SumDirect()
    {
        int sum = 0;
        foreach (int item in _direct) sum += item;
        return sum;
    }

    [Benchmark]
    public int SumEnumerable()
    {
        int sum = 0;
        foreach (int item in _enumerable) sum += item;
        return sum;
    }
}
| Method | Runtime | Mean | Ratio | BranchInstructions/Op | Code Size | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|
| SumDirect | .NET 9.0 | 24.340 ns | 1.00 | 79 | 251 B | – | NA |
| SumDirect | .NET 10.0 | 7.192 ns | 0.30 | 37 | 96 B | – | NA |
| SumEnumerable | .NET 9.0 | 30.695 ns | 1.00 | 103 | 531 B | 40 B | 1.00 |
| SumEnumerable | .NET 10.0 | 8.672 ns | 0.28 | 50 | 324 B | – | 0.00 |

ConcurrentDictionary<TKey, TValue> also gets in on the fun. The dictionary is implemented as a collection of "buckets", each of which is a linked list of entries. It had a fairly complicated enumerator for processing these structures, relying on jumping between cases of a switch statement, e.g.

switch (_state)
{
    case StateUninitialized:
        ... // Initialize on first MoveNext.
        goto case StateOuterloop;

    case StateOuterloop:
        // Check if there are more buckets in the dictionary to enumerate.
        if ((uint)i < (uint)buckets.Length)
        {
            // Move to the next bucket.
            ...
            goto case StateInnerLoop;
        }
        goto default;

    case StateInnerLoop:
        ... // Yield elements from the current bucket.
        goto case StateOuterloop;

    default:
        // Done iterating.
        ...
}

If you squint, there are nested loops here, where we're enumerating each bucket and, for each bucket, enumerating its contents. With how this is structured, however, from the JIT's perspective we could enter those loops from any of those cases, depending on the current value of _state. That produces something referred to as an "irreducible loop," which is a loop that has multiple possible entry points. Imagine you have:

A:
if (someCondition) goto B;
...
B:
if (someOtherCondition) goto A;

Labels A and B form a loop, but that loop can be entered by jumping to either A or B. If the compiler could prove that this loop were only ever enterable from A, or only ever enterable from B, then the loop would be "reducible." Irreducible loops are much more complex than reducible loops for a compiler to deal with, as they have more complex control and data flow and in general are harder to analyze. dotnet/runtime#116949 rewrites the MoveNext method to be a more typical while loop, which is not only easier to read and maintain, it's also reducible and more efficient; and because it's more streamlined, it's also inlineable and enables possible stack allocation.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.Concurrent;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private ConcurrentDictionary<int, int> _ints = new(Enumerable.Range(0, 1000).ToDictionary(i => i, i => i));

    [Benchmark]
    public int EnumerateInts()
    {
        int sum = 0;
        foreach (var kvp in _ints) sum += kvp.Value;
        return sum;
    }
}
| Method | Runtime | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|---|
| EnumerateInts | .NET 9.0 | 4,232.8 ns | 1.00 | 56 B | 1.00 |
| EnumerateInts | .NET 10.0 | 664.2 ns | 0.16 | – | 0.00 |

LINQ

All of these examples show enumerating collections using a foreach loop, and while that's obviously incredibly common, so too is using LINQ (Language Integrated Query) to enumerate and process collections. For in-memory collections, LINQ provides literally hundreds of extension methods for performing maps, filters, sorts, and a plethora of other operations over enumerables. It is incredibly handy, is thus used everywhere, and is thus important to optimize. Every release of .NET has seen improvements to LINQ, and that continues in .NET 10.

Most prominent from a performance perspective in this release are the changes to Contains. As discussed in depth in Deep .NET: Deep Dive on LINQ with Stephen Toub and Scott Hanselman and Deep .NET: An even DEEPER Dive into LINQ with Stephen Toub and Scott Hanselman, the LINQ methods are able to pass information between them by using specialized internal IEnumerable<T> implementations. When you call Select, that might return an ArraySelectIterator<TSource, TResult> or an IListSelectIterator<TSource, TResult> or an IListSkipTakeSelectIterator<TSource, TResult> or one of any number of other types. Each of these types has fields that carry information about the source (e.g. the IListSkipTakeSelectIterator<TSource, TResult> has fields not only for the IList<TSource> source and the Func<TSource, TResult> selector, but also for the tracked min and max bounds based on previous Skip and Take calls), and they have overrides of virtual methods that allow for various operations to be specialized. This means sequences of LINQ methods can be optimized. For example, source.Where(...).Select(...) is optimized a) to combine both the filter and the map delegates into a single IEnumerable<T>, thus removing the overhead of an extra layer of interface dispatch, and b) to perform operations specific to the original source data type (e.g. if source was an array, the processing can be done directly on that array rather than via IEnumerator<T>).

Many of these optimizations make the most sense when a method returns an IEnumerable<T> that happens to be the result of a LINQ query. The producer of that method doesn't know how the consumer will be consuming it, and the consumer doesn't know the details of how the producer produced it. But since the LINQ methods flow context via the concrete implementations of IEnumerable<T>, significant optimizations are possible for interesting combinations of consumer and producer methods. For example, let's say a producer of an IEnumerable<T> decides they want to always return data in ascending order, so they do:

public static IEnumerable<T> GetData()
{
    ...
    return data.OrderBy(s => s.CreatedAt);
}

But as it turns out, the consumer won’t be looking at all of the elements, and instead just wants the first:

T value = GetData().First();

LINQ optimizes this by having the enumerable returned from OrderBy provide a specialized implementation of First/FirstOrDefault: it doesn't need to perform an O(N log N) sort (or allocate a lot of memory to hold all of the keys); it can instead just do an O(N) search for the smallest element in the source, because the smallest element would be the first to be yielded from OrderBy.
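Conceptually, that specialized First boils down to a single min-scan, as in this sketch (not the actual internals, which also deal with custom comparers and other details):

static TSource FirstOfOrdered<TSource, TKey>(IEnumerable<TSource> source, Func<TSource, TKey> keySelector)
{
    using IEnumerator<TSource> e = source.GetEnumerator();
    if (!e.MoveNext())
    {
        throw new InvalidOperationException("Sequence contains no elements");
    }

    // One O(N) pass tracking the minimum by key replaces the O(N log N) sort.
    TSource best = e.Current;
    TKey bestKey = keySelector(best);
    while (e.MoveNext())
    {
        TKey key = keySelector(e.Current);
        if (Comparer<TKey>.Default.Compare(key, bestKey) < 0)
        {
            best = e.Current;
            bestKey = key;
        }
    }

    return best;
}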

Contains is ripe for these kinds of optimizations as well, e.g. OrderBy, Distinct, and Reverse all entail non-trivial processing and/or allocation, but if followed by a Contains, all that work can be skipped, as the Contains can just search the source directly. With dotnet/runtime#112684, this set of optimizations is extended to Contains, with almost 30 specialized implementations of Contains across the various iterator specializations.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _source = Enumerable.Range(0, 1000).ToArray();

    [Benchmark]
    public bool AppendContains() => _source.Append(100).Contains(999);

    [Benchmark]
    public bool ConcatContains() => _source.Concat(_source).Contains(999);

    [Benchmark]
    public bool DefaultIfEmptyContains() => _source.DefaultIfEmpty(42).Contains(999);

    [Benchmark]
    public bool DistinctContains() => _source.Distinct().Contains(999);

    [Benchmark]
    public bool OrderByContains() => _source.OrderBy(x => x).Contains(999);

    [Benchmark]
    public bool ReverseContains() => _source.Reverse().Contains(999);

    [Benchmark]
    public bool UnionContains() => _source.Union(_source).Contains(999);

    [Benchmark]
    public bool SelectManyContains() => _source.SelectMany(x => _source).Contains(999);

    [Benchmark]
    public bool WhereSelectContains() => _source.Where(x => true).Select(x => x).Contains(999);
}
| Method | Runtime | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|---|
| AppendContains | .NET 9.0 | 2,931.97 ns | 1.00 | 88 B | 1.00 |
| AppendContains | .NET 10.0 | 52.06 ns | 0.02 | 56 B | 0.64 |
| ConcatContains | .NET 9.0 | 3,065.17 ns | 1.00 | 88 B | 1.00 |
| ConcatContains | .NET 10.0 | 54.58 ns | 0.02 | 56 B | 0.64 |
| DefaultIfEmptyContains | .NET 9.0 | 39.21 ns | 1.00 | – | NA |
| DefaultIfEmptyContains | .NET 10.0 | 32.89 ns | 0.84 | – | NA |
| DistinctContains | .NET 9.0 | 16,967.31 ns | 1.000 | 58656 B | 1.000 |
| DistinctContains | .NET 10.0 | 46.72 ns | 0.003 | 64 B | 0.001 |
| OrderByContains | .NET 9.0 | 12,884.28 ns | 1.000 | 12280 B | 1.000 |
| OrderByContains | .NET 10.0 | 50.14 ns | 0.004 | 88 B | 0.007 |
| ReverseContains | .NET 9.0 | 479.59 ns | 1.00 | 4072 B | 1.00 |
| ReverseContains | .NET 10.0 | 51.80 ns | 0.11 | 48 B | 0.01 |
| UnionContains | .NET 9.0 | 16,910.57 ns | 1.000 | 58664 B | 1.000 |
| UnionContains | .NET 10.0 | 55.56 ns | 0.003 | 72 B | 0.001 |
| SelectManyContains | .NET 9.0 | 2,950.64 ns | 1.00 | 192 B | 1.00 |
| SelectManyContains | .NET 10.0 | 60.42 ns | 0.02 | 128 B | 0.67 |
| WhereSelectContains | .NET 9.0 | 1,782.05 ns | 1.00 | 104 B | 1.00 |
| WhereSelectContains | .NET 10.0 | 260.25 ns | 0.15 | 104 B | 1.00 |

LINQ in .NET 10 also gains some new methods, including Sequence and Shuffle. While the primary purpose of these new methods is not performance, they can have a meaningful impact on it, due to how they've been implemented and how they integrate with the rest of the optimizations in LINQ. Take Sequence, for example. Sequence is similar to Range, in that it's a source for numbers:

public static IEnumerable<T> Sequence<T>(T start, T endInclusive, T step) where T : INumber<T>

Whereas Range only works with int and produces a contiguous series of non-overflowing numbers starting at the initial value, Sequence works with any INumber<T>, supports step values other than 1 (including negative values), and allows for wrapping around T's maximum or minimum. However, when appropriate (e.g. step is 1), Sequence will try to utilize Range's implementation, which has internally been updated to work with any T : INumber<T>, even though its public API is still tied to int. That means that all of the optimizations afforded to Range propagate to Sequence<T>.

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private List<short> _values = new();

    [Benchmark(Baseline = true)]
    public void Fill1()
    {
        _values.Clear();
        for (short i = 42; i <= 1042; i++)
        {
            _values.Add(i);
        }
    }

    [Benchmark]
    public void Fill2()
    {
        _values.Clear();
        _values.AddRange(Enumerable.Sequence<short>(42, 1042, 1));
    }
}
| Method | Mean | Ratio |
|---|---|---|
| Fill1 | 1,479.99 ns | 1.00 |
| Fill2 | 37.42 ns | 0.03 |

My favorite new LINQ method, though, is Shuffle (introduced in dotnet/runtime#112173), in part because it's very handy, but in part because of its implementation and performance focus. The purpose of Shuffle is to randomize the source input, and logically, it's akin to a very simple implementation:

public static IEnumerable<T> Shuffle<T>(IEnumerable<T> source)
{
    T[] arr = source.ToArray();
    Random.Shared.Shuffle(arr);
    foreach (T item in arr) yield return item;
}

Worst case, this implementation is effectively what's in LINQ. Just as in the worst case OrderBy needs to buffer up the whole input because it's possible any item might be the smallest and thus need to be yielded first, Shuffle similarly needs to support the possibility that the last element should probabilistically be yielded first. However, there are a variety of special cases in the implementation that allow it to perform significantly better than such a hand-rolled Shuffle implementation you might be using today.

First, Shuffle has some of the same characteristics as OrderBy, in that they're both creating permutations of the input. That means that many of the ways we can specialize subsequent operations on the result of an OrderBy also apply to Shuffle. For example, Shuffle.First on an IList<T> can just select an element from the list at random. Shuffle.Count can just count the underlying source, since the order of the elements is irrelevant to the result. Shuffle.Contains can just perform the contains on the underlying source. Etc. But my two favorite sequences are Shuffle.Take and Shuffle.Take.Contains.

Shuffle.Take provides an interesting optimization opportunity: whereas with Shuffle by itself we need to build the whole shuffled sequence, with a Shuffle followed immediately by a Take(N), we only need to sample N items from the source. We still need those N items to be a uniformly random distribution, akin to what we'd get if we performed the buffering shuffle and then selected the first N items in the resulting array, but we can do so using an algorithm that allows us to avoid buffering everything. We need an algorithm that will let us iterate through the source data once, picking out elements as we go, and only ever buffering N items at a time. Enter "reservoir sampling." I previously discussed reservoir sampling in Performance Improvements in .NET 8, as it's employed by the JIT as part of its dynamic PGO implementation, and we can use the algorithm here in Shuffle as well. Reservoir sampling provides exactly the single-pass, low-memory path we want: initialize a "reservoir" (an array) with the first N items, then as we scan the rest of the sequence, probabilistically overwrite one of the elements in our reservoir with the current item. The algorithm ensures that every element ends up in the reservoir with equal probability, yielding the same distribution as fully shuffling and taking N, but using only O(N) space and only making a single pass over an otherwise unknown-length source.
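Here's a minimal sketch of the technique (assuming Random.Shared as the randomness source; the actual LINQ implementation differs in details, including also randomizing the order of the reservoir itself):

static T[] ReservoirSample<T>(IEnumerable<T> source, int n)
{
    T[] reservoir = new T[n];
    int seen = 0;

    foreach (T item in source)
    {
        if (seen < n)
        {
            reservoir[seen] = item; // fill the reservoir with the first n items
        }
        else
        {
            // Overwrite a random slot with probability n / (seen + 1), which leaves
            // every element of the source with an equal chance of being in the reservoir.
            int j = Random.Shared.Next(seen + 1);
            if (j < n)
            {
                reservoir[j] = item;
            }
        }

        seen++;
    }

    if (seen < n)
    {
        Array.Resize(ref reservoir, seen); // the source had fewer than n elements
    }

    return reservoir;
}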

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _source = Enumerable.Range(1, 1000).ToList();

    [Benchmark(Baseline = true)]
    public List<int> ShuffleTakeManual() => ShuffleManual(_source).Take(10).ToList();

    [Benchmark]
    public List<int> ShuffleTakeLinq() => _source.Shuffle().Take(10).ToList();

    private static IEnumerable<int> ShuffleManual(IEnumerable<int> source)
    {
        int[] arr = source.ToArray();
        Random.Shared.Shuffle(arr);
        foreach (var item in arr)
        {
            yield return item;
        }
    }
}
| Method | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|
| ShuffleTakeManual | 4.150 us | 1.00 | 4232 B | 1.00 |
| ShuffleTakeLinq | 3.801 us | 0.92 | 192 B | 0.05 |

Shuffle.Take.Contains is even more fun. We now have a probability problem that reads like a brain teaser or an SAT question: "I have totalCount items, of which equalCount match my target value, and we're going to pick takeCount items at random. What is the probability that at least one of those takeCount items is one of the equalCount items?" This is called a hypergeometric distribution, and we can use an implementation of it for Shuffle.Take.Contains.

To make this easier to reason about, let’s talk candy. Imagine you have a jar of 100 jelly beans, of which 20 are your favorite flavor, Watermelon, and you’re going to pick 5 of the 100 beans at random; what are the chances you get at least one Watermelon? To solve this, we could reason through all the different ways we might get 1, 2, 3, 4, or 5 Watermelons, but instead, let’s do the opposite and think through how likely it is that we don’t get any (sad panda):

  • The chance that our first pick isn't a Watermelon is the number of non-Watermelons divided by the total number of beans, so (100-20)/100.
  • Once we've picked a bean out of the jar, we're not putting it back, so the chance that our second pick isn't a Watermelon is now (99-20)/99 (we have one fewer bean, but our first pick wasn't a Watermelon, so there's the same number of Watermelons as there was before).
  • For a third pick, it's now (98-20)/98.
  • And so on.

After five rounds, we end up with (80/100) * (79/99) * (78/98) * (77/97) * (76/96), which is ~32%. If the chances I don't get a Watermelon are ~32%, then the chances I do get a Watermelon are ~68%. Jelly beans aside, that's our algorithm:

double probOfDrawingZeroMatches = 1;
for (long i = 0; i < _takeCount; i++)
{
    probOfDrawingZeroMatches *= (double)(totalCount - i - equalCount) / (totalCount - i);
}

return Random.Shared.NextDouble() > probOfDrawingZeroMatches;

The net effect is that we can compute the answer much more efficiently than with a naive implementation that shuffles, then separately takes, then separately searches for the target.

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _source = Enumerable.Range(1, 1000).ToList();

    [Benchmark(Baseline = true)]
    public bool ShuffleTakeContainsManual() => ShuffleManual(_source).Take(10).Contains(2000);

    [Benchmark]
    public bool ShuffleTakeContainsLinq() => _source.Shuffle().Take(10).Contains(2000);

    private static IEnumerable<int> ShuffleManual(IEnumerable<int> source)
    {
        int[] arr = source.ToArray();
        Random.Shared.Shuffle(arr);
        foreach (var item in arr)
        {
            yield return item;
        }
    }
}
| Method | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|
| ShuffleTakeContainsManual | 3,900.99 ns | 1.00 | 4136 B | 1.00 |
| ShuffleTakeContainsLinq | 79.12 ns | 0.02 | 96 B | 0.02 |

LINQ in .NET 10 also sports some new methods that are about performance (at least in part), in particular LeftJoin and RightJoin, from dotnet/runtime#110872. I say these are about performance because it's already possible to achieve the left and right join semantics using existing LINQ surface area; the new methods just do it more efficiently.

Enumerable.Join implements an "inner join," meaning only matching pairs from the two supplied collections appear in the output. For example, this code, which is joining based on the first letter in each string:

IEnumerable<string> left = ["apple", "banana", "cherry", "date", "grape", "honeydew"];
IEnumerable<string> right = ["aardvark", "dog", "elephant", "goat", "gorilla", "hippopotamus"];

foreach (string result in left.Join(right, s => s[0], s => s[0], (s1, s2) => $"{s1} {s2}"))
{
    Console.WriteLine(result);
}

outputs:

apple aardvark
date dog
grape goat
grape gorilla
honeydew hippopotamus

In contrast, a “left join” (also known as a “left outer join”) would yield the following:

apple aardvark
banana
cherry
date dog
grape goat
grape gorilla
honeydew hippopotamus

Note that it has all of the same output as with the "inner join," except it has at least one row for every left element, even if there's no matching element in right. And then a "right join" (also known as a "right outer join") would yield the following:

apple aardvark
date dog
elephant
grape goat
grape gorilla
honeydew hippopotamus

Again, all the same output as with the "inner join," except it has at least one row for every right element, even if there's no matching element in left.

Prior to .NET 10, there was no LeftJoin or RightJoin, but their semantics could be achieved using a combination of GroupJoin, SelectMany, and DefaultIfEmpty:

public static IEnumerable<TResult> LeftJoin<TOuter, TInner, TKey, TResult>(
    this IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
    Func<TOuter, TKey> outerKeySelector, Func<TInner, TKey> innerKeySelector,
    Func<TOuter, TInner?, TResult> resultSelector) =>
    outer
    .GroupJoin(inner, outerKeySelector, innerKeySelector, (o, inners) => (o, inners))
    .SelectMany(x => x.inners.DefaultIfEmpty(), (x, i) => resultSelector(x.o, i));

GroupJoin creates a group for each outer ("left") element, where the group contains all matching items from inner ("right"). We can flatten those results by using SelectMany, such that we end up with an output for each pairing, using DefaultIfEmpty to ensure that there's always at least a default inner element to pair. We can do the exact same thing for a RightJoin; in fact, we can implement the right join just by delegating to the left join and flipping all the arguments:

public static IEnumerable<TResult> RightJoin<TOuter, TInner, TKey, TResult>(
    this IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
    Func<TOuter, TKey> outerKeySelector, Func<TInner, TKey> innerKeySelector,
    Func<TOuter, TInner?, TResult> resultSelector) =>
    inner.LeftJoin(outer, innerKeySelector, outerKeySelector, (i, o) => resultSelector(o, i));

Thankfully, you no longer need to do that yourself, and that isn't how the new LeftJoin and RightJoin methods are implemented in .NET 10. We can see the difference with a benchmark:

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Linq;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> Outer { get; } = Enumerable.Sequence(0, 1000, 2);
    private IEnumerable<int> Inner { get; } = Enumerable.Sequence(0, 1000, 3);

    [Benchmark(Baseline = true)]
    public void LeftJoin_Manual() =>
        ManualLeftJoin(Outer, Inner, o => o, i => i, (o, i) => o + i).Count();

    [Benchmark]
    public int LeftJoin_Linq() =>
        Outer.LeftJoin(Inner, o => o, i => i, (o, i) => o + i).Count();

    private static IEnumerable<TResult> ManualLeftJoin<TOuter, TInner, TKey, TResult>(
        IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
        Func<TOuter, TKey> outerKeySelector, Func<TInner, TKey> innerKeySelector,
        Func<TOuter, TInner?, TResult> resultSelector) =>
        outer
        .GroupJoin(inner, outerKeySelector, innerKeySelector, (o, inners) => (o, inners))
        .SelectMany(x => x.inners.DefaultIfEmpty(), (x, i) => resultSelector(x.o, i));
}
| Method | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|
| LeftJoin_Manual | 29.02 us | 1.00 | 65.84 KB | 1.00 |
| LeftJoin_Linq | 15.23 us | 0.53 | 36.95 KB | 0.56 |

Moving on from new methods, existing methods were also improved in other ways. dotnet/runtime#112401 from @miyaji255 improved the performance of ToArray and ToList following Skip and/or Take calls. In the specialized iterator implementation used for Take and Skip, this PR simply checks in the ToList and ToArray implementations whether the source is something from which we can easily get a ReadOnlySpan<T> (namely a T[] or List<T>). If it is, rather than copying elements one by one into the destination, it can slice the retrieved span and use its CopyTo, which, depending on the T, may even be vectorized.
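The gist, as a sketch (a hypothetical helper rather than the internal code): once a span is in hand, one slice plus a bulk copy replaces the element-by-element loop.

using System.Runtime.InteropServices;

static List<T> SkipTakeToList<T>(T[] source, int skip, int take)
{
    // Slice once, then bulk-copy; CopyTo may be vectorized depending on T.
    // (Assumes skip <= source.Length, for brevity.)
    ReadOnlySpan<T> slice = source.AsSpan(skip, Math.Min(take, source.Length - skip));

    List<T> list = new List<T>(slice.Length);
    CollectionsMarshal.SetCount(list, slice.Length);
    slice.CopyTo(CollectionsMarshal.AsSpan(list));
    return list;
}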

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private readonly IEnumerable<string> _source = Enumerable.Range(0, 1000).Select(i => i.ToString()).ToArray();

    [Benchmark]
    public List<string> SkipTakeToList() => _source.Skip(200).Take(200).ToList();
}
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| SkipTakeToList | .NET 9.0 | 1,218.9 ns | 1.00 |
| SkipTakeToList | .NET 10.0 | 257.4 ns | 0.21 |

LINQ in .NET 10 also sees a few notable enhancements for Native AOT. The code for LINQ has grown over time, as all of these various specializations have found their way into the codebase. These optimizations are generally implemented by deriving specialized iterators from a base Iterator<T>, which has a bunch of abstract or virtual methods for performing the subsequent operation (e.g. Contains). With Native AOT, any use of a method like Enumerable.Contains then prevents the corresponding implementations on all of those specializations from being trimmed away, leading to a non-trivial increase in assembly code size. As such, years ago multiple builds of System.Linq.dll were introduced into the dotnet/runtime build system: one focused on speed, and one focused on size. When building System.Linq.dll to go with coreclr, you'd end up with the speed-optimized build that has all of these specializations. When building System.Linq.dll to go with other flavors, like Native AOT, you'd instead get the size-optimized build, which eschews many of the LINQ optimizations that have been added in the last decade. And as this was a build-time decision, developers using one of these platforms didn't get a choice; as you learn in kindergarten, "you get what you get and you don't get upset." Now in .NET 10, if you do forget what you learned in kindergarten and you do get upset, you have recourse: thanks to dotnet/runtime#111743 and dotnet/runtime#109978, this setting is now a feature switch rather than a build-time configuration. So, in particular if you're publishing for Native AOT and you'd prefer all the speed-focused optimizations, you can add <UseSizeOptimizedLinq>false</UseSizeOptimizedLinq> to your project file and be happy.
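For example, in the project file of an app being published for Native AOT:

<PropertyGroup>
  <!-- Opt back in to the speed-optimized LINQ build, at the cost of some extra code size. -->
  <UseSizeOptimizedLinq>false</UseSizeOptimizedLinq>
</PropertyGroup>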

However, the need for that switch is now also reduced significantly by dotnet/runtime#118156. When this size/speed split was previously introduced into the System.Linq.dll build, all of these specializations were eschewed, without a lot of analysis of the tradeoffs involved; as this was focused on optimizing for size, any specialized overrides were removed, no matter how much space they actually saved. Many of those savings turned out to be minimal, however, and in a variety of situations, the throughput cost was significant. This PR brings back some of the more impactful specializations, where the throughput gains significantly outweigh the relatively minimal size cost.

Frozen Collections

The FrozenDictionary<TKey, TValue> and FrozenSet<T> collection types were introduced in .NET 8 as collections optimized for the common scenario of creating a long-lived collection that's then read from a lot. They spend more time at construction in exchange for faster read operations. Under the covers, this is achieved in part by having specializations of the implementations that are optimized for different types of data or shapes of input. .NET 9 improved upon the implementations, and .NET 10 takes it even further.

FrozenDictionary<TKey, TValue> exerts a lot of energy for TKey as string, as that is such a common use case. It also has specializations for TKey as Int32. dotnet/runtime#111886 and dotnet/runtime#112298 extend that further by adding specializations for when TKey is any primitive integral type that's the size of an int or smaller (e.g. byte, char, ushort, etc.), as well as enums backed by such primitives (which represent the vast, vast majority of enums used in practice). In particular, they handle the common case where these values are densely packed, in which case they implement the dictionary as an array that can be indexed into based on the integer's value. This makes for a very efficient lookup, while not consuming too much additional space: it's only used when the values are dense and thus won't waste many empty slots in the array.
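The core trick looks something like this sketch (a hypothetical standalone type, much simpler than the actual FrozenDictionary specializations):

using System.Linq;

sealed class DenseInt32Lookup<TValue>
{
    private readonly int _min;
    private readonly TValue[] _values;
    private readonly bool[] _present;

    public DenseInt32Lookup(IReadOnlyDictionary<int, TValue> source)
    {
        _min = source.Keys.Min();
        int max = source.Keys.Max();

        // Only worthwhile when the keys are densely packed; otherwise the array wastes space.
        _values = new TValue[max - _min + 1];
        _present = new bool[_values.Length];
        foreach (KeyValuePair<int, TValue> pair in source)
        {
            _values[pair.Key - _min] = pair.Value;
            _present[pair.Key - _min] = true;
        }
    }

    public bool TryGetValue(int key, out TValue value)
    {
        // A lookup is just a subtraction and an array index.
        uint index = (uint)(key - _min);
        if (index < (uint)_present.Length && _present[index])
        {
            value = _values[index];
            return true;
        }

        value = default!;
        return false;
    }
}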

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.Frozen;
using System.Net;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "status")]
public partial class Tests
{
    private static readonly FrozenDictionary<HttpStatusCode, string> s_statusDescriptions =
        Enum.GetValues<HttpStatusCode>().Distinct()
            .ToFrozenDictionary(status => status, status => status.ToString());

    [Benchmark]
    [Arguments(HttpStatusCode.OK)]
    public string Get(HttpStatusCode status) => s_statusDescriptions[status];
}
Method | Runtime   | Mean      | Ratio
------ | --------- | --------- | -----
Get    | .NET 9.0  | 2.0660 ns | 1.00
Get    | .NET 10.0 | 0.8735 ns | 0.42

Both FrozenDictionary<TKey, TValue> and FrozenSet<T> also improve with regards to the alternate lookup functionality introduced in .NET 9. Alternate lookups are a mechanism that enables getting a proxy for a dictionary or set that can be keyed with a different type from TKey, most commonly a ReadOnlySpan<char> when TKey is string. As noted, both FrozenDictionary<TKey, TValue> and FrozenSet<T> achieve their goals by having different implementations based on the nature of the indexed data, and that specialization is achieved by virtual methods that derived specializations override. The JIT is typically able to minimize the costs of such virtuals, especially if the collections are stored in static readonly fields. However, the alternate lookup support complicated things, as it introduced a virtual method with a generic method parameter (the alternate key type), otherwise known as a GVM. “GVM” might as well be a four-letter word in performance circles, as they’re hard for the runtime to optimize. The purpose of these alternate lookups is primarily performance, but the use of a GVM significantly reduced those performance gains. dotnet/runtime#108732 from @andrewjsaid addresses this by changing the frequency with which a GVM needs to be invoked. Rather than the lookup operation itself being a generic virtual method, the PR introduces a separate generic virtual method that retrieves a delegate for performing the lookup; the retrieval of that delegate still incurs GVM penalties, but once the delegate is retrieved, it can be cached, and invoking it does not incur said overheads. This results in measurable improvements in throughput.
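
To make the shape of that fix concrete, here’s a minimal sketch of the pattern, with hypothetical type and member names rather than the actual FrozenDictionary internals:

public abstract class FrozenCollectionInternals<T>
{
    // Before: a generic virtual method (GVM), paying the GVM penalty on every lookup.
    public abstract bool Contains<TAlternate>(TAlternate item);

    // After: a GVM is invoked only once, to fetch a delegate. The alternate lookup
    // caches the returned delegate and invokes it for each operation, keeping the
    // GVM cost off of the hot path.
    public abstract Func<TAlternate, bool> GetContainsDelegate<TAlternate>();
}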

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.Frozen;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly FrozenDictionary<string, int> s_d = new Dictionary<string, int>
    {
        ["one"] = 1, ["two"] = 2, ["three"] = 3, ["four"] = 4, ["five"] = 5, ["six"] = 6,
        ["seven"] = 7, ["eight"] = 8, ["nine"] = 9, ["ten"] = 10, ["eleven"] = 11, ["twelve"] = 12,
    }.ToFrozenDictionary();

    [Benchmark]
    public int Get()
    {
        var alternate = s_d.GetAlternateLookup<ReadOnlySpan<char>>();
        return
            alternate["one"] + alternate["two"] + alternate["three"] + alternate["four"] + alternate["five"] +
            alternate["six"] + alternate["seven"] + alternate["eight"] + alternate["nine"] + alternate["ten"] +
            alternate["eleven"] + alternate["twelve"];
    }
}
Method | Runtime   | Mean      | Ratio
------ | --------- | --------- | -----
Get    | .NET 9.0  | 133.46 ns | 1.00
Get    | .NET 10.0 | 81.39 ns  | 0.61

BitArray

BitArray provides support for exactly what its name says, a bit array. You create it with the desired number of values and can then read and write a bool for each index, turning the corresponding bit to 1 or 0 accordingly. It also provides a variety of helper operations for processing the whole bit array, such as for Boolean logic operations like And and Not. Where possible, those operations are vectorized, taking advantage of SIMD to process many bits per instruction.
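
As a quick refresher on that surface area (nothing new here, just the existing API):

BitArray bits = new(8);          // eight bits, all initially false (0)
bits[3] = true;                  // set bit 3 via the indexer...
bool isSet = bits.Get(3);        // ...or read/write via Get/Set
bits.And(new BitArray(8).Not()); // whole-array Boolean ops, vectorized where possible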

However, for situations where you want to write custom manipulations of the bits, you only have two options: use the indexer (or the corresponding Get and Set methods), which means multiple instructions are required to process each bit, or use CopyTo to extract all of the bits to a separate array, which means you need to allocate (or at least rent) such an array and pay for the memory copy before you can then manipulate the bits. There’s also not a great way to then copy those bits back if you wanted to manipulate the BitArray in place.

dotnet/runtime#116308 adds a CollectionsMarshal.AsBytes(BitArray) method that returns a Span<byte> directly referencing the BitArray‘s underlying storage. This provides a very efficient way to get access to all the bits, which then makes it possible to write (or reuse) vectorized algorithms. Say, for example, you wanted to use a BitArray to represent a binary embedding (an “embedding” is a vector representation of the semantic meaning of some data, basically an array of numbers, each one corresponding to some aspect of the data; a binary embedding uses a single bit for each number). To determine how semantically similar two inputs are, you get an embedding for each and then perform a distance or similarity calculation on the two. For binary embeddings, a common distance metric is “Hamming distance,” which effectively lines up the bits and tells you the number of positions that have different values, e.g. 0b1100 and 0b1010 have a Hamming distance of 2. Helpfully, TensorPrimitives.HammingBitDistance provides an implementation of this, accepting two ReadOnlySpan<T>s and computing the number of bits that differ between them. With CollectionsMarshal.AsBytes, we can now utilize that helper directly with the contents of BitArrays, both saving us the effort of having to write it manually and benefiting from any optimizations in HammingBitDistance itself.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections;
using System.Numerics.Tensors;
using System.Runtime.InteropServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private BitArray _bits1, _bits2;

    [GlobalSetup]
    public void Setup()
    {
        Random r = new(42);
        byte[] bytes = new byte[128];
        r.NextBytes(bytes);
        _bits1 = new BitArray(bytes);
        r.NextBytes(bytes);
        _bits2 = new BitArray(bytes);
    }

    [Benchmark(Baseline = true)]
    public long HammingDistanceManual()
    {
        long distance = 0;
        for (int i = 0; i < _bits1.Length; i++)
        {
            if (_bits1[i] != _bits2[i])
            {
                distance++;
            }
        }
        return distance;
    }

    [Benchmark]
    public long HammingDistanceTensorPrimitives() =>
        TensorPrimitives.HammingBitDistance(
            CollectionsMarshal.AsBytes(_bits1),
            CollectionsMarshal.AsBytes(_bits2));
}
Method                          | Mean        | Ratio
------------------------------- | ----------- | -----
HammingDistanceManual           | 1,256.72 ns | 1.00
HammingDistanceTensorPrimitives | 63.29 ns    | 0.05

The main motivation for this PR was adding the AsBytes method, but doing so triggered a series of other modifications that themselves help with performance. For example, rather than backing the BitArray with an int[] as was previously done, it’s now backed by a byte[], and rather than reading elements one by one in the byte[]-based constructor, vectorized copy operations are now being used (they were already being used and continue to be used in the int[]-based constructor).

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _byteData = Enumerable.Range(0, 512).Select(i => (byte)i).ToArray();

    [Benchmark]
    public BitArray ByteCtor() => new BitArray(_byteData);
}
Method   | Runtime   | Mean      | Ratio
-------- | --------- | --------- | -----
ByteCtor | .NET 9.0  | 160.10 ns | 1.00
ByteCtor | .NET 10.0 | 83.07 ns  | 0.52

Other Collections

There are a variety of other notable improvements in collections:

  • List<T>. dotnet/runtime#107683 from @karakasa builds on a change that was made in .NET 9 to improve the performance of using InsertRange on a List<T> to insert a ReadOnlySpan<T>. When a full List<T> is appended to, the typical process is that a new, larger array is allocated, all of the existing elements are copied over (one array copy), and then the new element is stored into the array in the next available slot. If that same growth routine is used when inserting rather than appending an element, you can end up copying some elements twice: you first copy all of the elements into the new array, and then, to handle the insert, you may again need to copy some of the elements you already copied, shifting them to make room for the insertion at the new location. In the extreme, if you’re inserting at index 0, you copy all of the elements into the new array, and then you copy all of the elements again to shift them by one slot. The same applies when inserting a range of elements, so with this PR, rather than first copying over all of the elements and then shifting a subset, List<T> now grows by copying the elements above and below the target range for the insertion to their correct locations and then fills in the target range with the inserted elements.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private readonly int[] _data = [1, 2, 3, 4];

        [Benchmark]
        public List<int> Test()
        {
            List<int> list = new(4);
            list.AddRange(_data);
            list.InsertRange(0, _data);
            return list;
        }
    }
    Method | Runtime   | Mean     | Ratio
    ------ | --------- | -------- | -----
    Test   | .NET 9.0  | 48.65 ns | 1.00
    Test   | .NET 10.0 | 30.07 ns | 0.62
  • ConcurrentDictionary<TKey, TValue>. dotnet/runtime#108065 from @koenigst changes how a ConcurrentDictionary‘s backing array is sized when it’s cleared. ConcurrentDictionary is implemented with an array of linked lists, and when the collection is constructed, a constructor parameter allows for presizing that array. Due to the concurrent nature of the dictionary and its implementation, Clear‘ing it necessitates creating a new array rather than just using part of the old one. When that new array was created, it was reset to the default size. This PR tweaks that to remember the initial capacity requested by the user and to use that initial size again when constructing the new array.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using System.Collections.Concurrent;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [MemoryDiagnoser(displayGenColumns: false)]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private ConcurrentDictionary<int, int> _data = new(concurrencyLevel: 1, capacity: 1024);

        [Benchmark]
        public void ClearAndAdd()
        {
            _data.Clear();
            for (int i = 0; i < 1024; i++)
            {
                _data.TryAdd(i, i);
            }
        }
    }
    Method      | Runtime   | Mean     | Ratio | Allocated | Alloc Ratio
    ----------- | --------- | -------- | ----- | --------- | -----------
    ClearAndAdd | .NET 9.0  | 51.95 us | 1.00  | 134.36 KB | 1.00
    ClearAndAdd | .NET 10.0 | 30.32 us | 0.58  | 48.73 KB  | 0.36
  • Dictionary<TKey, TValue>. Dictionary is one of the most popular collection types across .NET, and TKey == string is one of (if not the) most popular forms. dotnet/runtime#117427 makes dictionary lookups with constant strings much faster. You might expect it would be a complicated change, but it ends up being just a few strategic tweaks. A variety of methods for operating on strings are already known to the JIT and already have optimized implementations for when dealing with constants. All this PR needed to do was change which methods Dictionary<TKey, TValue> was using in its optimized TryGetValue lookup path, and because that path is often inlined, a constant argument to TryGetValue can be exposed as a constant to these helpers, e.g. string.Equals.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private Dictionary<string, int> _data = new() { ["a"] = 1, ["b"] = 2, ["c"] = 3, ["d"] = 4, ["e"] = 5 };

        [Benchmark]
        public int Get() => _data["a"] + _data["b"] + _data["c"] + _data["d"] + _data["e"];
    }
    Method | Runtime   | Mean     | Ratio
    ------ | --------- | -------- | -----
    Get    | .NET 9.0  | 33.81 ns | 1.00
    Get    | .NET 10.0 | 14.02 ns | 0.41
  • OrderedDictionary<TKey, TValue>. dotnet/runtime#109324 adds new overloads of TryAdd and TryGetValue that provide the index of the added or retrieved element in the collection. This index can then be used in subsequent operations on the dictionary to access the same slot. For example, if you want to implement an AddOrUpdate operation on top of OrderedDictionary, you need to perform one or two operations: first trying to add the item, and then, if it’s found to already exist, updating it. That update can benefit from targeting the exact index that contains the element rather than needing to do another keyed lookup.
    // dotnet run -c Release -f net10.0 --filter "*"
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private OrderedDictionary<string, int> _dictionary = new();

        [Benchmark(Baseline = true)]
        public void Old() => AddOrUpdate_Old(_dictionary, "key", k => 1, (k, v) => v + 1);

        [Benchmark]
        public void New() => AddOrUpdate_New(_dictionary, "key", k => 1, (k, v) => v + 1);

        private static void AddOrUpdate_Old(OrderedDictionary<string, int> d, string key, Func<string, int> addFunc, Func<string, int, int> updateFunc)
        {
            if (d.TryGetValue(key, out int existing))
            {
                d[key] = updateFunc(key, existing);
            }
            else
            {
                d.Add(key, addFunc(key));
            }
        }

        private static void AddOrUpdate_New(OrderedDictionary<string, int> d, string key, Func<string, int> addFunc, Func<string, int, int> updateFunc)
        {
            if (d.TryGetValue(key, out int existing, out int index))
            {
                d.SetAt(index, updateFunc(key, existing));
            }
            else
            {
                d.Add(key, addFunc(key));
            }
        }
    }
    Method | Mean     | Ratio
    ------ | -------- | -----
    Old    | 6.961 ns | 1.00
    New    | 4.201 ns | 0.60
  • ImmutableArray<T>. The ImmutableCollectionsMarshal class already exposes an AsArray method that enables retrieving the backing T[] from an ImmutableArray<T>. However, if you had an ImmutableArray<T>.Builder, there was previously no way to access the backing store it was using. dotnet/runtime#112177 enables doing so, with an AsMemory method that retrieves the underlying storage as a Memory<T> (see the sketch after this list).
  • InlineArray. .NET 8 introduced InlineArrayAttribute, which can be used to attribute a struct containing a single field; the attribute accepts a count, and the runtime replicates the struct’s field that number of times, as if you’d logically copy/pasted the field repeatedly. The runtime also ensures that the storage is contiguous and appropriately aligned, such that if you had an indexable collection that pointed to the beginning of the struct, you could use it as an array. And it so happens such a collection exists: Span<T>. C# 12 then makes it easy to treat any such attributed struct as a span, e.g.
    [InlineArray(8)]
    internal struct EightStrings
    {
        private string _field;
    }
    ...
    EightStrings strings = default;
    Span<string> span = strings;

    The C# compiler will itself emit code that uses this capability. For example, if you use collection expressions to initialize a span, you’re likely triggering the compiler to emit an InlineArray. When I write this:

    public void M(int a, int b, int c, int d)
    {
        Span<int> span = [a, b, c, d];
    }

    the compiler emits something like the following equivalent:

    public void M(int a, int b, int c, int d)
    {
        <>y__InlineArray4<int> buffer = default(<>y__InlineArray4<int>);
        <PrivateImplementationDetails>.InlineArrayElementRef<<>y__InlineArray4<int>, int>(ref buffer, 0) = a;
        <PrivateImplementationDetails>.InlineArrayElementRef<<>y__InlineArray4<int>, int>(ref buffer, 1) = b;
        <PrivateImplementationDetails>.InlineArrayElementRef<<>y__InlineArray4<int>, int>(ref buffer, 2) = c;
        <PrivateImplementationDetails>.InlineArrayElementRef<<>y__InlineArray4<int>, int>(ref buffer, 3) = d;
        <PrivateImplementationDetails>.InlineArrayAsSpan<<>y__InlineArray4<int>, int>(ref buffer, 4);
    }

    where it has defined that <>y__InlineArray4 like this:

    [StructLayout(LayoutKind.Auto)]
    [InlineArray(4)]
    internal struct <>y__InlineArray4<T>
    {
        [CompilerGenerated]
        private T _element0;
    }

    This shows up elsewhere, too. For example, C# 13 introduced support for using params with collections other than arrays, including spans, so now I can write this:

    public void Caller(int a, int b, int c, int d) => M(a, b, c, d);
    public void M(params ReadOnlySpan<int> span) { }

    and for Caller we’ll see very similar code emitted to what I previously showed, with the compiler manufacturing such an InlineArray type. As you might imagine, the popularity of the features that cause the compiler to produce these types has resulted in a lot of them being emitted. Each type is specific to a particular length, so while the compiler will reuse them, a) it can end up needing to emit a lot of them to cover different lengths, and b) it emits them as internal to each assembly that needs them, so there can end up being a lot of duplication. Looking just at the shared framework for .NET 9 (the core libraries like System.Private.CoreLib that ship as part of the runtime), there are ~140 of these types… all of which are for sizes no larger than 8. For .NET 10, dotnet/runtime#113403 adds a set of public InlineArray2<T>, InlineArray3<T>, etc., that should cover the vast majority of sizes for which the compiler would otherwise need to emit types. In the near future, the C# compiler will be updated to use those new types when available instead of emitting its own, thereby yielding non-trivial size savings.
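
Returning to the ImmutableArray<T>.Builder item above, here’s a minimal sketch of what the new AsMemory method enables (my own example; the key point is that the Memory<T> refers to the builder’s live backing store rather than a copy):

using System.Collections.Immutable;
using System.Runtime.InteropServices;

ImmutableArray<int>.Builder builder = ImmutableArray.CreateBuilder<int>();
builder.AddRange(1, 2, 3, 4);

// Wraps the builder's current backing array; no defensive copy is made.
Memory<int> memory = ImmutableCollectionsMarshal.AsMemory(builder);
memory.Span[0] = 42; // writes through to the builder's storage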

I/O

In previous .NET releases, there have been concerted efforts to improve specific areas of I/O performance, such as completely rewriting FileStream in .NET 6. Nothing as comprehensive as that was done for I/O in .NET 10, but there are some nice one-off improvements that can still have a measurable impact on certain scenarios.

On Unix, when a MemoryMappedFile is created and it’s not associated with a particular FileStream, it needs to create some kind of backing memory for the MMF’s data. On Linux, it’d try to use shm_open, which creates a shared memory object with appropriate semantics. However, in the years since MemoryMappedFile was initially enabled on Linux, the Linux kernel has added support for anonymous files and the memfd_create function that creates them. These are ideal for MemoryMappedFile and much more efficient, so dotnet/runtime#105178 from @am11 switches over to using memfd_create when it’s available.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.MemoryMappedFiles;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public void MMF()
    {
        using MemoryMappedFile mff = MemoryMappedFile.CreateNew(null, 12345);
        using MemoryMappedViewAccessor accessor = mff.CreateViewAccessor();
    }
}
Method | Runtime   | Mean     | Ratio
------ | --------- | -------- | -----
MMF    | .NET 9.0  | 9.916 us | 1.00
MMF    | .NET 10.0 | 6.358 us | 0.64

FileSystemWatcher is improved in dotnet/runtime#116830. The primary purpose of this PR was to fix a memory leak, where on Windows disposing of a FileSystemWatcher while it was in use could end up leaking some objects. However, it also addresses a performance issue specific to Windows. FileSystemWatcher needs to pass a buffer to the OS for the OS to populate with file-changed information. That meant that FileSystemWatcher was allocating a managed array and then immediately pinning that buffer so it could pass a pointer to it into native code. For certain uses of FileSystemWatcher, especially in scenarios where lots of FileSystemWatcher instances are created, that pinning could contribute to non-trivial heap fragmentation. Interestingly, though, this array is effectively never consumed as an array: all of the writes into it are performed in native code via the pointer that was passed to the OS, and all consumption of it in managed code to read out the events is done via a span. That means the array nature of it doesn’t really matter, and we’re better off just allocating a native buffer rather than a managed one that then requires pinning.

// Run on Windows.
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public void FSW()
    {
        using FileSystemWatcher fsw = new(Environment.CurrentDirectory);
        fsw.EnableRaisingEvents = true;
    }
}
Method | Runtime | Mean     | Ratio | Allocated | Alloc Ratio
------ | ------- | -------- | ----- | --------- | -----------
FSW    | .NET 9  | 61.46 us | 1.00  | 8944 B    | 1.00
FSW    | .NET 10 | 61.21 us | 1.00  | 744 B     | 0.08

BufferedStream gets a boost from dotnet/runtime#104822 from @ANahr. There is a curious and problematic inconsistency in BufferedStream that’s been there since, well, forever as far as I can tell. It’s obviously been revisited in the past, and due to the super duper strong backwards compatibility concerns for .NET Framework (where a key feature is that the framework doesn’t change), the issue was never fixed. There’s even a comment in the code to this point:

// We should not be flushing here, but only writing to the underlying stream, but previous version flushed, so we keep this.

A BufferedStream does what its name says. It wraps an underlying Stream and buffers access to it. So, for example, if it were configured with a buffer size of 1000 and you wrote 100 bytes to the BufferedStream at a time, your first 10 writes would just go to the buffer and the underlying Stream wouldn’t be touched at all. Only on the 11th write would the buffer be full and need to be flushed (meaning written) to the underlying Stream. So far, so good. Moreover, there’s a difference between flushing to the underlying stream and flushing the underlying stream. Those sound almost identical, but they’re not: in the former case, we’re effectively calling _stream.Write(buffer) to write the buffer to that stream, and in the latter case, we’re effectively calling _stream.Flush() to force any data that stream was buffering to propagate to its underlying destination. BufferedStream really shouldn’t be in the business of doing the latter when Write‘ing to the BufferedStream, and in general it wasn’t… except in one case. Whereas most of the writing-related methods would not call _stream.Flush(), for some reason WriteByte did. In particular for cases where the BufferedStream is configured with a small buffer, and where the underlying stream’s flush is relatively expensive (e.g. DeflateStream.Flush forces any buffered bytes to be compressed and emitted), that can be problematic for performance, never mind the inconsistency. This change simply fixes the inconsistency, such that WriteByte no longer forces a flush on the underlying stream.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.Compression;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _bytes;

    [GlobalSetup]
    public void Setup()
    {
        _bytes = new byte[1024 * 1024];
        new Random(42).NextBytes(_bytes);
    }

    [Benchmark]
    public void WriteByte()
    {
        using Stream s = new BufferedStream(new DeflateStream(Stream.Null, CompressionLevel.SmallestSize), 256);
        foreach (byte b in _bytes)
        {
            s.WriteByte(b);
        }
    }
}
Method    | Runtime   | Mean     | Ratio
--------- | --------- | -------- | -----
WriteByte | .NET 9.0  | 73.87 ms | 1.00
WriteByte | .NET 10.0 | 17.77 ms | 0.24

While on the subject of compression, it’s worth calling out several improvements in System.IO.Compression in .NET 10, too. As noted in Performance Improvements in .NET 9, DeflateStream/GZipStream/ZLibStream are managed wrappers around an underlying native zlib library. For a long time, that was the original zlib (madler/zlib). Then it was Intel’s zlib-intel fork (intel/zlib), which is now archived and no longer maintained. In .NET 9, the library switched to using zlib-ng (zlib-ng/zlib-ng), which is a modernized fork that’s well-maintained and optimized for a large number of hardware architectures. .NET 9 is based on zlib-ng 2.2.1. dotnet/runtime#118457 updates it to use zlib-ng 2.2.5. Compared with the 2.2.1 release, there are a variety of performance improvements in zlib-ng itself, which .NET 10 then inherits, such as improved use of AVX2 and AVX512. Most importantly, though, the update includes a revert that undoes a cleanup change from the 2.2.0 release; the original change removed a workaround for a function that had been slow and was found to no longer be slow, but as it turns out, it’s still slow in some circumstances (long, highly compressible data), resulting in a throughput regression. The fix in 2.2.5 puts back the workaround to fix the regression.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.Compression;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _data = new HttpClient().GetByteArrayAsync(@"https://raw.githubusercontent.com/dotnet/runtime-assets/8d362e624cde837ec896e7fff04f2167af68cba0/src/System.IO.Compression.TestData/DeflateTestData/xargs.1").Result;

    [Benchmark]
    public void Compress()
    {
        using ZLibStream z = new(Stream.Null, CompressionMode.Compress);
        for (int i = 0; i < 100; i++)
        {
            z.Write(_data);
        }
    }
}
Method   | Runtime   | Mean      | Ratio
-------- | --------- | --------- | -----
Compress | .NET 9.0  | 202.79 us | 1.00
Compress | .NET 10.0 | 70.45 us  | 0.35

The managed wrapper for zlib also gains some improvements. dotnet/runtime#113587 from @edwardneal improves the case where multiple gzip payloads are being read from the underlying Stream. Due to gzip’s nature, multiple complete gzip payloads can be written one after the other, and a single GZipStream can be used to decompress all of them as if they were one. Each time it hit a boundary between payloads, the managed wrapper was throwing away the old interop handles and creating new ones, but it can instead take advantage of reset capabilities in the underlying zlib library, shaving off some cycles associated with freeing and re-allocating the underlying data structures. This is a very biased micro-benchmark (a stream containing 1000 gzip payloads that each decompresses into a single byte), highlighting the worst case, but it exemplifies the issue:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.Compression;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private MemoryStream _data;

    [GlobalSetup]
    public void Setup()
    {
        _data = new MemoryStream();
        for (int i = 0; i < 1000; i++)
        {
            using GZipStream gzip = new(_data, CompressionMode.Compress, leaveOpen: true);
            gzip.WriteByte(42);
        }
    }

    [Benchmark]
    public void Decompress()
    {
        _data.Position = 0;
        using GZipStream gzip = new(_data, CompressionMode.Decompress, leaveOpen: true);
        gzip.CopyTo(Stream.Null);
    }
}
Method     | Runtime   | Mean     | Ratio
---------- | --------- | -------- | -----
Decompress | .NET 9.0  | 331.3 us | 1.00
Decompress | .NET 10.0 | 104.3 us | 0.31

Other components that sit above these streams, like ZipArchive, have also improved. dotnet/runtime#103153 from @edwardneal updates ZipArchive to not rely on BinaryReader and BinaryWriter, avoiding their underlying buffer allocations and providing more fine-grained control over how and when exactly data is encoded/decoded and written/read. And dotnet/runtime#102704 from @edwardneal reduces memory consumption and allocation when updating ZipArchives. A ZipArchive update used to be “rewrite the world”: it loaded every entry’s data into memory and rewrote all the file headers, all entry data, and the “central directory” (what the zip format calls its catalog of all the entries in the archive). A large archive would incur proportionally large allocation. This PR introduces change tracking plus ordering of entries so that only the portion of the file from the first actually affected entry (or one whose variable-length metadata/data changed) onward is rewritten, rather than always rewriting the whole thing. The effects can be significant.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.Compression;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Stream _zip = new MemoryStream();

    [GlobalSetup]
    public void Setup()
    {
        using ZipArchive zip = new(_zip, ZipArchiveMode.Create, leaveOpen: true);
        Random r = new(42);
        for (int i = 0; i < 1000; i++)
        {
            byte[] fileBytes = new byte[r.Next(512, 2048)];
            r.NextBytes(fileBytes);
            using Stream s = zip.CreateEntry($"file{i}.txt").Open();
            s.Write(fileBytes);
        }
    }

    [Benchmark]
    public void Update()
    {
        _zip.Position = 0;
        using ZipArchive zip = new(_zip, ZipArchiveMode.Update, leaveOpen: true);
        zip.GetEntry("file987.txt")?.Delete();
    }
}
Method | Runtime   | Mean     | Ratio | Allocated  | Alloc Ratio
------ | --------- | -------- | ----- | ---------- | -----------
Update | .NET 9.0  | 987.8 us | 1.00  | 2173.9 KB  | 1.00
Update | .NET 10.0 | 354.7 us | 0.36  | 682.22 KB  | 0.31

(ZipArchive and ZipFile also gain async APIs in dotnet/runtime#114421, a long-requested feature that allows using async I/O while loading, manipulating, and saving zips.)
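
As a rough illustration of that new surface (treat the exact method shown as illustrative of the async counterparts to the existing synchronous entry points, rather than an exhaustive or authoritative list):

using System.IO.Compression;

// Extract an archive using async I/O end to end, rather than blocking a thread on file reads/writes.
await ZipFile.ExtractToDirectoryAsync("payload.zip", "extracted");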

Finally, somewhere between performance and reliability, dotnet/roslyn-analyzers#7390 from @mpidash adds a new analyzer for StreamReader.EndOfStream. StreamReader.EndOfStream seems like it should be harmless, but it’s quite the devious little property. The intent is to determine whether the reader is at the end of the underlying Stream. Seems easy enough. If the StreamReader still has previously read data buffered, obviously it’s not at the end. And if the reader has previously seen EOF, e.g. Read returned 0, then it obviously is at the end. But in all other situations, there’s no way to know you’re at the end of the stream (at least in the general case) without performing a read, which means this property does something properties should never do: perform I/O. Worse than just performing I/O, that read can be a blocking operation, e.g. if the Stream represents a network stream for a Socket, performing a read actually means blocking until data is received. Even worse, though, is when it’s used in an asynchronous method, e.g.

while (!reader.EndOfStream)
{
    string? line = await reader.ReadLineAsync();
    ...
}

Now not only might EndOfStream do I/O and block, it’s doing that in a method that’s supposed to do all of its waiting asynchronously.

What makes this even more frustrating is that EndOfStream isn’t even useful in a loop like the one above. ReadLineAsync will return a null string if it’s at the end of the stream, so the loop would instead be better written as:

while (await reader.ReadLineAsync() is string line)
{
    ...
}

Simpler, cheaper, and no ticking time bombs of synchronous I/O. Thanks to this new analyzer, any such use of EndOfStream in an async method will trigger CA2024:

CA2024 Analyzer

Networking

Networking-related operations show up in almost every modern workload. Past releases of .NET have seen a lot of energy exerted on whittling away at networking overheads, as these components are used over and over and over, often in critical paths, and the overheads can add up. .NET 10 continues the streamlining trend.

As was seen with core primitives earlier, IPAddress and IPNetwork are both imbued with UTF8 parsing capabilities, thanks to dotnet/runtime#102144 from @edwardneal. As is the case with most other such types in the core libraries, the UTF8-based implementation and the UTF16-based implementation are mostly the same implementation, sharing most of their code via generic methods parameterized on byte vs char. And as a result of the focus on enabling UTF8, not only can you parse UTF8 bytes directly rather than needing to transcode first, the existing code actually gets a bit faster.
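
That means bytes pulled straight off the wire can now be parsed without first transcoding to a string. A small sketch (assuming the UTF8 overloads mirror the existing span-based ones):

using System.Net;

ReadOnlySpan<byte> utf8 = "192.168.1.1"u8;
IPAddress address = IPAddress.Parse(utf8); // no intermediate string required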

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Net;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "s")]
public partial class Tests
{
    [Benchmark]
    [Arguments("Fe08::1%13542")]
    public IPAddress Parse(string s) => IPAddress.Parse(s);
}
Method | Runtime   | Mean     | Ratio
------ | --------- | -------- | -----
Parse  | .NET 9.0  | 71.35 ns | 1.00
Parse  | .NET 10.0 | 54.60 ns | 0.77

IPAddress is also imbued with IsValid and IsValidUtf8 methods, thanks to dotnet/runtime#111433. It was previously possible to test the validity of an address via TryParse, but when successful, that would allocate the IPAddress; if you don’t need the resulting object but just need to know whether it’s valid, the extra allocation is wasteful.

// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Net;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _address = "123.123.123.123";

    [Benchmark(Baseline = true)]
    public bool TryParse() => IPAddress.TryParse(_address, out _);

    [Benchmark]
    public bool IsValid() => IPAddress.IsValid(_address);
}
Method   | Mean     | Ratio | Allocated | Alloc Ratio
-------- | -------- | ----- | --------- | -----------
TryParse | 26.26 ns | 1.00  | 40 B      | 1.00
IsValid  | 21.88 ns | 0.83  | –         | 0.00

Uri also gets some notable improvements. In fact, one of my favorite improvements in all of .NET 10 is in Uri. The feature itself isn’t a performance improvement, but there are some interesting performance-related ramifications to it. In particular, since forever, Uri has had a length limitation due to implementation details. Uri keeps track of various offsets in the input, such as where the host portion starts, where the path starts, where the query starts, and so on. The implementer chose to use ushort for each of these values rather than int. That means the maximum length of a Uri is constrained to the lengths a ushort can describe, namely 65,535 characters. That sounds like a ridiculously long Uri, one no one would ever need to go beyond… until you consider data URIs. Data URIs embed a representation of arbitrary bytes, typically Base64 encoded, in the URI itself. This allows for files to be represented directly in links, and it’s become a common way for AI-related services to send and receive data payloads, like images. It doesn’t take a very large image to exceed 65K characters, however, especially with Base64 encoding increasing the payload size by ~33%. dotnet/runtime#117287 finally removes that limitation, so now Uri can be used to represent very large data URIs, if desired. This, however, has some performance ramifications (beyond the small percentage increase in the size of Uri to accommodate widening those ushort fields to int). In particular, Uri implements path compression, so for example this:

Console.WriteLine(new Uri("http://test/hello/../hello/../hello"));

prints out:

http://test/hello

As it turns out, the algorithm implementing that path compression was O(N^2). Oops. With a limit of 65K characters, such quadratic complexity isn’t a security concern (as O(N^2) operations can sometimes be: if N is unbounded, it creates an attack vector where an attacker can do N work and get the attackee to do disproportionately more). But once the limit is removed entirely, it could be. As such, dotnet/runtime#117820 compensates by making the path compression O(N). And while in the general case we don’t expect path compression to be a meaningfully impactful part of constructing Uri, in degenerate cases, even under the old limit, the change can still make a measurable improvement.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _input = $"http://host/{string.Concat(Enumerable.Repeat("a/../", 10_000))}{new string('a', 10_000)}";

    [Benchmark]
    public Uri Ctor() => new Uri(_input);
}
Method | Runtime   | Mean      | Ratio
------ | --------- | --------- | -----
Ctor   | .NET 9.0  | 18.989 us | 1.00
Ctor   | .NET 10.0 | 2.228 us  | 0.12

In the same vein, the longer the URI, the more effort is required to do whatever validation is needed in the constructor. Uri‘s constructor needs to check whether the input has any Unicode characters that might need to be handled. Rather than checking all the characters one at a time, with dotnet/runtime#107357, Uri can now use SearchValues to more quickly rule out or find the first location of a character that needs to be looked at more deeply.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _uri;

    [GlobalSetup]
    public void Setup()
    {
        byte[] bytes = new byte[40_000];
        new Random(42).NextBytes(bytes);
        _uri = $"data:application/octet-stream;base64,{Convert.ToBase64String(bytes)}";
    }

    [Benchmark]
    public Uri Ctor() => new Uri(_uri);
}
Method | Runtime   | Mean      | Ratio
------ | --------- | --------- | -----
Ctor   | .NET 9.0  | 19.354 us | 1.00
Ctor   | .NET 10.0 | 2.041 us  | 0.11

Other changes were made to Uri that further reduce construction costs in various other cases, too. For cases where the URI host is an IPv6 address, e.g. http://[2603:1020:201:10::10f], dotnet/runtime#117292 recognizes that scope IDs are relatively rare and makes the cases without a scope ID cheaper in exchange for making the cases with a scope ID a little more expensive.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public string CtorHost() => new Uri("http://[2603:1020:201:10::10f]").Host;
}
Method   | Runtime   | Mean     | Ratio | Allocated | Alloc Ratio
-------- | --------- | -------- | ----- | --------- | -----------
CtorHost | .NET 9.0  | 304.9 ns | 1.00  | 208 B     | 1.00
CtorHost | .NET 10.0 | 254.2 ns | 0.83  | 216 B     | 1.04

(Note that the .NET 10 allocation is 8 bytes larger than the .NET 9 allocation due to the extra space required in this case for dropping the length limitation, as discussed earlier.)

dotnet/runtime#117289 also improves construction for cases where the URI requires normalization, saving some allocations by using normalization routines over spans (which were added in dotnet/runtime#110465) instead of needing to allocate strings for the inputs.

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public Uri Ctor() => new("http://some.host.with.ümlauts/");
}
Method | Runtime   | Mean     | Ratio | Allocated | Alloc Ratio
------ | --------- | -------- | ----- | --------- | -----------
Ctor   | .NET 9.0  | 377.6 ns | 1.00  | 440 B     | 1.00
Ctor   | .NET 10.0 | 322.0 ns | 0.85  | 376 B     | 0.85

Various improvements have also found their way into the HTTP stack. For starters, the download helpers on HttpClient and HttpContent have improved. These types expose helper methods for some of the most common forms of grabbing data; while a developer can grab the response Stream and consume that efficiently, for simple and common cases like “just get the whole response as a string” or “just get the whole response as a byte[]“, the GetStringAsync and GetByteArrayAsync methods make that really easy to do. dotnet/runtime#109642 changes how these methods operate in order to better manage the temporary buffers that are required, especially in the case where the server hasn’t advertised a Content-Length, such that the client doesn’t know ahead of time how much data to expect and thus how much space to allocate.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Net;
using System.Net.Sockets;
using System.Text;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private HttpClient _client = new();
    private Uri _uri;

    [GlobalSetup]
    public void Setup()
    {
        Socket listener = new(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        listener.Listen(int.MaxValue);
        _ = Task.Run(async () =>
        {
            byte[] header = "HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n"u8.ToArray();
            byte[] chunkData = Enumerable.Range(0, 100).SelectMany(_ => "abcdefghijklmnopqrstuvwxyz").Select(c => (byte)c).ToArray();
            byte[] chunkHeader = Encoding.UTF8.GetBytes($"{chunkData.Length:X}\r\n");
            byte[] chunkFooter = "\r\n"u8.ToArray();
            byte[] footer = "0\r\n\r\n"u8.ToArray();
            while (true)
            {
                var server = await listener.AcceptAsync();
                server.NoDelay = true;
                using StreamReader reader = new(new NetworkStream(server), Encoding.ASCII);
                while (true)
                {
                    while (!string.IsNullOrEmpty(await reader.ReadLineAsync())) ;
                    await server.SendAsync(header);
                    for (int i = 0; i < 100; i++)
                    {
                        await server.SendAsync(chunkHeader);
                        await server.SendAsync(chunkData);
                        await server.SendAsync(chunkFooter);
                    }
                    await server.SendAsync(footer);
                }
            }
        });
        var ep = (IPEndPoint)listener.LocalEndPoint!;
        _uri = new Uri($"http://{ep.Address}:{ep.Port}/");
    }

    [Benchmark]
    public async Task<byte[]> ResponseContentRead_ReadAsByteArrayAsync()
    {
        using HttpResponseMessage resp = await _client.GetAsync(_uri);
        return await resp.Content.ReadAsByteArrayAsync();
    }

    [Benchmark]
    public async Task<string> ResponseHeadersRead_ReadAsStringAsync()
    {
        using HttpResponseMessage resp = await _client.GetAsync(_uri, HttpCompletionOption.ResponseHeadersRead);
        return await resp.Content.ReadAsStringAsync();
    }
}
Method                                   | Runtime   | Mean     | Ratio | Allocated  | Alloc Ratio
---------------------------------------- | --------- | -------- | ----- | ---------- | -----------
ResponseContentRead_ReadAsByteArrayAsync | .NET 9.0  | 1.438 ms | 1.00  | 912.71 KB  | 1.00
ResponseContentRead_ReadAsByteArrayAsync | .NET 10.0 | 1.166 ms | 0.81  | 519.12 KB  | 0.57
ResponseHeadersRead_ReadAsStringAsync    | .NET 9.0  | 1.528 ms | 1.00  | 1166.77 KB | 1.00
ResponseHeadersRead_ReadAsStringAsync    | .NET 10.0 | 1.306 ms | 0.86  | 773.3 KB   | 0.66

dotnet/runtime#117071 reduces overheads associated with HTTP header validation. In the System.Net.Http implementation, some headers have dedicated parsers, while many (the majority of custom headers that services define) don’t. This PR recognizes that for the latter, the validation that needs to be performed amounts to only checking for forbidden newline characters, and that the objects previously being created for all headers weren’t necessary for these.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Net.Http.Headers;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private readonly HttpResponseHeaders _headers = new HttpResponseMessage().Headers;

    [Benchmark]
    public void Add()
    {
        _headers.Clear();
        _headers.Add("X-Custom", "Value");
    }

    [Benchmark]
    public object GetValues()
    {
        _headers.Clear();
        _headers.TryAddWithoutValidation("X-Custom", "Value");
        return _headers.GetValues("X-Custom");
    }
}
Method    | Runtime   | Mean     | Ratio | Allocated | Alloc Ratio
--------- | --------- | -------- | ----- | --------- | -----------
Add       | .NET 9.0  | 28.04 ns | 1.00  | 32 B      | 1.00
Add       | .NET 10.0 | 12.61 ns | 0.45  | –         | 0.00
GetValues | .NET 9.0  | 82.57 ns | 1.00  | 64 B      | 1.00
GetValues | .NET 10.0 | 23.97 ns | 0.29  | 32 B      | 0.50

For folks using HTTP/2, dotnet/runtime#112719 decreases per-connection memory consumption by changing the HPackDecoder to lazily grow its buffers, starting from expected-case sizing rather than worst-case. (“HPACK” is the header compression algorithm used by HTTP/2, utilizing a table shared between client and server for managing commonly transmitted headers.) It’s a little hard to measure in a micro-benchmark, since in a real app the connections get reused (and the benefits here aren’t about temporary allocation but rather connection density and overall working set), but we can get a glimpse of it by doing what you’re not supposed to do and creating a new HttpClient for each request (you’re not supposed to do that, or more specifically not supposed to create a new handler for each request, because doing so tears down the connection pool and the connections it contains… which is bad for an app but exactly what we want for our micro-benchmark).

// For this benchmark, change the benchmark.csproj to start with:
//     <Project Sdk="Microsoft.NET.Sdk.Web">
// instead of:
//     <Project Sdk="Microsoft.NET.Sdk">
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using System.Net;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.AspNetCore.Server.Kestrel.Core;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private WebApplication _app;

    [GlobalSetup]
    public async Task Setup()
    {
        var builder = WebApplication.CreateBuilder();
        builder.Logging.SetMinimumLevel(LogLevel.Warning);
        builder.WebHost.ConfigureKestrel(o => o.ListenLocalhost(5000, listen => listen.Protocols = HttpProtocols.Http2));
        _app = builder.Build();
        _app.MapGet("/hello", () => Results.Text("hi from kestrel over h2c\n"));
        var serverTask = _app.RunAsync();
        await Task.Delay(300);
    }

    [GlobalCleanup]
    public async Task Cleanup()
    {
        await _app.StopAsync();
        await _app.DisposeAsync();
    }

    [Benchmark]
    public async Task Get()
    {
        using var client = new HttpClient()
        {
            DefaultRequestVersion = HttpVersion.Version20,
            DefaultVersionPolicy = HttpVersionPolicy.RequestVersionExact
        };
        var response = await client.GetAsync("http://localhost:5000/hello");
    }
}
Method | Runtime   | Mean     | Ratio | Allocated | Alloc Ratio
------ | --------- | -------- | ----- | --------- | -----------
Get    | .NET 9.0  | 485.9 us | 1.00  | 83.19 KB  | 1.00
Get    | .NET 10.0 | 445.0 us | 0.92  | 51.79 KB  | 0.62

Also, on Linux and macOS, all HTTP use (and, more generally, all socket interactions) gets a tad cheaper from dotnet/runtime#109052, which eliminates a ConcurrentDictionary<> lookup for each asynchronous operation that completes on a Socket.

And for all you Native AOT fans, dotnet/runtime#117012 also adds a feature switch that enables trimming out the HTTP/3 implementation from HttpClient, which can represent a very sizeable and “free” space savings if you’re not using HTTP/3 at all.

Searching

Someone once told me that computer science was “all about sorting and searching.” That’s not far off. Searching in one way, shape, or form is an integral part of many applications and services.

Regex

Whether you love or hate the terse syntax, regular expressions (regex) continue to be an integral part of software development, with applications as part of both software and the software development process. As such, regex has had robust support in .NET since the early days of the platform, with the System.Text.RegularExpressions namespace providing a feature-rich set of regex capabilities. The performance of Regex was improved significantly in .NET 5 (Regex Performance Improvements in .NET 5) and then again in .NET 7, which also saw a significant amount of new functionality added (Regular Expression Improvements in .NET 7). It’s continued to be improved in every release since, and .NET 10 is no exception.

As I’ve discussed in previous blog posts about regex and performance, there are two high-level ways regex engines are implemented: with backtracking or without. Non-backtracking engines typically work by creating some form of finite automaton that represents the pattern and then, for each character consumed from the input, moving around the deterministic finite automaton (DFA, meaning you can be in only a single state at a time) or non-deterministic finite automaton (NFA, meaning you can be in multiple states at a time), transitioning from one state to another. A key benefit of a non-backtracking engine is that it can often make linear guarantees about processing time, where an input string of length N can be processed in worst-case O(N) time. A key downside of a non-backtracking engine is that it can’t support all of the features developers are familiar with in modern regex engines, like back references. Backtracking engines are named as such because they’re able to “backtrack,” trying one approach to see if there’s a match and then going back and trying another. If you have the regex pattern \w*\d (which matches any number of word characters followed by a single digit) and supply it with the string "12", a backtracking engine is likely to first try treating both the '1' and the '2' as word characters, then find that it doesn’t have anything to fulfill the \d, and thus backtrack, instead treating only the '1' as being consumed by the \w* and leaving the '2' to be consumed by the \d. Backtracking is how engines support features like back references, variable-length lookarounds, conditional expressions, and more. They can also have excellent performance, especially in the average and best cases. A key downside, however, is their worst case, where on some patterns they can suffer from “catastrophic backtracking.” That happens when all of that backtracking leads to exploring the same input over and over and over again, possibly consuming much more than linear time.

Since .NET 7, .NET has had an opt-in non-backtracking engine, which is what you get with RegexOptions.NonBacktracking. Otherwise, it uses a backtracking engine, whether the default interpreter, a regex compiled to IL (RegexOptions.Compiled), or a regex emitted as a custom C# implementation by the regex source generator ([GeneratedRegex(...)]). These backtracking engines can yield exceptional performance, but due to their backtracking nature, they are susceptible to bad worst-case performance, which is why specifying timeouts to a Regex is often encouraged, especially when using patterns of unknown provenance. Still, there are things backtracking engines can do to help mitigate some such backtracking, in particular avoiding the need for some of the backtracking in the first place.
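
To make that menu of engines concrete, here’s how each is selected (a quick sketch reusing the \w*\d pattern from above):

using System.Text.RegularExpressions;

public static partial class Patterns
{
    // The backtracking interpreter (the default), here with a timeout as a safety net:
    public static readonly Regex Interpreted = new(@"\w*\d", RegexOptions.None, TimeSpan.FromMilliseconds(100));

    // A backtracking engine compiled to IL at construction time:
    public static readonly Regex Compiled = new(@"\w*\d", RegexOptions.Compiled);

    // The non-backtracking engine, with linear worst-case matching time:
    public static readonly Regex NonBacktracking = new(@"\w*\d", RegexOptions.NonBacktracking);

    // A backtracking engine emitted as C# at build time by the regex source generator:
    [GeneratedRegex(@"\w*\d")]
    public static partial Regex WordCharsThenDigit();
}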

One of the main tools backtracking engines offer for reduced backtracking is an “atomic” construct. Some regex syntaxes surface this via “possessive quantifiers,” while others, including .NET, surface it via “atomic groups.” They’re fundamentally the same thing, just expressed differently in the syntax. An atomic group in .NET’s regex syntax is a group that is never backtracked into. If we take our previous \w*\d example, we could wrap the \w* loop in an atomic group like this: (?>\w*)\d. In doing so, whatever that \w* consumes won’t change via backtracking after exiting the group and moving on to whatever comes after it in the pattern. So if I try to match "12" with such a pattern, it’ll fail, because the \w* will consume both characters, the \d will have nothing to match, and no backtracking will be applied, because the \w* is wrapped in an atomic group and thus exposes no backtracking opportunities.
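
That difference is easy to see directly (a tiny demonstration of the example just described):

using System.Text.RegularExpressions;

Console.WriteLine(Regex.IsMatch("12", @"\w*\d"));     // True: backtracking hands the '2' back to satisfy \d
Console.WriteLine(Regex.IsMatch("12", @"(?>\w*)\d")); // False: the atomic \w* never gives anything back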

In that example, wrapping the \w* with an atomic group changes the meaning of the pattern, and thus it’s not something that a regex engine could choose to do automatically. However, there are many cases where wrapping otherwise-backtracking constructs in an atomic group does not observably change behavior, because any backtracking that would otherwise happen would provably never be fruitful. Consider the pattern a*b. a*b is observably identical to (?>a*)b, which says that the a* should not be backtracked into. That’s because there’s nothing the a* can “give back” (which can only be 'a's) that would satisfy what comes next in the pattern (which is only b). It’s thus valid for a backtracking engine to transform how it processes a*b to instead be the equivalent of how it processes (?>a*)b. And the .NET regex engine has been capable of such transformations since .NET 5. This can result in massive improvements to throughput. With backtracking, waving my hands, we effectively need to execute everything after the backtracking construct for each possible position we could backtrack to. So, for example, with \w*SOMEPATTERN, if the \w* successfully initially consumes 100 characters, we then possibly need to try to match SOMEPATTERN up to 100 different times, as we may need to backtrack up to 100 times and re-evaluate SOMEPATTERN each time we give back one of the characters initially matched. If we instead make that (?>\w*), we eliminate all but one of those attempts! That makes improvements to this ability to automatically transform backtracking constructs into non-backtracking ones potentially massive performance wins, and practically every release of .NET since .NET 5 has increased the set of patterns that are automatically transformed. .NET 10 included.

Let’s start with dotnet/runtime#117869, which teaches the regex optimizer about more “disjoint” sets. Consider the previous example of a*b, and how I said we can make that a* loop atomic because there’s nothing a* can “give back” that matches b. That is a general statement about auto-atomicity: a loop can be made atomic if it’s guaranteed to end with something that can’t possibly match the thing that comes after it. So, if I have [abc]+[def], that loop can be made atomic, because there’s nothing [abc] can match that [def] can also match. In contrast, if the expression were instead [abc]+[cef], that loop must not be made atomic automatically, as doing so could change behavior. The sets do overlap, as both can match 'c'. So, for example, if the input were just "cc", the original expression should match it (the [abc]+ loop would match 'c' with one iteration of the loop, and then the second 'c' would satisfy the [cef] set), but if the expression were instead (?>[abc]+)[cef], it would no longer match, as the [abc]+ would consume both 'c's, and there’d be nothing left for the [cef] set to match. Two sets that don’t have any overlap are referred to as being “disjoint,” and so the optimizer needs to be able to prove the disjointness of sets in order to perform these kinds of auto-atomicity optimizations. The optimizer was already able to do so for many sets, in particular ones that were composed purely of characters or character ranges, e.g. [ace] or [a-zA-Z0-9]. But many sets are instead composed of entire Unicode categories. For example, when you write \d, unless you’ve specified RegexOptions.ECMAScript, that’s the same as \p{Nd}, which says “match any character in the Unicode category of decimal digit numbers,” aka all characters for which char.GetUnicodeCategory returns UnicodeCategory.DecimalDigitNumber. And the optimizer was unable to reason about overlap between such sets. So, for example, if you had the expression \w*\p{Sm}, that matches any number of word characters followed by a math symbol (UnicodeCategory.MathSymbol). \w is actually just a set of eight specific Unicode categories, such that the previous expression behaves identically to if I’d written [\p{Ll}\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{Mn}\p{Nd}\p{Pc}]*\p{Sm} (\w is composed of UnicodeCategory.UppercaseLetter, UnicodeCategory.LowercaseLetter, UnicodeCategory.TitlecaseLetter, UnicodeCategory.ModifierLetter, UnicodeCategory.OtherLetter, UnicodeCategory.NonSpacingMark, UnicodeCategory.DecimalDigitNumber, and UnicodeCategory.ConnectorPunctuation). Note that none of those eight categories is the same as \p{Sm}, which means the sets are disjoint, which means we can safely change that loop to being atomic without impacting behavior; it just makes it faster. One of the easiest ways to see the effect of this is to look at the output from the regex source generator. Before the change, if I look at the XML comment generated for that expression, I get this:

```csharp
/// ○ Match a word character greedily any number of times.
/// ○ Match a character in the set [\p{Sm}].
```

and after, I get this:

```csharp
/// ○ Match a word character atomically any number of times.
/// ○ Match a character in the set [\p{Sm}].
```

That one-word change in the first line makes a huge difference. Here's the relevant portion of the C# code emitted by the source generator for the matching routine before the change:

```csharp
// Match a word character greedily any number of times.
//{
    charloop_starting_pos = pos;

    int iteration = 0;
    while ((uint)iteration < (uint)slice.Length && Utilities.IsWordChar(slice[iteration]))
    {
        iteration++;
    }

    slice = slice.Slice(iteration);
    pos += iteration;

    charloop_ending_pos = pos;
    goto CharLoopEnd;

    CharLoopBacktrack:

    if (Utilities.s_hasTimeout)
    {
        base.CheckTimeout();
    }

    if (charloop_starting_pos >= charloop_ending_pos)
    {
        return false; // The input didn't match.
    }
    pos = --charloop_ending_pos;
    slice = inputSpan.Slice(pos);

    CharLoopEnd:
//}
```

You can see how backtracking influences the emitted code. The core loop in there is iterating through as many word characters as it can match, but then before moving on, it remembers some position information about where it was. It also sets up a label for where subsequent code should jump to if it needs to backtrack; that code undoes one of the matched characters and then retries everything that came after it. If the code needs to backtrack again, it’ll again undo one of the characters and retry. And so on. Now, here’s what the code looks like after the change:

```csharp
// Match a word character atomically any number of times.
{
    int iteration = 0;
    while ((uint)iteration < (uint)slice.Length && Utilities.IsWordChar(slice[iteration]))
    {
        iteration++;
    }

    slice = slice.Slice(iteration);
    pos += iteration;
}
```

All of that backtracking gunk is gone; the loop matches as much as it can, and that's that. You can see the effect this has in some cases with a micro-benchmark like this:

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new string(' ', 100);
    private static readonly Regex s_regex = new Regex(@"\s+\S+", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
```

This is a simple test where we're trying to match any positive number of whitespace characters followed by any positive number of non-whitespace characters, giving it an input composed entirely of whitespace. Without atomicity, the engine is going to consume all of the whitespace as part of the `\s+` but will then find that there isn't any non-whitespace available to match the `\S+`. What does it do then? It backtracks, gives back one of the hundred spaces consumed by `\s+`, and tries again to match the `\S+`. It won't match, so it backtracks again. And again. And again. A hundred times, until it has nothing left to try and gives up. With atomicity, all that backtracking goes away, allowing it to fail faster.

| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Count | .NET 9.0 | 183.31 ns | 1.00 |
| Count | .NET 10.0 | 69.23 ns | 0.38 |
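As an aside, the correctness constraint that motivates the disjointness requirement is easy to see directly. A minimal sketch, using the `[abc]+[cef]` example from earlier (my own repro):

```csharp
using System.Text.RegularExpressions;

// [abc] and [cef] overlap on 'c', so the loop must remain backtracking:
Console.WriteLine(Regex.IsMatch("cc", "^[abc]+[cef]$"));     // True

// Forcing the loop atomic changes the answer, which is why the optimizer
// only applies auto-atomicity when the sets are provably disjoint:
Console.WriteLine(Regex.IsMatch("cc", "^(?>[abc]+)[cef]$")); // False
```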

dotnet/runtime#117892 is a related improvement. In regex, `\b` is called a "word boundary"; it checks whether the wordness of the previous character (whether the previous character matches `\w`) matches the wordness of the next character, calling it a boundary if they differ. You can see this in the engine's `IsBoundary` helper's implementation, which follows (note that according to TR18, whether a character is considered a boundary word char is almost exactly the same as `\w`, except with two additional zero-width Unicode characters also included):

```csharp
internal static bool IsBoundary(ReadOnlySpan<char> inputSpan, int index)
{
    int indexM1 = index - 1;
    return ((uint)indexM1 < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[indexM1])) !=
           ((uint)index < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[index]));
}
```

The optimizer already had a special case in its auto-atomicity logic with knowledge of boundaries and their relationship to `\w` and `\d`, specifically. So, if you had `\w+\b`, the optimizer would recognize that in order for the `\b` to match, what comes after what the `\w+` matches must necessarily not match `\w`, because then it wouldn't be a boundary, and thus the `\w+` could be made atomic. Similarly, with a pattern of `\d+\b`, it would recognize that what came after must not be in `\d`, and could make the loop atomic. It didn't generalize this, though. Now in .NET 10, it does. This PR teaches the optimizer how to recognize subsets of `\w`, because, as with the special case of `\d`, any subset of `\w` can similarly benefit: if what comes before the `\b` is a word character, what comes after must not be. Thus, with this PR, an expression like `[a-zA-Z]+\b` will now have the loop made atomic.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = "Supercalifragilisticexpialidocious1";
    private static readonly Regex s_regex = new Regex(@"^[A-Za-z]+\b", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Count | .NET 9.0 | 116.57 ns | 1.00 |
| Count | .NET 10.0 | 21.74 ns | 0.19 |

Just doing a better job of set disjointness analysis is helpful, but more so is recognizing whole new classes of things that can be made atomic. In prior releases, the auto-atomicity optimizations only kicked in for loops over single characters, e.g. `a*`, `[abc]*?`, `[^abc]*`. That is obviously only a subset of loops, as many loops are composed of more than just a single character; loops can surround any regex construct. Even a capture group thrown into the mix would knock the auto-atomicity behavior off the rails. Now with dotnet/runtime#117943, a significant number of loops involving more complicated constructs can be made atomic. Loops larger than a single character are tricky, though, as there are more things that need to be taken into account when reasoning through atomicity. With a single character, we only need to prove disjointness for that one character with what comes after it. But consider an expression like `([a-z][0-9])+a1`. Can that loop be made atomic? What comes after the loop (`'a'`) is provably disjoint from what ends the loop (`[0-9]`), and yet making this loop atomic automatically would change behavior and be a no-no. Imagine if the input were `"b2a1"`. That matches; if this expression is processed normally, the loop would match a single iteration, consuming the `"b2"`, and then the `a1` after the loop would consume the corresponding `a1` in the input. But if the loop were made atomic, e.g. `(?>([a-z][0-9])+)a1`, the loop would end up performing two iterations, consuming both the `"b2"` and the `"a1"` and leaving nothing for the `a1` in the pattern. As it turns out, we not only need to ensure what ends the loop is disjoint from what comes after it, we also need to ensure that what starts the loop is disjoint from what comes after it.

That's not all, though. Now consider an expression `^(a|ab)+$`. This matches an entire input composed of `"a"`s and `"ab"`s. Given an input string like `"aba"`, this will match successfully, as it will consume the `"ab"` with the second branch of the alternation, and then consume the remaining `a` with the first branch of the alternation on the next iteration of the loop. But now consider what happens if we make the loop atomic: `^(?>(a|ab)+)$`. Now on that same input, the initial `a` in the input will be consumed by the first branch of the alternation, and that will satisfy the loop's minimum bound of 1 iteration, exiting the loop. It'll then proceed to validate that it's at the end of the string, and fail, but with the loop now atomic, there's nothing to backtrack into, and the whole match fails. Oops. The problem here is that the loop's ending must not only be disjoint with what comes next, and the loop's beginning must not only be disjoint with what comes next, but because it's a loop, what comes next can actually be the loop itself, which means the loop's beginning and ending must be disjoint from each other. Those criteria significantly limit the patterns to which this can be applied, but even with that, it's still surprisingly common: dotnet/runtime-assets (which contains test assets for use with dotnet/runtime) contains a database of regex patterns sourced from appropriately-licensed nuget packages, yielding almost 20,000 unique patterns, and more than 7% of those were positively impacted by this change.
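That `^(a|ab)+$` case is easy to reproduce; a minimal sketch (my own repro of the behavior just described):

```csharp
using System.Text.RegularExpressions;

// "aba" = "ab" (second branch) + "a" (first branch), found via backtracking:
Console.WriteLine(Regex.IsMatch("aba", "^(a|ab)+$"));     // True

// Made atomic, the loop commits to matching just the leading "a" and can't
// reconsider, so the end-of-string anchor fails:
Console.WriteLine(Regex.IsMatch("aba", "^(?>(a|ab)+)$")); // False
```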

Here is an example that's searching "The Entire Project Gutenberg Works of Mark Twain" for sequences of all-lowercase ASCII words, each followed by a space, and then all followed by an uppercase ASCII letter.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"([a-z]+ )+[A-Z]", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
```

In previous releases, that inner loop would be made atomic, but the outer loop would remain greedy (backtracking). From the XML comment generated by the source generator, we get this:

```csharp
/// ○ Loop greedily at least once.
///     ○ 1st capture group.
///         ○ Match a character in the set [a-z] atomically at least once.
///         ○ Match ' '.
/// ○ Match a character in the set [A-Z].
```

Now in .NET 10, we get this:

```csharp
/// ○ Loop atomically at least once.
///     ○ 1st capture group.
///         ○ Match a character in the set [a-z] atomically at least once.
///         ○ Match ' '.
/// ○ Match a character in the set [A-Z].
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Count | .NET 9.0 | 573.4 ms | 1.00 |
| Count | .NET 10.0 | 504.6 ms | 0.88 |

As with any optimization, auto-atomicity should never change observable behavior; it should just make things faster. And as such, every case where atomicity is automatically applied requires being reasoned through to ensure the optimization is logically sound. In some cases, the optimization was written conservatively, as the relevant reasoning hadn't yet been done. An example of that is addressed by dotnet/runtime#118191, which makes a few tweaks to how boundaries are handled in the auto-atomicity logic, removing some constraints that were put in place but which, as it turns out, are unnecessary. The core logic that implements the atomicity analysis is a method that looks like this:

```csharp
private static bool CanBeMadeAtomic(RegexNode node, RegexNode subsequent, ...)
```

`node` is the representation for the part of the regex that's being considered for becoming atomic (e.g. a loop) and `subsequent` is what comes immediately after it in the pattern; the method then proceeds to validate `node` against `subsequent` to see whether it can prove there wouldn't be any behavioral changes if `node` were made atomic. However, not all cases are sufficiently handled just by validating against `subsequent` itself. Consider a pattern like `a*b*\w`, where `node` represents `a*` and `subsequent` represents `b*`. `a` and `b` are obviously disjoint, and so `node` can be made atomic with regards to `subsequent`, but… here `subsequent` is also "nullable," meaning it might successfully match 0 characters (the loop has a lower bound of 0). And in such a case, what comes after the `a*` won't necessarily be a `b` but could be what comes after the `b*`, which here is a `\w`, which overlaps with `a`, and as such, it would be a behavioral change to make this into `(?>a*)b*\w`. Consider an input of just `"a"`. With the original pattern, `a*` would successfully match the empty string with 0 iterations, `b*` would successfully match the empty string with 0 iterations, and then `\w` would successfully match the input `'a'`. But with the atomicized pattern, `(?>a*)` would successfully match the input `'a'` with a single iteration, leaving nothing to match the `\w`. As such, when `CanBeMadeAtomic` detects that `subsequent` may be nullable and successfully match the empty string, it needs to iterate to also validate against what comes after `subsequent` (and possibly again and again if what comes next itself keeps being nullable).

`CanBeMadeAtomic` already factored in boundaries (`\b` and `\B`), but it did so with the conservative logic that since a boundary is "zero-width" (meaning it doesn't consume any input), it must always require checking what comes after it. But that's not actually the case. Even though a boundary is zero-width, it still makes guarantees about what comes next: if the prior character is a word character, the next character is guaranteed not to be one in a successful match. And as such, we can safely make this logic more liberal and not require checking what comes next.

This last example also highlights an interesting aspect of this auto-atomicity optimization in general. There is nothing this optimization provides that the developer writing the regex in the first place couldn't have done themselves. Instead of `a*b`, a developer can write `(?>a*)b`. Instead of `[a-z]+(?= )`, a developer can write `(?>[a-z]+)(?= )`. And so on. But when was the last time you explicitly added an atomic group to a regex you authored? Of the almost 20,000 regular expression patterns in the aforementioned database of real-world regexes sourced from nuget, care to guess how many include an explicitly written atomic group? The answer: ~100. It's just not something developers in general think to do, so although the optimization transforms the user's pattern into something they could have written themselves, it's an incredibly valuable optimization, especially since now in .NET 10 over 70% of those patterns have at least one construct upgraded to be atomic.

The auto-atomicity optimization is an example of the optimizer removing unnecessary work: a key example, but certainly not the only one. Several additional PRs in .NET 10 have also eliminated unnecessary work, in other ways.

dotnet/runtime#118084 is a fun example of this, but to understand it, we first need to understand lookarounds. A "lookaround" is a regex construct that makes its contents zero-width. Whereas a set like `[abc]` consumes a single character from the input when it matches, and a loop like `[abc]{3,5}` consumes between 3 and 5 characters when it matches, lookarounds (as with other zero-width constructs, like anchors) don't consume anything. You wrap a lookaround around a regex expression, and it effectively makes the consumption temporary, e.g. if I wrap `[abc]{3,5}` in a positive lookahead as `(?=[abc]{3,5})`, that will still perform the whole match of the 3-5 set characters, but those characters won't remain consumed after exiting the lookaround; the lookaround is just performing a test to ensure the inner pattern matches, and the position in the input is reset upon exiting the lookaround. This is again easily visualized by looking at the code emitted by the regex source generator for a pattern like `(?=[abc]{3,5})abc`:

```csharp
// Zero-width positive lookahead.
{
    int positivelookahead_starting_pos = pos;

    // Match a character in the set [a-c] atomically at least 3 and at most 5 times.
    {
        int iteration = 0;
        while (iteration < 5 && (uint)iteration < (uint)slice.Length && char.IsBetween(slice[iteration], 'a', 'c'))
        {
            iteration++;
        }

        if (iteration < 3)
        {
            return false; // The input didn't match.
        }

        slice = slice.Slice(iteration);
        pos += iteration;
    }

    pos = positivelookahead_starting_pos;
    slice = inputSpan.Slice(pos);
}

// Match the string "abc".
if (!slice.StartsWith("abc"))
{
    return false; // The input didn't match.
}
```

We can see that the lookaround is caching the starting position, then proceeding to try to match the loop it contains, and if successful, resetting the matching position to what it was when the lookaround was entered, then continuing on to perform the match for what comes after the lookaround.

These examples have been for a particular flavor of lookaround, called a positive lookahead. There are four variations of lookarounds, composed of two choices: positive vs negative, and lookahead vs lookbehind. Lookaheads validate the pattern starting from the current position and proceeding forwards (as matching typically does), while lookbehinds validate the pattern starting from just before the current position and extending backwards. Positive indicates that the pattern should match, while negative indicates that the pattern should not match. So, for example, the negative lookbehind `(?<!\w)` will match if what comes before the current position is not a word character.
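All four flavors in one quick sketch (my own examples; each prints `True`):

```csharp
using System.Text.RegularExpressions;

Console.WriteLine(Regex.IsMatch("1km", @"(?<=\d)km")); // positive lookbehind: "km" preceded by a digit
Console.WriteLine(Regex.IsMatch("akm", @"(?<!\d)km")); // negative lookbehind: "km" not preceded by a digit
Console.WriteLine(Regex.IsMatch("km1", @"km(?=\d)"));  // positive lookahead: "km" followed by a digit
Console.WriteLine(Regex.IsMatch("kma", @"km(?!\d)"));  // negative lookahead: "km" not followed by a digit
```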

Negative lookarounds are particularly interesting because, unlike every other regex construct, they guarantee that the pattern they contain doesn't match. That also makes them special in other regards, in particular around capture groups. For a positive lookaround, even though it's zero-width, anything capture groups inside of the lookaround capture remains visible outside of the lookaround, e.g. `^(?=(abc))\1$`, which entails a backreference successfully matching what's captured by the capture group inside of the positive lookahead, will successfully match the input `"abc"`. But because negative lookarounds guarantee their content doesn't match, it would be counter-intuitive if anything captured inside of a negative lookaround persisted past the lookaround… so it doesn't. The capture groups inside of a negative lookaround are still possibly meaningful, in particular if there's a backreference also inside of the same lookaround that refers back to the capture group, e.g. the pattern `^(?!(ab)\1cd)ababc` is checking that the input does not begin with `ababcd` but does begin with `ababc`. But if there's no backreference, the capture group is useless, and we don't need to do any work for it as part of processing the regex (work like remembering where the capture occurred). Such capture groups can be completely eliminated from the node tree as part of the optimization phase, and that's exactly what dotnet/runtime#118084 does.

Just as developers often use backtracking constructs without thinking to make them atomic, developers also often use capture groups purely as a grouping mechanism without thinking of the possibility of making them non-capturing groups. Since captures in general need to persist to be examined by the `Match` object returned from a `Regex`, we can't just eliminate all capture groups that aren't used internally in the pattern, but we can for these negative lookarounds. Consider a pattern like `(?<!(access|auth)\s)token`, which is looking for the word `"token"` when it's not preceded by `"access "` or `"auth "`; the developer here (me, in this case) did what's fairly natural, putting a group around the alternation so that the `\s` that follows either word can be factored out (if it were instead `access|auth\s`, the whitespace set would only be in the second branch of the alternation and wouldn't apply to the first). But my "simple" grouping here is actually a capture group by default; to make it non-capturing, I'd either need to write it as a non-capturing group, i.e. `(?<!(?:access|auth)\s)token`, or I'd need to use `RegexOptions.ExplicitCapture`, which turns all unnamed capture groups into non-capturing groups.
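Here's a minimal sketch of that asymmetry (my own inputs): a capture made inside a positive lookahead survives the match, while a capture inside a negative lookaround does not:

```csharp
using System.Text.RegularExpressions;

// The capture inside the positive lookahead persists, feeding the \1 backreference
// and remaining visible on the resulting Match:
Match positive = Regex.Match("abc", @"^(?=(abc))\1$");
Console.WriteLine($"{positive.Success} / group 1: '{positive.Groups[1].Value}'");
// True / group 1: 'abc'

// The capture group inside the negative lookbehind reports nothing after the match:
Match negative = Regex.Match("my token", @"(?<!(access|auth)\s)token");
Console.WriteLine($"{negative.Success} / group 1 captured: {negative.Groups[1].Success}");
// True / group 1 captured: False
```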

We can similarly remove other work related to lookarounds. As noted, positive lookarounds exist to transform any pattern into a zero-width pattern, i.e. one that doesn't consume anything. That's all they do. If the pattern being wrapped by the positive lookaround is already zero-width, the lookaround contributes nothing to the behavior of the expression and can be removed. So, for example, if you have `(?=$)`, that can be transformed into just `$`. That's exactly what dotnet/runtime#118091 does.

dotnet/runtime#118079 and dotnet/runtime#118111 handle other transformations related to zero-width assertions, in particular with regards to loops. For whatever reason, you'll see developers wrapping zero-width assertions inside of loops, either making such assertions optional (e.g. `\b?`) or giving them some larger upper bound (e.g. `(?=abc)*`). But these zero-width assertions don't consume anything; their sole purpose is to flag whether something is true or false at the current position. If you make such a zero-width assertion optional, you're saying "check whether it's true or false, and then immediately ignore the answer, because both answers are valid"; as such, the whole expression can be removed as a nop. Similarly, if you wrap a loop with an upper bound greater than 1 around such an assertion, you're saying "check whether it's true or false, now without changing anything check again, and check again, and check again." There's a common English expression along the lines of "insanity is doing the same thing over and over again and expecting different results." That applies here. There may be behavioral benefits to invoking the zero-width assertion once, but repeating it beyond that is a pure waste: if it was going to fail, it would have failed the first time. Mostly. There's one specific case where the difference is actually observable, and it has to do with an interesting feature of .NET regexes: capture groups track all matched captures, not just the last. Consider this program:

```csharp
// dotnet run -c Release -f net10.0
using System.Diagnostics;
using System.Text.RegularExpressions;

Match m = Regex.Match("abc", "^(?=(\\w+)){3}abc$");
Debug.Assert(m.Success);

foreach (Group g in m.Groups)
{
    foreach (Capture c in g.Captures)
    {
        Console.WriteLine($"Group: {g.Name}, Capture: {c.Value}");
    }
}
```

If you run that, you may be surprised to see that capture group #1 (the explicit group I have inside of the lookahead) provides three capture values:

```
Group: 0, Capture: abc
Group: 1, Capture: abc
Group: 1, Capture: abc
Group: 1, Capture: abc
```

That's because the loop around the positive lookahead does three iterations, each iteration matches `"abc"`, and each successful capture is persisted for subsequent inspection via the `Regex` APIs. As such, we can't optimize any loop around zero-width assertions by lowering the upper bound from greater than 1 to 1; we can only do so if the assertion doesn't contain any captures. And that's what these PRs do. Given a loop that wraps a zero-width assertion that does not contain a capture, if the lower bound of the loop is 0, the whole loop and its contents can be eliminated, and if the upper bound of the loop is greater than 1, the loop itself can be removed, leaving only its contents in its stead.

Any time work like this is eliminated, it's easy to construct monstrous, misleading micro-benchmarks… but it's also a lot of fun, so I'll allow myself this one.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"(?=.*\bTwain\b.*\bConnecticut\b)*.*Mark", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Count | .NET 9.0 | 3,226.024 ms | 1.000 |
| Count | .NET 10.0 | 6.605 ms | 0.002 |

dotnet/runtime#118083 is similar. "Repeater" is the name for a regex loop whose lower and upper bounds are the same, such that the contents of the loop "repeat" that fixed number of times. Typically you'll see these written out using the `{N}` syntax, e.g. `[abc]{3}` is a repeater that requires three characters, any of which can be `'a'`, `'b'`, or `'c'`. But of course it could also be written out in long form, just by manually repeating the contents, e.g. `[abc][abc][abc]`. Just as we saw how we can condense loops around zero-width assertions when specified in loop form, we can do the exact same thing when they're manually written out. So, for example, `\b\b` is the same as `\b{2}`, which is just `\b`.
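A quick check of that equivalence (my own example; all three print `True`):

```csharp
using System.Text.RegularExpressions;

// \b\b, \b{2}, and \b all assert the same thing at the same position,
// so the repeated assertions can be collapsed into one.
Console.WriteLine(Regex.IsMatch("word", @"\b\bword"));
Console.WriteLine(Regex.IsMatch("word", @"\b{2}word"));
Console.WriteLine(Regex.IsMatch("word", @"\bword"));
```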

Another nice example of removing unnecessary work is dotnet/runtime#118105. Boundary assertions are used in many expressions, e.g. it's quite common to see a simple pattern like `\b\w+\b`, which is trying to match an entire word. When the regex engine encounters such an assertion, historically it's delegated to the `IsBoundary` helper shown earlier. There is, however, some subtle unnecessary work here, which is more obvious when you see what the regex source generator outputs for an expression like `\b\w+\b`. This is what the output looks like on .NET 9:

```csharp
// Match if at a word boundary.
if (!Utilities.IsBoundary(inputSpan, pos))
{
    return false; // The input didn't match.
}

// Match a word character atomically at least once.
{
    int iteration = 0;
    while ((uint)iteration < (uint)slice.Length && Utilities.IsWordChar(slice[iteration]))
    {
        iteration++;
    }

    if (iteration == 0)
    {
        return false; // The input didn't match.
    }

    slice = slice.Slice(iteration);
    pos += iteration;
}

// Match if at a word boundary.
if (!Utilities.IsBoundary(inputSpan, pos))
{
    return false; // The input didn't match.
}
```

Pretty straightforward: match the boundary, consume as many word characters as possible, then again match a boundary. Except if you look back at the definition of `IsBoundary`, you'll notice that it's doing two checks, one against the previous character and one against the next character.

```csharp
internal static bool IsBoundary(ReadOnlySpan<char> inputSpan, int index)
{
    int indexM1 = index - 1;
    return ((uint)indexM1 < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[indexM1])) !=
           ((uint)index < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[index]));
}
```

Now, look at that, then look back at the generated code, then at this again, and back at the source-generated code again. See anything unnecessary? When we perform the first boundary comparison, we dutifully check the previous character, which is necessary, but then we check the current character, which is about to be checked against `\w` by the subsequent `\w+` loop anyway. Similarly for the second boundary check: we just finished matching `\w+`, which will only have succeeded if there was at least one word character. While we still need to validate that the subsequent character is not a boundary word character (there are two characters considered boundary word characters that aren't word characters), we don't need to re-validate the previous character. So, dotnet/runtime#118105 overhauls boundary handling in the compiler and source generator to emit customized boundary checks based on surrounding knowledge. If it can prove that the subsequent construct will validate that a character is a word character, then it only needs to validate that the previous character is not a boundary word character; similarly, if it can prove that the previous construct will already have validated that a character is a word character, then it only needs to validate that the next character isn't. This leads to this tweaked source-generated code now on .NET 10:

```csharp
// Match if at a word boundary.
if (!Utilities.IsPreWordCharBoundary(inputSpan, pos))
{
    return false; // The input didn't match.
}

// Match a word character atomically at least once.
{
    int iteration = 0;
    while ((uint)iteration < (uint)slice.Length && Utilities.IsWordChar(slice[iteration]))
    {
        iteration++;
    }

    if (iteration == 0)
    {
        return false; // The input didn't match.
    }

    slice = slice.Slice(iteration);
    pos += iteration;
}

// Match if at a word boundary.
if (!Utilities.IsPostWordCharBoundary(inputSpan, pos))
{
    return false; // The input didn't match.
}
```

Those `IsPreWordCharBoundary` and `IsPostWordCharBoundary` helpers are just half the checks in the main boundary helper. In cases where there are lots of boundary tests being performed, the reduced check count can add up.
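The bodies of those helpers aren't shown above, but based on the `IsBoundary` definition, each plausibly reduces to one half of it. A hypothetical reconstruction (my own sketch, not the actual emitted code):

```csharp
// Hypothetical sketch: when the character at 'index' is about to be validated
// as a word character anyway, the position is a boundary iff the preceding
// character is not a boundary word character (or doesn't exist).
internal static bool IsPreWordCharBoundary(ReadOnlySpan<char> inputSpan, int index) =>
    !((uint)(index - 1) < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[index - 1]));

// Hypothetical sketch: when the character before 'index' is already known to
// be a word character, the position is a boundary iff the character at 'index'
// is not a boundary word character (or doesn't exist).
internal static bool IsPostWordCharBoundary(ReadOnlySpan<char> inputSpan, int index) =>
    !((uint)index < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[index]));
```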

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"\ba\b", RegexOptions.Compiled | RegexOptions.IgnoreCase);

    [Benchmark]
    public int CountStandaloneAs() => s_regex.Count(s_input);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| CountStandaloneAs | .NET 9.0 | 20.58 ms | 1.00 |
| CountStandaloneAs | .NET 10.0 | 19.25 ms | 0.94 |

The `Regex` optimizer is all about pattern recognition: it looks for sequences and shapes it recognizes and performs transforms over those to put them into a more efficiently processable form. One example of this is with alternations around coalescable branches. Let's say you have an alternation `a|e|i|o|u`. You could process that as an alternation, but it's also much more efficiently represented and processed as the equivalent set `[aeiou]`. There is an optimization that does such transformations as part of handling alternations. However, through .NET 9, it only handled single characters and sets, but not negated sets. For example, it would transform `a|e|i|o|u` into `[aeiou]`, and it would transform `[aei]|[ou]` into `[aeiou]`, but it would not merge negated sets like `[^\n]`, otherwise known as `.` (when not in `RegexOptions.Singleline` mode). When developers want a set that represents all characters, there are various idioms they employ, such as `[\s\S]`, which says "this is a set of all whitespace and non-whitespace characters," aka everything. Another common idiom is `\n|.`, which is the same as `\n|[^\n]`, which says "this is an alternation that matches either a newline or anything other than a newline," aka also everything. Unfortunately, while examples like `[\d\D]` have been handled well, `.|\n` has not, because of the gap in the alternation optimization. dotnet/runtime#118109 improves that, such that these "not" cases are mergeable as part of the existing optimization. That takes a relatively expensive alternation and converts it into a super fast set check. And while, in general, set containment checks are very efficient, this one is as efficient as you can get, as it's always true. We can see an example of this with a pattern intended to match C-style comment blocks.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private const string Input = """
        /* This is a comment. */
        /* Another comment */
        /* Multi-line
           comment */
        """;

    private static readonly Regex s_regex = new Regex(@"/\*(?:.|\n)*?\*/", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(Input);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Count | .NET 9.0 | 344.80 ns | 1.00 |
| Count | .NET 10.0 | 93.59 ns | 0.27 |

Note that there's another change that helps .NET 10 here, dotnet/runtime#118373, though I hesitate to call it out as a performance improvement since it's really more of a bug fix. As part of writing this post, these benchmark numbers were showing some oddities (it's important in general to be skeptical of benchmark results and to investigate anything that doesn't align with reason and expectations). The result of investigating was a one-word change that yielded significant speedups on this test, specifically when using `RegexOptions.Compiled` (the bug didn't exist in the source generator). As part of handling lazy loops, there's a special case for when the lazy loop is around a set that matches any character, which, thanks to the previous PR, `(?:.|\n)` now does. That special case recognizes that if the lazy loop matches anything, we can efficiently find the end of the lazy loop by searching for whatever comes after the loop (e.g. in this test, the loop is followed by the literal `"*/"`). Unfortunately, the helper that emits that `IndexOf` call was passed the wrong node from the pattern: it was being passed the object representing the `(?:.|\n)` any-set rather than the `"*/"` literal, which resulted in it emitting the equivalent of `IndexOfAnyInRange((char)0, '\uFFFF')` rather than the equivalent of `IndexOf("*/")`. Oops. It was still functionally correct, in that the `IndexOfAnyInRange` call would successfully match the first character and the loop would re-evaluate from that location, but it meant that rather than efficiently skipping, using SIMD, over a bunch of positions that couldn't possibly match, we were doing non-trivial work for each and every position along the way.

dotnet/runtime#118087 represents another interesting transformation related to alternations. It's very common to come across alternations with empty branches, possibly because that's what the developer wrote, but more commonly as an outcome of other transformations that have happened. For example, given the pattern `\r\n|\r`, which is trying to match line endings that begin with `\r`, there is an optimization that will factor out a common prefix of all of the branches, producing the equivalent of `\r(?:\n|)`; in other words, `\r` followed by either a line feed or nothing. Such an alternation is a perfectly valid representation of this concept, but there's a more natural one: the `?` quantifier. Behaviorally, this pattern is identical to `\r\n?`, and because the latter is more common and more canonical, the regex engine has more optimizations that recognize the loop-based form, for example coalescing with other loops, or auto-atomicity. As such, this PR finds all alternations of the form `X|` and transforms them into `X?`. Similarly, it finds all alternations of the form `|X` and transforms them into `X??`. The difference between `X|` and `|X` is whether `X` is tried first or empty is tried first; similarly, the difference between the greedy `X?` loop and the lazy `X??` loop is whether `X` is tried first or empty is tried first. The impact of this can be seen in the code generated for the previously cited example. Here is the source-generated code for the heart of the matching routine for `\r\n|\r` on .NET 9:

```csharp
// Match '\r'.
if (slice.IsEmpty || slice[0] != '\r')
{
    return false; // The input didn't match.
}

// Match with 2 alternative expressions, atomically.
{
    int alternation_starting_pos = pos;

    // Branch 0
    {
        // Match '\n'.
        if ((uint)slice.Length < 2 || slice[1] != '\n')
        {
            goto AlternationBranch;
        }

        pos += 2;
        slice = inputSpan.Slice(pos);
        goto AlternationMatch;

        AlternationBranch:
        pos = alternation_starting_pos;
        slice = inputSpan.Slice(pos);
    }

    // Branch 1
    {
        pos++;
        slice = inputSpan.Slice(pos);
    }

    AlternationMatch:;
}
```

Now, here’s what’s produced on .NET 10:

```csharp
// Match '\r'.
if (slice.IsEmpty || slice[0] != '\r')
{
    return false; // The input didn't match.
}

// Match '\n' atomically, optionally.
if ((uint)slice.Length > (uint)1 && slice[1] == '\n')
{
    slice = slice.Slice(1);
    pos++;
}
```

The optimizer recognized that `\r\n|\r` is the same as `\r(?:\n|)`, which is the same as `\r\n?`, which is the same as `\r(?>\n?)`, for which it can produce much simpler code, given that it no longer needs any backtracking.
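The rewrite is behavior-preserving; a quick sanity check (my own example):

```csharp
using System.Text.RegularExpressions;

// Both forms find the same two line endings in the same places.
const string input = "one\r\ntwo\rthree";
foreach (string pattern in new[] { @"\r\n|\r", @"\r\n?" })
{
    Console.WriteLine($"{pattern}: {Regex.Matches(input, pattern).Count} matches"); // 2 matches for each
}
```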

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"ab|a", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Count | .NET 9.0 | 23.35 ms | 1.00 |
| Count | .NET 10.0 | 18.73 ms | 0.80 |

.NET 10 also features improvements to `Regex` that go beyond this form of work elimination. `Regex`'s matching routines are logically factored into two pieces: finding as quickly as possible the next place that could possibly match (`TryFindNextPossibleStartingPosition`), and then performing the full matching routine at that location (`TryMatchAtCurrentPosition`). It's desirable that `TryFindNextPossibleStartingPosition` do its work as quickly as possible while also significantly limiting the number of locations at which a full match is attempted. `TryFindNextPossibleStartingPosition`, for example, could operate very quickly just by always saying that the next index in the input should be tested, but that would result in the full matching logic being performed at every index in the input; that's not great for performance. Instead, the optimizer analyzes the pattern looking for things that would allow it to quickly search for viable starting locations, e.g. fixed strings or sets at known offsets in the pattern. Anchors are some of the most valuable things the optimizer can find, as they significantly limit the places where a match is possible; the ideal pattern begins with a beginning anchor (`^`), which means the only possible place matching can succeed is at index 0.

We previously discussed lookarounds, but as it turns out, until .NET 10, lookarounds weren't factored into what `TryFindNextPossibleStartingPosition` should look for. dotnet/runtime#112107 changes that. It teaches the optimizer when and how to explore positive lookaheads at the beginning of a pattern for constructs that could help it more efficiently find starting locations. For example, in .NET 9, for the pattern `(?=^)hello`, here's what the source generator emits for `TryFindNextPossibleStartingPosition`:

```csharp
private bool TryFindNextPossibleStartingPosition(ReadOnlySpan<char> inputSpan)
{
    int pos = base.runtextpos;

    // Any possible match is at least 5 characters.
    if (pos <= inputSpan.Length - 5)
    {
        // The pattern has the literal "hello" at the beginning of the pattern. Find the next occurrence.
        // If it can't be found, there's no match.
        int i = inputSpan.Slice(pos).IndexOfAny(Utilities.s_indexOfString_hello_Ordinal);
        if (i >= 0)
        {
            base.runtextpos = pos + i;
            return true;
        }
    }

    // No match found.
    base.runtextpos = inputSpan.Length;
    return false;
}
```

The optimizer found the `"hello"` string in the pattern and is thus searching for that as part of finding the next possible place to do the full match. That would be excellent, if it weren't for the lookahead that also says any match must happen at the beginning of the input. Now in .NET 10, we get this:

```csharp
private bool TryFindNextPossibleStartingPosition(ReadOnlySpan<char> inputSpan)
{
    int pos = base.runtextpos;

    // Any possible match is at least 5 characters.
    if (pos <= inputSpan.Length - 5)
    {
        // The pattern leads with a beginning (\A) anchor.
        if (pos == 0)
        {
            return true;
        }
    }

    // No match found.
    base.runtextpos = inputSpan.Length;
    return false;
}
```

That `pos == 0` check is critical, because it means we will only ever attempt the full match in one location, and we can avoid the search that would otherwise happen even when there's no good location to perform the match. Again, any time you eliminate work like this, you can construct tantalizing micro-benchmarks…

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"(?=^)hello", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Count | .NET 9.0 | 2,383,784.95 ns | 1.000 |
| Count | .NET 10.0 | 17.43 ns | 0.000 |

That same PR also improved optimizations over alternations. It's already the case that the branches of alternations are analyzed looking for common prefixes that can be factored out. For example, given the pattern `abc|abd`, the optimizer will spot the shared `"ab"` prefix at the beginning of each branch and factor it out, resulting in `ab(?:c|d)`, and will then see that each branch of the remaining alternation is an individual character, which it can convert into a set, `ab[cd]`. If, however, the branches began with anchors, these optimizations wouldn't be applied. Given the pattern `^abc|^abd`, the code generators would end up emitting this exactly as written, with an alternation of two branches, the first branch checking for the beginning and then matching `"abc"`, the second branch also checking for the beginning and then matching `"abd"`. Now in .NET 10, the anchor can be factored out, such that `^abc|^abd` ends up being rewritten as `^ab[cd]`.
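A quick check that the rewrite accepts exactly the same inputs (my own example):

```csharp
using System.Text.RegularExpressions;

foreach (string input in new[] { "abc", "abd", "abe", "xabc" })
{
    bool original = Regex.IsMatch(input, "^abc|^abd");
    bool rewritten = Regex.IsMatch(input, "^ab[cd]");
    Console.WriteLine($"{input}: {original} == {rewritten}"); // True, True, False, False for both
}
```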

As a small tweak, dotnet/runtime#112065 also helps improve the source-generated code for repeaters by using a more efficient searching routine. Let's take the pattern `[0-9a-f]{32}` as an example. This is looking for sequences of 32 lowercase hex digits. In .NET 9, the implementation of that ends up looking like this:

```csharp
// Match a character in the set [0-9a-f] exactly 32 times.
{
    if ((uint)slice.Length < 32)
    {
        return false; // The input didn't match.
    }

    if (slice.Slice(0, 32).IndexOfAnyExcept(Utilities.s_asciiHexDigitsLower) >= 0)
    {
        return false; // The input didn't match.
    }
}
```

Simple, clean, fairly concise, and utilizing the vectorized `IndexOfAnyExcept` to very efficiently validate that the whole sequence of 32 characters is lowercase hex. We can do a tad better, though. The `IndexOfAnyExcept` method not only needs to find whether the span contains something other than one of the provided values, it needs to report the index at which that found value occurs. That's only a few instructions, but they're a few unnecessary instructions, since here that exact index isn't used… the implementation only cares whether the result is `>= 0`, meaning whether anything was found. As such, we can instead use the `Contains` variant of this method, which doesn't need to spend extra cycles determining the exact index. Now in .NET 10, this is generated:

```csharp
// Match a character in the set [0-9a-f] exactly 32 times.
if ((uint)slice.Length < 32 || slice.Slice(0, 32).ContainsAnyExcept(Utilities.s_asciiHexDigitsLower))
{
    return false; // The input didn't match.
}
```

Finally, the .NET 10 SDK includes new analyzers related to `Regex`. It's oddly common to see code that determines whether an input matches a `Regex` written like this: `Regex.Match(...).Success`. While functionally correct, that's much more expensive than `Regex.IsMatch(...)`. For all of the engines, `Regex.Match(...)` requires allocating a new `Match` object and supporting data structures (except when there isn't a match found, in which case it's able to use an empty singleton); in contrast, `IsMatch` doesn't need to allocate such an instance because it doesn't need to return one (as an implementation detail, it may still use a `Match` object, but it can reuse one rather than creating a new one each time). It can also avoid other inefficiencies. `RegexOptions.NonBacktracking` is "pay-for-play" with the information it needs to gather. Determining just whether there's a match is cheaper than determining exactly where the match begins and ends, which is cheaper still than determining all of the captures that make up that match. `IsMatch` is thus the cheapest, only needing to determine that there is a match, not exactly where it is or what the exact captures are, whereas `Match` needs to determine all of that. `Regex.Matches(...).Count` is similar; it's having to gather all of the relevant details and allocate a whole bunch of objects, whereas `Regex.Count(...)` can do so in a much more efficient manner. dotnet/roslyn-analyzers#7547 adds CA1874 and CA1875, which flag these cases and recommend use of `IsMatch` and `Count`, respectively.

Analyzer and fixer for CA1874

Analyzer and fixer for CA1875
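The fixes the analyzers suggest are mechanical; an illustrative snippet (my own example):

```csharp
using System.Text.RegularExpressions;

string input = "The quick brown fox";

// Flagged by CA1874: allocates a Match object just to test for success...
bool found = Regex.Match(input, @"\bfox\b").Success;
// ...preferred:
bool foundCheaper = Regex.IsMatch(input, @"\bfox\b");

// Flagged by CA1875: materializes every match just to count them...
int count = Regex.Matches(input, @"\b\w+\b").Count;
// ...preferred:
int countCheaper = Regex.Count(input, @"\b\w+\b");
```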

```csharp
// dotnet run -c Release -f net10.0 --filter **
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"\b\w+\b", RegexOptions.NonBacktracking);

    [Benchmark(Baseline = true)]
    public int MatchesCount() => s_regex.Matches(s_input).Count;

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
```
| Method | Mean | Ratio | Allocated | Alloc Ratio |
|--------|------|-------|-----------|-------------|
| MatchesCount | 680.4 ms | 1.00 | 665530176 B | 1.00 |
| Count | 219.0 ms | 0.32 | – | 0.00 |

Regex is one form of searching, but there are other primitives and helpers throughout .NET for various forms of searching, and they’ve seen meaningful improvements in .NET 10, as well.

SearchValues

When discussing performance improvements in .NET 8, I called out two changes that were my favorites. The first was dynamic PGO. The second was `SearchValues`.

`SearchValues` provides a mechanism for precomputing optimal strategies for searching. .NET 8 introduced overloads of `SearchValues.Create` that produce `SearchValues<byte>` and `SearchValues<char>`, and corresponding overloads of `IndexOfAny` and friends that accept such instances. If there's a set of values you'll be searching for over and over and over, you can create one of these instances once, cache it, and then use it for all subsequent searches for those values, e.g.

```csharp
private static readonly SearchValues<char> s_validBase64Chars =
    SearchValues.Create("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/");

internal static bool IsValidBase64(ReadOnlySpan<char> input) =>
    !input.ContainsAnyExcept(s_validBase64Chars);
```

There are a plethora of different implementations used by `SearchValues<T>` behind the scenes, each of which is selected and configured based on the `T` and the exact nature of the target values being searched for. dotnet/runtime#106900 adds another, which both helps to shave several instructions off the core vectorized search loop and helps to highlight just how nuanced these different algorithms can be. Previously, if four target `byte` values were provided and they weren't in a contiguous range, `SearchValues.Create` would choose an implementation that just uses four vectors, one per target byte, and does four comparisons (one against each target vector) for each input vector being tested. However, there's already a specialization used for more than five target bytes when all of the target bytes are ASCII. This PR allows that specialization to be used for four or five targets when the lower nibble (the bottom four bits) of each of the targets is unique, and in doing so, the search becomes several instructions cheaper: rather than doing four comparisons, it can do a single shuffle and equality check.
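To build some intuition for why unique low nibbles enable that, here's a scalar sketch of the trick (my own illustration; the real implementation performs this table lookup for an entire vector of input bytes with a single shuffle instruction):

```csharp
// The four target bytes '\0' (0x00), '\r' (0x0D), '&' (0x26), and '<' (0x3C)
// have low nibbles 0, 13, 6, and 12: all unique, so a 16-entry table indexed
// by (b & 0xF) can hold the only candidate target for any input byte b.
byte[] targets = { (byte)'\0', (byte)'\r', (byte)'&', (byte)'<' };

// Unused slots get a filler whose own low nibble differs from its index,
// so no input byte can ever falsely match one.
byte[] table = new byte[16];
for (int i = 0; i < 16; i++) table[i] = (byte)((i + 1) & 0xF);
foreach (byte t in targets) table[t & 0xF] = t;

// Membership is now one lookup plus one comparison instead of four comparisons.
bool IsTarget(byte b) => table[b & 0xF] == b;

Console.WriteLine(IsTarget((byte)'&')); // True
Console.WriteLine(IsTarget((byte)'a')); // False
```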

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly byte[] s_haystack = new HttpClient().GetByteArrayAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly SearchValues<byte> s_needle = SearchValues.Create("\0\r&<"u8);

    [Benchmark]
    public int Count()
    {
        int count = 0;

        ReadOnlySpan<byte> haystack = s_haystack.AsSpan();
        int pos;
        while ((pos = haystack.IndexOfAny(s_needle)) >= 0)
        {
            count++;
            haystack = haystack.Slice(pos + 1);
        }

        return count;
    }
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Count | .NET 9.0 | 3.704 ms | 1.00 |
| Count | .NET 10.0 | 2.668 ms | 0.72 |

dotnet/runtime#107798 improves another such algorithm, when AVX512 is available. One of the fallback strategies used by `SearchValues.Create<char>` is a vectorized "probabilistic map," basically a Bloom filter. It has a bitmap that stores a bit for each `byte` of the `char`; when testing to see whether the `char` is in the target set, it checks to see whether the bit for each of the `char`'s `byte`s is set. If at least one isn't set, the `char` definitely isn't in the target set. If both are set, more validation will need to be done to determine the actual inclusion of that value in the set. This can make it very efficient to rule out large amounts of input that definitely is not in the set and then only spend more effort on input that might be. The implementation involves various shuffle, shift, and permute operations, and this change is able to use a better set of instructions that reduces the number needed.
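Conceptually, the filter works something like this scalar sketch (my own simplified illustration; the actual implementation is vectorized and handles details this glosses over):

```csharp
// Set a bit for the low byte and the high byte of every char in the target set.
var bitmap = new bool[256];
foreach (char c in "ßäöüÄÖÜ")
{
    bitmap[c & 0xFF] = true;
    bitmap[(c >> 8) & 0xFF] = true;
}

// A char can be in the set only if the bits for both of its bytes are set;
// most input is ruled out by this cheap test, and only the rare candidate
// pays for an exact membership check.
bool MightContain(char c) => bitmap[c & 0xFF] && bitmap[(c >> 8) & 0xFF];

Console.WriteLine(MightContain('\n')); // False: definitely not in the set
Console.WriteLine(MightContain('ä'));  // True: candidate, verify exactly
```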

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly SearchValues<char> s_searchValues = SearchValues.Create("ßäöüÄÖÜ");
    private string _input = new string('\n', 10_000);

    [Benchmark]
    public int IndexOfAny() => _input.AsSpan().IndexOfAny(s_searchValues);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| IndexOfAny | .NET 9.0 | 437.7 ns | 1.00 |
| IndexOfAny | .NET 10.0 | 404.7 ns | 0.92 |

While .NET 8 introduced support for `SearchValues<byte>` and `SearchValues<char>`, .NET 9 introduced support for `SearchValues<string>`. `SearchValues<string>` is used a bit differently from `SearchValues<byte>` and `SearchValues<char>`; whereas `SearchValues<byte>` is used to search for target `byte`s within a collection of `byte`s, and `SearchValues<char>` is used to search for target `char`s within a collection of `char`s, `SearchValues<string>` is used to search for target `string`s within a single `string` (or span of `char`s). In other words, it's a multi-substring search. Let's say you have the regular expression `(?i)hello|world`; that specifies that it should look for either "hello" or "world" in a case-insensitive manner. The `SearchValues` equivalent of that is `SearchValues.Create(["hello", "world"], StringComparison.OrdinalIgnoreCase)` (in fact, if you specify that pattern, the `Regex` compiler and source generator will use such a `SearchValues.Create` call under the covers in order to optimize the search).
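For example:

```csharp
using System.Buffers;

// Created once and cached, just like the byte/char variants.
SearchValues<string> needles = SearchValues.Create(
    ["hello", "world"], StringComparison.OrdinalIgnoreCase);

ReadOnlySpan<char> text = "Say Hello to the WORLD";
Console.WriteLine(text.IndexOfAny(needles));  // 4 (the 'H' of "Hello")
Console.WriteLine(text.ContainsAny(needles)); // True
```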

`SearchValues<string>` also gets better in .NET 10. A key algorithm used by `SearchValues<string>` whenever possible and relevant is called "Teddy," and it enables performing a vectorized search for multiple substrings. In its core processing loop, when using AVX512, there are two instructions, a `PermuteVar8x64x2` and an `AlignRight`; dotnet/runtime#107819 recognizes that those can be replaced by a single `PermuteVar64x8x2`. Similarly, when on Arm64, dotnet/runtime#118110 plays the instructions game and replaces a use of `ExtractNarrowingSaturateUpper` with the slightly cheaper `UnzipEven`.

`SearchValues<string>` is also able to optimize searching for a single string, spending more time to come up with optimal search parameters than does a simpler `IndexOf(string, StringComparison)` call. Similar to the approach with the probabilistic maps employed earlier, the vectorized search can yield false positives that then need to be weeded out. In some cases by construction, however, we know that false positives aren't possible; dotnet/runtime#108368 extends an existing optimization that was case-sensitive only to also apply in some case-insensitive uses, such that we can avoid doing the extra validation step in more cases. For the candidate verification that remains, dotnet/runtime#108365 also significantly reduces overhead in a variety of cases, including adding specialized handling for needles (the things being searched for) of up to 16 characters (previously it was only up to 8), and precomputing more information to make the verification faster.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_haystack = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_the = new("the", RegexOptions.IgnoreCase | RegexOptions.Compiled);
    private static readonly Regex s_something = new("something", RegexOptions.IgnoreCase | RegexOptions.Compiled);

    [Benchmark]
    public int CountThe() => s_the.Count(s_haystack);

    [Benchmark]
    public int CountSomething() => s_something.Count(s_haystack);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| CountThe | .NET 9.0 | 9.881 ms | 1.00 |
| CountThe | .NET 10.0 | 7.799 ms | 0.79 |
| CountSomething | .NET 9.0 | 2.466 ms | 1.00 |
| CountSomething | .NET 10.0 | 2.027 ms | 0.82 |

dotnet/runtime#118108 also adds a "packed" variant of the single-string implementation, meaning it's able to handle common cases like ASCII more efficiently by ignoring a character's upper zero byte in order to fit twice as many characters into a vector.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_haystack = string.Concat(Enumerable.Repeat("Sherlock Holm_s", 8_000));
    private static readonly SearchValues<string> s_needles = SearchValues.Create(["Sherlock Holmes"], StringComparison.OrdinalIgnoreCase);

    [Benchmark]
    public bool ContainsAny() => s_haystack.AsSpan().ContainsAny(s_needles);
}
```
| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| ContainsAny | .NET 9.0 | 58.41 us | 1.00 |
| ContainsAny | .NET 10.0 | 16.32 us | 0.28 |

MemoryExtensions

The searching improvements continue beyond `SearchValues`, of course. Prior to .NET 10, the `MemoryExtensions` class already had a wealth of support for searching and manipulating spans, with extension methods like `IndexOf`, `IndexOfAnyExceptInRange`, `ContainsAny`, `Count`, `Replace`, `SequenceCompareTo`, and more (the set was further extended as well by dotnet/runtime#112951, which added `CountAny` and `ReplaceAny`), but the vast majority of these were limited to working with `T` types constrained to be `IEquatable<T>`. And in practice, many of the types you want to search do in fact implement `IEquatable<T>`. However, you might be in a generic context with an unconstrained `T`, such that even if the `T` used to instantiate the generic type or method is equatable, that's not evident in the type system and thus the `MemoryExtensions` method couldn't be used. And of course there are scenarios where you want to be able to supply a different comparison routine. Both of these scenarios show up, for example, in the implementation of LINQ's `Enumerable.Contains`; if the source `IEnumerable<TSource>` is actually something we could treat as a span, like `TSource[]` or `List<TSource>`, it'd be nice to be able to just delegate to the optimized `MemoryExtensions.Contains<T>`, but a) `Enumerable.Contains` doesn't constrain its `TSource : IEquatable<TSource>`, and b) `Enumerable.Contains` accepts an optional comparer.

To address this, dotnet/runtime#110197 adds ~30 new overloads to the `MemoryExtensions` class. These overloads all parallel existing methods, but remove the `IEquatable<T>` (or `IComparable<T>`) constraint on the generic method parameter and accept an optional `IEqualityComparer<T>?` (or `IComparer<T>?`). When no comparer or a default comparer is supplied, they can fall back to using the same vectorized logic for relevant types, and otherwise can provide as optimal an implementation as they can muster, based on the nature of `T` and the supplied comparer.
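A small usage sketch of the new overloads (my own example, assuming the shape just described):

```csharp
// No IEquatable<T> constraint is required, and a custom comparer may be supplied:
ReadOnlySpan<string> words = new[] { "alpha", "BETA", "gamma" };

Console.WriteLine(words.Contains("beta", StringComparer.OrdinalIgnoreCase)); // True
Console.WriteLine(words.Contains("beta", EqualityComparer<string>.Default)); // False
```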

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _data = Enumerable.Range(0, 1_000_000).ToArray();

    [Benchmark]
    public bool Contains() => _data.Contains(int.MaxValue, EqualityComparer<int>.Default);
}
```
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| Contains | .NET 9.0 | 213.94 us | 1.00 |
| Contains | .NET 10.0 | 67.86 us | 0.32 |

(It's also worth highlighting that with the "first-class" span support in C# 14, many of these extensions from MemoryExtensions now naturally show up directly on types like string.)

This kind of searching often shows up as part of other APIs. For example, encoding APIs often need to first find something to be encoded, and that searching can be accelerated by using one of these efficiently implemented search APIs. There are dozens and dozens of existing examples of that throughout the core libraries, many of the places using SearchValues or these various MemoryExtensions methods. dotnet/runtime#110574 adds another, speeding up string.Normalize's argument validation. The previous implementation walked character by character looking for the first surrogate; the new implementation gives that a jump start by using IndexOfAnyInRange.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _input = "This is a test. This is only a test. Nothing to see here. \u263A\uFE0F";

    [Benchmark]
    public string Normalize() => _input.Normalize();
}
```
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| Normalize | .NET 9.0 | 104.93 ns | 1.00 |
| Normalize | .NET 10.0 | 88.94 ns | 0.85 |

dotnet/runtime#110478 similarly updates HttpUtility.UrlDecode to use the vectorized IndexOfAnyInRange. It also avoids allocating the resulting string if nothing needs to be decoded.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Web;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public string UrlDecode() => HttpUtility.UrlDecode("aaaaabbbbb%e2%98%ba%ef%b8%8f");
}
```
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| UrlDecode | .NET 9.0 | 59.42 ns | 1.00 |
| UrlDecode | .NET 10.0 | 54.26 ns | 0.91 |

Similarly, dotnet/runtime#114494 employs SearchValues in OptimizedInboxTextEncoder, which is the core implementation that backs the various encoders like JavaScriptEncoder and HtmlEncoder in the System.Text.Encodings.Web library.

JSON

JSON is at the heart of many different domains, having become the lingua franca of data interchange on the web. With System.Text.Json as the recommended library for working with JSON in .NET, it is constantly evolving to meet additional performance requirements. .NET 10 sees it updated with both improvements to the performance of existing methods as well as new methods specifically geared towards helping with performance.

The JsonSerializer type is layered on top of the lower-level Utf8JsonReader and Utf8JsonWriter types. When serializing, JsonSerializer needs an instance of Utf8JsonWriter, which is a class, and any associated objects, such as an IBufferWriter instance. For any temporary buffers it requires, it'll use rented buffers from ArrayPool<byte>, but for these helper objects, it maintains its own cache, to avoid needing to recreate them at very high frequencies. That cache was being used for all asynchronous streaming serialization operations, but as it turns out, it wasn't being used for synchronous streaming serialization operations. dotnet/runtime#112745 fixes that to make the use of the cache consistent, avoiding these intermediate allocations.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Data _data = new();
    private MemoryStream _stream = new();

    [Benchmark]
    public void Serialize()
    {
        _stream.Position = 0;
        JsonSerializer.Serialize(_stream, _data);
    }

    public class Data
    {
        public int Value1 { get; set; }
    }
}
```
| Method | Runtime | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|---|
| Serialize | .NET 9.0 | 115.36 ns | 1.00 | 176 B | 1.00 |
| Serialize | .NET 10.0 | 77.73 ns | 0.67 | – | 0.00 |

Earlier when discussing collections, it was noted that OrderedDictionary<TKey, TValue> now exposes overloads of methods like TryAdd that return the relevant item's index, which then allows subsequent access to avoid the more costly key-based lookup. As it turns out, JsonObject's indexer needs to do exactly that, first indexing into the dictionary by key, doing some checks, and then indexing again. It's now been updated to use these new overloads. As those lookups typically dominate the cost of using the setter, this can upwards of double the throughput of JsonObject's indexer:

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json.Nodes;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private JsonObject _obj = new();

    [Benchmark]
    public void Set() => _obj["key"] = "value";
}
```
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| Set | .NET 9.0 | 40.56 ns | 1.00 |
| Set | .NET 10.0 | 16.96 ns | 0.42 |

Most of the improvements in System.Text.Json, however, are actually via new APIs. This same "avoid a double lookup" issue shows up in other places, for example wanting to add a property to a JsonObject but only if it doesn't yet exist. With dotnet/runtime#111229 from @Flu, that's addressed with a new TryAdd method (as well as a TryAdd overload and an overload of the existing TryGetPropertyValue that, as with OrderedDictionary<>, return the index of the property).

```csharp
// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json.Nodes;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private JsonObject _obj = new();
    private JsonNode _value = JsonValue.Create("value");

    [Benchmark(Baseline = true)]
    public void NonOverwritingSet_Manual()
    {
        _obj.Remove("key");
        if (!_obj.ContainsKey("key"))
        {
            _obj.Add("key", _value);
        }
    }

    [Benchmark]
    public void NonOverwritingSet_TryAdd()
    {
        _obj.Remove("key");
        _obj.TryAdd("key", _value);
    }
}
```
| Method | Mean | Ratio |
|---|---|---|
| NonOverwritingSet_Manual | 16.59 ns | 1.00 |
| NonOverwritingSet_TryAdd | 14.31 ns | 0.86 |

dotnet/runtime#109472 from @karakasa also imbues JsonArray with new RemoveAll and RemoveRange methods. In addition to the usability benefits these can provide, they have the same performance benefits they have on List<T> (which is not a coincidence, given that JsonArray is, as an implementation detail, a wrapper for a List<JsonNode?>). Removing "incorrectly" from a List<T> can end up being an O(N^2) endeavor, e.g. when I run this:

```csharp
// dotnet run -c Release -f net10.0
using System.Diagnostics;

for (int i = 100_000; i < 700_000; i += 100_000)
{
    List<int> items = Enumerable.Range(0, i).ToList();

    Stopwatch sw = Stopwatch.StartNew();
    while (items.Count > 0)
    {
        items.RemoveAt(0); // uh oh
    }

    Console.WriteLine($"{i} => {sw.Elapsed}");
}
```

I get output like this:

```
100000 => 00:00:00.2271798
200000 => 00:00:00.8328727
300000 => 00:00:01.9820088
400000 => 00:00:03.9242008
500000 => 00:00:06.9549009
600000 => 00:00:11.1104903
```

Note how, as the list length grows linearly, the elapsed time grows non-linearly. That's primarily because each RemoveAt(0) requires the entire remainder of the list to shift down, which is O(N) in the length of the list. That means we get N + (N-1) + (N-2) + ... + 1 operations, which is N(N+1)/2, which is O(N^2). Both RemoveRange and RemoveAll are able to avoid those costs by doing the shifting only once per element. Of course, even without such methods, I could have written my previous removal loop in a way that keeps it linear, namely by repeatedly removing the last element rather than the first (and, of course, if I really intended on removing everything, I could have just used Clear). Typical use, however, ends up removing a smattering of elements, and being able to just delegate and not worry about accidentally incurring a non-linear overhead is helpful.
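For reference, that linear variant of the loop looks like this (a minimal sketch):

```csharp
// Removing from the end never shifts any elements, so the whole loop is O(N):
for (int i = items.Count - 1; i >= 0; i--)
{
    items.RemoveAt(i);
}
```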

```csharp
// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json.Nodes;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private JsonArray _arr;

    [IterationSetup]
    public void Setup() =>
        _arr = new JsonArray(Enumerable.Range(0, 100_000).Select(i => (JsonNode)i).ToArray());

    [Benchmark]
    public void Manual()
    {
        int i = 0;
        while (i < _arr.Count)
        {
            if (_arr[i]!.GetValue<int>() % 2 == 0)
            {
                _arr.RemoveAt(i);
            }
            else
            {
                i++;
            }
        }
    }

    [Benchmark]
    public void RemoveAll() => _arr.RemoveAll(static n => n!.GetValue<int>() % 2 == 0);
}
```
| Method | Mean | Allocated |
|---|---|---|
| Manual | 355.230 ms | – |
| RemoveAll | 2.022 ms | 24 B |

(Note that while RemoveAll in this micro-benchmark is more than 150x faster, it does have that small allocation that the manual implementation doesn't. That's due to a closure in the implementation while delegating to List<T>.RemoveAll. This could be avoided in the future if necessary.)

Another frequently-requested new method is from dotnet/runtime#116363, which adds new Parse methods to JsonElement. If a developer wants a JsonElement and only needs it temporarily, the most efficient mechanism available today is still the right answer: Parse a JsonDocument, use its RootElement, and then, only when done with the JsonElement, dispose of the JsonDocument, e.g.

```csharp
using (JsonDocument doc = JsonDocument.Parse(json))
{
    DoSomething(doc.RootElement);
}
```

That, however, is really only viable when the JsonElement is used in a scoped manner. If a developer needs to hand out the JsonElement, they're left with three options:

  1. Parse into a JsonDocument, clone its RootElement, dispose of the JsonDocument, and hand out the clone. While using JsonDocument is good for the temporary case, making a clone like this entails a fair bit of overhead:

```csharp
JsonElement clone;
using (JsonDocument doc = JsonDocument.Parse(json))
{
    clone = doc.RootElement.Clone();
}
return clone;
```

  2. Parse into a JsonDocument and just hand out its RootElement. Please do not do this! JsonDocument.Parse creates a JsonDocument that's backed by an array from the ArrayPool<>. If you don't Dispose of the JsonDocument in this case, an array will be rented and then never returned to the pool. That's not the end of the world; if someone else requests an array from the pool and the pool doesn't have one cached to give them, it'll just manufacture one, so eventually the pool's arrays will be replenished. But the arrays in the pool are generally "more valuable" than others, because they've generally been around longer, and are thus more likely to be in higher generations. By using an ArrayPool array rather than a new array for a shorter-lived JsonDocument, you're more likely throwing away an array whose loss will have a larger net impact on the overall system. The impact of that is not easily seen in a micro-benchmark.

```csharp
return JsonDocument.Parse(json).RootElement; // please don't do this
```

  3. Use JsonSerializer to deserialize a JsonElement. This is a simple and reasonable one-liner, but it does invoke the JsonSerializer machinery, which brings in more overhead.

```csharp
return JsonSerializer.Deserialize<JsonElement>(json);
```

Now in .NET 10, there’s a fourth option:

  • Use JsonElement.Parse. This is the right answer. Use this instead of (1), (2), or (3).
```csharp
// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private const string JsonString = """{ "name": "John", "age": 30, "city": "New York" }""";

    [Benchmark]
    public JsonElement WithClone()
    {
        using JsonDocument d = JsonDocument.Parse(JsonString);
        return d.RootElement.Clone();
    }

    [Benchmark]
    public JsonElement WithoutClone() =>
        JsonDocument.Parse(JsonString).RootElement; // please don't do this in production code

    [Benchmark]
    public JsonElement WithDeserialize() =>
        JsonSerializer.Deserialize<JsonElement>(JsonString);

    [Benchmark]
    public JsonElement WithParse() =>
        JsonElement.Parse(JsonString);
}
```
| Method | Mean | Allocated |
|---|---|---|
| WithClone | 303.7 ns | 344 B |
| WithoutClone | 249.6 ns | 312 B |
| WithDeserialize | 397.3 ns | 272 B |
| WithParse | 261.9 ns | 272 B |

With JSON being used as an encoding for many modern protocols, streaming large JSON payloads has become very common. And for most use cases, it's already possible to stream JSON well with System.Text.Json. However, in previous releases there wasn't a good way to stream partial string properties; string properties had to have their values written in one operation. If you've got small strings, that's fine. If you've got really, really large strings, and those strings are lazily produced in chunks, however, you ideally want the ability to write those chunks of the property as you have them, rather than needing to buffer up the value in its entirety. dotnet/runtime#101356 augmented Utf8JsonWriter with a WriteStringValueSegment method, which enables such partial writes. That addresses the majority case; however, there's a very common case where additional encoding of the value is desirable, and an API that automatically handles that encoding helps to be both efficient and easy.

These modern protocols often transmit large blobs of binary data within the JSON payloads. Typically, these blobs end up being Base64 strings as properties on some JSON object. Today, outputting such blobs requires Base64-encoding the whole input and then writing the resulting bytes or chars in their entirety into the Utf8JsonWriter. To address that, dotnet/runtime#111041 adds a WriteBase64StringSegment method to Utf8JsonWriter. For those sufficiently motivated to reduce memory overheads, and to enable the streaming of such payloads, WriteBase64StringSegment enables passing in a span of bytes, which the implementation will Base64-encode and write to the JSON property; it can be called multiple times with isFinalSegment=false, such that the writer will continue appending the resulting Base64 data to the property, until it's called with a final segment that ends the property. (Utf8JsonWriter has long had a WriteBase64String method; this new WriteBase64StringSegment simply enables it to be written in pieces.) The primary benefit of such a method is reduced latency and working set, as the entirety of the data payload needn't be buffered before being written out, but we can still come up with a throughput benchmark that shows benefits:

```csharp
// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;
using System.Text.Json;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Utf8JsonWriter _writer = new(Stream.Null);
    private Stream _source = new MemoryStream(Enumerable.Range(0, 10_000_000).Select(i => (byte)i).ToArray());

    [Benchmark]
    public async Task Buffered()
    {
        _source.Position = 0;
        _writer.Reset();

        byte[] buffer = ArrayPool<byte>.Shared.Rent(0x1000);
        int totalBytes = 0;
        int read;
        while ((read = await _source.ReadAsync(buffer.AsMemory(totalBytes))) > 0)
        {
            totalBytes += read;
            if (totalBytes == buffer.Length)
            {
                byte[] newBuffer = ArrayPool<byte>.Shared.Rent(buffer.Length * 2);
                Array.Copy(buffer, newBuffer, totalBytes);
                ArrayPool<byte>.Shared.Return(buffer);
                buffer = newBuffer;
            }
        }

        _writer.WriteStartObject();
        _writer.WriteBase64String("data", buffer.AsSpan(0, totalBytes));
        _writer.WriteEndObject();
        await _writer.FlushAsync();

        ArrayPool<byte>.Shared.Return(buffer);
    }

    [Benchmark]
    public async Task Streaming()
    {
        _source.Position = 0;
        _writer.Reset();

        byte[] buffer = ArrayPool<byte>.Shared.Rent(0x1000);

        _writer.WriteStartObject();
        _writer.WritePropertyName("data");
        int read;
        while ((read = await _source.ReadAsync(buffer)) > 0)
        {
            _writer.WriteBase64StringSegment(buffer.AsSpan(0, read), isFinalSegment: false);
        }
        _writer.WriteBase64StringSegment(default, isFinalSegment: true);
        _writer.WriteEndObject();
        await _writer.FlushAsync();

        ArrayPool<byte>.Shared.Return(buffer);
    }
}
```
| Method | Mean |
|---|---|
| Buffered | 3.925 ms |
| Streaming | 1.555 ms |

.NET 9 saw the introduction of the JsonMarshal class and the GetRawUtf8Value method, which provides raw access to the underlying bytes of property values fronted by a JsonElement. For situations where the name of the property is also needed, dotnet/runtime#107784 from @mwadams provides a corresponding JsonMarshal.GetRawUtf8PropertyName method.
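Used together, the pair enables processing a document's raw UTF-8 bytes without materializing intermediate strings. Here's a minimal sketch (assuming the .NET 10 surface area; JsonMarshal lives in the System.Runtime.InteropServices namespace, and the exact GetRawUtf8PropertyName signature is my reading of the PR):

```csharp
using System.Runtime.InteropServices;
using System.Text.Json;

using JsonDocument doc = JsonDocument.Parse("""{ "name": "Frederic Tudor" }""");
foreach (JsonProperty property in doc.RootElement.EnumerateObject())
{
    // Raw UTF-8 views over the document's buffer; no string allocations.
    ReadOnlySpan<byte> nameUtf8 = JsonMarshal.GetRawUtf8PropertyName(property);
    ReadOnlySpan<byte> valueUtf8 = JsonMarshal.GetRawUtf8Value(property.Value);
}
```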

Diagnostics

Over the years, I've seen a fair number of codebases introduce a struct-based ValueStopwatch; I think there are even a few still floating around the Microsoft.Extensions libraries. The premise behind these is that System.Diagnostics.Stopwatch is a class, but it simply wraps a long (a timestamp), so rather than writing code like the following that allocates:

```csharp
Stopwatch sw = Stopwatch.StartNew();
... // something being measured
sw.Stop();
TimeSpan elapsed = sw.Elapsed;
```

you could write:

```csharp
ValueStopwatch sw = ValueStopwatch.StartNew();
... // something being measured
sw.Stop();
TimeSpan elapsed = sw.Elapsed;
```

and avoid the allocation. Stopwatch subsequently gained helpers that make such a ValueStopwatch less appealing, since as of .NET 7, I can write it instead like this:

```csharp
long start = Stopwatch.GetTimestamp();
... // something being measured
long end = Stopwatch.GetTimestamp();
TimeSpan elapsed = Stopwatch.GetElapsedTime(start, end);
```

However, that's not quite as natural as the original example that just uses Stopwatch. Wouldn't it be nice if you could write the original example and have it executed as if it were the latter? With all the investments in .NET 9 and .NET 10 around escape analysis and stack allocation, you now can. dotnet/runtime#111834 streamlines the Stopwatch implementation so that StartNew, Elapsed, and Stop are fully inlineable. At that point, the JIT can see that the allocated Stopwatch instance never escapes the frame, and it can be stack allocated.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Diagnostics;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public TimeSpan WithGetTimestamp()
    {
        long start = Stopwatch.GetTimestamp();
        Nop();
        long end = Stopwatch.GetTimestamp();
        return Stopwatch.GetElapsedTime(start, end);
    }

    [Benchmark]
    public TimeSpan WithStartNew()
    {
        Stopwatch sw = Stopwatch.StartNew();
        Nop();
        sw.Stop();
        return sw.Elapsed;
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void Nop() { }
}
```
| Method | Runtime | Mean | Ratio | Code Size | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|
| WithGetTimestamp | .NET 9.0 | 28.95 ns | 1.00 | 148 B | – | NA |
| WithGetTimestamp | .NET 10.0 | 28.32 ns | 0.98 | 130 B | – | NA |
| WithStartNew | .NET 9.0 | 38.62 ns | 1.00 | 341 B | 40 B | 1.00 |
| WithStartNew | .NET 10.0 | 28.21 ns | 0.73 | 130 B | – | 0.00 |

dotnet/runtime#117031 is a nice improvement that helps reduce working set for anyone using an EventSource that has events with really large IDs. For efficiency purposes, EventSource was using an array to map event ID to the data for that ID; lookup needs to be really fast, since it's performed on every event write in order to find the metadata for the event being written. In many EventSources, the developer authors events with a small, contiguous range of IDs, and the array ends up being very dense. But if a developer authors any event with a really large ID (which we've seen happen in multiple real-world projects, due to splitting events into multiple partial class definitions shared between different projects and selecting IDs for each file unlikely to conflict with each other), an array is still created with a length to accommodate that large ID, which can result in a really big allocation that persists for the lifetime of the event source, with much of that allocation just being wasted space. Thankfully, since EventSource was written years ago, the performance of Dictionary<TKey, TValue> has increased significantly, to the point where it's able to efficiently handle the lookups without needing the event IDs to be dense. Note that there should really only ever be one instance of a given EventSource-derived type; the recommended pattern is to store one into a static readonly field and just use that one. So the overheads incurred as part of this are primarily about the impact that single large allocation has on working set for the duration of the process. To make it easier to demonstrate, though, I'm doing something you'd never, ever do, and creating a new instance per event. Don't try this at home, or at least not in production.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Diagnostics;
using System.Diagnostics.Tracing;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private MyListener _listener = new();

    [Benchmark]
    public void Oops()
    {
        using OopsEventSource oops = new();
        oops.Oops();
    }

    [EventSource(Name = "MyTestEventSource")]
    public sealed class OopsEventSource : EventSource
    {
        [Event(12_345_678, Level = EventLevel.Error)]
        public void Oops() => WriteEvent(12_345_678);
    }

    private sealed class MyListener : EventListener
    {
        protected override void OnEventSourceCreated(EventSource eventSource) =>
            EnableEvents(eventSource, EventLevel.Error);
    }
}
```
| Method | Runtime | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|---|
| Oops | .NET 9.0 | 1,876.21 us | 1.00 | 1157428.01 KB | 1.000 |
| Oops | .NET 10.0 | 22.06 us | 0.01 | 19.21 KB | 0.000 |

dotnet/runtime#107333 from @AlgorithmsAreCool reduces thread contention involved in starting and stopping an Activity. ActivitySource maintains a thread-safe list of listeners, which only changes on the rare occasion that a listener is registered or unregistered. Any time an Activity is created or destroyed (which can happen at very high frequency), each listener gets notified, which requires walking through the list of listeners. The previous code used a lock to protect that listeners list, and to avoid notifying the listener while holding the lock, the implementation would take the lock, determine the next listener, release the lock, notify the listener, and rinse and repeat until it had notified all listeners. This could result in significant contention as multiple threads started and stopped Activitys. Now with this PR, the list switches to being an immutable array. Each time the list changes, a new array is created with the modified set of listeners. This makes the act of changing the listeners list much more expensive, but, as noted, that's generally a rarity. And in exchange, notifying listeners becomes much cheaper.
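The shape of that trade-off looks roughly like the following copy-on-write pattern (a minimal sketch for illustration, not the actual ActivitySource implementation):

```csharp
using System;
using System.Threading;

// Copy-on-write listener list: mutation is O(N) and serialized by a lock,
// while notification is lock-free over an immutable snapshot of the array.
internal sealed class ListenerList<T> where T : class
{
    private T[] _listeners = Array.Empty<T>();
    private readonly object _lock = new();

    public void Add(T listener)
    {
        lock (_lock)
        {
            T[] next = new T[_listeners.Length + 1];
            Array.Copy(_listeners, next, _listeners.Length);
            next[^1] = listener;
            Volatile.Write(ref _listeners, next); // publish the new snapshot
        }
    }

    public void NotifyAll(Action<T> notify)
    {
        // Safe without a lock: once published, an array instance never changes.
        foreach (T listener in Volatile.Read(ref _listeners))
        {
            notify(listener);
        }
    }
}
```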

dotnet/runtime#117334 from @petrroll avoids the overheads of callers needing to interact with null loggers by excluding them in LoggerFactory.CreateLoggers, while dotnet/runtime#117342 seals the NullLogger type so that type checks against NullLogger (e.g. if (logger is NullLogger)) can be made more efficient by the JIT. And dotnet/roslyn-analyzers# from @mpidash will help developers realize that their logging operations aren't as cheap as they thought they might be. Consider this code:

```csharp
[LoggerMessage(Level = LogLevel.Information, Message = "This happened: {Value}")]
private static partial void Oops(ILogger logger, string value);

public static void UnexpectedlyExpensive()
{
    Oops(NullLogger.Instance, $"{Guid.NewGuid()} {DateTimeOffset.UtcNow}");
}
```

It's using the logger source generator, which will emit an implementation dedicated to this log method, including a log level check so that it doesn't pay the bulk of the costs associated with logging unless the associated level is enabled:

```csharp
[global::System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.Extensions.Logging.Generators", "6.0.5.2210")]
private static partial void Oops(global::Microsoft.Extensions.Logging.ILogger logger, global::System.String value)
{
    if (logger.IsEnabled(global::Microsoft.Extensions.Logging.LogLevel.Information))
    {
        __OopsCallback(logger, value, null);
    }
}
```

Except the call site is doing non-trivial work, creating a new Guid, fetching the current time, and allocating a string via string interpolation, even though it might all be wasted work if LogLevel.Information isn't enabled. The new CA1873 analyzer flags exactly that.

[Image: Analyzer for expensive logging sites]
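The typical fix is to guard the expensive argument construction behind an explicit IsEnabled check, along these lines (a minimal sketch):

```csharp
// Only pay for the Guid, timestamp, and interpolation when the level is enabled:
if (logger.IsEnabled(LogLevel.Information))
{
    Oops(logger, $"{Guid.NewGuid()} {DateTimeOffset.UtcNow}");
}
```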

Cryptography

A ton of effort went into cryptography in .NET 10, almost entirely focused on post-quantum cryptography (PQC). PQC refers to a class of cryptographic algorithms designed to resist attacks from quantum computers, machines that could one day render classic cryptographic algorithms like Rivest–Shamir–Adleman (RSA) or Elliptic Curve Cryptography (ECC) insecure by efficiently solving problems such as integer factorization and discrete logarithms. With the looming threat of "harvest now, decrypt later" attacks (where a well-funded attacker idly captures encrypted internet traffic, expecting that they'll be able to decrypt and read it later) and the multi-year process required to migrate critical infrastructure, the transition to quantum-safe cryptographic standards has become an urgent priority. In this light, .NET 10 adds support for ML-DSA (a National Institute of Standards and Technology PQC digital signature algorithm), Composite ML-DSA (a draft Internet Engineering Task Force specification for creating signatures that combine ML-DSA with a classical crypto algorithm like RSA), SLH-DSA (another NIST PQC signature algorithm), and ML-KEM (a NIST PQC key encapsulation algorithm). This is an important step towards quantum-resistant security, enabling developers to begin experimenting with and planning for post-quantum identity and authenticity scenarios. While this PQC effort is not about performance, the design of these types is very much focused on more modern sensibilities that have performance as a key motivator. While older types, like those that derive from AsymmetricAlgorithm, are designed around arrays, with support for spans tacked on later, the new types are designed with spans at the center, and with array-based APIs available only for convenience.
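To give a flavor of that span-first shape, here's a hedged sketch of generating an ML-DSA key and signing data with it (the exact member names, like MLDsaAlgorithm.MLDsa65 and SignatureSizeInBytes, are my reading of the new surface area and may differ; PQC support also requires a suitably recent platform crypto library, such as OpenSSL 3.5+ on Linux):

```csharp
using System.Security.Cryptography;
using System.Text;

// Generate a fresh ML-DSA-65 key, then sign and verify some data.
using MLDsa key = MLDsa.GenerateKey(MLDsaAlgorithm.MLDsa65);

byte[] data = Encoding.UTF8.GetBytes("harvest now, decrypt later");
byte[] signature = new byte[key.Algorithm.SignatureSizeInBytes];

key.SignData(data, signature);                // span-based core: writes into the destination
bool valid = key.VerifyData(data, signature); // => true
```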

There are, however, some cryptography-related changes in .NET 10 that are focused squarely on performance. One is around improving OpenSSL "digest" performance. .NET's cryptography stack is built on top of the underlying platform's native cryptographic libraries; on Linux, that means using OpenSSL, making it a hot path for common operations like hashing, signing, and TLS. "Digest algorithms" are the family of cryptographic hash functions (for example, SHA-256, SHA-512, SHA-3) that turn arbitrary input into fixed-size fingerprints; they're used all over the place, from verifying packages to TLS handshakes to content de-duplication. While .NET can use OpenSSL 1.x if that's what's offered by the OS, since .NET 6 it's been focusing more and more on optimizing for and lighting up with OpenSSL 3 (the previously-discussed PQC support requires OpenSSL 3.5 or later). With OpenSSL 1.x, OpenSSL exposed getter functions like EVP_sha256(), which were cheap functions that just returned a direct pointer to the EVP_MD for the relevant hash implementation. OpenSSL 3.x introduced a provider model, with a fetch function (EVP_MD_fetch) for retrieving the provider-backed implementation. To keep source compatibility, the 1.x-era getter functions were changed to return pointers to compatibility shims: when you pass one of these legacy EVP_MD pointers into operations like EVP_DigestInit_ex, OpenSSL performs an "implicit fetch" under the covers to resolve the actual implementation. That implicit fetch path adds extra work on each use. Instead, OpenSSL recommends consumers do an explicit fetch and then cache the result for reuse. That's what dotnet/runtime#118613 does. The result is leaner and faster cryptographic hash operations on OpenSSL-based platforms.

```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Security.Cryptography;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _src = new byte[1024];
    private byte[] _dst = new byte[SHA256.HashSizeInBytes];

    [GlobalSetup]
    public void Setup() => new Random(42).NextBytes(_src);

    [Benchmark]
    public void Hash() => SHA256.HashData(_src, _dst);
}
```
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| Hash | .NET 9.0 | 1,206.8 ns | 1.00 |
| Hash | .NET 10.0 | 960.6 ns | 0.80 |

A few other performance niceties have also found their way in.

  • AsnWriter.Encode. dotnet/runtime#106728 and dotnet/runtime#112638 add, and then use throughout the crypto stack, a callback-based mechanism on AsnWriter that enables encoding without forced allocation for the temporary encoded state.
  • SafeHandle singleton. dotnet/runtime#109391 employs a singleton SafeHandle in more places in X509Certificate to avoid temporary handle allocation.
  • Span-based ProtectedData. dotnet/runtime#109529 from @ChadNedzlek adds Span<byte>-based overloads to the ProtectedData class that enable protecting data without requiring the source or destinations to be in allocated arrays.
  • PemEncoding UTF-8. dotnet/runtime#109438 adds UTF-8 support to PemEncoding. PemEncoding, a utility class for parsing and formatting PEM (Privacy-Enhanced Mail)-encoded data such as that used in certificates and keys, previously worked only with chars. As was then done in dotnet/runtime#109564, this change makes it possible to parse UTF8 data directly without first needing to transcode to UTF16.
  • FindByThumbprint. dotnet/runtime#109130 adds an X509Certificate2Collection.FindByThumbprint method. The implementation uses a stack-based buffer for the thumbprint value for each candidate certificate, eliminating the arrays that would otherwise be created in a naive manual implementation. dotnet/runtime#113606 then utilized this in SslStream.
  • SetKey. dotnet/runtime#113146 adds a span-based SymmetricAlgorithm.SetKey method, which can then be used to avoid creating unnecessary arrays (see the sketch just after this list).
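A minimal sketch of that last item, assuming the .NET 10 SetKey overload:

```csharp
using System.Security.Cryptography;

using Aes aes = Aes.Create();

// The key never needs to live in a heap-allocated byte[]:
Span<byte> key = stackalloc byte[32]; // 256-bit key
RandomNumberGenerator.Fill(key);
aes.SetKey(key); // previously, assigning aes.Key required an array

CryptographicOperations.ZeroMemory(key); // wipe the stack copy when done
```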

Peanut Butter

As in every .NET release, there are a large number of PRs that help with performance in some fashion. The more of these that are addressed, the more the overall overhead for applications and services is lowered. Here is a smattering from this release:

  • GC. DATAS (Dynamic Adaptation To Application Sizes) was introduced in .NET 8 and enabled by default in .NET 9. Now in .NET 10, dotnet/runtime#105545 tuned DATAS to improve its overall behavior, cutting unnecessary work, smoothing out pauses (especially under high allocation rates), correcting fragmentation accounting that could cause extra short collections (gen1), and other such tweaks. The net result is fewer unnecessary collections, steadier throughput, and more predictable latency for allocation-heavy workloads. dotnet/runtime#118762 also adds several knobs for configuring how DATAS behaves, and in particular settings to fine-tune how Gen0 grows.
  • GCHandle. The GC supports various types of "handles" that allow for explicit management of resources in relation to GC operation. For example, you can create a "pinning handle," which ensures that the GC will not move the object in question. Historically, these handles were surfaced to developers via the GCHandle type, but it has a variety of issues, including that it's really easy to misuse due to a lack of strong typing. To help address that, dotnet/runtime#111307 introduces a few new strongly-typed flavors of handles, with GCHandle<T>, PinnedGCHandle<T>, and WeakGCHandle<T>. These should not only address some of the usability issues, they're also able to shave off a bit of the overhead incurred by the old design.
```csharp
// dotnet run -c Release -f net10.0 --filter "*"
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.InteropServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _array = new byte[16];

    [Benchmark(Baseline = true)]
    public void Old() => GCHandle.Alloc(_array, GCHandleType.Pinned).Free();

    [Benchmark]
    public void New() => new PinnedGCHandle<byte[]>(_array).Dispose();
}
```
| Method | Mean | Ratio |
|---|---|---|
| Old | 27.80 ns | 1.00 |
| New | 22.73 ns | 0.82 |
  • Mono interpreter. The Mono interpreter gained optimized support for several opcodes, including switches (dotnet/runtime#107423), new arrays (dotnet/runtime#107430), and memory barriers (dotnet/runtime#107325). But arguably more impactful was a series of more than a dozen PRs that enabled the interpreter to vectorize more operations with WebAssembly (Wasm). This included contributions like dotnet/runtime#114669, which enabled vectorization of shift operations, and dotnet/runtime#113743, which enabled vectorization of a plethora of operations like Abs, Divide, and Truncate. Other PRs used the Wasm-specific intrinsic APIs in more places, in order to accelerate on Wasm routines that were already accelerated on other architectures using architecture-specific intrinsics; e.g. dotnet/runtime#115062 used PackedSimd in the workhorse methods behind the conversion routines on Convert, like Convert.FromBase64String.
  • FCALLs. There are many places in the lower layers of System.Private.CoreLib where managed code needs to call into native code in the runtime. There are two primary ways this transition from managed to native has happened, historically. One method is through what's called a "QCALL", essentially just a DllImport (P/Invoke) into native functions exposed by the runtime. The other, which historically was the dominant mechanism, is an "FCALL," a more complex and specialized pathway that allows direct access to managed objects from native code. FCALLs were once the standard, but over time, more of them were converted to QCALLs. This shift improves reliability (since FCALLs are notoriously tricky to implement correctly) and can also boost performance, as FCALLs require helper method frames, which QCALLs can often avoid. A ton of PRs in .NET 10 went into removing FCALLs, like dotnet/runtime#107218 for helper method frames in Exception, GC, and Thread, dotnet/runtime#106497 for helper method frames in object, dotnet/runtime#107152 for those used in connecting to profilers, dotnet/runtime#108415 and dotnet/runtime#108535 for ones in reflection, and over a dozen others. In the end, all FCALLs that touched managed memory or threw exceptions were removed.
  • Converting hex. Recent .NET releases added methods to Convert like FromHexString and TryToHexStringLower, but such methods all used UTF16. dotnet/runtime#117965 adds overloads of these that work with UTF8 bytes.
  • Formatting. String interpolation is backed by "interpolated string handlers." When you interpolate with a string target type, by default you get the DefaultInterpolatedStringHandler that comes from System.Runtime.CompilerServices. That implementation is able to use stack-allocated memory and the ArrayPool<> for reduced allocation overheads as it's buffering up the text formatted to it. Though a more advanced use, other code, including other interpolated string handlers, can use DefaultInterpolatedStringHandler as an implementation detail. However, when doing so, such code could only get access to the final output as a string; the underlying buffer wasn't exposed. dotnet/runtime#112171 adds a Text property to DefaultInterpolatedStringHandler for code that wants access to the already formatted text as a ReadOnlySpan<char>.
  • Enumeration-related allocations. dotnet/runtime#118288 removes a handful of allocations related to enumeration, for example removing a string.Split call in EnumConverter and replacing it with a MemoryExtensions.Split call that doesn't need to allocate either the string[] or the individual string instances.
  • NRBF decoding. dotnet/runtime#107797 from @teo-tsirpanis removes an array allocation used in a decimal constructor call, replacing it instead with a collection expression targeting a span, which will result in the state being stack allocated.
  • TypeConverter allocations. dotnet/runtime#111349 from @AlexRadch reduces some parsing overheads in the TypeConverters for Size, SizeF, Point, and Rectangle by using more modern APIs and constructs, such as the span-based Split method and string interpolation.
  • Generic math conversions. Most of the TryConvertXx methods in the various primitives' implementations of the generic math interfaces are marked as MethodImplOptions.AggressiveInlining, to help the JIT realize they should always be inlined, but a few stragglers were left out. dotnet/runtime#112061 from @hez2010 fixes that.
  • ThrowIfNull. C# 14 now supports the ability to write static extension methods. This is a huge boon for libraries that need to support downlevel targeting, as it means static methods can be polyfilled just as instance methods can be (see the sketch after this list). There are many libraries in .NET that build not only for the latest runtimes but also for .NET Standard 2.0 and .NET Framework, and those libraries have been unable to use helper static methods like ArgumentNullException.ThrowIfNull, which can help to streamline call sites and make methods more inlineable (in addition, of course, to tidying up the code). Now that the dotnet/runtime repo builds with a C# 14 compiler, dotnet/runtime#114644 replaced ~2500 call sites in such libraries with use of a ThrowIfNull polyfill.
  • FileProvider change tokens. dotnet/runtime#116175 reduces allocation in PollingWildCardChangeToken by using allocation-free mechanisms for computing hashes, while dotnet/runtime#115684 from @rameel reduces allocation in CompositeFileProvider by avoiding taking up space for nop NullChangeTokens.
  • String interpolation. dotnet/runtime#114497 removes a variety of null checks when dealing with nullable inputs, shaving off some of the overhead of the interpolation operation.
```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _value = " ";

    [Benchmark]
    public string Interpolate() => $"{_value} {_value} {_value} {_value}";
}
```
| Method | Runtime | Mean | Ratio |
|---|---|---|---|
| Interpolate | .NET 9.0 | 34.21 ns | 1.00 |
| Interpolate | .NET 10.0 | 29.47 ns | 0.86 |
  • AssemblyQualifiedName. Type.AssemblyQualifiedName previously recomputed the result on every access. As of dotnet/runtime#118389, it's now cached.
```csharp
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public string AQN() => typeof(Dictionary<int, string>).AssemblyQualifiedName!;
}
```
| Method | Runtime | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|---|
| AQN | .NET 9.0 | 132.345 ns | 1.00 | 7712 B | 1.00 |
| AQN | .NET 10.0 | 1.218 ns | 0.009 | – | 0.00 |
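And, as promised above, here's a hedged sketch of the kind of C# 14 static extension polyfill the ThrowIfNull bullet describes (the type and member names here are illustrative, not the actual dotnet/runtime polyfill; downlevel targets also need CallerArgumentExpressionAttribute available, e.g. via a polyfill):

```csharp
using System.Runtime.CompilerServices;

#if !NET // only needed on targets without the built-in helper, e.g. netstandard2.0
internal static class ArgumentNullExceptionPolyfill
{
    // C# 14 extension block with no receiver instance: static members declared
    // here appear as static members on ArgumentNullException itself.
    extension(ArgumentNullException)
    {
        public static void ThrowIfNull(object? argument,
            [CallerArgumentExpression(nameof(argument))] string? paramName = null)
        {
            if (argument is null)
            {
                throw new ArgumentNullException(paramName);
            }
        }
    }
}
#endif
```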

What’s Next?

Whew! After all of that, I hope you’re as excited as I am about .NET 10, and more generally, about the future of .NET.

As you’ve seen in this tour (and in those for previous releases), the story of .NET performance is one of relentless iteration, systemic thinking, and the compounding effect of many targeted improvements. While I’ve highlighted micro-benchmarks to show specific gains, the real story isn’t about these benchmarks… it’s about making real-world applications more responsive, more scalable, more sustainable, more economical, and ultimately, more enjoyable to build and use. Whether you’re shipping high-throughput services, interactive desktop apps, or resource-constrained mobile experiences, .NET 10 offers tangible performance benefits to you and your users.

The best way to appreciate these improvements is to try .NET 10 RC1 yourself. Download it, run your workloads, measure the impact, and share your experiences. See awesome gains? Find a regression that needs fixing? Spot an opportunity for further improvement? Shout it out, open an issue, even send a PR. Every bit of feedback helps make .NET better, and we look forward to continuing to build with you.

Happy coding!

Author

Stephen Toub - MSFT
Partner Software Engineer

Stephen Toub is a developer on the .NET team at Microsoft.
