
What's new in .NET libraries for .NET 10


This article describes new features in the .NET libraries for .NET 10. It's been updated for Preview 6.

Cryptography

Find certificates by thumbprints other than SHA-1

Finding certificates uniquely by thumbprint is a fairly common operation, but the X509Certificate2Collection.Find(X509FindType, Object, Boolean) method (for the FindByThumbprint mode) only searches for the SHA-1 thumbprint value.

There's some risk in using the Find method to search for SHA-2-256 ("SHA256") and SHA-3-256 thumbprints, since those hash algorithms produce values of the same length.

Instead, .NET 10 introduces a new method that accepts the name of the hash algorithm to use for matching.

X509Certificate2Collection coll = store.Certificates.FindByThumbprint(HashAlgorithmName.SHA256, thumbprint);
Debug.Assert(coll.Count < 2, "Collection has too many matches, has SHA-2 been broken?");
return coll.SingleOrDefault();

Find PEM-encoded data in ASCII/UTF-8

The PEM encoding (originally Privacy Enhanced Mail, but now used widely outside of email) is defined for "text", which means that the PemEncoding class was designed to run on String and ReadOnlySpan<char>. However, it's common (especially on Linux) to have something like a certificate written in a file that uses an ASCII (or UTF-8) encoding. Historically, that meant you needed to open the file and convert the bytes to chars (or a string) before you could use PemEncoding.

The new PemEncoding.FindUtf8(ReadOnlySpan<Byte>) method takes advantage of the fact that PEM is only defined for 7-bit ASCII characters, and that 7-bit ASCII has a perfect overlap with single-byte UTF-8 values. By calling this new method, you can skip the UTF-8/ASCII-to-char conversion and read the file directly.

byte[] fileContents = File.ReadAllBytes(path);
-string text = Encoding.ASCII.GetString(fileContents);
-PemFields pemFields = PemEncoding.Find(text);
+PemFields pemFields = PemEncoding.FindUtf8(fileContents);

-byte[] contents = Base64.DecodeFromChars(text.AsSpan()[pemFields.Base64Data]);
+byte[] contents = Base64.DecodeFromUtf8(fileContents.AsSpan()[pemFields.Base64Data]);

Encryption algorithm for PKCS#12/PFX export

The new ExportPkcs12 methods on X509Certificate allow callers to choose what encryption and digest algorithms are used to produce the output:

  • Pkcs12ExportPbeParameters.Pkcs12TripleDesSha1 indicates the Windows XP-era de facto standard. It produces an output supported by almost every library and platform that supports reading PKCS#12/PFX by choosing an older encryption algorithm.
  • Pkcs12ExportPbeParameters.Pbes2Aes256Sha256 indicates that AES should be used instead of 3DES (and SHA-2-256 instead of SHA-1), but the output might not be understood by all readers (such as Windows XP).

If you want even more control, you can use the overload that accepts a PbeParameters.
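
For example, re-encrypting an existing PFX with the AES-based parameters might look like the following minimal sketch (the file names and passwords are placeholders, and the overload is assumed to take a Pkcs12ExportPbeParameters value followed by the password):

using System.IO;
using System.Security.Cryptography.X509Certificates;

using X509Certificate2 cert = X509CertificateLoader.LoadPkcs12FromFile("input.pfx", "old-password");

// Re-export the certificate (and its private key) protected with AES-256 and SHA-2-256.
byte[] pfxBytes = cert.ExportPkcs12(Pkcs12ExportPbeParameters.Pbes2Aes256Sha256, "new-password");
File.WriteAllBytes("reencrypted.pfx", pfxBytes);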

Post-quantum cryptography (PQC)

.NET 10 includes support for three new asymmetric algorithms: ML-KEM (FIPS 203), ML-DSA (FIPS 204), and SLH-DSA (FIPS 205). The new types are:

  • System.Security.Cryptography.MLKem
  • System.Security.Cryptography.MLDsa
  • System.Security.Cryptography.SlhDsa

Because it adds little benefit, these new types don't derive from AsymmetricAlgorithm. Rather than the AsymmetricAlgorithm approach of creating an object and then importing a key into it, or generating a fresh key, the new types all use static methods to generate or import a key:

using System;
using System.IO;
using System.Security.Cryptography;

private static bool ValidateMLDsaSignature(ReadOnlySpan<byte> data, ReadOnlySpan<byte> signature, string publicKeyPath)
{
    string publicKeyPem = File.ReadAllText(publicKeyPath);

    using (MLDsa key = MLDsa.ImportFromPem(publicKeyPem))
    {
        return key.VerifyData(data, signature);
    }
}

And rather than setting object properties and having a key materialize, key generation on these new types takes in all of the options it needs.

using (MLKem key = MLKem.GenerateKey(MLKemAlgorithm.MLKem768))
{
    string publicKeyPem = key.ExportSubjectPublicKeyInfoPem();
    ...
}

These algorithms all continue with the pattern of having a static IsSupported property to indicate if the algorithm is supported on the current system.
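
For example, a simple guard lets code fall back gracefully on systems without PQC support:

using System;
using System.Security.Cryptography;

if (MLKem.IsSupported)
{
    // Generate an ML-KEM-768 key only when the platform's crypto libraries support it.
    using MLKem key = MLKem.GenerateKey(MLKemAlgorithm.MLKem768);
    Console.WriteLine("ML-KEM is available on this system.");
}
else
{
    Console.WriteLine("ML-KEM isn't available; fall back to a classical key-exchange algorithm.");
}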

.NET 10 includes Windows Cryptography API: Next Generation (CNG) support for Post-Quantum Cryptography (PQC), making these algorithms available on Windows systems with PQC support. For example:

using System;
using System.IO;
using System.Security.Cryptography;

private static bool ValidateMLDsaSignature(ReadOnlySpan<byte> data, ReadOnlySpan<byte> signature, string publicKeyPath)
{
    string publicKeyPem = File.ReadAllText(publicKeyPath);
    using MLDsa key = MLDsa.ImportFromPem(publicKeyPem);
    return key.VerifyData(data, signature);
}

The PQC algorithms are available on systems where the system cryptographic libraries are OpenSSL 3.5 (or newer) or Windows CNG with PQC support. Also, the new classes are all marked as [Experimental] under diagnostic SYSLIB5006 until development is complete.

Globalization and date/time

New method overloads in ISOWeek for DateOnly type

The ISOWeek class was originally designed to work exclusively with DateTime, as it was introduced before the DateOnly type existed. Now that DateOnly is available, it makes sense for ISOWeek to support it as well. The following overloads are new:
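
As a usage sketch, assuming the new DateOnly overloads mirror the existing DateTime-based GetWeekOfYear and GetYear methods:

using System;
using System.Globalization;

DateOnly date = new DateOnly(2026, 1, 1);

// Assumed DateOnly counterparts to ISOWeek.GetWeekOfYear(DateTime) and ISOWeek.GetYear(DateTime).
int week = ISOWeek.GetWeekOfYear(date);
int isoYear = ISOWeek.GetYear(date);

Console.WriteLine($"{date} falls in ISO week {week} of ISO year {isoYear}.");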

Numeric ordering for string comparison

Numeric string comparison is a highly requested feature for comparing strings numerically instead of lexicographically. For example, 2 is less than 10, so "2" should appear before "10" when ordered numerically. Similarly, "2" and "02" are equal numerically. With the new NumericOrdering option, it's now possible to do these types of comparisons:

StringComparer numericStringComparer = StringComparer.Create(CultureInfo.CurrentCulture, CompareOptions.NumericOrdering);

Console.WriteLine(numericStringComparer.Equals("02", "2"));
// Output: True

foreach (string os in new[] { "Windows 8", "Windows 10", "Windows 11" }.Order(numericStringComparer))
{
    Console.WriteLine(os);
}
// Output:
// Windows 8
// Windows 10
// Windows 11

HashSet<string> set = new HashSet<string>(numericStringComparer) { "007" };
Console.WriteLine(set.Contains("7"));
// Output: True

This option isn't valid for the following index-based string operations: IndexOf, LastIndexOf, StartsWith, EndsWith, IsPrefix, and IsSuffix.

New TimeSpan.FromMilliseconds overload with a single parameter

The TimeSpan.FromMilliseconds(Int64, Int64) method was previously introduced without an overload that takes a single parameter.

Although calling it with a single argument works because the second parameter is optional, such a call causes a compilation error when used in a LINQ expression like:

Expression<Action> a = () => TimeSpan.FromMilliseconds(1000);

The issue arises because LINQ expressions can't handle optional parameters. To address this, .NET 10 introduces a new overload that takes a single parameter. It also modifies the existing method to make the second parameter mandatory.
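
With the single-parameter overload in place, the same expression compiles; a minimal sketch:

using System;
using System.Linq.Expressions;

// Binds to the new single-parameter overload, so the expression tree no longer
// contains a call that relies on an optional argument.
Expression<Func<TimeSpan>> e = () => TimeSpan.FromMilliseconds(1000);
Console.WriteLine(e.Compile()()); // 00:00:01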

Strings

String normalization APIs to work with span of characters

Unicode string normalization has been supported for a long time, but the existing APIs only worked with the string type. That meant callers with data stored in other forms, such as character arrays or spans, had to allocate a new string to use these APIs. Additionally, APIs that return a normalized string always allocate a new string to represent the normalized output.

.NET 10 introduces new APIs that work with spans of characters, which expand normalization beyond string types and help to avoid unnecessary allocations:
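
As a hypothetical sketch, assuming the new span-based methods mirror the existing string-based StringNormalizationExtensions APIs (the span-based IsNormalized and TryNormalize names shown here are assumptions):

using System;
using System.Text;

ReadOnlySpan<char> name = "cafe\u0301"; // "café" in decomposed (NFD) form.

// Assumed span-based counterparts to the string-based normalization APIs.
if (!name.IsNormalized(NormalizationForm.FormC))
{
    Span<char> buffer = stackalloc char[name.Length];
    if (name.TryNormalize(buffer, out int written, NormalizationForm.FormC))
    {
        Console.WriteLine(buffer.Slice(0, written).ToString()); // café
    }
}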

Collections

Additional TryAdd and TryGetValue overloads for OrderedDictionary<TKey, TValue>

OrderedDictionary<TKey, TValue> provides TryAdd and TryGetValue for addition and retrieval, like any other IDictionary<TKey, TValue> implementation. However, there are scenarios where you might want to perform more operations, so new overloads are added that return an index to the entry:

This index can then be used with GetAt and SetAt for fast access to the entry. An example usage of the new TryAdd overload is to add or update a key-value pair in the ordered dictionary:

// Try to add a new key with value 1.
if (!orderedDictionary.TryAdd(key, 1, out int index))
{
    // Key was present, so increment the existing value instead.
    int value = orderedDictionary.GetAt(index).Value;
    orderedDictionary.SetAt(index, value + 1);
}

This new API is already used in JsonObject and improves the performance of updating properties by 10-20%.

Serialization

Allow specifying ReferenceHandler in JsonSourceGenerationOptions

When you use source generators for JSON serialization, the generated context throws when cycles are serialized or deserialized. Now you can customize this behavior by specifying the ReferenceHandler in the JsonSourceGenerationOptionsAttribute. Here's an example using JsonKnownReferenceHandler.Preserve:

public static void MakeSelfRef()
{
    SelfReference selfRef = new SelfReference();
    selfRef.Me = selfRef;

    Console.WriteLine(JsonSerializer.Serialize(selfRef, ContextWithPreserveReference.Default.SelfReference));
    // Output: {"$id":"1","Me":{"$ref":"1"}}
}

[JsonSourceGenerationOptions(ReferenceHandler = JsonKnownReferenceHandler.Preserve)]
[JsonSerializable(typeof(SelfReference))]
internal partial class ContextWithPreserveReference : JsonSerializerContext
{
}

internal class SelfReference
{
    public SelfReference Me { get; set; } = null!;
}

Option to disallow duplicate JSON properties

The JSON specification doesn't say how to handle duplicate properties when deserializing a JSON payload, which can lead to unexpected results and security vulnerabilities. .NET 10 introduces the JsonSerializerOptions.AllowDuplicateProperties option to disallow duplicate JSON properties:

string json = """{ "Value": 1, "Value": -1 }""";

Console.WriteLine(JsonSerializer.Deserialize<MyRecord>(json).Value); // -1

JsonSerializerOptions options = new() { AllowDuplicateProperties = false };

JsonSerializer.Deserialize<MyRecord>(json, options);                 // throws JsonException
JsonSerializer.Deserialize<JsonObject>(json, options);               // throws JsonException
JsonSerializer.Deserialize<Dictionary<string, int>>(json, options);  // throws JsonException

JsonDocumentOptions docOptions = new() { AllowDuplicateProperties = false };
JsonDocument.Parse(json, docOptions);   // throws JsonException

record MyRecord(int Value);

Duplicates are detected by checking if a value is assigned multiple times during deserialization, so it works as expected with other options like case-sensitivity and naming policy.

Strict JSON serialization options

The JSON serializer accepts many options to customize serialization and deserialization, but the defaults might be too relaxed for some applications. .NET 10 adds a new JsonSerializerOptions.Strict preset that follows best practices by including the following options:

These options are read-compatible with JsonSerializerOptions.Default: an object serialized with JsonSerializerOptions.Default can be deserialized with JsonSerializerOptions.Strict.
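
For example, a minimal round-trip sketch (assuming Strict is exposed as a static JsonSerializerOptions instance, like Default):

using System;
using System.Text.Json;

var product = new Product("widget");

// Serialize with the relaxed defaults, then read it back with the stricter preset.
string json = JsonSerializer.Serialize(product, JsonSerializerOptions.Default);
Product roundTripped = JsonSerializer.Deserialize<Product>(json, JsonSerializerOptions.Strict)!;

Console.WriteLine(roundTripped); // Product { Name = widget }

record Product(string Name);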

For more information about JSON serialization, see the System.Text.Json overview.

System.Numerics

More left-handed matrix transformation methods

.NET 10 adds the remaining APIs for creating left-handed transformation matrices for billboard and constrained-billboard matrices. You can use these methods like their existing right-handed counterparts, for example, CreateBillboard(Vector3, Vector3, Vector3, Vector3), when using a left-handed coordinate system instead:
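
A hypothetical sketch, assuming the new methods follow the LeftHanded naming pattern of the existing Matrix4x4 factory methods (for example, CreateBillboardLeftHanded):

using System.Numerics;

Vector3 objectPosition = new(0, 0, 5);
Vector3 cameraPosition = Vector3.Zero;
Vector3 cameraUp = Vector3.UnitY;
Vector3 cameraForward = Vector3.UnitZ; // +Z is "forward" in a left-handed system.

// Assumed name, based on the LeftHanded suffix used by other Matrix4x4 factory methods.
Matrix4x4 billboard = Matrix4x4.CreateBillboardLeftHanded(
    objectPosition, cameraPosition, cameraUp, cameraForward);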

Tensor enhancements

The System.Numerics.Tensors library now includes a nongeneric interface, IReadOnlyTensor, for operations like accessing Lengths and Strides. Slice operations no longer copy data, which improves performance. Additionally, you can access data nongenerically by boxing to object when performance isn't critical.

Options validation

New AOT-safe constructor for ValidationContext

The ValidationContext class, used during options validation, includes a new constructor overload that explicitly accepts the displayName parameter:

ValidationContext(Object, String, IServiceProvider, IDictionary<Object,Object>)

Explicitly providing the display name ensures AOT safety and enables the constructor to be used in native builds without warnings.
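
A minimal sketch of calling the new overload (the options type and display name here are illustrative):

using System.ComponentModel.DataAnnotations;

var options = new MyOptions { Name = "example" };

// Passing the display name explicitly is what keeps this constructor AOT-safe;
// null is passed for the service provider and items dictionary.
var context = new ValidationContext(options, nameof(MyOptions), null, null);

class MyOptions
{
    [Required]
    public string? Name { get; set; }
}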

Diagnostics

Support for telemetry schema URLs in ActivitySource and Meter

ActivitySource and Meter now support specifying a telemetry schema URL during construction, which aligns with OpenTelemetry specifications. The telemetry schema ensures consistency and compatibility for tracing and metrics data. Additionally, .NET 10 introduces ActivitySourceOptions, which simplifies the creation of ActivitySource instances with multiple configuration options (including the telemetry schema URL).

The new APIs are:
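
As a sketch of constructing an ActivitySource through ActivitySourceOptions (the Version and TelemetrySchemaUrl property names are assumptions, and the schema URL is illustrative):

using System.Diagnostics;

// Bundle the source's configuration into a single options object.
var options = new ActivitySourceOptions("MyCompany.MyLibrary")
{
    Version = "1.0.0",
    TelemetrySchemaUrl = "https://opentelemetry.io/schemas/1.21.0"
};

using var source = new ActivitySource(options);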

Out-of-proc trace support for Activity events and links

The Activity class enables distributed tracing by tracking the flow of operations across services or components. .NET supports serializing this tracing data out-of-process via the Microsoft-Diagnostics-DiagnosticSource event source provider. An Activity can include additional metadata such as ActivityLink and ActivityEvent. .NET 10 adds support for serializing these links and events, so out-of-proc trace data now includes that information. For example:

Events->"[(TestEvent1,2025-03-27T23:34:10.6225721+00:00,[E11:EV1,E12:EV2]),(TestEvent2,2025-03-27T23:34:11.6276895+00:00,[E21:EV21,E22:EV22])]"
Links->"[(19b6e8ea216cb2ba36dd5d957e126d9f,98f7abcb3418f217,Recorded,null,false,[alk1:alv1,alk2:alv2]),(2d409549aadfdbdf5d1892584a5f2ab2,4f3526086a350f50,None,null,false)]"

Rate-limit trace-sampling support

When distributed tracing data is serialized out-of-process via the Microsoft-Diagnostics-DiagnosticSource event source provider, all recorded activities can be emitted, or sampling can be applied based on a trace ratio.

A new sampling option called Rate Limiting Sampling restricts the number of root activities serialized per second. This helps control data volume more precisely.

Out-of-proc trace data aggregators can enable and configure this sampling by specifying the option in FilterAndPayloadSpecs. For example, the following setting limits serialization to 100 root activities per second across all ActivitySource instances:

[AS]*/-ParentRateLimitingSampler(100)

ZIP files

ZipArchive performance and memory improvements

.NET 10 improves the performance and memory usage of ZipArchive.

First, the way entries are written to a ZipArchive in Update mode has been optimized. Previously, all ZipArchiveEntry instances were loaded into memory and rewritten, which could lead to high memory usage and performance bottlenecks. The optimization reduces memory usage and improves performance by avoiding the need to load all entries into memory.

Second, the extraction of ZipArchive entries is now parallelized, and internal data structures are optimized for better memory usage. These improvements address issues related to performance bottlenecks and high memory usage, making ZipArchive more efficient and faster, especially when dealing with large archives.

New async ZIP APIs

.NET 10 introduces new asynchronous APIs that make it easier to perform non-blocking operations when reading from or writing to ZIP files. This feature was highly requested by the community.

New async methods are available for extracting, creating, and updating ZIP archives. These methods enable developers to efficiently handle large files and improve application responsiveness, especially in scenarios involving I/O-bound operations. These methods include:

For examples of using these APIs, see the Preview 4 blog post.
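
As a hypothetical sketch, assuming an ExtractToDirectoryAsync counterpart to the existing synchronous ZipFile.ExtractToDirectory:

using System.IO.Compression;
using System.Threading;

// Assumed async counterpart to ZipFile.ExtractToDirectory; the file and directory
// names are placeholders.
await ZipFile.ExtractToDirectoryAsync("data.zip", "extracted", CancellationToken.None);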

Performance improvement in GZipStream for concatenated streams

A community contribution improved the performance of GZipStream when processing concatenated GZip data streams. Previously, each new stream segment disposed and reallocated the internal ZLibStreamHandle, which resulted in additional memory allocations and initialization overhead. With this change, the handle is now reset and reused, reducing both managed and unmanaged memory allocations and improving execution time. The largest impact (~35% faster) is seen when processing a large number of small data streams. This change:

  • Eliminates repeated allocation of ~64-80 bytes of memory per concatenated stream, with additional unmanaged memory savings.
  • Reduces execution time by approximately 400 ns per concatenated stream.