Welcome to .NET 6. Today's release is the result of more than a year of effort by the .NET team and community. C# 10 and F# 6 deliver language improvements that make your code simpler and better. Performance is dramatically improved, and we've seen it reduce the cost of hosting cloud services at Microsoft. .NET 6 is the first release with native support for Apple Silicon (Arm64) and also has improvements for Windows Arm64. We built a new dynamic profile-guided optimization (PGO) system that delivers deep optimizations that are only possible at run time. Cloud diagnostics have been improved with dotnet monitor and OpenTelemetry. WebAssembly support is more capable and performant. New APIs have been added for HTTP/3, processing JSON, math, and directly manipulating memory. .NET 6 will be supported for three years. Developers have already started upgrading applications to .NET 6, and we've heard great early results in production. .NET 6 is ready for your app.

You can download .NET 6 for Linux, macOS, and Windows.

  • Installers and binaries
  • Container images
  • Linux packages
  • Release notes
  • API diff
  • Known issues
  • GitHub issue tracker

See the ASP.NET Core, Entity Framework, Windows Forms, .NET MAUI, YARP, and dotnet monitor posts for what's new in a variety of scenarios.

.NET 6 highlights

.NET 6 is:

  • Production stress-tested with Microsoft services, cloud apps run by other companies, and open source projects.

  • Supported for three years as the latest long-term support (LTS) release.

  • A unified platform across browser, cloud, desktop, IoT, and mobile apps, all using the same .NET libraries and with the ability to share code easily.

  • Greatly improved performance, particularly file I/O, which together results in decreased execution time, latency, and memory use.

  • C# 10 offers language improvements such as record structs, implicit usings, and new lambda capabilities, while the compiler adds incremental source generators. F# 6 adds new features, including task-based async, pipeline debugging, and numerous performance improvements.

  • Visual Basic has improvements to the Visual Studio experience and to the Windows Forms project open experience.

  • Hot Reload enables you to skip rebuilding and restarting your app to view a new change (while your app is running), supported in Visual Studio 2022 and from the .NET CLI, for C# and Visual Basic.

  • Cloud diagnostics have been improved with OpenTelemetry and dotnet monitor, which is now supported in production and available with Azure App Service.

  • JSON APIs are more capable and have higher performance, with a source generator for the serializer.

  • Minimal APIs introduced in ASP.NET Core simplify the getting-started experience and improve the performance of HTTP services.

  • Blazor components can now be rendered from JavaScript and integrated with existing JavaScript-based apps.

  • WebAssembly AOT compilation for Blazor WebAssembly (Wasm) apps, as well as support for runtime relinking and native dependencies.

  • Single-page applications built with ASP.NET Core now use a more flexible schema that works with Angular, React, and other popular front-end JavaScript frameworks.

  • HTTP/3 support has been added so that ASP.NET Core, HttpClient, and gRPC can all interact with HTTP/3 clients and servers.

  • File IO now supports symbolic links and has greatly improved performance with a from-scratch rewrite of FileStream.

  • Security has been improved with support for OpenSSL 3, the ChaCha20Poly1305 encryption scheme, and runtime defense-in-depth mitigations, specifically W^X and CET.

  • Single-file apps (extraction-free) can be published for Linux, macOS, and Windows (previously only Linux).

  • IL trimming is now more capable and effective, with new warnings and analyzers to ensure correct final results.

  • Source generators and analyzers have been added that help you produce better, safer, and higher performance code.

  • Source build enables organizations like Red Hat to build .NET from source and offer their own builds to their users.

This release includes approximately ten thousand git commits. Even though this post is long, it skips over many improvements. Download and try .NET 6 to see everything new.

Support

.NET 6 is a long term support (LTS) release that will be supported for three years. It supports a variety of operating systems, including macOS Apple Silicon and Windows Arm64.

Working with Red Hat, the .NET team supports .NET on Red Hat Enterprise Linux. On RHEL 8 and later, .NET 6 will be available for the AMD and Intel (x86_64), ARM (aarch64), and IBM Z and LinuxONE (s390x) architectures.

Please start migrating your apps to .NET 6, especially .NET 5 apps. We've heard from early adopters that upgrading from .NET Core 3.1 and .NET 5 to .NET 6 is straightforward.

.NET 6 is supported by Visual Studio 2022 and Visual Studio 2022 for Mac. It is not supported by Visual Studio 2019, Visual Studio for Mac 8, or MSBuild 16. If you want to use .NET 6, you will need to upgrade to Visual Studio 2022 (which is now also 64-bit). .NET 6 is supported by the Visual Studio Code C# extension.

Azure App Services:

  • Azure Functions now supports running serverless functions on .NET 6.

  • The App Service .NET 6 GA announcement provides information and details for ASP.NET Core developers excited to get started with .NET 6 today.

  • Azure Static Web Apps now supports full-stack .NET 6 apps, with a Blazor WebAssembly frontend and an Azure Functions API.

Note: If your app is already running a .NET 6 preview or RC release on App Service, it will be automatically updated on the first restart after the .NET 6 runtime and SDK are deployed to your region. If you deployed a self-contained app, you will need to rebuild and redeploy.

Unified and extended platform

.NET 6 provides a unified platform for browsers, cloud, desktop, Internet of Things, and mobile applications. The underlying platform has been updated to meet the needs of all application types and make it easy to reuse code across all applications. The new features and improvements apply to all applications simultaneously, so your code running in the cloud or on a mobile device behaves the same way and has the same benefits.

The set of scenarios open to .NET developers grows with each release. Machine learning and WebAssembly are two recent additions. For example, with machine learning you can write apps that look for anomalies in streaming data. With WebAssembly you can host .NET apps in the browser, just like HTML and JavaScript, or mix them with HTML and JavaScript.

One of the most exciting new additions is .NET Multi-platform App UI (.NET MAUI). You can now write code in a single project that delivers a modern client app experience across desktop and mobile operating systems. .NET MAUI will be released a little later than .NET 6. We've put a lot of time and effort into .NET MAUI and are excited to release it and see .NET MAUI apps in production.

Of course, .NET apps are also at home on the Windows desktop (with Windows Forms and WPF) and in the cloud with ASP.NET Core. They are the app types we've offered for the longest time, they remain very popular, and we've improved them in .NET 6.

Targeting .NET 6

A broad platform theme is continuing to make it easy to write .NET code on all of these operating systems.

To target .NET 6, you need to use a .NET 6 target framework, as follows:

<TargetFramework>net6.0</TargetFramework>

The net6.0 target framework moniker (TFM) gives you access to all the cross-platform APIs that .NET provides. It is the best choice if you're writing console apps, ASP.NET Core apps, or reusable cross-platform libraries.

If you're targeting a specific operating system (for example, writing Windows Forms or iOS apps), there's another set of TFMs for you to use, each named for the self-evident operating system. They give you access to all the APIs in net6.0 plus a bunch of operating-system-specific APIs.

  • net6.0-android
  • net6.0-ios
  • net6.0-maccatalyst
  • net6.0-tvos
  • net6.0-windows

Each of these version-less TFMs is equivalent to targeting the lowest supported operating system version for .NET 6. You can specify an operating system version if you want to be specific or to get access to newer APIs.

Both the net6.0 and net6.0-windows TFMs are supported (the same as with .NET 5). The Android and Apple TFMs are new with .NET 6 and are currently in preview. They will be supported in a later .NET 6 update.

There are no compatibility relationships between the operating-system-specific TFMs. For example, net6.0-ios is not compatible with net6.0-tvos. If you want to share code, you need to do that with source, using #if statements, or with binaries that target net6.0.
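As a sketch of the source-sharing option, a file compiled into multiple OS-specific projects can branch with #if. Assumption: the IOS and TVOS preprocessor symbols shown are the ones the OS-specific TFMs define; verify the exact symbol names for your workloads.

```csharp
// Shared.cs - compiled into, say, both a net6.0-ios and a net6.0-tvos project.
// Assumption: IOS/TVOS are the symbols defined by the OS-specific TFMs;
// check the exact names for your target workloads.
public static class DeviceInfo
{
    public static string Describe()
    {
#if IOS
        return "Running on iOS";
#elif TVOS
        return "Running on tvOS";
#else
        return "Running on another platform"; // a plain net6.0 build falls through here
#endif
    }
}
```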

Performance

The team has focused on performance continuously since we launched the .NET Core project. Stephen Toub documents .NET performance progress with an excellent post for each release; welcome to Performance Improvements in .NET 6. This post covers the major performance improvements you'll want to know about, including file IO, interface casting, PGO, and System.Text.Json.

Dynamic PGO

**Dynamic profile-guided optimization (PGO)** can significantly improve steady-state performance. For example, PGO delivered a 26% improvement in requests per second (510K → 640K) for the TechEmpower JSON "MVC" suite.

Dynamic PGO builds on tiered compilation, which enables methods to first be compiled very quickly (known as "tier 0") to improve startup performance, and then to be subsequently recompiled (known as "tier 1") with lots of optimizations enabled once the method has been shown to be impactful. This model enables methods to be instrumented at tier 0 to make various observations about the code's execution. When those methods are rejitted at tier 1, the information gathered from the tier 0 executions is used to better optimize the tier 1 code. That's the essence of the mechanism.

Startup with dynamic PGO will be slightly slower than the default runtime because of the extra code running in tier 0 methods to observe method behavior.

To enable dynamic PGO, set DOTNET_TieredPGO=1 in the environment where your application will run. You must also ensure that tiered compilation is enabled (it is by default). Dynamic PGO is opt-in because it is a new and impactful technology. We want to release it as opt-in and gather feedback to make sure it's fully stress-tested; we did the same with tiered compilation. Dynamic PGO is supported and already in production use by at least one very large Microsoft service. We encourage you to try it.

You can see more of the benefits of dynamic PGO in the .NET 6 performance post, including the following microbenchmark, which measures the cost of a particular LINQ enumerator.

private IEnumerator<int> _source = Enumerable.Range(0, int.MaxValue).GetEnumerator();

[Benchmark]
public void MoveNext() => _source.MoveNext();

These are the results with and without dynamic PGO.

| Method | Mean | Code size |
| --- | --- | --- |
| PGO disabled | 1.905 ns | 30 B |
| PGO enabled | 0.7071 ns | 105 B |

That's a pretty big difference, and then the code size has also grown, which may surprise some readers. This is the size of the assembly code generated by the JIT, not memory allocations (which are a more common focus). The .NET 6 performance post has a good explanation for this.

A common optimization in PGO implementations is "hot/cold splitting", where sections of a method frequently executed ("hot") are kept close together at the start of the method, and sections infrequently executed ("cold") are moved to the end of the method. That enables better use of the instruction cache and minimizes loads of likely-unused code.

As context, interface dispatch is the most expensive type of call in .NET. Non-virtual method calls are the fastest, and faster still are calls that can be eliminated via inlining. In this case, dynamic PGO provides two (alternative) call sites for MoveNext. The first, the hot one, is a direct call to Enumerable+RangeIterator.MoveNext, and the other, the cold one, is a virtual interface call via IEnumerator<int>. It is a big win if the hot path is taken most of the time.

Here's the magic. When the JIT instrumented the tier 0 code for this method, that included instrumenting the interface dispatch to track the concrete type of _source on each call. The JIT found that every call was on a type called Enumerable+RangeIterator, which is a private class used to implement Enumerable.Range inside the Enumerable implementation. As a result, for tier 1 the JIT emitted a check to see whether the type of _source is Enumerable+RangeIterator: if it isn't, it jumps to the cold section performing the normal interface dispatch we highlighted earlier. But if it is, and the profiling data suggests that will be the case the vast majority of the time, it can proceed to directly call the non-virtualized Enumerable+RangeIterator.MoveNext method. Not only that: the JIT deemed it profitable to inline that MoveNext method. The net result is generated assembly code that's a bit larger, but optimized for the exact scenario expected to be most common. These are the kinds of wins we were hoping for when we started building dynamic PGO.
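In C# terms, the tier 1 code behaves roughly like the following hand-written guard. This is a conceptual sketch with illustrative types, not actual JIT output; RangeLikeIterator here stands in for the private Enumerable+RangeIterator type.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Illustrative stand-in for the private Enumerable+RangeIterator type.
public class RangeLikeIterator : IEnumerator<int>
{
    private int _current = -1;
    public bool MoveNext() => ++_current < 3;
    public int Current => _current;
    object IEnumerator.Current => Current;
    public void Reset() => _current = -1;
    public void Dispose() { }
}

public static class Dispatch
{
    // Conceptual analogy of the guarded devirtualization that tier 1 code performs.
    public static bool GuardedMoveNext(IEnumerator<int> source)
    {
        if (source is RangeLikeIterator hot)   // guard: the profiled common type
            return hot.MoveNext();             // direct, inlinable call (hot path)
        return source.MoveNext();              // normal interface dispatch (cold path)
    }
}
```

The guard costs one type check, which is much cheaper than interface dispatch when the hot path is taken most of the time.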

Dynamic PGO will be discussed again in the RyuJIT section.

File IO improvements

FileStream was almost entirely rewritten in .NET 6, with a focus on improving async file IO performance. On Windows, the implementation no longer uses blocking APIs and can be several times faster! We've also improved memory usage on all platforms: after the first async operation (which typically allocates), async operations are now allocation-free! In addition, we have unified behavior for edge cases where the Windows and Unix implementations differed (where that was possible).

The performance improvements of this rewrite benefit all operating systems. The benefit to Windows is the greatest, since it was furthest behind. macOS and Linux users should also see significant FileStream performance improvements.

The following benchmark writes 100 MB to a new file.

private byte[] _bytes = new byte[8_000];

[Benchmark]
public async Task Write100MBAsync()
{
    using FileStream fs = new("file.txt", FileMode.Create, FileAccess.Write, FileShare.None, 1, FileOptions.Asynchronous);
    for (int i = 0; i < 100_000_000 / 8_000; i++)
        await fs.WriteAsync(_bytes);
}

On Windows with an SSD drive, we observed a 4x speedup and more than a 1200x allocation drop:

| Method | Runtime | Mean | Ratio | Allocated |
| --- | --- | --- | --- | --- |
| Write100MBAsync | .NET 5.0 | 1,308 ms | 1.00 | 3,809 KB |
| Write100MBAsync | .NET 6.0 | 306.8 ms | 0.24 | 3 KB |

We also recognized the need for higher-performance file IO features: concurrent reads and writes, and scatter/gather IO. For these scenarios, we introduced new APIs on the System.IO.File and System.IO.RandomAccess classes.

async Task AllOrNothingAsync(string path, IReadOnlyList<ReadOnlyMemory<byte>> buffers)
{
    using SafeFileHandle handle = File.OpenHandle(
        path, FileMode.Create, FileAccess.Write, FileShare.None, FileOptions.Asynchronous,
        preallocationSize: buffers.Sum(buffer => buffer.Length)); // hint for the OS to pre-allocate disk space

    await RandomAccess.WriteAsync(handle, buffers, fileOffset: 0); // on Linux it's translated to a single sys-call!
}

This example demonstrates:

  • Opening a file handle with the new File.OpenHandle API.
  • Preallocating disk space with the new preallocationSize feature.
  • Writing to the file with the new scatter/gather IO API.

The preallocation feature improves performance, since writes don't need to extend the file and the file is less likely to be fragmented. This approach also improves reliability, since writes will no longer fail due to running out of space: the space has already been reserved. The scatter/gather IO API reduces the number of system calls required to write the data.
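The read side of the new API is symmetrical. A minimal, self-contained sketch (the file name and contents are illustrative), reading at an explicit offset without touching a stream position:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Win32.SafeHandles;

public static class ReadDemo
{
    // Reads 3 bytes starting at offset 2 of a small test file.
    public static async Task<int> ReadChunkAsync()
    {
        File.WriteAllBytes("data.bin", new byte[] { 1, 2, 3, 4, 5 });

        // File.OpenHandle defaults to FileMode.Open / FileAccess.Read.
        using SafeFileHandle handle = File.OpenHandle("data.bin");

        var buffer = new byte[3];
        int read = await RandomAccess.ReadAsync(handle, buffer, fileOffset: 2);
        return read; // buffer now holds { 3, 4, 5 }
    }
}
```

Because RandomAccess takes an explicit fileOffset, multiple reads can safely run concurrently against the same handle.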

Faster interface checking and casting

Interface casting performance has been improved by 16%–38%. This improvement is particularly useful for pattern matching to and from interfaces in C#.
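As a rough illustration (the types here are invented for the example), this is the kind of code that benefits: an `is` pattern against an interface compiles to a runtime interface check plus a cast, exactly the operation that got faster.

```csharp
using System;

// Invented example types: an interface type test plus cast is the
// operation .NET 6 made 16%-38% faster.
public interface IShape { double Area { get; } }

public sealed class Circle : IShape
{
    public double Radius { get; set; }
    public double Area => Math.PI * Radius * Radius;
}

public static class Measure
{
    // 'is' pattern: runtime interface check + cast, then a member access.
    public static double AreaOrZero(object candidate) =>
        candidate is IShape shape ? shape.Area : 0.0;
}
```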

This chart shows the scale of improvement for a representative benchmark.

One of the biggest advantages of moving parts of the .NET runtime from C++ to managed C# is that it lowers the barrier to contribution. That includes interface casting, which was moved to C# as an early .NET 6 change. Many more people in the .NET ecosystem are literate in C# than in C++ (and the runtime uses challenging C++ patterns). Just being able to read some of the code that composes the runtime is an important step in developing the confidence to contribute in one form or another.

Credit to Ben Adams.

System.Text.Json source generator

We added a source generator for System.Text.Json that avoids the need for reflection and code generation at run time, and that generates optimal serialization code at build time. Serializers are typically written with very conservative techniques because they have to be. However, if you read your own serialization source code (that uses a serializer), you can see what the obvious choices should be that would make the serializer much more optimal for your specific case. That's exactly what this new source generator does.

In addition to increasing performance and reducing memory, the source generator produces code that is optimal for assembly trimming. That helps make smaller apps.

Serializing POCOs is a very common scenario. Using the new source generator, we observed that serialization is ~1.6x faster in our benchmark.

| Method | Mean | StdDev | Ratio |
| --- | --- | --- | --- |
| Serializer | 243.1 ns | 9.54 ns | 1.00 |
| SrcGenSerializer | 149.3 ns | 1.91 ns | 0.62 |
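A minimal sketch of how the source generator is used. The Person and AppJsonContext names are illustrative; JsonSerializable and JsonSerializerContext are the real System.Text.Json.Serialization APIs, and the generator runs automatically as part of a .NET 6 build.

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

public record Person(string FirstName, string LastName);

// The generator fills in this partial class at build time with fast,
// reflection-free serialization logic specialized for Person.
[JsonSerializable(typeof(Person))]
public partial class AppJsonContext : JsonSerializerContext { }

public static class JsonDemo
{
    public static string Serialize(Person person) =>
        // Passing the generated metadata avoids run-time reflection entirely.
        JsonSerializer.Serialize(person, AppJsonContext.Default.Person);
}
```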

The TechEmpower caching benchmark exercises a platform or framework's in-memory caching of information sourced from a database. The .NET implementation of the benchmark performs JSON serialization of the cached data in order to send it as a response to the test harness.

| | Requests/sec | Requests |
| --- | --- | --- |
| net5.0 | 243,000 | 3,669,151 |
| net6.0 | 260,928 | 3,939,804 |
| net6.0 + JSON source generation | 364,224 | 5,499,468 |

We observed an ~100K RPS gain (~40% increase). Combined with the MemoryCache performance improvements, .NET 6 delivers 50% higher throughput than .NET 5 on this benchmark!

C# 10

Welcome to C# 10. A major theme of C# 10 is continuing the simplification journey that started with top-level statements in C# 9. The new features remove even more ceremony from Program.cs, resulting in programs as short as a single line. They were inspired by talking with people with no C# experience (students, professional developers, and others) and learning what works best and is intuitive for them.

Most of the .NET SDK templates have been updated to deliver the simpler, more concise experience now possible with C# 10. We've heard feedback that some people don't like the new templates because they are not aimed at experts, remove object orientation, remove important concepts that should be learned on day one of writing C#, or encourage writing a whole program in one file. Objectively, none of these perspectives is correct. The new model is equally suited to students and professional developers. However, it is different from the C-derived model that preceded .NET 6.

There are several other features and improvements in C# 10, including record structs.

Global usage instruction

Global using directives let you specify a using directive just once and have it applied to every file that you compile.

The following example shows the breadth of syntax:

  • global using System;
  • global using static System.Console;
  • global using Env = System.Environment;

You can put global using statements in any .cs file, including Program.cs.
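As a sketch of the effect: with the three directives above in scope, code elsewhere in the project needs no per-file usings at all. For brevity, this example keeps the global usings and top-level statements in one file; in a real project the global usings would typically live in their own .cs file (any file name works).

```csharp
// For brevity the global usings share a file with top-level statements;
// in a real project they'd typically live in their own .cs file.
global using System;
global using static System.Console;
global using Env = System.Environment;

// No per-file using directives are needed below: WriteLine comes from the
// static using, and Env is the alias declared above.
WriteLine($"Hello from {Env.MachineName}");
```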

Implicit usings is an MSBuild concept that automatically adds a set of global using directives depending on the SDK. For example, console apps have different implicit usings than ASP.NET Core apps.

Implicit usings are opt-in, enabled in a PropertyGroup:

  • <ImplicitUsings>enable</ImplicitUsings>

Implicit usings are opt-in for existing projects but included by default for new C# projects. For more information, see implicit usings.
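In project-file terms, a minimal console project with implicit usings enabled might look like the following sketch. The `<ImplicitUsings>` property is real; the surrounding values are typical defaults for a .NET 6 console template.

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <!-- Adds the SDK's default set of global using directives at build time -->
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
</Project>
```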

File-scoped namespaces

File-scoped namespaces enable you to declare the namespace for a whole file without nesting the remaining contents in { ... }. Only one is allowed, and it must come before any types are declared.

The new syntax is a single line:

namespace MyNamespace;

class MyClass { ... } // Not indented

This new syntax is an alternative to the three-line indented style:

namespace MyNamespace
{
    class MyClass { ... } // Everything is indented
}

The benefit is reduced indentation in the extremely common case where your whole file is in the same namespace.

Record structs

C# 9 introduced records as a special value-oriented form of classes. In C# 10, you can also declare record structs. Structs in C# already have value equality, but record structs add an == operator and an IEquatable<T> implementation, as well as a value-based ToString implementation:

public record struct Person
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}

Just like record classes, record structs can be "positional", meaning that they have a primary constructor which implicitly declares public members corresponding to the parameters:

public record struct Person(string FirstName, string LastName);

However, unlike record classes, the implicit public members are mutable auto-implemented properties. That way, record structs become a natural growing-up story for tuples. For example, if you have a return type (string FirstName, string LastName) and you want to grow it into a named type, you can easily declare the corresponding positional record struct and maintain the mutable semantics.

If you want an immutable record with read-only properties, you can declare the whole record struct readonly (just as you can other structs):

public readonly record struct Person(string FirstName, string LastName);

C# 10 supports with expressions not only for record structs, but for _all_ structs, as well as for anonymous types:

var updatedPerson = person with { FirstName = "Mary" };
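Putting these pieces together, a small runnable sketch of record struct behavior (the Person type and values are illustrative):

```csharp
using System;

public record struct Person(string FirstName, string LastName);

public static class RecordStructDemo
{
    public static void Run()
    {
        var ada = new Person("Ada", "Lovelace");
        var mary = ada with { FirstName = "Mary" };  // non-destructive mutation

        Console.WriteLine(ada == new Person("Ada", "Lovelace")); // True: value equality
        Console.WriteLine(mary);                                 // value-based ToString
    }
}
```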

F# 6

F# 6 aims to make F# simpler and more efficient. This applies to language design, libraries, and tools. Our goal with F# 6 (and beyond) is to eliminate extremes in the language that surprise users or hinder learning F#. We are pleased to partner with the F# community on this ongoing effort.

Making F# faster and more interoperable

The new task { ... } syntax directly creates a task and starts it. This is one of the most significant features in F# 6, making async tasks simpler, more performant, and more interoperable with C# and other .NET languages. Previously, creating a .NET task required using async { ... } to create a task and calling Async.StartImmediateAsTask.

The task { ... } syntax builds on what's known as "resumable code", RFC FS-1087. Resumable code is a core feature that we expect to use to build other high-performance async and yielding state machines in the future.

F# 6 also adds other performance features for library authors, including InlineIfLambda and unboxed representations for F# active patterns. One particularly notable performance improvement is in the compilation of list and array expressions, which are now up to 4x faster, as well as better and simpler to debug.

Making F# easier to learn and more uniform

F# 6 enables the expr[idx] indexing syntax. Until now, F# has used expr.[idx] for indexing. Dropping the dot is based on repeated feedback from first-time F# users that the dot usage departs unnecessarily from the standard practice they expect. In new code, we recommend the systematic use of the new expr[idx] indexing syntax, and as a community we should all switch to it.

The F# community has contributed key improvements to make the F# language more uniform in F# 6. The most important of these is removing a number of inconsistencies and limitations in F#'s indentation rules. Other design additions that make F# more uniform include adding as patterns; allowing "overloaded custom operations" in computation expressions (useful for DSLs); allowing _ discards on use bindings; and allowing %B for binary formatting in output. The F# core library adds new functions for copy-and-update on lists, arrays, and sequences, plus additional NativePtr intrinsics. Some legacy features of F#, deprecated since 2.0, now produce errors. Many of these changes better align F# with your expectations, resulting in fewer surprises.

F# 6 also adds support for additional "implicit" and "type-directed" conversions in F#. This means fewer explicit upcasts, and adds first-class support for .NET-style implicit conversions. F# has also been adjusted to better suit the era of numeric libraries using 64-bit integers, with implicit widening of 32-bit integers.

Improving the F# tools

Tooling improvements in F# 6 make everyday coding easier. New "pipeline debugging" lets you step through, set breakpoints in, and inspect intermediate values of the F# piping syntax input |> f1 |> f2. The debug display of shadowed values has been improved, eliminating a common source of confusion when debugging. F# tooling is also now more performant, with the F# compiler performing the parsing stage in parallel. F# IDE tooling has also improved; F# scripting is now even more robust, allowing you to pin the version of the .NET SDK used via global.json files.

Hot Reload

Hot Reload is another performance feature, focused on developer productivity. It enables you to make a wide variety of code edits to a running application, shortening the time you need to wait for the app to rebuild, restart, or re-navigate to the place you were at after making a code change.

Hot Reload is available via the dotnet watch CLI tool and Visual Studio 2022. You can use Hot Reload with a large variety of app types, such as ASP.NET Core, Blazor, .NET MAUI, console, Windows Forms, WPF, WinUI 3, Azure Functions, and others.

When using the CLI, simply start your .NET 6 app with dotnet watch, make any supported edit, and then, when saving the file (as in Visual Studio Code), those changes will be immediately applied. If the changes are not supported, the details will be logged to the command window.

This image shows an example of dotnet watch. I made edits to a .cs file and a .cshtml file (as noted in the log); both were applied to the code and reflected in the browser very quickly, in under half a second.

When using Visual Studio 2022, simply start your app, make a supported change, and use the new "Hot Reload" button (shown below) to apply those changes. You can also choose to apply changes on save via the drop-down menu on the same button. When using Visual Studio 2022, Hot Reload is available for multiple .NET versions: .NET 5+, .NET Core, and .NET Framework. For example, you will be able to make code-behind changes to a button's OnClick event handler. It is not supported in the app's Main method.

Note: There is an error in the image, where RuntimeInformation.FrameworkDescription is displayed incorrectly; that will be fixed soon.

Hot Reload also works in tandem with the existing Edit and Continue capability (when stopped at a breakpoint), and with XAML Hot Reload for editing an app's UI in real time. C# and Visual Basic apps are currently supported (not F#).

Security

Security has been significantly improved in .NET 6. It is always a focus for the team, including threat modeling, cryptography, and defense-in-depth mitigations.

On Linux, we rely on OpenSSL for all cryptographic operations, including for TLS (required for HTTPS). On macOS and Windows, we rely on OS-provided functionality for the same purposes. With each new version of .NET, we often need to add support for a new version of OpenSSL. .NET 6 adds support for OpenSSL 3.

The biggest changes to OpenSSL 3 are the improved FIPS 140-2 module and simpler licensing.

.NET 6 requires OpenSSL 1.1 or higher and will prefer the highest installed version of OpenSSL it can find, up to and including v3. In the general case, you're most likely to start using OpenSSL 3 when the Linux distribution you use switches to it as the default. Most distros have not yet done that. For example, if you install .NET 6 on Red Hat 8 or Ubuntu 20.04, you will not (at the time of writing) start using OpenSSL 3.

ChaCha20Poly1305 is supported with OpenSSL 3, Windows 10 21H1, and Windows Server 2022. You can use this new authenticated encryption scheme with .NET 6 (assuming your environment supports it).

Thanks to Kevin Jones for Linux support for ChaCha20Poly1305.

We also published a new runtime security mitigations roadmap. It's important that the runtime you use is safe from textbook attack types, and we're delivering on that need. In .NET 6, we built initial implementations of W^X and Intel Control-flow Enforcement Technology (CET). W^X is fully supported, enabled by default for macOS Arm64, and opt-in for other environments. CET is opt-in and in preview for all environments. We expect to enable both technologies by default in all environments with .NET 7.

Arm64

Arm64 is very exciting these days, for laptops, cloud hardware, and other devices. We on the .NET team are just as excited and are doing our best to keep up with this industry trend. We work directly with engineers at Arm Holdings, Apple, and Microsoft to ensure that our implementations are correct and optimized, and that our plans are aligned. These close partnerships have helped us a lot.

  • Special thanks to Apple, who sent our team a bushel of Arm64 development kits to work with prior to the M1 chip launch, and for the significant technical support they provided.
  • Special thanks to Arm Holdings, whose engineers code-reviewed our Arm64 changes and also made performance improvements.

Before this, we added initial support for Arm64 (and Arm32) with .NET Core 3.0. The team has made significant investments in Arm64 in each of the last few releases and will continue to do so for the foreseeable future. In .NET 6, our focus was on supporting the new Apple Silicon chips and the x64 emulation scenario on both macOS and Windows Arm64 operating systems.

You can install both the Arm64 and x64 versions of .NET on macOS 11+ and Windows 11+ Arm64 operating systems. We had to make several design choices and product changes to make sure that worked.

Our strategy is "prefer the native architecture". We recommend that you always use the SDK that matches the native architecture: the Arm64 SDK on macOS and Windows Arm64. The SDK is a large body of software, and it will run much faster natively on Arm64 chips than it would under emulation. We updated the CLI to make that easy. We have no plans to focus on optimizing emulated x64.

By default, if you dotnet run a .NET 6 app with the Arm64 SDK, it will run as Arm64. You can easily switch to running as x64 with the -a argument, like dotnet run -a x64. The same argument works with other CLI verbs. For more information, see the Arm64 .NET 6 RC2 update for macOS and Windows.

There's a subtlety here I want to make sure is covered. When you use -a x64, the SDK is still running natively as Arm64. There are fixed points of process boundaries in the .NET SDK architecture; for the most part, a process must be all Arm64 or all x64. I'm simplifying a bit, but the .NET CLI waits for the last process in the SDK architecture to be created and launches that one as the chip architecture you requested, like x64. That's the process your code runs in. That way, you get the benefits of Arm64 as a developer, while your code gets to run in the process it needs. This is only relevant if you need to run some of your code as x64. If you don't, you can run everything as Arm64 all the time, and that's great.

Arm64 support

For macOS and Windows Arm64, here are the key points you need to know:

  • The .NET 6 Arm64 and x64 SDKs are supported and recommended.
  • All supported Arm64 and x64 runtimes are supported.
  • The .NET Core 3.1 and .NET 5 SDKs work, but provide fewer features and are not fully supported in some cases.
  • dotnet test doesn't yet work correctly with x64 emulation. We are working on that. dotnet test will be improved as part of the 6.0.200 release, possibly earlier.

For more complete information, see .NET support for macOS and Windows Arm64.

Linux is missing from this discussion because it does not support x64 emulation the way macOS and Windows do. As a result, these new CLI features and support approaches do not directly apply to Linux, nor does Linux need them.

Windows Arm64

We have a simple tool that demonstrates the environment in which .NET runs.

C:\Users\rich>dotnet tool install -g dotnet-runtimeinfo
You can invoke the tool using the following command: dotnet-runtimeinfo
Tool 'dotnet-runtimeinfo' (version '1.0.5') was successfully installed.

C:\Users\rich>dotnet-runtimeinfo
         42
         42              ,d                             ,d
         42              42                             42
 ,adPPYb,42  ,adPPYba, MM42MMM 8b,dPPYba,   ,adPPYba, MM42MMM
a8"    `Y42 a8"     "8a  42    42P'   `"8a a8P_____42   42
8b       42 8b       d8  42    42       42 8PP"""""""   42
"8a,   ,d42 "8a,   ,a8"  42,   42       42 "8b,   ,aa   42,
 `"8bbdP"Y8  `"YbbdP"'   "Y428 42       42  `"Ybbd8"'   "Y428

**.NET information
Version: 6.0.0
FrameworkDescription: .NET 6.0.0-rtm.21522.10
Libraries version: 6.0.0-rtm.21522.10
Libraries hash: 4822e3c3aa77eb82b2fb33c9321f923cf11ddde6

**Environment information
ProcessorCount: 8
OSArchitecture: Arm64
OSDescription: Microsoft Windows 10.0.22494
OSVersion: Microsoft Windows NT 10.0.22494.0

As you can see, the tool runs natively on Windows Arm64. I’ll show you what ASP.NET Core looks like.

macOS Arm64

You can see that the experience on macOS Arm64 is similar, and the architectural goals are demonstrated.

rich@MacBook-Air app % dotnet --version
6.0.100
rich@MacBook-Air app % dotnet --info | grep RID
 RID:         osx-arm64
rich@MacBook-Air app % cat Program.cs 
using System.Runtime.InteropServices;
using static System.Console;

WriteLine($"Hello, {RuntimeInformation.OSArchitecture} from {RuntimeInformation.FrameworkDescription}!");
rich@MacBook-Air app % dotnet run
Hello, Arm64 from .NET 6.0.0-rtm.21522.10!
rich@MacBook-Air app % dotnet run -a x64
Hello, X64 from .NET 6.0.0-rtm.21522.10!
rich@MacBook-Air app % 

This output shows that Arm64 execution is the default with the Arm64 SDK, and how easy it is to switch between targeting Arm64 and x64 using the -a argument. The exact same experience works on Windows Arm64.

This image demonstrates the same thing, but with ASP.NET Core, using the same .NET 6 Arm64 SDK shown above.

Arm64 Docker

Docker supports containers that run in the native architecture (the default) as well as under emulation. This may seem obvious, but it can be confusing given that most of the Docker Hub catalog is x64-oriented. You can use --platform linux/amd64 to request an x64 image.

We only support running Linux Arm64 .NET container images on Arm64 operating systems. This is because we have never supported running .NET under QEMU, which Docker uses for architecture emulation. This appears to be due to QEMU limitations.

This image shows our sample console image: mcr.microsoft.com/dotnet/samples. It's an interesting sample because it contains some basic logic for printing the CPU and memory limit information you can experiment with. The output shown has CPU and memory limits set.

Try it yourself: docker run --rm mcr.microsoft.com/dotnet/samples

Arm64 performance

Apple Silicon and x64 emulation support projects are important, but we’ve also generally improved Arm64 performance.

This image demonstrates an improvement to zeroing out the contents of stack frames, a common operation. The green line is the new behavior, and the orange line is another (less beneficial) experiment; both improve relative to the baseline, shown by the blue line. For this test, lower is better.

Containers

.NET 6 is more container-friendly for both Arm64 and x64, based on all of the improvements discussed in this post. We also made key changes that help a variety of scenarios. The post on validating container improvements with .NET 6 demonstrates some of these improvements being tested together.

Windows container improvements and the new environment variables are also included in the November 9 (tomorrow) .NET Framework 4.8 container update.

Release notes can be found in our Docker repository:

  • .NET 6 container release notes
  • .NET Framework 4.8 container release notes, November 2021

Windows containers

.NET 6 adds support for Windows process-isolated containers. If you use Windows containers in Azure Kubernetes Service (AKS), you rely on process-isolated containers. Process-isolated containers can be thought of as very similar to Linux containers: Linux containers use cgroups, and Windows process-isolated containers use Job Objects. Windows also offers Hyper-V containers, which provide greater isolation through stronger virtualization. Nothing changes in .NET 6 for Hyper-V containers.

The main value of this change is that Environment.ProcessorCount now reports the correct value with Windows process-isolated containers. If you create a 2-core container on a 64-core machine, Environment.ProcessorCount returns 2. In previous versions, the property reported the total number of processors on the machine, regardless of limits specified by the Docker CLI, Kubernetes, or another container orchestrator/runtime. This value is used by various parts of .NET for scaling purposes, including the .NET garbage collector (though it relies on a related, lower-level API). Community libraries also rely on this API for scaling.

We recently validated this new capability with a customer running Windows containers in production with a large number of pods on AKS. They were able to run successfully with 50% of their typical memory configuration, a level that previously caused OutOfMemoryException and StackOverflowException exceptions. They didn't take the time to find the minimum memory configuration, but we guessed it was significantly below 50% of their typical configuration. As a result of this change, they will move to a cheaper Azure configuration and save money. That's a nice, easy win just from upgrading.

Optimal scaling

We heard from users that some applications cannot achieve optimal scaling even when Environment.ProcessorCount reports the correct value. If this sounds like the opposite of what you just read about Windows containers, it sort of is. .NET now provides the DOTNET_PROCESSOR_COUNT environment variable to manually control the value of Environment.ProcessorCount. In a typical use case, an application might be configured with 4 cores on a 64-core machine and scale best across 8 or 16 cores. This environment variable can be used to enable that scaling.

This model, in which the Environment.ProcessorCount and --cpus (via the Docker CLI) values differ, may seem strange. By default, containers are limited to core-equivalents, not actual cores. That means when you ask for 4 cores, you get CPU time equivalent to 4 cores, but your application can (in theory) run on more cores, even all 64 cores of a 64-core machine, for short periods. That might enable your application to scale better across more than four threads (continuing the example), and allocating more of them might be beneficial. This assumes that thread allocation is based on the Environment.ProcessorCount value. If you set a higher value, your application will likely use more memory. For some workloads, that is an easy trade-off. At the very least, it is a new option you can test.

This new feature is supported by both Linux and Windows containers.
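As a sketch, enabling this might look like the following Docker invocation (the image name myapp:6.0 and the specific values are illustrative; DOTNET_PROCESSOR_COUNT is the environment variable described above):

```
# Limit the container to 4 cores' worth of CPU time, but let .NET size
# thread pools and other heuristics as if 16 cores were available.
docker run --cpus=4 -e DOTNET_PROCESSOR_COUNT=16 myapp:6.0
```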

Docker also provides a CPU group capability that your application can associate with a specific kernel. This feature is not recommended in this case, because the number of kernels an application can access is specifically defined. We also saw some problems with using it with hyper-V containers, and it didn’t really work in that isolation mode.

Debian 11 “bullseye”

We pay close attention to the life cycle and release schedules of Linux distributions and try to make the best choices on your behalf. Debian is the Linux distribution we use for our default Linux images. If you pull the 6.0 tag from one of our container repositories, you will pull a Debian image (assuming you are using Linux containers). With each new .NET release, we consider whether we should adopt a new Debian release.

As a policy, we do not change the Debian version for a convenience tag, such as 6.0, mid-release. If we did, some applications would be certain to break. That makes it important to select the Debian version carefully at the start of a release. In addition, these images get a lot of use, in large part because they are referenced by the convenience tags.

Debian and .NET releases are naturally not planned together. When we started .NET 6, we saw that Debian "Bullseye" would probably be released in 2021. We decided to bet on Bullseye from the start. We began releasing Bullseye-based container images with .NET 6 Preview 1 and decided not to look back. The bet was that the .NET 6 release would lose the race with the Bullseye release. As of August 8th, three months before our own release on November 8th, we still didn't know when Bullseye would ship. We didn't want to ship a production .NET 6 on a preview of Linux, but we stuck to our plan this late, assuming we would lose the race.

We were pleasantly surprised when Debian 11 "Bullseye" was released on August 14th. We lost the race, but we won the bet. That means .NET 6 users get the best and most up-to-date Debian by default, from day one. We believe Debian 11 and .NET 6 will be a great combination for many users. Sorry for the puns. We hit the bullseye.

Newer distributions include newer major versions of various packages in their package feeds, and they often receive CVE fixes more quickly. That's in addition to a newer kernel. A new release serves users better.

Looking further ahead, we'll soon start planning support for Ubuntu 22.04. Ubuntu is another Debian-family distribution and is popular with .NET developers. We hope to provide same-day support for new Ubuntu LTS releases.

Hats off to Tianon Gravi for maintaining the Debian image for the community and helping us when we have problems.

Dotnet Monitor

dotnet monitor is an important diagnostic tool for containers. It has been available as a sidecar container image for some time, but in an unsupported "experimental" state. As part of .NET 6, we are releasing a .NET 6-based dotnet monitor image that is fully supported in production.

dotnet monitor is already used by Azure App Service as an implementation detail of its ASP.NET Core Linux diagnostics experience. This is one of the intended scenarios: building on top of dotnet monitor to provide higher-level, higher-value experiences.

You can now pull a new image:

docker pull mcr.microsoft.com/dotnet/monitor:6.0

dotnet monitor makes it easier to access diagnostic information (logs, traces, process dumps) from a .NET process. It's easy to get at all the diagnostic information you want on a desktop machine, but those same familiar techniques may not work in production using containers. dotnet monitor provides a unified way to collect these diagnostic artifacts, whether running on your desktop machine or in a Kubernetes cluster. There are two different mechanisms for collecting these diagnostic artifacts:

  • HTTP API for temporary collection of artifacts. You can call these API endpoints when you already know that your application is experiencing problems and you are interested in gathering more information.
  • Rule-based configuration triggers for always-on collection of artifacts. You can configure rules to collect diagnostic data when desired conditions are met, for example, collecting a process dump when the CPU is persistently high.

dotnet monitor provides a common diagnostic API for .NET applications that works everywhere you want with any tool you want. The "common API" is not a .NET API, but a Web API that you can call and query. dotnet monitor includes an ASP.NET web server that interacts directly with the diagnostic server in the .NET runtime and exposes data from it. The design of dotnet monitor enables high-performance monitoring in production, and secure use to control access to privileged information. dotnet monitor interacts with the runtime, across container boundaries, through a non-internet-addressable Unix domain socket. That communication model is a good fit for this use case.
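As a rough sketch of what calling this Web API can look like, here are a few queries against a dotnet monitor sidecar (the default localhost:52323 binding and these route names come from the dotnet monitor documentation; the pid value is illustrative):

```
# List the .NET processes that dotnet monitor can observe.
curl http://localhost:52323/processes

# Capture a process dump of a specific process.
curl "http://localhost:52323/dump?pid=1" --output dump.core

# Snapshot of metrics in Prometheus text format.
curl http://localhost:52323/metrics
```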

Structured JSON logging

The JSON formatter is now the default console logger in the .NET 6 aspnet container images. The default in .NET 5 was the simple console formatter. This change was made so that the default configuration works with automated tools that rely on a machine-readable format such as JSON.

Output from the aspnet image now looks like this:

$ docker run --rm -it -p 8000:80 mcr.microsoft.com/dotnet/samples:aspnetapp
{"EventId":60,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository","Message":"Storing keys in a directory \u0027/root/.aspnet/DataProtection-Keys\u0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.","State":{"Message":"Storing keys in a directory \u0027/root/.aspnet/DataProtection-Keys\u0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.","path":"/root/.aspnet/DataProtection-Keys","{OriginalFormat}":"Storing keys in a directory \u0027{path}\u0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed."}}
{"EventId":35,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager","Message":"No XML encryptor configured. Key {86cafacf-ab57-434a-b09c-66a929ae4fd7} may be persisted to storage in unencrypted form.","State":{"Message":"No XML encryptor configured. Key {86cafacf-ab57-434a-b09c-66a929ae4fd7} may be persisted to storage in unencrypted form.","KeyId":"86cafacf-ab57-434a-b09c-66a929ae4fd7","{OriginalFormat}":"No XML encryptor configured. Key {KeyId:B} may be persisted to storage in unencrypted form."}}
{"EventId":14,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Now listening on: http://[::]:80","State":{"Message":"Now listening on: http://[::]:80","address":"http://[::]:80","{OriginalFormat}":"Now listening on: {address}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Application started. Press Ctrl\u002BC to shut down.","State":{"Message":"Application started. Press Ctrl\u002BC to shut down.","{OriginalFormat}":"Application started. Press Ctrl\u002BC to shut down."}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Hosting environment: Production","State":{"Message":"Hosting environment: Production","envName":"Production","{OriginalFormat}":"Hosting environment: {envName}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Content root path: /app","State":{"Message":"Content root path: /app","contentRoot":"/app","{OriginalFormat}":"Content root path: {contentRoot}"}}

The format can be changed by setting (or unsetting) the Logging__Console__FormatterName environment variable, or via a code change (see Console log formatting for more details).

After that change, you will see output like the following (the same as .NET 5):

$ docker run --rm -it -p 8000:80 -e Logging__Console__FormatterName="" mcr.microsoft.com/dotnet/samples:aspnetapp
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {8d4ddd1d-ccfc-4898-9fe1-3e7403bf23a0} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

Note: This change does not affect the .NET SDK on your developer machine, such as with dotnet run. The change is specific to the aspnet container image.

Support for OpenTelemetry Metrics

As part of our focus on observability, we've been adding support for OpenTelemetry over the last few .NET versions. In .NET 6, we add support for the OpenTelemetry Metrics API. By adding support for OpenTelemetry, your applications can seamlessly interoperate with other OpenTelemetry systems.

System.Diagnostics.Metrics is the .NET implementation of the OpenTelemetry Metrics API specification. The Metrics APIs are designed explicitly for processing raw measurements, with the goal of efficiently and simultaneously producing continuous summaries of those measurements.

The API includes the Meter class, which is used to create instrument objects. The API exposes four instrument classes: Counter, Histogram, ObservableCounter, and ObservableGauge, to support different metrics scenarios. In addition, the API exposes the MeterListener class to allow listening to measurements recorded by instruments, for aggregation and grouping purposes.

The OpenTelemetry .NET implementation will be extended to use these new APIs, adding support for metrics observability scenarios.

Library measurement recording example

Meter meter = new Meter("io.opentelemetry.contrib.mongodb", "v1.0");
Counter<int> counter = meter.CreateCounter<int>("Requests");
counter.Add(1);
counter.Add(1, KeyValuePair.Create<string, object>("request", "read"));

Listening example

MeterListener listener = new MeterListener();
listener.InstrumentPublished = (instrument, meterListener) =>
{
    if (instrument.Name == "Requests" && instrument.Meter.Name == "io.opentelemetry.contrib.mongodb")
    {
        meterListener.EnableMeasurementEvents(instrument, null);
    }
};
listener.SetMeasurementEventCallback<int>((instrument, measurement, tags, state) =>
{
    Console.WriteLine($"Instrument: {instrument.Name} has recorded the measurement {measurement}");
});
listener.Start();

Windows Forms

We continue to make important improvements in Windows Forms. .NET 6 includes better control accessibility, the ability to set application-wide default fonts, template updates, and more.

Accessibility improvements

In this release, we added UIA providers for CheckedListBox, LinkLabel, Panel, ScrollBar, TabControl, and TrackBar, which enable tools like Narrator and test automation to interact with the elements of an application.

The default font

You can now set a default font for an application with Application.SetDefaultFont:

void Application.SetDefaultFont(Font font)
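For example (a sketch; the font family and size are illustrative, and the call must happen before the first window is created):

```
Application.SetDefaultFont(new Font(new FontFamily("Microsoft Sans Serif"), 8f));
```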

Minimal application

Here is the minimal Windows Forms application with .NET 6:

class Program
{
    [STAThread]
    static void Main()
    {
        ApplicationConfiguration.Initialize();
        Application.Run(new Form1());
    }
}

As part of the.NET 6 release, we have been updating most templates to make them more modern and minimalist, including Windows Forms. We decided to make Windows Forms templates more traditional, in part because of the need to apply the [STAThread] attribute to the application entry point. However, there is more drama than is immediately in sight.

ApplicationConfiguration.Initialize() is a source-generated API that behind the scenes emits the following calls:

Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.SetDefaultFont(new Font(...));
Application.SetHighDpiMode(HighDpiMode.SystemAware);

The parameters of these calls can be configured via MSBuild properties in the csproj or props files.

The Windows Forms designer in Visual Studio 2022 also knows about these properties (currently it only reads the default font) and can show you your application as it would at run time:

Template update

The Windows Forms templates for C# have been updated to support the new application bootstrap, global using directives, file-scoped namespaces, and nullable reference types.

More runtime designers

Now you can build a general-purpose designer (for example, a report designer), because .NET 6 has all the pieces that designers and designer-related infrastructure were missing. See this blog post for more information.

Single-file applications

In .NET 6, in-memory single-file applications have been enabled for Windows and macOS. In .NET 5, this type of deployment was limited to Linux. You can now publish a single-file binary that is deployed and launched as a single file, on all supported operating systems. Single-file applications no longer extract any core runtime assemblies to temporary directories.

This capability is based on a building block called the "superhost." "apphost" is the executable used to launch an application in the non-single-file case, such as myapp.exe or ./myapp. The apphost contains code to find the runtime, load it, and start your application with that runtime. The superhost still performs some of those tasks, but uses a statically linked copy of all the CoreCLR native binaries. Static linking is the approach we use to achieve the single-file experience. Native dependencies, such as those shipped with a NuGet package, are the notable exception to single-file embedding. By default, they are not included in the single file. For example, WPF native dependencies are not part of the superhost, resulting in additional files beside the single-file application. You can embed and extract native dependencies by setting IncludeNativeLibrariesForSelfExtract.

Static analysis

We improved the single-file analyzer to allow custom warnings. If you have an API that doesn't work in single-file distribution, you can now mark it with the [RequiresAssemblyFiles] attribute, and a warning will appear if the analyzer is enabled. Adding that attribute also silences all single-file-related warnings in the method, so you can use it to propagate the warning up to your public API.

The single-file analyzer is automatically enabled for exe projects when PublishSingleFile is set to true, but you can also enable it for any project by setting EnableSingleFileAnalysis to true. This is helpful if you want your library to work well as part of a single-file application.
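For instance, a library project might opt in through its project file (a sketch using the property named above):

```
<PropertyGroup>
  <EnableSingleFileAnalysis>true</EnableSingleFileAnalysis>
</PropertyGroup>
```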

In .NET 5, we added warnings for Assembly.Location and a few other APIs that behave differently in single-file bundles.

Compression

Single-file bundles now support compression, which can be enabled by setting the EnableCompressionInSingleFile property to true. At run time, files are decompressed into memory as needed. Compression can provide significant space savings for some scenarios.
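A publish configuration enabling this might look like the following (a sketch using the properties named above):

```
<PropertyGroup>
  <PublishSingleFile>true</PublishSingleFile>
  <EnableCompressionInSingleFile>true</EnableCompressionInSingleFile>
</PropertyGroup>
```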

Let's look at single-file distribution, with and without compression, using NuGet Package Explorer as the example.

Uncompressed: 172 MB

Compressed: 71.6 MB

Compression can significantly increase application startup time, especially on Unix platforms. Unix platforms have a copy-free quick boot path that cannot be used for compression. You should test your application after enabling compression to see if the additional start-up costs are acceptable.

Single file debugging

Currently, single-file applications can only be debugged using platform debuggers, such as WinDbg. We are looking at adding Visual Studio debugging in a later release of Visual Studio 2022.

Single-file signing on macOS

Single-file applications now satisfy Apple notarization and signing requirements on macOS. The specific change relates to the way we construct single-file applications, in terms of a discrete file layout.

Apple began enforcing new requirements for signing and notarization with macOS Catalina. We have been working closely with Apple to understand the requirements and to find ways for development platforms like .NET to work well in that environment. We made product changes and documented user workflows to satisfy Apple's requirements in each of the last few .NET releases. One of the remaining gaps was single-file signing, which is a requirement for distributing a .NET application on macOS, including in the macOS store.

IL trimming

The team has been working on IL trimming for multiple releases. .NET 6 represents a big step forward on that journey. We have been working hard to make the more aggressive trimming mode safe and predictable, and as a result felt confident making it the default. TrimMode=link, which was previously opt-in, is now the default.

We have a three-pronged trimming strategy:

  • Improve the trim capability of the platform.
  • Annotate the platform to provide better warnings and to enable others to do the same.
  • On that basis, make the default trim mode more aggressive so that applications are smaller.

Trimming has been in preview until now because of the unreliable results it produced for applications that used unannotated reflection. With trim warnings in place, the experience should now be predictable. Applications without trim warnings should trim correctly and observe no change in behavior at run time. Currently, only the core .NET libraries are fully annotated for trimming, but we hope to see the ecosystem annotate for trimming and become trim-compatible.

Reducing application size

Let's look at this trimming improvement using crossgen, one of the SDK tools. It can be trimmed with only a few trim warnings, which the crossgen team was able to address.

First, let's look at publishing crossgen as a self-contained application without trimming. It is 80 MB (which includes the .NET runtime and all the libraries).

We can then try the (now legacy) .NET 5 default trim mode, copyused. The result drops to 55 MB.

The new .NET 6 default trim mode, link, reduces the self-contained size much further, to 36 MB.

We hope the new link trim mode better aligns with expectations for trimming: significant savings and predictable results.

Enable warnings by default

Trim warnings tell you about places where trimming might remove code that is used at run time. These warnings were previously disabled by default because they were very noisy, largely because the .NET platform did not participate in trimming as a first-class scenario.

We annotated most of the .NET libraries so that they produce accurate trim warnings. As a result, we felt it was time to enable trim warnings by default. The ASP.NET Core and Windows desktop runtime libraries have not been annotated. We plan to annotate the ASP.NET service components next (after .NET 6). We hope to see the community annotate NuGet libraries once .NET 6 is released.

You can disable the warnings by setting <SuppressTrimAnalysisWarnings> to true.
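In a project file, that looks like the following (a sketch; note that suppressing the warnings only hides the analysis, it does not make trimming safe):

```
<PropertyGroup>
  <PublishTrimmed>true</PublishTrimmed>
  <SuppressTrimAnalysisWarnings>true</SuppressTrimAnalysisWarnings>
</PropertyGroup>
```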

More information:

  • Trim warnings
  • Introduction to trimming
  • Prepare .NET libraries for trimming

Shared with Native AOT

We have implemented the same trim warnings for the Native AOT experiment as well, which should improve the Native AOT compilation experience in much the same way.

Math

We have significantly improved the math APIs. Some people in the community have already been enjoying these improvements.

Performance-oriented APIs

Performance-oriented math APIs have been added to System.Math. Their implementations are hardware-accelerated if the underlying hardware supports it.

The new API:

  • SinCos for simultaneously computing Sin and Cos.
  • ReciprocalEstimate for computing an approximation of 1 / x.
  • ReciprocalSqrtEstimate for computing an approximation of 1 / Sqrt(x).

New reloads:

  • Clamp, DivRem, Min, and Max support nint and nuint.
  • Abs and Sign support nint.
  • DivRem variants that return a tuple.
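A minimal sketch of the new API and overloads in use (the output comments assume default formatting):

```csharp
using System;

class MathDemo
{
    static void Main()
    {
        // SinCos computes both values in one call and returns a tuple.
        (double sin, double cos) = Math.SinCos(Math.PI / 2);
        Console.WriteLine($"{sin:F2} {cos:F2}");   // 1.00 0.00

        // The new DivRem overload returns (Quotient, Remainder) as a tuple.
        (int q, int r) = Math.DivRem(7, 3);
        Console.WriteLine($"{q} r {r}");           // 2 r 1
    }
}
```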

Performance improvement:

  • ScaleB was ported to C#, resulting in calls that are 93% faster. Thanks to Alex Corvington.

Large integer performance

Parsing of BigIntegers from both decimal and hexadecimal strings has been improved. We see improvements of up to 89%, as shown below (lower is better).

Thanks to Joseph da Silva.

Complex APIs are now annotated as readonly

Various APIs on System.Numerics.Complex are now annotated as readonly, ensuring that readonly values, or values passed by in reference, are not copied.

Credit to Hrrrrustic.

BitConverter now supports floating-point to unsigned integer bitcasts

BitConverter now supports DoubleToUInt64Bits, HalfToUInt16Bits, SingleToUInt32Bits, UInt16BitsToHalf, UInt32BitsToSingle, and UInt64BitsToDouble. This should make it easier to manipulate floating-point bits when needed.

Thanks to Michal Petryka.
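A small sketch (the hex constant is the IEEE 754 bit pattern of 1.0):

```csharp
using System;

class BitcastDemo
{
    static void Main()
    {
        // Reinterpret the bits of a double as an unsigned integer (no conversion).
        ulong bits = BitConverter.DoubleToUInt64Bits(1.0);
        Console.WriteLine(bits.ToString("X16"));                  // 3FF0000000000000

        // Round-trip back to the original floating-point value.
        Console.WriteLine(BitConverter.UInt64BitsToDouble(bits)); // 1
    }
}
```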

BitOperations supports additional functionality

BitOperations now supports IsPow2 and RoundUpToPowerOf2, and provides nint/nuint overloads of existing functions.

Thanks to John Kelly, Dwight Howard, and Robin Lindner.
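A quick sketch of the new helpers:

```csharp
using System;
using System.Numerics;

class BitOpsDemo
{
    static void Main()
    {
        Console.WriteLine(BitOperations.IsPow2(64));             // True
        Console.WriteLine(BitOperations.IsPow2(63));             // False
        Console.WriteLine(BitOperations.RoundUpToPowerOf2(37u)); // 64
    }
}
```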

Vector<T>, Vector2, Vector3, and Vector4 improvements

Vector<T> now supports the nint and nuint primitive types added in C# 9. For example, this change should make it easier to use SIMD instructions with pointers or platform-dependent-length types.

Vector<T> now supports a Sum method, simplifying the computation of the "horizontal sum" of all elements in a vector. Credit to Ivan Zlatanov.

Vector<T> now supports a generic As<TFrom, TTo> method, simplifying work with vectors in generic contexts where the concrete type isn't known. Thanks to Huo Yaoyuan.

Span<T> overloads have been added to Vector2, Vector3, and Vector4 to improve the experience when vector types need to be loaded from or stored to spans.
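A sketch combining the span-based load and the Sum helper (vector width is hardware-dependent, so the expected result is expressed relative to Vector<float>.Count):

```csharp
using System;
using System.Numerics;

class VectorDemo
{
    static void Main()
    {
        // Fill a span sized to the hardware vector width and load it.
        Span<float> data = stackalloc float[Vector<float>.Count];
        data.Fill(2f);
        var v = new Vector<float>(data);

        // New in .NET 6: horizontal sum of all elements.
        Console.WriteLine(Vector.Sum(v) == 2f * Vector<float>.Count); // True
    }
}
```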

Better parsing of standard number formats

We have improved the standard numeric type parsing and formatting routines, in particular .ToString and .TryFormat. They now understand precision requirements of more than 99 decimal places and provide accurate results to that many digits. In addition, the parser now has better support for trailing zeros in Parse methods.

The following example demonstrates the before and after behavior.

  • 32.ToString("C100") now produces a result accurate to 100 decimal places in .NET 6, where earlier versions capped the effective precision.

System.Text.Json

System.Text.Json provides multiple high-performance APIs for processing JSON documents. Over the last few releases, we've added new functionality to further improve JSON processing performance and to reduce friction for people who want to migrate from Newtonsoft.Json. This release continues on that path and is a major step forward on performance, particularly with the serializer source generator.

JsonSerializer source generated

Note: Applications that use source generation with .NET 6 RC1 or earlier builds should be recompiled.

The backbone of almost all .NET serializers is reflection. Reflection is a great capability for certain scenarios, but not as the basis of high-performance cloud-native applications, which typically (de)serialize and process a lot of JSON documents. Reflection is a problem for startup time, memory usage, and assembly trimming.

The alternative to runtime reflection is compile-time source generation. In .NET 6, we are including a new source generator as part of System.Text.Json. The JSON source generator works with JsonSerializer and can be configured in multiple ways.

It can provide the following benefits:

  • Reduced startup time
  • Improved serialization throughput
  • Reduce private memory usage
  • Removed run-time use of System.Reflection and System.Reflection.Emit
  • IL trimming compatibility

By default, the JSON source generator emits serialization logic for the given serializable types. This delivers higher performance than using the existing JsonSerializer methods, by generating source code that uses Utf8JsonWriter directly. In short, source generators offer a way of giving you a different implementation at compile time in order to make the run-time experience better.

Given a simple type:

namespace Test
{
    internal class JsonMessage
    {
        public string Message { get; set; }
    }
}

The source generator can be configured to generate serialization logic for instances of the sample JsonMessage type. Note that the class name JsonContext is arbitrary. You can use any class name you want for the generated source.

using System.Text.Json.Serialization;

namespace Test
{
    [JsonSerializable(typeof(JsonMessage))]
    internal partial class JsonContext : JsonSerializerContext
    {
    }
}

A serializer call using this pattern might look like the following example. This example provides the best possible performance.

using MemoryStream ms = new();
using Utf8JsonWriter writer = new(ms);

JsonSerializer.Serialize(writer, jsonMessage, JsonContext.Default.JsonMessage);
writer.Flush();

// Writer contains:
// {"Message":"Hello, world!"}

The fastest and most optimized source code generation mode — based on Utf8JsonWriter — is currently only available for serialization. Utf8JsonReader may provide similar support for deserialization in the future, depending on your feedback.

The source generator also emits type-metadata initialization logic, which benefits deserialization as well. To deserialize JsonMessage instances using the pre-generated type metadata, you can do the following:

JsonSerializer.Deserialize(json, JsonContext.Default.JsonMessage);

JsonSerializer support for IAsyncEnumerable

You can now (de)serialize IAsyncEnumerable values as JSON arrays using System.Text.Json. The following examples use a stream as a representation of any asynchronous source of data. The source could be files on a local machine, or results from a database query or a web service API call.

JsonSerializer.SerializeAsync has been updated to recognize and provide special handling for IAsyncEnumerable values.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

static async IAsyncEnumerable<int> PrintNumbers(int n)
{
    for (int i = 0; i < n; i++) yield return i;
}

using Stream stream = Console.OpenStandardOutput();
var data = new { Data = PrintNumbers(3) };
await JsonSerializer.SerializeAsync(stream, data); // prints {"Data":[0,1,2]}

IAsyncEnumerable values are only supported by the asynchronous serialization methods. Attempting to serialize with the synchronous methods results in a NotSupportedException being thrown.

Streaming deserialization required a new API that returns IAsyncEnumerable. We added the JsonSerializer.DeserializeAsyncEnumerable method for this purpose, as you can see in the following example.

using System;
using System.IO;
using System.Text;
using System.Text.Json;

var stream = new MemoryStream(Encoding.UTF8.GetBytes("[0,1,2,3,4]"));
await foreach (int item in JsonSerializer.DeserializeAsyncEnumerable<int>(stream))
{
    Console.WriteLine(item);
}

This example deserializes elements on demand and is useful when working with particularly large data streams. It only supports reading from root-level JSON arrays, although that restriction may be relaxed in the future based on feedback.

The existing DeserializeAsync method nominally supports IAsyncEnumerable, but within the confines of its non-streaming method signature. It must return the final result as a single value, as shown in the following example.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Text.Json;

var stream = new MemoryStream(Encoding.UTF8.GetBytes(@"{""Data"":[0,1,2,3,4]}"));
var result = await JsonSerializer.DeserializeAsync<MyPoco>(stream);
await foreach (int item in result.Data)
{
    Console.WriteLine(item);
}

public class MyPoco
{
    public IAsyncEnumerable<int> Data { get; set; }
}

In this example, the deserializer buffers all IAsyncEnumerable contents in memory before returning the deserialized object. This is because the deserializer needs to consume the entire JSON value before returning a result.

System.Text.Json: writable DOM feature

The writable JSON DOM feature adds a new straightforward and high-performance programming model to System.Text.Json. This new API is attractive because it avoids needing strongly-typed serialization contracts, and the DOM is mutable, as opposed to the existing JsonDocument type.

This new API has the following benefits:

  • A lightweight alternative to serialization for cases when use of POCO types is impossible or undesirable, or when a JSON schema is not fixed and must be inspected.
  • Enables efficient modification of a subset of a large tree. For example, you can efficiently navigate to a subsection of a large JSON tree and read an array or deserialize a POCO from that subsection. LINQ can also be used with that.

The following example demonstrates the new programming model.

    // Parse a JSON object
    JsonNode jNode = JsonNode.Parse("{\"MyProperty\":42}");
    int value = (int)jNode["MyProperty"];
    Debug.Assert(value == 42);
    // or
    value = jNode["MyProperty"].GetValue<int>();
    Debug.Assert(value == 42);

    // Parse a JSON array
    jNode = JsonNode.Parse("[10,11,12]");
    value = (int)jNode[1];
    Debug.Assert(value == 11);
    // or
    value = jNode[1].GetValue<int>();
    Debug.Assert(value == 11);

    // Create a new JsonObject using object initializers and array params
    var jObject = new JsonObject
    {
        ["MyChildObject"] = new JsonObject
        {
            ["MyProperty"] = "Hello",
            ["MyArray"] = new JsonArray(10, 11, 12)
        }
    };

    // Obtain the JSON from the new JsonObject
    string json = jObject.ToJsonString();
    Console.WriteLine(json); // {"MyChildObject":{"MyProperty":"Hello","MyArray":[10,11,12]}}

    // Indexers for property names and array elements are supported and can be chained
    Debug.Assert(jObject["MyChildObject"]["MyArray"][1].GetValue<int>() == 11);

ReferenceHandler.IgnoreCycles

JsonSerializer (System.Text.Json) now supports the ability to ignore cycles when serializing an object graph. The ReferenceHandler.IgnoreCycles option has similar behavior to Newtonsoft.Json's ReferenceLoopHandling.Ignore. One key difference is that the System.Text.Json implementation replaces reference cycles with the null JSON token instead of ignoring the object reference.

You can see the behavior of ReferenceHandler.IgnoreCycles in the following example. In this case, the Next property is serialized as null because it would otherwise create a cycle.

class Node
{
    public string Description { get; set; }
    public object Next { get; set; }
}

void Test()
{
    var node = new Node { Description = "Node 1" };
    node.Next = node;

    var opts = new JsonSerializerOptions { ReferenceHandler = ReferenceHandler.IgnoreCycles };

    string json = JsonSerializer.Serialize(node, opts);
    Console.WriteLine(json); // Prints {"Description":"Node 1","Next":null}
}
Copy the code

Source build

With source build, you can build the .NET SDK from source on your own computer with just a few commands. Let me explain why this project is important.

Source build is a scenario, and also an infrastructure, that we have been developing in collaboration with Red Hat since before .NET Core 1.0 shipped. Several years later, we are very close to delivering a fully automated version of it. This capability is important for Red Hat Enterprise Linux (RHEL) .NET users. Red Hat tells us that .NET has grown into an important developer platform for their ecosystem. Nice!

The gold standard for Linux distributions is to build open source code with the compilers and toolchains that are part of the distribution archive. That works for the .NET runtime (written in C++), but not for any of the code written in C#. For C# code, we use a two-pass build mechanism to satisfy distro requirements. It's a bit complicated, but it's important to understand the flow.

Red Hat builds the .NET SDK source using the Microsoft binary build of the .NET SDK (#1), producing a pure open source binary build of the SDK (#2). Then the same SDK source code is built again using this fresh build of the SDK (#2), producing a provably open source SDK (#3). This final binary build of the .NET SDK (#3) is then made available to RHEL users. After that, Red Hat can use this same SDK (#3) to build new .NET versions, and no longer needs to use a Microsoft-built SDK for monthly updates.

The process may be surprising and confusing at first. Open source distributions need to be built with open source tools. This pattern ensures that no Microsoft-built SDK is required, either by intention or by accident. As a developer platform, the bar for inclusion in a distribution is higher than just using a compatible license. The source build project enables .NET to meet that bar.

The deliverable of source build is a source tarball. The source tarball contains all the source for the SDK (for a given release). From there, Red Hat (or another organization) can build their own version of the SDK. Red Hat policy requires using a built-from-source toolchain to produce binary tarballs, which is why they use the two-pass methodology. But source build itself doesn't require this two-pass approach.

In the Linux ecosystem, it is common for a given component to have both source and binary packages or tarballs. We already had binary tarballs available, and now we have source tarballs as well. That makes .NET match the standard component pattern.

A major improvement in.NET 6 is that the source tarball is now a product we build. It used to require a lot of manual work, which led to long delays in delivering the source tarball to Red Hat. Neither side is happy about it.

We have worked closely with Red Hat on this project for over five years. Its success is due in large part to the efforts of the brilliant Red Hat engineers we have the privilege of working with. Other distributions and organizations have and will benefit from their efforts.

As a side note, source build is a big step toward reproducible builds, and we believe the .NET SDK and C# compiler have significant reproducible-build capabilities.

Library APIs

In addition to the APIs already covered, the following APIs have been added.

WebSocket compression

Compression is important for any data transmitted over a network. WebSockets now enable compression. We used an implementation of the permessage-deflate extension for WebSockets, RFC 7692. It allows compressing WebSocket message payloads with the DEFLATE algorithm. This feature was one of the top user requests for Networking on GitHub.

Compression used together with encryption can lead to attacks, such as CRIME and BREACH. It means that a secret cannot be sent together with user-generated data in a single compression context, or the secret could be extracted. To bring these implications to users' attention and help them weigh the risks, we named one of the key APIs DangerousDeflateOptions. We also added the ability to turn off compression for specific messages, so that if users want to send a secret, they can do so safely without compression.
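A small sketch of that per-message opt-out, assuming a connected WebSocket (the helper name and payload are illustrative):

```csharp
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Sends a single message with compression disabled for just that message,
// even if permessage-deflate was negotiated for the connection. This keeps
// secrets out of the shared compression context.
static Task SendUncompressedAsync(WebSocket ws, string payload) =>
    ws.SendAsync(
        Encoding.UTF8.GetBytes(payload),
        WebSocketMessageType.Text,
        WebSocketMessageFlags.EndOfMessage | WebSocketMessageFlags.DisableCompression,
        CancellationToken.None).AsTask();
```

After ConnectAsync succeeds, a call like `SendUncompressedAsync(ws, token)` would send that one frame uncompressed.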

WebSocket memory usage is reduced by about 27% when compression is disabled.

Enabling compression from the client is easy, as shown in the following example. However, bear in mind that the server can negotiate the settings, such as requesting a smaller window or refusing compression entirely.

var cws = new ClientWebSocket();
cws.Options.DangerousDeflateOptions = new WebSocketDeflateOptions()
{
    ClientMaxWindowBits = 10,
    ServerMaxWindowBits = 10
};

WebSocket compression support for ASP.NET Core has also been added.

Credit to Ivan Zlatanov.

SOCKS proxy support

SOCKS is a proxy server implementation that can process any TCP or UDP traffic, making it a very versatile system. It is a long-standing community request that has been added to .NET 6.

This change adds support for Socks4, Socks4a, and Socks5. For example, it enables testing external connections via SSH or connecting to the Tor network.

The WebProxy class now accepts SOCKS schemes, as shown in the following example.

var handler = new HttpClientHandler
{
    Proxy = new WebProxy("socks5://127.0.0.1", 9050)
};
var httpClient = new HttpClient(handler);

Credit to Huo Yaoyuan.

Microsoft.Extensions.Hosting – ConfigureHostOptions API

We added a new ConfigureHostOptions API on IHostBuilder to make application setup simpler (for example, configuring the shutdown timeout):

using IHost host = new HostBuilder()
    .ConfigureHostOptions(o =>
    {
        o.ShutdownTimeout = TimeSpan.FromMinutes(10);
    })
    .Build();

host.Run();

In .NET 5, configuring the host options was a bit more complicated:

using IHost host = new HostBuilder()
    .ConfigureServices(services =>
    {
        services.Configure<HostOptions>(o =>
        {
            o.ShutdownTimeout = TimeSpan.FromMinutes(10);
        });
    })
    .Build();

host.Run();

Microsoft.Extensions.DependencyInjection – CreateAsyncScope API

The CreateAsyncScope API was created to handle the disposal of IAsyncDisposable services. Previously, you might have noticed that disposing a service provider that contains IAsyncDisposable services could throw an InvalidOperationException.

The following example demonstrates the new schema where CreateAsyncScope is used to enable the safe use of using statements.

await using (var scope = provider.CreateAsyncScope())
{
    var foo = scope.ServiceProvider.GetRequiredService<Foo>();
}

The following example illustrates an existing problem case:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

await using var provider = new ServiceCollection()
        .AddScoped<Foo>()
        .BuildServiceProvider();

// This using can throw InvalidOperationException
using (var scope = provider.CreateScope())
{
    var foo = scope.ServiceProvider.GetRequiredService<Foo>();
}

class Foo : IAsyncDisposable
{
    public ValueTask DisposeAsync() => default;
}

The following pattern was a previously suggested workaround to avoid the exception. It is no longer needed.

var scope = provider.CreateScope();
var foo = scope.ServiceProvider.GetRequiredService<Foo>();
await ((IAsyncDisposable)scope).DisposeAsync();

Credit to Martin Bjorkstrom.

Microsoft.Extensions.Logging – compile-time source generator

.NET 6 introduces the LoggerMessageAttribute type. This attribute is part of the Microsoft.Extensions.Logging namespace, and when used, it source-generates performant logging APIs. The source-generated logging support is designed to deliver a highly usable and high-performance logging solution for modern .NET applications. The generated source code relies on the ILogger interface in conjunction with LoggerMessage.Define functionality.

The LoggerMessageAttribute source generator is triggered when it is used on partial logging methods. When triggered, it either autogenerates the implementation of the partial method it is decorating, or produces compile-time diagnostics with hints about proper usage. The compile-time logging solution is typically considerably faster at run time than existing logging approaches. It achieves this by eliminating boxing, temporary allocations, and copies to the maximum extent possible.

There are several advantages over using the LoggerMessage.Define API manually:

  • Shorter and simpler syntax: declarative attribute usage instead of coding boilerplate.
  • Guided developer experience: the generator gives warnings to help developers do the right thing.
  • Supports any number of logging parameters. LoggerMessage.Define supports a maximum of six.
  • Supports dynamic log levels. This is not possible with LoggerMessage.Define alone.

To use the LoggerMessageAttribute, the consuming class and method need to be partial. The code generator is triggered at compile time and generates an implementation of the partial method.

public static partial class Log
{
    [LoggerMessage(EventId = 0, Level = LogLevel.Critical, Message = "Could not open socket to `{hostName}`")]
    public static partial void CouldNotOpenSocket(ILogger logger, string hostName);
}

In the preceding example, the logging method is static and the log level is specified in the attribute definition. When using the attribute in a static context, an ILogger instance is required as a parameter. You may choose to use the attribute in a non-static context as well. For more examples and usage scenarios, visit the compile-time logging source generator documentation.
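As a sketch of the non-static flavor (the class name and message are hypothetical; the generated implementation resolves the ILogger from a field on the containing type):

```csharp
using Microsoft.Extensions.Logging;

public partial class ConnectionHandler
{
    private readonly ILogger _logger;

    public ConnectionHandler(ILogger logger) => _logger = logger;

    // No ILogger parameter here: the generated implementation uses the
    // _logger field of the containing type.
    [LoggerMessage(EventId = 1, Level = LogLevel.Warning, Message = "Connection to `{hostName}` dropped")]
    public partial void ConnectionDropped(string hostName);
}
```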

System.Linq – Enumerable support for Index and Range parameters

The Enumerable.ElementAt method now accepts indices from the end of the enumerable, as shown in the following example.

Enumerable.Range(1, 10).ElementAt(^2); // returns 9

Enumerable.Take overloads have been added that accept Range arguments. They simplify slicing enumerable sequences:

  • source.Take(..3) instead of source.Take(3)
  • source.Take(3..) instead of source.Skip(3)
  • source.Take(2..7) instead of source.Take(7).Skip(2)
  • source.Take(^3..) instead of source.TakeLast(3)
  • source.Take(..^3) instead of source.SkipLast(3)
  • source.Take(^7..^3) instead of source.TakeLast(7).SkipLast(3)
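The equivalences above can be sketched against a 0..9 source:

```csharp
using System;
using System.Linq;

int[] source = Enumerable.Range(0, 10).ToArray(); // { 0, 1, ..., 9 }

Console.WriteLine(string.Join(",", source.Take(..3)));    // 0,1,2
Console.WriteLine(string.Join(",", source.Take(3..)));    // 3,4,5,6,7,8,9
Console.WriteLine(string.Join(",", source.Take(2..7)));   // 2,3,4,5,6
Console.WriteLine(string.Join(",", source.Take(^3..)));   // 7,8,9
Console.WriteLine(string.Join(",", source.Take(..^3)));   // 0,1,2,3,4,5,6
Console.WriteLine(string.Join(",", source.Take(^7..^3))); // 3,4,5,6
```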

Credit to @dixin.

System.Linq – TryGetNonEnumeratedCount

The TryGetNonEnumeratedCount method attempts to obtain the count of the source enumerable without forcing an enumeration. This approach can be useful in scenarios where it is helpful to preallocate buffers ahead of enumeration, as shown in the following example.

List<T> buffer = source.TryGetNonEnumeratedCount(out int count)
    ? new List<T>(capacity: count)
    : new List<T>();

foreach (T item in source)
{
    buffer.Add(item);
}

TryGetNonEnumeratedCount checks for sources implementing ICollection/ICollection&lt;T&gt;, or takes advantage of some of the internal optimizations employed by LINQ.

System.Linq – DistinctBy/UnionBy/IntersectBy/ExceptBy

New variants have been added to the set operations that allow specifying equality using key selector functions, as shown in the following example.

Enumerable.Range(1, 20).DistinctBy(x => x % 3); // {1, 2, 3}

var first = new (string Name, int Age)[] { ("Francis", 20), ("Lindsey", 30), ("Ashley", 40) };
var second = new (string Name, int Age)[] { ("Claire", 30), ("Pat", 30), ("Drew", 33) };
first.UnionBy(second, person => person.Age); // { ("Francis", 20), ("Lindsey", 30), ("Ashley", 40), ("Drew", 33) }

System.Linq – MaxBy / MinBy

The MaxBy and MinBy methods allow finding the maximal or minimal element using a key selector, as shown in the following example.

var people = new (string Name, int Age)[] { ("Francis", 20), ("Lindsey", 30), ("Ashley", 40) };
people.MaxBy(person => person.Age); // ("Ashley", 40)

System.Linq – Chunk

Chunk can be used to split a source enumerable into slices of a fixed size, as shown in the following example.

IEnumerable<int[]> chunks = Enumerable.Range(0, 10).Chunk(size: 3); // { {0,1,2}, {3,4,5}, {6,7,8}, {9} }

Credit to Robert Anderson.

System.Linq – FirstOrDefault/LastOrDefault/SingleOrDefault overloads taking default parameters

The existing FirstOrDefault/LastOrDefault/SingleOrDefault methods return default(T) if the source enumerable is empty. New overloads have been added that accept a default value to be returned in that case, as shown in the following example.

Enumerable.Empty<int>().SingleOrDefault(-1); // returns -1

Credit to @Foxtrek64.

System.Linq – Zip accepts three enumerables

The Zip method now supports combining three enumerables, as shown in the following example.

var xs = Enumerable.Range(1, 10);
var ys = xs.Select(x => x.ToString());
var zs = xs.Select(x => x % 2 == 0);

foreach ((int x, string y, bool z) in Enumerable.Zip(xs, ys, zs))
{
}

Credit to Huo Yaoyuan.

Priority queue

PriorityQueue&lt;TElement, TPriority&gt; (System.Collections.Generic) is a new collection that enables adding new items with a value and a priority. On dequeue, the PriorityQueue returns the element with the lowest priority value. You can think of this new collection as similar to Queue&lt;T&gt;, but where each enqueued element has a priority value that affects the dequeue behavior.

The following example demonstrates the behavior of PriorityQueue&lt;string, int&gt;.

// creates a priority queue of strings with integer priorities
var pq = new PriorityQueue<string, int>();

// enqueue elements with associated priorities
pq.Enqueue("A".3);
pq.Enqueue("B".1);
pq.Enqueue("C".2);
pq.Enqueue("D".3);

pq.Dequeue(); // returns "B"
pq.Dequeue(); // returns "C"
pq.Dequeue(); // either "A" or "D", stability is not guaranteed.

Credit to Patryk Golebiowski.

Faster processing of structures as dictionary values

CollectionsMarshal.GetValueRefOrNullRef is a new unsafe API that makes updating struct values in dictionaries faster. The new API is intended for high-performance scenarios, not for general-purpose use. It returns a ref to the struct value, which can then be updated in place using typical techniques.

The following example demonstrates how to use the new API:

ref MyStruct value = ref CollectionsMarshal.GetValueRefOrNullRef(dictionary, key);
// Returns Unsafe.NullRef<TValue>() if it doesn't exist; check using Unsafe.IsNullRef(ref value)
if (!Unsafe.IsNullRef(ref value))
{
    // Mutate in-place
    value.MyInt++;
}

Prior to this change, updating struct dictionary values could be expensive for high-performance scenarios, requiring a dictionary lookup and a copy of the struct to the stack; then, after changing the struct, it would be assigned to the dictionary key again, resulting in another lookup and another copy. This improvement reduces the key hashing to one operation (from two) and removes all the struct copy operations.

Credit to Ben Adams.

New DateOnly and TimeOnly structs

Date- and time-only structs have been added, with the following characteristics:

  • Each represents one half of a DateTime: either only the date part, or only the time part.
  • DateOnly is ideal for birthdays, anniversaries, and business days. It aligns with SQL Server's date type.
  • TimeOnly is ideal for recurring meetings, alarm clocks, and weekly business hours. It aligns with SQL Server's time type.
  • They complement the existing date/time types (DateTime, DateTimeOffset, TimeSpan, TimeZoneInfo).
  • They live in the System namespace and ship in CoreLib, just like the existing related types.
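A brief sketch of how the new types compose with DateTime (the date and time values are illustrative):

```csharp
using System;

var date = new DateOnly(2021, 11, 8); // no time component
var time = new TimeOnly(9, 30);       // no date component

// Combine the two halves into a DateTime, and split one back apart
DateTime meeting = date.ToDateTime(time);
DateOnly datePart = DateOnly.FromDateTime(meeting);
TimeOnly timePart = TimeOnly.FromDateTime(meeting);

Console.WriteLine(datePart.DayOfWeek); // Monday
Console.WriteLine(timePart.Hour);      // 9
```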

Performance improvements to DateTime.UtcNow

This improvement has the following benefits:

  • Fixed a 2.5x performance regression for getting the system time on Windows.
  • Leverages a 5-minute sliding cache of Windows leap second data instead of fetching it on every call.

Support for Windows and IANA time zones on all platforms

This improvement has the following benefits:

  • Implicit conversion when using TimeZoneInfo.FindSystemTimeZoneById (github.com/dotnet/runt…)
  • Explicit conversion through new APIs on TimeZoneInfo: TryConvertIanaIdToWindowsId, TryConvertWindowsIdToIanaId, and HasIanaId (github.com/dotnet/runt…)
  • Improved cross-platform support and interoperability between systems using different time zone types.
  • Removes the need for the TimeZoneConverter OSS library. The functionality is now built in.
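A sketch of the explicit conversion APIs (the IDs shown are common, well-known zone names):

```csharp
using System;

// Windows ID -> IANA ID
if (TimeZoneInfo.TryConvertWindowsIdToIanaId("Pacific Standard Time", out string ianaId))
{
    Console.WriteLine(ianaId); // America/Los_Angeles
}

// IANA ID -> Windows ID
if (TimeZoneInfo.TryConvertIanaIdToWindowsId("America/Los_Angeles", out string windowsId))
{
    Console.WriteLine(windowsId); // Pacific Standard Time
}
```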

Improved time zone display name

Time zone display names have been improved on Unix:

  • Disambiguated the display names in the list returned by TimeZoneInfo.GetSystemTimeZones.
  • Uses ICU/CLDR globalization data.
  • Unix only for now. Windows still uses the registry data. This may change in the future.

The following additional improvements have also been made:

  • The display name and standard name for the UTC time zone were hardcoded to English and now use the same language as the rest of the time zone data (CurrentUICulture on Unix, the default OS language on Windows).
  • Time zone display names in Wasm use the non-localized IANA ID instead, due to size limitations.
  • The TimeZoneInfo.AdjustmentRule nested class exposes its BaseUtcOffsetDelta internal property and gets a new constructor that takes baseUtcOffsetDelta as a parameter. (github.com/dotnet/runt…)
  • TimeZoneInfo.AdjustmentRule also got various fixes for loading time zones on Unix (github.com/dotnet/runt…), (github.com/dotnet/runt…)

Improved support for Windows ACLs

System.Threading.AccessControl now includes improved support for interacting with Windows access control lists (ACLs). New overloads were added to the OpenExisting and TryOpenExisting methods of EventWaitHandle, Mutex, and Semaphore. These overloads, which take "security rights" instances, allow opening existing instances of threading synchronization objects that were created with special Windows security attributes.

This update matches the APIs available in the .NET Framework and has the same behavior.

The following examples demonstrate how to use these new APIs.

For the Mutex:

var rights = MutexRights.FullControl;
string mutexName = "MyMutexName";

var security = new MutexSecurity();
SecurityIdentifier identity = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
MutexAccessRule accessRule = new MutexAccessRule(identity, rights, AccessControlType.Allow);
security.AddAccessRule(accessRule);

// createdMutex, openedMutex1 and openedMutex2 point to the same mutex
Mutex createdMutex = MutexAcl.Create(initiallyOwned: true, mutexName, out bool createdNew, security);
Mutex openedMutex1 = MutexAcl.OpenExisting(mutexName, rights);
MutexAcl.TryOpenExisting(mutexName, rights, out Mutex openedMutex2);

For Semaphore:

var rights = SemaphoreRights.FullControl;
string semaphoreName = "MySemaphoreName";

var security = new SemaphoreSecurity();
SecurityIdentifier identity = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
SemaphoreAccessRule accessRule = new SemaphoreAccessRule(identity, rights, AccessControlType.Allow);
security.AddAccessRule(accessRule);

// createdSemaphore, openedSemaphore1 and openedSemaphore2 point to the same semaphore
Semaphore createdSemaphore = SemaphoreAcl.Create(initialCount: 1,  maximumCount: 3, semaphoreName, out bool createdNew, security);
Semaphore openedSemaphore1 = SemaphoreAcl.OpenExisting(semaphoreName, rights);
SemaphoreAcl.TryOpenExisting(semaphoreName, rights, out Semaphore openedSemaphore2);

For EventWaitHandle:

var rights = EventWaitHandleRights.FullControl;
string eventWaitHandleName = "MyEventWaitHandleName";

var security = new EventWaitHandleSecurity();
SecurityIdentifier identity = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
EventWaitHandleAccessRule accessRule = new EventWaitHandleAccessRule(identity, rights, AccessControlType.Allow);
security.AddAccessRule(accessRule);

// createdHandle, openedHandle1 and openedHandle2 point to the same event wait handle
EventWaitHandle createdHandle = EventWaitHandleAcl.Create(initialState: true, EventResetMode.AutoReset, eventWaitHandleName, out bool createdNew, security);
EventWaitHandle openedHandle1 = EventWaitHandleAcl.OpenExisting(eventWaitHandleName, rights);
EventWaitHandleAcl.TryOpenExisting(eventWaitHandleName, rights, out EventWaitHandle openedHandle2);

HMAC one-shot methods

The System.Security.Cryptography HMAC classes now have static methods that allow one-shot HMAC computation without allocations. These additions are similar to the one-shot methods for hash generation that were added in a previous release.
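A sketch of the one-shot pattern, here with HMACSHA256 (the key and message are illustrative):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

byte[] key = RandomNumberGenerator.GetBytes(32);          // also new in .NET 6
byte[] message = Encoding.UTF8.GetBytes("Hello, world!");

// One-shot: no HMACSHA256 instance to allocate and dispose
byte[] mac = HMACSHA256.HashData(key, message);

Console.WriteLine(Convert.ToHexString(mac));
```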

DependentHandle is now public

The DependentHandle type is now public, with the following API surface:

namespace System.Runtime
{
    public struct DependentHandle : IDisposable
    {
        public DependentHandle(object? target, object? dependent);
        public bool IsAllocated { get; }
        public object? Target { get; set; }
        public object? Dependent { get; set; }
        public (object? Target, object? Dependent) TargetAndDependent { get; }
        public void Dispose();
    }
}

It can be used to create advanced systems, such as sophisticated caching systems or customized versions of the ConditionalWeakTable&lt;TKey, TValue&gt; type. For instance, it will be used by the WeakReferenceMessenger type in the MVVM Toolkit to avoid memory allocations when broadcasting messages.
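A minimal sketch of the lifetime semantics (the values are illustrative):

```csharp
using System;
using System.Runtime;

object target = new object();
var dependent = new byte[16];

// 'dependent' is kept alive only while 'target' is alive and the handle
// remains allocated; this is the primitive behind ConditionalWeakTable-style caches.
var handle = new DependentHandle(target, dependent);

Console.WriteLine(handle.IsAllocated);            // True
Console.WriteLine(handle.Dependent == dependent); // True

GC.KeepAlive(target); // keep 'target' alive through the reads above

handle.Dispose(); // releases the dependency
Console.WriteLine(handle.IsAllocated);            // False
```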

Portable thread pools

The .NET thread pool has been re-implemented as a managed implementation and is now used as the default thread pool in .NET 6. We made this change so that all .NET applications have access to the same thread pool, regardless of whether CoreCLR, Mono, or any other runtime is being used. We have not observed, and do not anticipate, any functional or performance impact as part of this change.

RyuJIT

The team has made many improvements to the .NET JIT compiler in this release, documented in each of the preview posts. Most of these changes improve performance. Some of the RyuJIT highlights are covered here.

Dynamic PGO

In .NET 6, we enabled two forms of PGO (profile-guided optimization):

  • Dynamic PGO uses data collected from the current run to optimize the current run.
  • Static PGO relies on data collected from past runs to optimize future runs.

Dynamic PGO was covered in the performance section earlier in this article. I'll provide a recap.

Dynamic PGO enables the JIT to collect information at run time about the code paths and types that are actually used for a given run of an application. The JIT can then optimize the code for those code paths, sometimes with dramatic performance improvements. We have seen healthy double-digit improvements in both testing and production. There is a classic set of compiler techniques that aren't possible with either JIT or ahead-of-time compilation without PGO. We can now apply those techniques. Hot/cold splitting is one such technique, and devirtualization is another.

To enable dynamic PGO, set DOTNET_TieredPGO=1 in the environment where your application will run.

As described in the performance section, dynamic PGO increased the requests per second of the TechEmpower JSON "MVC" suite by 26% (510K -> 640K). That's an amazing improvement with no code changes.

Our goal is to enable dynamic PGO by default in a future release of .NET, hopefully .NET 7. We strongly encourage you to try dynamic PGO in your applications and give us feedback.

Full PGO

To take full advantage of dynamic PGO, you can set two additional environment variables: DOTNET_TC_QuickJitForLoops=1 and DOTNET_ReadyToRun=0. This ensures that as many methods as possible participate in tiered compilation. We call this variant full PGO. Full PGO can deliver a larger steady-state performance benefit than dynamic PGO, but at the cost of slower startup (since more methods must be jitted at tier 0).

You wouldn't want to use this option for a short-running serverless application, but it can make sense for a long-running one.

In future releases, we plan to streamline and simplify these options so that you can more easily get the benefits of full PGO and use them for a wider range of applications.

Static PGO

We currently use static PGO to optimize .NET library assemblies, such as System.Private.CoreLib, that ship with R2R (Ready To Run).

The benefit of static PGO is that the optimizations are applied when the assemblies are compiled to R2R format with crossgen. That means there is run-time benefit with no run-time cost. This is very important, and is the same reason why PGO matters for C++, for example.

Loop alignment

Memory alignment is a common requirement for various operations in modern computing. In .NET 5, we started aligning methods on 32-byte boundaries. In .NET 6, we added a feature that performs adaptive loop alignment, adding NOP padding instructions in methods that have loops so that the loop code starts at a mod(16) or mod(32) memory address. These changes improved and stabilized the performance of .NET code.

In the following bubble-sort chart, data point 1 marks the point at which we started aligning methods at 32-byte boundaries. Data point 2 marks the point at which we also started aligning inner loops. As you can see, both the performance and the stability of the benchmark improve considerably.

Hardware-accelerated structs

Structs are an important part of the CLR type system. In recent years, they have frequently been used as performance primitives throughout the .NET libraries. Recent examples are ValueTask, ValueTuple, and Span&lt;T&gt;. Record structs are a new example. In .NET 5 and .NET 6, we have been improving performance for structs, in part by ensuring that structs can be held in ultra-fast CPU registers when they are locals, arguments, or return values of methods. This is particularly useful for APIs that compute with vectors.

Stability measurement

There is a lot of engineering-system work on the team that never appears on the blog; that is true of any hardware or software product you use. The JIT team ran a project to stabilize performance measurements, with the goal of increasing the value of the regressions automatically reported by our internal performance-lab automation. The project is interesting because it required in-depth investigation and product changes to achieve stability. It also shows the scale at which we measure in order to maintain and improve performance.

This image shows an unstable performance measurement, where performance fluctuates between slow and fast across consecutive runs. The x-axis is the test date, and the y-axis is the test duration in nanoseconds. By the end of the chart (after the relevant changes were committed), you can see that the measurements have stabilized. This image shows a single test; there are more tests in dotnet/runtime #43227 that demonstrate similar behavior.

Ready-to-run code / Crossgen2

Crossgen2 is a replacement for the crossgen tool. It is intended to satisfy two outcomes:

  • Make CrossGen development more efficient.
  • Enable a set of features that are currently unavailable with CrossGen.

This transition is somewhat similar to the rewrite of the native-code csc.exe compiler as the managed-code Roslyn compiler. Crossgen2 is written in C#, but it does not expose a fancy API the way Roslyn does.

We have perhaps a half-dozen projects planned for .NET 6 and 7 that rely on crossgen2. The vector-instruction default proposal is a good example of a crossgen2 capability and product change that we would like for .NET 6 but that is more likely to land in .NET 7. Version bubbles are another good example.

Crossgen2 supports cross-compilation across operating system and architecture dimensions (hence the name "crossgen"). This means that you can use a single build machine to generate native code for all targets, at least as far as ready-to-run code is concerned. Running and testing that code is another matter, for which you need the appropriate hardware and operating system.
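As a sketch of how this surfaces to application developers, ready-to-run compilation (which invokes crossgen2 in .NET 6) is enabled through ordinary publish properties; the linux-arm64 target below is just an example of cross-targeting from, say, an x64 build machine:

```xml
<!-- In the application .csproj: publish with ready-to-run native code.
     Running and testing the output still requires matching hardware and OS. -->
<PropertyGroup>
  <PublishReadyToRun>true</PublishReadyToRun>
  <RuntimeIdentifier>linux-arm64</RuntimeIdentifier>
</PropertyGroup>
```

Publishing with dotnet publish -c Release then produces R2R images for the specified target.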

The first step was to compile the platform itself with crossgen2. We did that for all architectures in .NET 6 and, as a result, were able to retire the old crossgen in this release. Note that crossgen2 applies only to CoreCLR, not to Mono-based applications (which have their own set of code-generation tools).

The project, at least initially, was not performance-oriented. The goal is a better architecture for hosting the RyuJIT (or any other) compiler to generate code in an offline manner (without requiring or starting the runtime).

You might say, "Hey... if it's written in C#, don't you need to start the runtime to run crossgen2?" Yes, but that's not what "offline" means in this context. When crossgen2 runs, we don't use the JIT that comes with the runtime running crossgen2 to generate ready-to-run (R2R) code. That wouldn't work, at least not for our goals. Imagine crossgen2 running on an x64 machine and needing to generate code for Arm64. Crossgen2 loads the Arm64 RyuJIT (compiled for x64) as a native plug-in and uses it to generate Arm64 R2R code. The machine instructions are just a stream of bytes saved to a file. It also works in the opposite direction: on Arm64, crossgen2 can generate x64 code using an x64 RyuJIT compiled for Arm64. We use the same approach for x64 code on an x64 machine; crossgen2 loads a RyuJIT built for whatever configuration is needed. That may seem convoluted, but it's the kind of system you need to enable a seamless cross-targeting model, and that's exactly what we want.

We expect to use the "crossgen2" name for just one release, after which it will replace the existing crossgen, and then we will go back to using "crossgen" to mean "crossgen2".

.NET diagnostics: EventPipe

EventPipe is our cross-platform mechanism for egressing events, performance data, and counters, both in-process and out-of-process. In .NET 6, we moved the implementation from C++ to C. With that change, Mono uses EventPipe as well. This means that CoreCLR and Mono use the same eventing infrastructure, including the .NET diagnostics CLI tools.

This change also came with a small reduction in the size of CoreCLR:

| Library | Size before | Size after | Difference |
|---|---|---|---|
| libcoreclr.so | 7,049,408 bytes | 7,037,856 bytes | -11,552 bytes |

We also made some changes to improve EventPipe throughput under load. Over the first few preview releases, we made a series of changes that brought throughput to 2.06 times what .NET 5 achieved:

For this benchmark, higher is better. .NET 6 is the orange line and .NET 5 is the blue line.

SDK

The following improvements have been made to the.NET SDK.

.NET 6 SDK optional workloads, installable from the CLI

.NET 6 introduces the concept of SDK workloads. Workloads are optional components that can be installed on top of the .NET SDK to enable various scenarios. The new workloads in .NET 6 are the .NET MAUI and Blazor WebAssembly AOT workloads. We may create new workloads in .NET 7 (possibly carved out of the existing SDK). The biggest benefits of workloads are smaller size and selectivity: we want to make the SDK smaller over time and install only the components you need. This model is good for developer machines, and even better for CI.

Visual Studio users don't really need to worry about workloads. The workload feature was designed so that an installation orchestrator like Visual Studio can install workloads for you. If you prefer, you can manage workloads directly through the CLI.

The workload feature exposes several verbs for managing workloads, including the following:

  • dotnet workload restore – installs the workloads required by a given project.
  • dotnet workload install – installs a named workload.
  • dotnet workload list – lists the workloads you have installed.
  • dotnet workload update – updates all installed workloads to the newest available versions.

The update verb queries nuget.org for updated workload manifests, updates the local manifests, downloads new versions of the installed workloads, and then removes all old versions. This is analogous to apt update && apt upgrade -y (for Debian-based Linux distributions). It is reasonable to think of workloads as a private package manager for the SDK. It is private in the sense that it applies only to SDK components; we may reconsider that in the future. These dotnet workload commands operate in the context of the active SDK. Suppose you have both .NET 6 and .NET 7 installed: the workload commands will give different results for each SDK, because the workloads will differ (at least by version).

Note that workloads from NuGet.org are copied into your SDK installation, so dotnet workload install needs to run elevated, or with sudo, if the SDK install location is protected (that is, in an admin/root location).

Built-in SDK version check

To make it easier to keep track of when new versions of the SDK and runtimes are available, the .NET 6 SDK adds a new command:

dotnet sdk check

It tells you whether a newer version is available for any of the .NET SDKs, runtimes, or workloads you have installed. You can see the new experience in the following image.

dotnet new

You can now search for templates on NuGet.org with dotnet new --search.

Other improvements to template installation include support for the --interactive switch, which enables supplying authorization credentials for private NuGet feeds.

Once CLI templates are installed, you can check whether updates are available and apply them with --update-check and --update-apply.

NuGet package validation

The package validation tool enables NuGet library developers to verify that their packages are consistent and properly formed.

This includes:

  • Verifying that there are no breaking changes between versions.
  • Verifying that the package has the same set of public APIs across all runtime-specific implementations.
  • Identifying any applicability gaps for target frameworks or runtimes.

This tool is part of the SDK. The easiest way to use it is to set a new property in the project file.

<EnablePackageValidation>true</EnablePackageValidation>
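A slightly fuller sketch of a library project opting in, including the optional baseline check for breaking changes against a previously shipped version (the version number below is hypothetical):

```xml
<PropertyGroup>
  <EnablePackageValidation>true</EnablePackageValidation>
  <!-- Optionally also compare against a shipped version to catch breaking
       changes (hypothetical version number). -->
  <PackageValidationBaselineVersion>1.0.0</PackageValidationBaselineVersion>
</PropertyGroup>
```

With these properties set, validation runs as part of packing, and failures surface as build errors.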

More Roslyn analyzers

In .NET 5, we shipped roughly 250 analyzers with the .NET SDK. Many of them already existed but had shipped out-of-band as NuGet packages. We have added more analyzers in .NET 6.

By default, most of the new analyzers are enabled at Info severity. You can enable these analyzers at Warning severity by configuring AnalysisMode as follows:

<AnalysisMode>All</AnalysisMode>
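In project-file terms, that property sits in an ordinary PropertyGroup, as in this sketch:

```xml
<PropertyGroup>
  <!-- Enable all .NET analyzer rules as build warnings (requires the .NET 6 SDK). -->
  <AnalysisMode>All</AnalysisMode>
</PropertyGroup>
```

Individual rule severities can still be overridden per-rule in an .editorconfig file.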

We published the set of analyzers we wanted for .NET 6 (plus some extras) and then marked most of them as up-for-grabs. The community added several implementations, including these:

| Contributor | Issue | Title |
|---|---|---|
| NewellClark | dotnet/runtime #33777 | Use span-based string.Concat |
| NewellClark | dotnet/runtime #33784 | Prefer String.AsSpan() over String.Substring() |
| NewellClark | dotnet/runtime #33789 | Override Stream.ReadAsync/WriteAsync |
| NewellClark | dotnet/runtime #35343 | Replace Dictionary<,>.Keys.Contains with ContainsKey |
| NewellClark | dotnet/runtime #45552 | Use String.Equals instead of String.Compare |
| MeikTranel | dotnet/runtime #47180 | Prefer String.Contains(char) over String.Contains(String) |

Thanks to Meik Tranel and Newell Clark.

Enabling custom guards for the platform compatibility analyzer

The CA1416 platform compatibility analyzer already recognizes platform guards using the methods in OperatingSystem and RuntimeInformation, such as OperatingSystem.IsWindows and OperatingSystem.IsWindowsVersionAtLeast. However, the analyzer does not recognize any other guard possibilities, such as platform-check results cached in a field or property, or complex platform-check logic defined in a helper method.

To allow for custom guard possibilities, we added the new attributes SupportedOSPlatformGuard and UnsupportedOSPlatformGuard, which annotate custom guard members with the corresponding platform name and/or version. These annotations are recognized and respected by the platform compatibility analyzer's flow analysis.

Usage:

    [UnsupportedOSPlatformGuard("browser")] // The platform guard attribute
#if TARGET_BROWSER
    internal bool IsSupported => false;
#else
    internal bool IsSupported => true;
#endif

    [UnsupportedOSPlatform("browser")]
    void ApiNotSupportedOnBrowser() { }

    void M1()
    {
        ApiNotSupportedOnBrowser();  // Warns: This call site is reachable on all platforms. 'ApiNotSupportedOnBrowser()' is unsupported on: 'browser'

        if (IsSupported)
        {
            ApiNotSupportedOnBrowser();  // Does not warn
        }
    }

    [SupportedOSPlatform("Windows")]
    [SupportedOSPlatform("Linux")]
    void ApiOnlyWorkOnWindowsLinux() { }

    [SupportedOSPlatformGuard("Linux")]
    [SupportedOSPlatformGuard("Windows")]
    private readonly bool _isWindowOrLinux = OperatingSystem.IsLinux() || OperatingSystem.IsWindows();

    void M2()
    {
        ApiOnlyWorkOnWindowsLinux();  // Warns: This call site is reachable on all platforms. 'ApiOnlyWorkOnWindowsLinux()' is only supported on: 'Linux', 'Windows'.

        if (_isWindowOrLinux)
        {
            ApiOnlyWorkOnWindowsLinux();  // Does not warn
        }
    }

Closing

Welcome to .NET 6. It is another huge .NET release, with many improvements to performance, functionality, usability, and security. We hope you find many improvements that ultimately make you more efficient and capable in your daily development, and that improve performance or reduce the costs of your apps in production. We are already hearing good things from those who have started using .NET 6.

At Microsoft, we are also still early in our own .NET 6 deployment, but key applications are already in production, with more to come in the coming weeks and months.

.NET 6 is our latest LTS release. We encourage everyone to move to it, especially if you are using .NET 5. We expect it to be the fastest-adopted .NET version ever.

This release is the result of at least 1,000 people, and probably many more, including Microsoft's .NET team and many others in the community. I have tried to include many community-contributed features in this post. Thank you for taking the time to create those contributions and see them through our process. I hope the experience was a good one and that even more people will contribute.

This post is the result of collaboration among many talented people. Contributions include feature content provided by the team throughout the release, significant new content created for this final post, and the many technical and prose corrections needed to bring the final content to the quality you deserve. It was a pleasure to make it, and all the other posts, for you.

Thank you for being a.NET developer.