Garbage Collection in .NET Core Simplified

Garbage Collection is a common feature of most modern high-level programming languages, relieving developers of the burden of manual memory allocation and management.

In this article, let’s look at some of the most important aspects of the Garbage Collector and the things developers need to keep in mind while building memory-optimized applications in .NET Core.

GC is a feature from CLR

  • Garbage Collector is one of the features provided by the Common Language Runtime (CLR) of .NET Core, alongside other features such as JIT compilation.
  • It is responsible for memory allocation and management in .NET Core applications.
  • It allocates heap segments as required by the application at runtime, where each segment is a contiguous block of memory.
  • The GC tracks all objects that are created and supported by the CLR. Code whose memory is managed by the GC in this way is called Managed Code.

GC follows a Generational Allocation approach

  • The Garbage Collector follows a generational approach, where each object whose memory it manages is categorized into one of three generations – Gen 0, Gen 1 and Gen 2.
  • Gen 0 consists of objects with the shortest lifetimes, which are garbage collected frequently. Gen 2 consists of objects with the longest lifetimes (such as singletons), which are collected infrequently.
  • Objects that survive collections are promoted to higher generations. For example, objects created during a web request (scoped objects) generally have the shortest lifetimes and hence start out in Gen 0, whereas singleton services live for the lifetime of the application and end up in Gen 2.
  • When an ASP.NET Core application starts:
    • the Garbage Collector reserves a small amount of memory for the initial heap segments
    • it also commits a small portion of memory when the runtime is loaded
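The promotion between generations described above can be observed directly through the runtime's GC APIs. A minimal console sketch (GC.GetGeneration reports which generation an object currently lives in; exact results can vary with GC settings):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        var obj = new object();
        // Freshly allocated small objects start in Gen 0
        Console.WriteLine(GC.GetGeneration(obj)); // typically 0

        // An object that survives a collection is promoted
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically 1

        // Surviving another collection promotes it again
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically 2
    }
}
```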

GC works in two Modes

  • Garbage Collection in .NET Core has two modes:
    • Workstation mode – optimized for client machines and single-node workloads
    • Server mode – the default for all ASP.NET Core applications, optimized for multi-core server environments
  • Server mode GC is not available on machines with a single-core CPU.
  • On web servers, where throughput matters more than memory footprint, the GC runs in Server mode for better performance. If an application's memory usage is high while CPU is less of a concern – for example, multiple containers running on a single machine – Workstation mode might give better results.
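The mode can be configured explicitly. A sketch of the project-file setting (the equivalent runtimeconfig.json knob is "System.GC.Server"):

```xml
<!-- .csproj: opt out of Server GC, e.g. for memory-constrained containers -->
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
```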

The threat of Memory Leaks

  • The Garbage Collector treats objects that are still referenced as alive and can’t free them. Objects that are referenced but no longer actually used therefore result in Memory Leaks. Hence it is the developer’s responsibility to ensure that all objects created inside a class or a controller are actually being used. This is particularly important where static instance variables are involved.
  • If an application keeps allocating new objects but fails to release references to them once they are no longer needed, memory usage grows over time and can eventually crash the application. A classic example is adding strings to a static collection that is never cleared.
public class MyController : ControllerBase {

	private static List<string> _leakingStrings = new List<string>();

	public string Get() {
		string str = "Some Random String";
		_leakingStrings.Add(str); // the static list keeps every string alive forever
		return str;
	}
}
In the above example, although the scope of the variable str is limited to the action method and its memory should become eligible for release after the value is returned, it is also referenced from a static string list, so the Garbage Collector can’t release it even though we’re no longer using it. This results in a memory leak and a possible application crash due to high memory usage.

Handling Native Memory is the developer’s responsibility

  • Some .NET Core applications rely on native resources (such as files, database connections, sockets etc.) for their functionality. The Garbage Collector can’t track these and hence can’t manage their memory automatically. It is the developer’s responsibility to ensure the application disposes of such objects once their use is over.
  • Code whose memory the GC can’t manage is called Unmanaged Code.
  • The .NET runtime provides the IDisposable interface to let developers release native resources. A correctly implemented IDisposable class releases them in its Dispose() method, with the finalizer acting only as a safety net in case Dispose() is never called. For this reason, it’s highly recommended to instantiate classes that implement IDisposable within a using block.
using (var myVar = new MyDisposableClass()) {
	// some logic using the MyDisposableClass instance
}
// Since MyDisposableClass implements IDisposable
// and its instance is created within a using block,
// Dispose is invoked implicitly when the block exits,
// even if an exception is thrown.
// This doesn't happen if we don't wrap it in a using block.

One quick question — If we don’t use an IDisposable in a using block, will the Dispose method be called?
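For context on that question: the using statement is compiler sugar for a try/finally block. A rough sketch of what the compiler generates (reusing the article's hypothetical MyDisposableClass):

```csharp
// What the compiler roughly produces for the using block above:
var myVar = new MyDisposableClass();
try {
	// some logic using the instance
}
finally {
	// Dispose runs even if an exception is thrown; without using
	// (or an explicit try/finally like this), nothing calls it for us
	myVar?.Dispose();
}
```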

Large Objects are an Issue

  • Frequent memory allocation and release can fragment memory, especially when large, contiguous chunks of memory are involved.
  • To manage this, the GC defragments memory after a collection by moving surviving objects together; this is called Compaction. Moving large objects comes with a performance penalty, so the GC keeps objects of roughly 85,000 bytes or more in a dedicated zone called the Large Object Heap (LOH), which is not compacted by default.
  • When the LOH is full, the GC performs a Gen 2 garbage collection, which is slow and also triggers Gen 0 and Gen 1 collections.
  • Hence for better performance, the use of large objects must be minimized: keep classes from growing too large and, where possible, split large objects into smaller ones. For example, the Response Caching Middleware in an ASP.NET Core web application splits up response buffers, ensuring better performance.
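The LOH behavior can be observed, and a one-off compaction requested, through the runtime's GCSettings API. A small sketch:

```csharp
using System;
using System.Runtime;

class LohDemo
{
    static void Main()
    {
        // Arrays of roughly 85,000 bytes or more are allocated on the LOH,
        // which the runtime reports as Gen 2
        var large = new byte[100_000];
        Console.WriteLine(GC.GetGeneration(large)); // 2

        // The LOH is not compacted by default, but a single compaction
        // can be requested for the next blocking full collection
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}
```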

Not all Objects are meant to be disposed of after use

  • Incorrect use of libraries that wrap native resources – such as HttpClient, SqlConnection, sockets and files – can cause resource leaks. These resources are scarcer than memory and more problematic when leaked.
  • It is generally recommended to dispose of such resources when not in use, but this is not true for all of them.
  • For example, HttpClient implements IDisposable, but it shouldn’t be disposed every time it is instantiated and used. It should instead be reused.

The curious case of HttpClient

  • HttpClient internally uses the machine’s socket connections, and binding and unbinding these to an application involves a small delay.
  • It takes time for these connections to be released by the Operating System on which the application runs.
  • When we continuously create and dispose HttpClient instances, this cycle is repeated each time, which may result in port exhaustion.
  • Hence, specifically for HttpClient, we need to reuse created instances instead – either by keeping a static instance or, better and safer, by obtaining instances through IHttpClientFactory.
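A sketch of the IHttpClientFactory approach, assuming a minimal ASP.NET Core app; MyService and its url parameter are hypothetical names for illustration:

```csharp
// Program.cs: register the factory and its pooled message handlers
builder.Services.AddHttpClient();

// In a service or controller: ask the factory for a client
// instead of new-ing one up per request
public class MyService
{
    private readonly IHttpClientFactory _httpClientFactory;

    public MyService(IHttpClientFactory httpClientFactory)
        => _httpClientFactory = httpClientFactory;

    public async Task<string> GetAsync(string url)
    {
        // The underlying handlers (and their sockets) are pooled and
        // recycled by the factory, avoiding port exhaustion
        var client = _httpClientFactory.CreateClient();
        return await client.GetStringAsync(url);
    }
}
```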

One quick question — How do you memory manage unmanaged code?

Object Pooling Helps

  • For instances that are expensive to create, such as those wrapping native resources, implementing Object Pooling can help performance: released instances are returned to a pool and reused instead of reallocated. Examples include database connection pools and the ThreadPool.
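One built-in example of pooling is ArrayPool&lt;T&gt; from System.Buffers, which reuses large buffers instead of repeatedly allocating them (which also helps avoid LOH churn). A minimal sketch:

```csharp
using System;
using System.Buffers;

class PoolingDemo
{
    static void Main()
    {
        // Rent a buffer from the shared pool instead of allocating a new one
        byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
        try
        {
            // ... use the buffer; note it may be larger than requested ...
            Console.WriteLine(buffer.Length >= 4096); // True
        }
        finally
        {
            // Return it so the next caller can reuse the same memory
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```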


Sriram Mannava

I'm a full-stack developer and a software enthusiast who likes to play around with cloud and tech stack out of curiosity.
