An in-memory cache is a section of high-speed memory dedicated to storing and serving data that is read frequently but updated rarely; data that may be slightly stale, but not unacceptably so.
Using an in-memory cache improves application performance and reduces the number of times the application hits the database for records, thereby reducing database costs.
In real-world scenarios where time is money, caching translates into better performance and a better user experience. In ASP.NET Core, the IMemoryCache interface provides a simple, seamless in-memory cache implementation for applications that run on a single node.
But as the application grows to be deployed across multiple nodes in a load-balanced environment, we can move to a centralized cache tier using popular options such as Redis, Memcached, NCache and so on.
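Before getting into strategies, here is a minimal sketch of what working with IMemoryCache looks like on a single node. The GreetingService type, the cache key and the five-minute expiration are illustrative choices, not part of the sample application built later in this article.

using System;
using Microsoft.Extensions.Caching.Memory;

// In ConfigureServices, register the default in-memory cache:
// services.AddMemoryCache();

public class GreetingService
{
    private readonly IMemoryCache _cache;

    public GreetingService(IMemoryCache cache) => _cache = cache;

    public string GetGreeting()
    {
        // Return the cached entry if present; otherwise compute it,
        // cache it with a sliding expiration and return it.
        return _cache.GetOrCreate("greeting", entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(5);
            return "Hello from the cache!";
        });
    }
}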
A good cache implementation starts with choosing the right caching strategy for the application. The choice depends on how frequently the cached data gets modified, and on whether the data can be modified by external agents such as worker jobs. In general, there are two important caching strategies or implementations used for building a robust caching tier for applications.
They are:
- Lazy loading or Cache-aside
- Write-through caching
In this article, let's talk in detail about the Lazy loading or Cache-aside strategy.
Lazy loading or Cache-aside pattern
This is the most commonly used caching strategy; developers across domains apply it, often without realizing it, which is what makes it so popular. In this approach, the application layer first checks the caching tier for the required data; on failure, the application proceeds to fetch the data from the data access layer. Once the data is fetched from the data layer, the application updates the cache tier with the missing content for future reference, and this cycle continues.
When the data is available in the cache, it is called a cache “HIT” and the application gets the data directly from the cache without moving on to the data layer. If the data isn't available in the cache, it is called a cache “MISS”, and the application tier has to move on to the data layer to fetch the data.
This approach has several advantages:
- The cache contains only the small chunks of data that the application stores on a cache “MISS”, keeping the cache free from unwanted or never-accessed datasets.
- In the event of a cache node failure, the application still works, albeit with increased latency for data requests.
And there are disadvantages too:
- Every time a cache “MISS” occurs, three additional steps follow before data is returned:
  - Access the cache, which results in a “MISS”
  - Fetch the data from the data-tier
  - Set the cache with the fetched data
These extra steps have to be performed on every cache “MISS”, causing unwanted latency for such requests.
- Since the application tier decides where to fetch data from and what to do with it, the caching logic makes the application tier more complex.
Implementing a Lazy loading cache
Consider an application that returns a Reader record from the data-tier for a given recordId. The data-tier for this logic is represented by a Repository class, as follows:
using System;
using System.Linq;

namespace ReadersMvcApp.Providers.Repositories
{
    public interface IReaderRepo
    {
        IQueryable<Reader> Readers { get; }
        Reader GetReader(Guid id);
    }
}
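The Reader entity itself isn't shown in this article; a minimal model that would satisfy the code above could look like the following (the Name property is purely illustrative, only Id is required by the lookups shown here):

using System;

namespace ReadersMvcApp.Models
{
    // Minimal entity assumed by the repository code above.
    public class Reader
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }
}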
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace ReadersMvcApp.Providers.Repositories
{
    public class ReaderRepo : IReaderRepo
    {
        private readonly DbSet<Reader> _readers;

        public ReaderRepo()
        {
            // initialization logic
        }

        public Reader GetReader(Guid id)
        {
            // Look up a single Reader by its Id; returns null when not found
            return _readers.Where(x => x.Id == id).FirstOrDefault();
        }

        public IQueryable<Reader> Readers => _readers.AsQueryable();
    }
}
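The initialization logic is elided above. If the repository were backed by Entity Framework Core, one plausible shape would be to inject a DbContext and pull the DbSet from it; the ReadersDbContext type below is an assumption for illustration, not part of the original sample:

using Microsoft.EntityFrameworkCore;

namespace ReadersMvcApp.Providers.Repositories
{
    // Hypothetical EF Core-backed wiring for ReaderRepo.
    public class ReaderRepo : IReaderRepo
    {
        private readonly DbSet<Reader> _readers;

        // Assumes a ReadersDbContext exposing a DbSet<Reader> Readers property.
        public ReaderRepo(ReadersDbContext context)
        {
            _readers = context.Readers;
        }

        // ... GetReader and Readers as shown above
    }
}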
For this data-tier, we add a Decorator implementation of IReaderRepo, which wraps the repository with the caching logic. It looks as follows:
using System;
using System.Linq;
using Microsoft.Extensions.Caching.Memory;

namespace ReadersMvcApp.Providers.Repositories
{
    public class CachedReaderRepo : IReaderRepo
    {
        private readonly IReaderRepo repo;
        private readonly IMemoryCache cache;

        public CachedReaderRepo(IReaderRepo repo, IMemoryCache cache)
        {
            this.repo = repo;
            this.cache = cache;
        }

        public Reader GetReader(Guid id)
        {
            Reader reader;

            // Cache MISS: fall back to the inner repository
            // and populate the cache for future requests
            if (!cache.TryGetValue(id, out reader))
            {
                var record = repo.GetReader(id);
                if (record != null)
                {
                    // Set returns the cached value, so it can be returned directly
                    return cache.Set(record.Id, record);
                }
            }

            // Cache HIT (or a record that doesn't exist in the data-tier either)
            return reader;
        }

        public IQueryable<Reader> Readers => repo.Readers;
    }
}
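One detail worth noting: the Set call above caches the record with no expiration policy, so entries stay alive until memory pressure evicts them. In a real application you would usually attach an expiration; a variation of the cache MISS branch could look like this (the five-minute and one-hour windows are illustrative choices, not a rule):

// Variation of the cache MISS branch with an explicit expiration policy
var options = new MemoryCacheEntryOptions()
    .SetSlidingExpiration(TimeSpan.FromMinutes(5))
    .SetAbsoluteExpiration(TimeSpan.FromHours(1));

return cache.Set(record.Id, record, options);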
Now, observe what happens inside the GetReader() method, which is responsible for fetching Reader records from the data-tier. There is a check for an already cached record for the incoming readerId (HIT or MISS?). On a cache MISS, the cache-tier fetches the record from the data-tier via its abstraction IReaderRepo, sets the cache with the missing record, and then returns the record.
On a cache HIT, the cache-tier simply returns the record which was already cached. This keeps cache access simple and straightforward. This setup is registered in the Startup class as:
/* Startup class - ConfigureServices method */

// Register the default IMemoryCache implementation
// that CachedReaderRepo depends on
services.AddMemoryCache();

services.AddSingleton<IReaderRepo>(
    x => ActivatorUtilities.CreateInstance<CachedReaderRepo>(x,
        ActivatorUtilities.CreateInstance<ReaderRepo>(x)));
For a requested instance of type IReaderRepo, we inject an instance of CachedReaderRepo, passing it an inner ReaderRepo instance of the same abstraction IReaderRepo.
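From a consumer's point of view, the caching is completely transparent; any class that depends on IReaderRepo receives the cached decorator. The ReadersController below is an illustrative consumer, not part of the original sample:

using System;
using Microsoft.AspNetCore.Mvc;
using ReadersMvcApp.Providers.Repositories;

namespace ReadersMvcApp.Controllers
{
    public class ReadersController : Controller
    {
        private readonly IReaderRepo repo;

        // The controller is unaware whether the injected instance
        // serves from cache or hits the database directly
        public ReadersController(IReaderRepo repo)
        {
            this.repo = repo;
        }

        public IActionResult Details(Guid id)
        {
            var reader = repo.GetReader(id);
            if (reader == null)
            {
                return NotFound();
            }
            return View(reader);
        }
    }
}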
This way, we can implement a “Lazy loading” cache strategy using Decorators and Repositories with a modest amount of code.