How to deploy ASP.NET Core API via AWS CloudFormation

Let's see how we can create and deploy ASP.NET Core microservices in AWS using the Serverless Application Model (SAM) via CloudFormation.

Introduction

The term Microservice refers to a single, isolated and independent web API that serves a specific responsibility. This approach simplifies the otherwise complex task of designing APIs for large systems, which generally come with more than one endpoint serving different purposes.

In this article, let’s talk in detail about building Microservices in ASP.NET Core, with a focus on deploying these services as individual silos interfaced by an API Gateway in the AWS cloud, as a single stack using SAM, the Serverless Application Model.

Why is a Monolith bad?

Let’s assume we have a Library Management Solution which offers functionality for book keeping and for user and inventory (books) management. At its bare bones, the service operates on two entities, books and users, with various responsibilities woven around these two entities. There can be a User API for managing users (Create, Retrieve, Update and Delete), while inventory management requires similar operations on books (Adding, Updating, Reducing and Retrieving).

On top of these basic features, we would also have to handle the book keeping responsibility, which wraps around both the users who request a book and the respective transactions placed against the inventory for the requested book.

Traditionally (or until the recent past), such systems used to be developed as a single Solution with multiple endpoints serving their respective purposes, complemented by layers of abstractions. This entire Solution, with all its complexity, is deployed onto a single traditional web server which handles incoming requests and routes them into the Solution by path.

A Monolith Architecture

This kind of architecture is called a Monolithic architecture: one where a single complex node holds all the responsibilities of the system.

Although this approach works (it has served well for decades), it comes with a few bottlenecks:

  • Say you want to update some logic in the Users endpoint; you can’t just update that part and walk away. You need to redeploy the entire solution to the server, which requires retesting all the functionality post-deployment, even though you changed only one small piece of the entire cake.
  • If something goes wrong in the book keeping functionality, it’s a real pain to dig through every file in the solution to find the root cause of the issue.
  • The real challenge comes with load balancing: scaling such a solution for higher loads is hard, and the list of issues keeps growing.

The solution? Divide and conquer

Let’s assume we want to redesign this solution for better performance and maintainability per present-day standards. With the emergence of technologies such as containerization and the cloud, along with sophisticated libraries that make our lives easier, we can break this complex solution into simpler parts: split the API into individual chunks based on functionality and deploy each into its own server.

This results in four independent services: UserService, BookService, InventoryService and OrderService, each running on its own server and handling requests corresponding only to its forte. The result is what we call a Microservice architecture.

A Microservice Architecture

Microservices are individual service endpoints running their own logic, but from a client standpoint it’s difficult to keep track of the different endpoints to hit for different functionalities. For example, if each of the services in the Library Management system is deployed as its own tiny service and assigned a host, we end up with four different host addresses.

But we don’t want the client to keep hold of all this information, since it may vary over time. What is the solution? Another interfacing server that runs dedicated logic to route incoming requests to the respective microservice endpoints based on some criteria, most commonly the request path. We call this routing server an API Gateway.

Microservices are tiny services, each running a specific functionality, designed to run with their own resources and dependencies and bound together, from the client standpoint, via an API Gateway.

Inter-Service Communication between Microservices

Ideally, each microservice operates on its own data store, though this is not mandatory in all cases. For services working with their own individual data stores, there may be cases when one microservice needs to notify another of a change to an entity. For example, when a new Order is made for a Book, the OrderService must notify the BookService of a new order on its Book entity, while the InventoryService needs to be notified of a change in the inventory count. In such cases, we make use of patterns such as Event Sourcing and CQRS for event-based communication among the services.
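As a minimal in-process sketch of this event-based notification (the names OrderPlaced, EventBus and the handlers below are hypothetical illustrations; a real system would communicate through a message broker rather than in-memory callbacks):

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class OrderPlaced:
    """Hypothetical event raised by the OrderService when a book is ordered."""
    book_id: str
    quantity: int


class EventBus:
    """Toy in-process publish/subscribe bus; real services would use a broker."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event):
        # Every service that subscribed to this event type gets notified.
        for handler in self._handlers[type(event)]:
            handler(event)


notifications = []
bus = EventBus()

# BookService and InventoryService each react to the OrderService's event.
bus.subscribe(OrderPlaced, lambda e: notifications.append(
    f"BookService: new order on book {e.book_id}"))
bus.subscribe(OrderPlaced, lambda e: notifications.append(
    f"InventoryService: reduce stock of {e.book_id} by {e.quantity}"))

# OrderService publishes once; both subscribers are notified independently.
bus.publish(OrderPlaced(book_id="B42", quantity=1))
```

The point of the sketch is the decoupling: the OrderService publishes a fact and never calls the other services directly, which is the property Event Sourcing and CQRS build on.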

Microservices and ASP.NET Core

To be honest, Microservices are more of a design/deployment phenomenon than a technology-specific one. Regardless of which tech stack we’re using, we end up doing the same thing once we develop our application around this design. Generally, we design the application to be decoupled into microservice components and then deploy each component into a “container”, which provides a controlled set of resources and an environment for the component to run without issues.

In the real world, we create such containers using Docker, which does the heavy lifting of running our microservices for us. Alternatively, we can achieve a similar thing in the cloud by using managed services. In AWS, we can deploy such microservices into Lambda, which offers a managed environment for our tech stack and is charged based on compute time. These Lambda functions are grouped together for client interfacing by another cloud service called API Gateway, which provides the routing functionality we discussed before, with many more capabilities such as caching, logging and authorization, to name a few.

In the last article, we saw how to convert an ASP.NET Core API into a Lambda function. If you haven’t read it, it’s highly recommended that you go through it before proceeding further.

Deploying ASP.NET Core Microservices in AWS with Serverless Model

Let’s pick up our previous example of the ReadersAPI, which we successfully converted into a Lambda function. Let’s assume we add another service which takes up the functionality of handling all the Writers in our system. Let’s call it WritersAPI.

Now we have two “microservices” which independently handle the respective functionalities of managing readers and writers. The solution looks like below:

A Serverless Solution

Each of these microservices runs on a separate Lambda function and works on a different DynamoDB table: test_readers and test_writers. From the client side, we want incoming requests for reader functionality routed to the ReadersAPI Lambda based on the path “/readers”, and similarly writer requests routed to the WritersAPI Lambda based on the path “/writers”.

Now we need to build the following components (assuming that the DynamoDB tables are already there):

  1. ReadersAPI Lambda
  2. WritersAPI Lambda
  3. API Gateway with “/readers” and “/writers” route proxy
The Microservice Solution with an API Gateway

To deploy this setup together, AWS provides us with a “stack” model wherein we configure the resources we need and how they are to be built, in a single build script, and then feed the script to AWS, which creates and deploys all the resources as instructed. This is called the Serverless Application Model, and the service which offers this automated approach is called CloudFormation.

“AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts” – Documentation

Building the Serverless Template

We’re now interested in a “stack” containing two Lambda functions which “should only” talk to their respective DynamoDB tables and then are routed through an API Gateway for specific paths.

The template looks like this:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "An AWS Serverless Application that uses the ASP.NET Core framework running in Amazon Lambda.",
  "Conditions": {},
  "Resources": {
    "testReadersFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "DynamoDb.ReadersApp.WebApi::DynamoDb.ReadersApp.WebApi.LambdaEntryPoint::FunctionHandlerAsync",
        "Runtime": "dotnetcore3.1",
        "CodeUri": "./DynamoDb.ReadersApp.WebApi/",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [
          "AWSLambdaBasicExecutionRole",
          {
            "Statement": [
              {
                "Sid": "ListAndDescribe",
                "Effect": "Allow",
                "Action": [
                  "dynamodb:List*",
                  "dynamodb:DescribeReservedCapacity*",
                  "dynamodb:DescribeLimits",
                  "dynamodb:DescribeTimeToLive"
                ],
                "Resource": "*"
              },
              {
                "Sid": "SpecificTable",
                "Effect": "Allow",
                "Action": [
                  "dynamodb:BatchGet*",
                  "dynamodb:DescribeStream",
                  "dynamodb:DescribeTable",
                  "dynamodb:Get*",
                  "dynamodb:Query",
                  "dynamodb:Scan",
                  "dynamodb:BatchWrite*",
                  "dynamodb:CreateTable",
                  "dynamodb:Delete*",
                  "dynamodb:Update*",
                  "dynamodb:PutItem"
                ],
                "Resource": "arn:aws:dynamodb:*:*:table/test_readers"
              }
            ]
          }
        ],
        "Environment": {
          "Variables": {}
        },
        "Events": {
          "ProxyResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/readers/{proxy+}",
              "Method": "ANY"
            }
          }
        }
      }
    },
    "testWritersFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "DynamoDb.WritersApp.WebApi::DynamoDb.WritersApp.WebApi.LambdaEntryPoint::FunctionHandlerAsync",
        "Runtime": "dotnetcore3.1",
        "CodeUri": "./DynamoDb.WritersApp.WebApi/",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [
          "AWSLambdaBasicExecutionRole",
          {
            "Statement": [
              {
                "Sid": "ListAndDescribe",
                "Effect": "Allow",
                "Action": [
                  "dynamodb:List*",
                  "dynamodb:DescribeReservedCapacity*",
                  "dynamodb:DescribeLimits",
                  "dynamodb:DescribeTimeToLive"
                ],
                "Resource": "*"
              },
              {
                "Sid": "SpecificTable",
                "Effect": "Allow",
                "Action": [
                  "dynamodb:BatchGet*",
                  "dynamodb:DescribeStream",
                  "dynamodb:DescribeTable",
                  "dynamodb:Get*",
                  "dynamodb:Query",
                  "dynamodb:Scan",
                  "dynamodb:BatchWrite*",
                  "dynamodb:CreateTable",
                  "dynamodb:Delete*",
                  "dynamodb:Update*",
                  "dynamodb:PutItem"
                ],
                "Resource": "arn:aws:dynamodb:*:*:table/test_writers"
              }
            ]
          }
        ],
        "Environment": {
          "Variables": {}
        },
        "Events": {
          "ProxyResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/writers/{proxy+}",
              "Method": "ANY"
            }
          }
        }
      }
    }
  },
  "Outputs": {
    "ApiURL": {
      "Description": "API endpoint URL for Prod environment",
      "Value": {
        "Fn::Sub": "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
      }
    }
  }
}

In this template, we define our “Resources”: the two Lambda functions for ReadersAPI and WritersAPI, named “testReadersFunction” and “testWritersFunction”. In each of these Lambda resources, we define “Policies”, which specify the permissions for the individual Lambda function. In our case, we’re restricting each Lambda function to access only its respective table, which is a best practice and a production-grade expectation.

Other than the “Policies”, we have the usual “Runtime” and “Handler”, which points to the method to be invoked when the Lambda executes, typically the entry point of the Lambda as we have seen before. The “CodeUri” points to the root of the particular microservice project. Each project is packed separately and then deployed into the created Lambda resource when this build script is executed.

For the route specification inside each resource, we have “ProxyResource” inside “Events”, where we specify the path for which that particular Lambda is to be invoked. The path has a wildcard {proxy+} which matches anything following the path prefix.

"ProxyResource": {
    "Type": "Api",
    "Properties": {
        "Path": "/writers/{proxy+}",
        "Method": "ANY"
    }
}
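Purely as an illustration of this prefix-based dispatch (this is a sketch of the routing rule, not how API Gateway is implemented internally):

```python
# Toy sketch of the gateway's routing: /readers/{proxy+} and /writers/{proxy+}
# forward any sub-path under the prefix to the respective Lambda function.
def route(path: str) -> str:
    if path.startswith("/readers/"):
        return "ReadersAPI Lambda"
    if path.startswith("/writers/"):
        return "WritersAPI Lambda"
    return "404"


print(route("/readers/all"))   # prints "ReadersAPI Lambda"; {proxy+} captured "all"
print(route("/writers/all"))   # prints "WritersAPI Lambda"
print(route("/orders/1"))      # prints "404"; no route is configured for this prefix
```

Because of the wildcard, any sub-path such as /readers/all or /readers/42 lands on the same Lambda, and the ASP.NET Core routing inside the function takes over from there.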

Complementing this serverless template is a defaults.json file which contains the common parameters to be passed when executing this build script.

{
  "Information": [
    "This file provides default values for the deployment wizard inside Visual Studio and the AWS Lambda commands added to the .NET Core CLI.",
    "To learn more about the Lambda commands with the .NET Core CLI execute the following command at the command line in the project root directory.",
    "dotnet lambda help",
    "All the command line options for the Lambda command can be specified in this file."
  ],
  "region": "us-west-2",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "s3-bucket": "orion-express-api",
  "stack-name": "testReadersServerlessApi",
  "s3-prefix": "orion-express-api/",
  "template": "serverless.template",
  "profile": ""
}

Executing the Template

To run this template, we simply run the following command in the path where the serverless.template file resides.

> dotnet lambda deploy-serverless

When this command is executed, the dotnet lambda tool runs through each resource in the serverless template and creates it based on the specs provided, then attaches the configured policies. The tool then runs the package command on each project specified in a resource’s CodeUri and uploads the generated binaries to the S3 bucket specified in defaults.json.

Finally, it creates an API Gateway with an auto-generated Id and adds these Lambda functions with the route proxies specified.
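The ApiURL output in the template uses Fn::Sub to compose the invoke URL from this generated API Id and the region. With a hypothetical Id (CloudFormation generates the real one), the substitution works out like this:

```python
# Mirrors the template output:
#   "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
api_id = "a1b2c3d4e5"    # hypothetical; CloudFormation assigns the real Id
region = "us-west-2"     # matches the region in defaults.json

api_url = f"https://{api_id}.execute-api.{region}.amazonaws.com/Prod/"
print(api_url + "readers/all")  # the path we hit later to test the ReadersAPI
```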

Running the Command in CLI

To verify this, we can look under the CloudFormation service in the AWS Console, where we can find additional information about the API Gateway created, the Lambda functions created and their associated policy information: everything we configured in our serverless template.

CloudFormation Console

To test whether our setup works, invoke the API Gateway URL provided, with the path /readers/all; you can see the data coming in.

The Readers API

The same holds for the /writers/all API:

The Writers API

Conclusion

Microservices are a modern architectural design that suits applications demanding high load and scalability very well. While one can easily create a simple microservice architecture using container services such as Docker, in the cloud we can make use of managed services such as Lambda, which offer a similar experience with pay-per-use pricing.


Complementing this architecture is routing via an API Gateway, which proxies requests for us based on the configured path prefixes. To simplify deploying such a complicated setup, AWS provides CloudFormation, a managed service that helps automate deployments using a simple build script and works for any tech stack.

The complete example is available at https://github.com/referbruv/aspnetcore-microservices-cloudformation



Ram

I'm a full-stack developer and a software enthusiast who likes to play around with cloud and tech stack out of curiosity. You can connect with me on Medium, Twitter or LinkedIn.
