Switching from Azure to GCP, from ASP.NET MVC to ASP.NET Core 3.1

In this article, I described my own successful experience of migrating a real project from one cloud platform to another.

Of course, this is not the only possible way. But I think here you can find tips that will make life easier for everyone who is just going to make such a transition. However, you will have to take into account the specifics of your project and be guided by common sense.

Task set by the customer: Azure -> GCP

The customer decided to switch from one cloud (Azure) to another (Google Cloud Platform). In some distant future, it was planned to move the server side to Node.js and develop the system with a full-stack TypeScript team. At the time I joined the project, there were a couple of ASP.NET MVC applications whose life it was decided to extend. I had to move them to GCP.

Initial state, factors that prevent you from switching to GCP immediately

Initially, there were two ASP.NET MVC applications that interacted with a single shared MS SQL database. They were deployed on Azure App Services.

The first application (let's call it Web Portal) had a new user interface based on Razor, TypeScript, JavaScript, Knockout, and Bootstrap. No problems were expected with these client technologies. On the other hand, the server side of the application used several Azure-specific services: Azure Service Bus, Azure Blobs, Azure Table storage, and Azure Queue storage. Something had to be done with them, since none of them is supported in GCP. In addition, the application used Azure Redis Cache. The Azure WebJobs service was used to process long-running requests, and its tasks were delivered via the Azure Service Bus. According to the programmer supporting the system, background tasks could run for up to half an hour.

Initially, the Web Portal architecture in our project looked like this:

Azure WebJobs also had to be replaced with something. An architecture with a task queue for long-running calculations is not the only possible solution: you can use specialized libraries for background tasks, such as Hangfire, or turn to Microsoft's IHostedService.

The second application (let's call it the Web API) was an ASP.NET Web API application. It used only the MS SQL database. More precisely, the configuration file contained references to several databases, but in reality the application accessed only one of them. I had yet to learn about this nuance, though.

Both applications were in working order, but in poor condition: there was no architecture as such, there was a lot of old unused code, and ASP.NET design principles were not followed. The customer himself acknowledged the poor quality of the code, and the person who originally wrote the applications had not worked for the company for several years. Any changes and new solutions were given the green light.

So, it was necessary to move the ASP.NET MVC applications to ASP.NET Core 3.1 and the WebJob from .NET Framework to .NET Core, so that they could be deployed under Linux. Using Windows on GCP is possible, but not advisable. It was necessary to get rid of the Azure-specific services, replace Azure WebJobs with something else, and decide how we would deploy applications in GCP, i.e. choose an alternative to Azure App Services. We also needed to add Docker support. At the same time, it would be nice to introduce at least some architecture and improve the quality of the code.

General principles and considerations

When refactoring, we followed the principle of step-by-step changes: all work was divided into stages, which in turn consisted of separate steps.

At the end of each stage, the application must be in a stable state, i.e. pass at least Smoke tests.

At the end of each step, the application or the part of it that has been modified must also be in a state close to stable. That is, it must be running or at least in a compiled state, if this step can be considered intermediate.

The steps and stages should be as short as possible: the work should be broken down as much as possible. Sometimes we still had to take steps during which the app didn't compile for one or two days. Sometimes, at the end of a step, only the part of the solution that had just been changed would compile. If a stage or step can be divided by projects, you should start with the project that does not depend on others, then move on to those that depend only on it, and so on. The plan that we drew up is presented below.

When replacing Azure services, you can either choose an alternative GCP service or a cloud-agnostic solution. The choice of services in this project, and the reasoning behind it, will be considered separately in each case.

Work plan

The high-level plan as a whole was dictated by the customer. In some places I added steps that the client side didn't know were necessary or didn't attach importance to. The plan was slightly adjusted in the course of the work. At some stages, refactoring of the architecture and code was added that was not directly related to the transition to another platform. The final version can be seen below. Each point of this plan is a stage, in the sense that when it is completed, the application is in a stable state.

  1. Web Portal from ASP.NET MVC to ASP.NET Core
      1.1. Analysis of the Web Portal code and its dependencies on Azure services and third-party libraries, estimating the required time.
      1.2. Porting the Web Portal to .NET Core.
      1.3. Refactoring to fix major problems.
      1.4. Merging in Web Portal changes made in parallel by other developers in the main repository branch.
      1.5. Dockerizing the Web Portal.
      1.6. Testing the Web Portal, fixing errors, and deploying the new version to Azure.
  2. Web API from ASP.NET MVC to ASP.NET Core
      2.1. Writing E2E automated tests for the Web API.
      2.2. Analyzing the Web API code and its dependencies on Azure services and third-party libraries, estimating the required time.
      2.3. Removing unused source code from the Web API.
      2.4. Porting the Web API to .NET Core.
      2.5. Refactoring the Web API to eliminate the major problems.
      2.6. Merging in Web API changes made in parallel by other developers in the main repository branch.
      2.7. Dockerizing the Web API.
      2.8. Testing the Web API, fixing errors, and deploying the new version to Azure.
  3. Removing dependencies on Azure
      3.1. Eliminating Web Portal dependencies on Azure.
  4. Deploying to GCP
      4.1. Deploying the Web Portal in a test environment in GCP.
      4.2. Testing the Web Portal and fixing possible errors.
      4.3. Migrating the database for the test environment.
      4.4. Deploying the Web API in a test environment in GCP.
      4.5. Testing the Web API and fixing possible errors.
      4.6. Database migration for the prod environment.
      4.7. Deploying the Web Portal and Web API to prod in GCP.

The entire plan is presented for informational purposes only; later in the article I will try to cover in detail only the questions that are, from my point of view, the most interesting.

.NET Framework -> .NET Core

Before I started migrating the code, I found a Microsoft article about migrating from .NET Framework to .NET Core, and from there a link to the guide on migrating from ASP.NET to ASP.NET Core.

With the migration of non-Web projects, everything was relatively simple:

  • converting the storage format for NuGet packages using Visual Studio 2019;
  • adapting the list of these packages and their versions;
  • switching from the XML-based App.config to appsettings.json and replacing all existing accesses to configuration values with the new syntax (a sketch is shown below).
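
For the last point, the change in configuration access looks roughly like this. This is a minimal sketch: the ReportService class and the "StorageConnection" key are hypothetical names used only for illustration.

using Microsoft.Extensions.Configuration;

public class ReportService
{
    private readonly string _storageConnection;

    // Before (.NET Framework): ConfigurationManager.AppSettings["StorageConnection"]
    // After (.NET Core): the value comes from appsettings.json via an injected IConfiguration.
    public ReportService(IConfiguration configuration)
    {
        _storageConnection = configuration["StorageConnection"];
    }
}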

The versions of some Azure SDK NuGet packages had to change, which led to incompatibilities. In most cases it was possible to find a version that, while not the newest, supported .NET Core and did not require changes in the logic of the old program code. The exceptions were the packages for working with Azure Service Bus and the WebJobs SDK: for Azure Service Bus I had to switch to binary serialization, and the WebJob was moved to a new, backward-incompatible version of the SDK.

The migration from ASP.NET MVC to ASP.NET Core was much more complicated. All of the above steps had to be done for the web projects as well. But we had to start by creating a new ASP.NET Core project into which the code of the old project was moved. The structure of an ASP.NET Core project is very different from its predecessor's, and many standard ASP.NET MVC classes have undergone changes. Below is a list of what we changed, and most of it will be relevant for any transition from ASP.NET MVC to ASP.NET Core.

  1. Creating a new ASP.NET Core project and migrating the main code into it from the old ASP.NET MVC project.
  2. Fixing the project's dependencies on external libraries (in our case, these were only NuGet packages; see above for considerations about library versions).
  3. Replacing Web.config with appsettings.json and making all related changes in the code.
  4. Implementing the standard .NET Core dependency injection mechanism instead of whichever of its alternatives was used in the ASP.NET MVC project.
  5. Using StaticFiles middleware for all root folders of static files: images, fonts, JavaScript scripts, CSS styles, etc.
app.UseStaticFiles(); // wwwroot
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(
        Path.Combine(Directory.GetCurrentDirectory(), "Scripts")),
    RequestPath = "/Scripts"
});

You can move all static files to wwwroot.

6. Switching to bundleconfig.json for all JavaScript and CSS bundles instead of the old mechanisms, and changing the syntax for including JavaScript and CSS:

<link rel="stylesheet" href="~/bundles/Content.css" asp-append-version="true" />
<script src="~/bundles/modernizr.js" asp-append-version="true"></script>

For the asp-append-version="true" directive to work correctly, the bundles must be located at the root, i.e. in the wwwroot folder (see here).

To debug bundles, I used an adapted version of the helper from here.

7. Changing the unhandled exception handling mechanism: ASP.NET Core has built-in support for this, so it remained to figure it out and use it instead of what was used in the project before.
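
As an illustration, the built-in mechanism boils down to the standard middleware registration; this is a sketch, and the "/Home/Error" path is just a placeholder, not necessarily what the project uses.

// In Startup.Configure:
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    // All unhandled exceptions end up here instead of the old custom handler.
    app.UseExceptionHandler("/Home/Error");
}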

8. Logging: I adapted the old logging mechanisms to use the ones standard in ASP.NET Core and introduced Serilog. The latter is optional, but, in my opinion, it is worth doing to get flexible structured logging with a huge number of log storage options.
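
A typical way to plug Serilog into the generic host looks roughly like this; this is a sketch assuming the Serilog.AspNetCore package, and the project's actual configuration may differ.

using Microsoft.Extensions.Hosting;
using Serilog;

// In Program.cs:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseSerilog((context, loggerConfiguration) => loggerConfiguration
            .ReadFrom.Configuration(context.Configuration) // sinks and levels come from appsettings.json
            .Enrich.FromLogContext())
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());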

9. Session: if the old project used a session, the code for accessing it needs to be adapted a little, and you have to write a helper in order to store arbitrary objects, since out of the box only strings are supported.
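
Such a helper can be a pair of extension methods that serialize objects to JSON; this is a sketch, and the class and method names are our own invention, not a framework API.

using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;

public static class SessionObjectExtensions
{
    // Stores any serializable object in the session as a JSON string.
    public static void SetObject<T>(this ISession session, string key, T value) =>
        session.SetString(key, JsonConvert.SerializeObject(value));

    // Reads the object back, or returns the default value if the key is absent.
    public static T GetObject<T>(this ISession session, string key)
    {
        var json = session.GetString(key);
        return json == null ? default(T) : JsonConvert.DeserializeObject<T>(json);
    }
}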

10. Routing: the old project used a template-based mechanism, which needed a little tweaking.

11. JSON serialization: ASP.NET Core uses the System.Text.Json library by default instead of Newtonsoft.Json. Microsoft claims that it runs faster than its predecessor, but, unlike the latter, it doesn't support much of what Newtonsoft.Json could do out of the box without any programmer involvement. It's good that you can switch back to Newtonsoft.Json. This is exactly what I did when I found out that most of the serialization in the Web API was broken, and getting it back into working order with the new library would be very difficult, if possible at all. You can read more about using Newtonsoft.Json here.
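
Switching back boils down to one call in Startup.ConfigureServices; this is a sketch assuming the Microsoft.AspNetCore.Mvc.NewtonsoftJson package, and the serializer settings shown are only an example.

using Newtonsoft.Json;

// In Startup.ConfigureServices:
services.AddControllersWithViews()
    .AddNewtonsoftJson(options =>
        options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore);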

12. The old project used TypeScript 2.3. I had to tinker with wiring it up: I needed to install Node.js, choose the correct version of the Microsoft.TypeScript.MSBuild package, add and configure tsconfig.json, correct the definitions file for the Knockout library, and add //@ts-ignore directives in some places.

13. The code for forcing HTTPS is added automatically when this option is enabled in the project wizard. The old code that used the custom HttpsOnly attribute was removed.

14. All low-level actions, such as getting parameters from the request body, the request URL, HTTP headers, and HttpContext, required changes, because the API for accessing them has changed compared to ASP.NET MVC. There would have been noticeably less work if the old project had more often used the standard binding mechanisms via the parameters of actions and controllers.
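
To illustrate the difference, here is a sketch; the controller, action, and parameter names are invented for this example.

using Microsoft.AspNetCore.Mvc;

public class SearchController : ControllerBase
{
    // Low-level access whose API changed between ASP.NET MVC and ASP.NET Core.
    [HttpGet]
    public IActionResult SearchLowLevel()
    {
        var term = Request.Query["term"].ToString();          // was Request.QueryString["term"] under System.Web
        var apiKey = Request.Headers["X-Api-Key"].ToString(); // header access also changed
        return Ok(new { term, apiKey });
    }

    // Binding through action parameters, by contrast, survives the migration almost untouched.
    [HttpGet]
    public IActionResult Search(string term) => Ok(new { term });
}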

15. Swagger was added using the Swashbuckle.AspNetCore.Swagger library.
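
The registration is typically just a few lines; this is a sketch using Swashbuckle.AspNetCore, and the document title and version are placeholders.

using Microsoft.OpenApi.Models;

// In Startup.ConfigureServices:
services.AddSwaggerGen(c =>
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "Web API", Version = "v1" }));

// In Startup.Configure:
app.UseSwagger();
app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "Web API v1"));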

16. The non-standard authentication mechanism required refactoring to bring it to a standard form.

The number of changes was very large, so it was often necessary to leave only one controller and make it work properly. We then added others gradually, following the principle of step-by-step changes.

What should I do with specific Azure services?

After switching to ASP.NET Core, we had to get rid of the Azure services. We could either choose solutions that do not depend on the cloud platform, or find something suitable among GCP's offerings. Fortunately, many services have direct alternatives from other cloud providers.

We decided to replace Azure Service Bus with Redis Pub/Sub at the customer's insistent recommendation. It is a fairly simple tool, not as powerful and flexible as, for example, RabbitMQ, but it was enough for our simple scenario, and the choice was supported by the fact that Redis was already used in the project. Time has confirmed that the decision was correct. The logic of working with the queue was abstracted and separated into two classes: one implements sending an arbitrary object, the other receives messages and passes them on for processing. It took only a few hours to extract these classes, and if Redis Pub/Sub itself ever needs to be replaced, doing so will be very simple.
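
A minimal sketch of the sending side of such an abstraction, assuming StackExchange.Redis and Newtonsoft.Json; the interface, class, and channel names here are hypothetical, not the project's actual ones.

using System.Threading.Tasks;
using Newtonsoft.Json;
using StackExchange.Redis;

public interface ITaskQueuePublisher
{
    Task PublishAsync<T>(T message);
}

public class RedisTaskQueuePublisher : ITaskQueuePublisher
{
    private const string Channel = "background-tasks"; // illustrative channel name
    private readonly IConnectionMultiplexer _redis;

    public RedisTaskQueuePublisher(IConnectionMultiplexer redis) => _redis = redis;

    // Serializes the message and publishes it to the Redis Pub/Sub channel.
    public Task PublishAsync<T>(T message) =>
        _redis.GetSubscriber().PublishAsync(Channel, JsonConvert.SerializeObject(message));
}

The receiving side is symmetrical: it subscribes to the same channel and hands each deserialized message on for processing.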

Azure Blobs were replaced with GCP Blobs. The solution is obvious, but there is a difference in the functionality of the services: GCP Blobs do not support appending data to the end of an existing blob. In our project, such a blob was used to build a kind of log in CSV format. On the Google platform, we decided to record this information in the Google Cloud operations suite, formerly known as Stackdriver.

Azure Table Storage was used to record application logs and access them from the Web Portal. A self-written logger existed for this purpose. We decided to bring this process in line with Microsoft's best practices, i.e. use their ILogger interface. In addition, the Serilog structured logging library was introduced. In GCP, logging was set up in Stackdriver.

For some time, the project had to run in parallel on both GCP and Azure. Therefore, all platform-specific functionality was separated out into classes implementing common interfaces: IBlobService, IRequestLogger, and ILogReader. The logging abstraction was obtained automatically by using the Serilog library. But in order to show logs in the Web Portal the way it was done in the old application, it was necessary to adapt the ordering of records in Azure Table Storage by implementing our own Serilog.Sinks.AzureTableStorage.KeyGenerator.IKeyGenerator. In GCP, Log Router Sinks were created to read logs from Google Cloud operations, passing the data to BigQuery, from where the application retrieved it.
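
For illustration, such a cloud-agnostic interface could look roughly like this; the member signatures below are an assumption, not the project's actual contract.

using System.IO;
using System.Threading.Tasks;

// Sketch of a platform-neutral blob abstraction; the real IBlobService members may differ.
public interface IBlobService
{
    Task UploadAsync(string container, string blobName, Stream content);
    Task<Stream> DownloadAsync(string container, string blobName);
    Task DeleteAsync(string container, string blobName);
}

The Azure and GCP implementations of such interfaces are then swapped through dependency injection, which is what allowed both platforms to be supported in parallel.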

What should I do with Azure WebJobs?

The Azure WebJobs service is only available for Azure App Services on Windows. In fact, a WebJob is a console application that uses a special Azure WebJobs SDK. I removed the dependency on this SDK. The application remains a permanently running console application and follows logic like this:

static async Task Main(string[] args)
{
    ...

    var builder = new HostBuilder();

    ...

    var host = builder.Build();

    using (host)
    {
        await host.RunAsync();
    }
    ...
}

A class registered via dependency injection is responsible for all the work:

public class RedisPubSubMessageProcessor : Microsoft.Extensions.Hosting.IHostedService
{
    ...
    public async Task StartAsync(CancellationToken cancellationToken)
    ...
    public async Task StopAsync(CancellationToken cancellationToken)
    ...
}

This is a standard mechanism for .NET Core. Even though there is no longer any dependency on the Azure WebJobs SDK, this console application still works successfully as an Azure WebJob. It also works without problems in a Linux Docker container under Kubernetes, which will be discussed later in the article.
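
For context, the wiring between the two snippets above might look roughly like this; this is a sketch using the standard Microsoft.Extensions.Hosting APIs, and the Redis registration and connection string are illustrative.

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using StackExchange.Redis;

// Inside Main, where the HostBuilder is configured:
var builder = new HostBuilder()
    .ConfigureServices((context, services) =>
    {
        // Illustrative registrations; in reality the Redis address would come from configuration.
        services.AddSingleton<IConnectionMultiplexer>(_ => ConnectionMultiplexer.Connect("localhost"));
        services.AddHostedService<RedisPubSubMessageProcessor>();
    });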

Refactoring along the way

The architecture and code of the applications were far from ideal. In the course of the many steps, small changes were gradually made to the code they touched. There were also specially planned refactoring stages, agreed upon and estimated together with the customer. At these stages, we fixed problems with authentication and authorization, bringing them in line with Microsoft's practices. There was a separate stage for introducing some architecture, separating layers, and eliminating unnecessary dependencies. Work on the Web API started with a stage of deleting unused code. When replacing many of the Azure services, the first step was defining interfaces and separating those dependencies into their own classes.

All this, in my opinion, was necessary and had a positive effect on the result.

Docker

With Docker support, everything went pretty smoothly. You can easily add a Dockerfile using Visual Studio. I added them for all the projects that correspond to applications: Web Portal, Web API, and WebJob (which later turned into just a console application). These standard Microsoft Dockerfiles did not undergo any special changes and worked out of the box, with one exception: I had to add commands to install Node.js to the Web Portal's Dockerfile. It is required by the build container for working with TypeScript.

RUN apt-get update && \
	apt-get -y install curl gnupg && \
	curl -sL https://deb.nodesource.com/setup_12.x  | bash - && \
	apt-get -y install nodejs

Azure App Services -> GKE

There is no single correct solution for deploying .NET Core applications in GCP; you can choose from several options:

  • App Engine Flex.
  • Kubernetes Engine.
  • Compute Engine.

In our case, I settled on Google Kubernetes Engine (GKE), since by this time we already had containerized (Linux) applications. GKE turned out to be perhaps the most flexible of the three solutions listed above. It allows cluster resources to be shared between multiple applications, as in our case. In principle, to select one of the three options, you can use the flowchart at this link.

All solutions for the GCP services used are described above, except for MS SQL Server, which we replaced with Google’s Cloud SQL.

Architecture of our system after migration to GCP

Testing

The Web Portal was tested manually, and after each stage I conducted a simple smoke test myself. This was due to the presence of a user interface. If at the end of a stage a new piece of code was released to prod, other users, in particular the Product Owner, joined in testing it. Unfortunately, there were no dedicated QA specialists on the project. Of course, all identified errors were fixed before the start of the next stage. Later, a simple Puppeteer test was added, which executed a script loading one of two types of reports with certain parameters and compared the resulting report with a reference one. The test was integrated into CI/CD. Adding unit tests was problematic due to the lack of any architecture.

The first stage of the Web API migration, by contrast, was writing tests. Postman was used for this, and the tests were then run in CI/CD using Newman. Even earlier, Swagger integration had been added to the old code, which helped create an initial list of method addresses and try many of them out. One of the next steps was to determine the current list of operations. For this, we used IIS (Internet Information Services) logs that were available for a month and a half. Several tests with different parameters were created for many of the methods on the current list. Tests that change data in the database were put into a separate Postman collection and were not run on shared environments. Of course, all of this was parameterized so that it could be run on Dev, Staging, and Prod.

Testing allowed us to make sure that the product remained stable after the migration. Of course, the ideal would be to cover all the functionality with automated tests. That is why, in the case of the Web API, despite the much greater effort spent at the very beginning, the migration and the subsequent finding and fixing of errors went much more easily.

Azure MS SQL -> GCP Managed MS SQL

Migrating MS SQL from Managed Azure to GCP Cloud SQL was not as easy as it seemed at first. There were several main reasons for this:

  • The very large database size (the Azure portal showed: Database data storage / Used space 181 GB).
  • Presence of dependencies on external tables.
  • No common format for exporting from Azure and importing to GCP Cloud SQL.

When migrating the database, I mainly relied on an article in Spanish, which I machine-translated in Google Chrome. It turned out to be the most useful one I could find.

Before starting the migration, you must delete all references to external tables and databases, otherwise the migration will fail. Azure SQL only supports exporting to the bacpac format, which is more compact than the standard backup format: in our case, the bacpac came out to 6 GB versus 154 GB for the backup. But GCP Cloud SQL only allows importing a backup, so we needed a conversion, which we managed to do only by restoring the bacpac to a local MS SQL Server and creating a backup from it. These operations required installing the latest version of Microsoft SQL Server Management Studio, while the local MS SQL Server was of a lower version. Many operations took hours, and some even lasted several days. I recommend increasing the Azure SQL quota before the import and making a copy of the prod database to run the import from it. At one point we had to transfer a file between the clouds to speed up its download to the local machine. We also added a 1 TB SSD drive specifically for the database files.

Valery Radokhleb, web developer, designer
