Building REST APIs with .NET 5, ASP.NET Core, and PostgreSQL

By Kevin Campusano
July 9, 2021

A market at night. Photo by Sam Beasley

This is old news by now, but I’m still amazed by the fact that nowadays .NET is open source and can run on Linux. I truly believe that this new direction can help the technology realize its true potential, since it’s no longer shackled to Windows-based environments. I’ve personally been outside the .NET game for a good while, but with the milestone release that is .NET 5, I think now is a great time to dive back in.

So I thought of taking some time to do just that: really dive in, see what’s new, and get a sense of the general developer experience that the current incarnation of .NET offers. In this blog post, I’m going to chronicle my experience developing a simple but complete REST API application. Along the way, I’ll touch on the most common problems that one runs into when developing such applications and how they are solved in the .NET world. So think of this piece as a sort of tutorial or overview of the most common framework features when it comes to developing REST APIs.

There’s a table of contents at the bottom.

First, let’s get familiar with what we’re building.

What we’re building

The demo application

You can find the finished product on my GitHub.

The application that we’ll be building throughout this article will address a request from a hypothetical car junker business. Our client wants to automate the process of calculating how much money to offer their customers for their vehicles, given certain information about them. And they want an app to do that. We are building the back-end component that will support that app: a REST API that allows users to provide vehicle information (year, make, model, condition, etc.) and produces a quote of how much money our hypothetical client would be willing to pay for it.

Here’s a short list of features that we need to implement in order to fulfill that requirement:

  1. Given a vehicle model and condition, calculate a price.
  2. Store and manage rules that are used to calculate vehicle prices.
  3. Store and manage pricing overrides on a vehicle model basis. Price overrides are used regardless of the current rules.
  4. CRUD vehicle models so that overrides can be specified for them.

The data model

Here’s what our data model looks like:

Data Model

The main table in our model is the quotes table. It stores all the requests for quotes received from our client’s customers. It captures all the relevant vehicle information in terms of model and condition. It also captures the offered quote; that is, the money value that our system calculates for their vehicle.

The quotes table includes all the fields that identify a vehicle: year, make, model, body style and size. It also includes a model_style_year_id field which is an optional foreign key to another table. This FK points to the model_style_years table which contains specific vehicle models that our system can store explicitly.

The idea is that, when a customer submits a request for a quote, if we have their vehicle registered in our database, then we can populate this foreign key and link the quote with the specific vehicle that it’s quoting. If we don’t have their vehicle registered, then we leave that field unpopulated. Either way, we can offer a quote. The only difference is the level of certainty of the quote.

The records in the model_style_years table represent specific vehicles. That whole hierarchy works like this: a vehicle make (e.g. Honda, Toyota, etc. in the makes table) has many models (e.g. Civic, Corolla, etc. in the models table), and each model has many styles (the model_styles table). Styles are combinations of body types (the body_types table) and sizes (the sizes table), e.g. Mid-size Sedan, Compact Coupe, etc. And finally, each model style has many years in which it was produced (via the model_style_years table).

This model allows for very fine-grained differentiation between vehicles. For example, we can have a “2008 Honda Civic Hatchback which is a Compact car” and also a “1990 Honda Civic Hatchback which is a Sub-compact”. That is, the same model, but a different year, size, or body type.

We also have a quote_rules table which stores the rules that are applied when it comes to calculating a vehicle quote. The rules are key-value pairs with an associated monetary value. So, for example, a rule like “a vehicle that has alloy wheels is worth $10 more” can be expressed in the table with a record where feature_type is “has_alloy_wheels”, feature_value is “true”, and price_modifier is “10”.

Finally, we have a quote_overrides table which specifies a flat, static price for specific vehicles (via the link to the model_style_years table). The idea here is that if some customer requests a quote for a vehicle for which we have an override, no price calculation rules are applied and they are offered what is specified in the override record.

The development environment

Setting up the PostgreSQL database with Docker

For this project, our database of choice is PostgreSQL. Luckily for us, getting a PostgreSQL instance up and running is very easy thanks to Docker.

If you want to learn more about dockerizing a typical web application, take a look at this article that explains the process in detail.

Once you have Docker installed on your machine, getting a PostgreSQL instance is as simple as running the following command:

$ docker run -d \
    --name vehicle-quote-postgres \
    -p 5432:5432 \
    --network host \
    -e POSTGRES_DB=vehicle_quote \
    -e POSTGRES_USER=vehicle_quote \
    -e POSTGRES_PASSWORD=password \
    postgres

Here we’re asking Docker to run a new container based on the latest postgres image from DockerHub, name it vehicle-quote-postgres, expose port 5432 (the default PostgreSQL port), attach it to the host’s network (with the --network host option) and finally, specify a few environment variables that the postgres image uses when building our new instance to set up the default database name, user and password (with the three -e options).

After Docker is done working its magic, you should be able to access the database with something like:

$ docker exec -it vehicle-quote-postgres psql -U vehicle_quote

That will result in:

$ docker exec -it vehicle-quote-postgres psql -U vehicle_quote
psql (13.2 (Debian 13.2-1.pgdg100+1))
Type "help" for help.

vehicle_quote=#

This command is connecting to our new vehicle-quote-postgres container and then, from within the container, using the command line client psql in order to connect to the database.

If you have psql installed on your own machine, you can use it directly to connect to the PostgreSQL instance running inside the container:

$ psql -h localhost -U vehicle_quote

This is possible because we specified in our docker run command that the container would be accepting traffic over port 5432 (-p 5432:5432) and that it would be accessible within the same network as our actual machine (--network host).

Installing the .NET 5 SDK

Ok, with that out of the way, let’s install .NET 5.

.NET 5 truly is multi-platform, so whatever environment you prefer to work with, they’ve got you covered. You can go to the .NET 5 download page and pick your desired flavor of the SDK.

On Ubuntu 20.10, which is what I’m running, installation is painless. It’s your typical process with APT and this page from the official docs has all the details.

First step is to add the Microsoft package repository:

$ wget https://packages.microsoft.com/config/ubuntu/20.10/packages-microsoft-prod.deb -O packages-microsoft-prod.deb

$ sudo dpkg -i packages-microsoft-prod.deb

Then, install .NET 5 with APT like one would any other software package:

$ sudo apt-get update; \
  sudo apt-get install -y apt-transport-https && \
  sudo apt-get update && \
  sudo apt-get install -y dotnet-sdk-5.0

Run dotnet --version in your console and you should see something like this:

$ dotnet --version
5.0.301

Setting up the project

Creating our ASP.NET Core REST API project

Ok now that we have our requirements, database and SDK, let’s start setting up our project. We do so with the following command:

$ dotnet new webapi -o VehicleQuotes

This instructs the dotnet command line tool to create a new REST API web application project for us in a new VehicleQuotes directory.

As a result, dotnet will print a message saying “The template ASP.NET Core Web API was created successfully.” and create a new directory with our web application files. The newly created VehicleQuotes project looks like this:

.
├── appsettings.Development.json
├── appsettings.json
├── Controllers
│   └── WeatherForecastController.cs
├── obj
│   ├── project.assets.json
│   ├── project.nuget.cache
│   ├── VehicleQuotes.csproj.nuget.dgspec.json
│   ├── VehicleQuotes.csproj.nuget.g.props
│   └── VehicleQuotes.csproj.nuget.g.targets
├── Program.cs
├── Properties
│   └── launchSettings.json
├── Startup.cs
├── VehicleQuotes.csproj
└── WeatherForecast.cs

Important things to note here are the appsettings.json and appsettings.Development.json files, which contain environment-specific configuration values; the Controllers directory, where we define our application controllers and action methods (i.e. our REST API endpoints); the Program.cs and Startup.cs files, which contain our application’s entry point and bootstrapping logic; and finally VehicleQuotes.csproj, which contains project-wide configuration that the framework cares about, like references, compilation targets, and other options. Feel free to explore.

The dotnet new command has given us quite a bit. These files make up a fully working application that we can run and play around with. It even has a Swagger UI, as I’ll demonstrate shortly. It’s a great place to get started from.

You can also get a pretty comprehensive .gitignore file by running the dotnet new gitignore command.

From inside the VehicleQuotes directory, you can run the application with:

$ dotnet run

Which will start up a development server and give out the following output:

$ dotnet run
Building...
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /home/kevin/projects/endpoint/blog/VehicleQuotes

Open up a browser window and go to https://localhost:5001/swagger to find a Swagger UI listing our API’s endpoints:

Initial Swagger UI

As you can see we’ve got a GET WeatherForecast endpoint in our app. This is included by default in the webapi project template that we specified in our call to dotnet new. You can see it defined in the Controllers/WeatherForecastController.cs file.

Installing packages we’ll need

Now let’s install all the tools and libraries we will need for our application. First, we install the ASP.NET Core code generator tool, which we’ll use later for scaffolding controllers:

$ dotnet tool install --global dotnet-aspnet-codegenerator

We also need to install the Entity Framework command line tools which help us with creating and applying database migrations:

$ dotnet tool install --global dotnet-ef

Now, we need to install a few libraries that we’ll use in our project. First are all the packages that allow us to use Entity Framework Core, provide scaffolding support and give us a detailed debugging page for database errors:

$ dotnet add package Microsoft.VisualStudio.Web.CodeGeneration.Design
$ dotnet add package Microsoft.EntityFrameworkCore.Design
$ dotnet add package Microsoft.EntityFrameworkCore.SqlServer
$ dotnet add package Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore

We also need the EF Core driver for PostgreSQL which will allow us to interact with our database:

$ dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL

Finally, we need another package that will allow us to use the snake case naming convention for our database tables, fields, etc. We need this because EF Core uses capitalized camel case by default, which is not very common in the PostgreSQL world, so this will allow us to play nice. This is the package:

$ dotnet add package EFCore.NamingConventions

Connecting to the database and performing initial app configuration

In order to connect to, query, and modify a database using EF Core, we need to create a DbContext. This is a class that serves as the entry point into the database. Create a new directory called Data in the project root and add this new VehicleQuotesContext.cs file to it:

using Microsoft.EntityFrameworkCore;

namespace VehicleQuotes
{
    public class VehicleQuotesContext : DbContext
    {
        public VehicleQuotesContext (DbContextOptions<VehicleQuotesContext> options)
            : base(options)
        {
        }
    }
}

As you can see this is just a simple class that inherits from EF Core’s DbContext class. That’s all we need for now. We will continue building on this class as we add new tables and configurations.

Now, we need to add this class into ASP.NET Core’s built-in IoC container so that it’s available to controllers and other classes via Dependency Injection, and tell it how to find our database. Go to Startup.cs and add the following using statement near the top of the file:

using Microsoft.EntityFrameworkCore;

That will allow us to do the following change in the ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{

    // ...

+    services.AddDbContext<VehicleQuotesContext>(options =>
+        options
+            .UseNpgsql(Configuration.GetConnectionString("VehicleQuotesContext"))
+    );
}

UseNpgsql is an extension method made available to us by the Npgsql.EntityFrameworkCore.PostgreSQL package that we installed in the previous step.

The services variable contains all the objects (known as “services”) that are available in the app for Dependency Injection. So here, we’re adding our newly created DbContext to it, specifying that it will connect to a PostgreSQL database (via the options.UseNpgsql call), and that it will use a connection string named VehicleQuotesContext from the app’s default configuration file. So let’s add the connection string then. To do so, change the appsettings.json like so:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
-  "AllowedHosts": "*"
+  "AllowedHosts": "*",
+  "ConnectionStrings": {
+      "VehicleQuotesContext": "Host=localhost;Database=vehicle_quote;Username=vehicle_quote;Password=password"
+  }
}

This is your typical PostgreSQL connection string. The only gotcha is that it needs to be specified under the ConnectionStrings -> VehicleQuotesContext section so that our call to Configuration.GetConnectionString can find it.

Now let’s put the EFCore.NamingConventions package to good use and configure EF Core to use snake case when naming database objects. Add the following to the ConfigureServices method in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // ...

    services.AddDbContext<VehicleQuotesContext>(options =>
        options
            .UseNpgsql(Configuration.GetConnectionString("VehicleQuotesContext"))
+           .UseSnakeCaseNamingConvention()
    );
}

UseSnakeCaseNamingConvention is an extension method made available to us by the EFCore.NamingConventions package that we installed in the previous step.

Now let’s make logging a little bit more verbose with:

public void ConfigureServices(IServiceCollection services)
{
    // ...

    services.AddDbContext<VehicleQuotesContext>(options =>
        options
            .UseNpgsql(Configuration.GetConnectionString("VehicleQuotesContext"))
            .UseSnakeCaseNamingConvention()
+           .UseLoggerFactory(LoggerFactory.Create(builder => builder.AddConsole()))
+           .EnableSensitiveDataLogging()
    );
}

This will make sure full database queries appear in the log in the console, including parameter values. This could expose sensitive data so be careful when using EnableSensitiveDataLogging in production.

We can also add the following service configuration to have the app display detailed error pages when something related to the database or migrations goes wrong:

public void ConfigureServices(IServiceCollection services)
{
    // ...

+   services.AddDatabaseDeveloperPageExceptionFilter();
}

AddDatabaseDeveloperPageExceptionFilter is an extension method made available to us by the Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore package that we installed in the previous step.

Finally, one last configuration I like to do is have the Swagger UI show up at the root URL, so that instead of using https://localhost:5001/swagger, we’re able to just use https://localhost:5001. We do so by updating the Configure method this time, in the same Startup.cs file that we’ve been working on:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseSwagger();
-       app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "VehicleQuotes v1"));
+       app.UseSwaggerUI(c => {
+           c.SwaggerEndpoint("/swagger/v1/swagger.json", "VehicleQuotes v1");
+           c.RoutePrefix = "";
+       });
    }

The magic is done by the c.RoutePrefix = ""; line which makes it so there’s no need to put any prefix in order to access the auto generated Swagger UI.

Try it out. Do dotnet run and navigate to https://localhost:5001 and you should see the Swagger UI there.

Building the application

Creating model entities, migrations and updating the database

Alright, with all that configuration out of the way, let’s implement some of our actual application logic now. Refer back to our data model. We’ll start by defining our three simplest tables: makes, sizes and body_types. With EF Core, we define tables via so-called POCO entities, which are simple C# classes with some properties. The classes become tables and the properties become the tables’ fields. Instances of these classes represent records in the database.

So, create a new Models directory in our project’s root and add these three files:

// Models/BodyType.cs
namespace VehicleQuotes.Models
{
    public class BodyType
    {
        public int ID { get; set; }
        public string Name { get; set; }
    }
}
// Models/Make.cs
namespace VehicleQuotes.Models
{
    public class Make
    {
        public int ID { get; set; }
        public string Name { get; set; }
    }
}
// Models/Size.cs
namespace VehicleQuotes.Models
{
    public class Size
    {
        public int ID { get; set; }
        public string Name { get; set; }
    }
}

Now, we add three corresponding DbSets to our DbContext in Data/VehicleQuotesContext.cs. Here’s the diff:

using Microsoft.EntityFrameworkCore;
+using VehicleQuotes.Models;

namespace VehicleQuotes
{
    public class VehicleQuotesContext : DbContext
    {
        public VehicleQuotesContext (DbContextOptions<VehicleQuotesContext> options)
            : base(options)
        {
        }

+       public DbSet<Make> Makes { get; set; }
+       public DbSet<Size> Sizes { get; set; }
+       public DbSet<BodyType> BodyTypes { get; set; }
    }
}

This is how we tell EF Core to build tables in our database for our entities. You’ll see later how we use those DbSets to access the data in those tables. For now, let’s create a migration script that we can later run to apply changes to our database. Run the following to have EF Core create it for us:

$ dotnet ef migrations add AddLookupTables

Now take a look at the newly created Migrations directory. It contains a few new files, but the one we care about right now is Migrations/{TIMESTAMP}_AddLookupTables.cs. In its Up method, it’s got some code that will modify the database structure when run. The EF Core tooling has inspected our project, identified the new entities, and automatically generated a migration script for us that creates tables for them. Notice also how the tables and fields use the snake case naming convention, just as we specified with the call to UseSnakeCaseNamingConvention in Startup.cs.
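
For reference, the generated Up method contains code along these lines. This is a trimmed-down sketch, not the exact generated output, which will vary a bit depending on the provider and tooling versions:

protected override void Up(MigrationBuilder migrationBuilder)
{
    // One CreateTable call is generated per new entity. Note the snake case
    // table, column, and constraint names.
    migrationBuilder.CreateTable(
        name: "makes",
        columns: table => new
        {
            id = table.Column<int>(type: "integer", nullable: false)
                .Annotation("Npgsql:ValueGenerationStrategy", NpgsqlValueGenerationStrategy.IdentityByDefaultColumn),
            name = table.Column<string>(type: "text", nullable: true)
        },
        constraints: table =>
        {
            table.PrimaryKey("pk_makes", x => x.id);
        });

    // ... similar CreateTable calls for body_types and sizes ...
}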

Now, to actually run the migration script and apply the changes to the database, we do:

$ dotnet ef database update

That command inspects our project to find any migrations that haven’t been run yet, and applies them. In this case, we only have one, so that’s what it runs. Look at the output in the console to see it working its magic step by step:

$ dotnet ef database update
Build started...
Build succeeded.
warn: Microsoft.EntityFrameworkCore.Model.Validation[10400]
      Sensitive data logging is enabled. Log entries and exception messages may include sensitive application data; this mode should only be enabled during development.

...

info: Microsoft.EntityFrameworkCore.Database.Command[20101]
      Executed DbCommand (4ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
      CREATE TABLE sizes (
          id integer GENERATED BY DEFAULT AS IDENTITY,
          name text NULL,
          CONSTRAINT pk_sizes PRIMARY KEY (id)
      );
info: Microsoft.EntityFrameworkCore.Database.Command[20101]
      Executed DbCommand (1ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
      INSERT INTO "__EFMigrationsHistory" (migration_id, product_version)
      VALUES ('20210625212939_AddLookupTables', '5.0.7');
Done.

Notice how it warns us about potential exposure of sensitive data because of the EnableSensitiveDataLogging option we opted into in Startup.cs. Also, the EF Core logs are extra verbose, showing all database operations, because of another configuration option that we applied there: UseLoggerFactory(LoggerFactory.Create(builder => builder.AddConsole())).

You can connect to the database with the psql command line client and see that the changes took effect:

$ psql -h localhost -U vehicle_quote

...

vehicle_quote=# \c vehicle_quote 
psql (12.7 (Ubuntu 12.7-0ubuntu0.20.10.1), server 13.2 (Debian 13.2-1.pgdg100+1))
You are now connected to database "vehicle_quote" as user "vehicle_quote".
vehicle_quote=# \dt
                   List of relations
 Schema |         Name          | Type  |     Owner     
--------+-----------------------+-------+---------------
 public | __EFMigrationsHistory | table | vehicle_quote
 public | body_types            | table | vehicle_quote
 public | makes                 | table | vehicle_quote
 public | sizes                 | table | vehicle_quote
(4 rows)

vehicle_quote=# \d makes
                            Table "public.makes"
 Column |  Type   | Collation | Nullable |             Default              
--------+---------+-----------+----------+----------------------------------
 id     | integer |           | not null | generated by default as identity
 name   | text    |           |          | 
Indexes:
    "pk_makes" PRIMARY KEY, btree (id)

There are our tables in all their normalized, snake cased glory. The __EFMigrationsHistory table is used internally by EF Core to keep track of which migrations have been applied.

Creating controllers for CRUDing our tables

Now that we have that, let’s add a few endpoints to support basic CRUD of those tables. We can use the dotnet-aspnet-codegenerator scaffolding tool that we installed earlier. For the three tables that we have, we would do:

$ dotnet aspnet-codegenerator controller \
    -name MakesController \
    -m Make \
    -dc VehicleQuotesContext \
    -async \
    -api \
    -outDir Controllers

$ dotnet aspnet-codegenerator controller \
    -name BodyTypesController \
    -m BodyType \
    -dc VehicleQuotesContext \
    -async \
    -api \
    -outDir Controllers

$ dotnet aspnet-codegenerator controller \
    -name SizesController \
    -m Size \
    -dc VehicleQuotesContext \
    -async \
    -api \
    -outDir Controllers

Those commands tell the scaffolding tool to create new controllers that:

  1. Are named as given by the -name option.
  2. Use the model class specified in the -m option.
  3. Use our VehicleQuotesContext to talk to the database. As per the -dc option.
  4. Define the methods using async/await syntax. Given by the -async option.
  5. Are API controllers. Specified by the -api option.
  6. Are created in the Controllers directory. Via the -outDir option.

Explore the new files that got created in the Controllers directory: MakesController.cs, BodyTypesController.cs and SizesController.cs. The controllers have been generated with the necessary Action Methods to fetch, create, update and delete their corresponding entities. Try dotnet run and navigate to https://localhost:5001 to see the new endpoints in the Swagger UI:

Swagger UI with lookup tables

Try it out! You can interact with each of the endpoints from the Swagger UI and it all works as you’d expect.
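
If you’re curious about what the scaffolding actually produced, here’s a trimmed-down sketch of Controllers/MakesController.cs. The other two controllers look the same, just for their respective entities, and the real files also include the PUT, POST and DELETE action methods plus the necessary using statements:

namespace VehicleQuotes.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class MakesController : ControllerBase
    {
        private readonly VehicleQuotesContext _context;

        // The DbContext we registered in Startup.cs gets injected via the constructor.
        public MakesController(VehicleQuotesContext context)
        {
            _context = context;
        }

        // GET: api/Makes
        [HttpGet]
        public async Task<ActionResult<IEnumerable<Make>>> GetMakes()
        {
            return await _context.Makes.ToListAsync();
        }

        // GET: api/Makes/5
        [HttpGet("{id}")]
        public async Task<ActionResult<Make>> GetMake(int id)
        {
            var make = await _context.Makes.FindAsync(id);

            if (make == null)
            {
                return NotFound();
            }

            return make;
        }

        // PutMake, PostMake and DeleteMake follow the same pattern...
    }
}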

Adding unique constraints via indexes

Ok, our app is coming along well. Right now though, there’s an issue with the tables that we’ve created. It’s possible to create vehicle makes with the same name. The same is true for body types and sizes. This doesn’t make much sense for these tables. So let’s fix that by adding a uniqueness constraint. We can do it by creating a unique database index using EF Core’s Index attribute. For example, we can modify our Models/Make.cs like so:

+using Microsoft.EntityFrameworkCore;

namespace VehicleQuotes.Models
{
+   [Index(nameof(Name), IsUnique = true)]
    public class Make
    {
        public int ID { get; set; }
        public string Name { get; set; }
    }
}

In fact, do the same for our other entities in Models/BodyType.cs and Models/Size.cs. Don’t forget the using Microsoft.EntityFrameworkCore statement.

With that, we can create a new migration:

$ dotnet ef migrations add AddUniqueIndexesToLookupTables

That will result in a new migration script in Migrations/{TIMESTAMP}_AddUniqueIndexesToLookupTables.cs. Its Up method looks like this:

protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.CreateIndex(
        name: "ix_sizes_name",
        table: "sizes",
        column: "name",
        unique: true);

    migrationBuilder.CreateIndex(
        name: "ix_makes_name",
        table: "makes",
        column: "name",
        unique: true);

    migrationBuilder.CreateIndex(
        name: "ix_body_types_name",
        table: "body_types",
        column: "name",
        unique: true);
}

As you can see, new unique indexes are being created on the tables and fields that we specified. Like before, apply the changes to the database structure with:

$ dotnet ef database update

Now if you try to create, for example, a vehicle make with a repeated name, you’ll get an error. Try doing so by POSTing to /api/Makes via the Swagger UI:

Unique constraint violation

Responding with specific HTTP error codes (409 Conflict)

The fact that we can now enforce unique constraints is all well and good. But the error scenario is not very user friendly. Instead of returning a “500 Internal Server Error” status code with a wall of text, we should be responding with something more sensible. Maybe a “409 Conflict” would be more appropriate for this kind of error. We can easily update our controllers to handle that scenario. What we need to do is update the methods that handle the POST and PUT endpoints so that they catch the Microsoft.EntityFrameworkCore.DbUpdateException exception and return the proper response. Here’s how we would do it for the MakesController:

// ...
namespace VehicleQuotes.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class MakesController : ControllerBase
    {
        // ...

        [HttpPut("{id}")]
        public async Task<IActionResult> PutMake(int id, Make make)
        {
            // ...

            try
            {
                await _context.SaveChangesAsync();
            }
            // ...
+           catch (Microsoft.EntityFrameworkCore.DbUpdateException)
+           {
+               return Conflict();
+           }

            return NoContent();
        }

        [HttpPost]
        public async Task<ActionResult<Make>> PostMake(Make make)
        {
            _context.Makes.Add(make);
-           await _context.SaveChangesAsync();

+           try
+           {
+               await _context.SaveChangesAsync();
+           }
+           catch (Microsoft.EntityFrameworkCore.DbUpdateException)
+           {
+               return Conflict();
+           }

            return CreatedAtAction("GetMake", new { id = make.ID }, make);
        }

        // ...
    }
}

Go ahead and do the same for the other two controllers, and try again to POST a repeated make name via the Swagger UI. You should see this now instead:

HTTP 409 Conflict

Much better now, don’t you think?

Adding a more complex entity to the model

Now let’s work on an entity that’s a little bit more complex: the one we will use to represent vehicle models.

For this entity, we don’t want our API to be as low level as the one for the other three, where it was basically a thin wrapper over database tables. We want it to be a little bit more abstract and not expose the entire database structure verbatim.

Refer back to the data model. We’ll add models, model_styles and model_style_years. Let’s start by adding the following classes:

// Models/Model.cs
using System.Collections.Generic;

namespace VehicleQuotes.Models
{
    public class Model
    {
        public int ID { get; set; }
        public string Name { get; set; }
        public int MakeID { get; set; }

        public Make Make { get; set; }

        public ICollection<ModelStyle> ModelStyles { get; set; }
    }
}
// Models/ModelStyle.cs
using System.Collections.Generic;

namespace VehicleQuotes.Models
{
    public class ModelStyle
    {
        public int ID { get; set; }
        public int ModelID { get; set; }
        public int BodyTypeID { get; set; }
        public int SizeID { get; set; }

        public Model Model { get; set; }
        public BodyType BodyType { get; set; }
        public Size Size { get; set; }

        public ICollection<ModelStyleYear> ModelStyleYears { get; set; }
    }
}
// Models/ModelStyleYear.cs
namespace VehicleQuotes.Models
{
    public class ModelStyleYear
    {
        public int ID { get; set; }
        public string Year { get; set; }
        public int ModelStyleID { get; set; }

        public ModelStyle ModelStyle { get; set; }
    }
}

Notice how some of these entities now include properties whose types are other entities. Some of them are collections even. These are called Navigation Properties and are how we tell EF Core that our entities are related to one another. These will result in foreign keys being created in the database.

Take the Model entity for example. It has a property Make of type Make. It also has a MakeID property of type int. EF Core sees this and figures out that there’s a relation between the makes and models tables. Specifically, that models have a make. A many-to-one relation where the models table stores a foreign key to the makes table.

Similarly, the Model entity has a ModelStyles property of type ICollection<ModelStyle>. This tells EF Core that models have many model_styles. This one is a one-to-many relation from the perspective of the models table. The foreign key lives in the model_styles table and points back to models.
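
EF Core picks all of this up purely by convention, based on the property names and types, so we don’t need any extra configuration in our project. Just for illustration, though, the same relationships could be spelled out explicitly with the Fluent API by overriding OnModelCreating in our VehicleQuotesContext. A sketch of what that would look like:

// Inside the VehicleQuotesContext class; purely illustrative, we rely on conventions instead.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // models.make_id is a foreign key pointing to makes: a make has many models.
    modelBuilder.Entity<Model>()
        .HasOne(m => m.Make)
        .WithMany()
        .HasForeignKey(m => m.MakeID);

    // model_styles.model_id is a foreign key pointing to models: a model has many styles.
    modelBuilder.Entity<Model>()
        .HasMany(m => m.ModelStyles)
        .WithOne(ms => ms.Model)
        .HasForeignKey(ms => ms.ModelID);
}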

The official documentation is a great resource to learn more details about how relationships work in EF Core.

After that, same as before, we have to add the corresponding DbSets to our DbContext:

// ...

namespace VehicleQuotes
{
    public class VehicleQuotesContext : DbContext
    {
        // ...

+       public DbSet<Model> Models { get; set; }
+       public DbSet<ModelStyle> ModelStyles { get; set; }
+       public DbSet<ModelStyleYear> ModelStyleYears { get; set; }
    }
}

Don’t forget the migration script. First create it:

$ dotnet ef migrations add AddVehicleModelTables

And then apply it:

$ dotnet ef database update

Adding composite unique indexes

These vehicle model related tables also need some uniqueness enforcement. This time, however, the unique keys are composite, meaning that they involve multiple fields. For vehicle models, for example, it makes no sense to have multiple records with the same make and name. But it does make sense to have multiple models with the same name, as long as they belong to different makes. We can solve that with a composite index. Here’s how we create one of those with EF Core:

using System.Collections.Generic;
+using Microsoft.EntityFrameworkCore;

namespace VehicleQuotes.Models
{
+   [Index(nameof(Name), nameof(MakeID), IsUnique = true)]
    public class Model
    {
        // ...
    }
}

Very similar to what we did with the Make, BodyType, and Size entities. The only difference is that this time we included multiple fields in the parameters for the Index attribute.

We should do the same for ModelStyle and ModelStyleYear:

using System.Collections.Generic;
+using Microsoft.EntityFrameworkCore;

namespace VehicleQuotes.Models
{
+   [Index(nameof(ModelID), nameof(BodyTypeID), nameof(SizeID), IsUnique = true)]
    public class ModelStyle
    {
        // ...
    }
}
+using Microsoft.EntityFrameworkCore;

namespace VehicleQuotes.Models
{
+   [Index(nameof(Year), nameof(ModelStyleID), IsUnique = true)]
    public class ModelStyleYear
    {
        // ...
    }
}

Don’t forget the migrations:

$ dotnet ef migrations add AddUniqueIndexesForVehicleModelTables

$ dotnet ef database update

Adding controllers with custom routes

Our data model dictates that vehicle models belong to a make. In other words, a vehicle model has no meaning by itself. It only has meaning within the context of a make. Ideally, we want our API routes to reflect this concept. That is, instead of URLs for models looking like this: /api/Models/{id}; we’d rather they look like this: /api/Makes/{makeId}/Models/{modelId}. Let’s go ahead and scaffold a controller for this entity:

$ dotnet aspnet-codegenerator controller \
    -name ModelsController \
    -m Model \
    -dc VehicleQuotesContext \
    -async \
    -api \
    -outDir Controllers

Now let’s change the resulting Controllers/ModelsController.cs to use the URL structure that we want. To do so, we modify the Route attribute that’s applied to the ModelsController class to this:

[Route("api/Makes/{makeId}/[controller]/")]

Do a dotnet run and take a peek at the Swagger UI on https://localhost:5001 to see what the Models endpoint routes look like now:

Nested routes

The vehicle model routes are now nested within makes, just like we wanted.

Of course, this is just eye candy for now. We need to actually use this new makeId parameter in the logic of the endpoints. For example, one would expect a GET to /api/Makes/1/Models to return all the vehicle models that belong to the make with id 1. But right now, all vehicle models are returned regardless. All the other endpoints behave similarly: they operate on vehicle models without taking the given makeId into consideration at all.

Let’s update the ModelsController’s GetModels method (which is the one that handles the GET /api/Makes/{makeId}/Models endpoint) to behave like one would expect. It should look like this:

[HttpGet]
public async Task<ActionResult<IEnumerable<Model>>> GetModels([FromRoute] int makeId)
{
    var make = await _context.Makes.FindAsync(makeId);

    if (make == null)
    {
        return NotFound();
    }

    return await _context.Models.Where(m => m.MakeID == makeId).ToListAsync();
}

See how we’ve added a new parameter to the method: [FromRoute] int makeId. This [FromRoute] attribute is how we tell ASP.NET Core that this endpoint will use the makeId parameter coming from the URL route. Then, we use our DbContext to try to find the make that corresponds to the given identifier. This is done in _context.Makes.FindAsync(makeId). If we can’t find the given make, we return a 404 Not Found HTTP status code, as per the return NotFound(); line. Finally, we query the models table for all the records whose make_id matches the given parameter. That’s done in the last line of the method.

We have access to the DbContext because it has been injected as a dependency into the controller via its constructor by the framework.
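
That injection is nothing exotic. The scaffolded controller already came with a constructor that looks roughly like this:

public class ModelsController : ControllerBase
{
    private readonly VehicleQuotesContext _context;

    // ASP.NET Core's IoC container resolves the VehicleQuotesContext that we registered in
    // Startup.ConfigureServices and passes it in when it instantiates the controller.
    public ModelsController(VehicleQuotesContext context)
    {
        _context = context;
    }

    // ...
}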

The official documentation is a great resource to learn about all the possibilities when querying data with EF Core.

Let’s update the GetModel method, which handles the GET /api/Makes/{makeId}/Models/{id} endpoint, similarly.

[HttpGet("{id}")]
-public async Task<ActionResult<Model>> GetModel(int id)
+public async Task<ActionResult<Model>> GetModel([FromRoute] int makeId, int id)
{
-   var model = await _context.Models.FindAsync(id);
+   var model = await _context.Models.FirstOrDefaultAsync(m =>
+       m.MakeID == makeId && m.ID == id
+   );

    if (model == null)
    {
        return NotFound();
    }

    return model;
}

We’ve once again included the makeId as a parameter to the method and modified the EF Core query to use both the make ID and the vehicle model ID when looking for the record.

And that’s the gist of it. The other methods would need to be updated similarly. The next section will include these methods in their final form, so I won’t go through each one of them here.

Using resource models as DTOs for controllers

Now, I did say at the beginning that we wanted the vehicle model endpoints to be a bit more abstract. Right now they operate directly over the EF Core entities and our tables. As a result, creating new vehicle models via the POST /api/Makes/{makeId}/Models endpoint is a pain. Take a look at the Swagger UI request schema for that endpoint:

Raw model request schema

This is way too much. Let’s make it a little bit more user friendly by making it more abstract.

To do that, we will introduce what I like to call a Resource Model (or DTO, or View Model). This is a class whose only purpose is to streamline the API contract of the endpoint by defining a set of fields that clients will use to make requests and interpret responses. Something that’s simpler than our actual database structure, but still captures all the information that’s important for our application. We will update the ModelsController so that it’s able to receive objects of this new class as requests, operate on them, translate them to our EF Core entities and actual database records, and return them as a response. The hope is that, by hiding the details of our database structure, we make it easier for clients to interact with our API.

So let’s create a new ResourceModels directory in our project’s root and add these two classes:

// ResourceModels/ModelSpecification.cs
namespace VehicleQuotes.ResourceModels
{
    public class ModelSpecification
    {
        public int ID { get; set; }
        public string Name { get; set; }

        public ModelSpecificationStyle[] Styles { get; set; }
    }
}
// ResourceModels/ModelSpecificationStyle.cs
namespace VehicleQuotes.ResourceModels
{
    public class ModelSpecificationStyle
    {
        public string BodyType { get; set; }
        public string Size { get; set; }

        public string[] Years { get; set; }
    }
}

Thanks to these two, instead of that mess from above, clients POSTing to /api/Makes/{makeId}/Models will be able to use a request body like this:

{
  "name": "string",
  "styles": [
    {
      "bodyType": "string",
      "size": "string",
      "years": [
        "string"
      ]
    }
  ]
}

Which is much simpler. We have the vehicle model name and an array of styles. Each style has a body type and a size, which we can specify by their names because those are unique keys. We don’t need their integer IDs (i.e. primary keys) in order to find them. Then, each style has an array of strings that contain the years in which those styles are available for that model. The make is part of the URL already, so we don’t need to also specify it in the request payload.

Let’s update our ModelsController to use these Resource Models instead of the Model EF Core entity. Be sure to include the namespace where the Resource Models are defined by adding the following using statement: using VehicleQuotes.ResourceModels;. Now, let’s update the GetModels method (which handles the GET /api/Makes/{makeId}/Models endpoint) so that it looks like this:

[HttpGet]
// Return a collection of `ModelSpecification`s and expect a `makeId` from the URL.
public async Task<ActionResult<IEnumerable<ModelSpecification>>> GetModels([FromRoute] int makeId)
{
    // Look for the make identified by `makeId`.
    var make = await _context.Makes.FindAsync(makeId);

    // If we can't find the make, then we return a 404.
    if (make == null)
    {
        return NotFound();
    }

    // Build a query to fetch the relevant records from the `models` table and
    // build `ModelSpecification` with the data.
    var modelsToReturn = _context.Models
        .Where(m => m.MakeID == makeId)
        .Select(m => new ModelSpecification {
            ID = m.ID,
            Name = m.Name,
            Styles = m.ModelStyles.Select(ms => new ModelSpecificationStyle {
                BodyType = ms.BodyType.Name,
                Size = ms.Size.Name,
                Years = ms.ModelStyleYears.Select(msy => msy.Year).ToArray()
            }).ToArray()
        });

    // Execute the query and respond with the results.
    return await modelsToReturn.ToListAsync();
}

The first thing that we changed was the return type. Instead of Task<ActionResult<IEnumerable<Model>>>, the method now returns Task<ActionResult<IEnumerable<ModelSpecification>>>. We’re going to use our new Resource Models as these endpoints’ contract, so we need to make sure we are returning those. Next, we considerably changed the LINQ expression that searches the database for the vehicle model records we want. The filtering logic (given by the Where) is the same. That is, we’re still searching for vehicle models within the given make ID. What we changed was the projection logic in the Select. Our Action Method now returns a collection of ModelSpecification objects, so we updated the Select to produce such objects, based on the records from the models table that match our search criteria. We build ModelSpecifications using the data coming from models records and their related model_styles and model_style_years. Finally, we asynchronously execute the query to fetch the data from the database and return it.

Next, let’s move on to the GetModel method, which handles the GET /api/Makes/{makeId}/Models/{id} endpoint. This is what it should look like:

[HttpGet("{id}")]
// Return a `ModelSpecification`s and expect `makeId` and `id` from the URL.
public async Task<ActionResult<ModelSpecification>> GetModel([FromRoute] int makeId, [FromRoute] int id)
{
    // Look for the model specified by the given identifiers and also load
    // all related data that we care about for this method.
    var model = await _context.Models
        .Include(m => m.ModelStyles).ThenInclude(ms => ms.BodyType)
        .Include(m => m.ModelStyles).ThenInclude(ms => ms.Size)
        .Include(m => m.ModelStyles).ThenInclude(ms => ms.ModelStyleYears)
        .FirstOrDefaultAsync(m => m.MakeID == makeId && m.ID == id);

    // If we couldn't find it, respond with a 404.
    if (model == null)
    {
        return NotFound();
    }

    // Use the fetched data to construct a `ModelSpecification` to use in the response.
    return new ModelSpecification {
        ID = model.ID,
        Name = model.Name,
        Styles = model.ModelStyles.Select(ms => new ModelSpecificationStyle {
            BodyType = ms.BodyType.Name,
            Size = ms.Size.Name,
            Years = ms.ModelStyleYears.Select(msy => msy.Year).ToArray()
        }).ToArray()
    };
}

Same as before, we changed the return type of the method to be ModelSpecification. Then, we modified the query so that it loads all the related data for the Model entity via its navigation properties. That’s what the Include and ThenInclude calls do. We need this data loaded because we use it in the method’s return statement to build the ModelSpecification that will be included in the response. The logic to build it is very similar to that of the previous method.

You can learn more about the various available approaches for loading data with EF Core in the official documentation.

Next is the PUT /api/Makes/{makeId}/Models/{id} endpoint, handled by the PutModel method:

[HttpPut("{id}")]
// Expect `makeId` and `id` from the URL and a `ModelSpecification` from the request payload.
public async Task<IActionResult> PutModel([FromRoute] int makeId, int id, ModelSpecification model)
{
    // If the id in the URL and the request payload are different, return a 400.
    if (id != model.ID)
    {
        return BadRequest();
    }

    // Obtain the `models` record that we want to update. Include any related
    // data that we want to update as well.
    var modelToUpdate = await _context.Models
        .Include(m => m.ModelStyles)
        .FirstOrDefaultAsync(m => m.MakeID == makeId && m.ID == id);

    // If we can't find the record, then return a 404.
    if (modelToUpdate == null)
    {
        return NotFound();
    }

    // Update the record with what came in the request payload.
    modelToUpdate.Name = model.Name;

    // Build EF Core entities based on the incoming Resource Model object.
    modelToUpdate.ModelStyles = model.Styles.Select(style => new ModelStyle {
        BodyType = _context.BodyTypes.Single(bodyType => bodyType.Name == style.BodyType),
        Size = _context.Sizes.Single(size => size.Name == style.Size),

        ModelStyleYears = style.Years.Select(year => new ModelStyleYear {
            Year = year
        }).ToList()
    }).ToList();

    try
    {
        // Try saving the changes. This will run the UPDATE statement in the database.
        await _context.SaveChangesAsync();
    }
    catch (Microsoft.EntityFrameworkCore.DbUpdateException)
    {
        // If there's an error updating, respond accordingly.
        return Conflict();
    }

    // Finally return a 204 if everything went well.
    return NoContent();
}

The purpose of this endpoint is to update existing resources. So, it receives a representation of said resource as a parameter that comes from the request body. Before, it expected an instance of the Model entity, but now, we’ve changed it to receive a ModelSpecification. The rest of the method is your usual structure of first obtaining the record to update by the given IDs, then changing its values according to what came in as a parameter, and finally, saving the changes.

You probably get the idea by now: since the API is using the Resource Model, we need to change input and output values for the methods and run some logic to translate between Resource Model objects and Data Model objects that EF Core can understand so that it can perform its database operations.

That said, here’s what the PostModel Action Method, handler of the POST /api/Makes/{makeId}/Models endpoint, should look like:

[HttpPost]
// Return a `ModelSpecification`s and expect `makeId` from the URL and a `ModelSpecification` from the request payload.
public async Task<ActionResult<ModelSpecification>> PostModel([FromRoute] int makeId, ModelSpecification model)
{
    // First, try to find the make specified by the incoming `makeId`.
    var make = await _context.Makes.FindAsync(makeId);

    // Respond with 404 if not found.
    if (make == null)
    {
        return NotFound();
    }

    // Build out a new `Model` entity, complete with all related data, based on
    // the `ModelSpecification` parameter.
    var modelToCreate = new Model {
        Make = make,
        Name = model.Name,

        ModelStyles = model.Styles.Select(style => new ModelStyle {
            // Notice how we search both body type and size by their name field.
            // We can do that because their names are unique.
            BodyType = _context.BodyTypes.Single(bodyType => bodyType.Name == style.BodyType),
            Size = _context.Sizes.Single(size => size.Name == style.Size),

            ModelStyleYears = style.Years.Select(year => new ModelStyleYear {
                Year = year
            }).ToArray()
        }).ToArray()
    };

    // Add it to the DbContext.
    _context.Add(modelToCreate);

    try
    {
        // Try running the INSERTs.
        await _context.SaveChangesAsync();
    }
    catch (Microsoft.EntityFrameworkCore.DbUpdateException)
    {
        // Return accordingly if an error happens.
        return Conflict();
    }

    // Get back the autogenerated ID of the record we just INSERTed.
    model.ID = modelToCreate.ID;

    // Finally, return a 201 including a location header containing the newly
    // created resource's URL and the resource itself in the response payload.
    return CreatedAtAction(
        nameof(GetModel),
        new { makeId = makeId, id = model.ID },
        model
    );
}

All that should be pretty self-explanatory by now. Moving on to the DeleteModel method, which handles the DELETE /api/Makes/{makeId}/Models/{id} endpoint:

[HttpDelete("{id}")]
// Expect `makeId` and `id` from the URL.
public async Task<IActionResult> DeleteModel([FromRoute] int makeId, int id)
{
    // Try to find the record identified by the ids from the URL.
    var model = await _context.Models.FirstOrDefaultAsync(m => m.MakeID == makeId && m.ID == id);

    // Respond with a 404 if we can't find it.
    if (model == null)
    {
        return NotFound();
    }

    // Mark the entity for removal and run the DELETE.
    _context.Models.Remove(model);
    await _context.SaveChangesAsync();

    // Respond with a 204.
    return NoContent();
}

And that’s all for that controller. Hopefully that demonstrated what it looks like to have endpoints that operate using objects other than the EF Core entities. Fire up the app with dotnet run and explore the Swagger UI and you’ll see the changes that we’ve made reflected in there. Try it out. Try CRUDing some vehicle models. And don’t forget to take a look at our POST endpoint specification which looks much more manageable now:

POST Models endpoint

Which means that you can send in something like this, for example:

{
    "name": "Corolla",
    "styles": [
        {
            "bodyType": "Sedan",
            "size": "Compact",
            "years": [ "2000", "2001" ]
        }
    ]
}

This will work assuming you’ve created at least one make to add the vehicle model to, as well as a body type whose name is Sedan and a size whose name is Compact.

There’s also a ModelExists method in that controller which we don’t need anymore. You can delete it.

Validation using built-in Data Annotations

Depending on how “creative” you were in the previous section when trying to CRUD models, you may have run into an issue or two regarding the data that’s allowed into our database. We solve that by implementing input validation. In ASP.NET Core, the easiest way to implement validation is via Data Annotation attributes on the entities or other objects that controllers receive as request payloads. So let’s see about adding some validation to our app. Since our ModelsController uses the ModelSpecification and ModelSpecificationStyle Resource Models to talk to clients, let’s start there. Here’s the diff:

+using System.ComponentModel.DataAnnotations;

namespace VehicleQuotes.ResourceModels
{
    public class ModelSpecification
    {
        public int ID { get; set; }
+       [Required]
        public string Name { get; set; }

+       [Required]
        public ModelSpecificationStyle[] Styles { get; set; }
    }
}
+using System.ComponentModel.DataAnnotations;

namespace VehicleQuotes.ResourceModels
{
    public class ModelSpecificationStyle
    {
+       [Required]
        public string BodyType { get; set; }
+       [Required]
        public string Size { get; set; }

+       [Required]
+       [MinLength(1)]
        public string[] Years { get; set; }
    }
}

And just like that, we get a good amount of functionality. We use the Required and MinLength attributes from the System.ComponentModel.DataAnnotations namespace to specify that some fields are required, and that our Years array needs to contain at least one element. When the app receives a request to the PUT or POST endpoints — which are the ones that expect a ModelSpecification as the payload — validation kicks in. If it fails, the action method is never executed and a 400 status code is returned as a response. Try POSTing to /api/Makes/{makeId}/Models with a payload that violates some of these rules to see for yourself. I tried for example sending this:

{
  "name": null,
  "styles": [
    {
      "bodyType": "Sedan",
      "size": "Full size",
      "years": []
    }
  ]
}

And I got back a 400 response with this payload:

{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "traceId": "00-0fd4f00eeb9f2f458ccefc180fcfba1c-79a618f13218394b-00",
  "errors": {
    "Name": [
      "The Name field is required."
    ],
    "Styles[0].Years": [
      "The field Years must be a string or array type with a minimum length of '1'."
    ]
  }
}

Pretty neat, huh? With minimal effort, we have some basic validation rules in place and a pretty usable response for when errors occur.

To learn more about model validation, including all the various validation attributes included in the framework, check the official documentation: Model validation and System.ComponentModel.DataAnnotations Namespace.

Validation using custom attributes

Of course, the framework is never going to cover all possible validation scenarios with the built-in attributes. Case in point, it’d be great to validate that the Years array contains values that look like actual years. That is, four-character, digit-only strings. There are no validation attributes for that. So, we need to create our own. Let’s add this file into a new Validations directory:

// Validations/ContainsYearsAttribute.cs
using System;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Runtime.CompilerServices;

namespace VehicleQuotes.Validation
{
    // In the .NET Framework, attribute classes need to have their name suffixed with the word "Attribute".
    // Validation attributes need to inherit from `System.ComponentModel.DataAnnotations`'s `ValidationAttribute` class
    // and override the `IsValid` method.
    public class ContainsYearsAttribute : ValidationAttribute
    {
        private string propertyName;

        // This constructor is called by the framework when the attribute is applied to some member. In this specific
        // case, we define a `propertyName` parameter annotated with a `CallerMemberName` attribute. This makes it so
        // the framework sends in the name of the member to which our `ContainsYears` attribute is applied.
        // We store the value to use it later when constructing our validation error message.
        // Check https://docs.microsoft.com/en-us/dotnet/api/system.runtime.compilerservices.callermembernameattribute?view=net-5.0
        // for more info on `CallerMemberName`.
        public ContainsYearsAttribute([CallerMemberName] string propertyName = null)
        {
            this.propertyName = propertyName;
        }

        // This method is called by the framework during validation. `value` is the actual value of the field that this
        // attribute will validate.
        protected override ValidationResult IsValid(object value, ValidationContext validationContext)
        {
            // By only applying the validation checks when the value is not null, we make it possible for this
            // attribute to work on optional fields. In other words, this attribute will skip validation if there is no
            // value to validate.
            if (value != null)
            {
                // Check if all the elements of the string array are valid years. Check the `IsValidYear` method below
                // to see what checks are applied for each of the array elements.
                var isValid = (value as string[]).All(IsValidYear);

                if (!isValid)
                {
                    // If not, return an error.
                    return new ValidationResult(GetErrorMessage());
                }
            }

            // Return a successful validation result if no errors were detected.
            return ValidationResult.Success;
        }

        // Determines if a given value is valid by making sure it's not null or empty, that its length is 4, and that
        // all its characters are digits.
        private bool IsValidYear(string value) =>
            !String.IsNullOrEmpty(value) && value.Length == 4 && value.All(Char.IsDigit);

        // Builds a user friendly error message which includes the name of the field that this validation attribute has
        // been applied to.
        private string GetErrorMessage() =>
            $"The {propertyName} field must be an array of strings containing four numbers.";
    }
}

Check the comments in the code for more details on how that class works. Then, we apply our custom attribute to our ModelSpecificationStyle class in the same way that we applied the built-in ones:

using System.ComponentModel.DataAnnotations;
+using VehicleQuotes.Validation;

namespace VehicleQuotes.ResourceModels
{
    public class ModelSpecificationStyle
    {
        // ...

        [Required]
        [MinLength(1)]
+       [ContainsYears]
        public string[] Years { get; set; }
    }
}

Now do a dotnet run and try to POST this payload:

{
  "name": "Rav4",
  "styles": [
    {
      "bodyType": "SUV",
      "size": "Mid size",
      "years": [ "not_a_year" ]
    }
  ]
}

That should make the API respond with this:

{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "traceId": "00-9980325f3e388f48a5975ef382d5b137-2d55da1bb9613e4f-00",
  "errors": {
    "Styles[0].Years": [
      "The Years field must be an array of strings containing four numbers."
    ]
  }
}

That’s our custom validation attribute doing its job.

There’s another aspect that we could validate using a custom validation attribute. What happens if we try to POST a payload with a body type or size that doesn’t exist? These queries from the PostModel method would throw an InvalidOperationException:

BodyType = _context.BodyTypes.Single(bodyType => bodyType.Name == style.BodyType)

and

Size = _context.Sizes.Single(size => size.Name == style.Size)

They do so because we used the Single method, which throws when it can't find a matching record. It tries to find a body type or size whose name is the given value, can't find one, and thus throws an exception.

If, for example, we wanted not-founds to return null, we could have used SingleOrDefault instead.
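
For illustration, here's a rough sketch of what that alternative could look like inside PostModel. This isn't the approach we'll take (we'll use validation attributes instead), and the variable name and response message here are mine, not the repo's:

// A hypothetical variation of the lookup inside PostModel, using SingleOrDefault so that
// a missing body type produces a controlled 400 response instead of an unhandled exception.
var bodyType = _context.BodyTypes.SingleOrDefault(bt => bt.Name == style.BodyType);

if (bodyType == null)
{
    // Short-circuit with a 400 Bad Request that explains the problem.
    return BadRequest($"Unknown body type: {style.BodyType}");
}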

Left unhandled, though, that exception results in a response that's quite unbecoming:

InvalidOperationException during POST

So, to prevent that exception and control the error messaging, we need a couple of new validation attributes that query the body_types and sizes tables and check whether the given values exist. Here's what one would look like:

// Validations/VehicleBodyTypeAttribute.cs
using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;
using System.Linq;

namespace VehicleQuotes.Validation
{
    public class VehicleBodyTypeAttribute : ValidationAttribute
    {
        protected override ValidationResult IsValid(object value, ValidationContext validationContext)
        {
            // Skip validation when there's no value, so that the attribute also works on optional fields.
            if (value == null) return ValidationResult.Success;

            // Use the ValidationContext to get the VehicleQuotesContext instance out of the dependency
            // injection container, so that we can query the database from within the attribute.
            var dbContext = validationContext.GetService(typeof(VehicleQuotesContext)) as VehicleQuotesContext;

            // Fetch the names of all the body types that we currently have stored.
            var bodyTypes = dbContext.BodyTypes.Select(bt => bt.Name).ToList();

            // If the incoming value is not one of them, return an error message that lists the allowed values.
            if (!bodyTypes.Contains(value))
            {
                var allowed = String.Join(", ", bodyTypes);
                return new ValidationResult(
                    $"Invalid vehicle body type {value}. Allowed values are {allowed}."
                );
            }

            return ValidationResult.Success;
        }
    }
}

You can find the other one here: Validations/VehicleSizeAttribute.cs.

These two are very similar to one another. The most interesting part is how we use the IsValid method's second parameter (ValidationContext) to obtain an instance of VehicleQuotesContext that we can use to query the database. The rest should be pretty self-explanatory. These attributes are classes that inherit from System.ComponentModel.DataAnnotations's ValidationAttribute and override the IsValid method. The method checks whether the value under scrutiny exists in the corresponding table and, if it does not, returns a validation error that includes a list of all the allowed values. They can be applied to our ModelSpecificationStyle class like so:

// ...
namespace VehicleQuotes.ResourceModels
{
    public class ModelSpecificationStyle
    {
        [Required]
+       [VehicleBodyType]
        public string BodyType { get; set; }

        [Required]
+       [VehicleSize]
        public string Size { get; set; }

        //...
    }
}

Now, a request like this:

{
  "name": "Rav4",
  "styles": [
    {
      "bodyType": "not_a_body_type",
      "size": "Mid size",
      "years": [ "2000" ]
    }
  ]
}

Produces a response like this:

{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "traceId": "00-9ad59a7aff60944ab54c19a73be73cc7-eeabafe03df74e40-00",
  "errors": {
    "Styles[0].BodyType": [
      "Invalid vehicle body type not_a_body_type. Allowed values are Coupe, Sedan, Convertible, Hatchback, SUV, Truck."
    ]
  }
}

Implementing endpoints for quote rules and overrides

At this point we’ve explored many of the most common features available to us for developing Web APIs. So much so that implementing the next two pieces of functionality for our app doesn’t really introduce any new concepts. So, I wont discuss that here in great detail.

Feel free to browse the relevant files in the source code on GitHub if you want, though.

The FeatureTypeAttribute class is interesting in that it provides another example of a validation attribute. This time it's one that makes sure the value being validated is included in an array of strings that's defined literally in the code.
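
To give a rough idea of the shape of such an attribute, here's a simplified sketch. This is not the actual class from the repo, and the allowed values below are illustrative only:

// Validations/FeatureTypeAttribute.cs (simplified sketch; see the repo for the real class)
using System;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace VehicleQuotes.Validation
{
    public class FeatureTypeAttribute : ValidationAttribute
    {
        // The allowed values are defined literally in the code. In the actual project, this list
        // presumably comes from the feature types that the quote rules know about.
        private static readonly string[] allowedValues = { "ItMoves", "HasKey", "HasTitle" };

        protected override ValidationResult IsValid(object value, ValidationContext validationContext)
        {
            // Skip validation when there's no value, so the attribute also works on optional fields.
            if (value == null) return ValidationResult.Success;

            if (!allowedValues.Contains(value as string))
            {
                return new ValidationResult(
                    $"Invalid feature type {value}. Allowed values are {String.Join(", ", allowedValues)}."
                );
            }

            return ValidationResult.Success;
        }
    }
}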

Other than that, it’s all stuff we’ve already covered: models, migrations, scaffolding controllers, custom routes, resource models, etc.

If you are following along, be sure to add those files and run a dotnet ef database update to apply the migration.

Implementing the quote model

Let’s now start implementing the main capability of our app: calculating quotes for vehicles. Let’s start with the Quote entity. This is what the new Models/Quote.cs file containing the entity class will look like:

// Models/Quote.cs
using System;
using System.ComponentModel.DataAnnotations.Schema;

namespace VehicleQuotes.Models
{
    public class Quote
    {
        public int ID { get; set; }

        // Directly tie this quote record to a specific vehicle that we have
        // registered in our db, if we have it.
        public int? ModelStyleYearID { get; set; }

        // If we don't have the specific vehicle in our db, then store the
        // vehicle model details independently.
        public string Year { get; set; }
        public string Make { get; set; }
        public string Model { get; set; }
        public int BodyTypeID { get; set; }
        public int SizeID { get; set; }

        public bool ItMoves { get; set; }
        public bool HasAllWheels { get; set; }
        public bool HasAlloyWheels { get; set; }
        public bool HasAllTires { get; set; }
        public bool HasKey { get; set; }
        public bool HasTitle { get; set; }
        public bool RequiresPickup { get; set; }
        public bool HasEngine { get; set; }
        public bool HasTransmission { get; set; }
        public bool HasCompleteInterior { get; set; }

        public int OfferedQuote { get; set; }
        public string Message { get; set; }
        public DateTime CreatedAt { get; set; }

        public ModelStyleYear ModelStyleYear { get; set; }

        public BodyType BodyType { get; set; }
        public Size Size { get; set; }
    }
}

This should be pretty familiar by now. It's a plain old class that defines a number of properties: one for each of the fields in the resulting table, plus a few navigation properties that serve to access related data.

The only aspect worth noting is that we've defined the ModelStyleYearID property as a nullable integer (with int?). This is because, like we discussed at the beginning, the foreign key from quotes to model_style_years is actually optional. The reason is that we may receive a quote request for a vehicle that we don't have registered in our database. We need to be able to support quoting those vehicles too, so if we don't have the requested vehicle registered, then that foreign key will stay unpopulated and we'll rely on the other fields (i.e. Year, Make, Model, BodyTypeID and SizeID) to identify the vehicle and calculate the quote for it.

Using Dependency Injection

So far we’ve been putting a lot logic in our controllers. That’s generally not ideal, but fine as long as the logic is simple. The problem is that a design like that can quickly become a hindrance for maintainability and testing as our application grows more complex. For the logic that calculates a quote, we’d be better served by implementing it in its own class, outside of the controller that should only care about defining endpoints and handling HTTP concerns. Then, the controller can be given access to that class and delegate to it all the quote calculation logic. Thankfully, ASP.NET Core includes an IoC container by default, which allows us to use Dependency Injection to solve these kinds of problems. Let’s see what that looks like.

For working with quotes, we want to offer two endpoints:

  1. A POST api/Quotes that captures the vehicle information, calculates the quote, keeps record of the request, and responds with the calculated value.
  2. A GET api/Quotes that returns all the currently registered quotes in the system.

Using the Dependency Injection capabilities, a controller that implements those two could look like this:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using VehicleQuotes.ResourceModels;
using VehicleQuotes.Services;

namespace VehicleQuotes.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class QuotesController : ControllerBase
    {
        private readonly QuoteService _service;

        // When initiating the request processing logic, the framework recognizes
        // that this controller has a dependency on QuoteService and expects an
        // instance of it to be injected via the constructor. The framework then
        // does what it needs to do in order to provide that dependency.
        public QuotesController(QuoteService service)
        {
            _service = service;
        }

        // GET: api/Quotes
        [HttpGet]
        // This method returns a collection of a new resource model instead of just the `Quote` entity directly.
        public async Task<ActionResult<IEnumerable<SubmittedQuoteRequest>>> GetAll()
        {
            // Instead of directly implementing the logic in this method, we call on
            // the service class and let it take care of the rest.
            return await _service.GetAllQuotes();
        }

        // POST: api/Quotes
        [HttpPost]
        // This method receives a `QuoteRequest` resource model as a parameter instead of just the `Quote` entity directly.
        // That way callers of this endpoint don't need to be exposed to the details of our data model implementation.
        public async Task<ActionResult<SubmittedQuoteRequest>> Post(QuoteRequest request)
        {
            // Instead of directly implementing the logic in this method, we call on
            // the service class and let it take care of the rest.
            return await _service.CalculateQuote(request);
        }
    }
}

As you can see, we’ve once again opted to abstract away clients from the implementation details of our data model and used Resource Models for the API contract instead of the Quote entity directly. We have one for input data that’s called QuoteRequest and another one for output: SubmittedQuoteRequest. Not very remarkable by themselves, but feel free to explore the source code in the GitHub repo.

This controller has a dependency on QuoteService, which it uses to perform all of the necessary logic. This class is not defined yet so let’s do that next:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using VehicleQuotes.Models;
using VehicleQuotes.ResourceModels;

namespace VehicleQuotes.Services
{
    public class QuoteService
    {
        private readonly VehicleQuotesContext _context;

        // This constructor defines a dependency on VehicleQuotesContext, similar to most of our controllers.
        // Via the built-in dependency injection features, the framework makes sure to provide this parameter when
        // creating new instances of this class.
        public QuoteService(VehicleQuotesContext context)
        {
            _context = context;
        }

        // This method takes all the records from the `quotes` table and constructs `SubmittedQuoteRequest`s with them.
        // Then returns that as a list.
        public async Task<List<SubmittedQuoteRequest>> GetAllQuotes()
        {
            var quotesToReturn = _context.Quotes.Select(q => new SubmittedQuoteRequest
            {
                ID = q.ID,
                CreatedAt = q.CreatedAt,
                OfferedQuote = q.OfferedQuote,
                Message = q.Message,

                Year = q.Year,
                Make = q.Make,
                Model = q.Model,
                BodyType = q.BodyType.Name,
                Size = q.Size.Name,

                ItMoves = q.ItMoves,
                HasAllWheels = q.HasAllWheels,
                HasAlloyWheels = q.HasAlloyWheels,
                HasAllTires = q.HasAllTires,
                HasKey = q.HasKey,
                HasTitle = q.HasTitle,
                RequiresPickup = q.RequiresPickup,
                HasEngine = q.HasEngine,
                HasTransmission = q.HasTransmission,
                HasCompleteInterior = q.HasCompleteInterior,
            });

            return await quotesToReturn.ToListAsync();
        }

        // This method takes an incoming `QuoteRequest` and calculates a quote based on the vehicle described by it.
        // To calculate this quote, it looks for any overrides before trying to use the currently existing rules defined
        // in the `quote_rules` table. It also stores a record on the `quotes` table with all the incoming data and the
        // quote calculation result. It returns the quote value as well as a message explaining the conditions of
        // the quote.
        public async Task<SubmittedQuoteRequest> CalculateQuote(QuoteRequest request)
        {
            var response = this.CreateResponse(request);
            var quoteToStore = await this.CreateQuote(request);
            var requestedModelStyleYear = await this.FindModelStyleYear(request);
            QuoteOverride quoteOverride = null;

            if (requestedModelStyleYear != null)
            {
                quoteToStore.ModelStyleYear = requestedModelStyleYear;

                quoteOverride = await this.FindQuoteOverride(requestedModelStyleYear);

                if (quoteOverride != null)
                {
                    response.OfferedQuote = quoteOverride.Price;
                }
            }

            if (quoteOverride == null)
            {
                response.OfferedQuote = await this.CalculateOfferedQuote(request);
            }

            if (requestedModelStyleYear == null)
            {
                response.Message = "Offer subject to change upon vehicle inspection.";
            }

            quoteToStore.OfferedQuote = response.OfferedQuote;
            quoteToStore.Message = response.Message;

            _context.Quotes.Add(quoteToStore);
            await _context.SaveChangesAsync();

            response.ID = quoteToStore.ID;
            response.CreatedAt = quoteToStore.CreatedAt;

            return response;
        }

        // Creates a `SubmittedQuoteRequest`, initialized with default values, using the data from the incoming
        // `QuoteRequest`. `SubmittedQuoteRequest` is what gets returned in the response payload of the quote endpoints.
        private SubmittedQuoteRequest CreateResponse(QuoteRequest request)
        {
            return new SubmittedQuoteRequest
            {
                OfferedQuote = 0,
                Message = "This is our final offer.",

                Year = request.Year,
                Make = request.Make,
                Model = request.Model,
                BodyType = request.BodyType,
                Size = request.Size,

                ItMoves = request.ItMoves,
                HasAllWheels = request.HasAllWheels,
                HasAlloyWheels = request.HasAlloyWheels,
                HasAllTires = request.HasAllTires,
                HasKey = request.HasKey,
                HasTitle = request.HasTitle,
                RequiresPickup = request.RequiresPickup,
                HasEngine = request.HasEngine,
                HasTransmission = request.HasTransmission,
                HasCompleteInterior = request.HasCompleteInterior,
            };
        }

        // Creates a `Quote` based on the data from the incoming `QuoteRequest`. This is the object that gets eventually
        // stored in the database.
        private async Task<Quote> CreateQuote(QuoteRequest request)
        {
            return new Quote
            {
                Year = request.Year,
                Make = request.Make,
                Model = request.Model,
                BodyTypeID = (await _context.BodyTypes.SingleAsync(bt => bt.Name == request.BodyType)).ID,
                SizeID = (await _context.Sizes.SingleAsync(s => s.Name == request.Size)).ID,

                ItMoves = request.ItMoves,
                HasAllWheels = request.HasAllWheels,
                HasAlloyWheels = request.HasAlloyWheels,
                HasAllTires = request.HasAllTires,
                HasKey = request.HasKey,
                HasTitle = request.HasTitle,
                RequiresPickup = request.RequiresPickup,
                HasEngine = request.HasEngine,
                HasTransmission = request.HasTransmission,
                HasCompleteInterior = request.HasCompleteInterior,

                CreatedAt = DateTime.Now
            };
        }

        // Tries to find a registered vehicle that matches the one for which the quote is currently being requested.
        private async Task<ModelStyleYear> FindModelStyleYear(QuoteRequest request)
        {
            return await _context.ModelStyleYears.FirstOrDefaultAsync(msy =>
                msy.Year == request.Year &&
                msy.ModelStyle.Model.Make.Name == request.Make &&
                msy.ModelStyle.Model.Name == request.Model &&
                msy.ModelStyle.BodyType.Name == request.BodyType &&
                msy.ModelStyle.Size.Name == request.Size
            );
        }

        // Tries to find an override for the vehicle for which the quote is currently being requested.
        private async Task<QuoteOverride> FindQuoteOverride(ModelStyleYear modelStyleYear)
        {
            return await _context.QuoteOverides
                .FirstOrDefaultAsync(qo => qo.ModelStyleYear == modelStyleYear);
        }

        // Uses the rules stored in the `quote_rules` table to calculate how much money to offer for the vehicle
        // described in the incoming `QuoteRequest`.
        private async Task<int> CalculateOfferedQuote(QuoteRequest request)
        {
            var rules = await _context.QuoteRules.ToListAsync();

            // Given a vehicle feature type, find a rule that applies to that feature type and has the value that
            // matches the condition of the incoming vehicle being quoted.
            Func<string, QuoteRule> theMatchingRule = featureType =>
                rules.FirstOrDefault(r =>
                    r.FeatureType == featureType &&
                    r.FeatureValue == request[featureType]
                );

            // For each vehicle feature that we care about, sum up the monetary values of all the rules that match
            // the given vehicle condition.
            return QuoteRule.FeatureTypes.All
                .Select(theMatchingRule)
                .Where(r => r != null)
                .Sum(r => r.PriceModifier);
        }
    }
}

Finally, we need to tell the framework that this class is available for Dependency Injection. Similarly to how we did with our VehicleQuotesContext, we do so in the Startup.cs file’s ConfigureServices method. Just add this line at the top:

services.AddScoped<Services.QuoteService>();

The core tenet of Inversion of Control is to depend on abstractions, not on implementations. So ideally, we would not have our controller directly call for a QuoteService instance. Instead, we would have it reference an abstraction, e.g. an interface like IQuoteService. The statement on Startup.cs would then look like this instead: services.AddScoped<Services.IQuoteService, Services.QuoteService>();.

This is important because it would allow us to unit test the component that depends on our service class (i.e. the controller in this case) by passing it a mock object — one that also implements IQuoteService but does not really implement all the functionality of the actual QuoteService class. Since the controller only knows about the interface (that is, it “depends on an abstraction”), the actual object that we give it as a dependency doesn’t matter to it, as long as it implements that interface. This ability to inject mocks as dependencies is invaluable during testing. Testing is beyond the scope of this article though, so I’ll stick with the simpler approach with a static dependency on a concrete class. Know that this is not a good practice when it comes to actual production systems.
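
Just to illustrate, the abstraction could look something like this. This is a sketch, not part of the article's code, since we're keeping the concrete dependency:

// Services/IQuoteService.cs (illustrative sketch)
using System.Collections.Generic;
using System.Threading.Tasks;
using VehicleQuotes.ResourceModels;

namespace VehicleQuotes.Services
{
    // The abstraction that controllers and tests would depend on.
    public interface IQuoteService
    {
        Task<List<SubmittedQuoteRequest>> GetAllQuotes();
        Task<SubmittedQuoteRequest> CalculateQuote(QuoteRequest request);
    }
}

QuoteService would then declare that it implements IQuoteService, the controller's constructor would take an IQuoteService instead of a QuoteService, and unit tests could hand the controller any mock object that implements the same interface.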

And that’s all it takes. Once you add a few rules via POST ​/api​/QuoteRules, you should be able to get some vehicles quoted with POST /api/Quotes. And also see what the system has stored via GET /api/Quotes.

A Quote

And that’s all the functionality that we set out to build into our REST API! There are a few other neat things that I thought I’d include though.

Adding seed data for lookup tables

Our vehicle size and body type data isn't meant to really change much. In fact, we could even preload that data when our application starts. EF Core provides a data seeding feature that we can access via configurations on the DbContext itself. For our case, we could add this method to our VehicleQuotesContext:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Size>().HasData(
        new Size { ID = 1, Name = "Subcompact" },
        new Size { ID = 2, Name = "Compact" },
        new Size { ID = 3, Name = "Mid Size" },
        new Size { ID = 4, Name = "Full Size" }
    );

    modelBuilder.Entity<BodyType>().HasData(
        new BodyType { ID = 1, Name = "Coupe" },
        new BodyType { ID = 2, Name = "Sedan" },
        new BodyType { ID = 3, Name = "Hatchback" },
        new BodyType { ID = 4, Name = "Wagon" },
        new BodyType { ID = 5, Name = "Convertible" },
        new BodyType { ID = 6, Name = "SUV" },
        new BodyType { ID = 7, Name = "Truck" },
        new BodyType { ID = 8, Name = "Mini Van" },
        new BodyType { ID = 9, Name = "Roadster" }
    );
}

OnModelCreating is a hook that EF Core calls when it's building the data model for our context, and that we can override to run custom configuration code. Here, we're using it to seed some data. In order to apply that, a migration needs to be created and executed. If you've added some data to the database, be sure to wipe it before running the migration so that we don't run into unique constraint violations. Here are the commands to create and apply the migration:

$ dotnet ef migrations add AddSeedDataForSizesAndBodyTypes

$ dotnet ef database update

After that’s done, it no longer makes sense to allow creating, updating, deleting and fetching individual sizes and body types, so I would delete those endpoints from the respective controllers.

Body Types, GET all only

Sizes, GET all only

There are other options for data seeding in EF Core. Take a look: Data Seeding.
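
One common alternative, for example, is to seed at application startup using the DbContext directly instead of HasData. Here's a minimal sketch of that idea; the SeedData class and its wiring are assumptions, not part of this project:

// A hypothetical startup-time seeder, shown only to contrast with the HasData approach above.
using System;
using System.Linq;
using Microsoft.Extensions.DependencyInjection;
using VehicleQuotes.Models;

namespace VehicleQuotes
{
    public static class SeedData
    {
        public static void EnsureSeeded(IServiceProvider serviceProvider)
        {
            using var scope = serviceProvider.CreateScope();
            var context = scope.ServiceProvider.GetRequiredService<VehicleQuotesContext>();

            // Only insert the lookup data if the table is empty, so this is safe to run on every start.
            if (!context.Sizes.Any())
            {
                context.Sizes.AddRange(
                    new Size { Name = "Subcompact" },
                    new Size { Name = "Compact" },
                    new Size { Name = "Mid Size" },
                    new Size { Name = "Full Size" }
                );

                context.SaveChanges();
            }
        }
    }
}

You'd call something like SeedData.EnsureSeeded(app.ApplicationServices) once the app has started. For lookup data like ours, though, the HasData approach keeps the data versioned alongside the schema in migrations, which is why we're sticking with it here.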

Improving the Swagger UI via XML comments

Our current auto-generated Swagger UI is pretty awesome, especially considering that we got it for free. It's a little lacking, though, when it comes to documentation of specific endpoints, such as summaries and expected responses. The good news is that there's a way to leverage C# XML comments in order to improve the Swagger UI.

We can add support for that by configuring our project to produce, at build time, an XML file with the docs that we write. In order to do so, we need to update the VehicleQuotes.csproj like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <!-- ... -->
+   <GenerateDocumentationFile>true</GenerateDocumentationFile>
+   <NoWarn>$(NoWarn);1591</NoWarn>
  </PropertyGroup>

  <!-- ... -->
</Project>

GenerateDocumentationFile is the flag that tells the .NET 5 build tools to generate the documentation file. The NoWarn element prevents our build output from getting cluttered with warnings about classes and methods that are not documented. We don't need those warnings, since we only want to write enough documentation to improve the Swagger UI, and that only involves the controllers.

You can run dotnet build and look for the new file in bin/Debug/net5.0/VehicleQuotes.xml.

Then, we need to update Startup.cs. First we need to add the following using statements:

using System.IO;
using System.Reflection;

And add the following code to the ConfigureServices method on Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // ...

    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "VehicleQuotes", Version = "v1" });

+       c.IncludeXmlComments(
+           Path.Combine(
+               AppContext.BaseDirectory,
+               $"{Assembly.GetExecutingAssembly().GetName().Name}.xml"
+           )
+       );
    });

    // ...
}

This makes it so the SwaggerGen service knows to look for the XML documentation file when building up the Open API specification file used for generating the Swagger UI.

Now that all of that is set up, we can actually write some XML comments and attributes that will enhance our Swagger UI. As an example, put this on top of ModelsController’s Post method:

/// <summary>
/// Creates a new vehicle model for the given make.
/// </summary>
/// <param name="makeId">The ID of the vehicle make to add the model to.</param>
/// <param name="model">The data to create the new model with.</param>
/// <response code="201">When the request is invalid.</response>
/// <response code="404">When the specified vehicle make does not exist.</response>
/// <response code="409">When there's already another model in the same make with the same name.</response>
[HttpPost]
[ProducesResponseType(StatusCodes.Status201Created)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
[ProducesResponseType(StatusCodes.Status409Conflict)]
public async Task<ActionResult<ModelSpecification>> Post([FromRoute] int makeId, ModelSpecification model)
{
    // ...
}

With that in place, the Swagger UI now looks like this for this endpoint:

Fully documented POST Models endpoint

Configuring the app via settings files and environment variables

Another aspect that’s important to web applications is having them be configurable via things like configuration files or environment variables. The framework already has provision for this, we just need to use it. I’m talking about the appsettings files.

We have two of them created for us by default: appsettings.json which is applied in all environments, and appsettings.Development.json that is applied only under development environments. The environment is given by the ASPNETCORE_ENVIRONMENT environment variable, and it can be set to either Development, Staging, or Production by default. That means that if we had, for example, an appsettings.Staging.json file, the settings defined within would be loaded if the ASPNETCORE_ENVIRONMENT environment variable were set to Staging. You get the idea.

You can learn more about configuration and environments in the official documentation.

Anyway, let’s add a new setting on appsettings.json:

{
  // ...
+ "DefaultOffer": 77
}

We’ll use this setting to give default offers when we’re not able to calculate appropriate quotes for vehicles. This can happen if we don’t have rules, or if the ones we have don’t match any of the incoming vehicle features or if for some other reason the final sum ends up in zero or negative number. We can use this setting in our QuoteService like so:

// ...
+using Microsoft.Extensions.Configuration;

namespace VehicleQuotes.Services
{
    public class QuoteService
    {
        // ...
+       private readonly IConfiguration _configuration;

-       public QuoteService(VehicleQuotesContext context)
+       public QuoteService(VehicleQuotesContext context, IConfiguration configuration)
        {
            _context = context;
+           _configuration = configuration;
        }

        // ...

        public async Task<SubmittedQuoteRequest> CalculateQuote(QuoteRequest request)
        {
            // ...

+           if (response.OfferedQuote <= 0)
+           {
+               response.OfferedQuote = _configuration.GetValue<int>("DefaultOffer", 0);
+           }

            quoteToStore.OfferedQuote = response.OfferedQuote;

            // ...
        }

        // ...
    }
}

Here, we’ve added a new parameter to the constructor to specify that VehicleQuotesContext has a dependency on IConfiguration. This prompts the framework to provide an instance of that when instantiating the class. We can use that instance to access the settings that we defined in the appsettings.json file via its GetValue method, like I demonstrated above.

The values of the settings in appsettings.json can be overridden by environment variables as well. On Linux, for example, we can run the app and set an environment variable on the same line like this:

$ DefaultOffer=123 dotnet run

This will make the application use 123 instead of 77 when it comes to the DefaultOffer setting. This flexibility is great from a DevOps perspective. And we had to do minimal work in order to get that going.

That’s all for now

And that’s it! In this article we’ve gone through many of the features offered in .NET 5, ASP.NET Core and Entity Framework Core to support some of the most common use cases when it comes to developing Web API applications.

We’ve installed .NET 5 and created an ASP.NET Core Web API project with EF Core and a few bells and whistles, created controllers to support many different endpoints, played a little bit with routes and response codes, created and built upon a data model and updated a database via entities and migrations, implemented more advance database objects like indexes to enforce uniqueness constraints, implemented input validation using both built-in and custom validation attributes, implemented resource models as DTOs for defining the contract of some of our API endpoints, tapped into the built-in dependency injection capabilities, explored and improved the auto-generated Swagger UI, added seed data for our database, learned about configuration via settings files and environment variables.

.NET 5 is looking great.

Table of contents


dotnet csharp rest api postgres

Appium: Automated Mobile Testing

Couragyn Chretien

By Couragyn Chretien
June 30, 2021

Clouds and a river

First impressions are everything. You can have the best, most robust application in the world, but if it looks like it’s from 2004 most users won’t give it a second look. Automated testing can help ensure that the app the user sees is consistent and fully functional no matter the iteration.

Selenium, Cypress, and other automated testing suites have become more and more popular for web apps. This trend has not carried over to native mobile app testing. This may be a bit surprising, as a fully functional frontend can be the difference between a professional-feeling app and a hacky one.

There are many frameworks that can be used to test mobile applications (Appium, UI Automator, Robotium, XCUITest, SeeTest, and TestComplete to name a few), but today we’ll be focusing on Appium.

Appium is an open source framework that’s easy to use out of the box. It can be used to test many versions of many different mobile OSes. It’s a one-stop shop to ensure your users are on the same page, no matter the platform.

Installation and setup for Linux

General

We will be testing the To-Do List app from the Google Play Store. What we’re testing and for which platform isn’t important, since we’re here to learn about the Appium methodology that can be applied to any mobile app.

AVD Android emulator

We will be using an AVD (Android Virtual Device) to test our app. There are many out there but I recommend Android Studio’s.

Once it’s running, just click Configure > AVD Manager.

AVD Manager

Either create a new AVD or use the default one and start it up.

AVD View

Appium

Download Appium from GitHub.

Appium can be run through the command line but we’re going to use the desktop app here. This gives you the ability to record a session and capture elements for automated tests.

Once it’s running, all we have to do is click Start Server.

Start Appium

Ruby and RSpec

We will be using Ruby and RSpec to run our tests. At the top level of our project we need to add these lines to our Gemfile:

/Gemfile:

source 'https://www.rubygems.org'

gem 'appium_lib', '~> 11.2'
gem 'appium_lib_core', '~> 4.2'

Make sure Ruby is installed, then run:

$ gem install rspec
$ bundle install

We also need to configure RSpec to find the correct AVD and APK to test.

/spec/spec_settings.rb:

ANDROID_PACKAGE = 'io.appium.android.apis'
def android_caps
  {
    caps: {
      # These settings match the settings for your AVD. They can be found in the AVD Manager.
      platformName: 'Android',
      platformVersion: '11',
      deviceName: 'Pixel_3a_API_30_x86',
      # This points to the APK you are testing.
      app: './To-do_list.apk',
      automationName: 'UIAutomator2',
    },
    appium_lib: {
      wait: 60
    }
  }
end

Finally we need to create our test. Since we don’t know the elements yet we can start with a blank template.

/spec/TaskCRUD:

require 'spec_settings'
require 'appium_lib'

describe 'Tests basic CRUD functions of Task application' do

  before(:all) do
    @driver = Appium::Driver.new(android_caps, true).start_driver
  end

  after(:all) do
    @driver&.quit
  end

  it '...' do

  end
end

Putting it all together

Let’s boot it up and make sure everything is running as expected.

  1. Run an AVD from Android Studio
  2. Start the Appium Server
  3. Run rspec in the project root

Even though there is no test written, it should still run and pass the template test. Running it the first time will install the APK onto the AVD. We can now access the app on the AVD emulator and record tests.

Recording a test with Inspector

Open the Appium window that has the server running and click Start Inspector Session:

Start Inspector

Under Attach to Session... you will see a dropdown containing the RSpec test we just tried to run. If it's not there, try running rspec again. Select it and click the Attach to Session button.

Attach to Session

We are now connected to the Android Emulator via the Inspector. We can click on elements to see their properties and interact with them.

View Element

If we click the top record button it will save our actions and allow us to import them into a test.

Session Recording

Running a test

Let’s take what we got from the Inspector and populate our template test.

/spec/TaskCRUD:

require 'spec_settings'
require 'appium_lib'

describe 'Tests basic CRUD functions of Task application' do

  before(:all) do
    @driver = Appium::Driver.new(android_caps, true).start_driver
  end

  after(:all) do
    @driver&.quit
  end

  it 'creates a task and verifies its name' do
    # The element IDs and XPaths below come straight from the Appium Inspector recording.
    el1 = @driver.find_element(:id, "todolist.scheduleplanner.dailyplanner.todo.reminders:id/zk")
    el1.click
    el2 = @driver.find_element(:id, "todolist.scheduleplanner.dailyplanner.todo.reminders:id/xd")
    el2.click
    el3 = @driver.find_element(:id, "todolist.scheduleplanner.dailyplanner.todo.reminders:id/wt")
    el3.send_keys "task_1"
    el4 = @driver.find_element(:id, "todolist.scheduleplanner.dailyplanner.todo.reminders:id/wr")
    el4.click
    el5 = @driver.find_element(:xpath, "/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/androidx.recyclerview.widget.RecyclerView/android.widget.LinearLayout[3]/android.widget.TextView")
    el5.click
    el6 = @driver.find_element(:id, "todolist.scheduleplanner.dailyplanner.todo.reminders:id/wo")
    el6.click
    el7 = @driver.find_element(:xpath, "/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.RelativeLayout/androidx.recyclerview.widget.RecyclerView")
    el7.click
    el8 = @driver.find_element(:id, "todolist.scheduleplanner.dailyplanner.todo.reminders:id/xl")
    # Verify the new task shows the name we typed.
    expect(el8.text).to eql 'task_1'
  end
end

If we run it with the rspec command, we can watch the test progress on the AVD, and then we receive a success message!

What’s next?

Additional tests

Tests should be written to cover as many aspects of the app as possible. The main focus should be on parts of the app that don’t change very often. Tests written for something in progress or that is due for a facelift will need to be rewritten when the time comes.

Test with other devices

An app can look perfect on a Galaxy S20 but look awful on a Pixel 5. That's why it's important to try your app on many different AVDs. Android Studio allows you to create many device types running many different Android versions.

CI/​CD integration

Manual testing is great but nothing beats automation. These tests should ideally be integrated into a continuous integration and continuous delivery (CI/​CD) platform. This way each build will be automatically vetted for bugs.

For more info, check out this guide for integrations with CircleCI.


automation testing android mobile

Catching CSS Regressions and Visual Bugs in Continuous Integration

Afif Sohaili

By Afif Sohaili
June 24, 2021

Blue patterns Photo by Alexandra Nicolae

Many web projects nowadays are equipped to simulate real users and automatically test a real use case, but these end-to-end tests still have a weakness. Browser automation tools run tests by performing the interactions and assertions in the context of the DOM of the pages. That is very different from how humans use web applications, which is by looking at the user interface. Due to this, the tests’ definition of a “functional web app” differs from that of a real human.

For example, let’s take a simple music player with two buttons:

  1. One in green with a play icon, and
  2. One in red with a stop icon.

The HTML would look something like this:

<!-- music player -->
<button class="button is-red"><i class="icon icon-stop"/></button>
<button class="button is-green"><i class="icon icon-play"/></button>

In our end-to-end tests, one would typically instruct the test to check:

  1. If the expected classnames exist.
  2. If clicking the stop button stops the music.
  3. If clicking the play button plays the music.

If all of the above conditions were met, then the app is considered “functional”. However, the tests have no way to verify if the right colors are actually reflected on the buttons, or if the SVG icons on the button are properly loaded and shown to the users. Unfortunately, for a real user, both of these are very crucial in telling the user how the app functions. If the stylings for the buttons are not implemented or the SVG icons for the buttons fail to load, users will not have a clear indication on how the app works, and the app will not be considered “functional” by real users, even though the functionalities of both buttons are wired correctly in the code.

Web developers also frequently deal with styling regressions. This can happen due to the “cascading” characteristic of CSS: Badly scoped CSS values may affect other elements in unrelated areas unintentionally. Many times, styling regressions slip past even the end-to-end tests with browser automation because, again, end-to-end tests only verify that the app’s functionalities are wired together, but do not check if things appear the right way visually.

Visual regression testing to the rescue

Visual regression testing adds another quality gate in the workflow to allow developers to verify that, after a code change, the user sees what is expected. For example, if the code change is supposed to modify the appearance of an element, developers can verify that it’s doing just that, or at the very least, the code change should not adversely affect other areas. Visual regression testing involves taking screenshots of tested scenarios with changes and comparing them against the baseline (usually screenshots from the stable branch).

This type of test complements the other testing categories in the test pyramid and ensures user experience is not adversely affected from any given code changes. For more information on the test pyramid and different categories of testing, visit this blog post by my colleague, Kevin.

There are many tools that help with visual regression testing. Here are some tools that you could look into:

  1. Percy.io
    • Generous free tier to get you started.
    • 5000 free screenshots per month with unlimited team members. This should get you going just fine for smaller projects.
    • Integrates with many mainstream end-to-end test frameworks.
  2. Applitools
    • Uses AI-powered computer vision for visual diffing. Said to reduce a lot of false positives and be more efficient.
  3. Testim.io Automation
    • Another service that uses AI-powered computer vision.
  4. BackstopJS
    • A Node.js library you can set up yourself.

Each tool has its own strengths and weaknesses. For today’s demonstration, we are going to look at implementing visual regression testing with Percy and see how it works.

Integrating Percy

Percy provides comprehensive guides to get you started with many popular end-to-end test frameworks such as Selenium, Cypress, TestCafe, and Capybara. You can check the documentation to see how to integrate it with your project. Since we work with Rails a lot at End Point, we are going to demonstrate Percy within a Rails project alongside Rspec/​Capybara-driven end-to-end tests.

In this article, we are going to use this simple Rails project I built for fun as the demo. It is just a simple wiki project using Vue and Rails. Let’s start.

1. Register an account at percy.io

  1. Head to percy.io and click the “Start for Free” button on the top-right side.
  2. Choose the preferred sign up method. In this case, we are just going to use email and password, so we will choose Sign Up with Browserstack.
  3. Fill in the email and password and complete the registration. It will then send an email to ask us to verify our email address. Let’s check our email and complete the email verification.
  4. Next, we will see the Get Started screen. We will choose Create a New Project and give it a relevant name.
  5. We have successfully created a new percy.io project and should be on the project page. There are some links to setup guides that can help us integrate Percy into our project. Our project page
  6. Let’s navigate to Project Settings. We will scroll down to the Branch Settings section and make sure the Default base branch points to the primary branch of our project’s repository. This is usually main, master or develop depending on your team’s workflow. You might want to make the same branch auto-approved as well. That way, screenshots from this branch will be treated as the baseline and will be used when comparing against the development branches. In our case, we are using the main branch so we will specify that in both fields.

Great, we have now successfully set up Percy. There are other configurations in the Project Settings page (e.g. browsers to enable, diff sensitivity, and whether or not to allow public viewers), but we do not have to care about them for the purpose of this demo. You can always come back and tweak this later according to your project’s requirements.

2. Integrate with the Rails project

Now, let’s look at how to integrate Percy with our demo Rails project. Percy provides a comprehensive guide to do that here.

1. First, let’s export the PERCY_TOKEN provided to us in the Builds page into our environment variable.

export PERCY_TOKEN=<value>

2. Then, we will add percy-capybara into our Gemfile and run bundle install.

gem 'percy-capybara'

3. Next, let’s install @percy/agent via npm/​Yarn.

yarn add --dev @percy/agent

That’s it! Now let’s look at adding Percy to our feature specs.

3. Percy in Feature Specs

Percy has to be integrated into feature specs (i.e. end-to-end tests) because it requires the app to be running in order for it to be able to take screenshots. Hence, it will not work in unit tests where the application context is mocked and chunks of code are tested in isolation.

First, we will check out the primary working branch main and take a look at the existing feature specs:

# spec/integration/document_spec.rb
require "rails_helper"

RSpec.feature "Add documents", js: true do
  before :each do
    User.create(email: 'user@example.com', password: 'password')

    visit "/documents/new"

    within("#new_user") do
      fill_in 'Email', with: 'user@example.com'
      fill_in 'Password', with: 'password'
    end

    click_button 'Log in'
  end

  scenario "User visits page to create new document" do
    expect(page).to have_text('Create new document')
  end

  scenario "User sees validation errors" do
    click_button 'Create'

    expect(page).to have_text("Title can't be blank")
    expect(page).to have_text("Body can't be blank")
    expect(page).to have_text("Body is too short (minimum is 10 characters)")
  end
end

The feature specs are pretty simple. /documents/new is a route that only registered users can access, so upon visiting the path, devise is going to ask the visitor to authenticate before they can access the page.

  1. The first test case verifies that the user can see the Create new document page upon successfully signing in.
  2. The second case verifies that the user will see validation errors if they do not fill in the required details for the new document.

Adding Percy into the mix

To add Percy, simply require "percy" at the top of the spec file and add Percy.snapshot(page, { name: '<screenshot description>' }) on the lines in which we want Percy to capture.

# spec/integration/document_spec.rb
require "rails_helper"
# Include Percy in our feature specs
require "percy"

RSpec.feature "Add documents", js: true do
  before :each do
    User.create(email: 'user@example.com', password: 'password')

    visit "/documents/new"

    within("#new_user") do
      fill_in 'Email', with: 'user@example.com'
      fill_in 'Password', with: 'password'
    end

    click_button 'Log in'
  end

  scenario "User visits page to create new document" do
    expect(page).to have_text('Create new document')
    # Screenshot when we're redirected to the create new document page
    Percy.snapshot(page, { name: 'Create document page' })
  end

  scenario "User sees validation errors" do
    click_button 'Create'

    expect(page).to have_text("Title can't be blank")
    expect(page).to have_text("Body can't be blank")
    expect(page).to have_text("Body is too short (minimum is 10 characters)")
    # Screenshot when we see the errors
    Percy.snapshot(page, { name: 'Create document page validation error' })
  end
end

That’s it! Easy enough, right? Next, let’s run the tests so that the screenshots are taken and sent to percy.io. To do this, we cannot run the usual bundle exec rspec. Instead, we will have to run Percy and pass rspec to it.

$ yarn percy exec -- rspec

Note: This is the command that you should use if you have continuous integration set up.

And we’re done here! Now, if we go to percy.io, we can see that a new build has been created for us. After a few minutes, we should see the screenshots of the app.

Our first build on Percy

4. Introduce visual changes and see how percy.io compares

The last step we ran was on the primary working branch. Since we set this branch as the baseline and configured its screenshots to be auto-approved, we are not going to be asked to verify anything on this build.

Therefore, to demonstrate the visual diff-ing feature of Percy, let’s create another branch highlight-create-button, and let’s make the Create Document button appear bigger as well as change the text to just say Create.

<%# app/views/documents/_form.html.erb %>

<%# ... %>

<%# Old button %>
<%#= form.submit class: 'button is-primary' %>

<%# New button %>
<%= form.submit "Create", class: 'button is-primary is-large' %>

Awesome! Now, let’s rerun yarn percy exec -- rspec and analyse the visual diff on percy.io.

Once percy.io is done processing the new screenshots, it will compare the screenshots against the baseline in the main branch. We will see this in our new build:

Visual diff on Percy

As you can see, Percy highlights any changed areas in red. This makes it easier for us to spot the differences. In this case, it is expected for the button to change, so we can just go ahead and approve the build with the green button at the top.

5. What about style regressions?

Let’s do one more demonstration. This time, let’s simulate a CSS change gone wrong:

Create another git branch called css-regression.

Add a text-indent: -5rem to the error messages to push the text out of its container. This is what it would look like:

Regression

Let’s run bundle exec rspec. We will get this output:

> bundle exec rspec
Capybara starting Puma...
* Version 5.3.2 , codename: Sweetnighter
* Min threads: 0, max threads: 4
* Listening on http://127.0.0.1:65318
..

Finished in 2.62 seconds (files took 1.3 seconds to load)
2 examples, 0 failures

As you can see, the tests were not able to catch this styling regression. This is because the error messages specified in the assertion still exist on the page, so the use case is not considered broken in the “eyes” of the regular feature specs.

Now, run yarn percy exec -- rspec to send the screenshots to percy.io to be processed.

Regression on Percy

Great, percy.io was able to catch the regression here in the visual diff! A reviewer can just mark the build as “Requested changes” and leave a comment in there to have it fixed.

6. Responsive design and cross-browser testing

Percy can also help test your app’s UI on different browsers — namely Edge, Firefox, and Chrome. By default, all three browsers are enabled. You do not need to do anything else.

It can be used to test against different screen sizes as well. You can just select the screen sizes that are relevant to you and instruct Percy to take screenshots at those screen sizes automatically. This is particularly helpful as testing on several screen size variants can be very time-consuming. With Percy, you do not have to pay attention to the screenshot variants that do not diverge from the baseline.

In order to specify screen sizes, just pass a list of screen sizes that you are interested in, like so:

Percy.snapshot(page, { name: 'Create document page', widths: [480, 768, 1024, 1366] })

Rerun yarn percy exec -- rspec. We can now see the different variants (browsers and screen sizes) of each scenario on the top-right corner of the screen.

Variants

That’s it! We have successfully demonstrated how Percy can help us verify expected UI changes and catch visual regressions.

Caveats

As helpful and as crucial as it is, visual regression testing is not without its caveats. One of the most common issues with visual regression testing is that, in some cases, it can produce a huge number of false positives. Major-but-intentional layout shifts and changes to shared elements may result in having to review a huge number of screenshots in the project.

For example, let’s say you increase the height of the site header (which is a common element in all pages) by 30px. When Percy processes the screenshots, it will invalidate most of the screenshots in the project, because elements would have been shifted down by 30px as a result of the taller header. For large projects, reviewing each screenshot one-by-one can be a hassle, especially with the tens of permutations for each screenshot to cater different screen sizes and browsers. As a result, one would be tempted to just click “Approve build”, thinking all changed areas would just be elements being pushed down, but that may not necessarily be true for elements with position: absolute;; they might get hidden behind the now taller header, and this may cause visual regressions to still slip into production, which kind of defeats visual testing’s original purpose.

Other than that, dealing with external resources can also result in flaky builds (e.g. if a page has an iframe or links to an external image). There are a lot of ways in which an external resource can change, e.g. image quality, content, updated resource, etc., and because these resources are outside of our control, they may become a source of false positives and a nuisance to the reviewers.

One way to deal with this is to not screenshot whole pages. Instead, screenshot only the individual components that are relevant to each scenario. This way, layout shifts in other parts of the pages will not affect the build in major ways, and it also helps cut out the changes from external resources on the page. Unfortunately, Percy does not have the capability to do this yet.

Conclusion

So, there you go! I hope this article has given you a good introduction to visual regression testing, why you should have it, and how to get started with it. Hope it will benefit you and your team in pushing great web applications!


automation user-interface testing css

Job opening: Liquid Galaxy support engineer

Benjamin Goldstein

By Benjamin Goldstein
June 10, 2021

Liquid Galaxy cabinet

We are looking for a full-time, experienced or entry-level engineer to join the End Point Immersive and Geospatial Support (I+G) Team—​a small, multidisciplinary team that supports our company’s clients with their Liquid Galaxy systems.

The engineer will be based out of our Johnson City, Tennessee, or New York City office, depending on where the candidate is located. With sufficient hardware and deployment experience, the engineer may eventually be asked to travel to installation sites to perform and supervise system installations.

Occasional evenings and weekend on-call shifts are shared amongst the team.

This is a great entry-level opportunity to learn about all aspects of production computer systems and their deployment. More experienced individuals will have the opportunity to work directly in feature development on production systems and possibly assist with other ongoing consulting projects the I+G team takes on.

Overview

  • Job Level: Experienced or entry-level, full-time.
  • Location: On-site expected with occasional remote work, on-call weekdays and weekends.
  • Environment/​Culture: Casual, remote management, lots of video meetings.
  • Benefits: paid vacation and holidays, 401(k), health insurance.

Core responsibilities

  • Track, monitor, and resolve system issues on production computer systems.
  • Support clients over email and video calls, troubleshooting system features or content.
  • Build server stacks, troubleshoot hardware, and fix software issues pre-installation.
  • Report and document issues and fixes.
  • Share responsibilities for office upkeep.
  • Adapt to differing needs day to day!

Preferred skills and experience

  • Associate or Bachelor’s degree in Computer Science, Electrical Engineering, or experience in related fields.
  • Working familiarity with computers, general exposure to internal components and software.
  • Familiarity with Linux basics.
  • Self-driven, results-, detail-, and team-oriented, follows through on problems.
  • A driver’s license is required for potential travel.
  • Comfortable with basic hand tools: hand drill, hex keys, screwdrivers, wire clippers, etc.

What work here offers:

  • Collaboration with knowledgeable, friendly, helpful, and diligent co-workers around the world
  • Flexible, sane work hours
  • Paid holidays and vacation
  • Annual bonus opportunity
  • Freedom from being tied to an office location
  • Use your desktop OS of choice: Linux, macOS, Windows
  • For U.S. employees: health insurance subsidy and 401(k) retirement savings plan

About End Point

End Point was founded in New York City in 1995 as a full-service software development consultancy. The products and designs we produce are stable, scalable, and long-lasting. Over the past two decades, we’ve automated business processes, brought new ideas to market, and built large-scale, dynamic infrastructure.

Get in touch with us:

Please email us an introduction to jobs@endpoint.com to apply! Include your location, a resume/​CV, your Git repository or LinkedIn URLs (if any), and whatever else may help us get to know you. You can expect to interview with the Support Lead and Team Manager, and do a hands-on Linux work exercise.

We look forward to hearing from you! Direct work seekers only, please—​this role is not for agencies or subcontractors.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of sex/​gender, race, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status.


company jobs liquid-galaxy

Lazy Isn’t Bad: Write Lazy Scripts

Ardyn Majere

By Ardyn Majere
June 9, 2021

Photo by cottonbro

A quote attributed to Bill Gates says, “I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it.” Larry Wall’s list of the virtues of a programmer begins with “laziness,” which of course is a particular kind of laziness: the desire to automate things that a human has previously done manually and painstakingly.

It’s a philosophy that is generally good to adopt. Are you doing repetitive tasks that you hate every day? Weekly? At the end of every month? That’s probably a clue that you could take some pain out of your workday. If a task comes up repeatedly that requires you to fiddle with data and can be automated, why not automate it, even if it’s relatively easy to do manually?

There are many ways in which people try to improve productivity. One of the more famous, Kanban, involves limiting work in progress and optimizing flow. While implementing full Kanban is probably more than you want to do, we can do it the lazy way.

The lazy way means you put time and effort into your infrastructure, creating code or scripts that will do the job for you. Many people don’t think they have the time to put some hours into making this work. It is indeed a time sink, but you only write the script once, compared to the many, many times you would otherwise do the job by hand. The time saved in the long run can be immense. Additionally, the job gets done more accurately; less human interaction means less human error.

Look at a few factors: What task is the most boring and repetitive, what task takes the longest, and what task is most easily automatable? Pick one and write a script for it. Don’t make it too fancy; just make something to get the work done. If you find yourself with extra time left over after the next time you do the task, repeat the process and see if you can shave off more time.

While writing this post, I have a particular client and situation in mind, but we’ve seen it play out similarly many times over the years: we have a longstanding client who needs information gathered from a database. The manual process requires a lot of eye-scanning, copying, and pasting individual data items, fiddling with data formats, reformatting ID numbers from one set to another, and so on. While our example scenario is complex and has several different steps where it can fork in different directions, each step is easy to automate on its own.

Part of this task involves copying numbers from a spreadsheet, changing the number format (stripping out excess characters), deduplicating the numbers, applying a database call to those numbers, and writing the success or failure back to the document.

The most time-consuming part was searching through large log files, trying to identify the right string from just the numbers — this was also the easiest part to automate.

While writing a script that could access the spreadsheet and the database might take an hour, writing a script that does the data processing and spits out a database query took perhaps ten minutes, and has easily saved me that long after only a few uses. I also don’t have to worry about getting the call wrong; it’s preformatted for me.
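To give a sense of scale, the data-massaging step boils down to something like the sketch below. The file names, table, and columns are hypothetical stand-ins rather than the client’s actual data; the point is only how little shell code the “ten minute” script needs.

#!/bin/bash
# ids.txt: numbers pasted straight from the spreadsheet, possibly with
# dashes, spaces, and duplicates (hypothetical input file).

# Keep only digits and newlines, drop empty lines, deduplicate.
tr -cd '0-9\n' < ids.txt | sed '/^$/d' | sort -u > ids_clean.txt

# Join the cleaned IDs into a single comma-separated list.
id_list=$(paste -sd, ids_clean.txt)

# Print a ready-to-run query instead of hand-typing it each time
# (the orders table and columns are placeholders).
echo "SELECT id, status FROM orders WHERE id IN ($id_list);"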

Another part of the task was downloading several files with different extensions from different directories. This didn’t take too long to do manually, perhaps 30 seconds. But with a script to grab the files for me, it takes five seconds and a single command instead of downloading five different files by hand.
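That step is even smaller. A sketch along these lines (host and paths made up for illustration) replaces the five manual downloads with one command:

#!/bin/bash
# Fetch the handful of files we need, from their various directories,
# in one go. Host and paths are placeholders.
for remote_path in \
    /var/reports/daily.csv \
    /var/exports/orders.tsv \
    /var/log/app/summary.log \
    /srv/data/errors.xml \
    /srv/data/invoices.pdf; do
  scp "user@example.com:$remote_path" ./downloads/
done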

This customer has had this pop up several times per month, sometimes per week, and would have saved thousands of dollars in fees in the long run — not to mention delays and fuss — if they’d had us write an application so they could self-serve this. Sometimes, you only realise this in hindsight, but that’s no reason to put off biting the bullet now. There are always more iterations of data handling coming in the future.

Of course these smaller scriptlets didn’t automate the whole process completely. But each step was easily done, and the time taken for the whole task is vastly reduced. If something feels too big to automate in one go, but you can see a bit that could be scripted alone, only script that part. It’ll make you feel good about that part every time you fire the script off. And maybe you can find a way to automate another part later. And so on.

Could I have made the task a simple push of a button, or completely hands-off, by fully automating it? Probably. But it would take many hours to integrate all the components, and might only save time in the very long term, while the process might change in the meantime. Writing a half-measure helper script to partially automate the process isn’t fully solving the problem, but it’s lazy — in the good way!


automation tips

Fetching Outputs From Java Process Monitoring Tool with Icinga/​Nagios

Photo by Mihai Lupascu on Unsplash

Recently, I encountered an issue when executing NRPE, a Nagios agent which runs on servers that are being monitored from Icinga’s head server. Usually NRPE-related calls run without issues on the target server, since the commands are declared in the sudoers file (commonly /etc/sudoers). In this post, I will cover an issue I encountered getting the output from jps (Java Virtual Machine Process Status Tool), which needed to be executed with root privileges.

Method

I wanted to use Icinga to get a Java process’s state (in this case, the process is named “Lucene”) from Icinga’s head server, remotely. jps works for this, functioning similarly to the ps command on Unix-like systems.

Usually, NRPE should be able to execute the check remotely (on the target server) from Icinga’s head server. In this case it did not return the expected result, so we are going to create a workaround with the following steps:

  1. Dump the Java process ID into a text file.
  2. Dump the running threads into another text file.
  3. Put item 1 and item 2 above into a single bash script.
  4. Create a cronjob to automatically run the bash script.
  5. Create an NRPE plugin to evaluate the output of item 1 and item 2.

Test

To illustrate this, I ran the intended command locally on the target server as the nagios user. Theoretically, this should emulate the NRPE call as if it were executed remotely from Icinga’s server. The script check_lucene_indexing_deprecated is meant to demonstrate the NRPE execution failure, whereas check_lucene_indexing is the one expected to run successfully as an NRPE plugin. The paths to both check_lucene_indexing_deprecated and check_lucene_indexing were already declared in the /etc/sudoers file on the target machine.

To show the differences, I ran the two scripts both locally on the target server and remotely from Icinga’s head server.

Here is the output from both local script executions: first as the nagios user, then as the root user:

# sudo -s -u nagios ./check_lucene_indexing_deprecated
CRITICAL -- Lucene indexing down

# sudo -s -u root  ./check_lucene_indexing_deprecated
OK -- 2 Lucene threads running

As you can see, the script worked fine running as root, but not as the nagios user.

Let’s run the scripts from Icinga’s head server:

# /usr/lib64/nagios/plugins/check_nrpe -t 5 -H <the target server’s FQDN> -c  check_lucene_indexing_dep
CRITICAL -- Lucene indexing down (0 found)

# /usr/lib64/nagios/plugins/check_nrpe -t 5 -H  <the target server’s FQDN> -c  check_lucene_indexing
OK -- 2 Lucene threads running

Behind the scenes, jps produces different output on the target server when run as root than when run as the nagios user. Let’s say I want to list the Java process IDs (PIDs):

# sudo -s -u nagios jps -l
29112 sun.tools.jps.Jps

And as root:

# jps -l
7541 /usr/share/jetty9/start.jar
29131 sun.tools.jps.Jps

The point of running the jps -l command is to get the process ID of /usr/share/jetty9/start.jar, which is 7541. However, as indicated above, the nagios user’s execution did not display the intended result, but the root user’s did.

The workaround

We can check for the existence of the process by dumping its PID into a text file and letting the NRPE plugin read that file instead.

In order to get NRPE to fetch the current state of the process, we will create a cronjob; in our case it will be executed every 10 minutes. This script will dump the PID of the Java process into a text file and later NRPE will run another script which will analyze the contents of the text file.

Cronjob, creating dump files

*/10 * * * * /root/bin/fetch_lucene_pid.sh

The cron script contains the following details:

PID_TARGET=/var/run/nrpe-lucene.pid
THREADS_TARGET=/var/run/nrpe-lucene-thread.txt

/usr/bin/jps -l | grep "start.jar" | cut -d' ' -f1 1>$PID_TARGET 2>/dev/null

PID=$(cat $PID_TARGET)

re='^[0-9]+$'

if  [[ -z $PID ]]  || ! [[ $PID =~ $re ]]  ; then
exit 0
fi

THREADS=$(/usr/bin/jstack $PID | grep -A 2 "ProjectIndexer\|ConsultantIndexer" | grep -c "java.lang.Thread.State: WAITING (parking)")

echo $THREADS > $THREADS_TARGET

So instead of running the jps command directly as nagios, we let the system run jps (as root) and dump the result into a file. Our NRPE-based script will read the output later and feed the result to the dashboard.
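Before pointing NRPE at the dump files, it’s worth running the collector once by hand as root and checking that the files contain what the plugin expects:

# Run the collector manually, then inspect the dump files it writes.
/root/bin/fetch_lucene_pid.sh
cat /var/run/nrpe-lucene.pid /var/run/nrpe-lucene-thread.txt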

NRPE plugin file, reading values generated from the cronjob

Now let’s take a look at the Bash script that runs successfully over NRPE (that is, check_lucene_indexing).

The NRPE plugin file, check_lucene_indexing, contains the following script:

#!/bin/bash

PID_TARGET=/var/run/nrpe-lucene.pid
THREADS_TARGET=/var/run/nrpe-lucene-thread.txt

PID=$(cat $PID_TARGET)
THREADS=$(cat $THREADS_TARGET)

re='^[0-9]+$'

if  [[ -z $PID ]]  || ! [[ $PID =~ $re ]]  ; then
  echo "CRITICAL -- Lucene indexing down (a)"
  exit 2
fi


if [ $THREADS -eq 2 ]
then
  echo "OK -- $THREADS Lucene threads running"
  exit 0
else
  echo "CRITICAL -- Lucene indexing down (b)"
  exit 2
fi

From the NRPE plugin script you can see the following text files being used:

PID_TARGET=/var/run/nrpe-lucene.pid
THREADS_TARGET=/var/run/nrpe-lucene-thread.txt

PID_TARGET contains the process’s PID, which I used to determine whether the intended process is running or not.

THREADS_TARGET contains the number of the Java threads which are currently running.

The following is the content of the check_lucene_indexing_deprecated script:

#!/bin/bash

PID=$(/usr/bin/jps -l | grep "start.jar" | cut -d' ' -f1)

if [[ -z $PID ]]; then
  echo "CRITICAL -- Lucene indexing down"
  exit 2
fi

THREADS=$(/usr/bin/jstack $PID | grep -A 2 "ProjectIndexer\|ConsultantIndexer" | grep -c "java.lang.Thread.State: WAITING (parking)")

if [ $THREADS -eq 2 ]
then
  echo "OK -- $THREADS Lucene threads running"
  exit 0
else
  echo "CRITICAL -- Lucene indexing down"
  exit 2
fi

As you can see, check_lucene_indexing_deprecated was able to get the expected result when executed locally on the target machine as root, but not when invoked remotely from Icinga’s head server. This is because jps provides limited results when executed as the nagios user compared to root. Note that I had already defined the paths of both scripts in the sudoers file before running them:

Defaults: nagios !requiretty
nagios  ALL = NOPASSWD: /usr/local/lib/nagios/plugins/check_lucene_indexing
nagios  ALL = NOPASSWD: /usr/local/lib/nagios/plugins/check_lucene_indexing_deprecated
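For completeness, the target server’s NRPE configuration also needs a command definition pointing at the plugin. In a typical setup it would look something like the line below; the exact file location, command name, and use of sudo depend on your NRPE configuration, so treat this as an assumption rather than a copy of the actual setup.

# In nrpe.cfg (or an included conf.d file); a similar entry exists for the deprecated variant.
command[check_lucene_indexing]=sudo /usr/local/lib/nagios/plugins/check_lucene_indexing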

Conclusion

The method I shared above is just one way to use jps output with Icinga/​Nagios plugins. As of now this solution works as expected. If you want to reuse the scripts, please customize them for your environment to get the results you want. Also, as the jps documentation notes, parsing its output means we need to update the script any time jps changes its output format.

Please comment below if you have experience with jps and Icinga/​Nagios, and tell us how you handle the reporting.

Related reading:


linux monitoring nagios

Engineering Is Not Just About the Cool Stacks

Afif Sohaili

By Afif Sohaili
May 25, 2021

Photo by Di

As a developer, I love finding new shiny pieces of tech that can help me develop web applications better. To me, it is one of the best parts of the job: Pick up a new programming language, a new database, a new standard, etc., and build cool things with it. But recently, I had the chance to reflect on my experience working on a boring stack at one of my previous workplaces, and it was actually a lot better than I expected.

My journey

I used to work at a company where teams were free to decide the tech stacks that best fit their requirements, so a wide range of modern tools were used there to solve the problems at hand. Within just 2 years of joining the company, I switched teams four times and touched Java, Scala, Akka, Terraform, AWS ECS, Serverless, API Gateway, AWS Lambda, AWS DynamoDB, React, Gatsby, Vue, Nuxt, TypeScript, Flowtype, and many other tools that really helped advance my technical skills.

But in late 2018, my manager invited me to assist a “less popular” team in the company. The team handled a myriad of complex products that were quite old. The team’s tech stack was not the shiniest compared to the other teams in the company; the products were developed in Java, a language I generally do not prefer due to its verbosity. We just shipped JAR files to the platform’s marketplace, so there were no web services to maintain, and hence the work didn’t challenge or sharpen our operations skills. The frontend JavaScript code was largely built with jQuery, as it came bundled with the platform we were building on top of.

As a developer who loves shiny tools, I should have dreaded being on this team. But to my surprise, as I reflect upon it, it was actually the team that gave me the highest work satisfaction. It was satisfying because there were a lot of problems to solve, and most of the time the harder problems were not about the tech stack: the apps worked, and customers were paying.

Instead of talking about moving from Java to Kotlin or from jQuery to React, our team spent the time brainstorming ideas on how to increase code quality, how to ensure discipline and follow good principles when crafting software, how to improve our developers’ workflow and processes, and how to best share knowledge and contexts that we had just acquired for future team members. Each problem solved gave me that dopamine boost that made me feel good.

Below are some of the things we did to solve these problems.

Problem #1: Code quality

The products the team was handling had been suffering from quality degradation in recent years, which ultimately led to more bugs and regressions, which led to an increase in customer complaints, which led to us working under pressure, which led to even more quality degradation, and so the vicious cycle repeated.

When this started to happen, what did we do?

We slowed things down. Instead of having individual engineers work on multiple tickets in parallel, we held more pair programming sessions and reduced the number of tickets in progress, to incorporate collaboration earlier and get feedback on the implementation strategy sooner. At the same time, pairing sessions also helped get the newer engineers up to speed faster. We introduced more quality gates that pull requests had to pass before being merged into main. We agreed as a team to scrutinize them harder; e.g. a pull request without tests among the changes was simply rejected. We also required all team members to approve the changes. We worked with product managers to slow down shipping new features and fix more bugs instead. As a result, the health of the codebases got better and better, and the products got more and more stable each day.

We also held a brown-bag meeting reinforcing SOLID design principles for everyone on the team. Knowledge sharing like this helped bring everyone onto the same page while pairing and reduced the long discussion threads on pull requests.

Problem #2: Context-sharing

Members of a team come and go, and that is true for all teams. Complex apps like the ones we were maintaining are usually full of legacy decisions that newer team members will not know about unless deliberate efforts are made to share that context. So, we took steps to mitigate this by including the context in commit messages. We also encouraged writing our findings in the Jira tickets and in the pull request descriptions on GitHub, but we preferred Git commit messages for documenting decisions related to a commit, because Git is the system that is least likely to change. We might switch from Jira or GitHub to similar services, and that context would be lost when it happens, but it is very unlikely that we would switch away from Git for version control.

We also started a knowledgebase to hold information that should be shared within the team, including onboarding instructions, development gotchas, legacy context, etc. This helps distribute knowledge to everyone and reduces blockers on specific team members. We also made a habit of documenting discussions that happened offline, usually in our internal communication channels or in our internal knowledgebase.

Problem #3: Developers’ workflow

Improving our process was also a focus. One of the first things we did was rewrite our Jenkins CI/CD pipeline to adopt a multibranch pipeline, allowing us to build and test feature branches before they were merged into the main branch.

We also automated several tasks that we identified as repetitive, from as small as notifying everyone that there is a new pull request open on GitHub, to automatically running compatibility checks for all of our products once the platform we were building on released a new beta version. We constantly looked for areas we could automate better and discussed them in our weekly retrospectives.

We also tuned our workflow to better organize our capacity around all the demands we were getting: from our customers, our roadmaps, and our improvements bucket. We adopted Classes of Service (CoS) to achieve that. CoS is an approach in which you assign different treatments to different types of work. This means the capacity we have is flexible depending on the demand for each type of work.

For example, we introduced a collapsible task buffer called Intangible that holds work whose value customers do not experience directly, such as upgrading the Node.js version. It is collapsible in the sense that the developers in this CoS can be reassigned to other CoS should we deem more capacity is required there. Note that CoS (or any other system, for that matter) is not without its pros and cons, so be sure to tweak it to fit your circumstances should you consider adopting it.

Not all problems are tech, but all tech has problems.

Sometimes, we identify ourselves through our favorite tech stack.

Hey, I’m a JavaScript developer.

Hey, I’m a Ruby developer.

And that is not wrong per se, but this particular experience of mine serves as a reminder that the essence of software engineering goes deeper than just the tech — it is to solve problems through software. And that is what we do at End Point.

Reference


culture training programming

Database Design: Using Composite Keys

Emre Hasegeli

By Emre Hasegeli
May 20, 2021

Photo by Chuttersnap

Whether to use single-column or composite keys is another long-debated subject of database design. I previously wrote in support of using natural keys, and here I want to make good arguments for using composite keys.

Single-column vs. Composite

Single-column keys are widely used nowadays. I wouldn’t be surprised if many developers today don’t even think database design with composite keys is possible, even though they were essential in the early days of relational databases. Relational databases make no assumption that keys must be composed of a single column.

Let’s look at composite keys using the corporate database example again. First, we need departments and employees:

CREATE TABLE departments (
  department_id text NOT NULL,
  department_location text NOT NULL,

  PRIMARY KEY (department_id)
);

CREATE TABLE employees (
  username text NOT NULL,
  department_id text NOT NULL,

  PRIMARY KEY (username),
  FOREIGN KEY (department_id) REFERENCES departments
);

Then our database grows, and we need to split the departments into multiple teams. Here’s what they’ll look like:

| id | department  | team           | members |
| -- | ----------- | -------------- | ------- |
|  1 | sysadmin    | infrastructure |       5 |
|  2 | sysadmin    | internal_tools |       3 |
|  3 | development | internal_tools |       4 |
|  4 | development | web_site       |       8 |

As you may have noticed, there are 2 teams named internal_tools, so we cannot use the team name alone as the primary key. We can either add a surrogate auto-increment column to use as the primary key, or make the department and team together the primary key. Let’s go with the surrogate key option first to demonstrate the problem:

CREATE TABLE teams (
  team_id int NOT NULL GENERATED ALWAYS AS IDENTITY,
  department_id text NOT NULL,
  team_name text NOT NULL,
  team_members int NOT NULL,

  PRIMARY KEY (team_id),
  UNIQUE (department_id, team_name),
  FOREIGN KEY (department_id) REFERENCES departments
);

As you noticed, we used the surrogate column as the primary key, and added an additional unique constraint to ensure the team name is unique within the department. Now, let’s relate the employees to the teams:

ALTER TABLE employees
  ADD COLUMN team_id int NOT NULL,
  ADD FOREIGN KEY (team_id) REFERENCES teams;

Now, we know both the department and the team of an employee, but the problem is that they can point to inconsistent rows. For example, I can INSERT myself as an employee:

| username | department_id | team_id |
| -------- | ------------- | ------- |
| hasegeli | sysadmin      |       3 |
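In SQL terms, nothing stops a statement like the following from succeeding, since each foreign key is checked on its own (a sketch assuming the departments and teams rows shown above exist):

INSERT INTO employees (username, department_id, team_id)
  VALUES ('hasegeli', 'sysadmin', 3);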

team_id 3 is in the development department, so now you’d never know whether I am in the sysadmin or the development department. This is a very common source of data integrity problems in databases. Applications have no good way to handle this: they typically crash, or show the employee in one department at some times and in the other at other times.

You cannot easily add a constraint to the database to prevent this. The best option would be to remove the department_id column when the team_id is added, since we know the department of the team anyway, but that is expensive and not always possible, for example when the team_id can be NULL.

Relations with Composite Keys

Now, let’s create the teams table with a composite key:

DROP TABLE teams CASCADE;

CREATE TABLE teams (
  department_id text NOT NULL,
  team_id text NOT NULL,
  team_members int NOT NULL,

  PRIMARY KEY (department_id, team_id),
  FOREIGN KEY (department_id) REFERENCES departments
);

ALTER TABLE employees
  ALTER COLUMN team_id TYPE text,
  ADD FOREIGN KEY (department_id, team_id) REFERENCES teams;

With this method, we ensure data integrity and don’t need to disturb the existing users of the employees table while adding the teams. They can still reliably use the department_id column. We can also set the team_id to NULL and still maintain data integrity.
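For example, with the default MATCH SIMPLE behavior of foreign keys in PostgreSQL, a row with a NULL team_id is accepted, and the single-column department reference is still enforced (the username here is made up):

INSERT INTO employees (username, department_id, team_id)
  VALUES ('newhire', 'sysadmin', NULL);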

Ease of Change

As we have already seen, one of the benefits of using composite keys is being able to respond to data model changes more easily and without compromising data integrity. This becomes relevant in many real-world scenarios. For example, let’s assign the employees to rooms, which must belong to the same department:

CREATE TABLE rooms (
  department_id text NOT NULL,
  room_id text NOT NULL,
  room_location text NOT NULL,

  PRIMARY KEY (department_id, room_id),
  FOREIGN KEY (department_id) REFERENCES departments
);

ALTER TABLE employees
  ADD COLUMN room_id text,
  ADD FOREIGN KEY (department_id, room_id) REFERENCES rooms;

Nothing special needed to be done here, and it is often just as easy to respond to more complicated change requests. Composite keys play well with database constraints. For example, let’s add an employee rank which needs to be unique within every department:

ALTER TABLE employees
  ADD COLUMN employee_rank int,
  ADD CHECK (employee_rank > 0),
  ADD UNIQUE (department_id, employee_rank);

This was easy because we already have the department_id on the employees table. Now, let’s imagine a more complicated change request. We need to divide the departments into sections:

CREATE TABLE sections (
  department_id text NOT NULL,
  section_id text NOT NULL,
  section_location text NOT NULL,

  PRIMARY KEY (department_id, section_id),
  FOREIGN KEY (department_id) REFERENCES departments
);

ALTER TABLE teams
  ADD COLUMN section_id text NOT NULL,
  DROP CONSTRAINT teams_pkey CASCADE,
  ADD PRIMARY KEY (department_id, section_id, team_id),
  ADD FOREIGN KEY (department_id, section_id) REFERENCES sections;

ALTER TABLE employees
  ADD COLUMN section_id text NOT NULL,
  ADD FOREIGN KEY (department_id, section_id, team_id) REFERENCES teams;

As you see, we can make this happen with minimal impact to the users of the database. As a final practice, let’s reserve employee ranks between 1 and 10 for the main section of every department:

ALTER TABLE employees
  ADD CHECK (section_id = 'main' OR employee_rank > 10);

Querying

Another advantage of using composite keys is having more options for joining tables. For example, using the tables we created, we can join employees directly to sections without going through the teams table. This would not be possible if we had used single-column keys everywhere.

Join conditions get complicated with composite keys. The USING clause helps. To demonstrate, let’s join all of the tables we created so far:

SELECT *
  FROM employees
    JOIN departments USING (department_id)
    JOIN sections USING (department_id, section_id)
    JOIN rooms USING (department_id, room_id)
    JOIN teams USING (department_id, section_id, team_id);

The USING clause only works if you name the columns the same on all tables. I recommend doing so.
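For comparison, just the teams join spelled out with an explicit ON condition would look like the following, which gets verbose quickly:

SELECT *
  FROM employees
    JOIN teams
      ON teams.department_id = employees.department_id
      AND teams.section_id = employees.section_id
      AND teams.team_id = employees.team_id;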

Another advantage of the USING clause is that it eliminates duplicate columns from the result set. If you run this query, you will not see the department_id column repeated 5 times.

Performance Considerations

One disadvantage of using composite keys is that referencing tables have to store more data. You would also need more space for the indexes, as the referencing columns often need to be indexed. However, storage is the cheapest of resources, and the performance advantages easily outweigh the extra storage.

The main performance advantage of using composite keys is eliminating the need for many joins, as mentioned before. However, when you do need to join many tables, the query planner has many different paths to consider. Finding the best join order is the query planner’s strong suit, and composite keys allow it to come up with better plans in many scenarios. To demonstrate this, let’s take our join-all-tables query and add some WHERE conditions:

SELECT *
  FROM employees
    JOIN departments USING (department_id)
    JOIN sections USING (department_id, section_id)
    JOIN rooms USING (department_id, room_id)
    JOIN teams USING (department_id, section_id, team_id)
  WHERE username LIKE 'a%' AND
    department_location LIKE 'b%' AND
    section_location LIKE 'c%' AND
    room_location LIKE 'd%' AND
    team_members > 3;

Now we have added 5 conditions using 5 columns on 5 different tables. The query planner can estimate which conditions are more selective and plan to join the tables from the smallest result set to the largest.


database development performance postgres sql