Our Blog

Ongoing observations by End Point people

Job opening: Systems Programmer

By Matt Vollrath
October 13, 2021

foggy autumn mountain view

End Point is an Internet technology company with headquarters in New York City and an office in Johnson City, Tennessee. The majority of our 50 employees work remotely. We provide consulting, development, and support services to many clients ranging from small family businesses to large corporations. We also develop and support an immersive visualization product called Liquid Galaxy.

Job Description

We are looking for a C++ developer to join our team full-time to develop new custom software solutions and improve existing ones for our clients. The person in this position will work collaboratively with our talented team of developers to design, implement, test, debug, and maintain systems software.

Responsibilities

  • Create reusable, effective, secure, and scalable C++ code.
  • Translate technical requirements into code.
  • Identify bottlenecks and bugs in the system and develop solutions.
  • Troubleshoot and ensure that software applications are running correctly.

Skills and Qualifications

  • English language proficiency
  • Preferably based in the United States
  • Strong technical and communication skills
  • 4+ years of professional experience developing software using C++
  • Proficiency with containers, networking fundamentals, and pub-sub messaging systems such as ROS, ROS2, Socket.IO
  • Experience programming in Python and JavaScript/​TypeScript for Node.js and browser, or equivalent
  • Strong experience writing modules for high-level languages such as Python and Node.js
  • Good understanding of code versioning tools and proficiency with Git
  • Proficient with Linux including shell scripting
  • Ideally also experience with database/​interface architecture and design, the Unity engine, and/or WebRTC

Benefits

  • Flexible, sane work hours
  • Annual bonus opportunity
  • Paid holidays and vacation
  • Health insurance subsidy
  • 401(k) retirement savings plan

How to contact us

Please email us an introduction to jobs@endpoint.com to apply. Include your location, your resume/​CV, your LinkedIn URL (if you have one), and whatever else helps us get to know you.

We look forward to hearing from you! Direct employment seekers only, please—​this role is not for agencies or subcontractors.

Equal opportunity employer

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of sex/​gender, race, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status.


company jobs

Vue GraphQL integration using Apollo Client

By Daniel Gomm
October 8, 2021

Photo by Mathew Benoit on Unsplash

Introduction

In this post I’ll go over everything you need to know to get your Vue app using GraphQL to send and receive data. This post only covers the frontend — stay tuned for my next post on making a GraphQL server using Django and graphene-python!

For the uninitiated: GraphQL is a query language that aims to replace the traditional REST API. The idea is that, instead of having separate endpoints for each resource in your API, you use one endpoint that accepts GraphQL queries and mutations for all of your resources. Overall, this makes data access on the frontend more like querying a database. Not only does it give you more control over your data, but it can also be much faster than using a REST API, providing a better user experience.
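
For example, a single request to that one endpoint can ask for exactly the fields it needs, even across related resources (the field names here are purely illustrative):

query {
  user(id: 1) {
    name
    posts {
      id
      content
    }
  }
}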

Getting started

To get your Vue app set up using GraphQL we’ll need to do two things. First, we’ll install vue-apollo (a Vue plugin for the Apollo GraphQL client) as well as apollo-boost, which bootstraps the configuration of Apollo. With these you’ll be able to:

  • Manually run GraphQL queries and mutations from any Vue component via the this.$apollo helper
  • Automatically map GraphQL queries to a component’s data fields by adding the apollo property to your component

These queries will also lazy load data from Apollo’s cache to minimize requests across multiple components.

Second, we’ll add webpack configuration so that you can store your GraphQL queries and mutations in separate files (.gql or .graphql), and import them directly into your component files.

Let’s begin by installing the required npm packages:

npm install graphql vue-apollo apollo-boost graphql-tag

Setting up VueApollo

To set up the VueApollo plugin, we’ll use the ApolloClient helper from apollo-boost, and pass it the URL of your GraphQL API endpoint:

main.js

import Vue from 'vue';
import App from './App.vue';
import ApolloClient from 'apollo-boost';
import VueApollo from 'vue-apollo';

// Create the apolloProvider using the ApolloClient helper
// class from apollo-boost
const apolloProvider = new VueApollo({
  defaultClient: new ApolloClient({
    uri: '<YOUR_GRAPHQL_ENDPOINT_HERE>'
  })
});

// Add VueApollo plugin
Vue.use(VueApollo);

// Instantiate your Vue instance with apolloProvider
new Vue({
  apolloProvider,
  render: h => h(App),
}).$mount('#app')

With this configuration in place, you now have access to this.$apollo in all your components, and you can add smart queries to them using the apollo property.

GraphQL file imports

To enable GraphQL file imports, update vue.config.js to use the included GraphQL loader from graphql-tag to parse all files with a .graphql or .gql extension:

vue.config.js

module.exports = {
  chainWebpack: (config) => {
    // GraphQL Loader
    config.module
      .rule('graphql')
      .test(/\.(graphql|gql)$/)
      .use('graphql-tag/loader')
      .loader('graphql-tag/loader')
      .end();
  },
};

Once this configuration is in place, you can create a .gql or .graphql file and import it directly into your JavaScript files:

import MY_QUERY from "./my-query.gql";

This imported query (named MY_QUERY in the example) is a DocumentNode object, and can be passed directly to Apollo.
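
For example, you could run the imported query manually from any component method (the variables here are placeholders):

const response = await this.$apollo.query({
  query: MY_QUERY,
  variables: { id: 1 },
});
console.log(response.data);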

As a side note: If you have an existing GraphQL server, it’s usually possible to export your schema into a .gql file that contains the queries and mutations your server uses. Not only does this save a lot of time, but it helps minimize inconsistencies between the queries on the frontend and what the backend actually does.

Loading data with Apollo queries

With Apollo, you can configure any Vue component to map GraphQL queries to fields in its data object. You can do this by adding an apollo option to your component. Each field on this object is an Apollo Smart Query, which will automatically run the query (lazily loading from the cache) and then map the query results to a field in the component’s data. The name of the mapped data field will be the same as the field name within the apollo object.

For example, let’s say we needed to make a component load a list of blog posts, given a user ID, and display the total number of posts for that user. To do this using Apollo, you’ll need to define a GraphQL query that accepts userId as a variable and queries for that user’s posts. Here’s how that query might look:

posts.gql

query ($userId: String!) {
    posts(userId: $userId) {
        id 
        content
    }
}

We can then define an apollo object on the component that loads the data from the query into our component’s data:

posts.vue

<template>
    <p>
        Total number of posts: {{posts.length}}
    </p>
</template>
<script>
import POSTS_BY_USER from "./posts.gql";

export default {
    name: 'NumPosts',
    props: ['userId'],
    data() {
        return {
            // This value is updated by apollo when the query
            // is run and receives data
            posts: [],
        }
    },
    // This smart query will automatically run the POSTS_BY_USER
    // query when the component is mounted. It also responds to 
    // changes in any of its variables, and will automatically 
    // rerun the query if the userId changes.
    apollo: {
        posts: {
            query: POSTS_BY_USER,
            variables() {
                return { userId: this.userId };
            },
        },
    },
}
</script>

The smart query accepts a GraphQL query and a variables object where the keys are the variable names and the values are the variable values. What this will do is run the POSTS_BY_USER query when the component mounts, and store the results of that query in the posts data field. Then, any time one of the variables changes (in this case, it would happen if the userId prop receives a new value), the query will be rerun and posts will again be updated. Additionally, the results of the query are stored in Apollo’s cache. So, if another component has the same smart query in it, only one actual request will be made.

Updating data with Apollo mutations

To update existing objects using GraphQL, we use mutations. GraphQL mutations look similar to queries, except that on the server, they will update or create new resources. For example, a mutation to update an existing user’s post would look like this:

mutation ($id: Int!, $content: String!) {
    updatePost(id: $id, content: $content) {
        id
        content
    }
}

Running this mutation will cause the server to update the post with the specified $id. To run GraphQL mutations from your component, you can use the Apollo mutate method:

this.$apollo.mutate({
    mutation: UPDATE_POST,
    variables: { id: this.post?.id, content: this.newContent }
});

This function sends the UPDATE_POST mutation to the server to be run, and then updates the cache for all occurrences of the post with the given id when it receives the response.

For updating existing objects, Apollo is able to automatically handle updating the cache. However, when creating a new object, the cache needs to be updated manually. I’ll demonstrate this in the next section.

Creating data and handling cache updates

Apollo has a global cache of query results, which prevents duplicate requests from being made when the same query is run again in the future. In the cache, each query is indexed using the query itself, and the variables it was run with.

When you run a mutation that updates an existing object, Apollo is smart enough to update the cache because it can use the ID of that object (from the mutation’s variables) to find all cached queries that include it. However, when creating new objects, Apollo won’t update the cache because there’s no object in any cached queries with the ID of the new object. This is why you’ll have to either update the cache yourself, or specify which queries need to be re-fetched after running the mutation.

While specifying the queries to re-fetch makes the code much simpler, it might make more sense to do a manual update if the query to be re-fetched is costly.

Continuing with our blog posts example, let’s assume we have a query POSTS_BY_USER, which returns a list of all posts for a given user ID. If we wanted to create a new post, we’d need to update the cached results for POSTS_BY_USER with the given user ID to include the new post.

To create a new post, and then re-fetch the POSTS_BY_USER query, it would look like this:

this.$apollo.mutate({
    mutation: ADD_POST,
    variables: { content: this.newPostContent },
    refetchQueries: [
        {
            query: POSTS_BY_USER, 
            variables: { userId: this.currentUser.id }
        }
    ]
});

To do the same exact thing with a manual cache update, it would look like this:

this.$apollo.mutate({
    mutation: ADD_POST,
    variables: { content: this.newPostContent },
    update: (cache, result) => {
        // The new post returned from the server. Notice how 
        // the field on data matches the name of the mutation 
        // in the GraphQL code.
        let newPost = result.data.addPost;

        // Queries are cached using the query itself, and the
        // variables list used.
        let cacheId = {
            query: POSTS_BY_USER,
            variables: { userId: this.currentUser.id },
        };

        // Get the old list from the cache, and create a new array
        // containing the new item returned from the server along
        // with the existing items. Note that the field on data
        // must match the root field of the query (posts, as
        // defined in posts.gql above).
        const data = cache.readQuery(cacheId);
        const newData = [...data.posts, newPost];

        // Write the new array of data for this query into
        // the cache.
        cache.writeQuery({
            ...cacheId,
            data: { posts: newData },
        });
    },
    // By specifying optimistic response, we're instructing apollo 
    // to update the cache before receiving a response from the 
    // server. This means the UI will be updated much quicker.
    optimisticResponse: {
        __typename: "Mutation",
        addPost: {
            __typename: "Post",
            id: "xyz-?",
            content: this.newPostContent,
            userId: this.currentUser.id,
        },
    },
});

There are a few things to note about the above code. First, it specifies an optimisticResponse field on the mutation. This field can be used to pass a response to Apollo before the server actually responds. If you know exactly what the response will look like, you can use it to enhance the user experience by making the UI respond right away instead of waiting while the server processes the request.

As you can see, manually updating the cache requires quite a bit of code to accomplish, and is a bit hard to read. In my own projects, I found it best to abstract the Apollo mutations into separate helper functions that just accept the variables object. This way, the cache updates stay separate from the business logic of the components, and aren’t scattered throughout the codebase.
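
For example, a helper for creating posts might look something like this sketch (the add-post.gql file and helper names are assumptions, building on the examples above):

// post-helpers.js
import ADD_POST from "./add-post.gql";
import POSTS_BY_USER from "./posts.gql";

// Runs the ADD_POST mutation and refetches the user's posts,
// keeping cache handling out of the components.
export function addPost(apollo, { content, userId }) {
  return apollo.mutate({
    mutation: ADD_POST,
    variables: { content },
    refetchQueries: [{ query: POSTS_BY_USER, variables: { userId } }],
  });
}

A component would then simply call addPost(this.$apollo, { content: this.newPostContent, userId: this.currentUser.id }).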

Conclusion

And that’s it! My experience converting an existing Vue codebase to use Apollo/​GraphQL was a very positive one. The resulting code had much better performance than manually sending requests and updating a Vuex store, and was a lot easier to work on.

Have any questions? Feel free to leave a comment!


javascript graphql vue

Liquid Galaxy Hackathon 2021

By Seth Jensen
October 2, 2021

Our NYC hackathon group

A few months ago we had our first company gathering since the pandemic started. About 20 End Pointers came to our New York City office to work on various Liquid Galaxy projects and, for several of us, to meet each other in person for the first time.

Except for our NYC-based team, this was everyone’s first look at the “new” office; we moved offices in January of 2020, so COVID-19 shut down about 14 months of office use.

The hackathon group

Our CMS team worked on some exciting updates to our Liquid Galaxy CMS, including implementing new and improved technologies for the database and user interface.

The CMS team: Zed, Dan, and Jeff

Our Research and Development team worked on upgrades to the Liquid Galaxy itself, focusing on creating smoother transitions for multimedia presentations. This included a custom window manager created by Matt, dubbed “Matt WM” by the team.

Will, Matt, Jacob, and Neil, hard at work on the Liquid Galaxy

Our support team worked on spinning up documentation and data entry to bring our inventory up to date and prepare for the next wave of installations.

Darius hacking away

It was great to see everyone at the NYC office, working elbow to elbow. We saw plenty of the hard work and camaraderie that are End Point’s specialty. Until next time, we will continue to improve the Liquid Galaxy remotely!

View from our new office on the 19th floor


liquid-galaxy company

Liquid Galaxy Screen Share Integration

By Alejandro Ramon
September 29, 2021

Screen Share Integration

End Point’s Immersive and Geospatial Division is proud to announce the rollout of our new Screen Share Integration as an extension to the Liquid Galaxy platform’s capabilities. The additional hardware and software configuration can be added to existing installations or included in a solution provided by our sales team.

Screen Share Integration uses ClickShare, a well-regarded enterprise-grade wireless screen sharing tool already used in the offices of many of our commercial real estate clients. With Screen Share Integration you can push a button to share laptop, desktop, phone, and tablet content directly onto the displays of the Liquid Galaxy or onto an integrated side screen. We expect this to be useful for clients who are interested in sharing videos, spreadsheets, and other ad-hoc interactive media directly from their devices to supplement the main content on screen.

Why we created this

In an effort to expand the flexibility of the Liquid Galaxy platform, we thought about what tools our current clients are already using. We acknowledge that there are limitations to the interactivity of certain content types on the platform, and ClickShare is a useful tool for wireless sharing already used by our clients. Recognizing the value in supporting more interactivity with content sources and expanding on the possibility of what can be presented on the system, we invested in this feature to provide more options to our users.

Who this benefits

Any client wishing that they could share and interact with data or media from other sources can now seamlessly integrate these media streams onto the Liquid Galaxy mid-presentation, easily enabling and disabling the share as they go through their presentation.

How it works

Clients can use a USB-enabled single button device or an application compatible with iOS, Android, Windows, or macOS devices to share the content to their Liquid Galaxy. Using the touchscreen, the user can then place their content onto the main displays in pre-defined windows, seamlessly overlaid on the Liquid Galaxy.

If you are an existing client and have any questions about this new capability, please email or call us! If you are considering a Liquid Galaxy platform for your organization and would like to learn more, please contact us.


liquid-galaxy company

Integrating the Estes Freight Shipping SOAP API as a Spree Shipping Calculator

By Patrick Lewis
September 28, 2021

Cargo ship on sea with dark clouds

One of our clients with a Spree-based e-commerce site was interested in providing automated shipping quotes to their customers using their freight carrier Estes. After doing some research I found that Estes provided a variety of SOAP APIs and determined a method for extending Spree with custom shipping rate calculators. This presented an interesting challenge to me on several levels: most of my previous API integration experience was with REST, not SOAP APIs, and I had not previously worked on custom shipping calculators for Spree. Fortunately, the Estes SOAP API documentation and some code examples of other Spree shipping calculators were all I needed to create a successful integration of the freight shipping API for this client.

Estes API Documentation

I relied on the Estes Rate Quote Web Service API to generate shipping quotes based on a combination of source address, destination address, and package weight. I found the developer documentation to be thorough and helpful, and was able to create working client code to send a request and receive a response relatively quickly. Many optional fields can be provided when making requests, but I found that I only needed a small subset of them, as shown in the example code below.

The one aspect of the API that tripped me up a bit was their use of CN as the country code for Canada; Spree and most other codebases I have encountered use the international standard ISO 3166 country codes with CA for Canada, so I had to add a small workaround for that in my client code when requesting shipping quotes to Canadian addresses. Another limitation I encountered is that the API expected to receive only 5-digit US and 6-character Canadian postal codes, so I had to do a bit of manipulation in my shipping calculator to account for that.

Ruby SOAP Client

I researched Ruby SOAP clients and soon found Savon, the “Heavy metal SOAP client”, which proved to be very easy to integrate into my existing Rails/​Spree application. I added the savon gem to my project’s Gemfile and I was quickly able to instantiate a Savon client and configure it using the WSDL provided by Estes. After that, most of the integration work involved crafting a valid XML payload for my request and then parsing the response.

Spree Shipping Calculators

The final piece of the puzzle was implementing the Savon client into a Spree shipping calculator class. This process allowed me to retrieve details about the current user’s order and then return the calculated shipping estimates within the context of the Spree checkout pages. Looking at existing shipping calculator code helped set me on the right path here; in the end, it was just a matter of defining a new class that inherited from the base Spree::ShippingCalculator class and then defining the #compute_package method expected by Spree for returning the shipping cost of a given package.

Code Example

# app/models/spree/calculator/shipping/estes_calculator.rb
module Spree::Calculator::Shipping
  # Custom freight shipping rate API integration
  class EstesCalculator < Spree::ShippingCalculator
    ESTES_API_URL = 'https://www.estes-express.com/tools/rating/ratequote/v4.0/services/RateQuoteService?wsdl'.freeze

    def self.description
      'Estes Freight'
    end

    def compute_package(package)
      country_code = package.order.ship_address.country.iso
      return 0 unless country_code.in?(['US', 'CA', 'MX'])

      client = Savon.client(
        filters: %i[user password account],
        log: true,
        log_level: :debug,
        logger: Logger.new(Rails.root.join('log', 'savon.log')),
        pretty_print_xml: true,
        wsdl: ESTES_API_URL
      )

      country_code = 'CN' if country_code == 'CA' # Estes uses CN for Canada
      postal_code = package.order.ship_address.zipcode

      postal_code =
        if country_code == 'CN'
          postal_code.delete(' ').first(6)
        else
          postal_code.first(5)
        end
      xml = <<~XML
        <soapenv:Envelope
            xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:rat="http://ws.estesexpress.com/ratequote"
            xmlns:rat1="http://ws.estesexpress.com/schema/2019/01/ratequote">
          <soapenv:Header>
            <rat:auth>
              <rat:user>#{ENV['ESTES_USER']}</rat:user>
              <rat:password>#{ENV['ESTES_PASSWORD']}</rat:password>
            </rat:auth>
          </soapenv:Header>
          <soapenv:Body>
            <rat1:rateRequest>
              <rat1:requestID>#{package.order.number}</rat1:requestID>
              <rat1:account>#{ENV['ESTES_ACCOUNT']}</rat1:account>
              <rat1:originPoint>
                <rat1:countryCode>US</rat1:countryCode>
                <rat1:postalCode>10001</rat1:postalCode>
              </rat1:originPoint>
              <rat1:destinationPoint>
                <rat1:countryCode>#{country_code}</rat1:countryCode>
                <rat1:postalCode>#{postal_code}</rat1:postalCode>
              </rat1:destinationPoint>
              <rat1:payor>S</rat1:payor>
              <rat1:terms>C</rat1:terms>
              <rat1:baseCommodities>
                <rat1:commodity>
                  <rat1:class>50</rat1:class>
                  <rat1:weight>#{package_weight(package)}</rat1:weight>
                </rat1:commodity>
              </rat1:baseCommodities>
            </rat1:rateRequest>
          </soapenv:Body>
        </soapenv:Envelope>
      XML
      response = client.call(:get_quote, xml: xml)
      quotes = response.body.dig(:rate_quote, :quote_info, :quote)

      if quotes.is_a?(Array)
        quotes.first.dig(:pricing, :total_price).to_f
      elsif quotes.is_a?(Hash)
        quotes.dig(:pricing, :total_price).to_f
      else
        0
      end
    rescue Savon::Error
      # Record shipping rate as 0 if an API error is caught, 0 amount will indicate need to show user an error message on the shipping rate page
      0
    end

    private

    def package_weight(package)
      weight = package.contents.sum { |content| content.line_item.weight }

      if weight < 5 # enforce minimum package weight
        5
      else
        weight.round
      end
    end
  end
end
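
One final step worth mentioning: for the calculator to appear as an option in the Spree admin, it also needs to be registered with Spree. In a typical Spree application this is a one-line addition to an initializer (a sketch; the exact hook can vary by Spree version):

# config/initializers/spree.rb
Rails.application.config.spree.calculators.shipping_methods << Spree::Calculator::Shipping::EstesCalculator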

Conclusion

I was pleased that I was able to quickly build a custom shipping calculator in Spree that used the Estes Rate Quote API to provide accurate shipping estimates for large freight packages. The high quality documentation of the Estes API and the Savon SOAP Client gem made for a pleasant development experience, and the client was happy to gain the new functionality for their Spree store.


api ruby rails shipping spree

Deploying a .NET 5 app on IIS

Juan Pablo Ventoso

By Juan Pablo Ventoso
September 27, 2021

Puzzle Photo by Christian Cable, CC BY 2.0

.NET 5 has been around for almost a year now, having been released at .NET Conf 2020. It contains the best of both worlds: from .NET Core, multi-platform support and several performance improvements; from .NET Framework, Windows desktop development support with WPF and Windows Forms (UWP is also supported, but not officially yet).

A .NET Core-based project can be published to any platform (as long as we’re not depending on libraries targeted to .NET Framework), allowing us to save on costs by hosting on Linux servers and taking advantage of cheaper scalability options. But most developers still use Windows with Internet Information Services (IIS) as the publishing target, likely due to the almost 20 years of history of .NET Framework, compared to the relatively short history of .NET Core, launched in 2016.

Our .NET project

We won’t review the steps needed to set up a new .NET 5 project, since this time we are only focusing on publishing what we already have developed. But to understand how our application will integrate with IIS and the framework, it’s important to note a fundamental change any .NET 5 project has in comparison with a .NET Framework one:

Since .NET 5 is .NET Core at its foundation, our project output will actually be a console application. If we create a new .NET Core project, no matter which version we are using, we will find a Program.cs file in the root with an application entry point that will look similar to the one below:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
	public static void Main(string[] args)
	{
		BuildWebHost(args).Run();
	}

	public static IWebHost BuildWebHost(string[] args) =>
		WebHost.CreateDefaultBuilder(args)
			.UseStartup<Startup>()
			.Build();
}

The WebHost object is what processes requests to the app, and it also handles configuration such as the content root, environment variables, and logging.

This application needs to be executed by the dotnet process, which comes with any .NET 5 runtime.
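
In other words, once the app is built, it can be started directly from a command prompt (MyApp.dll here is a placeholder for your project’s output assembly):

dotnet MyApp.dll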

Installing a .NET 5 runtime

The first step on our destination server is to prepare the environment to run .NET 5 apps by installing the .NET 5 Hosting Bundle, a roughly 65 MB installer with everything needed to run .NET 5 on Windows. Since .NET can also run on Linux and macOS, installers are available for those operating systems as well.

Installing the hosting bundle

Once the installation finishes, we will need to restart IIS by typing iisreset on an elevated command prompt.

Creating the application pool

It’s always recommended to create a new application pool for a new website that will be published. That allows us to run the website in a separate IIS process, which is safer and prevents other websites from crashing if an application throws an unhandled exception. To create a new application pool, right-click on the “Application pools” section on the IIS Manager sidebar and choose the option “Add application pool”.

Since .NET 5 is based on .NET Core, the application pool we create will not be loaded inside the .NET Framework runtime environment. All .NET 5 applications will run by calling the external dotnet process, which is the reason why we need to install a separate hosting bundle in the first place.

That means that, when we are creating our application pool, we will need to set the .NET CLR version to “No managed code” before saving changes, as shown below:

App pool settings

Creating the new website

With the bundle installed and a new application pool created, it’s time to add the new website where our application will be published to. Right-click on the “Sites” section on the IIS Manager sidebar and choose the “Add Website” option.

We can choose any name to identify the new website. The important thing is to point the website to the newly created application pool, and bind it to the correct IP address and domain/​host name, as shown below:

Setting up a new website

Once we accept the changes, the new website will be automatically started, which means we should be able to reach the IP address/​hostname we entered. We will get a default page or a 404 response, depending on how our IIS instance is configured, since we haven’t published our application yet.

Publishing our project

Finally, it’s time to publish our .NET application into the new website. If we have Visual Studio, we can have the IDE automatically upload our content to IIS and publish it by right-clicking on our project and choosing “Publish”. Or we can use the dotnet command with the publish parameter to do it ourselves.

I usually prefer to do a manual publish into a local folder, and then decide which content I need to copy into the destination. Sometimes we only update a portion of the backend logic, in which case copying the output DLLs is all we need to do.
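
For example, a manual publish into a local folder can be done from the project directory with something like this (the output path is arbitrary):

dotnet publish -c Release -o ./publish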

If we choose to let Visual Studio handle the publishing process, we need to choose “Web Server (IIS) / Web deploy” as our publish destination and enter the information needed for Visual Studio to connect to the server and copy the files:

Publishing the website

We need to make sure that the “site name” we enter here corresponds to the site name we entered when we created the new website on the server.

If, like me, you prefer to manually copy the files, on the destination screen choose “Folder” instead of “Web Server (IIS)”. That option will copy the project’s output into the specified folder, so it can be manually copied into our website’s root later.

And that’s it! Our project is now published into the website we created. We can hit our IP address/​hostname again and we should now get our website’s default response.

To sum it up, the main difference between publishing a .NET Framework app and a .NET 5 app is that we need to install a runtime and do a couple of tweaks when setting up the new application pool. But in general, it’s a pretty straightforward process.


dotnet csharp iis

Monitoring Settings Changes in ASP.NET Core

Daniel Gomm

By Daniel Gomm
September 22, 2021

Ripples in water
Photo by Linus Nylund on Unsplash

Did you know you can directly respond to config file changes in ASP.NET Core? By using the IOptionsMonitor<T> interface, it’s possible to run a lambda function every time a config file is updated.

For those who aren’t too familiar with it, ASP.NET Core configuration is handled using the Options Pattern: you create a model class with a property for each setting in your config file, then set up the application to map the configuration to that class, which can be injected into controllers and services. In your controllers, you can request the configuration class T as a dependency by using an options wrapper class like IOptions<T>.

While this works just fine in some cases, it doesn’t allow you to actually run code when the configuration is changed. However, if you request your configuration using IOptionsMonitor<T>, you get the ability to define a lambda function to run every time appsettings.json is changed.

One use case for this functionality would be if you wanted to maintain a list in the config file, and log every time that list was changed. In this article I’ll explain how to set up an ASP.NET Core 5.0 API to run custom code whenever changes are made to appsettings.json.

Setting up the API

In order to use the options pattern in your API, you’ll first need to add the options services to the container using the services.AddOptions() method. Then, you can register your custom configuration class (in this example, MyOptions) to be bound to a specific section in appsettings.json (in this example, "myOptions").

public void ConfigureServices(IServiceCollection services)
{
  // ...

  // Add options services to the container
  services.AddOptions();

  // Configure the app to map the "myOptions" section of
  // the config file to the MyOptions model class
  services.Configure<MyOptions>(
    Configuration.GetSection("myOptions")
  );
}
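
For reference, MyOptions is just a plain model class whose properties mirror the settings in the "myOptions" section of appsettings.json. A minimal sketch, assuming a single hypothetical name setting (binding is case-insensitive):

// appsettings.json:
// {
//   "myOptions": {
//     "name": "example"
//   }
// }

public class MyOptions
{
  // Bound to "name" in the "myOptions" section
  public string Name { get; set; }
}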

Monitoring Configuration Changes

Now that the API is set up correctly, in your controllers you can directly request the configuration using IOptionsMonitor<T>. You can get the configuration instance itself from the IOptionsMonitor<T>.CurrentValue property. Note that CurrentValue is a snapshot: when the configuration changes, IOptionsMonitor<T> builds a new options instance, so a stored copy needs to be refreshed, which is easy to do in the OnChange handler shown below.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

[Route("api/[controller]")]
[ApiController]
public class MyController : ControllerBase
{
  private MyOptions MyOptions;

  public MyController(IOptionsMonitor<MyOptions> MyOptionsMonitor)
  {
    // Store the current configuration in the controller to
    // reference later. CurrentValue is a snapshot, so it is
    // refreshed below whenever the config changes.
    this.MyOptions = MyOptionsMonitor.CurrentValue;

    // Registers a lambda function to be called every time
    // a configuration change is made
    MyOptionsMonitor.OnChange(async (opts) =>
    {
      // Keep the stored copy in sync with the new values
      this.MyOptions = opts;

      // Write some code!
    });
  }
}

One small gotcha worth noting is that the OnChange function isn’t debounced. So if you’re using an IDE that automatically saves as you type, it’ll trigger the OnChange function rapidly as you edit appsettings.json.
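
If that becomes an issue, you can debounce the handler yourself. Here’s a rough sketch (the MyOptionsWatcher class, HandleChange method, and 500 ms delay are all arbitrary; debouncing is not built into the framework):

using System;
using System.Threading;
using Microsoft.Extensions.Options;

public class MyOptionsWatcher : IDisposable
{
  private Timer _debounceTimer;

  public MyOptionsWatcher(IOptionsMonitor<MyOptions> monitor)
  {
    monitor.OnChange(opts =>
    {
      // Restart the timer on every notification, so HandleChange
      // only runs once notifications have stopped for 500 ms.
      _debounceTimer?.Dispose();
      _debounceTimer = new Timer(_ => HandleChange(opts), null, 500, Timeout.Infinite);
    });
  }

  private void HandleChange(MyOptions opts)
  {
    // React to the settled configuration change here.
  }

  public void Dispose() => _debounceTimer?.Dispose();
}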

Conclusion

And that’s it! It doesn’t take much to get your API to run custom code on config file changes.

Have any questions? Feel free to leave a comment!


monitoring dotnet

Introducing VisionPort Remote

By Alejandro Ramon
September 17, 2021

VisionPort Remote screenshot

Welcome to End Point Liquid Galaxy’s new VisionPort Remote! This application not only allows the user to show their content on screen, but also gives control over Google Earth navigation through a number of touch actions. In addition to serving as a remote touchscreen of the system, Remote doubles as a portal that creators can use to test their content from their own devices. A new feature of Remote permits a shareable “guest view” allowing the presentation host to show their content without the possibility of a guest intervening.

Note: This interface layout is not final.

Why we created this

The COVID-19 pandemic increased the need for remote work, content, and presentations. Prior to the pandemic, the only way to use the Liquid Galaxy system was if you were in front of the display. The VisionPort Remote provides more flexibility and an ability to experience the system’s benefits from all over the world.

Who this benefits

The VisionPort Remote helps content creators visualize the content that they are making without needing to be in front of the system, enables hands-free control of the Liquid Galaxy from any device, and allows remote sharing of content with viewers who do not have easy access to a Liquid Galaxy but still want to experience the immersive environment.

How it works

The host of Remote holds control of the system and presentation, allowing others to view but not affect the Liquid Galaxy.

The “Earth preview” displays a live view of content that is being displayed on the Liquid Galaxy, and gives the presenter the ability to navigate the map as one normally would with the SpaceNav, the 3D joystick which supports easy motion in all directions.

The lower portion of the interface controls the presentation and scene. Scenes, presentations, and playlists can be selected for display on the Liquid Galaxy as well as within the preview window.

To the left of the scene controls is a window with preview display modes. These buttons show a zoomed-in portion of the displays for a better view of their content. By default, “wall” is selected, showing an entire presentation as it would be displayed across the 7-screen wall. The other options show a view of the selected section.

The share button in the top right corner features a view-only option for the guest viewing the presentation. The guest only has the option to view the different preview display modes as they are presented. Press the share button, copy the link, and send it to your intended audience.

If you have any questions please reach out to us!


liquid-galaxy company