Our Blog

Ongoing observations by End Point people

Choosing Between SaaS and a Custom Website

By Greg Hanson
May 18, 2021

Photo by Daniel McCullough

So you need a website, but you’re not sure whether you want to pay for a website provided as software as a service (SaaS), or build a custom website and host it.

The options are plentiful, as are the reasons for building a website in the first place. The purpose of this article is not to put forward specific packages or providers for consideration; rather, I want to discuss how you might make that decision more objectively. Also, this discussion is for a commercial website, not a personal website or blog.

Here at End Point, we receive many inquiries asking for new websites. We are well equipped to help you work through this process, and are happy to do so. But for those of you who want to “go it alone”, or if you just want to be better prepared before giving us a call, read on for some things to consider as you make a pros & cons list for SaaS vs. custom websites.

While deciding between a SaaS offering and a custom build carries unlimited considerations, here are a few main points to help you narrow down the field.

Experience

The foremost factor that you should consider for your pros & cons list is your experience. This may come as a little bit of a shock, as you might think that budget would be the top consideration. Budget is very important, but it is largely determined by your experience with your chosen technology. Don’t think for a minute that either SaaS or custom development cannot consume almost any budget, especially when you are working with an unfamiliar technology.

So before budget comes experience. Experience can be further divided into experience with your business and experience with websites in general. Here are a few questions to start off with.

In your industry, are you a:

  1. Rookie
  2. Veteran

How long have you owned this business?

  1. 1–3 years
  2. 3+ years

Does this business have an existing website?

  1. No
  2. Yes

If yes, is it currently SaaS or custom-developed?

  1. SaaS
  2. Custom-developed

If you answered “1” to more than half of the above questions, it’s more likely that a SaaS system would be a better fit for you. If you mostly answered “2”, you may want to lean towards custom. These questions are not definitive, but are meant to help you decide where to focus your research.

Budget

In general, if your budget is under $10,000 then SaaS is most likely the choice for you. Some exceptions here:

  • You are a talented web programmer.
  • Your brother, sister, friend, or cousin is a talented web programmer.
  • You or your developer friend have enough time to devote to this endeavor. You will need a lot.

Custom websites are generally out of reach for the average business owner: plan on at least a $10k start-up cost, as well as a $1–3k monthly budget for ongoing improvements and maintenance. Currently, droves of talented programmers are working for SaaS companies, building tools for you to use to build your own website. In fact, SaaS offerings are so much cheaper and more widely supported that building a custom site for the average small business usually makes little sense.

If your budget is $10k or more, you are probably capable of funding a custom website. However, this alone should not rule out a SaaS website. SaaS offerings come in many shapes and sizes, and many are fully capable of running enterprise-level websites!

How does your business make money?

One good case for using a custom website is if your company offers a niche product. Most SaaS offerings are based on popular business or product templates, so they might not fit well if you need to break from the mold.

For example, do you sell clothing, movies, or tools? There are great SaaS offerings out there that can have your website up and running in less than a day. Do you provide a service that uses multiple providers, working in different capacities and for different rates depending on the level of service provided? You may need a custom application! The further from commonplace your products or services are, the more likely it is that you will need a custom application.

Again, all of this advice is general, and there are exceptions to every rule. SaaS choices still require customizing to fit your business model. None of them will fit perfectly, and you will usually need to configure and add plugins to make your site conform to your business practices. SaaS offerings typically have a wide range of plugins and customizations to accommodate this. Those extras usually cost extra too.

While SaaS sites do provide a great starting point with a plethora of potential plugins, in many cases plugins are supplied by 3rd parties. That means that if you have problems with the plugin, you will need to deal with that provider separately from the main service provider.

A lot of time and money can be spent trying to make diverse systems work together. SaaS can be more “turn-key” in the beginning, but especially for more complex websites, it can end up costing more than if you created a custom site in the beginning.

What vendors does your company work with?

Vendors are sometimes overlooked when trying to decide on what type of website will be needed. Here are some questions which might complicate your website:

  • Do you use a 3rd party to ship or fulfill your products?
  • Do you sell your products on Amazon, Google Shopping, or other providers?
  • Do you use a vendor to provide your products to you?
  • Do you assemble the product after receiving parts from different vendors?
  • Does your company use any software for customer management, accounting, or shipping?

If your answer is yes to any of these questions, you need to know what platform each of those vendors uses, and whether it is compatible with the website you are planning to build. Almost all business done today involves the exchange of information between you, your customer, and your vendors.

That exchange of information usually takes place through an Application Programming Interface (API). If you aren’t familiar with APIs, it’s worth reading up on them before going further. Understanding how websites communicate with your vendors is very important.

The point is that if you plan on doing any volume of business, at some point you will need to exchange information with some vendor or other. Verifying that your new website is capable of exchanging information with existing vendors is a big consideration for you to make before you build. Building a website without the required APIs would be like building a house without any provision for water, plumbing, or electricity. This is important stuff, and it can get very expensive to retrofit!

Is that all?

Absolutely not. There are endless considerations that can play into your decision. But hopefully this post has given you some good ideas of where to start and will help you avoid analysis paralysis. Use this information to get a good idea of your needs and document them, so that you can investigate the right things. Compare how well different solutions fit your needs.

Who knows, you may start with a SaaS solution for a few years, and then when things are rolling and you have a better handle on your site layout, migrate to a custom-built site. There are no set rules.

Don’t read an ad for a website service offering to find out what you need in a website. Determine what you need in your website and look for a product which meets your needs!


saas software

End Point Relocates Its Tennessee Office

By Cody Ressler
May 14, 2021

After having been located in Bluff City for close to eight years, End Point is pleased to announce that it has recently relocated its Tennessee office to Johnson City. The new location is an improved facility that better serves our Liquid Galaxy team. A group of veteran End Pointers including Matt Vollrath, Neil Elliot, and Josh Ausborne are working out of the new location, alongside more recent employees Josh Harless and myself.

Our Tennessee office has an important role in providing remote access to various Liquid Galaxy platforms so that other End Point engineers can work on them. Johnson City has recently begun a fiber internet initiative through its power company, BrightRidge. We were lucky enough to be in the first wave of installations and got cables run underground to our office so that we can enjoy speeds of up to 1000 Mbps (gigabit).

This office is where we test new content, features, updates, or entirely new designs. The interior design includes plenty of space to concentrate on individual work, as well as for test systems and preparing new units for shipment. It is also well-suited for collaborative work with both onsite and remote teammates on R&D, content development, remote updates, and support.

The office is located in a much newer building with easy access to the interstate freeway and a wide variety of restaurants and hotels. Gone is the transmission-destroying hill where our previous office sat. Instead we are located on a freight truck-accessible cul-de-sac perfect for Liquid Galaxy shipments.

Our new office boasts efficient window treatments, better HVAC, an open concept design, and plenty of space for everyone to work, build, and assemble new Liquid Galaxies. And as a bonus, the location dramatically reduces our commute times.

If you are in the area, let us know if you would like to come visit with us in the office!


company liquid-galaxy

Integrating Laravel With a React Frontend

By Daniel Gomm
May 7, 2021

Photo by Scott Webb on Unsplash

Frontend frameworks can be useful, and provide a lot of advantages over server-side rendering of views. It’s not uncommon now for websites to be purely presentational frontend applications. Thankfully, Laravel provides some helpers for adding a dedicated frontend, including a fantastic npm package, laravel-mix, which heavily simplifies the use of webpack.

In this article I’ll go over how to set up a new Laravel application to work with React as its frontend. While this article may focus on React, the main issues are the same regardless of framework. You’ll need to:

  • Add your JavaScript application to the project’s file system and set up a build process for the frontend sources
  • Write some additional code to bootstrap your frontend application once the page has loaded
  • Carefully set up URL conventions to distinguish between frontend and backend routes.

Scaffolding The Frontend

In a standard Laravel 8 application (created using composer create-project laravel/laravel <NAME>), the frontend JS application is stored in the /resources/js folder. Laravel provides a helper package called laravel/ui, which can be used to scaffold the frontend with many popular frameworks, including React. To scaffold an empty React application, you can run the following:

composer require laravel/ui
php artisan ui react

This will add a new folder resources/js/components/ with a single file called Example.js in it, which contains a basic stateless functional component called Example. It’ll also add a new line to resources/js/app.js that requires the Example component. Finally, webpack.mix.js will be updated to include adding React in the build. I’ll go over what this file does in the next section.

Compiling Assets With Laravel Mix

Laravel Mix is an npm package that comes bundled with every Laravel application. It’s not Laravel specific though; you can add it to any application where you want a simple build process. It defines helpers for popular frameworks, React included. The mix.react() helper automatically handles adding in Babel to support using JSX syntax. For Laravel, the frontend build process is configured in webpack.mix.js. By default, it includes some scaffolding code that gives you a general idea of how it can be used:

const mix = require("laravel-mix");
mix
  .js("resources/js/app.js", "public/js")
  .react()
  .sass("resources/sass/app.scss", "public/css");

To run this build process, use the npm run dev command. This will use laravel-mix to compile everything specified in webpack.mix.js. The output directory for the build is also specified there. You can also start a basic development server by running php artisan serve.
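
For reference, those commands look like this (run npm install first if you haven’t already, to pull in laravel-mix and the other frontend dependencies):

npm install
npm run dev
php artisan serve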

This works just fine out of the box, but one thing worth noting is that by default, it’ll package all the code, including your dependencies, in the same file: public/js/app.js. This will cause the entire dependency tree to be reloaded if you make even a single line change to your code. You can use the mix.extract() helper to put the modules into a separate file, public/js/vendor.js. This allows the browser to cache your dependencies, which won’t change too much, separately from your application, which will change much more often. Here’s how this looks in webpack.mix.js:

mix
  .js("resources/js/app.js", "public/js")
  .react()
  .extract(["react"])
  .sass("resources/sass/app.scss", "public/css");

Then, to actually include your built JavaScript sources, go to resources/views/welcome.blade.php and add them in the header, in this order:

<head>
  . . .
  <!-- Include Frontend Application (webpack mix) -->
  <script defer src="/js/manifest.js"></script>
  <script defer src="/js/vendor.js"></script>
  <script defer src="/js/app.js"></script>
</head>

The order is important because each successive script depends on the content of the previous one being defined.

Notice that all the script tags have the defer attribute added to them. This forces the browser to wait until the DOM has fully loaded in order to execute the scripts. If you don’t add the defer attribute, you’ll end up with a blank screen when you try to load the application. This happens because the browser will, by default, run your scripts as soon as they’re loaded. And, when they’re in the head section, they get loaded before the body. So, if the script loads before the body, the root element of the React application won’t be in the DOM yet, which in turn causes the application to fail to load.

Handling Frontend Routing

The next roadblock to tackle for setting up the frontend is routing. If you’re planning to have the frontend do its own routing, you’ll need to make sure that the backend routes don’t clash with the frontend ones. You’ll also need to make sure that, for all routes that the backend doesn’t recognize, it falls back to rendering the layout page that bootstraps the frontend, and not a 404 page. If you fail to do the latter, nested frontend routes won’t work if you navigate to them directly, or refresh the page after navigating from the root URL.

One way to ensure the routes don’t clash is to add a prefix like /app/ for web routes. API routes already have the /api/ prefix set up by default, and shouldn’t pose any issues. Then, since all frontend routes won’t be recognized by Laravel, we’ll want to add a fallback route. The fallback route ensures that welcome.blade.php, which contains our root React component Example, gets rendered instead of a 404 error page for all frontend routes. We can do this by using Laravel’s Route::fallback() function in /routes/web.php:

Route::fallback(function() {
    return view('welcome');
});

Make sure you add this at the very bottom of /routes/web.php, so that it’s the last route registered by your application. This is recommended by the Laravel docs and is also good practice since this route should be the last possible route to match any given URL.

CSRF Tokens

One other thing that’s important to mention is that by default Laravel has built-in features for generating and verifying CSRF tokens. This is set up in the VerifyCsrfToken middleware class that comes bundled with a fresh application. It provides nice and easy helpers for Blade pages like @csrf to ease adding this to your forms as a hidden input. However, if you’re making forms outside of Blade in React, you might receive an error page that says 419 Page Expired when you try to submit a form or send a request:

419 Page Expired Error

This error happens for both vanilla HTML forms, and when sending a POST request via JavaScript, depending on the library being used. For example, I’ve encountered this issue when using jQuery, but not axios.

You can handle this in a few different ways. The easiest way is to simply add an exception for this route in your VerifyCsrfToken class:

class VerifyCsrfToken extends Middleware
{
    /**
     * The URIs that should be excluded from CSRF verification.
     *
     * @var array
     */
    protected $except = [
        "/my-route"
    ];
}

However, this removes CSRF protection entirely, and in most cases you’ll want to keep CSRF protection in your forms. This can be done by setting either the X-XSRF-TOKEN or X-CSRF-TOKEN request header, or by adding a _token property containing the CSRF token to the request parameters. It’s important to note that these similarly named values are not the same thing: the XSRF token is just an encrypted version of the actual CSRF token. Laravel 8 always sets the XSRF-TOKEN cookie in the response headers by default:

XSRF-TOKEN Cookie

This means that XSRF-TOKEN is defined in document.cookie when the page loads. By default, axios (which is included with your new Laravel application) automatically looks for this value in the cookie, and adds it to the request headers.
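
If you’re using a library that doesn’t read that cookie for you (plain fetch, or jQuery as mentioned above), here is a minimal sketch, assuming only the default XSRF-TOKEN cookie, of pulling the token out of document.cookie and sending it as a request header; the /my-route URL is just a placeholder:

// Read a cookie value by name; Laravel URL-encodes the XSRF-TOKEN cookie.
function getCookie(name) {
  const match = document.cookie.match(new RegExp('(^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[2]) : null;
}

// Attach the token so the VerifyCsrfToken middleware accepts the request.
fetch('/my-route', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-XSRF-TOKEN': getCookie('XSRF-TOKEN'),
  },
  body: JSON.stringify({ name: 'value' }),
});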

Conclusion

And that’s it! I’ve found Laravel works pretty well with a dedicated frontend once you get the initial setup out of the way. Have any questions? Feel free to leave a comment!


php laravel react

What is serialization?

Zed Jensen

By Zed Jensen
May 6, 2021

Mailbox Photo by Brian Patrick Tagalog on Unsplash

Serialization is a process used constantly by most applications today. However, there are some common misconceptions and misunderstandings about what it is and how it works; I hope to clear up a few of these in this post. I’ll be talking specifically about serialization and not marshalling, a related process.

What is serialization?

Most developers know that complex objects need to be transformed into another format before they can be sent to a server, but many might not be aware that every time they print an object in the Python or JavaScript console, the same type of thing is happening. Variables and objects as they’re stored in memory—either in a headless program or one with developer tools attached—are not really usable to us humans.

Data serialization is the process of taking an object in memory and translating it to another format. This may entail encoding the information as a chunk of binary to store in a database, creating a string representation that a human can understand, or saving a config file from the options a user selected in an application. The reverse—deserialization—takes an object in one of these formats and converts it to an in-memory object the program can work with. This two-way process of translation is a very important part of the ability of various programs and computers to communicate with one another.

An example of serialization that we deal with every day can be found in the way we view numbers on a calculator. Computers use binary numbers, not decimal, so how do we ask one to add 230 and 4 and get back 234? Because the 230 and the 4 are deserialized to their machine representations, added in that format, and then serialized again in a form we understand: 234. To get 230 in a form the computer understands, it has to read each digit one at a time, figure out what that digit’s value is (i.e. the 2 is 200 and the 3 is 30), and then add them together. It’s easy to overlook how often this concept appears in everything we do with computers!
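
As a toy illustration of that idea (just a sketch, not how any particular calculator is implemented), here is the digit-by-digit “deserialization” in JavaScript, followed by serializing the sum back into text:

// Turn the text "230" into a machine number one digit at a time.
function deserializeNumber(text) {
  let value = 0;
  for (const digit of text) {
    value = value * 10 + (digit.charCodeAt(0) - '0'.charCodeAt(0));
  }
  return value;
}

const sum = deserializeNumber('230') + deserializeNumber('4');
console.log(String(sum)); // serialize the result back to text for display: "234"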

Why it’s important to understand how it works

As a developer, there are many reasons you should be familiar with how serialization works as well as the various formats available, including:

  • Different formats are best suited for different use cases.
  • Standardization varies between formats. For example, INI files have no single specification, but TOML does. YAML 1.2 came out in 2009 but most YAML parsers still implement only parts of the earlier YAML 1.1 spec.
  • Each application typically supports only one or a few formats.
  • Formats have different goals, such as readability & simplicity for humans, speed for computers, and conciseness for storage space and transfer efficiency.
  • Applications use the various formats very differently from each other.

Before you start working on a project, it will certainly pay off to make sure you’re familiar with the options for serialization formats so you can pick the one most suited to your particular use case.

Binary vs. human-readable serialization

There’s one more important distinction to be made before I show any examples, and that is human-readable vs. binary serialization. The advantage of human-readability is obvious: debugging in particular is much simpler, and other tasks like scanning data for keywords are much easier as well. Binary serialization, however, can be much faster to process for both the sender and recipient, it can sometimes include information that’s hard to represent in plain text, and it can be much more efficient with space without needing separate compression. I’ll stick to reviewing human-readable formats in this post.

Common serialization formats with examples

CSV

For my examples, I’ll have a simple JavaScript object representing myself, with properties including my name, recent books I’ve read, and my favorite food. I’ll start with CSV (comma-separated values) because it’s intended for simpler data records than most of the other formats I’ll be showing; you’ll notice that there isn’t an easy way to do object hierarchies or lists. CSV files begin with a list of the column names followed by the rows of data:

name,favorite_food_name,favorite_food_prep_time,recent_book
Zed,Pizza,30,Leviathan Wakes

CSV files are most often used for storing or transferring tabular data, but there’s no single specification, so the implementation can be fairly different in different programs. The most common differences involve data with commas or line breaks, requiring quoting of some or all elements, and escaping some characters.
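
For example, a field containing a comma typically has to be quoted, along these lines (a hypothetical row, not part of the example object above):

name,favorite_food_name,favorite_food_prep_time,recent_book
Zed,"Pizza, extra cheese",30,Leviathan Wakes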

TSV

Files in tab-separated values (TSV) format are also fairly common, using tabs instead of commas to separate columns of data.

Because the tab character is rarely used in text put into table format, it is less of a problem as a separator than the very frequently-occurring comma. Typically no quoting or escaping of any kind is needed or possible in a TSV file.

name	favorite_food_name	favorite_food_prep_time	recent_book
Zed	Pizza	30	Leviathan Wakes

For the rest of my examples of each format, I’ll show the command (and library, if needed) that I used to get the serialized form of my object.

JSON

JSON stands for JavaScript Object Notation, and thus you might be fooled into thinking that it’s just an extension of JavaScript itself. However, this isn’t the case; it was originally derived from JavaScript syntax, but it has significant differences. For example, JSON has a stricter syntax for declaring objects. For my example, using the Google Chrome developer console I declared my object like this:

const me = {
  name: 'Zed',
  recent_books: [
    'Leviathan Wakes',
    'Pride and Prejudice and Zombies'
  ],
  favorite_food: {
    name: 'Pizza',
    prep_time: 30
  }
};

You’ll notice that the property names aren’t quoted and the strings are single-quoted with '. This is perfectly valid JavaScript, but invalid JSON. Let’s see what an equivalent JSON file could look like:

{
  "name": "Zed",
  "recent_books": [
    "Leviathan Wakes",
    "Pride and Prejudice and Zombies"
  ],
  "favorite_food": {
    "name": "Pizza",
    "prep_time": 30
  }
}

JSON requires property names to be quoted, and only double quotes " are allowed. It’s true that they look very similar, but the difference is important. Also notice that this JSON is formatted in an easy-to-read way, on multiple lines with indentation. This is called pretty-printing and is possible because JSON doesn’t care about whitespace.

Imagine my JavaScript application wants to send this object to some server that’s expecting JSON; the server could be built on any platform, such as Java or .NET, not necessarily JavaScript. The application would need to serialize the object from memory into a JSON string first, which can be done by JavaScript itself:

> let meJSON = JSON.stringify(me);
> console.log(meJSON);
{"name":"Zed","recent_books":["Leviathan Wakes","Pride and Prejudice and Zombies"],"favorite_food":{"name":"Pizza","prep_time":30}}

Note that the result here has no extra line breaks or spaces. This is called minifying, and is the reverse of pretty-printing. The flexibility allowed by these two processes is one reason people like JSON.

Parsing our example back into a JavaScript object is also very easy:

> console.log(JSON.parse(meJSON));
{
  name: 'Zed',
  recent_books: [ 'Leviathan Wakes', 'Pride and Prejudice and Zombies' ],
  favorite_food: { name: 'Pizza', prep_time: 30 }
}

The easy integration with JavaScript is a big reason JSON is so popular. I showed these examples to highlight how easy it is to use, but also to point out that sometimes we might use serialization without being aware of what’s going on under the hood; it’s important to remember that JSON texts aren’t JavaScript objects, and there may be instances where it makes more sense to use another format.

For instance, if you need a config file format that’s easy for humans to read, it is very helpful to allow comments that are not part of the data structure once it is read into memory. But CSV, TSV, and JSON do not allow for comments. The most obvious or popular choice isn’t always the only one, or the best one, so let’s keep looking at other formats.

XML

XML is well known as the markup language of which HTML is a subset, or at least a close sibling. It can also be used for serialization of data, and allows us to add comments such as the one at the beginning:

<!-- My favorites as of May 2021 -->
<person>
  <name>Zed</name>
  <recent_books>Leviathan Wakes</recent_books>
  <recent_books>Pride and Prejudice and Zombies</recent_books>
  <favorite_food>
    <name>Pizza</name>
    <prep_time>30</prep_time>
  </favorite_food>
</person>

XML has the benefit of being widely used, and it can represent more complex data structures since each element can also optionally have various attributes, and ordering of its child elements is significant.

But XML is unpleasant to type and for many use cases feels rather complex and bloated, so it suffers when compared to other formats we are looking at in this post.

YAML

YAML is a serialization format for all kinds of data that’s designed to be human-readable. Simple files look fine, like our example:

---
# My favorites as of May 2021
name: "Zed"
recent_books:
  - "Leviathan Wakes"
  - "Pride and Prejudice and Zombies"
favorite_food:
  name: "pizza"
  prep_time: 30

However, the YAML specification is far from simple, and quite a bit has been written on why it’s better to use other formats where possible.

INI

INI, short for initialization, is well-known and has been around since the ’90s or earlier. It was most notably used for configuration files in Windows, especially in the era before Windows 95. INI files are still used in many places, including Windows and Linux programs’ system configuration files such as for the Git version control system.

Our example in INI format looks like this:

; My favorites as of May 2021

name=Zed
recent_books[]=Leviathan Wakes
recent_books[]=Pride and Prejudice and Zombies

[favorite_food]
name=Pizza
prep_time=30

INI has no single specification, so one project’s config files might use different syntax from another. This makes it hard to recommend over newer formats like TOML.

TOML

TOML, which stands for Tom’s Obvious Minimal Language, is a more recent addition to serialization formats; its first version was released in 2013. TOML maps directly to dictionary objects and is intended especially for configuration files as an alternative to INI. It has similar syntax to INI as well:

# My favorites as of May 2021

name = "Zed"

recent_books = [
  "Leviathan Wakes",
  "Pride and Prejudice and Zombies"
]

[favorite_food]
name = "Pizza"
prep_time = 30

Unlike INI and YAML, TOML has a very clear and well-defined specification, and seems like a great option for new projects in the future. It is currently used most prominently by the Rust programming language tools. There is a list of TOML libraries per language and version on the TOML wiki at GitHub.

PHP’s serialize()

PHP’s serialization output isn’t quite as readable, but the data is still recognizable for someone scanning visually for keywords or doing a more rigorous search. Converting from JSON is fairly simple:

#!/usr/bin/env php
<?php

$json = '
{
  "name": "Zed",
  "recent_books": [
    "Leviathan Wakes",
    "Pride and Prejudice and Zombies"
  ],
  "favorite_food": {
    "name": "Pizza",
    "prep_time": 30
  }
}
';

$obj = json_decode($json, true);

echo serialize($obj);

And the result:

a:3:{s:4:"name";s:3:"Zed";s:12:"recent_books";a:2:{i:0;s:15:"Leviathan Wakes";i:1;s:31:"Pride and Prejudice and Zombies";}s:13:"favorite_food";a:2:{s:4:"name";s:5:"Pizza";s:9:"prep_time";i:30;}}

PHP serialize() does not allow for comments, but it does support full object marshalling, which it is more commonly used for.

Perl’s Data::Dumper

Perl’s Data::Dumper module serializes data in a format specifically for Perl to load back into memory:

#!/usr/bin/env perl

use strict;
use warnings;
use JSON;
use Data::Dumper 'Dumper';

my $json = <<'END';
{
  "name": "Zed",
  "recent_books": [
    "Leviathan Wakes",
    "Pride and Prejudice and Zombies"
  ],
  "favorite_food": {
    "name": "Pizza",
    "prep_time": 30
  }
}
END

my $hash = decode_json $json;

print Dumper($hash);

And the result, which is a valid Perl statement:

$VAR1 = {
          'recent_books' => [
                              'Leviathan Wakes',
                              'Pride and Prejudice and Zombies'
                            ],
          'name' => 'Zed',
          'favorite_food' => {
                               'name' => 'Pizza',
                               'prep_time' => 30
                             }
        }

Conclusion

Serialization is an extremely common function that we as programmers should be familiar with. Knowing which format is a good option for a new project can save time and money, as well as make things easier for developers and API users.

Please leave a comment if I have missed your favorite format!


data-processing json

3 Immediate Benefits of Google Analytics for Business Owners

By Ben Witten
April 30, 2021

Image from Google’s marketing platform blog

Where is your traffic coming from? What drew the traffic to your website? Which parts of your website are most visited? How do visits change over time? And how can the answers to these questions help you?

Answering such questions and acting on the answers is a big part of search engine optimization (SEO).

Google Analytics, a web analytics service that lets you track and understand your website traffic, can help you do exactly that. It is a valuable tool for businesses of all sizes that are looking to grow.

Here are three ways Google Analytics can benefit your business:

Determining Site Improvements to Strengthen Website Flow

This is a great way to generate more “conversions” — visitors to your website taking a desired action. Are visitors behaving the way you expected them to? Can you observe any bottlenecks in audience flow?

Bottlenecks include visitors getting stuck on one page when you want them to move on to a different one, like a contact page. Understanding where traffic gets stuck might point you toward the need to refresh certain web pages, which could in turn lead to more conversions.

For example, we observed that our “Deployment Automation” Expertise subpage has had a 100% bounce rate over the past three months. This is concerning because it means that the content may not be engaging or there may not be a clear visitor navigation path, the end goal being a contact submission. Analytics helped us start looking at how to strengthen this subpage.

Understanding your Audience

Who is coming to your site, and how are they finding you? What referral sites, partner sites, media, and blog posts are directing the most traffic to your page? How can you leverage that?

In reviewing your inbound traffic, you will see some combination of the following types of traffic:

  • Direct: Traffic from directly typing the URL into the browser address bar.
  • Organic: Traffic from people who navigate to your website through search engines after seeing you in search results. Having a strong online presence, especially strong SEO, will help more visitors arrive on your website without the need to pay for them.
  • Referral: Traffic that comes to your website after being “referred” from a different website. This is when other websites link to your webpage. More backlinks and referral traffic typically leads to significant SEO benefits.
  • Paid: This traffic arrives from paid search campaigns on platforms such as Google Ads.
  • Email: Traffic from links in emails.
  • Social: Traffic that comes from posts on social media networks like Facebook, LinkedIn, and Twitter.
  • Other: All the traffic which doesn’t fit in any other category.

We recommend reviewing each type of traffic to get a better understanding of their flow through your website, and noting any trends you find within individual web traffic sources and mediums.

Data-Driven Decision Making: Stop Relying on Assumptions and Rely on Data

One great challenge to businesses is overconfidence in how much you understand about your audience. Google Analytics, and other similar analytics tools, can transform your work culture from being based on opinions and assumptions to being based on hard data. Google Analytics provides data in an organized and impactful format, and using analytics data in tandem with sales efforts can lead to more conversions and revenue for your business.

Alternatives

With Google having access to so much data and being one of the two major advertisers on the web, many people are looking for alternatives that allow them more control over their customer data, separation from Google’s advertising platforms, and a slimmer data footprint for compliance with privacy laws such as CCPA (California) and GDPR (European Union).

There have always been various options for web visitor analytics. Google Analytics was originally created by a company called Urchin Software, which Google acquired in 2005. Some current alternatives include:

  • Cloudflare web analytics, a new service offered by the popular CDN (Content Distribution Network) that simply shows visitor data already flowing through their systems.
  • GoatCounter, a SaaS or self-hosted open source application, which aims to provide simple counters rather than collecting personal data, thus avoiding any need for a privacy notice.
  • Matomo, formerly known as Piwik, a fully-featured SaaS or on-premises paid package with a limited open source version.
  • Open Web Analytics, a customizable open source analytics framework.

We at End Point have found success with these core ideas and several of these services. We are happy to provide a free consultation to discuss your website needs.


seo analytics

Enumerated Types in Rails and PostgreSQL

By Patrick Lewis
April 29, 2021

Photo by Jared Tarbell, used under CC BY 2.0, cropped from original.

Enumerated types are a useful programming tool when dealing with variables that have a predefined, limited set of potential values. An example of an enumerated type from Wikipedia is “the four suits in a deck of playing cards may be four enumerators named Club, Diamond, Heart, and Spade, belonging to an enumerated type named suit”.

I use enumerated types in my Rails applications most often for model attributes like “status” or “category”. Rails’ implementation of enumerated types in ActiveRecord::Enum provides a way to define sets of enumerated types and automatically makes some convenient methods available on models for working with enumerated attributes. The simple syntax does belie some potential pitfalls when it comes to longer-term maintenance of applications, however, and as I’ll describe later in this post, I would caution against using this basic 1-line syntax in most cases:

enum status: [:active, :archived]

The Rails implementation of enumerated types maps values to integers in database rows by default. This can be surprising the first time it is encountered, as a Rails developer looking to store status values like “active” or “archived” would typically create a string-based column. Instead, Rails looks for a numeric type column and stores the index of the selected enumerated value (0 for active, 1 for archived, etc.).

This exposes one of the first potential drawbacks of this minimalist enumerated type implementation: the stored integer values can be difficult to interpret outside the context of the Rails application. Although querying records in a Rails console will map the integer values back to their enumerated equivalents, other database clients are simply going to return the mapped integer values instead, leaving it up to the developer to look up what those 0 or 1 values are supposed to represent.

A larger problem that arises from defining an enum as an array of values is that the values are tied to the order of elements in an array. This means any change to the order or length of the array can have unwanted consequences on the mapped values.

# don't do this; the index of active is changed from 0 to 1, archived from 1 to 2
enum status: [:abandoned, :active, :archived]

At a minimum, I would recommend using this hash-based syntax for defining enumerated types with explicit integer mapping:

enum status: {
  active: 0,
  archived: 1
}

This provides the benefit of documenting which integers are mapped to which enumerated values, and also provides more flexibility for future adjustments. For example, a new status value can now be added to the enumerated type without disrupting any of the existing records:

enum status: {
  abandoned: 2,
  active: 0,
  archived: 1
}

For Rails applications with PostgreSQL databases, it’s possible to go one step further and get most of the best of both worlds: the efficiency of using predefined enumerated types while still maintaining the ability to store meaningful string values at the database level. This is made possible by combining Rails enums with PostgreSQL Enumerated Types.

This technique requires using a migration to first define a new enumerated type in the database, and then creating a column in the model’s table to use that PostgreSQL type:

class AddEnumeratedStatusToDevices < ActiveRecord::Migration[5.2]
  def up
    execute <<-SQL
      CREATE TYPE device_status AS ENUM ('abandoned', 'active', 'archived');
    SQL

    add_column :devices, :status, :device_status
    add_index :devices, :status
  end

  def down
    remove_index :devices, :status
    remove_column :devices, :status

    execute <<-SQL
      DROP TYPE device_status;
    SQL
  end
end

The corresponding model code to use this new enumerated type looks similar to before, but now the values are mapped to strings:

class Device < ApplicationRecord
  enum status: {
    abandoned: "abandoned",
    active: "active",
    archived: "archived"
  }
end

This combination of Rails and PostgreSQL enumerated types has become my preferred approach in most situations. One limitation to be aware of with this approach is that PostgreSQL enumerated types can be extended with ALTER TYPE, but existing values cannot be removed. There is a small bit of additional development overhead introduced with the need to manage the enumerated type at both the Rails and the PostgreSQL level, but I like having the option of querying records by the string values of attributes, and the use of a PostgreSQL enumerated type provides for more efficient database storage than simply using a string type column.
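
For example, a later migration that extends the type with a new value might look something like this sketch (the “retired” status is hypothetical, and disable_ddl_transaction! is needed because PostgreSQL won’t run ALTER TYPE ... ADD VALUE inside the migration’s transaction on older versions):

class AddRetiredToDeviceStatus < ActiveRecord::Migration[5.2]
  disable_ddl_transaction!

  def up
    execute <<-SQL
      ALTER TYPE device_status ADD VALUE 'retired';
    SQL
  end

  def down
    # PostgreSQL does not support removing a value from an enumerated type.
    raise ActiveRecord::IrreversibleMigration
  end
end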


ruby rails postgres

Automating Windows Service Installation

By Daniel Gomm
April 23, 2021

Assembly line Photo by Science in HD on Unsplash

For me, setting up a service started as a clean one-liner that used InstallUtil.exe, but as time went on, I accumulated additional steps. Adding external files & folders, setting a custom Service Logon Account, and even configuring an SSL cert all had to happen before the service could be used. An entire checklist was needed just to make sure the service would start successfully. That’s when I realized a proper installation method was needed here. This article will go over how to make a dedicated .msi installer for a Windows Service that can do all these things and more.

Creating an installer can be tricky, because not all the available features are easy to find. In fact, the setup project itself is not included by default in Visual Studio; you need to install an extension in order to create one. But once the installer is created, we can use it to do things like:

  • Configure the installer to copy the build output of a project to the C:\Program Files (x86) folder, as well as add custom files & folders to the installation
  • Add custom CLI flags to the installer to specify the Service Logon Account at install time
  • Add an installer class to the service and use the installation lifecycle hooks to write custom code that gets run at any stage of the installation.

A Note On Compatibility

For .NET Core and .NET 5.0 projects, you won’t be able to add an installer class. To use either .NET Core or .NET 5.0 to make a service instead, you’d need to make a different kind of project called a Worker Service. A Worker Service differs from a traditional Windows Service in that it’s more like a console application that spawns off a worker process on a new thread. It can be configured to run as a Windows service, but doesn’t have to be. So instead of using an installer, for a Worker Service you’d publish the project to an output directory and then use the SC.exe utility to add it as a Windows service:

dotnet publish -o C:\<PUBLISH_PATH>
SC CREATE <WORKER_NAME> binPath= "C:\<PUBLISH_PATH>\<WORKER_NAME>.exe"

Creating a Windows Setup Project

In order to create a .msi installer in Visual Studio 2019, you’ll need to install the Microsoft Visual Studio Installer Projects extension. While it’s not provided with a default installation of Visual Studio 2019, it’s an official Microsoft extension. Once you’ve installed it, you’ll be able to create any of the following projects:

Setup Project Templates screenshot: Setup Project, Web Setup Project, Merge Module Project, Setup Wizard

To create an installer, you can create a new Setup Project. The build output from this project will be your .msi installer. The setup project has a few different views, which you can use to configure what the installer needs to accomplish. These views can be accessed by right-clicking on the project in the Solution Explorer and expanding View from the context menu:

Setup Project Views screenshot

Configuring the Installation File System

To configure what files need to be installed, you can use the File System view, which provides a UI with some folders added to it:

File System View screenshot: Application Folder, User’s Desktop, User’s Programs Menu

Here, clicking on any folder on the left shows its contents over on the right. It also populates the Properties Window with the information about the folder:

Application Folder Properties screenshot: AlwaysCreate, Condition, DefaultLocation, Property, Transitive

In the above example, we can see that the Application Folder is being output to a folder inside C:\Program Files (x86). You can add any folders you want to the file system by right-clicking on the file system to open the Special Folders context menu:

Special Folder Context Menu screenshot

Some default folders are shown here for convenience. But let’s say we wanted to make some files get added to the C:\ProgramData folder. To do this, select “Custom Folder” and give it a name. Then, in the Properties Window, set the value of DefaultLocation to the correct path:

ProgramData Properties screenshot

From here, you can use the right half of the view to add additional folders within C:\ProgramData\DotNetDemoService based on your needs.

Another thing you’ll likely want to do is put the DLLs from your application into a folder within C:\Program Files (x86). You can easily do this by mapping the primary build output of your project to the Application Folder in the installer’s file system. To do this, right-click on the Application Folder, and add project output:

Adding Project Output screenshot

From there you’ll be prompted to select the project and output type. Select your project, and “Primary Output”:

Add Project Output Dialog screenshot

This will copy over the DLLs for your project and all of its dependencies.

Creating an Installer class

You may be wondering if it’s possible to define custom code to be run during the installation process. It is! For any project targeting .NET Framework 4.8 and under, you can add a class that extends System.Configuration.Install.Installer, and has the [RunInstaller(true)] attribute applied to it. After doing so, you’ll then be able to hook in and override any of the installation lifecycle methods. Taking a look into the definition of the System.Configuration.Install.Installer class reveals the list of overridable lifecycle hook methods you can use to add custom logic to the installation:

public virtual void Commit(IDictionary savedState);
public virtual void Install(IDictionary stateSaver);
public virtual void Rollback(IDictionary savedState);
public virtual void Uninstall(IDictionary savedState);
protected virtual void OnAfterInstall(IDictionary savedState);
protected virtual void OnAfterRollback(IDictionary savedState);
protected virtual void OnAfterUninstall(IDictionary savedState);
protected virtual void OnBeforeInstall(IDictionary savedState);
protected virtual void OnBeforeRollback(IDictionary savedState);
protected virtual void OnBeforeUninstall(IDictionary savedState);
protected virtual void OnCommitted(IDictionary savedState);
protected virtual void OnCommitting(IDictionary savedState);

It also defines event handlers for each of these steps as well:

public event InstallEventHandler BeforeInstall;
public event InstallEventHandler Committing;
public event InstallEventHandler AfterUninstall;
public event InstallEventHandler AfterRollback;
public event InstallEventHandler AfterInstall;
public event InstallEventHandler Committed;
public event InstallEventHandler BeforeRollback;
public event InstallEventHandler BeforeUninstall;
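
Putting that together, a minimal hand-written installer class might look like the following sketch (the class name is just a placeholder; in practice you’ll usually let Visual Studio generate one for you, as shown next):

using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;

[RunInstaller(true)]
public class MyServiceInstaller : Installer
{
    // Runs after the install step completes; custom post-install logic goes here.
    protected override void OnAfterInstall(IDictionary savedState)
    {
        base.OnAfterInstall(savedState);
    }
}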

To add an installer class to the Windows Service project, there’s a helper you can use by right clicking on the designer view of the service and selecting “Add Installer” from the context menu:

Adding an Installer screenshot

This will add a new file called ProjectInstaller.cs to your project, which has its own designer view. The designer view has a corresponding ProjectInstaller.Designer.cs file that amends the ProjectInstaller class with the code generated by the designer. You’ll notice that this designer view already defines two objects, serviceInstaller1 and serviceProcessInstaller1.

Installer Designer View screenshot

These are special installer classes that will handle all the default installation tasks for your service. serviceInstaller1 is of type ServiceInstaller and handles defining the service name and if it should auto start when the machine boots up. serviceProcessInstaller1 is of type ServiceProcessInstaller and handles setting up the Service Logon Account, which the service will run with once installed. Both of these are already set up and invoked by the designer generated code in ProjectInstaller.Designer.cs.
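
For reference, the designer-generated code in ProjectInstaller.Designer.cs amounts to something like this sketch (the service name and start type shown here are examples; yours will reflect your own settings):

// Roughly what the generated InitializeComponent() sets up.
this.serviceInstaller1.ServiceName = "DotNetDemoService";
this.serviceInstaller1.StartType = System.ServiceProcess.ServiceStartMode.Automatic;
this.serviceProcessInstaller1.Account = System.ServiceProcess.ServiceAccount.LocalSystem;
this.Installers.AddRange(new System.Configuration.Install.Installer[] {
    this.serviceProcessInstaller1,
    this.serviceInstaller1 });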

Since both of these special service installers extend System.Configuration.Install.Installer, you can add custom code to occur at any point of the installation on these as well. The designer view again provides a GUI helper to add this in. Double-clicking on serviceInstaller1 will automatically add a new method to ProjectInstaller:

private void serviceInstaller1_AfterInstall(object sender, InstallEventArgs e) { }

It will also put some code into ProjectInstaller.Designer.cs which adds this method to the AfterInstall event of serviceInstaller1.

Adding Installer CLI Options

It’s also possible to add custom properties that you can pass to the installer as command line arguments. This is done by defining Custom Actions on the primary build output of your project. To do so, go to the Custom Actions view of the installer project, right-click on “Install” and select “Add Custom Action” from the context menu:

Add a Custom Action screenshot

This will open up a dialog that prompts you to select a file in the installer’s file system to define a custom action for. In this case, we want to define a custom action on the primary build output. This way, the custom CLI options we are about to define will be passed to the project’s installer class.

Add Custom Action Dialog screenshot

After you click “OK”, the primary build output will show up in the Custom Actions view. When you click on it, you’ll notice that the properties window has a property called CustomActionData. In short, you can use it to define custom CLI arguments like this:

CustomActionData Definining CLI Arguments screenshot

CustomActionData has its own syntax, so let’s dive deeper into what this actually does. We’re mapping the value of USERNAME and PASSWORD from the installer’s Properties Collection to the InstallContext of the installer class of your project under the Username and Password keys, respectively. The square brackets denote that the value is to be taken from the Properties Collection, and the quotes allow the value of the property to contain spaces. The forward slash denotes that we are adding a new key to the context. Any command line arguments passed to the installer are added to the Properties Collection by default.
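
Putting that syntax together, the CustomActionData value used here looks something like this (reconstructed from the description above):

/Username="[USERNAME]" /Password="[PASSWORD]"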

Using Custom CLI Options in the Installer Class

Now that we have defined our custom action with the CLI arguments, we can go over to the project’s Installer class and access them via the Context property. In this example, we’re using the custom properties to define the Logon account for the service, which needs to be set right before the installation happens. We can use the Install(IDictionary stateSaver) lifecycle hook method for this purpose:

public override void Install(IDictionary stateSaver)
{
    // If no username or password is specified, fall back to installing the service to run as the SYSTEM account
    if (
        string.IsNullOrEmpty(this.Context.Parameters["Username"])
        || string.IsNullOrEmpty(this.Context.Parameters["Password"])
    ) {
        this.serviceProcessInstaller1.Account = System.ServiceProcess.ServiceAccount.LocalSystem;
    }

    // Otherwise, configure the service to run under the specified account.
    else
    {
        this.serviceProcessInstaller1.Username = this.Context.Parameters["Username"];
        this.serviceProcessInstaller1.Password = this.Context.Parameters["Password"];
    }

    // Run the base class install after the service has been configured.
    base.Install(stateSaver);
}

Conditionally Installing Files

It’s also possible to make the installer conditionally install files based on a value from the Properties Collection. One example of how this can be useful would be swapping in the production or development configuration file based on the value of a command line argument. We don’t need to write any additional code to do this; we just have to add a value for the Condition property in the Properties Window for the file:

Conditionally Installing a File screenshot

The above condition will make the file settings.production.config be installed only if the DEBUG command line argument is not defined or is set to “false”. Like the custom actions, this property is also sourced from the Properties Collection.
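
A condition along those lines, in Windows Installer condition syntax, might be written as follows (a sketch reconstructed from the description; property names are case-sensitive):

NOT DEBUG OR DEBUG="false"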

Conclusion

And that’s it! I found that having a dedicated .msi installer was handy for making the setup of my Windows Service completely hands-free. While some of the features you need might seem buried within context menus, the flexibility of having the installer handle the service setup is well worth the effort.

Have any questions? Feel free to leave a comment!


windows dotnet automation

GStreamer Nvenc for Ubuntu 20.04

By Neil Elliott
April 22, 2021

Image by Martin Adams on Unsplash

GStreamer is a library for creating media-handling components. Using GStreamer you can screencast your desktop, transcode a live stream, or write a media player application for your kiosk.

Video encoding is expensive, even with AMD’s current lineup making it more palatable. Recent Nvidia Quadro and GeForce video cards include dedicated H.264 encoding and decoding hardware as a set of discrete components alongside the GPU. The hardware is used in the popular Shadowplay toolkit on Windows and available to developers through the Nvidia Video SDK on Linux.

GStreamer includes elements as part of the “GStreamer Bad” plugin set that leverage the SDK without you having to get your hands too dirty. The plugins are not included with gst-plugins-bad in apt, and must be compiled with supporting libraries from Nvidia. Previously this required registering with Nvidia and downloading the Nvidia Video SDK, but Ubuntu recently added apt packages providing them, a big help for automation.

Environment

CUDA

The nvenc and nvdec plugins depend on CUDA 11. The apt version is too old. I’ve found the runfile to be the most reliable installation method. Deselect the nvidia drivers when using the runfile if using the distro-maintained driver.

Plugin

Install prerequisites from apt:

$ apt install nvidia-driver-460 libnvidia-encode-460 libnvidia-decode-460 libdrm-dev

Clone gst-plugins-bad source matching distro version:

$ git clone --single-branch -b 1.16.2 git://anongit.freedesktop.org/gstreamer/gst-plugins-bad
$ cd gst-plugins-bad

Compile and install plugins:

$ ./autogen.sh --with-cuda-prefix="/usr/local/cuda"
$ cd sys/nvenc
$ make
$ cp .libs/libgstnvenc.so /usr/lib/x86_64-linux-gnu/gstreamer-1.0/
$ cd ../nvdec
$ make
$ cp .libs/libgstnvdec.so /usr/lib/x86_64-linux-gnu/gstreamer-1.0/

Clear GStreamer cache and check for dependency issues using gst-inspect:

$ rm -r ~/.cache/gstreamer-1.0
$ gst-inspect-1.0 | grep 'nvenc\|nvdec'
nvenc:  nvh264enc: NVENC H.264 Video Encoder
nvenc:  nvh265enc: NVENC HEVC Video Encoder
nvdec:  nvdec: NVDEC video decoder

Benchmark

Here is an example pipeline using the standard CPU-based H.264 encoder to encode 10000 frames at 320x240:

$ gst-launch-1.0 videotestsrc num-buffers=10000 ! x264enc ! h264parse ! mp4mux ! filesink location=vid1.mp4

On my modest machine, this took around 9.6 seconds and 400% CPU.

Running the same pipeline with the nvenc element:

$ gst-launch-1.0 videotestsrc num-buffers=10000 ! nvh264enc ! h264parse ! mp4mux ! filesink location=vid2.mp4

About 2.3 seconds with 100% CPU.

Alternatives

The apt-supported version of these plugins is limited to H.264 and 4K pixels in either dimension. Features have been fleshed out upstream. Elements for the Nvidia Tegra line of mobile processors provide more features, but the required hardware probably isn’t included with your workstation.

FFmpeg also provides hardware-accelerated encoders and decoders, including nvcodec-based H.264 and HEVC support, out of the box on Ubuntu.
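
For comparison, a minimal FFmpeg invocation using its NVENC-backed encoder might look like this (assuming an input.mp4 test clip):

$ ffmpeg -i input.mp4 -c:v h264_nvenc output.mp4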


video ubuntu