Our Blog

Ongoing observations by End Point people

Vue, Font Awesome, and Facebook/​Twitter Icons

By David Christensen
July 12, 2018

some Font Awesome fonts


Font Awesome and Vue are both great technologies. Here I detail how I overcame some issues getting the Facebook and Twitter icons working with the vue-fontawesome bindings, in hopes of saving others some debugging time.


Recently, I was working with the vue-fontawesome tools, which have been updated to support version 5 of Font Awesome. A quick installation recipe:

$ yarn add @fortawesome/fontawesome
$ yarn add @fortawesome/fontawesome-svg-core
$ yarn add @fortawesome/free-solid-svg-icons
$ yarn add @fortawesome/free-brands-svg-icons
$ yarn add @fortawesome/vue-fontawesome

A best practice when using Font Awesome is to import only the icons your specific project needs instead of the full thousand-plus set, which just contributes to project bloat. So in our main.js file, we import them like so:

// Font Awesome-related initialization
import Vue from 'vue'
import { library } from '@fortawesome/fontawesome-svg-core'
import { faEnvelope, faUser } from '@fortawesome/free-solid-svg-icons'
import { faFacebook, faTwitter } from '@fortawesome/free-brands-svg-icons'
import { FontAwesomeIcon } from '@fortawesome/vue-fontawesome'

// Add the specific imported icons to the library
library.add(faEnvelope, faUser, faFacebook, faTwitter)

// Enable the FontAwesomeIcon component globally
Vue.component('font-awesome-icon', FontAwesomeIcon)

This allows you to include icons in your view components like so:

  <div class="icons">
    <font-awesome-icon icon="user"/>
    <font-awesome-icon icon="envelope"/>
  </div>

This worked fine for me until I tried to use the Facebook and Twitter icons:

  <div class="icons">
    <font-awesome-icon icon="user"/>
    <font-awesome-icon icon="envelope"/>
    <font-awesome-icon icon="twitter"/>  <!-- broken -->
    <font-awesome-icon icon="facebook"/> <!-- broken -->
  </div>

These rendered only blank spots, with errors in the browser console like so:

[Error] Could not find one or...
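For anyone hitting the same wall before reading on: brand icons ship in a separate icon style from the solid ones, so with vue-fontawesome they generally need an explicit `fab` prefix rather than a bare icon name. A sketch of the likely fix (not necessarily the article's exact resolution):

```html
<div class="icons">
  <font-awesome-icon icon="user"/>
  <font-awesome-icon icon="envelope"/>
  <font-awesome-icon :icon="['fab', 'twitter']"/>
  <font-awesome-icon :icon="['fab', 'facebook']"/>
</div>
```

Note the `v-bind` colon: the prefixed form is an array expression, not a plain string attribute.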

vue javascript

Training Tesseract 4 models from real images

By Kamil Ciemniewski
July 9, 2018

table of ancient alphabets

Over the years, Tesseract has been one of the most popular open source optical character recognition (OCR) solutions. It provides ready-to-use models for recognizing text in many languages; currently, 124 models are available to download and use.

Not too long ago, the project moved in the direction of using more modern machine-learning approaches and is now using artificial neural networks.

For some people, this move meant a lot of confusion when they wanted to train their own models. This blog post explains the process of turning scans of images, along with textual ground-truth data, into models that are ready to be used.

Tesseract pre-trained models

You can download pre-created models designed to be fast and consume less memory, as well as ones that require more resources but give better accuracy.

The pre-trained models were created using images with text artificially rendered from a huge corpus of text taken from the web, using many different fonts. The project’s wiki states that:

For Latin-based languages, the existing model data provided has been trained on about 400000 textlines spanning about 4500 fonts. For other scripts, not so many fonts are available, but they have still been trained on a similar number of textlines.

Training a new model from scratch

Before diving in, there are a couple of broader aspects you need to know:

  • The latest Tesseract uses models based on artificial neural networks, which differ completely from the older approach
  • You might want to get familiar with how neural networks work, how their different types of layers can be used, and what you can expect of them
  • It’s a definite bonus (though not mandatory) to read about Connectionist Temporal Classification, explained brilliantly in Sequence Modeling with CTC

Compiling the training tools

This blog post talks specifically about the latest version 4 of Tesseract. Please make sure that you have that installed...
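As a rough sketch, building Tesseract 4 together with its training tools from source looks something like this (prerequisites and paths vary by distribution; the training binaries are a separate build target from the main program):

```shell
git clone https://github.com/tesseract-ocr/tesseract.git
cd tesseract
./autogen.sh
./configure
make && sudo make install
# the training tools (lstmtraining, text2image, etc.) have their own targets:
make training && sudo make training-install
```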

ruby machine-learning

SRV DNS records in Terraform and Cloudflare

By Jon Jensen
June 26, 2018

woman walking across train tracks
(Photo by David Goehring, CC BY 2.0, cropped)

At End Point we are using Terraform for a few clients to manage their web hosting infrastructure as code (IaC). Terraform is particularly helpful when working with multiple cloud or infrastructure providers and stitching together their services.

For example, for one web application that involves failover from the primary production infrastructure to a secondary location at a different provider, we are using Cloudflare as a CDN to provide caching, DDoS mitigation, and traffic routing in front of virtual servers at DigitalOcean and Amazon Web Services (AWS).

We decided we wanted to store all of their infrastructure configuration in Terraform, not just what is required for the web application, so we can recreate their entire infrastructure from their Git repository.

This all went fine until we got to their email DNS records. Our client is using Microsoft Office 365 for their email, which requires some SRV records. Terraform’s Cloudflare provider works fine with the universal MX records, but when we first wanted to do this, the Terraform provider for Cloudflare did not support SRV records at all.

Luckily for us, Terraform recently (6 April 2018) gained support for DNS SRV records as mentioned in the release notes and described in more detail in the pull request that added the feature.

Great! So now we can get on with this.

I began by naively assuming that the SRV record data should be given in space-separated form like many DNS interfaces use, including BIND and Cloudflare’s web interface itself. I tried setting it like this:

resource "cloudflare_record" "_sipfederationtls_tcp" {
  domain = "${var.domain}"
  name   = "_sip._tcp.${var.subdomain}"
  type   = "SRV"
  value  = "100 1 443 sipdir.online.lync.com."
}
But that resulted in an error. So when in doubt, consult the documentation, right? I did that:

The docs make it clear that a data element is required...
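As a hedged sketch of what the working form looks like, the Cloudflare provider's SRV support takes the record's parts as named fields in a data block rather than a space-separated value string (field names per the provider's SRV pull request; the values mirror the attempt above):

```hcl
resource "cloudflare_record" "_sip_tcp" {
  domain = "${var.domain}"
  name   = "_sip._tcp.${var.subdomain}"
  type   = "SRV"

  data {
    service  = "_sip"
    proto    = "_tcp"
    name     = "${var.subdomain}"
    priority = 100
    weight   = 1
    port     = 443
    target   = "sipdir.online.lync.com"
  }
}
```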

devops terraform cloud hosting

Ecommerce Shakeups: Magento Acquisition and Etsy Rate Increases

By Steph Skardal
June 19, 2018

Magento, Etsy

If you’ve been paying attention to the ecommerce world, you know there have been a couple of shakeups and transitions that could affect how you look at your ecommerce options these days.

Adobe to Acquire Magento

A few weeks ago, it was announced that Adobe would acquire Magento. We’ve seen Magento clients come and go. It used to be the case that the Magento Community version was free and open source but lacking in features, while the Magento Enterprise version was not free and came with many more features, but was closed source.

But times change, and admittedly I hadn’t looked into the current Magento offerings until writing this post. The two current options are Magento Commerce Starter and Magento Commerce Pro. These plans are not for small potatoes, starting at $2k/mo. I can see how that cost is worth it in lieu of paying a full-time developer, but it is not a good fit for small businesses just getting started.

There are not many public details on the acquisition, other than that it will bring Magento to Adobe’s customers selling “physical and digital goods across a range of industries, including consumer packaged goods, retail, wholesale, manufacturing, and the public sector”. Only time will tell.

Etsy Hikes Rates

I am personally connected to the craft industry by way of my own hobby, so I’ve heard rumblings over the last year about the changes going on within Etsy under a new CEO. They will be shutting down Etsy Wholesale as of July 31st, 2018, closing Etsy Studio and Etsy Manufacturing later this year, and last week they announced an increase in transaction fees from 3.5% to 5%, which will now also apply to shipping charges. With that money, they plan to offer improved tools and marketing efforts. You can read Etsy’s official announcement and follow-up Q&A for more.
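To put the fee change in concrete terms, here is a quick back-of-the-envelope calculation (the prices are made up for illustration):

```python
# Hypothetical $20.00 item shipped for $5.00
item_price, shipping = 20.00, 5.00

old_fee = 0.035 * item_price              # old 3.5% fee, item price only
new_fee = 0.05 * (item_price + shipping)  # new 5% fee, now including shipping

print(f"old fee: ${old_fee:.2f}, new fee: ${new_fee:.2f}")  # old fee: $0.70, new fee: $1.25
```

So on this hypothetical sale, a seller's fee nearly doubles, since the higher rate also reaches the shipping charge.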

There are so many overwhelming options when it comes to determining what the best ecommerce solution is for any size of ecommerce business, and whether...

ecommerce magento etsy saas

systemd: a primer from the trenches

By Ian Neilsen
June 18, 2018

Gears image by Guy Sie, CC BY-SA 2.0, cropped & scaled

systemctl: Let’s get back to basics

“Help me, systemd, you are my only hope.”

Sometimes going back to day zero brings clarity to what seems like a hopeless or frustrating situation for users coming from the Unix SysV init world. Caveat: I worked at Red Hat for many years before joining the excellent team at End Point, and I have been using systemd for as long; I have honestly forgotten most of the SysV init days. At End Point, though, we work daily on Debian, Ubuntu, CentOS, and BSD variants.

Here is a short and sweet primer to get your fingers wet, before we dive into some of the heavier subjects with systemd.

Did you know that systemd has many utilities you can run?

  • systemctl
  • timedatectl
  • journalctl
  • loginctl
  • systemd-notify
  • systemd-analyze - analyze system
  • systemd-cgls - show cgroup tree
  • systemd-cgtop
  • systemd-nspawn

And systemd consists of several daemons:

  • systemd
  • journald
  • networkd
  • logind
  • timedated
  • udevd
  • systemd-boot
  • tmpfiles
  • session

That’s a long way from the old SysV init days, but in essence it’s not that different. The one thing that stands out to me is that we get more information with less typing than before. That can only be a good thing, right?

Well, let’s see! There are many, many web pages out there listing systemd or systemctl switches and flags. But in everyday use I want to speed up my work and have information at my fingertips, and flags and switches that mean something sure make that easier.

Pro Tip 1: Tab completion

Before you begin playing with the commands, you should install bash-completion. Some distros don’t auto-complete systemd commands until you install it, and without tab completion you miss out on a lot of what systemctl offers.

For example, when you hit Tab for completion, you will see many of systemctl’s options:

# systemctl
add-requires           enable                 is-system-running      preset                 show
add-wants              exit ...
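To make that concrete, here are a few everyday invocations covering most of what I reach for (the unit names are just examples):

```shell
systemctl status sshd.service        # current state plus recent journal lines
systemctl is-enabled sshd.service    # will it start at boot?
systemctl list-units --type=service --state=failed
journalctl -u sshd.service --since today
systemd-analyze blame                # which units slowed down the last boot
```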

hosting systemd

Instant TLS Upgrades Through Proxy Magic!

By David Christensen
June 14, 2018


TLS shutdowns are real

The payment gateways have been warning for years about the impending and required TLS updates. Authorize.net and PayPal—to name two—have stopped accepting transaction requests from servers using TLS 1.0. Despite the many warnings (and many delays in the final enforcement date), some projects are affected: payments are coming to a stop, customers cannot check out, and ecommerce is at a standstill.

Ideally, getting to security compliance would involve a larger migration to update your underlying operating system and application. But a migration and software update can be an expensive project, and in some cases the business can’t wait weeks while this is done.

End Point has worked with several clients recently to remedy the situation with a reverse proxy, and we’ve had good success getting payments flowing again.

What is a proxy?

A proxy is a mid-point, essentially a digital middleman, moving your data from one place to another. In two recent client instances, we ended up using nginx (the stack’s webserver) as the reverse proxy, basically running a separate server for just shuttling requests to/​from the payment gateway. Since we want to be able to run the gateway in both live and test modes, we use two separate server definitions in our nginx include, one for each.

Since the proxy talks to the gateway over TLS 1.2, the payment gateway is happy. Since the application can talk plain HTTP to the proxy running on the same machine, your application is happy. And since payments are now flowing, the business is happy.
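A minimal sketch of such a server definition in nginx (the hostname and local port here are made up; a real setup needs the gateway's actual endpoint and likely more headers):

```nginx
server {
    # the application talks plain HTTP to this local listener
    listen 127.0.0.1:8080;

    location / {
        # the proxy talks modern TLS outbound to the payment gateway
        proxy_pass            https://gateway.example.com;
        proxy_ssl_protocols   TLSv1.2;
        proxy_ssl_server_name on;
        proxy_set_header      Host gateway.example.com;
    }
}
```

The key point is that the old stack never has to negotiate TLS itself; nginx's own OpenSSL handles the TLS 1.2 handshake.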

Why use a proxy?

While we always impress on clients the importance of staying up-to-date with their entire stack (operating system, language, application frameworks), this is not always practical for some sites, whether for cost reasons or some technical limitations which keep them on a specific library or framework version. In our case, these clients had been migrated to CentOS 7...

ecommerce nginx proxies rails security sysadmin hosting tls

Systematic Query Building with Common Table Expressions

By Josh Tolley
June 12, 2018

The first time I got paid for doing PostgreSQL work on the side, I spent most of the proceeds on the mortgage (boring, I know), but I did get myself one little treat: a boxed set of DVDs from a favorite old television show. They became part of my evening ritual, watching an episode while cleaning the kitchen before bed. The show features three military draftees, one of whom, Frank, is universally disliked. In one episode, we learn that Frank has been unexpectedly transferred away, leaving his two roommates the unenviable responsibility of collecting Frank’s belongings and sending them to his new assignment. After some grumbling, they settle into the job, and one of them picks a pair of shorts off the clothesline, saying, “One pair of shorts, perfect condition: mine,” and he throws the shorts onto his own bed. Picking up another pair, he says, “One pair of shorts. Holes, buttons missing: Frank’s.”

The other starts on the socks: “One pair of socks, perfect condition: mine. One pair socks, holes: Frank’s. You know, this is going to be a lot easier than I thought.”

“A matter of having a system,” responds the first.

I find most things go better when I have a system, as a recent query writing task made clear. It involved data from the Instituto Nacional de Estadística y Geografía, or INEGI, an organization of the Mexican government tasked with collecting and managing country-wide statistics and geographical information. The data set contained the geographic outline of each city block in Mexico City, along with demographic and statistical data for each block: total population, a numeric score representing average educational level, how much of the block had sidewalks and landscaping, whether the homes had access to the municipal sewer and water systems, etc. We wanted to display the data on a Liquid Galaxy in some meaningful way, so I loaded it all in a PostGIS database and built a simple visualization showing each city block as a polygon extruded from the earth, with...
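The article goes on to build the query step by step. As a flavor of the "system" involved, a query over data like this can grow one named step at a time with common table expressions (the table and column names here are hypothetical):

```sql
-- Hypothetical schema, for illustration only
WITH blocks AS (
    SELECT block_id, geom, total_population
    FROM city_blocks
),
densities AS (
    SELECT block_id, geom,
           total_population / NULLIF(ST_Area(geom::geography), 0) AS pop_density
    FROM blocks
)
SELECT block_id, pop_density
FROM densities
ORDER BY pop_density DESC;
```

Each CTE is a small, testable piece: you can SELECT from it alone, verify it, then stack the next step on top.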

postgres gis sql database

Liquid Galaxy Supporting the Community During Natural Disaster

By Ben Witten
June 7, 2018

Earthquakes and explosive eruptions are currently rocking Kīlauea’s summit crater, creating concerns for the local community. Fortunately, NOAA’s Mokupāpapa Discovery Center in Hilo is stepping up to educate the greater community with the help of their Liquid Galaxy system, which was created and is supported by End Point.

While there is active volcanic activity and explosive eruptions continue at the Kīlauea summit, Hawaiʻi Volcanoes National Park is mostly closed to its nearly two million annual visitors. NOAA’s Mokupāpapa Discovery Center in Hilo is helping to support the community affected by the lava flows and eruption at Kīlauea summit and along its Lower East Rift Zone by hosting Hawaiʻi Volcanoes National Park Service rangers and interpretive staff.

To lessen the impact on park visitors and to provide a venue to learn about the current eruption, NOAA’s Mokupāpapa Discovery Center in Hilo is hosting a pop-up park center, with daily ranger talks at 10 am and 2 pm, on-site rangers throughout the day, and support of park programming. NOAA National Weather Service meteorologists from the Hilo Data Collection Office are also participating in the daily 10 am briefing to provide information on ash fall, wind direction, and air quality hazards.

The briefings are being given using End Point’s Liquid Galaxy as a visualization and briefing tool to show where the current lava flows are and where ash fall may occur from the explosive eruptions at the summit. Understanding where the activity is taking place, as well as understanding what areas are potentially dangerous, has been critical to keeping the public safe from this spectacular natural event. Liquid Galaxy is proving to be an excellent tool to show both the geography and the geology, and previous historic flows on Hawaiʻi Island.

Our End Point support team is monitoring this Liquid Galaxy system 24×7 to ensure there are no disruptions in service for the public’s education. Although the need for our system...

liquid-galaxy event
