Our Blog

Ongoing observations by End Point people

GraphQL — An Alternative to REST

By Zed Jensen
May 11, 2019


GraphQL has become more and more popular recently as an alternative to traditional RESTful APIs since it was released as open source by Facebook in 2015. According to the GraphQL website, it is “a query language for APIs and a runtime for fulfilling those queries with your existing data”. In this blog post, I’ll go over some of what makes GraphQL different from other API solutions, and then show how to get a GraphQL API up and running so you can try it out yourself!

GraphQL is designed to fit on top of your database layer. With the help of libraries like Apollo GraphQL, it can be used with many different databases. Some of the main differences between GraphQL and more traditional RESTful APIs include:

  • GraphQL uses one endpoint. Most traditional APIs use an endpoint for each type of data; in my example, you’d probably have one each for users (/user), posts (/post) and comments (/comment). Each of these would return some JSON with the data you want. GraphQL, on the other hand, lives at one endpoint (usually /graphql) and changes what it returns based on what you ask for, as detailed in the next point.

  • You can get multiple types in one request. For instance, if you want to get information about an author plus all of their posts, instead of making a request for the author and a request for posts, you do just one request for the author and specify that you’d like their posts as well:

query {
  user(id: "12345") {
    posts {
      title
      body
    }
  }
}

  • You decide which parts of the data you want. Traditional REST APIs give you data based on which endpoint you’re querying (/post/:id, /user/:id, etc.), and the format of the data is generally the same. For instance, no matter which id you ask for at /user/:id, you’ll always get something that looks like this back:

{
  "id": "12345",
  "name": "Smash Mouth",
  "joined": "…"
}

But what if we don’t need to know when they joined right now? Another example that better illustrates this problem (and...

graphql database

LinuxFest Northwest 2019

By Josh Williams
May 3, 2019

LinuxFest Northwest logo, used under the Creative Commons Attribution-ShareAlike 4.0 International License

I’m sitting in an airport, writing this in an attempt to stay awake. My flight is scheduled to depart at 11:59 PM, or 2:59 AM in the destination time zone which I’m still used to. This is the first red eye flight I’ve attempted, and I’m wondering why I’ve done this to myself.

I have dedicated a good portion of my life to free, open source software. I’ll occasionally travel to conferences, sitting on long flights and spending those valuable weekends in talks about email encryption and chat bots. I’ve also done this to myself. But even with all this I have zero regrets.

This little retrospective comes courtesy of my experience at LinuxFest Northwest this last weekend in Bellingham, Washington.

Specifically, I think it was some of the talks, painting things in broad strokes, that did it. I attended Jon “maddog” Hall’s beard-growing Fifty Years of Unix talk, and later sat in on the Q&A, which was a bit less technical than expected. So I didn’t ask about the “2038 problem.” But that’s okay.

I felt a little guilty, on the one hand, about doing these general interest sessions instead of something on a much more specific topic, like ZFS, which would arguably have had a more direct benefit. On the other hand, those general interest talks help me stay grounded, I suppose, and help me keep perspective.

I did attend some more specialized talks, naturally. LFNW was a packed conference; oftentimes there were a number of discussions I would have liked to attend happening at the same time. I’m hoping recordings will become available, or that at least slides or other notes will appear. Some of the other talks I attended included, in no particular order:

  • Audio Production on Linux
    Like many other End Pointers, I dabble in a little bit of music. Unlike those other End Pointers, I’ve got no talent for it. Still, I try, and so I listened in on this one to find out a little more about how JACK works. I also caught wind of PipeWire, a project that’s aiming to supplant both PulseAudio and JACK....

conference linux open-source postgres

Introduction to Snapshot Testing Vue Components

By Patrick Lewis
May 2, 2019

Camera and instant photos. Photo by freestocks, used under CC0 1.0

Snapshot Testing is one of the features of the Jest testing framework that most interested me when I began researching methods for testing Vue.js applications. Most of my testing experience has involved writing many verbose RSpec unit tests for Rails applications, and the promise of being able to use snapshot tests to cover more of a Vue component’s output while writing less code appealed to me. Snapshot testing does have its critics, so I have been interested to start exploring snapshot tests myself to see if they can be a valuable addition to my testing toolkit, or if they are not worth the effort.

Snapshot testing gets its name from the mechanism used to determine whether tests pass or fail: the rendered output is compared to a previously approved reference point, or “snapshot”. With Jest snapshot testing of Vue components, the snapshot takes the form of a text file with a ‘.snap’ extension stored within a __snapshots__ subdirectory alongside the test files:

Directory structure
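For the ‘HelloWorld’ example that follows, the stored snapshot file holds a serialized copy of the component’s rendered markup, something like this (the contents here are illustrative, not taken from the article):

```js
// tests/unit/__snapshots__/HelloWorld.spec.js.snap
// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`HelloWorld.vue renders props.msg when passed 1`] = `
<div class="hello">
  <h1>new message</h1>
</div>
`;
```

When a component’s rendered output changes intentionally, the file is regenerated by running Jest with its --updateSnapshot flag; otherwise a mismatch fails the test with a diff against this file.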

I decided to generate a new project using Vue CLI to do my first experiments with snapshot testing in a sample project. The project generated by Vue CLI includes one ‘HelloWorld’ component with a Jest unit test file included, so it made a good starting point for converting over to snapshot testing.

The generated test file was:

// HelloWorld.spec.js
import { shallowMount } from '@vue/test-utils'
import HelloWorld from '@/components/HelloWorld.vue'

describe('HelloWorld.vue', () => {
  it('renders props.msg when passed', () => {
    const msg = 'new message'
    const wrapper = shallowMount(HelloWorld, {
      propsData: { msg }
    })
    expect(wrapper.text()).toMatch(msg)
  })
})

and I converted it to use a snapshot test by changing one expect line:

// HelloWorld.spec.js
import { shallowMount } from '@vue/test-utils'
import HelloWorld from '@/components/HelloWorld.vue'

describe('HelloWorld.vue', () => {
  it('renders props.msg when passed', () => {
    const msg = 'new...

vue testing

Facial Recognition Using Amazon DeepLens: Counting Liquid Galaxy Interactions

By Ben Ironside Goldstein
May 1, 2019

I have been exploring the possible uses of a machine-learning-enabled camera for the Liquid Galaxy. The Amazon Web Services (AWS) DeepLens is a camera that can receive and transmit data over wifi, and that has computing hardware built in. Since its hardware enables it to use machine learning models, it can perform computer vision tasks in the field.

The Amazon DeepLens camera


This camera is the first of its kind—likely the first of many, given the ongoing rapid adoption of Internet of Things (IoT) devices and computer vision. It came to End Point’s attention as hardware that could potentially interface with and extend End Point’s immersive visualization platform, the Liquid Galaxy. We’ve thought of several ways computer vision could potentially work to enhance the platform, for example:

  1. Monitoring users’ reactions
  2. Counting unique visitors to the LG
  3. Counting the number of people using an LG at a given time

The first idea would depend on parsing facial expressions. Perhaps a certain moment in a user experience causes people to look confused, or particularly delighted—valuable insights. The second idea would generate data that could help us assess the platform’s impact, using a metric crucial to any potential clients whose goals involve engaging audiences. The third idea would create a simpler metric: the average number of people engaging with the system over a period of time. Nevertheless, this idea has a key advantage over the second: it doesn’t require distinguishing between people, which makes it a much more tractable project. This post focuses on the third idea.

To set up the camera, the user has to plug it into a power outlet and connect it to wifi. The camera will still work even with a slow network connection, though the slower the connection, the longer the delay between the camera seeing something and reporting it. However, this delay was hardly noticeable on my home network, which has slow-to-moderate speeds of about 17 Mbps down and 33 Mbps up...

machine-learning artificial-intelligence aws liquid-galaxy

Linux desktop Postfix queue for Gmail SMTP

By Jon Jensen
April 30, 2019

Winter view of snow, river, trees, mountains, clouds at Flagg Ranch, Rockefeller Parkway, Wyoming

On a Linux desktop, I want to start sending email through Gmail in a G Suite account using SMTP, rather than a self-hosted SMTP server. Since Gmail supports SMTP, that should be easy enough.

Google’s article Send email from a printer, scanner, or app gives an overview of several options. I’ll choose the “Gmail SMTP server” track, which seems designed for individual user cases like this.

However, since I am using two-factor authentication (2FA) on this Google account — as we should all be doing now for all accounts wherever possible! — my Gmail login won’t work for SMTP because the clients I am using don’t have a way to supply the 2FA time-based token.

Google’s solution to this is to have me generate a separate “App Password” that can sidestep 2FA for this limited purpose: Set up an App Password.

That works fine, but the app password is a randomly-generated 16-letter password that is not amenable to being memorized. For security reasons, my mail client doesn’t cache passwords between sessions, so I have to look it up and enter it each time I start the mail client. That’s generally only once per day for me, so it’s not a big problem, but it would be nice to avoid.

I also want other local programs — such as cron jobs, development projects underway, etc. — to be able to send mail out through my Gmail account. How can I do that, ideally without teaching each one separately how to do it?

As a server operating system at heart, Linux of course has many SMTP servers that can intermediate by acting as a local SMTP server, queue, and sending client. Such a server could have my Gmail password configured and stored under a separate user account, giving a bit more isolation from my main desktop user.
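The article’s title gives away where this ends up: a local Postfix instance. As a minimal sketch of what the Gmail-relay side of such a setup looks like (the account address and app password below are placeholders, and paths assume a typical Linux install), the key settings boil down to:

```
# /etc/postfix/main.cf (relay-related settings only; illustrative)
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
[smtp.gmail.com]:587 someone@example.com:app-password-goes-here
```

With something like that in place, any local program can hand mail to localhost port 25 and let Postfix queue it and relay it out using the stored app password.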

What local SMTP program to use?


I first tried using the lightweight and ephemeral esmtp since I had already used it on my desktop computer to forward email through an SSH tunnel. I wasn’t able to get it working with Gmail, which could easily have been operator error...

sysadmin email linux

Job opening: Linux system administration and DevOps remote engineer

By Jon Jensen
April 18, 2019

computer monitors behind silhouetted head
Photo by Kevin Horvat on Unsplash

(This position has been filled.)

We are looking for a full-​time, salaried engineer to work during business hours in UTC-10 to UTC-6 (somewhere between Hawaii Time and Mountain Time) in the fun space where operations and development overlap!

End Point is a 23-​year-old Internet technology consulting company based in New York City, with about 50 employees, most working remotely from home offices. We collaborate using SSH, GitLab, GitHub, chat, video conferencing, and good old email and phones.

We serve many development and hosting clients ranging from small family businesses to large corporations.

What you will be doing:

  • Remotely set up and maintain Linux servers (mostly RHEL/​CentOS, Debian, and Ubuntu), with custom web applications
  • Audit and improve security, backups, reliability, monitoring
  • Support developer use of major language ecosystems
  • Automate provisioning with Terraform, Ansible, Chef, Puppet, etc.
  • Troubleshoot problems with performance, automation, security
  • Use open source tools and contribute back as opportunity arises
  • Use your desktop OS of choice: Linux, macOS, Windows

What you bring:

Professional experience with Linux system administration and web application support:

  • Cloud providers such as DigitalOcean, Linode, AWS, Azure, Google Cloud, Heroku, etc.
  • Networking
  • TLS and PKI
  • DNS
  • Web servers and HTTP
  • Databases such as PostgreSQL, MySQL, Solr, Elasticsearch, CouchDB, MongoDB, etc.
  • Libraries in Ruby gems, PHP PEAR/​PECL, Python PyPI, Node.js npm, Perl CPAN, Java/​JVM JARs, etc.
  • Security consciousness, and ideally familiarity with PCI DSS, HIPAA, etc.

And just as important:

  • Strong verbal and written communication skills
  • A good remote work environment
  • An eye for detail
  • Tenacity in solving problems
  • Ownership of projects to get things done well
  • Work both independently and as part of a team
  • Focus on customer needs
  • Be part of emergency on-call rotation including weekends
  • Willingness to shift work time after hours...

company jobs devops remote-work

How to set up your Ruby on Rails development environment in Windows 10 Pro with Visual Studio Code and Windows Subsystem for Linux

By Kevin Campusano
April 4, 2019


There’s one truth that I quickly discovered as I went into my first real foray into Ruby and Rails development: Working with Rails in Windows sucks.

In my experience, there are two main roadblocks when trying to do this. First: RubyInstaller, the most mainstream method for getting Ruby on Windows, is not available for every version of the interpreter. Second: I’ve run into issues while compiling native extensions for certain gems. One of these gems is, surprisingly, sqlite3, a gem that’s needed to even complete the official Getting Started tutorial over on guides.rubyonrails.org.

In this post, I’m going to be talking about how to avoid these pitfalls by setting up your development environment using the Windows Subsystem for Linux on Windows 10 Pro. You can jump to the summary at the bottom of the article to get a quick idea of what we’re going to do over the next few minutes.

Anyway, I’ve since learned that the vast majority of the Ruby and Rails community uses either macOS or some flavor of Linux as their operating system of choice.

Great, but what is a Windows guy like me to do under these circumstances? Well, there are a few options. Assuming they would like/​need to keep using Windows as their main OS, they could virtualize some version of Linux using something like Hyper-V or VirtualBox, or go dual boot with a native Linux installation on their current hardware. Provided you can set something like these up, these solutions can work beautifully, but they can have drawbacks.

Virtual machines, depending on how you set them up, and especially for graphical interfaces, can take a bit of a performance hit compared to running the OS natively. So having your entire development environment in one can get annoying after a while. The dual-boot scenario gets rid of any performance degradation, but then you have to go through the hassle of restarting any time you want to work in a different OS. This can become a problem if you need to actively work on Windows-​based...

ruby ruby-on-rails windows visual-studio-code

Eliminating Resolvers in GraphQL Ruby

By Patrick Lewis
March 29, 2019

GraphQL Ruby code

In this follow-up to my post from last month about Converting GraphQL Ruby Resolvers to the Class-based API, I’m going to show how I took the advice of the GraphQL gem’s documentation on Resolvers and started replacing the GraphQL-specific Resolver classes with plain old Ruby classes to facilitate easier testing and code reuse.

The current documentation for the GraphQL::Schema::Resolver class essentially recommends that it not be used, except for cases with specific requirements as detailed in the documentation.

Do you really need a Resolver? Putting logic in a Resolver has some downsides:

  • Since it’s coupled to GraphQL, it’s harder to test than a plain ol’ Ruby object in your app

  • Since the base class comes from GraphQL-Ruby, it’s subject to upstream changes which may require updates in your code

Here are a few alternatives to consider:

  • Put display logic (sorting, filtering, etc.) into a plain ol’ Ruby class in your app, and test that class

  • Hook up that object with a method
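The pattern the documentation describes can be sketched with a toy example (the class, field, and method names here are hypothetical, not from the article):

```ruby
# A plain ol' Ruby class holding the display logic (sorting, filtering),
# testable with RSpec alone; no GraphQL machinery required.
class SortedNames
  def initialize(names)
    @names = names
  end

  # Returns the names deduplicated and sorted.
  def call
    @names.uniq.sort
  end
end

# Inside a GraphQL-Ruby type, a field method then simply delegates:
#
#   field :sorted_names, [String], null: false
#
#   def sorted_names
#     SortedNames.new(object.names).call
#   end
```

The GraphQL layer stays a thin wrapper, and all the interesting behavior lives in an object you can instantiate and exercise directly in a spec.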

I found that I was indeed having trouble testing my Resolvers that inherited from GraphQL::Schema::Resolver due to the GraphQL-specific overhead and context that they contained. Fortunately, it turned out to be a pretty simple process to convert a Resolver class to a plain Ruby class and test it with RSpec.

This was my starting point:

# app/graphql/resolvers/instructor_names.rb
module Resolvers
  # Return collections of instructor names based on query arguments
  class InstructorNames < Resolvers::Base
    type [String], null: false

    argument :semester, Inputs::SemesterInput, required: true
    argument :past_years, Integer, 'Include instructors for this number of past years', required: false

    def resolve(semester:, past_years: 0)
      term_year_range = determine_term_year_range(semester, past_years)

        .where(term_year: term_year_range)
        .group(:first_name, :last_name)
        .pluck(:first_name, :last_name)
        .map { |name| name.join...

ruby graphql