Our Blog

Ongoing observations by End Point people

Volunteer While You Work From Home

By Elizabeth Garrett Christensen
February 15, 2019

two puppies sitting on a couch

I’ve always been an animal lover. I’ve currently got a dog, two cats, bees, and a flock of chickens in my tiny suburban home and I would get more if I could. Over the past few years I’ve taken up an interest in fostering animals through our local animal shelter. Above are our current fosters, two St. Bernard mix parvo pups.

I’m always looking for opportunities to do something in the community, but as a busy mom with a full-time job, I find it difficult to fit volunteering into my schedule. What I’ve discovered is that animal fostering is a great volunteer job for someone who works from home.

How Does It Work?

Our local shelter has a list of foster volunteers who’ve completed their application process and requisite trainings. When they have an animal or group of animals that needs to be out of the shelter for a certain amount of time, they email everyone with a description of the foster. You review the information and decide if you’re a good fit.

We’re very lucky in that our local humane society provides all the food, bedding, medicine, and instruction you need. Your job is to take care of the animal or litter, report back to the shelter as needed, and return the animal when it is ready to be adopted by the community at large.


These two parvo puppies came to stay with us. Parvo is very contagious and affected animals have to be cared for outside the shelter for the other animals’ safety.

Snuggle Breaks

Animal fostering is a really nice way to give yourself breaks during the day. When I worked in the office, I always liked to make the rounds to the water cooler and chat with coworkers. Having animals around can help you combat some of the isolation and loneliness that comes from a solo remote office.

Just a few minutes of petting a kitten or playing fetch in your backyard with a bored pup is a nice break from work and can help you recharge your brain juices to tackle the next item on your to-do list.


This older bonded pair of dogs came to stay with us...


remote-work community

Where are you with your Windows OS in 2019?

By Dan Briones
February 12, 2019

Windows home row
Photo by bradleypjohnson · CC BY 2.0

It should come as little surprise that on January 14, 2020, after a decade of Windows 7, Microsoft will stop providing security updates and support for this older operating system. Windows 7 was released in 2009, and due to its stability enjoyed many years as the go-to operating system for home and business alike.

Even now, NetMarketShare estimates that over 40% of businesses still rely on it. Although Microsoft ended mainstream support for Windows 7 in 2015, it has continued to offer extended support because of the operating system’s popularity and the generally slow adoption of newer releases. However, that support will soon end, as will support for Windows Server 2008 R2, which also remains in wide use. Organizations of all kinds will need to upgrade to newer operating systems to remain secure.

The corporate adoption of Windows 8 and 8.1 may have been slow in part due to Microsoft’s radical changes to the user interface, such as replacing navigation menus with information-filled “live” tiles. Windows 10, however, was designed as a compromise, providing a Windows 7-like Start menu while preserving the live-tile interface for accessing applications and services from the desktop. There are many compelling reasons for moving from Windows 7 to Windows 10. The most notable is that Windows 7 will no longer receive any security updates, creating serious security risks and compliance issues.

In 2018, Microsoft adopted a Semi-Annual Channel (SAC) governed by the Modern Lifecycle Policy. This means that feature updates are released to the public twice a year, around March and September. Updates are cumulative. Updating to these releases is required to remain eligible for support from Microsoft. Security updates are released monthly and quarterly as rollups, and critical updates are released as needed. This is part of Microsoft’s plan to minimize vulnerabilities and security exposure in an ever-changing digital...


windows security

Camping in the Clouds with Terraform and Ansible

By Josh Williams
February 5, 2019

Base Camp
Photo by Andrew E. Larsen · CC BY-ND 2.0

Right, so, show of hands: How many of you work on some bit of web code by doing a git clone to your own laptop, developing the feature or bug fix, running through manual testing of the app until you’re happy with it, and off it goes back up to the repo when done? I’m curious, and I have a few questions for you:

  • Have you ever had a bit of code that worked locally, but didn’t in production because of some difference in systems, dependencies, or something else in the stack?
  • How do you show off your work to a client or management for approval? Can you demo several alternate changes to the same site at the same time?
  • How do you bring in coworkers to “look over your shoulder” and help with something, especially ones who are far away?
  • How do you get a new coworker up to speed if they’re doing development themselves?
  • If you’re working on multiple things, do you create multiple clones?
  • How’re your backups?

Are you fidgeting nervously thinking about that? Sorry. ☹ But also, check out this little thing: DevCamps. It’s been an End Point staple for quite a while now, so if you’ve read our blog before you might have heard about it.

Long story short: In addition to any local development you do, this system will spin up your own little environments (“camps”) on a remote development server. Each camp includes a checkout of the code, separate httpd/nginx and app processes, and a dedicated database with a clone of the data. What’s that all mean?

  • Well, figuring that dev server is configured the same, you’ll be working right in a stack identical to production.
  • Each camp gets its own port, so you can link your beautiful results to someone and they’ll see exactly what the site will look like when deployed (see the sketch after this list).
  • Bring coworkers in to a shared tmux session for code review or pair programming.
  • And if they just can’t resist working on the same project they can create their own camp, without having to install dependencies and get the stack operational...
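To make that isolation concrete, here’s a purely conceptual sketch in Python (not DevCamps code; DevCamps manages all of this for you) of the core idea: every resource a camp needs is parameterized by its camp number, so environments never collide:

# Conceptual sketch only; paths, ports, and names here are hypothetical
CAMP_NUMBER = 12

camp = {
    'root':      f'/home/dev/camp{CAMP_NUMBER}',  # its own code checkout
    'http_port': 9000 + CAMP_NUMBER,              # its own web server port
    'database':  f'camp{CAMP_NUMBER}',            # its own cloned database
}

# A shareable preview URL for demoing changes in a production-like stack
print(f"http://dev.example.com:{camp['http_port']}/")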

camps cloud development

Adding Awesomplete to Vue Components

By Patrick Lewis
January 31, 2019

IBM Model M SSK keyboard
Photo by njbair · CC BY-SA 2.0, cropped from original

Awesomplete is an “Ultra lightweight, customizable, simple autocomplete widget with zero dependencies, built with modern standards for modern browsers.”

Awesomplete caught my attention when I was looking for a lightweight autocomplete implementation to add to an existing, heavily styled form in a Vue.js single-file component. There are no fewer than 10 options on the Awesome Vue.js list of autocomplete libraries, but many of them brought their own dependencies or custom styling and I was looking for something simpler to add autocomplete features to my form.

I have created a live JSFiddle demo showing an implementation of Awesomplete in a Vue.js app, but the remainder of this post contains more details about adding Awesomplete to a single-file component in a larger Vue application.

Here is a screenshot and sample code for a simplified version of the Vue single-file component that I was working with:

Simple form

<template>
  <div>
    <h2>Search by Name</h2>
    <p>
      <em>
        Options: {{ names.join(', ') }}
      </em>
    </p>

    <form>
      <input
        id="name-input"
        placeholder="Enter a name"
        type="text"
      >
    </form>
  </div>
</template>

<script>
export default {
  data () {
    return {
      names: [
        'Colin Creevey',
        'Seamus Finnigan',
        'Lee Jordan'
      ]
    }
  }
}
</script>

In my actual application I was populating the data object with API data via vue-apollo, but I’ve hard-coded the array of strings here for simplicity.

Adding autocomplete to my form with Awesomplete was as easy as adding the package to my project with yarn add awesomplete and then updating the Vue component to load the library and attach it to my form:

<template>
  ...
</template>

<script>
import Awesomplete from 'awesomplete'

export default {
  data () {
    return {
      names: [
        'Colin Creevey',
        'Seamus Finnigan',
        'Lee Jordan'
      ]
    }
  },

  mounted () {
    // Attach Awesomplete to the input, feeding it the names as its list
    const input = document.getElementById('name-input')
    new Awesomplete(input, { list: this.names })
  }
}
</script>

vue javascript

How to Migrate from Microsoft SQL Server to PostgreSQL

By Selvakumar Arumugam
January 23, 2019

SQL Server to Postgres

One of our clients had a Java-based application stack on Linux that connected to a pretty old version of SQL Server on Windows. We wanted to migrate the entire system to a more consistent, unified stack that our developers work efficiently in, and that is current enough to receive regular updates.

We decided to migrate the database from SQL Server to PostgreSQL on Linux because porting the database, while not entirely quick or simple, was still much simpler than porting the app to .NET/C# would have been. Rewriting the application would have taken far longer, been much riskier to the business, and cost a lot more.

I experimented with a few approaches to the migration and settled on the two-step process referenced on the Postgres wiki: migrate the schema first, then the data. Let’s walk through the migration step by step.

Schema Migration

The schema of the SQL Server database tables and views needs to be exported to perform the schema conversion. The following steps show how to export it.

Export SQL Server Database Schema

In SQL Server Management Studio, right-click on the database and select Tasks → Generate Scripts.

Generate Scripts

Choose “Select specific database objects” and check only your application schema’s tables (untick dbo schema objects and any others).

Choose tables

Ensure that “Types of data to script” in advanced options is set to “Schema only”.

Schema Only

Review and save the database tables schema file tables.sql. Use WinSCP and public key auth to transfer tables.sql to the Linux server.

Convert Schema from SQL Server to Postgres

sqlserver2pgsql is a good migration tool written in Perl to convert SQL Server schemas to Postgres schemas. Clone it from GitHub to your database server and execute the following commands to convert the tables schema:

$ git clone https://github.com/dalibo/sqlserver2pgsql.git
$ cd sqlserver2pgsql
$ perl sqlserver2pgsql.pl -f tables.sql -b tables-before.sql -a tables-after.sql -u tables-unsure.sql

The converted...
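As an aside, once both the schema and the data have been migrated, a quick sanity check is to compare per-table row counts between the two databases. A minimal sketch in Python, assuming the pyodbc and psycopg2 packages and hypothetical table names and connection details:

import pyodbc
import psycopg2

TABLES = ['customers', 'orders', 'order_items']  # hypothetical table names

mssql = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=mssql-host;DATABASE=appdb;UID=app;PWD=secret')
pg = psycopg2.connect(host='pg-host', dbname='appdb',
                      user='app', password='secret')

mcur, pcur = mssql.cursor(), pg.cursor()
for table in TABLES:
    # Counts should match on both sides if the data migration was clean
    mcur.execute(f'SELECT COUNT(*) FROM {table}')
    pcur.execute(f'SELECT COUNT(*) FROM {table}')
    print(table, mcur.fetchone()[0], pcur.fetchone()[0])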


pentaho postgres database sql sql-server

VISGRAF and the Moreira Salles Institute to Collaborate Using Liquid Galaxy

By Benjamin Goldstein
January 9, 2019

Liquid Galaxy on display at Instituto Moreira Salles (IMS)

In 2017, End Point donated a Liquid Galaxy to the Institute of Pure and Applied Mathematics (IMPA) in Rio de Janeiro. The Institute is home to VISGRAF, a laboratory specializing in computer graphics research, including AR, VR, visualization, and computer vision.

IMPA recently formed a partnership with a leading Brazilian cultural institution, the Moreira Salles Institute (IMS). The IMS stewards a vast collection of culturally important Brazilian photography, music, literature, and art, and sought the collaboration with IMPA as part of its core mission of promoting broad access to these historically valuable artifacts.

The head of VISGRAF, Professor Luiz Velho, views the partnership as a way of empowering Brazilian culture. “The IMS collection is invaluable, and we can do unprecedented things with it,” he said in a press release. Researchers from IMPA are working to geolocate the photos, analyze them with computer vision, improve their resolution, and enable immersive engagement with them on the Liquid Galaxy.

Professor Velho has co-authored an interesting working paper with Julia Giannella of IMPA discussing how IMPA and IMS can take advantage of the Liquid Galaxy. The paper goes into detail on how our Content Management System (CMS) can enable curators and researchers to present the IMS’ collection in novel ways. It also describes the physical setup of the Liquid Galaxy at IMS, and discusses how applications enabled for the Liquid Galaxy, like Panotour and Sketchfab, will contribute to the partnership’s work.

We are supporting their Liquid Galaxy use and look forward to our continued collaboration with this talented team of researchers.

Liquid Galaxy on display at Instituto Moreira Salles (IMS)


liquid-galaxy

Speech Recognition from scratch using Dilated Convolutions and CTC in TensorFlow

By Kamil Ciemniewski
January 8, 2019

Sound visualization
Image by WILL POWER · CC BY 2.0, cropped

In this blog post, I’d like to take you on a journey: we’re going to take a speech recognition project from its architecting phase through coding and training. In the end, we’ll have a fully working model. You’ll be able to take it and run the model-serving app, exposing a nice HTTP API. Yes, you’ll even be able to use it in your projects.

Speech recognition has long been among the hardest tasks in Machine Learning. Traditional approaches involve meticulous crafting and extraction of the audio features that separate one phoneme from another. To do that, one needs a deep background in data science and signal processing. The complexity of the training process prompted teams of researchers to look for alternative, more automated approaches.

With the growth of Deep Learning, the need for handcrafted features declined. The training process for a neural network is much more streamlined: you can feed it signals either in their raw form or as spectrograms and watch the model improve.

Did this get you excited? Let’s start!

Project Plan of Attack

Let’s build a web service that exposes an API. It will receive audio signals, encoded as an array of floating-point numbers, and return the recognized text.
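Purely for illustration, a client interaction could look like the sketch below; the endpoint name, port, and payload shape are assumptions here, not the final API:

import json
import urllib.request

# Hypothetical client call: send raw audio samples, get text back
samples = [0.0, 0.01, -0.02]  # a real request would carry a whole utterance

req = urllib.request.Request(
    'http://localhost:8000/recognize',
    data=json.dumps({'audio': samples}).encode(),
    headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)['text'])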

Here’s a rough plan of the stages we’re going to go through:

  1. Get the dataset to train the model on
  2. Architect the model (see the sketch after this list)
  3. Implement it along with the unit tests
  4. Train it on the dataset
  5. Measure its accuracy
  6. Serve it as a web service
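To make step 2 concrete before we dive in, here’s a minimal TensorFlow sketch of the general shape of such a model: a stack of dilated 1-D convolutions producing per-frame logits, trained with CTC loss. The layer sizes and alphabet are illustrative assumptions, not the final architecture we’ll arrive at:

import tensorflow as tf

NUM_CLASSES = 29  # assumed alphabet: 26 letters, space, apostrophe, CTC blank

def build_model(num_features=80):
    # Variable-length sequences of audio feature frames: (time, features)
    inputs = tf.keras.Input(shape=(None, num_features))
    x = inputs
    for rate in (1, 2, 4, 8):  # doubling dilation widens the receptive field
        x = tf.keras.layers.Conv1D(filters=256, kernel_size=3, padding='same',
                                   dilation_rate=rate, activation='relu')(x)
    logits = tf.keras.layers.Dense(NUM_CLASSES)(x)  # per-frame class scores
    return tf.keras.Model(inputs, logits)

def loss(labels, logits, label_length, logit_length):
    # CTC aligns per-frame predictions with transcripts, so the training
    # data needs no time-alignment information at all
    return tf.nn.ctc_loss(labels=labels, logits=logits,
                          label_length=label_length, logit_length=logit_length,
                          logits_time_major=False, blank_index=-1)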

The dataset

The open-source community has a lot to thank the Mozilla Foundation for. It hosts many projects, with the wonderful, free Firefox browser at the forefront. One of its other projects, called Common Voice, focuses on gathering large datasets to be used by anyone in speech recognition projects.

The datasets consist of wave files and their text transcriptions. There’s no notion of time-alignment. It’s just the audio...


machine-learning python

Switching PostgreSQL WAL-based Backup Options

By Josh Williams
January 3, 2019

Sunbury hoard
Photo by Paul Hudson · CC BY 2.0, modified

I was woken up this morning. It happens every morning, true, but not usually by a phone call asking for help with a PostgreSQL database server that was running out of disk space.

It turns out that one of the scripts we’re in the process of retiring, but still had in place, got stuck in a loop and filled most of the available space with partial, incomplete base backups. So, since I’m awake, I might as well talk about Postgres backup options. I don’t mean for it to be a gripe session, but I’m tired and it kind of is.

For this particular app, since it resides partially on AWS we looked specifically at options that are able to work natively with S3. We’ve currently settled on pgBackRest. There’s a bunch of options out there, which doesn’t make the choice easy. But I suppose that’s the nature of things these days.

At first we’d tried out pghoard. It looks pretty good on the tin, especially with its ability to connect to multiple cloud storage services beyond S3: Azure, Google, Swift, etc. Having options is always nice. And for the most part it works well, apart from a couple idiosyncrasies.

We had the most trouble with the encryption feature. It didn’t have any problem on the encryption side, but for some reason on restore the process would hang and eventually fail without unpacking any data. Having a backup solution is a pretty important thing, but it doesn’t mean anything unless we can get the data back out of it. So this was a bit of a sticking point. We probably could have figured out how to get it functioning, and at least been a good citizen and reported it upstream to get it resolved in the source. But we kind of just needed it working, and giving something else a shot was the quicker path to that goal. Sorry, pghoard devs.

The other idiosyncrasy probably worth mentioning is that it does its own scheduling. The base backups, for instance, happen at a fixed hour interval set in the configuration...


database postgres sysadmin