Our Blog

Ongoing observations by End Point people

Linux Development in Windows 10 with Docker and WSL 2


By Kevin Campusano
June 18, 2020


I’m first and foremost a Windows guy. But for a few years now, moving away from working mostly with .NET and into a plethora of open source technologies has given me the opportunity to change platforms and run a Linux-based system as my daily driver. Ubuntu, which I honestly love for work, has been serving me well by supporting my development workflow with languages like PHP, JavaScript and Ruby. And with the help of the excellent Visual Studio Code editor, I’ve never looked back. There’s always been an inclination in the back of my mind though, to take some time and give Windows another shot.

With the latest improvements coming to the Windows Subsystem for Linux in its second version (WSL 2), the new and exciting Windows Terminal, and Docker support for running containers inside WSL 2, I think the time is now.

In this post, we’ll walk through the steps I took to set up a PHP development environment in Windows, using VS Code and an Ubuntu Docker container running on WSL 2. Let’s go.

Note: You have to be on the latest version of Windows 10 Pro (Version 2004) in order to install WSL 2 by the usual methods. If you’re not, you’ll need to be part of the Windows Insider Program to get access to it.

What’s new with WSL 2

This is best explained by the official documentation. However, as a WSL 1 veteran, I’ll mention a few of the improvements that have sparked my interest in trying it again.

1. It’s faster and more compatible

WSL 2 introduces a complete architectural overhaul. Windows now ships with a full Linux kernel, which WSL 2 distributions run on. This results in greatly improved file system performance and much better compatibility with Linux programs. It’s no longer running a Linux look-alike, but actual Linux.

2. It’s better integrated with Windows

This is a small one: we can now use Windows Explorer to browse files within a WSL distribution. This is not a WSL 2 exclusive feature; it has been there for a while now. I think it’s worth mentioning...


windows linux docker php

Jamstack Conf Virtual 2020: Thoughts & Highlights


By Greg Davidson
June 16, 2020


Welcome to Jamstack Conf Virtual 2020

Last week I attended Jamstack Conf Virtual 2020. It had originally been slated to take place in London, UK, but was later transformed into a virtual event in light of the COVID-19 pandemic. The conference began at 2pm London time (thankfully I double-checked this the night before!), which was 6am for those of us in the Pacific Time Zone.

Before getting too much further I wanted to mention that if you are not familiar with the Jamstack, you can read more about it at jamstack.org.

To participate in the conference virtually we used an app called Hopin. I had not heard of it before but was impressed with how well it worked. At one point when I checked, there were over 3,000 attendees from 130+ countries. Phil Hawksworth was the Host/MC for the event and did a great job. There were virtual spaces for the stage, sessions, expo (vendors), and networking. If you opted in, the networking feature paired you with a random attendee for a video chat. I’m not sure what I expected going into it, but I thought it was fun. I met a fellow developer from the Dominican Republic. The experience was very similar to, though more serendipitous than, the hallway track or lunch line at an in-person conference.

Phil Hawksworth welcoming the attendees

Keynote

Matt Biilmann opened the conference with a keynote address about the challenges we face as a developer community trying to improve access to accurate, timely, and locally relevant information for a global audience: many billions of users with all kinds of devices and varying levels of connectivity. He moved on to share how Netlify is trying to enable developers to “build websites instead of infrastructure” and “ensure all the best practices become common practices” through features like git-based deployments, build plugins, and edge handlers (more on those later).

State of the Jamstack Survey results

Laurie Voss reporting findings from the Jamstack Survey 2020

Laurie Voss walked us through the results of the...


jamstack html css javascript conference development

Why upgrading software libraries is imperative


By Selvakumar Arumugam
June 10, 2020


Image by Tolu Olubode on Unsplash

Applications are built on front-end and back-end programming languages along with their library dependencies. Operating systems and programming languages can be periodically updated to the latest version, but what about the many libraries used in the app’s front end and back end? As we all know, it can be quite a daunting task to maintain and individually update a long list of software dependencies like the examples later in this post. Still, it is important to keep them updated.

This post dives into our experience upgrading a complex app with a full software stack and lots of dependencies. We’ll examine the benefits of upgrading, what you will need, and how to go about such an upgrade as simply as possible.

The app in question contained decade-old software and included extensive libraries when we received it from our client. The app used languages including Java, Scala, Kotlin, and JavaScript along with many libraries. The initial plan was to upgrade the complete software stack and libraries all at once due to the gap between versions. This proved to be more difficult than expected due to a host of deprecated and removed functionality as well as interdependence of a few of the libraries.

Conflict approach: “Don’t update unless you have to”

While this can be sustainable in the short term, it quickly becomes less tenable in the long run. One important purpose of updates is to (hopefully) protect against new vulnerabilities and cyber attacks. Scenarios arise where a fix for a particular library is only implemented in its latest version, which in turn requires upgrading other libraries to their latest versions in a chain. Because upgraded libraries need extensive testing and preparation for new issues, this directly impacts whether fixing the issue in the app is even attempted.

Therefore, smaller and more frequent updates are more sustainable in the long run. Larger and less frequent upgrades will not only result in unexpected errors, but also require more...


software update

Testing to defend against nginx add_header surprises


By Jon Jensen
May 29, 2020

Cute calico cat perched securely upon a trepidatious shoe

These days when hosting websites it is common to configure the web server to send several HTTP response headers with every single request for security purposes.

For example, using the nginx web server we may add these directives to our http configuration scope to apply to everything served, or to specific server configuration scopes to apply only to particular websites we serve:

add_header Strict-Transport-Security max-age=2592000 always;
add_header X-Content-Type-Options    nosniff         always;

(See HTTP Strict Transport Security and X-Content-Type-Options at MDN for details about these two particular headers.)

The surprise (problem)

Once upon a time I ran into a case where nginx usually added the expected HTTP response headers, but later appeared to be inconsistent and sometimes did not. This is distressing!

Troubleshooting leads to the (re-)discovery that add_header directives are not always additive throughout the configuration as one would expect, and as every other server I can think of typically does.

If you define your add_header directives in the http block and then use an add_header directive in a server block, those from the http block will disappear.

If you define some add_header directives in the server block and then add another add_header directive in a location block, those from the http and/or server blocks will disappear.

This is even the case in an if block.

In the nginx add_header documentation we find the reason for the behavior explained:

There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level.

This nginx directive has always behaved this way. Various people have warned about it in blog posts and online discussions for many years. But the situation remains the same, a trap for the unwary.
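As the title of this post suggests, one defense is to test for the headers you expect on every deploy. Here is a minimal sketch of such a check in Python using the requests library (the post’s own testing setup, judging by its tags, is Node.js-based, and the URLs and header values below are placeholders):

# Quick check that the expected security headers survive nginx configuration changes.
# The URLs and expected values are placeholders; adjust them for your own site.
import requests

EXPECTED_HEADERS = {
    "Strict-Transport-Security": "max-age=2592000",
    "X-Content-Type-Options": "nosniff",
}

# Include pages served from server or location blocks that set their own
# add_header directives, since those are the ones that lose inherited headers.
URLS = [
    "https://www.example.com/",
    "https://www.example.com/some/location/",
]

def missing_headers(url):
    response = requests.get(url, timeout=10)
    problems = []
    for name, expected in EXPECTED_HEADERS.items():
        actual = response.headers.get(name)
        if actual != expected:
            problems.append(f"{name}: expected {expected!r}, got {actual!r}")
    return problems

if __name__ == "__main__":
    for url in URLS:
        problems = missing_headers(url)
        print(f"{url}: {'OK' if not problems else '; '.join(problems)}")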

I have tried to imagine the rationale behind this behavior. Response headers often are set in groups, so the programmer...


sysadmin nginx security javascript nodejs testing

Implementing SummAE neural text summarization with a denoising auto-encoder


By Kamil Ciemniewski
May 28, 2020

Book open on lawn with dandelions

If there’s any problem space in machine learning with no shortage of (unlabelled) data to train on, it’s natural language processing (NLP).

In this article, I’d like to take on the challenge of taking a paper that came from Google Research in late 2019 and implementing it. It’s going to be a fun trip into the world of neural text summarization. We’re going to go through the basics, the coding, and then we’ll look at what the results actually are in the end.

The paper we’re going to implement here is: Peter J. Liu, Yu-An Chung, Jie Ren (2019) SummAE: Zero-Shot Abstractive Text Summarization using Length-Agnostic Auto-Encoders.

Here’s the paper’s abstract:

We propose an end-to-end neural model for zero-shot abstractive text summarization of paragraphs, and introduce a benchmark task, ROCSumm, based on ROCStories, a subset for which we collected human summaries. In this task, five-sentence stories (paragraphs) are summarized with one sentence, using human summaries only for evaluation. We show results for extractive and human baselines to demonstrate a large abstractive gap in performance. Our model, SummAE, consists of a denoising auto-encoder that embeds sentences and paragraphs in a common space, from which either can be decoded. Summaries for paragraphs are generated by decoding a sentence from the paragraph representations. We find that traditional sequence-to-sequence auto-encoders fail to produce good summaries and describe how specific architectural choices and pre-training techniques can significantly improve performance, outperforming extractive baselines. The data, training, evaluation code, and best model weights are open-sourced.

Preliminaries

Before we go any further, let’s talk a little bit about neural summarization in general. There are two main approaches to it:

The first approach makes the model “focus” on the most important parts of the longer text, extracting them to form a summary.
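To make the extractive idea concrete, here is a rough Python sketch of my own (not from the paper) that scores each sentence by how much vocabulary it shares with the rest of the paragraph and extracts the top-scoring one as the “summary”:

# Toy extractive summarizer: return the sentence sharing the most vocabulary
# with the other sentences in the paragraph. Purely illustrative; real
# extractive models (and SummAE itself) are far more sophisticated.
import re
from collections import Counter

def extractive_summary(paragraph):
    sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    word_sets = [set(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    # How many sentences each word appears in.
    sentence_counts = Counter(w for words in word_sets for w in words)

    def score(words):
        # Reward words that also occur in other sentences.
        return sum(sentence_counts[w] - 1 for w in words)

    best = max(zip(sentences, word_sets), key=lambda pair: score(pair[1]))
    return best[0]

story = ("Anna lost her keys on the way to work. She searched her bag twice. "
         "A kind stranger found the keys near the bus stop. "
         "He returned them to Anna that evening. Anna thanked him with coffee.")
print(extractive_summary(story))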

Let’s take a recent...


python machine-learning artificial-intelligence natural-language-processing

Designing flexible CI pipelines with Jenkins and Docker


By Will Plaut
May 25, 2020

Pipes

Photo by Tian Kuan on Unsplash

When deciding on how to implement continuous integration (CI) for a new project, you are presented with lots of choices. Whatever you end up choosing, your CI needs to work for you and your team. Keeping the CI process and its mechanisms clear and concise helps everyone working on the project. The setup we are currently employing, and what I am going to showcase here, has proven to be flexible and powerful. Specifically, I’m going to highlight some of the things Jenkins and Docker do that are really helpful.

Jenkins

Jenkins provides us with all the CI functionality we need and it can be easily configured to connect to projects on GitHub and our internal GitLab. Jenkins has support for something it calls a multibranch pipeline. A Jenkins project follows a repo and builds any branch that has a Jenkinsfile. A Jenkinsfile configures an individual pipeline that Jenkins runs against a repo on a branch, tag or merge request (MR).

To keep it even simpler, we condense the steps that a Jenkinsfile runs into shell scripts that live in /scripts/ at the root of the source repo for tasks like testing, building, or deploying, such as /scripts/test.sh. If a team member wants to know how the tests are run, that script is right there to reference.

The Jenkinsfile can be written in a declarative syntax or in plain Groovy. We have landed on the scripted Groovy syntax for its more fine-grained control of Docker containers. Jenkins also provides several ways to inspect and debug the pipelines with things like “Replay” in its GUI and using input('wait here') in a pipeline to debug a troublesome step. The input() function is especially useful when paired with Docker. The function allows us to pause the job and go to the Jenkins server where we use docker ps to find the running container’s name. Then we use docker exec -it {container name} bash to debug inside of the container with all of the Jenkins environment variables loaded. This has proven to be a great...


jenkins docker groovy

Creating a Messaging App Using Spring for Apache Kafka, Part 3


By Kürşat Kutlu Aydemir
May 21, 2020

Spring-Kafka

Photo by Pascal Debrunner on Unsplash

This article is part of a series.

In this article we’ll create the persistence and cache models and repositories. We’re also going to create our PostgreSQL database and the basic schema that we’re going to map to the persistence model.

Persistence

Database

We are going to keep the persistence model as simple as possible so we can focus on the overall functionality. Let’s first create our PostgreSQL database and schema. Here is the list of tables that we’re going to create:

  • users: will hold the users who are registered to use this messaging service.
  • access_token: will hold the unique authentication tokens per session. We’re not going to implement an authentication and authorization server specifically in this series but rather will generate a simple token and store it in this table.
  • contacts: will hold relationships of existing users.
  • messages: will hold messages sent to users.

Let’s create our tables:

CREATE TABLE kafkamessaging.users (
    user_id BIGSERIAL PRIMARY KEY,
    fname VARCHAR(32) NOT NULL,
    lname VARCHAR(32) NOT NULL,
    mobile VARCHAR(32) NOT NULL,
    created_at DATE NOT NULL
);

CREATE TABLE kafkamessaging.access_token (
    token_id BIGSERIAL PRIMARY KEY, 
    token VARCHAR(256) NOT NULL,
    user_id BIGINT NOT NULL REFERENCES kafkamessaging.users(user_id),
    created_at DATE NOT NULL
);

CREATE TABLE kafkamessaging.contacts (
    contact_id BIGSERIAL PRIMARY KEY,
    user_id BIGINT NOT NULL REFERENCES kafkamessaging.users(user_id),
    contact_user_id BIGINT NOT NULL REFERENCES kafkamessaging.users(user_id)
);

CREATE TABLE kafkamessaging.messages (
    message_id BIGSERIAL PRIMARY KEY,
    from_user_id BIGINT NOT NULL REFERENCES kafkamessaging.users(user_id),
    to_user_id BIGINT NOT NULL REFERENCES kafkamessaging.users(user_id),
    message VARCHAR(512) NOT NULL,
    sent_at DATE NOT NULL
);

Model

Before creating the models we’ll add another dependency called Lombok in pom.xml as shown...


java spring frameworks kafka spring-kafka-series

Shopify Admin API: Importing Products in Bulk

Patrick lewis

By Patrick Lewis
May 4, 2020

Cash Register: Photo by Chris Young, used under CC BY-SA 2.0, cropped from original.

I recently worked on an interesting project for a store owner who was facing a daunting task: he had an inventory of hundreds of thousands of Magic: The Gathering (MTG) cards that he wanted to sell online through his Shopify store. The logistics of tracking down artwork and current market pricing for each card made it impossible to do manually.

My solution was to create a custom Rails application that retrieves inventory data from a combination of APIs and then automatically creates products for each card in Shopify. The resulting project turned what would have been a months- or years-long task into a bulk upload that only took a few hours to complete and allowed the store owner to immediately start selling his inventory online. The online store launch turned out to be even more important than initially expected due to current closures of physical stores.

Application Requirements

The main requirements for the Rails application were:

  • Retrieving product data for MTG cards by merging results from a combination of sources/APIs
  • Mapping card attributes and metadata into the format expected by the Shopify Admin API for creating Product records
  • Performing a bulk push of products to Shopify

There were some additional considerations like staying within rate limits for both the inventory data and Shopify APIs, but I will address those further in a follow-up post.

Retrieving Card Artwork and Pricing

I ended up using a combination of two APIs to retrieve MTG card data: MTGJSON for card details like the name of the card and the set it belonged to, and Scryfall for retrieving card images and current market pricing. It was relatively easy to combine the two because MTGJSON provided Scryfall IDs for all of its records, allowing me to merge results from the two APIs together.
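To illustrate that merge-by-ID step, here is a hedged Python sketch (the actual project is a Rails application, and the field names below are made up rather than the real MTGJSON/Scryfall schemas):

# Join card records from two sources on their shared Scryfall ID.
# Data and field names are illustrative only.
mtgjson_cards = [
    {"scryfall_id": "abc-123", "name": "Lightning Bolt", "set": "M10"},
    {"scryfall_id": "def-456", "name": "Counterspell", "set": "7ED"},
]
scryfall_cards = [
    {"id": "abc-123", "image_url": "https://img.example.com/abc.jpg", "price_usd": "1.50"},
    {"id": "def-456", "image_url": "https://img.example.com/def.jpg", "price_usd": "2.25"},
]

# Index one source by ID, then enrich the other as we build product rows.
scryfall_by_id = {card["id"]: card for card in scryfall_cards}

products = []
for card in mtgjson_cards:
    extra = scryfall_by_id.get(card["scryfall_id"], {})
    products.append({
        "title": f"{card['name']} ({card['set']})",
        "image": extra.get("image_url"),
        "price": extra.get("price_usd"),
    })

print(products)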

Working With the Shopify Admin API in Ruby

The Shopify Admin API deals in terms of generic Product records with predefined attributes...


shopify ecommerce ruby rails