Our Blog

Ongoing observations by End Point people

An introduction to automated testing for web applications with Symfony


By Kevin Campusano
September 22, 2020


Testing is an immense topic in software engineering. A lot has been written about it, and a lot of experience has been collected by the greater software development community. There are many different types of tests and many testing techniques, approaches, philosophies, and strategies.

With such a big topic, it would be futile to try touching on every aspect of it in this article. Instead, I’ll take a pragmatic approach and discuss a testing strategy I’ve found success with in the past, as well as how much testing is necessary before I feel comfortable putting code into production. This article can also serve as an introduction to automated testing, using the Symfony framework as a vehicle to explore various types of tests without diving too deep into edge cases or framework specifics, leaning instead into the concepts and design decisions that go into writing them. Still, we’ll make sure to have a running and competent test suite by the end.

So we’re going to talk about automated testing, which is in its own right a very important part of the larger discipline of software testing. It’s also a topic that, as a developer (and as such, responsible for implementing these types of tests), I’m passionate about.

Let’s get started.

The types of tests we’re going to write

For web applications, as far as automated tests go, there are three categories I think are essential to have and which complement each other very well:

  • Unit tests: These are the most numerous, low-level, and, in my opinion, the most important type of developer tests. Unit tests not only make sure that the system does what it is supposed to do, but also that it’s correctly factored, with individual components decoupled from one another. Unit tests focus on exercising specific classes and methods running in complete isolation, which becomes harder if the class you want to test is tightly coupled with its dependencies/collaborators. These tests validate the behavior of basic programming constructs like classes and the algorithms...
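In Symfony, the tooling for all of this typically comes in through Composer. As a minimal sketch of the setup involved (the package and binary names below are the standard Symfony ones, but your project layout may differ):

composer require --dev symfony/test-pack
php bin/phpunit

The test pack pulls in PHPUnit along with Symfony’s testing bridge, and bin/phpunit runs the whole suite.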


testing symfony php

Liquid Galaxy at the Nano Museum in Seoul


By Dave Jenkins
September 17, 2020

21-screen Liquid Galaxy video wall in Seoul, South Korea

We’re excited to share the news of another great project End Point has launched via our partner in South Korea! The Nano Museum in Seoul has added a brand new 21-screen Liquid Galaxy as part of their exhibits. This huge video wall is interactive and includes pre-programmed flights around the world, deep dives into Google Street View at select locations, and the ability to fly the screens with a 6-axis joystick and touchscreen.

This project presented some technical challenges for our hardware team: the 21-screen layout is 3× our normal 7-screen layout (but all very doable). For this configuration, we deployed an “LGOne” server stack, which has a head node server for the core applications, media storage, and overall management. It also has a large display node server with multiple Nvidia video cards to power the displays. For this large array of screens, we are able to ‘bridge’ the video cards together (not unlike a RAID array, but for video cards) to produce multiple high-resolution video outputs. These video outputs then go to the screens, where they are tiled by the displays’ own built-in capabilities.

We wrote these specific configurations in our build lab in Tennessee, then shipped everything to our partner A-Zero in Seoul. They installed the servers, connected them to the displays, and after some short video conferences to confirm some configuration changes, everything looks great!

If your museum has a large video wall, and you want to bring the entire Earth, Moon, Mars, and Ceres to your guests, please contact us today!


liquid-galaxy clients

COVID-19 Support for the Kansas Department of Health and Environment


By Steve Yoman
September 14, 2020

Kansas’s existing EpiTrax system

End Point has worked on Kansas’s disease surveillance systems since 2011. In 2018 we migrated them from their legacy TriSano application to the open source EpiTrax surveillance system created by Utah’s Department of Health. The new EpiTrax system had been in full production for about eight months when COVID-19 cases started to grow in the United States.

COVID-19: Help needed

In March 2020, the Director of Surveillance Systems at the Kansas Department of Health and Environment (KDHE) asked us at End Point to create a web-based portal where labs, hospitals, and ad-hoc testing locations could enter COVID-19 test data. While systems existed for gathering data from labs and hospitals, they needed a way to quickly gather data from the many new and atypical sites collecting COVID-19 test information.

Our approach

Since the portal was intended for people who were unfamiliar with the existing EpiTrax application, we were able to create a new design that was simple and direct, unconstrained by other applications. It required a self-registration function so users could access the system quickly and without administrative overhead, and users needed to understand how to use it without extensive training.

We at End Point agreed to create the portal and dashboard for test results reporters immediately, while planning for later development of additional administrative functions. We used Ruby on Rails and Vue.js to build the portal due to their usefulness for rapid development. The Vue.js front-end JavaScript framework allowed us to quickly put together the portal UI and integrate it with the Rails back-end web services.

Once approved, our team got to work setting up the environment, developing the portal application, and rigorously testing it.

Here are some screenshots of the application:

Portal home page

Kansas Reportable Disease Portal home page

User registration page

Create New User Account page

Dashboard

Dashboard for searching and browsing cases

Reporters’ entry screens

Forms collecting information about reporter and patient

Forms collecting information about disease, specimen, and testing results

The portal was launched on April 30.

Contact tracers support

...

epitrax clients case-study rails vue

Introduction to BorgBackup


By Kannan Ponnusamy
September 10, 2020

Black and silver hard drive

Photo by Frank R

What is Borg?

BorgBackup (Borg for short) is a ‘deduplicating’ backup program that eliminates duplicate or redundant information. It optionally supports compression and authenticated encryption.

The main objective of Borg is to provide an efficient and secure way to back up data. The deduplication technique it uses makes the backup process very quick and effective.

Step 1: Install BorgBackup

On Ubuntu/Debian:

apt install borgbackup

On RHEL/CentOS/Fedora:

dnf install borgbackup

Step 2: Initialize Local Borg repository

First, the system that is going to be backed up needs a designated backup directory. Create a parent directory named ‘backup’, then a child directory called ‘borgdemo’, which will serve as the repository.

mkdir -p /mnt/backup
borg init --encryption=repokey /mnt/backup/borgdemo

Step 3: Let’s create the first backup (archive)

In Borg terms, each backup instance is called an archive. The following demonstrates how to back up the ‘photos’ directory and designate the archive as ‘archive_1’.

borg create --stats --progress /mnt/backup/borgdemo::archive_1 /home/kannan/photos

Note: the archive label for each backup run needs to be specified.
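Rather than inventing a new label by hand for every run, you can lean on Borg’s built-in placeholders to label archives automatically. A small sketch using the {now} placeholder, with the same paths as the earlier examples:

borg create --stats --progress /mnt/backup/borgdemo::archive_{now:%Y-%m-%d} /home/kannan/photos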

Step 4: Next backup (Incremental)

To see that deduplication works, execute the same command again, this time with a different, unique archive label.

borg create --stats --progress /mnt/backup/borgdemo::archive_2 /home/kannan/photos

This backup is nearly identical to the previous one, so because of deduplication the process will not only run faster this time, it will effectively be incremental as well. The --stats flag will provide statistics about how much data was deduplicated.
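You can also inspect an archive after the fact: the borg info command reports the original, compressed, and deduplicated sizes, which makes the space savings easy to verify:

borg info /mnt/backup/borgdemo::archive_2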

Step 5: List all the archives

The ‘borg list’ command lists all of the archives stored within the Borg repository.

borg list /mnt/backup/borgdemo

Step 6: Remote Borg Repository

Take the scenario where the backups of many servers need to be maintained in...
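As a preview of where that goes, Borg can address a repository on a remote server over SSH. A minimal sketch, assuming SSH access to a hypothetical host named backupserver (the /./ prefix makes the path relative to the remote home directory):

borg init --encryption=repokey ssh://kannan@backupserver/./borgdemo
borg create --stats --progress ssh://kannan@backupserver/./borgdemo::archive_1 /home/kannan/photos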


sysadmin storage

Rclone: upload to the cloud from your command line and much more


By Ardyn Majere
September 9, 2020


The Swiss army knife of storage

Cloud storage providers like Google Drive are great solutions for storing files. You can upload your data and not worry about maintaining a separate system to host it, or all the security hassles that can bring. However, very few major cloud storage providers offer a command line interface or any other official way to upload without using their web interface or closed-source binary tools, if they even offer that.

This obviously makes uploading files from servers difficult, but not impossible if you know the right tools.

About a year ago Jon Jensen penned a blog post about gdrive, a Google Drive command-line tool. However, due to changes with Google’s Drive security, that tool no longer works. This led me to look for a replacement.

Our use case

Recently I had to put some large files into long-term storage on Google Drive, since we needed the local space back. We wanted to retain the data, but didn’t foresee needing to access it for some time, if ever. Google Drive was a good solution for us, but the problem became how to get it there.

The files were too big, and some of them were not stored sparsely: empty space was tacked onto the disk images. We wanted to encrypt them, as the drives potentially contained customer information. So we had to sequentially process the files, encrypt them, and upload them. I felt like this would take quite a bit of time.

Enter rclone. Rclone can connect to many different kinds of cloud storage providers, DIY cloud storage solutions, and even things like FTP and WebDAV. You can use rclone to copy files directly like rsync, or even use it to mount the remote storage as a local drive. We chose to do the latter.

Rclone connects to a dizzying array of remote web services including Dropbox, Box, Amazon S3, Mega, SugarSync, and even homebrew cloud like ownCloud! This example uses Google Drive, but the instructions for many cloud providers are similar. The setup wizard can guide you through each step...
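As a rough sketch of the workflow (the remote name gdrive here is just what we called ours during configuration; yours will differ): once the remote is authorized through the interactive wizard, you can copy files to it directly, or mount it like a local directory:

rclone config
rclone copy /path/to/local/files gdrive:archive
rclone mount gdrive:archive /mnt/gdrive --daemon

Rclone also offers a crypt remote type that wraps another remote, which covers the encryption step from our use case.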


sysadmin cloud storage

Our Immersive Technology Team Launches Vision.Space


By Ben Witten
September 4, 2020

End Point Corporation’s immersive technology team has launched Vision.Space. Evolved from End Point’s Liquid Galaxy, Vision.Space lets users control touchscreens, video walls, shared laptops, and WiFi controllers, all with a swipe of a finger.

Vision.Space was created to incorporate any number of displays in a video wall configuration. Each display is maximized for resolution and shows a geometrically-adjusted viewing angle to avoid the fish-eye distortion commonly seen on conventional video walls. The platform also incorporates touchscreens placed around the room, giving participants multiple input sources with which to manipulate and interact with the visualizations presented.

A “meeting director” can incorporate and guide multiple inbound video streams via an intuitive interface on an iPad or tablet controller. Directing someone’s laptop image to any screen in the room is as easy as swiping a video thumbnail into the appropriate square on the tablet.

Our new Vision.Space platform combines custom server hardware with commercial displays and touchscreens, and is an ideal cutting-edge conference room system for enterprise-level companies in commercial real estate, logistics, and travel, among other industries. Central to Vision.Space is End Point’s CMS (Content Management System), which enables clients to quickly and easily build multimedia presentations for the platform.

Vision.Space’s system architecture is based in Linux and ROS (Robot Operating System), and provides a fundamentally secure, stable, and flexible environment for companies seeking to display extensive geospatial data sets in a concise and interactive manner. Research universities, multimedia studios, and data laboratories are also well-positioned to fully leverage Vision.Space, as it allows for multiple data sources and visualization streams to be viewed simultaneously. Museums, aquariums, and science centers can utilize Vision.Space to wow their visitors by combining immersive video with interactive...


liquid-galaxy vision-space

Containerizing Magento with Docker Compose: Elasticsearch, MySQL and Magento


By Kevin Campusano
August 27, 2020


Magento is a complex piece of software, and as such, we need all the help we can get when it comes to developing customizations for it. A fully featured local development environment can do just that, but these can often be very complex as well. It’d be nice to have some way to completely capture all the setup for such an environment and be able to get it all up and running quickly, repeatably... even with a single command. Well, Docker containers can help with that. And they can be easily provisioned with the Docker Compose tool.

In this post, we’re going to go in depth into how to fully containerize a Magento 2.4 installation for development, complete with its other dependencies Elasticsearch and MySQL. By the end of it, we’ll have a single command that sets up all the infrastructure needed to install and run Magento, and develop for it. Let’s get started.

Magento 2.4 application components

The first thing that we need to know is what the actual components of a Magento application are. Starting with 2.4, Magento requires access to an Elasticsearch service to power catalog searches. Other than that, we have the usual suspects for typical PHP applications. Here’s what we need:

  1. MySQL
  2. Elasticsearch
  3. A web server running the Magento application

In terms of infrastructure, this is pretty straightforward. It would cleanly translate into three separate machines talking to each other via the network, but in the Docker world, each of these machines becomes a container. Since we need multiple containers for our infrastructure, a tool like Docker Compose can come in handy to orchestrate the creation of all that. So let’s get to it.

Creating a shared network

Since we want to create three separate containers that can talk to each other, we need to ask the Docker engine to create a network for them. This can be done with this self-explanatory command:

docker network create magento-demo-network

magento-demo-network is the name I’ve chosen for my network but you...
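With the network in place, the dependency containers can be attached to it as they are created. A quick sketch of the idea (the image tags, container names, and credentials here are illustrative choices, not prescriptions from the post):

docker run -d --name magento-demo-mysql --network magento-demo-network -e MYSQL_ROOT_PASSWORD=secret mysql:8.0
docker run -d --name magento-demo-elasticsearch --network magento-demo-network -e discovery.type=single-node elasticsearch:7.8.0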


magento mysql elasticsearch docker

Our Vue Storefront “Proof of Concept” Experience


By Kürşat Kutlu Aydemir
August 10, 2020

Recently we experimented internally with integrating Vue Storefront and Magento 2.3. Vue Storefront is an open source Progressive Web App (PWA) that aims to work with many ecommerce platforms.

What initially piqued our interest was the possibility of integrating Vue Storefront with the venerable ecommerce back-end platform Interchange, which many of our clients use. Vue Storefront’s promise of ease of integration with any ecommerce backend made us curious to see whether it would make a good modern front-end for Interchange.

Since Vue Storefront seems to be most commonly used with Magento, we decided to start our experiment with a standard Vue Storefront/​Magento 2.3 proof-of-concept integration.

PoC of Vue Storefront/​Magento 2.3

OK, to be honest, at the beginning we blindly expected that Vue Storefront would be a copy/paste front-end template solution that could fairly easily be made to work with a Magento backend through its standard integration. Sadly, this was not the case for us.

Before beginning our journey, let’s have a look at this diagram summarizing the Vue Storefront integration with Magento and the components it includes:

Figure 1: Vue Storefront architecture

At first, we wanted to see how all these components can be installed and run on a single server with modest resources.

I walked through the Vue Storefront documentation and a few blog posts to figure out the Vue Storefront and Magento integration.

Preparing the Environment

I downloaded and installed the following:

  • OS: CentOS Linux 7 (64-bit)
  • PHP 7.2.26
  • Magento 2.3 with sample data
  • Elasticsearch 5.6
  • Redis
  • Docker
  • Vue Storefront API
  • Vue Storefront
  • mage2vuestorefront bridge

Installation of those components is fairly easy. We started our virtual server with 4 GB of memory, which we thought should be plenty for a toy setup.
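For context, the Vue Storefront pieces are installed from their Git repositories. A rough sketch of the kind of commands involved (repository URLs as they were under the DivanteLtd organization in 2020; check the current docs for the right sources):

git clone https://github.com/DivanteLtd/vue-storefront-api.git
git clone https://github.com/DivanteLtd/vue-storefront.git
cd vue-storefront && yarn install && yarn dev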

Indexing Elasticsearch with mage2vuestorefront

PHP and Magento 2.3 had a series of memory usage issues. While running the mage2vuestorefront indexer, Magento used most of the memory and caused...


vue javascript ecommerce interchange magento
