Testing is an immense topic in software engineering. A great deal has been written about it, and the wider software development community has accumulated a wealth of experience with it. There are many different types of tests, techniques, approaches, philosophies, and strategies.
With such a big topic, it would be futile to try touching on every aspect of it in this article. Instead, I’ll take a pragmatic approach and discuss a testing strategy I’ve found success with in the past, as well as how much testing is necessary before I feel comfortable putting code into production. This article can also serve as an introduction to automated testing, using the Symfony framework as a vehicle to explore various types of tests. Rather than diving too deep into edge cases or framework specifics, we’ll lean into the concepts and design decisions that go into writing them. Still, we’ll make sure to have a running, competent test suite by the end.
So we’re going to talk about automated testing, which is in its own right a very important part of the larger discipline of software testing. It’s also a topic that I’m passionate about as a developer, since developers are the ones responsible for implementing these kinds of tests.
Let’s get started.
For web applications, as far as automated tests go, there are three categories I think are essential to have and which complement each other very well:
Unit tests: These are the most numerous, lowest-level, and, in my opinion, most important type of developer tests. Unit tests not only make sure that the system does what it is supposed to do, but also that it is correctly factored, with individual components decoupled from one another. Unit tests focus on exercising specific classes and methods in complete isolation, which becomes harder if the class you want to test is tightly coupled to its dependencies/collaborators. These tests validate the behavior of basic programming constructs like classes and the algorithms...
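For a Symfony application, such tests are typically driven by PHPUnit. As a quick sketch of how the suite is set up and run from the command line (the `symfony/test-pack` package and the `tests/Unit` directory follow Symfony conventions, but are assumptions here rather than something this article has configured yet):

```shell
# Hypothetical commands for a Symfony project; guarded so they only
# run where Composer is actually available.
suite_dir=tests/Unit   # conventional location for Symfony unit tests

if command -v composer >/dev/null 2>&1; then
    composer require --dev symfony/test-pack  # installs PHPUnit and helpers
    php bin/phpunit "$suite_dir"              # run only the unit suite
fi
```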
We’re excited to share the news of another great project End Point has launched via our partner in South Korea! The Nano Museum in Seoul has added a brand new 21-screen Liquid Galaxy as part of their exhibits. This huge video wall is interactive and includes pre-programmed flights around the world, deep dives into Google Street View at select locations, and the ability to fly the screens with a 6-axis joystick and touchscreen.
This project presented some technical challenges for our hardware team: the 21-screen layout is 3× our normal 7-screen layout (but all very doable). For this configuration, we deployed an “LGOne” server stack, which has a head node server for the core applications, media storage, and overall management. It also has a large display node server with multiple Nvidia video cards to power the displays. For this large array of screens, we are able to ‘bridge’ the video cards together (not unlike a RAID array for video cards) to produce multiple high-resolution video outputs. These outputs then go to the screens, where they are tiled by the displays’ own built-in capabilities.
We wrote these specific configurations in our build lab in Tennessee, then shipped everything to our partner A-Zero in Seoul. They installed the servers, connected them to the displays, and after a few short video conferences to confirm configuration changes, everything looks great!
If your museum has a large video wall and you want to bring the entire Earth, Moon, Mars, and even Ceres to your guests, please contact us today!
End Point has worked on Kansas’s disease surveillance systems since 2011. In 2018 we migrated them from their legacy TriSano application to the open source EpiTrax surveillance system created by Utah’s Department of Health. The new EpiTrax system had been in full production for about eight months when COVID-19 cases started to grow in the United States.
In March 2020, the Director of Surveillance Systems at the Kansas Department of Health and Environment (KDHE) asked us at End Point to create a web-based portal where labs, hospitals, and ad-hoc testing locations could enter COVID-19 test data. While systems existed for gathering data from labs and hospitals, they needed a way to quickly gather data from the many new and atypical sites collecting COVID-19 test information.
Since the portal was intended for people who were unfamiliar with the existing EpiTrax application, we were able to create a new design that was simple and direct, unconstrained by other applications. It required a self-registration function so users could access the system quickly and without administrative overhead, and users needed to understand how to use it without extensive training.
Once approved, our team got to work setting up the environment, developing the portal application, and rigorously testing it.
Here are some screenshots of the application:
The portal was launched on April 30.
BorgBackup (Borg for short) is a ‘deduplicating’ backup program that eliminates duplicate or redundant information. It optionally supports compression and authenticated encryption.
The main objective of Borg is to provide an efficient and secure way to back up data. The deduplication technique it uses makes the backup process fast and effective.
# Debian/Ubuntu
apt install borgbackup

# Fedora
dnf install borgbackup
First, the system that is going to be backed up needs a designated backup directory. Create a parent directory named ‘backup’, then a child directory called ‘borgdemo’, which will serve as the repository.
mkdir -p /mnt/backup
borg init --encryption=repokey /mnt/backup/borgdemo
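One caveat with repokey mode: the encryption key lives inside the repository itself, so it’s prudent to export a copy and store it somewhere away from the backup disk. A sketch of doing so, with the destination path being an assumption:

```shell
repo=/mnt/backup/borgdemo       # repository created above
keyfile=/root/borgdemo.key      # assumed safe location for the key copy

# The export only runs where borg is actually installed:
if command -v borg >/dev/null 2>&1; then
    borg key export "$repo" "$keyfile"
fi
```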
In Borg terms, each backup instance is called an archive. The following demonstrates how to back up the ‘photos’ directory, designating the archive as ‘archive_1’.
borg create --stats --progress /mnt/backup/borgdemo::archive_1 /home/kannan/photos
Note: a unique archive label needs to be specified for each backup run.
To see deduplication in action, execute the same command again, this time with a different, unique archive label.
borg create --stats --progress /mnt/backup/borgdemo::archive_2 /home/kannan/photos
This second backup is mostly identical to the previous one, so thanks to deduplication the process not only runs faster, it is effectively incremental as well. The --stats flag provides statistics on how much data was deduplicated.
The ‘borg list’ command lists all of the archives stored within the Borg repository.
borg list /mnt/backup/borgdemo
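For completeness, here’s a sketch of getting data back out of the repository and keeping it from growing without bound. The scratch directory is an assumption; note that `borg extract` restores into the current working directory:

```shell
repo=/mnt/backup/borgdemo
archive=archive_2

# Guarded so the commands only run where borg is installed:
if command -v borg >/dev/null 2>&1; then
    mkdir -p /tmp/restore && cd /tmp/restore   # assumed scratch directory
    borg extract "$repo::$archive"             # restore the whole archive

    # Thin out old archives, keeping 7 daily and 4 weekly backups:
    borg prune --keep-daily=7 --keep-weekly=4 "$repo"
fi
```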
Take the scenario where the backups of many servers need to be maintained in...
Cloud storage providers like Google Drive are great solutions for storing files. You can upload your data and not worry about maintaining a separate system to host it, or all the security hassles that can bring. However, very few major cloud storage providers offer a command-line interface or any other official way to upload files without going through their web interface or closed-source binary tools.
This obviously makes uploading files from servers difficult, but not impossible if you know the right tools.
About a year ago Jon Jensen penned a blog post about gdrive, a Google Drive command-line tool. However, due to changes with Google’s Drive security, that tool no longer works. This led me to look for a replacement.
Recently I had to put some large files into long-term storage on Google Drive, since we needed the local space back. We wanted to retain the data but didn’t foresee needing to access it for some time, if ever. Google Drive was a good solution for us, but the problem became how to get the files there.
The files were too big, and some of them were not stored sparsely: empty space was tacked onto the disk images. We also wanted to encrypt them, as the drives potentially contained customer information. So we had to sequentially process, encrypt, and upload each file, which I expected to take quite a bit of time.
Enter rclone. Rclone can connect to many different kinds of cloud storage providers, DIY cloud storage solutions, and even things like FTP and WebDAV. You can use rclone to copy files directly like rsync, or even use it to mount the remote storage as a local drive. We chose to do the latter.
Rclone connects to a dizzying array of remote web services, including Dropbox, Box, Amazon S3, Mega, SugarSync, and even homebrew cloud solutions like ownCloud! This example uses Google Drive, but the instructions for many other cloud providers are similar. The setup wizard can guide you through each step...
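As a sketch of the mount-based approach we chose (the remote name `gdrive` and the mount point are assumptions; the remote itself is what `rclone config` creates):

```shell
remote=gdrive              # remote name chosen during `rclone config`
mountpoint="$HOME/gdrive"  # assumed local mount point

# Only attempt the mount where rclone is actually installed:
if command -v rclone >/dev/null 2>&1; then
    mkdir -p "$mountpoint"
    rclone mount "$remote": "$mountpoint" --daemon  # Drive appears as a local dir
    # ...copy files in with cp or rsync, then unmount when done:
    # fusermount -u "$mountpoint"
fi
```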
End Point Corporation’s immersive technology team has launched Vision.Space. Evolved from End Point’s Liquid Galaxy, Vision.Space lets users control touchscreens, video walls, shared laptops, and WiFi controllers, all with a swipe of a finger.
Vision.Space was created to incorporate any number of displays in a video wall configuration. Each display is maximized for resolution and shows a geometrically-adjusted viewing angle to avoid the fish-eye distortion commonly seen on conventional video walls. The platform also incorporates touchscreens placed around the room, giving participants multiple input sources with which to manipulate and interact with the visualizations presented.
A “meeting director” can incorporate and guide multiple inbound video streams via an intuitive interface on an iPad or tablet controller. Directing someone’s laptop image to any screen in the room is as easy as swiping a video thumbnail into the appropriate square on the tablet.
Our new Vision.Space platform combines custom server hardware with commercial displays and touchscreens, and is an ideal cutting-edge conference room system for enterprise-level companies in commercial real estate, logistics, and travel, among other industries. Central to Vision.Space is End Point’s CMS (Content Management System), which enables clients to quickly and easily build multimedia presentations for the platform.
Vision.Space’s system architecture is based in Linux and ROS (Robot Operating System), and provides a fundamentally secure, stable, and flexible environment for companies seeking to display extensive geospatial data sets in a concise and interactive manner. Research universities, multimedia studios, and data laboratories are also well-positioned to fully leverage Vision.Space, as it allows for multiple data sources and visualization streams to be viewed simultaneously. Museums, aquariums, and science centers can utilize Vision.Space to wow their visitors by combining immersive video with interactive...
Magento is a complex piece of software, and as such, we need all the help we can get when it comes to developing customizations for it. A fully featured local development environment can do just that, but these can often be very complex as well. It’d be nice to have some way to completely capture the setup for such an environment and be able to get it all up and running quickly and repeatably, even with a single command. Well, Docker containers can help with that. And they can be easily provisioned with the Docker Compose tool.
In this post, we’re going to go in depth into how to fully containerize a Magento 2.4 installation for development, complete with its other dependencies Elasticsearch and MySQL. By the end of it, we’ll have a single command that sets up all the infrastructure needed to install and run Magento, and develop for it. Let’s get started.
The first thing that we need to know is what the actual components of a Magento application are. Starting with 2.4, Magento requires access to an Elasticsearch service to power catalog searches. Other than that, we have the usual suspects for typical PHP applications. Here’s what we need:

- A web server with PHP to run the Magento application itself
- A MySQL database server
- An Elasticsearch instance
In terms of infrastructure, this is pretty straightforward. It would cleanly translate into three separate machines talking to each other via the network; in the Docker world, each of these machines becomes a container. Since we need multiple containers for our infrastructure, a tool like Docker Compose comes in handy to orchestrate their creation. So let’s get to it.
Since we want to create three separate containers that can talk to each other, we need to ask the Docker engine to create a network for them. This can be done with this self-explanatory command:
docker network create magento-demo-network
magento-demo-network is the name I’ve chosen for my network but you...
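To give an idea of where this is headed, here’s a minimal docker-compose.yml sketch for the three containers. The image tags, service names, and credentials are illustrative assumptions, not the article’s final file:

```yaml
version: "3"
services:
  web:
    image: php:8.1-apache           # assumed base image for the Magento app
    networks: [magento-demo-network]
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret   # example credential only
    networks: [magento-demo-network]
  elasticsearch:
    image: elasticsearch:7.17.9
    environment:
      discovery.type: single-node
    networks: [magento-demo-network]

networks:
  magento-demo-network:
    external: true                  # reuse the network created above
```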
What initially piqued our interest was the possibility of integrating Vue Storefront with the venerable ecommerce back-end platform Interchange, which many of our clients use. Vue Storefront’s promise of ease of integration with any ecommerce backend made us curious to see whether it would make a good modern front-end for Interchange.
Since Vue Storefront seems to be most commonly used with Magento, we decided to start our experiment with a standard Vue Storefront/Magento 2.3 proof-of-concept integration.
OK, to be honest, at the beginning we blindly expected that Vue Storefront would be a copy-and-paste front-end template solution that could fairly easily be made to work with its standard Magento backend integration. Sadly, this was not the case for us.
Before beginning our journey, let’s have a look at this diagram summarizing which components are included in a Vue Storefront integration with Magento:
At first, we wanted to see how all these components can be installed and run on a single server with modest resources.
I downloaded and installed the following:
Installation of those components is fairly easy. We started our virtual server with 4 GB of memory, which we thought should be plenty for a toy setup.
PHP and Magento 2.3 had a series of memory usage issues. While running the mage2vuestorefront indexer, Magento used most of the memory and caused...