End Point Corporation’s immersive technology team has launched Vision.Space. Evolved from End Point’s Liquid Galaxy, Vision.Space lets users control touchscreens, video walls, shared laptops, and WiFi controllers, all with a swipe of a finger.
Vision.Space was created to incorporate any number of displays in a video wall configuration. Each display is maximized for resolution and shows a geometrically-adjusted viewing angle to avoid the fish-eye distortion commonly seen on conventional video walls. The platform also incorporates touchscreens placed around the room, giving participants multiple input sources for manipulating and interacting with the visualizations presented.
A “meeting director” can incorporate and guide multiple inbound video streams via an intuitive interface on an iPad or tablet controller. Directing someone’s laptop image to any screen in the room is as easy as swiping a video thumbnail into the appropriate square on the tablet.
Our new Vision.Space platform combines custom server hardware with commercial displays and touchscreens, and is an ideal cutting-edge conference room system for enterprise-level companies in commercial real estate, logistics, and travel, among other industries. Central to Vision.Space is End Point’s CMS (Content Management System), which enables clients to quickly and easily build multimedia presentations for the platform.
Vision.Space’s system architecture is based on Linux and ROS (Robot Operating System), and provides a fundamentally secure, stable, and flexible environment for companies seeking to display extensive geospatial data sets in a concise and interactive manner. Research universities, multimedia studios, and data laboratories are also well-positioned to fully leverage Vision.Space, as it allows for multiple data sources and visualization streams to be viewed simultaneously. Museums, aquariums, and science centers can utilize Vision.Space to wow their visitors by combining immersive video with interactive...
Magento is a complex piece of software, and as such, we need all the help we can get when it comes to developing customizations for it. A fully featured local development environment can provide just that help, but such environments can often be quite complex themselves. It’d be nice to have some way to completely capture all the setup for such an environment and be able to get it all up and running quickly and repeatably... even with a single command. Well, Docker containers can help with that. And they can be easily provisioned with the Docker Compose tool.
In this post, we’re going to go in depth into how to fully containerize a Magento 2.4 installation for development, complete with its other dependencies Elasticsearch and MySQL. By the end of it, we’ll have a single command that sets up all the infrastructure needed to install and run Magento, and develop for it. Let’s get started.
The first thing that we need to know is what the actual components of a Magento application are. Starting with 2.4, Magento requires access to an Elasticsearch service to power catalog searches. Other than that, we have the usual suspects for typical PHP applications. Here’s what we need:
In terms of infrastructure, this is pretty straightforward. It would cleanly translate into three separate machines talking to each other via the network, but in the Docker world, each of these machines becomes a container. Since we need multiple containers for our infrastructure, a tool like Docker Compose comes in handy to orchestrate the creation of all of them. So let’s get to it.
Since we want to create three separate containers that can talk to each other, we need to ask the Docker engine to create a network for them. This can be done with this self-explanatory command:
docker network create magento-demo-network
magento-demo-network is the name I’ve chosen for my network but you...
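To give a feel for where this is headed, here’s a minimal docker-compose.yml sketch wiring three services onto that network. This is only an outline, not the full configuration we’ll build up in this post: the image tags, credentials, and service names are placeholders, and Magento needs more PHP extensions than the stock php image provides.

```yaml
# Sketch only: three services sharing the externally created network.
version: "3"

services:
  web:
    image: php:7.4-apache      # placeholder; Magento needs extra PHP extensions
    depends_on:
      - db
      - elasticsearch

  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: magento   # placeholder credential

  elasticsearch:
    image: elasticsearch:7.9.3
    environment:
      discovery.type: single-node

networks:
  default:
    external:
      name: magento-demo-network
```

Declaring the network as external tells Compose to attach the services to the magento-demo-network we just created instead of making a new one.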
What initially piqued our interest was the possibility of integrating Vue Storefront with the venerable ecommerce back-end platform Interchange, which many of our clients use. Vue Storefront’s promise of ease of integration with any ecommerce backend made us curious to see whether it would make a good modern front-end for Interchange.
Since Vue Storefront seems to be most commonly used with Magento, we decided to start our experiment with a standard Vue Storefront/Magento 2.3 proof-of-concept integration.
OK, to be honest, at the beginning we blindly expected that Vue Storefront would be a copy/paste front-end template solution that could fairly easily be made to work with a Magento backend via its standard integration. Sadly, this was not the case for us.
Before beginning our journey, let’s summarize the Vue Storefront integration with Magento by looking at this diagram of the components involved:
At first, we wanted to see how all these components can be installed and run on a single server with modest resources.
I downloaded and installed the following:
Installation of those components is fairly easy. We started our virtual server with 4 GB of memory, which we thought should be plenty for a toy setup.
PHP and Magento 2.3 had a series of memory usage issues. While running the mage2vuestorefront indexer, Magento used most of the memory and caused...
We just installed a new Liquid Galaxy system for the Downtown San Diego Partnership in the conference room of their office in downtown San Diego (heh). As End Point continues to partner with public organizations, associations, and government agencies, the Liquid Galaxy is proving very effective for showing infrastructure projects and zoning districts and, most importantly, for driving public engagement with immersive data models. Downtown San Diego wanted to bring presentations and visualizations to a much larger canvas, and the Liquid Galaxy fit well with their open floor plan and large conference room.
Downtown San Diego is tasked with promoting the development of the downtown corridor to their members and the wider public. They can now build great presentations that fully leverage the 7 large screens: 3D models of new developments, zoning maps superimposed directly on Google Earth, and 4K videos, all programmed into sequenced scenes. Or they can simply fly through the city with a 6-axis controller and iPad.
This installation presented some unique challenges. The first was an asymmetric wall layout with a large flat wall, smaller angled wall, and an alcove that needed accommodation. The first thing we did was to go onsite and take some measurements. We also received a 2D floorplan from our client. From this floorplan we built a 3D model using Blender:
This allowed us to propose some options for screen layouts in the room, with either an asymmetric screen pattern to match the wall closely, or a symmetric/balanced pattern that would show better but come out from the wall further. The client chose the symmetric layout (of course), which then drove the second challenge: how to build out a mounting frame in a pandemic?
Our engineers put down their keyboards and picked up their circular saws. Dan Briones, an accomplished carpenter as well as our Director of Operations, designed a full mounting frame in his shop in New York, which was all completely flat-packed...
(This position has been filled.)
We are looking for a Windows systems integrator in the New York City metropolitan region to work with us.
We are an Internet technology consulting company based in NYC, with 50 employees serving many clients ranging from small family businesses to large corporations. The company turns 25 years old this year!
This is a consulting position, so excellent verbal and written communication, troubleshooting, and time management skills are required, along with a good sense of when to quickly escalate issues so they can be resolved efficiently.
You will need to have extensive experience in the Microsoft Windows ecosystem: the MS Windows OS, Windows networking, Active Directory management via Group Policies, MS Exchange Server, MS SQL Server, etc.
The more knowledge and experience you have with these, the better:
Tell us about your other skills and strengths. We’ll be interested to hear about them.
This position requires (at least once COVID-19 subsides) some work in our Manhattan office along with some on-site work at customer locations in the NYC metro region. Working remotely is also possible from time to time.
Last year I bought an old Dell Optiplex on eBay to use as a dedicated Minecraft server for my friends and me. It worked well for a while, but when my university switched to online classes and I moved home, I left it at my college apartment and was unable to fix it (or retrieve our world save) when it failed for some reason. I still wanted to play Minecraft with friends, though, so I had to figure out a solution in the meantime.
I’d previously used a basic DigitalOcean droplet as a Minecraft server, but it had suffered from lag issues, especially with more than two or three people logged in. Their $5 tier of virtual machine provides 1 GB of RAM and 1 CPU core, so it shouldn’t be too much of a surprise that it struggled to run a Minecraft server. However, more performant virtual machines cost a lot more, and I wanted to keep my solution as cheap as possible.
I mentioned this to a co-worker and he pointed out that most companies don’t actually charge for virtual machines on a monthly basis; in reality, it’s an hourly rate based on when your virtual machine instance actually exists. So, he suggested I create a virtual machine and start my Minecraft server every time I wanted to play, then shut it down and delete it when I was finished, thus saving the cost of running it when it wasn’t being used.
Of course, you could do this manually in your provider’s dev console, but who wants to manually download dependencies, copy your world over, and set up a new server every time you want to play Minecraft? Not me! Instead, I used Terraform, an open-source tool that lets you describe your desired infrastructure and then sets it up for you.
In this post, I’ll show how I got my server setup streamlined into one Terraform configuration file that creates a virtual machine, runs a setup script on it, copies my Minecraft world over with rsync, starts the Minecraft server, and adds a DNS entry for the new server.
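To give a sense of the shape of such a configuration, here’s a heavily simplified sketch using the official DigitalOcean Terraform provider. The region, size, image, and domain values are placeholders, and this omits the setup script, rsync step, and server start described above; it just shows the two core resources, a droplet and a DNS record pointing at it.

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

# The virtual machine that will run the Minecraft server.
# All values here are placeholders, not my actual configuration.
resource "digitalocean_droplet" "minecraft" {
  image  = "ubuntu-20-04-x64"
  name   = "minecraft"
  region = "nyc3"
  size   = "s-2vcpu-4gb"   # pick a size with enough RAM for your player count
}

# A DNS A record so players can connect to a stable hostname
# instead of a new IP address every time the droplet is recreated.
resource "digitalocean_record" "minecraft" {
  domain = "example.com"   # placeholder domain managed in DigitalOcean DNS
  type   = "A"
  name   = "minecraft"
  value  = digitalocean_droplet.minecraft.ipv4_address
}
```

Because Terraform tracks everything it created, `terraform destroy` tears the droplet and DNS record back down when you’re done playing, which is what keeps the hourly cost low.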
As I mentioned earlier, I’ve used DigitalOcean...
Image from Flickr user fsse8info
Recently the topic of generating random-looking coupon codes and other strings came up on internal chat. My go-to for something like that is always this solution based on Feistel networks, which I didn’t think was terribly obscure. But I was surprised when nobody else seemed to recognize it, so maybe it is. In any case here’s a little illustration of the thing in action.
Feistel networks are the mathematical basis of the ciphers behind DES and other encryption algorithms. I won’t go into details (because that would suggest I fully understand it, and there are bits where I’m hazy) but ultimately it’s a somewhat simple and very fast mechanism that’s fairly effective for our uses here.
For string generation we have two parts. For the first part, we take an integer, say the sequentially generated id primary key field in the database, and run it through a function that turns it into some other random-looking integer. Our implementation of the function has an interesting property: if you take that random-looking integer and run it back through the same function, you get the original integer back out. In other words…
cipher(cipher(n)) == n
…for any integer value of n. That one-to-one mapping essentially guarantees that the random-looking output is actually unique across the integer space. In other words, we can be sure there will be no collisions once we get to the string-making part.
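To make that involution property concrete, here is a small Python sketch of a Feistel network over 32-bit integers. The round function and its constants are arbitrary illustrations of the technique, not the exact ones the Postgres function uses; what matters is that every round uses the same round function and the halves are swapped on output.

```python
def feistel_crypt(value: int, rounds: int = 3) -> int:
    """Map a 32-bit integer to another random-looking 32-bit integer.

    Because all rounds use the same round function and the halves are
    swapped on output, applying the function twice returns the input.
    """
    # Split the 32-bit value into two 16-bit halves.
    l, r = (value >> 16) & 0xFFFF, value & 0xFFFF
    for _ in range(rounds):
        # Round function: any deterministic map from 16 bits to 16 bits works.
        f = ((1366 * r + 150889) % 714025) & 0xFFFF
        l, r = r, l ^ f
    # Swap the halves on output -- this is what makes the cipher its own inverse.
    return (r << 16) | l
```

Running a few sequential ids through it produces scattered-looking outputs, yet `feistel_crypt(feistel_crypt(n)) == n` holds for every 32-bit n, which is exactly the collision-free guarantee we want.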
The original function is based off the code on the PostgreSQL wiki with just a few alterations for clarity, and should work for any modern (or archaic) version of Postgres.
CREATE OR REPLACE FUNCTION public.feistel_crypt(value integer)
 RETURNS integer
 LANGUAGE plpgsql
 IMMUTABLE STRICT
AS $function$
DECLARE
    key numeric;
    l1 int;
    l2 int;
    r1 int;
    r2 int;
    i int := 0;
BEGIN
    l1 := (VALUE >> 16) & 65535;
    r1 := VALUE & 65535;
    WHILE i < 3 LOOP
        -- key can be any function that returns numeric between 0 and 1
        ...
Cron is the default job scheduler for the Unix operating system family. It is old and well-used infrastructure — it was first released 45 years ago, in May 1975!
On Linux, macOS, and other Unix-like systems, you can see any cron jobs defined for your current user with:

crontab -l
If nothing is printed out, your user doesn’t have any cron jobs defined.
You can see the syntax for defining the recurring times that jobs should run with:
man 5 crontab
Important in that document is the explanation of the space-separated time and date fields:
field          allowed values
-----          --------------
minute         0-59
hour           0-23
day of month   1-31
month          1-12 (or names, see below)
day of week    0-7 (0 or 7 is Sunday, or use names)

A field may contain an asterisk (*), which always stands for "first-last".
For example, to make a job run every Monday at 3:33 am in the server’s defined time zone:
33 3 * * 1 /path/to/executable
Sometimes it may be good to schedule a cron job to run at a somewhat random time: generally not truly random, but maybe at an arbitrary time within a specified time range rather than at a specific recurring interval.
This can be useful to keep simultaneous cron jobs for different users from causing predictable spikes in resource usage, or to run at a time other than the start of a new minute, since cron’s interval resolution doesn’t go smaller than one minute.
There isn’t any simple built-in way to randomize the scheduling in classic cron, but there are several ways to get it done:
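One way that works with any cron implementation is to prepend a random sleep to the command itself. A sketch follows; note that $RANDOM is a bash feature rather than part of cron, so the crontab’s SHELL must be set to bash, and the path is a placeholder:

```shell
# Crontab entry (sketch): fire Monday at 3:33 am, then wait a random
# 0-3599 seconds before actually starting the job:
#
#   SHELL=/bin/bash
#   33 3 * * 1 sleep $((RANDOM % 3600)); /path/to/executable
#
# The delay arithmetic on its own:
delay=$((RANDOM % 3600))   # $RANDOM is 0-32767, so delay is 0-3599 seconds
echo "would sleep ${delay}s before starting the job"
```

The drawback is that the cron process (and the sleep) occupies a slot for up to the full window, but for light jobs that’s rarely a problem.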
The version of cron included with Red Hat Enterprise Linux (RHEL), CentOS, and Fedora Linux is cronie. It allows us to set the variable RANDOM_DELAY for this purpose. From its manual:
The RANDOM_DELAY variable allows delaying job startups by random amount of minutes with upper limit specified by the variable. The random scaling factor is determined during the cron daemon startup so...