January 9, 2019
In 2017, End Point donated a Liquid Galaxy to The Institute of Pure and Applied Mathematics (IMPA) in Rio de Janeiro. The Institute is home to VISGRAF, a laboratory specializing in computer graphics research, including AR, VR, visualization, and computer vision.
IMPA recently formed a partnership with a leading Brazilian cultural institution, the Moreira Salles Institute (IMS). The IMS stewards a vast collection of culturally important Brazilian photography, music, literature, and art. IMS moved to collaborate with IMPA because of its core mission of promoting broad access to these historically valuable artifacts.
The head of VISGRAF, Professor Luiz Velho, views the partnership as a way of empowering Brazilian culture. “The IMS collection is invaluable, and we can do unprecedented things with it,” he said in a press release. Researchers from IMPA are working to geolocate the photos, analyze them with computer vision, improve their resolution, and enable immersive engagement with them on the Liquid Galaxy.
Professor Velho has co-authored an interesting working paper with Julia Giannella of IMPA discussing how IMPA and IMS can take advantage of the Liquid Galaxy. The paper goes into detail on how our Content Management System (CMS) can enable curators and researchers to present the IMS’ collection in novel ways. It also describes the physical setup of the Liquid Galaxy at IMS, and discusses how applications enabled for the Liquid Galaxy, like Panotour and Sketchfab, will contribute to the partnership’s work.
We are supporting their Liquid Galaxy use and look forward to our continued collaboration with this talented team of researchers.
January 8, 2019
In this blog post, I’d like to take you on a journey. We’re going to take a speech recognition project from the architecture phase through coding and training. In the end, we’ll have a fully working model. You’ll be able to take it and run the model-serving app, exposing a nice HTTP API. Yes, you’ll even be able to use it in your projects.
Speech recognition has long been one of the hardest tasks in Machine Learning. Traditional approaches involve meticulously crafting and extracting the audio features that separate one phoneme from another. To do that well, one needs a deep background in data science and signal processing. The complexity of the training process prompted teams of researchers to look for alternative, more automated approaches.
With the growing development of Deep Learning, the need for handcrafted features declined. The training process for a neural network is much more streamlined. You can feed the signals either in their raw form or as their spectrograms and watch the model improve.
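To make “feeding spectrograms” concrete, here’s a minimal sketch of how a magnitude spectrogram can be computed from raw samples. The naive DFT, the frame size, the hop length, and the lack of windowing are all simplifications of mine for illustration; a real pipeline would use an FFT library and a windowed short-time transform.

```python
import cmath

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram via a naive DFT.

    Splits the signal into overlapping frames and computes, for each
    frame, one magnitude per frequency bin up to the Nyquist bin.
    """
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        mags = []
        for k in range(frame_size // 2 + 1):
            # Correlate the frame against the k-th complex sinusoid.
            acc = sum(x * cmath.exp(-2j * cmath.pi * k * n / frame_size)
                      for n, x in enumerate(frame))
            mags.append(abs(acc))
        frames.append(mags)
    return frames
```

A pure sine at an exact bin frequency shows up as a single dominant bin with magnitude N/2, which is a handy sanity check for any implementation like this.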
Did this get you excited? Let’s start!
Let’s build a web service that exposes an API. It will accept audio signals, encoded as arrays of floating-point numbers, and return the recognized text.
Here’s a rough plan of the stages we’re going to go through:
The open-source community has a lot to thank the Mozilla Foundation for. It hosts many projects, with the wonderful, free Firefox browser at the forefront. One of its other projects, Common Voice, focuses on gathering large datasets that anyone can use in speech recognition projects.
The datasets consist of wave files and their text transcriptions. There’s no notion of time-alignment. It’s just the audio...
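Loading that kind of audio-plus-transcription data can be sketched like this. The index layout and the `path` and `sentence` column names are assumptions of mine about the release format, so adjust them to match the actual dataset files; the sketch assumes 16-bit mono wave files.

```python
import csv
import wave

def load_dataset(index_path):
    """Load (samples, transcript) pairs from a CSV index.

    Assumes each row has a 'path' column pointing at a 16-bit mono
    wave file and a 'sentence' column with the transcription.
    """
    examples = []
    with open(index_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            with wave.open(row["path"], "rb") as wav:
                frames = wav.readframes(wav.getnframes())
            # Normalize 16-bit PCM to floats in [-1.0, 1.0).
            samples = [
                int.from_bytes(frames[i:i + 2], "little", signed=True) / 32768.0
                for i in range(0, len(frames), 2)
            ]
            examples.append((samples, row["sentence"]))
    return examples
```

Since there is no time alignment, each example is simply the whole utterance paired with its whole transcription.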
January 3, 2019
I was woken up this morning. It happens every morning, true, but not usually by a phone call asking for help with a PostgreSQL database server that was running out of disk space.
It turns out that one of the scripts we’re in the process of retiring, but still had in place, got stuck in a loop and filled most of the available space with partial, incomplete base backups. So, since I’m awake, I might as well talk about Postgres backup options. I don’t mean for it to be a gripe session, but I’m tired and it kind of is.
For this particular app, since it resides partially on AWS we looked specifically at options that are able to work natively with S3. We’ve currently settled on pgBackRest. There’s a bunch of options out there, which doesn’t make the choice easy. But I suppose that’s the nature of things these days.
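For context, a pgBackRest setup backed by S3 comes down to a small configuration file plus a couple of commands. This is only an illustrative sketch: the stanza name, paths, bucket, and region here are made-up examples, and the credentials would normally come from the environment or instance roles rather than the file.

```
# /etc/pgbackrest.conf -- example values only
[global]
repo1-type=s3
repo1-path=/pgbackrest
repo1-s3-bucket=example-backup-bucket
repo1-s3-endpoint=s3.amazonaws.com
repo1-s3-region=us-east-1
repo1-retention-full=2

[main]
pg1-path=/var/lib/postgresql/11/main
```

With `archive_command = 'pgbackrest --stanza=main archive-push %p'` set in postgresql.conf, `pgbackrest --stanza=main stanza-create` initializes the repository, `pgbackrest --stanza=main --type=full backup` takes a base backup, and `pgbackrest --stanza=main restore` brings the data back, which is the part you really want to have rehearsed.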
At first we’d tried out pghoard. It looks pretty good on the tin, especially with its ability to connect to multiple cloud storage services beyond S3: Azure, Google, Swift, etc. Having options is always nice. And for the most part it works well, apart from a couple of idiosyncrasies.
We had the most trouble with the encryption feature. It didn’t have any problem on the encryption side. But for some reason, on restore, the process would hang and eventually fail without unpacking any data. Having a backup solution is a pretty important thing, but it doesn’t mean anything unless we can get the data back from it. So this was a bit of a sticking point. We probably could have figured out how to get it functioning, and at least been a good citizen and reported it upstream to get it resolved in the source. But we kind of just needed it working, and giving something else a shot was a quicker path to that goal. Sorry, pghoard devs.
The other idiosyncratic behavior that’s probably worth mentioning is that it does its own scheduling. The base backups, for instance, happen at a fixed hour interval set in the configuration...
December 21, 2018
The world is a big place, and the Internet has gotten pretty big too. There are always new projects being created, and I want to share some useful and interesting ones from my growing list:
Squoosh, hosted at squoosh.app, is an open source in-browser tool for experimenting with image compression, made by the Chrome development team.
With Squoosh you can load an image in your browser, convert it to different image file formats (JPEG, WebP, PNG, BMP) using various compression algorithms and settings, and compare the result side-by-side with either the original image or the image compressed using other options.
The screenshot above demonstrates Squoosh running in Firefox 64 on Linux. Click on it to see a larger, lossless PNG screenshot. The photo was taken by my son Phin in northern Virginia, and is a typical imperfect mobile phone photo. On the left is the original, and on the right I am showing how bad gradients in the sky can look when compressed too much—maybe a quality level of 12 (out of 100) was too low. It does make for a very compact file size, though. 😄
Squoosh’s interface has a convenient slider bar so you can compare any part of the two versions of the image side by side. You can zoom and pan the image as well.
If you want access to an amazing number of symbols in a font, check out nerdfonts.com. There you can mix and match symbols from many popular developer-oriented fonts such as Font Awesome, Powerline Symbols, Material Design, etc.
I probably should have chosen some fun symbols to demonstrate it here, but I could tell that was a rabbit hole I would not soon emerge from!
There are many public pastebins these days, but glot.io distinguishes itself by allowing you to run real code on their server in nearly 50 languages.
It offers both...
December 20, 2018
A customer asked for our help dealing with logistical nightmares they encountered during a hardware update and data center relocation project. The customer had two active data centers, and wanted to relocate one of them to a modern Tier 4 facility to improve its performance and provide redundancy for critical systems. They also wanted to consolidate or decommission equipment to reduce recurring expenses and shrink their carbon footprint. They chose a Tier 4 data center because such facilities provide across-the-board redundancy.
Enterprises typically transition to Tier 4 data centers because they offer the highest uptime guarantees and have no single points of failure. These facilities are fully redundant in terms of electrical circuits, cooling, and networking. This architecture is best able to withstand even the most serious technical incidents without server availability being affected. Tier 4 facilities also have contracts with disaster management companies that will, for example, supply fuel in the event that a natural disaster damages the power grid.
(More information about the four data center levels is at the end of this post.)
Some of the customer’s key concerns were:
1. To safely move their virtual environment to the new data center.
2. To protect assets during the relocation.
3. To completely shut down the current data center and migrate seamlessly to the new data center.
4. To update external DNS records without causing any downtime for the web applications used by their customers.
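On the DNS point, the usual tactic is to lower record TTLs well before the cutover so resolvers stop caching the old address for long, then repoint the records and raise the TTLs back afterward. The hostnames, addresses, and TTL values below are illustrative examples, not the client’s actual records:

```
; Weeks before the move: drop the TTL so resolvers re-query often
app.example.com.    300     IN  A   198.51.100.10   ; old data center

; Cutover day: repoint the record; caches expire within ~5 minutes
app.example.com.    300     IN  A   203.0.113.10    ; new data center

; After the move settles: raise the TTL back to normal
app.example.com.    86400   IN  A   203.0.113.10
```

Running both data centers in parallel through the TTL window means clients resolve to a working address whichever cached record they hold, which is what makes the switch appear seamless.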
Managing a data center relocation is challenging because there are many moving parts and variables. Typically, planning starts six months ahead of the scheduled relocation date. We seek to understand the role of each system by consulting in depth with all the teams involved. We analyze each part of the client’s tech...
December 19, 2018
We brought the Liquid Galaxy to Austin Startup Week a few weeks ago. The event brings together developers, demonstrations, and some great pub crawls over the course of the week. This year, ASW had a particular focus on VR, AR, and data visualization, which made it a great fit for us to show off the Liquid Galaxy platform. We set up our 7-screen system near the event registration, and introduced the technology to hundreds of visitors and dignitaries. We want to specifically thank our gracious hosts at Capital Factory, the main sponsor of ASW.
The highlight of this year’s conference was the US Army’s Futures Command setting up a new HQ in Austin, to be accompanied by a new working lab at Capital Factory. The Command’s mission is to find new and interesting technology and adapt it to solve issues facing soldiers in the field. We had some great conversations with many of the people involved. Of particular note, visitors enjoyed the presentations we had that combine map visualizations, 3D objects, and 4K video, all seen simultaneously. We are confident that the Liquid Galaxy can play a role in simulation training, GIS visualization, and mission planning for the relevant agencies.
The Liquid Galaxy was a great success with the pub crawl portion of ASW—we can show complex 3D models with the Unity engine, and many of the new startups are working on VR tools. Being able to see VR on a room-sized scale impressed our visitors. Here also, we anticipate some good partnerships and cooperative developments to be coming soon. The pub crawls, as with all things in Austin, were awesome.
A few more photos of the event in parting:
Visitors enjoyed exploring their own office views.
Our initial setup at ASW, showing a simulated Mars Base. The US Air Force representatives at the conference paid particular interest to these sorts of simulated environments.
December 13, 2018
We are looking for a full-time engineer to help us further support the software, infrastructure, and hardware integration for our Liquid Galaxy and related systems. Liquid Galaxy is an impressive panoramic system for Google Earth, Street View, CesiumJS, panoramic photos and videos, 3D visualizations, and other applications.
Since Liquid Galaxy is a global operation, we are looking for an engineer who can cover shifts starting at 5 PM U.S. Eastern Time Wednesday through Friday, as well as Saturday and Sunday from approximately 9 AM until 6 PM U.S. Eastern Time. These hours may have to be adjusted slightly.
End Point is a technology consulting company founded in 1995 and based in...
November 21, 2018
When Rasmus Lerdorf created PHP in the ’90s, I bet he never thought that his language would become the engine that powers much of the web, even today, 23 years later.
PHP is indeed super popular. Part of that popularity stems from the fact that it can run pretty much anywhere. You’d be hard-pressed to find a hosting solution that doesn’t support PHP out of the box, even among the cheapest ones. It is also fully functional in pretty much any version of the most widely used operating systems, so development and deployment have a very low barrier to entry.
Both as a cause and a consequence of this ubiquity and popularity, PHP boasts an expansive, active community and a rich ecosystem of documentation, tools, libraries, and frameworks. Zend Framework is a great example of the latter.
I started using Zend Framework a good while ago now, when it became clear to me that writing vanilla PHP wasn’t going to cut it anymore for moderately sized projects. I needed help, and Zend Framework extended a very welcome helping hand. Zend Framework is basically a big collection of libraries and frameworks that cover most of the needs we have as web developers building PHP applications.
While most parts of Zend Framework are standalone components that we can plug into an existing code base, picking and choosing what’s needed, there’s also the likes of Zend MVC. Zend MVC is, in its own right, a framework for developing web applications using the MVC design pattern (shocker, I know). A Zend MVC application is itself composed of several other components from Zend Framework’s library, like Zend Form (which helps with developing HTML forms), Zend DB (which helps with database access), Zend Validator (which helps with input validation), and many more.
Obviously, these are very useful within the context of web applications but they can also be valuable on their own. One could put together a PHP console app and make one’s life much easier by leveraging the features provided by these libraries...