Serialization is a process used constantly by most applications today. However, there are some common misconceptions and misunderstandings about what it is and how it works; I hope to clear up a few of these in this post. I’ll be talking specifically about serialization and not marshalling, a related process.
Data serialization is the process of taking an object in memory and translating it to another format. This may entail encoding the information as a chunk of binary to store in a database, creating a string representation that a human can understand, or saving a config file from the options a user selected in an application. The reverse—deserialization—takes an object in one of these formats and converts it to an in-memory object the program can work with. This two-way process of translation is a very important part of the ability of various programs and computers to communicate with one another.
An example of serialization that we deal with every day can be found in the way we view numbers on a calculator. Computers use binary numbers, not decimal, so how do we ask one to add 230 and 4 and get back 234? Because the 230 and the 4 are deserialized to their machine representations, added in that format, and then serialized again in a form we understand: 234. To get 230 in a form the computer understands, it has to read each digit one at a time, figure out what that digit’s value is (i.e. the 2 is 200 and the 3 is 30), and then add them together. It’s easy to overlook how often this concept...
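To make this concrete, here is a minimal sketch in Python (not tied to any particular calculator) of deserializing the strings "230" and "4" digit by digit, adding them in machine form, and serializing the sum back into a human-readable string:

```python
# A sketch of "calculator" serialization: text in, machine integers inside,
# text back out. Helper names are illustrative only.

def deserialize_decimal(text):
    """Convert a decimal string to an integer, one digit at a time."""
    value = 0
    for ch in text:
        # '2' becomes 2, and each prior digit shifts up one decimal place,
        # so "230" accumulates as 2, then 23, then 230.
        value = value * 10 + (ord(ch) - ord("0"))
    return value

def serialize_decimal(value):
    """Convert a non-negative integer back to its decimal string form."""
    digits = []
    while True:
        digits.append(chr(ord("0") + value % 10))  # peel off the last digit
        value //= 10
        if value == 0:
            break
    return "".join(reversed(digits))

result = deserialize_decimal("230") + deserialize_decimal("4")
print(serialize_decimal(result))  # prints "234"
```

Real programs do the same dance with `int()` and `str()`, of course; spelling it out just shows how much translation hides behind a single keypress.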
Image from Google’s marketing platform blog
Where is your traffic coming from? What drew the traffic to your website? Which parts of your website are most visited? How do visits change over time? And how can the answers to these questions help you?
Answering such questions, and acting on the answers, is a core part of search engine optimization (SEO).
This is where Google Analytics comes in: a web analytics service that lets you track and understand your website traffic. It is a valuable tool for businesses of all sizes that are looking to grow.
Here are three ways Google Analytics can benefit your business:
This is a great way to generate more “conversions” — visitors to your website taking a desired action. Are visitors behaving the way you expected them to? Can you observe any bottlenecks in audience flow?
Bottlenecks include traffic getting stuck on one page when you want visitors to move on to a different one, like a contact page. Understanding where traffic gets stuck might point you toward the need to refresh certain web pages, which could in turn lead to more conversions.
For example, we observed that our “Deployment Automation” Expertise subpage has had a 100% bounce rate over the past three months. This is concerning: it suggests the content may not be engaging, or that there is no clear navigation path toward the end goal of a contact submission. Analytics helped us start looking at how to strengthen this subpage.
Image from Google’s marketing platform blog
Who is coming to your site, and how are they finding you? What referral sites, partner sites, media, and blog posts are directing the most traffic to your page? How can you leverage that?
In reviewing your inbound traffic, you will see some combination of the following types of traffic:
Enumerated types are a useful programming tool when dealing with variables that have a predefined, limited set of potential values. An example of an enumerated type from Wikipedia is “the four suits in a deck of playing cards may be four enumerators named Club, Diamond, Heart, and Spade, belonging to an enumerated type named suit”.
I use enumerated types in my Rails applications most often for model attributes like “status” or “category”. Rails’ implementation of enumerated types in ActiveRecord::Enum provides a way to define sets of enumerated types and automatically makes some convenient methods available on models for working with enumerated attributes. The simple syntax does belie some potential pitfalls when it comes to longer-term maintenance of applications, however, and as I’ll describe later in this post, I would caution against using this basic 1-line syntax in most cases:
enum status: [:active, :archived]
The Rails implementation of enumerated types maps values to integers in database rows by default. This can be surprising the first time it is encountered, as a Rails developer looking to store status values like “active” or “archived” would typically create a string-based column. Instead, Rails looks for a numeric column and stores the index of the selected enumerated value (0 for active, 1 for archived, etc.).
This exposes one of the first potential drawbacks of this minimalist enumerated type implementation: the stored integer values can be difficult to interpret outside the context of the Rails application. Although querying records in a Rails console will map the integer values back to their enumerated equivalents, other database clients are simply going to return the mapped integer values instead, leaving it up to the developer to look up what those 0 or 1 values are supposed to represent.
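To make the mapping concrete, here is a plain-Ruby sketch (hypothetical helper names, not ActiveRecord internals) of what the array form of enum effectively stores and returns:

```ruby
# The array form of enum stores each value's position in the array.
STATUSES = [:active, :archived].freeze

# What gets written to the integer column for a given status.
def status_to_column(status)
  STATUSES.index(status)
end

# What a raw integer read by another database client maps back to.
def status_from_column(int)
  STATUSES[int]
end

puts status_to_column(:archived) # 1 -- a non-Rails client sees only this bare integer
puts status_from_column(0)       # active
```

The lookup only works if you have the array at hand, which is exactly the problem: a DBA staring at a `status` column full of 0s and 1s has to go read the model code to interpret them.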
A larger problem that arises from defining an enum as an array of values is that...
For me, setting up a service started as a clean one-liner that used InstallUtil.exe, but as time went on, I accumulated additional steps. Adding external files & folders, setting a custom Service Logon Account, and even an SSL cert all had to be configured before the service could be used. An entire checklist was needed just to make sure the service would start successfully. That’s when I realized a proper installation method was needed. This article will go over how to make a dedicated .msi installer for a Windows Service that can do all these things and more.
Creating an installer can be tricky, because not all the available features are easy to find. In fact, the setup project itself is not included by default in Visual Studio; you need to install an extension in order to create one. But once the installer is created, we can use it to do things like:
Install to the C:\Program Files (x86) folder, as well as add custom files & folders to the installation
For .NET Core and .NET 5.0 projects, you won’t be able to add an installer class. To make a service with either .NET Core or .NET 5.0, you’d instead create a different kind of project called a Worker Service. A Worker Service differs from a traditional Windows Service in that it’s more like a console application that spawns off a worker process on a new thread. It can be configured to run as a Windows service, but doesn’t have to be. So instead of using an installer, for a Worker Service you’d publish the project to an output directory and then use the SC.exe utility to add it as a Windows service:
dotnet publish -o C:\<PUBLISH_PATH>
SC CREATE...
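Spelled out fully, the publish-and-register sequence might look like this (the service name and paths here are hypothetical; note that SC.exe requires a space after `binPath=`):

```
dotnet publish -o C:\Services\MyWorker
SC CREATE MyWorkerService binPath= "C:\Services\MyWorker\MyWorker.exe"
```

After that, the service can be started and stopped like any other with `SC START` and `SC STOP`, or from the Services management console.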
GStreamer is a library for creating media-handling components. Using GStreamer you can screencast your desktop, transcode a live stream, or write a media player application for your kiosk.
Video encoding is expensive, even with AMD’s current lineup making it more palatable. Recent Nvidia Quadro and GeForce video cards include dedicated H.264 encoding and decoding hardware as a set of discrete components alongside the GPU. The hardware is used in the popular Shadowplay toolkit on Windows and available to developers through the Nvidia Video SDK on Linux.
GStreamer includes elements as part of the “GStreamer Bad” plugin set that leverage the SDK without your having to get your hands too dirty. The plugins are not included with gst-plugins-bad in apt, and must be compiled along with supporting libraries from Nvidia. Previously this required registering with Nvidia and downloading the Nvidia Video SDK, but Ubuntu recently added apt packages providing them, a big help for automation.
The nvenc and nvdec plugins depend on CUDA 11, and the version available in apt is too old. I’ve found Nvidia’s runfile to be the most reliable installation method; if you are using the distro-maintained driver, deselect the Nvidia drivers in the runfile installer.
Install prerequisites from apt:
$ apt install nvidia-driver-460 libnvidia-encode-460 libnvidia-decode-460 libdrm-dev
Clone gst-plugins-bad source matching distro version:
$ git clone --single-branch -b 1.16.2 git://anongit.freedesktop.org/gstreamer/gst-plugins-bad
$ cd gst-plugins-bad
Compile and install plugins:
$ ./autogen.sh --with-cuda-prefix="/usr/local/cuda"
$ cd sys/nvenc
$ make
$ cp .libs/libgstnvenc.so /usr/lib/x86_64-linux-gnu/gstreamer-1.0/
$ cd ../nvdec
$ make
$ cp .libs/libgstnvdec.so /usr/lib/x86_64-linux-gnu/gstreamer-1.0/
Clear GStreamer cache and check for dependency issues using gst-inspect:
$ rm -r ~/.cache/gstreamer...
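With the plugins installed, a quick way to exercise the encoder is a test pipeline like the following (a sketch; `nvh264enc` is the H.264 encoder element these plugins provide, and the output filename is arbitrary):

```
$ gst-launch-1.0 videotestsrc num-buffers=300 ! nvh264enc ! h264parse ! mp4mux ! filesink location=test.mp4
```

If the hardware path is working, this writes a short H.264 test-pattern clip without pegging a CPU core the way a software encoder would.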
We are happy to introduce a new feature for the Cesium CZML-KML Editor: polygons and polylines geometry editing. You can now edit geometries for existing entities and move entered points during the creation process. Here is a video with a short summary of the editing process:
See our previous blog post introducing the Cesium CZML-KML Editor for further reference.
MySQL is one of the most widely used relational databases. Most PHP websites rely on MySQL for persisting their information, and it ranks among the DB-Engines top four most popular databases along with Oracle, SQL Server, and PostgreSQL.
One of its lesser-known capabilities is support for spatial data: the engine lets you save different shapes (points, lines, polygons) and query information based on intersections, distances, or overlaps. This capability was included in MySQL a long time ago, but it became easier to use starting in version 5.6, when the distance and point intersection functions were added.
Spatial data can be useful for many needs, including:
My first experience with spatial queries was for a weather website I developed that displays local alerts/warnings on a map. It uses MySQL spatial functions to return active weather alerts for a given location, or to report whether lightning has been observed near the user’s current coordinates. So far, MySQL has given me all the resources I need for such operations, with relatively good performance and without needing lots of custom code.
There are many resources available to import spatial information into our database. From the United States Census Bureau we can find a set of shapefiles with all US states and counties. The Back4App social database platform also has many datasets available to download for free.
Of course, we can also create a table ourselves that contains any kind of spatial information. In the example below, we will create a table named restaurants which will have a name and location (lat/long) geometry for each row.
CREATE TABLE restaurants (
  name VARCHAR(100),
  location GEOMETRY NOT NULL,
  SPATIAL INDEX(location...
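Once the table exists, inserting a point and querying by distance might look like this (a sketch; the restaurant name, coordinates, and 1 km radius are arbitrary, and `ST_Distance_Sphere` is available in MySQL 5.7+):

```sql
-- Store a restaurant at hypothetical coordinates
INSERT INTO restaurants (name, location)
VALUES ('Thai Palace', ST_GeomFromText('POINT(45.5 -122.6)'));

-- Find restaurants within roughly 1 km (1000 meters) of the same point
SELECT name
FROM restaurants
WHERE ST_Distance_Sphere(location, ST_GeomFromText('POINT(45.5 -122.6)')) < 1000;
```

The spatial index on `location` is what keeps queries like this fast as the table grows.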
Whether to use natural or surrogate keys is a long-debated subject of database design. I am a fan of using natural keys. I think there are even more compelling reasons to use natural keys in databases as the systems grow more complex and interdependent.
Let’s start with what we mean by “natural”. It’s not trivial to define. In today’s world of APIs, one system’s surrogate key is another’s natural key. Wikipedia defines a natural key as “a type of unique key in a database formed of attributes that exist and are used in the external world outside the database”. By this definition, the keys we get from APIs are natural keys to us. But what about the ones we generate to be used in the external world?
When applications expose keys in URLs and APIs, others start relying on them. This is where our choices become important. When all those different applications generate their own keys instead of reusing the keys they got from elsewhere, life becomes difficult for no reason.
Let’s elaborate with an example corporate database where employees are identified by their usernames and departments by their domains. Our data would look like this:
| department        | username | job                    |
| ----------------- | -------- | ---------------------- |
| sysadm.corp-x.com | hasegeli | Database Administrator |
| sysadm.corp-x.com | john     | System Administrator   |
| dep1.corp-x.com   | jane     | Developer              |
When we design this using surrogate keys, it’d look like this:
CREATE TABLE departments (
    id int NOT NULL GENERATED ALWAYS AS IDENTITY,
    domain text NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (domain)
);

CREATE TABLE employees (
    id int NOT NULL GENERATED ALWAYS AS IDENTITY,
    username text NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (username)
);

CREATE TABLE department_employees (
    id int NOT NULL GENERATED ALWAYS AS IDENTITY,
    department_id int NOT NULL,
    employee_id int NOT NULL,
    job text...
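For contrast, a natural-key version of the same schema might look like the following sketch, where the externally meaningful values themselves serve as the keys and the generated `id` columns disappear:

```sql
CREATE TABLE departments (
    domain text NOT NULL,
    PRIMARY KEY (domain)
);

CREATE TABLE employees (
    username text NOT NULL,
    PRIMARY KEY (username)
);

CREATE TABLE department_employees (
    domain text NOT NULL REFERENCES departments,
    username text NOT NULL REFERENCES employees,
    job text NOT NULL,
    -- The pair itself is the key: no separate id column needed.
    PRIMARY KEY (domain, username)
);
```

Notice that a row of `department_employees` is now self-describing: anyone reading it, inside or outside the application, sees the domain and username directly instead of two opaque integers.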