Our Blog

Ongoing observations by End Point people

How to set up your Ruby on Rails development environment in Windows 10 Pro with Visual Studio Code and Windows Subsystem for Linux

By Kevin Campusano
April 4, 2019

There’s one truth that I quickly discovered as I went into my first real foray into Ruby and Rails development: Working with Rails in Windows sucks.

In my experience, there are two main roadblocks when trying to do this. First: RubyInstaller, the most mainstream method for getting Ruby on Windows, is not available for every version of the interpreter. Second: I’ve run into issues while compiling native extensions for certain gems. One of these gems is, surprisingly, sqlite3, a gem that’s needed to even complete the official Getting Started tutorial over on guides.rubyonrails.org.

In this post, I’m going to be talking about how to avoid these pitfalls by setting up your development environment using the Windows Subsystem for Linux on Windows 10 Pro. You can jump to the summary at the bottom of the article to get a quick idea of what we’re going to do over the next few minutes.

Anyway, I’ve since learned that the vast majority of the Ruby and Rails community uses either macOS or some flavor of Linux as their operating system of choice.

Great, but what is a Windows guy like me to do under these circumstances? Well, there are a few options. Assuming they would like or need to keep using Windows as their main OS, they could virtualize some version of Linux using something like Hyper-V or VirtualBox, or go dual boot with a native Linux installation on their current hardware. These solutions can work beautifully once set up, but they have drawbacks.

Virtual machines, depending on how you set them up, and especially for graphical interfaces, can take a bit of a performance hit compared to running the OS natively, so having your entire development environment in one can get annoying after a while. The dual boot scenario gets rid of any performance degradation, but then you have to go through the hassle of restarting any time you want to work in a different OS. This can become a problem if you need to actively work on Windows-based...


ruby ruby-on-rails windows visual-studio-code

Eliminating Resolvers in GraphQL Ruby

By Patrick Lewis
March 29, 2019

In this follow-up to last month’s post, Converting GraphQL Ruby Resolvers to the Class-based API, I’m going to show how I took the advice of the GraphQL gem’s documentation on Resolvers and started replacing the GraphQL-specific Resolver classes with plain old Ruby classes to facilitate easier testing and code reuse.

The current documentation for the GraphQL::Schema::Resolver class essentially recommends that it not be used, except for cases with the specific requirements it details:

Do you really need a Resolver? Putting logic in a Resolver has some downsides:

  • Since it’s coupled to GraphQL, it’s harder to test than a plain ol’ Ruby object in your app

  • Since the base class comes from GraphQL-Ruby, it’s subject to upstream changes which may require updates in your code

Here are a few alternatives to consider:

  • Put display logic (sorting, filtering, etc.) into a plain ol’ Ruby class in your app, and test that class

  • Hook up that object with a method

I found that I was indeed having trouble testing my Resolvers that inherited from GraphQL::Schema::Resolver due to the GraphQL-specific overhead and context that they contained. Fortunately, it turned out to be a pretty simple process to convert a Resolver class to a plain Ruby class and test it with RSpec.

This was my starting point:

# app/graphql/resolvers/instructor_names.rb
module Resolvers
  # Return collections of instructor names based on query arguments
  class InstructorNames < Resolvers::Base
    type [String], null: false

    argument :semester, Inputs::SemesterInput, required: true
    argument :past_years, Integer, 'Include instructors for this number of past years', required: false

    def resolve(semester:, past_years: 0)
      term_year_range = determine_term_year_range(semester, past_years)

      CourseInstructor
        .where(term_year: term_year_range)
        .group(:first_name, :last_name)
        .pluck(:first_name, :last_name)
        .map { |name| name.join...
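Since the excerpt cuts off before showing the finished conversion, here is a minimal sketch of what the plain Ruby replacement and its spec might look like. The class layout, file paths, input shape, and the term_year_range logic are illustrative assumptions, not the post’s exact code:

# app/models/instructor_names.rb
# A plain Ruby object: no GraphQL base class, so it can be unit tested directly.
class InstructorNames
  def initialize(semester:, past_years: 0)
    @semester = semester
    @past_years = past_years
  end

  def names
    CourseInstructor
      .where(term_year: term_year_range)
      .group(:first_name, :last_name)
      .pluck(:first_name, :last_name)
      .map { |name| name.join(' ') }
  end

  private

  # Assumption: cover the requested term year plus the given number of past years.
  def term_year_range
    (@semester[:year] - @past_years)..@semester[:year]
  end
end

# spec/models/instructor_names_spec.rb
# With no GraphQL context to stub, this stays ordinary RSpec.
RSpec.describe InstructorNames do
  let(:semester) { { term: 'Fall', year: 2019 } } # hypothetical input shape

  it 'returns instructor full names as strings' do
    expect(described_class.new(semester: semester).names).to all(be_a(String))
  end
end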

ruby graphql

Thoughts on Project Estimation: The Star, the Planet, and the Habitable Zone

By Árpád Lajos
March 25, 2019

Planet in orbit. Photo by ESO on Flickr · CC BY 2.0

Whenever we are working on a feature, planning a milestone or a project, there is always a discussion about the cost and time needed. In most cases there are three main parties involved: the client, the manager, and the programmer(s). Let’s use the analogy of a star system, where the client is the star everything orbits around, the project is the planet, and the programmers are the biosphere. The closer we are to the star, the closer we are to the exact requests of the client.

Everything orbits around the star (the client), whose activity produces the energy ensuring that there is any planet at all. If the planet (the project) is too close to the star, it will burn up quickly and evaporate. But if the planet is too far away, the relationship between the star and the planet, or the client and the project (from our perspective), will freeze. There is a so-called habitable zone, where the planet, or the project, can benefit from the energy of the star.

First, the habitable zone must be found. This is a concept of the project that is close enough to the client’s desires but still achievable, so that the biosphere can coexist with the star system, shaping the planet.

Whenever we create an estimate, we need to divide the problem into two main categories. The first category is the subset of the problem where we can accurately foresee what is to be done and estimate the time needed. The second category is the subset where we have open questions. It’s good to offer the client alternatives: we can give a vague estimate for the problems with open questions, or we can do research to gather further knowledge and increase the subset of problems where we foresee the solution. In general:

T = (T(Known) + T(Unknown) + T(Unforeseen)) * HR

T(Known) is the total time we estimate for the problems for which we mostly already know the solution. T(Unknown) is the total time we estimate for...
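To make the formula concrete with purely illustrative numbers (taking HR as the overall correction multiplier applied to the sum): if T(Known) = 40 hours, T(Unknown) = 16 hours, and T(Unforeseen) is budgeted at 8 hours, then with HR = 1.25 we get T = (40 + 16 + 8) * 1.25 = 80 hours.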


tips project-management

Switching from Google Maps to Leaflet

By Juan Pablo Ventoso
March 23, 2019

Leaflet weather map example. Photo: RadSat HD

It’s no news for anyone who has Google Maps running on their websites that Google started charging for using their API. We saw it coming when, back in 2016, they started requiring a key to add a map using their JavaScript API. And on June 11, 2018, they did a major upgrade to their API and billing system.

The consequence? Any website with more than 25,000 page loads per day will have to pay. And if we are using a dynamic map (a map with custom styling and/or content), we only get roughly 28,000 free monthly page loads. We must create a billing account, even for a small website with a couple of daily visitors, hand credit card information over to Google, and monitor our stats to make sure we won’t be charged. And if we don’t do that, our map will go dark, with a “For development only” message in the background.

So what are your options? You can either pay or completely remove Google Maps from your websites. Even enterprise weather websites like The Weather Channel or Weather Underground have now replaced their Google Maps API calls with an alternative like Leaflet or MapBox (in some cases, they even gained some functionality in the process).

I have a personal weather website, and when I heard big changes were coming, I started to move away from Google Maps as well. My choice at that moment was Leaflet: It has everything you may need to build a robust tile-based map, add layers, markers, animations, custom tiles, etc. And it’s BSD-licensed open source and free.

Creating a basic map


Google Maps conversion to Leaflet can be almost seamless if the same tiles are used.

The Google Maps API and Leaflet share a similar way of doing most things, but they have some key differences we need to take into account. As a general rule, Google Maps names most of its classes and interfaces under the “google.maps” prefix, while Leaflet uses the “L” prefix instead: for example, new google.maps.Map(...) and new google.maps.Marker(...) correspond roughly to Leaflet’s L.map(...) and L.marker(...) factory functions.

The first thing we need to do is remove the Google Maps API reference from our website(s). So we need...


leaflet open-source gis maps

Running Magento 2 in Windows with XAMPP

By Juan Pablo Ventoso
March 22, 2019

Photo by Nicole De Khors · Burst, Some Rights Reserved

Magento is an open source ecommerce platform, written in PHP and relying on MySQL/​MariaDB for persistence. According to BuiltWith, Magento is the third most used platform for ecommerce websites. It began its life in 2008 with its first general release, and a major update (Magento 2) was released in 2015.

And now, more than three years later, Magento 1 is slowly dying: there won’t be any more quality fixes or security updates after June 2020, and there won’t be extended support for fixes or new payment methods. So the obvious choice will be Magento 2 from now on.

But is it fully tested yet? Is it stable enough? If we already have a website running with Magento 1, what should we do? Migrating to Magento 2 is not just hitting an “Update” button: Themes are incompatible, most extensions won’t work, and of course, there’s a big set of changes to get familiar with.

So a good approach might be to get a clean Magento 2 version deployed locally, to see what we need to do to get our website updated and running, test the backend, find where the configuration sections are located, and so on. And many business users, and even some developers like myself, have Microsoft Windows installed on our computers.

Environment setup

The environment I used for this testing installation was Windows 10 Professional. As a first step, we’ll need to make sure that localhost is published in our local hosts file:

  • Navigate to the folder %SystemRoot%\system32\drivers\etc
  • Backup the existing hosts file
  • Open a text editor with administrator rights
  • Open the hosts file
  • Make sure the first line after the commented (#) lines is 127.0.0.1 localhost and the second is ::1 localhost (see the snippet after this list)
  • Open a cmd window with administrator rights and run the command ipconfig /flushdns
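After those edits, the relevant portion of the hosts file should look something like this (commented lines abbreviated):

# ... commented lines ...
127.0.0.1 localhost
::1 localhost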

Now we’re ready to install the environment needed to run Magento. I recommend using XAMPP, a free Apache distribution for Windows that includes MariaDB, PHP, and Perl in a single package...


magento ecommerce mysql windows php

Extensible Binary Encoding with CBOR

By Matt Vollrath
March 18, 2019

CBOR is a relatively new IETF draft standard extensible binary data format. Compared to similar formats like MessagePack and BSON, CBOR was developed from the ground up with clear goals:

  1. unambiguous encoding of most common data formats from Internet standards
  2. code compactness for encoder and decoder
  3. no schema description needed
  4. reasonably compact serialization
  5. applicability to constrained and unconstrained applications
  6. good JSON conversion
  7. extensibility

RFC 7049 Appendix E, Copyright © 2013 IETF Trust, Bormann & Hoffman

In the context of data storage and messaging, most developers can relate to CBOR as a binary drop-in replacement for JSON. While CBOR doesn’t share the human readability of JSON, it can efficiently and unambiguously encode types of data that JSON struggles with. CBOR can also be extended with tags to optimize serialization beyond its standard primitives.

Encoding Binary Data

JSON is a ubiquitous data format for the web and beyond, for many good reasons, but encoding blobs of binary data is an area where JSON falters. For example, if you are designing a JSON protocol to wrap the storage or transfer of arbitrary objects, your options are:

  • Require that all input data be representable as JSON. When possible, this is a reasonable solution, but it limits the types of data that can be encoded. Notable exceptions include most popular image encodings, excluding SVG.
  • Base64 encode any binary data values to a string. This can encode any binary data, but increases the size of the data by a minimum of 1/3, incurs encoding and decoding cost, and requires magic to indicate that the string is Base64 encoded.
  • Encode the bytes as an array of numbers or a hex string. These are probably not things you should do, but it seemed worth mentioning that these techniques increase the size of the data by anywhere from 2x to 5x and also require magic to indicate that the data is really binary.

With CBOR, binary blobs of any length are supported out of the...
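To see the difference in practice, here is a minimal sketch in Ruby, assuming the cbor gem (whose CBOR.encode/CBOR.decode API mirrors that of the msgpack gem); the payload is illustrative:

require 'cbor'   # gem install cbor
require 'json'
require 'base64'

blob = Random.new.bytes(1024) # some arbitrary binary payload

# CBOR carries the bytes verbatim as a byte string, plus a few bytes of framing.
cbor_payload = CBOR.encode('image' => blob)

# JSON needs Base64, inflating the blob by about a third before any JSON overhead.
json_payload = JSON.generate('image' => Base64.strict_encode64(blob))

cbor_payload.bytesize # => 1034
json_payload.bytesize # => 1380

CBOR.decode(cbor_payload)['image'] == blob # => true, with no extra decoding magic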


performance optimization browsers scalability nodejs benchmarks

The flow of hierarchical data extraction

By Árpád Lajos
March 13, 2019

1. Problem statement

There are many cases where people want to collect data, for various purposes. One may want to compare prices, or find out how musical fashion changes over time. There are a zillion potential uses for collected data.

The old-fashioned way to do this task is to hire a few dozen people and explain to them where they should go on the web, what they should collect, how they should write a report, and how they should send it.

It is more effective to teach them all at once than to teach them separately, but even then there will be misunderstandings and costly mistakes, not to mention the limits on how much data a human can process. As a result, the industry strives to make this as automatic as possible.

This is why people write software to cope with this issue. The terms data-extractor, data-miner, data-crawler, and data-spider all refer to software that extracts data from a source and stores it at the target. If data is mined from the web, then the more specific terms web-extractor, web-miner, web-crawler, and web-spider can be used.

In this article I will use the term “data-miner”.

This article deals with the extraction of hierarchical data in a semantic manner and with the way we can parse the data obtained this way.

1.1. Hierarchical structure

A hierarchical structure involves a hierarchy; that is, we have a graph of nodes (vertices) connected by edges, but without a cycle. Specialists call this structure a forest. A forest consists of trees; in our case we have a forest of rooted trees. A rooted tree has a root node and every other node is its descendant (child of child of child …), or, if we put it inversely, the root is the ancestor of all other nodes in a rooted tree.

If we add a node to a forest and make all the trees’ roots in the forest children of this new node, the new root, then we have transformed our hierarchical structure, our forest, into a tree, as sketched below.
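A tiny Ruby sketch of that last transformation (the Node structure is just an illustration):

# A node holds a value and a list of child nodes.
Node = Struct.new(:value, :children)

# A forest is simply an array of rooted trees.
forest = [
  Node.new('products', []),
  Node.new('articles', [])
]

# A new node adopting every old root as a child turns the forest into one tree.
root = Node.new('root', forest)

root.children.map(&:value) # => ["products", "articles"]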

Common hierarchical structures a data...


data-mining machine-learning data-processing

Converting GraphQL Ruby Resolvers to the Class-​based API

By Patrick Lewis
February 28, 2019

The GraphQL gem is a great tool for any Rails developer who wants to add a full-​featured GraphQL API to their Rails applications.

I have been using GraphQL to serve an API in one of my Rails applications since late 2017 and have been very happy with the features and performance provided by the gem, but some of the domain-​specific syntax for building out my API schema never felt quite right when compared to the other Ruby code I was writing in my projects. Fortunately, the 1.8.0 release of the GraphQL Ruby gem brought with it a new default class-​based syntax while remaining compatible with existing code that predated the change.

The Class-based API guide that accompanied the changes does a good job of describing the upgrade path for developers who need to convert their existing schemas. The old .define syntax is eventually going to be removed with version 2.0 of the gem, so I was interested in converting my existing API over to the new style, both to see what benefits the newer syntax provides and to ensure that the API schema remains compatible with future releases of the gem.

The GraphQL gem provides some rake tasks like graphql:upgrade:schema and graphql:upgrade:member for automatic conversion of the older-​style .define files to the newer class-​based syntax. They worked quite well for updating my type definitions, but I also make heavy use of resolvers for containing the logic needed to return values in my GraphQL fields, and there was no way to automatically convert those files.

I found that the process of manually converting my resolvers was pretty straightforward and provided some benefits by cleaning up my QueryType file that was starting to look a little unwieldy.

Here is a before and after for comparison:

Old pre-1.8 '.define' syntax for types:

# app/graphql/types/query_type.rb
Types::QueryType = GraphQL::ObjectType.define do
  description 'Queries'

  field :instructor_names, types[types.String] do
    description 'Returns a collection of instructor...
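The excerpt ends before the “after”, but the class-based version of that query type looks roughly like this (a sketch following the gem’s class-based conventions; the field is wired to a resolver class via the resolver: option, which lets the type, nullability, and arguments live on the resolver itself):

# app/graphql/types/query_type.rb
module Types
  class QueryType < Types::BaseObject
    description 'Queries'

    # Type, nullability, and arguments now come from the resolver class.
    field :instructor_names, resolver: Resolvers::InstructorNames
  end
end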

ruby graphql