Our Blog

Ongoing observations by End Point people

Creating a Messaging App Using Spring for Apache Kafka, Part 1

By Kürşat Kutlu Aydemir
April 8, 2020

Photo by Click and Learn Photography on Unsplash

This article is part of a series.

Spring is a popular Java application framework. Apache Kafka is a fault-tolerant, fast, and horizontally scalable distributed stream-message broker. Spring for Apache Kafka applies the overall concepts of Spring to Java applications based on Kafka.

Since Kafka can establish a fast and fault-tolerant streaming data pipeline, it can be used as an orchestrator. In this article I’ll explain how to create a spring-kafka project, add dependencies, and use Kafka to create a messaging app.

Initialize Spring project

Spring projects can be built from scratch using Spring Initializr. I like to keep the default options. Most Spring projects use Maven. I set the group ID as com.endpoint and the artifact as SpringKafkaMessaging, which makes the base package name com.endpoint.SpringKafkaMessaging.
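In the generated pom.xml those choices appear as the project coordinates, and the spring-kafka dependency the series relies on has the Maven coordinates below (a sketch; the version is managed by Spring Boot):

<groupId>com.endpoint</groupId>
<artifactId>SpringKafkaMessaging</artifactId>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>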

Spring Initializr

When we are done with the initial project setup we press the “GENERATE” button to download an empty Spring Boot project in a zip file. You can then use your favorite IDE to open and start developing your project. I prefer Eclipse for Java projects. Here’s what it looks like when I open the project up:

Eclipse

I won’t address detailed configuration or adding dependencies of Spring and Maven projects in this post. If you are not familiar with Spring and Maven, I recommend that you have a look at the Spring documentation first.

Design and architecture

Before adding the dependencies, including Kafka, we need to make a high-level design of this simple project and figure out how to proceed with development. Messaging apps seem simple at first glance but the architecture behind them can be quite complex.

There are several technology stacks to choose from. Which base protocol we choose (XMPP, SIP, or WebSocket) depends on the app’s aim. Sometimes multiple protocols can be used and interconnected to provide more features; XMPP is mostly used for chatting, while SIP is designed for VoIP and media...


spring java frameworks kafka spring-kafka-series

Installing Ubuntu 18.04 to a different partition from an existing Ubuntu installation

By Bharathi Ponnusamy
April 6, 2020

Clean setup

Photo by Patryk Grądys on Unsplash

Our Liquid Galaxy systems are running on Ubuntu 14.04 LTS (Trusty). We decided to upgrade them to Ubuntu 18.04 LTS (Bionic) since Ubuntu 14.04 LTS reached its end of life on April 30, 2019.

Upgrading from Ubuntu 14.04 LTS

The recommended way to upgrade from Ubuntu 14.04 LTS is to first upgrade to 16.04 LTS, then to 18.04 LTS, which will continue to receive support until April 2023:

14.04 LTS → 16.04 LTS → 18.04 LTS

Ubuntu has LTS → LTS upgrades, allowing you to skip intermediate non-LTS releases, but we can’t skip intermediate LTS releases; we have to go via 16.04, unless we want to do a fresh install of 18.04 LTS.
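For reference, each hop in that chain would be done with Ubuntu’s release upgrader, along these lines (a sketch; run once per hop):

apt-get update && apt-get -y dist-upgrade
do-release-upgrade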

For a little more longevity, we decided to do a fresh install of Ubuntu 18.04 LTS. Not only is this release supported into 2023 but it will offer a direct upgrade route to Ubuntu 20.04 LTS when it’s released in April 2020.

Installing Clean Ubuntu 18.04 LTS from Ubuntu 14.04 LTS

Install debootstrap

The debootstrap utility installs a very minimal Debian-based system into a subdirectory of an existing system. You don’t need an installation CD for this; you only need access to the corresponding distribution’s package repository (e.g. Debian or Ubuntu).

apt-get update
apt-get -y install debootstrap

Creating a new root partition

Create a logical volume of size 12G and format it with an ext4 filesystem:

lvcreate -L 12G -n ROOT_VOLUME ROOT_VG
mkfs.ext4 /dev/ROOT_VG/ROOT_VOLUME

Mounting the new root partition

Mount the partition at /mnt/root18. This will be the root (/) of your new system.

mkdir -p /mnt/root18
mount /dev/ROOT_VG/ROOT_VOLUME /mnt/root18

Bootstrapping the new root partition

Debootstrap can download the necessary files directly from the repository. You can substitute any Ubuntu archive mirror for ports.ubuntu.com/ubuntu-ports in the command example below. Mirrors are listed here.

Replace $ARCH below with your architecture: amd64, arm64, armhf, i386, powerpc, ppc64el, or...
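The command itself is cut off above, but a debootstrap invocation consistent with these steps would look something like this (a sketch, not the post’s exact command):

debootstrap --arch $ARCH bionic /mnt/root18 http://ports.ubuntu.com/ubuntu-ports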


linux ubuntu update sysadmin devops chef

Magento 2: Creating a custom module

By Juan Pablo Ventoso
April 1, 2020

Bridge with wires

Photo by Babatunde Olajide, cropped from original

A Magento module is a set of classes and routines that will depend on and interact with other Magento classes in order to add a specific feature to a Magento application. While a theme is oriented toward the front end and user experience, a module is oriented toward backend logic and application flow.

We will need to create a custom module if we want to add or change logic at a level where Magento doesn’t provide a setting or option for it. For example, if our business has a specific feature or set of requirements that is not common in the market, a module can fill that gap for us.

Creating a basic Magento 2 module

Creating a simple module in Magento 2 is not that hard. We will need to accomplish the following tasks:

  • Create a new directory for the module
  • Create a registration.php script
  • Create an etc/module.xml information file
  • Install the new module

Creating a new directory for the module

Where should the new directory for our module be placed? We have two options to choose from:

  • app/code/{vendor}/
  • vendor/{vendor}/

If your module is intended for a specific website you’re working on, you can use the first option. If you’re creating a module with the intention of it being used on several websites, it’s best to choose the second option. We’ll use the first for this example.

Let’s create a directory named EndPoint (our vendor name) with a subdirectory inside it, MyModule:

cd {website_root}
mkdir -p app/code/EndPoint/MyModule

Creating the registration.php script

The registration.php file tells Magento to register the new module under a specific name and location. Let’s create a file named app/code/EndPoint/MyModule/registration.php with the following content:

<?php
\Magento\Framework\Component\ComponentRegistrar::register(
    \Magento\Framework\Component\ComponentRegistrar::MODULE,
    'EndPoint_MyModule',
    __DIR__
);

We’re telling Magento that our module will...
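The excerpt is cut off here, but the next file from the checklist above is etc/module.xml; a minimal sketch (the setup_version is illustrative):

<?xml version="1.0"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework:Module/etc/module.xsd">
    <module name="EndPoint_MyModule" setup_version="1.0.0"/>
</config>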


magento php ecommerce

Salesforce Integration with Node.js

By Dylan Wooters
March 27, 2020

Patterned roof

Photo by Dylan Wooters, 2020

Salesforce is huge. It is currently the dominant customer relationship management (CRM) provider, accounting for around 20% of market share. Businesses are using Salesforce not only as a traditional CRM solution, but also for novel purposes. Salesforce can serve as a backend database and admin portal for custom apps, or as a reporting tool that pulls data from various systems.

This growth leads to increasing demand for Salesforce integrations. The term “Salesforce integration” may conjure up images of expensive enterprise software or dense API documentation, but it doesn’t have to be that way. You can work with Salesforce easily using Node.js and the npm package JSforce. An example of a project that might benefit from this kind of Node.js integration is an e-commerce website where order data is loaded to and from Salesforce for order fulfillment, tracking, and reporting.

In this post we’ll cover how to connect to Salesforce using JSforce, the basics of reading and writing data, as well as some advanced topics like working with large amounts of data and streaming data with Socket.IO.

Setting Up

You’ll first want to install Node.js on your local machine, if you haven’t done so already.

Next, create your Node app. This will vary with your requirements. I often use Express to build a REST API for integration purposes. Other times, if I am routinely loading data into Salesforce, I will create Node scripts and schedule them using cron. For the purposes of this post, we will create a small Node script that can be run on the command line.

Create a new directory for your project, and within that directory, run npm init to generate your package.json file. Then install JSforce with npm install jsforce.

Finally, create a file named script.js, which we will run on the command line for testing. To test the script at any time, simply navigate to your app’s directory and run node script.js.

At the top of the script, require jsforce, as well...
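The excerpt is truncated, but a minimal script.js along these lines shows the idea (the username, password, and query are placeholders):

const jsforce = require('jsforce');

const conn = new jsforce.Connection({
    loginUrl: 'https://login.salesforce.com'
});

// Log in with your username and your password + security token,
// then run a simple SOQL query
conn.login('user@example.com', 'passwordSECURITYTOKEN')
    .then(() => conn.query('SELECT Id, Name FROM Account LIMIT 5'))
    .then(result => {
        console.log('Total records:', result.totalSize);
        result.records.forEach(record => console.log(record.Name));
    })
    .catch(err => console.error(err));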


nodejs javascript integration

An Introduction to webpack 4: Setting Up a Modern, Modular JavaScript Front-End Application

By Kevin Campusano
March 26, 2020

Banner

Image taken from webpack.js.org

I’ve got a confession to make: Even though I’ve developed many JavaScript-heavy, client side projects with complex build pipelines, I’ve always been somewhat confused by the engine that drives these pipelines under the hood: webpack.

Up until now, when it came to setting up a build system for front-end development, I always deferred to some framework’s default setup or some recipes discovered after some Googling or StackOverflow-ing. I never really understood webpack at a level where I felt comfortable reading, understanding and modifying a config file.

This “learn enough to be effective” approach has served me well so far, and it works great for getting something working while spending time efficiently. When everything works as it should, that is. The approach starts to fall apart when weird, more obscure issues pop up and you don’t know enough about the underlying system’s concepts to get a good idea of what could’ve gone wrong, which can lead to frustrating Googling sessions accompanied by a healthy dose of trial and error. Ask me how I know...

Well, all that ends today. I’ve decided to go back to basics with webpack and learn about the underlying concepts, components and basic configuration. Spoiler alert: it’s all super simple stuff.

Let’s dive in.

The problem that webpack solves

webpack is a module bundler. That means that its main purpose is taking a bunch of disparate files and “bundling” them together into single, aggregated files. Why would we want to do this? Well, for one, to be able to write code that’s modular.

Writing modular code is not as easy in JavaScript running in a browser as it is in other languages or environments. Traditionally, the way to achieve modularity in the web front end has been to include separate scripts via multiple <script> tags within HTML files. This approach comes with its own host of problems, like the order in which the scripts are included...
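To make “bundling” concrete, here is a sketch of a minimal webpack configuration (the paths and file names are illustrative, not from this post): it names one entry module and one output bundle:

// webpack.config.js
const path = require('path');

module.exports = {
    entry: './src/index.js',
    output: {
        filename: 'bundle.js',
        path: path.resolve(__dirname, 'dist')
    }
};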


development javascript webpack babel

Web Projects for a Rainy Day

By Elizabeth Garrett Christensen
March 25, 2020

raindrops on a plant

Image by Yellowstone NPS on Flickr

With the COVID-19 quarantine disrupting life for many of us, I thought I’d put together a list of things you can do with your website on a rainy day. These are things to keep your business moving even if you’re at home and some of your projects are stuck waiting on things to reopen. If you’re looking for some useful things to do to fill your days over the next few months, this post is for you!

Major Version Updates

Make a list of your entire stack, from OS to database to development frameworks. Note the current version of each and research the currently supported versions. I find Wikipedia pages to be fairly reliable for this (e.g. https://en.wikipedia.org/wiki/CentOS). Then ask: what needs to be updated now, or will need to be in the next year? Start on those items now and use some downtime to get ahead of your updates.

Sample of a client’s stack review

Software | Purpose | Our version | Release date | End of support | Next update | Newest version | Notes
CentOS | OS for e-commerce server | 7 | July 2014 | June 2024 | Not imminent | 8 | https://wiki.centos.org/About/Product
Nginx | Web server | 1.16.0 | March 2020 | Unclear | Not imminent | 1.16.1 | https://nginx.org/
PostgreSQL | Database server | 9.5.20 | January 2016 | Feb 2020 | Medium term, to version 11 | 12 | https://www.postgresql.org/support/versioning/
Rails | App framework for store | 5.1 | February 2017 | Current | Long term, to version 6 | 6 | https://rubygems.org/gems/spree/versions
Elasticsearch | Search platform for product import/search | 5.6.x | September 2017 | March 2019 | Immediate, to version 6.8 | 7.4 | https://www.elastic.co/support/eol
WordPress | Info site | 5.2.3 | September 2019

optimization development seo reporting testing

What is SharePoint?

By Dan Briones
March 25, 2020

Web servers

Image by Taylor Vick

People often ask me about SharePoint, Microsoft’s browser-based collaboration platform which allows users to upload and share all kinds of documents, images, messages, and more. The product has nearly two decades of history and there are still many who don’t know much about it.

The SharePoint platform has grown over those years, but its capabilities have expanded in such a way that it is often quickly dismissed from consideration out of fear of the complexity of its implementation and the cost of deployment. These fears may be unfounded, however: especially if you are already on Office 365, SharePoint may be included in your plan.

SharePoint was designed as a framework to create and share content on the web without the need to write code. Its purpose was to allow everyone in the organization to collaborate without any specific programming skills. The framework grew over time, adding many different types of content and interactions with other frameworks, increasing the effectiveness of any organization’s work product, intellectual property, and communications.

Flavors of SharePoint

There are two ‘flavors’ of SharePoint: you can use Microsoft’s cloud-based service, or you can host your own on-premises server farm. But I suspect Microsoft’s preference is to wrangle organizations into the cloud, as seen in Microsoft’s SharePoint 2019 online documentation, which casually omits references to the on-premises server product. Microsoft offers an inexpensive per-user SharePoint cloud service license for those organizations that don’t want to use Office 365’s other offerings.

On the other hand, on-premises SharePoint Server licensing is very expensive, especially if you wish to design for high availability and create a well-balanced SharePoint server farm. This requires CALs (Client Access Licenses) as well. But the cloud licensing model is very attractive in pricing, especially if you are planning to move your organization’s Exchange email...


tools

Serialization and Deserialization Issues in Spring REST

By Kürşat Kutlu Aydemir
March 17, 2020

Mosaic pattern

Photo by Annie Spratt

Spring Boot projects primarily use the JSON library Jackson to serialize and deserialize objects. It is especially useful that Jackson automatically serializes objects returned from REST APIs and deserializes complex-typed parameters such as those annotated with @RequestBody.

In a Spring Boot project the automatically registered MappingJackson2HttpMessageConverter is usually enough and makes JSON conversion simple, but some cases need custom configuration. Let’s go over a few good practices for them.

Configuring a Custom Jackson ObjectMapper

In Spring REST projects, extending MappingJackson2HttpMessageConverter lets you supply a custom ObjectMapper, as seen below. Whatever customization you need to apply to the ObjectMapper can be handled by this custom converter:

public class CustomHttpMessageConverter extends MappingJackson2HttpMessageConverter {

    public CustomHttpMessageConverter() {
        // Hand the customized ObjectMapper to the converter
        setObjectMapper(initCustomObjectMapper());
    }

    private ObjectMapper initCustomObjectMapper() {
        ObjectMapper customObjectMapper = new ObjectMapper();
        // Apply any custom configuration to the mapper here
        return customObjectMapper;
    }

    // ...
}

Additionally, some MappingJackson2HttpMessageConverter methods, such as writeInternal, can be useful to override in certain cases. I’ll give a few examples in this article.

In Spring Boot you also need to register a custom MappingJackson2HttpMessageConverter like below:

@Bean
MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter() {
    return new CustomHttpMessageConverter();
}

Serialization

Pretty-printing

Pretty-printing in Jackson is disabled by default. Enabling SerializationFeature.INDENT_OUTPUT in the ObjectMapper configuration turns on pretty-printed output, as in the example below. Normally a custom ObjectMapper is not necessary just to set the pretty-print configuration; in some cases, however, like one I hit in a recent customer project, it is.
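A sketch of that configuration, building on the initCustomObjectMapper() method shown earlier (SerializationFeature comes from com.fasterxml.jackson.databind):

private ObjectMapper initCustomObjectMapper() {
    ObjectMapper customObjectMapper = new ObjectMapper();
    // Emit indented, human-readable JSON instead of the compact default
    customObjectMapper.enable(SerializationFeature.INDENT_OUTPUT);
    return customObjectMapper;
}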

For example, passing a URL parameter can enable pretty-printing. In this case...


json java frameworks spring