Our Blog

Ongoing observations by End Point people

Salesforce Integration with Node.js

By Dylan Wooters
March 27, 2020

Patterned roof

Photo by Dylan Wooters, 2020

Salesforce is huge. It is currently the dominant customer relationship management (CRM) provider, accounting for around 20% of market share. Businesses are using Salesforce not only as a traditional CRM solution, but also for novel purposes. Salesforce can serve as a backend database and admin portal for custom apps, or as a reporting tool that pulls data from various systems.

This growth leads to increasing demand for Salesforce integrations. The term “Salesforce integration” may conjure up images of expensive enterprise software or dense API documentation, but it doesn’t have to be that way. You can work with Salesforce easily using Node.js and the npm package JSforce. An example of a project that might benefit from this kind of Node.js integration is an e-commerce website where order data is loaded to and from Salesforce for order fulfillment, tracking, and reporting.

In this post we’ll cover how to connect to Salesforce using JSforce, the basics of reading and writing data, as well as some advanced topics like working with large amounts of data and streaming data with Socket.IO.

Setting Up

You’ll first want to install Node.js on your local machine, if you haven’t done so already.

Next, create your Node app. This will vary with your requirements. I often use Express to build a REST API for integration purposes. Other times, if I am routinely loading data into Salesforce, I will create Node scripts and schedule them using cron. For the purposes of this post, we will create a small Node script that can be run on the command line.

Create a new directory for your project, and within that directory, run npm init to generate your package.json file. Then install JSforce with npm install jsforce.

Finally, create a file named script.js, which we will run on the command line for testing. To test the script at any time, simply navigate to your app’s directory and run node script.js.
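Putting those steps together, the setup looks something like this (the project directory name is just an example):

mkdir sf-integration && cd sf-integration
npm init -y          # generate package.json, accepting the defaults
npm install jsforce  # pull in the Salesforce client library
touch script.js      # the script we'll run with `node script.js`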

At the top of the script, require jsforce, as well...
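A minimal version of the finished script might look like this sketch (username/password login is assumed, and the credential environment variable names are placeholders):

const jsforce = require('jsforce');

const conn = new jsforce.Connection({
  loginUrl: 'https://login.salesforce.com',
});

async function main() {
  // With username/password auth, append your security token to the password.
  await conn.login(process.env.SF_USERNAME, process.env.SF_PASSWORD);

  // SOQL reads much like SQL; query a few Accounts as a smoke test.
  const result = await conn.query('SELECT Id, Name FROM Account LIMIT 5');
  console.log(`Fetched ${result.totalSize} accounts`);
  result.records.forEach((r) => console.log(r.Id, r.Name));

  await conn.logout();
}

main().catch(console.error);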


nodejs javascript integration

An Introduction to webpack 4: Setting Up a Modern, Modular JavaScript Front-End Application

By Kevin Campusano
March 26, 2020

Banner

Image taken from https://webpack.js.org/

I’ve got a confession to make: Even though I’ve developed many JavaScript-heavy, client side projects with complex build pipelines, I’ve always been somewhat confused by the engine that drives these pipelines under the hood: webpack.

Up until now, when it came to setting up a build system for front-end development, I always deferred to some framework’s default setup or some recipes discovered after some Googling or StackOverflow-ing. I never really understood webpack at a level where I felt comfortable reading, understanding and modifying a config file.

This “learn enough to be effective” approach has served me well so far, and it works great for getting something working while spending time efficiently. When everything works as it should, that is. The approach starts to fall apart when weird, more obscure issues pop up and you don’t know enough about the underlying system to form a good idea of what could’ve gone wrong. That can lead to frustrating Googling sessions accompanied by a healthy dose of trial and error. Ask me how I know...

Well, all that ends today. I’ve decided to go back to basics with webpack and learn about the underlying concepts, components and basic configuration. Spoiler alert: it’s all super simple stuff.

Let’s dive in.

The problem that webpack solves

webpack is a module bundler. That means that its main purpose is taking a bunch of disparate files and “bundling” them together into single, aggregated files. Why would we want to do this? Well, for one, to be able to write code that’s modular.

Writing modular code in JavaScript that runs in a browser is not as easy as it is in other languages or environments. Traditionally, the way to achieve some modularity in the web front-end has been to include separate scripts via multiple <script> tags within HTML files. This approach comes with its own host of problems, like the order in which the scripts are...
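webpack addresses this by following the import graph itself and emitting a bundle. A minimal webpack 4 configuration can be as small as this sketch (the entry and output paths are illustrative):

const path = require('path');

module.exports = {
  // Start from this module and follow its import graph.
  entry: './src/index.js',
  output: {
    // Emit a single aggregated file that one <script> tag can load.
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
  mode: 'development',
};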


development javascript webpack babel

Web Projects for a Rainy Day

By Elizabeth Garrett Christensen
March 25, 2020

raindrops on a plant

Image by Yellowstone NPS on Flickr

With the COVID-19 quarantine disrupting life for many of us, I thought I’d put together a list of things you can do with your website on a rainy day. These are things to keep your business moving even if you’re at home and some of your projects are stuck waiting on things to reopen. If you’re looking for some useful things to do to fill your days over the next few months, this post is for you!

Major Version Updates

Make a list of your entire stack, from OS to database to development frameworks. Note the current version and research the current supported versions. I find Wikipedia pages to be fairly reliable for this (e.g. https://en.wikipedia.org/wiki/CentOS). Ok, so what things need to be updated, or will need to be in the next year? Start on those now and use some downtime to get ahead of your updates.

Sample of a client’s stack review

| Software | Purpose | Our version | Release date | End of support | Next update | Newest version | Notes |
|---|---|---|---|---|---|---|---|
| CentOS | OS for e-commerce server | 7 | July 2014 | June 2024 | Not imminent | 8 | https://wiki.centos.org/About/Product |
| Nginx | Web server | 1.16.0 | March 2020 | Unclear | Not imminent | 1.16.1 | https://nginx.org/ |
| PostgreSQL | Database server | 9.5.20 | January 2016 | Feb 2020 | Medium term, to version 11 | 12 | https://www.postgresql.org/support/versioning/ |
| Rails | App framework for store | 5.1 | February 2017 | Current | Long term, to version 6 | 6 | https://rubygems.org/gems/spree/versions |
| Elasticsearch | Search platform for product import/search | 5.6.x | September 2017 | March 2019 | Immediate, to version 6.8 | 7.4 | https://www.elastic.co/support/eol |
| WordPress | Info site | 5.2.3 | September 2019 | | | | |

optimization development seo reporting testing

What is SharePoint?

By Dan Briones
March 25, 2020

Web servers

Image by Taylor Vick

People often ask me about SharePoint, Microsoft’s browser-based collaboration platform which allows users to upload and share all kinds of documents, images, messages, and more. The product has nearly two decades of history and there are still many who don’t know much about it.

The SharePoint platform has grown over those years, and its capabilities have expanded in such a way that it is often quickly dismissed from consideration out of fear of the complexity of its implementation and the cost of deployment. These fears may be unfounded, however. Especially if you are already on Office 365, SharePoint may be included in your plan.

SharePoint was designed as a framework for creating and sharing content on the web without the need to write code. Its purpose was to allow everyone in the organization to collaborate without any specific programming skills. The framework grew over time, adding many different content types and integrations with other frameworks, increasing the effectiveness of an organization’s work product, intellectual property, and communications.

Flavors of SharePoint

There are two ‘flavors’ of SharePoint. You can use Microsoft’s cloud-based service or you can host your own on-premises server farm. But I suspect Microsoft’s preference is to wrangle organizations into the cloud, as seen in Microsoft’s SharePoint 2019 online documentation, which casually omits references to the on-premises server product. Microsoft offers an inexpensive per-user SharePoint cloud service license for those organizations that don’t want to use Office 365’s other offerings.

On the other hand, on-premises SharePoint Server licensing is very expensive, especially if you wish to design for high availability and create a well-balanced SharePoint server farm. This requires CALs (Client Access Licenses) as well. But the cloud licensing model is very attractive in pricing, especially if you are planning to move your organization’s Exchange email...


tools

Serialization and Deserialization Issues in Spring REST

By Kürşat Kutlu Aydemir
March 17, 2020

Mosaic pattern

Photo by Annie Spratt

Spring Boot projects primarily use the JSON library Jackson to serialize and deserialize objects. It is especially useful that Jackson automatically serializes objects returned from REST endpoints and deserializes complex method parameters such as those annotated with @RequestBody.

In a Spring Boot project the automatically registered MappingJackson2HttpMessageConverter is usually enough and makes JSON conversion simple, but some cases call for custom configuration. Let’s go over a few good practices for those.

Configuring a Custom Jackson ObjectMapper

In Spring REST projects a custom implementation of MappingJackson2HttpMessageConverter lets you supply your own ObjectMapper, as seen below. Whatever customization you need on that ObjectMapper can then be handled through this custom converter:

public class CustomHttpMessageConverter extends MappingJackson2HttpMessageConverter {

    public CustomHttpMessageConverter() {
        // Hand the customized ObjectMapper to the converter.
        setObjectMapper(initCustomObjectMapper());
    }

    private ObjectMapper initCustomObjectMapper() {
        // Apply any custom configuration here before returning the mapper.
        ObjectMapper customObjectMapper = new ObjectMapper();
        return customObjectMapper;
    }

    // ...
}

Additionally, some MappingJackson2HttpMessageConverter methods, such as writeInternal, can be useful to override in certain cases. I’ll give a few examples in this article.

In Spring Boot you also need to register a custom MappingJackson2HttpMessageConverter like below:

@Bean
MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter() {
    return new CustomHttpMessageConverter();
}

Serialization

Pretty-printing

Pretty-printing in Jackson is disabled by default. Enabling SerializationFeature.INDENT_OUTPUT in the ObjectMapper configuration turns on pretty-printed output, as in the example below. Normally a custom ObjectMapper is not necessary just to set pretty-printing. In some cases, however, like one case of mine in a recent customer project, this configuration might be necessary.
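Here is a sketch of that configuration inside the custom converter from above (SerializationFeature comes from the com.fasterxml.jackson.databind package):

private ObjectMapper initCustomObjectMapper() {
    ObjectMapper customObjectMapper = new ObjectMapper();
    // Emit indented, human-readable JSON instead of the compact default.
    customObjectMapper.enable(SerializationFeature.INDENT_OUTPUT);
    return customObjectMapper;
}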

For example, passing a URL parameter can enable pretty-printing. In this case...


json java frameworks

Consolidating Multiple SFTP Accounts Into One Master Account

By Selvakumar Arumugam
March 16, 2020

merging roads

Photo by Dan Meyers

Recently, a client implemented a data-intensive workflow to generate various reports and insights from a list of facilities. Because a significant portion of these files contain sensitive data, they needed to comply strictly with HIPAA. Ideally, facilities should be able to transfer files securely and exclusively to our server. One of the best methods of achieving this is to create individual SSH File Transfer Protocol (SFTP) accounts for each source.

SFTP account

Private SFTP accounts were established for each facility, and data was received at a designated path for each. At these individual points of contact, a third-party application picks up the data and processes it further down the pipeline. The following demonstrates how the SFTP accounts are created and configured:

  • Create a user group for SFTP accounts:
$ addgroup sftpusers
  • Configure the following settings in sshd_config (this enables an SFTP account and sets the default location as the home path):
$ vi /etc/ssh/sshd_config
...
# override default of no subsystems
Subsystem       sftp    internal-sftp...

Match Group sftpusers
    ChrootDirectory /home/%u
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand internal-sftp
  • Restart SSH server to apply changes:
$ systemctl restart ssh
  • Create an SFTP user account for a facility, with a folder on the home path to receive data (the chroot directory itself must be root-owned, so the facility is given write access only to its input folder):
# set new user name
sftpuser=the-new-username
useradd $sftpuser
usermod -g sftpusers -s /usr/sbin/nologin $sftpuser
mkdir -p /home/$sftpuser/INPUT_PATH/
chown -R root:root /home/$sftpuser
# let the facility write to its input folder inside the chroot
chown $sftpuser:sftpusers /home/$sftpuser/INPUT_PATH/

Mount multiple accounts to one account

The goal here is to point the data from many facilities to one location, but using a single account and path for multiple sites’ data could result in a breach of security and/or privacy. Mounting the receiving path of each facility’s data onto a single master account, at a “mount point” with a unique facility name, takes care of this issue. The process next...
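A sketch of that approach using a bind mount (the “master” account and “facility1” names are illustrative):

# "master" is the consolidated account; "facility1" is one per-facility account
mkdir -p /home/master/facilities/facility1
mount --bind /home/facility1/INPUT_PATH /home/master/facilities/facility1

# persist the mount across reboots
echo "/home/facility1/INPUT_PATH /home/master/facilities/facility1 none bind 0 0" >> /etc/fstab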


ssh shell security

Capturing Outgoing Email With Mock SMTP Servers

By Patrick Lewis
March 13, 2020

Mailboxes. Photo by Seattleye, used under CC BY 2.0, cropped from original.

Sending automated email to users is a common requirement of most web applications and can take the form of things like password reset emails or order confirmation invoices.

It is important for developers working in development/staging environments to verify that an application is sending email correctly without actually delivering messages to users’ inboxes. If you were testing a background task that searches an e-commerce site for abandoned shopping carts and emails users to remind them that they have not completed a checkout, you would not want to run that in development and end up repeatedly emailing live user email addresses.

A mock SMTP server is useful for development and testing because it lets you configure the email settings of your development environment almost exactly the same as you would for outgoing SMTP email in your production site. The mock SMTP server will capture all of the outbound email and allow you to review it in a web interface instead of actually delivering it to users’ inboxes.

Mock SMTP Servers

There are a variety of standalone/free and hosted/commercial options for mock SMTP servers, including:

  • MailHog (standalone/free)
  • Mailtrap (hosted/commercial)
  • Mailosaur (hosted/commercial)

The standalone/free options have been sufficient for the projects I have worked on. Some of the features offered by the hosted solutions like Mailtrap and Mailosaur may be appealing to larger development teams.

MailHog is my go-to mock SMTP server because it has a nice web interface and is extremely easy to install and configure for typical use. The standalone solutions that I have tried all work similarly; they listen for SMTP email on one port, and provide a web interface on a separate port for reviewing captured email.

Configuring a Rails Application to use MailHog

Installation and use of MailHog is very simple: download and run the mailhog executable to...
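On the Rails side, pointing Action Mailer at MailHog is a small configuration change. A sketch, assuming MailHog’s default SMTP port of 1025:

# config/environments/development.rb
Rails.application.configure do
  # Deliver all outgoing mail to the local MailHog instance;
  # captured messages appear in its web UI instead of real inboxes.
  config.action_mailer.delivery_method = :smtp
  config.action_mailer.smtp_settings = { address: 'localhost', port: 1025 }
end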


ruby rails email testing

E-commerce Client Project Management

By Greg Hanson
March 12, 2020

Banner photo by You X Ventures on Unsplash

Moving from writing code to managing the show

Many times engineers/developers make the move from development to project management. It’s a natural move; we want the folks who know the nuts and bolts of e-commerce projects to eventually manage them.

So that’s all fine and dandy, but what if you haven’t been a “manager” before?

  • How do you manage an e-commerce client?
  • How do you manage an e-commerce project?
  • How do you manage engineers/developers for an e-commerce project?

The answer to each of the above is always: “It depends.” Or maybe more familiarly for Perl developers: TIMTOWTDI.

The reason for that of course is that all of the above questions have variables that will change for every situation.

As a developer, you understand the large number of outcomes that can be introduced into an application by using a single variable. You also understand that the number of outcomes increases proportionally with the number of variables.

The same holds true for management. When you are faced with managing a project, your “variables” now move from placeholders in your code, to placeholders in your project. Where you may have assigned a variable for a “string”, “integer”, or “boolean”, you now may have a “client”, “project”, or “team of developers”.

The point here is that while variables will change from project to project, the “structure” of how you run that project can still remain consistent. Much like designing code to return consistent results while using a wide range of variables.

In order to achieve this type of consistency, the core operations that run within the project need to be reliable. Over time you will develop processes in your projects that are reliable and that you will return to time after time, project after project. To get you started, here are a few that should be at the core of any project management:

  1. Know your client.
  2. Learn the project.
  3. Know your developers.

These are 3 basic rules that should...


management clients ecommerce