Getting Started with Node.js LoopBack Connector for Couchbase


Introduction

LoopBack is the leading open source, enterprise-ready Node.js framework. It helps developers create APIs that integrate with legacy and next-gen backends while also enabling mobile and microservices architectures.

LoopBack models connect to backend systems like databases via data sources that provide create, read, update and delete (CRUD) functions through the LoopBack Juggler, a modern ORM/ODM. These data sources are backed by connectors that implement connection and data access logic using database drivers or other client APIs. Beyond databases, there are connectors for REST and SOAP APIs, NoSQL stores, messaging middleware, storage services and more, including several that StrongLoop itself maintains.

You may have used one or more of these connectors, but did you know that in addition to the ones StrongLoop maintains, there are many community contributed connectors as well?

This tutorial is the first in a series of posts that will help you get started with some of the many user contributed NoSQL connectors for LoopBack.

This tutorial will cover integrating LoopBack with Couchbase Server using the community contributed loopback-connector-couchbase.
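To give a feel for where this is headed, here's a minimal sketch of wiring up a data source for the connector through the Juggler. The settings shown (host, port, bucket) are assumptions; check the loopback-connector-couchbase README for the options it actually supports.

```js
// A minimal sketch: create a Couchbase-backed data source and attach
// a model to it. Settings are illustrative, not the connector's
// documented configuration.
var DataSource = require('loopback-datasource-juggler').DataSource;

var couchbaseDS = new DataSource('couchbase', {
  host: 'localhost',
  port: 8091,        // Couchbase's default admin/REST port
  bucket: 'default'
});

// Models attached to the data source get CRUD methods via the Juggler.
var Customer = couchbaseDS.define('Customer', {
  name: String,
  email: String
});
```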

 



How-to Cluster Node.js in Production with strong-cluster-control

Introduction

“A single instance of Node runs in a single thread”. That’s how the official documentation opens up. What this really means is that any given Node.js process can only take advantage of one CPU core on your server. If you have a processor with 6 cores, only one will really be doing all the work.

When you are writing a web application that is expected to get good traffic, it’s important to take full advantage of your hardware. There are two slightly different, yet pretty similar ways to do that:

  1. Start as many Node processes on different ports as there are CPU cores and put a load balancer in front of them.
  2. Start a Node cluster with as many workers as there are CPU cores and let Node take care of the load balancing.

There are pros and cons to each. You get a lot more control with a dedicated load balancer; however, configuring and managing it can get pretty complicated. If that's not something you want to deal with right away, you can sacrifice some control and let Node apply its basic round-robin strategy to distribute the load across your workers.

It's a very good idea to start thinking about clustering right away, even if you don't do it yet, because it forces you to design your application without shared in-process state. Get that wrong, and you're in for incredible pain when the time finally comes to begin clustering and then scale to multiple servers.

This might all sound a little convoluted, but it comes down to starting a "master" process which then spins up a specified number of "workers", typically one per CPU core. Each worker is a completely isolated Node process with its own memory and state.
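In code, that pattern looks something like this minimal sketch using Node's built-in cluster module (the port number is just an example):

```js
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Master: spin up one worker per CPU core.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  // Replace workers that die so the cluster stays at full size.
  cluster.on('exit', function(worker) {
    console.log('worker ' + worker.process.pid + ' died; forking a new one');
    cluster.fork();
  });
} else {
  // Worker: a fully isolated process with its own memory and state.
  http.createServer(function(req, res) {
    res.end('handled by worker ' + process.pid + '\n');
  }).listen(3000);
}
```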


Asynchronous Error Handling in Express with Promises, Generators and ES7

tl;dr: Callbacks have a lousy error-handling story. Promises are better. Marry the built-in error handling in Express with promises and you significantly lower the chances of an uncaught exception. Promises are native in ES6, can be used with generators, and work with ES7 proposals like async/await through compilers like Babel.

This article focuses on effective ways to capture and handle errors using error-handling middleware in Express[1]. The article also includes a sample repository of these concepts on GitHub.


First, let’s look at what Express handles out of the box and then we will look at using promises, promise generators and ES7 async/await to simplify things further.
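As a preview, here's a minimal sketch of forwarding promise rejections into Express's error-handling middleware. The wrap() and findUser() helpers are illustrative, not part of Express itself:

```js
var express = require('express');
var app = express();

// Stand-in for a real database lookup that returns a promise.
function findUser(id) {
  return Promise.resolve({ id: id, name: 'example' });
}

// Wrap a promise-returning handler so any rejection lands in next(err).
function wrap(fn) {
  return function(req, res, next) {
    fn(req, res).catch(next);
  };
}

app.get('/users/:id', wrap(function(req, res) {
  return findUser(req.params.id).then(function(user) {
    res.json(user);
  });
}));

// Express treats four-argument middleware as an error handler.
app.use(function(err, req, res, next) {
  res.status(500).json({ error: err.message });
});

app.listen(3000);
```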


Using StrongLoop Arc to Profile Memory Leaks in SailsJS

In addition to working on Node core, Express, and LoopBack, StrongLoop also maintains over 130 modules for managing the entire Node API lifecycle. All of these tools are wrapped up nicely in the Arc management UI, allowing developers to design, build, deploy, scale, and monitor Node.js applications. In this article I'll show how you can identify and track down memory usage issues (leaks) in a Sails.js application.

Wait, Sails.js?

Yep, that's one of the great things about Arc: it's built around a set of tools that can be used with any Node.js application or framework!

Too Many Mementos

I don't want to focus too much on building a Sails.js application; there are plenty of articles you can find for that! Instead, let's assume you have an application already built, but you find it isn't as performant as you want (or as it should be). One of the most common issues in any Node.js application is a memory leak. JavaScript does have garbage collection, but that alone is often not enough, precisely because of how simple it is: generally speaking, as long as you hold a reference to an object, it will not be collected!

What we need to do is be able to (1) detect when there may be a memory leak, and (2) identify what code might be causing it. Let's start with our application. I started from a generated Sails application by just running:
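```
# The app name here is just an example; pick your own.
$ sails new pets
```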

Then I created a new API endpoint (along with its model and controller) using:
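```
# Generates the model and controller; the resource name is a guess here.
$ sails generate api user
```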

Great! From there we can create a new API endpoint by adding a couple of lines to our /config/routes.js file:
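A sketch of what those routes might look like, mapping the /pet endpoints onto UserController actions (the exact paths and action names are illustrative):

```js
// config/routes.js
module.exports.routes = {
  'get /pet': 'UserController.getPets',
  'post /pet': 'UserController.addPet'
};
```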

Obviously we could have done this LOTS of ways, but just to illustrate how we can profile this application’s memory usage I’ve kept things simple. In the routes above we’ve mapped the /pet endpoints to methods on the User object’s controller. Let’s take a look at those two methods:
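Here's a sketch of what those two actions could look like; the Pet constructor is a stand-in, so imagine it holding something large:

```js
// api/controllers/UserController.js
function Pet(name) {
  this.name = name;
  this.history = new Array(10000).join('x'); // simulate a big object
}

var pets = []; // shared, in-memory, and never culled: our leak

module.exports = {
  addPet: function(req, res) {
    pets.push(new Pet(req.param('name')));
    return res.json({ total: pets.length });
  },

  getPets: function(req, res) {
    return res.json(pets);
  }
};
```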

While Sails does a decent job of generating some basic endpoints, you may find it easier to create powerful, model-driven, remote methods using LoopBack.io. Check it out!

Notice that in our addPet() method we create a new Pet object and add it to an in-memory array of pets. (It's not important what's in the Pet constructor, but keep in mind it could be a large object!) That array of pets will continue to grow for as long as the application is running and the API is being hit, and it is definitely a memory leak: when will that array ever get culled? This may seem like an obvious issue (and it is), but with more complex code, issues like this become common. This is especially true when sharing objects across modules.

Build, Deploy, Run, Rinse, Repeat

Now that we have our application ready we want to monitor its memory usage. We’ll be using Arc to do this, but first we have to start the application under the StrongLoop Process Manager. I don’t want to spend a bunch of time discussing build and deploy strategies, but that’s because there is already another great post on building and deploying! Check that article out and come back… I’ll wait. : )

Although we could package up our application in a build file and deploy that, we’ll just start the application from the command line. Of course, we need to install the StrongLoop API tool chain globally first:
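That's just two commands: the standard global npm install, then starting the app under the process manager:

```
$ npm install -g strongloop
$ slc start
```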

That second command (slc start) will start up our Sails application within strong-pm. This will look at the package.json file and determine how to start our application, which in Sails is just node app.js (this is what sails lift will do for you). And for our testing, it might be helpful to limit the Node cluster to only 1 process (strong-pm defaults to the number of cores on your machine) after starting it up (the application cluster will be automatically resized without application interruption!):
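A resize command along these lines does the trick ("default" is a placeholder for your app's service name):

```
$ slc ctl set-size default 1
```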

Now that our application is running, it's time to start Arc. This part is easy: just run slc arc from your command line! (You already have the slc command from installing the StrongLoop tool chain.) Arc should automatically open your default browser to its home page. If it doesn't, check the CLI output where you started Arc; the URL to the UI will be near the top (look for "StrongLoop Arc is running here: http://localhost:12345").

If you haven't already, you'll need to register for an account with StrongLoop and get a free license. Email [email protected] to get the license key, then execute the command provided in your project directory. You can access many features without the license, but we need it for the performance monitoring.

Now that we have Arc up and running, just click on the “Metrics” icon from the homepage. (If you’re on another page, use the dropdown menu next to the StrongLoop logo.) Now we load up our local application cluster in the UI (you could also monitor a remote application from here). Enter “localhost” in the “Hostname” field and “8701” for the “Port”, then click the “Load” button. (The StrongLoop Process Manager uses 8701 by default.)

You should see some charts come up, but they won’t be very interesting… yet.

[Screenshot: the Arc Metrics dashboard]

Jumping on the Heap

Okay, that wasn't exactly a small amount of preparation, but now we have all of the tools in place to monitor the performance of our Sails application and track down our memory leak. The second chart on our Metrics page shows heap memory usage (both the heap currently in use and the total heap size). This is where we'll first "see" our leak.

I'm going to use a simple weighted load generation script (written in Node) to hit our Sails API and create a bunch of pets. After running it for a while, you'll start to see something like the chart below. Notice how the heap in use continues to grow for a while, then mostly clears, but the total heap continues to grow. This pattern often identifies a slow memory leak. Our next task is to determine what might be causing it!

[Chart: heap in use rising and falling while the total heap steadily climbs]
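If you don't have a load tool handy, a minimal stand-in for such a script might look like this (port 1337 is Sails' default; the /pet path matches the routes sketched earlier):

```js
var http = require('http');

setInterval(function() {
  // Weight toward POSTs, since those are what allocate new Pets.
  var method = Math.random() < 0.8 ? 'POST' : 'GET';
  var req = http.request(
    { host: 'localhost', port: 1337, path: '/pet', method: method },
    function(res) { res.resume(); } // drain the response
  );
  req.on('error', function() {}); // ignore connection errors under load
  req.end();
}, 10);
```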

Once you’re done looking at all the useful charts, head to the “Profiler” page (using the dropdown in the top navigation bar). Here we’ll need to reload the process manager with “localhost” and “8701” again. After the UI is ready, click on the “Heap Snapshot” radio button and then the “Take Snapshot” button. This might take a little while, but when finished you’ll see a link to the snapshot in the left sidebar: click on it!

We're mostly concerned with the number of objects right now, but the shallow and retained sizes will obviously matter greatly in the end. Let's sort by "Object Count" (descending) first. Other than global object constructor pointers (things like "(array)" and "(string)"), our "Pet" object is pretty high on that list… and it shouldn't be. That's our first indicator of a problem!

[Screenshot: a heap snapshot in Arc, sorted by object count]

When we click on one of the objects under the “Pet” top level item we’ll get some extended information in the “Retainers” pane below. This shows us that the object in question is retained by a reference in the pets array. By expanding the context object of this array we can also see that this pets array is held within (among other places) the getPets() function context!

[Screenshot: the Retainers pane for a Pet object]

Another helpful feature is Arc's "Containment" view, which you can select from the dropdown menu at the top of the heap profile visualization (it defaults to "Summary"). This lets you look at objects as they are referenced from roots (objects live in scope, or reachable from another live object). If objects are still referenced from roots after transactions complete and function scopes are torn down, they will never be garbage collected! These retained objects pile up and cause memory issues.

If you want to learn more about reading heap snapshots I would recommend Addy Osmani’s blog post on this topic!

What’s Next?

It looks like we've tracked down our leak, but don't stop there! There are a lot of other metrics that can be tracked in Arc (items in the event loop, HTTP and DB calls, and CPU usage, too!). I encourage you to take some time to explore these tools before your application goes into production! Do some load testing and see how things go; don't get caught off guard by production memory leaks!


New StrongLoop Process Manager Features for Running Node.js in Production Including Docker Support

We are excited to announce the immediate availability of a new version of the StrongLoop Process Manager with enhanced remote and local run capabilities, Docker support, Nginx load balancing and improved security.


What is StrongLoop Process Manager? It's an enterprise-grade production runtime process manager for Node.js. StrongLoop Process Manager is built to handle both the vertical and horizontal scaling of your Node applications with ease.

Key features include: multi-host remote deployment, SSH/auth support, Docker container and hub support, zero downtime with soft and hard starts, clustering and on-demand resizing, plus automatic Nginx load balancer configuration, all of which can be managed with a CLI or GUI.

Vertical Scaling – Maximizing Each Unit of Scale

If you are a Node developer like me, you've already discovered that applications start as a single process running on one core of their host. On a four-core host, you're obviously not taking advantage of all the available compute resources if you only utilize one of the four cores. In this scenario, StrongLoop Process Manager will automatically cluster your Node application, creating a master process and worker processes, with one worker per core by default. This is what's referred to as "vertically scaling" your app: making the most of a single host as a unit of scale.

[Diagram: vertical scaling, clustering one worker per core on a single host]
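Once an app is running under the process manager, you can inspect and resize its cluster on demand from the CLI (again, "default" is a placeholder for your service name):

```
$ slc ctl status             # show the master and its workers
$ slc ctl set-size default 4
```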
