What’s New in the LoopBack Node.js Framework – June 2015


Curious what new developments are happening with the LoopBack project? Here’s a curated selection of the most important changes that have been made to LoopBack in the past few weeks.

LoopBack Core

Enable Auth

We changed app.enableAuth to automatically set up any required models that are not already attached to the app or to a data source. The purpose of this change is to make it easier to set up authentication from code that does not use the slc loopback project scaffolding, such as unit tests.

To use this new option, just provide the name of the data source to use for these models.  For example:
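A minimal sketch, assuming a data source named db and that the option name matches the LoopBack documentation:

```javascript
var loopback = require('loopback');
var app = loopback();

// An in-memory data source to hold the built-in auth models.
app.dataSource('db', { connector: 'memory' });

// Attaches the required auth models (User, AccessToken, ACL, ...)
// to the app and to the "db" data source.
app.enableAuth({ dataSource: 'db' });
```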

Observer API

We made several improvements to the hook API, including the ability to skip other observers by calling context.end(). You can now also notify observers of multiple operations by passing an array of operation names (for example, notifyObserversOf(['op1', 'op2'])).
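A hedged sketch of both improvements; MyModel is assumed, and the exact context.end() signature may differ from this sketch:

```javascript
// Stop notifying the remaining observers for this operation early.
MyModel.observe('before save', function(ctx, next) {
  if (ctx.instance && ctx.instance.skipRemainingObservers) {
    return ctx.end();
  }
  next();
});

// Notify observers of several operations in a single call.
MyModel.notifyObserversOf(['op1', 'op2'], {}, function(err) {
  if (err) console.error(err);
});
```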

Strong Remoting

We added a couple of new features to strong-remoting and remote methods in LoopBack.

Default Status Codes

Now you can define both a status and errorStatus in the HTTP options of your remote method.

Header and Status Argument Targets

To set a header or the status code for an HTTP response, you can now set the target of a callback argument to either header or status.

Here is an example of both default status codes and argument targets.
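A hedged sketch of a remote method using both features; MyModel and the method name are assumptions:

```javascript
MyModel.remoteMethod('createThing', {
  accepts: { arg: 'data', type: 'object', http: { source: 'body' } },
  returns: [
    // Write this return value into the Location response header.
    { arg: 'Location', type: 'string', http: { target: 'header' } },
    // Write this return value as the HTTP status code.
    { arg: 'status', type: 'number', http: { target: 'status' } }
  ],
  // Default status codes used when the method does not set one itself:
  // 201 on success, 500 when the callback receives an error.
  http: { verb: 'post', path: '/things', status: 201, errorStatus: 500 }
});
```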


LoopBack Filters

We extracted the implementation of data filtering from the memory connector to a new module called loopback-filters. Using this module, you can now filter arrays of objects using the same filter syntax supported by MyModel.find(filter).  We’ll soon be converting all LoopBack modules to use loopback-filters, so it will become the common “built-in” filtering mechanism.

LoopBack supports a specific filter syntax: it’s a lot like SQL, but designed specifically to serialize safely without injection and to be native to JavaScript.  Previously, only the PersistedModel.find() method (and related methods) supported this syntax.

Here is a basic example using the new module.
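A sketch following the module's README, assuming the module exports a single filter function (the data and field names are made up):

```javascript
var applyFilter = require('loopback-filters');

var data = [
  { name: 'foo', price: 42 },
  { name: 'bar', price: 24 },
  { name: 'baz', price: 19 }
];

// The same "where" syntax accepted by MyModel.find(filter).
var filter = { where: { price: { gt: 20 } } };

console.log(applyFilter(data, filter)); // the objects with price > 20
```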

For a bit more detail, let’s say you are parsing a comma-separated value (CSV) file, and you need to output all values where the price column is between 10 and 100.  To use the LoopBack filter syntax you would need to either create your own CSV connector or use the memory connector, both of which require some extra work not related to your actual goal.

Once you’ve parsed the CSV (with some module like node-csv) you will have an array of objects like this, for example (but with, say, 10,000 unique items):
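For illustration, a tiny stand-in for that parsed output (the field names here are hypothetical):

```javascript
// Each parsed CSV row becomes a plain JavaScript object.
var data = [
  { name: 'widget', price: 19.99 },
  { name: 'gadget', price: 149.99 },
  { name: 'gizmo', price: 7.5 }
  // ...and, say, 10,000 more unique rows
];
```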

To filter the rows you could use generic JavaScript like this:
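A plain-JavaScript sketch, using hypothetical rows like the ones above:

```javascript
var data = [
  { name: 'widget', price: 19.99 },
  { name: 'gadget', price: 149.99 },
  { name: 'gizmo', price: 7.5 }
];

// Keep only the rows whose price is between 10 and 100.
var filtered = data.filter(function(row) {
  return row.price >= 10 && row.price <= 100;
});
```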

This is pretty simple for filtering, but sorting, field selection, and more advanced operations become a bit tricky.  On top of that, you are usually accepting the parameters as input; for example:
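For example, a hypothetical helper that takes the bounds as parameters:

```javascript
// min and max arrive as user input, e.g. from a query string.
function filterByPrice(rows, min, max) {
  return rows.filter(function(row) {
    return row.price >= min && row.price <= max;
  });
}

var result = filterByPrice(
  [{ price: 50 }, { price: 5 }, { price: 500 }],
  10, 100
);
```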

You can rewrite this easily as a LoopBack filter:
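A sketch of the same query expressed with loopback-filters; between is part of the standard LoopBack where syntax:

```javascript
var applyFilter = require('loopback-filters');

var rows = [
  { name: 'widget', price: 19.99 },
  { name: 'gadget', price: 149.99 }
];

// Equivalent to MyModel.find({where: {price: {between: [10, 100]}}}).
var filtered = applyFilter(rows, {
  where: { price: { between: [10, 100] } }
});
```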

Or if you just adopt the filter object syntax as user input:
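If the filter object itself comes from the client, you can apply it directly; req, res, and rows are hypothetical here:

```javascript
var applyFilter = require('loopback-filters');

var rows = []; // your parsed CSV rows

// The client sends the whole filter object, for example:
// ?filter={"where":{"price":{"between":[10,100]}}}
function handler(req, res) {
  var filter = JSON.parse(req.query.filter || '{}');
  res.json(applyFilter(rows, filter));
}
```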

But loopback-filters supports more than just including and excluding rows. It supports field selection (including/excluding fields), sorting, geo/distance sorting, limiting, and skipping, all in a declarative syntax that is easily created from user input.

As a LoopBack user, this is a pretty powerful thing. Typically, you will have already learned how to write complex queries using the find() filter syntax; previously, you would have had to figure out how to do the same thing in plain JavaScript (perhaps using a library such as Underscore). Now, with the loopback-filters module, your client application can re-use the exact same filter object it sends to the server, filtering local data without having to interact with a LoopBack server at all.

Middleware generator

The new LoopBack middleware generator adds a middleware configuration to an existing application.

The tool will prompt you to:

  • Select the phase to use for the middleware.
  • Add a list of paths.
  • Add parameters.

See more about the new middleware generator in the documentation.
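The generator's answers end up in the project's middleware.json; a hypothetical result (the phase, middleware name, paths, and params below are made up) might look like:

```json
{
  "routes:before": {
    "morgan": {
      "paths": ["/api"],
      "params": { "format": "dev" }
    }
  }
}
```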


LoopBack Workspace

The loopback-workspace module provides the backend functionality for Arc API Composer and slc loopback.

The default template for generating LoopBack projects now includes default lookup paths for mixins. The defaults are the following:

  • loopback/common/mixins
  • loopback/server/mixins
  • ../common/mixins
  • ./mixins

The first two paths include the LoopBack core mixins. Note that the last two paths are relative to your project (just like modelSources).

We also added support for middleware.json. This lays the groundwork for generating middleware.json files with Arc API Composer and with slc loopback.

DataSource Juggler

Persist Hook

We added a new “persist” hook. Observers are notified during operations that persist data to the datasource (for example, create, updateAttributes). Don’t confuse this hook with the existing “before save” hook:

  • before save – Use this hook to observe (and operate on) model instances that are about to be saved (for example, when the country code is set and the country name not, fill in the country name).
  • persist – Use this hook to observe (and operate on) data just before it is going to be persisted into a data source (for example, encrypt the values in the database).

Loaded Hook

We have also just added a new “loaded” hook. Observers are notified right after raw data is loaded or returned from the connector and datasource. This allows you to do things like decrypt database values before they are used to create a model instance.
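A hedged sketch of the two hooks working together; MyModel, encrypt, and decrypt are hypothetical:

```javascript
// "persist": raw data is about to be written to the datasource.
MyModel.observe('persist', function(ctx, next) {
  ctx.data.cardNumber = encrypt(ctx.data.cardNumber);
  next();
});

// "loaded": raw data was just returned by the connector,
// before a model instance is created from it.
MyModel.observe('loaded', function(ctx, next) {
  ctx.data.cardNumber = decrypt(ctx.data.cardNumber);
  next();
});
```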

Here is a filtered changelog including important changes to the juggler.

  • PR#611 Dedupe ids args of inq for include
  • Released 2.29.1
  • PR#584 add test suite for scope – dynamic function
  • PR#588 Fix pagination on collections with many-to-many relationships
  • PR#604 Fix destroyById not removing instance from cache
  • PR#609 Don’t silently swallow db errors on validation
  • Released 2.29.0
  • PR#602 Enhance the apis and add more tests
    • End observer notification early
    • Allow multiple notifications in a single call: notifyObserversOf(['event1', 'event2'])
  • PR#600 Fix toJSON() for level 3 inclusions
  • PR#598 Mixin observer apis to the connector
  • PR#597 Enhance fieldsToArray to consider strict mode
  • Released 2.28.1
  • @cbb8d7c Remove dep on sinon
  • PR#586 Add new hook persist
  • Released 2.30.1
  • @8302b24 Pin async to version ~1.0.0 to work around context propagation
  • Released 2.30.0
  • PR#618 Allow 0 as the FK for relationships
  • PR#626 Fix for issues #622 & #623
  • PR#630 Promisify ‘automigrate’


Connectors

We added a new “execute” hook to the connector API. It allows you to observe the low-level connector.execute() method. See the documentation for more information.
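A hedged sketch, assuming a connector that mixes in the observer API; the exact shape of the context object depends on the connector:

```javascript
var connector = app.dataSources.db.connector;

connector.observe('before execute', function(ctx, next) {
  // ctx describes the low-level request about to be sent
  // to the backing database (connector-specific shape).
  console.log('about to execute:', ctx.req);
  next();
});

connector.observe('after execute', function(ctx, next) {
  // Inspect or modify the low-level response here.
  next();
});
```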

Below is a filtered changelog including important changes to connectors.


  • @b40f92b Add before and after “execute” hooks for the underlying soap invocation
  • PR#20 bump(soap) bump node-soap version from 0.8 to 0.9


  • Released 1.9.0
  • PR#33 Add before and after “execute” hooks


  • @84a1ab0 Add before and after “execute” hooks
  • Released 1.9.2
  • @8a55f92 Update to memwatch-next for node 0.12 compatibility
  • Released 1.9.1
  • @5d907cf Update deps
  • Released 1.9.0
  • @331e158 Replaced ensureIndex() with createIndex()
  • Released 1.11.0
  • PR#142 Add a workaround for auth with multiple mongos servers
  • PR#141 Autoupdate and automigrate now respect settings.mongodb.collection


  • @111f8c2 Add better support for the Date type
  • Released 2.2.0
  • PR#88 Make sure UTC is used for date


  • Released 2.2.0
  • @2fc9258 Update deps
  • PR#18 Add before/after hooks for connector native operations
  • Released 2.1.2
  • @a5f11ac PUT request with nonexistent properties no longer results in error
  • Released 2.1.1
  • @a62e06d Improved query support for Date


  • Released 2.8.0
  • PR#130 Port can’t be number checked to support iisnode
  • @44f733f Support iisnode using named pipes as PORT value (Jonathan Sheely)
  • Released 2.8.1
  • PR#133 Better debug output when loading complex configurations

Component OAuth2

  • Released 2.2.0
  • @83635b0 Tidy up the models to work with MySQL
  • Released 2.1.1
  • @a32e213 Allow models to be customized via options
  • Released 2.1.0
  • @32fcab9 Clean up oAuth2 client app attributes
  • Released 2.3.0
  • @807762d Remove auth code after 1st use
  • @f613a20 Allow options.scopes to be a custom function
  • Released 2.2.1
  • @79a7df8 Allow options.userModel/applicationModel to be strings
  • Released 2.0.0
  • @e5da21e Change license to StrongLoop

Other Module Changes

  • loopback-component-storage
    • PR#74 Bugfix: Cannot read property ‘forEach’ of undefined
    • Released 1.5.0
    • PR#70 Add missing finish event when uploading to s3
  • loopback-testing
    • PR#47 Add withUserModel to extend user related helpers
    • PR#51 use findorCreate to create roles
    • PR#45 Update helpers.js
  • loopback-component-passport
    • Released 1.4.0
    • PR#70 feature: Make email optional
  • loopback-gateway
  • loopback-sdk-angular
    • Released 1.4.0
    • PR#138 Add createMany method
  • loopback-component-push
    • PR#88 Pass contentAvailable through to APNS
    • @e9022a0 Forward “contentAvailable” and “urlArgs” to APNS
    • @08e73ee Update deps

You can help too!

LoopBack is an open source project that welcomes contributions from its users. If you would like to help but don’t have an itch of your own to scratch, then please pick one of the issues labeled “Beginner Friendly”; see this GitHub view for the full list.

The full changelog

As always, you can find the full list of changes at http://strongloop.github.io/changelog/


Announcing Transaction Tracing for Node.js Beta

At StrongLoop, we develop tools to support development and operations throughout the entire lifecycle of API development. Initially, we released Arc Profiler to help you understand the performance characteristics of your Node application. Next, we added Arc Metrics to provide real-time visibility into your staging and production environments. Today we’re announcing the latest Arc module, Tracing (currently in public beta), which enables you to perform root cause analysis and triage incidents in production. (You can watch a short overview and demo of the Tracing module here.)

Those who have cut their teeth on the seemingly endless iterations in the dev lifecycle will understand this satirical spin on Dorothy’s line from The Wizard of Oz:

“Toto, I don’t think we’re in staging anymore…  There’s no place like production… There’s no place like production…”

Simulated load, automated testing, and all the CI magic in the world won’t prepare you for the “gotchas” that can happen in production.  If you’re lucky, you’ll have a canary that keels over as soon as you enter the production mine.  But what then?

The answer is Tracing.  The Arc Tracing module provides the ability to call in the artillery when you need it.  If you see something of interest in Metrics, open up Tracing and you’ll be shown a timeline of memory and CPU usage.

Understanding the Timeline


Locate the point of interest (more often than not a CPU or memory spike, appearing as a peak in the line) and start to drill down. When you’ve located the incident, freeze the time slice by clicking on the chart; this draws a black line denoting the time slice and begins your drill-down.

Read more

StrongLoop Node.js Tracing Quickstart

We recently added the Tracing module (currently in beta) to StrongLoop Arc’s monitoring and performance-analysis tools. Tracing helps you identify performance and execution patterns of Node applications, discover bottlenecks, and trace code execution paths. It provides tracing data at both the system and function level, giving you insight into how your application performs over time. Tracing works with applications deployed to StrongLoop Process Manager (PM).

This blog post describes how to get up and running with Tracing quickly, assuming you have some familiarity with StrongLoop tools. It demonstrates the quickest path to trying out the new Tracing module, using an example that simulates some load, and shows how to view and understand the data visualizations based on your application traffic.

Step 1. Setup

Start by installing the latest version of StrongLoop and creating a basic LoopBack application. If you have never done this before, please refer to: http://loopback.io/getting-started/

Clone the example app from https://github.com/strongloop/tracing-example-app and go through the set up:
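For example (assuming git and npm are installed):

```shell
git clone https://github.com/strongloop/tracing-example-app.git
cd tracing-example-app
npm install
```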

This example demonstrates tracing in StrongLoop Arc. The example includes a simple HTTP server with one route that starts an internal busy loop when triggered.  This generates fluctuations (and thus more data) for the StrongLoop Arc tracing graphs.

Please review the README to get your example app up and running, then create some variation in the graphs by running the ./send-request script to make repeated curl requests to the server.

Read more

Node.js Transaction Tracing Deep-Dive By Example

StrongLoop recently announced a public beta of a Node.js transaction tracing module (read the announcement blog or watch the overview and demo video) within StrongLoop Arc to identify performance bottlenecks. In this blog we walk you through a sample application and insights into the patterns that emerge while conducting a Node.js transaction trace.

How Tracing works

StrongLoop tracing monitors your Node application by:

  • Tracing JavaScript functions’ entry and exit points to record the elapsed time of each function call and the position of the function call in the source file. Tracing instruments every function in your application, as well as the packages your application requires.
  • Wrapping all HTTP and HTTPS requests and database operations of MySQL, PostgreSQL, Oracle, Redis, MongoDB, and Memcache/Memcached.


A DoS use case

In this blog, we’re going to analyze the behavior of a simple application that implements one Oracle database transaction in response to a GET / HTTP request. We’ll consider a hypothetical scenario in which your website comes under a denial-of-service (DoS) attack that generates unusual CPU load.

We’ll use an example and demonstrate how to analyze the tracing data and drill down to the specific source code line of the vulnerability exploited by the simulated DoS attack.


Read more

Containerizing Node.js Apps with Docker and StrongLoop

If you read our recent blog on multi-app enhancements to StrongLoop Process Manager, you’ll know that we’ve been working on features useful for production deployment.  Here is the newest addition: the ability to deploy apps on Process Manager as containers, using Docker.

This post outlines the challenges with Docker, especially when “Dockerizing” your Node applications; demonstrates some failed attempts at trimming down the resulting image; and shows how you can use StrongLoop Process Manager to containerize your Node apps with a drastically reduced footprint!

Containerizing Apps The Hard Way

Anyone who has heard of Docker has probably spent at least a few seconds thinking about containerizing their app. If you are like me, you probably scanned the docs and decided you didn’t have time to figure it out, and that was the end of it.

Then I went back a month later and got a slightly better idea of what this Docker thing is, but ran out of time again. I repeated the process a couple more times until I finally had an idea of what Docker is and how I could use it to streamline some of my app deployments.

So then I started “Dockerizing” my app. It isn’t too hard: there is an example of Dockerizing a Node.js web app on the Docker website that covers the basic steps.  Unfortunately, it uses an outdated version of Node and npm, because it uses the RPMs provided by CentOS.  It also turns a single-file Node package into a ~550MB image. Disk is cheap, as they say, but anyone who has used Docker for a while knows that it really likes to fill up your hard drive.

Trimming it Down

I copied the index.js, package.json, and Dockerfile from that guide and put them in a directory, then ran docker build -t node-experiment:original . to get my baseline, and then ran docker images node-experiment to get a list of the images in the node-experiment repo:

OK, let’s trim this down a bit and see if we can make it a little friendlier to distribute across a network, or over the Internet.

First we need to figure out what’s taking up all this space. Here are some of the larger items in the resulting image:

  • 66M /var/cache/yum
  • 95M /usr/lib/locale
  • 11M /usr/lib/gcc
  • 62M /usr/share/locale

And those are just the directories that are easy to clump together, but it’s a good start. We can do some experimenting and add the following lines to the Dockerfile from that tutorial:
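For example, appending hypothetical cleanup lines like these, targeting the directories listed above:

```
RUN rm -rf /var/cache/yum
RUN rm -rf /usr/lib/locale /usr/share/locale
RUN rm -rf /usr/lib/gcc
```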

Now we rebuild our image with docker build -t node-experiment:take-2 . and notice that it doesn’t take as long the second time. That’s because it is able to re-use the layers from the previous build.

Now look at the image again, with docker images node-experiment, and revel in our genius:

Wait, wat!?

Docker Images are Like Ogres

Remember how the second build went faster because it was re-using the layers from the previous build? Every line in the Dockerfile is creating one of those layers, including those rm -rf … lines we added. So those lines aren’t actually deleting anything, they’re adding another layer that applies a mask to “delete” the given files on the resulting multi-layer file system. If you’re thinking that actually uses more space than it saves, you’re right. If you’re thinking this sounds a lot like what happens when you commit a large file to Git and then later remove it, but your repo doesn’t shrink, you’re right again.

So what can we do about this? One approach is to make our layers really complex:
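A hypothetical sketch of collapsing the work into one large layer (the exact packages depend on the tutorial's original Dockerfile):

```
# Copy the app first, then install, build, and clean up in one RUN,
# so the deleted files never land in a committed layer.
COPY . /usr/src/app
RUN yum install -y nodejs npm && \
    cd /usr/src/app && npm install && \
    yum clean all && \
    rm -rf /var/cache/yum /usr/lib/locale /usr/share/locale /usr/lib/gcc
```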

Notice we had to do some rearranging so that we copied the app over before installing Node. That means our speed-up from layer re-use is going to disappear since it will have to re-install Node every time. Mildly annoying, but if the image is significantly smaller, it might be worth the extra minute it takes to rebuild the image. Let’s give that a try and tag it as node-experiment:take-3.

Not bad! Let’s take it one step further and forget about optimizing for layer re-use at all and move all the RUN commands into a single layer and tag it as node-experiment:take-4.

So we’ve managed to hack about 200MB off the size of the image. But what’s left in our image?

  • Node
  • npm
  • Our app
  • Mangled installation of gcc
  • Broken locale data

Hmm... That doesn’t seem like a good thing to be putting into staging, let alone production!

A Better Way

If you’re an avid Git user, you might be wishing for a docker rebase command right about now. Sadly, there is no such command. So what commands are there? Here are some useful-looking lines from docker help:
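Abridged, with descriptions paraphrased from the Docker CLI of the time:

```
commit    Create a new image from a container's changes
cp        Copy files/folders between a container and the host
create    Create a new container
export    Export a container's filesystem as a tar archive
import    Create a new filesystem image from the contents of a tarball
run       Run a command in a new container
```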

It looks like with sufficient scripting we could build our image without even using a Dockerfile. I’ll leave it as an exercise for the reader to come up with a way of tying these commands together to produce an image. Instead I’ll describe the general approach I took with StrongLoop’s Process Manager:

  1. Create a build container with Node, npm, and a complete build toolchain.
  2. Install strong-supervisor (for clustering, control channel, metrics, etc.)
  3. Import your app and install its dependencies, including binary addons.
  4. Create a deployment container with no Node, npm, or build tools of any kind.
  5. Copy Node, strong-supervisor, and your app into the deployment container.
  6. Commit the deployment container as an image.

I used the strong-docker-build module to build the one-file app from above for comparison:

Looks like success! Not only is it smaller, but it has an embedded process supervisor with automatic clustering and metrics reporting, and no extra weight from compilers for languages we don’t need.

How to Use It

This is my favorite part. Now that you have some insight into the problem and how to deal with it the hard/wrong way, here’s how to solve it the easy way using StrongLoop PM:

  1. Install Docker on your server (same as installing the Dockerized StrongLoop PM).
  2. Install StrongLoop PM (same as installing a production server):
    sudo sl-pm-install --driver docker (the new hotness).

Now, whenever you deploy an app to StrongLoop PM on this server, your app will be turned into a Docker image using the approach described above and then run in a Docker container.

If you’re on a Mac and you want to experiment with the new Docker driver, fear not! While the sl-pm-install command is Linux only, the sl-pm and slc pm commands also accept the --driver docker option, and it works right out of the box if you have a working boot2docker installation.

However you install it, you’ll notice the first startup takes a little longer than usual as it pulls down official Debian and Node images from Docker Hub. You can speed this up by pulling them ahead of time yourself:
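Assuming the official debian and node repositories on Docker Hub (specific tags omitted):

```shell
docker pull debian
docker pull node
```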

Docker in Docker

While it is possible to run Docker inside Docker, or connect to a Docker daemon from within a Docker container, this first release doesn’t support it. At the time of this writing, that means the --driver docker option is not compatible with running StrongLoop PM itself as a Docker container.

What’s Next

We’ll continue to develop StrongLoop tools into a full-fledged orchestration and deployment solution for production.