What’s New in the LoopBack Node.js Framework – May 2015


Curious what new developments are happening with the LoopBack project? Here’s a curated selection of the most important changes that have been made to LoopBack in the past few weeks.

What’s LoopBack? It’s an open source Node.js framework for creating REST APIs that connect to datasources like Oracle, SQL Server and MongoDB. Learn more…

LoopBack core

Released 2.18.0

Released 2.17.3 (Ritchie Martori)

  • PR#1370 Use the new remoting.authorization hook for check access
  • PR#1366 Define remote methods via model settings/config

LoopBack datasource juggler

Released 2.28.0

Released 2.27.1

  • PR#595 Make sure relation scope is applied during include
  • @d19001a Updated JSdoc for Datasource constructor

Released 2.27.0

  • @b5b7bab Fix the target id resolution
  • @d009557 DB Call Optimization in relation includes – Fixes #408 & #166
  • PR#579 Conditionally pass options to connector CRUD methods
  • PR#582 Pass-through options from save to create


  • Add authorization hook for remote methods

LoopBack connectors

  • Refactor the base Connector and SqlConnector to encapsulate the common logic, simplify connector implementations, and reduce maintenance complexity. It also makes it easier to add new features across multiple connectors. See the new documentation: Building a connector and the corresponding API doc for the new loopback-connector module.

loopback-connector


Released 2.1.0

  • PR#17 Add transaction support

Released 2.0.0

  • @2cb0f52 Make sure invalid fields are filtered out
  • PR#16 Refactor base Connector and SqlConnector


  • Released 2.1.0
    • PR#99 Add transaction support
  • Released 2.0.0
    • PR#97 Refactor the code to use base SqlConnector


  • Released 2.1.0
    • PR#78 Add transaction support
  • Released 2.0.0
    • PR#75 Feature/connector refactor
  • Released 1.7.1

Microsoft SQL Server

  • Released 2.1.0
    • PR#44 Add transaction support
  • Released 2.0.0
    • PR#43 Refactor the mssql connector to use base SqlConnector

Oracle

  • Released 2.1.0
    • PR#40 Add transaction support
  • Released 2.0.0
    • PR#38 Refactor the oracle connector to use base SqlConnector

LoopBack oAuth2 component

LoopBack workspace

  • PR#199 api-server template: add strong-express-metrics

The full changelog

As always, you can find the full list of changes at http://strongloop.github.io/changelog/

You can help too!

LoopBack is an open source project that welcomes contributions from its users. If you would like to help but don’t have an itch of your own to scratch, please pick one of the issues labeled “Beginner Friendly”; see this GitHub view for the full list.

Getting Started with JavaScript ES6 Object Notation


Let’s talk about object declaration in ES6. It has gotten quite an upgrade from its predecessor, ES5. ECMAScript 6 doesn’t bring any new features in this realm (we aren’t talking about the class keyword here), but it does add a good amount of new syntax to help us keep all the curly braces at bay.

In this article I’m going to outline all the new ES6 object declaration syntax.

Function Shorthand

How often have you declared a property function? For me, it’s been more times than I care to recall.

ES6 brings in new syntax to shorten this very common declaration pattern. Notice the lack of the function keyword in the example below! The good thing is that both forms are semantically identical and can coexist in ES6.
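A sketch of the two forms side by side (object and method names are illustrative):

```javascript
// ES5: a property function declared the long way
var es5Greeter = {
  greet: function (name) {
    return 'Hello, ' + name + '!';
  }
};

// ES6: the same method with shorthand syntax -- no `function` keyword
var es6Greeter = {
  greet(name) {
    return 'Hello, ' + name + '!';
  }
};
```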

Getters and Setters

Defining read-only properties using Object.defineProperty() has been possible since ES5. To me it always felt like an afterthought, and maybe that’s the reason you rarely see this feature used today.

Typing this out every time for a read-only property feels like a lot of hassle to me. ES6 introduces proper getter and setter syntax to make this a first-class feature:
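A sketch comparing the two (the hello property is illustrative):

```javascript
// ES5: a read-only `hello` property via Object.defineProperty()
var es5Obj = {};
Object.defineProperty(es5Obj, 'hello', {
  get: function () {
    return 'Hello, world!';
  }
});

// ES6: the same read-only property with getter syntax
var es6Obj = {
  get hello() {
    return 'Hello, world!';
  }
};
```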

The difference here is pretty striking. Both examples are equivalent in functionality, but the ES6 version is far more manageable and feels like it belongs. Any attempt to set the hello property value results in an exception (in strict mode; sloppy-mode code fails silently):
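A sketch demonstrating this:

```javascript
var obj = {
  get hello() {
    return 'Hello, world!';
  }
};

function trySet(target) {
  'use strict'; // strict mode makes the failed assignment throw
  target.hello = 'nope'; // no setter defined
}

var error = null;
try {
  trySet(obj);
} catch (e) {
  error = e;
}

console.log(error instanceof TypeError); // true
```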

Defining a setter is just as easy using the set keyword:
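A sketch (the tracker object and its property are illustrative):

```javascript
var tracker = {
  log: [],
  // `set` defines a setter: every assignment to `value` runs this function
  set value(v) {
    this.log.push(v);
  }
};

tracker.value = 42;
tracker.value = 'hello';
```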

That translates to the following ES5:
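A sketch of the ES5 equivalent using Object.defineProperty():

```javascript
var tracker = {
  log: []
};
Object.defineProperty(tracker, 'value', {
  set: function (v) {
    this.log.push(v);
  }
});

tracker.value = 42;
console.log(tracker.value); // undefined -- a set-only property has no getter
```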

Unlike with the getter, trying to read the value of a set-only property doesn’t throw an error; it returns undefined. I’m not sure how I feel about the difference in behavior here, but at the same time I don’t think I’ve ever encountered a set-only property before. I think we can file this under “no big deal”.

Computed Property Names

Often you have to create an object with keys based on a variable. In that case, the traditional approach looks something like this:
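A sketch, with illustrative names:

```javascript
var key = 'color';

// ES5: declare the object first, then assign the dynamic key separately
var es5Config = {};
es5Config[key] = 'blue';
```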

Functional? Yes, and also incredibly annoying. In ES6 you can define computed property names in the object declaration itself, without separate statements, like so:
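A sketch, with the same illustrative names:

```javascript
var key = 'color';

// ES6: the dynamic key goes right into the object literal
var es6Config = {
  [key]: 'blue'
};
```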

Here’s another example:
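Any expression can appear inside the brackets; a sketch:

```javascript
var i = 0;

// The bracketed expressions are evaluated top to bottom
var obj = {
  ['item' + ++i]: 'first',
  ['item' + ++i]: 'second'
};
```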

Shorthand Assignment

One of my personal favorites among the new syntax features is shorthand assignment. Let’s look at an ES5 example first:
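A sketch, with illustrative names:

```javascript
var name = 'LoopBack';
var version = '2.18.0';

// ES5: each name has to be repeated on both sides
var pkg = {
  name: name,
  version: version
};
```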

Sprinkle some ES6 magic and you have:
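The same sketch in ES6:

```javascript
var name = 'LoopBack';
var version = '2.18.0';

// ES6: property name and variable name match, so say it once
var pkg = { name, version };
```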

Basically, if the variable and the property name are the same, you can omit the value part. Nothing stops you from mixing and matching either:
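A sketch combining the new forms (names are illustrative):

```javascript
var name = 'LoopBack';

var pkg = {
  name,                 // shorthand assignment
  version: '2.18.0',    // regular property
  describe() {          // method shorthand
    return this.name + '@' + this.version;
  }
};
```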

ES6 Today

How can you take advantage of ES6 features today? In the last couple of years, using transpilers has become the norm; individuals and large companies alike no longer shy away from them. Babel is an ES6-to-ES5 transpiler that supports all of the ES6 features.

If you are using something like Browserify in your JavaScript build pipeline, adding Babel transpilation takes only a couple of minutes. There is, of course, support for pretty much every common Node.js build system like Gulp, Grunt and many others.

What About The Browsers?

The majority of browsers are catching up on implementing new features but not one has full support. Does that mean you have to wait? It depends. It’s a good idea to begin using the language features that will be universally available in 1-2 years so that you are comfortable with them when the time comes. On the other hand, if you feel the need for 100% control over the source code, you should stick with ES5 for now.


Getting Started with the Node.js LoopBack Connector for ArangoDB

This tutorial is the third in a series of posts that will help you get started with some of the many user-contributed NoSQL connectors for LoopBack: Couchbase Server, RethinkDB, and ArangoDB.

We’ve already covered Couchbase Server using loopback-connector-couchbase and RethinkDB with loopback-connector-rethinkdb. Today, we’ll be discussing connecting LoopBack and ArangoDB using another community contributed connector – loopback-connector-arango.



ArangoDB is described as “A distributed free and open-source database with a flexible data model for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions.”

ArangoDB is a multi-model mostly-memory database with a flexible data model for documents and graphs. It is designed as a “general purpose database”, offering all the features you typically need for modern web applications.

Installing ArangoDB

Navigate to the ArangoDB download page to download ArangoDB for your operating system. As you can see, ArangoDB supports numerous platforms.


Once again, we’ll be developing on Mac OS X. You can install ArangoDB via a command-line app, Homebrew, or even the Apple App Store. ArangoDB recommends Homebrew, so we’ll follow their recommendation.

Tip: Are you an OSX user but haven’t used Homebrew before? Start immediately! Homebrew is billed as the “missing package manager” for OSX and you need to install it right now. Think apt-get or yum for OSX.

Note: If you are looking for cloud hosting with ArangoDB, you also have the very convenient option of deploying on AWS or Azure as ArangoDB is available in both marketplaces.

Creating the test database

Now that we have ArangoDB installed, let’s run it! If you followed along above and installed using Homebrew, run this command in the terminal to start ArangoDB.

Let’s create a database in ArangoDB that we’ll use for the remainder of the tutorial. Did you know that when we started ArangoDB, we also started a full featured, built-in web interface that we can use for database and collection administration, statistics, ad-hoc queries and more? We’re only going to use it to create our database, but you can review the ArangoDB documentation to see all the features of the web interface.

Navigate to http://localhost:8529/, click on the “DB” drop down on the top menu and select “Manage DBs”. On this screen, we’ll be able to add a new database.

Click “Add Database” and let’s name our new database “loopback-example”. Click create and that’s it. We have a new database in ArangoDB. We’ll fill a collection with some test data shortly, but first let’s create our LoopBack application.

LoopBack – Creating the application

Like last time, we’ll again create our LoopBack application using the slc loopback application generator. The slc command line tool, which is Yeoman under the hood, can scaffold a full LoopBack application structure for us in just a few simple steps. Run slc loopback in your terminal.

slc makes it very simple for us to create a full LoopBack application structure in just a few steps. Now, cd into the application directory and install the ArangoDB connector.

Note: The GitHub repository and the npm registry for the original version of the connector are out of sync. The GitHub repository holds an old version (0.0.3) of the connector, while npm has the most up-to-date version (0.1.0). There have been some changes in LoopBack since this connector was last updated, so I’ve forked the code from npm and made a few changes to get it working again with LoopBack. Credit for the original loopback-connector-arango goes to nvdnkpr.


Creating the backend datasource definition

Alright, now that we have ArangoDB and the community connector installed, let’s setup the datasource that will allow us to connect ArangoDB and LoopBack. We’ll use slc again, but this time we’ll tell slc that we want to create a new datasource.

While slc does recognize a few community contributed connectors, it does not know the ArangoDB connector, so we’ll just select “other” and tell slc the name of the connector.

Note: Don’t forget that the name of the connector is loopback-connector-arango and not loopback-connector-arangodb.

Despite not being aware of the ArangoDB connector, the datasource generator will still give us a head start by creating a skeleton of the ArangoDB datasource. Fill in the rest of the arangodb section in server/datasources.json.

Note: Some connectors use host for the server and some use hostname. Note that this connector uses hostname.
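A sketch of what the arangodb entry might look like; the host, port, and database values are placeholders for your setup, and option names other than hostname and connector follow common connector conventions and may differ for this connector:

```json
{
  "arangodb": {
    "hostname": "localhost",
    "port": 8529,
    "database": "loopback-example",
    "connector": "loopback-connector-arango"
  }
}
```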

Our datasource is ready to go, so now we can create some models that will represent the data that we’ll store in ArangoDB.

Creating the models

We’re going to use slc again, but this time we’re going to use it to start Arc.

Arc is a graphical UI for the StrongLoop API Platform that complements the slc command line tools for developing APIs quickly and getting them connected to data. Arc also includes tools for building, profiling and monitoring Node apps.

We’re going to start with building our application using Arc and we’ll touch on monitoring our app a little bit later.

This single command will start Arc and take you to the Arc starting page.
Click on “Composer”. This will take us to the Arc Composer where we can see the arangodb datasource we already created under the datasource menu.

Click on “Add New Model” under the model menu.

Enter the “Name” as “business”, the “Plural” as “businesses” and select “arangodb” as the “datasource”.

Next, enter the properties for our model as shown below, then click “Save Model”.
You just created a JSON model representation of our data in ArangoDB and exposed it via the LoopBack REST API without writing a line of code! That’s amazing. Check out the model in common/models in the application directory.

We have our model in LoopBack and now we’re ready to create the collection that will hold our data and populate it with some sample documents.

Creating test data part 2

One neat feature of loopback-connector-arango is that it implements LoopBack’s auto-migrate functionality. Auto-migration can create a database schema based on our application’s models. With relational databases this means that auto-migrate will create a table for a model and columns for each model property. Since we’re using a NoSQL database here, auto-migrate will only create an ArangoDB collection for us, but it is still a nice feature to find in a community-contributed connector.

Note: Auto-migrate will drop an existing table if its name matches a model name! Do not use auto-migrate with an existing table/collection.

I’ve provided a simple script that will run automigrate on our ArangoDB datasource and create the business collection.
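The script itself isn’t reproduced here; a minimal sketch of the idea (the file name and helper shape are assumptions, not the original code) could be:

```javascript
// automigrate.js -- create the `business` collection from the model definition.
// In a real app, `dataSource` would be the booted app's datasource, e.g.
// require('./server/server').dataSources.arangodb
function automigrate(dataSource, modelName, done) {
  // automigrate drops and recreates storage for the model -- destructive!
  dataSource.automigrate(modelName, function (err) {
    if (err) return done(err);
    console.log('Migrated model:', modelName);
    done();
  });
}

module.exports = automigrate;
```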

You can head to http://localhost:8529/_db/loopback-example/_admin/aardvark/standalone.html#collections to see that our business collection has been created.

Now we can populate our business collection with some sample documents. I’ve provided another script to create sample data.

The cool thing to note about this script is that since we already created our datasource and model, we don’t have to go through the hassle again of using a different Node.js module to connect to ArangoDB, pass it the proper connection information, figure out how to use the module and insert documents. By defining our connection information and models for LoopBack, all we need to do is call “create” on our business model to insert documents. Inserting documents in bulk into ArangoDB can simply look like this!
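A sketch of that bulk insert; the sample documents and file name are made up, and `Business` stands in for the booted app’s model (e.g. require('./server/server').models.business):

```javascript
// create-sample-data.js -- bulk-insert sample documents via the model.
var sampleBusinesses = [
  { name: 'Acme Hardware', city: 'San Mateo' },
  { name: 'Big Kahuna Burger', city: 'Los Angeles' },
  { name: 'Stay Puft Snacks', city: 'New York' }
];

function seed(Business, done) {
  // Passing an array to create() inserts all the documents in one call
  Business.create(sampleBusinesses, function (err, created) {
    if (err) return done(err);
    console.log('Inserted %d documents', created.length);
    done(null, created);
  });
}

module.exports = seed;
```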

Querying data

We’re ready to query our API! Let’s start by using LoopBack’s built-in Swagger API explorer. Start your application using slc start and go to http://localhost:3000/explorer.

Expand the GET /businesses section and click on “Try it out!” to view the documents in our collection.


You’ll see that even though we didn’t provide an id or revision, ArangoDB automatically created those for us.

We can get individual documents by using the id that the connector created for us.

Note: ArangoDB’s id property is actually a database wide id that looks like collectionname/123456. Well, that id with a “/” is not very friendly to URLs like our REST API uses, so the connector implements two methods, fromDB and toDB, that manipulate the id to add or remove the collection name as necessary. So, when we call one of our endpoints with an id, we only need to pass the “123456” and not the collectionname/123456.

The connector supports numerous where filters as well such as equivalence

…and greater than.

We can also use other filters with the connector such as limit…

…and order.

Creating and updating data

Creating a document is as simple as sending a POST request to the business endpoint.

If we need to update the document, a call to PUT /businesses/{id} will take care of that.

LoopBack automatically created all of these endpoints for us when we told it to expose our business model via the REST API!

Arc Profiler

Now that we’ve taken a look at the LoopBack ArangoDB connector, let’s take a quick side tour through the Arc Heap Profiler. Remember above that we used Arc Composer to create our models for the tutorial. Well, there’s much more to Arc than just Composer!

First, start Arc with slc arc, select “Profile” from the main menu and then click “Load” on the profile page.

Select “Heap Snapshot” and then click “Take Snapshot”. Wait a few seconds for the snapshot to build.

Select the snapshot created from the left-hand menu and type “ArangoDB” into the class filter box. Expand the ArangoDB class, datasource and models and you’ll see our business model.

Diving too much further into Arc Profiler is beyond the scope of this tutorial, but you can read more about it right on the StrongLoop blog at Using StrongLoop Arc to Profile Memory Leaks in SailsJS. Wait, what? Sails? That’s right! One of the excellent things about Arc is that it works with any Node.js application.


It is amazingly simple to get ArangoDB and LoopBack up and running on your machine. After installing ArangoDB and LoopBack, it only takes a few steps to get the two connected and only a few more to create models of your data and expose them via LoopBack’s REST API. Together they can form a very dynamic duo.

When you’re ready to take the next step, add Arc Profiler into the mix and you have a true powerhouse!

You can find the code for this example on Github.


From Bert Belder: The Future for Node.js Looks Bright

By now you’ve already heard about how io.js was forked from the Node.js project last year, and that a Node foundation was announced earlier this year. Without going into much detail – a lot of people have been working hard to get both projects to a good place, and I’m pleased to report that it’s working out!

Last Friday I sent the Node Advisory Board the e-mail below (slightly redacted for readability). Since we want to be serious about open governance, I figured it would be good to share it with a broader audience… so here you go:


People involved with Node and io.js,

After spending many hours in what seem like endless meetings it’s easy to be cynical about Node – it happens to me from time to time. But fundamentally I think we’re really in a good place.

Soon there will be a non-profit foundation that guarantees independence and provides funding that the project so dearly needs. There is a motivated community of individuals and companies contributing time and effort to continuous innovation.

The debate among developers working on Node and io.js is reaching a good outcome and we might soon be working together again. There will be a stable version of Node which is conservatively maintained, so users can confidently depend on it through the next Black Friday, and there will be a branch which releases frequently and is open to change, providing a canvas to paint on for those who want to make Node better.

Over the past year I’ve learned that both predictable stability and open innovation are really important for maintaining a healthy open source project and we’re in the unique position of enabling both in an institutionalized fashion.

Now what we really ought to do is make it easy for users to install both side-by-side so they can use whatever is appropriate for what they’re working on.

In the meantime Node has become a staple for web development – the most ubiquitous platform of our time – and continues to grow.

Thanks for all the effort you all have put into making this happen.

– Bert


StrongLoop is Committed to Node and io.js

StrongLoop has been a core contributor to Node and io.js since the company was founded in 2013. StrongLoop helps companies succeed with Node by creating tools for composing APIs, deploying, monitoring and securing Node apps in production.

Learn more >>


How-to Test Client-Side JavaScript with Karma

I have often quipped that if I were stuck on a desert island with only one npm module, I’d choose karma. The one place where you can’t avoid using JavaScript is in the browser. Even Dart needs to be transpiled to JavaScript to work on browsers other than Dartium. And code that can’t be tested should not be written. Karma is a widely-adopted command-line tool for testing JavaScript code in real browsers. It has a myriad of plugins that enable you to write tests using virtually any testing framework (mocha, jasmine, ngScenario, etc.) and run them against a local browser or in Sauce Labs’ Selenium cloud.

In this article, you’ll see how to use karma to test standalone JavaScript locally, to test DOM interactions locally, and to run tests on Sauce Labs. The code for all these examples is available on my karma-demo GitHub repo.

Your First Karma Test

Like gulp or metalsmith, karma is organized as a lightweight core surrounded by a loose confederation of plugins. Although you will interact with the karma core executable, you will typically need to install numerous plugins to do anything useful.

Suppose you have a trivial browser-side JavaScript file like below. This code is lib/sample.js in the karma-demo GitHub repo.
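A trivial file along these lines works; the getAnswer name is a stand-in, chosen to match the “42” the page displays later in this post:

```javascript
// lib/sample.js -- trivial browser-side code: no DOM access, easy to unit test
function getAnswer() {
  return 42;
}
```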

Testing this in Node.js would be fairly simple. But, what if you wanted to test this in a real browser, like Chrome? Let’s say you have the following test file.

To run this test in Chrome, set up your package.json with karma and 3 plugins. As you might expect, karma-mocha is a karma plugin that enables you to use the mocha test framework, and karma-chrome-launcher enables karma to launch Chrome locally. The karma-chai package is a wrapper around the chai assertion framework, since neither mocha nor karma includes an assertion framework by default.

Now that you have karma installed, you need a karma config file. You can run ./node_modules/karma/bin/karma init to have karma initialize one for you, but the karma config file you’re going to need is shown below.

One key feature of the above config file is that karma configs are Node.js JavaScript, not just JSON or YAML. You can do file I/O, access environment variables, and require() npm modules in your karma configs.

Once you have this config file, you can run karma using the ./node_modules/karma/bin/karma start test/karma.mocha.conf.js command. You should see a Chrome browser pop up and output similar to what you see below in the shell:

If you see the “SUCCESS” message, that means your test ran successfully.

Testing DOM Interactions with ngScenario

Karma isn’t just useful for testing isolated JavaScript. With ngScenario and the karma-ng-scenario plugin you can test DOM interactions against your local server in a real browser. In this particular example, you will test interactions with a trivial page. However, ngScenario, as the name implies, was developed by the AngularJS team to test AngularJS applications, so you can use karma to test single page apps too.

Suppose you want to test the following page, which uses the sample.js file from the previous section to display the number “42” on the page.
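A minimal page along these lines would do; getAnswer is a stand-in name for whatever sample.js actually exposes:

```html
<!DOCTYPE html>
<html>
  <body>
    <div id="answer"></div>
    <script src="/sample.js"></script>
    <script>
      // getAnswer() is a stand-in for the helper sample.js provides
      document.getElementById('answer').textContent = getAnswer();
    </script>
  </body>
</html>
```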

In order to serve this page and the sample.js file, you need an HTTP server. You can put together a simple one using express as shown below.

Navigate to http://localhost:3000/sample.html and you should see the number “42”. Now, let’s set up karma to test this.

In order to use ngScenario with karma, you need to install the karma-ng-scenario plugin.

Once you have the karma-ng-scenario plugin, you will need to create a slightly modified karma config file. The git diff between this file and the previous section’s config file karma.mocha.conf.js is shown below.

That’s all you need! Now, start your web server with node index.js (your web server needs to be running for ngScenario to work) and run ./node_modules/karma/bin/karma start to execute your tests.

The karma.ng-scenario.conf.js file is configured for karma to automatically re-run tests if any of the loaded files change. Suppose you changed the test file scenario.test.js so that it breaks:

Karma will automatically re-run tests and show you the following output.

Note the AngularJS team considers ngScenario to be deprecated in favor of protractor. However, I still think ngScenario and karma have a key niche to fill in terms of testing JavaScript DOM interactions against a local server as part of TDD workflow. If you’re interested in working with me to re-imagine how ngScenario should work, shoot me an email at val [at] karpov [dot] io.

To the Cloud!

So far, you’ve only tested your code in Chrome. While testing in Chrome is better than not testing in the browser at all, it’s far from a complete testing paradigm. Thankfully, there’s Sauce Labs, a cloud Selenium service that you can think of as Amazon EC2 for browsers. You can tell Sauce to start Internet Explorer 6 running on Windows XP and give you full control over this browser for testing purposes. In order to get the tests in this section to work, you will have to sign up for an account with Sauce Labs to get an API key.

To integrate karma with Sauce, you’re going to need the karma-sauce-launcher plugin.

To run the ngScenario test from the previous section on IE9 and Safari 5 using Sauce, create another karma config file. The git diff between this new file and the previous section’s config file is shown below.

This is all you need to do. Karma handles setting up an SSH tunnel for you, so the Sauce browser can make real requests to your local server.

Note the above example does not use autowatch. This is because, in my experience, end-to-end tests run on Sauce Labs are too slow for the kind of fast feedback I’d like from the ‘test on save’ paradigm. These heavy end-to-end tests are best run through a CI framework like Travis or Shippable.


Karma is a powerful and extensible tool for testing client-side JS, and very much deserves its spot on the npm home page. If there’s one npm module I can’t write JS without, it’s karma. Karma enables you to test standalone JavaScript or DOM integrations, locally or in the Sauce Labs cloud. With karma, you have no excuse for half-answers like “I think so” when your boss asks you if the site works in IE8.

Like this article? Chapter 9 of my upcoming book, Professional AngularJS, is a detailed guide to testing AngularJS applications. It includes more detail about using karma and ngScenario to test AngularJS applications, as well as protractor.