Getting Started with the RethinkDB Connector for the LoopBack Node.js Framework


This tutorial is the second in a series of posts that will help you get started with some of the many user contributed NoSQL connectors for the Node.js LoopBack framework.

Last week we covered Couchbase Server using the community contributed loopback-connector-couchbase. In this post, we’ll review connecting RethinkDB and LoopBack using another community contributed connector: loopback-connector-rethinkdb.



RethinkDB is described as “the first open-source, scalable JSON database built from the ground up for the real-time web. It inverts the traditional database architecture by exposing an exciting new access model – instead of polling for changes, the developer can tell RethinkDB to continuously push updated query results to applications in real-time. RethinkDB’s real-time push architecture dramatically reduces the time and effort necessary to build scalable real-time apps.”

We’ll primarily focus on connecting RethinkDB to LoopBack in this getting started tutorial, but we’ll include a little bonus content at the end to discuss how to use the RethinkDB real-time features with LoopBack.

Installing RethinkDB

RethinkDB offers packages for numerous flavors of Linux as well as OSX, all of which you can find on the main download page.

There are official packages for:

  • Ubuntu
  • OSX
  • CentOS
  • Debian

Additionally, there are community contributed packages for:

  • Arch Linux
  • openSUSE
  • Fedora
  • Linux Mint
  • Raspbian

We’ll be developing on OSX, so navigate to the OSX download page. In addition to building from source, we can also install RethinkDB using the official installer or by using Homebrew. We’re going to use the installer to get RethinkDB up and running for the purposes of our tutorial. Simply download the disk image and run the rethinkdb-2.0.1.pkg file.


The installer will guide you through the remaining steps in order to get RethinkDB installed.

To start the RethinkDB server, run the rethinkdb command in your terminal.

Creating test data

We’ll need some data in RethinkDB to test our application. I’ve included some sample JSON and a short script that we’ll use to populate our database. RethinkDB creates a test database automatically during installation, which we’ll use. You can find the script and sample data in the data directory of the GitHub repository. Run the sampledata.js file to create some sample documents. This will create a customer table and insert a few documents that we’ll use for testing.

LoopBack – Creating the application

Now, we’ll create our application using the slc loopback application generator. The slc command line tool, which uses Yeoman under the hood, can scaffold a full LoopBack application structure for us in just a few simple steps. Run slc loopback in your terminal.

Now that we have our skeleton and all we need to run a basic LoopBack application, let’s install loopback-connector-rethinkdb from the npm registry.

Creating the backend datasource definition

Alright, now that we have the RethinkDB server and loopback-connector-rethinkdb installed, let’s set up our RethinkDB datasource. We’ll start by again using the extremely useful slc from the command line. This time we’ll tell slc that we want to create a new datasource.

While slc does recognize some community contributed connectors, it does not know about the RethinkDB connector, so we’ll just select “other” and tell slc the name of the connector. Also note that since this is a community contributed connector, StrongLoop does not have a generator for the datasource definition. What you’ll get automatically generated for you is a datasource skeleton.

We’ll need to fill in the remaining information to connect to RethinkDB.

Note: This connector is a bit different from others as you are required to pass in connection information in the form of a URL that the Node.js url module can parse. So, instead of defining the connection in the form of an object, we need to create a connection string in the form of http://username:password@host:port/database. Since we’re not using authentication, we can ignore the username and password.

Creating the models

Now we’re ready to create the LoopBack models that represent the sample data we added to RethinkDB. There are three main ways in which you can create LoopBack models, all of which were reviewed in my previous post on Couchbase Server. Since we’ve already used the command line, let’s switch it up this time and use slc arc.

StrongLoop Arc is a graphical UI for the StrongLoop API Platform that complements the slc command line tools for developing APIs quickly and getting them connected to data. Arc also includes tools for building, profiling and monitoring Node apps.

Start Arc by running slc arc from the command line.

Your browser will automatically open to http://localhost:55988/#/. Click on Composer to launch the LoopBack model composer.

Click on “Add New Model” in the menu on the left, enter “customer” as the model name, “customers” as the plural name and select “rethinkdb” as the datasource.

Next, we need to enter the properties for our customer model. Fill in the names and types for each property as shown below.

Click “Save Model” and go check out the common/models directory inside our project. Arc has automatically created the files necessary to represent our customer model using the properties we entered in the GUI. Simple!
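The generated common/models/customer.json will look roughly like the sketch below; the property names and types are assumptions based on the sample data, since the exact fields from the screenshots aren't reproduced here.

```json
{
  "name": "customer",
  "plural": "customers",
  "base": "PersistedModel",
  "properties": {
    "name": { "type": "string" },
    "email": { "type": "string" },
    "balance": { "type": "number" }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}
```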

Start the LoopBack application with slc start (slc to the rescue yet again) and navigate to http://localhost:3000/explorer in order to use the built-in Swagger API explorer to view our customer model.

Click “Try it out!” under GET /customers to see our sample data.

Querying data

We’ve seen that we can view our customer model via the Swagger explorer. Now, let’s take a look at some of the more advanced features of API interaction using loopback-connector-rethinkdb.

We can get a single document by calling GET /customers/{id}.

In addition to equivalence, the connector supports the where filter types between, gt, gte, lt, lte, inq, nin, neq, like and nlike. We’ll cover a few examples here, but you can see many more in the LoopBack documentation.

Find documents where the balance is greater than 2000.

Use like to find documents that match a certain expression.

In addition to using the format above, you can also use stringified JSON with the REST API to create your filters. Here’s an example with the inq operator.
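As a concrete illustration, here are the two equivalent ways of building those REST filter strings; the /api prefix assumes LoopBack's default REST mount point.

```javascript
'use strict';
// Bracket syntax: balance greater than 2000.
var gtUrl = '/api/customers?filter[where][balance][gt]=2000';

// Stringified JSON: the inq operator matching either of two balances
// (the balance values are illustrative).
var filter = { where: { balance: { inq: [1000, 3000] } } };
var inqUrl = '/api/customers?filter=' + encodeURIComponent(JSON.stringify(filter));

console.log(inqUrl);
```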

Of course, the connector also supports other filter operations like order, limit, and fields.

Creating data

When we initially created our application, LoopBack also generated a POST endpoint for our customer resource so that we can easily create new customer documents.

Note: We provided values for the document id when we loaded our initial sample data, but if you do not pass in an id field when creating a document, RethinkDB will automatically generate one for you.

Updating data

Updating a document is as easy as creating or querying – simply send a PUT request to the individual customer resource using the document id.

RethinkDB real-time and LoopBack

Now that we’ve covered the basics of creating a LoopBack API application and connecting it with RethinkDB, let’s look at how we could possibly use the RethinkDB real-time features with LoopBack.

Changefeeds lie at the heart of RethinkDB’s real-time functionality. They allow clients to subscribe to changes on a table, document or even a specific query. Our example will cover how we can subscribe to changes on a table and get notifications in real-time when a document is created or updated using the LoopBack REST API.

First, we’ll create a simple script that will listen to changes on our customer table and output notifications to the console. The portion of the script that listens for changes is one line of code. That’s how simple it is to implement real-time functionality in RethinkDB. You can review the script in this tutorial’s GitHub repository.

Make sure RethinkDB and the LoopBack application are started and run this in one terminal window to start the script and listen for changes on the customer table.

Next, open up another terminal window and insert a document into the customer table using our REST API.

You’ll see in the terminal where we started the rethink-changes.js script that RethinkDB immediately recognized that we added a new document and sent a notification to the console. Real-time notifications with one line of code!

You’ll also see that RethinkDB returned an old_val object which is null. That is because this is a brand new document, so there is no “old value” to reference.

Let’s update our document and see what kind of notification we get now.

Now we get a notification from RethinkDB that includes both the old values and new values of the document that we updated.

Incredible! Is your head spinning with ideas on how you can use LoopBack along with RethinkDB’s real-time functionality to create awesome applications? I know mine is!


Even though the documentation for loopback-connector-rethinkdb is missing, the connector is solid and this article will give you a head start on using LoopBack with RethinkDB. If you stop here and only use this connector with LoopBack and RethinkDB for your REST API, you’ve already created an amazingly simple, yet powerful application.

If you want to take the next step and create applications using LoopBack along with RethinkDB’s real-time functionality, I hope this article has you excited to dive in and imagine your own applications that can be created using these two amazing technologies!

You can find the code for this article on GitHub.

What’s next? More real-time goodness

If you are working on a Node project which needs real-time communication with other Node and/or Angular apps, devices and sensors, check out StrongLoop’s new unopinionated Node.js pub-sub for mobile, IoT and the browser!


Announcing the Node.js API Development and DevOps Webinar Series

We are excited to announce that StrongLoop has teamed up with CA Technologies to bring you an eight-part Node.js webinar series!


If you are a developer looking for an overview of the fundamentals of Node.js and how it can be leveraged in the enterprise to create scalable APIs, apps and services, this webinar series is for you! This series will cover basic concepts as well as advanced topics you’ll encounter in production, demonstrated with code walk-throughs, case studies and exercises to try at home. At the conclusion of this series you will have a good understanding of Node fundamentals, designing APIs with the LoopBack framework, and integrating DevOps best practices into your applications.

Topics and dates include…

  • May 14: Picking the Right Node.js Framework for Your Use Case
  • May 28: JavaScript and Node.js Fundamentals
  • June 11: Node.js Architecture and Getting Started with the Express Framework
  • June 25: Understanding RESTful APIs, Real-Time and Debugging Node.js
  • July 9: Introduction to the LoopBack Node.js API Framework
  • July 23: API Security, Customization and Mobile Backends
  • Aug 6: Best Practices for Deploying Node.js in Production
  • Aug 20: Node.js Performance Tuning and DevOps

Register for all or some of the webinars on our webinar registration page.

Attend four webinars and take the StrongLoop Certified Node Developer exam for free (a $199 value) plus get a free t-shirt!



Introducing StrongLoop’s Unopinionated Node.js Pub/Sub

Unopinionated, Node.js powered Publish – Subscribe for mobile, IoT and the browser.

There are many ways to push data from a Node app to another app (written in Node, the browser, or other platforms and languages). Several frameworks have arisen around the vague term “realtime” to provide features along these lines. Strong-pubsub is a library of unopinionated modules that implement the basic publish-subscribe programming model without a strict dependency on any transport mechanism (WebSockets, HTTP, etc.) or protocol (e.g. MQTT, STOMP, AMQP, Redis).

Instead of implementing a specific transport, strong-pubsub allows you to swap out an underlying adapter that implements a pubsub protocol (for example MQTT). It also allows you to swap out an underlying transport (TCP, TLS, WebSockets, or even Primus).
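As a rough sketch of what that pluggable design looks like in code: the module names follow the article, but treat the exact constructor signature, topic name and broker address as assumptions drawn from the strong-pubsub README of the time.

```javascript
'use strict';
// The same Client works with any adapter; swapping strong-pubsub-mqtt for
// another adapter changes the protocol without touching application code.
var Client, Adapter;
try {
  Client = require('strong-pubsub');
  Adapter = require('strong-pubsub-mqtt');
} catch (e) {
  Client = Adapter = null; // modules not installed
}

function createClient(host, port) {
  if (!Client) throw new Error('install strong-pubsub and an adapter first');
  // The client speaks MQTT here only because of the adapter we passed in.
  var client = new Client({ host: host, port: port }, Adapter);
  client.subscribe('customer-updates');
  client.on('message', function (topic, message) {
    console.log('%s: %s', topic, message.toString());
  });
  return client;
}

module.exports = createClient;
```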

Why do I need this?

There are many use cases for pubsub generally, but below are some of those that drove the design of the strong-pubsub module. Keep in mind these use cases are at a fairly high level, but if they sound like something you’re working on then there is a great chance that you can use strong-pubsub!

For example, you might be developing a client-side application for browsers and want to be able to bind local Backbone / Angular / etc models (or objects) to data from a LoopBack API. When data in the API changes, the local (client-side) data should be updated immediately. This reduces perceived lag by the user, and could lead to fewer issues with multiple people updating the same information.

Alternatively, if you’re an IoT Node.js developer, you can send messages from your Node server or other Node processes to any clients. These could include other Node.js programs and clients written in other languages or platforms (Arduino, C++, iOS, Android, etc.) using any protocol (e.g. MQTT).

Or perhaps you’re working on a Node.js server for a chat application and you need to be able to send messages to multiple users at once. Since the Node cluster consists of many servers, you might need to send a message from one server and have it delivered to all users connected to any of the other servers. Many enterprise developers need to integrate with existing pubsub infrastructures, including existing broker or authentication services.

Read more

Getting Started with Node.js LoopBack Connector for Couchbase


LoopBack is the leading open source, enterprise-ready Node.js framework that helps developers create APIs integrated with legacy and next-gen backends, while also enabling mobile and microservices architectures.

LoopBack models connect to backend systems like databases via data sources that provide create, read, update and delete (CRUD) functions through the LoopBack Juggler: a modern ORM/ODM. These data sources are backed by connectors that implement connection and data access logic using database drivers or other client APIs. In addition, there are connectors for REST and SOAP APIs, NoSQL, messaging middleware, storage services and more. For example, StrongLoop maintains connectors for many popular databases and backend services.

You may have used one or more of these connectors, but did you know that in addition to these connectors maintained by StrongLoop, there are many community contributed connectors as well?

This tutorial is the first in a series of posts that will help you get started with some of the many user contributed NoSQL connectors for LoopBack.

This tutorial will cover integrating LoopBack with Couchbase Server using the community contributed loopback-connector-couchbase.



Read more

How-to Cluster Node.js in Production with strong-cluster-control


“A single instance of Node runs in a single thread”. That’s how the official documentation opens up. What this really means is that any given Node.js process can only take advantage of one CPU core on your server. If you have a processor with 6 cores, only one will really be doing all the work.

When you are writing a web application that is expected to get good traffic, it’s important to take full advantage of your hardware. There are two slightly different, yet pretty similar ways to do that:

  1. Start as many Node processes on different ports as there are CPU cores and put a load balancer in front of them.
  2. Start a Node cluster with as many workers as there are CPU cores and let Node take care of the load balancing.

There are pros and cons to each. You get a lot more control with a dedicated load balancer, however configuring and managing it might get pretty complicated. If that’s not something you want to deal with right away you can sacrifice some control and let Node apply the basic round-robin strategy to distribute the load across your workers.

It’s a very good idea to start thinking about clustering right away, even if you don’t do it. This approach forces you to design your application without a shared in-process state. If not done properly, this can cause incredible pain when the time finally comes to begin clustering and then scale to multiple servers.

This all might sound a little convoluted, but it all comes down to starting a “master” process which then spins up a specified number of “workers”, typically one per CPU core. Each one is a completely isolated Node process with its own memory and state.

Read more