Getting Started with JavaScript ES6 Object Notation


Let's talk about object declaration in ES6. It has gotten quite an upgrade from its predecessor, ES5. ECMAScript 6 doesn't bring any new features in this realm (we aren't talking about the class keyword here), but it does add a good amount of new syntax to help us keep all the curly braces at bay.

In this article I’m going to outline all the new ES6 object declaration syntax.

Function Shorthand

How often have you declared a property function? For me, I know it's been more times than I care to recall.

ES6 brings in new syntax to shorten this very common declaration pattern. Notice the lack of the function keyword in the example below! The good thing is that both forms are technically the same thing and can co-exist in ES6.
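For illustration, here is the same method written both ways (the object and method names are my own):

```js
// ES5: the function keyword is required
var logger = {
  log: function(message) {
    console.log(message);
  }
};

// ES6: method shorthand drops the function keyword entirely
var logger = {
  log(message) {
    console.log(message);
  }
};
```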

Getters and Setters

Defining read-only properties using Object.defineProperty() has been possible since ES5. To me it always felt like an afterthought, and maybe that's the reason you rarely see this feature being used today.
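As a sketch, a read-only hello property in ES5 looks something like this:

```js
var obj = {};

Object.defineProperty(obj, 'hello', {
  get: function() {
    return 'Hello, world!';
  }
});

obj.hello; // 'Hello, world!'
```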

Typing this every time for a read-only property feels like a lot of hassle to me. ES6 introduces proper getter and setter syntax to make this a first-class feature:
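```js
// the same read-only property, declared inline
var obj = {
  get hello() {
    return 'Hello, world!';
  }
};

obj.hello; // 'Hello, world!'
```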

The difference here is pretty striking. Both examples are equivalent in functionality but ES6 is way more manageable and feels like it belongs. Any attempt to set the hello property value results in an exception:
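For example (strictly speaking, the assignment throws in strict mode; in sloppy mode it fails silently):

```js
'use strict';

obj.hello = 'Goodbye!';
// TypeError: cannot set a property that has only a getter
```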

Defining a setter is just as easy using the set keyword:
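```js
// a write-only property: the value is stashed on a backing field
var obj = {
  set hello(value) {
    this._hello = value;
  }
};

obj.hello = 'Hello, world!';
obj._hello; // 'Hello, world!'
```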

That translates to the following ES5:
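```js
// ES5 equivalent of the setter above
var obj = {};

Object.defineProperty(obj, 'hello', {
  set: function(value) {
    this._hello = value;
  }
});
```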

Unlike with the getter, trying to read the value of a set-only property doesn't throw an error; it simply returns undefined. I'm not sure how I feel about the difference in behaviour here, but at the same time I don't think I've ever encountered a set-only property before. I think we can file this under “no big deal”.

Computed Property Names

Often you have to create an object with keys based on a variable. In that case, the traditional approach looks something like this:
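```js
// the key name is illustrative
var key = 'color';

var obj = {};
obj[key] = 'red';
```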

Functional? Yes. Also incredibly annoying. In ES6 you can define computed property names during object declaration, without separate statements, like so:
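```js
// computed key, same result as above
var key = 'color';

var obj = {
  [key]: 'red'
};
```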

Here’s another example:
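Any expression works inside the brackets, for instance:

```js
var i = 0;

var obj = {
  ['key' + ++i]: 'first',
  ['key' + ++i]: 'second'
};

// { key1: 'first', key2: 'second' }
```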

Shorthand Assignment

One of my personal favorite new syntax features is the shorthand assignment. Let's look at an example in ES5 first:
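```js
// variable names are my own
var name = 'StrongLoop';
var city = 'San Mateo';

var company = {
  name: name,
  city: city
};
```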

Sprinkle some ES6 magic and you have:
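```js
// same object as above
var company = { name, city };
```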

Basically, if the variable and the property name are the same, you can omit the right side. Nothing stops you from mixing and matching either:
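```js
// shorthand and regular properties mix freely
var company = {
  name,
  city,
  founded: 2013
};
```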

ES6 Today

How can you take advantage of ES6 features today? Over the last couple of years, using transpilers has become the norm; individuals and large companies alike no longer shy away from them. Babel is an ES6-to-ES5 transpiler that supports all of the ES6 features.

If you are using something like Browserify in your JavaScript build pipeline, adding Babel transpilation takes only a couple of minutes. There is, of course, support for pretty much every common Node.js build system like Gulp, Grunt and many others.

What About The Browsers?

The majority of browsers are catching up on implementing new features but not one has full support. Does that mean you have to wait? It depends. It’s a good idea to begin using the language features that will be universally available in 1-2 years so that you are comfortable with them when the time comes. On the other hand, if you feel the need for 100% control over the source code, you should stick with ES5 for now.


Getting Started with the Node.js LoopBack Connector for ArangoDB

This tutorial is the third in a series of posts that will help you get started with some of the many user contributed NoSQL connectors for LoopBack. In this series we will cover usage of the connectors for:

  • Couchbase Server
  • RethinkDB
  • ArangoDB

We’ve already covered Couchbase Server using loopback-connector-couchbase and RethinkDB with loopback-connector-rethinkdb. Today, we’ll be discussing connecting LoopBack and ArangoDB using another community contributed connector – loopback-connector-arango.


ArangoDB

ArangoDB is described as “A distributed free and open-source database with a flexible data model for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions.”

ArangoDB is a multi-model mostly-memory database with a flexible data model for documents and graphs. It is designed as a “general purpose database”, offering all the features you typically need for modern web applications.

Installing ArangoDB

Navigate to the ArangoDB download page to download ArangoDB for your operating system. As you can see, ArangoDB supports numerous platforms.


Once again, we’ll be developing on Mac OSX. You can install ArangoDB via a command-line app, Homebrew or even the Apple AppStore. ArangoDB recommends Homebrew so we’ll follow their recommendation.

Tip: Are you an OSX user but haven’t used Homebrew before? Start immediately! Homebrew is billed as the “missing package manager” for OSX and you need to install it right now. Think apt-get or yum for OSX.

Note: If you are looking for cloud hosting with ArangoDB, you also have the very convenient option of deploying on AWS or Azure as ArangoDB is available in both marketplaces.

Creating the test database

Now that we have ArangoDB installed, let’s run it! If you followed along above and installed using Homebrew, run this command in the terminal to start ArangoDB.
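Assuming the Homebrew install above, something like this should do it (the exact daemon path can vary by version):

```
# install, if you haven't already
brew install arangodb

# start the server in the background
/usr/local/sbin/arangod &
```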

Let's create a database in ArangoDB that we'll use for the remainder of the tutorial. Did you know that when we started ArangoDB, we also started a full-featured, built-in web interface that we can use for database and collection administration, statistics, ad-hoc queries and more? We're only going to use it to create our database, but you can review the ArangoDB documentation to see all the features of the web interface.

Navigate to http://localhost:8529/, click on the “DB” drop down on the top menu and select “Manage DBs”. On this screen, we’ll be able to add a new database.

Click “Add Database” and let's name our new database “loopback-example”. Click “Create” and that's it. We have a new database in ArangoDB. We'll fill a collection with some test data shortly, but first let's create our LoopBack application.

LoopBack – Creating the application

Like last time, we’ll again create our LoopBack application using the slc loopback application generator. The slc command line tool, which is Yeoman under the hood, can scaffold a full LoopBack application structure for us in just a few simple steps. Run slc loopback in your terminal.

Now, cd into the application directory and install the ArangoDB connector.
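Something along these lines, where the directory name is whatever you chose during scaffolding and the package name comes from the note below:

```
cd loopback-arango
npm install loopback-connector-arango --save
```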

Note: The GitHub repository and the npm registry for the original version of the connector are out of sync. The GitHub repository holds an old version (0.0.3) of the connector while npm has the most up-to-date (0.1.0) version. There have been some changes in LoopBack since this connector was last updated, so I've forked the code from npm and made a few changes to get it working again with LoopBack. Credit for the original loopback-connector-arango goes to nvdnkpr.


Creating the backend datasource definition

Alright, now that we have ArangoDB and the community connector installed, let's set up the datasource that will allow us to connect ArangoDB and LoopBack. We'll use slc again, but this time we'll tell slc that we want to create a new datasource.
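A sketch of the invocation (the generator will prompt for the rest):

```
slc loopback:datasource arangodb
```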

While slc does recognize a few community contributed connectors, it does not know the ArangoDB connector, so we’ll just select “other” and tell slc the name of the connector.

Note: Don’t forget that the name of the connector is loopback-connector-arango and not loopback-connector-arangodb.

Despite not being aware of the ArangoDB connector, the datasource generator will still give us a head start by creating a skeleton of the ArangoDB datasource. Fill in the rest of the arangodb section in server/datasources.json.

Note: Some connectors use host for the server and some use hostname. Note that this connector uses hostname.
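Putting that together, the finished arangodb entry in server/datasources.json might look like the following; treat the exact keys as a sketch based on the notes above, with the database we created earlier:

```json
{
  "arangodb": {
    "name": "arangodb",
    "connector": "loopback-connector-arango",
    "hostname": "localhost",
    "port": 8529,
    "database": "loopback-example"
  }
}
```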

Our datasource is ready to go, so now we can create some models that will represent the data that we’ll store in ArangoDB.

Creating the models

We’re going to use slc again, but this time we’re going to use it to start Arc.

Arc is a graphical UI for the StrongLoop API Platform that complements the slc command line tools for developing APIs quickly and getting them connected to data. Arc also includes tools for building, profiling and monitoring Node apps.

We’re going to start with building our application using Arc and we’ll touch on monitoring our app a little bit later.
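From the application directory:

```
slc arc
```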

This single command will start Arc and take you to the Arc starting page.
Click on “Composer”. This will take us to the Arc Composer where we can see the arangodb datasource we already created under the datasource menu.

Click on “Add New Model” under the model menu.

Enter the “Name” as “business”, the “Plural” as “businesses” and select “arangodb” as the “datasource”.

Next, enter the properties for our model as shown below, then click “Save Model”.
You just created a JSON model representation of our data in ArangoDB and exposed it via the LoopBack REST API without writing a line of code! That’s amazing. Check out the model in common/models in the application directory.

We have our model in LoopBack and now we’re ready to create the collection that will hold our data and populate it with some sample documents.

Creating test data part 2

One neat feature of loopback-connector-arango is that it implements LoopBack's auto-migrate functionality. Auto-migration can create a database schema based on our application's models. With relational databases this means that auto-migrate will create a table for a model and columns for each model property. Since we're using a NoSQL database here, auto-migrate will only create an ArangoDB collection for us, but it is still a nice feature to have included in a community contributed connector.

Note: Auto-migrate will drop an existing table if its name matches a model name! Do not use auto-migrate with an existing table/collection.

I’ve provided a simple script that will run automigrate on our ArangoDB datasource and create the business collection.
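A minimal sketch of such a script, assuming it lives in the application root next to the server directory:

```js
// automigrate.js: create the "business" collection from the model definition
var app = require('./server/server');
var ds = app.datasources.arangodb;

ds.automigrate('business', function(err) {
  if (err) throw err;
  console.log('business collection created');
  ds.disconnect();
});
```

Run it with node automigrate.js.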

You can head to http://localhost:8529/_db/loopback-example/_admin/aardvark/standalone.html#collections to see that our business collection has been created.

Now we can populate our business collection with some sample documents. I’ve provided another script to create sample data.

The cool thing to note about this script is that since we already created our datasource and model, we don’t have to go through the hassle again of using a different Node.js module to connect to ArangoDB, pass it the proper connection information, figure out how to use the module and insert documents. By defining our connection information and models for LoopBack, all we need to do is call “create” on our business model to insert documents. Inserting documents in bulk into ArangoDB can simply look like this!
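A hedged sketch of the idea; the sample fields are invented for illustration:

```js
// sampledata.js: bulk-insert sample documents through the LoopBack model
var app = require('./server/server');
var Business = app.models.business;

var samples = [
  { name: 'Acme Hardware', city: 'Brooklyn' },
  { name: 'Main Street Bakery', city: 'Albany' },
  { name: 'Harbor Coffee', city: 'Buffalo' }
];

// Model.create accepts an array, so a bulk insert is a single call
Business.create(samples, function(err, records) {
  if (err) throw err;
  console.log('Inserted %d documents', records.length);
});
```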

Querying data

We’re ready to query our API! Let’s start by using LoopBack’s built-in Swagger API explorer. Start your application using slc start and go to http://localhost:3000/explorer.

Expand the GET /businesses section and click on “Try it out!” to view the documents in our collection.


You’ll see that even though we didn’t provide an id or revision, ArangoDB automatically created those for us.

We can get individual documents by using the id that the connector created for us.

Note: ArangoDB’s id property is actually a database wide id that looks like collectionname/123456. Well, that id with a “/” is not very friendly to URLs like our REST API uses, so the connector implements two methods, fromDB and toDB, that manipulate the id to add or remove the collection name as necessary. So, when we call one of our endpoints with an id, we only need to pass the “123456” and not the collectionname/123456.
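So a single-document request might look like this, assuming LoopBack's default /api root:

```
curl http://localhost:3000/api/businesses/123456
```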

The connector supports numerous where filters as well such as equivalence
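For example (the property names in these filter examples are illustrative):

```
curl 'http://localhost:3000/api/businesses?filter[where][city]=Brooklyn'
```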

…and greater than.
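For instance:

```
curl 'http://localhost:3000/api/businesses?filter[where][employees][gt]=100'
```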

We can also use other filters with the connector such as limit…
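For example:

```
curl 'http://localhost:3000/api/businesses?filter[limit]=5'
```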

…and order.
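Such as:

```
curl 'http://localhost:3000/api/businesses?filter[order]=name%20ASC'
```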

Creating and updating data

Creating a document is as simple as sending a POST request to the business endpoint.
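For instance, with the same illustrative fields as before:

```
curl -X POST -H "Content-Type: application/json" \
  -d '{"name": "Hudson Books", "city": "Albany"}' \
  http://localhost:3000/api/businesses
```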

If we need to update the document, a call to PUT /businesses/{id} will take care of that.
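Along the lines of:

```
curl -X PUT -H "Content-Type: application/json" \
  -d '{"name": "Hudson Books", "city": "Troy"}' \
  http://localhost:3000/api/businesses/123456
```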

LoopBack automatically created all of these endpoints for us when we told it to expose our business model via the REST API!

Arc Profiler

Now that we’ve taken a look at the LoopBack ArangoDB connector, let’s take a quick side tour through the Arc Heap Profiler. Remember above that we used Arc Composer to create our models for the tutorial. Well, there’s much more to Arc than just Composer!

First, start Arc with slc arc, select “Profile” from the main menu and then click “Load” on the profile page.

Select “Heap Snapshot” and then click “Take Snapshot”. Wait a few seconds for the snapshot to build.

Select the snapshot created from the left-hand menu and type “ArangoDB” into the class filter box. Expand the ArangoDB class, datasource and models and you’ll see our business model.

Diving too much further into Arc Profiler is beyond the scope of this tutorial, but you can read more about it right on the StrongLoop blog at Using StrongLoop Arc to Profile Memory Leaks in SailsJS. Wait, what? Sails? That’s right! One of the excellent things about Arc is that it works with any Node.js application.

Conclusion

It is amazingly simple to get ArangoDB and LoopBack up and running on your machine. After installing ArangoDB and LoopBack, it only takes a few steps to get the two connected and only a few more to create models of your data and expose them via LoopBack’s REST API. Together they can form a very dynamic duo.

When you’re ready to take the next step, add Arc Profiler into the mix and you have a true powerhouse!

You can find the code for this example on GitHub.


From Bert Belder: The Future for Node.js Looks Bright

By now you've already heard how io.js was forked from the Node.js project last year, and that a Node foundation was announced earlier this year. Without going into much detail, a lot of people have been working hard to get both projects to a good place, and I'm pleased to report that it's working out!

Last Friday I sent the Node Advisory Board the e-mail below (slightly redacted for readability). Since we want to be serious about open governance, I figured it would be good to share it with a broader audience… so here you go:

……

People involved with Node and io.js,

After spending many hours in what seem like endless meetings it’s easy to be cynical about Node – it happens to me from time to time. But fundamentally I think we’re really in a good place.

Soon there will be a non-profit foundation that guarantees independence and provides funding that the project so dearly needs. There is a motivated community of individuals and companies contributing time and effort to continuous innovation.

The debate among developers working on Node and io.js is reaching a good outcome and we might soon be working together again. There will be a stable version of Node which is conservatively maintained, so users can confidently depend on it through the next Black Friday, and there will be a branch which releases frequently and is open to change, providing a canvas to paint on for those who want to make Node better.

Over the past year I’ve learned that both predictable stability and open innovation are really important for maintaining a healthy open source project and we’re in the unique position of enabling both in an institutionalized fashion.

Now what we really ought to do is make it easy for users to install both side-by-side so they can use whatever is appropriate for what they’re working on.

In the meantime Node has become a staple for web development – the most ubiquitous platform of our time – and continues to grow.

Thanks for all the effort you all have put into making this happen.

– Bert

……

StrongLoop is Committed to Node and io.js

StrongLoop has been a core contributor to Node and io.js since the company was founded in 2013. StrongLoop helps companies succeed with Node by creating tools for composing APIs, deploying, monitoring and securing Node apps in production.

Learn more >>


How-to Test Client-Side JavaScript with Karma

I have often quipped that if I were stuck on a desert island with only one npm module, I’d choose karma. The one place where you can’t avoid using JavaScript is in the browser. Even Dart needs to be transpiled to JavaScript to work on browsers other than Dartium. And code that can’t be tested should not be written. Karma is a widely-adopted command-line tool for testing JavaScript code in real browsers. It has a myriad of plugins that enable you to write tests using virtually any testing framework (mocha, jasmine, ngScenario, etc.) and run them against a local browser or in Sauce Labs’ Selenium cloud.

In this article, you’ll see how to use karma to test standalone JavaScript locally, to test DOM interactions locally, and to run tests on Sauce Labs. The code for all these examples is available on my karma-demo GitHub repo.

Your First Karma Test

Like gulp or metalsmith, karma is organized as a lightweight core surrounded by a loose confederation of plugins. Although you will interact with the karma core executable, you will typically need to install numerous plugins to do anything useful.

Suppose you have a trivial browser-side JavaScript file like below. This code is lib/sample.js in the karma-demo GitHub repo.
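The actual file in the repo may differ, but a stand-in along these lines is enough to follow along (it produces the 42 you'll see displayed later in this article):

```js
// lib/sample.js
function getAnswer() {
  return 42;
}
```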

Testing this in Node.js would be fairly simple. But, what if you wanted to test this in a real browser, like Chrome? Let’s say you have the following test file.
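A sketch that matches the stand-in above, using mocha's BDD interface and chai's assert:

```js
// test/sample.test.js
describe('getAnswer', function() {
  it('returns 42', function() {
    chai.assert.equal(getAnswer(), 42);
  });
});
```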

To run this test in Chrome, set up your package.json with karma and 3 plugins. As you might expect, karma-mocha is a karma plugin that enables you to use the mocha test framework, and karma-chrome-launcher enables karma to launch Chrome locally. The karma-chai package is a wrapper around the chai assertion framework, since neither mocha nor karma includes an assertion framework by default.
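One way to get all four into your devDependencies:

```
npm install karma karma-mocha karma-chrome-launcher karma-chai --save-dev
```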

Now that you have karma installed, you need a karma config file. You can run ./node_modules/karma/bin/karma init to have karma initialize one for you, but the karma config file you’re going to need is shown below.
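A minimal sketch of that config:

```js
// test/karma.mocha.conf.js
module.exports = function(config) {
  config.set({
    // mocha runs the tests, chai provides assertions
    frameworks: ['mocha', 'chai'],
    // load the code under test first, then the tests
    files: ['lib/sample.js', 'test/sample.test.js'],
    browsers: ['Chrome'],
    singleRun: true
  });
};
```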

One key feature of the above config file is that karma configs are Node.js JavaScript, not just JSON or YAML. You can do file I/O, access environment variables, and require() npm modules in your karma configs.

Once you have this config file, you can run karma using the ./node_modules/karma/bin/karma start test/karma.mocha.conf.js command. You should see a Chrome browser pop up and output similar to what you see below in the shell:
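The exact log lines vary by karma and browser version, but the run should end with something like:

```
Chrome 40.0.2214 (Mac OS X 10.10.1): Executed 1 of 1 SUCCESS (0.005 secs / 0.002 secs)
```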

If you see the “SUCCESS” message, that means your test ran successfully.

Testing DOM Interactions with ngScenario

Karma isn’t just useful for testing isolated JavaScript. With ngScenario and the karma-ng-scenario plugin you can test DOM interactions against your local server in a real browser. In this particular example, you will test interactions with a trivial page. However, ngScenario, as the name implies, was developed by the AngularJS team to test AngularJS applications, so you can use karma to test single page apps too.

Suppose you want to test the following page, which uses the sample.js file from the previous section to display the number “42” on the page.
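A stand-in for that page (the markup is illustrative):

```html
<!-- lib/sample.html -->
<html>
  <body>
    <div id="answer"></div>
    <script type="text/javascript" src="/sample.js"></script>
    <script type="text/javascript">
      document.getElementById('answer').innerHTML = getAnswer();
    </script>
  </body>
</html>
```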

In order to serve this page and the sample.js file, you need an HTTP server. You can put together a simple one using express as shown below.
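A few lines of express will do; this sketch serves both files out of the lib directory used earlier:

```js
// index.js
var express = require('express');
var app = express();

// serve sample.html and sample.js as static files
app.use(express.static(__dirname + '/lib'));

app.listen(3000);
console.log('Listening on http://localhost:3000');
```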

Navigate to http://localhost:3000/sample.html and you should see the number “42”. Now, let’s set up karma to test this.

In order to use ngScenario with karma, you need to install the karma-ng-scenario plugin.
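As before, it's an npm dev dependency:

```
npm install karma-ng-scenario --save-dev
```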

Once you have the karma-ng-scenario plugin, you will need to create a slightly modified karma config file. The git diff between this file and the previous section’s config file karma.mocha.conf.js is shown below.
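The real diff lives in the repo; in spirit it amounts to swapping the frameworks and files, and proxying requests through to the local express server (the proxies/urlRoot pattern is karma's standard approach for end-to-end tests):

```diff
-    frameworks: ['mocha', 'chai'],
-    files: ['lib/sample.js', 'test/sample.test.js'],
+    frameworks: ['ng-scenario'],
+    files: ['test/scenario.test.js'],
+    // forward non-karma requests to the express server
+    proxies: { '/': 'http://localhost:3000/' },
+    // move karma off the root path so the proxy can own it
+    urlRoot: '/__karma/',
-    singleRun: true
+    autoWatch: true
```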

That's all you need! Now, start your web server with node index.js (your web server needs to be running for ngScenario to work) and run ./node_modules/karma/bin/karma start test/karma.ng-scenario.conf.js to execute your tests.

The karma.ng-scenario.conf.js file is configured for karma to automatically re-run tests if any of the loaded files change. Suppose you changed the test file scenario.test.js to break:
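For instance, flipping the expected text (the selector and DSL calls sketch ngScenario's API):

```js
// test/scenario.test.js
describe('sample.html', function() {
  it('should display the answer', function() {
    browser().navigateTo('/sample.html');
    // passes with '42'; changed to '43' to demonstrate a failure
    expect(element('#answer').text()).toBe('43');
  });
});
```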

Karma will automatically re-run tests and show you the following output.

Note the AngularJS team considers ngScenario to be deprecated in favor of protractor. However, I still think ngScenario and karma have a key niche to fill in terms of testing JavaScript DOM interactions against a local server as part of a TDD workflow. If you're interested in working with me to re-imagine how ngScenario should work, shoot me an email at val [at] karpov [dot] io.

To the Cloud!

So far, you’ve only tested your code in Chrome. While testing in Chrome is better than not testing in the browser at all, it’s far from a complete testing paradigm. Thankfully, there’s Sauce Labs, a cloud Selenium service that you can think of as Amazon EC2 for browsers. You can tell Sauce to start Internet Explorer 6 running on Windows XP and give you full control over this browser for testing purposes. In order to get the tests in this section to work, you will have to sign up for an account with Sauce Labs to get an API key.

To integrate karma with Sauce, you’re going to need the karma-sauce-launcher plugin.
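Again via npm:

```
npm install karma-sauce-launcher --save-dev
```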

To run the ngScenario test from the previous section on IE9 and Safari 5 using Sauce, create another karma config file. The git diff between this new file and the previous section's config file is shown below.
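In spirit (credentials come from environment variables, and customLaunchers is how karma-sauce-launcher names browser/OS combinations; the exact values are a sketch):

```diff
+    sauceLabs: {
+      username: process.env.SAUCE_USERNAME,
+      accessKey: process.env.SAUCE_ACCESS_KEY
+    },
+    customLaunchers: {
+      sl_ie9: { base: 'SauceLabs', browserName: 'internet explorer', version: '9' },
+      sl_safari5: { base: 'SauceLabs', browserName: 'safari', version: '5' }
+    },
-    browsers: ['Chrome'],
+    browsers: ['sl_ie9', 'sl_safari5'],
-    autoWatch: true
+    autoWatch: false,
+    singleRun: true
```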

This is all you need to do. Karma handles setting up an SSH tunnel for you, so the Sauce browser can make real requests to your local server.

Note the above example does not use autowatch. This is because, in my experience, end-to-end tests run on Sauce Labs are too slow for the type of fast feedback I'd like from the 'test on save' paradigm. These heavy end-to-end tests are best run through a CI framework like Travis or Shippable.

Conclusion

Karma is a powerful and extensible tool for testing client-side JS, and very much deserves its spot on the npm home page. If there’s one npm module I can’t write JS without, it’s karma. Karma enables you to test standalone JavaScript or DOM integrations, locally or in the Sauce Labs cloud. With karma, you have no excuse for half-answers like “I think so” when your boss asks you if the site works in IE8.

Like this article? Chapter 9 of my upcoming book, Professional AngularJS, is a detailed guide to testing AngularJS applications. It includes more detail about using karma and ngScenario to test AngularJS applications, as well as protractor.

Getting Started with the RethinkDB Connector for the LoopBack Node.js Framework

This tutorial is the second in a series of posts that will help you get started with some of the many user contributed NoSQL connectors for the Node.js LoopBack framework. In this series we will cover usage of the connectors for:

  • Couchbase Server
  • RethinkDB
  • ArangoDB

Last week we covered Couchbase Server using the community contributed loopback-connector-couchbase. In this post, we’ll review connecting RethinkDB and LoopBack using another community contributed connector: loopback-connector-rethinkdb.


RethinkDB

RethinkDB is described as “the first open-source, scalable JSON database built from the ground up for the real-time web. It inverts the traditional database architecture by exposing an exciting new access model – instead of polling for changes, the developer can tell RethinkDB to continuously push updated query results to applications in real-time. RethinkDB’s real-time push architecture dramatically reduces the time and effort necessary to build scalable real-time apps.”

We’ll primarily focus on connecting RethinkDB to LoopBack in this getting started tutorial, but we’ll include a little bonus content at the end to discuss how to use the RethinkDB real-time features with LoopBack.

Installing RethinkDB

RethinkDB offers packages for numerous flavors of Linux as well as OSX, all of which you can find on the main download page.

There are official packages for:

  • Ubuntu
  • OSX
  • CentOS
  • Debian

Additionally, there are community contributed packages for:

  • Arch Linux
  • openSUSE
  • Fedora
  • Linux Mint
  • Raspbian

We'll be developing on OSX, so navigate to the OSX download page. In addition to building from source, we can also install RethinkDB using the official installer or by using Homebrew. We're going to use the installer to get RethinkDB up and running for the purposes of our tutorial. Simply download the disk image and run the rethinkdb-2.0.1.pkg file.


The installer will guide you through the remaining steps in order to get RethinkDB installed.

To start the RethinkDB server, run the rethinkdb command in your terminal.

Creating test data

We'll need some data in RethinkDB to test our application. I've included some sample JSON and a short script that we'll use to populate our database. RethinkDB creates a test database automatically during installation, which we'll use. You can find the script and sample data in the data directory in the GitHub repository. Run the sampledata.js file to create some sample documents. This will create a customer table and insert a few documents that we'll use for testing.
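The repo has the real script; here is a hedged sketch of what it does with the plain rethinkdb driver (the document fields mirror the query examples later in this post):

```js
// data/sampledata.js: create the customer table and insert sample documents
var r = require('rethinkdb');

var customers = [
  { id: 1, name: 'John Smith', balance: 1000 },
  { id: 2, name: 'Jane Doe', balance: 2500 },
  { id: 3, name: 'Carl Chatfield', balance: 3000 }
];

r.connect({ host: 'localhost', port: 28015, db: 'test' }, function(err, conn) {
  if (err) throw err;
  r.tableCreate('customer').run(conn, function() {
    // ignore "table exists" errors on re-runs
    r.table('customer').insert(customers).run(conn, function(err, result) {
      if (err) throw err;
      console.log('Inserted', result.inserted, 'documents');
      conn.close();
    });
  });
});
```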

LoopBack – Creating the application

Now, we’ll create our application using the slc loopback application generator. The slc command line tool, which is Yeoman under the hood, can scaffold a full LoopBack application structure for us in just a few simple steps. Run slc loopback in your terminal.

Now that we have our skeleton and all we need to run a basic LoopBack application, let’s install loopback-connector-rethinkdb from the npm registry.
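From the project root:

```
npm install loopback-connector-rethinkdb --save
```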

Creating the backend datasource definition

Alright, now that we have the RethinkDB server and loopback-connector-rethinkdb installed, let's set up our RethinkDB datasource. We'll start by again using the extremely useful slc from the command line. This time we'll tell slc that we want to create a new datasource.
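As before, a sketch of the invocation:

```
slc loopback:datasource rethinkdb
```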

While slc does recognize some community contributed connectors, it does not know the RethinkDB connector, so we'll just select other and tell slc the name of the connector. Also note that since this is a community contributed connector, StrongLoop does not have a generator for the datasource definition. What you'll get automatically generated for you is a datasource skeleton.

We’ll need to fill in the remaining information to connect to RethinkDB.

Note: This connector is a bit different from others as you are required to pass in connection information in the form of a URL that the Node.js url module can parse. So, instead of defining the connection in the form of an object, we need to create a connection string in the form of http://username:password@host:port/database. Since we’re not using authentication, we can ignore the username and password.
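So the finished entry in server/datasources.json might look like this; 28015 is RethinkDB's default client port and test is the database holding our sample data (treat the exact keys as a sketch):

```json
{
  "rethinkdb": {
    "name": "rethinkdb",
    "connector": "rethinkdb",
    "url": "http://localhost:28015/test"
  }
}
```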

Creating the models

Now we’re ready to create the LoopBack models that represent the sample data we added to RethinkDB. There are three main ways in which you can create LoopBack models, all of which were reviewed in my previous post on Couchbase Server. Since we’ve already used the command line, let’s switch it up this time and use slc arc.

StrongLoop Arc is a graphical UI for the StrongLoop API Platform that complements the slc command line tools for developing APIs quickly and getting them connected to data. Arc also includes tools for building, profiling and monitoring Node apps.

Start Arc by running slc arc from the command line.

Your browser will automatically open to http://localhost:55988/#/. Click on Composer to launch the LoopBack model composer.

Click on “Add New Model” in the menu on the left, enter “customer” as the model name, “customers” as the plural name and select “rethinkdb” as the datasource.

Next, we need to enter the properties for our customer model. Fill in the names and types for each property as shown below.

Click “Save Model” and go check out the common/models directory inside our project. Arc has automatically created the files necessary to represent our customer model using the properties we entered in the GUI. Simple!

Start the LoopBack application with slc start (slc to the rescue yet again) and navigate to http://localhost:3000/explorer in order to use the built in Swagger API explorer to view our customer model.

Click “Try it out!” under GET /customers to see our sample data.

Querying data

We’ve seen that we can view our customer model via the Swagger explorer. Now, let’s take a look at some of the more advanced features of API interaction using loopback-connector-rethinkdb.

We can get a single document by calling GET /customers/{id}.
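Assuming LoopBack's default /api root and the ids from our sample data:

```
curl http://localhost:3000/api/customers/1
```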

In addition to equivalence, the connector supports the where filter types between, gt, gte, lt, lte, inq, nin, neq, like and nlike. We’ll cover a few examples here, but you can see many more in the LoopBack documentation.

Find documents where the balance is greater than 2000.
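For example:

```
curl 'http://localhost:3000/api/customers?filter[where][balance][gt]=2000'
```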

Use like to find documents that match a certain expression.
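Something like the following; the exact pattern semantics depend on the connector:

```
curl 'http://localhost:3000/api/customers?filter[where][name][like]=John'
```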

In addition to using the format above, you can also use stringified JSON with the REST API to create your filters. Here’s an example with the inq operator.
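For instance:

```
curl 'http://localhost:3000/api/customers?filter={"where":{"id":{"inq":[1,2]}}}'
```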

Of course, the connector also supports other filter operations like order, limit, and fields.

Creating data

When we initially created our application, LoopBack also generated a POST endpoint for our customer resource so that we can easily create new customer documents.

Note: We provided values for the document id when we loaded our initial sample data, but if you do not pass in an id field when creating a document, RethinkDB will automatically generate one for you.
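A sketch, with illustrative field values:

```
curl -X POST -H "Content-Type: application/json" \
  -d '{"name": "Sam Jones", "balance": 1500}' \
  http://localhost:3000/api/customers
```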

Updating data

Updating a document is as easy as creating or querying – simply send a PUT request to the individual customer resource using the document id.
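For example:

```
curl -X PUT -H "Content-Type: application/json" \
  -d '{"balance": 9999}' \
  http://localhost:3000/api/customers/1
```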

RethinkDB real-time and LoopBack

Now that we’ve covered the basics of creating a LoopBack API application and connecting it with RethinkDB, let’s look at how we could possibly use the RethinkDB real-time features with LoopBack.

Changefeeds lie at the heart of RethinkDB’s real-time functionality. They allow clients to subscribe to changes on a table, document or even a specific query. Our example will cover how we can subscribe to changes on a table and get notifications in real-time when a document is created or updated using the LoopBack REST API.

First, we’ll create a simple script that will listen to changes on our customer table and output notifications to the console. The portion of the script that listens for changes is one line of code. That’s how simple it is to implement real-time functionality in RethinkDB. You can review the script in this tutorial’s Github repository.
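A hedged sketch of such a script; the changes() call is the single line doing the real-time work:

```js
// rethink-changes.js: print a notification for every change on the customer table
var r = require('rethinkdb');

r.connect({ host: 'localhost', port: 28015, db: 'test' }, function(err, conn) {
  if (err) throw err;
  // subscribe to the changefeed (this is the one line)
  r.table('customer').changes().run(conn, function(err, cursor) {
    if (err) throw err;
    cursor.each(function(err, change) {
      console.log(JSON.stringify(change, null, 2));
    });
  });
});
```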

Make sure RethinkDB and the LoopBack application are started and run this in one terminal window to start the script and listen for changes on the customer table.
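With the sketch above saved as rethink-changes.js:

```
node rethink-changes.js
```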

Next, open up another terminal window and insert a document into the customer table using our REST API.
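Any POST will do, for example:

```
curl -X POST -H "Content-Type: application/json" \
  -d '{"name": "Pat Lee", "balance": 100}' \
  http://localhost:3000/api/customers
```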

You'll see in the terminal where we started the rethink-changes.js script that RethinkDB immediately recognized that we added a new document and sent a notification to the console. Real-time notifications with one line of code!

You'll also see that RethinkDB returned an old_val object which is null. That is because this is a brand new document, so there is no “old value” to reference.

Let’s update our document and see what kind of notification we get now.
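For instance, bumping the balance on the document we just created (the id will be whatever RethinkDB generated):

```
curl -X PUT -H "Content-Type: application/json" \
  -d '{"name": "Pat Lee", "balance": 200}' \
  http://localhost:3000/api/customers/<generated-id>
```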

Now we get a notification from RethinkDB that includes both the old values and new values of the document that we updated.

Incredible! Is your head spinning with ideas on how you can use LoopBack along with RethinkDB’s real-time functionality to create awesome applications? I know mine is!

Conclusion

Even though the documentation for loopback-connector-rethinkdb is missing, the connector is solid and this article will give you a head start on using LoopBack with RethinkDB. If you stop here and only use this connector with LoopBack and RethinkDB for your REST API, you've already created an amazingly simple, yet powerful application.

If you want to take the next step and create applications using LoopBack along with RethinkDB’s real-time functionality, I hope this article has you excited to dive in and imagine your own applications that can be created using these two amazing technologies!

You can find the code for this article on GitHub.

What’s next? More real-time goodness

If you are working on a Node project which needs real-time communication with other Node and/or Angular apps, devices and sensors, check out StrongLoop's new unopinionated Node.js pub-sub for mobile, IoT and the browser!