Making the most of JavaScript’s “future” today with Babel


From CoffeeScript to ClojureScript to PureScript to CobolScript to DogeScript (woof woof!), JavaScript is the write-once-run-anywhere target for many forms and styles of programming. Yet the biggest compile-to-JavaScript language today actually isn’t any of these adaptations; it’s JavaScript itself.

Why?

Universality

If you are a client-side developer, it’s likely you’ve already enhanced older platforms with newer features. Take ES5 for example. Supporting IE8 was a pain, but es5-shim made the JavaScript part a lot smoother.

One of the things that attracted me to Node (circa 2009) was the ability to break away from browser support humdrum. It was blissful for a while, but Node is not exempt from legacy JavaScript compatibility issues anymore. Say you are a library author who is using generators in io.js but wants to support Node 0.10. Or you are sharing code between the client and server (like in a React application) and want to use classes or destructuring. The fact is, developers want to utilize new features without having to be concerned about legacy support. Legacy support isn’t a passion of mine, and I bet it isn’t one of yours.

JavaScript should be a write-once, run-everywhere language

The JavaScript language is developing faster (ES7/2016 anyone?) than it ever has. Libraries are taking advantage of the new features, even though Node and browsers haven’t yet settled into the new standards (take React’s adoption of ES6 classes, for example). I expect the trend to continue.

The good news is that it is easy to start taking advantage of new language features now with Babel and have them work across legacy and current platforms. Babel has built-in source-map support for browsers and proper stack traces for Node, so it gets out of the way and lets you focus on the ES6 code.


Babel isn’t the only source-to-source compiler for ES. Traceur from Google is another example. For clarity, we will just focus on Babel here.

Practically speaking: Developing with ES6 and beyond

The remainder of this article will be a practical launching point if you haven’t used Babel before and perhaps fill in some gaps if you have. We will build a simple app to provide context. The final product is located on GitHub.

Here is a starting structure.
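A minimal layout might look like this (the file names under modules/ are simply the ones we’ll create later in this article):

```
.
├── build/
├── modules/
│   ├── index.js
│   └── utils/
│       ├── __tests__/
│       │   └── sleep-test.js
│       └── sleep.js
├── .babelrc
├── .eslintrc
├── index.js
└── package.json
```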

  1. The build folder contains the built assets for Node. This is used in production (or for published modules) and typically ignored in version control.
  2. The modules folder contains the application components themselves.
  3. The .eslintrc file is lint configuration.
  4. The .babelrc file is Babel configuration.
  5. The package.json contains scripts for running and building the project
  6. The index.js file sets up Babel hooks for development.

Go ahead and create these files and directories. To create a package.json file quickly, just run npm init -y.

Installing dependencies

Now let’s get our dependencies installed and saved. Run the following to install our development dependencies:
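Assuming the packages described below, the install command looks something like this:

```shell
npm install --save-dev babel babel-eslint eslint eslint-config-standard babel-tape-runner blue-tape tape
```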

  1. babel is the core Babel package that lets us set up our development environment.
  2. babel-eslint is a parser for eslint that teaches the linter about experimental features that aren’t in ES6.
  3. eslint is a linting tool, and eslint-config-standard is a set of eslint configurations implementing the JS Standard style, which we’ll write our code against.
  4. babel-tape-runner hooks in babel when running tape and blue-tape is an extension to tape that adds promise support (which will come in handy in a bit).

Now that we have necessary dependencies to start our development, let’s install one more that will be used for production:
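The production dependency is installed the same way, saved to dependencies instead:

```shell
npm install --save babel-runtime
```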

The babel-runtime package allows us to require only the features we need when distributing our application without polluting the global scope.

Configuring Babel

Let’s look at the .babelrc file next. Having a .babelrc file allows you to configure Babel in one spot in your project, and it will work regardless of how it’s run. Create a .babelrc file with the following content:
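A Babel 5-era configuration matching the two options discussed below might look like this (the exact values are a sketch; we use stage 1 because the async/await proposal we rely on later sits at stage 1):

```json
{
  "stage": 1,
  "loose": "all"
}
```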

There are a number of options for configuration, but we will focus on two:

  1. The stage option defines what minimum proposal stage you want to support. By default, Babel provides the functionality found in the ES6 standard. However, Babel also includes support for language proposals for the next standard. This is pretty cool because it allows you to test drive features and give feedback to implementers as a proposal goes through standardization. Specification proposals are subject to change in breaking ways or fizzle out altogether. The higher the stage, the further along the specification is in the standardization process. You can view all of the proposals supported on the Experimental page. We will use the async/await proposal in our application.
  2. The loose option will generate cleaner and faster output as it won’t check ECMA specification fringe cases that are likely not to appear in your code. However, make sure you are aware of the edge cases before you use loose mode. This is handy for production performance as well.

Building our application

Now that we have Babel configured, let’s write some code! First, set up the root index.js file for development purposes with the following code:
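A sketch of that file, matching the two steps described below:

```javascript
// index.js — development entry point
require('babel/register'); // registers Babel, .babelrc config, and the polyfill
require('./modules');      // everything required from here on is transpiled on the fly
```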

  1. The require('babel/register') line registers Babel, pulls in our .babelrc configuration and also includes a polyfill for ES6 extensions for native objects like Number.isNaN and Object.assign.
  2. Now that Babel is registered, any file we require after that will be transpiled on the fly. So, in our case, we require our application with require('./modules').

Next, let’s create an entirely lame app that makes use of ES6 and the experimental async/await proposal. Put the following code in modules/index.js:
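Here is a sketch of such an app; the exact log messages are illustrative, but it uses the pieces described below (the os hostname function, our sleep module, and async/await):

```javascript
// modules/index.js
import { hostname } from 'os';
import sleep from './utils/sleep';

async function main () {
  console.log(`Hello from ${hostname()}!`);
  await sleep(1000); // pause for one second
  console.log('OK, bye!');
}

main();
```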

I told you it was lame. However, we are making use of the new import syntax to pull in the hostname function from the os module and include a sleep module (which we’ll write in a bit). We are also using the async/await proposal to write clean asynchronous code (currently at stage 1 in the standardization process).

Let’s write our sleep module next. Add the following code to modules/utils/sleep.js:
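A minimal version of that helper might be:

```javascript
// modules/utils/sleep.js
export default function sleep (ms) {
  return new Promise(function (resolve) {
    setTimeout(resolve, ms); // resolve once the timeout fires
  });
}
```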

This little helper function turns setTimeout into a promise-returning function that resolves when the timeout is completed. Since the await syntax we used above awaits promises, this allows us to write succinct delay code.

Let’s see if our application works! Run the following from the project root to test:
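Since index.js registers Babel for us, plain node is all we need:

```shell
node index.js
```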

Your output should be similar to this:

Exciting right?! Don’t answer that.

Now that we have an application to play with, let’s look at a few more tools you likely use in day-to-day development and how they translate when using Babel.

Testing Babel code

Let’s add a test for our sleep utility we developed in the last section. Inside modules/utils/__tests__/sleep-test.js let’s add the following:
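A sketch of that test, using blue-tape’s promise support (the exact assertion is illustrative):

```javascript
// modules/utils/__tests__/sleep-test.js
import test from 'blue-tape';
import sleep from '../sleep';

test('sleep resolves after roughly the given delay', async function (t) {
  const start = Date.now();
  await sleep(20);
  t.ok(Date.now() - start >= 15, 'waited about 20ms');
});
```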

Notice how we are using async/await and ES6 syntax in our test suite just like in our application code. Let’s add the following script to our package.json file in order to run this:
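An excerpt of the scripts section; the glob pattern is an assumption about where the tests live:

```json
{
  "scripts": {
    "test": "babel-tape-runner modules/**/__tests__/*-test.js"
  }
}
```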

Now we can run:
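npm picks up the test script from package.json:

```shell
npm test
```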

And we will get the following output:

Groovy. We can use Babel for tests as well as application code.

Linting Babel

Let’s turn to our .eslintrc file next and add the following:
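A configuration matching the four points described below might look like:

```json
{
  "extends": "standard",
  "parser": "babel-eslint",
  "env": {
    "node": true,
    "es6": true
  },
  "ecmaFeatures": {
    "modules": true
  }
}
```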

  1. The extends line hooks up JS Standard rule definitions.
  2. The parser line tells eslint to use babel-eslint for parsing instead of the default parser, allowing us to parse experimental JavaScript features.
  3. The env lines let eslint know that we are using Node and ES6 features.
  4. By default the es6 environment enables all ES6 features except modules, so we enable that as well in the ecmaFeatures block.

Let’s add a script to our package.json file for linting.
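For example (the path assumes our code lives under modules/):

```json
{
  "scripts": {
    "lint": "eslint ./modules"
  }
}
```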

And we then can run:
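The lint script runs through npm like any other:

```shell
npm run lint
```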

Which will give us no output currently as there aren’t any linting errors.

Running Babel in production

Our index.js is handy for running Babel in development as it’s all in-memory and we don’t need a manual compilation step. However, that isn’t ideal for production for a couple of reasons:

  1. Start up times are slower as the code base needs to be compiled in-memory first. Time increases with larger code bases.
  2. Second is that “in-memory” bit. We will have extra memory overhead if we do it this way; it will vary depending on the project size and dependencies.

We can add a build step for production that can be run before publishing to npm or as part of continuous integration. Let’s add a couple more scripts to our package.json file:
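A sketch of those scripts; the asset-copy step here is a simplified assumption, so adjust it to your project’s actual assets:

```json
{
  "scripts": {
    "clean": "rm -rf build",
    "build": "npm run clean && mkdir -p build && cp modules/*.json build/ && babel modules --out-dir build --optional runtime"
  }
}
```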

  1. The clean script just cleans out our previous build.
  2. The build script compiles the app. First, it cleans. Then, it copies any assets (including any .json files) to the build directory so they can be referenced properly. Finally, it runs the babel command to build all the JavaScript files in modules and puts the output in the build directory.

We also include an additional configuration option for Babel called runtime. The runtime option won’t pollute the global scope with language extensions like the polyfill that is used when calling require('babel/register') above. This keeps your packages playing nicely with others.

Let’s try a build by running:
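```shell
npm run build
```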

You should get the following output referring to the compiled files:

Now we can run our precompiled version with the following command:
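This time we point node directly at the compiled output:

```shell
node build/index.js
```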

And we should get the same output as we did when we ran in development.

Now that we’ve done a build, poke around at the files in the build directory and see how they compare with the originals.

Source maps in production

Although loose mode (which we enabled in the Configuring Babel section above) will generate cleaner and faster output, you may still want to use source maps in production. This allows you to get at the original line numbers in stack traces. To do this, change your babel command to:
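With the Babel 5 CLI that means adding the source-maps flag (shown here with the runtime optional as well; double-check the flag names against your installed version):

```shell
babel modules --out-dir build --source-maps --optional runtime
```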

You will also need the source-map-support package from npm in order for proper stack traces to appear in your error messages.

To enable it, add the following at the top of build/index.js:
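This is the standard one-line hook from the source-map-support package:

```javascript
require('source-map-support').install();
```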

Wrapping up

Babel allows you to write ES6 and beyond today and have it work across different versions of Node, as well as across different browsers on the client side (see http://www.2ality.com/2015/04/webpack-es6.html for an example). The most exciting thing for me has been building universal JavaScript applications that share most of their code, all written in ES6.

PS: Syntax and Babel

Let’s quickly talk about your text editor before we go, shall we? Lots of the new constructs won’t be highlighted properly when you start using Babel. Thankfully, the community has this covered: the editors below all have plugins with good support for things like JSX and Flow, so you should definitely switch to one of them if you haven’t already.

  1. Sublime Text
  2. Atom
  3. Vim
  4. Emacs

An Introduction to JavaScript ES6 Iterators

ECMAScript 2015 (formerly known as ES6) introduces a brand new concept of iterators, which allows us to define sequences (limited and otherwise) at the language level.

Let us break it down a bit. We are all familiar with the basic for loop, and most of us know its less popular cousin for-in. The latter can be used to help us explain the basics of iterators.

There are multiple problems with for-in, but one of the biggest is that it gives no guarantee of order. We will solve this with iterators below.

for-of

for-of is the new language syntax that ES6 introduces to work with iterators.

What we want to see here is an implementation that guarantees the order of keys. In order for an object to be iterable, it needs to implement the iterable protocol, meaning that the object (or one of the objects up its prototype chain) must have a property with a Symbol.iterator key. That is what for-of uses, and in this specific example it would be table[Symbol.iterator].

Symbol.iterator is another ES6 addition that we will discuss in depth in another article. For now, think of this as a way to define special keys that will never conflict with regular object keys.

The table[Symbol.iterator] object member needs to be a function conforming to the iterator protocol, meaning that it needs to return an object like: { next: function () {} }

Each time the next function is called by the for-of loop it needs to return an object that looks like {value: …, done: [true/false]}. The full implementation of our sorted key iterator looks like this:
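A reconstruction of that implementation might look like this (the sample `table` data is illustrative):

```javascript
const table = { m: 3, a: 1, x: 2 };

table[Symbol.iterator] = function () {
  // Eagerly collect and sort the keys when the iterator is created
  const keys = Object.keys(this).sort();
  const obj = this;
  let index = 0;
  return {
    next: function () {
      if (index >= keys.length) {
        return { value: undefined, done: true };
      }
      const key = keys[index++];
      return { value: [key, obj[key]], done: false };
    }
  };
};

for (const [key, value] of table) {
  console.log(key, value); // a 1, then m 3, then x 2
}
```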

Laziness

Iterators allow us to delay execution until the first call to next. In our example above the moment the iterator is called, we immediately perform the work of getting and sorting the keys. What if next is never actually called? That’s going to be a wasted effort. Let’s optimize it:
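One lazy variant defers the key collection until the first call to next() (again with illustrative sample data):

```javascript
const table = { m: 3, a: 1, x: 2 };

table[Symbol.iterator] = function () {
  const obj = this;
  let keys = null; // not computed yet
  let index = 0;
  return {
    next: function () {
      if (keys === null) {
        // Defer the get-and-sort work until someone actually asks for a value
        keys = Object.keys(obj).sort();
      }
      if (index >= keys.length) {
        return { value: undefined, done: true };
      }
      const key = keys[index++];
      return { value: [key, obj[key]], done: false };
    }
  };
};
```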

Difference between for-of and for-in

It’s important to understand the difference between for-of and for-in. Here’s a simple but very self explanatory example:
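For instance (the sample list and the extra property are illustrative):

```javascript
const list = [3, 5, 7];
list.foo = 'hello'; // a non-index property

const inKeys = [];
for (const key in list) {
  inKeys.push(key); // '0', '1', '2', 'foo' — for-in visits every enumerable key
}

const ofValues = [];
for (const value of list) {
  ofValues.push(value); // 3, 5, 7 — for-of walks the array iterator
}
```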

As you can see, the for-of loop only prints the list values, omitting all other properties. This is the result of the array iterator returning only the expected items.

Built-in iterables

String, Array, TypedArray, Map and Set are all built-in iterables, because their prototype objects all have a Symbol.iterator method.

Spread construct

The spread operator also accepts an iterable, so you can do some neat tricks with it:
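A few examples of spreading built-in iterables:

```javascript
const chars = [...'hello'];                       // ['h', 'e', 'l', 'l', 'o']
const unique = [...new Set([1, 1, 2, 3, 3])];     // [1, 2, 3] — deduplication in one line
const pairs = [...new Map([['a', 1], ['b', 2]])]; // [['a', 1], ['b', 2]]
```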

Infinite iterator

Implementing an infinite iterator is as simple as never returning done: true. Of course, care must be taken to avoid an infinite loop.
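For example, an iterator over all the natural numbers; the consumer is responsible for breaking out of the loop:

```javascript
const naturals = {
  [Symbol.iterator]() {
    let n = 0;
    // never returns done: true
    return { next: () => ({ value: n++, done: false }) };
  }
};

const first = [];
for (const n of naturals) {
  if (first.length === 5) break; // without this, the loop would never end
  first.push(n);
}
console.log(first); // [0, 1, 2, 3, 4]
```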

Generators

If you haven’t familiarized yourself with ES6 generators yet, do check out the MDN documentation. In a couple of words, generators are functions that can be exited and later re-entered, with their context (variable bindings) saved across re-entrances, and they are among the most talked about ES6 features. Generators are both iterators and iterable at the same time. Let’s take a look at a simple example:
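A minimal generator, showing both sides of that claim:

```javascript
function* counter () {
  yield 1;
  yield 2;
  yield 3;
}

// As an iterator: each next() resumes the function at the last yield
const it = counter();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }

// As an iterable: generators work with spread and for-of
const rest = [...counter()]; // [1, 2, 3]
```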

With that in mind, we can rewrite our original ordered table keys iterator using generators like so:
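Using a generator, the whole iterator body collapses to a few lines (sample data again illustrative):

```javascript
const table = { m: 3, a: 1, x: 2 };

table[Symbol.iterator] = function* () {
  const keys = Object.keys(this).sort();
  for (const key of keys) {
    yield [key, this[key]]; // each yield becomes one { value, done: false } result
  }
};

for (const [key, value] of table) {
  console.log(key, value); // a 1, then m 3, then x 2
}
```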

Bottom Line

Iterators bring a whole new dimension to loops, generators and value series. You can pass them around, define how a class describes a list of values, create lazy or infinite lists and so on.

ES6 Today

How can you take advantage of ES6 features today? Using transpilers in the last couple of years has become the norm. People and large companies no longer shy away. Babel is an ES6 to ES5 transpiler that supports all of the ES6 features.

If you are using something like Browserify in your JavaScript build pipeline, adding Babel transpilation takes only a couple of minutes. There is, of course, support for pretty much every common Node.js build system like Gulp, Grunt and many others.

What About The Browsers?

The majority of browsers are catching up on implementing new features but not one has full support. Does that mean you have to wait? It depends. It’s a good idea to begin using the language features that will be universally available in 1-2 years so that you are comfortable with them when the time comes. On the other hand, if you feel the need for 100% control over the source code, you should stick with ES5 for now.

 

Build Real-Time Node.js Apps with Angular LiveSet and LoopBack

What is LiveSet?

LiveSet is an AngularJS module that makes it easy to build real time applications. We created it so you can focus on building your app instead of cobbling together several libraries and frameworks. Unlike other real-time libraries and frameworks, we didn’t build LiveSet on top of our own proprietary protocols and transports. Instead we used the HTML5 EventSource Web API. This means you aren’t locked into our client or server and can use existing tooling (for example, Chrome’s EventStream tab–see below).


Below is a screen recording of several basic LiveSet examples. These examples illustrate just how much you can accomplish with LiveSet and LoopBack in 10-20 lines of JavaScript.


 

The LiveSet API

The LiveSet module includes two factories you can inject into your AngularJS controllers and services.

createChangeStream(src)

This function returns a ChangeStream, which extends the Node.js PassThrough stream with some basic JSON parsing support. When you provide a src argument (an EventSource), the stream will add event listeners to it and write data from the EventSource to the stream.

LiveSet(data, changes, options)

This is the LiveSet constructor. It requires an array of initial data and a ChangeStream.

Let’s see the code!

Favorite Colors


This is a snippet from the colors example that demonstrates creating a basic LiveSet and wiring it to an EventSource. The Color service is generated by the loopback-angular-sdk.

Changes made using the Color service will be pushed to other clients listening on the same change-stream.

Live Draw


The drawing example creates a LiveSet in a similar way. The draw method uses a service provided by the loopback-angular-sdk to create additional points in the drawing. This data is streamed to other browser clients.

Streaming Chart


The streaming chart does not use the LiveSet class at all. It demonstrates how to stream arbitrary JSON data to browser clients.

See the entire chart example code in the Angular LiveSet example.

Server-sent events

LiveSet uses HTML5’s EventSource web API to accept data pushed from the server (server-sent events). Other similar libraries and frameworks use WebSockets or HTTP long polling (among other strategies). LiveSet does not require HTTP requests to be upgraded to WebSockets, which means it is compatible with load balancers and proxies that don’t support WebSockets. It also means most modern browsers (see below) can receive data natively instead of requiring a large JavaScript dependency.

To send changes to objects in your LiveSet from your server, your server should support sending a ChangeStream using the EventSource protocol. LoopBack, the Node.js API framework, supports this natively. You can see an implementation with the /my-data/change-stream route in the example code.

Most browsers natively support EventSource and Server-sent events. For IE and other browsers that do not support it natively, you can use one of several polyfills.

Creating the Favorite Colors Example

Here is a basic tutorial that walks you through creating a simple AngularJS application using LiveSet. It assumes you know how to set up a basic Angular application and its dependencies.

Step 1: Dependencies

You will need the following Node modules. Take a look at their getting started guides and READMEs if you are not familiar with setting them up:

Also the following Bower packages:

First you will need a basic AngularJS application running in a browser including the dependencies mentioned above. In the next steps I’ll walk through using the lbServices and LiveSet modules to build the application.

Step 2: The API

The Favorite Colors example was created first by running slc loopback to scaffold a LoopBack API server. If you are unfamiliar with creating LoopBack apps read more about it on the LoopBack site.

Using the slc CLI tool

Once you have the LoopBack API scaffolded, you can add a model. All we need for this app is a basic Color model.
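With the slc CLI, the model generator looks like this:

```shell
slc loopback:model Color
```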

Follow the CLI instructions and create the property “val”. Select “string” for the type. Add another property named “votes”. This should be a “number”.

Using the StrongLoop Arc UI

Alternatively, you can use StrongLoop’s Arc to create the model. Arc is a fully featured UI that allows you to compose APIs very quickly (as well as many other cool things). Open it by running:
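The Arc UI is started from the slc CLI:

```shell
slc arc
```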

Click on the Composer icon and create your model. See the screenshot for the exact settings. Make sure you select db for the Data source and create two properties. val which is a string and votes which is a number.


The Color service

Generate your lb-services.js Angular module using the loopback-angular-sdk. This will give you access to the Color resource in your Angular app. Make sure you include the script tag and register the generated lbServices module as an Angular dependency. You also must include the source for the ngResource module (which you should have installed as part of step 1).

Step 3: The Controller

Now that you have a Color model API and the angular-live-set module available from your angular app, you can create a simple controller for interacting with the Color data. Start with a simple template that renders an array of color objects. I’ll assume you know how to do this. The template should look a bit like this:

Within our controller we need to create a LiveSet of colors as well as implement the createColor() and upvote() methods. It should look something like this:
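A sketch of that controller; the module and controller names are assumptions, and Color comes from the generated lbServices module:

```javascript
angular.module('app').controller('ColorCtrl', ['$scope', 'Color',
  function ($scope, Color) {
    // Create a new color from the template's input field
    $scope.createColor = function () {
      Color.create({ val: $scope.newColor, votes: 0 });
      $scope.newColor = '';
    };

    // Call the custom upvote remote method defined on the server
    $scope.upvote = function (id) {
      Color.upvote({ id: id });
    };
  }
]);
```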

And in our model source code we need to add the upvote method:
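A minimal sketch of that remote method; the argument and route details are assumptions:

```javascript
// common/models/color.js
module.exports = function (Color) {
  Color.upvote = function (id, cb) {
    Color.findById(id, function (err, color) {
      if (err) return cb(err);
      // Increment the vote count and persist it
      color.updateAttributes({ votes: color.votes + 1 }, cb);
    });
  };

  Color.remoteMethod('upvote', {
    accepts: { arg: 'id', type: 'number', required: true },
    http: { verb: 'post' }
  });
};
```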

Step 4: The LiveSet

The last step is to add the actual live set.
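Wiring it up looks something like this; the change-stream URL is an assumption based on LoopBack’s /change-stream routes:

```javascript
const src = new EventSource('/api/colors/change-stream?_format=event-stream');
const changes = createChangeStream(src);

Color.find().$promise.then(function (colors) {
  // A read-only collection that stays in sync as changes stream in
  $scope.colors = new LiveSet(colors, changes);
});
```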

The code above creates a LiveSet from a ChangeStream. The LiveSet is a read only collection of data. The items in the set will be updated as changes are written to the ChangeStream.  Since the change stream was created with an EventSource the changes will be written from the server as they are made. This will keep the data in the LiveSet up to date.

Also, the LiveSet will make sure that changes are applied to the $scope. This means you can often create a LiveSet as a view of the data and use the model API (e.g. Color) to modify the data. Once the change has been made on the server, the change will be made to your LiveSet.

Now it’s your turn!

Now that you’ve learned what LiveSet is and how to build your first LiveSet app, take a deeper look at the code or browse the API, and don’t forget to star it on GitHub.

What’s New in the LoopBack Node.js Framework – June 2015

Curious what new developments are happening with the LoopBack project? Here’s a curated selection of the most important changes that have been made to LoopBack in the past few weeks.

LoopBack Core

Enable Auth

We changed app.enableAuth to automatically set up all required models that are not already attached to the app or a datasource. The purpose of this change is to make it easier to set up authentication from code that does not use slc loopback project scaffolding, such as unit tests.

To use this new option, just provide the name of the data source to use for these models.  For example:
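For instance, assuming a data source named db:

```javascript
app.enableAuth({ datasource: 'db' });
```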

Observer API

We made several improvements to the hook API. Including the ability to skip other observers by calling context.end(). Now you can also notify multiple observers by passing an array of operation names (for example, notifyObserversOf(['op1', 'op2'])).

Strong Remoting

We added a couple of new features to strong-remoting and remote methods in LoopBack.

Default Status Codes

Now you can define both a status and errorStatus in the HTTP options of your remote method.

Header and Status Argument Targets

To set a header or the status code for an HTTP response, now you can specify the target of your callback argument to either header or status.

Here is an example of both default status codes and argument targets.
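A sketch combining both features; the model, method and argument names are illustrative:

```javascript
MyModel.remoteMethod('ping', {
  http: { status: 201, errorStatus: 500 }, // default success and error status codes
  returns: [
    // Route these callback arguments into the HTTP response itself
    { arg: 'x-version', type: 'string', http: { target: 'header' } },
    { arg: 'status', type: 'number', http: { target: 'status' } }
  ]
});
```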

Filters

We extracted the implementation of data filtering from the memory connector to a new module called loopback-filters. Using this module, you can now filter arrays of objects using the same filter syntax supported by MyModel.find(filter).  We’ll soon be converting all LoopBack modules to use loopback-filter, so it will become the common “built-in” filtering mechanism.

LoopBack supports a specific filter syntax: it’s a lot like SQL, but designed specifically to serialize safely without injection and to be native to JavaScript.  Previously, only the PersistedModel.find() method (and related methods) supported this syntax.

Here is a basic example using the new module.
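Usage follows the familiar find() filter shape; the data and filter here are illustrative:

```javascript
const applyFilter = require('loopback-filters');

const data = [
  { name: 'bowl', fits: 'small' },
  { name: 'crate', fits: 'large' }
];

const filtered = applyFilter(data, { where: { fits: 'small' } });
// only the bowl matches
```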

For a bit more detail, let’s say you are parsing a comma-separated value (CSV) file, and you need to output all values where the price column is between 10 and 100.  To use the LoopBack filter syntax you would need to either create your own CSV connector or use the memory connector, both of which require some extra work not related to your actual goal.

Once you’ve parsed the CSV (with some module like node-csv) you will have an array of objects like this, for example (but with, say, 10,000 unique items):

To filter the rows you could use generic JavaScript like this:
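Something like the following, with hypothetical parsed rows and the min/max bounds as plain variables:

```javascript
// Hypothetical parsed CSV rows; the field names are illustrative
const data = [
  { name: 'widget', price: 15 },
  { name: 'gizmo', price: 5 },
  { name: 'gadget', price: 99 },
  { name: 'doodad', price: 150 }
];

const min = 10;
const max = 100;

const filtered = data.filter(function (row) {
  return row.price >= min && row.price <= max;
});

console.log(filtered); // widget and gadget
```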

This is pretty simple for filtering, but sorting, field selection, and more advanced operations become a bit tricky.  On top of that, you are usually accepting the parameters as input; for example:

You can rewrite this easily as a LoopBack filter:
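With loopback-filters, the same query becomes declarative; between is one of the standard where-clause operators:

```javascript
const applyFilter = require('loopback-filters');

const data = [ /* the parsed CSV rows, e.g. { name: 'widget', price: 15 } */ ];
const filter = { where: { price: { between: [10, 100] } } };
const rows = applyFilter(data, filter);
```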

Or if you just adopt the filter object syntax as user input:

But loopback-filters supports more than just excluding and including.  It supports field selection (including / excluding fields), sorting, geo/distance sorting, limiting and skipping.  All in a declarative syntax that is easily created from user input.

As a LoopBack user this is a pretty powerful thing. Typically, you will have learned how to write some complex queries using the find() filter syntax; previously, you would have needed to figure out how to do the same thing in plain JavaScript (perhaps using a library such as Underscore). Now, with the loopback-filters module, your client application can re-use the exact same filter object it was sending to the server to filter the data, without having to interact with a LoopBack server at all.

Middleware generator

The new LoopBack middleware generator adds a middleware configuration to an existing application.

The tool will prompt you to:

  • Select the phase to use for the middleware.
  • Add a list of paths.
  • Add parameters.

See more about the new middleware generator in the documentation.

Workspace

The loopback-workspace module provides the backend functionality for Arc API Composer and slc loopback.

The default template for generating LoopBack projects now includes default lookup paths for mixins. The defaults are the following:

  • loopback/common/mixins
  • loopback/server/mixins (these first two include the LoopBack core mixins)
  • ../common/mixins
  • ./mixins

Note that the last two paths are relative to your project (just like modelSources).

We also added support for middleware.json. This lays the groundwork for support for generating middleware.json files with Arc API Composer and with slc loopback.

DataSource Juggler

Persist Hook

We added a new “persist” hook. Observers are notified during operations that persist data to the datasource (for example, create, updateAttributes). Don’t confuse this hook with the existing “before save” hook:

  • before save – Use this hook to observe (and operate on) model instances that are about to be saved (for example, when the country code is set and the country name not, fill in the country name).
  • persist – Use this hook to observe (and operate on) data just before it is going to be persisted into a data source (for example, encrypt the values in the database).

Loaded Hook

We have also just added a new “loaded” hook. Observers are notified right after raw data is loaded or returned from the connector and datasource. This allows you to do things like decrypt database values before they are used to create a model instance.
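Sketches of both hooks; the field name and the encrypt/decrypt helpers are hypothetical:

```javascript
MyModel.observe('persist', function (ctx, next) {
  // Runs just before the data is written to the datasource
  ctx.data.cardNumber = encrypt(ctx.data.cardNumber);
  next();
});

MyModel.observe('loaded', function (ctx, next) {
  // Runs right after raw data comes back from the connector
  ctx.data.cardNumber = decrypt(ctx.data.cardNumber);
  next();
});
```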

Here is a filtered changelog including important changes to the juggler.

  • PR#611 Dedupe ids args of inq for include
  • Released 2.29.1
  • PR#584 add test suite for scope – dynamic function
  • PR#588 Fix pagination on collections with many-to-many relationships
  • PR#604 Fix destroyById not removing instance from cache
  • PR#609 Don’t silently swallow db errors on validation
  • Released 2.29.0
  • PR#602 Enhance the apis and add more tests
    • End observer notification early
    • Allow multiple notifications in a single call: notifyObserversOf(['event1', 'event2'])
  • PR#600 Fix toJSON() for level 3 inclusions
  • PR#598 Mixin observer apis to the connector
  • PR#597 Enhance fieldsToArray to consider strict mode
  • Released 2.28.1
  • @cbb8d7c Remove dep on sinon
  • PR#586 Add new hook persist
  • Released 2.30.1
  • @8302b24 Pin async to version ~1.0.0 to work around context propagation
  • Released 2.30.0
  • PR#618 Allow 0 as the FK for relationships
  • PR#626 Fix for issues #622 & #623
  • PR#630 Promisify 'automigrate'

Connectors

We added a new “execute” hook to the connector API. It allows you to observe the low-level connector.execute() method. See the documentation for more information.

Below is a filtered changelog including important changes to connectors.

loopback-connector-soap

  • @b40f92b Add before and after “execute” hooks for the underlying soap invocation
  • PR#20 bump(soap) bump node-soap version from 0.8 to 0.9

loopback-connector-rest

  • Released 1.9.0
  • PR#33 Add before and after “execute” hooks

loopback-connector-mongodb

  • @84a1ab0 Add before and after “execute” hooks
  • Released 1.9.2
  • @8a55f92 Update to memwatch-next for node 0.12 compatibility
  • Released 1.9.1
  • @5d907cf Update deps
  • Released 1.9.0
  • @331e158 Replaced ensureIndex() with createIndex()
  • Released 1.11.0
  • PR#142 Add a workaround for auth with multiple mongos servers
  • PR#141 Autoupdate and automigrate now respect settings.mongodb.collection

loopback-connector-postgresql

  • @111f8c2 Add better support for the Date type
  • Released 2.2.0
  • PR#88 Make sure UTC is used for date

loopback-connector

  • Released 2.2.0
  • @2fc9258 Update deps
  • PR#18 Add before/after hooks for connector native operations
  • Released 2.1.2
  • @a5f11ac Put request with non existent properties no longer results in error
  • Released 2.1.1
  • @a62e06d Improved query support for Date

Boot

  • Released 2.8.0
  • PR#130 Port can’t be number checked to support iisnode
  • @44f733f Support iisnode using named pipes as PORT value (Jonathan Sheely)
  • Released 2.8.1
  • PR#133 Better debug output when loading complex configurations

Component OAuth2

  • Released 2.2.0
  • @83635b0 Tidy up the models to work with MySQL
  • Released 2.1.1
  • @a32e213 Allow models to be customized via options
  • Released 2.1.0
  • @32fcab9 Clean up oAuth2 client app attributes
  • Released 2.3.0
  • @807762d Remove auth code after 1st use
  • @f613a20 Allow options.scopes to be a custom function
  • Released 2.2.1
  • @79a7df8 Allow options.userModel/applicationModel to be strings
  • Released 2.0.0
  • @e5da21e Change license to StrongLoop

Other Module Changes

  • loopback-component-storage
    • PR#74 Bugfix: Cannot read property 'forEach' of undefined
    • Released 1.5.0
    • PR#70 Add missing finish event when uploading to s3
  • loopback-testing
    • PR#47 Add withUserModel to extend user related helpers
    • PR#51 use findorCreate to create roles
    • PR#45 Update helpers.js
  • loopback-component-passport
    • Released 1.4.0
    • PR#70 feature: Make email optional
  • loopback-gateway
  • loopback-sdk-angular
    • Released 1.4.0
    • PR#138 Add createMany method
  • loopback-component-push
    • PR#88 Pass contentAvailable through to APNS
    • @e9022a0 Forward “contentAvailable” and “urlArgs” to APNS
    • @08e73ee Update deps

You can help too!

LoopBack is an open source project welcoming contributions from its users. If you would like to help but don’t have an itch of your own to scratch, please pick one of the issues labelled “Beginner Friendly”; see this GitHub view for a full list.

The full changelog

As always, you can find the full list of changes at http://strongloop.github.io/changelog/

 

Announcing Transaction Tracing for Node.js Beta

At StrongLoop, we develop tools to support development and operations throughout the entire lifecycle of API development. Initially, we released Arc Profiler to help you understand performance characteristics of your Node application. Next, we added Arc Metrics to provide real-time visibility into your staging and production environments. Today we’re announcing the latest Arc module, Tracing (currently in public beta), which enables you to perform root cause analysis and triage incidents in production. (You can watch a short overview and demo of the Tracing module here.)

Those who have cut their teeth on the seemingly endless iterations in the dev lifecycle  will understand this satirical spin on Dorothy’s line from the Wizard of Oz:

“Toto, I don’t think we’re in staging anymore…  There’s no place like production… There’s no place like production…”

Simulated load, automated testing, and all the CI magic in the world won’t prepare you for the “gotchas” that can happen in production.  If you’re lucky, you’ll have a canary that keels over as soon as you enter the production mine.  But what then?

The answer is Tracing.  The Arc Tracing module provides the ability to call in the artillery when you need it.  If you see something of interest in Metrics, open up Tracing and you’ll be shown a timeline of memory and CPU usage.

Understanding the Timeline


Locate the point of interest (more often than not a CPU or memory spike in the form of a peak in the line) and start to drill down. When you’ve located the incident, freeze the time slice by clicking on the chart; this draws a black line denoting the time slice and begins your drill-down.
