We recently added the Tracing module (currently in Beta) to StrongLoop Arc’s monitoring and performance analysis tools. Tracing helps you identify performance and execution patterns of Node applications, discover bottlenecks, and trace code execution paths. This enables you to monitor Node.js applications and provides tracing data at the system and function level, giving you insights into how your application performs over time. Tracing works with applications deployed to StrongLoop Process Manager (PM).
This blog post describes how to quickly get up and running with Tracing, assuming you have some familiarity with StrongLoop tools. It demonstrates the quickest path to try out the new Tracing module, using an example that simulates some load, and how to view and understand the data visualizations based on your application traffic.
Step 1. Setup
Start by installing the latest version of StrongLoop and creating a basic Loopback application. If you have never done this previously, please refer to: http://loopback.io/getting-started/
$ npm install -g strongloop
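Before going further, it's worth confirming the toolchain actually installed. A quick hedged check (assuming a POSIX shell) might look like:

```shell
# Confirm the StrongLoop CLI is on your PATH; 'slc -v' prints the versions
# of slc and the bundled StrongLoop tools.
if command -v slc >/dev/null 2>&1; then
  SLC_STATUS="installed"
  slc -v
else
  SLC_STATUS="missing"
  echo "slc not on PATH; re-run: npm install -g strongloop"
fi
```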
Clone the example app from https://github.com/strongloop/tracing-example-app and set it up:
$ git clone https://github.com/strongloop/tracing-example-app.git
$ cd tracing-example-app
$ npm install
This example demonstrates the Tracing module of StrongLoop Arc. It includes a simple HTTP server with one route that starts an internal busy loop when triggered. This generates fluctuations (and thus more data) for the StrongLoop Arc tracing graphs.
Review the README to get the example app up and running, then create some variation in the graphs by running the ./send-request script, which makes repeated curl requests to the server.
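The send-request script ships with the example repo, so use that one; for illustration only, a minimal load loop in the same spirit might look like the sketch below. The URL, route name, and request count here are all assumptions, not the repo's actual script; check the README for the app's real port and route.

```shell
#!/bin/bash
# Hypothetical stand-in for the repo's send-request script: fire repeated
# requests at the busy route so the tracing graphs show some variation.
# The URL below is an assumption -- adjust to your app's actual host/route.
URL="${1:-http://localhost:3000/busy}"
for i in $(seq 1 10); do
  curl -s -o /dev/null "$URL" || true   # ignore failures; keep the load going
  sleep 0.2                             # space requests out a little
done
echo "sent $i requests to $URL"
```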
Step 2. Start things up
Change directories into your app folder:
$ cd tracing-example-app
Start your local PM instance and deploy the application to it by typing:
$ slc start
This brings up a PM instance listening on the default port, 8701.
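Before moving on, you can sanity-check that PM really is up. A simple port probe (a sketch, assuming `nc` is available; change the port if you configured PM to listen elsewhere):

```shell
# Probe the PM control port (default 8701) to confirm 'slc start' worked.
PM_PORT="${PM_PORT:-8701}"
if nc -z localhost "$PM_PORT" 2>/dev/null; then
  PM_UP="yes"
  echo "StrongLoop PM is listening on port $PM_PORT"
else
  PM_UP="no"
  echo "nothing listening on port $PM_PORT -- did 'slc start' succeed?"
fi
```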
Next start Arc by typing:
$ slc arc
This launches StrongLoop Arc in your default browser. If you haven’t done so already, you’ll need to register and log in.
The first stop is the Process Manager module in Arc. This module enables you to add, edit, and monitor PM instances. Arc communicates with StrongLoop Process Managers dynamically, so you can do things such as deploying apps, stopping and starting apps, changing cluster size, and so on.
Now, connect Arc to the local PM instance you started previously with slc start. By default, the local PM runs on port 8701, so in the Strong PM field enter localhost, and for Port enter 8701.
If Arc gets the appropriate response from PM, and an app has been deployed to it, you should see the following:
IMPORTANT NOTE: Once you’ve connected to the Process Manager instance running the example, set the number of processes to one (1), as shown below. By default, slc start runs the application with as many processes as your machine has cores. Since the purpose of this example is to demonstrate an overloaded application, it will maximize CPU usage; if that happens on all your CPUs, your system could eventually overheat. To avoid this, simply run the app in a single process.
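If you prefer the command line over the Arc UI, strong-pm's control CLI can resize the cluster as well. This is a hedged sketch: the service name "tracing-example-app" is an assumption, so run slc ctl against your PM first to see the services it actually manages.

```shell
# CLI alternative to the Arc UI for shrinking the cluster to one worker.
# "tracing-example-app" is an assumed service name; run 'slc ctl' first to
# list the services your PM actually knows about.
if command -v slc >/dev/null 2>&1; then
  slc ctl -C http://localhost:8701 set-size tracing-example-app 1 \
    || echo "set-size failed -- is PM running on port 8701?"
else
  echo "slc not installed; install with: npm install -g strongloop"
fi
CTL_DONE=1
```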
Click on the Tracing module to view the instrumented data.
The first time you open Tracing, there may not be much to see. You will see a dropdown to choose which PM instance you want to enable tracing on, the name of the current service/application, and a toggle control to turn tracing on and off.
The Tracing view will automatically attempt to load tracing information on the first process of the first PM instance. StrongLoop PM enables tracing at the process level and by default tracing is not enabled. Turn tracing on by using the On/Off toggle switch as shown below.
Wait a moment while PM cycles the processes to enable tracing.
Drill down – Trace summary view
The first chart displays CPU load and process heap usage and simply serves as a tool to pick a time slice into which you want to drill down. The Timeline View is updated in real-time every 20 seconds and shows data up to the last five hours.
By selecting a time slice, you’ll get the sequence of traces that occurred in the system at that time. Two types of trace sequences are instrumented: HTTP/HTTPS transactions and database transactions (specifically MySQL, PostgreSQL, Oracle, Redis, Memcache, Memcached, and MongoDB). You can select a trace sequence to drill down further, or use the Previous and Next buttons to traverse the traces sequentially.
The screenshot below explains the navigation elements in the Tracing view.
Anatomy of a Waterfall
The top bar is the time spent initiating the original function call. The lower bar is the time spent executing the callback function, and the line connecting the two represents the time spent waiting for a response; for example, for a database server to return a value. For more about synchronous versus asynchronous waterfalls, see the documentation at http://docs.strongloop.com/display/SLC/Tracing.
The next chart, the “flame graph,” is a rich visualization that provides information about the application’s function call stack and lets you see where your app spends the most time. A flame graph is typically read bottom-up: the bottom-most brick is the root of the call-stack tree, the application’s initial call. Each module is displayed in a differently colored block, and the blocks display function names when available. The selected call is always shown in yellow. The size of each brick is proportional to the time spent in the corresponding function. The flame graphs below are a couple of examples showing how a flame graph varies with application execution.
To further detect anomalies around the patterns where most time is spent, use the Inspector to the right. When you mouse over a brick in the Flame graph, the Inspector displays details of that particular function call. Depending on how far you’ve “drilled down,” the Inspector displays different details, such as module name, cost, top costs, version, and—most importantly—the exact line and column in the source code where the function call occurs!
Below is a screenshot of the Inspector, when you mouse over a brick on the Flame graph.
The call stack contains the same data as the Flame graph, but in raw form. As you might expect, clicking an entry in the call stack updates the Inspector details on the right.
This post got you started quickly navigating the Tracing UI, using a local Process Manager and a basic example app with artificial load. If you’re ready to use Tracing in production, simply get your Tracing license, deploy your application(s) to a remote StrongLoop Process Manager, add the remote host(s) in the Arc Process Manager view, and monitor your application on any remote PM as demonstrated here.
- Get started with transaction tracing by signing up for StrongLoop Arc, and unlock the Tracing module by contacting us at firstname.lastname@example.org.
- Read the in-depth “Node.js Transaction Tracing by Example” blog
- Want to see the tracing module in action? Sign up for our free “Node.js Transaction Tracing” webinar happening on Friday, June 26 at 10 AM Pacific.
- Learn more about how tracing works in the official documentation.