We are excited to announce the immediate availability of a new version of the StrongLoop Process Manager with enhanced remote and local run capabilities, Docker support, Nginx load balancing and enhanced security.
What is StrongLoop Process Manager? It’s an enterprise-grade production runtime process manager for Node.js. StrongLoop Process Manager is built to handle both the vertical and horizontal scaling of your Node applications with ease.
Key features include: multi-host remote deployment, SSH/auth support, Docker container and hub support, zero-downtime soft and hard restarts, clustering and on-demand resizing, plus automatic Nginx load balancer configuration, all of which can be managed with a CLI or GUI.
Vertical Scaling – Maximizing Each Unit of Scale
If you’re a Node developer like me, you’ve already discovered that applications start as a single process running on one core of their host. On a four-core host, that means you’re using only one of the four cores and leaving the rest of the available compute resources idle. In this scenario, StrongLoop Process Manager will automatically cluster your Node application, creating a master process and worker processes, with one worker per core by default. This is what’s referred to as “vertically scaling” your app on a single host, as a unit of scale.
Horizontal Scaling – Multiplying Many Units of Scale
Now that you’re utilizing all the computing resources on the machine, you are ready to scale out by replicating the same unit of scale multiple times, so that in effect you have multiple units that act as one large distributed application. In other words, take the single StrongLoop Process Manager running on a host, spin up multiple hosts each running a Process Manager, and distribute the load among them.
Why StrongLoop Process Manager?
Now that you’re able to run multiple hosts that make up the entirety of your application, you need a way to act on those hosts as a whole, at the application level, or, depending on the task, to perform operations on a per-host basis.
Here are some example use cases:
- As a DevOps engineer, you want to deploy the next version of your application. This affects all hosts running the application at a particular version.
- As an operations person, you want to see if one of your hosts is acting up and has CPU utilization issues. In this case, you’ll want to be able to look at a single host and set of processes to do some triaging.
This is precisely where Process Manager really shines. It gives you the ability to do these things and makes it even easier with its visual interface in StrongLoop Arc.
Here’s a summary of what the StrongLoop Process Manager can do for you:
- New! – Control your application remotely through a newly improved and streamlined CLI, slc, or visually through StrongLoop Arc
- New! – Run your application locally within StrongLoop Process Manager to quickly test your unit of scale
- New! – Perform all operations securely through HTTP+SSH and HTTP+basic auth
- New! – Install and run StrongLoop Process Manager as a Docker container
- Remotely deploy your application
- Stop, start and restart the application remotely with zero downtime if necessary
- Automatically restart your application if it crashes
- Automatically cluster your application to take advantage of all cores on your host
- Vertically scale up or down the number of worker processes for your Node app running in cluster mode
- Act as the API endpoint to:
  - expose application metrics collected by strong-agent through a statsd interface
  - perform root cause analysis by remotely profiling all processes for both CPU and memory usage
- Auto-integrate with Nginx and dynamically configure itself as a load balancer to round-robin across hosts.
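To make the last item concrete, the kind of round-robin configuration the Nginx integration produces looks roughly like this (the host names and ports below are placeholders; the actual configuration is generated for you automatically):

```nginx
# Hypothetical round-robin upstream across Process Manager hosts.
upstream node_app {
    server pm-host-1.example.com:3001;
    server pm-host-2.example.com:3001;
    server pm-host-3.example.com:3001;
}

server {
    listen 80;
    location / {
        # Nginx's default upstream behavior is round-robin.
        proxy_pass http://node_app;
    }
}
```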
For more detailed information on how these features work, please refer to the StrongLoop Documentation on Operating Node Applications.
Managing at Scale
On any given host, you can use the slc command-line tool to control your application through the StrongLoop Process Manager.
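For example, here are a few representative slc commands (the application name and host below are placeholders; see the StrongLoop documentation for the full command set):

```shell
# Show the status of the app and its worker processes on this host:
slc ctl status

# Resize the cluster to four worker processes:
slc ctl set-size my-app 4

# Restart the app (a soft restart avoids downtime):
slc ctl restart my-app

# Run the same commands against a remote Process Manager:
slc ctl -C http://prod-host.example.com:8701 status
```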
As mentioned, StrongLoop Arc ties together the Process Manager hosts to give you a unified console to operate a given application.
Monolithic Distributed Apps
If every StrongLoop Process Manager is running the exact same code, you have the benefit of full redundancy of all functions on every unit of scale. You would typically round-robin, or use some other load-balancing algorithm, to distribute requests among the Process Managers based on things like the overall CPU load of each host. The drawback is that when you make a change to the application, you have to update all hosts.
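To illustrate the round-robin case in isolation, here’s a few-line sketch (the host names are placeholders; a real balancer such as Nginx also factors in health checks and can use load-aware algorithms):

```javascript
// Minimal round-robin selection across Process Manager hosts.
// Host names are placeholders for illustration only.
const hosts = ['pm-host-1', 'pm-host-2', 'pm-host-3'];
let next = 0;

function pickHost() {
  const host = hosts[next];
  next = (next + 1) % hosts.length; // wrap around to the first host
  return host;
}

// Six consecutive requests cycle through the three hosts twice.
const picks = [];
for (let i = 0; i < 6; i++) picks.push(pickHost());
console.log(picks.join(' '));
```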
Microservices Based Distributed Apps
What if you could intelligently divvy up the hosts based on a set of discrete functions?
What if those hosts provided services or APIs that were fully self contained all the way from their logic to their persistent systems of record?
Imagine how quickly and easily you could iterate on your application with such a clean separation of concerns! If we step into our time machine for a second, we’ll remember that this was the dream of service-oriented architecture (SOA), which has largely remained unrealized. Why? Because you couldn’t decouple backends enough to update services discretely. Node and frameworks like LoopBack have made this much easier. LoopBack provides connectors that make it easy to connect to multiple backend stores at the same time. With its model-driven development and aggregation capabilities, it’s now possible to take a subset of hosts and have them act as a pool that performs a set of services divided by your application domain. If those services need to change, you can update just that designated pool without affecting the rest of the application.
Watch the demo! Check out my short video that gives you an overview of the StrongLoop Process Manager.
Sign up for the webinar! “Best Practices for Deploying Node.js Applications in Production” on April 16 with StrongLoop and Node core developer Sam Roberts.
In the coming weeks, look for more enhancements to the StrongLoop Process Manager and its runtime capabilities. For now, here are a few additional technical articles that dive into greater detail on how to make the most of this release:
- Best Practices for Deploying Node in Production
- How to Test Node.js Deployments Locally Using StrongLoop Process Manager
- How to Run StrongLoop Process Manager in Production
- How to Secure StrongLoop Process Manager with SSH
- Best Practices for Deploying Express Apps with StrongLoop Process Manager
- How to Create and Run StrongLoop Process Manager Docker Images