Why we chose Node.js for server monitoring
2011/08/22
Asynchronous, non-blocking network chatter sounds like exactly what a server monitoring service could use. So instead of running 1500 checks in series, one after another, each taking perhaps hundreds of milliseconds to complete, we’re able to start hundreds of checks in quick succession without waiting for the results to return. For example, we may start an HTTPS request, then kick off 3 PINGs, 5 SMTP checks, and hundreds of other checks before the first HTTPS response comes back with its status code and a block of data from the webpage we requested. At that point Node.js processes the response using a callback we passed to the function when we started the request. That’s the magic of Node.js.
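That callback pattern can be sketched in a few lines. The check names and latencies below are invented for illustration, and the timers stand in for real network IO (a real check would use Node's `https`, `net`, or `dgram` modules), but the shape is the same: start everything, then let callbacks fire as responses arrive.

```javascript
// Simulate a batch of monitoring checks started concurrently.
// Each "check" stands in for an HTTPS, PING, or SMTP probe; the
// setTimeout models network latency without real network IO.
function startCheck(name, latencyMs, callback) {
  setTimeout(function () {
    // This callback fires when the "response" arrives, just as an
    // https.request callback fires with a status code and body.
    callback(null, name);
  }, latencyMs);
}

var checks = [
  ['https-example', 120],  // slowest check, started first
  ['ping-1', 30],
  ['smtp-1', 80]
];
var results = [];

// All checks are started back-to-back; none blocks the next.
checks.forEach(function (check) {
  startCheck(check[0], check[1], function (err, name) {
    results.push(name);
    if (results.length === checks.length) {
      // Fast checks finish first, regardless of start order.
      console.log(results.join(','));  // → ping-1,smtp-1,https-example
    }
  });
});
```

Note that the HTTPS check was started first but finishes last; the event loop was free to start, and complete, the faster checks in the meantime.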
One limitation of Node.js is that all of that asynchronous branching happens in a single process bound to a single CPU core. A single Node.js script is unable to leverage the hardware of today’s multi-core, multi-CPU servers. But we’re able to use Node.js’ “spawn” command to create multiple instances of our service-checking processes, one for each CPU on the server, and then balance our check load across those running processes to make full use of the hardware.
Having non-blocking network IO allows our check servers to run thousands more checks than our competitors’ with fewer resources. Fewer resources mean fewer, cheaper servers, which means less overhead. That’s how we’re able to charge only $10/month for 1-minute checks on 1000 target services. You won’t find a better deal anywhere – you can thank the folks in the Node.js community for that.
I’m sure some will be quick to point out that other languages can do the same thing, and some of them are probably better than Node.js at one particular task or another; I won’t argue with most of them. We think the way Node.js handles network IO makes it a great choice for a server monitoring service, and if you give NodePing’s 15-day, risk-free trial a shot, we think you’ll agree.