Why we chose Node.js for server monitoring

NodePing’s server monitoring service was built in 100% Node.js, from the front-end webapp to the backend SMTP requests. For those who may not be familiar with it, Node.js is server-side JavaScript running on Google’s famed V8 engine. That’s the same engine that makes your Chrome browser so fast at processing JavaScript, and it’s what makes NodePing so efficient at service monitoring.

Arguably, Node.js’ most interesting feature is the performance of its evented, asynchronous, non-blocking IO. In typical JavaScript fashion, the vast majority of IO functions take callbacks to handle their results. This allows NodePing’s logic to branch out in several directions without one IO operation blocking the others, whether we’re talking to databases, reading and writing files, or talking to other machines over network protocols.
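
To make that concrete, here’s a minimal sketch of callback-style, non-blocking IO in Node.js (generic example code, not NodePing’s actual implementation): both operations are started immediately, and each callback fires whenever its result is ready.

    // A minimal sketch of callback-style, non-blocking IO in Node.js.
    // Both operations are started immediately; neither blocks the other.
    var dns = require('dns');
    var fs = require('fs');

    dns.resolve4('example.com', function (err, addresses) {
      if (err) return console.error('DNS lookup failed:', err.message);
      console.log('example.com resolves to', addresses);
    });

    fs.readFile('/etc/hostname', 'utf8', function (err, data) {
      if (err) return console.error('File read failed:', err.message);
      console.log('This machine is', data.trim());
    });

    console.log('Both IO operations started; now we wait on the callbacks.');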

Asynchronous, non-blocking network chatter sounds like something a server monitoring service could use. So instead of running 1500 checks in series, one after another, each taking maybe hundreds of milliseconds to complete, we’re able to start hundreds of checks, one after another, without having to wait for the results. For example, we may start an HTTPS request, move on to start 3 PINGs, 5 SMTP checks, and hundreds of other checks before the first HTTPS response has returned with its status code and a block of data from the webpage we requested. At that point Node.js processes the response using the callback we fed into the function when we started the request. That’s the magic of Node.js.
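
Here’s a rough sketch of that pattern (the URLs are placeholders and the real check logic is more involved): every check is kicked off immediately, and each callback records its result whenever the response arrives.

    // A rough sketch: start many checks without waiting on any of them.
    var https = require('https');

    function checkHttps(url, callback) {
      var started = Date.now();
      https.get(url, function (res) {
        res.resume(); // drain the body; this sketch only cares about the status code
        res.on('end', function () {
          callback(null, { url: url, status: res.statusCode, ms: Date.now() - started });
        });
      }).on('error', function (err) {
        callback(err, { url: url, ms: Date.now() - started });
      });
    }

    var targets = ['https://example.com/', 'https://example.org/', 'https://example.net/'];
    targets.forEach(function (url) {
      checkHttps(url, function (err, result) {
        console.log(err ? 'FAIL' : 'PASS', result.url, result.ms + 'ms');
      });
    });
    // All of the requests are in flight before the first response comes back.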

One limitation of Node.js is that all of that branching happens in a single process bound to a single CPU.  A single Node.js script is unable to leverage the hardware of today’s multi-core, multi-CPU servers.  But we’re able to use Node.js’ “spawn” command to create multiple instances of our service-checking processes, one for each CPU on the server, and then balance our check load across those running processes to make full use of the hardware.
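
A minimal sketch of that fan-out, assuming a hypothetical checkRunner.js worker script: Node’s child_process module (fork, the Node-script flavor of spawn) starts one worker per CPU, and the parent spreads the checks across them over the built-in message channel.

    // A minimal sketch: one worker process per CPU using Node's child_process module.
    // 'checkRunner.js' is a hypothetical worker that runs whatever check it is sent,
    // listening with process.on('message', ...).
    var os = require('os');
    var fork = require('child_process').fork;

    var checks = [
      { type: 'HTTPS', target: 'https://example.com/' },
      { type: 'PING', target: 'example.org' }
      // ...hundreds more in practice
    ];

    var workers = os.cpus().map(function (cpu, i) {
      var worker = fork(__dirname + '/checkRunner.js');
      worker.on('message', function (result) {
        console.log('worker ' + i + ' finished a check:', result);
      });
      return worker;
    });

    // Balance the check load round-robin across the workers.
    checks.forEach(function (check, i) {
      workers[i % workers.length].send(check);
    });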

Having non-blocking network IO allows our check servers to run thousands more checks than our competitors’ with fewer resources.  Fewer resources mean fewer and cheaper servers, which means less overhead.  That’s how we’re able to charge only $10/month for 1-minute checks on 1000 target services.  You won’t find a better deal anywhere – you can thank the folks over at the Node.js community for that.

I’m sure some will be quick to point out that there are other languages that can do the same thing, some of them probably better at one particular thing or another than Node.js, and I won’t argue with most of them.  We think the way Node.js handles network IO makes it a great choice for a server monitoring service, and if you give NodePing’s 15-day, risk-free trial a shot, we think you’ll agree.

10 Common Server Monitoring Mistakes

Server monitoring is an essential part of any business environment that depends on online services.  Even if you don’t have your own servers and use cloud-based services, you’ll want to know about downtime.  You don’t want to find out from customers that your website is down, and you don’t want your boss to be the one to point out that the email server has wandered off into the weeds.  Done properly, server monitoring alerts those responsible for the services the minute they’re unavailable, allowing them to respond quickly and get things back up and running.

David and I have been responsible for servers and server monitoring for years and have probably made nearly all the mistakes possible while trying to do it properly.  So listen to the war stories from a couple of guys with scars and learn from our mistakes.

Here are 10 common server monitoring mistakes we’ve made.

1. Not checking all my servers

Yeah, it seems like a no-brainer, but when I have so many irons in the fire, it’s hard to remember to configure server monitoring for all of them.  Some of the more commonly forgotten servers are:

  • Secondary DNS and MX servers.  This ‘B’ squad of servers usually gets in the game when the primary servers are offline for maintenance or have failed.  If I don’t keep my eye on them too, they may not be working when I need them the most.
  • New servers.  Ah, the smell of fresh pizza boxes from Dell!  After all the fun stuff (OS install, configuration, hardening, testing, etc.), the two most forgotten ‘must-haves’ on a new server are the asset tag (anybody still use those?) and setting up server monitoring.
  • Temporary/Permanent servers.  You know the ones I’m talking about.  The ‘proof of concept’ development box that was thrown together from retired hardware and has suddenly been dubbed ‘production’.  It needs monitoring too.

2. Not checking all services on a host

We know most failures take the whole box down, but if I don’t watch each service on a host, I could have a running website while FTP has flatlined.

The most common one I forget is checking both HTTP and HTTPS.  Sure, it’s the same ‘service’, but the Apache configuration is separate, the firewall rules are likely separate, and of course HTTPS needs a valid SSL certificate.  I’ve gotten the embarrassing calls about the site being ‘down’ only to find out that the cert had expired.  Oh, yeah… I was supposed to renew that, wasn’t I?
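
Since the expired-cert call is the one that stings the most, here’s a rough sketch of checking certificate expiry with Node.js’ built-in tls module (an illustration only, with a placeholder hostname and threshold):

    // A rough sketch: how many days are left on a site's SSL certificate?
    var tls = require('tls');

    function daysUntilCertExpires(host, callback) {
      var socket = tls.connect({ host: host, port: 443, servername: host }, function () {
        var cert = socket.getPeerCertificate();
        var msLeft = new Date(cert.valid_to) - Date.now();
        socket.end();
        callback(null, Math.floor(msLeft / 86400000));
      });
      socket.on('error', callback);
    }

    daysUntilCertExpires('example.com', function (err, days) {
      if (err) return console.error('TLS check failed:', err.message);
      console.log(days < 14 ? 'WARN: cert expires in ' + days + ' days' : 'Cert OK: ' + days + ' days left');
    });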

3. Not checking often enough

Users and bosses have very little tolerance for downtime.  That’s a lesson I learned when trying to use a cheap monitoring service that only provided 10-minute check intervals.  That’s up to 9.96 minutes of risk (pretty good math, huh?) that my server might be down before I’m alerted.  Configure 1-minute check intervals on all services.  Even if I don’t need to respond right away (a development box that goes down in the middle of the night), I’ll know when it went down to within 60 seconds, which can be helpful information when slogging through the logs for root cause analysis later.

4. Not checking HTTP content

A standard HTTP check is good… but the ‘default’, ‘under construction’ Apache server page has given me that happy 200 response code and a green ‘PASS’ in my monitoring service just like my real site would.  Choose something in the footer of the page that doesn’t change and do an HTTP content matching check on that.  Don’t use the domain name though – that may show up in the ‘default’ page too and make the check less useful.
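
A minimal sketch of that kind of content matching check in Node.js (the URL and footer text below are placeholders for your own site):

    // A minimal content matching check: PASS only if the expected footer text is present.
    var https = require('https');

    function contentCheck(url, mustContain, callback) {
      https.get(url, function (res) {
        var body = '';
        res.setEncoding('utf8');
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
          var pass = res.statusCode === 200 && body.indexOf(mustContain) !== -1;
          callback(null, pass);
        });
      }).on('error', function (err) {
        callback(err, false);
      });
    }

    contentCheck('https://example.com/', 'Copyright Example Widgets, Inc.', function (err, pass) {
      console.log(err ? 'FAIL: ' + err.message : (pass ? 'PASS' : 'FAIL: content not found'));
    });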

5. Not setting the correct timeout

Timeouts for a service are very subjective and should be configurable in your monitoring service.  The web guys tell me our public website should load in under 2 seconds or our visitors will go elsewhere.  If my HTTP service check is taking 3.5 seconds, that should be considered a FAIL result and someone should be notified.  Likewise, if I had a 4-second ‘helo’ delay configured in my sendmail, I’d want to set that check’s timeout above that.

Timeouts set too high let performance issues go unnoticed; timeouts set too low just increase my notification noise.  It takes time to tweak these on a per-service level.
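
For what it’s worth, here’s a rough sketch of enforcing a per-check timeout in Node.js, using the 3.5-second example above (placeholder URL, and note that setTimeout here fires on socket inactivity rather than total elapsed time):

    // A rough sketch of a per-check timeout: fail the check if no response arrives in time.
    var https = require('https');

    function timedCheck(url, timeoutMs, callback) {
      var started = Date.now();
      var done = false;
      function finish(err, result) {
        if (done) return; // report each check exactly once
        done = true;
        callback(err, result);
      }

      var req = https.get(url, function (res) {
        res.resume();
        res.on('end', function () {
          finish(null, { status: res.statusCode, ms: Date.now() - started });
        });
      });
      req.setTimeout(timeoutMs, function () {
        req.abort(); // give up on the slow response
        finish(new Error('no response within ' + timeoutMs + 'ms'));
      });
      req.on('error', function (err) { finish(err); });
    }

    timedCheck('https://example.com/', 3500, function (err, result) {
      console.log(err ? 'FAIL: ' + err.message : 'PASS in ' + result.ms + 'ms');
    });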

6. Not realizing external and internal monitoring are different

When having an external monitoring service watch servers behind my firewalls, I may need to punch some holes in said firewall for that monitoring to work properly.  This can be a real challenge, as many monitoring services use multiple locations and dynamically pick one to monitor my servers, making it hard to maintain a whitelist of their IPs or hostnames to let into my network.

Another gotcha I’ve run into is the resolution of internal and external DNS views.  If these aren’t configured properly, you’ll most likely get lots of ‘down’ notifications for hosts that are up but simply unreachable from where the check runs.

7. Sensitivity too low/high

Some servers or services seem more prone to little hiccups that don’t take the server down but may intermittently cause checks to fail due to traffic or routing or maybe the phase of the moon.  Nothing’s more annoying than a 3AM ‘down’ SMS for a host that really isn’t down.  Some folks call this a false positive or flapping – I call it a nuisance.  Of course I should jump every time a single ping loses its way around the interwebs and every SMTP helo goes unanswered, but reality sets in and a more dangerous condition emerges: I may be tempted to start ignoring notifications because of all the false positives.

A good monitoring service handles this nicely by allowing me to adjust the sensitivity of each check.  Set this too low and notifications for legitimate down events take too long to reach me; set it too high and I’m swamped with useless false positive notifications.  Again, this is something that should be configured per service and will take time to tweak.
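
One common way that sensitivity knob gets implemented (a simplified sketch, not necessarily how any particular service does it) is requiring several consecutive failed checks before a notification goes out; a more sensitive check alerts after fewer failures, a less sensitive one after more.

    // A simplified sketch: only notify after `failuresRequired` consecutive failures.
    var failureCounts = {};

    function recordResult(checkId, passed, failuresRequired, notify) {
      if (passed) {
        failureCounts[checkId] = 0; // a single success resets the streak
        return;
      }
      failureCounts[checkId] = (failureCounts[checkId] || 0) + 1;
      if (failureCounts[checkId] === failuresRequired) {
        notify(checkId + ' has failed ' + failuresRequired + ' checks in a row');
      }
    }

    // A flaky host: require 3 consecutive failures before waking anyone up at 3AM.
    recordResult('www-ping', false, 3, console.log); // 1st failure: no alert
    recordResult('www-ping', false, 3, console.log); // 2nd failure: no alert
    recordResult('www-ping', false, 3, console.log); // 3rd failure: alert sent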

8. Notifying the wrong person

Nothing ruins a vacation like a ‘host down’ notification.  Sure, I’ve got backup sysadmins covering for me, but I forgot to change the service so notifications get delivered to them and not me.

Another thing I’ve forgotten to take into consideration is notification time windows.  John’s always the first in the office at 6AM, so he should get the alerts until Billy shows up at 9AM, because we all know Billy is useless until he’s had that first hit of coffee.

9. Not choosing the correct notification type

Quick on the heels of #8 is knowing which type of notification to send.  Yeah, I’ve made the mistake of configuring email alerts for when the email server is down.  Critical server notifications should almost always be sent via SMS.

10. Not whitelisting the notification system’s email address

Quick on the heels of #9 (we’ve got lots of heels around here) is recognizing that if I don’t whitelist the monitoring service’s email address, it may end up in the bit bucket.  Mental note – dang, all out of mental note paper.

Bonus!

11. Paying too much

I’ve paid hundreds of dollars a month for a mediocre monitoring service covering a couple dozen servers.  That’s just stupid.  NodePing costs $10 a month for 1000 servers/services at 1-minute intervals, and we’re not the only cost-effective monitoring service out there.  Be sure to shop around to find one that fits your needs well.  Just know that most services are charging way too much.

They say a wise man learns from his mistakes, but a wiser man learns from the mistakes of the wise man.  ’Nuff said, true believer.

Dear PHP, I’m leaving and yes, she’s sexier

Dear PHP,

I know this letter won’t come as much of a surprise to you.  We’ve been growing apart for a while now but today we officially part ways.

It wasn’t easy to write this.  You and I have a lot of history.  Hard to believe it was over 10 years ago when you welcomed me into your arms.  You were young, sexy, and a breath of fresh air compared to my ex, Perl (shudder – let’s not go there).  It didn’t take long for our relationship to start paying the bills.  In fact, every job I’ve had in the last decade had you on a pedestal.

We had plenty of good times.  Remember how we survived a front-page CNN link and pulled in $500k in 14 days?  And all the dynamic PDF creation over the years still brings a smile to my face.

But we had our rough times too.  I could go the rest of my life without ever hearing the words ‘register globals’ again.  And you know quite well that I still bear the scars from creating SOAP clients with you.  While neither of us has been truly faithful (whatever happened to V6 and UTF-8?), we’ve always been able to work out our differences up to now.

But starting tomorrow, for the first time in 10 years, my ‘day job’ won’t include you.  I’m leaving you for node.js.  We were introduced by our mutual friend, jQuery.  At first I thought she was just the latest flavor of the month, popular among the guys on the mailing lists, but now I’ve fallen victim to her async charm and really think we have a future together.

When you and I started having fun on the couch a year ago, I thought maybe our relationship would just keep going.  But then node and I spent some time on that couch and – oh, what she can do with JSON makes my toes curl.  To be perfectly honest, you just can’t compete with her.  She’s all that you used to be – young, sexy, and fast.  I’m sure some of your other boyfriends will argue with me about it, but I’ve been smitten.  While they’re fighting with you over namespacing, she and I will be branching in non-blocking ways and spawning like crazy.

I’m not saying we’ll never see each other again – heck, you’re even serving this blog.  But I’m moving on and I hope you will too.  If you want to see what node.js and I are up to, stop by some time.  Maybe we can even help you keep an eye on all those fatal exceptions of yours.

Sincerely,
Shawn