Server monitoring is an essential part of any business environment that runs services. Even if you don’t have your own servers and use cloud-based services, you’ll want to know about downtime. You don’t want to find out your website is down from customers, and you don’t want your boss to be the one to point out the email server has wandered off into the weeds. Done properly, server monitoring alerts those responsible for the services the minute they’re unavailable, allowing them to respond quickly and get things back up and running.
David and I have been responsible for servers and server monitoring for years and have probably made nearly all the mistakes possible while trying to do it properly. So listen to the war stories from a couple of guys with scars and learn from our mistakes.
Here are 10 common server monitoring mistakes we’ve made.
1. Not checking all my servers
Yeah, it seems like a no-brainer, but with so many irons in the fire, it’s hard to remember to configure server monitoring for all of them. Some of the more commonly forgotten servers:
- Secondary DNS and MX servers. This ‘B’ squad of servers usually gets in the game when the primary servers are offline for maintenance or have failed. If I don’t keep my eye on them too, they may not be working when I need them the most.
- New servers. Ah, the smell of fresh pizza boxes from Dell! After all the fun stuff (OS install, configuration, hardening, testing, etc.), the two most forgotten ‘must-haves’ on a new server are the asset tag (anybody still use those?) and setting up server monitoring.
- Temporary/permanent servers. You know the ones I’m talking about. The ‘proof of concept’ development box that was thrown together from retired hardware and has suddenly been dubbed ‘production’. It needs monitoring too.
2. Not checking all services on a host
We know most failures take the whole box down, but if I don’t watch each service on a host, I could have a running website while FTP has flatlined.
The check I most commonly forget is covering both HTTP and HTTPS. Sure, it’s the same ‘service’, but the Apache configuration is separate, the firewall rules are likely separate, and of course HTTPS needs a valid SSL certificate. I’ve gotten the embarrassing calls about the site being ‘down’ only to find out the cert had expired. Oh, yeah… I was supposed to renew that, wasn’t I.
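If your monitoring service doesn’t watch certificate expiration for you, it’s easy to script. Here’s a minimal sketch using Python’s standard library; the hostname and the 14-day warning threshold are placeholders:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Connect over TLS and return days until the certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

if days_until_cert_expiry("www.example.com") < 14:  # placeholder host/threshold
    print("Renew that cert before the embarrassing calls start.")
```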
3. Not checking often enough
Users and bosses have very little tolerance for downtime. I learned that lesson using a cheap monitoring service that only offered 10-minute check intervals. That’s up to 9.96 minutes of risk (pretty good math, huh?) that my server might be down before I’m alerted. Configure one-minute check intervals on all services. Even if I don’t need to respond right away (a development box that goes down in the middle of the night), I’ll know when it went down to within 60 seconds, which is helpful information when slogging through the logs for root cause analysis later.
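That logging point is worth underlining: even an unattended check earns its keep as a timestamp trail. A toy version of the loop, where `check` stands in for whatever probe you care about:

```python
import time
from datetime import datetime

def watch(check, interval: int = 60) -> None:
    """Run `check` forever at a fixed interval, timestamping each failure."""
    while True:
        started = time.monotonic()
        if not check():
            # Even if nobody responds at 3AM, this line brackets the outage
            # to within one interval for tomorrow's root cause analysis.
            print(f"{datetime.now().isoformat()} DOWN")
        time.sleep(max(0.0, interval - (time.monotonic() - started)))
```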
4. Not checking HTTP content
A standard HTTP check is good… but the default ‘under construction’ Apache page has given me that happy 200 response code and a green ‘PASS’ in my monitoring service just like my real site would. Choose something in the footer of the page that doesn’t change and do an HTTP content-matching check on that. Don’t use the domain name, though; it may show up in the default page too and make the check less useful.
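In script form, the idea looks something like this; a minimal sketch with the Python standard library, where the URL and footer text are placeholders:

```python
import urllib.request

def content_check(url: str, marker: str) -> bool:
    """Pass only if the page loads AND contains the expected marker text."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except OSError:  # covers URLError/HTTPError and socket failures
        return False
    # A 200 alone isn't enough; the Apache default page returns 200 too.
    return marker in body

# Match stable footer text, not the domain name.
print(content_check("https://www.example.com/", "© Example Corp, est. 1999"))
```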
5. Not setting the correct timeout
Timeouts for a service are very subjective and should be configurable in your monitoring service. Web guys tell me our public website should load in under 2 seconds or our visitors will go elsewhere. If my HTTP check takes 3.5 seconds, that should count as a FAIL and someone should be notified. Likewise, if I had a 4-second HELO delay configured in sendmail, I’d want the SMTP check’s timeout set above that.
Timeouts set too high let performance issues go unnoticed; timeouts set too low just increase notification noise. It takes time to tweak these on a per-service level.
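If you’re scripting your own checks, a per-service latency budget captures the distinction; the budgets below are illustrative placeholders, not recommendations:

```python
import time
import urllib.request

# Per-service latency budgets in seconds. Tune per service: just above
# "healthy" so regressions show up, not so tight that normal jitter pages you.
BUDGETS = {
    "https://www.example.com/": 2.0,       # public site: visitors bail past 2s
    "https://intranet.example.com/": 5.0,  # internal app: looser budget
}

def latency_check(url: str) -> bool:
    """FAIL if the service responds slower than its budget (or not at all)."""
    budget = BUDGETS[url]
    start = time.monotonic()
    try:
        # Hard timeout above the budget so "slow" and "hung" both fail.
        with urllib.request.urlopen(url, timeout=budget * 2) as resp:
            resp.read()
    except OSError:
        return False
    return time.monotonic() - start <= budget
```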
6. Not realizing external and internal monitoring are different
When having an external monitoring service watch servers behind my firewalls, I may need to punch some holes in said firewall for the monitoring to work properly. This can be a real challenge: many monitoring services check from multiple locations and dynamically pick one to probe my servers, making it hard to maintain a whitelist of their IPs or hostnames to let into my network.
Another gotcha I’ve run into is the resolution of internal and external DNS views. If split-horizon DNS isn’t configured properly, you’ll get lots of ‘down’ notifications for hosts that are up but simply unreachable from where the check runs.
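A quick way to catch split-horizon surprises is to resolve the same name against an internal and an external resolver and compare. A sketch assuming the third-party dnspython package (2.x); the nameserver addresses are placeholders:

```python
import dns.resolver  # third-party: pip install dnspython

def a_records(nameserver: str, name: str) -> set[str]:
    """Resolve `name` against one specific nameserver, returning its A records."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return {rr.to_text() for rr in resolver.resolve(name, "A")}

# Placeholder resolvers: your internal view vs. what the world sees.
internal = a_records("10.0.0.53", "mail.example.com")
external = a_records("8.8.8.8", "mail.example.com")
if internal != external:
    print(f"Split-horizon mismatch: internal={internal}, external={external}")
```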
7. Sensitivity too low/high
Some servers or services seem prone to little hiccups that don’t take the box down but intermittently cause checks to fail due to traffic, routing, or maybe the phase of the moon. Nothing’s more annoying than a 3AM ‘down’ SMS for a host that really isn’t down. Some folks call this a false positive or flapping; I call it a nuisance. Of course I should jump every time a single ping loses its way around the interwebs and every SMTP HELO goes unanswered, but reality sets in and a more dangerous condition emerges: I may be tempted to start ignoring notifications altogether because of all the false positives.
A good monitoring service handles this nicely by letting me adjust the sensitivity of each check. Set it too low and notifications for legitimate down events take too long to reach me; set it too high and I’m swamped with useless false positives. Again, this should be configured per service and will take time to tweak.
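The usual knob is “how many consecutive failures before we alert”. A sketch of the tradeoff, where `history` is the list of recent check results, newest last:

```python
def should_alert(history: list[bool], fails_required: int = 3) -> bool:
    """Alert only after `fails_required` consecutive failed checks.

    Higher values mean a single lost ping never sends a 3AM SMS, but a real
    outage takes (fails_required x check interval) longer to reach you.
    """
    if len(history) < fails_required:
        return False
    return not any(history[-fails_required:])

# One flapped check: no alert. Three in a row: wake somebody up.
print(should_alert([True, False, True, False]))    # False
print(should_alert([True, False, False, False]))   # True
```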
8. Notifying the wrong person
Nothing ruins a vacation like a ‘host down’ notification. Sure, I’ve got backup sysadmins covering things, but I forgot to change the service so that notifications get delivered to them and not me.
Another thing I’ve forgotten to take into account is notification time windows. John’s always first in the office at 6AM, so he should get the alerts until Billy shows up at 9AM, because we all know Billy is useless until he’s had that first hit of coffee.
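Routing by time window is simple enough to sketch; the schedule and addresses here are made up for illustration:

```python
from datetime import datetime

# Hypothetical on-call windows in local 24h time: (start_hour, end_hour, who).
SCHEDULE = [
    (6, 9, "john@example.com"),    # John's in at 6AM
    (9, 18, "billy@example.com"),  # Billy, safely post-coffee
]

def on_call(now: datetime | None = None) -> str:
    """Return whoever should get the alert right now."""
    hour = (now or datetime.now()).hour
    for start, end, person in SCHEDULE:
        if start <= hour < end:
            return person
    return "night-rotation@example.com"  # everything outside business hours
```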
9. Not choosing the correct notification type
Quick on the heels of #8 is knowing which type of notification to send. Yeah, I’ve made the mistake of configuring email alerts for when the email server is down. Critical server notifications should almost always go out via SMS.
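The rule of thumb: never deliver an alert over the channel whose failure triggered it. A toy dispatcher; the service names and stub senders are hypothetical:

```python
def send_sms(msg: str) -> None:
    print(f"SMS: {msg}")    # stand-in for your SMS gateway of choice

def send_email(msg: str) -> None:
    print(f"EMAIL: {msg}")  # useless while the mail server is down

# Services whose outage would swallow an email alert.
EMAIL_DEPENDENT = {"smtp", "imap", "internal-dns"}

def notify(service: str, msg: str) -> None:
    if service in EMAIL_DEPENDENT:
        send_sms(msg)   # the alert must not depend on the thing that broke
    else:
        send_email(msg)
```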
10. Not whitelisting the notification system’s email address
Quick on the heels of #9 (we’ve got lots of heels around here) is recognizing that if I don’t whitelist the monitoring service’s email address, its notifications may end up in the bit bucket. Mental note – dang, all out of mental note paper.
Bonus!
11. Paying too much
I’ve paid hundreds of dollars a month for a mediocre monitoring service watching a couple dozen servers. That’s just stupid. NodePing costs $10 a month for 1000 servers/services at one-minute intervals, and we’re not the only cost-effective monitoring service out there. Be sure to shop around to find one that fits your needs. Know, though, that most services are charging way too much.
They say a wise man learns from his mistakes, but a wiser man learns from the mistakes of the wise man. ’Nuff said, true believer.