If you’ve spent any amount of time on the receiving end of tickets for a Service, Support, or Help Desk, you know that even the most obvious emails can have a completely unrelated impact or resolution. Such was the case with the simple message I received one weekend while “on call.”
“We’re having trouble accessing Address Lookups. It appears to be down?” the message read while I was enjoying a barbecue at the beach. I shrugged it off, since it was a system that only a couple of people used. From that point forward, I learned the road of life is paved with experience.
Now, to understand the message above, you have to know that Address Lookups was not an essential service. However, it just so happened that this application resided on a web server that also housed an externally facing Intranet as well as one of the organization's essential systems. So, "trouble accessing Address Lookups" actually meant the web server hosting the other two systems had experienced a catastrophic hard drive failure. But we wouldn't know this for at least a couple of days (many warning systems were either non-operational or had been turned off).
On the support side, I had evaluated the exposure and triaged it accordingly. The system in question was not one that required mobilizing an emergency force of IT Super Heroes. Instead, I tucked it away to be looked at first thing Monday morning – at which point all heck broke loose when we realized one of our essential systems was inaccessible, had been that way for a couple of days, and would remain that way for some time. How could this have been avoided, or at least triaged more effectively?
It really goes back to planning, understanding the relationships between assets within your organization, and evaluating the effect a change can have. A lot of things went wrong. Warning systems had been turned off, and core systems sat on old hardware that housed rarely used systems. Worse still, an essential web application had been mistakenly moved to this old hardware.
What we really needed, but didn't have until after the disaster, was a true database that tracked the status of our assets and allowed quick, detailed reporting on that status. Such a tool would have shown the interdependencies of our systems and alerted us to the potential problems those changes could cause. Of course, having the right tool is only half the battle. Populating it with relevant, up-to-date data is essential as well.
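At its core, that kind of impact analysis is just a dependency graph over your configuration items: record what each service depends on, then walk the graph to see what a failure touches. The sketch below is a minimal, hypothetical illustration of the idea (the asset names and `add_dependency`/`impact_of` helpers are invented for this example), not a stand-in for a real CMDB.

```python
# Minimal sketch of CMDB-style impact analysis: a dependency graph of
# assets, plus a traversal that answers "what breaks if this fails?"
# All asset names here are hypothetical, loosely mirroring the incident above.
from collections import defaultdict, deque

# Map each asset to the assets that depend on it.
dependents = defaultdict(list)

def add_dependency(asset, depends_on):
    """Record that `asset` depends on `depends_on`."""
    dependents[depends_on].append(asset)

def impact_of(asset):
    """Return every asset affected, directly or transitively, by a failure of `asset`."""
    affected, queue = set(), deque([asset])
    while queue:
        current = queue.popleft()
        for dep in dependents[current]:
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# Three applications quietly sharing one aging web server.
add_dependency("Address Lookups", "web-server-01")
add_dependency("Intranet", "web-server-01")
add_dependency("Essential System", "web-server-01")

print(sorted(impact_of("web-server-01")))
# → ['Address Lookups', 'Essential System', 'Intranet']
```

With a graph like this populated and kept current, the weekend ticket about Address Lookups would have immediately flagged the essential system sharing the same host, changing the triage decision entirely.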
Therein lies the trouble, though. Many organizations, still turning to Excel or even pen and paper to record, track, and monitor essential IT resources, don't have the bandwidth to keep everything tirelessly updated. Modern ITSM tools that include a CMDB as part of their core functionality have made automatic discovery an essential feature. With it, gone are the days of physically scouting out the location of hardware and inspecting it manually. Now you can do that with the push of a button.
So, how does your CMDB process or ITSM solution stack up? Take a look at our CMDB Checklist and see the twenty-five essential functions your IT organization needs.