With a little luck, you can get out of a parking ticket or jury duty, but you'll need a little more than wishful thinking to avoid network downtime.
Every minute enterprise IT spends searching for a solution, the company at large loses nearly $9,000, according to a recent study by Emerson Network Power and the Ponemon Institute. The research also indicates that since 2010, businesses have seen a 38-percent rise in the average total cost of downtime ($740,357), an 81-percent uptick in maximum costs (pushing $2.5 million) and a substantial increase in cybercrime activity. Six years ago, 1 in every 50 downtime events resulted from criminal activity. Today, it's more than 1 in every 5.
Enterprise IT must push for greater security, faster implementation and more powerful technology to keep network downtime from devouring the business's bottom line. But before we tackle specific methods for avoiding costly downtime, let's examine the behaviors that leave enterprise IT susceptible to these problems in the first place.
1. Too Much Manual, Not Enough Automation
Regression testing, recording and release management are just some of the automated IT service management tools enterprise IT teams have in their tool shed. The decision to implement one, a few or all of them depends entirely on legacy processes and where businesses see their technology moving over both the near and long term. Automating nothing, however, spells disaster from a purely logistical standpoint.
Higher demand for speed and reliability in continuous integration has enterprise IT working harder than ever before. Code from a rushed, overworked IT employee is fertilizer for change management mistakes. Let the robots lighten the load by taking over low-value, computation-heavy tasks that would otherwise slow down ITSM or quietly persuade enterprise IT staff to abandon best practices for quick fixes.
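One way automation removes that temptation is a simple change gate: code only rolls out automatically when every regression test passes and the change is low-risk, so nobody is tempted to skip checks under deadline pressure. A minimal sketch, with made-up function names, risk scores and thresholds (not drawn from any particular ITSM product):

```python
# Hypothetical change-gating sketch: auto-approve only low-risk changes
# whose full regression suite passed; everything else gets blocked or
# routed to a human. All names and thresholds are illustrative.

def gate_change(test_results, risk_score, max_risk=3):
    """Decide how a proposed change should proceed.

    test_results -- mapping of regression test name to pass/fail bool
    risk_score   -- assumed numeric risk rating assigned to the change
    """
    if any(not passed for passed in test_results.values()):
        return "blocked: regression failure"
    if risk_score > max_risk:
        return "needs human review"
    return "approved for automated rollout"

# A clean, low-risk change sails through without human intervention:
print(gate_change({"login": True, "billing": True}, risk_score=2))
# approved for automated rollout
```

The point of the sketch is that the rushed path and the careful path become the same path: automation runs the checks whether or not anyone is in a hurry.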
2. Not Checking the CMDB Before Implementing Change
Change implementation may look like jumbled values on a screen to laymen peeking at an IT department's monitors, but small discrepancies in code can infect simple updates or patches and cause a pandemic of downtime across multiple applications.
The greater IT community recognizes, to one degree or another, that human error causes much of today's network downtime. One study by Avaya traced as much as 81 percent of network downtime back to configuration changes made by enterprise IT personnel. Checking a proposed change against the configuration management database shows developers and operations crews exactly how the change, as written, will impact system processes.
Enterprise IT cannot neglect these kinds of checks and balances. In the best-case scenario, failure to utilize a CMDB makes ITSM creep like a turtle. At worst, an unused CMDB turns enterprise IT into a breeding ground for errors.
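In practice, that pre-change check often amounts to walking the CMDB's dependency graph: given the configuration item being touched, list every downstream service that could be affected before anyone pushes a button. A minimal sketch, with an invented toy graph and CI names standing in for a real CMDB query:

```python
# Hypothetical CMDB impact check: breadth-first walk from a changed
# configuration item (CI) to every dependent service. The graph and
# CI names below are invented for illustration.

from collections import deque

# Toy CMDB: each CI maps to the services that depend on it.
cmdb = {
    "db-server-01": ["billing-api", "reports"],
    "billing-api": ["web-portal"],
    "reports": [],
    "web-portal": [],
}

def impacted_services(ci, graph):
    """Return every service reachable downstream of the changed CI."""
    seen, queue = set(), deque([ci])
    while queue:
        current = queue.popleft()
        for dependent in graph.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)

# Patching the database server touches far more than the database:
print(impacted_services("db-server-01", cmdb))
# ['billing-api', 'reports', 'web-portal']
```

Even this toy version makes the earlier point concrete: a "simple" patch to one server surfaces three affected applications before the change window opens, not after the outage starts.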
3. No Battle-Tested Disaster Recovery Plan
Business continuity plans all look good in theory, but writing down intangible, unproven tactics for combating downtime or mitigating its effects doesn't guarantee success. Quite the opposite, actually - without knowing exactly how well an enterprise IT team mobilizes against downtime and adjusting accordingly, no recovery plan is worth more than the paper it's printed on.
As such, enterprise IT needs dynamic, flexible processes that both respond well to rearrangement when necessary and involve clearly defined steps that make planning easier. Most importantly, ITSM resources should also support regularly scheduled downtime drills, helping teams navigate many different scenarios as quickly and completely as possible so they can deliver real results to end users should anything happen.
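A drill is only "battle-tested" if it produces a measurable result, so one common approach is to time each recovery step and compare the total against a recovery-time objective (RTO). A minimal sketch, with invented step names and timings (any real drill would feed in measured durations):

```python
# Hypothetical drill scorecard: sum the measured duration of each
# recovery step and flag whether the team beat its recovery-time
# objective (RTO). Step names and minute values are invented.

def drill_report(step_minutes, rto_minutes):
    """Summarize a downtime drill against the RTO.

    step_minutes -- mapping of recovery step name to measured minutes
    rto_minutes  -- the recovery-time objective for the whole drill
    """
    total = sum(step_minutes.values())
    return {
        "total_minutes": total,
        "met_rto": total <= rto_minutes,
        # The slowest step is the first candidate for process fixes.
        "slowest_step": max(step_minutes, key=step_minutes.get),
    }

report = drill_report(
    {"detect": 4, "fail_over": 12, "verify": 9}, rto_minutes=30
)
print(report)
# {'total_minutes': 25, 'met_rto': True, 'slowest_step': 'fail_over'}
```

Running a report like this after every drill turns the paper plan into a feedback loop: the slowest step in each run becomes the target for the next round of rearrangement.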
When downtime strikes, does your IT staff run like a Pentagon situation room, or like someone's dad assembling furniture with instructions in Swedish?