Author(s): Peter Bye, Posted 11/30/11
My last couple of blogs were about the importance of disaster recovery planning. If an organisation does not have a well-tested DR strategy – sadly, an all-too-common state of affairs – the ability to deliver IT services and hence run the business will be severely disrupted if disaster strikes.
But it’s not just disasters that can compromise IT service delivery. Sudden traffic surges cause dramatic increases in the workload on the systems used, often bringing them to their knees. All the systems and their supporting infrastructure are intact – it’s just that they can’t cope with the demands placed on them.
I’m not talking about planned increases in workload, typically associated with dates or events. Month and year end processing, or big increases in travel associated with holiday seasons, are examples. There is time to plan for them, although some systems still manage to crash even when the extra load is anticipated long in advance.
It’s big increases in traffic with little or no advance warning that cause the real problems. Disasters may in fact be the trigger. Incidents such as floods or earthquakes can put tremendous loads on the IT systems used in the response. Emergency services are the prime examples. Police, fire, ambulance and even military systems could be involved, depending on the scale of the incident. All are likely to have to deal with far more than their normal traffic. And systems outside the emergency services, such as those used for recording faults in telecommunications and other infrastructure and scheduling repairs, are likely to be affected as well.
Less cataclysmic events can also cause sudden surges. Special promotions for products or services, such as discounted fares offered by airlines, increase the demands on systems. The business making the promotion should be able to prepare its systems, but other organisations affected, such as those handling payment processing, may not have had the same warning.
It’s obvious that we require technology elastic enough to cope with sudden wide variations in load. One possibility is to provide systems big enough to handle just about any possible peak, although predicting peaks has been made more difficult by the ever-rising use of the Internet. The number of end users governs how much load can be generated, and the numbers of PCs, smart phones and other user devices are effectively unrestricted.
Assuming that we have configured a system big enough to cope, cost then rears its ugly head. For much of the time, the capacity available would be under-used. How can we square the circle of providing capacity for peaks while at the same time controlling costs?
The pay-for-use approach with ClearPath systems, using metering technology, overcomes these problems. Hardware technology developments have reduced costs, allowing the delivery of systems with far more power than is needed for normal operation. Headroom is left for planned peaks and sudden shocks. But the user only pays for what is used (in ‘MIP time’ units), not the full capacity available in the system. It’s analogous to electricity supply. The incoming supply allows wide variations in load, for example depending on the weather. The customer pays for the kilowatt-hours consumed, not the maximum possible.
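To make the electricity analogy concrete, here is a minimal sketch of how metered billing compares with paying for full installed capacity. All the figures are invented for illustration; real ClearPath metering uses its own units and commercial terms.

```python
# Hypothetical illustration of metered (pay-for-use) pricing versus
# paying for full installed capacity. Units and rates are invented
# for the example and are not actual ClearPath figures.

FULL_CAPACITY_UNITS = 1000   # capacity units available per day
RATE_PER_UNIT = 0.10         # hypothetical cost per unit consumed

# Daily consumption over a week: steady normal load, plus one
# surge day that the spare headroom absorbs.
daily_usage = [220, 240, 210, 950, 230, 225, 215]

metered_cost = sum(daily_usage) * RATE_PER_UNIT
full_capacity_cost = FULL_CAPACITY_UNITS * len(daily_usage) * RATE_PER_UNIT

print(f"Metered cost:       {metered_cost:.2f}")
print(f"Full-capacity cost: {full_capacity_cost:.2f}")
# The surge day runs within the installed headroom, yet the customer
# pays only for the 2290 units actually consumed, not the 7000
# units of capacity that were available over the week.
```

The point of the sketch is simply that the headroom needed for the surge day costs nothing while it sits idle, which is what distinguishes the metered model from sizing and paying for peak capacity up front.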
Metering is simple and immediately handles sudden increases in load. Alternative approaches, such as repurposing and switching in additional servers or virtual servers, are more complicated and may be less responsive.
The statements posted on this blog are those of the writer alone, and do not necessarily reflect the views of Unisys.