Mobile devices and other consumer-driven technologies are set to increase transaction volumes by an order of magnitude. Now is the time to start preparing your data center and enterprise applications for this shift.
But be forewarned: the challenges you will face are not necessarily technical in nature. Technology is the easier part. Before you can start dealing with the technology, you need to put the dual foundations of good process and good management in place.
A Mobile Transactions Boom
It was only a decade ago that the advent of the Web made it possible for consumers to directly access services such as airline booking systems. It’s easy to forget that, before then, essentially 100 percent of airline bookings were conducted over the phone. The moment the public could browse and book reservations online, the number of transactions going through the airline reservation systems went through the roof.
There wasn’t a material change in the number of bookings. But people could suddenly do a lot more comparison shopping and took advantage of their new-found access to corporate data, driving up the number of transactions (and increasing the opportunity for the airlines to interact with their customers).
The transaction load is about to spike again, and the result could dwarf what we have seen thus far. Mobile broadband, devices, and apps now give users 24/7 access to the Internet, and this is changing user behavior again. People can check flights, check in, and change seats at will. Before, they had to find a laptop, desktop, or public terminal to get an Internet connection.
We are heading into an era where people will demand always-on connectedness from all of their relationships, whether business, personal, or commercial. If I buy something from Amazon, I now expect that I can track the order straightaway, in real time, on my phone, netbook, or other device with a mobile broadband connection. If I need access to a company document or database, I am frustrated if I can’t access it immediately via my mobile connection.
These expectations are exploding with every new smartphone or mobile broadband device delivered to users. And these expectations are going to expand the demands on our data centers faster and more dramatically than we saw a decade earlier, when the Web was young.
Every application will be affected. Change won’t be limited to only those organizations with a public-facing infrastructure. While they are certainly the first to have to deal with increased transaction load, the demand will ultimately be experienced across the board — even on internal applications.
For example, I can now do my business expenses on my mobile phone, using a mobile app. I can review and approve my employees’ expense reports the same way. This is just the beginning of how mobile access will transform how people work, and how the data centers that support them must work.
Targeted Evolution is Key
So what is the best way to prepare for this new era, which is already upon us, and for which delay is not an option? Step one: Think modular, focusing on a targeted evolution of your organization’s infrastructure.
Let’s return to the airline example. It’s an interesting and appropriate one, because airlines historically have some of the largest and most complex mainframe or mainframe-class back-end infrastructure. And there are both public- and non-public-facing aspects to their applications.
The best approach in this type of environment is not simply to rip and replace the existing infrastructure. Rather, in a modular fashion, break out the new components and architectural layers that are required. The result is a hybrid: very high-performance (and perhaps proprietary) infrastructure on the back end, with open mobile Web and mobile communication standards on the front end.
Real-world success is not in expensive multi-year, multi-million-dollar infrastructure overhauls. It is in targeted evolution of the infrastructure, preserving the best of the existing business logic, business process, and application investment, and opening these up through selected interfaces.
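The “selected interfaces” idea above can be sketched in a few lines: the existing back-end logic stays untouched, and a thin adapter layer exposes only the chosen operations in an open, mobile-friendly format. The legacy function, record layout, and field names below are hypothetical, purely for illustration.

```python
# A minimal sketch of opening a proprietary back end through a selected
# interface. The legacy call and its fixed-format record are invented
# stand-ins, not any real reservation system's API.
import json

def legacy_lookup_booking(record_locator):
    """Stand-in for a call into the existing mainframe business logic."""
    return ("SMITH/J", "UA0123", "CONFIRMED")   # fixed-format legacy record

def booking_api(record_locator):
    """Open front-end interface: translate the legacy record to JSON."""
    name, flight, status = legacy_lookup_booking(record_locator)
    return json.dumps({"passenger": name, "flight": flight, "status": status})

print(booking_api("ABC123"))
```

The point of the adapter is that the mobile front end never needs to know the back-end record format, so either side can evolve independently.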
Now let’s look at the other end of the spectrum. Does targeted evolution still apply in organizations that started small with Windows or Linux servers, and then scaled up by continually adding servers to handle the additional workload? Certainly it’s possible, theoretically, to continue adding more and more servers into their environment to handle the increased transaction load.
But management of such complexity reaches a tipping point where it becomes overwhelming and costly. There’s the issue of physical space, which isn’t infinite. Tracking what everything is doing, and why things were put in place years earlier, becomes unwieldy if not entirely unworkable. Even in a virtualized environment, this type of scaling up eventually becomes inefficient to manage.
Here the opportunity lies in deploying advanced automation capabilities to provision and control the infrastructure, to minimize or even eliminate human error, and to speed the modernization of the infrastructure when and as needed.
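One way to picture that kind of automation is declarative, idempotent provisioning: the desired state is recorded once, and a reconcile step acts only on the difference, so re-running it cannot introduce drift or human error. The roles and counts below are hypothetical.

```python
# A sketch of declarative provisioning: compare desired state to what is
# actually running, and emit only the actions needed to close the gap.
desired_state = {"web": 4, "app": 2, "db": 1}   # role -> server count we want

def reconcile(desired, running):
    """Return the provisioning actions needed to reach the desired state."""
    actions = []
    for role, want in desired.items():
        have = running.get(role, 0)
        if have < want:
            actions.append(("provision", role, want - have))
        elif have > want:
            actions.append(("decommission", role, have - want))
    return actions

print(reconcile(desired_state, {"web": 2, "db": 2}))
# -> [('provision', 'web', 2), ('provision', 'app', 2), ('decommission', 'db', 1)]
```

Because the reconcile step is idempotent, running it twice is harmless: the second pass finds no gap and emits no actions.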
Avoiding A Virtual Mess
Virtualization is often presented almost as a panacea for our infrastructure problems. Unfortunately, if you start with a physical infrastructure that is not in control — that is, not well-managed with good policy and process — and you start virtualizing it, all you do is transform your physical mess into a virtual mess.
That might reduce some infrastructure costs in the short run. But long term, as the system scales out, you probably end up with less control over your environment than you had before. You will certainly lose the one-to-one mapping of application to physical infrastructure. You will also struggle if you don’t have a good process in place for knowing what’s running where and why, and how it will be affected by a network outage, denial-of-service attack, or other disruption.
Once again, the challenge is not so much a technical issue as it is a management and control matter. Let’s start with something many readers will likely be familiar with: “virtual sprawl.” Because it’s so easy to drop a new virtual instance into the environment, it is often not considered to be costing the organization anything. There’s no capital expense to put a server in. The result is unchecked expansion of server instances.
Yet all of these virtual servers require patching and maintenance to maintain security, and keep the OS and application base up-to-date. And once created, they live on, even though they might not have a good business reason to continue. Perhaps they were deployed by a small department to test some new functionality. Because they’re virtual, they’re forgotten about, and they tick away forever. Without life cycle management of the virtual infrastructure, the result is virtual sprawl.
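The life-cycle management described above can be as simple as attaching an owner and an expiry date to every instance at creation time, then regularly flagging anything past its expiry for review instead of letting it tick away forever. The inventory records below are invented for illustration.

```python
# A sketch of a virtual-machine life-cycle check: every instance carries
# an owner and an expiry date, and stale instances are flagged for review.
from datetime import date

inventory = [
    {"vm": "test-featureX", "owner": "dept-a", "expires": date(2011, 1, 31)},
    {"vm": "erp-prod-01",   "owner": "it-ops", "expires": date(2012, 6, 30)},
]

def stale_vms(inventory, today):
    """Return the names of instances whose expiry date has passed."""
    return [rec["vm"] for rec in inventory if rec["expires"] < today]

print(stale_vms(inventory, date(2011, 6, 1)))   # -> ['test-featureX']
```

A report like this, fed back to the owning department, turns the “forgotten test server” into an explicit renew-or-retire decision.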
The other common challenge with virtualization is what we call “virtual stall.” Organizations do the easy stuff, which may be the first 20 or 30 percent of their workloads. Then they hit a wall. The systems are more complex, the applications need to be better understood, or the management needs to be better implemented. Companies put the brakes on their efforts, and don’t actually get to take full advantage of what could be virtualized, and how it could be optimally managed.
Clearing Up Cloud Confusion
What about cloud computing as a cost-efficient way to make certain systems ready for the coming increase in transaction load, driven by mobile ubiquity? There are certainly perceptions that cloud computing is a commodity purchase, that all cloud services are alike or at least very similar, and that there are really no architectural considerations or decisions that need to be made.
I wish it were that easy. The different cloud providers (including Unisys) have taken different approaches to their infrastructures, and you need to find one or more that are compatible with your organization’s workloads and requirements. Amazon, with its Web services offerings, essentially provides a slice of its capacity. You can do whatever you want with it, using whatever programming tools and model your organization prefers.
The Unisys cloud portfolio accommodates a straightforward “lift-and-shift” of enterprise infrastructure from a client’s data center into a public cloud infrastructure. We accomplished this by supporting multiple operating environments, an open interface, and different classes of compute infrastructure on the back end.
So, on one hand, Unisys offers a standard “scale-out pizza-box” approach to computing, where you have thousands of two-socket servers sitting in a site somewhere. On the other hand, we also offer four-socket and eight-socket scalable platforms, which can drive the more complex online transaction-processing-type workloads.
I’m not saying one approach is better than the other. What I am saying is that you have to understand there are fundamental differences in how providers approach their cloud offerings. You have to start with the destination cloud in mind, know what the programming model looks like, and understand how to operate within the vendor’s management tiers.
Which is the right approach? There’s not necessarily one answer to that question. Organizations are going to see success in many different iterations. The choice of the “right” cloud architecture might depend simply on which of the many different categories of applications in your environment you want, or need, to move to the cloud.
If you’re simply looking to move the Web tier to the cloud, that’s a whole lot easier than, for example, taking your SAP or Oracle ERP infrastructure and pushing that out. Indeed, ERP applications are much more complex in terms of their interaction with the infrastructure in general, and with each other.
A typical SAP infrastructure may have 100 or more servers within it. If you choose to automate a process, you need all of these pieces interacting seamlessly. As a result, an entire traditional layer of management infrastructure and automation is required to successfully move an ERP landscape to the cloud.
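The dependency problem behind that automation can be sketched as a topological sort: before an ERP landscape can be started, stopped, or recovered automatically, each component’s dependencies must be known so everything comes up in a workable order. The component names below are hypothetical, and this minimal sketch assumes the dependency graph has no cycles.

```python
# A sketch of dependency-ordered startup for a multi-server landscape:
# each component starts only after everything it depends on is running.
def startup_order(deps):
    """Return components in an order that respects their dependencies."""
    order, done = [], set()

    def visit(node):
        if node in done:
            return
        for dep in deps.get(node, []):
            visit(dep)          # start dependencies first
        done.add(node)
        order.append(node)

    for node in deps:
        visit(node)
    return order

deps = {"app-server": ["database"], "web-dispatch": ["app-server"], "database": []}
print(startup_order(deps))   # -> ['database', 'app-server', 'web-dispatch']
```

With 100-plus servers, it is exactly this kind of encoded dependency knowledge, rather than tribal memory, that makes reliable automation possible.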
These issues are not the first things most folks think about when considering a move to the cloud. But as you start to push the edge of what’s possible in the cloud, this sort of management planning and resultant infrastructure will be essential.
Beyond the Public vs. Private Cloud Debate
Of course, public clouds are not the only option, nor are they the appropriate choice for many requirements. National governments, for example, are unlikely to shift significant workloads to the public cloud due to security concerns. Certainly there will be instances where non-secure public records information can (and will) be situated in a public cloud, but the majority of what governments are now doing, or planning, is targeted at private clouds.
Here in the U.S., there’s strong interest and, indeed, a mandate from the White House to move the Federal government toward a cloud-based infrastructure. What we’re seeing is that some of the Federal agencies will, in effect, become shared service providers; that is, cloud providers for other Federal agencies.
While this is certainly not a public cloud infrastructure, it is a shared, or “community,” cloud to be used by a restricted set of government organizations. In fact, there is growing interest in building community clouds, not just at the Federal level, but also at state and city levels. In these scenarios, a city’s IT organization would be providing capacity not just to the core municipal infrastructure, but to local school districts (for example) and other community entities under the local government umbrella.
Unisys has been working with public cloud providers, companies with private clouds, and organizations with community clouds to help them establish or strengthen their infrastructure. Our emphasis is on enterprise capability and security; however, in helping other public cloud providers establish their infrastructure, we find they sometimes choose to emphasize different elements, depending upon their requirements.
The security aspect, as an example, might not be as important in certain situations. But all of the automation and process we’ve built for the Unisys public cloud can be deployed in a less rigorously secure public cloud infrastructure, and we’re helping clients benefit from that today. Likewise, we’re leveraging the processes, knowledge, and experience we developed from building our own public and private clouds, and sharing that with other companies to help them establish private, public, or community clouds.
Is there a true distinction between public, private, and community clouds? When it comes to the core technologies and processes used, the distinction is somewhat blurred. For instance, when we help organizations with a private cloud, they might not require the Stealth technology we’ve employed for the public Unisys Secure Cloud because they’re operating behind their own firewall within their own security model. But they might be interested in the provisioning, automation, orchestration and, oftentimes, the charge-back models that we built and put in place.
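The charge-back models mentioned above boil down to metering usage per consuming group and multiplying by a published rate card, so that shared capacity is billed back to the departments that consume it. The rates and usage figures below are invented purely for illustration.

```python
# A minimal sketch of a usage-based charge-back calculation:
# metered consumption per department, priced against a rate card.
rate_card = {"cpu_hours": 0.12, "gb_storage": 0.05}   # price per unit

usage = {
    "finance": {"cpu_hours": 1200, "gb_storage": 500},
    "schools": {"cpu_hours": 300,  "gb_storage": 2000},
}

def charge_back(usage, rates):
    """Compute each department's bill from metered usage and the rate card."""
    return {dept: round(sum(qty * rates[item] for item, qty in meters.items()), 2)
            for dept, meters in usage.items()}

print(charge_back(usage, rate_card))
# -> {'finance': 169.0, 'schools': 136.0}
```

Even a simple model like this changes behavior: once capacity has a visible price per department, “free” virtual servers stop accumulating unchecked.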
In the final analysis, while there are certainly architectural elements to preparing your data center for transformation, many of the critical success factors come down to good process and good management: what I call the Five Critical Elements of Data Center Transformation. Feel free to share them with your team.
Have a question, suggestion, or comment about readiness for data center transformation? Post it here or write to me.