In today’s information-centric world, the heart of any enterprise undoubtedly lies in its data center. With virtualization now a mainstream technology, workloads are increasingly being migrated to and consolidated in data centers. As per the InformationWeek 2014 State of the Data Center Survey, 73% of respondents see demand for data center resources increasing over the previous year. Even the growing adoption of public clouds is unlikely to affect this – a Market Pulse survey by IDG Research Services indicates that more than 65% of organizations are implementing or planning to implement hybrid cloud solutions, rather than move entirely to the public cloud. It is clear that the strategic importance of the data center is unlikely to diminish anytime soon.
However, as data centers grow in size, concerns about complexity of management and security are rising. Most enterprises still invest primarily in perimeter-based defenses to secure their data centers. Analyst firm Dell’Oro Group states that the previous generation of perimeter-based firewalls cannot address new network complexities. Clearly, protecting just the data center boundary is not enough: if intruders manage to breach the perimeter, they gain access to the crown-jewel applications inside. On the other hand, inserting hardware appliances (firewalls and switches) on the internal network might improve security, but it also increases complexity and makes the network difficult to manage. This is why the concept of software-defined networks has been gaining attention in recent times. With a promise to make networks more secure as well as more manageable, it is not surprising that analysts are talking of software-defined networking as the next revolutionary technology to hit the data center after server virtualization.
There are primarily two approaches to achieving software-defined networks. Solutions adopting the first approach offer a centralized software controller for controlling the network infrastructure. In this case, the control plane is separated from the data plane at the network layer. Applications specify their network requirements to the controller, which in turn instructs the network hardware on how to handle packets from each application. This approach, which some analysts refer to as Software-Defined Networking (SDN), has been standardized in the OpenFlow protocol. It may be suitable for extremely large and complex networks (for instance, a large search engine uses OpenFlow in the backbone between its data centers), but it introduces additional overheads. First, much of the existing network equipment in the data center is unlikely to be compatible with OpenFlow, since not all vendors support the protocol. Second, applications need to be rewritten to take full advantage of this mechanism, leading to significant expenses on application development and testing.
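To make the control-plane/data-plane split concrete, here is a minimal, purely illustrative Python sketch of the reactive model that OpenFlow popularized: a switch that misses in its flow table asks the central controller for a decision and caches the answer. The class and method names are invented for this example and do not correspond to any real OpenFlow API.

```python
class Controller:
    """Centralized control plane: decides how each flow is handled."""
    def __init__(self):
        self.policies = {}  # (src, dst) -> action

    def allow(self, src, dst):
        self.policies[(src, dst)] = "forward"

    def decide(self, src, dst):
        # Default-deny: flows with no explicit policy are dropped.
        return self.policies.get((src, dst), "drop")


class Switch:
    """Data plane: forwards packets, caching the controller's decisions."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}  # cached rules keyed by (src, dst)

    def handle_packet(self, src, dst):
        key = (src, dst)
        if key not in self.flow_table:
            # Table miss: consult the controller (analogous to an
            # OpenFlow "packet-in"), then cache the resulting rule.
            self.flow_table[key] = self.controller.decide(src, dst)
        return self.flow_table[key]


ctrl = Controller()
ctrl.allow("app-server", "db-server")
sw = Switch(ctrl)
print(sw.handle_packet("app-server", "db-server"))  # forward
print(sw.handle_packet("laptop-1", "db-server"))    # drop
```

Note how the switch itself holds no policy: all intelligence lives in the controller, which is exactly why applications (or their operators) must describe their requirements to it.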
The second approach involves the creation of virtual overlay networks on top of the existing physical network. One way of creating such overlays is through cryptography. By assigning cryptographic keys to workstations and servers and encrypting data in motion, communication on the network can be restricted to systems with matching keys. Each set of systems that can communicate among themselves can be considered a “community”. Keys can be assigned based on user or device identity, which makes it possible to segment the network so that systems can be accessed only by authorized users while remaining hidden from everyone else. This approach extends to server-to-server communication as well – for instance, restricting access to a database server to only the application server and the database administrator. Multiple such “virtual communities” can co-exist on the same physical network backbone, effectively virtualizing the network without inserting additional hardware. Finally, these virtual communities can be integrated with the enterprise’s existing identity management system (Active Directory, LDAP or RADIUS) so that access control is governed through the existing identity infrastructure. Changes to the identity system then propagate to the network fabric in real time, making the network a truly “software-defined network”.
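The “community” idea can be sketched in a few lines of Python. The example below is a toy model, not any real product’s mechanism: it uses a shared HMAC key per community purely to show that only hosts holding the matching key can accept each other’s traffic. A real deployment would use authenticated encryption (e.g. IPsec or TLS) and proper key distribution tied to the identity system; all host names here are invented.

```python
import hashlib
import hmac
import os


class Host:
    """Toy model of a host in a key-based virtual community."""
    def __init__(self, name, community_key):
        self.name = name
        self.key = community_key

    def send(self, payload: bytes):
        # Tag the payload with the community key; only holders of the
        # same key can verify (and hence accept) it.
        tag = hmac.new(self.key, payload, hashlib.sha256).digest()
        return payload, tag

    def receive(self, payload: bytes, tag: bytes):
        expected = hmac.new(self.key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None  # wrong community: traffic is rejected
        return payload


# Two communities sharing the same physical network backbone
finance_key = os.urandom(32)
hr_key = os.urandom(32)

app_server = Host("app-server", finance_key)
db_server = Host("db-server", finance_key)
hr_pc = Host("hr-workstation", hr_key)

msg, tag = app_server.send(b"SELECT * FROM ledger")
print(db_server.receive(msg, tag))  # accepted: same community
print(hr_pc.receive(msg, tag))      # None: different community
```

Swapping a host’s key – which is what an identity-system change would trigger – instantly moves it into or out of a community, with no change to the physical network.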
In the second approach, if encryption is handled at the lower layers of the networking stack, it can be entirely transparent to applications. This has the added benefit of providing security at the device (server or workstation) level, not just the application level, thus securing the system against multiple attack vectors. The use of encryption also adds an extra layer of defense against insider attacks and breaches, which are becoming increasingly commonplace today (think Edward Snowden!). Moreover, no changes are required to the existing physical network topology, and any changes to the virtual communities are made through Active Directory or LDAP, tools already familiar to network administrators. This approach therefore addresses the twin problems of network management complexity and network security.
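The transparency argument can be illustrated with a toy sketch: the “application” function below is byte-for-byte identical whether the transport underneath encrypts or not. The XOR keystream derived from SHA-256 is a deliberately simplified stand-in for real lower-layer encryption and is not a secure cipher; all names are invented for illustration.

```python
import hashlib


def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (demo only, not secure).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


class PlainTransport:
    def transmit(self, data: bytes) -> bytes:
        return data

    deliver = transmit


class EncryptedTransport:
    def __init__(self, key: bytes):
        self.key = key

    def transmit(self, data: bytes) -> bytes:
        ks = keystream(self.key, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    deliver = transmit  # XOR with the same keystream is its own inverse


def application_send(transport, message: str) -> bytes:
    # The application layer is identical for both transports:
    # it neither knows nor cares whether encryption happens below.
    return transport.transmit(message.encode())


key = b"\x01" * 32
wire = application_send(EncryptedTransport(key), "hello")
print(EncryptedTransport(key).deliver(wire))       # b'hello'
print(application_send(PlainTransport(), "hello"))  # b'hello'
```

Because `application_send` never changes, no application rewrite (and hence no re-testing expense) is needed when encryption is switched on underneath, which is the contrast with the controller-based approach described earlier.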
Irrespective of the approach chosen, it is clear that enterprises can no longer rely on traditional external defense mechanisms alone; they need to adopt a “defense-in-depth” strategy coupled with innovative new approaches to securing their data centers. The risks of not doing so, as evidenced by the rising number and cost of headline breaches, are abundantly clear.