The days of the static data center ended when x86 virtualization took hold in the 2000s. However, that was not the end of the story for workload agility. Just as our personal communication devices have become more powerful and easier to use, so too have application architectures. Containers and microservices are becoming a reality because the agility, availability, and scale of virtualized workloads are not enough for businesses going forward. With that in mind, what customers had called data centers are now being retooled as private clouds. On the whole, this is a useful change because it enables the infrastructure to provide what the applications need instead of forcing the applications to fit the infrastructure’s model.
Along with new models for applications, the impact on the network has been a shift toward much more east-west traffic flow. The use of hypervisors to host application workloads began this new traffic paradigm. If you consider the density of workloads per host on a hypervisor, containers as a method of virtualization offer much higher density, because each container includes only the application, not a full operating system. This is the impact we must now build networks to support: traffic patterns spread between the workload elements of various applications. We must also consider that host-, hypervisor-, container-, and microservice-based workloads will continue to coexist on the same network infrastructure.
This means that the network now needs to be as flexible and scalable (both up and down) as the applications it supports. This is where a workload-independent network policy really enables IT to provide that private cloud capability for workloads, whether bare metal, virtualized, containerized, or microservice based. Without this consistent policy, managing different types of workloads, potentially for the same application, becomes a huge administrative burden. If we consider that between 60 and 70 percent of the cost of an infrastructure is administrative (that is, the people operating it), administration is the first area of focus for bringing costs down. In addition, network policy enables the eventual movement of workloads between private clouds (including multiple locations) and public clouds, thus enabling the realization of the hybrid cloud.
New Application Types
We’ve briefly touched on new application models. We can follow the change in application design through the change in how we communicate via smart devices of all types. Whereas applications used to reside exclusively inside the walls of the data center, now the applications (or elements of them) can also reside in the public cloud or on the smart device itself. Let’s follow the well-understood model of the smart-device application. It must be compact and able to be versioned with some regularity; you have certainly seen how some smart-device applications are updated daily. This is the new model of application development and maintenance that traditional data centers must adapt to in order to become private cloud infrastructures.
So how do we take something like a smart-device application and apply it to something like an Enterprise Resource Planning (ERP) style application? Consider all the moving pieces of an ERP application: presentation tier elements, several database elements, and application logic spread across multiple elements such as customer contacts, projects, products, and so on. Let’s take just the most common of these elements: the presentation tier (in essence, the web front end). This can now evolve into a smart-device element, but some level of access from traditional x86-based laptops and desktops also needs to remain. In traditional applications, these web servers provided content to browsers via multiple ports and protocols, for example TCP-80, TCP-443, TCP-8080, TCP-8443, and FTP. Scale was achieved by adding more web servers, all configured identically.
You might think that having 10 or more web servers configured identically would be just fine. However, consider the recent security vulnerabilities in SSL (potentially on multiple TCP ports such as 443, 8443, and so on). Patching every server for such a vulnerability either takes the application out of service or leaves critical vulnerabilities in place. Instead, modern application design breaks those elements down to the very essence of the service they provide: the microservice. Imagine being able to patch the SSL vulnerability by creating new operating elements of the presentation tier that include the SSL fix for ports 443 and 8443, and deploying them without touching the port 80 and 8080 microservices. Think of this when it comes to scaling the application.
Consider the idea of what in the retail industry is known as “looks to books.” In other words, someone browsing for information or a product is a “look,” whereas a user performing a secured transaction is a “book.” Each uses different ports of the presentation tier. Should you scale the SSL portion of that tier if you see a spike in look-related traffic? It’s not the most efficient use of infrastructure resources. Therefore, microservices design provides the freedom to scale, patch, update, and quiesce elements of any application more effectively than monolithic, traditional applications allow. However, the infrastructure also needs to be aware of this scaling, both up and down, to better support the applications.
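To make the per-port microservice idea concrete, here is a minimal sketch in Python. The service names, version strings, and port groupings are hypothetical, assumed purely for illustration; this models only the bookkeeping of patching and scaling per port, not any ACI or container-platform API.

```python
from dataclasses import dataclass


@dataclass
class Microservice:
    """One presentation-tier element, scoped to a single port."""
    name: str
    port: int
    version: str
    replicas: int = 2


# Hypothetical presentation tier, split by port as described above.
tier = [
    Microservice("web-http", 80, "1.4.0"),
    Microservice("web-https", 443, "1.4.0"),
    Microservice("web-alt", 8080, "1.4.0"),
    Microservice("web-alt-ssl", 8443, "1.4.0"),
]

SSL_PORTS = {443, 8443}


def patch_ssl(services, fixed_version):
    """Roll the SSL fix onto the SSL-facing microservices only."""
    for svc in services:
        if svc.port in SSL_PORTS:
            svc.version = fixed_version  # port 80/8080 services untouched


def scale_looks(services, replicas):
    """Scale only the unsecured 'look' traffic handlers."""
    for svc in services:
        if svc.port not in SSL_PORTS:
            svc.replicas = replicas


patch_ssl(tier, "1.4.1-sslfix")  # patch 443/8443 without an outage elsewhere
scale_looks(tier, 6)             # respond to a spike in look-related traffic
```

The point of the sketch is the independence of the two operations: the SSL patch never touches the port 80 and 8080 elements, and the look-driven scale-out never adds SSL capacity that the traffic spike does not need.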
Automation, Orchestration, and Cloud
We have only briefly mentioned cloud concepts to this point. However, with the changes coming in application models, cloud (in its public, private, and hybrid forms) is a reality for customers of any size and sophistication. As such, the building blocks of successful cloud deployments and operations begin with a flexible infrastructure that provides some level of policy-based configuration. For private cloud, and eventually hybrid cloud, to be realized, all infrastructure elements must become pools of resources. Think of how effective Cisco Unified Computing System (UCS) Manager has been at taking a collection of x86 compute resources and, via policy, creating pools of compute for application consumption that are easy to scale, manage, and maintain. The same must now be applied to the network. The Cisco ACI policy model provides the same ease of scale, management, and maintenance found in UCS Manager. However, those are just the building blocks of cloud.
Automation and orchestration are where we find the tools that provide flexibility and the potential for self-service to the business and the applications that operate it. First, let’s better define the terms orchestration and automation, because they tend to be used interchangeably. Automation, by definition, is the ability to make a process work without (or with very little) human interaction. Can you automate the deployment and configuration of compute elements, including the operating system and patches, via a tool like Puppet or Chef? Absolutely. One could argue that Cisco UCS Manager is automation for compute hardware configuration, and that the ACI policy model is automation for the network and the L4-7 services within it. I would agree with both assertions. Orchestration, however, means taking those automated infrastructure elements and tying them together into a more meaningful workflow to achieve an end result.
Thus, orchestration tools must address more than one domain of the infrastructure and can also include items within or supplied by the infrastructure. Good examples of orchestration tools include Cisco UCS Director, Cisco CloudCenter, Cisco Process Orchestrator, VMware vRealize Orchestrator, Heat for OpenStack, and others. All of these provide, in varying degrees, the ability to string together tasks and processes into a self-service capability for application deployment, application retirement, user on- and off-boarding, and so on. The key is that automation on its own lowers that 60–70% administrative cost of the data center infrastructure, and orchestration provides the tools to realize a flexible, real-time infrastructure for private, public, and hybrid cloud environments. You cannot achieve true cloud operations without automation and orchestration.
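The distinction drawn above can be sketched in miniature: each function below automates a single infrastructure domain, and the orchestrator strings them together into an end-to-end result. Every function and profile name here is illustrative only; none of this is the API of UCS Manager, ACI, Puppet, or any other tool named above.

```python
def provision_compute(profile):
    """Automation for one domain: compute (think UCS service profiles)."""
    return f"compute ready ({profile})"


def apply_network_policy(app):
    """Automation for another domain: network policy (think ACI)."""
    return f"network policy applied ({app})"


def deploy_application(app):
    """Automation for a third domain: OS and app layers (think Puppet/Chef)."""
    return f"{app} deployed"


def orchestrate_app_rollout(app, profile):
    """Orchestration: tie the single-domain automations into one workflow."""
    return [
        provision_compute(profile),
        apply_network_policy(app),
        deploy_application(app),
    ]


# A self-service request for a hypothetical ERP web tier drives all three
# automations in order, with no human touching the individual steps.
result = orchestrate_app_rollout("erp-web", "web-tier-profile")
```

Each function alone is automation; only the workflow that sequences them across domains, and could just as well add approval gates or rollback steps, is orchestration.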
Security

Do the bad guys ever sleep? That question is best answered once we first understand who the bad guys are. Because we cannot identify all immediate risks to an infrastructure or a company (and, yes, even negative headlines can severely impact a business), we must do our utmost to keep the infrastructure from harm and from harboring harm, both from the outside and from within. Even with an unlimited budget, no security can be considered 100 percent effective or ironclad. However, if the infrastructure is flexible enough, and perhaps the network even provides scalable and flexible security, your environment can be far less hospitable to the so-called “bad guys.”
To this end, let’s focus on the network. Until the advent of ACI, the network mantra from its inception had been “free and open access.” I recall hearing this from several of my university customers, but even they have been forced to change their viewpoint. Free and open access in legacy networks necessitated the use of firewalls, on which we opened only the ports and protocols appropriate for applications to operate correctly and provided protocol inspection. This is what the firewall was originally designed to do. However, something the firewall was never originally designed to do is act as a routing point within the network. Due to the need to securely segment portions of the network and provide bastion or edge security, we have forced firewalls to become routers, which poses severe limitations on the routing capabilities and options of a legacy network.
Now consider the ACI network, where, as you might have heard already, a whitelist security policy is in place: nothing can communicate unless the network policy explicitly allows it. Are you thinking firewall now? Not so fast. Although the Cisco ACI whitelist model does change the paradigm, it is more akin to stateful access control lists (ACLs) at the TCP/UDP level within a switch or router: effective, but cumbersome in the legacy or blacklist sense. However, there is still a need for deep protocol inspection and monitoring, which is something firewalls and intrusion prevention systems (IPSs) do very well. So let’s get those devices back to doing what they do best and let ACI handle the forwarding and ACL-based security. IPSs and other such network services devices can be inserted into the network automatically with ACI service graphs.
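The whitelist idea can be reduced to a few lines: nothing is permitted unless a contract between two endpoint groups explicitly names it. The EPG names, ports, and data structure below are hypothetical and greatly simplified; this is a sketch of the default-deny logic, not the APIC API or the actual contract object model.

```python
# Hypothetical contracts: (consumer EPG, provider EPG) -> allowed flows.
contracts = {
    ("web", "app"): {("tcp", 8080)},
    ("app", "db"): {("tcp", 1433)},
}


def is_permitted(src_epg, dst_epg, proto, port):
    """Whitelist check: deny by default, allow only what a contract names."""
    allowed = contracts.get((src_epg, dst_epg), set())
    return (proto, port) in allowed


is_permitted("web", "app", "tcp", 8080)  # named in a contract: allowed
is_permitted("web", "db", "tcp", 1433)   # no web-to-db contract: denied
```

Note the contrast with a blacklist model: the absence of a rule means deny, so a workload that nobody wrote a contract for simply cannot talk to the database tier.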
Have you ever seen a completely correct configuration on a firewall, with no cruft from legacy configurations or from applications long since retired? Probably not, and that is directly because most applications and the communication protocols and ports they need are not truly understood. What’s the usual outcome of this misunderstanding? A very open firewall policy. Ultimately, the security behind a firewall configured in such a manner is as hollow as the wide-open pipe the firewall permits so as not to impact application communication.
Finally, how do we approach securing applications from each other and from improper access? Effective segmentation is the answer, and Cisco ACI provides the industry-leading option for it. From multitenancy and endpoint group (EPG) isolation to individual workload segmentation for remediation or rebuild when an intrusion is found, Cisco ACI can act on these items for any type of workload, bare metal or virtualized. Additionally, Cisco ACI can offer enhanced segmentation that starts outside the data center via TrustSec. This capability, based on device or user credentials, allows packet flows only to specific segments within the ACI fabric. It effectively segments users of, say, a payroll system or sales data from the endpoint all the way to the application elements hosted on an ACI fabric, inclusive of firewall, IPS, and other relevant L4-7 services.