We’ll start with why you even considered ACI in the first place. Industry changes are affecting almost every aspect of IT. Applications are changing immensely: their lifecycles are being broken into smaller and smaller windows as the applications themselves become less structured. Virtualization via hypervisors has greatly changed what the network used to control and secure, and with containers and microservices being deployed more readily, change is happening at an even faster pace.
Recall what moving application workloads from “physical” to “virtual” meant for traffic flows within the data center: long-established north-south flows shifted significantly toward an east-west direction. Assume, for a moment, that the collapse ratio of a physical-to-virtual move is 1:30. Now imagine the potential collapse ratio with containers, at a factor of 1:300, and consider the impact of this new east-west pattern. As you look at these new traffic flows, remember that many data centers still contain mainframes and other legacy compute elements, alongside x86-based bare-metal servers, hypervisors, containers, microservices, and whatever may be next. This is where ACI excels: It supports these varied workload platforms simultaneously and consistently.
This faster pace necessitates change in how networks are built, maintained, and operated. It also requires a fundamental change in how networks provide security via segmentation. Firewalls alone cannot keep pace, and neither can manual device-by-device configuration. No matter how good you are at copying and pasting, you will eventually make a mistake, and that mistake will impact the stability of the network. It is time for an evolutionary approach to systems and network availability. It is time to embrace policy.
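To make the contrast with device-by-device configuration concrete, here is a minimal, hypothetical sketch of the policy idea: a single declarative definition of intent (“web may talk to app on a given port”) rendered mechanically for every switch, so there is nothing to copy and paste by hand. The `Policy` class, `render_acl` function, and device names are all invented for illustration; they are not an ACI or APIC API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """A single source of truth: which tier may talk to which, on which port."""
    src_tier: str
    dst_tier: str
    port: int

def render_acl(policy: Policy, device: str) -> str:
    """Render the same policy as a per-device rule; no copy-paste drift."""
    return (f"{device}: permit tcp {policy.src_tier} -> "
            f"{policy.dst_tier} eq {policy.port}")

# One declarative statement of intent...
web_to_app = Policy(src_tier="web", dst_tier="app", port=8080)

# ...applied identically to every device, however many there are.
devices = ["leaf-101", "leaf-102", "leaf-103"]
for rule in (render_acl(web_to_app, d) for d in devices):
    print(rule)
```

Changing the intent means editing one `Policy` object rather than touching each device, which is the availability argument in miniature: the configuration cannot drift, because no human retypes it.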