Understanding the Real Capacity of a Cloud Data Center


A cloud data center's hypervisor control panels advertise impressive scaling abilities, but few people know its real capacity. Have you ever wondered how well the cloud actually performs in terms of storage, I/O utilization, and networking, or how many virtual machines (VMs) it can host? Today's cloud data centers are big and heavily provisioned, but what is their real strength?

In technical terms: can they perform, and can they scale? The answers to these questions are not straightforward.

Assessing the Real Capacity - A Tricky Task

Knowing a data center's real capacity is nearly impossible. The technical specifications (number of servers, CPUs, amount of storage, I/O bandwidth) may seem straightforward, but they don't measure the end-to-end performance of the whole stack.

In many organizations, even the engineers don't know how many VMs their cloud can host. Typically, the operations team monitors physical and virtual CPU and I/O utilization, and when pre-set thresholds are crossed, load is redistributed or more resources are added.
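
As a rough illustration of that threshold-driven policy, here is a minimal sketch in Python. The threshold values, host name, and rebalance/add-capacity hooks are hypothetical stand-ins, not any particular platform's API.

    # Minimal sketch of threshold-driven capacity management
    # (hypothetical thresholds and hooks; real tooling is far richer).
    CPU_THRESHOLD = 0.80  # act when CPU utilization exceeds 80%
    IO_THRESHOLD = 0.75   # act when I/O utilization exceeds 75%

    def rebalance_load(host):
        # Placeholder: in practice this would live-migrate VMs to
        # less-loaded hosts via the hypervisor's management API.
        print(f"rebalancing VMs away from {host}")

    def add_capacity():
        # Placeholder: in practice this might bring another rack online.
        print("provisioning additional capacity")

    def react_to_metrics(host, cpu_util, io_util, headroom_available):
        """Apply the threshold policy described above."""
        if cpu_util > CPU_THRESHOLD or io_util > IO_THRESHOLD:
            if headroom_available:
                rebalance_load(host)
            else:
                add_capacity()

    react_to_metrics("host-17", cpu_util=0.86, io_util=0.40,
                     headroom_available=True)

Note that this policy only reacts to load after the fact; it says nothing about how much total capacity the cloud actually has.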

Generally, engineers don't know the data center's cloud capacity in terms of how well the cloud's software and hardware stack is tuned for maximum scalability, where the performance bottlenecks are, or how much spare capacity the racks have for future growth.

They mostly know only headline capacity figures: the number of CPUs, the number of servers, network bandwidth, and storage capacity. They may not even know whether the cloud is provisioned from a single rack of gear or dozens, and they are not confident in their capacity estimates.

Specs Don't Indicate Everything

At the heart of the issue is that real capacity is genuinely hard to decipher.

Specification sheets don't tell you, and neither do the VM hypervisor control panels. For large cloud infrastructure, capacity estimates typically rest on best-guess assumptions worked out in Excel, extrapolated from a single machine's workload. But that's not how the cloud scales, particularly once you consider the multiple software, hardware, and infrastructure layers in the stack.
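
A toy calculation shows why single-machine extrapolation misleads; every number below is invented for illustration.

    # Naive spreadsheet-style extrapolation (illustrative numbers only).
    vms_per_server = 40        # measured on one lightly loaded test server
    servers = 500

    naive_capacity = vms_per_server * servers
    print(naive_capacity)      # 20000 VMs: the optimistic spreadsheet answer

    # Shared layers (network, storage, hypervisor overhead) erode per-server
    # density as the fleet grows. A crude, assumed correction factor:
    efficiency_at_scale = 0.6
    realistic_capacity = int(naive_capacity * efficiency_at_scale)
    print(realistic_capacity)  # 12000 VMs: a very different planning number

The correction factor itself is a guess, which is exactly the problem: without end-to-end measurement, no one knows whether 0.6 is generous or wildly optimistic.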

You also need to factor in the dynamic architectures introduced by software-defined networks (SDNs), along with the software-based firewalls and load balancers delivered through network functions virtualization (NFV). No one knows the outcome in advance, which means everyone is guessing, then watching operations in real time to manage capacity on the fly.

The hope is that administrators will notice a performance slowdown with enough lead time to respond before any outage occurs, and even then the response has to be very quick. None of this helps with long-term capacity planning, which is why it's essential to keep plenty of extra capacity ready to enter the cloud on short notice.

Load Testing Data Centers

Several load-testing products can assess individual data center components, but nothing commercially available stress-tests the whole cloud.

However, Spirent's new load-testing system, the HyperScale Test Solution, designed for very large virtualized cloud data centers, looks like a good option for testing clouds of up to a million VMs. It should give cloud operators some clarity about real-world, measured capacity. That will help administrators realistically forecast when the cloud will reach its limits, which in turn will help managers decide how and when to expand, or when to stop adding new services in order to maximize ROI. So, if you really want to test the true capacity of a cloud data center, keep these pointers in mind.
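
Independent of any particular product, the core idea behind measured-capacity testing can be sketched as a load ramp: keep adding simulated VM load until a service-level metric, such as response latency, degrades past an acceptable bound. The latency model and numbers below are hypothetical, not Spirent's method.

    # Sketch of a measured-capacity ramp test (hypothetical load model).
    MAX_LATENCY_MS = 50.0   # assumed service-level bound

    def simulated_latency_ms(active_vms):
        # Stand-in for a real measurement: latency stays flat, then
        # climbs sharply once a shared bottleneck saturates.
        base, knee = 10.0, 8000
        if active_vms < knee:
            return base
        return base + (active_vms - knee) * 0.01

    def find_measured_capacity(step=500):
        """Ramp simulated VM load until latency breaches the bound."""
        vms = 0
        while simulated_latency_ms(vms + step) <= MAX_LATENCY_MS:
            vms += step
        return vms

    print(find_measured_capacity())  # the VM count the stack can sustain

The point of a ramp like this is that the answer comes from observing the whole stack under load, not from multiplying spec-sheet numbers.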