“Where does a 400-pound gorilla sit?... Anywhere it wants to.”
It's one of the few jokes you can tell around kids, and it perfectly describes a hyperscaler. Hyperscalers are the biggest providers of the data centers that deliver public cloud services.
The big three global hyperscalers are Amazon's AWS, Microsoft's Azure and Google Cloud. IBM Cloud and Oracle Cloud could be considered 300-pound gorillas based on their market reach and customer profiles, while China-based Alibaba Cloud could be considered a 500-pound gorilla with limited global reach but lots of potential to throw its weight around.
Together, they account for roughly 80% of the public cloud market. To reach that share, over the last decade they have built out massive data center facilities and the networking technologies to connect them. These investments have transformed virtually every other industry, and their scale and performance have enabled entirely new ones, most notably artificial intelligence, machine learning and hyperscale data analytics.
Unlike a lot of IT terminology, hyperscale is pretty self-explanatory. It’s a data center and networking operation designed and purpose-built for massive compute and data storage, with lightning-fast connectivity optimized for global traffic.
A hyperscale data center is big, not only in data capacity (hyperscale data centers typically house 5,000 or more servers) but also in power consumption. A single hyperscale site can draw more than 50MW of power, and global hyperscale power consumption is estimated to approach 200 terawatt-hours annually.1
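To put that draw in perspective, a quick back-of-the-envelope conversion (assuming, for illustration, that the 50MW figure is a continuous, around-the-clock load) turns power into annual energy:

```python
# Back-of-the-envelope: convert a continuous 50 MW draw into annual energy.
# Assumes the site runs at that load 24/7, which is an approximation.
site_power_mw = 50
hours_per_year = 24 * 365           # 8,760 hours

annual_energy_mwh = site_power_mw * hours_per_year
annual_energy_gwh = annual_energy_mwh / 1_000

print(f"{annual_energy_gwh:.0f} GWh per year")  # 438 GWh per year
```

Under that assumption, one site consumes roughly 438 GWh a year, so the estimated 200 TWh global figure is the equivalent of a few hundred such sites running flat out.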
Hyperscale facilities are, in essence, supercharged outsourced data centers with virtually unlimited scalability. The ease with which new services can be deployed – for organizations of any size – is a major business benefit and a key driver of their growth and impact.
Hyperscale data center designs are state of the art, employing innovations in power delivery, cooling, security and maintenance. There are an estimated 800 hyperscale data centers worldwide; the majority of them are in the U.S.2
To understand how we came to this hyper place…a short history lesson. Due to networking limitations and financial constraints, all but the largest enterprises entered the modern computing era by implementing small on-premises data centers located in spare closets. Early "on-prem" data centers consisted of a few servers for desktop applications and email, primitive network connections using telco backbones, and routing equipment that required little space. Power demands were low, and a small in-house team could manage operations.
The explosion of modern, web-based applications and distributed workforces gave rise to hosting and colocation data centers, multi-tenant facilities where Infrastructure as a Service (IaaS) could provide the physical resources needed to deliver applications. The benefits: reduced capital expenditure costs, improved performance, rapid scalability of applications in an increasingly global economy, and the opportunity to focus IT teams on strategic IT projects instead of the data center.
Big enterprises kept their mainframes, built dedicated data centers and used multitenant providers. Small-to-midsize enterprises, meanwhile, have benefited in recent years from a growing array of on-premises, private cloud and public cloud options.
When network traffic can cross the country in fractions of a second, why exactly do hyperscalers need to spread their data centers across geographies and provide "zones" of service? Because geographic dispersion is both necessary and strategic.
It's necessary because these facilities are typically quite large and require direct access to economical power generation sources and water for cooling. They also need to be dispersed so that the impact of any regional disaster (hurricanes in the U.S. South are the classic example) can be minimized by rerouting traffic.
There are strategic performance reasons as well. AWS, for example, operates approximately 25 regions around the world to provide services as close to the network edge as possible. Dispersing these data centers means traffic is better managed for performance, and it also addresses local laws and compliance requirements. Streaming giant Netflix is a huge AWS customer: it needs to deliver localized content, securely and at scale, and only a hyperscale infrastructure can handle the demand.
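The performance logic behind regional dispersion can be sketched simply: route each user to the region with the lowest measured latency. A minimal illustration (the region names and round-trip times below are hypothetical placeholders, not actual AWS measurements):

```python
# Pick the lowest-latency region for a client.
# Region names and round-trip times (ms) are illustrative placeholders,
# not real provider measurements.
measured_rtts_ms = {
    "us-east":      12,
    "eu-west":      85,
    "ap-southeast": 190,
}

def closest_region(rtts):
    """Return the region with the smallest round-trip time."""
    return min(rtts, key=rtts.get)

print(closest_region(measured_rtts_ms))  # us-east
```

The more regions a hyperscaler operates, the smaller the worst-case distance to any user, which is why region counts keep climbing.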
Because of their size and access to capital, hyperscalers are driving much of the cloud services market segment, although innovations and opportunities abound for partners and cloud startups. The big public cloud hyperscalers offer a broad portfolio of infrastructure, platform and software services.
From a practical business standpoint, the reality is that you probably don't have much "choice" of hyperscale provider for the hosted applications and data processing running your business today. Any IT infrastructure company worth its salt partners with, or at least makes its offerings compatible with, all of the hyperscalers.
The primary benefits of hyperscale cloud are scale and speed. Deploying applications for a business once took weeks or months. Constant cycles of building, repairing and replacing hardware ate up far too much time and didn't always provide the desired return on investment. As applications moved to the web and on-premises applications moved to cloud services, service delivery became more important than physical location. Now, entire business models are built on commodity services, with pricing based on compute nodes, data storage capacity and bandwidth consumption.
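That consumption-based pricing model can be sketched as a simple monthly estimate. The unit rates below are hypothetical placeholders, not any provider's actual prices:

```python
# Sketch of consumption-based cloud billing across three dimensions:
# compute nodes, storage, and outbound bandwidth.
# All unit rates are hypothetical, for illustration only.
RATE_PER_NODE_HOUR = 0.10   # $ per compute node per hour
RATE_PER_GB_STORED = 0.02   # $ per GB-month of storage
RATE_PER_GB_EGRESS = 0.05   # $ per GB of outbound bandwidth

def monthly_bill(nodes, hours, storage_gb, egress_gb):
    """Estimate a month's bill from the three consumption dimensions."""
    compute = nodes * hours * RATE_PER_NODE_HOUR
    storage = storage_gb * RATE_PER_GB_STORED
    network = egress_gb * RATE_PER_GB_EGRESS
    return compute + storage + network

# 10 nodes running all month (~730 hours), 2 TB stored, 500 GB egress
print(f"${monthly_bill(10, 730, 2000, 500):,.2f}")  # $795.00
```

The appeal is that every line item scales up or down with actual use, which is exactly what made commodity cloud services viable business foundations.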
Cloud services providers will tout their cost-savings capabilities too, but this is one area with plenty of room for improvement. Given the explosion in data generated, keeping massive volumes of data safe, secure and available in cloud environments carries significant cost. Costs can quickly spiral upward without vigilant monitoring and management of performance and utilization.
No cloud implementation fits all needs. The best choice is a few trusted partners that understand the industry and have proven interconnecting technologies and practices. Being able to demonstrate capabilities through multiple implementations and proofs of concept is paramount. CoreSite's interconnection services allow you to work with all the major cloud providers.
Since nothing is ever constant in the technology sector, the hyperscaler market is guaranteed to evolve in ways that may be unpredictable. Today, a slowing economy is reducing ad revenues and creating uncertainty in media markets served by some cloud and secondary hyperscale providers, and lingering supply chain issues may impact all sectors.
Yet, according to IT research firm Gartner, "Worldwide end-user spending on public cloud services is forecast to grow 20.7% to total $591.8 billion in 2023, up from $490.3 billion in 2022."3 Hyperscale cloud is not the solution to every IT infrastructure challenge, but the evolution of enterprise computing will surely continue to include these 400-pound gorillas.
1. APAC Hyperscale Data Centre Market 2022 Update, (source)
2. US dominates global hyperscale datacentre capacity and will continue to do so (source)
3. Gartner Forecasts Worldwide Public Cloud End-User Spending to Reach Nearly $600 Billion in 2023 (source)
The CoreSite Team
Combining expertise, research and thought leadership to inform and advance hybrid IT.