4 Strategies for Creating an Environment for Supercomputing

In 1964, Control Data Corporation (CDC) introduced the CDC 6600. The machine could sustain 500 kiloflops on standard mathematical operations, roughly ten times faster than other computers of its day, and it ushered in the concept of the supercomputer.

More than half a century later, supercomputing remains the Holy Grail for the world’s leading academics, researchers and corporate visionaries. It powers everything from quantum mechanics and oil and gas exploration to weather forecasting and genomic research that may someday find a cure for COVID-19 and other catastrophic diseases.

Today’s supercomputer manufacturers are racing to one-up each other and pack additional compute capability into ever-smaller form factors. But as computing power increases, so does the amount of electrical power needed to run and cool the machines efficiently. Just one megawatt of continuous power translates to operating costs of just under $1 million per year. Running a computing system of this magnitude could cost anywhere from $60 to $100 million over its usable lifecycle, on top of the millions of dollars likely spent upfront to procure the machine.
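To see where that rule of thumb comes from, here is a back-of-envelope calculation. The electricity rate is an illustrative assumption, not a figure from this article.

```python
# Back-of-envelope estimate of the annual electricity cost of 1 MW of IT load.
# The $/kWh rate below is an assumed blended commercial rate, not a quoted figure.
HOURS_PER_YEAR = 24 * 365      # 8,760 hours
rate_per_kwh = 0.11            # assumed electricity rate, $/kWh
it_load_kw = 1_000             # 1 MW of continuous load

annual_cost = it_load_kw * HOURS_PER_YEAR * rate_per_kwh
print(f"~${annual_cost:,.0f} per year")   # ~$963,600, i.e. just under $1 million
```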

As a result, enterprise, government and manufacturing leaders are eagerly seeking ways to improve supercomputing efficiency. Here are four core principles they should follow in pursuit of making supercomputing more accessible and sustainable for the long run.

1. Update cooling technologies

Supercomputers can process unfathomable amounts of data, thanks to the hundreds of GPUs and processing cores housed inside cabinets that routinely weigh more than 20,000 pounds.

All that machinery, simultaneously running trillions of complex calculations, also produces a nearly unimaginable amount of heat that requires continuous cooling to prevent processors from overheating and failing. Water-based cooling systems are becoming increasingly popular because of their efficiency and effectiveness. These systems pipe cool water directly to the compute nodes, adjacent to the CPUs, allowing for power capacities in excess of 60 kW in a single cabinet.

A cooling system like this dissipates up to 95% of the heat the computer produces, bringing the Power Usage Effectiveness (PUE), the ratio of total facility power to the power drawn by the IT equipment itself, as low as 1.25, which is well below the government’s 1.5 PUE efficiency benchmark.
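For context, the quick sketch below shows how a 1.25 ratio breaks down. The load figures are made up for illustration, not measurements from any facility.

```python
# PUE = total facility power / IT equipment power.
# The load figures below are illustrative, not measurements from any facility.
it_load_kw = 800       # power consumed by the compute hardware itself
overhead_kw = 200      # cooling, power distribution losses, lighting, etc.

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")   # PUE = 1.25: 0.25 kW of overhead for every kW of IT load
```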

2. Consider color and the small details

Along with next-generation cooling technologies, choosing the right color for hardware within a data center can reduce ambient temperatures and related cooling costs. That may seem trivial, even a nickel-and-dime approach, but something as simple as choosing all-white server racks over conventionally darker-colored racks can help deflect heat from the supercomputer and reduce direct cooling costs.

3. Choose your location wisely

Supercomputers require immense amounts of space, upwards of 10,000 square feet in most cases. But the data centers with enough space to house them aren’t all created equal. 

Data centers in high-density metropolitan areas like New York City, Washington, D.C. or Los Angeles might have enough floor space for the initial installation, but they may have little room for growth, which will either limit future expansion or force the owner to relocate and incur additional costs.

More importantly, prime locations command premium prices – for both space and power. Physical floor space is more expensive per square foot, and raw electricity costs in popular markets are generally pricier per kilowatt-hour (kWh). Those costs are often amplified by outdated cooling technologies or insufficient power density.

Instead, choosing data centers in strategic edge locations can deliver scalable space and power at more affordable rates — sometimes up to 50% less — to enable long-term growth within a single facility.

4. Emphasize connectivity

Not every computation or workload needs to be executed by a supercomputer, despite the machine’s obvious ability to handle the burden. Creating a hybrid IT infrastructure in which less critical or less complex workloads run on cloud platforms can unburden the supercomputer of some of its work and incrementally reduce the amount of energy needed to run and cool it.

Offloading those workloads to cloud nodes over low-latency, highly secure connections can reduce real estate expenses and free up resources to update and upgrade the supercomputer as next-generation technologies arrive.
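One way to picture that kind of hybrid routing is a simple policy check before a job is submitted. The sketch below is purely hypothetical; the thresholds, field names and function are invented for illustration and do not represent any particular scheduler’s API.

```python
# Hypothetical routing policy for a hybrid environment: send small, loosely
# coupled jobs to a cloud platform and reserve the supercomputer for large,
# tightly coupled workloads. All thresholds and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Job:
    node_hours: float        # estimated compute demand
    tightly_coupled: bool    # needs a low-latency interconnect (e.g., MPI at scale)
    deadline_hours: float    # how soon results are needed

def route(job: Job) -> str:
    if job.tightly_coupled or job.node_hours > 5_000:
        return "supercomputer"
    if job.deadline_hours < 1:
        return "supercomputer"   # cloud capacity may spin up too slowly for urgent work
    return "cloud"

print(route(Job(node_hours=200, tightly_coupled=False, deadline_hours=48)))  # -> cloud
```

In practice, the routing criteria would come from an organization’s own workload profiles, but the idea is the same: keep the large, tightly coupled jobs on the supercomputer and let everything else run elsewhere.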

Modern supercomputing has kicked off an age of unprecedented exploration, discovery and insight. A single supercomputer can process volumes of data that would have been unimaginable a generation ago, enabling groundbreaking discoveries in science, healthcare and other pursuits.

As computing power continues to increase, organizations must find new ways to blunt the financial and environmental impacts of running the machines. They’ll need to aggressively seek ways of creating a scalable, efficient and flexible technology environment that will enable them to fully realize their computing mission and sustain their operations now and in the future.

Download our latest customer success story to learn more about how a leading research university uses CoreSite data centers to sustainably operate one of the world’s fastest and most efficient supercomputers.
Matt Gleason | SVP, General Management
Matt is responsible for general management and provides oversight for the sales engineering and capacity inventory management functions. He has more than 20 years of experience in critical power and mechanical system operation, leadership and business management of network-centric data centers.
