What 2025 Revealed About AI Infrastructure Problems and Requirements for 2026
2023 and 2024 were the years of AI headlines and mainstream adoption. 2025 was the year AI became an infrastructure problem. Effectively, we all watched the starting gun go off in a global AI race. We see the race happening between AI tech companies as they try to outpace each other: OpenAI releases something; Google responds with Gemini, then Anthropic, then the next model provider. Each one adds capabilities and chases new benchmarks, trying to offer the latest, greatest version of artificial general intelligence … whatever that ultimately means.

For enterprises, though, the model providers’ ongoing game of one-upmanship is as chaotic as it is exciting. And suddenly every board and leadership team is asking, “What’s our AI strategy?” There are more models than ever, with the technology changing faster than in any cycle we’ve seen. It makes the early days of cloud adoption look almost glacial by comparison.
All of this is forcing some challenging and necessary questions about where these AI workloads can live, whether we have the power, cooling and connectivity to support them, and how we move and store the unfathomable (and ever-increasing) amount of data they depend on.
That’s why, when I look back at 2025, I don’t just see a year of shiny new model announcements. Rather, I see urgent questions emerging about the future of AI infrastructure – ones that need actionable, concrete answers, and quickly.
From Internet-Scale Models to Enterprise AI
When we talk about AI, there’s a tendency to lump all capabilities into one “AI bucket.” That viewpoint elides an important distinction.
In reality, there are at least two different worlds emerging. In one, you have the internet‑scale, general‑purpose models – the chatbots everyone likes to use, trained on enormous public datasets. These get most of the attention, but they’re only part of the story. The other world consists of what we’d call enterprise AI or private AI. These are small or medium‑sized models trained on much more focused data: a company’s own operational telemetry, financial records, supply chain, customer interactions and so forth.
At CoreSite, for example, we’re working on aggregating huge volumes of environmental and operational data (on the order of a petabyte) from tens of thousands of devices to build models that can help us predict maintenance needs and optimize data center performance. This is an important capability, and also very different from scraping the entire internet.
For enterprises, that difference matters. They don’t need to see the whole internet; they need to see their world. They also may want to use models from different providers over time. And, they can’t afford to be locked into a single ecosystem when the landscape is evolving this fast.
That’s why flexibility is becoming an infrastructure design imperative. Enterprises need to be able to shift workloads between providers, to try new models as they emerge, and to colocate data and compute where it makes the most sense for performance, cost and regulatory requirements. In other words, the future of enterprise AI isn’t only about choosing the “best” model – it’s about building the right infrastructure to leverage their specific data and maintain flexibility, even as the technology keeps racing ahead.
Why Enterprise AI Belongs in Colocation
Once you separate internet-scale AI from enterprise AI, the next important question is: Where should that enterprise AI actually live? From what we’re seeing, the answer increasingly is “in colocation.”
There are a few reasons for that. First, the data gravity is enormous. When you’re talking about tens of thousands of devices streaming telemetry, or years of transactional and operational history, you’re easily in the petabyte range. You can, in theory, push that over a 10‑gig internet link, but then you’re spending hundreds of hours and a lot of money “paying by the drink” for compute while you wait. That’s an unsustainable model.
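The transfer-time math behind that claim is easy to sketch. As a rough illustration (assuming a decimal petabyte and a fully utilized link with no protocol overhead, so these are idealized lower bounds):

```python
def transfer_hours(petabytes: float, gbps: float) -> float:
    """Hours to move a dataset over a network link, assuming sustained
    full utilization and no protocol overhead (an idealized lower bound)."""
    bits = petabytes * 1e15 * 8        # decimal petabytes -> bits
    seconds = bits / (gbps * 1e9)      # link rate in bits per second
    return seconds / 3600

print(f"1 PB over 10 Gbps:  ~{transfer_hours(1, 10):.0f} hours")   # ~222 hours
print(f"1 PB over 400 Gbps: ~{transfer_hours(1, 400):.0f} hours")  # ~6 hours
```

Even in this best case, a petabyte ties up a 10-gig link for more than nine days, and real-world transfers run slower still once protocol overhead and contention are factored in.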
Modern latency, then, is really about access to very high‑speed, cost‑effective bandwidth – multiple 400‑gig connections, and beyond – that lets you move and process data as quickly as you need. That need will only intensify as enterprise workloads are increasingly handled by agents, the systems that execute tasks, which are going to use a variety of platforms simultaneously. And that’s exactly the kind of fabric CoreSite is building in colocation.
Second, the physical requirements of AI are outgrowing most on-prem environments. High‑density GPUs and specialized accelerators demand levels of power and cooling that typical enterprise data rooms just weren’t designed for. You can try to retrofit your own facilities, but you’re looking at major upgrades … and even then, you’re still unlikely to match the scale, flexibility and network performance of a purpose‑built data center.
Colocation offers enterprises a solution to those pesky infrastructure questions that dog AI transformation initiatives. You get access to dense power and cooling without rebuilding your real estate. You can connect directly, at very high speeds, to multiple clouds and multiple AI providers. And you maintain flexibility. If a better model or a new provider emerges, you don’t have to rip and replace your entire infrastructure.
We’re seeing this play out as the race speeds on. New AI‑focused “clouds,” specialized GPU providers, and the large model companies themselves are coming into facilities like CoreSite’s to interconnect with the traditional hyperscalers at 400 gigabits, 800 gigabits, even 1.6 terabits per second. Enterprises then tap into that same ecosystem, often in the same building, and now they have options: They can train smaller private models, run inference close to their users, and switch between providers as the landscape evolves.
In other words, colocation is becoming the nexus where enterprise data, AI compute and high‑speed connectivity all meet. For most organizations, that’s increasingly where their AI belongs.
Navigating The AI Power Squeeze
I’m just speculating, but I actually think compute is getting solved. We now have more options than ever: GPUs, TPUs, neoclouds, alternatives from AMD and Intel, and more efficient hardware for inference. Over the next few years, the cost of raw compute per token will likely keep dropping.
But power is a different story. Utilities in many markets are already strained, and they’ve been burned by what’s sometimes called “ghost capacity,” megawatts reserved for data center consumption but never fully used. As a result, they’re redefining terms: long queues, multi‑year timelines and requirements to pay for a very high share of the capacity you ask for, whether you use it or not. For a multi‑tenant operator like CoreSite, that’s a real constraint, and for smaller operators trying to build their own high‑density environments … fuhgeddaboutit.
This is where colocation and self‑generation come together. In large facilities, it starts to make sense to invest in alternative power such as fuel cells, turbines and gas‑based generation. These technologies originated in the oil and gas industry and are being repurposed for data centers. If you can generate a meaningful share of power on‑site, you’re less exposed to utility bottlenecks and you can keep adding the high‑density capacity AI requires.
Most enterprises cannot or will not do this on their campus. And through colocation, they don’t have to, as it brings an array of benefits, including shared investments in power and cooling innovation, and a provider whose full‑time job is navigating utility constraints and permits. Colocation provides an environment that can actually scale alongside enterprises’ AI ambitions.
So, as AI pushes against the limits of the grid, I expect colocation – with increasingly sophisticated self‑generation strategies supporting it – to be one of the practical ways for enterprises to keep expanding their AI footprints.
Looking At 2026 and Beyond
Heading into 2026, there are a couple more trends I want to call attention to.
The first is education. Data centers have become dinner‑table conversation, often peppered with misconceptions. For instance, colocation data centers are frequently conflated with hyperscale data centers, when in fact providers like CoreSite support AI but intentionally focus on AI workloads different from those that traditional hyperscale facilities enable.
For decades, data centers have underpinned everything from banking and healthcare to media and everyday consumer apps. If we want smart policy on land-use, water, power and permitting, we need a broader understanding among regulators, communities and customers of the full spectrum of services and value that data centers provide. Data centers must lead that discussion.
The second is the edge. So far, most of the AI story has been about big models in big regions. But we’re already seeing the early signs of what comes next. Intelligence is moving closer to where data is created and consumed.
This concept is embryonic; you can still sometimes feel the “dial‑up era” of AI when you wait for some responses, like how it used to take an hour to download a single picture in the 1990s. But as the connectivity fabric and infrastructure mature, those delays will shrink and new edge‑native applications that we haven’t even thought of will appear.
From my perspective at CoreSite, we’re only in the first stretch of the race. The work we’re doing now – on power, connectivity and colocation ecosystems – is still largely invisible to most people. But it’s exactly what will make their future AI experiences instant, reliable and, eventually, remarkably ordinary.
Know More
Visit CoreSite’s Knowledge Base to learn more about the ways in which data centers are meeting constantly increasing power and other infrastructure requirements.
The Knowledge Base includes informative videos, infographics, articles and more, all developed or curated to educate. This digital content hub highlights the pivotal role data centers play in transmitting, processing and storing vast amounts of data across both wireless and wireline networks – acting as the invisible engine that helps keep the modern world running smoothly.