
Enabling Low-Latency Inference Hubs Across Hybrid Cloud Infrastructures

Did you see the announcement from Oracle and AWS about the general availability of Oracle Database@AWS?1 Customers can now run Oracle Cloud Infrastructure (OCI) database services, such as Oracle Real Application Clusters (RAC), inside AWS data centers over a direct network connection between two competing cloud providers.

What’s next, dogs and cats living together? Mass hysteria? A giant Stay Puft Marshmallow Man tromping buildings in New York City?

Brewing Cloud Interconnection Trends

Once the initial shock subsides, it makes sense. I also think it marks a step in AI development – early-stage inferencing, enabled by running AI applications across a hybrid cloud architecture. According to the press release I referenced, Oracle Database@AWS allows customers to run AI workloads with native vector embeddings in combination with AWS’s advanced generative AI and analytics services. It gives them the ability to connect data in their enterprise database to AI applications in AWS, and the service includes capabilities that simplify the data migration process for developers.
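To make that pattern concrete, here is a minimal sketch (in Python, using the python-oracledb and boto3 libraries) of what such a hybrid workload could look like: a vector similarity search in an Oracle database feeding retrieved context to a generative model on AWS. The table and column names (doc_chunks, embedding, doc_text) are hypothetical, the connection details are placeholders, and the model IDs are only examples; the actual Oracle Database@AWS setup is documented by the providers.

    import array
    import json

    import boto3      # AWS SDK for Python
    import oracledb   # python-oracledb driver

    bedrock = boto3.client("bedrock-runtime")

    def embed(text: str) -> array.array:
        """Embed a query with a Bedrock embedding model (example model ID)."""
        out = bedrock.invoke_model(
            modelId="amazon.titan-embed-text-v2:0",
            body=json.dumps({"inputText": text}),
        )
        # python-oracledb binds array.array("f") values to the VECTOR type.
        return array.array("f", json.loads(out["body"].read())["embedding"])

    # Placeholder connection details for the Oracle database.
    conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
    cur = conn.cursor()

    # Vector similarity search against a hypothetical doc_chunks table,
    # using Oracle Database 23ai's native VECTOR_DISTANCE function.
    cur.execute(
        """SELECT doc_text
             FROM doc_chunks
            ORDER BY VECTOR_DISTANCE(embedding, :qv, COSINE)
            FETCH FIRST 5 ROWS ONLY""",
        qv=embed("What caused the Q3 shipping delays?"),
    )
    context = "\n".join(row[0] for row in cur.fetchall())

    # Hand the retrieved context to a generative model on the AWS side.
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{
                "role": "user",
                "content": f"Context:\n{context}\n\nSummarize the shipping delays.",
            }],
        }),
    )
    print(json.loads(resp["body"].read())["content"][0]["text"])

The point of the sketch is the shape of the workflow, not the specific calls: the database side handles vector retrieval close to the data, and the cloud side handles generation, with a low-latency connection in between.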

Other industry trends point in the same direction, hybrid cloud adoption among them. Hybrid cloud usage has grown steadily from 55% in 2022 to 62% in 2025, signaling a continued shift in infrastructure strategy that we’ve been tracking in the annual State of the Data Center report we sponsor with Foundry/CIO.2

Bellwether trends also include the deployment of regional infrastructure pods like Local Zones by AWS and Oracle to increase geographic coverage, as well as Microsoft’s data center expansion in Georgia for its newest East US 3 region.

“Oracle will build more cloud infrastructure data centers than all its infrastructure competitors combined.”

Larry Ellison, CEO, Oracle

As AI has matured, we have learned that large language model (LLM) training and application development happen primarily in hyperscale data centers built where power and land are relatively low-cost – which is NOT where users and service providers are. Inference and machine learning (ML), however, depend on real-time data exchange between servers in the data center or within cloud regions. That makes it critical to have low-latency interconnections to cloud providers via direct, native onramps, along with the ability to easily establish and manage enterprise cloud-to-cloud connections.
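If you want a rough sense of that latency yourself, a quick sketch like the one below measures TCP handshake time to a couple of public cloud endpoints as a crude proxy for network proximity. The target hostnames are examples; substitute the regions or onramps relevant to you.

    import socket
    import statistics
    import time

    # Example public endpoints; substitute the regions or onramps you care about.
    TARGETS = {
        "AWS us-east-1": ("dynamodb.us-east-1.amazonaws.com", 443),
        "OCI us-ashburn-1": ("objectstorage.us-ashburn-1.oraclecloud.com", 443),
    }

    def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
        """Median TCP handshake time in milliseconds, a rough latency proxy."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=3):
                pass  # the handshake itself is what we are timing
            times.append((time.perf_counter() - start) * 1000)
        return statistics.median(times)

    for name, (host, port) in TARGETS.items():
        print(f"{name}: {tcp_rtt_ms(host, port):.1f} ms")

Run from a server in a well-connected data center versus a distant office, the difference in those numbers is essentially the case for direct onramps.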

I applaud Oracle and AWS, and other cloud providers, for allowing customers to access data and resources in their respective clouds. I also think that still broader multi-cloud interconnection would be even more valuable.

Cloud Availability Zones and Inference Zones

The OCX enables interconnection across the entire CoreSite data center platform and to all cloud regions across the U.S., ensuring optimal application performance, reduced risk and high resiliency.

As I learned more about the Oracle/AWS collaboration, I found that they call it “open cloud.” That raised a smile for me because CoreSite has been offering the Open Cloud Exchange® networking platform for more than a decade. When the OCX was launched (and named), its primary purpose was to give enterprises an open, direct connection to Amazon’s early cloud offering through our data centers. I guess we were serendipitously prescient.

Today, in addition to making it easy to take advantage of CoreSite’s multiple direct connections to all the major cloud providers from our data centers, the OCX includes cloud-to-cloud virtual routing as well as inter-campus and inter-market data center linking.

Another thing – Oracle Database@AWS runs within individual AWS Availability Zones, which TechTarget defines as “a single data center or set of data centers in a region.”3 Isolating the service in this way enables the low-latency network performance critical to inference.

You might have seen a previous blog from me, Inference Zones: How Data Centers Support Real-Time AI, that explains our perspective on what AWS is talking about. There are two critical “ingredients” in the inference recipe: where an enterprise determines it is best to deploy AI, and the network access to that region, or “zone.” Deploying inferencing workloads in densely populated areas with a concentration of industries and IT centers makes sense because that’s where availability zones and cloud clusters already exist. Furthermore, those regions are supported by high-bandwidth, low-latency networks and direct cloud connections. Most importantly, that’s where most users and potential customers of AI cloud services are.

Who You Gonna Call?

Foundational elements of CoreSite’s business model are to locate multi-tenant data centers at the epicenter of key availability zones and to steadily increase the number of native connections into public clouds while also serving as network interconnection hubs.

Over time, the model has enabled us not only to host and deliver customers’ services from each data center, but to build a platform of 30 data centers that forms an ideal vehicle for data exchange and service delivery across multiple markets. That includes CPU-based workloads as well as the latest high-density GPU workloads.

For example, application developers can access LLMs in the cloud and then leverage low-latency networks to execute inference in a CoreSite data center or campus at the center of an inference zone. And, as I mentioned a moment ago, they can use OCX to move data to their colocation or on-premises GPU-driven servers through direct cloud onramps, cost-effectively and with real-time bandwidth control.
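As a simple illustration of that split, here is a hedged sketch of an application that prefers a GPU inference endpoint in a nearby colocation facility and falls back to a cloud-hosted model when the local endpoint is unreachable or exceeds its latency budget. Both endpoint URLs are hypothetical.

    import time

    import requests  # third-party HTTP client (pip install requests)

    # Hypothetical endpoints: a GPU cluster in a colocation facility and a
    # cloud-hosted fallback. Replace both with your own services.
    COLO_URL = "https://inference.colo.example.net/v1/generate"
    CLOUD_URL = "https://inference.cloud.example.com/v1/generate"

    def infer(prompt: str, latency_budget_ms: float = 50.0) -> dict:
        """Prefer the nearby colocation endpoint; fall back to the cloud
        when it is unreachable or slower than the latency budget."""
        payload = {"prompt": prompt, "max_tokens": 128}
        try:
            start = time.perf_counter()
            resp = requests.post(COLO_URL, json=payload,
                                 timeout=latency_budget_ms / 1000)
            resp.raise_for_status()
            elapsed = (time.perf_counter() - start) * 1000
            print(f"colo endpoint answered in {elapsed:.0f} ms")
            return resp.json()
        except requests.RequestException:
            # The cloud fallback trades some latency for availability.
            return requests.post(CLOUD_URL, json=payload, timeout=10).json()

The design choice is the interesting part: keeping latency-sensitive inference close to users in colocation while retaining the cloud for reach and elasticity is exactly the hybrid pattern described above.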

The recent development in AI inferencing might be described as the start of “SaaS 2.0,” if you will. Here are a few of the advantages of being able to selectively utilize best-of-breed, GPU-powered and tensor processing unit (TPU)-powered applications in neoclouds:

  • Infrastructure where you want it. CoreSite products, including Any2Exchange® for internet peering, Blended IP (the internet access every organization needs, delivered by Tier 1 ISPs) and the OCX, give you optimal network flexibility, agility and real-time hybrid cloud management.
  • No vendor lock-in. We are cloud-agnostic; resources in multiple major clouds can be utilized.
  • Cloud-to-cloud data sharing, as described above.
  • Resiliency through redundancy. It’s critical to have more than one cloud connection in any given zone, for public and private clouds.
  • Cast a wider data collection net. With deployments in several CoreSite data centers, you can aggregate data from a broader range of endpoints for analysis and decision-making (aside from local inferencing).

Some truly remarkable changes are in motion, making competitors such as AWS and Oracle “frenemies” and distributing AI where it delivers the most value, whether that’s on-prem, in colocation data centers or across hybrid cloud infrastructures.

While taking advantage of all this may be daunting, even frightening, remember – we are not afraid!

Who you gonna call?

CoreSite!

Know More

CoreSite helps power the next era of AI with colocation and high-performance network solutions. Our data centers support hybrid architectures with access to scalable power, as well as liquid cooling solutions for high-density workloads.

Ready to talk about how CoreSite can help you bring AI into your infrastructure? 

Contact us to start the conversation.

In the meantime, learn more about what our clients are doing with AI and download our Tech Brief to get insight on actualizing AI's potential for your organization.

 

References

1. Oracle Database@AWS Now Generally Available (source)
2. 2025 State of the Data Center Report, Foundry and CoreSite
3. Understand AWS Regions vs. Availability Zones (source)