From building more, to connecting more: Why the next frontier of cloud services lies in data center interconnectivity
In a world where demand for cloud services shows no signs of slowing down, data centers are the driving force behind our technological future. Now, however, their power demands are being put to the test by the rise of artificial intelligence (AI), which is expected to greatly increase traffic. As the AI revolution gathers steam, data center capacity in Asia Pacific is expected to hit 94.4GW by 2028, according to IDC, driven by growing adoption of AI and cloud services.
While Singapore has long been a major player in the industry, other countries in Southeast Asia are seeing a rise in data centers, and are expected to move to high-performance IT infrastructure in the coming years.
Beyond the data center race
The race to build the biggest, highest-performing data centers is gaining speed. But the question isn't simply how to build larger facilities; improving connectivity between data centers is equally important. The growth of AI, cloud services, and other bandwidth-hungry applications requires a new approach to data center operations, one that presents opportunities for investors, developers, operators, and global cloud service providers alike.
With more than 8,000 data centers around the globe, enterprises likely own or are connected to more than one. However, data centers do not exist in isolation. To improve the resilience of their workloads, many organizations spread them across geographically distinct regions. Data centers are no longer confined to the inner city; they are diversifying to other parts of the country. Fast, reliable, and secure connectivity between these facilities is therefore critical, as it determines the efficiency of data transfer, the seamlessness of communication between systems, and the accessibility of resources.
Moreover, shifts in application consumption are also necessitating changes in how connectivity is delivered, particularly with the rise of AI. The ramp-up of AI is expected to happen in two phases. At present, we are in the large language model (LLM) training phase, which is power-, storage-, compute-, and network-intensive. During this phase, LLMs are trained on vast data sets that pass through artificial neural networks consisting of billions to trillions of parameters, so that the models can mimic human decision-making.
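To make "network-intensive" concrete, here is a rough back-of-envelope sketch. The 70-billion-parameter model, the 16-bytes-per-parameter mixed-precision training state, and the 400 Gb/s link speed are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope arithmetic for why LLM training strains storage,
# compute, and networks. All figures below are illustrative assumptions.

params = 70e9  # a hypothetical 70B-parameter model

# Mixed-precision training commonly keeps FP16 weights and gradients plus
# FP32 master weights and optimizer states, roughly 16 bytes per parameter.
bytes_per_param = 16
training_state_tb = params * bytes_per_param / 1e12
print(f"Training state: ~{training_state_tb:.1f} TB")  # ~1.1 TB

# That state exceeds any single accelerator's memory, so it is sharded
# across many machines; every training step then exchanges gradients over
# the network, which is why this phase is network-intensive as well.
gradient_bits = params * 2 * 8        # one full set of FP16 gradients
link_gbps = 400                       # assumed inter-node link speed
print(f"Moving those gradients over one {link_gbps} Gb/s link: "
      f"~{gradient_bits / (link_gbps * 1e9):.1f} s")  # ~2.8 s
```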
The second phase, known as the inference phase, begins once these models are trained and can perform useful functions on real-world data. This phase is less power- and compute-intensive, but much more geographically distributed, and could drive an improved business case for edge compute applications requiring low-latency decision-making. For instance, edge computing plays a transformative role in the development and operation of smart cities. In these urban areas, an extensive network of sensors, cameras, IoT devices, and other technologies generates a massive amount of continuous data. However, core cloud computing models face challenges with latency and bandwidth because of the distance between the core and the edge, where data is both created and consumed.
Edge computing alleviates these challenges by processing data closer to the edge, enabling real-time insights, faster decision-making, and improved user experiences. The proliferation of edge computing drives the need for distributed data processing and storage closer to end-users, requiring connectivity between centralized data centers and edge computing nodes for efficient data transfer and workload management.
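The physics behind that argument can be sketched in a few lines: even over perfect fiber, round-trip time grows with distance, so pushing compute closer to where data is created directly cuts latency. The distances below are illustrative assumptions:

```python
# Minimal sketch of propagation delay versus distance. Light in optical
# fiber covers roughly 200 km per millisecond; distances are illustrative.

C_FIBER_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fiber, ignoring
    routing, queuing, and processing overheads (which only add more)."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

for label, km in [("metro edge node", 50),
                  ("regional data center", 1000),
                  ("distant core region", 5000)]:
    print(f"{label:>22}: {round_trip_ms(km):5.1f} ms round trip")
```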
As the strain on networks increases, particularly on routes to, from, and between data centers, the challenge lies in how to seize new opportunities with limited budgets and fewer resources. While AI's voracious appetite for compute, storage, and the energy to power them will spur the construction of new data centers, it is also positioned to drive the industry's next phase of growth: data center interconnectivity. It's clear that everyone wants a piece of the data center gold rush. However, only those that can both build data center facilities and move content to, from, and between them can lead the charge in today's AI-driven era.
Data Center Interconnect (DCI), a technology that connects two or more data centers over short, medium, or long distances using high-speed packet-optical connectivity, is seen as a critical requirement. From StarHub to VITRO Inc, more network providers are recognizing DCI's potential to deliver the scalability and efficiency a hyperconnected world demands.
Revolutionizing connectivity: The DCI advantage
Research shows that the number of data centers will continue to grow over the next several years. With so many data centers to connect, businesses looking to build additional facilities must consider several key connectivity factors to support their applications. DCI-optimized networking ensures that the data you need is securely and reliably in the right place when and where you need it.
Designed for operational simplicity, the right DCI platform will provide hassle-free, rapid scalability. Simple operations and quick turn-up help streamline workflows, improve business performance, and enhance the experiences of end users, both people and machines. In addition, modern DCI solutions build in common management interfaces and industry-standard APIs, enabling businesses to automate labor-intensive manual tasks and reduce the errors that creep into repetitive operations. This openness and programmability allow for easy automation with back-office tools.
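As an illustration of that programmability, here is a minimal sketch of what automated provisioning against such an API could look like. The endpoint, payload fields, and service name are hypothetical placeholders, not any vendor's actual interface:

```python
# Hedged sketch of API-driven DCI automation. The endpoint, payload
# fields, and token below are hypothetical placeholders, not any
# vendor's actual interface.
import requests

DCI_API = "https://dci-controller.example.com/api/v1"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}          # placeholder token

def provision_wave(a_end: str, z_end: str, gbps: int) -> dict:
    """Request a point-to-point wavelength between two data centers,
    replacing a manual, error-prone provisioning workflow."""
    payload = {"a_end": a_end, "z_end": z_end, "capacity_gbps": gbps}
    resp = requests.post(f"{DCI_API}/services", json=payload,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# A back-office tool could call this the same way an operator would,
# e.g. provision_wave("SIN-DC1", "JKT-DC2", 400).
```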
Lastly, businesses should consider latency: the time it takes for data to travel between two points. It is particularly important for industries such as financial services and for the rapidly growing ecosystem of cloud service provider sites. A data center with low latency to that larger ecosystem gives a business connectivity that is as efficient and responsive as possible.
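One rough way to put a number on that latency is to time a TCP handshake against a candidate facility's endpoint, a crude but serviceable proxy for round-trip time. The hostname below is a placeholder:

```python
# Crude latency probe: time TCP handshakes to an endpoint and take the
# median. The hostname is a placeholder; point it at a real endpoint.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds, a rough RTT proxy."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

print(f"Round trip: {tcp_rtt_ms('example.com'):.1f} ms")
```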
Take StarHub, for instance. Its Low Latency Data Center Connect aims to provide customers with seamless interconnection as well as enhanced access to cloud services and applications. In doing so, it seeks to improve customer and user experiences, reduce cost, and increase operational efficiency.
Connecting the future
Looking ahead, we face a future where bandwidth demand will only grow, especially as AI becomes more pervasive and requires ever more capacity. For data centers, supporting that future means moving the focus beyond simply building more, to connecting more. It will demand scalable, reliable, and secure data center interconnectivity that is sustainable over the long term. Enterprises should also consider placing data centers in diverse locations for greater resilience, so traffic can be served closer to end users.
Without such interconnectivity, a data center is simply a warm building full of computers operating in isolation. It's a business opportunity that's won and lost on networking. The ideal data center interconnect can handle the volume of traffic between data centers and other network resources while guaranteeing low latency, high availability, reliability, and sustainability. With DCI, that solution exists; if we shift our focus towards it, the path to seamless connectivity amid growing bandwidth demand is clear.
#datacenter #AI #connectivity #DCI #edgecomputing