Capability vs. capacity: The duality of AI in telecom networks

A survey from Ciena published earlier this year revealed the dual nature of artificial intelligence's potential impact on telecom networks.
On one hand, more than half of the telecom and IT engineers surveyed reported that they thought the use of AI would improve network operational efficiency by 40% or more; let's call that the "AI for the network" aspect. But when asked about the needs of the other aspect, "the network for AI," nearly all of the respondents (99%) said that they believed fiber network upgrades will be required in order to support more AI traffic.
"The survey highlights the optimistic long-term outlook of CSPs regarding AI's ability to enhance the network as well as the need for strategic planning and investments in infrastructure and expertise to fully realize the benefits," said Ciena CTO Jürgen Hatheier. In a recent interview with RCR Wireless News, two other experts from Ciena discussed the dueling aspects of AI in telecom and the network specifically.
AI for the network
"AI, and things like ML and data analytics, have been embedded in assurance for many, many years, for both everyday requirements like faster troubleshooting and fault isolation, and more recently, newer use cases around proactive fault identification, isolation and prevention," reflected Kevin Wade, senior director of product marketing at Ciena's Blue Planet division, which focuses on network automation and orchestration. He sees AI's overall role within network operations as an extension of automation: another way to leverage data to optimize operational processes. As for where operators are interested in AI use cases, the majority at this moment are focused on assurance and on optimizing network planning. Operators have a lot of data, he pointed out, and the more links they build across that distributed data, the more insights they get and the better they can plan the evolution of their services, their networks and their business. This is a similar, but perhaps more sophisticated, evolution of traditional telecom applications of AI and ML.
But over the last couple of years, Wade noted, service providers have also become highly interested in generative AI in particular, and its implications for their businesses and their networks. "That's a fundamentally different approach," Wade said. "Yes, it's all AI, but it's not necessarily an evolution of ML."
Gen AI has been fundamentally built around natural language and large language models (LLMs), not as an extension of the AI/ML that lives in the world of data largely generated by network equipment and software. So the gen AI use cases that Wade sees the industry working towards that directly impact the network itself are essentially an extension of, or perhaps an intersection of, coding, orchestration and intent-based networking. "The idea might be, let's use natural language for an end-customer to express their intent of what they want for a service: a connection from this point to this point, for this amount of time, this amount of bandwidth, with this type of security privilege attached. If you can just say that or write that down in simple language, and it automagically happens," Wade explained.
That's the goal of AI in telecom that service providers are looking toward, from Blue Planet's view. "But it's really still very much in the formative stage," Wade adds. "It will take a couple of years, probably, to get there, because there are no standards for gluing this all together; those are also just being formulated." The Ultra Accelerator Link (UALink) group, which is focused on standardizing interconnect interfaces for AI accelerators within data centers, was established just earlier this year, and is also seen as an effort to create an alternative to Nvidia's NVLink.
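The plain-language service request Wade describes ultimately has to be translated into a structured intent that an orchestrator can act on. As a minimal sketch, assuming a hypothetical `parse_service_intent` helper and using simple pattern matching in place of the LLM a real system would use, the translation step might look like this:

```python
import re

def parse_service_intent(text: str) -> dict:
    """Extract a crude service intent from a plain-language request.
    A production system would use an LLM; this regex sketch only
    illustrates the intent fields Wade describes: endpoints,
    bandwidth and duration."""
    endpoints = re.search(r"from (\w+) to (\w+)", text)
    bandwidth = re.search(r"(\d+)\s*(?:G|Gbps)", text)
    duration = re.search(r"for (\d+)\s*(hours?|days?)", text)
    return {
        "a_end": endpoints.group(1) if endpoints else None,
        "z_end": endpoints.group(2) if endpoints else None,
        "bandwidth_gbps": int(bandwidth.group(1)) if bandwidth else None,
        "duration": f"{duration.group(1)} {duration.group(2)}" if duration else None,
    }

request = "Set up a connection from London to Frankfurt at 100G for 48 hours"
print(parse_service_intent(request))
```

The structured dictionary, not the free text, is what an intent-based orchestrator would hand off to provisioning, which is where the missing standards Wade mentions come in.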
That desire to avoid proprietary technology is also carrying over to the models for gen AI, Wade said, and it may lead to some network operators treading lightly on gen AI until more interoperable or standardized frameworks for applying AI emerge. "Some service providers, right now, are always concerned with lock-in … around software vendors in particular," he said. "They don't necessarily want to be locked into one LLM either. So … there'll be a mix of some waiting until some standards, guardrails, are in place for interoperability and so on. But others have initiated, and some of the larger European operators in particular have initiated their own, telco-specific LLM activities." (SK Telecom, Deutsche Telekom, e&, Singtel and SoftBank, after making a commitment at MWC Barcelona 2024, announced a joint venture in June of this year to jointly develop and launch a multi-lingual LLM specifically for telcos, with an initial focus that includes the use of gen AI in digital assistants for customer service.)
The network for AI
"AI infrastructure challenges lie in cost-effectively scaling storage, compute, and network infrastructure, while also addressing massive increases in energy consumption and long-term sustainability," wrote Brian Lavallée, senior director of market and competitive intelligence at Ciena, in a recent blog post. He pointed out that "Traditional cloud infrastructure success is driven by being cost-effective, flexible, and scalable, which are also essential attributes for AI infrastructure. However, a new and more extensive range of network performance requirements are needed for AI."
That includes both within the data center and outside it. Lavallée cited numbers from Omdia on expected traffic growth for AI, with monthly "AI-enriched" network traffic expected to see a 120% compound annual growth rate through 2030. He also touched upon the need of generative AI to move massive amounts of data within a data center, over links operating at 400G, 800G and 1.6 Tb/s or more. Ciena has run two recent trials of 1.6 Tb/s capabilities, one with Telstra and Ericsson and another with global fiber backbone provider Arelion.
"We know inside the data center, traffic is exploding already today," Lavallée said. "It's going to spill out very quickly into campus networks." He expects to see data centers begin to be built in that loose campus style, with multiple buildings within 10 kilometers of one another, leading to the virtualization of data centers. "You're going to have multiple data centers acting as one larger, virtual data center … for a whole bunch of reasons," Lavallée added, primarily power. "There's not enough electricity in existing data centers to park all the AI hardware, which is 10 times more capacity per rack," he continued. "So you may have 10 times less space consumed, but you've used 100% of the electricity coming into that building."
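Lavallée's point is that the power feed, not floor space, becomes the binding constraint. A back-of-the-envelope sketch, with illustrative numbers that are assumptions rather than figures from the article:

```python
def racks_within_power_budget(site_power_kw: float, power_per_rack_kw: float) -> int:
    """Number of racks a facility's power feed can support."""
    return int(site_power_kw // power_per_rack_kw)

# Assumed figures for illustration: a facility with a 1 MW feed,
# traditional racks at 10 kW vs. dense AI racks at ~100 kW each.
traditional_racks = racks_within_power_budget(1000, 10)   # 100 racks
ai_racks = racks_within_power_budget(1000, 100)           # 10 racks
print(traditional_racks, ai_racks)
```

Under these assumptions the AI build-out fills a tenth of the floor space yet exhausts the same electrical budget, which is exactly the scenario that pushes operators toward multi-building campuses acting as one virtual data center.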
Lavallée points out in his blog post that the gen AI models are ânotoriously power-hungry in their LLM training phaseâ and consume âimmense amounts of electricity.â Power usage in data-center hot spots such as Ashburn in northern Virginia, is expected to double over the next 15 years, driven mostly by data center growth. The GPU usage intensity wanes once a gen AI model is sufficiently trained and âprunedâ, however, and offers an opportunity for algorithms to be moved to a more distributed edge location closer to end-users.
While the power needs and intensive high-speed links within the data centers are already becoming apparent, it is less certain what the needs of the rest of the network will be. As Lavallée told RCR Wireless News, while there are some estimates on the overall traffic impact that AI may have, that still leaves significant uncertainty about exactly where in the network, and by how much, data transmission capacity will need to be bolstered. In metro rings? In long-haul links? Across submarine cables? That breakdown isn't known yet. And Lavallée makes the point in his post that "AI will only scale successfully if data can move securely, sustainably, and cost-effectively" from core data centers to edge data centers.
He also thinks that the network performance demands of AI in telecom may ultimately mean that it is very well-suited to being supported by 5G, which, after all, is supposed to be a highly distributed, cloud-native transmission network that can support high data rates and low latency.
"I think 5G was a network upgrade in search of a use case. AI is the use case," said Lavallée. "If we can marry the two together, I think some of the promise and opportunity of 5G can be enabled with artificial intelligence."