The impact of AI on IP network evolution

Artificial Intelligence (AI) is taking the world by storm. A whirlwind of media attention and the impressive feats of generative AI models such as OpenAI’s ChatGPT or Google’s Gemini show clearly that AI is a force to be reckoned with, and one that will become even more formidable with future generations of the Graphics Processing Units (GPUs) and large language models that power it. What was science fiction only a few years ago is rapidly becoming reality, leaving many of us wondering how AI technology will impact our personal lives, businesses and society in general.

If you’re a network operator or digital infrastructure provider and wonder how AI will shape the evolution of your IP network, this blog may give you some insight.

Unlocking the value of massive data with AI

In today’s digital world it is hard to keep up with existing data, let alone the exabytes of new data we generate daily. Potentially valuable data is stored and discarded without ever being used, or lost in the vast ocean of the Internet.

AI and Machine Learning (ML) give us powerful tools to unlock the hidden value of data. AI can tap into the vast resources of cloud data centers and use the parallel processing power of hundreds of servers and thousands of GPUs to multi-task. In days, an AI model can be trained to perform complex tasks that would take a human mind years, or that are beyond it altogether, such as decoding human DNA to create an image of a person from a hair sample. In seconds, the accumulated knowledge stored in a trained AI model can be replicated, scaled and applied where needed.

AI adds value to both new and existing applications. It gives billions of smart phone users an incentive to upload their photos and videos to the cloud so they can easily search and edit personal content or generate funny personalized memes. AI can sit in on millions of video calls to take notes or analyze surveillance feeds and indicate trouble. It can recommend content you might like but never knew existed, and create entirely new formats of highly immersive and interactive content that seamlessly extend the personal user experience into digital and virtual realms.

But to realize its full potential, AI depends on high-performance IP networks to move data securely, reliably and expediently, and to deliver its value to users.

The value of IP networks for AI deployments

IP routers and switches play an essential role in all stages of AI/ML deployment:

  • Data center network fabrics, to connect the servers used for AI training and inferencing

  • Data center gateways, to exchange AI models and data between multiple data centers

  • IP routers, to interconnect the distributed AI edge and serve billions of user queries

Figure: The various steps in which AI models and applications are created and deployed with the help of data network technology.

When it comes to networks for AI, much attention goes to the initial model training stage. This is the costly and time-consuming process of parsing petabytes of Internet data through a Large Language Model (LLM) like ChatGPT or Llama. It requires massive, high-performance data center network fabrics to connect the hundreds of servers that are needed to train the foundation models on which many AI applications are based.

The second stage, model fine-tuning, requires additional data that is highly curated and often restricted for privacy, copyright, legal or data sovereignty reasons. For example, scientists in Scotland are training an AI model to assess the risk of dementia by analyzing brain scans. Such closely held data is highly confidential and its privacy must be safeguarded as it crosses the IP network to private and secure data centers. There are many applications in science, healthcare, education, finance, public safety and government services where AI supports business-, mission- or life-critical decision processes such as loan approvals, fraud detection or patient care. In these applications, IP network security and integrity are critical.

Finally, the interaction of a trained AI model with end users is called inferencing. This is a highly dynamic and compute-intensive process in which large volumes of potentially sensitive data are exchanged in real-time with connected users and devices over wide area IP networks and the internet. This is also where the enormous investments in AI model training and fine-tuning pay off through their use by an expanding set of applications.

AI inferencing logic can be embedded in user devices such as the latest smartphones, smart TVs, VR headsets or self-driving cars. But for many devices and applications it is not practical, possible, economical, safe or even necessary to do so. Due to the distributed nature of users and devices, there are potentially many edge locations with AI inferencing servers that must be securely connected over a wide area IP network (the blue cloud in the figure).

Figure: The wide area network view connecting AI-era data centers and end users.

The optimal location of AI inferencing workloads depends on the data security, bandwidth, latency and availability requirements of the application.

  • Mission-critical AI applications for large enterprises, industry verticals or government functions that process private and sensitive data should be hosted on servers within the confines of a Local Area Network (LAN) or a secure (Virtual) Private Network with quantum-safe network cryptography to protect against hacking and eavesdropping.

  • High-bandwidth, low-latency applications such as AI vision (e.g., video surveillance) or interactive multimodal content generation (e.g., virtual and extended reality) are best placed at the network edge to minimize data transport costs and exposure to network congestion and DDoS security risks.

  • Other AI workloads can be hosted on bare metal servers equipped with GPUs in edge colocation datacenters (GPU-as-a-Service, AI-as-a-Service), or in the public cloud (e.g., Google Cloud, Apple’s iCloud, Microsoft Azure, or Amazon Web Services).
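The placement criteria above can be sketched as a simple decision rule. The following Python sketch is purely illustrative: the tier names, thresholds and workload attributes are assumptions made for this example, not part of any actual product or API.

```python
# Illustrative sketch: mapping an AI inferencing workload to a hosting tier
# based on data sensitivity, latency budget and bandwidth needs.
# All names and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    sensitive_data: bool    # private or regulated data?
    max_latency_ms: float   # end-to-end latency budget
    bandwidth_gbps: float   # sustained traffic volume


def place(w: Workload) -> str:
    """Return an illustrative placement tier for an inferencing workload."""
    if w.sensitive_data:
        # Mission-critical workloads with private data stay inside a
        # LAN or secure (virtual) private network.
        return "private-network"
    if w.max_latency_ms < 20 or w.bandwidth_gbps > 1.0:
        # High-bandwidth or latency-sensitive workloads move to the
        # network edge to limit transport cost and congestion exposure.
        return "edge-colocation"
    # Remaining workloads can run on GPU-as-a-Service or public cloud.
    return "public-cloud"


print(place(Workload("fraud-detection", True, 100, 0.1)))   # private-network
print(place(Workload("ai-vision", False, 10, 5.0)))         # edge-colocation
print(place(Workload("chat-assistant", False, 500, 0.05)))  # public-cloud
```

In practice these decisions weigh many more factors (data sovereignty, availability zones, cost models), but the ordering of the checks mirrors the priorities in the list above: data security first, then latency and bandwidth, then economics.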

In turn, IP networks can leverage AI technology to help orchestrate and automate network operations to best accommodate these distributed AI workloads and traffic dynamics at scale. But that’s a topic in and of itself that we’ll address in a future blog.

AI and IP network evolution

With its ability to process massive amounts of data in real-time, AI has great potential to boost our productivity and creativity. AI technology evolution is mainly fueled by silicon, software and big data, but also depends heavily on IP networks to move the data and deliver its value to users.

The evolution of AI will challenge IP networks in areas like data transport security, DDoS protection, bandwidth capacity, latency and reliability. Some of these trends may take time to fully manifest themselves, but it’s important to anticipate them early and be ready to meet them.

To learn more about our IP network solutions for the AI era, visit our IP networks and AI webpage.

Arnold Jansen

About Arnold Jansen

Arnold is a senior solution marketing manager in Nokia’s Network Infrastructure business division, responsible for promoting IP routing products and solutions. Arnold has held a number of roles in research and innovation, sales, product management, and marketing during his 25 years in the telecommunications industry. He holds a Bachelor’s degree in Computer Science from the Rotterdam University of Applied Sciences.

Follow Arnold on Twitter
