
Generative AI for Network Operations: Building an AI CableLabs Expert


Austin Pahl
Architect

Tyler Glenn
Principal Engineer

Oct 16, 2024

Key Points

  • ChatGPT and other large language models (LLMs) are enabling new ways to interact with computers, but they often make up false information about real subjects. This tendency to “hallucinate” makes it hard to rely on LLMs for real-world applications.
  • CableLabs is building a Domain Expertise Layer that solves the hallucination problem by giving LLMs access to CableLabs specifications and SCTE standards as sources of truth for more accurate answers, backed by a growing validation dataset.

This is the first in a series of blog posts about large language models and CableLabs’ efforts to apply them to transform knowledge-based work across the cable industry.

What happens if you ask ChatGPT cable-related questions? To conjure an example, we asked the AI chatbot to describe adjacency misalignment and its symptoms. Here is ChatGPT’s response:

Adjacency misalignment, also known as adjacency mismatch, is a common issue in network routing. It occurs when two routers have different views of their network adjacency. In other words, one router believes it has a direct connection to another router, but the second router does not have the same understanding. This can happen due to various reasons such as incorrect configuration, network congestion, or hardware failure.

[ChatGPT then describes symptoms of this issue.]

If you're unfamiliar with adjacency misalignment, the ChatGPT definition sounds pretty convincing! If you are familiar with adjacency misalignment, however, you probably noticed right away that the answer is completely false. Adjacency misalignment is actually a radio frequency (RF) impairment. ChatGPT's answer wasn't even at the right level of the network stack.

ChatGPT and other tools of its kind are amazing for what they're beginning to achieve across industries and use cases, but answers like this aren’t helpful at all. CableLabs is actively solving this problem to help bring unprecedented AI advancements to the cable industry. Let's see how.

Why Is ChatGPT So Confidently Wrong?

ChatGPT and other generative AI products are built on revolutionary machine learning (ML) technology commonly known as large language models, or LLMs. The ability to talk naturally to a computer was science fiction only a few years ago, but LLMs have made human-to-AI conversation easier than entering a Google search.

Unfortunately, LLMs often fall short when it comes to technical, domain-specific questions like the example above. Unless you already know the answer to your question, it's difficult to verify that the LLM's response is factually correct.

At their core, LLMs are like extremely sophisticated “predict-the-next-word” machines. When you provide an LLM with a sequence of text, the text gets chopped into small chunks called “tokens,” which the LLM can understand. Then, the LLM estimates which token is most likely to come after the input sequence. The method may sound basic, but when the predicted token is repeatedly appended to the end of the sequence and fed back in, you get a chatbot!
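To make that loop concrete, here is a minimal, self-contained sketch in Python of the "guess a token, append it, repeat" process. The toy bigram table and tiny corpus are our own illustrative stand-ins; a real LLM learns its next-token probabilities with a neural network trained on vast amounts of text, but the generation loop works the same way in spirit.

```python
import random

# Toy "predict-the-next-token" model: a bigram table built from a tiny corpus.
# (Illustrative stand-in only; a real LLM learns these probabilities with a
# neural network trained on enormous datasets.)
corpus = "the modem locks to the downstream channel and the modem ranges on the upstream".split()
bigrams: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(prompt: str, max_tokens: int = 8) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break
        tokens.append(random.choice(candidates))  # guess a likely next token...
        # ...and on the next pass, the longer sequence becomes the new input.
    return " ".join(tokens)

print(generate("the modem"))
```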

The main reason LLMs act so confident when they're wrong is that optimizing for "predicting the next word" makes them experts at sounding convincing, but not at telling truth from falsehood. If they don't “know” the answer to a question, they still try to sound legitimate, and this often leads to making something up. The AI community calls this behavior “hallucination,” because the LLM acts as if its own made-up writing is real. Hallucinations are simply an extension of the LLM's generative nature; the output just isn't grounded in fact.

A Huge Opportunity Within Reach

If we can solve the hallucination problem — that is, build an AI that can reliably give accurate answers to cable-related questions — then we have the potential to revolutionize all sorts of knowledge work across the industry, including field operations, customer experience and innovation.

The industry has decades of technical, operational and business knowledge just waiting to be harnessed to help enable more seamless connectivity and network platform evolution, as well as pervasive intelligence, security and privacy. These key themes form the foundation of CableLabs’ Technology Vision.

For that reason, CableLabs has been hard at work building our solution to the knowledge accuracy problem faced by LLMs.

The Solution: CableLabs Expert LLM

CableLabs' Expert LLM is a first-of-its-kind chat application that answers questions based on a domain-expertise layer consisting of hand-curated, authoritative sources on cable technologies. If you ask the CableLabs Expert LLM the same question from the beginning of this article, you’ll get an answer like this:

Adjacency misalignment is an RF impairment recognizable as a difference in channel power between adjacent channels or groups of channels. It may be observed multiple times at different points in the spectrum and can be attributable to RF combining or source issues at the headend or hub or in a node utilizing a broadcast/narrowcast overlay or a split band configuration when the two spectrums are not combined correctly.

When adjacency misalignment is observed, the lower power channels may indicate poor performance, seen as a poor modulation error ratio (MER) when the delta between channels is large. This condition can manifest as lost packets, video tiling, freezing, or in very extreme cases, black screens at a customer’s home. Because adjacency misalignment is introduced very early in the downstream signal path, it has the potential to impact a significant number of customers.

Sources:

    • SCTE 280
    • CM-SP-PHYv4.0-I06-221019

Much better!

As you can see, the CableLabs Expert even cites its sources. Currently, the system has access to the specifications for DOCSIS 3.0, 3.1 and 4.0, as well as select SCTE documents including 280 and 40. Soon, we will expand support to other key sources of information related to cable broadband technologies.

The application supports all the latest state-of-the-art LLMs, including the GPT series, Claude, Mistral, Llama and many more. Whenever a useful new model comes out, the application can be extended in minutes to support it, without expensive fine-tuning or training from scratch.
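One way to picture that kind of plug-and-play model support is a thin, common interface that every backend must satisfy, so adding a new LLM is just one more registry entry. The sketch below is purely illustrative; the `ChatModel` protocol, `EchoModel` stand-in and registry names are our own assumptions, not the application's actual code.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface every backend LLM must satisfy (hypothetical)."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in backend so the sketch runs without API keys; a real backend
    would wrap GPT, Claude, Mistral, Llama or another hosted model."""
    def complete(self, prompt: str) -> str:
        return f"[model output for a prompt of {len(prompt)} characters]"

MODEL_REGISTRY: dict[str, ChatModel] = {"echo": EchoModel()}

def register_model(name: str, model: ChatModel) -> None:
    """Supporting a newly released model is just one more registry entry."""
    MODEL_REGISTRY[name] = model

def answer(question: str, model_name: str = "echo") -> str:
    # In the full application, the question would first pass through the
    # retrieval step described below to build an "open-book" prompt.
    prompt = f"Question: {question}\nAnswer:"
    return MODEL_REGISTRY[model_name].complete(prompt)

print(answer("What is adjacency misalignment?"))
```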

The CableLabs Expert LLM's capabilities are mainly thanks to a powerful technique known as Retrieval Augmented Generation (RAG). In a nutshell, RAG is like giving an LLM an open-book test. When a user asks a question, the question is converted into a numerical representation known as a "vector embedding," and that representation is used to automatically pick out the snippets of the CableLabs specifications and SCTE standards that are most likely to contain the answer. The LLM is given those snippets as context so it can produce an accurate, fact-based answer. Additionally, RAG can run on inexpensive, low-end hardware, unlike alternative methods such as fine-tuning, which requires GPUs to complete in a timely manner.
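As a rough illustration of that retrieval step, here is a minimal RAG sketch. The `embed` function is a toy hashing trick and the in-memory `snippets` list contains made-up example text; a production system would use a real embedding model and a vector database built over the full specification documents.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for an embedding model: hashes word bigrams into a
    fixed-size vector. A real system would call an actual embedding model."""
    vec = np.zeros(dim)
    words = [w.strip(".,?!") for w in text.lower().split()]
    for a, b in zip(words, words[1:]):
        vec[hash((a, b)) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Hypothetical snippet store; in practice these would be chunks of DOCSIS
# specifications and SCTE standards indexed in a vector database.
snippets = [
    "Adjacency misalignment is an RF impairment seen as a power delta between adjacent channels.",
    "The downstream OFDM channel supports subcarrier spacings of 25 kHz and 50 kHz.",
    "Proactive network maintenance uses upstream equalization data to localize plant impairments.",
]
index = np.stack([embed(s) for s in snippets])

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the question."""
    scores = index @ embed(question)  # cosine similarity (vectors are unit-norm)
    return [snippets[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    """Assemble the "open-book" prompt that gets sent to the LLM."""
    context = "\n".join(f"- {s}" for s in retrieve(question))
    return (
        "Answer the question using only the context below, and cite it.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_prompt("What is adjacency misalignment?"))
```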

In addition to the chat interface, the CableLabs Expert application provides a comprehensive validation dataset and benchmarking framework to automatically evaluate models against a large body of known questions and answers from varied sources. Model evaluation is a critical part of this process: We must be able to precisely understand how well our system is performing, especially when comparing specific approaches, datasets or models.
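To give a flavor of what such an evaluation loop can look like, here is a hypothetical sketch that scores any question-answering callable against a small validation set using a simple token-overlap F1 metric. The metric, the example entry and the function names are our own illustrative assumptions, not CableLabs' actual benchmarking framework.

```python
def token_f1(predicted: str, reference: str) -> float:
    """Simple token-overlap F1 between a model answer and a reference answer."""
    pred, ref = set(predicted.lower().split()), set(reference.lower().split())
    overlap = len(pred & ref)
    if not overlap:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Hypothetical validation entry; the real dataset covers a large body of
# question/answer pairs drawn from varied sources.
validation_set = [
    {"question": "What kind of impairment is adjacency misalignment?",
     "reference": "Adjacency misalignment is an RF impairment."},
]

def evaluate(ask_model, dataset) -> float:
    """ask_model is any callable mapping a question string to an answer string."""
    scores = [token_f1(ask_model(item["question"]), item["reference"]) for item in dataset]
    return sum(scores) / len(scores)

def stub_model(question: str) -> str:
    """Trivial stand-in so the example runs end to end."""
    return "Adjacency misalignment is an RF impairment in the downstream path."

print(f"Mean token F1: {evaluate(stub_model, validation_set):.2f}")
```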

Building for the Future

Generative AI is here to stay. ChatGPT captured the imagination of people around the world, across all business sectors and walks of life. Everybody agrees that it is a disruptive force, but the real question is who will disrupt and who will be disrupted. At CableLabs, we are building a better future for the broadband industry using cutting-edge AI technologies.

Foundational discussions are happening now between CableLabs and our members to bring the industry together for generative AI innovation and interchange standards.

Stay tuned for future blog posts on generative AI for network operations, in which we'll take a closer look under the hood of the CableLabs Expert LLM! Next time, we'll explore evaluation and analysis of the Expert's writings.

If you want to know everything about CableLabs' work with LLMs and RAG, check out our technical paper, "The Conversational Network: AI-Powered Language Models for Smarter Cable Operations," which was presented at TechExpo 2024.
