Latency 101: Getting From There to Here
Welcome back, once again, to the CableLabs 101 series! In our most recent post, we discussed the fiber portion of the hybrid fiber-coax (HFC) network, as well as the coherent optics technology that’s widely considered to be the hyper-capacity future of internet connectivity. Today, we’ll focus on a topic of growing importance for many of the new applications in development—a topic that significantly impacts the user experience even if it’s not well known. That topic is latency.
What Is Latency?
Simply put, latency means delay.
In our post about coherent optics technology, we pointed out how quickly light can travel through a piece of fiber-optic cable: an astonishing 128,000 miles per second. However, as incredibly fast as that is, it still takes time for light to carry information from one point to another.
Imagine for a moment that you’re reading this blog post on a computer in New York City. That would mean you’re about 1,600 miles away from the CableLabs offices here in Colorado. If we assume that the entire network between you and our offices is made of fiber (which is close enough to true for our purposes), it would take a minimum of 0.0125 seconds—or 12.5 milliseconds (12.5 ms)—for the text to travel from our server to your computer.
That’s not a lot of time, but distance is not the only source of delay—and those delays can add up.
For example, to read this post, you had to click a link to view it. When you clicked that link, your computer sent a request to our server asking for the article. That request had to travel all the way to Colorado, which also took the same minimum of 12.5 ms. If you put the two times together, you get a round-trip time (the time it takes to go somewhere and back), which in our case would be a minimum of 25 ms. That’s twice as long, but it’s still pretty small.
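If you’d like to check the arithmetic yourself, here’s a minimal Python sketch. The distance and fiber speed are the rough, approximate figures from this post, not precise measurements:

```python
# Rough propagation-delay estimate, using the approximate figures from the post.
SPEED_IN_FIBER_MPS = 128_000  # miles per second: approx. speed of light in fiber
DISTANCE_MILES = 1_600        # approx. fiber distance from NYC to Colorado

one_way_s = DISTANCE_MILES / SPEED_IN_FIBER_MPS
round_trip_s = 2 * one_way_s

print(f"One-way delay:    {one_way_s * 1000:.1f} ms")    # ~12.5 ms
print(f"Round-trip delay: {round_trip_s * 1000:.1f} ms")  # ~25.0 ms
```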
Of course, the server can’t respond instantly to your request. It takes a moment to process the request and retrieve the correct information. That adds delay as well.
In addition, these messages have to traverse the internet, which is made up of an immense number of network links. Those links are connected by routers, each of which directs traffic between the links attached to it. Each message has to hop from router to router, using the Internet Protocol to find its way to the correct destination. Some of those network links will be very busy, and others won’t; some will be very fast, and some might be slower. But each hop adds a bit more delay, which can ultimately add up and become noticeable, something you might refer to as lag.
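To picture how those per-hop delays accumulate, here’s a small illustrative sketch. Every hop and every delay value in it is a made-up assumption chosen just to show the idea:

```python
# Illustrative only: one-way latency as the sum of per-hop delays.
# The hop count and per-hop figures are made-up numbers, not measurements.
hops = [
    {"propagation_ms": 0.1, "queuing_ms": 0.5},  # home router
    {"propagation_ms": 2.0, "queuing_ms": 1.5},  # ISP access network
    {"propagation_ms": 8.0, "queuing_ms": 3.0},  # busy backbone link
    {"propagation_ms": 2.5, "queuing_ms": 0.2},  # lightly loaded backbone link
    {"propagation_ms": 0.4, "queuing_ms": 0.8},  # destination data center
]

total_ms = sum(h["propagation_ms"] + h["queuing_ms"] for h in hops)
print(f"One-way latency across {len(hops)} hops: {total_ms:.1f} ms")
```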
Experiment Time
Let’s try a little experiment to illustrate what we’re talking about.
If you’re on a Windows computer, open the Start menu and search for Command Prompt (on older versions of Windows, you’ll find it under Start, Programs, Accessories). Doing so will open up a window in which you can type commands.
First, try typing the following: ping www.google.com
After you hit Enter, you should see some lines of text. At the end of each line will be a “time” in milliseconds (ms). That’s the amount of time it took for a ping request to get from your computer to Google’s server and for a response to come back, or the round-trip latency. Each value is likely different. That’s because each time a ping (or any message) is sent, it has to wait a small but variable amount of time in each router before it’s sent to the next router. This “queuing delay” accumulates hop-by-hop and is caused by your ping message waiting in line with messages from other users that are traversing that same part of the internet.
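If you’d rather measure round-trip time from code, here’s a rough Python sketch. Real ping uses ICMP, which typically requires elevated privileges to send from a program, so this sketch times a TCP handshake instead; the host and port are assumptions, and the handshake time is only an approximation of one round trip:

```python
# A rough, ping-like RTT measurement using a TCP handshake instead of ICMP
# (real ping uses ICMP, which usually needs raw sockets / admin rights).
import socket
import time

HOST, PORT = "www.google.com", 443  # assumes the host accepts TCP on port 443
SAMPLES = 5

for i in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # connection established; the handshake took roughly one round trip
    rtt_ms = (time.perf_counter() - start) * 1000
    print(f"sample {i + 1}: {rtt_ms:.1f} ms")
```

Run it a few times and you’ll see the same kind of sample-to-sample variation that ping shows, for the same reason: variable queuing delay along the path.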
Next, try typing the following: tracert www.google.com
You should see more lines of text. The first column shows the hop number (how many hops away that point is), the next three show times in milliseconds (the latency is checked three times per hop) and the final column shows the name or the address of the router at that hop. Together, these lines show you the path your request took to get from you to the Google server. You’ll notice that even as close as that server might be (and as low as your latency might be), your request had to hop across a number of routers to reach its destination. That’s how the internet works.
(Note that you might have some fields show up as an asterisk [*]. That’s not a problem. It simply means that the specific device is configured not to respond to those messages.)
If you’re on a Mac, you can do the same thing without needing a command prompt: Just search for an application on your computer called Network Utility. To send a ping in that app, click on the Ping tab, type in www.google.com and click the Ping button. Similarly, to check the route, click on the Traceroute tab, type in the same website name and click the Trace button. (On newer versions of macOS that no longer include Network Utility, you can open the Terminal app instead and run the same ping command, with traceroute in place of tracert.)
What Is Low Latency?
A term you might have heard is low latency. This term has been getting more and more attention lately. In fact, the mobile industry is touting it as an essential aspect of 5G. But what exactly is low latency, and how does it relate to our definition of latency?
The reality is that there’s no formal definition of what qualifies as low latency. In essence, it simply means that latency is lower than it used to be, or that it’s low enough for a particular application. For example, if you’re watching a streaming video, low latency might mean having the video start in less than a second rather than multiple seconds.
However, if you’re playing an online game (or perhaps using a cloud gaming service), you need the latency to be low enough so that you don’t notice a delay between moving your controller and seeing the resulting movement on your screen. Experiments have shown that anything above about 40 ms is easily noticeable, so low latency, in this case, might mean something even lower than that.
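As a back-of-the-envelope illustration, you can add up a latency budget and compare it with that threshold. The component values below are made-up assumptions, not measurements:

```python
# Does a hypothetical gaming session fit under the ~40 ms noticeability threshold?
# All component values are illustrative assumptions.
THRESHOLD_MS = 40  # roughly where added delay becomes easily noticeable

delays_ms = {
    "propagation (round trip)": 25.0,
    "queuing in routers": 8.0,
    "server processing": 4.0,
}

total = sum(delays_ms.values())
verdict = "within" if total <= THRESHOLD_MS else "over"
print(f"Total: {total:.1f} ms -> {verdict} the {THRESHOLD_MS} ms budget")
```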
How Do We Achieve Low Latency?
Reducing latency requires us to look at each source of delay and figure out ways to shrink it. This can include smarter ways to manage congestion (which can reduce the “queuing delay” described above) and even changes to the way today’s network protocols work.
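Here’s a rough sketch of why queuing matters: the longer the line of packets already waiting at a link, the longer a new packet must wait before it can be transmitted. The link rate and queue depths below are illustrative assumptions:

```python
# Back-of-the-envelope queuing delay at a single link: a new packet must wait
# for everything already queued ahead of it to be transmitted first.
# The link rate and queue depths are illustrative assumptions.
LINK_RATE_BPS = 100_000_000  # a 100 Mbps link

def queuing_delay_ms(bytes_ahead: int) -> float:
    """Time (in ms) to drain the bytes queued ahead of a new packet."""
    return bytes_ahead * 8 / LINK_RATE_BPS * 1000

for queued_bytes in (0, 15_000, 150_000, 1_500_000):
    print(f"{queued_bytes:>9} bytes ahead -> "
          f"{queuing_delay_ms(queued_bytes):6.2f} ms of waiting")
```

Keeping those queues short, through smarter congestion management, is exactly where much of the latency improvement comes from.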
Reducing latency on cable networks is something CableLabs has been working on for many years—long before it became a talking point for 5G—and we’re always coming up with new innovations to reduce latency and improve network performance. The most recent of these efforts are Low Latency DOCSIS, which can reduce latency for real-time applications such as online gaming and video conferencing, and Low Latency Xhaul, which reduces latency when a DOCSIS network is used to carry mobile traffic.
How Does Low Latency Affect Me and My Future?
Achieving low latency opens the door to doing things in near real time: talking to friends and family as if they were close by, interacting in online worlds without delays and simply making online experiences quicker and better. In the long term, when combined with the higher-capacity networks currently in development, low latency will enable new technologies like immersive, interactive VR experiences and other applications that haven’t been invented yet.
The future looks fast and fun.