Wireless

TCO of DOCSIS® Network XHaul vs. Fiber BackHaul: How DOCSIS Networks Triumph in the Indoor Use Case

Shahed Mazumder
Principal Strategist, Wireless Technologies

Joey Padden
Distinguished Technologist, Wireless Technologies

Jennifer Andreoli-Fang
Distinguished Technologist, Wireless Technologies

Dec 12, 2018

In our recently published blog post, we demonstrated why indoor femtocells have reemerged as an attractive deployment model. In particular, indoor network densification has huge potential for converged cable/wireless operators who can leverage their existing Hybrid Fiber Coax (HFC) footprint to either backhaul from full-stack femtocells or fronthaul from virtual Radio Access Network (vRAN) remote radio units.

In the second blog in our series, we shift the focus from system level benefits to making the business case. As we walk through our TCO model, we will show a 40% to 50% reduction in Total Cost of Ownership (TCO) for an indoor deployment model served by DOCSIS networks compared to a more traditional outdoor deployment served by fiber. Yeah, that is big, so let’s break down how we got there.

Why DOCSIS Networks?

Before jumping into the TCO discussion, let’s revisit the key motivations for using DOCSIS networks as a tool for mobile deployments:

  • Broad-based availability: In a Technical Paper prepared for the SCTE-ISBE 2018 Fall Technical Forum, a major Canadian MSO pointed out that there is typically three to five times more coax cable than fiber in its major metro markets. In the US too, per the FCC's June 2017 statistics, nationwide cable households passed (HHP) stand at 85% (115M units), whereas fiber HHP stands at 29% (39M units).
  • Gigabit footprint: As of June 2018, over 63% of US homes have access to gigabit service over cable. In other markets, cable operators are pushing ahead with gigabit buildouts as well.
  • Ease of site acquisition: No permitting, no make ready, limited installation effort.
  • Evolving mobile-friendly technology: Ranging from latency optimization to timing/synchronization techniques and vRAN support for non-ideal fronthaul links like DOCSIS networks.

Scenarios We Looked At

For the TCO comparison, we looked at the following three deployment scenarios:

Scenario 1: Outdoor small cell served by leased fiber backhaul


This is the traditional solution for deploying small cells. For our TCO model, we treated this as the baseline.

Scenario 2: Indoor femtocell/home eNodeB served by residential/SMB DOCSIS network links as backhaul.


In this scenario, we modeled the deployment of a full-stack femtocell in residential customer homes and small to medium businesses (SMB) served by the converged operator's DOCSIS network. A converged operator here refers to a cable operator that operates both a DOCSIS network and a mobile network.

Scenario 3: Indoor vRAN Remote Radio Unit (RRU) served by residential/SMB DOCSIS network links as fronthaul


Scenario 3 is essentially scenario 2 but using vRAN. In this case, the virtual baseband unit (vBBU) could be deployed on general-purpose processor (GPP) servers at the distribution hub site, with low-cost radio units deployed in DOCSIS gateways at the customer premises or SMB location.

Apples to Apples

To build the TCO model, we start with a representative suburban/urban area we want to model. In our case, we used a 100 sq. km area with a total of 290K households (HH). At 2.4 people/HH (the US average), our modeled area covers roughly 700K people.

Next, we considered that this area is already served by 10 outdoor macrocells, but the operator needs to boost capacity through network densification.

Under Scenario 1, the operator deploys 640 outdoor small cells that cut existing macro cells’ traffic load by half and boost the spectral efficiency (and therefore capacity) across the network. To create an apples-to-apples comparison of system capacity under all three scenarios, we applied the concept of normalized spectral efficiency (SE) and kept that consistent across the three scenarios. For SE normalization, we added up weighted SE for different combinations of Radio Location-User Location (e.g. In-In, Out-Out, Out-In) in each scenario.

In the end, we used the normalized SE to find the appropriate scale for each scenario to achieve the same result at the system level, i.e. how many femtocells/vRAN radios will be required in indoor scenarios (2 & 3) so the system capacity gain is comparable to the traditional deployment in scenario 1.
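To make the normalization step concrete, here is a minimal Python sketch of the weighted-SE calculation described above. The traffic weights and per-combination SE values are illustrative placeholders of our own choosing, not the inputs used in our model.

```python
# Minimal sketch of SE normalization across Radio Location-User Location combinations.
# The weights and SE values below are illustrative placeholders, not our model inputs.

def normalized_se(combos):
    """Weighted average spectral efficiency (bps/Hz) over traffic combinations."""
    total_weight = sum(w for w, _ in combos)
    return sum(w * se for w, se in combos) / total_weight

# (traffic_weight, spectral_efficiency) per combination
scenario_1 = [(0.2, 4.0),   # Out-Out: outdoor small cell serving an outdoor user
              (0.8, 1.0)]   # Out-In:  outdoor small cell serving an indoor user
scenario_2 = [(0.7, 6.0),   # In-In:   indoor femtocell serving an indoor user
              (0.1, 4.0),   # Out-Out
              (0.2, 1.0)]   # Out-In:  remaining indoor traffic still served outdoors

for name, combos in [("Scenario 1", scenario_1), ("Scenario 2", scenario_2)]:
    print(name, round(normalized_se(combos), 2), "bps/Hz")
```

Scaling each indoor scenario then amounts to finding the deployment size at which its normalized SE (and thus system capacity gain) matches scenario 1's.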

Work Smarter, Not Harder

Crucially, converged operators know who their heavy cellular data users are and, among them, who consistently uses the network during non-business hours, i.e., most likely from their residences. As an example, a CableLabs member shared empirical data showing that the top 5% of their users consume between 25% and 40% of overall cellular network capacity on a monthly basis.

So, as a converged operator, if you want to prevent at least 25% of network traffic from traversing walls, you can proactively distribute home femtocells or RRUs to only the top 5% of your users (assuming their entire consumption happens indoors).

In our model, we used the following approach to get the scale of indoor deployment for scenarios 2 and 3:

Figure-A: Determining Scale of Deployment for Indoor Use Cases


Therefore, theoretically, if only 2.5% of subscribers start using indoor cellular resources, we can achieve the same SE improvement in scenarios 2 and 3 as observed under scenario 1's fiber-fed outdoor deployment.

However, we know that assuming 100% of heavy users' traffic is consumed at home or indoors is unrealistic. To account for a combination of real-world factors (indoor doesn't mean only at your residence, some of those heavy-user locations may not be serviceable by a DOCSIS network, and some users may opt out of using a home femtocell/RRU), we boosted the percentage of the subscriber base modeled with femtocell/RRU deployments to 12.5% (or roughly 35K units) to make sure that we can capture at least 12.5% of cellular traffic in scenarios 2 and 3.
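The back-of-the-envelope arithmetic behind that deployment scale is captured in the short sketch below. The household count and percentages come from the text above; the variable names and the assumption that the subscriber base roughly equals households passed are ours.

```python
# Rough arithmetic behind the indoor deployment scale (values from the text;
# the subscriber-base ~= households simplification is ours).

households = 290_000            # households (HH) in the modeled 100 sq. km area
people_per_hh = 2.4             # US average
population = households * people_per_hh           # ~700K people

theoretical_pct = 0.025         # "only 2.5% of subscribers" in the ideal case
modeled_pct = 0.125             # boosted to 12.5% after real-world haircuts

units = households * modeled_pct                   # femtocell/RRU units to deploy

print(f"Population covered: ~{population / 1e3:.0f}K people")
print(f"Ideal-case indoor users: ~{households * theoretical_pct / 1e3:.1f}K")
print(f"Modeled femtocell/RRU units: ~{units / 1e3:.0f}K (text: roughly 35K)")
```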

Our Analysis and Key Takeaways

For the TCO model assumptions, we gathered a wide range of input from a number of CableLabs operators and vendors. In addition, we validated our key assumptions with several members of the Telecom Infra Project (TIP) vRAN Fronthaul project group.

Though configurable in our model, our default TCO term is 7 years. Also, we calculated the TCO per user passed and focused on the relative difference among scenarios to de-emphasize the overall cost (in dollars), which will differ by market, scale of deployment, and supplier dynamics, among other things.
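To show the structure of a per-user-passed TCO comparison, here is a minimal sketch of the calculation. All dollar figures are made-up placeholders (chosen only to be directionally consistent with the observations below), not the assumptions used in our model.

```python
# Skeleton of a per-user-passed TCO comparison over a configurable term.
# All dollar figures are made-up placeholders, NOT our model's inputs.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    capex: float            # one-time deployment cost ($)
    opex_per_year: float    # recurring cost: site/fiber leases, DOCSIS capacity, power ($/yr)

def tco_per_user_passed(s: Scenario, users_passed: int, years: int = 7) -> float:
    """Total cost of ownership over the term, normalized per user passed."""
    return (s.capex + s.opex_per_year * years) / users_passed

users_passed = 700_000   # people in the modeled area

scenarios = [
    Scenario("1: outdoor small cell + leased fiber", capex=20e6, opex_per_year=8e6),
    Scenario("2: indoor femtocell over DOCSIS",      capex=18e6, opex_per_year=4e6),
    Scenario("3: indoor vRAN RRU over DOCSIS",       capex=9e6,  opex_per_year=5e6),
]

baseline = tco_per_user_passed(scenarios[0], users_passed)
for s in scenarios:
    tco = tco_per_user_passed(s, users_passed)
    print(f"{s.name}: ${tco:.2f}/user passed ({tco / baseline - 1:+.0%} vs. baseline)")
```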

Figure-B: Summary of 7-Yr. TCOs across 3 Deployment Scenarios


According to our base case assumptions, we see the following:

  • TCO in scenarios 2 and 3 can be around 40% to 50% lower than the TCO in scenario 1.
  • For scenario 1, Opex stands out as it involves large fees associated with outdoor small cell site lease and fiber backhaul lease.
  • Scenario 2 commands a higher Capex than scenario 3, largely because of the higher (~2X) unit price of a full-stack home femtocell (vs. a home RRU) and the need for a security gateway, which is not required in scenario 3.
  • Scenario 3's Opex is nearly double scenario 2's, as it requires significantly more DOCSIS network capacity for the upstream link. Yet, notably, despite the increased DOCSIS network capacity used by a vRAN deployment, the TCO is still the most favorable.
  • We allocated 20% of the DOCSIS network upgrade (from low split to mid split) cost to the DOCSIS network-based use cases (scenarios 2 and 3). If we take that out (since DOCSIS network upgrades will happen anyway for residential broadband services), the TCO of these indoor use cases gets even better compared to the fiber outdoor case (scenario 1).
  • Other key sensitivities include monthly cost/allocated cost of the XHaul, number of small cell sites within a small cell cluster, radio equipment cost, and estimated number/price of threads required for vBBU HW to serve a cluster in the vRAN scenario.

In an upcoming strategy brief (CableLabs member operators only), we intend to share more details on our methodology, assumptions and breakdown of observed results (both Capex and Opex) along with a sensitivity analysis.

What Do These Results Mean?

To us, it was always a no-brainer that a DOCSIS network-based deployment would have favorable economics compared to a fiber-based model. The TCO model introduced here confirms and quantifies that perceived benefit and points out that for network densification, there is a business case to pursue the indoor femtocell use case where market conditions are favorable.

Subscribe to our blog because our exploration of DOCSIS networks for mobile deployments isn't over. Coming up next, we explore a similar TCO model focused on outdoor deployments served by DOCSIS backhaul. Later we will shift back to technology as we look at the DOCSIS network's ability to support advanced features such as CoMP.



Wireless

Converged Carriers, Femtocells and Spectral Efficiency: Rethinking the Traditional Outdoor Small Cell Deployment

Joey Padden
Distinguished Technologist, Wireless Technologies

Nov 15, 2018

With the release of any new generation, or “G,” in the cellular world, the goal is always to outperform the previous generation when it comes to spectral efficiency—that is, how many bits you can pack into your slice of airwaves. To telecom nerds, this is expressed as bits per second per hertz (bps/Hz). Going from 3G to 5G, peak spectral efficiency skyrockets from 1.3 bps/Hz with 3G, to 16 bps/Hz with 4G LTE, to 30 bps/Hz with LTE-A, and to a truly eye-watering 145 bps/Hz with 5G (in the lab).

And it makes sense: Spectrum is an expensive and limited resource. Operators pay billions for every MHz they can acquire.

Not What It Seems

Unfortunately, the reality of spectral efficiency in deployed mobile networks is far less stratospheric. A 2017 study pegged spectral efficiencies for a live LTE network at roughly 1 bps/Hz on average with a peak of about 5.5 bps/Hz. So where did all that spectral efficiency go?

The short answer is that it ran smack into a wall. Literally! In 2016, ABI Research Director Nick Marshall said that “more than 80 percent of all traffic [is] originating or terminating indoors,” and we serve the vast majority of that traffic with outdoor cells.

The Inertia of Tradition

In the push toward 5G, we hear a lot about network densification. So far, given the amount of effort going into changing the siting rules, it sounds like the plan is to deploy more outdoor cells to help increase spectral efficiency in 5G networks. In a recent RCR Wireless News article, the headline read “US outdoor small cell antenna shipments to grow by 75% in 2018: Study,” citing a study by EJL Wireless Research.

Putting aside the immense issues facing the economics of that approach (more on that in the next blog post covering our TCO analysis), it still relies on an architecture of deploying outdoor cells to handle a largely indoor traffic load. It still puts literal barriers in the way of increased spectral efficiency.

Airtime Perspective

Let’s quantify this issue a bit to make sure we have a shared perspective on the system capacity impact of using outdoor cells to handle indoor traffic because it’s a big deal.


Sending a typical video packet from an outdoor cell to an outdoor user takes 33 resource blocks, whereas sending that same packet to a deep indoor user can take 209 resource blocks (1500B IP packet, I_TBS 3 vs. I_TBS 19, TM2 with 2TRx)! On average, it takes seven times more airtime resources to serve an indoor user than an outdoor user.
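As a back-of-the-envelope view of that gap, the sketch below works only from the resource block counts quoted above; the derived bits-per-RB figures are approximations and ignore the non-linearity of the real LTE transport block size table.

```python
# Back-of-the-envelope view of the airtime gap quoted above. The RB counts are
# taken from the text; everything derived from them is approximate.

packet_bits = 1500 * 8           # 1500B IP packet

rb_outdoor = 33                  # outdoor user, good channel (I_TBS 19)
rb_deep_indoor = 209             # deep indoor user, poor channel (I_TBS 3)

bits_per_rb_outdoor = packet_bits / rb_outdoor        # ~364 payload bits per RB
bits_per_rb_indoor = packet_bits / rb_deep_indoor     # ~57 payload bits per RB

print(f"Outdoor user:     {rb_outdoor} RBs  (~{bits_per_rb_outdoor:.0f} bits/RB)")
print(f"Deep indoor user: {rb_deep_indoor} RBs (~{bits_per_rb_indoor:.0f} bits/RB)")
print(f"Airtime penalty:  {rb_deep_indoor / rb_outdoor:.1f}x for this worst case "
      f"(the text cites ~7x on average across indoor users)")
```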

Given the inefficiency, why are we still trying to cross the walls?

User Behavior

It’s probably not news to anyone that indoor penetration is costly. A common industry view says that when a user is indoors, his or her data should be served by Wi-Fi to offload the burden from the cellular network. Industry reports are produced every year showing that large amounts of traffic from mobile devices are offloaded to Wi-Fi networks (e.g., ~80 percent in 2017).

However, as the industry moves toward unlimited data plans, and as mobile speeds increase, the incentives for seeking out Wi-Fi for offload are diminishing. A recent CableLabs Strategy Brief (CableLabs membership login required) provides empirical data showing that Wi-Fi data offload is declining as adoption of unlimited data plans increases. The trend, across all age groups, shows increased cellular data usage. So as demand for cellular data is going up, an increasing portion is going to be crossing the walls.

There are a number of long-held complaints about the Wi-Fi user experience. I won’t enumerate them here, but I’ll point out that as the incentives to offload data to Wi-Fi are weakened, even the slightest hiccup in the Wi-Fi user experience will drive a user away from that offload opportunity at the expense of your cellular system capacity.

Introducing Low-Cost Femtocells

There’s a growing breed of operator that has both cellular operations and traditional cable hybrid fiber coax (HFC) infrastructure—a big wired network and a big wireless network (Note: here I am talking about full MNOs with HFC/DOCSIS networks, not MVNOs. MVNOs with HFC/DOCSIS networks will have different goals in what optimizing looks like). For these operators, the carrots of convergence dangle in all directions.

Over the past couple of years, CableLabs has ramped up efforts to solve the technology issues that have traditionally hindered convergence. Latency concerns for backhaul or vRAN fronthaul can be resolved by the innovative Bandwidth Report project. CableLabs leadership in the TIP vRAN Fronthaul project is making latency-tolerant fronthaul protocols a reality. Timing and synchronization challenges presented by indoor deployments are months away from commercialization, thanks to CableLabs’ new synchronization spec.

The summation of these projects (and more on the way) provides a suite of tools that converged operators can leverage to deploy mobile services over their HFC/DOCSIS network.

Enter the femtocell deployment model. Femtocells aren’t new, but with the new technologies developed by CableLabs, for the first time, they can be done right. Gone are the days of failed GPS lock, poor handover performance, and interference issues (topics of our 3rd blog in this series). From a spectral and economic viewpoint, femtocells over DOCSIS are poised to be the most efficient deployment model for 4G evolution and 5G cellular densification.

Wi-Fi Precedence

Take Wi-Fi as a guide to how femtocells can improve spectral efficiency. Modern Wi-Fi routers—even cheap home routers—regularly provide devices with physical link rates approaching 10 bps/Hz. That is a huge gain over the sub-1 bps/Hz achieved using an outdoor cell to serve an indoor user. In such a scenario, the benefits are myriad and shared between the user and the operator: The user experience is dramatically improved, the operator sees huge savings in outdoor system capacity, and it all occurs with more favorable economics compared to traditional small cell strategy.

When selectively deployed alongside home Wi-Fi hotspots, indoor femtocells give the converged operator the chance to capture the majority of indoor traffic with an indoor radio, freeing the outdoor radio to better serve outdoor traffic.

More Discussion to Come

In this post, I talked about the spectral efficiency problems of traditional outdoor small cell deployments and how a femtocell deployment model can address them. Next time, I’ll discuss a total cost of ownership (TCO) model for femtocells over a DOCSIS network, both full-stack and vRAN-based solutions.

And don’t take my word for it! Stay tuned to the CableLabs blog over the next couple months for more discussions about cellular deployments over a DOCSIS network.



DOCSIS

vRAN Over DOCSIS: CableLabs Making it a Reality

Joey Padden
Distinguished Technologist, Wireless Technologies

Feb 26, 2018

In November, CableLabs announced the opening of our new Telecom Infra Project (TIP) Community Lab. Today, CableLabs joins TIP in releasing a whitepaper, making public deeper insights into the vRAN fronthaul interface under development in the TIP vRAN Fronthaul project group. With this new interface, the addressable market for virtualized RAN (vRAN) deployment architectures can grow significantly. This increased market is evidenced by the diverse set of use cases being sponsored by the growing set of operator-based TIP Community Labs.

With the release of the white paper, the project group highlights key milestones which have been reached, including agreements further defining the open API and a set of interoperability metrics to be used in validating the interface in multi-vendor configurations.

As the project continues, work on the CableLabs DOCSIS network vRAN fronthaul use case will take place at the CableLabs TIP Community Lab. We look forward to sharing more as we continue to check milestones off our list, so check back soon for updates.

You can find the whitepaper "Creating an Ecosystem for vRANs Supporting Non-Ideal Fronthaul v1.0" here.

Wireless

A Little LTE for You & Me: Build Your Own LTE Network on a Budget

Joey Padden
Distinguished Technologist, Wireless Technologies

Nov 14, 2017

If you’re in a technology role in the cable industry, you’re probably aware that cable is undergoing a tectonic shift from “the future is wired” to “the future is wireless.” Wireless means a lot of things to a lot of people. In the past, wireless meant Wi-Fi if you were talking to a cable nerd. But today, wireless is rapidly shifting to mean mobile, or more specifically 4G LTE and/or 5G. For those of you interested in this wireless future, below, I'll explain how you can build your very own LTE network on a budget.

Time to Tinker

I learn best by doing. Growing up this always terrified my parents. Now that I’ve matured a bit (eh hem), this tendency manifests less as a risk of bodily harm and more as time spent in the lab tinkering. My tinker target as of late has been LTE networks. It turns out there are open source solutions and low-cost parts out there that let you build a simple LTE network (eNB + EPC) for about $1500. I’ve been studying LTE since about 2013, but the last couple of months building and configuring LTE components in the lab have taught me about as much as the prior years combined.

In addition to the great learnings that came from my efforts, we (CableLabs) have ended up with a great tool for research and experimentation. With a cheap and fully open source LTE network we can explore novel use cases, configurations, and deployment architectures, without the need for outside collaboration. Don’t get me wrong, we love collaborating with industry partners here at CableLabs, but it’s great to kick the tires on an idea before you start engaging outside partners. Using this setup, we have the freedom to do just that.

Hardware

The hardware setup is straightforward:

  • Two Intel quad-core i7 PCs
  • A software-defined radio
  • A SIM card
  • The UE

An example bill of materials is below. Replacement of any device with a similarly spec’d product from a different manufacturer should be fine (this list is not meant to be prescriptive or seen as an endorsement).



Software

For both machines, we use Ubuntu as our OS. The LTE system software comes from an open source project called Open Air Interface (OAI). This OAI software is broken into two projects:

  1. The eNodeB (eNB) called “openairinterface5G”
  2. The evolved packet core (EPC) called “openair-cn”

Figure 1 shows the LTE functional elements included in each project:

Once downloaded and built, you get four executables: eNB, HSS, S/PGW, and MME. With my limited Linux chops, it took me a couple of days to get everything happy and running. But for a Linux ninja, even one with limited LTE knowledge, it can be up and running in a day.

For help getting it going, OAI has a great wiki with a bunch of how-to docs and mailing lists that are quite active. In addition to the great docs on the OAI wiki, do some googling and you’ll see many forum posts and how-to sites around the web, e.g., here is a great tutorial for doing EPC + eNB in a single PC.

It largely works. It’s open source, so the stability is ok, but don’t expect weeks of uptime. Also, note the SGW and PGW are a single executable, so the S5/S8 interfaces are not exposed, even though it’s a solid line in Figure 1. Does this limit your plug-n-play interoperability testing a bit? Sure, but overall the solution is tough to beat for the price.

Another thing to watch out for is UE interoperability. Many phones work (for example, the Samsung S7 and Moto G4), but others don’t. LTE has many variations on the attach procedure, and not all are currently supported by OAI’s EPC. But again, it’s free! And it supports some mainstream, readily accessible phones, which is pretty sweet.

Other Things to Consider

So we discussed the basics, but there are a couple of other bits you need to line up to get everything working:

  • Even though this setup is for tinkering, you will need a plan for regulatory compliance if you want to go over-the-air. For example, in the US you’ll need to contact the FCC to apply for a Special Temporary Authority for the frequency of your choice. Alternatively, you can conduct all of your testing over cables in your lab. In that case, a UE with external antennas becomes really handy, e.g., the Huawei B593 family of products is what we have used (added bonus that it works great with the OAI EPC).
  • You will also need to get some SIM cards. SIM cards are wildly more complicated than I ever realized! My best advice is to go to the experts for help. Gemalto is the tier 1 provider. If you are a tier 1 kinda person, maybe start there. We have also found SmartJac to be super helpful. In either case, I advise starting with the OAI default SIM data set. It will make your initial connection efforts that much easier. Once you get that working, if you want to change the details, you can use SIM editing software from either Gemalto or SmartJac.

Now do something cool!

Now that you are armed with some knowledge, go forth and make some LTE! Post in the comments if you have questions or want to share your project. If you run into issues, post in the forums I linked to, or on the reflector… you get the idea…

--

We just announced our new TIP Community Lab where engineers will have access to a bevy of state-of-the-art wired and wireless test equipment. Make sure to read my blog post "CableLabs Introduces New Telecom Infra Project (TIP) Community Lab" for more information and subscribe to our blog to find out about future innovations. 

 

Labs

CableLabs Introduces New Telecom Infra Project (TIP) Community Lab

Joey Padden
Distinguished Technologist, Wireless Technologies

Nov 8, 2017

Today we are excited to announce a new venue for wireless network innovation and collaboration at CableLabs. CableLabs and the Telecom Infra Project (TIP) have opened a TIP Community Lab located at CableLabs’ headquarters in Louisville, Colorado.

What is a TIP Community Lab?

The TIP Community Lab is an integral component of community-based innovation with data-driven results. The goal of a Community Lab is to enable at-scale real-world projects that lead to adoption. These labs provide an open and collaborative working environment for members of TIP project groups to meet, test and refine the solutions they’re developing.

Currently, Community Labs are located at the offices of Facebook and SK Telecom. Today, beyond the CableLabs announcement, Deutsche Telekom announced the opening of its Community Lab in Berlin and Bharti Airtel announced that it is launching a Community Lab based in India.

What goes on at the CableLabs Community Lab?

At CableLabs, we set aside dedicated lab space for the TIP Community Lab. When at the CableLabs TIP Community Lab, engineers will have access to a bevy of state-of-the-art wired and wireless test equipment, including:

  • Channel emulators
  • Traffic generators
  • LTE and DOCSIS sniffers
  • A host of HFC networks we use for lab work
  • Various LTE UEs
  • Multiple EPCs (LTE core network)

The first project to enter the CableLabs TIP Community Lab is the vRAN Fronthaul project. This project is focused on virtualization of the radio access network (RAN) for non-ideal fronthaul links (i.e. not CPRI). A key component of 5G wireless networks is going to be densification: deploying more, smaller cell sites closer to the users. Think of a small cell site inside your favorite coffee shop, or several small cells peppered throughout the hottest restaurant and bar streets in your city.

The economics of this deployment style don't support pulling fiber links to every small cell; it’s just too expensive. Therefore, a fronthaul technology capable of using “non-ideal” links to connect these small cells (e.g., DOCSIS®, G.fast, Ethernet, microwave) can enable new deployment economics.

The Telecom Infra Project

Founded in February 2016, TIP is an engineering-focused initiative driven by operators, suppliers, integrators and startups to disaggregate the traditional network deployment approach. The community’s collective aim is to collaborate on new technologies, examine new business approaches and spur new investments in the telecom space. TIP has more than 500 member companies, including operators, equipment vendors and system integrators. TIP currently has project groups working in the strategic areas of Access, Backhaul, and Core and Management.

CableLabs began participating in TIP a year ago and we now hold a seat on the TIP Technical Committee. We view TIP as a great opportunity for cross-pollination between the different ecosystems that influence the telecommunications networks of the future, and an excellent opportunity to leverage the diverse skills within the TIP community to create new possibilities for end users.

Everyone who has access to 4G LTE today loves how speedy their smartphone is and they want more. They want the speeds that 5G wireless networks promise. But let’s be honest, we want it for equal to or less than what we pay for our service today. TIP is focused on building networks of the future through collaboration that will give operators the flexibility to grow their networks quickly, efficiently and in a cost-effective manner while delivering the 5G speeds users will demand.

In addition, there are more than 4 billion people who are not online. Dramatic improvements in network flexibility and cost reduction would help close this digital divide. To meet these two goals, the industry should pursue new approaches to deploying wireless networks.

Interested?

CableLabs members interested in more information should check out the CableLabs Tech Brief on the topic posted in Infozone (login required). The CableLabs Community Lab is a great opportunity for telecom vendors unfamiliar with cable infrastructure to get their hands dirty with HFC and DOCSIS networks.

CableLabs is also active in other TIP project groups that may come to the Community Lab in the future. For example, we participate in the Edge Computing group. The Edge Computing group focuses on lab and field implementations for services/applications at the network edge, leveraging open architecture, libraries, software stacks and MEC. Contact CableLabs principal architect of network technologies, Don Clarke, if you want more details.

The TIP Community Lab continues the tradition of innovation at CableLabs. So stay tuned, this is just the beginning of exciting news to come from the work going on in the CableLabs TIP Community Lab.

If you got this far and you’re thinking “I want me some of that Community Lab goodness,” join TIP! You can sign up here and get involved. Project groups are open to anyone, operators and vendors alike; collaboration is what it’s all about, and we’re excited to help facilitate it.

Video: TIP Summit 2017 - Patrick Parodi - Panel Discussion.

Consumer

Can a Wi-Fi radio detect Duty Cycled LTE?

Joey Padden
Distinguished Technologist, Wireless Technologies

Jun 24, 2015

For my third blog I thought I’d give you preview of a side project I’ve been working on. The original question was pretty simple: Can I use a Wi-Fi radio to identify the presence of LTE?

Before we go into what I’m finding, let’s recap: We know LTE is coming into the unlicensed spectrum in one flavor or another. We know it is (at least initially) going to be a tool that only mobile network operators with licensed spectrum can use, as both LAA and LTE-U will be “license assisted” – locked to licensed spectrum. We know there are various views about how well it will coexist with Wi-Fi networks. In my last two blog posts (found here and here) I shared my views on the topic, while some quick Googling will find you a different view supported by Qualcomm, Ericsson, and even the 3GPP RAN Chairman. Recently the coexistence controversy even piqued the interest of the FCC who opened a Public Notice on the topic that spawned a plethora of good bedtime reading.

One surefire way to settle the debate is to measure the effect of LTE on Wi-Fi, in real deployments, once they occur. However, to do that, you must have a tool that can measure the impact on Wi-Fi. You’d also need a baseline, a first measurement of Wi-Fi-only performance in the wild to use as a reference.

So let’s begin by considering this basic question: using only a Wi-Fi radio, what would you measure when looking for LTE? What Key Performance Indicators (KPIs) would you expect to show you that LTE was having an impact? After all, to a Wi-Fi radio LTE just looks like loud informationless noise, so you can’t just ask the radio “Do you see LTE?” and expect an answer. (Though that would be super handy.)

To answer these questions, I teamed up with the Wi-Fi performance and KPI experts at 7Signal to see if we could use their Eye product to detect, and better yet, quantify the impact of a co-channel LTE signal on Wi-Fi performance.

Our first tests were in the CableLabs anechoic chamber. This chamber is a quiet RF environment used for very controlled and precise wireless testing. The chamber afforded us a controlled environment to make sure we’d be able to see any “signal” or difference in the KPIs produced by the 7Signal system with and without LTE. After we confirmed that we could see the impact of LTE on a number of KPIs, we moved to a less controlled, but more real world environment.

Over the past week I’ve unleashed duty cycled LTE at 5GHz (a la LTE-U) on the CableLabs office Wi-Fi network. To ensure the user/traffic load and KPI sample noise was as real world as possible… I didn’t warn anybody. (Sorry guys! That slow/weird Wi-Fi this week was my fault!)

In the area tested, our office has about 20 cubes and a break area with the majority of users sharing the nearest AP. On average throughout the testing we saw ~25 clients associated to the AP.

We placed the LTE signal source ~3m from the AP. We chose two duty cycles, 50% of 80 ms and 50% of 200 ms, and always shared a channel with a single Wi-Fi access point within energy detection range. We also tested two power levels, -40 dBm and -65 dBm at the AP, so we could test with LTE power above and just below the Wi-Fi LBT energy detection threshold of -62 dBm.

We will have more analysis and results later, but I just couldn’t help but share some preliminary findings. The impact to many KPIs is obvious and 7Signal does a great job of clearly displaying the data. Below are a couple of screen grabs from our 7Signal GUI.

The first two plots show the tried and true throughput and latency. These are the most obvious and likely KPIs to show the impact and sure enough the impact is clear.

Figure 1 - Wi-Fi Throughput Impact from Duty Cycle LTE

Figure 2 - Wi-Fi Latency Impact from Duty Cycle LTE

We were able to discern a clear LTE “signal” from many other KPIs. Some notable examples were channel noise floor and the rate of client retransmissions. Channel noise floor is pretty self-explanatory. Retransmissions occur when either the receiver was unable to successfully receive a frame or was blocked from sending the ACK frame back to the transmitter. The ACK frame, or acknowledgement frame, is used to tell the sender the receiver got the frame successfully. The retransmission plot shows the ratio of retransmitted frames over total captured frames in a 10-second sample.
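For clarity, that retransmission KPI boils down to a simple ratio per capture window; the sketch below shows the calculation with made-up frame counts (the counts are not from our measurements).

```python
# Simple form of the retransmission-rate KPI, computed over one capture window.

def retransmission_rate(retransmitted_frames: int, captured_frames: int) -> float:
    """Ratio of retransmitted frames to total captured frames in one sample."""
    return retransmitted_frames / captured_frames if captured_frames else 0.0

# e.g., 120 retransmissions out of 800 frames captured in a 10-second sample
print(f"{retransmission_rate(120, 800):.1%}")   # -> 15.0%
```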

As a side note: Our findings point to a real problem when the LTE power level at an AP is just below the Wi-Fi energy detection threshold. These findings are similar to those found in the recent Google authored white paper attached as Appendix A to their FCC filing.

Figure 3 - Channel Noise Floor Impact from Duty Cycle LTE

Figure 4 - Wi-Fi Client Retransmission Rate Impact of Duty Cycle LTE

Numerous other KPIs showed the impact but require more post processing of the data and/or explanation so I’ll save that for later.

In addition to the above plots, I have a couple of anecdotal results to share. First, our Wi-Fi controller was continuously changing the channel on us, making testing a bit tricky. I guess it didn’t like the LTE much. Also, we had a handful of CableLabs employees figure out what was happening and say “Oh! That’s why my Wi-Fi has been acting up!” followed by various defamatory comments about me and my methods.

Hopefully all LTE flavors coming to the unlicensed bands will get to a point where coexistence is assured and such measures won’t be necessary. If not — and we don’t appear to be there yet — it is looking pretty clear that we can detect and measure the impact of LTE on Wi-Fi in the wild if need be. But again, with continued efforts by the Wi-Fi community to help develop fair sharing technologies for LTE to use, it won’t come to that.

Wireless

Wi-Fi vs. Duty Cycled LTE: A Balancing Act

Joey Padden
Distinguished Technologist, Wireless Technologies

Dec 3, 2014

In the second installment of my discussion on proposed LAA-LTE and Wi-Fi coexistence schemes, I am going to look at duty cycled solutions.

Let’s recap: We know that this new technology for unlicensed spectrum will be available only to mobile operators since the mobile industry standards body (3GPP) has decided not to pursue the ‘standalone’ version of LTE-unlicensed. (Hence the LAA acronym, which stands for License Assisted Access.) Much of our focus is therefore on ensuring that this new proprietary technology won’t disadvantage other users of unlicensed spectrum, like Wi-Fi. In my last post, I explored the impact of having LAA-LTE adopt “listen before talk” politeness standards, and found that they were not a coexistence panacea. Now let’s cover other politeness approaches.

Different from the LBT approaches discussed last time, duty cycled configurations do not sense the channel before transmitting. Instead, they turn the LTE signal on and off, occupying the channel for some period of time, and then vacating the channel to allow other networks (e.g. Wi-Fi) to access for some time. See Figure 1 for a simple visual of how a duty cycle works.


Figure 1

This approach has been proposed by various sources. The first reference we found was a Nokia research whitepaper, but recently Qualcomm has also proposed an adaptive version they dub Carrier Sense Adaptive Transmission (CSAT) with a flexible duty cycle, while still others, including ZTE in their recent 3GPP contribution, are suggesting a time domain multiplexing (“TDM,” another name for duty cycling) approach.

Duty Cycle %

Duty cycle configurations have two main knobs that define the on and off behavior, the duty cycle percentage and the duty cycle period. Essentially, the duty cycle is a repeating on/off pattern where the period defines how often the pattern repeats (usually in milliseconds for our discussion) while the duty cycle percentage is the fraction of the period that LTE is turned on. See Figure 1 for how these two are related.

Let’s first look at the duty cycle percentage. This configuration knob has a very easy-to-understand cause-and-effect relationship with coexistence.

Let’s take an example of Wi-Fi and LTE sharing a single channel. In general, the duty cycle of LTE will define the time split between the two networks because Wi-Fi is a polite protocol that does listen-before-talk (LBT). So if Wi-Fi were alone on the channel, it would get 100% of the airtime. If LTE joins the same channel with a 50% duty cycle, Wi-Fi would now get 50% of the airtime because it would sense the LTE and stop transmitting. In general this means Wi-Fi would get about 50% of the throughput it had in the 100% airtime case.
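The arithmetic is simple enough to show in a couple of lines. This is a first-order illustration only (it ignores edge effects at the on/off transitions), and the baseline throughput number is a made-up placeholder.

```python
# Toy illustration of the airtime split: because Wi-Fi defers to energy it hears,
# an LTE duty cycle of X% leaves Wi-Fi roughly (100 - X)% of the airtime.

def wifi_share(lte_duty_cycle: float) -> float:
    """Approximate fraction of airtime left for Wi-Fi (ignores edge effects)."""
    return 1.0 - lte_duty_cycle

baseline_wifi_throughput_mbps = 100.0   # placeholder: Wi-Fi alone on the channel

for duty in (0.0, 0.25, 0.50, 0.75):
    share = wifi_share(duty)
    print(f"LTE duty cycle {duty:.0%}: Wi-Fi gets ~{share:.0%} airtime, "
          f"~{baseline_wifi_throughput_mbps * share:.0f} Mbps")
```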

Now, notice above I said “in general” when talking about the duty cycle percentage? The predictable relationship described starts to break down if the duty cycle period gets really small. So let’s look more closely at the duty cycle period.

Duty Cycle Period

Of the duty cycle proposals described above, a primary difference is the scale of the duty cycle period being proposed. The Nokia Research paper studied the use of almost blank subframes (ABS) or blank uplink subframes in the LTE standard frame structure to produce the duty cycled LTE signal. In either case, the duty cycle period for the Nokia paper is 10 milliseconds.

In comparison, the papers from Qualcomm and ZTE both suggest duty cycle periods of hundreds of milliseconds. Duty cycle periods of this size are likely supported by either the LTE feature called Scell activation/deactivation described here (warning: that link is fairly technical) or the newer release 12 small cell on/off features.

CableLabs Tests Wi-Fi Products

To better understand the effects of a duty cycled LTE signal on Wi-Fi, we did some testing here at CableLabs with off the shelf Wi-Fi products.

First, we tested Wi-Fi throughput. For this, we used a wired test configuration where the LTE signal level was above the Wi-Fi clients' LBT threshold, i.e., when LTE was on, the Wi-Fi clients should sense its presence and not transmit. We then pumped data through the Wi-Fi network and watched what happened as we duty cycled the LTE signal. Figure 2 below shows the results.


Figure 2

What you can see in Figure 2 is that the duty cycle period has an effect on that nice predictable behavior of the duty cycle percentage discussed above. For the small period case (i.e. 10ms) the throughput performance is worse than predicted.

With a 10ms period, the gaps left for Wi-Fi are too small for Wi-Fi to use effectively.

In addition, because the duty cycled LTE doesn’t do LBT, many Wi-Fi frames that start transmission within a gap get corrupted when LTE starts transmitting before Wi-Fi is done with its transmission.

Next we did some over the air testing in our anechoic chamber, again using a duty cycled LTE signal on the same channel as a Wi-Fi network. This time, we measured the delay of packets on the Wi-Fi link, also called latency.

As a quick aside on why latency matters, check out Greg White’s blog post about the recent efforts in the DOCSIS 3.1 project on reducing latency in cable internet services. As Greg points out in his post, user experience for various user applications (gaming, VoIP, web browsing) is heavily impacted by increased latency.

So we looked at latency for the same set of duty cycle percentages and periods. Figure 3 shows the results.


Figure 3

As you can see in Figure 3, as the duty cycle period of the LTE signal is increased, the latency of the Wi-Fi network goes up with it.

With duty cycle periods of a few hundred milliseconds, Wi-Fi users sharing the same channel will see their latency go up by hundreds of milliseconds as well. This may not sound like much, but as Greg pointed out, an eye-blink delay of several hundred milliseconds could mean many seconds of wait while loading a typical webpage.
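The intuition is that a Wi-Fi packet arriving just as an LTE "on" burst begins must wait out the entire burst. The sketch below is a first-order approximation of that worst-case added delay (it ignores queueing that builds up on top of the burst), using the duty cycles and periods discussed in this post.

```python
# Rough worst-case added delay for a Wi-Fi packet that arrives just as an LTE
# "on" burst begins: it must wait out the whole burst before the channel frees up.

def lte_on_time_ms(duty_cycle: float, period_ms: float) -> float:
    """Length of the LTE 'on' portion of each duty cycle period (ms)."""
    return duty_cycle * period_ms

for period in (10, 80, 200, 400):
    wait = lte_on_time_ms(0.5, period)
    print(f"50% duty cycle, {period} ms period: up to ~{wait:.0f} ms added Wi-Fi latency")
```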

The Balancing Act

Based on our testing, it is clear that using a duty cycle approach for LTE and Wi-Fi coexistence is a careful balancing act between throughput and latency.

If the duty cycle period is configured too low, the throughput of a Wi-Fi network sharing the channel will be negatively impacted. On the other hand, if the duty cycle period is too high, the latency of a Wi-Fi network sharing the same channel will be negatively impacted.

This points to a conclusion similar to our LBT post. While existing options appear to provide some level of channel sharing between Wi-Fi and LTE, there is a lot of work left to do before we see the fair and friendly coexistence solution that Wi-Fi users want. Moreover, the proposed duty cycle solutions offered in recent papers and contributions do not appear to be closing that gap.

Stay tuned for my next post looking at channel selection based solutions and the occupancy of 5GHz spectrum of today and tomorrow.

By Joey Padden

Technology

Wi-Fi vs EU LBT: Houston, we have a problem

Joey Padden
Distinguished Technologist, Wireless Technologies

Nov 17, 2014

Licensed Assisted Access using LTE is the nascent LTE tech that puts cellular signals into the unlicensed spectrum. It goes by LAA-LTE or LTE-U for short. By all accounts, the blitz is on to push this new tech into the field as fast as possible. NTT DoCoMo and Verizon have already announced that they are testing LTE-U. In addition, the effort in 3GPP (the mobile standards body) on the approved study item is going fast and furious after kicking off at RAN1 78bis in Ljubljana, Slovenia, in early October.

The "license assisted" moniker is an indication of something unique: though it uses unlicensed spectrum, it is actually linked to licensed spectrum. This is a technology for mobile operators to supplement their networks by integrating unlicensed spectrum. Unlicensed spectrum will "assist" licensed LTE.

Having already decided to retain this link to licensed networks, 3GPP is now turning its attention to implementation. A key issue for 3GPP to tackle when creating LAA-LTE is how to modify LTE so that it can fairly share spectrum with other technologies, e.g., Wi-Fi. As some have pointed out (see blog 1, blog 2, blog 3, and most recently blog 4), it is still hotly debated how nicely LTE-U will ultimately play with Wi-Fi.

Three coexistence methods commonly appear in contributions to 3GPP so far: channel selection, some form of duty cycling the LTE signal, and Listen-Before-Talk. Combining channel selection and LBT is also quite popular. This post will be the first of a series looking more closely at the pros and cons of each approach.

We thought it would be fun to go straight to the Holy Grail, Listen-Before-Talk.

Listen-Before-Talk

Of the three methods, Listen-Before-Talk, or LBT for short, is thought to be the most onerous to implement, but also the most likely to provide fair coexistence with Wi-Fi. After all, Wi-Fi does its own flavor of LBT called the Distributed Coordination Function (DCF), or Enhanced Distributed Channel Access (EDCA) depending on the Wi-Fi vintage. For more on how these LBT schemes work, see this great explanation from Cisco.

Now, some regulatory regions, e.g., the EU and Japan, require unlicensed spectrum users to use LBT for accessing certain spectrum bands. However, if you look closely at the EU regulations, they provide two options for LBT schemes. Option 1 is to use the DCF/EDCA as defined in the Wi-Fi standards. The other option is to use one of two schemes, Load Based Equipment or Frame Based Equipment, defined in the document linked above. The rules for Load Based Equipment (LBE) (sorry for the acronym soup!) are similar to Wi-Fi LBT. A review of the recent 3GPP contributions confirms more companies are pointing to the LBE rules compared to the Frame Based Equipment alternative. Figure 1 depicts the backoff process for LBE.


Figure 1 - EU Load Based Equipment LBT Rules

The key difference between Wi-Fi LBT and EU LBE LBT is the backoff process.

In EU LBE it is called the extended CCA. EU LBE uses a static range from 0 to q slots, where each slot is 20 µs. The value q is fixed for a given product, i.e., the extended CCA range is always the same size.

In comparison, Wi-Fi uses 9µs slots and what is called an exponential backoff. This means that when a Wi-Fi client determines a collision has occurred, it doubles the range used for the next attempt (the parameter is called the contention window).

To help visualize the effect of a static backoff range versus an exponential backoff, we created a Monte Carlo simulator. In the simulator, we create a set of nodes that follow the Wi-Fi rules and a second set of nodes that use the EU LBE rules. We then model a million transmission opportunities on a shared channel. As time ticks on, clients are drawing random numbers, counting down their timers, winning the race and sending a packet, or, in some cases, sharing a random number with other clients, in which case a collision happens. Figure 2 shows a simple network diagram of what this would look like.

Figure 2 - Simulated Network Topology
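To give a feel for the mechanics, here is a toy Monte Carlo sketch of static (EU LBE) versus exponential (Wi-Fi) backoff. It is a simplified abstraction, not the actual CableLabs simulator: each trial is one contention round, all nodes are saturated and within CCA range, AIFS and payload durations are ignored, and backoff timers that expire in the same instant count as a collision. The absolute numbers won't match Figure 3, but the qualitative gap between the two backoff schemes shows up.

```python
# Toy Monte Carlo sketch of static (EU LBE, q = 32, 20 us slots) vs.
# exponential (Wi-Fi, CWmin = 15, 9 us slots) backoff. Simplified abstraction
# of the experiment described above, not the actual CableLabs simulator.

import random

WIFI_SLOT_US, LBE_SLOT_US = 9, 20
CW_MIN, CW_MAX, LBE_Q = 15, 1023, 32

def simulate(n_wifi, n_lbe, trials=100_000):
    wifi_cw = [CW_MIN] * n_wifi
    wins = {"wifi": 0, "lbe": 0}
    attempts = {"wifi": trials * n_wifi, "lbe": trials * n_lbe}
    for _ in range(trials):
        # (backoff expiry time in microseconds, technology, node index)
        draws = [(random.randint(0, wifi_cw[i]) * WIFI_SLOT_US, "wifi", i)
                 for i in range(n_wifi)]
        draws += [(random.randint(1, LBE_Q) * LBE_SLOT_US, "lbe", i)
                  for i in range(n_lbe)]
        draws.sort()
        first_time, tech, idx = draws[0]
        collided = len(draws) > 1 and draws[1][0] == first_time
        if collided:
            # every node tied at the earliest expiry collides; Wi-Fi doubles its CW
            for t, tech_i, i in draws:
                if t == first_time and tech_i == "wifi":
                    wifi_cw[i] = min(2 * wifi_cw[i] + 1, CW_MAX)
        else:
            wins[tech] += 1
            if tech == "wifi":
                wifi_cw[idx] = CW_MIN   # successful Wi-Fi node resets its window
    # per-node chance of winning a round cleanly, a stand-in for the
    # transmission success rate plotted in Figure 3
    return {k: wins[k] / (attempts[k] or 1) for k in wins}

print(simulate(n_wifi=15, n_lbe=15))
```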


The results of the simulator show that static versus exponential backoff makes a big difference.

Figure 3 shows the transmission success rate gain or loss for three scenarios. Transmission success rate here is defined as the likelihood that each client can complete a CCA successfully and then send a burst without a collision. In this plot, the LAA clients were implementing EU LBE with q = 32, the maximum allowed backoff range, i.e., the setting most likely to provide good coexistence.

The three scenarios are listed below:

  1. Operator A = Wi-Fi, Operator B = Wi-Fi
  2. Operator A = Wi-Fi, Operator B = LAA-LBT, 15 devices/operator
  3. Operator A = Wi-Fi, Operator B = LAA-LBT, 20 devices/operator

In displaying the results, we use case 1 as the baseline for comparison.

In case 2, when Operator B deploys an LAA network using EU LBE, you can see that the coexistence behavior is poor. The LAA clients from Operator B show increased transmission success rates, while the Wi-Fi clients from Operator A show a 77% decrease.

In case 3, the coexistence is worse still. With just 20 devices on each network, the Wi-Fi devices on Operator A’s network show an 88% decrease in transmission success.

Not shown in Figure 3 is what happens with low client counts. In a case similar to 2 and 3 but with 4 or fewer devices per operator, there does appear to be good coexistence; both Wi-Fi and LAA clients see improvement over the Wi-Fi-to-Wi-Fi baseline, case 1.

Also keep in mind this is the most coexistence-friendly case, where q = 32 for the LAA clients using EU LBE. If you decrease the q value, the performance of both the LAA (EU LBE) clients and the Wi-Fi clients suffers.

Simulation Assumptions: Full buffer traffic, all nodes within CCA range of each other, single 20 MHz channel, co-channel operation, clients and APs/eNBs are stationary, Wi-Fi STAs use EDCA, BE AC (AIFS = 3, CWmin = 15)


Figure 3

Houston, We Have a Problem

Throughout the LAA-LTE development process, companies who are interested in ensuring fair coexistence between Wi-Fi and LAA have been looking to the EU LBT regulations as a safe haven. Thoughts like “well worst case we can adopt the EU LBT rules and we’ll be safe” are frequently expressed.

But after a closer look, it appears the EU LBT rules aren't the Holy Grail of coexistence we were looking for.

The key contributing factor is EU LBE’s limited and non-adaptive backoff range. As the client count rises, more clients are likely to choose the same backoff value, and the range never grows to relieve the contention. The result is increased collisions and reduced performance, as we have seen in the plots above.

Looks like it’s back to the drawing board to find additions to the EU rules to help mitigate these issues.

By Joey Padden
