It’s the year 2031, and the pandemic is in the past. While Dave drinks his morning coffee and reads the news, a headline catches his attention. A large quantum computer is finally operational! Suddenly, Dave’s mind is racing. After a few seconds, as his heartbeat slows, he looks up into the mirror and proudly says, “Yes, we’re ready.” What you don’t know about Dave is that he’s been working for the past 10 years to make sure that all aspects of our broadband communications and access networks remain secure and protected. Besides searching for new quantum-resistant algorithms, Dave has been focusing on the practical aspects of their deployment and addressing their impact on the broadband industry.
Here in 2021, the broadband industry needs to start traveling the same path that Dave will have navigated 10 years from now. We need to make sure we remove the roadblocks ahead of time so that we can lay the groundwork for the adoption of new security tools like post-quantum (PQ) cryptography.
The Post-Quantum Cryptography Landscape
Although NIST is still finalizing its standardization process for PQ cryptography, we can already infer interesting trends and practical long-term considerations for PQ deployment in the broadband industry.
Most of the algorithms still present in the final round of the NIST competition are based on mathematical constructs called lattices, which, in practice, are collections of equally spaced vectors or points. The security of lattice-based cryptography is rooted in the difficulty of solving certain lattice problems for which no efficient algorithm is known (even for a quantum computer), such as the Shortest Vector Problem (SVP) or the Closest Vector Problem (CVP). Algorithms like Falcon and Dilithium are based on lattices and produce the smallest authentication traces overall (i.e., signatures ranging from roughly 700 bytes to 3,300 bytes).
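If you want to see what these sizes look like in practice, here is a minimal sketch, using the C API of the liboqs library discussed later in this article, that generates a Dilithium2 key pair, signs a short message, and prints the resulting signature size. The identifier assumes Dilithium2 was enabled when your copy of liboqs was built.

```c
#include <stdio.h>
#include <stdlib.h>
#include <oqs/oqs.h>

int main(void) {
    /* Assumes liboqs was built with Dilithium2 enabled. */
    OQS_SIG *sig = OQS_SIG_new(OQS_SIG_alg_dilithium_2);
    if (sig == NULL) return 1;

    uint8_t *pk = malloc(sig->length_public_key);
    uint8_t *sk = malloc(sig->length_secret_key);
    uint8_t *sm = malloc(sig->length_signature);
    size_t sm_len = 0;
    const uint8_t msg[] = "example authentication payload";

    if (OQS_SIG_keypair(sig, pk, sk) != OQS_SUCCESS) return 1;
    if (OQS_SIG_sign(sig, sm, &sm_len, msg, sizeof(msg), sk) != OQS_SUCCESS) return 1;

    /* Lattice signatures land in the low-kilobyte range, vs. 256 bytes
     * for an RSA-2048 signature. */
    printf("Dilithium2 signature: %zu bytes\n", sm_len);

    int ok = (OQS_SIG_verify(sig, msg, sizeof(msg), sm, sm_len, pk) == OQS_SUCCESS);
    printf("verification: %s\n", ok ? "OK" : "FAIL");

    free(pk); free(sk); free(sm);
    OQS_SIG_free(sig);
    return ok ? 0 : 1;
}
```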
Another class of algorithms to keep an eye on is based on isogenies. These algorithms use a different mathematical structure than lattices and have been proposed for key exchange. These new key-exchange algorithms, known as Key Encapsulation Mechanisms (KEMs), leverage morphisms (or isogenies) among elliptic curves to provide “Diffie-Hellman–like” key-exchange properties that can be used to implement Perfect Forward Secrecy. Isogeny-based encryption uses the shortest keys in the PQ algorithm landscape but is computationally very heavy.
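To make the KEM flow concrete, the following sketch walks through the generic keypair/encapsulate/decapsulate exchange with liboqs. The SIKE-p434 identifier is used purely as an illustrative isogeny-based KEM and assumes your liboqs build enables it; any other enabled KEM identifier works the same way.

```c
#include <stdio.h>
#include <stdlib.h>
#include <oqs/oqs.h>

int main(void) {
    /* SIKE-p434 is an isogeny-based KEM; availability depends on how
     * your copy of liboqs was configured. */
    OQS_KEM *kem = OQS_KEM_new(OQS_KEM_alg_sike_p434);
    if (kem == NULL) return 1;

    uint8_t *pk = malloc(kem->length_public_key);
    uint8_t *sk = malloc(kem->length_secret_key);
    uint8_t *ct = malloc(kem->length_ciphertext);
    uint8_t *ss_a = malloc(kem->length_shared_secret);
    uint8_t *ss_b = malloc(kem->length_shared_secret);

    OQS_KEM_keypair(kem, pk, sk);        /* responder publishes pk        */
    OQS_KEM_encaps(kem, ct, ss_a, pk);   /* initiator derives the secret  */
    OQS_KEM_decaps(kem, ss_b, ct, sk);   /* responder recovers the secret */

    /* ss_a and ss_b now hold the same shared secret. */
    printf("public key: %zu bytes, ciphertext: %zu bytes\n",
           kem->length_public_key, kem->length_ciphertext);

    free(pk); free(sk); free(ct); free(ss_a); free(ss_b);
    OQS_KEM_free(kem);
    return 0;
}
```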
Besides these two classes of algorithms, we should keep hash-based signature schemes in mind as a possible alternative. They provide proven security at the expense of very large signatures (although public keys are extremely small), which currently hinders their adoption. A well-known hash-based algorithm that will probably be re-included in the NIST standardization process is SPHINCS+.
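The small-key/large-signature trade-off is easy to verify yourself. The sketch below probes a SPHINCS+ parameter set through liboqs and prints its public-key and signature sizes; the exact parameter-set name varies across liboqs releases, so treat the string here as an assumption to adjust for your build.

```c
#include <stdio.h>
#include <oqs/oqs.h>

int main(void) {
    /* Hypothetical parameter-set name; check which SPHINCS+ variants
     * your liboqs build reports as enabled and adjust accordingly. */
    const char *name = "SPHINCS+-SHA256-128f-simple";
    if (!OQS_SIG_alg_is_enabled(name)) {
        fprintf(stderr, "%s not enabled in this liboqs build\n", name);
        return 1;
    }
    OQS_SIG *sig = OQS_SIG_new(name);
    if (sig == NULL) return 1;

    /* Expect a tiny public key (tens of bytes) but a signature in the
     * tens of kilobytes. */
    printf("%s: public key %zu bytes, signature up to %zu bytes\n",
           name, sig->length_public_key, sig->length_signature);

    OQS_SIG_free(sig);
    return 0;
}
```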
DOCSIS® Protocol, DOCSIS PKI and PQ Deployment
Now that you understand the available options to consider for your next-generation crypto infrastructure, it’s time to look at how these new algorithms impact the broadband environment. Although the DOCSIS protocol has been using digital certificates and public-key cryptography since its inception, the broadband ecosystem relies solely on the RSA algorithm, and RSA has very different characteristics from the PQ algorithms under consideration today.
The good news is that, from a security perspective, the latest version of the DOCSIS protocol (i.e., DOCSIS 4.0) requires only minimal upgrades to replace the use of RSA when compared with previous versions. Specifically, DOCSIS 4.0 removes the dependency on RSA for key exchange and leverages a standard signature format, namely the Cryptographic Message Syntax (CMS), to deliver signatures. CMS is already scheduled to be upgraded to provide standard support for PQ algorithms as soon as the algorithm standardization process ends. In DOCSIS 1.0–3.1, because of the dependency on RSA for key exchange, the required protocol changes might be more extensive and might employ symmetric keys, in addition to RSA keys, to deliver secure authentications.
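Because CMS is the signature format DOCSIS 4.0 leans on, it’s worth seeing how little of the calling code depends on the algorithm underneath. Here is a minimal sketch of producing a CMS SignedData blob with stock OpenSSL; the file names are hypothetical, the signer key is an RSA key today, and the expectation is that a PQ-enabled OpenSSL would keep the same call chain once certificates and keys carry PQ algorithms.

```c
#include <stdio.h>
#include <openssl/bio.h>
#include <openssl/cms.h>
#include <openssl/pem.h>

int main(void) {
    /* "signer.pem", "signer.key", and "payload.bin" are hypothetical
     * file names used only for illustration. */
    BIO *cert_bio = BIO_new_file("signer.pem", "r");
    BIO *key_bio  = BIO_new_file("signer.key", "r");
    BIO *data     = BIO_new_file("payload.bin", "rb");
    BIO *out      = BIO_new_file("payload.cms", "w");
    if (!cert_bio || !key_bio || !data || !out) return 1;

    X509 *cert = PEM_read_bio_X509(cert_bio, NULL, NULL, NULL);
    EVP_PKEY *key = PEM_read_bio_PrivateKey(key_bio, NULL, NULL, NULL);
    if (!cert || !key) return 1;

    /* The signing algorithm comes from the key and certificate, not
     * from the CMS calls themselves. */
    CMS_ContentInfo *cms = CMS_sign(cert, key, NULL, data, CMS_BINARY);
    if (cms == NULL || !SMIME_write_CMS(out, cms, NULL, CMS_BINARY))
        return 1;

    CMS_ContentInfo_free(cms);
    EVP_PKEY_free(key);
    X509_free(cert);
    BIO_free(cert_bio); BIO_free(key_bio); BIO_free(data); BIO_free(out);
    return 0;
}
```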
The size of the new algorithms’ outputs is another important deployment consideration. Although the lattice-based and isogeny-based algorithms are quite efficient in terms of the sizes of authenticated (signature) or encrypted (key-exchange) data, those sizes are still an order of magnitude (or more) larger than what we’re used to today.
Therefore, a first set of considerations for the broadband industry surrounds the impact of the new cryptography on the size of authentication and authorization messages. In the DOCSIS protocol, Baseline Privacy Key Management (BPKM) messages are used, at Layer 2, to transfer authentication information between the cable modem and its termination system. Fortunately, because BPKM messages can accommodate payloads of any size via fragmentation, we don’t envision the need to update or modify the structure of Layer 2 authentication messages to accommodate the larger cryptographic material, as the back-of-the-envelope sketch below illustrates.
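In the sketch, the 1,490-byte usable fragment size is a hypothetical figure chosen for illustration, not a value taken from the DOCSIS specifications, and the payload sizes are similarly illustrative.

```c
#include <stdio.h>
#include <stddef.h>

int main(void) {
    /* Hypothetical usable bytes per BPKM fragment, for illustration only. */
    const size_t max_fragment = 1490;

    /* Illustrative payloads: an RSA-2048 signature, a large lattice
     * signature, and a multi-certificate PQ chain. */
    const size_t payloads[] = { 256, 3300, 20000 };

    for (size_t i = 0; i < sizeof(payloads) / sizeof(payloads[0]); i++) {
        size_t fragments = (payloads[i] + max_fragment - 1) / max_fragment;
        printf("%6zu-byte payload -> %zu fragment(s)\n",
               payloads[i], fragments);
    }
    return 0;
}
```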
Somewhat connected to the size of the new crypto are considerations related to algorithm performance. PQ algorithms, unlike RSA and ECDSA, are computationally very heavy and therefore might pose additional engineering hurdles when designing the hardware to support them. For end-entity devices such as cable modems and optical network units, there are various options to consider. One option, for example, is to integrate modern microcontrollers (MCUs) that can offload computation and provide isolated environments in which the algorithms can be securely executed. Another approach is to leverage the trusted execution environments already available in many edge devices’ central processing units (CPUs), without the need to update today’s hardware architectures. On core devices, the added CPU load, when compared with today’s very fast RSA verifications, might require additional resources. This is an active area of investigation.
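When evaluating hardware options like these, a first-order data point is simply how long a sign operation takes on the target silicon. The sketch below times Dilithium2 signing with liboqs; a serious evaluation would pin CPU frequencies, use far more iterations, and measure verification separately, so treat this as a starting point only.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <oqs/oqs.h>

int main(void) {
    /* Assumes Dilithium2 is enabled in your liboqs build. */
    OQS_SIG *sig = OQS_SIG_new(OQS_SIG_alg_dilithium_2);
    if (sig == NULL) return 1;

    uint8_t *pk = malloc(sig->length_public_key);
    uint8_t *sk = malloc(sig->length_secret_key);
    uint8_t *sm = malloc(sig->length_signature);
    size_t sm_len;
    uint8_t msg[1024];
    memset(msg, 0xA5, sizeof(msg));

    OQS_SIG_keypair(sig, pk, sk);

    const int iters = 1000;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
        OQS_SIG_sign(sig, sm, &sm_len, msg, sizeof(msg), sk);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double avg_us = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                     (t1.tv_nsec - t0.tv_nsec)) / 1e3 / iters;
    printf("average sign time: %.1f us (%d iterations)\n", avg_us, iters);

    free(pk); free(sk); free(sm);
    OQS_SIG_free(sig);
    return 0;
}
```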
The final set of considerations relates to algorithm deployment models and certificate chain validation. Specifically, because the implementation paradigm currently required by NIST for PQ algorithms doesn’t use the hash-and-sign paradigm (it signs the data directly, without hashing it first), there are some important considerations to make. Although this approach removes the security dependency on the hashing algorithm, it also introduces a subtle but important performance hit: the data to be authenticated or signed (e.g., when a device is trying to authenticate to the network) must be processed directly by the algorithm. This might require large data buses to carry the data to the MCU, or it might require the data to transition through the trusted execution environment on the CPU. Performance bottlenecks generated by the adopted signing mechanism have already been observed, and further investigation is needed to better understand the real impact on deployments.
For example, when signing with the hash-and-sign paradigm, the signing operation takes the same time on a 1TB document as on a 1KB document (because you’re always signing the hash, which is only a few dozen bytes in length). In comparison, when using the new paradigm (which isn’t possible with algorithms like RSA), signing times can differ wildly depending on the size of the data you’re signing. This problem is even more evident when considering the costs associated with the generation and signing of hundreds of millions of certificates via this new approach. In other words, the new paradigm, if adopted, could impact certificate providers and increase the costs associated with signing large quantities of certificates.
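For contrast with the direct-signing paradigm above, here is a sketch of the classic hash-and-sign flow: the document, however large, is first reduced to a fixed 32-byte SHA-256 digest, and only the digest is handed to the signer, so the signing step costs the same regardless of document size. Note that this wrapper is only an emulation for illustration; it isn’t the signing mode NIST currently specifies for these algorithms, and it assumes Dilithium2 is enabled in your liboqs build.

```c
#include <stdio.h>
#include <stdlib.h>
#include <openssl/sha.h>
#include <oqs/oqs.h>

int main(void) {
    /* Pretend this is a huge document: 1 MB of zeros for illustration. */
    const size_t doc_len = 1 << 20;
    uint8_t *doc = calloc(1, doc_len);
    if (doc == NULL) return 1;

    /* Hash-and-sign step 1: reduce the document to a 32-byte digest. */
    uint8_t digest[SHA256_DIGEST_LENGTH];
    SHA256(doc, doc_len, digest);

    /* Hash-and-sign step 2: sign the digest, not the document, so the
     * PQ algorithm always sees a constant-size input. */
    OQS_SIG *sig = OQS_SIG_new(OQS_SIG_alg_dilithium_2);
    if (sig == NULL) return 1;
    uint8_t *pk = malloc(sig->length_public_key);
    uint8_t *sk = malloc(sig->length_secret_key);
    uint8_t *sm = malloc(sig->length_signature);
    size_t sm_len;
    OQS_SIG_keypair(sig, pk, sk);
    OQS_SIG_sign(sig, sm, &sm_len, digest, sizeof(digest), sk);

    printf("signed a %d-byte digest of a %zu-byte document\n",
           SHA256_DIGEST_LENGTH, doc_len);

    free(doc); free(pk); free(sk); free(sm);
    OQS_SIG_free(sig);
    return 0;
}
```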
Available Tools and Projects
Now that you know where and what to look for, how can you start learning more about—and experimenting with—these new algorithms for real-world deployment?
One of the best places to start is the Open Quantum Safe (OQS) project, which aims to support the development and prototyping of quantum-resistant cryptography. The OQS project provides two main repositories (open source and available on GitHub): the base liboqs library, which provides a C implementation of quantum-resistant cryptographic algorithms, and a fork of the OpenSSL library that integrates liboqs and provides a prototype implementation of CableLabs’ Composite Crypto technology.
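A practical first experiment after building liboqs is to ask the library what it actually provides. This sketch enumerates every signature algorithm compiled into your build and prints its key and signature sizes, which is a quick way to reproduce the size comparisons discussed earlier.

```c
#include <stdio.h>
#include <oqs/oqs.h>

int main(void) {
    /* Walk liboqs's registry of signature algorithms and report sizes
     * for the ones enabled in this build. */
    for (int i = 0; i < OQS_SIG_alg_count(); i++) {
        const char *name = OQS_SIG_alg_identifier((size_t)i);
        if (!OQS_SIG_alg_is_enabled(name)) continue;

        OQS_SIG *sig = OQS_SIG_new(name);
        if (sig == NULL) continue;

        printf("%-40s pk=%6zu sk=%6zu sig=%6zu bytes\n", name,
               sig->length_public_key, sig->length_secret_key,
               sig->length_signature);
        OQS_SIG_free(sig);
    }
    return 0;
}
```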
Although the OQS project is a great tool for starting to work with these new algorithms, the provided OpenSSL integration doesn’t support generic signing operations, a limitation that might restrict the ability to test the new algorithms in different use cases. To address these limitations, and to provide better Composite Crypto support together with a hash-and-sign implementation for PQ algorithms, CableLabs has started integrating the PQ-enabled OpenSSL code with a new PQ-enabled LibPKI (a fork of the original OpenCA LibPKI repository) that can be used to build and test these algorithms across all aspects of PKI lifecycle management, from validating the full certificate chain to generating quantum-resistant revocation information (e.g., CRLs and OCSP responses).