Come Along for the Ride: Innovation Boot Camp
Innovation is difficult. A full 95 percent of consumer products fail. Plus, the innovation process is highly unreliable, so we need every advantage we can get. In just three and a half days at the CableLabs Innovation Boot Camp, my team and I found a way to change all that. Yes, they were long days, and we had to make efficient use of that limited time. But 95 percent? We considered that an opportunity!
My part of the project pitch—the ultimate goal of Innovation Boot Camp—opened with a statistic just as surprising as that 95 percent figure: the number of people who could seriously benefit from autonomous vehicles—if they were made safe. About 15 percent of the world’s population is disabled, and many of those people could benefit from autonomous vehicles in some way (but could also be more easily harmed by them). Then again, everybody could benefit significantly from automated vehicles, so with a little change in perspective that percentage rises to 100! Doing a bit of research and framing a problem with real statistics is something I learned long ago in high school debate. But it’s a corner I often cut, as I think many of us do. Innovation Boot Camp reinforced that discipline, which is critical to solving an important problem.
As for our pitch—it went very well! We got fantastic feedback, and the Q&A session afterward quickly generated further great ideas. A key feature of the boot camp was Phil McKinney teaching attendees about innovation antibodies. I listen to his podcast regularly, so I had heard of the concept before. But it was during Innovation Boot Camp that I realized those antibodies were real—and they were inside me! I’m not talking about autonomous bots in my system, and I’m not referring to the autonomous vehicles and robotics at the core of our focus at Innovation Boot Camp. Rather, innovation antibodies are the internal voices that could talk me out of taking the risks needed to execute on innovation. Thankfully, I recognized their presence, so I could deal with them directly—an important step! I came away with real confidence that I could overcome those innovation antibodies. Our successful pitch demonstrated that nicely. But just the evening before, I wasn’t so sure.
I remember preparing for our final pitch. We had driven our project far in just a few days, and we had successfully followed the innovation process. We were coached on what elements belonged in our pitch to sell it well, and we were learning about the hard part: execution. Our idea had merit! (Of course, every team at Innovation Boot Camp could confidently say the same thing.) We knew to focus on the “why.”
Scott Brown, Executive Director of UpRamp, gave us great pointers; I’m sure he was proud of us! Even as the teams were framing their pitches and forming their use-case stories, we were also learning from Ryan Wickre and Scott Thibeault about design practices, and how excellent companies like Frog Design solve tough design problems.
Learning is a constant at Innovation Boot Camp. Our dinner one night was a live Q&A session at the Computer History Museum. After looking over the technology that touches our topic area for our innovation project (autonomous vehicles), we enjoyed a live session with a few seasoned entrepreneurs who shared a lot of wisdom with us:
- Mark Varrichionne, CEO of Innovators Network;
- Kym McNicholas, Editor at Large, Corporate Innovators Series; and
- Phil McKinney, CEO of CableLabs.
That evening was all about gaining both knowledge and wisdom! When I first arrived at Boot Camp, I knew almost nothing about autonomous vehicles, but after that evening, I knew more than enough to innovate in that market space. And I was already learning the steps required to innovate well. We were motoring!
As my team prepared its pitch, I thought about the time we spent enjoying food and drinks with our target market: the early adopters. When it comes to focusing ideas on an innovation area, I learned just how important it is to know the target market. And that takes having real, frank conversations with those users. Our coaches and subject matter experts found actual early adopters in Silicon Valley and steered them to us so that we could ask them questions. What we learned that night heavily influenced our innovation project and convinced me that not knowing a target market is one of the most significant modes of innovation failure.
While talking with early adopters of robotics, drones, and autonomous vehicles, my team and I learned a great deal that helped us focus the chaos of ideas we’d brainstormed earlier. We spent most of that day learning how to innovate and rank our ideas. It’s amazing how quickly you can generate excellent ideas for problem-solving. Bringing our individual ideas to the team, grouping them, enhancing them, and developing them from there resulted in some exciting opportunities! Although I had some experience with this process, Innovation Boot Camp introduced me to concepts such as the SCAMPER method, as well as proven methods for ranking ideas.
Idea generation and ranking were reinforced throughout the boot camp. Think of it as driver’s education for innovation. Each day, Phil McKinney told us to get our books out to generate and rank ideas about an identified problem. Framing the problem was also important, and having some skills reinforced on that first step was essential. This kind of daily practice takes only a few minutes, but it brings great value!
Zero to sixty in no time! We were already talking to innovation experts on the first day of boot camp, even getting a presentation from Aditya Kaul through a remote presence robot. We toured autonomous vehicle research facilities. Time went by quickly, even though the days were long. Innovation Boot Camp started exactly the way you’d expect: a rapid learning experience that established a strong foundation from which to innovate. We saw:
- The challenges of the autonomous vehicle space
- What companies were working on, and where their biggest challenges were

And at the same time, we were already learning about framing, ideation, ranking, and execution—also known as the FIRE process.
When the Boot Camp began, I wondered what it would be like. Was I going to learn enough about autonomous vehicles to actually come up with a good idea? Was I going to learn anything new that would help me step up my innovation game? Could I find a way to take what I learned back to my team and bring more value to CableLabs and its members? Was I going to crash?
As I received my certificate on the final day, I knew my time wasn’t wasted, and the return on my investment was high. I had a wonderful time! But it wasn’t just because the food was great or because the location was right. The event staff were fabulous, the topic was interesting, the content was solid, and my team members were fantastic. Our coaches really drove us to success; Lori Lantz, Dan Smith, Christian Pape, and Lisa Warther were great leaders. And Michelle Vendelin was the event master; she guided us and made the whole experience highly valuable! Although the process alone was completely worth the investment, the project outcome was hugely valuable too. I picked up a few new skills, learned a process I can reference later, and came away with some great ideas worth pursuing after the event. I can confidently replicate what I learned, and in fact, I’ve already done that in my job and in my personal life!
That 95 percent failure rate for innovation is low-hanging fruit. I can now assuredly do my part to lower that failure rate. Off to the races!
Our next Innovation Boot Camp is September 25-28 in Louisville, Colorado. The topic is Connected in Extreme Weather and Natural Disaster. Register now and don't miss the opportunity to learn a repeatable innovation framework and methods that have led to both incremental and breakthrough innovation for past attendees.
Inform[ED] Video: Cable Modem Validation Application
Ever-present communication is an important part of life these days. Cable technology provides connectivity for homes and businesses, delivering entertainment, information, and increasingly important functions of daily life. As we rely more on all forms of communications access, we rely on our cable modems to help keep services running their best. Cable modems have therefore become more capable: they can report on network problems they see as they adjust around those problems. The cable industry refers to the information obtainable from these capable cable modems as Proactive Network Maintenance data.
CableLabs has created an application, shared with the industry, that verifies cable modems are doing their best at reporting their Proactive Network Maintenance data. This sharing lets members, vendors, and our own laboratories stay on the same page when validating cable modems. The application automates the Proactive Network Maintenance tests that are part of the certification testing conducted at CableLabs for the industry. Everyone in the industry can use it to reduce cycle times and costs around certification testing, but also to develop new capabilities, build special versions of modems to support those capabilities, and more.
If you’re interested in learning more about the Cable Modem Validation Application you can read my technical blog here and watch the video below.
Validating Cable Modems for DOCSIS® 3.1 PNM Deployment
The cable industry is always trying to find ways to improve service. When the cable industry made proactive network maintenance (PNM) a part of the DOCSIS® specifications, we showed great commitment to service. CableLabs supports that commitment through its work in specifications, and particularly through its PNM project.
This blog entry in our PNM series focuses on cable modem validation. Cable modem (CM) validation is the work of assuring that CMs can fully support PNM. When CMs can be trusted to report data about impairments in the network, service providers have a tool for finding and fixing network issues before they impact service. CableLabs built the Cable Modem Validation Application (CMVA) to help bring that assurance to the industry.
Validating CM PNM functionality for DOCSIS 3.1 network deployments might seem like a small step in a large technology life cycle. But it’s an important step, and one we wish to highlight.
Why is it important to validate PNM data reporting from CMs?
- Continuous service improvement: Before deploying a technology, it is important to know whether CMs will be capable of supporting network maintenance and troubleshooting. We never want to introduce a new technology that costs more to maintain than the previous one. Ideally, a new technology will cost less to build, be less expensive to maintain, and provide superior service. PNM capabilities are an important part of this needed improvement. A consistent approach with CMs is the first step toward CMTS testing and integrated PNM capabilities for the entire architecture.
- Getting ready for future technology evolutions: New DOCSIS 3.1 modems provide more information about the plant and its ability to support enhanced services and deploy new technologies like FDX. This capability can become an important source of information for all sorts of planning and engineering activities. It is a critical first step toward many possible futures for DOCSIS.
- Best practices we can share: With a consistent industry approach to PNM data reporting, collection, and certification testing of modems, everyone can validate and verify consistent reporting. Therefore, we can build best practice operation solutions on that strong foundation.
You can’t manage what you can’t measure, so having modems capable of reporting PNM measurements allows cable operators to manage their networks effectively, inexpensively, and reliably!
Realize: It’s always too late to start thinking about reliability!
You can’t add reliability as if it’s a separate feature. You need to design it into the system as early as possible for the lowest cost, or work it in later at a much higher cost. DOCSIS is a sophisticated system, especially 3.1 and Full Duplex DOCSIS. This complexity is why having PNM within the DOCSIS specifications is an important move for the industry, supporting its ability to evolve. But this is only a first step. We need to make sure these PNM capabilities work as intended before we deploy them, and assure we can take full advantage of those capabilities once deployed.
The Common Collection Framework (CCF) and the Cable Modem Validation Application (CMVA)
CableLabs built two solutions that together help address this industry need:
- The combined Common Collection Framework (XCCF): The XCCF manages data requests to network elements and provides the resulting data through a REST API to support applications of all kinds. The CMVA, one of those applications, uses the data provided by the XCCF to validate modem performance in support of PNM. If you want to learn more about the XCCF, you can read the previous entry in our PNM blog series here, or access the public version of the architecture document here. We are building the future of the XCCF right now, so it’s a great time to get involved.
- The Cable Modem Validation Application (CMVA): The CMVA allows any of us to test CMs for compliance with the DOCSIS 3.1 specifications, specifically the PNM portions. The tests conducted are based on the Acceptance Test Plans (ATPs) supported here at CableLabs—specifically the DOCSIS 3.1 PHY and OSSI ATPs, which are based on the DOCSIS 3.1 specifications. Not only does the CMVA provide concise test results based on these ATPs, it also provides graphical output (plots, tables) so you can visually confirm the results. Sometimes what passes a specification is still not necessarily desirable or functional. Looking at the results is a great way to get introduced to the wealth of data available in DOCSIS 3.1 CMs, making the CMVA useful for confirming the specific results you may envision for your own PNM deployments. To take that idea further, we are adding a few extra capabilities to the CMVA so that users can test additional PNM workflows, look for test anomalies, or experiment further with PNM capabilities.
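As a rough illustration of the kind of check described above, the sketch below fetches downstream OFDM RxMER data from an imagined XCCF-style REST endpoint and applies a simple ATP-style pass/fail floor. The URL, JSON field names, and 30 dB threshold are all assumptions for illustration, not the real CMVA or XCCF interfaces.

```python
import json
import urllib.request

# Hypothetical XCCF-style endpoint: the real API paths and schema differ.
XCCF_URL = "http://xccf.example.local/api/pnm"

def fetch_rxmer(mac_address):
    """Request downstream OFDM RxMER data for one modem (illustrative only)."""
    with urllib.request.urlopen(f"{XCCF_URL}/rxmer/{mac_address}") as resp:
        return json.loads(resp.read())

def check_rxmer(measurement, min_db=30.0):
    """ATP-style pass/fail: every reported subcarrier RxMER meets a floor."""
    values = measurement["subcarrier_rxmer_db"]
    failing = [v for v in values if v < min_db]
    return {
        "pass": not failing,
        "failing_subcarriers": len(failing),
        "min_db_seen": min(values),
    }
```

A real validation run would, of course, repeat such checks across many measurement types and present the results graphically, as the CMVA does.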
We use it…
CableLabs and our subsidiary, Kyrio, are using the CMVA in our own CM certification testing. CableLabs will use it further to explore improved workflows for PNM, in support of the InGeNeOs Forum’s planned work on PNM Best Practices for DOCSIS 3.1 technology. Just as the XCCF is the foundation for many PNM-related capabilities, the CMVA is a step beyond it, toward greater PNM capabilities that support low-cost, highly effective DOCSIS 3.1 network deployments.
…Others use it…
We envision a couple of important use cases for our partners.
- Vendors can use it to validate their modems for compliance with the PNM portions of the specifications, test chip capabilities, improve firmware, or explore potential PNM developments. We’re aware of a vendor using the XCCF to test silicon; the CMVA could be added, for example, to find issues during design testing and share them with suppliers.
- MSOs can verify compliance in their own labs, develop CM builds that help them differentiate, and examine CM sensitivity and capability in PNM tasks and operations workflows. For example, if a particular modem is vulnerable to LTE ingress at its interface, a few lab tests might detect that before the problem is deployed, and the CMVA would be one way to detect and display it.
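To make the LTE-ingress example concrete, here is a minimal sketch of how a lab tool might flag possible ingress in a modem's downstream spectrum capture. The data format, band edges, and detection margin are invented for illustration; a production test would use calibrated captures and a more careful noise-floor estimate.

```python
def detect_lte_ingress(spectrum, band=(698e6, 806e6), margin_db=10.0):
    """
    Flag possible LTE ingress in a downstream spectrum capture.

    `spectrum` is a list of (frequency_hz, power_dbmv) points, as a modem's
    full-band capture might look after collection. We compare the peak power
    inside an assumed LTE band against the median power outside it.
    """
    inside = [p for f, p in spectrum if band[0] <= f <= band[1]]
    outside = sorted(p for f, p in spectrum if not band[0] <= f <= band[1])
    if not inside or not outside:
        return False
    noise_floor = outside[len(outside) // 2]  # median as a crude noise floor
    return max(inside) > noise_floor + margin_db
```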
…Wouldn't you like to use it too?
CMVA was designed specifically for Kyrio and CableLabs to use in certification testing of CMs, with vendors and members able to use it for their own equivalent needs. But, CMVA is well suited for exploring a lot of other needs. Thus, we look forward to working with you to get the full benefit from the XCCF, CMVA and all the CableLabs PNM developments completed and yet to be built.
If you’d first like to learn more, look for our demonstration video, to be announced soon. If you’re interested in gaining access or discussing it further, please feel free to contact me directly by clicking below.
PNM Series: The Business Case for a Common Collection Framework
This is the second in our series on Proactive Network Maintenance (PNM). If you missed our introduction to PNM, you can check out the first entry which explains some background on the subject.
PNM is our CableLabs project focused on assuring that cable service providers can maintain their networks at a level of quality that avoids major impacts to service. The proactive part means maintenance happens before the customer’s service is impacted. To do this well, a service provider must collect data from the network—but collecting that data in a way that doesn’t itself impact service is not easy.
What is the Common Collection Framework?
The Common Collection Framework is a set of Python software modules that handle the task of collecting PNM data from network elements and presenting that data to PNM applications. It provides the data in a common form so that software applications don’t have to talk network language to get the data they need. It also protects the network from overly frequent data requests, which can impact service.
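A toy sketch of those two ideas—presenting data in a common form and protecting elements from overly frequent polling—might look like the following. The class name, caching behavior, and interval are illustrative assumptions, not the actual CCF modules.

```python
import time

class CollectionFramework:
    """Illustrative sketch: serve data to applications in a common form
    while shielding network elements from repeated polling."""

    def __init__(self, poll_fn, min_interval_s=60.0):
        self.poll_fn = poll_fn          # talks "network language" to the element
        self.min_interval_s = min_interval_s
        self._cache = {}                # element -> (timestamp, common-form data)

    def get(self, element, now=None):
        now = time.monotonic() if now is None else now
        cached = self._cache.get(element)
        if cached and now - cached[0] < self.min_interval_s:
            return cached[1]            # serve the cached copy; spare the element
        data = {"element": element, "data": self.poll_fn(element)}
        self._cache[element] = (now, data)
        return data
```

In this sketch, two applications asking for the same element's data within the interval trigger only one request to the network, which is the essence of the protection the paragraph above describes.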
CableLabs created a DOCSIS® Common Collection Framework (DCCF) and a Wi-Fi Common Collection Framework (WCCF). We have also started the creation of an optical-centered collection framework. We may even create an in-home wired (MoCA) framework if members express the need. To keep the usage model simple, CableLabs intends to join these frameworks into a combined Common Collection Framework (XCCF). Because cable services are provided over a network comprised of many different technologies, CableLabs is making it easy for members to use the right mix of collection frameworks to get data from the right network elements for their needs.
CableLabs recently released an architecture document to the public that describes the DCCF in detail. You can obtain a copy at this link and reference it in your work. The document describes what the DCCF is, as well as the intended architecture for XCCF. There is also a partner document reporting on the Wireless Common Collection Framework, available here.
What’s Under the Hood?
Because the XCCF presents PNM data to applications in a common form, rather than the raw formats produced by the network, existing applications shouldn’t have trouble connecting to the XCCF to obtain data. Translator software takes the output from the network and gets it ready for applications to use.
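As an illustration of what such a translator might do, the sketch below maps SNMP-style raw output into flat, commonly named fields. The mapping table and output schema are invented for this example; `docsIfSigQSignalNoise` mimics the DOCSIS MIB object conventionally reported in tenths of a dB, and the RxMER name is hypothetical.

```python
# Illustrative only: the mapping and output schema are invented; the raw
# names mimic SNMP MIB objects a DOCSIS modem might report.
RAW_TO_COMMON = {
    "docsIfSigQSignalNoise": "snr_tenth_db",   # conventionally tenths of a dB
    "docsPnmCmDsOfdmRxMer": "rxmer_db",        # hypothetical per-channel RxMER
}

def translate(varbinds):
    """Map raw (name, value) pairs into a flat, commonly named record,
    scaling units where the raw form differs from the presented form."""
    record = {}
    for raw_name, value in varbinds:
        common = RAW_TO_COMMON.get(raw_name)
        if common is None:
            continue                           # drop fields nobody asked for
        if common == "snr_tenth_db":
            record["snr_db"] = value / 10.0    # raw integer is tenths of dB
        else:
            record[common] = value
    return record
```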
Why did CableLabs build it?
A PNM application or program needs data to drive it, and obtaining that data can place a significant burden on network elements. Service providers need to know that the network isn’t impacted by PNM requests, so they need some level of control to assure that service is the priority. Further, many PNM applications potentially need the same data, so having every application hit the network in uncoordinated ways is inefficient, and not necessarily customer friendly. A PNM program that uses multiple applications needs a common collection capability to support the applications and relieve the network.
There are clear advantages to using the XCCF to support network operations:
- It provides one polling mechanism to manage, serving all applications.
- Building your own applications, and supporting purchased applications, becomes easier with the XCCF.
- The network isn’t overly taxed with data requests, so it can be ruled out as a cause when there is a problem.
- You get clear separation from the network and the applications, which fits the way operations are usually organized.
- Updates between the applications and the network are easier when the XCCF is the single point for managing those changes, and the XCCF is built to support that.
- XCCF is extensible, and we have loads of great ideas to consider on the roadmap.
- Because XCCF is based on SDN architecture concepts, scaling is understood, and high reliability is supported.
- Because it is accessible by all CableLabs members, any member can use it to test out a PNM capability in a field trial to learn about its benefits to their business.
There is quantifiable business value here too!
- Testing a new PNM capability within operations is easier and more realistic when the data are already presented to the applications in a common way, reducing the uncertainty in the payback of a PNM business case.
- Using the XCCF can streamline implementation of PNM applications in a PNM program, making the business case for PNM pay back faster.
- CAPEX is lower because simpler, cheaper PNM solutions can enter operations and scale better when small applications can be pointed to existing XCCF instances.
- OPEX is lower because applications are separate from the network, and the XCCF interface can be rapidly, easily maintained.
- PNM’s advantages become achievable because the XCCF solves a significant portion of the effort in any PNM program and avoids scaling risks that could otherwise increase its OPEX.
For all these reasons, CableLabs heard from our members that an XCCF capability was needed, so we responded.
Where do I get a copy?
CableLabs members can obtain a copy here. Vendors who are willing to sign the necessary CableLabs agreements can also obtain a copy. We hope our community can contribute feedback, and potentially contribute code as well, to the XCCF. We also look toward the community to drive our roadmap for the XCCF, providing input to what capabilities need to be supported with the highest priorities.
Don't forget to subscribe to our blog to read more about PNM in the future.