
Designing for Billions of Things

Soon billions of devices will sense, analyze, communicate and transact with each other and with us. Gartner estimates that more than 20 billion connected devices will exist by 2020, exchanging what some say will be 1,000 exabytes of information over networks. That’s not so far-fetched when you factor in video and virtual reality, as well as sensors in traffic lights, elevators, home appliances and smart factories.

Even today we have glimpses of the scale of things. Consider Visa’s Ready Program, a contextual commerce service that enables everyday things, such as automobiles, home appliances, clothing and other connected devices to pay for services. Medical equipment company Proteus Digital Health has developed an ingestible sensor that measures whether patients have taken their medication. View, a smart window maker, uses sensors and a smartphone app to tint window glass on demand, reducing energy use and the need for blinds.

Whatever the exact projections, such hyper-scale systems must be factored into the design and development of the infrastructure that will support the IoT economy. Much has been written about the design of hyper-scaled systems. Take, for example, the need to get work done by designing through microservices: small pieces of software that communicate with each other efficiently. Or leveraging the design principles of computer and network virtualization so a cellular packet core architecture can handle millions of smartphones and billions of IoT endpoints.

Designing for billions of devices (and connections) requires innovation across the ecosystem of silicon vendors, equipment makers and service providers that are on the receiving end of large volumes of interactions. The future of the systems and the infrastructure for all of these sectors depends on expertise in designing three fundamental capabilities: software definition, open hardware and service velocity. 


Design for software definition

Today, owners of smart homes can use the IFTTT language to create a rule that instructs a window to open if the temperature rises above a certain threshold. Amazon’s Alexa digital assistant can do the same thing using voice commands. The potential of defining everything—hardware, services and equipment—as programmable and versatile software assets emerged almost a decade ago, when Amazon Web Services introduced the world of IT to “elastic” CPUs, networks, compute cycles and storage that maximize utilization according to varying workloads. Now, AT&T and other telecommunications providers are banking on software-defined networking (SDN) as the linchpin of their success in IoT. In fact, AT&T has stated that by 2020, 75% of its network will be not only virtualized but also programmable. The anticipation of billions of things means software-centric architectures will greatly influence R&D roadmaps. That includes the next generation of switching and routing, data center operating systems and container services that orchestrate IoT applications, all of which must handle very large workloads.
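A rule like the smart-window example above is, at its core, a trigger-action pair. The following sketch shows that pattern in Python; the device names, threshold and functions are illustrative assumptions, not part of any real IFTTT or smart-home API:

```python
# Minimal sketch of an IFTTT-style trigger-action rule (hypothetical devices).
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Rule:
    trigger: Callable[[float], bool]  # condition evaluated on a sensor reading
    action: Callable[[], str]         # what to do when the trigger fires


def open_window() -> str:
    # Stand-in for a real actuator command.
    return "window opened"


# "If the temperature rises above 25 °C, then open the window."
rule = Rule(trigger=lambda temp_c: temp_c > 25.0, action=open_window)


def evaluate(rule: Rule, reading: float) -> Optional[str]:
    # Run the action only when the trigger condition holds.
    return rule.action() if rule.trigger(reading) else None
```

Evaluating the rule against a reading of 27.3 °C fires the action; a reading of 21.0 °C does not. Platforms like IFTTT or Alexa routines wrap this same trigger-action idea in higher-level configuration rather than code.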

There is, however, a downside when everything is programmable: everything becomes hackable—from nuclear power plants to cars and baby monitors. Consider the smart thermostat, which is reachable at a distance through an API over the Internet. The fact that a hacker can find a way into the embedded operating system need not translate into a takeover, though. The onboard software can take advantage of virtualization to compartmentalize critical functions, such as resetting the firmware. An industrial field device, such as a programmable logic controller, also can defend itself, incorporating intrusion-tolerance techniques to deceive an attacker. The introduction of these and other more advanced software-defined security countermeasures will be necessary when billions of things are turned on.
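One way to picture compartmentalization is a critical function that refuses commands not authenticated with a secret held only in its isolated partition, so compromising the network-facing API alone is not enough to trigger it. This is a conceptual sketch under assumed names; a real device would rely on hardware isolation or a hypervisor rather than a Python function boundary:

```python
# Conceptual sketch: isolating a critical function (firmware reset) from the
# network-facing partition. The secret and function names are illustrative.
import hashlib
import hmac

# Provisioned into the isolated compartment; never exposed to the
# network-facing partition.
LOCAL_SECRET = b"device-provisioned-secret"


def sign(command: bytes) -> bytes:
    # Compute an HMAC tag over the command using the compartment's secret.
    return hmac.new(LOCAL_SECRET, command, hashlib.sha256).digest()


def critical_partition_reset(command: bytes, tag: bytes) -> str:
    # Runs inside the isolated compartment: rejects any request whose tag
    # cannot be verified, even if the API-facing code is fully compromised.
    if not hmac.compare_digest(tag, sign(command)):
        return "rejected"
    return "firmware reset"
```

An attacker who owns the thermostat's API process can submit commands, but without `LOCAL_SECRET` cannot forge a valid tag, so the reset path stays out of reach.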

Design for open hardware

The idea of taking advantage of commodity hardware has been driven in large part by Facebook, Yahoo and Google in their efforts to bring down datacenter costs. The approach is to abstract complexity through software, and then leverage application-specific integrated circuits (ASICs) to build simple, standards-based hardware. Hyper-scaled data centers are being constructed using clusters of x86 CPUs that are connected by enormously scalable networks and powered by SDN controllers and standard arrays of storage. Microsoft and others are using open hardware as a leverage point in their cloud businesses.

Operational Efficiencies and Programmability

Open hardware platforms use simplified design features—front-of-rack serviceability, tool-less repair, power and space optimization—to help data center operators address the deployment and operational challenges of current platforms. By embracing software-defined principles, these platforms also gain programmability and in-service configurability and provisioning.

Performance and Scale

Pairing best-in-class components with a disaggregated, software-defined platform design delivers optimal performance while allowing the platform to meet the scaling challenges of an ever-growing compute- and data-intensive market.

Optimal Economics and Rapid Innovation

Beyond the cost benefits of avoiding proprietary hardware and vendor lock-in, open hardware delivers significant energy and data center infrastructure savings, driving optimal economics. The open ecosystem also challenges technology organizations to innovate collaboratively, pushing open hardware forward to meet customer demands.

Facebook once depended on inflexible, expensive proprietary systems and hardware. It launched the Open Compute Project to power its data centers with less expensive, interoperable building blocks. More recently, Facebook co-founded the Telecom Infra Project (TIP) to reimagine global connectivity by opening up the deployment of telecom infrastructure across access, backhaul and core systems.

Every service provider, device maker and OEM should take a cue from the open hardware movement to find ways to differentiate their products and services by decoupling software and hardware. AT&T, for example, has already built out white-box networking nodes that are a hybrid of open source and off-the-shelf technology. The company believes the economics, scale, performance and programmability of bare metal switching makes sense for its customers.

Open hardware links directly to open source software. Indeed, open source is a significant technology disruptor and proof that vendors are rethinking their business models for hardware platforms. Open source communities managed by the Linux Foundation and others have seen tremendous growth—and acceptance—within both software and hardware development communities to deliver network solutions. OpenStack, an open source software platform for cloud computing, has developed an industry-accepted cloud platform that works across multiple hypervisor environments.

Design for service velocity

Thinking and acting fast has always been a competitive advantage. Both will be paramount as we look to the promise of 20 billion connected devices and their insatiable data consumption. Deploying software infrastructure and digital services in a controlled, low-risk manner is necessary to meet the scalability demands growing networks face. The philosophy of continuous delivery and integration, while ingrained in the cultures of software companies, is still nascent in other industries, such as telecommunications, industrial equipment and automotive.


To fold millions, if not billions, of virtual things into their business models, R&D organizations need to think about how to accelerate service introduction. Integrating these systems and making them interoperate will be complex.

The architecture of end-to-end solutions through third-party IoT platforms can facilitate smoother service lifecycle management. For example, Verizon’s ThingSpace is designed to make it easier for developers to roll out IoT devices and applications. Cisco’s recent acquisition of Jasper Technologies, another IoT platform that targets service providers, will help the company launch, manage and monetize IoT services on a global scale.

Three steps can help R&D leaders prepare:


  1. Make the end customer experience the source of differentiation in products and services.
  2. Revisit supply chain risk management for disaggregated open hardware and software components.
  3. Understand the impact on R&D processes of closed-loop learning mechanisms.