feature article

The Industrial Internet Reference Architecture

The IIC Tries to Think of Everything

Deep in the heart of Portland or Austin or Minneapolis or any of dozens of towns across the nation and the world, Makers are busily building components for the Internet of Things (IoT). Long dismissed as hobbyists unworthy of sales attention, many of these skilled designers are where the IoT rubber hits the road.

Way on the other end of abstraction, folks in the Industrial Internet Consortium (IIC) have been expending much effort trying to lay out an IoT framework that might promote interoperability and numerous other desirable traits. Many of these abstract characteristics may, at some point, be implemented in a concrete fashion by one of those engineers.

To me, these feel like opposite ends of a spectrum: high ideals for how things should be vs. practical considerations for how things have to be, this time anyway, in order to get a product finished. To be sure, not all IoT design is done by Makers; there’s much happening in large companies too. But the little guy captures, for me, that practical, entrepreneurial garage spirit.

On the abstract side, however, this summer saw the release of an Industrial Internet Reference Architecture. Now, don’t panic… this isn’t a hard standard specifying how to build an IoT. To me, it seems like two things:

  • A generalized model of the industrial IoT (IIoT), along with language to be used for promoting a cogent conversation with consistent terminology, and
  • A comprehensive list of things to think about when designing an industrial internet system (IIS).

And yes, based on its name, this is about the IIoT, not the Consumer IoT that might come to your home at some point. Both IoTs have a current tendency towards walled-gardenness, with Apple and Google vying to control our homes (along with other proprietary schemes) and with current machine-to-machine (M2M) installations dominated by specific companies.

Of course, this is about much more than just interop (which we’ll return to shortly). It’s a comprehensive hundred-page document that starts by laying out concerns from four different viewpoints and then addressing a series of issues that are nominally orthogonal to the viewpoints: Safety, Security, Resilience, various notions and levels of interoperability, Connectivity, Data Management, Analytics, Control, and, for advanced students, notions of dynamic and automated interop.

Four Ways to See the IIoT

The four viewpoints constitute a set of generalized requirements based on the needs of different stakeholders at different levels. They’re four levels of abstraction, each of which has a series of concerns.

  • The business viewpoint deals with your typical high-level business concerns. Why would someone buy this? Will we make money? Might we get sued for something? What will be our long-term cost?
  • The usage viewpoint deals more with “use cases” and their ilk. Who is going to use this? Do they all have the same or different roles? How will they interact with the system? What will the workflow look like?
  • The functional viewpoint starts to take the needs articulated in the first two viewpoints and cast them into an overall functional architecture. What high-level pieces will we need, and what will they do? What will talk to what? The document identifies five different functional domains, delving into further details for each one:
    • Controls, at the lowest level, interacting with the physical systems;
    • Operation, at the mid-level;
    • Information, at the mid-level;
    • Application, at the mid-level; and
    • Business, at the top level.
  • The implementation viewpoint gets down to brass tacks, dealing with specific system architecture, protocols to be used, components to be used, and how it all comes together. They identify five different architecture patterns, again, further deconstructing each one:
    • Three-tier architecture pattern, with – surprise! – three tiers: the Edge tier (which includes edge gateways); the Platform tier (which includes various data services and operational considerations; they may be local or remote); and the Enterprise tier (which covers domain applications and business rules and such).
    • Gateway-Mediated Edge Connectivity and Management architecture pattern, which models a local network connected to a wide-area network by a gateway.
    • Edge-to-Cloud architecture pattern, which is like the former one, except that individual edge nodes can directly access wide-area connectivity (rather than having to go through a gateway).
    • Multi-Tier Data Storage architecture pattern, which addresses the various levels of data storage that might be implemented to balance capacity, performance, and locality.
    • Distributed Analytics architecture pattern, which is all about where data travels, where it’s reduced, and where it’s analyzed.
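The gateway-mediated pattern, for instance, can be illustrated with a minimal sketch. To be clear, the reference architecture describes the pattern, not any API; the class and field names below are my own invention, and the aggregation step is just one plausible reason to interpose a gateway.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    """A single measurement from an edge-tier sensor (illustrative)."""
    sensor_id: str
    value: float

class Gateway:
    """Gateway-mediated edge: local nodes never touch the WAN directly."""

    def __init__(self) -> None:
        self.buffer: List[Reading] = []

    def ingest(self, reading: Reading) -> None:
        # Edge tier: collect readings over the local network.
        self.buffer.append(reading)

    def flush_to_platform(self) -> dict:
        # Platform tier: forward an aggregate rather than raw traffic,
        # trimming wide-area bandwidth and centralizing management.
        summary = {
            "count": len(self.buffer),
            "mean": sum(r.value for r in self.buffer) / len(self.buffer),
        }
        self.buffer.clear()
        return summary

gw = Gateway()
for v in (20.0, 21.0, 22.0):
    gw.ingest(Reading("temp-1", v))
print(gw.flush_to_platform())  # {'count': 3, 'mean': 21.0}
```

The edge-to-cloud pattern would differ only in that each node performs the `flush_to_platform` step itself, with no intermediary.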

Security, interop, deep learning, and more

Security permeates all viewpoints and all tiers. While it has a section all its own, each of the viewpoints includes security issues specific to that viewpoint. No other issue gets sprinkled as liberally throughout the document as security. In particular, they make a distinction between mandatory security considerations (arising out of regulations and other non-negotiable requirements) and other security issues; the latter may be subject to cost tradeoffs, while the former won’t be.

The discussion of interoperability is particularly illustrative, and, interestingly, many of the concepts run parallel to a prior piece I did on interop – including the references to natural language syntax, semantics, and pragmatics. While I came to a somewhat dreary conclusion about the likelihood of interop becoming a thing in the face of companies preferring to keep control, this document doesn’t veer in that direction; it merely lays out the notions and their implications.

It defines three levels of interop that roughly correspond to those linguistic notions. The first is “integrability,” which they define as “the capability to communicate with each other based on compatible means of signaling and protocols.” Next is “interoperability” (which I’ve been using more generically, for lack of a better generic term); their definition is “the capability to exchange information with each other based on common conceptual models and interpretation of information in context.” Highest up is “composability,” which they define as “the capability of a component to interact with any other component in a recombinant fashion to satisfy requirements based on the expectation of the behaviors of the interacting parties.”

That all sounds relatively impenetrable, so they have a couple of examples to better illustrate the distinction. One involves an airplane cockpit: integrability refers to whether or not a pilot can physically fit into the cockpit and see the various instruments as well as out the window. Interoperability means that the pilot also understands what each of the instruments means. Composability means that this understanding is part of a broader grasp of a particular type of plane. How you fly a Cessna will be different from flying a 747, even if they share some of the same dials. It’s this bigger picture that makes something composable.
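In code terms, a rough analogy might look like the following sketch. This is my own illustration, not anything from the document: the message fields, schema, and behavioral contract are all hypothetical.

```python
import json

# Integrability: both sides can exchange bytes over a compatible
# protocol (here, JSON over an agreed encoding). Meaning is unknown.
wire = json.dumps({"tag": 7, "val": 451.0}).encode("utf-8")
msg = json.loads(wire.decode("utf-8"))

# Interoperability: a shared conceptual model gives the fields meaning,
# so both parties interpret 'val' the same way.
SCHEMA = {"tag": "sensor id on line 3", "val": "temperature, kelvin"}
temperature_k = msg["val"]

def safe_to_enter(temp_k: float) -> bool:
    # Composability: the consumer relies on the producer's *behavior*
    # (assumed contract: a fresh reading each second), so acting on the
    # value in a feedback loop is well-founded, not just well-formed.
    return temp_k < 320.0

print(safe_to_enter(temperature_k))  # False
```

Integrability gets the bytes across; interoperability gets the meaning across; composability lets one component depend on how another will behave.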

Later on they talk about how these notions can be made dynamic, allowing a system to learn about its own configuration and circumstances and to adapt to changes as they occur.

Of course, they don’t say how to build these in; rather, they’re raising them as issues to take into account during architectural planning.

Another interesting distinction they make gets to the familiar area of models. As programmers, we’re used to creating a model of some entity or system and then coding accordingly. The programmer understands the model, but the system itself doesn’t.

This is contrasted with ideas more closely tied to deep learning, where the system itself builds a model based on its experience. A machine-learned model may be completely different from a programmer-created model – and yet, because it’s based on the reality of experience (assuming it has been exposed to a suitably wide range of situations), it may be more realistic than the idealized model a human is likely to build. On the flip side, the machine model may be completely impenetrable to a human, unlike the programmer’s model.
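A toy contrast makes the distinction concrete. This is my own example, not the document's, and it uses a trivial statistical rule as a stand-in for learning; both "models" flag anomalous readings, but only one was written down by a human.

```python
import statistics

# Programmer-created model: an explicit, human-readable rule.
def programmed_anomaly(reading: float) -> bool:
    return reading > 100.0  # the engineer's idealized limit

# Machine-derived model: the limit comes from observed data instead.
history = [92.0, 95.0, 93.5, 94.0, 96.0, 93.0]
mu = statistics.mean(history)
sigma = statistics.stdev(history)

def learned_anomaly(reading: float) -> bool:
    # The threshold reflects experience, not stated intent -- nobody
    # can point to a line of code that says *why* the limit is what it is.
    return abs(reading - mu) > 3 * sigma

print(programmed_anomaly(98.0), learned_anomaly(98.0))
```

The two functions can disagree, and the learned one shifts as the history does; scaled up to a deep network, that opacity is exactly the flip side noted above.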

These few examples don’t begin to penetrate all of the other ideas and notions spelled out in the document. As you can see, they’re pretty high level, and you might wonder what relevance they might have to Fred in the Shed with the soldering iron. Clearly, those of you building this stuff will mostly be represented through the implementation viewpoint. Hopefully you’re also able to interact as the functional – and even the usage – viewpoint is refined into specific implementation requirements.

At the very least, it’s a useful read – I’d be willing to wager that most of you can’t get through it without encountering at least one new thing that you hadn’t thought of before.

More info:

The Industrial Internet Reference Architecture (need to enter personal info to download)
