The internet of things (IoT) is all about sensor data and communications. It involves some entity taking the data it receives, making some complex (or even simple) calculations, and then making decisions for the purposes of control or informing someone or something. Of course, there’s more than one way to do this.
The consumer IoT (CIoT) is all about sending the data – probably from your phone or wearable gadget, but, in the future, from various appliances in your home or elsewhere – up to the cloud, which acts as the brain of the system. It’s centralized and hierarchical.
We’ve seen before that the Industrial IoT (IIoT) operates differently from the CIoT. Today we’re going to dig a little deeper into how data and communication can work in the IIoT. The motivation for this was PrismTech’s announcement of their Vortex platform. It’s based on the DDS standard, which we’ve seen before in our RTI coverage, but they also happened to have a nifty whitepaper contrasting various data communication options.
They take an in-depth look at AMQP, JMS, MQTT, REST, CoAP, and DDS. Many of these were designed to be quick, low-overhead protocols for resource-constrained devices (and may be the best choice when resources are extremely tight), stemming from various vertical markets. Several have publish/subscribe capability. And there are, of course, lots of fiddly differences between them. But a couple of high-level themes caught my attention; with some clarifications from PrismTech, they seemed good fodder for discussion.
The first gets to who-gets-what-data in the grand network scheme of things. In other words, how do messages get routed to their intended recipients? And in publish/subscribe schemes, who keeps the subscription list? In most of these systems, that’s the job of a “broker.” Problem is, a broker adds risk to any mission- or safety-critical system: it’s a single point of failure with no workaround, and the entire system can come down if the broker goes south.
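To make that concrete, here’s a toy Python sketch of the broker pattern (the class and names are mine, not lifted from MQTT, AMQP, or any other real protocol): one central object holds the subscription list and relays every message, which is exactly why it’s a single point of failure.

```python
# Toy broker-based publish/subscribe. Illustrative only: the names here are
# hypothetical, not drawn from any specific protocol.
from collections import defaultdict

class Broker:
    """Central hub: keeps the subscription list and routes every message."""
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # If this object (or the box hosting it) dies, nothing flows at all.
        for callback in self._subscribers[topic]:
            callback(payload)

broker = Broker()
broker.subscribe("plant/line1/temperature", lambda msg: print("got", msg))
broker.publish("plant/line1/temperature", {"celsius": 71.4})
```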
DDS is distinguished by having no brokers; a discovery process lets subscribers figure out what publishers are out there, and they can subscribe themselves. We saw a hybrid approach to this in RTI’s use of data routers to help filter the amount of data on the network, but, even in that case, the router isn’t a point of failure for the entire network; just for the sub-network behind it.
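Here’s the same toy example reshaped without the broker. This is not the actual DDS discovery protocol (which runs over standardized built-in discovery topics, typically using multicast); it’s just the shape of the idea: peers announce what they have, find each other, and deliver directly.

```python
# Toy broker-less pub/sub with discovery. Not the real DDS discovery
# mechanism; just the shape of the idea, with hypothetical names.
from collections import defaultdict

class Peer:
    """Every node announces and discovers; there's no central router to lose."""
    _all_peers = []                              # stand-in for discovery traffic

    def __init__(self, name):
        self.name = name
        self._handlers = defaultdict(list)       # topic -> callbacks on this peer
        Peer._all_peers.append(self)

    def subscribe(self, topic, callback):
        self._handlers[topic].append(callback)

    def publish(self, topic, payload):
        # "Discovery": find every peer with a matching subscription and
        # deliver peer-to-peer, with no broker in the middle.
        for peer in Peer._all_peers:
            for callback in peer._handlers.get(topic, []):
                callback((self.name, payload))

sensor = Peer("sensor-7")
historian = Peer("historian")
historian.subscribe("pressure", print)
sensor.publish("pressure", {"bar": 2.3})
```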
The second major point is more subtle. This is the idea that most of the other protocols are message-centric, while DDS is data-centric. Exactly what that means has nagged my brain for a bit. But it gets to the role of the communication protocol with respect to data semantics.
A while back, in trying to organize my IoT thoughts, I noticed that, above (or instead of) the usual protocol suspects like TCP and IP, companies were building generic message layers. You put some data in a packet, wrap it, stamp it, put a bow on it, and it gets to its destination without any reference to what’s inside the packet. As far as the transport system is concerned, it’s simply delivering a bunch of bits. The protocols can marshal and unmarshal the data (although some protocols are less “standard” in that the sender and receiver need to agree on how the data should be packed, which hurts interoperability), but the meaning of the payload remains opaque.
How do the systems apply semantics to the data? That’s the role of a higher layer, and the sender and receiver presumably (hopefully) have consistent views of how to interpret the content of the messages. Down within the messaging system, however, there’s nothing to indicate content.
This is the essence of the message-centric system. At the message level, its sole focus is on getting the enclosed bits from here to there.
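In code, the message-centric picture looks something like this minimal Python sketch (transport_send is a stand-in for whatever protocol does the delivery): the wire format lives only in the two endpoints’ heads, never in the transport.

```python
# Message-centric in miniature: the "transport" moves opaque bytes, and only
# the endpoints know, by prior agreement, how those bytes are laid out.
import struct

def transport_send(payload: bytes) -> bytes:
    """Stand-in for MQTT/AMQP/whatever: it just delivers a bunch of bits."""
    return payload

# The sender and receiver share this agreement out of band; the transport
# never sees it.
WIRE_FORMAT = ">Id"          # unsigned 32-bit sensor ID, 64-bit float reading

wire = transport_send(struct.pack(WIRE_FORMAT, 42, 71.4))
sensor_id, reading = struct.unpack(WIRE_FORMAT, wire)
print(sensor_id, reading)    # -> 42 71.4
```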
The data-centric nature of DDS is distinct from this: there is content information in the packaging of the message; the transport isn’t neutral to the message contents. This allows the packet to identify a particular “topic” to which the message pertains. Subscribers can subscribe specifically to individual topics, and this becomes part of the discovery process.
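A sketch of the difference: in a data-centric sample, the topic and the declared type ride in the “packaging” where the middleware can see them. The field names below are mine, for illustration only, not DDS API names.

```python
# Data-centric in miniature: the envelope itself declares the topic and type,
# so the middleware can match subscriptions without opening an opaque blob.
# Field names are illustrative, not an actual DDS API.
from dataclasses import dataclass

@dataclass
class Sample:
    topic: str        # visible to the middleware, not buried in a payload
    type_name: str    # the declared data type, e.g. "TemperatureReading"
    data: dict        # structured content the middleware understands

sample = Sample(topic="plant/line1/temperature",
                type_name="TemperatureReading",
                data={"sensor_id": 42, "celsius": 71.4})

# Routing can happen on the topic alone, before anyone "opens the box."
subscriptions = {"plant/line1/temperature"}
if sample.topic in subscriptions:
    print("deliver", sample.data)
```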
The transport is also aware of the structure of the data, including various keys – not in the random keyword sense, but in the database sense. This allows devices to use SQL to filter or query what they receive, simplifying the process of extracting the useful bits from the message.
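As a rough illustration (using an in-memory SQLite table as a stand-in for a content-filtered topic; this is not a DDS API), a filter expression like celsius > 70 means a subscriber only ever receives the samples it cares about:

```python
# SQL-style content filtering, sketched with SQLite standing in for a DDS
# content-filtered topic. The filter string is the interesting part; the
# table plumbing is just illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (sensor_id INTEGER, celsius REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(41, 65.0), (42, 71.4), (43, 98.6)])   # sensor_id acts as the key

# Only over-temperature readings ever reach this subscriber.
subscriber_filter = "celsius > 70"
for sensor_id, celsius in conn.execute(
        f"SELECT sensor_id, celsius FROM samples WHERE {subscriber_filter}"):
    print("delivered:", sensor_id, celsius)
```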
The data structure itself is specified in a language-independent way using IDL (interface definition language) from CORBA, presumably the OMG-standardized one (since the OMG handles both DDS and CORBA; there are other flavors).
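For flavor, a type declaration might look roughly like the IDL in the comment below (the @key annotation is the modern IDL way to mark a key field; vendor dialects vary), and a vendor’s code generator would turn it into native types. The dataclass is just a hand-written stand-in for that generated code.

```python
# Roughly what the IDL might look like (check your vendor's dialect):
#
#   struct TemperatureReading {
#     @key long sensor_id;   // database-style key
#     float celsius;
#   };
#
# A DDS vendor's code generator emits language-specific types from the IDL;
# this dataclass is only a hand-written stand-in for such generated code.
from dataclasses import dataclass

@dataclass
class TemperatureReading:
    sensor_id: int    # the key field
    celsius: float

print(TemperatureReading(sensor_id=42, celsius=71.4))
```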
So message-based systems are akin to shipping something in a cardboard box; UPS or FedEx can’t see what’s inside. DDS is more like shipping something in clear plastic with some content fields embedded in the bar code so that, when you see it, you don’t need to open the box before knowing useful things about what’s inside.
Some of this awoke a few latent memory cells, reminding me of ITTIA, who make relational databases specifically for resource-constrained embedded systems, enabling full SQL querying of data. At first I wondered if this would be at all at odds with DDS, but they reminded me that it’s complementary: ITTIA (or any other such system) is about the persistent storage of data on a node; DDS is about communicating that data. Within DDS, the data is transient; you can even specify a lifetime for a given message such that, beyond its useful life, it’s no longer available to any late-coming subscribers.
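Here’s a minimal sketch of that lifetime idea (DDS exposes it as the “lifespan” QoS policy); the cache and method names are mine, not a real DDS API. A late-joining subscriber only sees samples that haven’t yet expired.

```python
# Per-sample lifetime, sketched. DDS exposes this as the "lifespan" QoS
# policy; the cache and names below are illustrative, not a DDS API.
import time

class TopicCache:
    def __init__(self):
        self._samples = []                        # (expires_at, payload)

    def write(self, payload, lifespan_s):
        self._samples.append((time.monotonic() + lifespan_s, payload))

    def read_for_late_joiner(self):
        now = time.monotonic()
        return [p for expires_at, p in self._samples if expires_at > now]

cache = TopicCache()
cache.write({"celsius": 71.4}, lifespan_s=0.05)
time.sleep(0.1)                          # the reading outlives its usefulness
print(cache.read_for_late_joiner())      # -> []
```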
The upshot of all of this is that DDS provides for predictable real-time sharing of large volumes of data that may be consumed by a complex network of subscribers. The data will be understood by nodes independently of the underlying transport mechanisms and the particular creators of the nodes. Which specific transport schemes are supported is determined by individual DDS systems providers. Support for IPv4/IPv6 is common now; the lower-power 6LoWPAN is also under consideration.
This means you can mix and match nodes and transport technologies as necessary to leverage legacy connections and take advantage of multiple vendors. Quality of Service (QoS) and other settings help to optimize how data and messages are prioritized and what latency they experience.
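To give a feel for the knobs involved, here’s an illustrative bundle of the kinds of policies the DDS spec defines (reliability, latency budget, transport priority, history depth). The actual policy objects and defaults depend on the vendor’s API, so treat this as a sketch, not a recipe.

```python
# Illustrative QoS bundle. The policy names echo ones in the DDS spec
# (reliability, latency budget, transport priority, history), but this is a
# plain Python sketch, not any vendor's QoS API.
from dataclasses import dataclass

@dataclass
class WriterQos:
    reliability: str = "RELIABLE"      # or "BEST_EFFORT" for lossy telemetry
    latency_budget_ms: int = 10        # hint for how long delivery may take
    transport_priority: int = 5        # relative priority on the wire
    history_depth: int = 1             # how many samples to keep per key

alarm_qos = WriterQos(reliability="RELIABLE", latency_budget_ms=1,
                      transport_priority=10)
telemetry_qos = WriterQos(reliability="BEST_EFFORT", history_depth=8)
print(alarm_qos)
print(telemetry_qos)
```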
There is further work being done to strengthen DDS and data transport in general. One big gaping hole in DDS is the lack of standardized security – which is being remedied by the OMG. According to PrismTech, a security standard has been approved by the OMG as version 1.0 beta and has entered what they call the finalization phase, which can take as long as a year. But companies are already jumping on the bandwagon to provide implementations.
I also talked with Angel Orrantia of SKTA Innopartners, an incubator/investor group, and one of the things they’re looking for is companies working to improve the structure and communication of IoT data. According to him, they’re not looking for alternatives to DDS; these would be technologies built above DDS.
My general sense of DDS is of a powerful, flexible standard. Which, of course, also translates to something more complex than an idiot-proof point-and-shoot standard. Want to have some fun? Hide a camera in your average homeowner’s abode and then deliver them a home automation system that requires them to set up and configure DDS details, SQL filters and all. Good rollicking fun for the entire family! Which is why you’re going to find this in a factory, not someone’s home. (Even if it’s an engineer’s home, DDS would presumably be massive overkill for that application.)
With respect to PrismTech’s recent Vortex announcement, they have a number of different flavors optimized for different platforms. They have Vortex Café for Java virtual machines; Vortex Cloud for cloud instantiation; Vortex Lite for small, resource-constrained devices; and Vortex Web for browsers. The one oddball in the family is a result of the fact that PrismTech has actually been doing this for a long time – in the enterprise space. They have an existing product called OpenSplice that’s well-established – so much so that they wanted to keep that brand rather than naming it something like Vortex Enterprise.
Different DDS vendors will have different specific capabilities, so it bears comparing details. Examples PrismTech notes in their favor are dynamic discovery (vs. static configuration of routers) and the ability to do multi-hop routing rather than simply single-hop. RTI says they also support both of those features. Of course, such features may come at the expense of something else (code footprint, power, cost, who knows…) so the selection process could work both ways.
More info:
How do you see DDS as distinct from other protocols? Or do you see it as that distinct?