I’ve read with great attention the ten challenges that Stefan Ferber, in his Bosch blog, indicates as primary for IoT. You can find it at
While he undoubtedly touches on very important topics, and I agree with a number of them, I think the absence of a clear taxonomy for the challenges is rather confusing, as it mixes different domains. The way these challenges are laid out suggests that Stefan’s ten points are more a starting list for a real brainstorming on the topic, meant to influence future roadmaps, than a final list.
Any innovation touches four different and complementary fields, listed here in no particular order.
2- Business models
In the Technology area, at EPoSS we identified, a few years back, nine different axes along which considerable advances were expected to take place before a full IoT vision could be implemented. These axes are:
2- Harsh Environments
3- High Quality systems
5- Self-Organising systems
7- Zero Entropy
8- Integration in non-standard materials
While some points are clearly similar to Stefan’s, if not the same, our accent was rather on the capabilities of the smart object itself than on the effects that a fully-fledged IoT will bring. Nobody denies that zillions of devices will produce a data deluge; but if we cannot package “smartness” into devices, if we cannot power that smartness, or if the device simply fails because it cannot resist external conditions, then we won’t have any data coming at all.
Stefan’s most controversial point seems to be the “big code” one. His explanation brings this point close to the high-quality systems we identified. In logistics, for instance, a requirement could easily be at least 99.999% (“five nines”) reading precision. If a palette passes a gate, companies cannot afford to re-check every single item in the palette in case of a mis-reading, except very seldom: every second check means wasted time and additional costs. Still, there will always be issues; 100% precision systems do not exist (and never will), as any systems engineer knows (or at least should), and must be dealt with. State machines are therefore developed to default to harmlessness.
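A minimal sketch of what “default to harmlessness” can mean in practice. The states, events, and gate scenario here are entirely hypothetical, not taken from any real logistics system:

```python
from enum import Enum, auto

class GateState(Enum):
    IDLE = auto()
    READ_OK = auto()
    READ_FAILED = auto()
    HOLD_FOR_MANUAL_CHECK = auto()  # the harmless default state

# Hypothetical transition table for a reading gate. Any (state, event)
# pair not listed here falls through to the safe state instead of
# raising an error or guessing.
TRANSITIONS = {
    (GateState.IDLE, "tag_read"): GateState.READ_OK,
    (GateState.IDLE, "read_error"): GateState.READ_FAILED,
    (GateState.READ_FAILED, "retry_ok"): GateState.READ_OK,
}

def next_state(state, event):
    # Unknown combinations default to a harmless hold, so a mis-reading
    # never lets a palette through unverified.
    return TRANSITIONS.get((state, event), GateState.HOLD_FOR_MANUAL_CHECK)
```

The point is not the table itself but the default in the last line: the machine’s failure behaviour is decided once, explicitly, and it errs on the safe side.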
Stefan in an answer clarifies “big code” saying that
“our test assumption in Computer Science today is that code provided by others is error free. And that is simply not true.”
I have to disagree with this. In my coding days, I never assumed that someone else’s code was error-free; that is usually the best recipe for disaster. I always checked inputs, preconditions, and so on, and always provided some way to exit the current execution “gracefully” if something went wrong, even outside the scope of my own code. As far as I know, this is quite normal practice for network programmers.
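In code, that defensive habit looks something like the sketch below. The function name and the sensor value range are made up for illustration; the point is the pattern of validating untrusted input and failing gracefully:

```python
def read_sensor_value(raw):
    """Parse a reading coming from someone else's code or device.

    Never assume the input is well-formed: validate it, and fail
    gracefully with a sentinel instead of crashing the caller.
    """
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return None  # graceful exit: malformed input, caller decides
    if not (-40.0 <= value <= 125.0):  # hypothetical plausible range
        return None  # graceful exit: syntactically valid but absurd
    return value
```

The caller then handles `None` explicitly rather than discovering, three layers up the stack, that somebody else’s code lied.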
So, if anything, I would rephrase his point as “Failure modes have to be simple and clearly published”, which, in the EPoSS view, belongs to the self-organising area. Any environment of different smart objects must be able to manage itself, including, of course, when something goes wrong. To give an example, in my opinion the IP protocol has proven so successful because of its very straightforward failure mode: either an IP packet reaches its destination or it doesn’t. And if not, it simply doesn’t; there is no ambiguous in-between.
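That best-effort failure mode can be seen in miniature with a UDP datagram, which rides directly on IP’s delivery semantics. The host and port below are placeholders for illustration:

```python
import socket

def send_report(payload: bytes, host="127.0.0.1", port=9999):
    """Fire-and-forget send: the datagram either arrives or it doesn't,
    and the sender is not notified either way. No ACK, no retry.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    # The failure mode is simple and published: if the datagram is
    # lost, it is simply lost. Anything stronger (retransmission,
    # ordering) is the caller's business, layered on top.
```

This is exactly the kind of failure mode a smart-object environment can reason about and self-manage around, because there is only one thing that can go wrong and everyone knows what it is.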
Therefore, high-quality systems are those that can cope with (hopefully rare) failures in a graceful way, not those that avoid ANY failure.
2- Business Models
This is perhaps the single most important issue of IoT at present. Everybody talks about it; nobody knows exactly how to make money with it, and those who do know may be trying to kill the baby in its cradle. Looking back at 1980, it’s impressive to see how much the list of the top 100 companies by capitalisation has changed. The Internet was a real game-changer, sending long-established companies out of business in just a few years. Mobile is on the same track: we need only think of Nokia to see how the explosion of smartphones changed the shape of the market in less than ten years.
Now, we can expect IoT to provoke a similar, if not bigger, revolution. Clearly, many of the current big fish will try to tame the wave, or to stop it altogether, whether out of calculation or out of ignorance. I know of countless episodes of companies refusing to understand a new way of doing business, usually at their peril.
I suggested some time ago that being able to trace a device can move the world towards a leasing economy rather than a consumption one. Generally, I don’t need to “buy” a tool; I need the service that the tool can provide, and only from time to time. When I don’t use my tool, I just have an unused asset. If I could lend my tool and bill its usage precisely, I would make much smarter use of my asset; conversely, I could borrow one and get the same result at a fraction of the price I pay today. I asked Stefan some time ago how long a drill is used on average over its lifetime, and he told me it was less than a minute. A drill costs 100+ euro; just think of the cost per hole …
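A back-of-the-envelope calculation makes the point concrete. Only the “less than a minute of lifetime use” figure comes from Stefan; the drill price and the seconds per hole are my own assumptions:

```python
drill_price_eur = 100.0   # "a drill costs 100+ euro"
lifetime_use_s = 60.0     # "less than a minute" of total use (Stefan)
seconds_per_hole = 5.0    # assumption: ~5 s of actual drilling per hole

holes_in_lifetime = lifetime_use_s / seconds_per_hole  # 12 holes
cost_per_hole = drill_price_eur / holes_in_lifetime    # ~8.33 EUR/hole
print(f"~{cost_per_hole:.2f} EUR per hole")
```

Roughly eight euro a hole for an owned drill; a leased or shared drill, billed by actual use, would undercut that by an order of magnitude.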
Clearly this model cannot be implemented on every single object (take the toothbrush as a counter-example), but we can go a long way towards it.
People say that a technology is ultimately successful when it enters the fabric of everyday life. Nobody today leaves home without their mobile phone without a sense of “nakedness”. When anyone needs information about something, they google it.
But what happens when the fabric itself has a much longer lifespan than the technology it’s supposed to host? I can put a sensor on any chair in an office: for asset management, for air-conditioning optimisation, for monitoring people’s presence, whatever. Now, the technology of the sensor itself will probably be obsolete in 18 months; the chair might be there for 18 years. Will we be able to develop legacy systems spanning several decades of technological development? Will companies push for extreme consumerism in traditionally conservative domains, such as furniture? I might change my mobile phone every two years, but I don’t change my air-conditioning unit or all my shirts that often. My drill is 20 years old, it still works pretty well, and I have no plans to change it in the foreseeable future.
New adoption models will then need to be developed, likely together with new business models, especially if these are based on the leasing of services rather than on the possession of tools. But will a company (like Bosch) be happy to sell only one drill instead of 10,000, and then develop a leasing scheme where a drill is used by an entire community for a couple of years, then taken out of service (and then dismantled and hopefully recycled)? And how can this switch from an economy based on buying goods to one based on leasing services happen?
The issue with IoT governance is that there are currently long-established institutions that take care of specific parts of the big “IoT” picture, and nobody seems very willing to compromise. When the Internet started, there was a clean slate, so IP addresses, for instance, could be allocated by a single authority without any problems. As IoT will span totally different activity and technology domains, however, the same will not apply.