Recently we held a webinar on the topic of hot and cold data. Here, our experts respond to the additional questions we received after the live stream concluded.
In addition to the recorded webinar on hot and cold data, you can find even more details in our new guide “Data-as-an-Asset: tips for effective design of IoT solutions”.
Most of this is about lifecycle management, and in this case hardware may not be a problem. Training and updating machine learning models in the cloud is likely not a problem either. The real challenge is how to deploy the models to the edge.
To be successful, you need to put constraints on the machine learning models in the cloud, where you train and design them, so that they are directly deployable and do not run into compatibility or operational limitations on the device.
One way to avoid this is to take the upfront cost of sending real-time data to the cloud, let the cloud host the machine learning model, and respond to the device with the predicted outcome. The downside is that this mainly works for always-on devices.
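This request/response pattern can be sketched in a few lines. The function names, payload fields, and the toy threshold model below are all illustrative assumptions, not part of any real cloud service API:

```python
import json

def predict(features):
    """Stand-in for a model hosted in the cloud, e.g. behind an inference endpoint.
    Toy rule: flag readings whose temperature exceeds a threshold."""
    return {"anomaly": features["temperature"] > 80.0}

def handle_device_message(raw_message):
    """Hypothetical cloud entry point: parse the device payload, run the model,
    and return the predicted outcome to the device."""
    features = json.loads(raw_message)
    prediction = predict(features)
    return json.dumps(prediction)

# The always-on device streams a real-time reading and receives the outcome.
response = handle_device_message(json.dumps({"temperature": 91.5}))
print(response)  # {"anomaly": true}
```

The key design point is that the model lives only in the cloud, so the device needs no ML runtime at all; it just pays the traffic cost of the round trip.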
In general, I would consider data communication to be cheaper than investing in edge processing capabilities, once you factor in the overall lifecycle management challenges that come with edge computing. If you ignore the lifecycle management of your edge, it may look cheaper to pay more for your onboard hardware than for the real-time traffic costs. But as a consequence, you will most likely still need to pay traffic costs to update the onboard models, and you will have long and complicated release and deployment cycles.
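The trade-off can be made concrete with a simple total-cost-of-ownership calculation. All figures below are made-up assumptions purely for illustration; the point is that the edge option still carries per-device traffic and lifecycle costs, not just the upfront hardware:

```python
def total_cost(upfront_hw, monthly_traffic, monthly_lifecycle, months):
    """Total cost of ownership per device over its lifetime (illustrative)."""
    return upfront_hw + (monthly_traffic + monthly_lifecycle) * months

# Edge option: capable onboard hardware, little streaming traffic, but you
# still pay traffic for model updates plus longer release/deployment cycles.
edge = total_cost(upfront_hw=60.0, monthly_traffic=0.5, monthly_lifecycle=1.5, months=48)

# Cloud option: cheap hardware, higher real-time traffic, minimal per-device
# lifecycle overhead since updates happen in one place.
cloud = total_cost(upfront_hw=15.0, monthly_traffic=1.2, monthly_lifecycle=0.2, months=48)

print(edge, cloud)
```

With these assumed numbers the cloud option comes out cheaper over a 48-month lifetime, even though its monthly traffic is higher; your real figures will of course differ.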
Cloud-based computing is much more cost-efficient and scalable, and you only need to update or deploy in one place rather than push out updates to every single device in your fleet. So the cloud should be the default option for most cases.
However, there are circumstances where you cannot rely on being connected all the time, e.g. in the mining industry, and then edge processing may be the only way. There are of course other use cases, such as self-driving cars, where ultra-low latency makes edge your only option. However, most use cases only need near real-time (200 ms – 2 minutes), especially if there is a human involved somewhere in the loop.
In our IoT terminology guide you can find explanations for IoT-related terms such as IoT communications and protocols, as well as IoT connections.
If I had to pick only one thing, it would be balancing trade-offs. Make your decisions so they fit your first use case, but ensure that you have enough capability to add use cases 2, 3 and 4 onto the same IoT solution stack without having to redesign it completely or make costly replacements of solution components.
This is a complicated question to address; a full answer would require an essay. There is a significant long-term cost to a lock-in approach, but the upside is that you can likely get to production fast.
We have worked with real-time streaming data at Telenor Connexion for more than five years, and I can highly recommend the AWS cloud. One example is AWS IoT Greengrass (edge) and Amazon SageMaker (ML), which solve many problems for you when you move to production, and the upfront cost is low (pay as you go).
For generic streaming and analytics use cases, most technologies are valid – we use the Kinesis family from AWS. A key thing to consider here is lifecycle management and the license model. Historically, I have come across several excellent and innovative streaming technologies that failed to deliver a scalable license model. If the license model is poor, it will kill innovation, because it will be too expensive to experiment. So stay alert about the business model.
I agree that there is a buzz, or hype, around edge. However, there is one solid case for edge: when you cannot guarantee connectivity.
You need to balance this trade-off: in some cases it may be fine that the automated business logic does not work for a short period of time and instead raises an alert at a later stage, when connectivity is re-established.
The self-driving car is the exception, but I think that most use cases of IoT can be done much more cost-effectively and in more scalable ways by moving the heavy computing to the cloud.
Yes (and no). I would like to point out that the concept of real-time is subjective (for some it is milliseconds, for others it is minutes). However, the concept of hot can be used in a slightly different context, as in the fraud-alert case. There you do not have real-time data that is constantly processed or sent to the central systems; data is only sent when an event is triggered. But when the event is triggered, it behaves much like a real-time event. So such a solution changes its data characteristics (real-time/hot/cold) depending on the context.
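The fraud-alert pattern described above can be sketched as follows. The class name, threshold, and message shape are illustrative assumptions; the point is only that the same reading is treated as cold or hot depending on whether it trips the trigger:

```python
class FraudMonitor:
    """Sketch of context-dependent data temperature: readings stay cold
    locally until a trigger fires, then one reading is forwarded as a
    hot, real-time-like event to the central systems."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.cold_buffer = []   # stays on the device / in local storage
        self.hot_events = []    # stand-in for messages sent centrally

    def observe(self, amount):
        if amount > self.threshold:
            # Trigger: this single reading now behaves like a real-time event.
            self.hot_events.append({"alert": "possible fraud", "amount": amount})
        else:
            self.cold_buffer.append(amount)

monitor = FraudMonitor(threshold=1000)
for amount in [12, 40, 7500, 30]:
    monitor.observe(amount)

print(len(monitor.cold_buffer), len(monitor.hot_events))  # 3 1
```

Only the triggering transaction ever leaves the device, so the solution avoids constant real-time streaming while still reacting hot when it matters.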
The capabilities of the hardware device should be considered closely, because hardware is usually costly or practically impossible to upgrade once deployed in the field. Onboard software can be updated remotely, so you can change the behaviour but not the hardware constraints.
If you look more at the data processing and automation, to be really successful you should design the solution with a lifecycle management perspective from the start. This means you can fix security issues and change configurations and behaviour. But it also matters from a customer lifecycle perspective: changing the owner of the device over time requires both identity management and historical data to be part of the design.
One such example is that you don’t want your second-hand connected home alarm to keep videos from the previous owners, either on the edge or in the provider’s data lake, once you are no longer a customer.
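The ownership-transfer case can be sketched like this. The record model and method names are hypothetical, not a real provider API; the point is that changing the owner and purging the previous owner's history are designed as one atomic step:

```python
class DeviceRecord:
    """Hypothetical backend record for a connected device."""

    def __init__(self, device_id, owner):
        self.device_id = device_id
        self.owner = owner
        self.history = []  # e.g. stored videos / sensor data in the data lake

    def transfer_ownership(self, new_owner):
        """Change the owner and purge the previous owner's historical data,
        so the second-hand device starts with a clean slate."""
        self.history.clear()
        self.owner = new_owner

alarm = DeviceRecord("alarm-42", owner="alice")
alarm.history.extend(["video-001", "video-002"])
alarm.transfer_ownership("bob")
print(alarm.owner, alarm.history)  # bob []
```

In a real solution the same purge would also have to reach the edge device and any backups, which is exactly why identity management and historical data need to be part of the design from the start.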