Bringing compute resources to the edge means reducing the amount of data that must be sent up to the cloud. For many data-intensive applications, such as video surveillance, natural speech recognition and health-related monitoring, processing data as close to the source as possible matters for both efficiency and security.
The major pitfall with edge computing, as it stands today, is not a lack of available data but a lack of understanding of how, where and when to use it. Enterprises are starting to recognize that data sometimes has more value staying close to where it is generated than being collected centrally.
So why does certain data need to stay at the edge?
1. It’s big. Bandwidth is finite: networks need to be kept clear of the large volumes of data that would otherwise be sent up to the cloud.
2. It’s timely. Speed of reaction is important: low latency allows a rapid response because the data is processed where it sits. Imagine the reaction time required for the brakes of a self-driving car, for example.
3. It’s valuable. Security and privacy concerns are largely alleviated by keeping data processing as local to the data source as possible, which prevents massive amounts of potentially hackable personal data from being stored in the cloud.
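The bandwidth point above can be made concrete with a minimal sketch of edge-side data reduction. All names here are hypothetical illustrations, not a real product API: an edge node aggregates a window of raw sensor readings locally and ships only a compact summary upstream, so the raw data never leaves the device.

```python
# Minimal sketch of edge-side data reduction (illustrative only).
# Instead of streaming every raw reading to the cloud, the edge node
# aggregates locally and sends only a small summary upstream.

from statistics import mean

def summarize_window(readings, threshold):
    """Reduce a window of raw sensor readings to a small summary dict.

    Only this summary (a handful of numbers) leaves the edge; the raw
    readings stay local, saving bandwidth and keeping sensitive data
    out of the cloud.
    """
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alerts": sum(1 for r in readings if r > threshold),
    }

# A simulated window of temperature readings from a local sensor.
raw = [20.1, 20.3, 35.7, 20.2, 20.0, 36.1]
summary = summarize_window(raw, threshold=30.0)
print(summary)  # only this dict would be transmitted to the cloud
```

Here six raw readings collapse into four numbers, and the same idea scales: a camera might send event metadata instead of video frames, and a medical monitor might send alerts instead of continuous waveforms.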
What is the power of data in a smart structure? Trust in data is now essential to effective digital transformation, and a single verifiable source of truth is critical to sharing and collaborating on data sets. We are not getting into a blockchain discussion here, but it is imperative that data can be reliably sourced. Leaving it where it is, as a single source that can be both dynamic and real-time, is important.
I believe that data-driven implementations of edge as a service (EaaS) will be important this year, especially in mission-critical production activities, whether in oil and gas, machine-assisted surgery in healthcare, or any activity where immediate data inputs affect automation-assisted performance.