Dr. Tilman Buchner, Berlin Director of Engineering, discusses why the centralized, cloud-based system architecture of the social media giants is an obsolete model in the world of the Internet of Things.

With more than 130 million followers on Instagram and 56 million on Twitter, Selena Gomez is one of the most popular celebrities and social media stars on Earth. For influencers like Gomez, social media offers a captive audience: 2.9 billion daily active users across the globe who comment on, like, and share photos and videos with their loved ones.

The number of users continues to grow at a staggering rate of more than one million per day. Facebook alone registers an incoming data stream of 600 TB per day in its Hive data warehouse, the equivalent of 600 commercially available 1 TB hard disks filled to capacity. And it is not just Facebook: Instagram, Line, Snapchat, Tencent, VKontakte, WeChat and WhatsApp each process 10–100 TB of data every day. The list of available platforms is exhausting, and yet nowhere near exhaustive.

Technically speaking, the challenge is not the incoming data stream itself, but its distribution and synchronization to followers around the world. To make content (3D gaming, videos, photos) instantly available worldwide, companies like Google, Microsoft, Facebook and Amazon have invested in their own fiber-optic cable infrastructure (see the MAREA project, running from Virginia Beach to Bilbao) and in multiple server farms around the globe to replicate their data. According to Geoff Bennett, Director of Solutions and Technology at Infinera, keeping all of these data centers synchronized consumes more bandwidth than public internet traffic does.

In the world of the Internet of Things (IoT), the challenge is not so much the distribution and synchronization of data as communication that is uninterrupted and free of delays (in terms of both latency and bandwidth). For this, however, the centralized, cloud-based system architecture of the social media giants is an obsolete model.

Even if we could cover the planet with fiber-optic cable, through which signals travel at roughly 200,000 km/s, it would still take 91 milliseconds to get a response from a data center in San Francisco if you were located 9,105 kilometers away in Berlin, for example. For IoT applications in home automation (e.g. the connected kitchen or washing machine) or wearables (smartwatches, health trackers, etc.), this delay is not a problem.

In the context of industrial applications, however, 91 ms is a considerable amount of time. A self-driving car travelling at 50 km/h, for example, would cover about 1.25 meters in that interval, a not inconsiderable distance which, in some cases, could mean the difference between life and death. The self-driving cars of the future are therefore equipped with their own computing capacity and memory (e.g. NVIDIA DRIVE PX) in order to process the large volumes of data from their optical sensor systems (LIDAR) locally. In this way, the car becomes a data center on wheels.
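Both figures can be checked with a quick back-of-the-envelope calculation. The sketch below assumes the roughly 200,000 km/s signal speed in fiber quoted above and a straight-line route:

```python
# Back-of-the-envelope check of the latency and distance figures above.

SIGNAL_SPEED_KM_PER_S = 200_000  # approximate speed of light in optical fiber
DISTANCE_KM = 9_105              # Berlin to San Francisco

# A response requires a round trip: the request travels there, the answer back.
round_trip_s = 2 * DISTANCE_KM / SIGNAL_SPEED_KM_PER_S
print(f"Round-trip latency: {round_trip_s * 1000:.0f} ms")  # ~91 ms

# Distance a car travelling at 50 km/h covers during that delay.
speed_m_per_s = 50 / 3.6
print(f"Distance covered at 50 km/h: {speed_m_per_s * round_trip_s:.2f} m")  # ~1.26 m
```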

Waiting for a response from The Cloud is not an option in a future world of billions of internet-connected devices. The move into the Industrial IoT (IIoT) world will reinforce this trend: compared with the consumer IoT world, IIoT involves data volumes at least one order of magnitude larger and requires processing speeds up to two orders of magnitude higher.

Alongside the self-driving car, typical IIoT applications include predictive maintenance solutions designed to anticipate the failure of machine components. The volume of data generated often poses a challenge to the IT infrastructure of manufacturing companies. The main drivers of this volume are the high sampling rate (e.g. 1,000 Hz) and the need to store measurement results as precisely as possible (a double-precision floating-point number, with roughly 16 significant digits, occupies 8 bytes).

A machine tool fitted with 60 sensors for condition monitoring of temperature, vibration and lubrication, for example, generates up to 27 GB of raw data in a two-shift (16 h) operating mode when sampled at 1,000 Hz (one sample per millisecond).
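That figure follows directly from the sampling parameters above; a minimal sketch, assuming each reading is stored as an 8-byte double:

```python
# Rough estimate of the raw data volume from the machine-tool example.

SENSORS = 60
SAMPLE_RATE_HZ = 1_000     # one sample per millisecond, as above
BYTES_PER_SAMPLE = 8       # double-precision floating-point number
SHIFT_SECONDS = 16 * 3600  # two-shift operation (16 h)

total_bytes = SENSORS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SHIFT_SECONDS
print(f"Raw data per 16 h: {total_bytes / 1e9:.1f} GB")  # ~27.6 GB
```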

“If your factory is fitted with a 100 Mbit/s data upstream and 60% of your network capacity is reserved for security and overhead communication, you can connect up to 10 machine tools before your bandwidth capacity is exhausted,” said Dr. Markus Obdenbusch, Chief Engineer at the WZL of RWTH Aachen University.
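The quoted limit is consistent with the machine-tool example above; a sketch, assuming each tool streams its 60 sensors at 1,000 Hz with 8 bytes per sample:

```python
# Sanity check of the bandwidth estimate in the quote above.

UPSTREAM_MBIT_S = 100
USABLE_FRACTION = 0.4  # 60% is reserved for security and overhead traffic

# One machine tool: 60 sensors x 1,000 Hz x 8 bytes x 8 bits/byte.
tool_mbit_s = 60 * 1_000 * 8 * 8 / 1e6  # 3.84 Mbit/s
usable_mbit_s = UPSTREAM_MBIT_S * USABLE_FRACTION

print(f"Per-tool upstream: {tool_mbit_s:.2f} Mbit/s")
print(f"Machine tools supported: {int(usable_mbit_s // tool_mbit_s)}")  # 10
```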

A new hybrid IT system architecture that combines the power of The Cloud with computing power and storage at the edges of the network remedies this. The role of The Cloud will shift from central decision-maker to backup and optimization, while simple single-board controllers (e.g. the Qualcomm® Snapdragon™ 410E SoC) running at The Edge take over data preprocessing and local decision-making. As a result, only a fraction of the data needs to be sent to The Cloud, and problems of latency and insufficient bandwidth will be a thing of the past.
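To illustrate the idea (not a description of any specific product), here is a minimal sketch of edge-side preprocessing: the controller condenses each window of raw samples into a small summary and makes the alarm decision locally, so only a handful of values, rather than every sample, travels to The Cloud. The threshold and field names are illustrative assumptions:

```python
# Minimal sketch of edge-side preprocessing and local decision-making.

from statistics import mean, pstdev

VIBRATION_LIMIT = 4.5  # hypothetical alarm threshold (mm/s)

def summarize(window: list[float]) -> dict:
    """Condense one second of raw samples into a compact summary."""
    summary = {
        "mean": mean(window),
        "std": pstdev(window),
        "peak": max(window),
    }
    # Local decision: flag the window instead of streaming every sample.
    summary["alarm"] = summary["peak"] > VIBRATION_LIMIT
    return summary

# 1,000 raw samples (8 kB as doubles) shrink to four values sent upstream.
window = [0.1 * (i % 50) for i in range(1_000)]  # stand-in sensor readings
print(summarize(window))
```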

These examples illustrate the strong need for Chief Information Officers (CIOs) to rethink their enterprise IT architecture: both to manage the ever-growing volume of data and to lay the foundations for future digital value-added services.

Read more from Dr. Tilman Buchner on Edge Computing here