BIG DATA IN ROBOTICS

 

Speed sensors, accelerometers, gyroscopes, visual and tactile sensors are only a partial list of the devices that can operate simultaneously on a robotic platform, generating a continuous data flow 24 hours a day, 7 days a week. Sensor sampling rates usually lie in the range from tens to hundreds of hertz, and the amount of generated data can reach hundreds of megabytes an hour, and this estimate does not yet take possible machine vision subsystems into account. If the system uses elements of machine vision, the generated data can reach an enormous volume, up to tens of terabytes an hour.
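As a back-of-the-envelope illustration, the sketch below estimates the hourly data volume for a small set of low-rate sensors and for a single uncompressed camera stream. All sensor counts, sampling rates and sample sizes are illustrative assumptions rather than figures for a specific platform; a single 1080p stream already approaches a terabyte per hour, and several high-resolution, high-frame-rate cameras push the total further towards the volumes mentioned above.

    def hourly_volume(rate_hz, bytes_per_sample, count=1):
        """Raw data volume in bytes generated per hour of continuous operation."""
        return rate_hz * bytes_per_sample * count * 3600

    # Low-rate sensors: IMU (accelerometer, gyroscope, magnetometer), wheel
    # encoders and a small tactile array -- all values are illustrative.
    imu      = hourly_volume(rate_hz=200, bytes_per_sample=36)          # 9 float32 channels
    encoders = hourly_volume(rate_hz=100, bytes_per_sample=8, count=4)  # 4 wheels
    tactile  = hourly_volume(rate_hz=100, bytes_per_sample=512)         # 16x16 cells, 2 bytes each

    print(f"Low-rate sensors: {(imu + encoders + tactile) / 1e6:.0f} MB/hour")   # ~222 MB/hour

    # Vision subsystem: one uncompressed 1080p RGB camera at 30 frames per second.
    camera = hourly_volume(rate_hz=30, bytes_per_sample=1920 * 1080 * 3)
    print(f"Single raw camera stream: {camera / 1e12:.2f} TB/hour")              # ~0.67 TB/hour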

 

Robotics is inseparably linked to data, and data is inseparably linked to robotics. On the one hand, the capabilities of a platform depend on the readiness of its algorithms to process large volumes of data; on the other hand, the readiness of those algorithms defines the hardware structure for a particular application. In other words, processing large amounts of data is not a new concept in robotics: robotics and Big Data have long coexisted and influenced each other.

 

With the development of Deep Learning techniques, the handling of large volumes of data turns from an opportunity into a requirement: the Deep Learning approach relies on a large number of free parameters, which in turn demands a large and continuous volume of input data. The amount, complexity and adequacy of the input data flow, together with the architecture of deep neural networks, make it possible to create adaptive control systems that function effectively in complex, continuously changing environments.
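To make the phrase "large number of free parameters" concrete, the short calculation below counts the weights and biases of a small fully connected network; the layer sizes are arbitrary example values, yet even this tiny model already has roughly half a million parameters that must be fitted from data.

    # Illustrative count of the free parameters (weights and biases) in a small
    # fully connected network; the layer sizes are arbitrary example values.
    layers = [128, 512, 512, 256, 10]            # input, three hidden layers, output

    params = sum(n_in * n_out + n_out            # weight matrix plus bias vector
                 for n_in, n_out in zip(layers, layers[1:]))
    print(f"Free parameters: {params:,}")        # ~460,000 even for this tiny network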

 

The structure and volume of the data flows, as well as the hardware capabilities of the platform, define the design approach and the future functionality of the system. Today three basic approaches to system design can be highlighted:

 

Fig. 2.1 introduces the concept in which all sensor data is processed and stored entirely within the hardware platform. The advantage of this concept is that the platform can function in complete isolation, without any external infrastructure. It also makes it possible to minimize data-flow latencies between system elements, which is important for fast regulation.
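The following minimal sketch shows the shape of such a fully on-board loop: sensing, processing and logging all stay on the platform, and no network link is involved. The read_sensors() and apply_control() functions are hypothetical placeholders for the platform's own driver layer.

    import time

    def read_sensors():
        # Placeholder for the platform's own sensor drivers (hypothetical).
        return {"t": time.time(), "accel": (0.0, 0.0, 9.81)}

    def apply_control(command):
        # Placeholder for the platform's own actuator drivers (hypothetical).
        pass

    def control_step(sample):
        # All processing stays on the embedded computer; no network link involved.
        return {"torque": 0.0}

    with open("onboard_log.csv", "a") as log:     # data is stored locally as well
        for _ in range(1000):                     # e.g. 10 s of operation at 100 Hz
            sample = read_sensors()
            command = control_step(sample)
            apply_control(command)
            log.write(f"{sample['t']},{command['torque']}\n")
            time.sleep(0.01)                      # ~100 Hz control loop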

 

Fig. 2.2 introduces the concept in which the main data processing takes place on a computer located in relative proximity to the robotic platform, while the embedded controller on the platform executes only input-output operations, filtering, data transfer and fast regulation. Transmission between the computer and the platform is subject to time delays that usually do not exceed 50 ms in each direction; for mobile platforms a wireless connection is often used. With this approach it is possible to retain smooth motion control while performing extensive data processing on a stand-alone computer station. It should be noted that this approach is sometimes used even when the hardware on the robotic platform has sufficient computing power for standalone operation; in that case the emphasis is on improving the software development process.
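The sketch below illustrates the embedded side of this split: the fast regulation loop stays local, while cheaply pre-filtered samples are forwarded over the local network to an assumed workstation address. UDP and a simple moving-average filter are illustrative choices here, not something prescribed by the architecture.

    import json
    import socket
    import time

    WORKSTATION = ("192.168.1.50", 9000)    # assumed address of the nearby computer
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    window = []

    def fast_regulation(sample):
        # Time-critical regulation stays on the embedded side (well under 50 ms).
        pass

    def filtered(sample, size=5):
        # Cheap moving-average pre-filtering before the data leaves the platform.
        window.append(sample)
        if len(window) > size:
            window.pop(0)
        return sum(window) / len(window)

    for _ in range(1000):
        raw = 0.0                            # placeholder for a real sensor reading
        fast_regulation(raw)
        payload = {"t": time.time(), "value": filtered(raw)}
        sock.sendto(json.dumps(payload).encode(), WORKSTATION)
        time.sleep(0.01)                     # ~100 Hz forwarding loop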

 

Fig. 2.3 introduces the concept of cloud computing, in which all data is sent to a cloud processing center located remotely from the robotic platform. In this case transmission delays reach hundreds of milliseconds, which unfortunately rules out direct motion control, but it opens access to virtually unlimited, scalable computing resources and to the creation of high-quality technical diagnostics systems based on deep data analysis. In addition, this concept allows data to be received and processed simultaneously from multiple platforms connected to the cloud. This concept is being widely developed under the name Industrial Internet of Things (IIoT).
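The sketch below shows one possible telemetry path from a platform to a cloud service: samples are batched and uploaded over HTTP, so round-trip latencies of hundreds of milliseconds only delay diagnostics and logging, not control. The endpoint URL, platform identifier and payload schema are hypothetical.

    import json
    import time
    import urllib.request

    CLOUD_ENDPOINT = "https://example.com/iiot/telemetry"   # hypothetical service URL
    PLATFORM_ID = "robot-017"                               # hypothetical identifier

    def upload(batch):
        # Latency of hundreds of milliseconds is acceptable here: the link carries
        # diagnostics and logging data, not motion control commands.
        request = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=json.dumps({"platform": PLATFORM_ID, "samples": batch}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=5) as response:
            return response.status

    batch = []
    for _ in range(600):                      # ~1 minute of telemetry at 10 Hz
        batch.append({"t": time.time(), "motor_temp_c": 36.5})   # placeholder sample
        if len(batch) >= 100:                 # batch uploads to tolerate the latency
            upload(batch)
            batch.clear()
        time.sleep(0.1)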

Fig. 1. Garbage bins and Big Data. Photo accidentally taken in Bangalore, India, in 2016.

Fig. 2.1. Data processing entirely on board the platform.

Fig. 2.2. Data processing on a nearby stand-alone computer.

Fig. 2.3. Cloud-based data processing (Industrial Internet of Things).