
Approaching the digital future

IoT and the digitization of production allow many companies to design their processes more efficiently and to examine them in more detail using timely data. In the following paragraphs we describe solutions that let you lay the data foundation for the digital future today.

First of all, the data foundation has to be laid. It generally consists of geometrical data, dimensions, and the interactions and connections between and amongst installations, buildings, associated facilities, sensors, meters and so on.
All relevant objects, be they buildings, rooms, installations or installation parts, are connected logically to one another. These relations can be shown alphanumerically via tree views and networks, or graphically in 2D as plans and/or in 3D as models. The latter is normally called a BIM model. By representing all relevant objects – even if no BIM data is available – we create a digital twin of reality. This digital twin has a life of its own: all basic data, as well as all relations that may change over time, are logged. Thus the user knows precisely when which data was valid and which objects interacted with one another.
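The idea of logging object relations over time can be sketched as follows. This is a minimal illustration, not WiriTec's actual data model; the class and field names (`DigitalTwin`, `Relation`, `valid_from`, `valid_to`) are assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Relation:
    """A time-stamped logical link between two objects of the digital twin."""
    source: str
    target: str
    valid_from: datetime
    valid_to: Optional[datetime] = None  # None means the relation is still valid

class DigitalTwin:
    """Minimal registry of objects and their logged, time-dependent relations."""
    def __init__(self):
        self.objects = set()
        self.relations = []

    def add_object(self, obj_id: str):
        self.objects.add(obj_id)

    def link(self, source: str, target: str, when: datetime):
        self.relations.append(Relation(source, target, when))

    def relations_at(self, when: datetime):
        """Return the relations that were valid at a given point in time."""
        return [r for r in self.relations
                if r.valid_from <= when
                and (r.valid_to is None or when < r.valid_to)]
```

Because every relation carries a validity period, a query such as `relations_at(...)` can reconstruct which objects interacted with one another at any historical point in time.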

Digitization also makes machinery and installation data available, in addition to the relatively stable basic data and object relations described above. This is the case when objects communicate with one another and exchange data, as defined by the Internet of Things. These data are generally process data: values and measurements that result from processes, or that are relevant influencing factors for the process itself.

Today, modern machinery and production plants can log and output a large number of measurements, sensor data and other relevant information. So-called retrofitting digitizes older machinery so that it, too, can communicate efficiently.

Together, these data build the foundation for all further reporting and analysis of the production processes.


WiriTec GmbH was founded specifically for the logging and analysis of energy data, and today our specialists have the necessary knowledge and experience in handling these data. It is important to understand that these data are of high frequency: they are created, and have to be logged, at rates that exceed what conventional databases can handle. For this purpose, time series databases are used. InfluxDB stores 30,000 to 50,000 values per second with such good performance that even working with millisecond values is no real challenge for process analysis.
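To give an impression of how such measurements reach a time series database: InfluxDB ingests points in its text-based line protocol, where each line carries a measurement name, tags, field values and a nanosecond timestamp. The helper below is a simplified sketch of that format (it handles only float fields and does no character escaping); the measurement and tag names are invented for illustration.

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Render one data point in InfluxDB line protocol:
    measurement,tag1=v1,tag2=v2 field1=v1 timestamp_ns
    Simplified: float fields only, no escaping of special characters."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"
```

A point for a hypothetical power reading would be rendered as, for example, `to_line_protocol("power", {"machine": "press_01"}, {"kW": 12.5}, 1700000000000000000)`. Batching many such lines into one write request is part of what makes ingest rates in the tens of thousands of values per second achievable.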

The transfer of these data is handled using specialized protocols designed for the expected data volume and frequency. In addition, WiriTec GmbH uses the possibilities of edge computing: the InfluxDB instances are distributed in the field so as to minimize network congestion. Access to the data is of course still possible from the central hub. This enables both the continuous monitoring of sensor data and measurements and a direct reaction to states and changes.
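The core idea behind such edge setups is to keep high-frequency writes local and forward data to the central hub in batches. The following is a generic sketch of that pattern, not WiriTec's implementation; the class name `EdgeBuffer` and the `send` callback are assumptions for the example.

```python
class EdgeBuffer:
    """Buffers high-frequency readings locally and forwards them in batches,
    trading a little latency for much less network traffic."""
    def __init__(self, send, batch_size: int = 100):
        self.send = send          # callable that ships one batch to the central hub
        self.batch_size = batch_size
        self._buf = []

    def record(self, point):
        """Store one reading locally; ship a batch once the buffer is full."""
        self._buf.append(point)
        if len(self._buf) >= self.batch_size:
            self.flush()

    def flush(self):
        """Ship whatever is buffered, e.g. on shutdown or on a timer."""
        if self._buf:
            self.send(list(self._buf))
            self._buf.clear()
```

In practice the `send` callback would write to the central database or message broker; a periodic `flush()` ensures that partially filled batches do not linger on the edge device.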


Having mass data is nice, but being able to analyse it afterwards is far more important. Analytic methods separate the wheat from the chaff: of course, all functions of our integrated chart engine are available for mass data. Generally, the chart engine “knows” by itself where the data to be analysed can be found. A data series can exist solely in a coarse raster (minute resolution and above), in a fine raster (below minute resolution), or in a combination of both. Low-frequency data (values at intervals of a minute or longer) are normally stored in SQL databases, whereas high-frequency data are stored in InfluxDB.

The information about which dataset is stored where is an attribute of the data series itself, so the chart engine “knows” where to look for the values. The same applies to decentralized systems: the location (i.e. server or WiriBox) where the values are saved is also an attribute of the data series. The chart engine collects the values on the fly from the source and depicts them graphically. This ensures that users do not need to consider which data are saved on which system; they can simply select the data to be analysed, and the chart engine takes care of everything else.
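The routing described above, where the storage backend is an attribute of the data series itself, can be sketched in a few lines. This is an illustrative pattern, not the chart engine's actual code; the names `DataSeries`, `storage`, `location` and `fetch` are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DataSeries:
    name: str
    storage: str   # e.g. "sql" for low-frequency, "influx" for high-frequency data
    location: str  # e.g. a central server or a WiriBox in the field

def fetch(series: DataSeries, start, end, backends: dict):
    """Route the query to whichever backend the series itself names.
    `backends` maps a storage kind to a reader callable."""
    reader = backends[series.storage]
    return reader(series, start, end)
```

Because each series carries its own `storage` and `location`, the caller never has to know where the values live; adding a new backend only means adding one entry to the `backends` mapping.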

Using the methods and applications described above, we have taken a large step towards the digital future.

Or have we brought the future to the present?