Real-time DataLogger reference

Available from firmware 2019.6

Sources and data types

The DataLogger can record data from any IN or OUT port and from variables. The following data sources are available:

  • Global Data Space IN and OUT ports
    • Type real-time program (C++, IEC 61131-3 and MATLAB®/Simulink®) – task-synchronous mode
    • Type component
    • Global IEC 61131-3 variables

The following data types are supported:

  • Elementary data types (according to Supported elementary data types);
    all STRING variables, regardless of their length, are supported and thus belong to the elementary data types.

Schematic overview

Find all details on the attributes in the tables and sections below.  

How the DataLogger works in detail

Configuration parameters explained visually

Sampling intervals, publishing intervals, and writing intervals? Why are there so many parameters for transporting the data?

Why this is the best way to log data in a real-time automation environment is explained in the following video by Martin Boers, Technical Specialist in Product Management for the PLCnext Runtime System at Phoenix Contact Electronics:

Video: 05 min 26 s, HDTV 720p, audio in English, no subtitles

XML configuration file

An XML configuration file for the DataLogger is structured as shown in the following example:

<?xml version="1.0" encoding="utf-8"?>
  <DataLoggerConfigDocument
     xmlns="http://www.phoenixcontact.com/schema/dataloggerconfig"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://www.phoenixcontact.com/schema/dataloggerconfig.xsd">
    <General name="data-logger" samplingInterval="100ms" publishInterval="500ms" bufferCapacity="10"/>
    <Datasink type="db" dst="test.db" rollover="true" maxFiles="3" writeInterval="1000" 
       maxFileSize="4000000" storeChangesOnly="false" tsfmt="Iso8601"/>
    <Variables>
      <Variable name = "Arp.Plc.ComponentName/GlobalUint32Var"/>
      <Variable name = "Arp.Plc.ComponentName/PrgName.Uint32Var"/>
      <Variable name = "Arp.Plc.ComponentName/PrgName.StuctVar.IntElement"/>
      <Variable name = "Arp.Plc.ComponentName/PrgName.IntArrayVarElement[1]"/>
    </Variables>
  </DataLoggerConfigDocument>

As you can see in the structure, there are only a few XML elements to be configured.

Within the <DataLoggerConfigDocument> root element, there are always:

  • <General>
  • <Datasink>
  • <Variables>, containing one or more <Variable> elements

Attributes for file-based configuration

Of course, there are quite a few attributes that add some complexity. See the following tables, in which the attributes are grouped by the XML tags they belong to. Where an attribute needs more explanation, you will find additional information below the tables.

Note: If you are working with PLCnext Engineer, refer to the reference topic in its online help; additional restrictions apply there.

<General>

Attribute Description

name

Unique name of the logging session. Note: Must not begin with "PCWE_", which is reserved for the triggered Logic Analyzer of PLCnext Engineer.

samplingInterval

Interval at which the data points of a variable are created, e.g. samplingInterval="50ms".
The default value is 500ms.
The following suffixes can be used: ms, s, m, h.

taskContext

Available from firmware 2021.6

The name of an ESM task that samples the values of all variables of this session, e.g. taskContext="myTaskName".
The attribute is optional. If it is configured, the samplingInterval attribute is ignored.

publishInterval

Interval at which the collected data is forwarded from the ring buffer to the data sink, e.g. publishInterval="1s".
The default value is 500ms.
The following suffixes can be used: ms, s, m, h.
Note: This attribute will be ignored if a taskContext is also configured. 

bufferCapacity

Capacity of the internal buffer memory, in data sets. The default value is 2.
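
The timing attributes interact with the buffer: between two publish operations, roughly publishInterval / samplingInterval data sets accumulate in the ring buffer (see also the note under <Datasink> below), so bufferCapacity should at least cover that number. The following Python sketch is only a rough plausibility check; the one-data-set-per-sampling-interval relationship is an assumption derived from the descriptions in this topic, not a documented formula.

# Rough sizing check for the <General> attributes (assumption: one data set
# is collected per sampling interval and the buffer is drained once per
# publish interval).
sampling_interval_ms = 100   # samplingInterval="100ms"
publish_interval_ms = 500    # publishInterval="500ms"
buffer_capacity = 10         # bufferCapacity="10"

sets_per_publish = publish_interval_ms // sampling_interval_ms
verdict = "ok" if buffer_capacity > sets_per_publish else "probably too small"
print(f"{sets_per_publish} data sets per publish interval, "
      f"bufferCapacity={buffer_capacity} -> {verdict}")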

<Datasink>

In each cycle, the values of all ports of a task are stored in a ring buffer. Therefore, the capacity of the ring buffer determines the maximum number of task cycles that can be recorded before data must be forwarded to the data sink. 

The data to be archived is written to an SQLite database. For each configured DataLogger instance, a separate SQLite database is created.
Note: In SQLite3 databases, a maximum of 996 variables per session applies.

Attribute Description

type

Configuration of the data sink:
db: SQLite database file
volatile: RAM only (note: in this case, maxFileSize must be larger than 150000000 bytes, approximately 150 MB).

dst

With file-based configuration, this is the file name and path under which the data sink is to be stored. If no specific path is given, it is placed in the working directory of the firmware, which is /opt/plcnext/.
Note: The DataLogger does not create folders. If you want to store a data sink under a specific path, you have to create that path first.

If the configuration is made with the PLCnext Engineer interface (available from software 2020.6), this attribute is set to dst="/opt/plcnext/logs/datalogger/<LogSessionName>.db", where <LogSessionName> is the name you entered in the "Name" field of the interface.

rollover

true: Once the maximum file size is reached, the file is closed and renamed with an index starting from 0 (e.g. database.db.0). Then a new file is created. Every file with an index is closed and can be copied for evaluation. 
The current data is always logged in the database that is defined in the attribute dst.

false: When rollover is set to false and the maximum file size is reached, a configurable amount of the oldest data is deleted before recording continues. The amount of data to be deleted is configured with the attribute deleteRatio.

writeInterval

Number of data records the DataLogger collects before writing them to the SD card in one block.

The default value is 1000, to keep the number of write access operations to the SD card as low as possible.

In other words, as soon as 1000 data records have been transferred to the data sink, they are grouped in a block and written to the SD card.

When the data sink or the firmware is closed, all the values that have not yet been transferred are written to the SD card. 

Note: If the value of the attribute writeInterval is low, the resulting high number of write operations to the SD card might cause performance problems. If a faster writeInterval is required, Phoenix Contact recommends creating the database in RAM (data sink of type volatile).

Otherwise, it is possible that the data cannot be written to the SD card at the required speed. This may result in the loss of data. Any loss of data is indicated in the database in the ConsistentDataSeries column (see details below at Data consistency check).

maxFileSize

Maximum memory size of the log file in bytes.

Note: maxFileSize must be larger than 150000000 bytes (approximately 150 MB) if a volatile sink is used.

maxFiles

Maximum number of rolling files (default value is 1). 
The rollover attribute must be set to true. When the maximum number of files is reached, the oldest file is deleted. The file index of the closed files continues to count up.

If the maximum number of files is set to 0 (maxFiles="0") the behaviour corresponds to a deactivated rollover (rollover="false").

If the maximum number of files is set to a negative number (e.g. -1), the file number limitation is deactivated. This results in logging activity until the memory is full. The default value is -1.

Note:
When the value for maxFiles is 1, rollover is set to true, and the maximum file size is reached, a configurable amount (attribute deleteRatio) of the oldest data in the database is deleted. The deleteRatio is related to the maximum file size that is defined with the attribute maxFileSize.

storeChangesOnly

true: The values are only stored if they change. If a value stays the same, it is defined as NULL in the database.

false: The values are always stored, even if they do not change.

(see details below at Recording mode)

deleteRatio

Available from firmware 2020.0 LTS

Percentage of maximum memory size to be deleted for the logging of new data. Default is 30 %.
This attribute defines the amount of data that is deleted before new data is written into the database. The old data is deleted when the value that is defined in maxFileSize is reached and the attribute rollover is set to false.
The value for deleteRatio must be provided as an unsigned integer value (16 bit). It must be in the range from 1 to 100. The value corresponds to the percentage of old data to be deleted. 
Examples:
5 = 5 % of old data is deleted.
30 = 30 % of old data is deleted.

Note: For large data sinks (larger than 10 MB), the deleteRatio should be set to a value lower than the default to avoid data loss during the deletion process. For example, deleting 15 MB (30 % of a 50 MB data sink) on an SD card may take so long that the limits for values being buffered and published to the RAM are exceeded. In this case, some values are lost, which you can detect by the interrupted series of timestamps.

tsfmt

Available from firmware 2020.0 LTS

Configuration of the timestamp format

Raw:
The timestamp is stored as 64 bit integer value.

Iso8601:
The timestamp is stored in the ISO 8601 format with microsecond accuracy (see details at Timestamp).

Note: Logging with Iso8601 timestamps comes at the expense of performance. In high-performance scenarios, it may be better to log raw timestamps and convert them after logging.

<Variables>

Attribute Description

Variable name

Complete name (URI) of a variable or a port whose values are to be recorded. Example:  Arp.Plc.ComponentName/PrgName.Uint32Var

Note: When using an SQLite3 database sink, a maximum of 996 variables per session applies.
If an XML-based configuration tries to add more variables to a session than allowed, a notification is generated and the configuration is not used to configure the session.
If an RSC-based configuration is used to add more variables than allowed to the session, the call to IDataLoggerService::SetVariables returns the single value Error::InvalidConfig.
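
As a quick plausibility check before deploying an XML configuration, the number of configured <Variable> elements can be counted against this limit. The following Python sketch uses only the standard library; the file name is a placeholder for your own configuration file.

import xml.etree.ElementTree as ET

# Namespace from the configuration example above.
NS = "{http://www.phoenixcontact.com/schema/dataloggerconfig}"

# Placeholder file name; use the path of your own configuration file.
root = ET.parse("data-logger.config.xml").getroot()
variables = root.findall(f".//{NS}Variables/{NS}Variable")

print(f"{len(variables)} variables configured")
if len(variables) > 996:
    print("Warning: more than 996 variables per session are not supported.")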

<TriggerCondition>

Available from firmware version 2020.3

From firmware version 2020.3 onwards, you can define trigger conditions for starting a DataLogger session via the XML configuration file. To do this, use the element <TriggerCondition> and the attributes described below.

This element contains a list of <RpnItem> elements. Each item consists of a type attribute and a text content that, depending on the type, names a variable, a constant, or an operation. The optional attributes preCycles and postCycles can be used to specify the number of data sets recorded before and after the trigger condition is fulfilled.

Note: RPN (Reverse Polish Notation) is used for the configuration of the trigger.

<TriggerCondition>

Attribute Description

postCycles

The postCycles attribute is optional.
It defines the number of datasets that are recorded after a trigger event.

preCycles

The preCycles attribute is optional.
It defines the number of datasets that are recorded before a trigger event.

taskContext

Name of the task in which the trigger condition is evaluated.

<RpnItem type>

List of trigger items. A trigger item can be a variable or a constant or an operation.

Attribute Description

Variable

The attribute type Variable is used to define a single variable as a trigger condition item. It must contain the complete name (URI) of a variable or a port whose values are to be considered.
Example: Arp.Plc.ComponentName/PrgName.Uint32Var

Constant

The attribute type Constant is used to define a constant as a trigger condition item. The item must contain the value of the constant, e.g. 5.

Operation

The type Operation is used to define an operation as a trigger condition item. The following values are valid as the item text:

  • None - No trigger condition; recording starts immediately
  • Equals - Starts recording if Variable/Constant1 is equal to Variable/Constant2
  • NotEqual - Starts recording if Variable/Constant1 is not equal to Variable/Constant2
  • GreaterThan - Starts recording if Variable/Constant1 is greater than Variable/Constant2
  • GreaterEqual - Starts recording if Variable/Constant1 is greater than or equal to Variable/Constant2
  • LessThan - Starts recording if Variable/Constant1 is less than Variable/Constant2
  • LessEqual - Starts recording if Variable/Constant1 is less than or equal to Variable/Constant2
  • Modified - Starts recording when a modification of Variable/Constant1 is detected
  • RisingEdge - Starts recording when a positive (rising) edge of Variable/Constant1 is detected
  • FallingEdge - Starts recording when a negative (falling) edge of Variable/Constant1 is detected
  • And - Starts recording if TriggerCondition1 and TriggerCondition2 are true
  • Or - Starts recording if TriggerCondition1 or TriggerCondition2 is true
  • Not - Logical negation of the preceding condition
Example:

The trigger condition (Variable a > Variable b) AND (Variable c > Variable d) can be configured using the following list (a, b, c, and d are used in this example instead of the complete names (URIs) of the variables or ports for better readability):

<TriggerCondition postCycles="200" preCycles="100" taskContext="Cyclic100">
    <RpnItem type="Variable">a</RpnItem>
    <RpnItem type="Variable">b</RpnItem>
    <RpnItem type="Operation">Greater</RpnItem>
    <RpnItem type="Variable">c</RpnItem>
    <RpnItem type="Variable">d</RpnItem>
    <RpnItem type="Operation">Greater</RpnItem>
    <RpmItem type="Operation">And</RpnItem>
</TriggerCondition>
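
To illustrate how such an RPN item list is evaluated, the following Python sketch walks through the stack-based evaluation with sample values substituted for a, b, c, and d. It is purely illustrative; the actual evaluation is performed by the DataLogger itself.

# Illustrative stack-based evaluation of the RPN trigger condition above.
values = {"a": 7, "b": 3, "c": 5, "d": 9}   # sample values for the variables
rpn = [("Variable", "a"), ("Variable", "b"), ("Operation", "GreaterThan"),
       ("Variable", "c"), ("Variable", "d"), ("Operation", "GreaterThan"),
       ("Operation", "And")]

stack = []
for item_type, text in rpn:
    if item_type == "Variable":
        stack.append(values[text])          # push the variable value
    elif item_type == "Constant":
        stack.append(float(text))           # push the constant value
    else:                                   # Operation: pop two operands
        rhs, lhs = stack.pop(), stack.pop()
        stack.append({"GreaterThan": lhs > rhs,
                      "And": bool(lhs) and bool(rhs)}[text])

print(stack[0])   # (a > b) and (c > d) -> False for these sample values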

Database layout

The values of the configured variables are saved in a table inside the SQLite database. The default path for the database files on your controller is /opt/plcnext. The database files are saved as *.db files. The file system of the controller is accessed via the SFTP protocol; use a suitable SFTP client for this, e.g., WinSCP.

Copy the *.db files to your PC and use a suitable software tool to open and evaluate the *.db files (e.g. DB Browser for SQLite). 
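
Alternatively, the copied *.db file can be evaluated with any scripting language that ships an SQLite driver. A minimal Python sketch that lists the tables and their columns (the file name is a placeholder for your own data sink):

import sqlite3

# Placeholder file name; use the *.db file copied from the controller.
con = sqlite3.connect("data-logger.db")
for (table,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    columns = [row[1] for row in con.execute(f'PRAGMA table_info("{table}")')]
    print(table, columns)
con.close()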

Depending on your configuration, a database table that is created by the DataLogger can consist of the following columns:

  • Timestamp:
    Timestamp for the logged variable value (see details below at Timestamp).
  • ConsistentDataSeries:
    This index shows whether there is an inconsistency in the logged data (see details below at Data consistency check).
  • Task/Variable:
    One column for each variable that is configured for data logging. The column name consists of the task name and the variable name (see details above at <Variables>).
  • Task/Variable_change_count:
    In case of storeChangesOnly="true" (see details below at Recording mode), this column serves as a change counter. There is a change counter for every configured variable.

Time stamp

The DataLogger provides a time stamp for each value of a port. Only one time stamp is generated for ports from the same task because this time stamp is identical for all the values of the task. The time stamps have a resolution of 100 ns.

  • Firmware 2019.6 to 2019.9:
    The time stamp is always displayed as a raw 64-bit integer value.
  • From firmware 2020.0 LTS:
    It is possible to configure the format of the time stamp inside the database. It can be displayed in ISO 8601 format or as a raw 64-bit integer value.

Regardless of the format, all time stamps are reported in the UTC timezone. The implementation and internal representation comply with the Microsoft® .NET DateTime class; see the documentation of the DateTime.ToBinary method on docs.microsoft.com.

The time stamp is created in the task cycle via the system time of the controller. It is set at the start of the task (task executing event) and maps exactly to the task cycle time, so that the values of consecutive task cycles are always one cycle apart.
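
For raw time stamps, the 64-bit value can be converted back into a readable UTC time on the evaluation PC. The following Python sketch assumes the value is compatible with the .NET DateTime.ToBinary() encoding, i.e. the lower 62 bits hold the tick count (100 ns units since 0001-01-01T00:00:00) and the upper two bits encode the DateTimeKind.

from datetime import datetime, timedelta, timezone

def raw_to_utc(raw: int) -> datetime:
    # Assumption: DateTime.ToBinary()-compatible layout; mask out the two
    # DateTimeKind bits and interpret the rest as 100 ns ticks since year 1.
    ticks = raw & 0x3FFF_FFFF_FFFF_FFFF
    base = datetime(1, 1, 1, tzinfo=timezone.utc)
    return base + timedelta(microseconds=ticks // 10)   # 10 ticks = 1 µs

# Example with a hypothetical raw value read from the Timestamp column:
print(raw_to_utc(637137819000000000))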

Data consistency check

If recording gaps occur due to performance problems or memory overflow, this information is saved in the data sink. Any loss of data is indicated in the database in the ConsistentDataSeries column.

This column can contain the values 0 or 1:

  • Value 0:
    If the value is 0, a data gap occurred during the recording of the preceding data series. The first data series always has the value 0 because there is no preceding data series for referencing.
  • Value 1:
    If the value is 1, the data was recorded without a gap relative to the preceding data series. The data series tagged with a 1 is therefore consistent with the preceding data series.

Example from a database with an indicated data gap:

RowId  ConsistentDataSeries  VarA
1      0                     6
2      1                     7
3      1                     8
4      1                     9
5      1                     10
6      1                     11
7      1                     12
8      1                     13
9      0                     16
10     1                     17
11     1                     18

In this recording, the first 8 data rows are consistent and without gaps caused by data loss (ConsistentDataSeries=1 from row 2 onwards). Between rows 8 and 9, a data gap is indicated (ConsistentDataSeries=0 in row 9). Rows 9 to 11 are consistent again relative to each other.

Note: Phoenix Contact recommends evaluating the ConsistentDataSeries flag to ensure that the data is consistent.
If ConsistentDataSeries=0 appears in rows other than row 1, an inconsistency has occurred during recording.
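
Such an evaluation can also be scripted. The following Python sketch scans the ConsistentDataSeries column for gaps; the file name and the table name are placeholders for the names from your own configuration.

import sqlite3

con = sqlite3.connect("data-logger.db")              # placeholder file name
rows = con.execute(
    'SELECT rowid, "ConsistentDataSeries" FROM "DataLog"')  # placeholder table name
for rowid, consistent in rows:
    if consistent == 0 and rowid != 1:               # row 1 is always 0 by definition
        print(f"Data gap detected before row {rowid}")
con.close()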

Recording mode

The recording mode is set by the attribute storeChangesOnly. There are two recording modes available:

  • Endless mode
    The DataLogger records the data in endless mode. All the ports and variables configured for recording are recorded without interruption (storeChangesOnly="false"). 
  • Save on change
    The DataLogger only records the data when they change. If the value stays the same it is displayed in the data base with a NULL (storeChangesOnly="true"). 

Examples for storeChangesOnly configuration

Note: In these examples the time stamp is displayed in a readable format. In a *.db file generated by the DataLogger, the time stamp is UTC of type Arp::DateTime. It is displayed as a 64-bit value in the database. The implementation and internal representation comply with the .NET DateTime class; refer to the documentation of the DateTime struct at https://docs.microsoft.com to convert the time stamp into a readable format (see also the conversion sketch in the Time stamp section above).

Attribute storeChangesOnly="false"


In this example the logged variables are from the same task. Therefore there are values for every timestamp.

Timestamp  ConsistentDataSeries  Task10ms/VarA  Task10ms/VarB
10 ms      1                     0              0
20 ms      1                     1              0
30 ms      1                     2              2
40 ms      1                     3              2
50 ms      1                     4              4
60 ms      1                     5              4

Attribute storeChangesOnly="true"


In this example the logged variables are from the same task. Therefore there are values for every timestamp. When the value of a variable has not changed compared to the preceding timestamp, it is stored as NULL.

Timestamp  ConsistentDataSeries  Task10ms/VarA  Task10ms/VarA_change_count  Task10ms/VarB  Task10ms/VarB_change_count
10 ms      1                     0              0                           0              0
20 ms      1                     1              1                           NULL           0
30 ms      1                     2              2                           2              1
40 ms      1                     3              3                           NULL           1
50 ms      1                     4              4                           4              2
60 ms      1                     5              5                           NULL           2
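
To restore a gapless value series from such a recording during evaluation, the last stored value can be carried forward over the NULL entries. A minimal Python sketch; the file, table, and column names are assumptions based on the example above.

import sqlite3

con = sqlite3.connect("data-logger.db")          # placeholder file name
query = ('SELECT "Timestamp", "Task10ms/VarB" '  # placeholder table/column names
         'FROM "DataLog" ORDER BY rowid')
last_value = None
for timestamp, value in con.execute(query):
    if value is not None:                        # NULL means "value unchanged"
        last_value = value
    print(timestamp, last_value)
con.close()
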
Attribute storeChangesOnly="false" and variables from different tasks


In this example the logged variables are from different tasks (Task10ms and Task20ms).
Different tasks usually have different timestamps, which affects the layout of the table. When the variable values of one task are added to the database table, the variable values of the other task are displayed as NULL.

Timestamp  ConsistentDataSeries  Task10ms/VarA  Task20ms/VarB
10 ms      1                     0              NULL
20 ms      1                     1              NULL
21 ms      1                     NULL           1
30 ms      1                     2              NULL
40 ms      1                     3              NULL
41 ms      1                     NULL           2
50 ms      1                     4              NULL
60 ms      1                     5              NULL
61 ms      1                     NULL           3

 

 


Published/reviewed: 2024-10-30 · Revision 074