How we fixed alarm export with SQL queries without losing alarms

Wiki post extracted from this discussion


Hey,

We faced the same challenge a few years ago and did exactly as @valerio.manenti suggested.
However, with this implementation, we have unpleasant utilization peaks caused by the HistoryUI module, especially when the history is “full.”
In addition, you have to be careful that no alarms are lost when too many alarms are added within one cycle of the timer.
So exporting “too early” is unavoidable if you want to be sure no alarms are lost, but it causes some alarms to be exported twice.

For these reasons, we switched to a solution based on two queries, which are essentially SQL queries.
This enabled us to significantly reduce the utilization peaks (a comparison chart showed the implementation with HistoryUI on the left, and queries with manual writing of the file on the right) and also to avoid the duplication issue.

How did we do it?

Here is a brief guide on how we implemented it.
I can’t share the entire code, but I think this should give you a clear idea of how it works.

Queries

The queries must be configured in mpAlarmXHistory.
You need one query that only checks how many alarms have been added since the last export:
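The original screenshot of this query's configuration is not included here. Expressed as plain SQL, the first query corresponds roughly to the following sketch (the table name, column names, and parameter placeholders are assumptions — mpAlarmXHistory defines the actual schema):

```sql
-- Hypothetical sketch: read only timestamp and instance ID of every
-- entry that is newer than the last exported entry.
SELECT Timestamp, InstanceID
FROM AlarmHistory
WHERE Timestamp > :LastExportTimestamp
   OR (Timestamp = :LastExportTimestamp AND InstanceID > :LastExportInstanceID)
ORDER BY Timestamp, InstanceID;
```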

We also need a second query, which then exports the actual alarm list:
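Again, the configuration screenshot is omitted; as a hedged SQL sketch, the second query reads the full records up to the entry selected for export (all names are assumptions):

```sql
-- Hypothetical sketch: read the complete records (text, additional
-- info, etc.) of all entries up to and including the chosen boundary.
SELECT Timestamp, InstanceID, Name, Message, Severity, StateChange
FROM AlarmHistory
WHERE Timestamp < :NextExportTimestamp
   OR (Timestamp = :NextExportTimestamp AND InstanceID <= :NextExportInstanceID)
ORDER BY Timestamp, InstanceID;
```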

Why timestamp and instance ID?

It’s simple:
Depending on the alarm configuration and implementation, alarms with the same timestamp may also be generated.
The instance ID is required to distinguish between them.

PLC code

The queries must be started in the init in the PLC:
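A minimal sketch of that init code, assuming the two query function blocks used in the state machine below (Query_NewSinceLastExport, Query_ExportChunk_EN); the MpLink variable is a placeholder, and the exact FB interface depends on your mapp version:

```iecst
(* _INIT: enable both query FBs once; Execute is triggered later
   by the state machine in the cyclic program *)
Query_NewSinceLastExport.MpLink := ADR(gAlarmXHistory); (* placeholder MpLink *)
Query_NewSinceLastExport.Enable := TRUE;
Query_NewSinceLastExport();

Query_ExportChunk_EN.MpLink := ADR(gAlarmXHistory);     (* placeholder MpLink *)
Query_ExportChunk_EN.Enable := TRUE;
Query_ExportChunk_EN();
```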

We then built a state machine that checks whether new, unexported alarms have been added.
To do this, you can use the update count of the “NewSinceLastExport” query.
This is automatically updated when the query is initialized.
With this information as a trigger, the alarms can then be read out once (only timestamps and IDs).

The next step is to check whether more than, for example, 500 entries were read out with this query.
If this is the case, the second query is started, which can then be used to read out the entire table (with text, additional info, etc.).

We then convert this information into text and write it to a .txt file (CSV is also possible). The sample code below uses a separate FB that we wrote for this purpose.
It combines all the texts of the arrays, creates a new file, writes the data, and closes the file.
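The write FB itself is not part of the post; here is a rough sketch of what it does internally, using the standard FileIO function blocks FileCreate, FileWrite, and FileClose (sequencing and error handling omitted; the buffer, helper names, and file device are assumptions):

```iecst
(* Sketch only: in a real implementation each FB call must be repeated
   cyclically until its status is no longer busy. *)

(* 1) Concatenate one prepared line per alarm into a text buffer *)
brsmemset(ADR(Buffer), 0, SIZEOF(Buffer));
FOR i := 0 TO RowCount - 1 DO
    brsstrcat(ADR(Buffer), ADR(LineText[i])); (* one CSV line per alarm *)
    brsstrcat(ADR(Buffer), ADR(CRLF));        (* CRLF : STRING[2] := '$r$n' *)
END_FOR

(* 2) Create the file, write the buffer, close the file *)
FileCreate_0(enable := TRUE, pDevice := ADR('USER'), pFile := ADR(FileName));
FileWrite_0(enable := TRUE, ident := FileCreate_0.ident,
            offset := 0, pSrc := ADR(Buffer), len := brsstrlen(ADR(Buffer)));
FileClose_0(enable := TRUE, ident := FileCreate_0.ident);
```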

Bonus:
In the next step, we check whether too many files have been exported in the meantime and delete the oldest ones (ring buffer, FIFO) if necessary.
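This cleanup can be sketched with the FileIO function blocks DirRead and FileDelete; since the file names start with the date/time, the lexicographically smallest name is the oldest file (MAX_EXPORT_FILES, the file device, and the directory are assumptions, and the exact DirRead parameters may differ by version):

```iecst
(* Sketch: enumerate the export directory, count the files and remember
   the alphabetically smallest (= oldest) name, then delete it if the
   limit is exceeded. Each FB must be called cyclically until done. *)
DirRead_0(enable := TRUE, pDevice := ADR('USER'), pPath := ADR('Exports'),
          entry := EntryIndex, option := fiFILE,
          pData := ADR(DirData), data_len := SIZEOF(DirData));
(* ... increment EntryIndex per completed read, track FileCount and
   OldestFileName from DirData ... *)

IF FileCount > MAX_EXPORT_FILES THEN
    FileDelete_0(enable := TRUE, pDevice := ADR('USER'),
                 pName := ADR(OldestFileName));
END_IF
```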

Here is the (pseudo) code for the text above.
ATTENTION: No error handling is built in here, and I have removed a few things to make the code more readable.

	CASE State OF
		
		ALMHISTORY_IDLE:

				IF (UpdateCount.current <> UpdateCount.old) THEN
					UpdateCount.old := UpdateCount.current;
					Query_NewSinceLastExport.Execute := TRUE;
					State := ALMHISTORY_GET_NEW_ENTRIES;
				END_IF
				
				
				
		ALMHISTORY_GET_NEW_ENTRIES:
			
			IF Query_NewSinceLastExport.CommandDone THEN
				
				Query_NewSinceLastExport.Execute := FALSE;
				
				
				// --- Automatic Export ---
				// If more than ALMHIST_EXPORT_CHUNK (new) alarms have been found, trigger an export
				
				IF Query_NewSinceLastExport.Info.RowsRead >= ALMHIST_EXPORT_CHUNK THEN
					
					TempUDINT := Query_NewSinceLastExport.Info.RowsRead - ALMHIST_EXPORT_CHUNK;
					
					// Set the newest entry which has to be exported
					NextExport.Timestamp := Data.SinceLastExport.Timestamp[TempUDINT];
					NextExport.InstanceID := Data.SinceLastExport.InstanceID[TempUDINT];
					
					State := ALMHISTORY_GET_WHOLE_TABLE;
					Query_ExportChunk_EN.Execute := TRUE;
					
				// --- Nothing to export ---
				// Go back to IDLE and continue checking for new entries
				ELSE
					
					State := ALMHISTORY_IDLE;
					
				END_IF
				
			END_IF
			
			
			
		ALMHISTORY_GET_WHOLE_TABLE:
		
			IF Query_ExportChunk_EN.CommandDone THEN
				
				Query_ExportChunk_EN.Execute := FALSE;
				
				// Build Filename-string with current date and time
				brsmemset(ADR(CurrentFileName), 0, SIZEOF(CurrentFileName));
				DT_TO_STRF(DateTime := gGeneralInfo.DateTime.DateTime, 
							Millisecond := 0,
							Format := 'Alarms_#A-#D-#H_#N-#R-#T.txt', 
							Language := 1,
							Size := SIZEOF(CurrentFileName),
							ReturnVal := CurrentFileName
							);
				
				// Trigger export to file
				// Use the actual number of rows to account for manual exports with fewer rows than ALMHIST_EXPORT_CHUNK alarms
				AlmHist_WriteToFile_EN.Enable := TRUE;
				AlmHist_WriteToFile_EN.FileName := CONCAT(LANG_EN, '\');
				AlmHist_WriteToFile_EN.FileName := CONCAT(AlmHist_WriteToFile_EN.FileName, CurrentFileName);
				AlmHist_WriteToFile_EN.RowCount := UDINT_TO_UINT(Query_ExportChunk_EN.Info.RowsRead);
				
				State := ALMHISTORY_EXPORT_TO_FILES;
				
			END_IF
			
			
			
		ALMHISTORY_EXPORT_TO_FILES:
			
			// Wait until all Write-FUBs have finished
			IF AlmHist_WriteToFile_EN.Status = ERR_OK THEN
			
				AlmHist_WriteToFile_EN.Enable := FALSE;
				LastExport := NextExport;
				(*State := ALMHISTORY_CLEANUP_FILES;*)
				State := ALMHISTORY_IDLE;
			END_IF
			
			
	END_CASE


To write the actual file content to a string, we have our own functions that can concatenate multiple STRINGs.
In addition, you have to “translate” the StateChange value if you want the same format as the HistoryUI export.
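As a hedged sketch of such a translation — the numeric StateChange codes here are assumptions; check the mpAlarmX help for the actual values:

```iecst
CASE StateChange OF
    1: StateText := 'Active';        (* assumed code *)
    2: StateText := 'Inactive';      (* assumed code *)
    3: StateText := 'Acknowledged';  (* assumed code *)
ELSE
    StateText := 'Unknown';
END_CASE
```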

(The original post shows a screenshot of the resulting export file at this point.)

Nice to know

Since alarms do not always use the same amount of memory, it is not possible to say exactly how many alarms will fit into the history.
This depends on the texts, snippets used, etc.
With an estimate (e.g., normally about 1,000 entries fit) and this solution (the last 500 entries are always exported), you can prevent duplication and the risk of something being lost.

In the help section, I would recommend starting with this article:
64891ba7-cb8a-4bac-a940-5449792eadcd



I hope this helps you a little.
Good luck!
