Specification of signed integer overflow/wraparound behavior

Hi everyone,

From C/C++ I’m used to signed integer overflow being undefined behavior.

In “the (B&R) automation world” I got the impression that it is sometimes common to rely on well-defined signed integer wraparound. For example in reaction programs, where timestamps are signed integers and durations between them are calculated as signed integer differences. The same behavior appears with AsIOTimeStamp() and the duration calculation in this example.

In a nutshell, it seems to me that we are relying on c evaluating to 1 in

a := 2147483647;  // ==DINT_MAX
b := a + 1;  // ==DINT_MIN==-2147483648 (who/what guarantees this?)
c := b - a;

where a, b, c are all of type DINT, whereas in C/C++ both b and c are undefined when a, b, c are int32_t and a is initially set to std::numeric_limits<int32_t>::max().

Question: Where is the signed integer behavior specified? Why is it safe for a programmer to rely on such behavior? (Or is it not safe, and I have a misconception here?)

EDIT: typo

A signed variable follows two’s complement binary representation.

Carry and overflow are handled inherently by that binary representation: the value simply wraps around.

On a PLC, encoders are often used that keep moving in one direction, so the DINT position value will eventually overflow.

If the software calculates something like
Diff := newpos - oldpos
and an overflow happens between the two samples, the difference is still correct:
newpos holds the overflowed value → a very large negative value
oldpos is still a large positive value

If unsigned variables were used here instead, more code would be needed to calculate the correct signed difference.

Hey @Lukas_Kimme,
I think I understand what you’re inquiring about. Let me share my thoughts.

from C/C++ I’m used to signed integer overflows being undefined behavior

while in C/C++ both b and c are undefined

To be precise, the C and C++ standards do define signed integer overflow itself as undefined behavior in all cases.
In practice, though, the surprising results usually show up when compiler optimizations exploit the assumption that the overflow cannot happen.

From the link you shared:

Because correct C++ programs are free of undefined behavior, compilers may produce unexpected results

Let’s look at their example.

x + 1 > x

In mathematics, this expression always evaluates to true.
On real two’s-complement hardware, the expression evaluates to true unless x is INT32_MAX, in which case the wraparound makes it false.
Finally, for computer programs compiled with optimization, the expression may be optimized to

1 > 0

and finally to just 1, i.e. true. In that case the expression can never be false, because the compiler has optimized the check away.

Because the behavior is undefined, certain signed integer expressions can produce different results depending on compiler optimization settings.

These ambiguous expressions can be avoided by making the addition itself safe, for example by widening to 64 bit so that x + 1 can never overflow:

y = (int64_t)x + 1;
return y > x;

Your example with a, b, and c is not ambiguous in this way, which is a good thing.

You can see your compiler settings in Automation Studio, which does use some optimization by default. It’s unlikely that -O0 will produce surprising overflow results, but the GCC compiler documentation can confirm this (GCC also offers -fwrapv, which makes signed overflow well-defined wraparound behavior).

Best thing you can do is write non-ambiguous code, and unit test edge cases. Looks like you’re already doing that. :grinning_face: