Hi everyone,
coming from C/C++, I'm used to signed integer overflow being undefined behavior.
In “the (B&R) automation world” I got the impression that it is sometimes common to rely on well-defined signed integer wraparound. For example, in reaction programs timestamps are signed integers, and durations between them are calculated as signed integer differences. The same pattern shows up with AsIOTimeStamp() and the calculation of a duration in this example.
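For comparison, this is how such a wraparound-based difference is usually made explicitly well-defined on the C/C++ side (a minimal sketch of my own; the helper name timestamp_diff is not from any B&R library):

#include <cstdint>

// Minimal sketch (my own helper, not from any library): wrap-safe difference of
// two 32-bit timestamps. The subtraction is done in unsigned arithmetic, which
// the C/C++ standards define to wrap modulo 2^32 (assuming a 32-bit int). The
// conversion back to int32_t is implementation-defined before C++20 and
// guaranteed two's-complement behavior since C++20.
int32_t timestamp_diff(int32_t newer, int32_t older)
{
    return static_cast<int32_t>(
        static_cast<uint32_t>(newer) - static_cast<uint32_t>(older));
}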
In a nutshell, it seems to me that we are relying on c evaluating to 1 in

a := 2147483647; // == DINT_MAX
b := a + 1;      // == DINT_MIN == -2147483648 (who/what guarantees this?)
c := b - a;

where a, b and c are all of type DINT, while in C/C++ the computation of both b and c is undefined behavior when a, b, c are of type int32_t and a is initially set to std::numeric_limits<int32_t>::max().
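For reference, a direct C++ transcription of the snippet above (my own, only to make the comparison concrete) would look like this, and the marked line is exactly where the C/C++ standards leave the behavior undefined:

#include <cstdint>
#include <limits>

int main()
{
    int32_t a = std::numeric_limits<int32_t>::max(); // 2147483647
    int32_t b = a + 1; // undefined behavior in C/C++: signed integer overflow
    int32_t c = b - a; // value of c is not guaranteed once the overflow occurred
    return c;
}

With GCC or Clang, compiling with -fwrapv makes signed overflow wrap in two's complement, which is essentially the behavior the DINT example relies on; without such a flag the compiler may assume the overflow never happens.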
Question: Where is this signed integer overflow behavior specified? Why is it safe for a programmer to rely on it? (Or is it not safe, and do I have a misconception here?)
EDIT: typo