
This code runs fine with `-O` but fails to exit with `-O2` and `-Os`:

```cpp
#include <iostream>

int main() {
    int ctr = 2000000000;
    while (ctr++ > 0) {
        if (ctr % 100000000 == 0) {
            std::cout << ctr << '\n';
        }
    }
    return 0;
}
```

I know that it has something to do with integer overflow, but I thought that was defined behavior. In case it may be relevant, I'm compiling on a Linux virtual machine on a Windows 64-bit computer.

EDIT: Signed integer overflow is not defined behavior. So then what optimization or combination of optimizations causes the problem? The question is: "Why does the code work fine on -O but fail on -O2 and -Os?"

cpplearner
TheZouave
    http://stackoverflow.com/questions/16188263/is-signed-integer-overflow-still-undefined-behavior-in-c – tkausl Dec 18 '16 at 00:15
    It isn't defined behaviour. – Pixelchemist Dec 18 '16 at 00:18
    You can't argue about "why does UB work sometimes". There are no guarantees at all. – Pixelchemist Dec 18 '16 at 00:34
  • I'm not arguing. I'm asking what optimizations I need to disable in order for this code to work. – TheZouave Dec 18 '16 at 00:35
    Optimisation or combination of optimisations don't cause the problem. Undefined behaviour causes the problem. Optimisation or combination of optimisations may or may not hide the problem. If hiding the problem is what you are after, you have to try different combinations of flags and find those that work with your specific program. – n. 1.8e9-where's-my-share m. Dec 18 '16 at 00:39
  • The problem is fixed by using the flag -fno-strict-overflow – TheZouave Dec 18 '16 at 04:42
  • There is an optimization that can only apply if `i + 1 > i`. LLVM makes a big deal about using this optimization because on some platforms it can speed up a tight loop by 20%. Of course, `i + 1 > i` is true only if `i + 1` doesn't overflow. If the compiler knows an upper bound for `i`, it can prove overflow doesn't happen, and apply the optimization. In your case it can prove that `i + 1 > i` in all cases where `i + 1` is defined, and it doesn't need to worry about what happens when `i + 1` overflows because **`signed`** overflow is undefined, so the compiler applies the optimization. – Max Lybbert Dec 18 '16 at 16:48
  • Instead of monkeying with flags, either (1) switch to `unsigned` or (2) use a check like `while (i < INT_MAX)`. If you use `unsigned`, you won't get the optimization. If you use `INT_MAX` (or `std::numeric_limits<int>::max()`), you'll need to split up the increment from the test. For instance, put the `i++` on the first line of the loop. – Max Lybbert Dec 18 '16 at 16:51
  • (More information about undefined behavior -- **including this specific behavior** -- is at http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html . The big takeaway, in my opinion, is that undefined behavior isn't limited to the line where it appears (e.g., the line where you dereference a `NULL` pointer); a program that has undefined behavior is essentially undefined in all aspects.) – Max Lybbert Dec 20 '16 at 22:59

0 Answers