Sunday, October 24, 2021

[SOLVED] Why does gcc not add tmin + tmin correctly?

Issue

I've been playing around with bitwise operations and two's complement when I discovered this oddity.

#include <stdio.h>
int main ()
{
    int tmin = 0x80000000;
    printf("tmin + tmin: 0x%x\n", tmin + tmin);
    printf("!(tmin + tmin): 0x%x\n", !(tmin + tmin));
}

The code above produces the following output:

tmin + tmin: 0x0
!(tmin + tmin): 0x0

Why does this happen?


Solution

0x80000000 in binary is

0b10000000000000000000000000000000

When you add two 0x80000000s together,

    |<-          32bits          ->|
  0b10000000000000000000000000000000
+ 0b10000000000000000000000000000000
------------------------------------
 0b100000000000000000000000000000000
    |<-          32bits          ->|

However, int on your machine seems to be 32 bits wide, so only the lower 32 bits are kept, which means the leading 1 in your result is silently discarded. This is called an integer overflow.
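
For comparison, here is a minimal sketch (assuming a 32-bit int, as your output suggests) that does the same addition with unsigned arithmetic, where wrap-around is well defined, and again after widening to 64 bits, where the carry bit survives:

#include <stdio.h>

int main(void)
{
    unsigned int tmin = 0x80000000u;   /* unsigned, so wrap-around is well defined */

    /* Unsigned addition wraps modulo 2^32: the carry into bit 32 is
       discarded, so the low 32 bits are all zero. */
    printf("tmin + tmin (unsigned): 0x%x\n", tmin + tmin);

    /* Widening to 64 bits before adding keeps the carry bit. */
    printf("tmin + tmin (64-bit):   0x%llx\n",
           (unsigned long long)tmin + tmin);
}

The first line prints 0x0, just like your signed version, while the second prints 0x100000000, the full 33-bit result from the diagram above.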

Also note that in C, signed integer overflow (as opposed to unsigned overflow, i.e. with unsigned int) is undefined behavior. The compiler is allowed to assume it never happens, which is why !(tmin + tmin) prints 0x0 instead of the 0x1 you would expect from a sum that wrapped to zero. See this blog post for an example where a variable is both true and false because of another kind of undefined behavior, namely reading an uninitialized variable.
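
If you want to detect the overflow rather than rely on it, GCC and Clang provide checked-arithmetic builtins. Here is a small sketch using __builtin_add_overflow, which is a compiler extension rather than standard C:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int tmin = INT_MIN;
    int sum;

    /* The builtin computes the mathematical result and reports whether
       it fits in 'sum'; no signed overflow (and thus no undefined
       behavior) occurs either way. */
    if (__builtin_add_overflow(tmin, tmin, &sum))
        printf("tmin + tmin overflows int\n");
    else
        printf("tmin + tmin: %d\n", sum);
}

Alternatively, compiling with -fsanitize=signed-integer-overflow makes GCC and Clang report the overflow at run time, and -fwrapv makes signed overflow wrap around like unsigned arithmetic instead of being undefined.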



Answered By - nalzok