Why am I getting -1 when I run this function on max uint256?

Hello, I am looking at the following function, which takes a uint256 and extracts the leftmost (most significant) 24 bits. It casts to int24 because it interfaces with Uniswap, which uses int24 for this variable. I tested it with the max uint256, which I expected to give the maximum int24, since max uint256 is all 1s in binary.

But instead I get -1. Obviously there is some kind of overflow, but why? I also tried this with a uint256 consisting of 24 1s followed by all 0s, and I got the same result.

function testLiquidityChunk() public pure returns(int24) {
    return (int24(int256(115792089237316195423570985008687907853269984665640564039457584007913129639935 >> 232)));
}

Because signed integer values are represented using two's complement.

TLDR: the represented value is negative if and only if the first (most significant) bit is set to 1.


Example for 4-bit values:

+--------------+------------------------+----------------------+
| Bit sequence | Unsigned integer value | Signed integer value |
+--------------+------------------------+----------------------+
| 0000         |  0                     |  0                   |
| 0001         |  1                     |  1                   |
| 0010         |  2                     |  2                   |
| 0011         |  3                     |  3                   |
| 0100         |  4                     |  4                   |
| 0101         |  5                     |  5                   |
| 0110         |  6                     |  6                   |
| 0111         |  7                     |  7                   |
| 1000         |  8                     | -8 ==  8 - 2^4       |
| 1001         |  9                     | -7 ==  9 - 2^4       |
| 1010         | 10                     | -6 == 10 - 2^4       |
| 1011         | 11                     | -5 == 11 - 2^4       |
| 1100         | 12                     | -4 == 12 - 2^4       |
| 1101         | 13                     | -3 == 13 - 2^4       |
| 1110         | 14                     | -2 == 14 - 2^4       |
| 1111         | 15                     | -1 == 15 - 2^4       |
+--------------+------------------------+----------------------+

As you can see, the trick for calculating the signed value represented by a bit sequence whose first bit is set to 1 is to subtract two to the power of the number of bits (2^4 in the example above) from the unsigned value represented by the same bit sequence.
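Applied to your case: all 24 extracted bits are 1, which is 2^24 - 1 = 16777215 as an unsigned value, so the signed int24 value is 16777215 - 2^24 = -1, which is exactly what you observed.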

Note that the smallest integer type in Solidity is 8 bits long, so the example above serves only as a theoretical illustration.
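If you want to verify this with a real type, here is a minimal sketch (the function name is made up for illustration, assuming Solidity ^0.8) that reinterprets raw bit patterns as Solidity's smallest signed type, int8:

function int8Examples() public pure returns (int8 allOnes, int8 signBitOnly) {
    // 1111 1111 -> 255 - 2^8 = -1
    allOnes = int8(uint8(0xFF));
    // 1000 0000 -> 128 - 2^8 = -128
    signBitOnly = int8(uint8(0x80));
}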

Okay, thank you, that makes sense.


Sorry, one other question though: if the very first bit indicates the sign, and I put a 1 there (meaning negative) followed by 23 more 1s, then why am I getting -1 rather than -(type(int24).max)?

The 1111 line in the table in my answer might help you understand why all 1s gives you -1.


You'd get -(type(int24).max) if you use a "1" followed by 22 "0"s and then another "1".
The 1001 line in the table in my answer might help you understand that.
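To make it concrete, here is a minimal sketch (function and variable names are made up for illustration, assuming Solidity ^0.8) contrasting the two 24-bit patterns:

function int24Patterns() public pure returns (int24 allOnes, int24 negMax) {
    // 24 ones (0xFFFFFF): 16777215 - 2^24 = -1
    allOnes = int24(uint24(0xFFFFFF));
    // a 1, then 22 zeros, then a 1 (0x800001): 8388609 - 2^24 = -8388607 == -type(int24).max
    negMax = int24(uint24(0x800001));
}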


I strongly recommend going over that table; I did my best to make it as detailed as possible while keeping it as simple as possible, so that you can quickly understand how signed-integer representation works.

Thank you, I do see now what you are saying.
