If you don't standardize intmax_t, programs will invent the concept for themselves through some combination of preprocessing directives and shell scripts that probe the toolchain.
If you want to know "what is the widest signed integer type available", you will get some kind of answer one way or another.
GCC changed long to 64 bits and x86 codebases had to accept that. The base integer types are designed to be variable-sized, and you code accordingly based on the minimum size guarantee. Assuming types are frozen in time is how you get things like LLP64.
Because it isn't "64 bits on x64". It's 32 bits minimum on all platforms. Assuming sizes larger than the minimum is how brittle code gets written. A compiler vendor can set the sizes to whatever they want regardless of the target architecture and claim conformance as long as they meet the minimums.
You are confusing ABIs with what the C standards mandate. A compiler emitting code for a particular architecture needs to conform to that architecture's ABI, or else it won't be able to interface with syscalls or other libraries. The same goes for other things like calling conventions and stack frame setup. As klodolph states, long is 64 bits on x86_64 (32 on x86_64-win) and 32 on x86.
Right. One is what the C standard says, and the other is what the ABI standard says. Maybe I’m missing something here.
Obviously, when someone says “type X is size Y on architecture Z”, they’re not talking about the C standard, they’re talking about some particular ABI.
There are multiple ABIs for x86_64. I think the major compilers use 32 bits for int and 64 for long, but there could very well be a compiler that used different sizes, with a different ABI.
Yep, the x32 ABI on Linux is based on the amd64 instruction set, registers, etc., but uses 32-bit longs and pointers (which in my experience is a nice performance boost; I measured up to 20% in some cases).