
But all major C compilers promise that they won't make ABI changes. So I'd rather say that standardizing intmax_t was a mistake.


If you don't standardize intmax_t, programs will invent the concept for themselves through some combination of preprocessor directives and shell scripts that probe the toolchain.

If you want to know "what is the widest signed integer type available", you will get some kind of answer one way or another.


Or even worse, if they don't know their build tools, some kind of union-based monstrosity...


GCC changed long to 64 bits and x86 codebases had to accept that. The base integer types are designed to be variable-sized, and you code accordingly based on the minimum size guarantees. Assuming types are frozen in time is how you get things like LLP64.


No, gcc added support for ABIs that defined long as 8 bytes. They didn't change the size of long for any ABI.


Long is 32 bits on x86, 64 bits on x86_64. x86 code bases broke because they made non-portable assumptions.


No. It's 32 bits minimum. It can be anything larger. That's what the language says. Assuming otherwise is non-portable.


Why are you saying “no” and then agreeing with what I wrote? Am I misunderstanding something?


Because it isn't "64-bits on x64". It's 32-bit minimum on all platforms. Assuming larger sizes than the minimum is how brittle code gets written. A compiler vendor can set the sizes to whatever they want regardless of the target architecture and claim conformance if they meet the minimum.


Talking about the SysV ABI. There are multiple standards involved, not just the language standard itself.

> A compiler vendor can set the sizes to whatever they want regardless of the target architecture and claim conformance if they meet the minimum.

Compilers claim conformance with other standards besides just the C standard.


You are confusing ABIs with what the C standard mandates. A compiler emitting code for a particular architecture needs to conform to that architecture's ABI, or else it won't be able to interface with syscalls or other libraries. The same goes for other things like calling conventions and stack-frame setup. As klodolph states, long is 64 bits on x86_64 (32 on x86_64 Windows) and 32 on x86.


Because "32 bits or greater" != "exactly 32 bits"


Right. One is what the C standard says, and the other is what the ABI standard says. Maybe I’m missing something here.

Obviously, when someone says “type X is size Y on architecture Z”, they’re not talking about the C standard, they’re talking about some particular ABI.


> the other is what the ABI standard says

There are multiple ABIs for x86_64. I think the major compilers use 32 bits for int and 64 for long, but there could very well be a compiler that used different sizes, with a different ABI.


Yep, the x32 ABI on Linux is based on the amd64 instruction set, registers, etc., but uses 32-bit longs and pointers (which in my experience is a nice performance boost; I measured up to 20% in some cases).


Right but the reason we don’t have ILP64 is mostly legacy code.


The mistake was not properly versioning the ABI



