You would need a new CPU that doesn't yet exist to address 32TB of memory per socket. Existing parts can address 4TB. x86-64 has an ultimate system limit of 256TB, due to its 48-bit virtual address space.
Also worth considering that 32TB of DRAM would draw over 12kW, just sitting there.
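A quick sanity check on that 12 kW figure: it corresponds to roughly 0.375 W per GB, which is a plausible rule of thumb for older DDR4 RDIMMs (the per-GB number here is my assumption, not from any datasheet):

```python
# Back-of-the-envelope check of the 12 kW claim.
# ASSUMPTION: ~0.375 W per GB, in line with older DDR4 RDIMMs.
capacity_gb = 32 * 1024          # 32 TB expressed in GB
watts_per_gb = 0.375             # assumed power density
idle_power_w = capacity_gb * watts_per_gb
print(f"{idle_power_w / 1000:.1f} kW")  # → 12.3 kW
```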
> You would need a new CPU that doesn't yet exist to address 32TB of memory per socket. Existing parts can address 1TB (Intel) or 4TB (AMD). x86-64 has an ultimate system limit of 256TB, due to its 48-bit virtual address space.
That's the virtual address space. A page table entry has enough bits for a 52-bit physical address space (the current architectural limit); it just wouldn't be able to have it all mapped at once in the same virtual address space. Although no CPU implements all 52 physical address bits yet, there's nothing fundamental in the x86 architecture preventing one from doing so.
Intel have already implemented 5-level paging, which gives a 57-bit virtual address space (2^57 bytes, or 128PB).
Also, this wouldn't be the first time that the x86 has supported more physical memory than virtual. PAE allowed for 64GB of RAM on a system with a 32-bit virtual address space.
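The address widths discussed above can be checked with a few lines of arithmetic (the 52-bit physical limit is the one defined by the x86-64 page table entry format; all sizes use binary prefixes, 1TB = 2^40 bytes):

```python
# Address-space sizes implied by the widths in this thread.
TB = 2**40
GB = 2**30

assert 2**48 // TB == 256          # 48-bit virtual: 256 TB
assert 2**57 // TB == 128 * 1024   # 5-level paging, 57-bit virtual: 128 PB
assert 2**52 // TB == 4096         # 52-bit physical (PTE limit): 4 PB
assert 2**36 // GB == 64           # PAE: 36-bit physical, 64 GB
print("all address-space sizes check out")
```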
For what it's worth, a significant fraction of that is the communication interfaces rather than the DRAM itself, and there have already been significant process improvements to reduce RAM power consumption. A modern 256GB RDIMM draws a heck of a lot less than 50W. I have never measured one, but based on the thermal solution I would say closer to 5W.
I don't see how that could be true. On my server right here with a Xeon Silver 4114 it is measuring the power consumption of the memory at ~75W for 256GB.
Like I said, RAM power consumption does not scale linearly with capacity due to the significant overhead from the I/O. A single 128GB stick will draw much less than 16x16GB sticks. (Not sure why you are running 256GB on a 4114; it has 6 memory channels, so surely you have 288GB?)
Here is the datasheet for a 128GB DIMM from 2017 [1], which shows 3.4A IDD0 (normal operation) on the 1.2V rail at the highest speed of DDR4-2666, and 0.2A on the 2.5V precharge rail, for a total of just over 6W. Also worth noting: this is an LRDIMM, which draws more power from the DC rails due to the additional buffering. A normal RDIMM draws a bit less static power.
Compare with the manual for a similar-vintage 32GB stick [2], which consumes 2A on the 1.2V rail and 0.1A on the precharge rail, for a total of a bit under 3W. One quarter the capacity, but still half the power draw.
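Reproducing the rail arithmetic above (currents and voltages taken from the comment; rail names follow DDR4 convention, VDD = 1.2V core supply, VPP = 2.5V activation/precharge supply). Note the two rails alone come to about 4.6W for the 128GB part; the remainder of the quoted ~6W would presumably be the LRDIMM buffer overhead mentioned:

```python
# Per-DIMM power from datasheet rail currents (DDR4: VDD=1.2V, VPP=2.5V).
def dimm_power_w(i_vdd_a, i_vpp_a, vdd=1.2, vpp=2.5):
    return i_vdd_a * vdd + i_vpp_a * vpp

p_128gb = dimm_power_w(3.4, 0.2)  # 128 GB LRDIMM, VDD+VPP rails only
p_32gb = dimm_power_w(2.0, 0.1)   # 32 GB stick
print(f"{p_128gb:.2f} W, {p_32gb:.2f} W")  # → 4.58 W, 2.65 W
```

The per-GB comparison holds either way: the 128GB stick draws roughly 36mW/GB versus roughly 83mW/GB for the 32GB stick.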
If I could send you back in time to stop this machine's designer from deploying it at scale with some of the channels depopulated, I would! It's an HPE DL360 Gen10; if you go look at their catalog, you'll see that all of the off-the-shelf and BTO memory configs are nonsense.
Thanks for doing the math on the power story. I didn't realize how the scaling worked.