Most of what makes the M1 interesting is TSMC's 5nm process; why wouldn't we expect similar results from future x86 CPUs on 5nm or smaller?
At least AMD is supporting upstream Linux development for GPU support. If you care about Free software and control over your computers, throw your money at companies better aligned with those goals. Don't reward Apple for being assholes.
5nm certainly helped, but what really made the M1 is the team Apple has assembled. Apple's fat margins allowed them to attract and retain the best engineers with top-of-market total comp. Meanwhile Intel viewed engineering as a cost center to be optimized. So here we are.
Those are some of the worst colors ever chosen for a bar chart. Topped off with "Chart series order in same order as legend order" while having the bars vertically stacked, and the legend with two columns.
I don't know how some of these tech writers made it out of 6th grade science class with those charting skills.
You optimize for where your competition is. For the past couple of generations, Intel had to optimize for price, and with the hiccups in its process improvements, that pressure got even worse.
When Zen 2 came out, IPC became a serious issue for Intel. I imagine that M1 makes it a lot more urgent.
Also keep in mind that, in a sense, M1 is a dead end. Integrated fast memory is great when the size fits, but it's terrible if it doesn't.
I haven’t the foggiest idea if any of this could make it to competing ARM chips, but given how many years Snapdragons have been lagging relative to the iPhone “A” chips, I don’t feel too optimistic.
The “special javascript instructions” are a single instruction whose only impact is removing a perf advantage Intel has, due to JS exposing x86-specific sentinel values to the web.
The only thing it does is remove a branch on arm64, where the various JITs otherwise have to do:

    int value = (int)some_double;
    if (out_of_range) value = X86_SENTINEL; /* 0x80000000 */

whereas x86 systems don't need the branch by definition.
I don't think it will be that hard for Intel & AMD to add little operations like the JavaScript float one (which is a standard ARM instruction, BTW), or special instructions that make garbage collection and retain/release counting more effective, which all programming languages could leverage. That's something else that makes the M1 better.
Are they really assholes? Competition is important to progress and I like that their M1 release has shaken things up so much. We should hopefully be seeing similar CPUs released in the future, now that Apple has shown a glimpse of what's possible.
Until then I'll be enjoying the M1 because even if they're assholes, they're assholes who can make a computer I actually like using.
Something nobody ever mentions is Apple is the new Intel: they will tend to be at least one process node ahead of the competition.
The reason for this is that they have the scale to book all of TSMC's processing capacity for a new node. The AMDs of the world have to wait a year to receive the scraps while Apple moves on to the next TSMC node.
There are, e.g., Dell XPS "developer edition" models which ship with Ubuntu and have near-100% hardware support (I think there's some issue with the fingerprint reader).
I have it too. The battery is shot after 4 years (?) but lasts quite a bit longer in Ubuntu. It's had coil whine as long as I've had it, the supplied Toshiba NVMe drive died, and I've also replaced the Wi-Fi card with a mostly sane Intel 9560.
If you're using a SATA drive with the default UEFI settings, Linux will see the drive. If you use a PCIe drive, it won't until you set it to AHCI mode.
Edit: oh and Windows frequently wouldn't recognise the USB-C port unless you had something plugged into it during boot.
Which is what I've done when installing Linux. Am I losing performance? Power efficiency? Features?
It also makes dual booting trickier if you don't plan it beforehand.
It frustrates me that there's not that many reviews that cover this kind of stuff, so it's hard to avoid silliness like this when making a substantial purchase like a laptop.
> Which is what I've done when installing Linux. Am I losing performance? Power efficiency? Features?
From what I read a year ago before doing this, you don't lose much, and software-based RAID (if you go for that sort of thing) in GNU/Linux is just as efficient/reliable, and maybe more so. And anyway, if you only have one HDD/SSD, there is no point in RAID.
> It also makes dual booting trickier if you don't plan it beforehand.
I don't think so: installing Windows will work just the same with Intel's RAID turned off.
Apparently, Intel's RST is not just RAID, but it's also supposed to help when a laptop is equipped with two storage devices, one small and fast (SSD) and another big but slower (HDD): https://superuser.com/a/1578326
IIRC there were Dell XPS models like that in the lower price segment. Never tried one personally.
It is simple enough to set up windows to use the normal boot rather than the RAID one. I've got my XPS set up with one drive in Linux, one in Windows - and boot into the drive I want to be in.
The other thing that makes M1 interesting is it's ARM with specific hardware support for fast x86 emulation, which IMO is unlikely to be replicated in a top of the line laptop for the next 5 years.
Interesting. Either Apple wanted to use it in the first-gen ARM-based Macs, or they planned for it to be used internally and in the DTK while the M1 wasn't ready.
Either way, it's quite impressive to have deliberately engineered dark silicon on consumer devices.
I could be mistaken, but I’m unaware of any kernel capabilities/optimizations for heterogeneous cores with differing power requirements. System memory for both gpu and cpu workloads would also be a new requirement requiring some thought.
I’d expect it to be difficult for amd/intel to match performance without similar software capabilities.
Aren't you basically describing big.LITTLE, which has been standard in most mobile SoCs for years now? (And was probably introduced in Android before iOS.)
Without any independent actual measurements of the same stack running on the same hardware with those optimizations turned off, we can't draw any conclusions regarding their significance.
Considering Apple has been so adamant about misrepresenting what's really TSMC silicon as "Apple Silicon", I'm viewing everything they say through "PR oozing with insecurity about not actually controlling access to their chip's manufacturing process, and desperate to convince consumers (and investors) of this being a uniquely Apple advantage" glasses.
Based on your qualification of whose silicon it is, AMD, Broadcom, Marvell, MediaTek, Nvidia, STMicroelectronics, and Qualcomm don't manufacture chips either... Even IBM uses TSMC. That's a lot of the CPU silicon shipping right now.
The market has changed, and most CPU vendors don't fab their own chips anymore. The IP is in the chip design. Are you saying TSMC designed the M1?
I'm saying TSMC makes non-Apple chips as well, so Apple isn't enjoying as defensible of a moat when it comes to silicon as their marketing wants you to believe.
The other thing that makes it interesting is getting performance comparable to the fastest Intel/AMD processors at 3 GHz. I expect that AMD will catch up with Apple on performance when they switch to 5nm, but Apple's low clock speed means they're likely to retain the power-usage advantage.
It’s mostly compared to Intel since there’s no official AMD support on MacOS whatsoever. You very well could compare hackintosh AMD to Apple’s M1, but it wouldn’t be comparing Apples to Apples. There are indeed AMD Windows comparisons, however[0].
Although Apple booked TSMC’s entire production line (for an unknown length of time)[0], this official graph looks like it indicates 5nm before 2022 (so 2021)[1].
<=5nm AMD Zen SoCs will be fantastic as well