It's not just that the big platforms provide a much stronger incentive for fraudsters. It's that providing that stronger incentive is not a bug for the big platforms, it's a feature. It goes hand in hand with their business model, which is to sell access to as large a pool of users as possible.
> The big platforms actually are radically more competent at anti-fraud / anti-spam / etc
No, they're not. They are purposely incompetent at those things, because it's part of their business model. They cannot implement actual competence (which would mean providing sufficient actual human support to deal with the volume of mistakes that their automated algorithms make), not because it would cost too much (the article quite correctly points out that their huge profits provide plenty of cash to pay for human moderation), but because it would reduce their value to their actual customers, who are not their users.
I don't see how spam or fraud brings value to advertisers (which is what I assume you're talking about?) or anyone else for that matter (other than the spammer/fraudster).
Perhaps I can squint and see how an email platform like Gmail could be read as a protection racket, but I don't see it in other platforms.
> I don't see how spam or fraud brings value to advertisers
Spam and fraud still generate "engagement" (whether from the bad content itself or from the people getting outraged by it), which contributes to the numbers they use to attract & bill advertisers.
The modus operandi of these platforms when it comes to moderation is to quickly take down whatever will get them in trouble with the law (DMCA takedowns, CSAM), or whatever will provide ammunition for attack pieces in the media (pornography, sometimes drug- or weapon-related content). Anything else is fair game to stay up and earn "engagement", even in the face of user-submitted reports.
In the unlikely event that this "anything else" does get covered by some big media outlet, it'll be taken down, the platform will issue an apology saying how it will do better in the future, and everyone seems to buy it no matter how many times they've done this before.
> I don't see how spam or fraud brings value to advertisers
It doesn't; that's not the point. The point is, as I said, that the big platforms make money by selling access to as large a pool of users as possible, and that requires them not to crack down on spammers and fraudsters: at their scale, it is impossible for them to reliably distinguish the bad actors from the rest of their users. So they can't crack down without also shrinking their user pool and thereby killing their business model.
The article explicitly talks about this:
> Either you have to back off and only ban users in cases where you're extremely confident, or you ban all your users after not too long and, as companies like to handle support, appealing means that you'll get a response saying that "your case was carefully reviewed and we have determined that you've violated our policies. This is final", even for cases where any sort of cursory review would cause a reversal of the ban, like when you ban a user for impersonating themselves. And then at FB's scale, it's even worse and you'll ban all of your users even more quickly, so then you back off and we end up with things like 100k minors a day being exposed to "photos of adult genitalia or other sexually abusive content".