Once you see it you can't unsee it. Although maybe this is how corporate blogslop has always been, and we're just now noticing that it's infected everything.
> "These are not complaints, merely observations."
> "There are repairable laptops, and then there are ThinkPads."
> "iFixit approached the relationship as collaborators, not critics."
> "[...] they didn’t declare victory and go home. They kept pushing."
> "Designing for repairability doesn’t mean compromising innovation or premium experiences; when done well, it actually drives smarter innovation, better modularity, and more resilient platforms."
> "It would be one thing to make a highly repairable but low-volume niche device or concept. Instead, Lenovo just threw down a gauntlet by notching a 10/10 repairability score on their mainstream-iest business laptop."
> "This is [...] how repair goes from being an enthusiast’s “nice-to-have” to being baked into procurement checklists and fleet-management decisions."
There's a desperate grasping for drama and simplicity about it, same as most mass-media news stories. I recall reading somewhere that the two watchwords of journalism are "simplify, and exaggerate". Maybe add to that: "Make all your metaphors clichés, so the reader doesn't have to think about what is meant."
Yeah, it's weird. It's like one person writes articles for the whole world. Probably will be fixed in a few AI iterations to present more styles, but right now it's everywhere. Articles, even forum posts.
I found a way to 'de-smell' LLM copy: tell it to take a second pass that processes the text output with the William Burroughs cut-up method. Works well for a small subset of use cases.
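Mechanically, the cut-up method is just: slice the text into short fragments and reassemble them in shuffled order. A minimal sketch of that idea (the fragment size, seed, and sample text here are arbitrary illustration choices, not anything from the original comment):

```python
import random

def cut_up(text: str, fragment_words: int = 4, seed: int = 0) -> str:
    """Burroughs-style cut-up: split text into short runs of words,
    shuffle the runs, and rejoin them into one string."""
    words = text.split()
    # Chunk the word list into fixed-size fragments.
    fragments = [words[i:i + fragment_words]
                 for i in range(0, len(words), fragment_words)]
    rng = random.Random(seed)  # seeded for reproducibility
    rng.shuffle(fragments)
    return " ".join(word for frag in fragments for word in frag)

sample = "These are not complaints merely observations they kept pushing"
print(cut_up(sample, fragment_words=3))
```

Whether an LLM "second pass" interprets the instruction this literally is another matter, but the shuffle is the core of the technique.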
Presumably the smelly AI text problem is just ... a problem that will be solved. Or maybe we'll just get used to it.
I believe it's already a solved problem, especially with base models (pre-RL), but they still push the LLM voice, either to make it easy to identify or because they think it's likeable. So it's not that OpenAI, Anthropic, and Google can't get rid of the assistant voice; it's that they don't want to.
I recently destroyed the screen on a Google Pixel during a repair, following a shoddily-written set of iFixit instructions. I wish I had checked the comments, where many people complained that the instructions were wrong.
It was about a very fragile part of the process, so it seemed like an error of omission that was atypical for iFixit. It made me suspect the instructions might not have been wholly human-written. I feel a bit vindicated for that suspicion.
The most generous interpretation I can have for this type of article is that it's a second-order phenomenon. If it was written by a human, it was written by one who consumes a lot of AI generated content and whose standards for what they produce have slipped.
I’ve only tried doing a phone repair per iFixit’s instructions once, and the instructions sucked. They explained in excruciating detail how to take the phone apart and then the instructions just ended. No details on reassembly.
Ah, yes, everything needs to be phrased as an existential crossroads now. Same thing the other day when I was debating between olives or pickles on my pizza.
Not GP but I'm personally hoping that if I'm inevitably doomed to be exposed to this horseshit every day, it becomes tolerable to read. For world-shaking language-based superintelligences, they can't write to save their very expensive lives.
> I'm personally hoping that if I'm inevitably doomed to be exposed to this horseshit every day, it becomes tolerable to read.
Thank you for replying, but that doesn’t answer the question. Why would you want to make made up bullshit output more tolerable to read? Being intolerable to read is a feature, it’s a useful signal to know a piece of text may not have had human review, and that you should spend your time reading something else.
I use that same strategy with website consent banners. If a website is so invasive that they go out of their way to make rejection hard (which, by the way, is against the law), I know it’s a company not worth supporting.
It indicates the baseline competency of the AI user, or of whomever they're trusting to use it, and it will hurt brand trust, and trust in humans, even more.
I'm glad I haven't let AI write much for me; it's better for it to help me develop my ideas and writing, and for me to do the work to learn, explore, and end up with something where my brain has been in the gym.
Passive generation might not always map well to passive consumption
What annoys me the most is that the information has become much less dense. There's a lot of unnecessary repetition.
I feel like I need to feed every article through an LLM just to get a summary of it.
* "This isn't X. It's Y"
* "Some sentence emphasizing something. Describing the same thing with different framing. Describing it a third time but punchier."
* The em-dash of course
* A hard to describe sense of "cheesiness"
I only hope the models get good enough to not be so samey in the future.