Hacker News

Damn, everyone is using AI for copyediting now, aren't they? Once you notice the patterns you see it everywhere.

* "This isn't X. It's Y"

* "Some sentence emphasizing something. Describing the same thing with different framing. Describing it a third time but punchier."

* The em-dash of course

* A hard-to-describe sense of "cheesiness"

I only hope the models get good enough to not be so samey in the future.



Once you see it you can't unsee it. Although maybe this is how corporate blogslop has always been, and we're only noticing now that it's infected everything.

> "These are not complaints, merely observations."

> "There are repairable laptops, and then there are ThinkPads."

> "iFixit approached the relationship as collaborators, not critics."

> "[...] they didn’t declare victory and go home. They kept pushing."

> "Designing for repairability doesn’t mean compromising innovation or premium experiences; when done well, it actually drives smarter innovation, better modularity, and more resilient platforms."

> "It would be one thing to make a highly repairable but low-volume niche device or concept. Instead, Lenovo just threw down a gauntlet by notching a 10/10 repairability score on their mainstream-iest business laptop."

> "This is [...] how repair goes from being an enthusiast’s “nice-to-have” to being baked into procurement checklists and fleet-management decisions."


There's a desperate grasping for drama and simplicity about it -- same as most mass-media news stories. I recall reading somewhere that the two watchwords of journalism are "simplify, and exaggerate". Maybe add to that: "Make all your metaphors cliches, so the reader doesn't have to think about what is meant."


Yeah, it's weird. It's like one person writes articles for the whole world. Probably will be fixed in a few AI iterations to present more styles, but right now it's everywhere. Articles, even forum posts.


I found a way to 'de-smell' LLM copy: tell it to take a second pass that processes the text output with the William Burroughs cut-up method. Works well for a small subset of use cases.
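For the curious, the cut-up step itself is easy to sketch: slice the text into short word runs, shuffle them, and rejoin. A minimal Python toy of the classic technique (the fragment size and function name here are arbitrary choices, not anything standard):

```python
import random

def cut_up(text: str, fragment_words: int = 4, seed=None) -> str:
    """Burroughs-style cut-up: split text into short word runs, shuffle, rejoin."""
    words = text.split()
    # chunk the word list into fixed-size fragments
    fragments = [
        words[i:i + fragment_words]
        for i in range(0, len(words), fragment_words)
    ]
    rng = random.Random(seed)
    rng.shuffle(fragments)
    return " ".join(" ".join(f) for f in fragments)
```

Feeding the shuffled output back to the model as a "second pass" source is what breaks up the familiar cadence; the words survive but the rhythm doesn't.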

Presumably the smelly AI text problem is just ... a problem that will be solved. Or maybe we'll just get used to it.


I believe it's already a solved problem, especially with base models (pre-RL), but they still push the LLM voice, either to make it easy to identify or because they think it's likeable. So it's not that OAI, Anthropic, and Google can't get rid of the assistant voice; it's that they don't want to.


We've gone the wrong direction on the verbosity scale.

Unless I'm reading for pleasure, I want everything in concise summaries. I don't need flowery language. Or even complete sentences.

Maybe an LLM verbosity slider that dynamically truncates text we don't need. I'll dial mine down.
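A crude version of that slider is just client-side post-processing: keep only the first k sentences of each paragraph. A toy sketch, not any real LLM API, and the regex sentence split is deliberately naive:

```python
import re

def dial_down(text: str, verbosity: int = 1) -> str:
    """Toy verbosity slider: keep the first `verbosity` sentences per paragraph."""
    out = []
    for para in text.split("\n\n"):
        # naive split on ., !, or ? followed by whitespace
        sentences = re.split(r"(?<=[.!?])\s+", para.strip())
        out.append(" ".join(sentences[:verbosity]))
    return "\n\n".join(out)
```

Dialing it down to 1 turns every paragraph into its topic sentence, which is roughly the concise-summary mode being asked for.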



I recently destroyed the screen on a Google Pixel during a repair, following a shoddily written set of iFixit instructions. I wish I had checked the comments, where many people complained that the instructions were wrong.

It involved a very fragile part of the process, and the error of omission seemed atypical for iFixit. It made me suspect the instructions might not have been wholly human-written. I feel a bit vindicated for that suspicion.

The most generous interpretation I can have for this type of article is that it's a second-order phenomenon. If it was written by a human, it was written by one who consumes a lot of AI generated content and whose standards for what they produce have slipped.


I’ve only tried doing a phone repair per iFixit’s instructions once, and the instructions sucked. They explained in excruciating detail how to take the phone apart and then the instructions just ended. No details on reassembly.


> A hard-to-describe sense of "cheesiness"

This is the "Reddit" factor. I picked up on it being LLM written with this sentence:

"This is the treacherous, final-boss stage where repairability usually dies,"


Ah, yes, everything needs to be phrased as an existential crossroads now. Same thing the other day when I was debating between olives or pickles on my pizza.


Now that I know pickles are a pizza topping, maybe.


Only in the final boss stage.


LLMs bring up the "final boss" analogy a lot too. I've gotten that in my own prompts.


> I only hope the models get good enough to not be so samey in the future.

Why would you hope to be more easily fooled?


Not GP but I'm personally hoping that if I'm inevitably doomed to be exposed to this horseshit every day that it becomes tolerable to read. For world-shaking language-based superintelligences, they can't write to save their very expensive lives.


> I'm personally hoping that if I'm inevitably doomed to be exposed to this horseshit every day that it becomes tolerable to read.

Thank you for replying, but that doesn’t answer the question. Why would you want to make made up bullshit output more tolerable to read? Being intolerable to read is a feature, it’s a useful signal to know a piece of text may not have had human review, and that you should spend your time reading something else.

I use that same strategy with website consent banners. If a website is so invasive that they go out of their way to make rejection hard (which, by the way, is against the law), I know it’s a company not worth supporting.


It indicates the baseline competency of the AI user, or of whomever they are trusting to use it, and it will hurt brand trust, and trust in humans, even more.

I'm glad I haven't let AI write much for me; it's better for it to help me develop my ideas and writing, and to do the work to learn, explore, and end up with something where my brain is in the gym. Passive generation might not always map well to passive consumption.


What annoys me the most is that the information has become much less dense. There's a lot of unnecessary repetition. I feel like I need to feed every article through an LLM just to get a summary of it.


If only a human could edit the output before posting.


Ironically, the editors probably haven't opened a text editor for months.


Em dashes aren’t an actual tell IMO. Many people use them.


Surely you mean: Em dashes aren’t an actual tell IMO — many people use them.


Maybe he isn't one but has a close friend who is? That would describe me.


Em dashes aren’t an actual tell IMO: many people use them.


There are dozens of us!


— dozens!


It is though if the rest of the prose is trash.


Joke's on you—humans write trash all the time.


> everyone is using AI for copyediting now aren't they?

If the studies that say humans prefer AI writers are to be believed, then you'd be a fool not to.


Depends on the type of human you want to attract.


* "This isn't X. It's Y"

I find that Gemini uses that phrase way too much.


Ugh I have actually started hating Gemini for this specifically.


I don’t mind the AI-generated aspect. I mind the lack of caring that it looks like AI slop.



