I'm not sure I follow. How does an integrity check help when the source is compromised? The developer doesn't know their repo is compromised, so they keep publishing legitimate hashes — hashes computed from the already-compromised source.
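To make the point concrete, here is a minimal sketch (Python, with hypothetical names) of why a published checksum only ties a download to whatever the publisher had at release time, not to uncompromised code:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # Standard integrity check: hex digest of the file contents.
    return hashlib.sha256(data).hexdigest()

# Attacker tampers with the source BEFORE the release is cut:
artifact = b"legit code\n" + b"# backdoored line\n"

# The developer, unaware of the compromise, computes and publishes
# the hash of the already-compromised artifact:
published_hash = sha256_hex(artifact)

# A downstream user downloads the artifact and verifies it against
# the published hash -- the check passes, because both sides are
# derived from the same compromised bytes:
assert sha256_hex(artifact) == published_hash
```

The verification succeeds precisely because the hash and the artifact share a common (compromised) origin; the checksum detects transport corruption or third-party tampering after publication, not a compromise upstream of the hash itself.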
I didn't see any complaints about any kind of artificial intelligence, research or otherwise, besides large language models, in this article.
Large language models are a single kind of AI, and a particularly annoying kind when you are forced to use them for deterministic or fact-seeking tasks.
Or did you read the article? You're probably an LLM. Why am I here? Fuck this website.
True, but LLMs are all that's being sold right now. Mainly because people think they're intelligent, when they're basically bullshit-artist simulators.
I don't think the future of AI is with LLMs either. Not only LLMs anyway.
> How hard is that to fix? Aren't they using CoPilot? Just ask it to fix the invisible icon.
Maybe that's the problem? Imagine a Microsoft employee allowed to program only through a Copilot prompt, screaming and begging it to just apply a patch he has already written without touching anything else :D
This might not be too far from what's happening. In the dotnet repos you can see MS employees constantly fighting it across hundreds of PRs: https://github.com/dotnet/runtime/pull/120637
After all that noise, the clanker just says it can't do it and the PR is abandoned. I'd say it would have been easier to literally do nothing and get the same result.
If a human wrote it, at least there would have been a possibility for learning or growth. This just looks like a waste of time for everyone.