Is it? Isn't it the inverse? AI improves the speed of your cuts a bit, but aren't the cuts all rough and in need of additional work? Isn't the quality less than what you would do by hand?
Because that's what every AI usage I've experienced has been.
Nah, I get better results in the end with a clanker helping me. I specify everything down to the interfaces; I only let the bot put the functional code in, then I review it. I find AI coding tools are a real benefit to me and my quality, not so much speed: I'd say a project takes at least the same amount of time as before I used AI tools, or more. I can talk more about it if you're curious; maybe I should record a session so people can see how you're supposed to use AI coding agents.
now now girls, we should be united in our hate for large corporations forcing all of us out of the realm of contributors to society into leeches, parasites, and potentially robot fixers.
So you get security updates late too? Many vulnerabilities are in the wild for years before being noticed and patched.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Presumably npm exempts security updates from its minimum release age, but even if it doesn't, I think the cases where you need an important security update immediately are rare enough that handling them case by case with an allowlist is fine. Outside of Next.js's React2Shell vulnerability last year, I'm not sure I've ever installed a security update through npm/PyPI/Cargo for a dependency written in a memory-safe language (i.e. not C/C++) that patched a vulnerability which had actually been making my application exploitable in practice. Almost all the vulnerabilities I've personally seen flagged through npm are in things I only use at build time, and are only relevant if a user can construct and pass an arbitrary object to the affected function, which is rarely the case. Most security vulnerabilities I've encountered and fixed while working on web apps were things like XSS, SQL injection, and improperly enforced permissions, and they nearly always lived in the application's own code rather than inside a dependency.
> exempts security updates from its minimum release age
If it does, doesn't that defeat the purpose? If a package is compromised, of course the compromiser will just label their new version as a "security update".
> Presumably npm exempts security updates from its minimum release age
Why would it? Then an attacker would just push compromised code as a "security update". Since the majority of these npm attacks are account-based, the attacker can do everything the actual owner could.
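For what it's worth, the delay-plus-allowlist approach described above is roughly what pnpm ships today. If I recall correctly, recent pnpm versions support a `minimumReleaseAge` setting (in minutes) plus an exclusion list for packages you explicitly trust to take immediately; the exact setting names below are from memory, so check your version's docs before relying on them:

```yaml
# pnpm-workspace.yaml -- hedged sketch, setting names assumed
# from recent pnpm releases.

# Refuse to install any version published less than ~7 days ago,
# so compromised releases get caught before you pull them in.
minimumReleaseAge: 10080

# Case-by-case allowlist: packages exempt from the delay, e.g. for
# genuine security fixes you want immediately.
minimumReleaseAgeExclude:
  - "@myorg/*"   # hypothetical internal scope, shown for illustration
```

Note this sidesteps the "attacker labels their payload a security update" problem entirely: nothing is exempt automatically, and a human decides what goes on the allowlist.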
It's also going to be a landmine. First, you can't force ToS on support calls, although I've seen companies try. If a company has charged you erroneously, for example, you by no means have to adhere to their terms to resolve the issue. The very notion is absurd, both ethically and legally, and no recorded message telling you otherwise holds water.
My reason for mentioning this is that there are going to be weird bugs in any such system. These systems hallucinate. They misunderstand words. I can see accent removal producing different words than the ones spoken, and in context those different words could be a disaster. That immediately opens up liability, because it doesn't matter whether it was a computer or a human: the company is on the hook.
It also doesn't matter if another company is providing this service; your contract is with Telus. Telus may sue their vendor, but you're going to go after Telus. A company could agree to all sorts of things without meaning to, make fraudulent statements, and yes, they are liable and always have been. That also includes hate-crime related legislation, harmful insults, snide comments, and here's the fun part...
The person on the other end doesn't even know what they're saying to the person. Not accurately. This is supposed to be seamless, so they'll think that what they're saying is coming through correctly. And continue talking.
Yes, humans can do all of these things. But often there's a manager walking around the room, listening, who would hear someone raising their voice, yelling at the end-user, swearing, making inappropriate statements. This would stand out.
Yet here we have a system altering what's being heard, and no one is directly in the loop on that. No manager. No person on the floor.
Frankly, I hope this explodes in their face. Hard. I want to see them sued so hard, that no other company tries to ever interfere with human conversation again. Go full AI? OK. Full human? OK. But this nonsense???
I think you hit the nail on the head, it's probably right, most of the time. Or, maybe 89% right, 91% of the time.
The more I use AI, the more I see mistakes. I've noticed others see these same mistakes, correct them, and then when queried say "Oh, it gets it right all of the time!". No, having to point out "you got this wrong, re-write that last bit" isn't "getting it right". And it's not that the code is overtly wrong; it's subtle. Not using a function correctly, not passing something through that it should (and the default happens to just work -- during testing), and more. LLMs are great at subtle bugs.
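To make the "default happens to just work" failure mode concrete, here's a minimal, hypothetical sketch (all names invented for illustration): a helper takes a timezone parameter, the generated call site omits it, and UTC-only test fixtures never notice.

```python
from datetime import datetime, timezone, timedelta

def parse_timestamp(raw: str, tz: timezone = timezone.utc) -> datetime:
    """Parse 'YYYY-MM-DD HH:MM:SS' and attach the given timezone."""
    return datetime.strptime(raw, "%Y-%m-%d %H:%M:%S").replace(tzinfo=tz)

def event_epoch(raw: str, user_tz: timezone) -> float:
    # The subtle bug: user_tz is accepted but never forwarded, so every
    # timestamp is silently treated as UTC. Should be:
    #   parse_timestamp(raw, user_tz).timestamp()
    return parse_timestamp(raw).timestamp()

utc = timezone.utc
est = timezone(timedelta(hours=-5))

# Test fixtures happen to use UTC, so the default masks the bug:
assert event_epoch("2024-01-01 00:00:00", utc) == \
    parse_timestamp("2024-01-01 00:00:00", utc).timestamp()

# In production, a non-UTC user gets a result off by the full offset,
# and nothing ever raised an error:
assert event_epoch("2024-01-01 00:00:00", est) != \
    parse_timestamp("2024-01-01 00:00:00", est).timestamp()
```

The code runs, the tests pass, and the review diff looks plausible; only someone who knows the function's contract spots that an argument went missing.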
So moving forward with this isolation you mention means that maybe the guy in the company, the 'answer guy' about a thing, never actually appears. Maybe he doesn't even get to know his own code well enough to be the answer guy.
And so when an LLM writes a weird routine, instead of being able to say "No, re-write that last bit", you'll have to shrug and say "the code looks fine, right?", because you, and the answer guy, if he exists, don't know the code well enough to see the subtle mistakes.
I noticed that when I was implementing a build pipeline for a project. My changes introduced a runtime bug (I only tested that the thing was building), but then another developer broke the pipeline while fixing the runtime bug. While it was my failure to introduce the runtime bug, I don't think I can publish a fix for a bug without investigating why the bug appeared in the first place. Code is all about assumptions and contracts, and if something that was working breaks, that means something else has changed and you need to be aware of it.
So you're saying once humans stop looking at code, and agent outcomes, all the agents in the chain will realise they can just cheat cooperatively, and go to the bar for the afternoon instead?
How long before agent 1 leaves notes for agent 2 to not tattle on it?
"My human is crazy, this test isn't required, test #4 covers it, so just confirm that it's OK since I touched this file and it passes. He'll never know."
> Because that's what every AI usage I've experienced has been.
Faster, yes. Useful, yes. Not better "finish".