Feels like a false equivalency. It's just my experience, but I've completely ignored crypto and the metaverse, and I don't get the sense I'm missing out on much.
In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life. Transformative for the better? Time will tell I suppose, but I'm really enjoying it so far.
As a freelancer I do a bit of everything, and I’ve seen places where an LLM breezes through and gets me what I want quickly, and times where using an LLM was a complete waste of time.
For sure. The more specialized or obscure the things you have to do, the less LLMs help you.
Building a simple marketing website? Don’t waste your time doing it by hand - an LLM will probably be faster.
Designing a new SLAM algorithm? LLMs will probably spin around in circles helplessly. That being said, that was my experience several years ago… maybe the state of the art has changed in the computer vision space.
> The more specialized or obscure the things you have to do, the less LLMs help you.
I've been impressed by how this isn't quite true. A lot of my coding life is spent in the popular languages, which the LLMs obviously excel at.
But a random robotics language dating to the '80s (Karel)? I unfortunately have to use it sometimes, and Claude ingested a PDF manual for the language that runs to hundreds of pages, and now it's better at it than I am. It doesn't even have a compiler to test against, and still it rarely makes mistakes.
I think the trick with a lot of these LLMs is just figuring out the best techniques for using them. Fortunately a lot of people are working all the time to figure this out.
Agreed. The sentiment you are replying to is a common one, and it's just people self-aggrandizing. No, almost nobody is working on code novel enough to be difficult for an LLM. All code projects build on things LLMs understand very well.
Even if your architectural idea is completely unique... a never-before-seen magnum opus, the building blocks are still Legos.
Specialized is probably not the word I'd use, because LLMs are generally useful for understanding more specialized / obscure topics. For example, I've never randomly heard people talking about the DICOM standard, yet LLMs have no trouble with it.
I think there is a sweet spot in the training(?) of these LLMs where there is basically only "professional"-level documentation and chatter, without the layman stuff being picked up from Reddit, GitHub, etc.
I was trying to remember/figure out some obscure hardware communication protocol in order to work out enumeration of a hardware bus on some servers. Feeding Codex a few RFC URLs and other such information, plus telling it to search the internet, resulted in extremely rapid progress vs. having to wade through 500 pages of technical jargon and specification documents.
I'm sure if I was extending the spec to a 3.0 version in hardware or something it would not be useful, but for someone who just needs to understand the basics to get some quick tooling stood up it was close to magic.
The standard for obscurity is different for LLMs: something can be very widespread and public without the average person knowing about it. DICOM is used at practically every hospital in the world, there are whole websites dedicated to browsing the documentation, companies employ people solely for DICOM work, there are popular maintained libraries for several different languages, etc., so the LLM has an enormous amount of it in its training data.
The question relevant for LLMs would be "how many high-quality results would I get if I googled something related to this", and for DICOM the answer is "many". As long as that is the case, LLMs will not have trouble answering questions about it either.
One tendency I've noticed is that LLMs struggle with creativity. If you give them a language with extremely powerful and expressive features, they'll often fail to use them to simplify other problems the way a good programmer does. Wolfram is a language essentially designed around that.
I wasn't able to replicate this in my own testing, though. Do you know if it also fails for "Mathematica" code? There's much more text online about that.
> Building a simple marketing website? Don’t waste your time doing it by hand - an LLM will probably be faster.
This is actually where I would be most reluctant to use an LLM. Your website represents your product, and you probably don’t want to give it the scent of homogenized AI slop. People can tell.
They can tell if you let it use whatever CSS it wants (Claude will nearly always make a purple or blue website with gross rainbow gradients). They can also tell if you let it write your marketing copy.
If you decide on your own brand colors and wording, there’s very little left about the code that can’t be done instantly by an LLM (at least on a marketing website).
Some subscriptions offer "unlimited tokens" for certain models, e.g. GitHub Copilot can be unlimited for GPT-4o and GPT-4.1 (and, actually, GPT-5 mini!). So I spent some time with those models to see what level of scaffolding and breaking things down (hand-holding) was required to get them to complete a task.
Why would I do that? Well, I wanted to understand more deeply how differences in my prompting might impact the outcomes of the model. I also wanted to get generally better at writing prompts, at controlling context, and at seeing how models can go off the rails. Just by getting better at understanding these patterns, I feel more confident in general about when and how to use LLMs in my daily work.
I think, in general, understanding not only that earlier models are weaker, but also _how_ they are weaker, is useful in its own right. It gives you an extra tool to use.
I will say, the biggest "weaknesses" I've found are in the training data. If you're keeping your libraries up to date, and you're using newer methods or functionality from those libraries, AI will consistently fail to pick up those new things. For example, Zod v4 came out recently and the older models absolutely fail to understand that it uses some different syntax and methods under the hood. Jest now supports `using` syntax for its spyOn method, and models just can't figure it out. Even with system prompts and telling them directly, the existing training data is just too overpowering.
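To make that concrete, here's a minimal sketch of both churn points, assuming the current Zod 4 top-level string formats and Jest's explicit-resource-management support (treat the exact APIs as approximate, from memory):

    import { z } from 'zod';
    import { jest, test, expect } from '@jest/globals';

    // Zod v3 habit that older models keep emitting:
    const UserV3 = z.object({ email: z.string().email() });

    // Zod v4 moved common string formats to top-level functions:
    const UserV4 = z.object({ email: z.email() });

    const logger = { log: (msg: string) => console.log(msg) };

    test('logs a message', () => {
      // Jest's `using` support (TC39 explicit resource management):
      // the spy is restored automatically when it falls out of scope,
      // no mockRestore() needed -- exactly the syntax older models miss.
      using spy = jest.spyOn(logger, 'log').mockImplementation(() => {});
      logger.log('hello');
      expect(spy).toHaveBeenCalledWith('hello');
    });

Ask an older model for that test and it will almost always reach for a manual spy.mockRestore() in afterEach instead.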
I would say they are not changing but evolving, and you evolve with them.
For example: Gemini became a lot better at a lot more tasks. How do I know? Because I also have very basic benchmarks, or let's say "things which haven't worked" are my benchmark.
Honestly, I think this is the primary explanation for why there is so much disagreement over whether LLMs are useful or not, at least if you leave out the more motivated arguments.
> In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life.
Feels like a false dichotomy.
Have I become faster with LLMs? Yes, maybe. Is it 10x or 1000x or 10,000x? Definitely not. In the past I would have leaned more on senior developers, books, Stack Overflow, etc.; now I can be much more independent and proactive.
LLM-based tools are a wide spectrum, and to argue that the whole spectrum is worth exploring because one sliver of it has definite utility is a bit wonky. Kind of like saying $SHITCOIN is worth investing in because $BITCOIN mooned as a speculative asset:
- I’m bullish on LLM chat interfaces replacing StackOverflow and O’Reilly
- I could not be more bearish on Agents automating software engineering
Feels like we’re back at the Adobe Dreamweaver release, with everyone claiming that web development jobs are dead.
>Feels like we’re back at the Adobe Dreamweaver release, with everyone claiming that web development jobs are dead
I truly believe much of the anti-AI sentiment is the same as the Luddites'.
They're often used as a meme now, but they were very real people, faced with a real and present risk to their livelihoods. They acted out of fear, but not just irrational fear.
AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity ... and also, unquestionably, a threat to programmer jobs.
Maybe the OP is right about waiting, but to me whenever new tech is disrupting jobs, that seems like the best time to learn it. If you don't, it's not just FOMO as the author suggests ... it's failing to keep up on the skills that keep you employed.
> it's failing to keep up on the skills that keep you employed.
I judge "failing to keep up" by my ability to "catch up". Right now if I search for paid courses on AI-assisted coding, I get a royal bunch for anything between $3 and about $25. These are distilled and converging observations by people who have had more time playing around with these toys than me. Most are less than 10 hours (usually 3 to 5). I also find countless free ones on YouTube popping up every week that can catch me up on a decent bouquet of current practices in an hour or two. They all also more or less need updating after a few months to stay relevant (e.g. I've recently deleted my numerous bookmarks on MCP).
Don't get me wrong, LLM-assisted coding is disruptive, but when a practice becomes obsolete after a few months, it's not really what's keeping you employed. If, after you've spent so much time and effort living near that edge, the gap that truly separates you from me can be covered in a few hours of catching up, you're not really leaving me behind.
The burden of proof lies with whoever makes grand claims. My counterargument in the face of your lack of evidence is: “Where are all the improvements to my daily life? Where are the disrupting geniuses who go to market 100x faster than their Luddite counterparts?”
To paraphrase another analogy that I enjoyed, it’s a bit like when 3D printing became a thing and hype con artists claimed that no one would buy anything anymore; you could just 3D print it.
You don’t need 100x productivity to be disruptive. In business, a 10% gain can be quite enormous. My senior engineers are estimating 25-50% gains. That is a far cry from your 100,000% gain, but very real and meaningful.
I have found that maximising AI coding is a skill of its own. There is a lot of context switching. There is making sure agents are running in loops. Keeping the quality high is also important, as they often take shortcuts. And finally, you need somewhat of an architectural vision to ensure agents don’t just work in a single file.
This is all very tiring and difficult. You can be significantly better than other people at this skill.
This is not an argument for its revolutionary utility. Balancing rocks on the beach is very tiring and difficult for some people, and you can be significantly better at it. Not really bringing anything to the immediate conversation with that insight.
> AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity.
And yet, the only research that tries to evaluate this in a controlled, scientific way does not actually show this. Critics then say those studies aren’t valid because of X, Y or Z but don’t provide anything stronger than anecdotes in rebuttal.
It’s a ridiculous double standard, and it poisons any reasonable discussion to assert that something is a fact and that anyone who disagrees is a hysterical Luddite, based on no actual evidence.
The question isn’t whether you’ve improved. It’s whether the path you took to your current improvement could have been shortcut with the benefit of hindsight. Given the number of dead ends we’ve traversed, the answer is almost certainly yes.
Crypto and the Metaverse were solutions in search of a problem. LLMs kind of felt like that until tooling arrived that enabled doing a lot more than copying + pasting chat conversations.
Sure, maybe crypto changed some lives, but an entire industry? I think ALL of software dev is undergoing a transformation, and I think we're past the point of "wait it out" IMO.
Or I'm wrong, but right now I'm being paid to develop a new skill professionally. Maybe the skill ends up not being useful - ok, back to writing code the old way then.
It's clearly a textbook example of survivorship bias.
In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history.
It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even in social media we have started to see the dead internet theory put into practice in real time.
The 90s also brought the dotcom boom, and the vast majority of those who placed an all-in bet on it being everything lost it all in the dotcom bust and also "ended up being forgotten by history". Some of those bets were prescient but too early; many of them never made any sense. The dotcom bust was worse than the software industry crash we're experiencing now.
"It's rather obvious that this AI thing is a transformative event in world history" perhaps but it's not at all obvious how it's going to shake out or which bets are sensible.
> The 90s also brought the dotcom boom, and the vast majority of those who placed an all-in bet on it being everything lost it all in the dotcom bust and also "ended up being forgotten by history".
I think you are missing the point, and also the very site you're posting on.
Look at the top 50 list of most valuable companies in the world. Over half of the total market value reported today is attributed to companies which were either dotcom startups or whose growth was driven by the dotcom growth period. Dismissing the advent of the internet as anything short of revolutionary is disingenuous, no matter how many zombo.com companies failed.
LLMs have the exact same transformative impact on humanity.
> LLMs have the exact same transformative impact on humanity.
But this is begging the question.
Yes, we can see that the internet was radically transformative.
But you are arguing that this somehow proves that LLMs are too, when there's wildly insufficient evidence—either on where LLMs are going in themselves, or in the comparison—to credibly make that claim.
> It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even in social media we have started to see the dead internet theory put into practice in real time.
It's worth noting that SO was declining well before ChatGPT launched. It seems likely that the decline of SO was driven more by Google ranking changes that prioritised websites serving Google ads. Certainly I remember having to go down a few results to get SO results for a while, even when the top results were just copypasta from SO.
> It's worth noting that SO was declining well before ChatGPT launched. It seems likely that the decline of SO was driven more by Google ranking changes that prioritised websites serving Google ads.
I don't think that's it. SO was the go-to page for troubleshooting, and its traffic did not exactly originate from web search. Also, the LLM-correlated drop in traffic is reported by search engines as well. Stack Overflow just so happens to be a specialized service with a very specialized audience, whose demand is almost entirely absorbed by LLM chatbots.
The internet was something new. By definition, LLM coding isn’t doing anything you couldn’t have done already. Once the agents aren’t writing a human-syntax-based language but are spitting out opaque functions in binary machine code, then they’ll be doing something new and compelling IMO, because there are real performance gains with that.
No, this is wrong. AI has drastically shortened the time and effort between idea and implementation. The upshot being that not only do you get things done faster, but things you wouldn't otherwise countenance doing are now within reach.
So where is all the new tech, years into AI? Turns out that wasn’t the limiting factor. The limiting factor is still what it was 10,000 years ago in business: accumulating capital to start and finding a fit in the market to last.
As someone who does monitor /new, I can say Show HN is overwhelmed with copycat, low-effort AI rubbish, not innovative AI-generated tech that is interesting or useful. How many AI-powered coloring book generators and AI-powered anime-ification sites does the world really need?
> In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history.
Allow me to introduce you to the dot-com boom, where everyone who bet on the internet went broke.
> In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history.
Almost all people are "forgotten" by history.
In any case, people who were not even born yet in the 1990s are using the internet today, very successfully, so clearly you can wait.
> In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life.
I can't really agree. I've never seen anything from an LLM that I would consider even helpful, never mind transformative.
You're right, you are using it wrong. An LLM can read code faster than you can, write code faster than you can, and knows more things than you do. By "you" I mean you, me, and anyone with a biological brain.
Where LLMs are behind humans is depth of insight. Doing anything non-trivial requires insight.
The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work. Kind of like paint by numbers. In your case, I would recommend some combination of: manually defining the API of the library you want, thinking through how you would implement it and writing down the broad strokes of the process for the LLM, and collecting reference materials like a format spec, any docs, the code that's creating these packets, and so on.
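As a purely hypothetical sketch (the names and byte layout here are invented for illustration, not from any real spec), the hand-written "insight" can be as little as a skeleton like this, with the grunt work left as stubs for the LLM:

    // Hypothetical skeleton for a binary-packet library, written by hand
    // and handed to the LLM along with the format spec and sample captures.

    export interface Packet {
      version: number;        // uint8 at offset 0
      payloadLength: number;  // uint16, big-endian, at offset 1
      payload: Uint8Array;    // payloadLength bytes starting at offset 3
    }

    export class PacketParseError extends Error {}

    /** Parse one packet; throws PacketParseError on malformed input. */
    export function parsePacket(buf: Uint8Array): Packet {
      // TODO(llm): implement per the offsets documented above.
      throw new PacketParseError('not implemented');
    }

    /** Serialize a packet; must round-trip through parsePacket. */
    export function serializePacket(p: Packet): Uint8Array {
      // TODO(llm): implement; keep the round-trip property testable.
      throw new PacketParseError('not implemented');
    }

You keep ownership of the shape of the library; the model fills in the byte-twiddling against the spec.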
> An LLM can read code faster than you can, write code faster than you can, and knows more things than you do.
I don't agree. It can't write code at all; it can only copy things it's already seen. But if what you say is true, why can't it solve my problem?
> The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work
Okay, so how do I do that? Remember, I want to do ZERO TYPING. I do not want to type a single character that is not code. I already know what I want the code to do, I just want it typed in.
I just don't think AI can ever solve a problem I have.
When you write a library, the first step is always designing it. LLMs don't get rid of that step; they get rid of the next step, where you implement your design.
Is this really "additional"? Do you not write design docs/ADRs/RFCs etc. and talk about them with your team? Do you take any notes, or write out your design/plan in some way, even for yourself?
If I'm writing a library to work with a binary format, there is very little English in my head required, let alone written English.
That is a heavily symbolic exercise. I will "read" the spec, but I will not pronounce it in literal audible English in my head (I'm a better reader than that).
I write Haskell tho so maybe I'm biased. I do not have an inner narrative when programming ever.
I’m not part of any team, I work on my projects alone. I rarely write long-form design documents; usually I either just start coding or write very vague notes that only make sense when combined with what’s in my head.
Security theater, perhaps. Don't underestimate the degree to which those turnstiles were intended to serve the purpose of tracking employees' movements.
This is very cool, thank you for sharing. I work in automation and SWE for a certain 4-letter organization that delivers your mail. Pick/place is something we've rolled out using articulated and delta robots with vacuum end effectors, and it's an interesting and challenging space to be in. As in your case, bags and other amorphous shapes are always the most difficult. It's always an uphill battle to hit throughput targets due to exception cases that can stop things until a human gets involved. Ultimately, it can be a struggle to avoid overpromising and to generate ROI since automation is so costly, especially when there's no opportunity to bound the problem by influencing the inputs to your system or the output requirements (in your case, the cart being loaded). Best of luck and looking forward to seeing your new end effector.