
I believe that's why 90% of the focus in these firms is on coding. There is a natural difficulty ramp-up that doesn't end anytime soon: you could imagine LLMs creating a line of code, a function, a file, a library, a codebase. The problem gets harder and harder and is still economically relevant very high into the difficulty ladder. Unlike basic natural language queries which saturate difficulty early.

This is also why I don't see the models getting commoditized anytime soon - the dimensionality of LLM output that is economically relevant keeps growing linearly for coding (therefore the possibility space of LLM outputs grows exponentially) which keeps the frontier nontrivial and thus not commoditized.
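The linear-to-exponential relationship claimed above can be sketched directly: a vocabulary of size V admits V^n distinct sequences of length n, so linear growth in economically relevant output length means exponential growth in the output space. (The vocabulary size below is a hypothetical round number, not any particular model's.)

```python
VOCAB_SIZE = 50_000  # hypothetical token vocabulary size

def possibility_space(n_tokens: int) -> int:
    """Number of distinct token sequences of length n_tokens."""
    return VOCAB_SIZE ** n_tokens

# Linear growth in output length...
lengths = [10, 20, 30]
# ...yields exponential growth in the space of possible outputs:
spaces = [possibility_space(n) for n in lengths]
assert spaces[1] == spaces[0] ** 2  # doubling the length squares the space
assert spaces[2] == spaces[0] ** 3  # tripling the length cubes it
```

The point being made is that for coding, usefulness keeps climbing with n, so the frontier stays hard; for short conversational answers, it saturates at small n.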

In contrast, there is not much demand for 100 page articles written by LLMs in response to basic conversational questions, therefore the models are basically commoditized at answering conversational questions because they have already saturated the difficulty/usefulness curve.



> the dimensionality of LLM output that is economically relevant keeps growing linearly for coding

Doubt. Yes, there was a point at which it suddenly became useful to write code in a general sense. But I have seen almost no improvement in the departments of architecting, operations, and gaslighting. In fact, gaslighting has gotten worse: entire outputs built on a wrong assumption that the model hid, almost intentionally. I had to create very dedicated, non-agentic tools to combat this.

And all of this with latest Opus line.


I’ve started to pick up on some of the “unwilling to dig deeply into the human's perspective” and “provide ideation and then run with it” behavior in 4.7. I actually think it’s consistent with confabulation, now that they’ve removed most of the model's ability to observe its own reasoning in 4.7.

The effect is over-complicated engineering that takes far more time to review for whether it is right-sized for the job.

Feels like hiding things, however.


Agreed. The proprietary nature of these tools is a huge impediment to their usefulness.

An intelligence plateau will happen sooner or later (my bet is on sooner), and when it does the open models will catch up. And everybody will be using open models and open-source agents because they're so much more flexible.


Also doubt, but most likely because of organizational inertia. After a while, you’re mostly focused on small problems, and big features are rare. Your solution is quasi-done, but now each new change is harder because you don’t want to break assumptions that have become hard requirements.



