
I wonder if there are different use cases. You sound like you’re using an LLM in a similar way to me. I think about the problem and solution, describe what I need implemented, provide references in the context (“the endpoint should be structured like this one…”) and then evaluate the output.

It sounds like other folks are more throwing an LLM at the problem to see what it comes up with, more akin to how I delegate a problem to one of my human engineers/architects. I understand, conceptually, why they might be doing that, but I know that I stopped trying that approach because it didn't produce quality results. I wonder if the newer models are better at handling that ambiguity.




