Hacker News | jexe's comments

This tweet was from 3 days ago.

Mismanaged comms? Yes

HN front page effect? Prob not

(could be Reddit frontpage effect or related tho)


I saw the tweet about the Reddit post about 2 days ago. It probably was X.

There are a lot of comments on that issue demanding Anthropic give the guy the money back, I assume they saw the writing on the wall.

> I don't like it, but can't do much about it.

Is the culture really such that you can't escalate an obvious, fairly minor mistake that is turning into disastrous PR?

That would explain a lot of recent Anthropic takes actually.


Tech companies have too many layers for anything to happen. This is partly by design to slow down this exact thing.

Not all tech companies are like this, though too many are.

Such a culture has become common in big tech.

I'm concerned that there's no real way to "opt out" of an AI future realistically. Is this something that people are seriously thinking they'll be able to do and successfully stay gainfully employed and contributing to the world?


> Is this something that people are seriously thinking they'll be able to do and successfully stay gainfully employed and contributing to the world?

No. I resisted for a bit but have started using it at work. Mostly because I believe usage is now being monitored. I'm in a very high-scale engineering environment involving both greenfield and massive brownfield codebases and the experience is largely a net loss in productivity. For me and some others who I've spoken to in my org, opting in is a theater that we're required to engage in to keep employment and not a genuine evolution of our craft.

These tools struggle with context once you get deep into a codebase with many, many millions of lines of code and sprawling dependencies. Even for isolated Python scripts or smaller, supporting .NET apps, the time spent correcting subtle bugs or bullshit, or just verifying the bullshit, often exceeds the time it would take to have written it from scratch.

Regardless, what I've observed is that these tools do nothing for the actual bottlenecks of software engineering: requirements gathering (am I writing the right thing?) and verification (does it work without side effects?). Because LLMs are great at generating text, they're actively exacerbating these issues by flooding our process with plausible looking noise.


Agreed. I think the starting comparison actually works here. It's a bit like the automobile. The advice of "just don't" doesn't work for cars. It takes a deliberate effort at every scale of society to accomplish; it's not something an individual can just do and succeed at. An American can't just not have a car the same way someone from the Netherlands might be able to.


Over hundreds of hours of actively using AI for basically every area of my life, it has just never actually achieved anything besides giving me the feeling of productivity.

Ideas are mediocre. Plans are arbitrary. Research is untrustworthy. But telling it "generate me 100 ideas for X" feels really productive.

I think a version of me with no access to AI will not just stay competitive, but even outcompete the version of me with unlimited access to AI.


There isn't. Just like with climate change and governments, we're all effectively in one big boat together. You can stop paddling towards the waterfall, but you can't stop everyone else from paddling and you can't get off the boat.


I'm not an OAI fanboy by a long shot - but I'd view lots of experiments that didn't work out as a healthy thing, especially for a company trying to find footing in a new industry.


It's not an experiment if you publicly showcase it and create tens of millions of dollars' worth of marketing materials around it.

Company "experiments" are usually hush-hush, not blasted on every corporate media channel as a means to boost your company's holdings.


Right, depends on your use cases. I was looking forward to the model as an upgrade to 2.5 Flash, but when you're processing hundreds of millions of tokens a day (not hard to do if you're dealing in documents or emails with a few users), the economics fall apart.
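The "economics fall apart" point is easy to see with a back-of-envelope calculation. The per-million-token prices below are purely illustrative placeholders I've made up for the sketch, not any provider's actual rates:

```python
# Back-of-envelope LLM token cost model. The prices are hypothetical
# placeholders, not real Gemini (or any provider's) pricing.
PRICE_PER_M_INPUT = 0.30   # assumed $/1M input tokens
PRICE_PER_M_OUTPUT = 2.50  # assumed $/1M output tokens

def daily_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost for one day's token volume at the assumed rates."""
    return ((input_tokens / 1e6) * PRICE_PER_M_INPUT
            + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT)

# e.g. 200M input tokens/day from bulk document processing, 10M output:
cost = daily_cost(200_000_000, 10_000_000)
# 200 * 0.30 + 10 * 2.50 = $85/day, i.e. roughly $2,500+/month
```

Even at sub-dollar per-million input rates, a few hundred million tokens a day from a handful of document-heavy users adds up to a real monthly bill, which is why a small per-token price bump matters at that volume.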


Half of the founders will say never quit. The other half will say you have to fail fast.

Choose your gurus wisely.


Context is everything. Ultimately you have to use your own judgement about what makes sense because no one can see all ends. Generalized advice from someone without skin in the game is at best a weak datapoint for any significant life decision.

That said, let me give mine. Persistence generally pays more dividends than constantly chasing quick wins. The modern information economy has cheapened success and skewed perceptions of how much effort and luck is behind outlier winners. The success I've had in startups was not quick, was not a straight line, and honestly probably didn't net me as much as if I had joined Google or Facebook early in my career, but the benefits in terms of broad skills and success that I can credibly claim on a personal level are actually more valuable to me than a larger number in my bank account.


Reading an AI-written blog post (or Reddit post, etc.) just signals that the author doesn't actually care that much about the subject.. which makes me care less too.


I read it as rolling with her own joke and lightening the load on the B+ rating (obviously also expressed as a loving ribbing given the context around it)


This. The man was a saint till the very end.


That's a lot to invest in someone at a large comparative loss, in a world where employees don't last more than a couple years before job hopping.


I agree that high turnover is a real constraint. That’s why the answer isn’t “10 years of apprenticeship” but designing scaffolds that combine learning with contribution in a shorter timeframe. Things like short rotations, micro-credentials, or mentorship stipends let juniors add value while they’re still on the job. Even if they leave after a few years, the investment isn’t wasted — both sides still capture meaningful returns.


Maybe this is unpopular, but what if 5/10-year long contracts started getting traction?

I guess you'd need to trust the company, which is hard to come by.


Interesting thought — long-term contracts could indeed align incentives for growth and stability. The challenge, as you note, is trust: few employees or companies are willing to bind themselves for 5–10 years in today’s fluid market.

That’s why governance frameworks (whether in labor or in AI) matter: they provide external guarantees of trust where bilateral promises may not hold.


nobody in my life feeds me as many positive messages as Claude Code. It's as if my dog could talk to me. I just hope nobody takes this simple pleasure away

