Hacker News | ratsimihah's comments

this is such a wall of shame haha


I bought a Mac mini M4 before openclaw to use as a music production machine. When that didn't work out, I tried setting up openclaw on it after hearing all the noise, but that didn't work out either.

I’ve found a much better use for it now. I use it as a Tailscale + SSH + tmux + Claude Code machine, which gives me an always-on Claude Code environment with persistent sessions. I ssh in from my phone using Termius and from my laptop, and I can even access my projects through Tailscale with hot reloading for the most part, no deploy needed. It’s really good and my mini isn’t idle at all.
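For anyone wanting to replicate this, here's a minimal sketch of the SSH side (the hostname, user, and session name are made up; Tailscale's MagicDNS gives the machine a stable name on your tailnet):

```
# ~/.ssh/config on the phone/laptop — hypothetical host entry
Host mini
    HostName mac-mini.your-tailnet.ts.net   # MagicDNS name on your tailnet
    User you
    RequestTTY yes
    # Attach to the persistent tmux session if it exists, otherwise create it
    RemoteCommand tmux new-session -A -s claude
```

Then `ssh mini` from anywhere on the tailnet drops you straight into the same tmux session, with Claude Code still running where you left it.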


That sounds like a complete waste tbh lol. Your mac mini m4 is now doing something any computer with 16GB of RAM can do


You're not wrong, but at least it's not idle. And I can use it for something else if the need arises.


I just run claude code on my phone, in termux


Claude has Dispatch now, which should replace the tmux + CLI for you


That's really not remotely the same thing


Yea I have to try it and see. When Claude released Remote Control I hopped on right away and it was crap, it kept disconnecting. Tailscale + SSH was much better.


This game is getting so hard. Everyone can now spam-build like Pieter Levels and Marc Lou did years ago, so solo bootstrapping feels way harder now.

I’ve taken a break from building to try to find an audience, a real problem, and real users before building anything anymore.


>a real problem

I think this is the issue with the bulk of the saas spammers I see on reddit or whatever. They are just duplicating existing things that don't have a welcoming market anymore.


Serious question, how do you market a novel (and useful) SAAS product in the face of all that spamming? Other than make sure to market where the users are of course?


I'm not sure you even can. It's turning into a market for lemons.


You pay for ads or influencers


This is basically the situation.

If you don't have an audience don't bother to build anything for anyone else, it literally doesn't matter how good it is or how much people need it, they'll never see it unless you directly spam them.

If you're a 10x builder with 0 followers on socials, sorry to say but you can get cucked by a noob with claude code and a big audience.


How long has it been? It's bound to be hard at first. But if you try hard enough you’ll learn to be comfortable with yourself and being alone. Finding hobbies does help for sure, particularly those that involve people.

You could try yoga too, starting from scratch is a great place to begin. It’s a great tool to learn to see yourself objectively and be able to let external things affect you less. (I’ve been practicing for 10ish years and teaching for 5ish) Also yoga communities are usually great because it’s mostly people trying to actively improve themselves. But do go for the dynamic style if you try it, because it builds the mind but also the body. So even if you don’t get into the spiritual stuff, you always get a good workout.

Best of luck. Hope you can find the strength to embrace the pain and not flee or hide from it, because it truly makes us stronger.


I find non-constructive feedback more tiring. People dismiss things as soon as they have the faintest trace of AI, without judging them for what they actually are.

Not saying the AI slop noise isn’t annoying though.


Why are you entitled to receiving constructive feedback on "your" project when you couldn't be bothered to write the project yourself in the first place?

If you want "feedback" of the same quality and effort as the project itself, you can always go ask your beloved AI for feedback instead of wasting precious human time.


Would you dismiss solutions to mathematical problems solved by AI?

If I’m driving an AI towards finding a solution, would it be any different for a software project?


Mathematical proof vs a web app that doesn't actually run? Not much of a contest.

Never mind the fact that AIs of the LLM-variety haven't and aren't going to find solutions to mathematical problems.


> Never mind the fact that AIs of the LLM-variety haven't and aren't going to find solutions to mathematical problems.

This is empirically wrong as of early 2026.

Since Christmas 2025, 15 Erdos problems have been moved from "open" to "solved" on erdosproblems.com, 11 of them crediting AI models. Problems #397, #728, and #729 were solved by GPT-5.2 Pro generating original arguments (not literature lookups), formalized in Lean, and verified by Terence Tao himself. Problem #1026 was solved more or less autonomously by Harmonic's Aristotle model in Lean.

At IMO 2025, three separate systems (Gemini Deep Think, an OpenAI system, and Aristotle) independently achieved gold-medal performance, solving 5 of 6 problems.

DeepSeek-Prover-V2 hits 88.9% on MiniF2F-test. Top models solve 40% of postdoc-level problems on FrontierMath, up from 2%.

Tao's own assessment as of March 2026: AI is "ready for primetime" in math and theoretical physics because it "saves more time than it wastes."

You can disagree about where this is heading, but "haven't and aren't going to" doesn't survive contact with the data.


Indeed. And adding on to this, in a slightly different realm, Donald Knuth's conjecture that he solved with Claude: https://www-cs-faculty.stanford.edu/%7Eknuth/papers/claude-c...


> solved more or less autonomously

So, not autonomously.


q.e.d.


You got really specific to help prove your point. We were generalising to projects built by AI, not web apps that don’t run, which isn’t relevant since LLMs can clearly build fully working projects.

Also how does getting into the specifics of which type of AI can solve mathematical problems help the comparison here?


You were the one who made the comparison


Man, the overwhelming majority of your comments over the past several months are you whining about AI or being extremely salty about anything remotely AI related. You bash AI content, people who use AI to make cool stuff, AI companies, people who say anything positive about said companies... I really wonder what exactly you think your negative attitude contributes to these discussions.


I think it contributes to a general pushback against AI, which some of us appreciate...


It contributes far more than yet another low effort AI-generated Show HN on top of the dozens already submitted every day.

If you think you made "cool stuff" with AI, great, enjoy it, but also please keep it to yourself, because anyone else can generate the exact same thing if they want it, you are not special, and you are actively drowning out real human effort and passion.


How is that any different than your incessant whinings drowning out real human discussion?


It’s probably what trumpers feel like. You wear a red hat one time and all of a sudden no one will even talk to you, they just talk more about you behind your back.


Please don't introduce off-topic flamebait here.


This is evil but I love it. AI-generated content needs to remain discernible, or we’ll be in even more trouble.


That’s what I loved about NYC, people were generally open-minded and easy to talk to, so I’d chat with tons of people spontaneously. Having moved back to France now, it generally feels harder and weirder, but I got used to it.


I don’t think it’s that big a red flag anymore. Most people use ai to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah it’s ai written.”


>Most people use ai to rewrite or clean up content

I think your sentence should have been "people who use ai do so to mostly rewrite or clean up content", but even then I'd question the statistical truth behind that claim.

Personally, seeing something written by AI means that the person who wrote it did so just for looks and not for substance. Claiming to be a great author requires both penmanship and communication skills, and delegating one or either of them to a large language model inherently makes you less than that.

However, when the point is just the contents of the paragraph(s) and nothing more, then I don't care who or what wrote it. An example is the results of research, because I certainly wouldn't care about the prose or effort given to write the thesis but more about the results (is this about curing cancer now and forever? If yes, no one cares if it's written with AI).

With that being said, there's still no way I get anywhere close to understanding the author behind the thoughts and opinions. I believe the way someone writes hints at the way they think and act. In that sense, using LLMs to rewrite something to make it sound more professional than how you would actually talk in appropriate contexts makes it hard for me to judge someone's character, professionalism, and mannerisms. It almost feels like they're trying to mask part of themselves. Perhaps they lack confidence in their ability to sound professional and convincing?


People like to hide behind AI so they can claim credit for its ideas. It's the same thing in job interviews.


I don't judge content for being AI written, I judge it for the content itself (just like with code).

However I do find the standard out-of-the-box style very grating. Call it faux-chummy linkedin corporate workslop style.

Why don't people give the llm a steer on style? Either based on your personal style or at least on a writer whose style you admire. That should be easier.


Because they think this is good writing. You can’t correct what you don’t have taste for. Most software engineers think that reading books means reading NYT non-fiction bestsellers.


While I agree with:

> Because they think this is good writing. You can’t correct what you don’t have taste for.

I have to disagree about:

> Most software engineers think that reading books means reading NYT non-fiction bestsellers.

There's a lot of scifi and fantasy in nerd circles, too. Douglas Adams, Terry Pratchett, Vernor Vinge, Charlie Stross, Iain M Banks, Arthur C Clarke, and so on.

But simply enjoying good writing is not enough to fully get what makes writing good. Even writing is not itself enough to get such a taste: thinking of Arthur C Clarke, I've just finished 3001, and at the end Clarke gives thanks to his editors, noting his own experience as an editor meant he held a higher regard for editors than many writers seemed to. Stross has, likewise, blogged about how writing a manuscript is only the first half of writing a book, because then you need to edit the thing.


My flow is to craft the content of the article in LLM speak, and then add to context a few of my human-written blog posts, and ask it to match my writing style. Made it to #1 on HN without a single callout for “LLM speak”!


> I don’t think it’s that big a red flag anymore. Most people use ai to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah it’s ai written.”

Unfortunately, there's a lot of people trying to content-farm with LLMs; this means that whatever style they default to, is automatically suspect of being a slice of "dead internet" rather than some new human discovery.

I won't rule out the possibility that even LLMs, let alone other AI, can help with new discoveries, but they are definitely better at writing persuasively than they are at being inventive, which means I am forced to use "looks like LLM" as proxy for both "content farm" and "propaganda which may work on me", even though some percentage of this output won't even be LLM and some percentage of what is may even be both useful and novel.


If you want to write something with AI, send me your prompt. I'd rather read what you intend for it to produce rather than what it produces. If I start to believe you regularly send me AI written text, I will stop reading it. Even at work. You'll have to call me to explain what you intended to write.


And if my prompt is a 10 page wall of text that I would otherwise take the time to have the AI organize, deduplicate, summarize, and sharpen with an index, executive summary, descriptive headers, and logical sections, are you going to actually read all of that, or just whine "TL;DR"?

It's much more efficient and intentional for the writer to put the time into doing the condensing and organizing once, and review and proofread it to make sure it's what they mean, than to just lazily spam every human they want to read it with the raw prompt, so every recipient has to pay for their own AI to perform that task like a slot machine, producing random results not reviewed and approved by the author as their intended message.

Is that really how you want Hacker News discussions and your work email to be, walls of unorganized unfiltered text prompts nobody including yourself wants to take the time to read? Then step aside, hold my beer!

Or do you prefer I should call you on the phone and ramble on for hours in an unedited meandering stream of thought about what I intended to write?


Yeah but it's not. This is a complete contrivance and you're just making shit up. The prompt is much shorter than the output and you are concealing that fact. Why?

Github repo or it didn't happen. Let's go.


[flagged]


It’s certainly more interesting than whatever the AI would turn it into.


tl;dr


Even though I use LLMs for code, I just can't read LLM written text, I kind of hate the style, it reminds me too much of LinkedIn.


Very high chance someone that’s using Claude to write code is also using Claude to write a post from some notes. That goes beyond rewriting and cleaning up.


I use Claude Code quite a bit (one of my former interns noted that I crossed 1.8 Million lines of code submitted last year, which is... um... concerning), but I still steadfastly refuse to use AI to generate written content. There are multiple purposes for writing documents, but the most critical is the forming of coherent, comprehensible thinking. The act of putting it on paper is what crystallizes the thinking.

However, I use Claude for a few things:

1. Research buddy, having conversations about technical approaches, surveying the research landscape.

2. Document clarity and consistency evaluator. I don't take edits, but I do take notes.

3. Spelling/grammar checker. It's better at this than regular spellcheck, due to its handling of words introduced in a document (e.g., proper names) and its understanding of various writing styles (e.g., comma inside or outside of quotes, one space or two after a period?)

Every time I get into a one hour meeting to see a messy, unclear, almost certainly heavily AI generated document being presented to 12 people, I spend at least thirty seconds reminding the team that 2-3 hours saved using AI to write has cost 11+ person-hours of time having others read and discuss unclear thoughts.

I will note that some folks actually put in the time to guide AI sufficiently to write meaningfully instructive documents. The part that people miss is that the clarity of thinking, not the word count, is what is required.


Well, real humans may read it though. Personally I much prefer real humans write real articles than all this AI generated spam-slop. On youtube this is especially annoying - they mix in real videos with fake ones. I see this when I watch animal videos - some animal behaviour is taken from older videos, then AI fake is added. My own policy is that I do not watch anything ever again from people who lie to the audience that way so I had to begin to censor away such lying channels. I'd apply the same rationale to blog authors (but I am not 100% certain it is actually AI generated; I just mention this as a safety guard).


ai;dr

If your "content" smells like AI, I'm going to use _my_ AI to condense the content for me. I'm not wasting my time on overly verbose AI "cleaned" content.

Write like a human, have a blog with an RSS feed and I'll most likely subscribe to it.


> I don’t think it’s that big a red flag anymore.

It is to me, because it indicates the author didn't care about the topic. The only thing they cared about is to write an "insightful" article about using llms. Hence this whole thing is basically linked-in resume improvement slop.

Not worth interacting with, imo

Also, it's not insightful whatsoever. It's basically a retelling of other articles around the time Claude code was released to the public (March-August 2025)


The main issue with evaluating content for what it is is how extremely asymmetric that process has become.

Slop looks reasonable on the surface, and requires orders of magnitude more effort to evaluate than to produce. It’s produced once, but the process has to be repeated for every single reader.

Disregarding content that smells like AI becomes an extremely tempting early filtering mechanism to separate signal from noise - the reader’s time is valuable.


I think as humans it's very hard to abstract content from its form. So when the form is always the same boring, generic AI slop, it's really not helping the content.


And maybe writing an article or keynote slides is one of the few places we can still exercise some human creativity, especially when the core skill (programming) is almost completely in the hands of LLMs already.


Love this! Had the same idea as Mockingjay for emergency situations where you don’t have time to upload, e.g. robberies, attacks, etc… will give it a try!


Let me know what you think!


My first 5 years or so of solo bootstrapping were this. Then you learn that if you want to make money you have to prioritise the right things and not the fun things.


I'm at this stage. We have a good product with a solid architecture but only a few paying clients due to a complete lack of marketing. So I'm now doing the unfun things!


If you have had zero marketing, how do you know what you have is a good product?


Because we have a few paying clients who seem pretty happy. We have upsold to some clients and have a couple more leads in the pipeline. We are good at stopping bots and we have managed to block most solvers, which puts us (temporarily) ahead of some very big players in the bot mitigation sector.

If we can do this with nearly zero marketing, it stands to reason that some well thought out marketing would probably work.


Not really. Even Cloudflare's free bot detection is better.

