I'm currently in repos where the context window required is so large that the output is almost always "wrong" for the problem at hand. Quite a few people at my company burn through tokens this way, and it certainly isn't providing value to the company.
As always, improving accessibility for humans makes automation more effective. If the humans need to remember a PhD's worth of source code/documentation to contribute effectively, your codebase stinks.
People at my company have started writing docs specifically for Claude. They're quite useful for me too, but it's kinda disappointing they never wrote these docs for their colleagues.
As someone who has written many docs, it's because 99% of people won't read it (rightfully so if it's verbose). You can turn that doc into a skill in a repo and Claude will read it every time it's needed.
I recently saw this with the Logseq API - the published API was an auto-generated stub. So I tried to grep the source code for the function and found detailed documentation written for Claude. I guess one benefit of all of this is that it's making people actually document things and maybe plan a little bit before implementing.
The LLM hype train has me reflecting on what a spoiled existence working in a ‘proper’ language provides though…
React devs, JS devs, front-end devs working on large sites and frameworks might be pulling tens of files into context. What an OCaml dev can express in a 5-line union type can look very different in languages that are less terse and token-efficient.
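To make that concrete, here's a rough sketch of the kind of 5-line variant I mean (the domain and names are made up purely for illustration):

    (* hypothetical example: four states, their payloads, and an exhaustive
       shape for pattern matching, all in a handful of lines *)
    type payment_state =
      | Pending  of { invoice_id : string }
      | Settled  of { invoice_id : string; amount_cents : int }
      | Refunded of { invoice_id : string; reason : string }
      | Failed   of { invoice_id : string; error : string }

The same concept in a typical class-hierarchy codebase tends to be spread across several files of interfaces and subclasses, which is exactly what ends up dragged into the context window.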
Is anyone really reviewing code anymore though? It sounds like you are, but where I work it's pretty much just scan the PR as a symbolic gesture and then hit approve. There's too much to review, too frequently.
I'm in a large enterprise context--you have to use human reviewers if you don't want to end up like GitHub's status page. So much context exists outside of the code that the bots either aren't given it or it's far too large for current context windows.
A lot of people thought the same thing with everything going from analog -> digital. Or heck, even learning an instrument when MIDI was first introduced.
Even before generative AI, there is a long-going debate in audio circles around simulated guitar amplifiers. The truth is, the simulations of them have gotten so insanely good that now one could simply purchase an all-in-one pedalboard and have basically all of guitar history at your toes.
My rule of thumb is this: "does this tool I'm using take away from the authenticity of my performance or songwriting?" Example: I am very keen on performing vocals and guitar at the same time, I don't have an expensive studio setup, and my office has background noise. I use these tools, and yes even some open-source AI ones, to 1) remove background noise from the individual tracks and 2) do a final master against a recording I want to target (using something like Matchering or similar [0]). It still sounds like me, my voice isn't perfect, my beat isn't consistent, but it sounds like I rented some studio space. So for me it was a cost-saving measure.
>> one could simply purchase an all-in-one pedalboard and have basically all of guitar history at your toes
And this is actually a problem. Great art usually comes from constraints, real or artificial. These things are a lot of fun to tinker with (a really fun hobby), but one amp, one guitar, and a small number of effects pedals will probably lead to you actually making more and better stuff.
I have an all-in-one amp / pedalboard and it's just more practical, even though all I do is just pick an amp, plug in my guitar and play. They take up less space and cost less money in the long run if you actually do want to use many pedals.
I get what you're saying, but in this specific case I think the all-in-ones win for most people.
This was definitely true for me, which is why, at my later age, I write everything acoustically and make sure the song is "good" before going in. If I want a specific effect, I then google what pedals were used in a particular song or by a particular artist, try to recreate the chain, and then tinker on top of that.
Ultimately I spent so much of my time worrying about "what crazy expensive equipment should I buy" when I was younger and more into this stuff, and I should have simply just played my shitty instruments and recorded on my shitty equipment. That's on me, but I also find it empowering as an artist that I can clean up my recording in the way that replaces my need for expensive equipment while maintaining (in my humble opinion) a sense of authenticity of my performance. I agree there may be too many knobs, but finding the knobs that I want has never been easier and I would rather live in the now than in the past.
> A lot of people thought the same thing with everything going from analog -> digital.
A lot of people were right. Music gear swung heavily back toward analog after the initial analog-to-digital transition. I started out using computers exclusively. When I purchased my first analog synth, I couldn't believe how much better it sounded than my VSTs. It's hard to quantify exactly why, but my ears lit up the second I started using it.
In terms of amp modeling software, some of it is indeed very impressive, but it tends to fall apart when you need to tweak parameters. I assume this has to do with the capture process. If you are happy to use stock patches, though, it's basically an amp replacement.
Not to be "that guy that just says to use LLMs", but describing how you want these things to work on your machine to something like Claude, or heck even Google AI mode without logging in with an account, lets you spec out your ideal "home server as a docker-compose.yml file", and for me it did a damn fine job. I had done all of this manually on a previous server; for the new server I simply said it was a Fedora Linux box, with these hard drives, these containers, and these file locations, etc. It worked on the first try.
It's not that I didn't want to learn this myself, but with children and gardening on top of super busy work I have so little time that I couldn't simply google everything. I did know enough about it beforehand to provide a general idea of what I wanted, so YMMV.
I'm at a large enterprise outfit, and "shoving things in your face" has been a problem with large software suites for a long time, long before the AI craze. I keep telling my skip-level leadership that we need more User-Experience "mob goons" with authority across product domains to (metaphorically) beat the living daylights out of bad "PM-brained" ideas.
My work 64GB M1 Max MacBook Pro is consistently out of memory. (To be fair, my $LARGE_ENTERPRISE_EMPLOYER reserves about half of it for very bad Big Brother daemons and applications I have no control over.)
I have a 128GB M3 Max from my employer. Due to some IT oversight, I was able to use it for a few months without the corporate "security" crapware. I never even noticed this machine had a fan before the "security theatre" corporate rootkits were installed.
I have only purchased Toyota vehicles (currently in the market for an EV) and it baffles me that Dodge created a Charger in EV form and Toyota hasn’t made even an EV Corolla or Camry.
> it baffles me that Dodge created a Charger in EV form and Toyota hasn’t made even an EV Corolla or Camry
Dodge's Charger EV has been a sales flop [1] and pretty much universally panned by critics as something that nobody asked for.
The Camry and Corolla were the best-selling sedan and compact sedan of 2025 [2]. I think this shows that Toyota is listening to what Corolla and Camry drivers want - something inexpensive and reliable to get them to and from work every day without issue.
Some day Toyota will make an EV sedan. I think their 2026 bZ Woodland [3] shows that they are starting to figure out how to make compelling EVs. And Toyota's EV strategy seems pretty reasonable to me overall - their delay in developing a decent EV doesn't seem to put them under threat from any legacy automaker. They are being threatened by Chinese EV makers, but so is Tesla - so even a huge head start likely wouldn't have benefited Toyota much in that regard.
I don't feel like this really answers the question though, right? At least not at face value.
I could see maintenance burden being a potential point, meaning that one would be "pushed" to update the system between releases more often than with something else.
Typically you want stability and predictability in a server. A platform that has a long support lifecycle is often more attractive than one with a short lifecycle.
If you can stay on v12.x for 10 years versus having to upgrade yearly to maintain support, that's ideal. 12.x should always behave the same way with your app, whereas every major version upgrade may have breaking changes.
Servers don’t need to change, typically. They’re not chasing those quick updates that we expect on desktops.
Yeah, and that's the take I expected to hear based on what was said.
However, for something like ARM and the use case this particular device may have, in reality you would _want_ (my opinion) to be on a more rolling-release distro to pick up the updates that make your system perform better.
I'd take a similar stance for devices that are built in a homelab for running LLMs.
Depends on what you're building an ARM system for. There are proper ARM servers out there; server work isn't the exclusive domain of x86, after all.
For homelabs, that's out the window. Do whatever you want/fits your needs best. This isn't the place where you'd likely find highly available networks, clustered or highly available services, UPSes with battery banks, et al.
Fedora is more maintenance due to its frequent release cycle, but it's perfectly good as a server OS. I've used it many times, and friends use it too.
You can't fall behind on the release cycle, though, because their package repos drop old releases very quickly and you're left stranded.
A friend recently converted his Fedora servers to RHEL 10 because he has kids now and just doesn't have the time for the release cycle. RHEL, Debian, Alma, or Rocky offer a lot more stability and less maintenance for people who have a life.
I think it's highly circumstantial. For example, my personal servers run a lot of FreeBSD and even though I could stay on major releases for a rather long time, I usually upgrade almost as soon as new releases are available.
For servers at work, I tried running Fedora. The idea was that it would be easier to have small, frequent updates rather than large, infrequent updates.
Didn't work. App developers never had enough time to port their stuff to new releases of the underpinning software, so we frequently had servers on unsupported OS versions.
Gave up and switched to Rocky Linux. We're in the process of upgrading the Rocky 8-based stuff to Rocky 9. Rocky 9 was released in 2022.