Who is "they"? Yes the US Government owns .gov. No it isn't owned by the Department of War/Department of Defense/War Department. It's owned by the Department of Homeland Security.
It costs millions because the entire point of this admin is to spend public money on their friends' businesses.
It's literal mafia strategy, because that's what Trump has always done. Large, nebulous contracts where it's hard to demonstrate that the sum paid to X contractor was actually used to pay for materials and labor rather than just pocketed.
That's why everyone connected to the admin is picking up billions of dollars in record time.
Things being done poorly and for a lot of money is the point.
There are plenty of reasons to set a unique identifier before database save, or to want a unique identifier that doesn't have a 1-to-1 relationship with your object.
For example, in the idempotent Kafka consumer pattern we set a unique ID in the header of every Kafka message at publish time. Our consumers then do a quick check of that ID against their data store to see whether they have processed the message before. This way there is no impact if a consumer sees the same message twice, which gives us more flexibility during rebalancing events or when replaying old offsets.
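A minimal sketch of the pattern, with an in-memory set standing in for the consumer's durable data store (in production the dedup record and the side effects would be committed in the same transaction; all names here are illustrative, not a real Kafka client API):

```python
import uuid

# Stand-in for the consumer's durable store of processed message IDs.
processed_ids = set()

def publish(payload):
    """Attach a unique ID header at publish time."""
    return {"headers": {"message_id": str(uuid.uuid4())}, "payload": payload}

def consume(message, handler):
    """Skip messages whose ID we've already recorded, so redelivery is a no-op."""
    msg_id = message["headers"]["message_id"]
    if msg_id in processed_ids:
        return False  # duplicate: already handled
    handler(message["payload"])
    processed_ids.add(msg_id)
    return True

# Redelivering the same message (e.g. after a rebalance) has no effect:
results = []
msg = publish({"order": 42})
consume(msg, results.append)
consume(msg, results.append)  # duplicate delivery, skipped
# results == [{"order": 42}]
```

The key property is that the ID is assigned once at publish time, so every redelivery of the same message carries the same ID and hits the dedup check.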
Historically, last-generation hardware is sold at steep discounts compared to original MSRP. The insanity is that we live in a world where manufacturing 9-year-old hardware has increasing manufacturing and shipping costs, which was never the case before. Another layer of insanity is that Nintendo did the analysis and decided it will still make sales at this increased price.
The entire activity of going to buy a few things for around the house during the weekend is something that is performed by a consumer. This guy is exactly talking about you, but you aren't seeing it because your own internal identity isn't Consumer it's something like "guy who wants a chill weekend." However in the marketplace your identity is consumer.
I think we're just arguing semantics at this point. From an economic definition you, me, and everyone else who has to exchange money for goods and services is a consumer. I'm referring to consumer culture/consumerism, which wikipedia defines in a cleaner way than I can. Buying groceries period makes you a consumer, having a weird sense of superiority because you go to a specific grocery store is consumerism.
> In contemporary consumer society, the purchase and the consumption of products have evolved beyond the mere satisfaction of basic human needs,[1] transforming into an activity that is not only economic but also cultural, social, and even identity-forming.
I'm specifically trying to make the point that identity is not just a personal experience -- the experience of how you see yourself. Identity is also a label that can be placed upon you.
Parent Comment was trying to reject the personal identity of consumer while their words actively affirmed their identity as a consumer in the marketplace. You can reject the personal identity of a consumer all you want, but businesses will still judge you by your actions (how much you spend).
Parent Comment was responding to a person talking about self-identity and not the mere fact that everyone buys things. "This guy is exactly talking about you" is completely wrong, OP was not talking about every person ever.
Isn't 4-6 weeks about normal conditions? It feels like a large amount of slack for a modern JIT logistical system. Any more enters strategic-reserves territory.
I think it's possible (10-15%) that the AI bubble pops and we all live without 50M token/day OpenClaw installs and without running Opus to do things that should have been done by a shell script, to the point that it causes a dip in total compute demand. And I think it's likely (75%, conditional on the bubble pop causing a dip in compute demand) that this dip extends longer than the median lifespan of the hardware currently being installed in datacenters.
Of course in 20 years we'll be using more compute than today (99% likely).
EDIT: Of course cryptocurrencies provide a floor on compute pricing.
The PC is the last major open platform. While other platforms like Android are becoming less open, the PC in general is becoming more open than it's been in a long time: heavy macOS/Android/iOS competition is creating a focus on open standards, and all-time-high Linux support gives people a place to land and tinker/hack to their heart's content.
I think we will see an abandonment of consumer-grade PC components, and individuals will either be pushed towards closed hardware like PlayStations, MacBooks, and Android devices, or towards server-grade components. I already have a home server rack, and would recommend it for other people.
> I already have a home server rack, and would recommend it for other people.
I just want to warn people who haven't heard server-grade hardware in person before: this is only for people who can put a server rack somewhere unpopulated like a garage or basement. Servers will make you think "wow, leaf blowers sure are quiet". They are not suitable for apartment dwellers such as myself. When I was setting up my 1U before shipping it off to a colo, I wrote scripts and had detailed plans of everything I needed to run, so I could minimize the time it spent making my ears bleed.
The noise problem is pretty easy to mitigate by choosing 2U servers instead of 1U. The latter are forced by the form factor to use smaller, higher speed fans.
A bigger issue for enterprise hardware is that it's optimized for performance per watt under load, not idle power consumption. Running a mostly-idle rack server 24/7 can result in a pretty sizable electric bill. This also depends heavily on the model: some will idle at ~50 watts, others at ~300, but both are significantly higher than a Raspberry Pi or an old laptop, which for personal use will generally do the job.
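To put rough numbers on that electric bill, here's the arithmetic at an assumed $0.15/kWh (plug in your own tariff; the wattages are the ballpark figures from above, not measurements):

```python
# Rough annual cost of running a box 24/7 at a given idle draw.
RATE_USD_PER_KWH = 0.15      # assumed example rate
HOURS_PER_YEAR = 24 * 365

def annual_cost(idle_watts):
    """Annual electricity cost in USD for a constant draw."""
    return idle_watts / 1000 * HOURS_PER_YEAR * RATE_USD_PER_KWH

for watts in (6, 50, 300):   # frugal desktop, modest server, hungry server
    print(f"{watts:>4} W idle -> ${annual_cost(watts):.0f}/year")
```

At these rates a 300 W idler costs roughly $400/year just to sit there, while a 6 W business desktop is under $10.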
Business class desktops are also a good alternative here. Many models have pretty reasonable idle power consumption (check this for yourself, I've seen 6W but also 60W) and then you get a couple of drive bays and PCIe slots and expandable RAM which you don't get from a Raspberry Pi.
These days, pretty much the only thing that makes sense is a mini PC. AMD laptop chips generally trade blows with Apple stuff on power efficiency when you thrash them, and you get a surprisingly capable machine for not very much money.
It's really not worth it to run old hardware 24/7 unless it's making money. Buying a new machine of equivalent capability is (normally) pretty cheap, and it doesn't take very long for the power savings to pay for themselves.
They can be had with fairly respectable specs too. Certainly enough to play around with small local models.
"When you thrash them" is kind of the issue. There are ten year old business desktops with a <10W idle power consumption. If your use for it is to have something to rsync files to and host your personal website and the like, even old hardware is going to average 99% idle. There is no meaningful power savings from newer hardware unless you're consistently putting it under significant load.
Some of the newer hardware is actually worse because the idle power consumption of PCs since around 2010 is determined in significant part by the low-load efficiency of the power supply. Brand new machines with the wrong power supply can use several times as much power at idle as ten year old machines with the right power supply. Annoyingly, power supply efficiency at idle is rarely documented so the only thing to do is measure it.
Kind of a random aside, but I never realized how obnoxious LEDs were until I got a studio apartment and started sleeping in the same room as my homelab / workstation / networking hardware. Electrical tape saved me, but wow. You sure can produce a lot of light with a milliwatt of electricity :)
(And yes, my workstation has a clear case and LED RAM. Yes, I'm an idiot. Whenever Windows applies an update late at night, I wake up if it turns back on. I don't know what I was thinking when I built that thing, but never again.)
I like to put a little red wax over LEDs (at least, ones that I don’t touch). That way you can still see them, but they are dimmer, and the red tint makes the light less annoying at night.
I always thought it would be low-grade hilarious to record a fairly long video of the unboxing and assembly of a ridiculously elaborate in-case LED setup, only to reveal with a straight face and at the absolute last minute that the case in question is entirely opaque.
Even worse are phone chargers, intended to be used next to your bed, that light up like a Christmas tree when running. Black electrical tape is great for the worst of it, but you still need a few things available to tell you the operational status, if only they'd dim them a bit.
One of the best investments I ever made was in ordering a set of dimming LED stickers from Lightdims.com
I went through my whole house adding them to power bars, routers, toothbrushes, smoke detectors, etc. I even found the exact location to put one over the motion detector on my Ecobee thermostats so they wouldn't light up every time you walked by. I swear my house is about 1000% calmer than it would be without them.
Very nice, thanks! I was thinking of gluing baking paper over the most annoying LEDs as diffusers but these are much less of a kludge.
For anyone else seeing this, they ship to most countries for 99 cents so it's not the usual $29.95 for Fedex shipping on a packet of stickers. I'd seen similar ones on Aliexpress in the past but they're just light-blocking dots, not dimming ones.
You're right, I may have significantly over-estimated the percentage of people on hn that have dealt with server hardware. It's expensive, big, loud, power hungry and temperature sensitive.
You can buy server boards that don’t require loud fans. If you’re buying used server gear from a datacenter then it will be like what you said.
I have a 4U NAS with a supermicro board and an i3 chip with 6 WD Red NAS drives and it’s very quiet. The chassis came without fans so I installed the brand I like.
Tell me you've never owned a Supermicro board without telling me. They support regular 80mm/90mm Noctuas and function just fine. There are even specific Supermicro mounts for them.
Yeah, it certainly wasn't the quietest choice for the form factor, but the fact remains: server-grade hardware isn't optimized for noise. It's meant to run in datacenters, not living rooms, so noise was never a design concern. A nice thing about consumer-grade hardware is that it's optimized for both noise and power consumption, because those devices are designed to be around humans. So I certainly hope consumer-grade hardware survives.
In my first job we worked in a room full of 4Us and it was always refreshing when we powered them all down for the weekend. So quiet. It’s almost like there was a reason why consumer-grade hardware existed.
You can make those rackmounted servers as loud or as quiet as you like. For home, optimize quiet (and low power consumption).
Even though my server rack is in the garage I try to keep it quiet. A couple of them are fanless Atom-based and others have fans but they are built to be quiet. If you need hardware that generates a lot of heat, go with 4U for large fans that spin slow, thus low noise.
The "wow, leafblowers sure are quiet" happens when you stuff a lot of heat generation into a 1U chassis that then requires lots of tiny fans running at full speed. Those you don't want at home! But it is easy to avoid. Data centers do this to maximize density, but that's unlikely to matter at home.
And do you need a full-on enterprise-grade server? Given the choice between a 1U server whose fans even at minimal utilisation can still be heard three doors away and something with a low-power/laptop-grade CPU that does the same job silently and with little power use, I'll take the latter.
Not quite, this thread was about encouraging people with interest in open hardware to have a home server rack.
That said, it's not clear here what people in the thread even mean by "enterprise grade". Some of the commenters seem to assume "enterprise grade" is defined solely by how loud it is. That's not the definition.
Enterprise grade simply means top quality hardware such as Supermicro boards, ECC RAM and quality components built for 24x7 use for many years. Some of it is enterprise-y features like remote management which is fun, but hardly necessary at home. I do admit it is fun to access the remote management console of my rack servers from my home office, although clearly I don't need it since I can walk 30 seconds down to the garage and access the console directly. But of course a home rack is something to be done for fun, so fun counts as a feature.
> Some of the commenters seem to assume "enterprise grade" is defined solely by how loud it is. That's not the definition.
It's not?
Isn't it like enterprise grade software which is anything that costs over $50,000, sprays its files all over the filesystem, takes half a dozen full-time IT admins round the clock to keep it running, and has more bugs and 0day than an undergraduate student assignment?
Supermicro sells Atom-based SKUs with enterprise features like a BMC+IPMI, 10Gb SFP+ ports, ECC memory, SFF-8087 ports, chassis intrusion detection, etc.
I sit next to my 4U server with all enterprise components apart from fans - these are consumer grade.
I had to mod the chassis slightly (with just pliers, tape and random inserts) to fit these fans in there, and add fans in front to push the air in. The PSU that came with it was obnoxiously loud, but thankfully, Supermicro has a quiet version that I can't even hear. Even if SM didn't have this PSU, I could have easily modified the PSU and fit some noctuas in there without any issue or safety concerns - like I did with my enterprise grade Mikrotik switch that also had obnoxious fans by default.
I even have an enterprise grade UPS that is dead silent when it's not running on battery power (I swapped the fans there too).
I essentially try to buy enterprise gear whenever possible. Not only is it usually much better than the consumer alternative, it's frequently much cheaper too because of the second-hand market. Before AI sucked the soul out of the hardware market in general, you could have bought enterprise SSDs whose life expectancy (TBW) was measured in petabytes and whose MTBF was practically never, for half the price of a top consumer SSD with TBW measured in tens of TB and an MTBF of yesterday.
And the entire rack is just slightly louder than the PC I was using.
The only consumer-grade computers at my home are my MacBook and my phone.
Enterprise SSDs are all that. Just make sure you keep them powered up: for data retention without power, the requirement is 3 months for enterprise vs 1 year for consumer grade.
I built PCs for a number of years and then shifted to some combination of RPis, MacBooks, and (maybe) Mac Minis. It was a (long) phase that involved quite a bit of money and frequent frustration, and I'm almost certainly not going to do it again.
I had exactly this problem, 1U server that sounded like a 747 taking off downwind. I solved it by getting a mini-PC that had more processing power than the eBayed 1U server (I just looked up what was available in terms of CPUs and got the best bang per buck, an 8C16T AMD CPU) and that runs essentially silent except when it's under load - they're designed for low-power/silent operation. If you're running your server at 100% load 24/7 then this isn't for you, but for home "server" use it was ideal.
This. At one company we ran out of space in the server room, so the excess machine temporarily landed next to my desk. Dear god. Noise cancelling headphones couldn't cope with the noise.
I had to provision a 1U server in grad school. Turning that thing on in the office was a joke. It was completely impossible to work if you were anywhere in that part of the office while it was on.
Reminds me of when I as a kid got one of those Delta 7000rpm fan powered cpu coolers, my mom promptly asked what it would cost to make that noise (that was heard in the entire apartment) go away. Got a Zalman (back when they were great) and everything was good.
It was a learning experience, and I think everyone should experience that kind of industrial noise at least once to appreciate how quiet consumer hardware is.
In the whole history of computing, the PC is the only platform where buying a computer means a crazy number of options and configuration mixes to choose from, with the expectation that it'll all just work, warranty support included. You can run any OS of your choice on it, and that's also a reasonable expectation.
On any other platform (Sun, Be, Amiga, NeXT, Apple) it was always buying from one company, from its list of products. And even running a different version of the OS meant the warranty didn't cover it.
I came back to this comment 12+ hrs later hoping to find someone make a great argument for some platform in the 70s that I didn't know enough about, or maybe a modern open hardware movement that is building niche support.
> ... or they are pushed towards server grade components. I already have home sever rack, and would recommend it for other people.
An actual rack with noisy 1U or 2U servers may be a bit overkill but on the plus side there's a guaranteed endless supply of such used servers.
Now there's a happy middle ground: used workstations with ECC memory, that you then use as servers.
People would be really wise not to underestimate what, for example, a 12-year-old dual-Xeon with 14 cores each (56 threads in total) can do. And such a complete workstation can basically be found for less than what it takes to fill my car's gas tank (granted, it's got a big tank and it's a fancy car whose manufacturer recommends only using 98+ octane).
A single-Xeon workstation with a shitload of memory in a tower form factor is basically silent. Mine is. Dead quiet, sitting next to the vacuum cleaner and the cat's foot in a tiny room. I use it as a headless server.
And that's with the default PSU and fans. There are, of course, people modding these with adapters for regular consumer PSUs and then putting ultra-quiet PSUs in those. Same with Noctua fans etc.
And as for the usual complaint, "but a server that is on 24/7 consumes too much electricity": I only turn on my servers at home when I begin to work. I don't need them to be on 24/7.
So yeah: "Server CPU + ECC" doesn't imply noise. And "Server CPU + ECC" doesn't imply it has to be on 24/7 either.
Assuming this trend continues, I think people are going to start re-using older hardware rather than turning to server-grade hardware (which is often not convenient for the average residential situation).
At least, that's what I hope happens. What will probably happen is people will continue to migrate away from the PC platform and towards closed platforms for the convenience, if history is any indication.
That's what I've been doing for years. I buy (or get for free) enterprise PCs coming off lifecycle at surplus sales. Nothing I do at home needs a cutting-edge CPU. Unless you're a hard-core gamer or serious hobbyist/tinkerer, a 5-year-old or even older PC running Linux is perfectly adequate.
I think this is already happening, sort of. At least, people are hanging onto their older-but-not-yet-old components for much longer than they used to. I recently tried to build a NAS from eBay parts, and I was surprised to find that the newest stuff affordably available was 6th/7th generation Intel Core parts (retailed 2016/2017). I think people are trying to offload these CPUs in particular because they can't run an unmodified Windows 11 installation (no firmware TPM 2.0 implementation, and the corresponding consumer motherboards typically didn't have a discrete TPM module, either, if they had an LPC bus connector at all). Very little (reasonably-priced) availability of similar-aged Ryzen CPUs (which have firmware TPM support) or newer Intel CPUs.
Why would most people need a home server rack? That's a lot of noise, space, and electrical usage. For what most people would need a home "server" for a NUC PC or Mac Mini would do the job.
Ziply Fiber is offering 50 Gbps home internet connections in some US locations. You cannot utilize that type of speed with a Mac Mini. Even the modest 8-10 Gbps connections offered by T-Mobile and Google probably require more.
VPNs. If you have a NAS and need high-speed access to your home files (dumping your Apple ProRes RAW rushes off your external SSD so you can keep shooting your video, for instance), that kind of bandwidth pays for itself.
> You cannot utilize that type of speed with a Mac Mini.
Mostly because the base Mini has Thunderbolt 4 which maxes out at 40Gbps. Anything with a PCIe 4.0 x16 slot will take a 100Gbps NIC. 100Gbps is around 10GBps (8 bits per byte plus encapsulation overhead). Desktop CPUs can do AES-GCM at 2.5GBps+ per core and have up to 16 cores and around 50GBps of memory bandwidth (dual channel DDR4-3200), so the NIC still seems like the bottleneck.
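The arithmetic above, as a rough sketch using the same ballpark figures (none of these are measured values, and the effective NIC rate after encapsulation overhead is the comment's ~10 GB/s estimate):

```python
# Back-of-the-envelope check of where the bottleneck sits for a 100 Gbps NIC.
nic_raw_gbyte_s = 100 / 8          # 12.5 GB/s before encapsulation overhead
nic_effective_gbyte_s = 10         # rough effective rate after overhead

aes_gcm_gbyte_s_per_core = 2.5     # per-core AES-GCM throughput, ballpark
cores = 16
crypto_gbyte_s = aes_gcm_gbyte_s_per_core * cores   # 40 GB/s total

mem_gbyte_s = 50                   # dual-channel DDR4-3200, roughly

bottleneck_gbyte_s = min(nic_effective_gbyte_s, crypto_gbyte_s, mem_gbyte_s)
print(f"NIC {nic_effective_gbyte_s} GB/s, crypto {crypto_gbyte_s:.0f} GB/s, "
      f"memory {mem_gbyte_s} GB/s -> bottleneck {bottleneck_gbyte_s} GB/s")
```

With those numbers the NIC is still the slowest link, so the CPU should keep up even with everything encrypted.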
Contrary take: I believe we will see an expanded market for capable PCs that can be sanely put in a living space. By extension of the gaming PC niche to local AI. Both NVidia and AMD are developing product lines in that direction (DGX Spark, Ryzen AI Max). And Linux will be more prominent than ever, due to several independent reasons: MS dropping the ball hard on Windows, SteamOS making Linux attractive for gamers, 'digital sovereignty' as a trend, and Linux being the de facto standard for hosting AI (or anything really).
Well, the two chips I mentioned (DGX Spark uses the GB-10) are both a SoC, so no motherboard needed there. I don't know if that's the full explanation, but it could be a factor.
The SoC design with unified memory is generally well suited for residential use because it's quite energy-efficient, quiet and small (compared to traditional GPU-powered gaming rigs). Great performance-per-annoyance, so to say.
Yeah, the DGX Spark could qualify as a mini PC too. The AMD chip is sold as a laptop chip I believe, but I've mostly seen it in mini PCs. And the Framework Desktop. A brand that probably carries a lot of trust among the kind of tinkerers who would consider buying a barebone motherboard in the first place.
You might be interested in the IBM PC compatible and Wintel wikipedia pages. This is a super high level timeline, but it is more interesting to get into the detail.
At a high level, the IBM PC platform was very well documented and sold well, to the effect of producing tons of software and peripheral add-ons. This led some other computer companies to reverse engineer the proprietary IBM BIOS, allowing their machines ("PC compatibles") to run the same software and use the same peripherals. Because these were clean-room reimplementations, IBM didn't have a legal case to prevent their sale.
Fast forward a bit: IBM's attempt at a new, closed platform, PS/2, flopped. People wanted their more open hardware. Windows became dominant enough that all the demand was for x86-based hardware that could run Windows, and Microsoft was happy to work with many vendors.
The PC is very open today, but Apple survived. Atari ST and Amiga probably survived longer than you think as well.
Wouldn’t recommend a home server rack in an apartment. For high wife approval factor, you can put Epyc hardware with Noctuas in a bigger case. I’ve got one at home. Runs my blog and a bunch of other things. Home is at 32 dB right now.
Realistically a Mac Mini will probably blow a lot of things out of the water on price / performance. Even an older one.
The problem with all those devices you listed is that they have lost the "general purpose" ability. I guess you could define "general" to mean "carefully curated"...
Says who? Oracle spends a lot of money to get ready for AI customers like OpenAI. They aren't there yet. They can't lose money serving what they don't have.