The fact that they found three independent paths to Turing completeness is what makes this paper fun. Even removing regex back-references doesn't kill it.
At some point you start wondering if there's any tool with conditionals and some form of persistent state that ISN'T Turing complete. The bar seems way lower than most people assume. Reminds me of the mov-is-Turing-complete result from a few years back.
For a TM, you need the ability to read and write some kind of list, plus a finite state automaton driven by what's in the list. The bar is very low.
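That minimal recipe (a readable/writable tape plus a finite state machine) fits in a few lines. A toy sketch in Python, with a made-up rule table that increments a binary number:

```python
# A minimal Turing machine: a tape you can read/write plus a finite
# state machine driven by the symbol under the head. The rule table
# below is a hypothetical example that increments a binary number.

def run_tm(tape, rules, state="start", head=0, steps=1000):
    """rules: (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # "_" is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Increment a binary number: walk right to the end, then carry left.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),   # absorb the carry
    ("carry", "_"): ("1", "L", "done"),   # overflow into a new digit
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_tm("1011", rules))  # prints 1100: 11 + 1 = 12 in binary
```

Anything that can store symbols, fetch them back, and branch on them can play the part of `cells` and `rules`, which is exactly why the bar is so low.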
Turing's "On Computable Numbers" paper is credited with inventing the Universal Turing Machine; but really it lays out many remarkable things:
- Undecidability: that there are mathematical/logical questions whose answers cannot be calculated by any (formal/logical/physical) system
- Universal computation: there exist systems which can answer all decidable questions
- Universal Turing Machine: an incredibly simple example of such a universal system
Of course, these are inter-related and inevitable (otherwise they wouldn't be provable!); but at first glance it feels like these could have gone either way: Maybe all questions could be calculated, given sufficient cleverness (as Hilbert expected)? Maybe different systems would be required for different sorts of question? Maybe a universal system would be too outlandishly elaborate to make sense in our universe (as existence proofs often are)?
Yet here we are, discussing multiple ways to perform universal computation with GNU find!
The real issue is that mandatory registration doesn't actually stop scammers. It stops hobbyist developers and small open source projects.
Scammers will use stolen identities or shell companies. They already do this on the Play Store itself. The $25 fee and passport upload haven't prevented the flood of scam apps there.
Meanwhile F-Droid's model (build from source, scan for trackers/malware) actually provides stronger guarantees about what the app does. No identity check needed because the code speaks for itself.
The permission-based approach someone mentioned above makes way more sense. If your app wants to read SMS or intercept notifications, sure, require extra scrutiny. But a simple calculator app or a notes tool? That's just adding friction for no security benefit.
The permission problem also affects normal apps. Things like KDE Connect quickly become useless without advanced permissions, for instance.
No permission system can work as well as a proper solution (such as banks and governments getting their shit together and investing in basic digital skills for their citizens).
The PROCHOT discussion in this thread is a good example. Lenovo stops making batteries, third party ones trigger artificial throttling, and the only fix is poking registers with a boot script. With coreboot you can just... fix it properly.
More broadly: faster boot times (sub-second POST is common), no vendor bloat or hidden phone-home behavior in the firmware, and you can actually audit what runs before your OS loads. That last one matters more than people think. Your BIOS has full access to everything on the machine before any OS-level security even starts.
I stopped trying to keep up with every release and benchmark. That was the turning point for me.
Now I pick one tool, learn it properly, and ignore everything else until my current setup actually fails me. Most "AI news" is just leaderboard shuffling that doesn't change how I work day to day.
The real overload isn't the technology. It's the marketing cycle around it. Every week someone announces the new best model, and if you're on Twitter it feels like you're falling behind by not switching. You're not.
Practical filter: if a new release doesn't solve a problem I currently have, I skip it entirely.
The real barrier was never technical. It was convenience and discovery. Running a Pi at home is trivial for anyone on HN, but the moment you want people to actually find your stuff, you need DNS, a stable IP, and some way to not get buried under the noise.
Tailscale and similar overlay networks have made the "accessible from anywhere" part way easier than it used to be. The missing piece is still discovery. RSS was the closest we got to decentralized discovery, and we collectively let it rot. Maybe it's time to bring it back properly.
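On the RSS point: serving a feed from a self-hosted box is nearly free, since a feed is just a small XML file. A minimal stdlib-only sketch (the site title, URLs, and post list are placeholders):

```python
# Minimal RSS 2.0 feed generator using only the standard library.
# Titles and URLs are made-up placeholders for illustration.
from xml.etree import ElementTree as ET

def make_feed(site_title, site_url, posts):
    rss = ET.Element("rss", version="2.0")
    chan = ET.SubElement(rss, "channel")
    ET.SubElement(chan, "title").text = site_title
    ET.SubElement(chan, "link").text = site_url
    for title, url in posts:
        item = ET.SubElement(chan, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = url
    return ET.tostring(rss, encoding="unicode")

xml = make_feed("My Pi", "https://example.org",
                [("Hello world", "https://example.org/1")])
print(xml)
```

Drop the output at a stable path and any feed reader can poll it; no ranking algorithm in the loop.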
I think the key issue here is that Attention is a temporal construct, meaning discovery is often tied to "being the first thing that comes to people's minds" which means SEO, reverse engineering the ranking algorithms, and constantly having to manage an "online persona". Note none of those things contribute to the actual work you're doing, just your "marketing department" (and whatever time/financial "budget" you intend to give it).
MrBeast figured out the YouTube algorithm - post early and often. Is that how we exist on modern Internet when every website/thumbnail is engineered by a team to maximize clickthrough rates? I agree RSS is useful, but it faces the same scalability issues if everyone starts filling up your RSS feeds. Given the limited amount of time you can devote to a particular task, we'll return to the era of A/B testing Headlines.
The cross-shard aggregate rewriting is really nice. Transparently injecting count() for average calculations sounds straightforward but there are so many edge cases once you add GROUP BY, HAVING, subqueries, etc.
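The classic trap that rewrite avoids: averaging per-shard averages gives the wrong answer whenever shards hold different row counts. A toy sketch of the idea (made-up shard data, not the proxy's actual code):

```python
# Why a sharding proxy must rewrite avg(): combining per-shard
# averages is wrong unless every shard has the same row count, so the
# proxy injects count() and merges weighted partials instead.

shard_rows = [[10, 20, 30], [40]]  # two shards, uneven row counts

# Naive merge: average the per-shard averages.
naive = sum(sum(s) / len(s) for s in shard_rows) / len(shard_rows)

# Rewritten merge: ship back (sum, count) per shard, combine globally.
partials = [(sum(s), len(s)) for s in shard_rows]
total = sum(s for s, _ in partials)
count = sum(c for _, c in partials)
correct = total / count

print(naive)    # 30.0 -- wrong
print(correct)  # 25.0 -- matches avg over all four rows
```

GROUP BY makes this harder because the merge has to happen per group key, which is presumably where the edge cases pile up.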
Curious about latency overhead for the common case. On a direct-to-shard read where no rewriting happens, what's the added latency from going through PgDog vs connecting to Postgres directly? Sub-millisecond?
Sub-ms typically, yeah. We measured the average latency between nodes in the same AZ (e.g., an AWS availability zone) at less than one ms, so you need to account for one extra hop plus processing time in PgDog, which is typically fast.
That being said if you don't currently use a connection pooler, you will notice some latency when adding one. It's usually table stakes for Postgres at scale since you need one anyway, but it can be surprising. This especially affects "chatty" apps - the ones that send 10+ queries to service one API request, and makes bugs like N+1s considerably worse.
TLDR: not a free lunch, but generally acceptable at scale.
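Back-of-envelope on why chatty apps feel this most: a fixed per-query hop cost scales linearly with queries per request. The numbers below are illustrative assumptions, not measurements:

```python
# Illustrative arithmetic: one added sub-ms hop is invisible on a
# single query but compounds on chatty, N+1-style request paths.
# hop_ms is an assumed figure, not a benchmark result.

hop_ms = 0.5           # assumed proxy hop + processing, same AZ
queries = [1, 10, 50]  # queries issued to serve one API request

for n in queries:
    print(f"{n:>2} queries -> +{n * hop_ms:.1f} ms per request")
```

At 1 query per request the overhead is noise; at 50 it becomes a visible chunk of a latency budget, which is why N+1 bugs hurt more behind a pooler.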
The interesting thing nobody's talking about here is that cheap code generation actually makes throwaway prototypes viable. Before, you'd agonize over architecture because rewriting was expensive. Now you can build three different approaches in a day and pick the one that works.
The real cost was never the code itself. It was the decision-making around what to build. That hasn't gotten cheaper at all.
I think the prototype thing is absolutely true, but it breaks down like all prototypes do at the level of collaborating, sharing, and evolving while handling entropy through simplicity, UNLESS you know what you're doing or the agent steers you with very opinionated tooling customized to your context. I'm thinking about empowering people to be builders, less so software developers who can make the right tradeoffs.
Empowering people to work Tracer bullet style after they've selected their prototype of choice and thrown it away might be a powerful pattern that actually gets us into a nice collaborative space.
This feels to me like peak sfba mentality on par with "move fast and break things". Outside of trying to create a unicorn, is this really how people create things?
It seems to me that in order to obtain the ability to build things that other people like, you need to go through the process of creating things they won't. Like a painter needs to paint a bunch of crappy paintings to learn how to create a good one. If you have the LLM create these throwaway prototypes, how will you even know when you come across a good idea, and how will you be able to build it?
> It seems to me that in order to obtain the ability to build things that other people like, you need to go through the process of creating things they won't.
Okay, granted. What does that have to do with how the code is written? Do people generally care if a web app is running from nicely formatted JS or minified JS? Is a product manager not getting better at building things people like because they're not iterating on the code themselves?
Without agreeing or disagreeing with the premise, I think a relevant metaphor* here is that the painter can practice and iterate and go from creating crappy paintings to creating good paintings, without needing to make their own paint and canvas and brushes. If they're particular, they can have their assistant go to the supply shop and get just the right things they want, with increasing specificity as needed, but they don't need to manufacture them by hand.
* Like most metaphors, it's not perfect; please try to understand the intent.
I mostly agree with your metaphor; I think perhaps I disagree slightly on how it's applied. You don't need to create your own tools to create art, but I don't necessarily map the "tools" to code. The act of programming is mapping information to hardware; the value is in the information, and using LLMs to bypass the phase where you obtain, synthesise, and extend that information is where you lose the benefits of iteration. If you're just using the LLM as a mechanical tool to output code, it's mostly no different from, say, using speech-to-text to output code. When you start hearing things like "I don't care about the quality of the code, just its outputs", that starts sounding like someone isn't iterating on the information, which is the crucial bit.
But you need to actually be the one doing the iterating, you can't outsource it. The entire point to doing the iteration is the process, not the artefacts.
Hmm interesting, I didn't realise people were using it as a typing replacement instead of having it work agentically. Does that mean when you want to change a line of code somewhere, you just prompt the LLM to replace line 334 with your changes etc? So do you not use the LLM autonomously at all then? Sounds like it since you're still doing the iteration yourself.
I do both. A lot of changes are "autonomous", like "add a new Django model to record a change every time the title or body is edited in the admin", but I also do more fine-grained edits like "have the import script truncate to 400 chars" (instead of 250).
Sometimes I'll make edits like 400 to 250 by hand, but if I'm prompting on my phone it's faster to have the model do it as navigating code in an editor and changing it at the exact right point is fiddly on a mobile keyboard - models can spot and account for typos, direct code editing can't.
Running a small project on Hetzner from Germany. Got the email this morning. Honestly, even after the increase their dedicated boxes are still absurdly cheap compared to what you'd pay at AWS or GCP for equivalent specs.
The real story here isn't Hetzner being greedy. It's that AI companies are vacuuming up every DRAM chip on the planet and the rest of us get to pay the tax. I priced out a RAM upgrade for my home server last week. Same kit I bought 8 months ago for 90 EUR is now 400+. That's not normal market dynamics.
What worries me more is the second-order effects. Startups that would normally spin up cheap VPS instances to prototype and iterate now face meaningfully higher costs at the exact stage where every euro matters. The "just deploy it" culture that made European indie dev scene so productive was built on sub-10 EUR/month boxes. Those days might be over for a while.
"The real story here isn't Hetzner being greedy. It's that AI companies are vacuuming up every DRAM chip on the planet and the rest of us get to pay the tax."
We might also have our aquifers depleted and our electricity prices skyrocket. But at least we see really great benefits, such as being able to script some side-project while unemployed due to AI.
I just can't believe how HN has turned into a disinformation/propaganda machine over the last few years. Pretty much every topic is politics, disconnected from reality.
Data centers consume... a lot... of water by design. Recirculated water does not mean no water consumption.
Water must be continuously added in evaporative cooling systems used by many data centers.
[1] - Cooling towers reject heat through evaporation, which uses water, not just recirculates it. Evaporated water is lost to the atmosphere and must be replaced with "make-up" water. As a result, recirculating cooling loops still require new water input to make up evaporation and blowdown losses.
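For a sense of scale, make-up water loss can be estimated as heat rejected divided by water's latent heat of vaporization. A rough sketch (the 1 MW load is an assumed example, and blowdown losses are ignored):

```python
# Rough magnitude check for evaporative make-up water: heat rejected
# by evaporation divided by water's latent heat of vaporization
# (~2.26 MJ/kg). The 1 MW load is an assumed example figure.

heat_w = 1_000_000        # 1 MW of heat rejected via evaporation
latent_j_per_kg = 2.26e6  # latent heat of vaporization of water

kg_per_s = heat_w / latent_j_per_kg
liters_per_hour = kg_per_s * 3600  # ~1 kg of water is ~1 liter

print(f"~{liters_per_hour:.0f} L/h lost to evaporation per MW")
```

So roughly 1.6 m³ of fresh water per hour per megawatt goes up the tower, which is exactly the make-up water the quote describes.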
Anyone who thinks that modern data centers don't evaporate their "recirculated FRESH water" straight into the atmosphere can safely have their opinions summarily discarded.
What if there were a cooler that somehow didn't evaporate water, you might even call it a "dry cooler" - that would be a sweet invention. This might even be required in areas where adiabatic cooling isn't effective (humid climates)!
> It's that AI companies are vacuuming up every DRAM chip on the planet and the rest of us get to pay the tax.
DRAM is priced based on supply and demand, like every other market.
When demand goes up, the price goes up for everyone. It’s not a “tax” on the rest of us in any sense. There’s just a lot of demand everywhere.
> That's not normal market dynamics.
This is actually a textbook example of markets functioning in response to a demand shock where supply cannot be increased rapidly.
I do find it interesting that so many people think “market rate” means the opposite of what economics teaches, and that prices should stay stable and not change much when the economic conditions change.
I also find it interesting to read all of the “we shouldn’t let them…” takes in response to this situation. The DRAM market is international. Trying to restrict it in one country would just see the data centers get built in another country.
But... They're not wrong. That IS the market. Unrestricted, gloriously free market with its historically predictable outcomes - yay!
That's not where the interesting discussion is. The interesting discussion is with the notion that free unregulated markets are universally good and will naturally lead to positive outcomes because... I don't know, I'm personally not religious, but somebody here will help me :-).
Commodities used to be proper free markets. Many suppliers and many buyers of a product that was the same regardless of the supplier.
This led to low prices and/or differentiation with new products.
Most of these markets were too good, so in general we now have a few big companies buying up the lion's share of the supply so they can set the price regardless. Soy, just to name one example.
Sorry, when you say "gloriously free market", do you mean whatever it takes EU, helicopter money (or, rewinding a decade, Greenspan put) US, or factory of the world China? :)
My point is that it's not a real market economy if the risk premium -- and in China's case, the exchange rate -- is rigged. And it has been, since the 90s.
EDIT: For clarity, I'm agreeing with you, since you were being facetious.
Absolutely! -- and we could play this game for a long time ;)
The right way of looking at it is, there was tiny little interlude of something vaguely approaching the free market -- back when Volcker was in charge.
> That's not where the interesting discussion is. The interesting discussion is with the notion that free unregulated markets are universally good and will naturally lead to positive outcomes because...
The textbook desirable outcome is that competitive markets minimize suppliers' surplus, which is good for consumers.
Note that this doesn't mean unregulated markets. Monopolies, and oligopolies acting like a monopoly, are textbook examples of pathological markets where suppliers can maximize their surplus.
I think pretty much everyone would agree that the current situation is a failure of regulation, not over-regulation. Regulators and legislation have been steadily weakened in the name of international competitiveness since Reagan.
An example of unregulated market is where I come to your house and put a gun to your head and in exchange for not pulling the trigger you give me your various items of value.
While you are technically correct, you are neglecting that it would be a bad idea, because in such a market I would likely answer the door with a shotgun, or I would have an agreement with my other neighbor to shoot you if you come to my door brandishing.
This is actually also how global diplomacy works. Either have big guns or big friends.
I think you have gone to the far end of the spectrum, to the point where even state laws are being broken; we are talking about rules within the market itself.
An unregulated market is an oxymoron. You could come to my house and put a gun to my head, but that's not a consensual trade. That's just thuggery; the point of a market is that both sides benefit from trade.
For markets to exist, property rights also need to be respected.
But this is my point. People say "unregulated market" and assume that means reverting to feudalism, but what it actually means is just... less regulation.
Don't forget the Republican policy of starving the beast, which includes Republicans happily putting the US into unsustainable debt as a matter of policy, hoping to break the government so badly that they can then enforce unpopular policies they couldn't get any other way.
What they probably mean is that it is not a fair market, that there is no balance in purchasing power, pushing small scale buyers away while supply slowly catches up (or doesn't)
I'm not disagreeing with you, but I have not frequently heard the phrase "fair market" (as opposed to a far more limited and specific term "fair market value", where "fair" I believe applies to "value" and not "market") and would be interested in hearing more of its definition and criteria.
Trivially, I would assume proponents of "free market" and "fair market" are a tiny if not zero Venn diagram, and that terms are at least somewhat opposing, but will withhold my judgement :-).
People love to say that but they own a very small percentage of housing in reality. What’s driving housing costs is also supply and demand. Especially supply, since we’re not allowed to build any houses in most places people want to live.
You’re still missing the key point: Hedge funds and REITs aren’t arbitrarily buying housing at any cost.
They are responding to the market. If they overbuy then they will lose money and have to sell at a loss, at which point you could snap up some good deals.
This is ridiculously oversimplified, because there is no real market in housing. It is illegal to build in all of the places people want to buy. The purchase of housing by hedge funds isn't a problem on its own, it's simply a symptom of the bigger problem of supply restrictions.
The funds themselves say in their financials that they view housing as profitable because of the various restrictions on supply in every desirable city. They explicitly say that if those restrictions were lifted they would not be able to make money in that business and they would exit.
Any attempt to apply supply and demand and market theoreticals in housing is fundamentally misplaced, as the other commenter noted, because there are far too many forces that distort both supply and demand.
Which doesn't sound like a free market to me. Capping production to keep asset price high is one of the most straightforward default examples of market-distorting interventions there is.
Hedge funds don’t have as high of institutional ownership as you assume. It’s actually pretty small.
That said, nothing about the situation you described is at odds with “free market”. You’re describing the operation of a free market.
I think a lot of people want “free market” to mean the opposite: A highly restricted market where they are protected from any supply and demand inputs from anyone else. They just want cheap things and don’t want to compete with anyone.
There are two sides to a free market, though. In your example where a hedge fund comes in and buys your entire neighborhood, they would have to do so by outbidding everyone. This drives up the price. If it’s an economically irrational move you’d be smart to sell your house to them at an inflated rate, too! Then move back in when the prices crash down.
I should point out that the relevance of my argument is completely independent of the fact that the answer to this question of yours is higher than zero.
So don't see this reply as a justification, just as a note that you failed to do basic diligence on distortions that are well known. And as I said, they are not relevant to the analogy.
That article doesn't support your point. Only a small fraction of the homes in that area are actually owned by hedge funds. You should check the facts before commenting.
> When demand goes up, the price goes up for everyone. It’s not a “tax” on the rest of us in any sense. There’s just a lot of demand everywhere.
> This is actually a textbook example of markets functioning in response to a demand shock where supply cannot be increased rapidly.
You act like it's a competitive market. It's not the case.
It's an oligopoly with an extremely inelastic supply side.
The market is already completely broken and ineffective due to concentration and export controls. The actual response to a major demand shock should be investments to increase capacities but it's currently extremely limited because suppliers want to protect their margins and fear the market contracting again.
> It’s not a “tax” on the rest of us in any sense. There’s just a lot of demand everywhere.
Curious on whether you will still hold your stance if OpenAI gets a taxpayer bailout. Even disregarding a bailout, they are already lobbying hard for tax credit expansion.
A government bailout of OpenAI would be a regressive redistribution of wealth to some of the least needy people in all of society, which is a horrendously poor use of government funds. But that has no bearing on the fact that calling high DRAM prices induced by high demand a “tax” stretches the meaning of the word beyond all recognition.
There are many horrible things in the world and we don’t need to label them all as a “tax.” If we use words in an imprecise way, it obfuscates the truth.
Please note that OpenAI's partners and suppliers (Oracle, CoreWeave, SoftBank-linked entities) have taken on significant debt to fund infrastructure for OpenAI: roughly $100 billion reported in late 2025 alone.
Projections show $14-20 billion in losses for OpenAI expected just in 2026.
The chances that someone is not going to ask for a debt write-off approach zero as the years go by. OpenAI already began testing the waters late last year. Senator Warren has already raised alarms about potential indirect taxpayer exposure when the "AI bubble" bursts.
When that happens - and it is all but guaranteed to happen - it will amount to a horrendous tax, rendering everything you’ve said about 'imprecise words obfuscating the truth' complete hogwash.
Economic history is full of examples of demand shocks. This is not some unique situation that has never occurred before.
This is actually a clean commodity price spike because it’s specifically not for market manipulation or financial engineering. It’s because demand for this product really did explode overnight.
> This is actually a clean commodity price spike because it’s specifically not for market manipulation or financial engineering. It’s because demand for this product really did explode overnight.
Based on how the same 3 billion has been circulating among Anthropic, OpenAI, Nvidia, Google, Microsoft, Amazon, and a few other companies... I really doubt that this is the case, to be honest.
I think it's reasonable to distinguish which side drove this. RAM prices are going up but it's not engineered primarily by RAM manufacturers. They are naturally jumping on the bandwagon and responding, but they aren't the drivers. Of course, how they respond matters. They could make other choices. Over time we'll see how this goes because AI could cool and then RAM manufacturers end up in a spot where they choose to manipulate prices to keep them higher.
Tax is also an economic term, which is not what’s happening. Calling it a “tax on consumers” doesn’t make sense because any data centers buying RAM right now are also buying from the same global market.
If commenters just want to be outraged and throw words around then use whatever words you want, I suppose.
> This is actually a textbook example of markets functioning in response to a demand shock where supply cannot be increased rapidly.
The problem is that demand is being propped up by speculative capital. The AI companies are a bubble that is suffocating the productive parts of the market by hoarding capital, which they're now using to hoard hardware as well. All this without making money, for data centres that aren't built yet, on a handwavy promise that an AGI will magically solve all the world's problems.
This is not normal, and it is not good for the broader economy.
Yeah, the dude's argument is bunk when we remember that OpenAI bought CAPACITY and not actual product. The market is also heavily manipulated by the big 3 players.
OpenAI brazenly used their market position to create artificial scarcity. That's not normal market behavior. That's manipulation. And now we all suffer.
> Trying to restrict it in one country would just see the data centers get built in another country.
I'm surprised this isn't already what's being done. Inference doesn't require super low latency with the client, and the population's support of AI (and especially data centers for it) is waning quickly. This feels like another ideal use case for outsourcing the stuff Americans don't want to see to somewhere that it'll be someone else's problem.
I think the usefulness of market dynamics is their ability to follow things like factory capacity, which are themselves hard to follow, not the other way around.
For things that aren't inherently limited in production, it is supposed to work both ways. A key element is that China still acts as a bloc. So Chinese firms have lost a big opportunity: they aren't making DDR4, yet they aren't ready with DDR5 either. When they are ready, it will probably tank the market, which is less profitable than selling something at high prices, with actual availability, the whole time.
Can't agree more. We can also predict with some confidence that in a year or two supply will have adjusted and RAM will be cheaper in the long run. We benefit from the expanded demand, even if the fact that it first lands as a shock is disruptive to prices.
GPU prices went through the roof for crypto and then the pandemic and never really recovered to pre-pandemic prices before once again spiking because of AI demand. So where's the increased supply of Nvidia cards to account for all the continued demand? And why haven't RAM manufacturers announced plans for increased production (instead of pulling out of the consumer market altogether)?
The past 6 years of GPU pricing (the 5080 launched at $1000, currently $1500-1800 at Microcenter) don't exactly fill me with confidence that RAM manufacturers will increase supply to meet demand and bring down prices again.
The problem is that OpenAI has cornered the market. Maybe they haven't crossed the legal line or more to the point no one in this corrupt and incompetent administration is going to prosecute them, but buying up 40% of a market which hasn't got any additional capacity is cornering by any measure.
So yes, this is not a normal market. Your claim of a functioning market is the same as saying my laptop, having lit on fire, is a functioning computer after having 10,000 volts applied across it.
> DRAM is priced based on supply and demand, like every other market.
Please don't explain it away like that: you are referring to the theoretical "ideal" market where a bunch of small companies compete with low margins to the benefit of the wider customer base. This is not what is happening. We have a couple of intrinsically worthless LLM-whale companies working to swallow and enshittify literally everything in their weird transhumanist/accelerationist way. To add insult to injury, the whole creation of artificial scarcity is almost a political construct, paid for with "monopoly-the-game money" that these companies DO NOT EARN but instead BORROW based on vague and dishonest promises of achieving a "country of PhDs in a datacenter"/"pocket PhDs"/"AGI by 2025" (oops, now apparently by 2028, according to the OpenAI CEO). In their weird vision, we humans should be merely cattle to be managed, not independent spirits with interests and aspirations. That ghoul Karpathy speaks about a "ghost in the machine" while overlooking the magnificence of the already existing ghost in the machine: human beings. We should not have to swallow the increasingly crappy future these folks insist on pushing on all of us.
What makes it manipulation? If 5 companies want to buy a quadrillion RAM chips to build datacenters, why is this manipulation any more than a million companies each wanting to buy 100 RAM chips?
I think the problem is that both the buyers and producers are too large. Governments should not allow companies to become this big, because... <gestures broadly at everything>. If there were a thousand ram makers and a thousand datacenter builders, this particular problem would not exist.
But you can't just label any price evolution you dislike as "price manipulation".
> If 5 companies want to buy a quadrillion RAM chips to build datacenters, why is this manipulation any more than a million companies each wanting to buy 100 RAM chips?
Because they are 5 companies, especially when it can be shown they work in unison (i.e., have formed a cartel).
It's certainly price manipulation, but not likely to be intended price manipulation. Your arguments are flawed but you have reached the right conclusion.
This is one of the many flaws of badly regulated markets.
(There are no free markets, and there is never perfect information, and people often behave remarkably irrationally for many reasons.)
But aren't those the same startups that think they need to run on AWS EKS instead of using a single cheap server? The cheapest used Hetzner server currently is €39.24 / month:
Similar to my favourite OVH servers, but I have unlimited traffic at 0.5Gb/s, 64GB RAM, and dual NICs. Similar price (with VAT in Poland).
If you wanted to run the same workloads on AWS, it would cost you a few hundred euros a month.
I see a silver lining to all this. At least maybe the silly "throw more horizontal scaling at it" will stop being a default response to all performance problems and people that are able to squeeze more processing out of the same hardware will be sought after again.
If your only need is a lot of bandwidth with very low server CPU use that’s fine.
That CPU is ancient, though. Over a decade old. That DRAM is 2-channel DDR3.
This could be a good deal for someone, but entrusting your startup’s operations to a 10 year old slow computer in Germany instead of using EKS would be an extremely short sighted move. A startup should be developing software and shipping it quickly to validate the market, not pinching pennies to save the equivalent of a couple hours of developer salary.
Right, for exactly that reason Hetzner offers brand new AX42 / EX63 servers with ECC memory and modern (Zen 4 / Arrow Lake) CPUs for just a little bit more.
I would guess that 99.9% of startups wouldn't notice the age of the CPU if they aren't in the business for CPU compute power.
Also, if you don't want to provision software systems, you probably shouldn't use Kubernetes at all. Both this and compute are niche businesses and neither would rent a budget server anyway.
No, that's actually a really good deal for dedicated hardware with those specs. For a project sized for hardware like that, the CPU is a lot less relevant than the RAM and storage and transfer.
Measuring CPUs by thread count and clock speed is not a good way to gauge performance. A current gen CPU would be several times faster than this old CPU.
Depending on workload, this old CPU might be as slow as a 2 thread or even 1 thread current gen server.
It does 8000 CPU marks with 4 cores. Sure, a Xeon 674X does 83641 with 28 cores, but show me where you can find one for less than 10 times the price, and with 320GB of RAM, 10TB of NVMe SSD storage, and 10 Gbit/s of "unlimited" bandwidth.
More than that, compare it to modern cloud CPUs. Epyc 9845 gets 153000 but that's with 160 cores / 320 threads. Per core it's under 1000 and 4 cores would be 3825 when the 11-year-old i7 is 8000.
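The per-core arithmetic above can be checked in a few lines. The scores are the PassMark-style figures quoted in the thread; dividing a multi-core score evenly per core is itself a simplification, since multi-core scaling is not linear.

```python
# Multi-core benchmark scores as quoted in the thread.
# Splitting a score evenly per core is a rough approximation.
old_i7 = {"score": 8000, "cores": 4}          # the 11-year-old i7
epyc_9845 = {"score": 153000, "cores": 160}   # modern cloud EPYC

per_core_epyc = epyc_9845["score"] / epyc_9845["cores"]  # 956.25, under 1000
four_epyc_cores = 4 * per_core_epyc                      # 3825.0

print(f"EPYC per core: {per_core_epyc}")
print(f"4 EPYC cores: {four_epyc_cores} vs old i7: {old_i7['score']}")
```

So by this naive measure, four modern cloud cores score less than half of the whole old i7, which is the comparison being made above.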
Because those big systems are optimized for power efficiency. That Epyc is ~2.4W/core compared to ~16W/core for the old i7. It has a lower base clock and is Zen5c. If we cut the 8-core Ryzen 9850X3D's score in half, 4 Ryzen cores from the same generation but with a higher base clock and six times the L3 cache per core would be 20942. But it's also back up to 15W/core. The Epyc still has better performance per watt.
The newer cores are significantly more efficient. That doesn't mean they're unconditionally faster independent of all other variables.
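The efficiency point can be made concrete with the same back-of-envelope style, using the watts-per-core figures quoted above. Splitting package power evenly per core ignores uncore power and boost behaviour, so treat this as a sketch.

```python
# Back-of-envelope performance per watt, using the thread's figures.
# Power is split evenly per core, which ignores uncore power and boost.
chips = {
    "old i7":    {"score": 8000,   "cores": 4,   "watts_per_core": 16.0},
    "EPYC 9845": {"score": 153000, "cores": 160, "watts_per_core": 2.4},
}
for name, c in chips.items():
    per_core = c["score"] / c["cores"]
    per_watt = per_core / c["watts_per_core"]
    print(f"{name}: {per_core:.0f} marks/core, {per_watt:.0f} marks/watt")
```

The old i7 comes out ahead per core but well behind per watt, which is exactly the trade-off being described: slower, wider, cooler cores win on efficiency.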
> And with 320GB RAM, 10TB of NVMe SSD storage and 10 GBit/s of "unlimited" bandwidth
I think you’re talking about something else. The comment above was about a machine that didn’t have 10TB of storage, 320GB RAM, or unlimited bandwidth.
If you find 320GB of RAM and unlimited bandwidth for 40 Euro monthly then send it over!
The €39 machine has 32GB of RAM, ~1TB of storage, and 1 Gbit/s. So to make it a fair comparison, the 10-times-faster CPU should also come with 10 times those resources.
> Except 40€ a month is extremely poor value for this CPU that's more than a decade old.
This is a rather baffling opinion to have. All cloud providers charge far more for a virtualized instance running on God knows what hardware. You are faced with a deal where you can run your software on bare metal, and you complain about... what, exactly?
Excuse me, but if the difference between €10 a month and €14 a month is going to kill your startup, you probably shouldn't try to start it.
Might be time to think about using and creating less memory-hungry software.
Actually I disagree. I've killed projects because I've run out of time for them and didn't like them costing me £50 a month. If I'd been able to keep them going at £10 a month, I might have kept them going until I could get back to them. Sometimes startups fail just because the owners get distracted by life, and the project just needs more time.
2) why could they not just up the prices for new deployments, like they did with their dedicated servers? I think that would be fairer to existing customers
If you have a company, I can recommend leaseweb for cheap hosting. I host my personal stuff like my email and my ente.io instance there. They are cheaper than Hetzner (already before the new price increase) if you don't need managed k8s.
The first known example is from the 6th century BC, when a Greek philosopher (Thales of Miletus) cornered the market on olive oil presses because he predicted a rich harvest via his knowledge of astronomy:
Well, you think wrong. OpenAI is the one that pre-hired 40% of the world's memory fab capacity.
And if you are going to protest with the common "OpenAI doesn't even build datacenters themselves!", yes, they don't, and it's a complete non-sequitur.
When I looked at Newegg and Amazon USA last month, I saw even higher prices than here in Europe, by 30% to 40%, which is the reverse of previous years, when computers and computer-related components were cheaper in the USA than in Europe.
So I think the victims are computer users the world over, with the exception of the negligible number of people tied to the AI companies. Moreover, the US victims appear to be hit by the price hikes even harder than those in other countries, at least for now.