Correct, and exactly how it is supposed to work. What do you expect a business owner to do when a worker's job is no longer profitable or necessary? Pay them forever out of their own pocket, taking a loss?
If you've ever hired qualified personnel, you know how hard it is to find good people. No good business owner would fire a good employee on a whim. They'd keep them as long as possible (even when there's temporarily no work).
The other factor is whether it's easy to re-hire when there is profitable work again. That happens where the supply of workers far exceeds demand, for example in low-qualification jobs.
So, stop manipulating the money supply and the economy, and the rest will sort itself out.
We already have LISP. Every once in a while someone invents a new language to solve some old problem, and as the new language evolves, it becomes closer and closer to LISP.
That's one thing I am acutely aware of and monitoring closely, and in fact it's one of the blog posts that's ripening to be released sometime: "Am I building a Lisp?"
And every once in a while it looks like it might be that I am, in fact, arriving at a Lisp, but in the end they turn out to be false alarms, so the answer is still fairly resoundingly: No.
The only thing Lisp seems not to have figured out yet is this small thing we call "syntax", which the PL community mostly figured out 50 years ago: you write terse notation and the compiler builds up the tree for you; you don't lay out the syntax tree yourself.
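To make that point concrete, here's a tiny Python sketch (with Python's ast module standing in for "the compiler"): the terse infix source is turned into a tree for you, while the Lisp-style form is that same tree written out by hand.

    import ast

    # Terse infix notation: the parser builds the syntax tree for you.
    tree = ast.parse("1 + 2 * 3", mode="eval")
    print(ast.dump(tree.body))
    # Prints roughly:
    # BinOp(left=Constant(value=1), op=Add(),
    #       right=BinOp(left=Constant(value=2), op=Mult(),
    #                   right=Constant(value=3)))

    # The Lisp-style equivalent, (+ 1 (* 2 3)), spells out that tree by hand.
    sexpr = ("+", 1, ("*", 2, 3))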
Lisp also seems to have a shortage of knowledgeable advocates who understand other programming languages well enough to know that not everything is, or needs to be, a Lisp.
Finally! You've done it! As if all the other frontend architectures were meant to be impractical. Say what's special about your solution in the title. Avoid empty words. Less noise, more signal.
Right?! Yeah, the title should be “A few patterns we implemented lately”. I do have more thoughts specific to practical frontend architecture though, so I guess I’ll just hope you see the next post.
How about building more nuclear power stations? Nuclear energy production hasn't increased in the US for the past 30 years, while oil and gas have exploded. Idiotic. Same thing in the EU: closing down the plants in Germany and Spain.
You don't seem to understand bitcoin. You're talking about cashing out. Why would anyone cash out from bitcoin? Ok, you get some dollars. What do you do with them? They lose >10% yearly, whereas BTC has gained >200% yearly over the past 10 years.
Instead you hold it. And if you need money you borrow against it.
Likely, poorer people won't be able to save and hold much bitcoin, at least in the beginning. But the government can keep its reserves (and pension funds, for example) in BTC and thus benefit everybody.
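Just to put numbers on the rates quoted above (the >10% yearly loss and >200% yearly gain are this comment's claims, taken here as assumptions rather than verified data), here's what they compound to over ten years:

    # Compounding the rates claimed above over 10 years (assumptions, not data).
    years = 10
    dollar_loss = 0.10   # claimed yearly loss of purchasing power
    btc_gain = 2.00      # claimed yearly gain (200%)

    dollar_value = (1 - dollar_loss) ** years   # ~0.35: roughly a third left
    btc_value = (1 + btc_gain) ** years         # 3**10 = 59049x

    print(f"$1 of purchasing power after 10 years: ~${dollar_value:.2f}")
    print(f"BTC multiple after 10 years at +200%/yr: {btc_value:,.0f}x")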
Why would you spend money on crappy, locked-down hardware that can't be fixed? A computer that you don't own but basically rent. Get a Lenovo ThinkPad and join the light side; you'll be amazed!
Whatever your opinions on Apple's policies and behavior, it's just ignorant to call the M1 'crappy' when it absolutely annihilates any processor in its class and isn't at all embarrassed when compared to high-end desktop CPUs.
CPUs are a chump's game, and it's no surprise that Apple, the company with sole access to next-generation silicon, was able to reach last-generation performance on a laptop chip. Nobody freaked out when AMD's Ryzen 7 4800U hit 4 GHz across 8 cores, so I don't see a reason why I should freak out now when Apple's doing it with 10 fewer watts.
Plus, that's only the CPU side of things. The M1's GPU is annihilated by most GPUs in its class... from 2014. Fast forward to 2021, and its graphics performance is honestly pathetic. Remember our friend the 4800U? Its integrated GPU is able to beat the M1's GPU in raw benchmarks, and it came out 18 months before it.
So yeah, I think there are a lot of workloads where the M1 is a pretty crappy CPU. Unless your workload is CPU-bound, there's not really much of a reason to own one. And even still, the M1 doesn't guarantee compatibility with legacy software. It doesn't have a functional hypervisor, and it has lower IO bandwidth than most CPUs from a decade ago. Not really something I'd consider viable as a "daily driver", at least for my workload.
"CPUs are a chump's game" - what? High performance CPUs which nevertheless use very little power are extremely difficult to design.
"AMD's Ryzen 7 4800u hit 4ghz over 8 cores" - It doesn't. AMD specifies it as having 1.8 GHz base clock, 4.2 GHz max boost clock. AMD's cores use ~15W each at max frequency. Since the 4800U's configurable TDP range is 10W to 25W for the whole chip, there is no way that all 8 cores run at 4.2 GHz simultaneously for any substantial period of time. In fact, running even one core in its max performance state probably isn't sustainable in a lot of systems which opt to use the 4800U's default 15W TDP configuration.
On the other side of things, Apple M1 performance cores use ~6W each at max frequency. It is actually possible for all four to run at full performance indefinitely with the whole chip using about 25W, provided there is little GPU load.
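A quick back-of-the-envelope check of that power math, using the per-core figures estimated above (the ~15W and ~6W numbers are this comment's estimates, not measurements):

    # Rough all-core power at max frequency vs. each chip's TDP budget.
    zen2_core_w, zen2_cores = 15, 8      # ~15W per core (estimate above)
    m1_core_w, m1_perf_cores = 6, 4      # ~6W per performance core (estimate above)

    ryzen_all_core_w = zen2_core_w * zen2_cores   # ~120W vs a 10-25W chip TDP
    m1_all_core_w = m1_core_w * m1_perf_cores     # ~24W, fits a ~25W budget

    print(f"4800U, 8 cores at max boost: ~{ryzen_all_core_w}W (TDP 10-25W)")
    print(f"M1, 4 performance cores:     ~{m1_all_core_w}W (~25W budget)")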
"Remember our friend the 4800u? It's integrated GPU is able to beat the M1's GPU in raw benchmarks, and it came out 18 months before it." - Say what? The only direct comparison I've been able to find is 4700U vs M1, in Anandtech's M1 article, and it shows the M1 GPU as 2.6x faster in GFXBench 5.0 Aztec Ruins 1080p offscreen and 2.5x faster in 1440p high.
Granted, the 4700U GPU is a bit slower than the 4800U GPU, but not by a factor of 2 or more.
This isn't an unexpected result given that the M1's GPU offers ~2.6 single-precision TFLOPS while the 4800U's offers ~1.8 TFLOPS.
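For reference, those TFLOPS figures fall out of the usual shaders x 2 FLOPs per clock (FMA) x clock formula; the shader counts and clocks below are the commonly cited specs for each GPU, used here as assumptions:

    # Single-precision throughput: shader ALUs * 2 FLOPs/clock (FMA) * clock in GHz
    # gives GFLOPS; divide by 1000 for TFLOPS.
    def sp_tflops(shaders, clock_ghz):
        return shaders * 2 * clock_ghz / 1000

    m1_gpu = sp_tflops(1024, 1.278)   # 8-core M1 GPU: ~2.6 TFLOPS
    vega8 = sp_tflops(512, 1.750)     # 4800U's Vega 8: ~1.8 TFLOPS
    print(f"M1 GPU ~{m1_gpu:.1f} TFLOPS, Vega 8 ~{vega8:.1f} TFLOPS")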
Literally everything you wrote about M1 being bad is wrongheaded in the extreme, LOL.
Not being viable as your daily driver does not make it crappy.
But you heard it here first, guys: building CPUs is a chump's game. And you see no reason to celebrate the first genuinely viable, power-efficient, and fast non-x86 CPU being a mass success. Fine, I guess, but I don't agree.
Also not sure why you wave away CPU-bound workloads as though they don't exist or are somehow lesser.
> Not being viable as your daily driver does not make it crappy.
What does it make it, then? Some unicorn device that I'm unworthy of? Is there something wrong with my workload, or Apple's? Apple is marketing the M1 to computer users. I'm a computer user, and I cannot use it as part of my workflow, so I have every right to voice that concern to Apple.
> And you see no reason to celebrate the first genuinely viable, power-efficient, and fast non-x86 CPU being a mass success.
You must be late to the party; ARM has been around for years. Apple's power efficiency is about on par with what should be expected from a 5nm ARM chip with a gimped GPU. What is there to celebrate, that Apple had the initiative to buy out the entirety of the 5nm node at TSMC, plunging the entire world into a semiconductor shortage unlike anything ever seen before? Yeah, great job Apple. I think it was worth disrupting the global economy so you could ship your supercharged Raspberry Pi /s
> Also not sure why you wave away CPU-bound workloads as though they don't exist or are somehow lesser.
CPU-bound workloads absolutely exist, but who's running them on a Mac? Hell, more importantly, who's running them on ARM? x86 still has a better value proposition than ARM in the datacenter/server market, and most local workloads are hardware-accelerated these days. I really don't know what to tell you.
Yeah, after two failed 2016 MacBooks because of their SSDs, I can only say: stay away from Apple hardware until they reverse course on storage devices.