It’s actually quite simple: 1–5 mg lithium orotate, vitamin D, omega-3 from algae with high levels of polyphenols, a daily exercise routine, and good food—not the processed crap you often get in the US. My grandmother is 94 and still so mentally sharp that she amazes me every time.
It's actually not "quite simple" at all. There is a gulf of difference (often misunderstood) between taking supplements as a personal decision, because the risk of harm is low, there is some preliminary evidence they may help, and one can afford them, and touting them with certainty as the cure for a whole host of ailments without sufficient evidence. The latter is usually what "wellness influencers" and people selling supplements and "health optimization" programs do.
My grandmother lived to 102 and was mentally sharp right up until she died. She also smoked daily into her 90s.
I was about to say I would need a newer-gen card to test the new open kernel driver stack, but after some research it appears that the 2080 series was the first to support them, and with that new knowledge I realized I have a 2080 Ti on hand already.
So thanks for offering yours. It made me remember I actually own one!
Quote from webpage: "The COW filesystem for Linux that won't eat your data"
Quote from webpage: "It's the job of the filesystem to never lose your data: anything that can be repaired, will be."
Quote July 2025: "I've been digging through the bug tracker and polling users to see what bugs are still outstanding, and - it's not much. So, the experimental label is coming off in 6.18."
I was a big fan of bcachefs and was looking forward to deploying it across ~100 machines in production. Unfortunately, the removal from the mainline kernel has seriously undermined its credibility for use within a company environment.
A filesystem needs time to mature, and that's fine — but the official webpage should clearly display a warning that this is still experimental and that its long-term support situation is uncertain. People evaluating it for production use deserve to know what they're getting into.
Have you looked at the history of data loss bugs in other filesystems?
If you look at actual data - the frequency of user-impacting data loss bugs - bcachefs has been doing quite a bit better than other filesystems have /after/ they've dropped the experimental label.
We just live in the age of hype and overhype and excitement that turns into drama. Everyone just needs to chill out :)
And I don't hide stuff like this: compare the impact of the bug itself to what you'd see in other filesystems. We knew basically from the first report what caused it and were able to communicate to users what happened. It wasn't random, it wasn't silent data loss - the error messages were good enough that users could understand what was going on.
Talk to people who are actually using it. I know of quite a few people who are now migrating from ZFS because they want something more reliable.
Fair point, and I appreciate the transparency around data loss bugs.
What about long-term sustainability? Looking at the git history, ~97% of bcachefs commits are yours. What happens if you step back, burn out, or can't continue for any reason? Is there a fallback plan - a community or team that could realistically take over?
For anyone evaluating this for production use in a company, that's the question that matters most. A filesystem isn't a library you can swap out — you're locked in for years. The technical quality can be excellent and it still won't pass a risk assessment if it depends on a single person.
I'm not the only person who knows the codebase well enough to do actual work (there's at least one other person I'd be comfortable with giving commit access to), and it's clean and documented pretty well for a filesystem.
And the sustainability equation just changed dramatically, thanks to Claude.
I've been using it the past week for a lot of stuff, and I should really write something longer up, but suffice it to say that I'm impressed. It can't do much independently yet, but it's been able to handle a /lot/ of the grunt work - the other night I had it go through open GitHub issues, fix what it could and take notes on the rest, and I came back to 8 patches for actual bugs, all of them correct, with excellent commit messages. Holy shit, we're living in the future :)
It can't design for shit, it doesn't understand performance implications when writing code (I've noticed this repeatedly); most of how I'm using it is "pair programming". But I'm finally feeling like I'll be able to take a vacation in the near future and still keep up with everything.
Two other people are using it (with heavy review, actively telling it to go back and research topics more) for a big update to the Principles of Operations. Nice.
Basically, the sustainability aspect comes down to writing clean, maintainable, well documented code - and I think I've accomplished that. One holy shit moment the other day was watching Claude navigate /and understand/ btree and journalling code - and making the connection between the way I use assertions and linear typing/dependent typing. All those years spent on that code developing new ways of thinking about and using assertions to make that stuff practical for one person to do... it's paid off.
Beyond that, the real challenges are pushing the boundaries of how introspectable, understandable and transparent complex systems code can be for the end user - and bcachefs is pushing boundaries there. Making a habit of writing pretty printers for absolutely everything means that now our tracing is the best you'll see in software like this, and well integrated with 'bcachefs fs top'. The timestats stuff that I started well over a decade ago - we've now got a new 'bcachefs fs timestats' interface for that, which is already making debugging performance issues dramatically easier than it has been in the past.
I've been using Wayland for 5 years and have never looked back at X11. I think this is the right way forward, and it's the right time to remove the old, insecure X11 backend. GNOME shouldn't be bloated with legacy stuff.
Your experience is not universal. On an intel cpu/gpu laptop, I have zero issues.
But on an AMD/Nvidia desktop it's unusable because it's buggy as all hell - endless glitches in dozens of applications. At first it appears fine, and then you hit subtle stuff like letters not appearing in VS Code when you type, OBS refusing to record, etc.
I have experience with all three manufacturers. We deploy them at work, and the integrated AMD GPUs work just as well as the Intel systems. However, I can't say much about discrete AMD GPUs or older hardware.
Just yesterday I changed one nvidia system to the proprietary Wayland driver and started gnome with a three monitor setup. Works like a charm.
Frontend Developer for Desktop Applications (Munich / Germany) - AI Vision Platform
Are you interested in autonomous developing and finding creative solutions on your own?
Then join our Wahtari team as Frontend Developer (m/f/d) and take an active role in shaping the future of our hardware and software platform for machine vision tasks.
You will work in a highly focused, independent and enthusiastic team, where you can apply your dev skills in a highly motivated environment. Your responsibilities cover the whole lifecycle of software products: design, development, testing, deployment, maintenance and improvement. You will also use your expertise to solve scalability issues and to expand Wahtari’s product portfolio.
We have a similar high-performance AI stack written in Go, capable of loading many different models from different frameworks. This is the work of several years. Just saw your comment and thought about our company-internal talk of releasing everything under an open source license. Thanks for reminding me :)
What are your use-cases?
Wow, make it open source quickly!!! :hype:. It's a classic Python REST API for model serving, but we have very low latency constraints. Rewriting it in a more performant backend language, e.g. Go or Rust, would substantially reduce resource usage (by reducing the need for horizontal scaling). Pre-baked model serving frameworks, e.g. Nvidia's Triton, aren't an option, since we have to query a feature store and do some input feature tracking in between. Go seemed like an efficient, developer-friendly choice, but to this day there aren't any well-maintained model inference libraries in Go...
We used Triton Inference Server (with a Golang sidecar to translate requests) for model serving and a separate Go app that handled receiving the request, fetching features, sending to Triton, doing other stuff with the response, serving. This scaled to 100k QPS with pretty good performance but does require some hops.
In general, writing pure Go inference libraries sucks: not easy to do array/vector manipulation, not easy to do SIMD/CUDA acceleration, cgo is not Go, etc. I wrote a fast XGBoost library at least (https://github.com/stillmatic/arboreal) - it's on par with C implementations, but doing anything more complex is going to be tricky.
We are working with a huge Go and Python codebase, and Python is just a pain when it comes to using all system resources. We moved many parts to C++, which are called and handled by goroutines. The outcome was a big success.
This proposal/change is a big step forward, especially for the deep learning community.
Quote: "In PyTorch, Python is commonly used to orchestrate ~8 GPUs and ~64 CPU threads, growing to 4k GPUs and 32k CPU threads for big models. While the heavy lifting is done outside of Python, the speed of GPUs makes even just the orchestration in Python not scalable. We often end up with 72 processes in place of one because of the GIL. Logging, debugging, and performance tuning are orders-of-magnitude more difficult in this regime, continuously causing lower developer productivity."
Quote: "We frequently battle issues with the Python GIL at DeepMind. In many of our applications, we would like to run on the order of 50-100 threads per process. However, we often see that even with fewer than 10 threads the GIL becomes the bottleneck. To work around this problem, we sometimes use subprocesses, but in many cases the inter-process communication becomes too big of an overhead. To deal with the GIL, we usually end up translating large parts of our Python codebase into C++. This is undesirable because it makes the code less accessible to researchers."
This requirement could have been well served by a GIL per thread and an arena-based (shared) object allocation model. Every other use case would have been unaffected.
Now we change the world for everyone and put most library developers through a valley of despair for 5+ years, just so that a very few narrow use cases get the benefits they want.
Good point. Did the Meta and Deepmind devs really miss this?
I try to avoid Python as much as possible, because I mainly work with Go & C++, and multi-threading in those languages is just better (imho). Bringing Python a step forward and making it future-proof might be a good thing... even if that means breaking some things? I'm not sure removing the GIL is the right step, but there is a big performance gap to close. Or maybe the AI community should move to a better-suited language? Having Python code in production just feels so wrong, especially when a rewrite in another language shows the performance gap.
The PEP notes subinterpreters as an alternative and says they can be considered a valid approach to achieving parallelism. However, it does not discuss why nogil was given preference. I guess that's OK, because the PEP is about nogil.
I'm not sure whether the SC has considered alternative approaches, but it would be surprising if they haven't.
The use cases of the ML and AI world are very important though, as they massively contribute to Python's popularity. Thanks to Python, researchers and developers don't have to use different languages and library ecosystems for developing and scaling models.
That said, subinterpreters sound like they could be a feasible solution for many use cases as well.
There may be a misunderstanding about terminology here. COVID-19 is the clinical disease caused by the SARS-CoV-2 virus. If you are infected by the virus but asymptomatic then you don't "have Covid".
Can't say too much about it. I didn't really follow the whole Covid discussion and just continued my lifestyle: eating healthy (fresh, self-made food), doing sports and maintaining a good mental state. My family and circle did the same. Friends who were vaccinated got Covid several times, but they're also fine now... My grandmom (89 years), also unvaccinated, didn't get Covid. Her sister (vaccinated) got it. Both are healthy now... Just let everybody do their own thing... The whole hate in the communities was unnecessary.
Attitudes like yours are why it kept spreading instead of petering out. I'm not saying that people need to get vaccinated, but I'm god damn sick of people not caring about spreading disease, whether vaccinated, or not. Humanity, as a whole, is in a war with disease. We don't need collaborators. All it takes for evil to triumph is for good people to do nothing.
> Healthy, young people who were intentionally exposed to the coronavirus SARS-CoV-2 developed mild symptoms — if any — in a first-of-its-kind COVID-19 human-challenge study.
That doesn't mean they weren't contagious.
> The first participants received a very low dose — roughly equivalent to the amount of virus in a single droplet of nasal fluid — of a virus strain that circulated in the United Kingdom in early 2020. Researchers anticipated that a higher dose would be needed to infect a majority of participants, says Andrew Catchpole, chief scientific officer of hVIVO. But the starting dose successfully infected more than half of the participants.
> The virus replicated incredibly rapidly in those who became infected. On average, people developed their first symptoms and tested positive, using sensitive PCR tests, less than two days after exposure. That contrasts with the roughly five-day ‘incubation period’ that real-world epidemiological studies have documented between a probable exposure and symptoms. High viral levels persisted for an average of 9 days, and up to 12 days.
> Attitudes like yours are why it kept spreading instead of petering out.
Defining “why” can be a complex exercise, but let’s take a very simple approach: if there were not attitudes like the GP and everyone who could got vaccinated, would COVID have petered out? I don’t think so.
It’s plausible that, if enough production capacity had existed to rapidly vaccinate, say, 85% of the world population, evenly distributed, that it would have worked. But getting a uniform 85% was never in the cards, and, starting some time in 2021, the vaccine was nowhere near effective enough for a two-dose series to suppress transmission even with 100% coverage.
Sorry, but the idea of eliminating Covid with the vaccines we have was a nice fantasy, but it was not going to happen.
(If the vaccine were much better and had good worldwide coverage, then maybe. The smallpox vaccine was good enough. The measles and chickenpox vaccines are plausibly good enough. The oral polio vaccine might be good enough, but I have serious doubts that the strategy with which it’s used is actually appropriate. Somehow there does not appear to be community transmission of polio in New York right now, and I’m a bit surprised.)
(People under about 23 years old in the US have generally received the injectable polio vaccine, not the oral vaccine. The injectable vaccine seems to be generally considered inadequate to prevent transmission. Maybe the under 23 year old NY population coupled with modern hygiene is not actually able to sustain an outbreak?)
My opinion doesn't lead to harm to other people, so you'll understand why I don't respect yours. Your right to swing your infected spittle ends where other people's mouths and noses begin.
> My opinion doesn't lead to harm to other people, so you'll understand why I don't respect yours. Your right to swing your infected spittle ends where other people's mouths and noses begin.
Don't you have the ability to stay home and avoid breathing near other humans if you're so concerned? I'm confused by that statement. How is demanding reduced freedom for him more just than simply exercising your own?
To be fair, Covid was never all that dangerous (in a statistical sense) for relatively young, relatively healthy people.
Of course, in the beginning that wasn't clear. And you might still want to get vaccinated, to decrease the likelihood of you passing the virus to your older relatives.
See my other comment. My circle consists of people up to 89 years. Thanks for the hint, but I am not convinced of the vaccine. I'll continue doing my stuff and it's my own responsibility.
I was very interested in the results of my spouse's antibody test! It was negative, and we thought for sure she had antibodies from infection. I have no scientific evidence, but she has genetic abnormalities in certain blood proteins, and I wonder if that helps her resist the infection!
Antibodies are only measurable for a short time; long-term ability to defend is "learned" by the immune system but not measurable in any ordinary way. Here in the coastal California area there is a lot of social pressure around vaccination. Random people still insist that vaccination is important for healthy adults and sometimes for those under 18.
https://michael-nehls.de/