floathub's comments | Hacker News

It's wild to me Time Machine works on your network. Are you just doing "first backups" over and over again, or have you somehow achieved the very rare state where Time Machine can run for, say, a week at a time without falling over?

Sorry, this is snarky and off topic, but I'm nostalgic for the days when Time Machine "just worked".


For a very long time I thought Time Machine itself had become flaky, and I'm sure it's partially to blame, but with my current setup I've literally never observed it corrupt a backup and have to start over.

Before this, I was using one of the common consumer Synology NAS boxes that are often recommended. The NAS didn't report any errors with the drives or its own hardware, but at least once a month TM would glitch on at least one of my home laptops.

My new setup is an Asus FLASHSTOR 12 Pro Gen2 FS6812X. For a year now it's been running without a single apparent TM glitch while backing up multiple personal laptops and my work laptop. Sometimes I'm plugged in and sometimes I'm backing up over WiFi, but it's always worked.

I tried various recommended settings for the Synology and nothing helped, so I strongly suspect that the Synology network protocol (SMB, AFP, etc.) implementations were either buggy themselves or at least not compatible with quirks in Apple's implementations. Synology -> Asus fixed all my TM problems instantly and seemingly permanently!


Backup failed.

Again. I’m on a Synology, and your comment is interesting.


I can't remember the exact phrasing, but are you talking about the error message that essentially says: "The Tardis is broken. Your backup has diverged into an entirely separate timeline, and I have no way of reconciling it. You may now sacrifice an entire weekend to do an initial backup again."?

I've been on a lucky streak for several years now, where I haven't gotten that one on any of my devices.

"Preparing backup..." taking an unreasonable amount of time is a regular occurrence, and some edge cases around adjusting TM backup size quotas aren't handled well. But other than that, TM has been working reasonably well for me to back up 10 TB over SMB to a Synology NAS.

My gripe is much more with Apple's abysmal support for SMB and NFS, especially after deprecating AFP. I've been back and forth between them over the years and over several OS versions, and their implementations for both are just terrible.

But over time SMB, for me, proved slightly more stable and performant with the right tweaks in smb.conf, and its authentication and permissions/ownership are easier to deal with than NFS's, so I stuck with that.
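
For reference, the tweaks in question are mostly the vfs_fruit ones. A minimal sketch of the share section, assuming Samba 4.8 or newer (the share name, path, and size cap are placeholders, not my exact config):

    [TimeMachine]
        # placeholder path; point this at your backup volume
        path = /volume1/timemachine
        vfs objects = catia fruit streams_xattr
        fruit:metadata = stream
        fruit:model = MacSamba
        fruit:time machine = yes
        # optional cap so TM can't eat the whole volume
        fruit:time machine max size = 2T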

I also yearn for the days where TM just worked, because somehow, the alternatives are even worse:

- Arq Backup does some things quite well, which is why I use it as part of my 3-2-1. But some of its bugs and implementation decisions just scream "hobby grade" to me.

- Kopia looks interesting, but it's not mature enough yet. Failed for me with absolutely cryptic error messages during repo init both times I tried it, with versions several months apart.

- Restic, Borg / Vorta: Not turnkey enough for me.


> "Preparing backup..." taking an unreasonable amount of time is a regular occurrence,

TM heavily throttles disk I/O used for backing up in order to ensure that normal user activity isn't affected. That makes TM appear dramatically slower than you'd expect, which greatly annoys me. This becomes obvious once you run the following command, which makes both the preparing and transferring phases run much closer to the theoretical speed you'd expect:

    sudo sysctl debug.lowpri_throttle_enabled=0
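
If you try this, note that (at least in my experience) the setting doesn't survive a reboot, and you can flip it back by hand once the backup finishes:

    # check the current value
    sysctl debug.lowpri_throttle_enabled

    # re-enable low-priority I/O throttling
    sudo sysctl debug.lowpri_throttle_enabled=1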


> TM heavily throttles disk I/O used for backing up

That makes sense, and I usually quite like that behavior. I barely ever notice an impact when backups are running.

However, this happens every time on one machine (an Intel iMac), and semi-regularly on another (an M3 MBP): even after a fresh restart, after giving mds_stores some time to settle down, with the most recent backup just hours ago, and with no significant changes on disk since.

In a situation like that, I would expect the "Preparing backup..." stage to just take a second to create an APFS snapshot, and maybe a minute to diff that snapshot against the remote state. But not 10+ minutes.
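
For what it's worth, you can watch what that stage is actually doing with the stock tmutil tool (standard subcommands, nothing exotic):

    # list the local APFS snapshots TM has created
    tmutil listlocalsnapshots /

    # show the current backup phase and progress
    tmutil status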

But thank you for the hint about that sysctl parameter! I will certainly give this a try.


TM makes my M2 Air chug when it runs against a local Samsung SSD. It does no such I/O throttling.


Time Machine to a network share via Samba has been pretty reliable for me - only once has it corrupted itself in the five+ years I’ve been using it.

Amusingly enough Time Machine to a local drive failed completely.


I've been using Time Machine for six months pointing to a network share on my TrueNAS box and it has worked fine. Sometimes a backup will fail when the Mac is taken off my home network (it doesn't play nice with Tailscale for whatever reason) but it will always work again if I tell it to retry the failed backup once I'm back on the local network.


> I'm nostalgic for the days when Time Machine "just worked".

I’ve used it for a long long time and never knew these glory days.


And then in vim you can spawn a shell to run ... oh, never mind.


Free software has never mattered more.

All the infrastructure that runs the whole AI-over-the-internet juggernaut is essentially all open source.

Heck, even Claude Code would be far less useful without grep, diff, git, head, etc., etc., etc. And one can easily see a day when something like a local sort-of Claude Code talking to open-weight and open-source models is the core dev tool.


It's not just that open source code is useful in an age of AI, it's that the AI could only have been made because of the open source code.


> All the infrastructure that runs the whole AI-over-the-internet juggernaut is essentially all open source.

Exactly.

> Heck, even Claude Code would be far less useful without grep, diff, git, head, etc.

It wouldn't even work. It's constantly using those.

I remember reading a Claude Code CLI install doc and the first thing was "we need ripgrep" with zero shame.

All these tools basically run on top of Linux, too: on Windows, Claude Code effectively installs a full Linux environment (WSL) on the system.

It's all open-source command-line tools and an open-source OS, piping one program into the other. I've been on Linux on the desktop (and servers ofc) since the Slackware days... And I was right all along.


The primary selling point of Unix and Unix-like operating systems has always been composability.

Without the ability to string together the basic utilities into a much greater sum, Unix would have been another blip.
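
A trivial sketch of what that buys you, using nothing but stock tools piped together (the repo is hypothetical; the shape is the point):

    # which files have churned the most over a repo's history
    git log --name-only --pretty=format: | grep -v '^$' | sort | uniq -c | sort -rn | head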


> Free software has never mattered more.

But the Libre part of Free Software has never mattered less, or at least so TFA argues, and while I could quibble with the point, it's not wrong.


Wow, some corps could offload some of their costs to "the community" (unpaid labor), while end users are as disenfranchised as ever! How validating!


Why isn't LLM training itself open-sourced? With all the compute in the world, something like Folding@home here would be killer.


Data bandwidth limits distributed training under current architectures. Really interesting implications if we can make progress on that.


It limits it, but doesn't prohibit it. See https://www.primeintellect.ai/blog/intellect-3 - still useful and can scale enormously. It takes a particular shape and relies heavily on RL, but it's still big.


What bandwidth limits? I'm assuming the forward and backward passes have to be done sequentially?


Yes, and also passing data within each layer.
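
Back-of-envelope numbers make the problem concrete. A sketch assuming plain data parallelism on a 7B-parameter model with fp16 gradients (all figures rough):

    gradients to all-reduce per step: 7e9 params x 2 bytes ~= 14 GB
    over a ~100 GB/s datacenter interconnect:  ~0.14 s
    over a 1 Gbit/s home link (~0.125 GB/s):   ~112 s

So a volunteer node would spend roughly two minutes per optimizer step just shipping gradients, which is why Folding@home-style training needs heavy compression, much less frequent synchronization, or a different architecture entirely.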


It is in some cases. NVIDIA's models are open source, in the truest sense that you can download the training set and training scripts and make your own.


It's either illegal or extremely expensive to source quality training material.


Yeah, turns out that training a model without scraping and overloading the whole of the Internet, and without ignoring all the licenses and basic decency, is actually hard & expensive!


Well it is, it's in the name "OpenAI". /S


The power generated from the Niagara River stations was traveling on an international "grid" between Canada and the US in the late 1890s.


Man, how could they not wait 2.5 weeks until April 1!!!


Emacs will solve this too:

https://github.com/tanrax/org-social

:-)


This may help; it has an example pizauth config (scroll down to "Authenticating with pass and pizauth"):

https://stuff.sigvaldason.com/email.html
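
For the impatient, the rough shape of a pizauth account block is something like this (from memory, so treat the field names as approximate and the client_id as a placeholder; the linked page and the pizauth README have working examples, and the URIs below are the usual Microsoft ones):

    account "officesmtp" {
        auth_uri = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize";
        token_uri = "https://login.microsoftonline.com/common/oauth2/v2.0/token";
        client_id = "...";
        scopes = ["https://outlook.office365.com/SMTP.Send", "offline_access"];
    }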


This is another similar resource, with some additional stuff about using mu4e-org:

https://stuff.sigvaldason.com/email.html


The "watch" method is so awesome:

    mpv https://live0.emacsconf.org/gen.webm


I used Emacs for several years before I discovered "project" (it's built in). If you're navigating dired trees or similar to find files or grep for strings in groups of files, this is like magic:

C-x p f (find any file in the current "project", e.g. git repo)

C-x p g (grep the whole project super fast)

It's embarrassing how long it took me to realize it was there all along. :-)


I consistently use `m` to mark relevant files/directories in dired mode, and then `A` to search for a regex in all the marked files. It doesn't seem like I'm missing anything by not relying on such a project approach.

