Hacker News | new | past | comments | ask | show | jobs | submit | xenadu02's comments

Is the disconnect here that in many datasets there is some implicit distribution? For example, if we are searching for English words we can assume that very few words or sentences start with "Q" or "Z" while many start with "T". Or if the first three lookups in a binary search all land on entries starting with "T", we are probably being asked to search just the "T" section of a dictionary.

Depending on the problem space such assumptions can prove right enough to be worth using despite sometimes being wrong. Of course if you've got the compute to throw at it (and the problem is large) take the Contact approach: why do one when you can do two in parallel for twice the price (cycles)?
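Purely as an illustration (none of this is from the thread): the "assume a distribution" idea is what interpolation search does. Instead of always bisecting, it guesses a probe position from an assumed roughly-uniform key distribution, which pays off when the assumption holds and degrades gracefully when it doesn't. A minimal sketch with numeric keys:

```python
def interpolation_search(sorted_vals, target):
    """Search a sorted list by guessing the target's position from an
    assumed roughly-uniform distribution, instead of always bisecting.
    Narrows the range like binary search when a guess is off."""
    lo, hi = 0, len(sorted_vals) - 1
    while lo <= hi and sorted_vals[lo] <= target <= sorted_vals[hi]:
        if sorted_vals[hi] == sorted_vals[lo]:
            mid = lo  # all remaining keys are equal; any probe works
        else:
            # Linear interpolation: estimate where target sits in [lo, hi].
            frac = (target - sorted_vals[lo]) / (sorted_vals[hi] - sorted_vals[lo])
            mid = lo + int(frac * (hi - lo))
        if sorted_vals[mid] == target:
            return mid
        if sorted_vals[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 1000, 7))  # uniformly spaced keys: the assumption holds
print(interpolation_search(data, 693))  # prints 99
```

With string keys (the dictionary example) you'd first map prefixes to numbers before interpolating; the principle is the same.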


> The most important question was the structured cabling in the walls; was it CAT-5E or CAT-6, or even CAT-6A? Remember from the last post, 10GBASE-T might work over short runs of -5E (even though officially it's not meant to be able to).

This is not quite correct.

The primary problem is cross-talk. Copper wire itself will carry the relevant frequencies up to 100m without issue, but even with balanced pairs the balancing is not perfect and the "dirty paper precoding" is not perfect, so some cross-talk will occur. How far you can go with Cat-5e depends on how well the wire is twisted, how many wires are bundled together, whether there are any loops or tight bends, and other factors. Cat-6A guarantees less cross-talk with more twists, better balancing, and a plastic separator inside the cable to make the cross-talk more regular and thus easier to cancel out.

Bottom line: for almost any normal home or apartment, any quality Cat-5e cable that is properly terminated will carry 10GBase-T without issue. In fact, if you do have problems I would re-terminate the cable first before assuming you need to run new cable. Cat-6 or 6A just isn't necessary.

As a PSA: beware of "CCA". I've noticed Amazon and eBay are absolutely flooded with cheap Chinese electrical and networking cable that shows nice shiny copper in the pictures but is actually copper-clad aluminum. If the listings mention it at all, they code it as "CCA" cable without explaining what that means.

CCA cable cannot, by definition, be Ethernet cable. I won't get into the full technical details, but the standard was amended to clarify that only pure copper conductors are acceptable for Ethernet. Personally I would not dare use CCA for anything. It has lower performance and lower current-carrying capacity for the same wire diameter (inherent to aluminum), and it introduces the risk of oxidation and loose connections, because people will treat them as copper connections when aluminum needs special installation procedures and connectors to avoid working loose over time. For electrical connections especially, this not only can but absolutely will lead to a fire over time if not treated with the appropriate care. All it takes is a little mechanical action scraping off the thin copper layer and you now have an effectively aluminum wire: a time bomb ticking away.


> Cat-6A guarantees less cross-talk with more twists, better balancing, and a plastic separator inside the cable to make the cross-talk more regular and thus easier to cancel out.

This isn't quite right. Cat6A guarantees the frequency range, and therefore the effective bandwidth, that is available; it doesn't dictate the physical cable construction. A common structured cable for Cat6A is U/FTP, which has no plastic separator and no outer shield; instead each pair is wrapped in foil and is actually less tightly twisted than even Cat5e. Anyone can buy U/FTP cable, but to actually be Cat6A it has to be installed correctly and the installation certified to support Cat6A speeds.


This comment is completely incorrect.

kqueue VNODE events are delivered so long as your process has access to the file. There is no "same-process" notification filter.


I have no idea why they aren't using kqueue, but it works on macOS and FreeBSD and has for years.

You want EVFILT_VNODE with NOTE_WRITE. That's hooked up to VNOP_WRITE in the kernel, the call made to the relevant filesystem to actually perform the write.
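As a sketch of that filter in practice: Python exposes kqueue through the `select` module on FreeBSD and Darwin. Something like this (the helper name is mine) blocks until the watched file receives a write:

```python
import os
import select

def wait_for_write(path, timeout=5.0):
    """Block until the file at `path` is written to, using a kqueue
    EVFILT_VNODE filter with NOTE_WRITE (FreeBSD/Darwin only).
    Returns True if a write event arrived, False on timeout."""
    fd = os.open(path, os.O_RDONLY)
    try:
        kq = select.kqueue()
        ev = select.kevent(
            fd,
            filter=select.KQ_FILTER_VNODE,
            flags=select.KQ_EV_ADD | select.KQ_EV_CLEAR,
            fflags=select.KQ_NOTE_WRITE,
        )
        # Register the event and wait for at most one occurrence.
        events = kq.control([ev], 1, timeout)
        return bool(events)
    finally:
        os.close(fd)
```

Guard calls with `hasattr(select, "kqueue")`: the module only provides it on BSD-derived systems, so this sketch is a no-op elsewhere.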


If you're interested, you can use kqueue on FreeBSD and Darwin to watch the inode for changes. It's faster than polling with a syscall, especially if all you need is a wakeup when the file changes.

Most edible bananas are seedless, and most cultivated (human-grown) bananas are genetic mutants with triploid chromosomes (though a few are tetraploid or diploid). Getting them to produce functional reproductive structures at all, let alone viable seeds, is very difficult. There are ongoing efforts to cross-breed them with their wild cousins and to preserve genetic diversity.

This is wildly inaccurate.

Windows 3.11 was a hypervisor running virtual machines: the 16-bit Windows virtual machine (within which everything was cooperatively multitasked), the 32-bit headless VM that ran 32-bit drivers, and any number of V86 DOS virtual machines.

Win9x was similar in the sense that it had the Windows virtual machine running 32-bit and 16-bit Windows software alongside V86 DOS VMs. It did some bananas things by having KERNEL, USER, and GDI "thunk" between the environments to not just let 16-bit programs run but let them continue interacting with 32-bit programs. So no: Win9x was in fact 32-bit protected mode with pre-emptive multitasking.

What Win9x prioritized was compatibility. That meant it supported old 16-bit drivers and DOS TSRs, among other things. It also did not have any of the modern notions of security or protection: any program could read any other program's memory or inject code into it. As you might expect, the combination of awful DOS drivers and constant third-party code injection was not a recipe for stability, even absent bad intentions or incompetence.

Windows 2000/XP went further and degraded the original Windows NT design by pulling things into kernel mode for performance. GDI and the window manager ran entirely in kernel mode; see the many, many security vulnerabilities resulting from that.


This is correct. Win9x did have memory protection, it just made an intentional choice to set up wide open mappings for compatibility reasons.

WSL9x uses the same Win9x memory-protection APIs to set up the mappings for Linux processes, and in this context the memory protection is solid. The difference is simply that there is no need to subvert it for compatibility.


I do some machining as a hobby (I get enough of computers at work and elsewhere) so here's a similar tip:

Don't treat your lathe faceplate as a precious artifact. Need to clamp an oddly shaped part to it? Drill and tap some holes. That's what it is for.

Is that reamer too long to fit? Cut it shorter.

Modify your tools to make you happier or more productive.

What if the modification doesn't turn out well? Great! You learned something. Make the next one better.


IOKit was almost done in Java; C++ was the engineering plan to stop that from happening.

Remember: there was a short window of time where everyone thought Java was the future and Java support was featured heavily in some of the early OS X announcements.

Also, DriverKit's Objective-C model was not the same as userspace's. As I recall, the compiler resolved all message sends at compile time; it was much less dynamic.


Mostly because they thought Objective-C wasn't going to land well with the Object Pascal / C++ communities, given those were the languages used on Mac OS previously.

Note that Android Things did indeed use Java for writing drivers. And on Android, since Project Treble and the new userspace driver model introduced in Android 8, drivers are a mix of C++, Rust, and some Java, all talking to the kernel via Android IPC.


There was also the Java-like syntax for ObjC but I don’t think that ever shipped.


> there was a short window of time where everyone thought Java was the future

Makes me think of how plists in macOS are XML, because back then XML was the future.
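For anyone who hasn't seen one, an XML plist looks like this (a minimal hypothetical example; the key and value are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleIdentifier</key>
    <string>com.example.demo</string>
</dict>
</plist>
```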


> Someone will fulfil the need as there is a high incentive to.

Unless the capital cost to compete is too high and the risk of existing manufacturers undercutting you is very real. Plus it can take 5-10 years or more to build a new fab, debug and iterate your process, and then start shipping product.

Markets are prone to natural distortions. This is one form of that. It can be perfectly natural for all potential competitors to choose not to compete no matter how much demand exists.

Frankly I'd expect nationalization of some of the DRAM makers before we see the rise of useful competitors. The more likely scenario is government pressure, up to and including arresting executives, to rattle the cages of the existing players who are way better placed to expand production quickly for relatively low capex. Not that I think any action is likely in the short term. My guess is the existing players are betting on an AI bubble pop so they don't see the use in really expanding capacity only to be left with idle fabs later. None of us really knows.

