Hacker News | 3836293648's comments

Merging autism and Asperger's was not a mistake. At the time there wasn't enough science to separate them.

There are separations to be made within autism, absolutely 100%, but the separations they had made were also definitely 100% wrong.


Don't they get promoted to short?

Either you used nightly (explicitly non-stable) Rust instead of the default stable Rust; or you used dependencies that have since been yanked due to security issues; or you didn't commit your lockfile and implicitly upgraded everything, because generating a new lockfile picked up a really wide range of compatible versions.

All of these options require you to go out of your way to enable breakage.

You could also be in the super unlucky state of using something in std that was later proven unsound, which is the only case where Rust will break your code on stable (misused unsafe in std).


`strip_suffix` won't break with new compiler versions. Anything explicitly imported takes precedence over the prelude; otherwise everything would be a breaking change and would have to wait for an edition.

https://rust.godbolt.org/z/4bsb91Krf is code which calls our strip_suffix in 1.40

Switch to Rust 1.50 and now it's silently calling the stdlib strip_suffix. I actually wasn't expecting it to be silent. And obviously, if the two have the exact same behaviour (mine panics instead, to show which one is being called) you wouldn't even notice, but it is a change.
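The godbolt example can be reproduced in a few lines (the trait name here is my own, not the one from the original link):

```rust
// A custom trait providing strip_suffix on str. Before Rust 1.45,
// str had no inherent strip_suffix, so method syntax resolved to
// this trait method. Once the inherent method was stabilized, it
// silently took priority.
trait MyStrip {
    fn strip_suffix(&self, suffix: &str) -> Option<String>;
}

impl MyStrip for str {
    fn strip_suffix(&self, _suffix: &str) -> Option<String> {
        panic!("custom trait method called");
    }
}

fn main() {
    // On Rust >= 1.45 the inherent str::strip_suffix exists and
    // wins over the trait method, so this does NOT panic; on an
    // older compiler the trait method would have fired instead.
    assert_eq!("hello.txt".strip_suffix(".txt"), Some("hello"));
}
```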


Oh, wow. I am wrong. Then much of the Rust community must be wrong too, as this is commonly cited when discussing breakage. This is awful.

But on the other hand, it could be a bug, as the trait resolver is commonly cited as the buggiest part of the language. I'm scared of the breakage if they ever fix it, though.


Probably a key thing you misunderstood is that &str wasn't from the prelude. It's a type in the Rust language itself; that's why it has a lowercase name like u16 or bool.

So we didn't bring str::strip_suffix in from the prelude in preference to our custom trait; we made a string literal, and those have type &'static str: an immutable reference to a string which lives forever. So the "prelude doesn't win" rule does not apply to &str, because it didn't come from the prelude.

If we were talking about a type which implements Iterator, for example, new Iterator features would come from Iterator, which is in the prelude; you didn't specifically ask for Iterator, so the things you did ask for beat Iterator. But here the language's primitive type grew new methods, a thing Rust does but many languages don't: Rust has methods on pointers and bytes and everything, whereas a language like Java or C++ can only put methods on classes, not on the ordinary types.
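A quick illustration of that last point, with methods living directly on primitive types:

```rust
fn main() {
    // Integer primitives have inherent methods:
    assert_eq!(3_u16.pow(2), 9);
    // So does bool:
    assert_eq!(true.then_some(1), Some(1));
    // Even raw pointers have methods:
    let x = 42;
    let p: *const i32 = &x;
    assert!(!p.is_null());
}
```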


Oh, yes, of course. Switching to String makes it work as I expected.

I thought the builtins were defined in core and reexported by the prelude (they are defined in core, they're just implicitly in scope anyway).

But I still think expected behaviour is that builtins should have the same precedence as the prelude.


The reason it works with `String` is that trait methods get priority over applying autoderef (which is needed to go from `&String` to `&str` to select `str::strip_suffix`). If, however, you already have a `&str`, autoderef isn't needed and the inherent method wins over the trait method. At no point does the prelude come into play.
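That resolution order can be demonstrated directly; here the trait is implemented for String (my own sketch, not the playground link's exact code), so it is found before autoderef ever reaches str:

```rust
trait MyStrip {
    fn strip_suffix(&self, suffix: &str) -> Option<String>;
}

// Implemented for String: method resolution finds this at the
// `String` step, before autoderef reaches `str` and exposes the
// inherent str::strip_suffix.
impl MyStrip for String {
    fn strip_suffix(&self, suffix: &str) -> Option<String> {
        // Explicitly call the inherent str method on the deref'd slice.
        self.as_str().strip_suffix(suffix).map(|s| format!("custom:{s}"))
    }
}

fn main() {
    let owned = String::from("hello.txt");
    // Trait method wins on String: no deref was needed to find it.
    assert_eq!(owned.strip_suffix(".txt"), Some("custom:hello".to_string()));

    let slice: &str = "hello.txt";
    // On &str no deref is needed either way, and the inherent
    // method wins over any trait method.
    assert_eq!(slice.strip_suffix(".txt"), Some("hello"));
}
```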

Not technically. But that's not the issue. The issue is that trait resolution and imports are treated inconsistently and that is a mistake.

Compare to [this](https://play.rust-lang.org/?version=stable&mode=debug&editio...)


`strip_suffix` will indeed break with new compiler versions because inherent methods always have priority over trait methods.
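Even when shadowed, the trait method stays reachable through fully qualified syntax, which is the usual escape hatch (trait name here is illustrative):

```rust
trait MyStrip {
    fn strip_suffix(&self, suffix: &str) -> Option<String>;
}

impl MyStrip for str {
    fn strip_suffix(&self, suffix: &str) -> Option<String> {
        // Path syntax also prefers the inherent item, so this calls
        // the stdlib method, not itself.
        str::strip_suffix(self, suffix).map(String::from)
    }
}

fn main() {
    let s = "a.txt";
    // Method syntax: the inherent method wins.
    assert_eq!(s.strip_suffix(".txt"), Some("a"));
    // Fully qualified syntax: the trait method is called explicitly.
    assert_eq!(MyStrip::strip_suffix(s, ".txt"), Some("a".to_string()));
}
```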

It's insane that this is going into an LTS. It's the kind of experiment I'd expect them to try out in a non-LTS release and revert in LTSes until it's fully usable, like they did with Wayland as the default, which started in 2017.

No? It's not because it's a cache; it's because they're scared of letting you see the thinking trace. If you got the trace you could just send it back in full when it got evicted from the cache. This is how open-weight models work.

The trace goes back fine, that's not the issue.

The issue is that if they send the full trace back, it will have to be processed from the start if the cache expired, and doing that will cause a huge one-time hit against your token limit if the session has grown large.

So what Boris talked about is stripping things out of the trace that gets sent back to regenerate the session when the cache expires. Doing this helps avoid burning up the token limit, but it is technically a different conversation, so if CC chooses poorly about which parts of the context to strip, Claude ends up all scatter-brained.
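The arithmetic behind that one-time hit can be sketched; every number below is an assumption for illustration, not Anthropic's actual pricing:

```rust
fn main() {
    // Hypothetical session size and pricing ratio (assumptions):
    let context_tokens: f64 = 150_000.0; // a long-running session
    let cache_read_discount: f64 = 0.1;  // cache reads assumed ~10x cheaper

    // While the prefix cache is warm, a turn pays only the
    // discounted rate on the existing context:
    let warm_cost = context_tokens * cache_read_discount;

    // If the cache expires, the whole trace is reprocessed at the
    // full input rate in one go:
    let cold_cost = context_tokens;

    assert_eq!(warm_cost, 15_000.0);
    assert!(cold_cost / warm_cost >= 10.0);
    println!("warm: {warm_cost} token-equivalents, cold: {cold_cost}");
}
```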


>and doing that will cause a huge one-time hit against your token limit if the session has grown large.

Anthropic already profited from generating those tokens. They can afford to subsidize reloading context.


No they can't, that's what you don't seem to get.

Reloading those tokens takes around the same effort as processing them in the first place.

It's ok to be ignorant of how the infrastructure for LLMs works, just don't be proud of it.


They literally can. They could make the API free to use if they wanted. There is no law that says prices have to equal the cost of processing the request.

I'm not familiar with the Claude API, but OpenAI has an encrypted thinking-messages option: you get something you can send back, but it's encrypted. Is that not available on Anthropic?

They are sending it back to the cache; the part you are missing is that they were charging you for it.

The blog post says they now prune them so as not to charge you. That's the change they implemented.

Right. They were charging you for it; now they aren't, because they're just dropping your conversation history.

That's not British, that's just old people


No, I'm claiming your source is outdated. It has become an old people thing now

Not laptops. Local dimming zones look awful when you have a white cursor moving around, so it's still mostly just a TV feature.

Looking awful has not prevented local dimming from becoming quite common on laptops. Apple has been doing an okay job of it in the MacBook Pro for several years. Lots of Windows laptops have been very hit-or-miss about it, but at least with those you often have an OLED option. I've seen multiple Windows laptops from more than one OEM where opening a terminal window with light text on a dark background means you can easily spot a single line of text getting much dimmer toward the center of the dark window, and lighter near the perimeter where it's close to other light content. And that's for static content; as you mentioned motion can bring more problems as the backlight lags behind the LCD.

GPT started off open? They just closed before anyone else even joined the space

Maths is like physics



