
Now that I know the details of Rust's range type, it is extremely weird. The constraints need to be separate types. Why should range be so general as to support things that are obviously not ranges, only to return values indicating the range is malformed?


> The constraints need to be separate types. Why should range be so general as to support things that are obviously not ranges, only to return values indicating the range is malformed?

I'm not sure what malformed return values this is referring to, because I can't think of any. Is it referring to the fact that ranges where the start is greater than the end will result in an empty range? Without dependent types, which Rust doesn't have, there's no way to detect that statically; even in the subset of cases where the range bounds are computable at compile time, back at 1.0 Rust didn't have the compile-time evaluation machinery necessary to make that happen. You could instead choose to interpret a range where the start is greater than the end as a descending range, but plenty of other people will regard that behavior as a flaw.


I assume the issue is with Rust letting you create such a range and then having the bad stuff happen when you try to use it, rather than failing fast.


Whether 5..0 being an empty range is "bad stuff" or "good stuff" is a matter of perspective. It is often "good stuff" for me, when computing some indices to slice with. Panicking on construction would force one perspective on every use case.
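For illustration, a small sketch of the behavior being defended (a "backwards" range iterates zero times, and coinciding computed indices slice to an empty slice):

```rust
fn main() {
    // A "backwards" range simply yields no items; no special case needed.
    assert_eq!((5..0).count(), 0);

    // When two computed indices happen to coincide, slicing just yields
    // an empty slice rather than forcing an error on the caller.
    let v = vec![10, 20, 30, 40];
    let (start, end) = (2, 2);
    assert_eq!(&v[start..end], &[] as &[i32]);
}
```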


> But why isn't it panicking on len()? How is 0 the right answer there?

- `len` is `ExactSizeIterator::len()`, which is the length of `Range` as an iterator, i.e. the number of items yielded by `next`. Which is 0.

- When slicing with 5..0, it is treated not as an empty iterator but as an out-of-bounds access. This is without question slightly inconsistent and not my favorite choice, but it was decided explicitly this way because it makes it much easier to catch bugs with wrongly constructed slices. Also, it only panics if you use `Index`, which can panic anyway; it won't panic if you use e.g. `get`, which returns `None` instead. So treating the "bad" empty case differently for slicing doesn't add a new error path, but doing so for iteration and `len` would, especially given that `ExactSizeIterator::len()` isn't supposed to panic, as it's a size hint.
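The split described above can be checked directly (a quick sketch on stable Rust):

```rust
fn main() {
    // As an iterator, 5..0 yields no items, so its exact length is 0.
    assert_eq!((5..0).len(), 0);
    assert_eq!((5..0).count(), 0);

    // The non-panicking slicing path treats the same range as out of bounds.
    let v = vec![1, 2, 3];
    assert_eq!(v.get(5..0), None);
}
```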


But why isn't it panicking on len()? How is 0 the right answer there?


Because that’s the length of the iterator? The range is empty, its exact size is 0.


The range is invalid, not empty; someone had to do a validity check to return 0 to prevent it from returning -5 or trying to count up from 0 (depending on what it was willing to assume). A big point of the article is that a range of size 0 should always be iterable, but it somehow isn't, because it isn't actually of size 0.


No, the author misrepresents the facts.

If you index a slice with an out-of-bounds index, it will panic regardless of whether the index is a `usize` or a `Range<usize>`.

If you use `get` with an out-of-bounds index, you always get `None`.

Sure, it's open for discussion whether a range with start > end should be treated the same as an out-of-bounds index or as an empty slice. But doing the former makes it easier to catch errors.
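A small sketch showing that the panicking and non-panicking paths line up for both index shapes (using `catch_unwind` here only to demonstrate the panics):

```rust
use std::panic;

fn main() {
    let v = vec![1, 2, 3];

    // `get` is the fallible path for both index shapes.
    assert_eq!(v.get(10), None);   // out-of-bounds usize
    assert_eq!(v.get(5..0), None); // "backwards" range

    // `Index` panics for both as well: the behavior is uniform.
    assert!(panic::catch_unwind(|| v[10]).is_err());
    assert!(panic::catch_unwind(|| v[5..0].len()).is_err());
}
```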

Enforcing start <= end would mean that range construction is fallible, which would be a major usability nightmare: you would either need two syntaxes, one for normal error handling and one for panicking, or you would need to add a lot of `unwrap`s or similar.

Ranges are mainly used ad hoc (e.g. `slice[start..=mid+2]` or `for x in x..y {...}`) and are optimized for those usage patterns.
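To illustrate the ergonomic cost of fallible construction, here's a hypothetical checked range type; `CheckedRange` and `try_new` are invented names for this sketch, not anything in std:

```rust
// Hypothetical range type that enforces start <= end at construction.
struct CheckedRange {
    start: usize,
    end: usize,
}

impl CheckedRange {
    fn try_new(start: usize, end: usize) -> Option<CheckedRange> {
        if start <= end {
            Some(CheckedRange { start, end })
        } else {
            None
        }
    }
}

fn main() {
    let v = vec![1, 2, 3, 4];

    // Today: one terse expression.
    let a = &v[1..3];

    // With fallible construction, every ad-hoc use needs error handling
    // (or an unwrap/expect), even when the bounds are obviously fine.
    let r = CheckedRange::try_new(1, 3).expect("invalid range");
    let b = &v[r.start..r.end];

    assert_eq!(a, b);
}
```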

For other usages they might not be optimal, but you can always define your own types.


> obviously not ranges, only to return values indicating the range is malformed?

That's not the case. The only thing affected by the range being generic is that `contains` takes a reference instead of a copy (which, by the way, can likely be eliminated by the optimizer). That is necessary to allow things like `Range<BigNum>`.
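For example (using `String` as a stand-in for a non-Copy type like the hypothetical `BigNum` above):

```rust
fn main() {
    // `contains` takes a reference, so it works with Copy types...
    assert!((0..5).contains(&3));

    // ...and equally with non-Copy element types, which could not be
    // passed by value without moving or cloning them.
    let range = String::from("apple")..String::from("melon");
    assert!(range.contains(&String::from("banana")));
    assert!(!range.contains(&String::from("zebra")));
}
```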

All the other things have nothing to do with it being generic, only with which use cases it was designed for.

In the end, in Rust a `Range` is mainly an iterator.

Only if it's a `Range<usize>` can you also use it to slice arrays/vectors/slices.

Which means that e.g. the unstable experimental `get_unchecked` function is actually very well defined.

Lastly, the reason you can't enforce `start <= end` is that it would make the creation of a range fallible, which would be a horrible usability nightmare; the author somehow misses this completely.

The thing is, indexing a slice can already panic, so moving the panic there is generally a good idea. Similarly, you always want a non-panicking path, which is e.g. `[T]::get()`; on a "bad" range it does the same as on a "bad" index: it returns `None`.

In the end, both `Range` and `RangeInclusive` are compromises focused on the most common use cases of ranges: ad-hoc creation right around the place you consume them, for iteration or for slicing of slices. This also means that e.g. `RangeInclusive` being bigger is no problem, because at the place it's used you would otherwise need to turn it into an iterator anyway, carrying at least as much state as the current `RangeInclusive`. Sure, if you want to store a lot of `RangeInclusive`s, then that's not the use case it was designed for and you are better off defining your own inclusive range type.
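The size difference is easy to observe; exact sizes depend on the compiler, but the extra exhaustion state makes `RangeInclusive` larger than `Range` for the same element type on current rustc:

```rust
use std::mem::size_of;
use std::ops::{Range, RangeInclusive};

fn main() {
    // RangeInclusive needs extra state to know whether it is exhausted
    // (start..=end still yields one item when start == end), so it is
    // larger than Range of the same element type.
    assert!(size_of::<RangeInclusive<u8>>() > size_of::<Range<u8>>());
    println!(
        "Range<u8>: {} bytes, RangeInclusive<u8>: {} bytes",
        size_of::<Range<u8>>(),
        size_of::<RangeInclusive<u8>>()
    );
}
```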


But shouldn't len() panic instead of returning 0? I don't even understand how it could return 0 without having already done all the work to determine it should have returned a negative number.

