
Because then the car would brake every time anyone approached the car from an angle (which is constantly). Think every intersection ever, every time driving near a sidewalk ever. The car would be herky/jerky as crap.

They should have had it set to spike the brakes once collision was imminent, though; that's (maybe) the biggest programming omission here.



They should have set it to slow down gradually when an object approached from an angle at T-6 and then speed up once past the intersection risk, so that when the scenario emerged at T-1.6 it could have emergency-stopped safely.


> Think every intersection ever, every time driving near a sidewalk ever. The car would be herky/jerky as crap.

I'm not sure that'd be a huge issue. The vectors have to be intersecting first of all, which most vectors emanating from sidewalks wouldn't be, and then a little hysteresis would smooth out most of the rest.
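A minimal sketch of what that hysteresis could look like, purely illustrative (the class name and frame-count thresholds are assumptions, not anything from a real AV stack):

```python
# Hysteresis over per-frame collision-vector checks: only treat an object
# as a threat after N consecutive intersecting frames, and only clear the
# threat after M consecutive non-intersecting frames. This smooths out
# one-frame blips in either direction. Thresholds are illustrative.

class CollisionHysteresis:
    def __init__(self, frames_to_trigger=5, frames_to_clear=10):
        self.frames_to_trigger = frames_to_trigger
        self.frames_to_clear = frames_to_clear
        self.intersecting_count = 0
        self.clear_count = 0
        self.braking = False

    def update(self, vectors_intersect: bool) -> bool:
        """Feed one frame's intersection test; returns whether to brake."""
        if vectors_intersect:
            self.intersecting_count += 1
            self.clear_count = 0
            if self.intersecting_count >= self.frames_to_trigger:
                self.braking = True
        else:
            self.clear_count += 1
            self.intersecting_count = 0
            if self.clear_count >= self.frames_to_clear:
                self.braking = False
        return self.braking
```

With sensor frames arriving every few tens of milliseconds, requiring a handful of consecutive intersecting frames before braking adds little latency while filtering out momentary false positives.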


I don't know if you've been to New York or any other places where people walk, but vectors would absolutely be intersecting on a regular basis, right up until a fairly short time before the pedestrian stops. I constantly walk toward an intersection where, if I kept going for three more seconds, I would be pasted to the street by a passing car. But I stop at the end of the sidewalk, before the road begins, so the vector drops to zero in those last three seconds. It would be super weird if cars braked at the intersection every time this happened. Cars would be braking at every major street on every major avenue, constantly.

What's actually needed here is some notion of whether the pedestrian is paying attention and will correctly stop and not intersect the path of the car. Humans are constantly making that assessment based on sometimes very subtle cues (is the person looking at/talking on a phone, or are they paying attention, for example).


Yeah, eye contact is a very important signal. Maybe there needs to be some specialized hardware to detect eyes and determine the direction they're looking in.


Good idea, although suicidal or oblivious/impaired pedestrians might fail to make eye-contact.


Well, exactly. Those are the ones you slow down for.


I like it. This 'jumping in front of cabs' thing is huge in Russia and it'd be cool if AI could prevent that here.


We use eye contact because we can't infer what another person is thinking and we can't react quickly enough to their actual movements at car speeds. This latter isn't the case with automated vehicles, so eye contact shouldn't be necessary, as long as you get the vector algorithms right.


> Constantly I walk toward an intersection where, if I kept going for three more seconds, I would be pasted to the street by a passing car.

These autonomous systems are evaluating surrounding vectors every few milliseconds. A timescale of 3 seconds simply isn't important, as they would instantly detect you slowing down and conclude that you wouldn't intersect with their vector.


You have missed the entire context here.

> But why didn't it brake from t minus 6 to t minus 1.3? Looks like it detected that the car's and object's paths were converging, so why didn't it brake during that interval?


You're missing the context of this thread. The software in the Uber car has a clear failure condition. That has nothing to do with whether it's possible to infer such vector collisions without jerky driving, which is the point I'm addressing.


The question was why the car doesn’t brake early. “Because scanning every few milliseconds” is not an answer. Scanning frequency is irrelevant to the fact that emergency braking is not a reasonable strategy in general.

Safe driving often does require slowing down in the face of insufficient information. If a human driver sees an inattentive pedestrian about to intersect traffic, they will slow down. “Drive until collision is unavoidable” is a failing strategy.

And anyway, jerky driving is a symptom of late braking, not early braking.


> And anyway, jerky driving is a symptom of late braking, not early braking.

I see it as more than just jerkiness; I see a massive safety issue in traffic. If your autonomous car is slamming on the brakes spontaneously, there are a lot more opportunities for other drivers to plow into you from behind.


> The question was why the car doesn’t brake early.

No, it's not; the issue was jerky driving.

> “Drive until collision is unavoidable” is a failing strategy.

No kidding.


They can't detect me slowing down before I start slowing down. So if it's t-4 until impact and I'm still moving at full speed, they would need to start braking now if they can't stop in 4s (assuming the worst case that I continue on my current trajectory).

That being said, I'm happy to find my assumptions about stopping time are incorrect: a car traveling at 25mph can stop in well under two seconds. So on busy NYC streets this wouldn't be an issue. Even at 50mph it appears that stopping time is around 3s, so the vehicle could probably have avoided this collision if it were running a more intelligent program.
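For what it's worth, those stopping figures can be sanity-checked with constant-deceleration kinematics (the 0.7 g braking figure below is an assumed value for dry pavement, not a measured number for any particular car):

```python
# Constant-deceleration stopping model: t = v / a, d = v^2 / (2a).
# DECEL assumes ~0.7 g of braking on dry pavement (an assumption).

G = 9.81               # gravitational acceleration, m/s^2
DECEL = 0.7 * G        # assumed braking deceleration, m/s^2
MPH_TO_MS = 0.44704    # miles per hour -> metres per second

def stopping_time_s(speed_mph):
    """Seconds to brake to a stop, ignoring reaction time."""
    return speed_mph * MPH_TO_MS / DECEL

def stopping_distance_m(speed_mph):
    """Metres travelled while braking to a stop."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * DECEL)
```

Under that assumption, 25 mph works out to roughly 1.6 s and 9 m, and 50 mph to roughly 3.3 s and 36 m, before adding any reaction time.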


> They can't detect me slowing down before I start slowing down. So if it's t-4 until impact and I'm still moving at full speed, they would need to start braking now if they can't stop in 4s (assuming the worst case that I continue on my current trajectory).

Right, collision prediction is basic physics, accounting for the stopping time and distance of pedestrians and cars. So the question is whether pedestrians on sidewalks really have so many collision vectors with traffic that autonomous vehicles would be jerky all of the time, as the initial poster suggested.

I claim reasonable defaults that would far outperform humans on average wouldn't have that property. Autonomous vehicles should be programmed to follow the rules of the road with reasonable provisions to avoid collisions when possible.
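One way to sketch such a "reasonable default": brake only when the pedestrian's worst-case path (continuing at their current lateral speed) puts them in the lane before the car passes, and the car is close to the last point where it can still stop short of them. Every name and threshold here is a toy assumption:

```python
def should_brake(car_speed_ms, ped_dist_along_m,
                 ped_lateral_m, ped_lateral_speed_ms,
                 decel=6.9, margin_m=2.0):
    """Toy worst-case check. Assumes the pedestrian keeps walking toward
    the lane at their current lateral speed; brakes only if they would be
    in the lane before the car arrives AND the car is nearly out of
    stopping distance (decel of 6.9 m/s^2 ~ 0.7 g is assumed)."""
    if ped_lateral_speed_ms <= 0:
        return False  # moving away from (or parallel to) the lane
    t_ped_enters_lane = ped_lateral_m / ped_lateral_speed_ms
    t_car_arrives = ped_dist_along_m / car_speed_ms
    stopping_dist = car_speed_ms ** 2 / (2 * decel)
    return (t_ped_enters_lane <= t_car_arrives
            and ped_dist_along_m <= stopping_dist + margin_m)
```

A pedestrian standing at the kerb (zero lateral speed) never triggers it, which matches the point above that most sidewalk vectors are not actually intersecting.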


I think the situation might be all the transitions. All the time people on bikes switch from a driveway to a bike lane, during which they could continue straight into the road. Or people step out of a building and walk diagonally across a large sidewalk, they could keep going straight into the road.


Which simply means the car's AI isn't good enough to classify that object as something it should slow down for versus something it can ignore (like an empty plastic bag drifting across the road).


Well, it is good enough; it's just that it develops that confidence over a period of several seconds. In this case it took until T minus 1.6 seconds to realize "ok, this is something we should stop for".


well then, frankly, that’s not good enough.



