
> I was hoping for something more like "the problem is specified such that invalid solutions aren't even representable, so only acceptable solutions are considered."

How on earth would one come up with a model where "crashing cars isn't representable"? I don't think you recognize how ill-defined and nonsensical this expectation is, especially when you consider that such a car may encounter a situation where a crash is unavoidable, where there's certainly room for damage control. Sliding scales ALWAYS work better for optimization anyway, since regression is so powerful.



I was speaking in the context of my original post, which was specifically about traffic lights at intersections and which combinations of lights are allowed to be green at the same time.

I think it would be fairly straightforward to enumerate, given a set of lights at an intersection, which combinations of lights can be green without allowing cars to cross paths. In other words, we're ruling out combinations that are fundamentally unacceptable and would never be seen in the real world (like "all lights are green at the same time").

That gives the AI a set of acceptable combinations that can be considered. Essentially the AI is choosing an integer in the range 1-max for each intersection at each point in time.
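To make the enumeration concrete, here's a minimal sketch in Python. The intersection layout and the conflict pairs are purely hypothetical (a real model would derive conflicts from lane geometry); the point is just that valid phases can be precomputed once, after which the controller only ever picks an index into that list.

```python
from itertools import combinations

# Hypothetical 4-way intersection: one light per approach direction.
LIGHTS = ["north", "south", "east", "west"]

# Pairs of lights whose simultaneous green would let cars cross paths.
# (Illustrative only -- a real intersection model would derive these
# from the actual lane geometry and turn movements.)
CONFLICTS = {
    frozenset({"north", "east"}),
    frozenset({"north", "west"}),
    frozenset({"south", "east"}),
    frozenset({"south", "west"}),
}

def valid_phases(lights, conflicts):
    """Enumerate every subset of lights that may be green together."""
    phases = []
    for r in range(len(lights) + 1):
        for subset in combinations(lights, r):
            # Keep the subset only if no pair within it conflicts.
            if not any(frozenset(pair) in conflicts
                       for pair in combinations(subset, 2)):
                phases.append(subset)
    return phases

PHASES = valid_phases(LIGHTS, CONFLICTS)
# The optimizer's action space per intersection is now just an integer
# index into PHASES -- an invalid combination is not even representable.
```

With these toy conflicts, the valid phases are the empty set, each light alone, north+south together, and east+west together; choosing "all four green" simply isn't in the action space.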

This doesn't eliminate the possibility of car crashes if someone runs a red light. But it lets us constrain the optimization problem to the set of green light configurations that are actually feasible to deploy.



