
I think you're proving my point here -- finding the optimal solution for a (generalized n x n x n) Rubik's cube is NP-hard, and NP-hard problems are widely believed to require exponential time.

So complexity theory is confirming your intuition, which is that 'optimization type problems' are hard.



Even the basic knapsack optimization problem is NP-hard. I'd dare say that a huge class of optimization problems maps very naturally to this particular one; see https://news.ycombinator.com/item?id=13316776 for starters.
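To see why knapsack is the poster child for 'hard' optimization: the obvious exact approach checks every subset of items, which is exponential in the number of items. A minimal sketch (my own illustration, not from the linked thread):

```python
from itertools import combinations

def knapsack_bruteforce(items, capacity):
    """Naive exact 0/1 knapsack: try every subset of items.

    items: list of (weight, value) pairs.
    There are 2^n subsets, so this runs in O(2^n) time --
    exactly the exponential blow-up the thread is talking about.
    """
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, _ in subset)
            if weight <= capacity:
                best = max(best, sum(v for _, v in subset))
    return best

print(knapsack_bruteforce([(2, 3), (3, 4), (4, 5), (5, 8)], 5))  # -> 8
```

(There is a well-known dynamic-programming algorithm that is polynomial in the capacity, but that's pseudo-polynomial, not polynomial in the input size -- the general problem stays NP-hard.)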


Ok, so optimization problems are potentially bounded with smaller coefficients than we would think.

What about sequencing life? Supposedly we share a lot at the genomic level with animals we are vastly different from. Could a large degree of the polynomial that is life explain that divergence?


What?


Ha! I'm just trying to get a better feel for what is being said. Definitely the dumbest person in this particular conversation.

I am assuming you are pushing the same angle as the other poster. That is, most optimization problems don't actually have large constants in the exponents. So, since most of what developers think of as "hard" are classical NP problems, maybe it makes sense to look at other things we don't typically think of as computational.

Could just be nonsensical, though. I fully accept that.


I misunderstood your point, then. I thought you were saying that we rarely see giant exponents in things. My question is whether we typically do see giant exponents, but make do with much simpler solutions. (Does that make sense?)


I am saying we rarely see giant exponents (on polynomial-time algorithms) in practice, because most problems that we intuitively think of as 'hard' turn out to be NP-hard or PSPACE-hard -- complexity classes that we have decent reason to suspect have exponential lower bounds on running time.


Sadly, I'm not sure I understand the point. Aren't we saying these could be exponential, just with large coefficients?

Edit: I meant "small" coefficients above. Apologies.


P: O(n^k), where large k makes it 'slow' but still polynomial

NP-hard (assuming P != NP): roughly O(c^n), c > 1, where c close to 1 makes it 'fast' but still exponential

The objection is that k can be big for some polynomial algorithms while c can be close to 1 for some exponential ones, so in principle we can have 'slow' polynomial algorithms and 'fast' exponential ones.

But in practice, k is usually small for polynomial algorithms and c is usually not close to 1 for exponential ones, so simply knowing whether an algorithm is exponential or polynomial tells you something.
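To make the tradeoff concrete (a quick sketch of my own, not from the thread): even a mild exponential like 1.1^n does eventually overtake a high-degree polynomial like n^6, but the crossover point can be surprisingly far out.

```python
def crossover(base=1.1, degree=6):
    """Smallest n >= 2 where base**n exceeds n**degree.

    Illustrates that a 'fast' exponential always beats a 'slow'
    polynomial eventually -- but only for quite large inputs.
    """
    n = 2
    while base ** n <= n ** degree:
        n += 1
    return n

print(crossover())  # 1.1^n doesn't overtake n^6 until n is in the hundreds
```

So asymptotic class labels are honest in the limit; the argument in the thread is about where typical problem sizes sit relative to that crossover.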

The 'hard' problems you gave me all (probably) fall into the exponential-runtime class, whereas the easier ones (like finding any solution to a Rubik's cube) fall into the polynomial-runtime class. So your intuition of 'hard' vs. 'easy' matches complexity theory's evaluation, even though in principle there could be really slow 'easy' problems and really fast 'hard' ones.



