
That slide deck raises more questions than it answers.

Here's a useful question: Suppose the LLM hallucination problem is not solved in the next 10 years. What happens to the AI boom?



I tried to capture this on the last slide before the conclusion: maybe all AI questions have one of two answers, "no-one knows" or "it will be the same as the last time".

This is one of the "no-one knows" questions.


The question I'm asking isn't whether hallucinations can be fixed. It's this: if they are not fixed, what are the economic consequences for the industry? How necessary is it that LLMs become trustworthy? How much of the current valuation assumes that they will?


And is it even fixable?


The "hallucinations" problem feels to me like an inherent feature. For LLMs to have interesting output, the temperature needs to be higher than zero. The whole system is interesting because it is probabilistic. "Hallucinations" (I hate the word, btw) are to LLMs as melting is to ice. There will be no 'meltless' ice, because the melting is what makes it cold and useful.
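To make the temperature point concrete, here is a minimal sketch of temperature-scaled softmax sampling in plain Python (the function name and logit values are illustrative, not from any particular LLM stack). As T approaches zero the distribution collapses onto the single most likely token (deterministic, "boring" output); as T rises, lower-probability tokens become increasingly likely to be picked.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from logits scaled by temperature.

    Temperature divides the logits before softmax: T -> 0 concentrates
    all probability on the argmax, while large T flattens the
    distribution toward uniform, so unlikely tokens get sampled more.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i, probs
    return len(probs) - 1, probs

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]
_, cold = sample_with_temperature(logits, 0.01)   # near-greedy
_, hot = sample_with_temperature(logits, 100.0)   # near-uniform
```

At T=0.01 virtually all probability mass sits on the first token, while at T=100 the three tokens are nearly equiprobable, which is the trade-off the comment describes: the same knob that makes output varied and interesting also makes confidently wrong continuations possible.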



