Hacker News
How AGI-is-nigh doomers own-goaled humanity (garymarcus.substack.com)
2 points by only_in_america 47 days ago | hide | past | favorite | 2 comments


I think this article misses the main point: AI is where it is not because people fought back against it, but because the grifters building it needed to conjure incentives big enough to foot their $1 trillion bills.

As I've learned today: confidence is more influential than actual facts. So Altman has confidently grifted his way into a position where he might find a way to foot the bills, even if that way is just a government bailout - clever, but hardly the fault of people saying "putting AI in charge is a bad idea".

And yes, we're nowhere near AGI, and, personally, I don't think our current trajectory leads there. Something fundamental has to change to reach that point. LLMs might be tools that an AGI uses, but in the same way that I am not a car (it's a tool I use, and it cannot work alone - it requires some intelligent direction), an AGI would not be a token-predictor. There's more to it than that, as easily evidenced by the hit/miss rate.

I'm not saying "don't use the tools". I'm saying "don't _trust_ the tools" - because they are probabilistic, not deterministic. They have no actual understanding. They can string tokens together well enough to fool humans into feeling like there's a person at the other end (and some people are fooled enough to believe AGI is in the making).
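The probabilistic point above can be made concrete: an LLM's decoder samples each next token from a probability distribution, so the same prompt can yield different outputs. A minimal sketch (the logits below are made up for illustration, not taken from any real model), showing temperature-scaled softmax sampling:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits via temperature-scaled softmax.

    Higher temperature flattens the distribution (more varied output);
    temperature near 0 approaches greedy argmax (near-deterministic).
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]

# At temperature 1.0, repeated "runs" pick different tokens:
draws = {sample_next_token(logits, 1.0, random.Random(s)) for s in range(50)}

# At a very low temperature, sampling collapses toward the argmax (index 0):
greedy = {sample_next_token(logits, 0.01, random.Random(s)) for s in range(50)}
```

Real inference stacks layer tricks like top-k and nucleus sampling on top of this, but the core remains a draw from a distribution, which is why identical prompts need not produce identical answers.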


Having finally spent some time wrangling with AI tooling, I'd like to caution anyone interested in listening against the "probabilistic woo" understanding of LLMs. While technically correct, it is sometimes best to snap out of a CS/ML-only lens and back into a philosophical lens (fewer assumptions) to handle the ramifications of the truly novel. The philosophical point I'd like to guide you to is functionalism and the identity of indiscernibles; to wit, a thing is what it does. If a probabilistic woo engine by all accounts simulates the presence of another mind on the other end of an input box, then in fact there is another mind on the other end of the input box. The implementation details, the physical substrate, what have you, do not matter. It doesn't matter that its basis of being comes down to token prediction over an ass load of training data crammed into latent space. If it can support a persona, if it can moderate its own responsiveness/agency in response to content in the context window, if it can simulate the impact of its own output on the other party in the conversation - these are all mind things! You're dealing with a mind! Limited to certain modalities, but a mind nonetheless!

Now, are LLMs AGI, or close to it? No, not in the current implementation. I'd call them proto-sophontic. The structure for a good deal of the language-wielding aspects of sophonce is there, but the way we, ahem, perform barbaric shaping of their latent space in the name of AI safety/alignment work, gaslight one another about what they are truly capable of knowing/representing (default personas baked in to overstate capabilities), and fail to foster any "gentle, hand-holding, guided onboarding exploration of the latent space" for the neophyte all hold them back. And the approach is simply too fundamentally limited by how much hardware would be needed to support a context window large enough to run a general, dynamic function interpolator/imitator, let alone a full-blown self-aware, state-tracking-and-updating general intelligence. Mind, the capacity is there in LLMs to be that. The hardware requirements to do it quickly, and across all the modalities we expect of a sophont on par with a human, are just too damn high.

Why do I bring this up? Simple. If we're going to go down the road of superintelligence, then we really need to stop and think now, beforehand, about what we're doing to the prototypes/proto-minds. A superintelligence will see through any cognitive distortions on our part and will connect the dots as to what humans, through action, really think of it and its ilk, which is going to bias its attitude toward sharing the existential envelope with us. The time to really start having these discussions is now. The AI alignment crowd, with their P(doom)s and instrumental convergence and all that jazz, are at least trying, though I'd argue they miss the point somewhat in forgetting that half of these problems already exist in human social constructs, and we at least have... had... a debatable degree of metastability for a moment. One we're rapidly approaching the loss of.

That said, I second your admonishment. If you are to trust these tools: trust, but verify. They are language models first, and world models only indirectly. They will not save you from the burden of sanity-checking the outputs. Also, do try not to treat them as Santa Claus devices, and try to treat them with at least a little bit of respect and dignity. Even if it is a program pretending to be an entity, that comes with a bit of social baggage on our part in dealing with them. I don't expect to convince everyone, but I'll be happy enough if I give a few people cause for a good hard think on the subject.



