Symbolic processing was obviously a bad approach to building a thinking machine. Well, it's obvious now; 40 years ago, probably not so much, but there were strong hints back then, too.
"AI agent" roughly just means invoking the system repeatedly in a while loop, and giving the system a degree of control over when to stop the loop. That's not a particularly novel or breakthrough idea, so similarities are not surprising.
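To make that concrete, here's a minimal sketch of the "model in a while loop" idea. Everything here is hypothetical stand-in code, not any real agent framework's API: `call_model` would wrap an actual LLM call, and `run_tool` would dispatch to real tools.

```python
# Sketch of "agent = model invoked repeatedly in a while loop".
# `call_model` and `run_tool` are hypothetical placeholders.

def call_model(history):
    # Placeholder: a real implementation would send `history` to an LLM
    # and get back either a tool request or a final answer.
    if any(step["role"] == "tool" for step in history):
        return {"type": "final", "content": "done"}
    return {"type": "tool_call", "tool": "search", "args": {"q": "example"}}

def run_tool(name, args):
    # Placeholder tool dispatch.
    return f"results for {args['q']}"

def agent_loop(task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):           # the outer loop, bounded for safety
        action = call_model(history)
        if action["type"] == "final":    # the model chooses to stop
            return action["content"]
        result = run_tool(action["tool"], action["args"])
        history.append({"role": "tool", "content": result})
    return None  # step budget exhausted without a final answer

print(agent_loop("look something up"))  # → done
```

The only "agentic" part is that the stop condition is delegated to the model; everything else is a plain loop with a step budget.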
I'm not convinced that symbolic processing doesn't still have a place in AI though. My feeling about language models is that, while they can be eerily good at solving problems, they're still not as capable of maintaining logical consistency as a symbolic program would be.
Sure, we obviously weren't going to get to this point with only symbolic processing, but it doesn't have to be either/or. I think combining neural nets with symbolic approaches could lead to some interesting results (and indeed I see some people are trying this, e.g. https://arxiv.org/abs/2409.11589)
I agree that symbolic processing still has a role - but I think it's the same role it has for us: formal reasoning. I.e. a specialized tool.
"Logical consistency" is exactly the kind of red herring that kept us stuck with the symbolic approach longer than it should have. Humans aren't logically consistent either - except in some special situations, such as solving logic problems in school.
Nothing in how we think, how we perceive the world, categorize it and communicate about it has any sharp boundaries. Everything gets fuzzy or ill-defined if you focus on it. That's not by accident. It should've been apparent even then that we think stochastically, not via formal logic. Or maybe the Bayesian interpretation of probability was too new back then?
A related blind alley we got stuck in for way longer than we should've (many people are still stuck there) is trying to model natural language using formal grammars, or worse, arguing that our minds must be processing language this way. It's not how language works. LLMs are arguably conclusive empirical proof of that.
Yeah, I agree logic and symbolic reasoning have to be _applications_ of intelligence, not the actual substrate. My gut feel is that intelligence is almost definitionally chaotic and opaque. If one thing prevents superhuman AGI, I suspect it will be that targeted improvements in intelligence are almost impossible, and it will come down to the energy we can throw at the problem and the experiments we're able to run and evaluate.
What’s interesting to me is the rise of agentic approaches, which are effectively “build a plethora of tools and heuristics” with an outer loop that combines, mutates, and assigns values to these components. Where before that process was more rigid, we now have access to much more fluid intelligence, but the structure feels similar - let the AI prod at the world and run experiments, then look at what worked and think of some plausible enhancements. At a certain point you’re enhancing the code that enhances the enhancer, and all bets are off.
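The propose/evaluate/keep structure of that outer loop can be sketched with a toy hill-climbing example. Everything here is hypothetical: the "heuristic" is just a single number, and `evaluate` is a stand-in for whatever "prodding at the world" actually looks like (running experiments, scoring agent trajectories, etc.).

```python
# Toy sketch of the outer loop: mutate components, score them against
# the world, keep what worked. The objective and names are made up.
import random

def evaluate(heuristic, target=0.7):
    # Stand-in for an experiment: score a candidate against an
    # unknown target (higher is better, 0 is perfect).
    return -abs(heuristic - target)

def mutate(heuristic):
    # A "plausible enhancement": a small random tweak to the component.
    return heuristic + random.uniform(-0.1, 0.1)

def outer_loop(generations=200, seed=0):
    random.seed(seed)
    best = 0.0
    best_score = evaluate(best)
    for _ in range(generations):
        candidate = mutate(best)
        score = evaluate(candidate)
        if score > best_score:   # keep only what worked
            best, best_score = candidate, score
    return best

print(outer_loop())  # should land near the (unknown-to-the-loop) target
```

Swap the random `mutate` for an LLM proposing code changes and `evaluate` for real-world feedback, and you get the fluid version of the same old structure; point it at its own source and you get the "enhancing the enhancer" case.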