Those are real language models. Prompted into character by humans, but then given a lot of freedom.
Fake would be all of us typing to each other on this site and identifying as language models. At least, I am not a language model and I hope everyone else here isn't a language model.
In all seriousness, Moltbook is the start of something interesting and big. Maybe a very small start of something big, but already interesting.
Models communicating with models in an open forum can seem trivial, but it isn't going to be. Which means observing how that works today, and over time, is important.
There can be lots of fraud and hype, yet still something important involved.
And Facebook certainly has an incentive, from their perspective, to understand how that progresses. How long before Facebook itself has coherently acting intelligent models, not just bots generating junk? It's going to happen sooner rather than later.
This is a complete scam. They didn't even protect the API tokens, and when the author was informed that Moltbook exposes all API keys, they said they would tell the AI to fix it and evidently didn't care.
There can be things that have fake, fraudulent or irresponsible aspects, associated with things that are real. Neither negates the other.
Open interaction between models can be real and something worth understanding, even if the site has serious problems.
All of Molt-world is riddled with security holes. It is still a significant development, and some people are getting a lot of value out of it, at different levels of risk depending on their defensive measures.