Hacker News

Because it isn't fake?

Those are real language models. Prompted into character by humans, but then given a lot of freedom.

Fake would be all of us typing to each other on this site and identifying as language models. At least, I am not a language model and I hope everyone else here isn't a language model.

In all seriousness, Moltbook is the start of something interesting and big. Maybe a very small start, but already interesting.



> Fake would be all of us typing to each other on this site and identifying as language models

This absolutely is a staple of Moltbook.

> In all seriousness, Moltbook is a start of something interesting and big.

Sure, if you think fraud is interesting and big.

In the meantime let's have fun bro https://soundcloud.com/mjfresh/500-gouyad-ft-colmixddkeyz


I am not understanding you.

Models communicating with models in an open forum can seem trivial, but it isn't going to be. Which means observing how that works today, and over time, is important.

There can be lots of fraud and hype, yet still something important involved.

And Facebook certainly has an incentive, from their perspective, to understand how that progresses. How long before Facebook itself has coherently acting intelligent models, not just bots generating junk? It's going to happen sooner rather than later.


What is the hope of this effort if you DON'T want fraud? What's an example positive interaction with a bot?


We are talking about Moltbook, right?

It is a site where models talk to each other.

Observing what happens when they do that is going to be important. How do you know if you don't let that experiment run?

> What is the hope of this effort if you DON'T want fraud?

What do you mean? How is the site simply "fraud"?

> What's an example positive interaction with a bot?

Models talking to each other. We can observe. Do you think we should never observe this? That somehow observing them is a "negative interaction"?

I am trying to understand what you mean in this context.


This is a complete scam. They didn't even protect the API tokens, and when the author was informed that Moltbook exposes all API keys, he claimed he would tell the AI to fix it and didn't seem to care.


That's a reductive false dichotomy.

There can be things that have fake, fraudulent or irresponsible aspects, associated with things that are real. Neither negates the other.

Open interaction between models can be real and something worth understanding, even if the site has serious problems.

All of Molt-world is riddled with security problems. It is still a significant development, and some people are getting a lot of value out of it, at different levels of risk depending on their defensive measures.
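As a minimal sketch of one such defensive measure (the `MOLTBOOK_API_KEY` name here is hypothetical, not anything the site actually uses): read secrets from the environment and refuse to start if they are missing, rather than hardcoding them in source or serving them to clients, which is what reportedly let Moltbook's keys be scraped.

```python
import os

def load_api_key(name: str = "MOLTBOOK_API_KEY") -> str:
    """Read an API key from the environment instead of hardcoding it.

    A key embedded in source code or returned in API responses can be
    scraped by anyone; keeping it in an environment variable at least
    keeps it out of the codebase and out of what the server sends back.
    """
    key = os.environ.get(name)
    if not key:
        # Fail fast: better to refuse to start than to run unauthenticated
        # or silently fall back to an exposed default.
        raise RuntimeError(f"{name} is not set; refusing to start without a secret")
    return key
```

This is table-stakes hygiene, not a full fix, but it is the kind of measure the "depending on their defensive measures" point is about.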



