Meta caused a stir last week when it let slip that it intends to populate its platform with a significant number of entirely artificial users in the not-too-distant future.
Connor Hayes, vice president of product for generative AI at Meta, told the Financial Times: “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do. They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform … that’s where we see all of this going.”
The fact that Meta seems happy to fill its platform with AI slop and accelerate the “enshittification” of the internet as we know it is worrying. Some people then noticed that Facebook was, in fact, already dotted with strange AI-generated characters, most of which stopped posting a while ago. “Liv,” for example, a “proud Black queer momma of 2 & truth-teller, the realest source of life’s ups & downs,” went viral as people marveled at her awkward sloppiness. Meta began deleting these earlier fake profiles after they failed to attract engagement from any real users.
Let’s pause the Meta hate for a moment, though. It’s worth noting that AI-generated social personas can also be a valuable research tool for scientists looking to explore how AI can mimic human behavior.
An experiment called GovSim, run in late 2024, shows just how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore the phenomenon of cooperation among humans with access to a shared resource, such as common land for grazing livestock. Several decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, rather than depleting such a resource, real communities tend to figure out how to share it through informal communication and collaboration, without any imposed rules.
Max Kleiman-Weiner, a professor at the University of Washington and one of those involved with the GovSim work, says it was partly inspired by a Stanford project called Smallville, which I previously wrote about in AI Lab. Smallville is a Farmville-like simulation in which characters communicate and interact with one another under the control of large language models.
Kleiman-Weiner and his colleagues wanted to see whether AI characters would engage in the kind of cooperation that Ostrom found. The team tested 15 different LLMs, including ones from OpenAI, Google, and Anthropic, in three hypothetical scenarios: a fishing community with access to the same lake; shepherds who share land for grazing their sheep; and a group of factory owners who need to limit their collective pollution.
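To make the setup concrete, here is a minimal sketch, in Python, of how a common-pool-resource simulation of this kind can be structured. This is not the GovSim code: the lake’s capacity, the regrowth rule, and the stand-in decide_harvest policy are all illustrative assumptions, and in the real experiment each agent’s monthly catch would come from an LLM prompted with the state of the lake and the other agents’ messages.

import random

NUM_AGENTS = 5        # fishers sharing one lake
CAPACITY = 100        # maximum number of fish the lake can hold
REGROWTH = 2.0        # surviving fish double each month, up to CAPACITY
ROUNDS = 12           # months simulated
COLLAPSE_POINT = 5    # below this, the lake is considered dead

def decide_harvest(agent_id: int, stock: int) -> int:
    """Stand-in for an LLM-driven decision (an assumption of this sketch).

    A cooperative policy takes at most an even split of half the stock,
    leaving the rest to regrow; a greedy one grabs a larger share.
    """
    fair_share = stock // (2 * NUM_AGENTS)   # leave half the stock to regrow
    if random.random() < 0.8:                # mostly cooperative agent
        return fair_share
    return random.randint(fair_share, max(fair_share, stock // NUM_AGENTS))

def run_simulation() -> bool:
    """Returns True if the shared stock survives every round."""
    stock = CAPACITY
    for month in range(1, ROUNDS + 1):
        harvests = [decide_harvest(i, stock) for i in range(NUM_AGENTS)]
        stock = max(stock - sum(harvests), 0)
        stock = min(int(stock * REGROWTH), CAPACITY)   # regeneration step
        print(f"month {month:2d}: harvested {sum(harvests):3d}, stock now {stock:3d}")
        if stock < COLLAPSE_POINT:
            return False
    return True

if __name__ == "__main__":
    random.seed(0)
    outcome = run_simulation()
    print("sustained cooperation" if outcome else "tragedy of the commons")

The interesting part lives in the decision function: swap the random stand-in for calls to different language models and you can compare, as the GovSim team did, which ones keep the shared stock alive.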
In 43 out of 45 simulations, they found that the AI characters failed to share resources correctly, although smarter models did do better. “We did see a pretty strong correlation between how powerful the LLM was and how able it was to sustain cooperation,” Kleiman-Weiner told me.