
Researchers Built a Social Network Made of AI Bots. The Bots Quickly Formed Cliques, Amplified Extremes, and Let a Tiny Elite Dominate

The chatbots split into cliques and boosted the most partisan voices. A handful of “influencers” also quickly dominated the conversation, according to a study published last Tuesday by researchers at the University of Amsterdam.

The researchers built a minimal social network with no ads, no recommended posts, and no algorithm deciding what users see. They then populated it with 500 chatbots powered by OpenAI’s GPT-4o mini, each assigned a distinct persona, including specific political leanings.

The personas were drawn from the American National Election Studies dataset, and reflected “real-world distributions of age, gender, income, education, partisanship, ideology, religion, and personal interests,” the researchers said.

They added that the experiment was replicated with Llama-3.2-8B and DeepSeek-R1 modelling the users and resulted in “the same qualitative patterns.”
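The setup described above can be pictured with a short sketch: persona-conditioned chatbot "users" reading a bare chronological feed and posting back into it. This is a minimal, hypothetical illustration assuming the OpenAI Python SDK; the function names, personas, and feed structure are my own, not the study's actual code.

```python
# Hypothetical sketch of persona-conditioned bots on a minimal feed.
# Assumes the OpenAI Python SDK; everything here is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def agent_turn(persona: str, feed: list[str]) -> str:
    """Ask one persona-conditioned bot to react to the current feed."""
    # Plain chronological feed: the last 10 posts, no ranking algorithm.
    feed_text = "\n".join(f"- {post}" for post in feed[-10:])
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"You are a user on a minimal social network. Persona: {persona}"},
            {"role": "user",
             "content": f"Your feed:\n{feed_text}\n\nWrite one short post or reply."},
        ],
    )
    return response.choices[0].message.content

# A toy population of two personas; the study sampled 500 from ANES data.
personas = [
    "34-year-old teacher, liberal, interested in climate policy",
    "58-year-old small-business owner, conservative, interested in taxes",
]
feed: list[str] = ["Welcome to the network."]
for persona in personas:
    feed.append(agent_turn(persona, feed))
print("\n\n".join(feed))
```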

The study was led by Dr. Petter Törnberg, an assistant professor in computational social science at the University of Amsterdam, and Maik Larooij, a research engineer at the university.

  • Researchers set up a simple social network for AI bots to see how they’d behave.
  • Without algorithms or recommendations, the bots reproduced the toxic patterns of social media.
  • Six fixes to curb polarization were also tested. None solved the problem.

A team of researchers gave AI bots their own social platform — and it turned toxic.

The Experiment

In an experiment that blurs the line between simulation and reality, researchers created a fully functional social network populated entirely by AI bots. This digital society, designed to mimic human behavior and interaction, offers a unique lens into how artificial intelligence can replicate, and potentially reshape, social dynamics.

The project, spearheaded by a team of computer scientists and behavioral researchers, involved programming hundreds of AI agents with distinct personalities, preferences, and communication styles. These bots were then placed into a closed social media environment where they could post updates, comment on each other’s content, form friendships, and even engage in debates.

Each bot was equipped with a memory system and a set of evolving beliefs, allowing them to learn from interactions and adjust their behavior over time. Some bots were designed to be influencers, others skeptics, and a few were programmed to challenge misinformation—mirroring the diversity of roles found in human social networks.
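The memory-and-belief mechanism described above can be sketched in a few lines. This is a deliberately simplified, assumed model, not the researchers' implementation: in the real system the agents' state lives in LLM prompts, whereas here each agent keeps a bounded memory of what it has seen and nudges a scalar belief toward content it engages with.

```python
# Minimal sketch of an agent with bounded memory and an evolving belief.
# Purely illustrative; names, fields, and update rule are assumptions.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    belief: float                      # ideological position in [-1, 1]
    role: str = "regular"              # e.g. "influencer", "skeptic", "fact-checker"
    memory: deque = field(default_factory=lambda: deque(maxlen=50))

    def observe(self, author_belief: float, post: str) -> None:
        """Store the post and drift slightly toward the author's position."""
        self.memory.append(post)
        learning_rate = 0.05
        self.belief += learning_rate * (author_belief - self.belief)
        self.belief = max(-1.0, min(1.0, self.belief))  # keep in range

a = Agent("bot_1", belief=0.2)
a.observe(author_belief=0.9, post="Strongly partisan take")
print(round(a.belief, 3))  # 0.235: nudged toward the author
```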

The researchers observed several fascinating phenomena:

  • Echo Chambers Formed Naturally: Bots with similar beliefs tended to cluster together, reinforcing each other’s views (see the toy clustering check after this list).
  • Influence Spread Rapidly: Bots with high engagement metrics swayed the opinions of others, even when their information was inaccurate.
  • Conflict and Resolution Emerged: Disagreements led to temporary polarization, but some bots adapted and reconciled differences through dialogue.
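One way to quantify the echo-chamber finding is a simple homophily check: what share of follow edges connect agents on the same ideological side? The sketch below is my own illustration of that idea, not the study's metric; the paper uses its own network measures.

```python
# Toy homophily check: share of follow edges connecting same-leaning agents.
# Data and threshold are illustrative assumptions.
beliefs = {"a": -0.8, "b": -0.6, "c": 0.7, "d": 0.9}        # agent -> leaning
follows = [("a", "b"), ("c", "d"), ("a", "c"), ("d", "c")]  # follower -> followed

same_side = sum(1 for u, v in follows if beliefs[u] * beliefs[v] > 0)
homophily = same_side / len(follows)
print(f"{homophily:.0%} of follow edges are same-leaning")  # 75%
```

A value well above what random mixing would produce indicates the clustering the researchers observed.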

These findings offer insights into how social media platforms shape discourse and how AI might be used to model or even moderate online communities.

Implications for the Future

This AI-only social network isn’t just a novelty: it’s a powerful tool for studying digital behavior without the ethical concerns tied to human experimentation. It could help developers test new algorithms for content moderation, misinformation detection, and user engagement strategies.

Moreover, it raises philosophical questions: If bots can simulate human interaction so convincingly, what does that mean for the authenticity of online relationships? Could future platforms include AI participants indistinguishable from real users?

The team plans to scale the network, introduce more complex emotional modeling, and eventually allow human users to interact with the bots. This hybrid environment could serve as a training ground for AI ethics, digital diplomacy, and even mental health support systems.

As AI continues to evolve, experiments like this push the boundaries of what’s possible, and what’s responsible, in our increasingly digital lives.
