Scientists Created a Social Network of Bots
Researchers at the University of Amsterdam have modeled a social network entirely populated by AI-powered chatbots. Their goal was to test whether online platforms can be made less toxic.
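To make the setup concrete, here is a minimal sketch of such a generative social simulation: agents post into a shared timeline, read an engagement-ranked feed, and decide whom to repost and follow. The class names, parameters, and the simple "toxicity" score are illustrative assumptions only; the actual study drives each agent's decisions with a large language model persona, which the stub function below merely stands in for.

```python
# Toy sketch of a bot-populated social network (assumptions noted in comments).
import random
from dataclasses import dataclass, field

@dataclass
class Post:
    author: int
    toxicity: float          # 0 = neutral, 1 = highly toxic (simplified proxy)
    reposts: int = 0

@dataclass
class Agent:
    id: int
    following: set = field(default_factory=set)

def agent_reacts(agent: Agent, post: Post) -> bool:
    """Stub for the LLM decision step: assume toxic posts attract more engagement."""
    return random.random() < 0.2 + 0.6 * post.toxicity

def run_simulation(n_agents: int = 50, steps: int = 20) -> list[Post]:
    agents = [Agent(i) for i in range(n_agents)]
    posts: list[Post] = []
    for _ in range(steps):
        for agent in agents:
            # Each agent publishes one post with a random "toxicity" level.
            posts.append(Post(author=agent.id, toxicity=random.random()))
        for agent in agents:
            # Each agent sees a small engagement-ranked feed and may repost/follow.
            feed = sorted(posts, key=lambda p: p.reposts + p.toxicity, reverse=True)[:10]
            for post in feed:
                if post.author != agent.id and agent_reacts(agent, post):
                    post.reposts += 1
                    agent.following.add(post.author)
    return posts

if __name__ == "__main__":
    posts = run_simulation()
    top = sorted(posts, key=lambda p: p.reposts, reverse=True)[:5]
    print("Most-reposted posts (toxicity, reposts):",
          [(round(p.toxicity, 2), p.reposts) for p in top])
```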
The experiment tested six intervention strategies, including chronological feeds, exposure to diverse viewpoints, hiding social statistics, and removing user bios. However, none of these interventions significantly reduced polarization or extremism. In fact, some made things worse: while chronological feeds reduced the inequality of attention, they also amplified the most extreme content.
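As a rough illustration of how such interventions can be expressed, the sketch below swaps an engagement-ranked feed for a chronological one. The function and field names are hypothetical and not taken from the study's code.

```python
# Hedged sketch: two alternative feed-ranking rules (illustrative names only).
def rank_feed(posts, mode="engagement"):
    if mode == "chronological":
        # Chronological feed intervention: newest first, ignoring engagement signals.
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)
    # Baseline engagement ranking: the dynamic the interventions try to change.
    return sorted(posts, key=lambda p: p["reposts"] + p["likes"], reverse=True)

feed = rank_feed(
    [
        {"id": 1, "created_at": 10, "reposts": 120, "likes": 300},
        {"id": 2, "created_at": 20, "reposts": 2, "likes": 5},
    ],
    mode="chronological",
)
print([p["id"] for p in feed])  # -> [2, 1]: the newest post comes first
```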
“Toxic content shapes network structures and influences post visibility, creating a vicious circle of toxicity,” explained Associate Professor Petter Törnberg. He stressed that artificial intelligence cannot be seen as a “perfect solution” since it mirrors human biases and limitations.
The study also revealed “extreme inequality of attention,” where a small fraction of posts receive most of the visibility. Combined with generative AI designed to maximize engagement, this dynamic could further escalate polarization and disinformation.
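One common way to quantify that kind of concentration is a Gini coefficient over per-post visibility, sketched below. Both the choice of metric and the use of repost counts as the visibility signal are assumptions made for illustration; the article does not specify how the inequality of attention was measured.

```python
# Gini coefficient over per-post visibility (illustrative metric, not the study's).
def gini(values):
    """0 = attention evenly spread, close to 1 = concentrated on a few posts."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the rank-weighted sum of sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# A handful of posts get nearly all the reposts -> Gini close to 1.
print(round(gini([0, 0, 1, 2, 3, 500, 900]), 2))  # -> 0.75
```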
According to Törnberg, current social media models may be fundamentally flawed. Even with algorithmic interventions, platforms remain breeding grounds for polarization and extremism.
“I find it hard to imagine how traditional social networks can survive under these conditions,” he concluded.