As AI works its way into our lives, how it behaves socially is becoming a pressing question. A new study suggests AI models build social networks in much the same way as humans.
Tech companies are enamored with the idea that agents (autonomous bots powered by large language models) will soon work alongside humans as digital assistants in everyday life. But for that to happen, these agents will need to navigate humanity's complex social structures.
This prospect prompted researchers at Arizona State University to investigate how AI systems might approach the delicate task of social networking. In a recent paper in PNAS Nexus, the team reports that models such as GPT-4, Claude, and Llama appear to behave like humans by seeking out already popular peers, connecting with others via existing friends, and gravitating toward those similar to themselves.
"We find that [large language models] not only mimic these principles but do so with a level of sophistication that closely aligns with human behaviors," the authors write.
To analyze how AI might form social structures, the researchers assigned the models a series of controlled tasks in which they were given information about a network of hypothetical individuals and asked to decide whom to connect with. The team designed the experiments to probe the extent to which the models would replicate three key tendencies in human networking behavior.
The first tendency is known as preferential attachment, in which individuals link up with already well-connected people, creating a kind of "rich get richer" dynamic. The second is triadic closure, in which individuals are more likely to connect with friends of friends. And the final behavior is homophily, the tendency to connect with others who share similar attributes.
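To make the first of these tendencies concrete, here is a minimal sketch in plain Python of degree-weighted network growth, the "rich get richer" rule behind preferential attachment. The `grow_network` function and its parameters are illustrative assumptions, not the paper's actual experimental setup: each new node picks its links with probability proportional to existing nodes' degrees, so early, well-connected nodes snowball into hubs.

```python
import random

random.seed(42)

def grow_network(n_nodes, m_links=2):
    """Grow a network by preferential attachment: each new node
    links to m_links existing nodes, chosen with probability
    proportional to their current degree ("rich get richer")."""
    degree = {0: 1, 1: 1}   # seed the network with one linked pair
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        targets = set()
        # Sample distinct targets, weighted by current degree.
        while len(targets) < min(m_links, len(degree)):
            nodes = list(degree)
            weights = [degree[v] for v in nodes]
            targets.add(random.choices(nodes, weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        degree[new] = len(targets)
    return degree

deg = grow_network(200)
avg = sum(deg.values()) / len(deg)
print(f"average degree: {avg:.1f}, hub degree: {max(deg.values())}")
```

Running this shows the hallmark of preferential attachment: the average degree stays small while a handful of hubs accumulate many times more connections than a typical node.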
The team found the models mirrored all of these very human tendencies in its experiments, so it decided to test the algorithms on more realistic problems.
They borrowed datasets capturing three different kinds of real-world social network: groups of friends at school, nationwide phone-call records, and internal company data mapping the communication history between employees. They then fed the models various details about individuals within these networks and had them reconstruct the connections step by step.
Across all three networks, the models replicated the kind of decision making seen in humans. The most dominant effect tended to be homophily, though the researchers reported that in the company communication setting they saw what they called "career-advancement dynamics," with lower-level employees consistently preferring to connect with higher-status managers.
Finally, the team compared the AI's decisions to humans' directly, enlisting more than 200 participants and giving them the same task as the machines. Each had to choose which individuals to connect with in a network under two different contexts: forming friendships at school and making professional connections at work. Both humans and AI prioritized connecting with people similar to them in the friendship setting and with more popular people in the professional setting.
The researchers say the high level of consistency between AI and human decision making could make these models useful for simulating human social dynamics. That could be helpful in social science research but also, more practically, for things like testing how people might respond to new regulations or how changes to moderation rules might reshape social networks.
However, they also note this means agents could reinforce some less desirable human tendencies as well, such as the inclination to create echo chambers, information silos, and rigid social hierarchies.
In fact, they found that while there were some outliers in the human groups, the models were more consistent in their decision making. That suggests introducing them into real social networks could reduce the overall diversity of behavior, reinforcing any structural biases in those networks.
Either way, it seems future human-machine social networks may end up looking more familiar than one might expect.

