As artificial intelligence agents become more advanced, it could become increasingly difficult to distinguish between AI-powered users and real humans on the internet. In a recent white paper, researchers from MIT, OpenAI, Microsoft, and other tech companies and academic institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online, while preserving their privacy.
MIT News spoke with two co-authors of the paper, Nouran Soliman, an electrical engineering and computer science graduate student, and Tobin South, a graduate student in the Media Lab, about the need for such credentials, the risks associated with them, and how they could be implemented in a safe and equitable way.
Q: Why do we need personhood credentials?
Tobin South: AI capabilities are rapidly improving. While a lot of the public discourse has been about how chatbots keep getting better, sophisticated AI enables far more capabilities than just a better ChatGPT, like the ability of AI to interact online autonomously. AI could have the ability to create accounts, post content, generate fake content, pretend to be human online, or algorithmically amplify content at a massive scale. This unlocks a lot of risks. You can think of this as a "digital imposter" problem, where it is getting harder to distinguish between sophisticated AI and humans. Personhood credentials are one potential solution to that problem.
Nouran Soliman: Such advanced AI capabilities could help bad actors run large-scale attacks or spread misinformation. The internet could be filled with AIs that are resharing content from real humans to run disinformation campaigns. It is going to become harder to navigate the internet, and social media in particular. You could imagine using personhood credentials to filter out certain content and moderate content on your social media feed, or to determine the trust level of information you receive online.
Q: What is a personhood credential, and how can you ensure such a credential is secure?
South: Personhood credentials allow you to prove you are human without revealing anything else about your identity. These credentials let you take information from an entity like the government, which can guarantee you are human, and then, through privacy technology, allow you to prove that fact without sharing any sensitive information about your identity. To get a personhood credential, you are going to have to show up in person or have a relationship with the government, like a tax ID number. There is an offline component. You are going to have to do something that only humans can do. AIs can't turn up at the DMV, for instance. And even the most sophisticated AIs can't fake or break cryptography. So we combine two ideas, the security that we have through cryptography and the fact that humans still have some capabilities that AIs don't, to make really robust guarantees that you are human.
Soliman: But personhood credentials can be optional. Service providers can let people choose whether they want to use one or not. Right now, if people only want to interact with real, verified people online, there is no reasonable way to do it. And beyond just creating content and talking to people, at some point AI agents are also going to take actions on behalf of people. If I am going to buy something online, or negotiate a deal, then maybe in that case I want to be certain I am interacting with entities that have personhood credentials to ensure they are trustworthy.
South: Personhood credentials build on top of an infrastructure and a set of security technologies we've had for decades, such as the use of identifiers like an email account to sign in to online services, and they can complement those existing methods.
Q: What are some of the risks associated with personhood credentials, and how could you reduce those risks?
Soliman: One risk comes from how personhood credentials could be implemented. There is a concern about concentration of power. Let's say one specific entity is the only issuer, or the system is designed in such a way that all the power is given to one entity. This could raise a lot of concerns for a part of the population; maybe they don't trust that entity and don't feel it is safe to engage with them. We need to implement personhood credentials in such a way that people trust the issuers, and ensure that people's identities remain completely isolated from their personhood credentials to preserve privacy.
South: If the only way to get a personhood credential is to physically go somewhere to prove you are human, then that could be scary for people who are in a sociopolitical environment where it is difficult or dangerous to go to that physical location. That could prevent some people from having the ability to share their messages online in an unfettered way, possibly stifling free expression. That's why it is important to have a variety of issuers of personhood credentials, and an open protocol to make sure freedom of expression is maintained.
Soliman: Our paper is trying to encourage governments, policymakers, leaders, and researchers to invest more resources in personhood credentials. We are suggesting that researchers study different implementation directions and explore the broader impacts personhood credentials could have on the community. We need to make sure we create the right policies and rules about how personhood credentials should be implemented.
South: AI is moving very fast, certainly much faster than the speed at which governments adapt. It is time for governments and big companies to start thinking about how they can adapt their digital systems to be ready to prove that someone is human, but in a way that is privacy-preserving and safe, so we can be ready when we reach a future where AI has these advanced capabilities.