OpenAI forms a new team to study child safety

Under scrutiny from activists (and parents), OpenAI has formed a new team to study ways to prevent its AI tools from being misused or abused by kids.

In a new job listing on its careers page, OpenAI reveals the existence of a Child Safety team, which the company says is working with platform policy, legal and investigations groups within OpenAI, as well as outside partners, to manage “processes, incidents, and reviews” relating to underage users.

The team is currently looking to hire a child safety enforcement specialist, who will be responsible for applying OpenAI’s policies in the context of AI-generated content and working on review processes related to “sensitive” (presumably kid-related) content.

Tech vendors of a certain size dedicate a fair amount of resources to complying with laws like the U.S. Children’s Online Privacy Protection Rule, which mandate controls over what kids can and can’t access on the web, as well as what kinds of data companies can collect on them. So the fact that OpenAI is hiring child safety experts doesn’t come as a complete surprise, particularly if the company expects a significant underage user base in the future. (OpenAI’s current terms of use require parental consent for children ages 13 to 18 and prohibit use for kids under 13.)

But the formation of the new team, which comes several weeks after OpenAI announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines and landed its first education customer, also suggests a wariness on OpenAI’s part of running afoul of policies pertaining to minors’ use of AI, and of the negative press that would follow.

Kids and teens are increasingly turning to GenAI tools for help not only with schoolwork but with personal issues. According to a poll from the Center for Democracy and Technology, 29% of kids report having used ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

Some see this as a growing risk.

Last summer, schools and colleges rushed to ban ChatGPT over plagiarism and misinformation fears. Since then, some have reversed their bans. But not all are convinced of GenAI’s potential for good, pointing to surveys like the U.K. Safer Internet Centre’s, which found that over half of kids (53%) report having seen people their age use GenAI in a negative way, for example creating believable false information or images used to upset someone.

In September, OpenAI published documentation for ChatGPT in classrooms with prompts and an FAQ to offer educators guidance on using GenAI as a teaching tool. In one of the support articles, OpenAI acknowledged that its tools, specifically ChatGPT, “may produce output that isn’t appropriate for all audiences or all ages” and advised “caution” with exposure to kids, even those who meet the age requirements.

Calls for guidelines on kids’ use of GenAI are growing.

The UN Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of GenAI in education, including implementing age limits for users and guardrails on data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” Audrey Azoulay, UNESCO’s director-general, said in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”