Anthropic AI safety researcher quits, says the ‘world is in peril’

An artificial intelligence researcher left his job at the U.S. firm Anthropic this week with a cryptic warning about the state of the world, marking the latest resignation in a wave of exits over safety risks and ethical dilemmas.

In a letter posted on X, Mrinank Sharma wrote that he had achieved all he had hoped during his time at the AI safety company and was happy with his efforts, but was leaving over fears that the “world is in peril,” not just because of AI, but from a “whole series of interconnected crises,” ranging from bioterrorism to concerns over the industry’s “sycophancy.”


He said he felt called to writing, to pursue a degree in poetry and to devote himself to “the practice of courageous speech.”

“Throughout my time here, I’ve repeatedly seen how hard it is to really let our values govern our actions,” he continued.

Anthropic was founded in 2021 by a breakaway group of former OpenAI employees who pledged to take a more safety-focused approach to AI development than its competitors.


Sharma led the company’s AI safeguards research team.

Anthropic has released reports outlining the safety of its own products, including Claude, its hybrid-reasoning large language model, and markets itself as a company committed to building reliable and understandable AI systems.

The company faced criticism last year after agreeing to pay US$1.5 billion to settle a class-action lawsuit from a group of authors who alleged the company used pirated versions of their work to train its AI models.


Sharma’s resignation comes the same week OpenAI researcher Zoë Hitzig announced her resignation in an essay in The New York Times, citing concerns about the company’s advertising strategy, including placing ads in ChatGPT.

“I once believed I could help the people building A.I. get ahead of the problems it could create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer,” she wrote.

“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”


Anthropic and OpenAI recently became embroiled in a public spat after Anthropic released a Super Bowl commercial criticizing OpenAI’s decision to run ads on ChatGPT.

In 2024, OpenAI CEO Sam Altman said he was not a fan of using ads and would deploy them as a “last resort.”

Last week, he disputed the commercial’s claim that embedding ads was deceptive in a lengthy post criticizing Anthropic.

“I suppose it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it,” he wrote, adding that ads will continue to enable free access, which he said creates “agency.”


Hitzig and Sharma, employees at competing firms, both expressed grave concern about the erosion of guiding principles established to preserve the integrity of AI and protect its users from manipulation.

Hitzig wrote that a possible “erosion of OpenAI’s own principles to maximize engagement” might already be happening at the firm.

Sharma said he was concerned about AI’s capability to “distort humanity.”

© 2026 Global News, a division of Corus Entertainment Inc.

