Mrinank Sharma, who led AI safety research at Anthropic, has stepped down from his role, warning that the world is becoming increasingly fragile as technology grows more powerful and complex.
In a public resignation message, Sharma said humanity is living through a period of overlapping global crises, and advanced AI systems could worsen these problems if they are not guided by strong ethical frameworks. He stressed that intelligence and capability alone are not enough — wisdom and values must grow at the same pace.
At Anthropic, Sharma worked on building safeguards into large language models, focusing on preventing misuse and unintended harm. His research examined risks ranging from AI-assisted biological threats to the subtle ways in which machines can influence human thinking and decision-making.
However, Sharma acknowledged that translating ethical ideals into daily practice within fast-moving technology companies is deeply challenging. Commercial pressure, competition, and rapid innovation often leave little room for reflection, he said, even when safety is a stated priority.
Rather than moving to another AI company, Sharma has chosen a very different path. He said he plans to devote his time to poetry and reflective writing, believing that creative expression offers a better way to engage with moral questions and the human consequences of technology.
His decision has resonated across the global tech community, especially among researchers who worry that AI development is accelerating faster than regulation, public understanding, and cultural readiness. Sharma’s warning echoes broader concerns about whether current safety efforts are sufficient as AI systems become more autonomous and influential.
The resignation comes at a time when AI companies are under growing scrutiny from governments, civil society, and their own employees.