OpenAI Loses Top AI Safety Experts to Rival Anthropic


In a surprising move, several key employees and researchers are leaving ChatGPT creator OpenAI to join rival AI company Anthropic.

These high-profile departures fuel concerns that OpenAI is prioritizing commercialization over the work needed to keep advanced AI safe and aligned with human values and ethics.

Jan Leike Joins Anthropic’s Ethical AI Mission

Leading the talent exodus is Jan Leike, who recently co-headed OpenAI’s “Superalignment” team, working to ensure superintelligent AI systems stay safe and aligned with human interests.

Leike very publicly resigned from OpenAI, bluntly stating the company’s “safety culture and processes have taken a backseat to shiny products.”

He slammed OpenAI for not devoting sufficient “bandwidth” to next-gen safety initiatives like “security, monitoring, preparedness, adversarial robustness, (super)alignment, confidentiality, and studying societal impacts.”

Leike has now joined Anthropic, where he will focus on “scalable oversight, weak-to-strong generalization, and automated alignment research” to keep advanced AI under human control. Anthropic’s mission to integrate ethical principles into AI development fits perfectly with Leike’s expertise.

His departure is part of a broader trend of talent leaving OpenAI, driven by the belief that the company prioritizes commercial interests over strict AI ethics and safety under CEO Sam Altman’s leadership.

Earlier this month, OpenAI co-founder Ilya Sutskever – also a staunch advocate for AI safety – abruptly exited in the aftermath of the attempted boardroom coup against Altman last November. Given his steadfast commitment to safety, Sutskever is viewed as a prime candidate to join Anthropic’s growing pool of ethics-focused AI talent alongside Leike.

Other recent exits include policy researcher Gretchen Krueger. Adding to the turmoil, OpenAI drew fire for allegedly mimicking Scarlett Johansson’s voice likeness without her consent – yet another crisis over its ethical AI development practices.

Leike emphasized the need for OpenAI to focus on responsible AI practices, stating, “We are long overdue in getting incredibly serious about the implications of AGI…We must prioritize preparing for them as best as we can.”

OpenAI Board Launches Crisis Safety Committee

Clearly shaken by the AI brain drain to Anthropic, OpenAI’s board formed an emergency “Safety and Security Committee” mirroring the priorities pushed by departed researchers like Leike and Sutskever.

Led by directors Bret Taylor, Adam D’Angelo, Nicole Seligman and CEO Altman, this committee will make critical safety and security recommendations as OpenAI advances its high-stakes “next frontier” AI model development.

However, the overdue move looks like an attempt to reclaim the ethical high ground that Anthropic is rapidly seizing on the safety front.

OpenAI is trying to show that it remains committed to AI safety and ethics, aiming to match or even surpass the standards set by its competitor, Anthropic. The response underscores intensifying competition in the AI industry, where leadership in safety and ethical practice is becoming a genuine differentiator.

Anthropic’s Meteoric Rise

Backed by $4 billion from Amazon and funding from other investors, Anthropic is swiftly establishing itself as the leading destination for top AI researchers who prioritize responsible, robustly safe development over rushed commercialization.

Founded by former OpenAI executives Dario and Daniela Amodei, Anthropic explicitly aims to “integrate ethical principles into AI development” and to lead on safety impact analysis – positioning itself as the fundamental alternative to OpenAI’s trajectory.

As the battle intensifies between OpenAI and Anthropic over AI’s ethical soul, one truth is inescapable: developing AGI that exceeds human-level performance across domains may be this century’s greatest challenge. The divide between these firms’ safety philosophies could not be more profound.

If Anthropic’s rapid ascent signals that ethics can win out over commercial hype in frontier AI development, the stakes for humanity’s future are extraordinary. Protecting that future may hinge on deliberate caution prevailing over a reckless AI arms race.
