In a bold move, a group of tech insiders from top companies like OpenAI, Google’s DeepMind, and Anthropic has come together with an urgent message. They say that without better safety measures, artificial intelligence (AI) could become dangerous enough to threaten the end of humanity.
These 13 tech insiders, mostly former employees, published an open letter on Tuesday. They believe AI can do amazing things for people, but they also see big risks, from widening the gap between rich and poor, to spreading fake news, to losing control of AI systems that act on their own. In the worst case, they say, this could lead to human extinction.
What Is AGI, and Why Are AI Experts Worried About It?
The main worry is about something called AGI, or artificial general intelligence. Think of it as a super-smart computer that’s as clever as a human or even smarter. Right now, AGI is just an idea, but if it becomes real, it could change everything.
The people who signed the letter say companies working on AGI don’t want too many rules. They think these companies care more about making money than making sure their technology is safe.
And it isn’t just the public or the government being left out — even the companies’ own workers are being kept in the dark.
Trouble at OpenAI: Workers Leave as Safety Concerns Grow
This warning comes when there’s a lot of trouble at OpenAI, one of the biggest AI companies.
Many top people have left, including one of the founders, Ilya Sutskever. Some think they’re leaving because they believe the company cares more about profits than making sure its AI is safe.
OpenAI even disbanded a team that was supposed to study long-term AI risks, less than a year after starting it. This makes some people think the company isn’t taking the risks seriously enough.
AI Could Trick Voters and Steal Voices
The risks these experts talk about aren’t just theoretical. Researchers have found that image-making AIs from OpenAI and Microsoft can create fake photos that could mislead people about voting, even though the tools aren’t supposed to allow that.
Even scarier, OpenAI said on Thursday that it stopped five secret groups from using its AI to spread lies on the internet. This shows the dangers are real, not just in theory.
There’s also worry about AI copying people without their permission. Scarlett Johansson, who once played an AI assistant in the movie Her, says OpenAI used her voice for one of its products without her okay. This shows AI could also threaten people’s rights.
AI Companies: “Trust Us” vs. Workers: “We Need More Safety”
When asked about these concerns, a spokeswoman for OpenAI said in a statement:
We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society, and other communities around the world.
Lindsey Held, OpenAI
They also mentioned having ways to keep their products safe, such as a hotline for workers to report problems and a special committee to check their work.
But the people who wrote the letter don’t think that’s enough. They believe that without strong outside control and a culture that puts safety first, AI’s amazing power could become its biggest danger.
Call to Action: Let AI Workers Speak Up Without Fear
The letter is a call for big changes in how AI is made. These tech insiders want AI companies to give their researchers more protection so they can speak up about new developments that worry them. They also want companies to get more input from the public and government on where AI is heading.
They’re demanding that companies stop punishing workers who talk about AI risks. This comes after a problem at OpenAI where departing workers had to choose between losing their vested equity and signing a paper promising not to say bad things about the company. OpenAI later changed this rule, saying it didn’t match the company’s values.
Tech Giants’ Next Big Thing: AI That Talks and Sees Like Humans
This united stand by AI professionals is a big wake-up call. It comes just as AI is getting new, powerful abilities.
New AI assistants can hold real-time voice conversations with humans and understand what they see, like videos or math problems. As these technologies get better at acting human, we need stronger safety rules.
The message from these industry insiders is clear: to use AI’s huge potential for good, we need openness, responsibility, and a strong focus on safety. They’re pushing AI companies to welcome open debate, stop using agreements that silence people, and protect employees who raise concerns.
As AI gets ready to change every part of our lives, from how we vote to how we think about owning our own voices, we can’t ignore these warnings. AI’s promise is huge, but so are its dangers. The choices made today by the people building our AI future will affect generations.
Their simple but powerful request: put human safety before company profits, or risk leading humanity to an AI-driven end.