The race towards artificial general intelligence (AGI) is heating up, with OpenAI unveiling bold plans to develop radically advanced AI capabilities unmatched by current systems like GPT-4 and ChatGPT.
The San Francisco AI leader announced it has begun training its next groundbreaking model with the goal of achieving “the next level of capabilities” and moving closer to AGI – AI that can match humans across any cognitive domain.
Building AGI, Not Superintelligent Systems
However, OpenAI seems to be course-correcting slightly on its AGI ambitions. Company VP Anna Makanju stated OpenAI’s “mission is to build AGI” on par with human abilities, striking a more measured tone than CEO Sam Altman’s previous comments about devoting substantial efforts towards “superintelligence” research.
The development of this powerful new frontier model will be overseen by OpenAI’s newly formed Safety and Security Committee, led by board directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Altman.
This high-level group’s first priority over the next 90 days is to conduct a comprehensive evaluation and enhancement of OpenAI’s safety processes, protocols and technical safeguards for advanced AI.
Joining the committee are key OpenAI experts like Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Chief Scientist Jakub Pachocki. The committee will also consult prominent external advisors including former cybersecurity officials Rob Joyce and John Carlin.
After the 90-day review, the committee will present its recommendations to OpenAI’s full board. Approved measures will then be publicly disclosed by the company, with appropriate safeguards to protect critical security information.
Addressing Concerns After Top AI Safety Researchers’ Exits
The formation of the oversight committee represents a step towards assuaging growing concerns around the breakneck pace of advanced AI development and the potential risks posed by superintelligent systems.
The initiative comes on the heels of recent high-profile resignations of OpenAI’s top AI safety researchers, including co-founder Ilya Sutskever, who led the “Superalignment” team focused on superintelligence safety before an internal clash with Altman. Several of Sutskever’s team members also exited shortly afterward.
As OpenAI pushes the technological envelope, it also finds itself fending off intensifying competition from deep-pocketed rivals such as Google and Elon Musk’s newly launched AI startup, xAI. Concurrently, the company is making efforts to reassure policymakers, regulators and the public that it is prioritizing responsible development practices.
The Pursuit of Human-Level AGI: Promises and Perils
The training of transformative new AI models like OpenAI’s can take months or even years, leveraging massive troves of data from digital sources to imbue the software with enhanced comprehension and capabilities. Given these lengthy timelines, the world may not see tangible results from this latest initiative for nine months to a year at minimum.
As OpenAI charges ahead with this powerful but risky new AI development, the company’s handling of safety issues and transparency around its processes will be closely watched by rivals, policymakers and the AI ethics community.
With both great promise and peril potentially at stake, the company’s self-appointed safety vanguards have their work cut out for them in keeping OpenAI’s ambitions in check as it pushes the frontiers of artificial intelligence.