OpenAI, the company behind ChatGPT, is taking steps to address concerns about AI safety and governance.
CEO Sam Altman recently announced that OpenAI is working with the U.S. AI Safety Institute to provide early access to its next major generative AI model for safety testing.
The move comes amid growing scrutiny of OpenAI’s commitment to AI safety and its influence on policy-making.
> a few quick updates about safety at openai:
>
> as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.
>
> our team has been working with the US AI Safety Institute on an agreement where we would provide…
>
> — Sam Altman (@sama) August 1, 2024
Collaboration with the U.S. AI Safety Institute
The U.S. AI Safety Institute, a federal body housed within the Commerce Department’s National Institute of Standards and Technology (NIST) that assesses and addresses risks in AI platforms, will have the opportunity to test OpenAI’s upcoming AI model before its public release. While details of the agreement are scarce, the collaboration represents a significant step towards increased transparency and external oversight of AI development.
The partnership follows a similar deal OpenAI struck with the U.K.’s AI Safety Institute in June, suggesting a pattern of engagement with government entities on AI safety issues.
Addressing safety concerns
OpenAI’s recent actions appear to be a response to criticism that it has deprioritized AI safety research. The company previously disbanded its Superalignment team, a unit working on controls for “superintelligent” AI systems, a move that led to high-profile resignations and public scrutiny.
In an effort to rebuild trust, OpenAI has:
- Eliminated restrictive non-disparagement clauses.
- Created a safety commission.
- Pledged 20% of its compute resources to safety research.
However, some observers remain skeptical, particularly after OpenAI staffed its safety commission with company insiders and reassigned a top AI safety executive.
Influence on AI policy
OpenAI’s engagement with government bodies and its endorsement of the Future of AI Innovation Act, a proposed Senate bill that would formally authorize the AI Safety Institute, have raised questions about the company’s influence on AI policymaking. The timing of these moves, coupled with OpenAI’s increased lobbying efforts, has led to speculation about potential regulatory capture.
Altman’s position on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board further underscores the company’s growing involvement in shaping AI policy.
Looking ahead
As AI technology continues to advance rapidly, the balance between innovation and safety remains a critical concern. OpenAI’s collaboration with the U.S. AI Safety Institute represents a step towards more transparent and responsible AI development.
However, it also highlights the complex relationship between tech companies and regulatory bodies in shaping the future of AI governance.
The tech community and policymakers will be watching closely to see how this partnership unfolds and what impact it will have on the broader landscape of AI safety and regulation.
Featured image credit: Kim Menikh/Unsplash