OpenAI establishes team to control superintelligent AI
OpenAI is forming a dedicated team to manage the risks of superintelligent artificial intelligence (AI). Superintelligence refers to a hypothetical AI model that surpasses human intelligence across a wide range of domains, far exceeding the capabilities of current models. OpenAI expects such a model could arrive before the end of the decade. While superintelligence could help address some of the world's most important problems, OpenAI warns that its vast power must be managed to prevent the disempowerment of humanity, or even human extinction.
OpenAI Chief Scientist Ilya Sutskever and Jan Leike, the research lab's head of alignment, will co-lead the new team. OpenAI is committing 20 percent of its current compute to the effort, with the goal of building an automated alignment researcher: a system that would help OpenAI keep superintelligent AI safe and aligned with human values. OpenAI acknowledges that the goal is ambitious and that success is not guaranteed, but it remains optimistic that a focused, concerted effort can solve the problem. Pointing to promising ideas from preliminary experiments, increasingly useful metrics for measuring progress, and the ability to study the problem empirically with today's models, OpenAI says it plans to share a roadmap for the work ahead.
Wednesday's announcement comes amid ongoing global discussion about how to regulate the emerging AI industry. OpenAI CEO Sam Altman has met with more than 100 federal lawmakers in recent months, stressing the importance of AI regulation and the company's willingness to work with policymakers. Still, such proclamations, including initiatives like OpenAI's Superalignment team, deserve a critical eye. By directing public attention toward hypothetical future risks that may never materialize, they risk deferring the harder regulatory work on AI's immediate harms: its effects on labor, misinformation, and copyright. Policymakers should prioritize these pressing issues now rather than focusing solely on distant concerns.
Labels: AI, AI developments, AI regulation, Ilya Sutskever, OpenAI, Sam Altman