OpenAI, the company behind the popular AI chatbot ChatGPT, has announced the formation of a team dedicated to managing the risks associated with superintelligent AI systems. In a blog post on July 5, OpenAI stated its belief that superintelligence, which it expects to emerge within this decade, has the potential to be the most impactful technology ever invented. However, it also acknowledged the risks involved and the need for careful control and alignment of such powerful AI systems.
OpenAI says it will dedicate 20% of its computing resources to superintelligence safety research. The company plans to build a “human-level” automated alignment researcher: an AI system capable of understanding human goals and working toward them, which would help manage and align superintelligent systems with human intent and mitigate potential risks.
Ilya Sutskever, OpenAI’s chief scientist, and Jan Leike, the head of alignment at the research lab, have been appointed as co-leaders of this effort. OpenAI has issued an open call for machine learning researchers and engineers to join the team in tackling the challenges associated with superintelligent AI systems.
OpenAI’s announcement comes at a time when governments worldwide are considering measures to regulate the development, deployment, and use of AI systems. The European Union has made notable progress, with the European Parliament voting to adopt its position on the EU AI Act, which includes requirements for disclosing AI-generated content. In the United States, lawmakers have introduced the National AI Commission Act to establish a body responsible for shaping the country’s approach to AI, and Senator Michael Bennet and others have called for the labeling of AI-generated content.
OpenAI’s move to form a dedicated team reflects its recognition of the immense potential and risks associated with superintelligent AI systems. While the company acknowledges the need to control and steer such technology, it has also expressed concerns about potential over-regulation that could hinder innovation. Striking a balance between innovation and safety remains a key challenge in the development of AI technologies.
By dedicating resources and recruiting experts in the field, OpenAI aims to ensure that superintelligence is developed safely and aligned with human values. As global AI regulation continues to evolve, the responsible development and governance of powerful AI systems will be crucial to shaping their positive impact on society.