An era in which AI surpasses experts
On the 22nd, OpenAI, the developer of ChatGPT, published a post on its official blog titled “Governance of superintelligence,” arguing that, in preparation for the advent of AI far more capable than today’s systems, we should correctly understand superintelligence and begin preparing countermeasures now.
Initial ideas for governance of superintelligence, including forming an international oversight organization for future AI systems much more capable than any today: https://t.co/9hJ9n2BZo7
— OpenAI (@OpenAI) May 22, 2023
The post, co-authored by Chief Executive Officer Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever, is premised on the argument that AI technology will evolve exponentially over the next decade.
Given the current trajectory, they expect that within the next 10 years AI will exceed the skill level of experts in most domains and carry out productive activity comparable to that of today’s largest companies.
The blog calls such advanced AI “superintelligence.” Superintelligence, they write, will be more powerful than any technology humanity has had to contend with in the past and could lead to a “dramatically more prosperous future,” but it is also risky enough to require “special treatment and coordination.”
The moment when rapidly advancing AI surpasses human intelligence and ability is sometimes called the “singularity” (technological singularity). Alongside prospects of self-evolution and self-repair, security risks have also been pointed out for critical infrastructure systems such as finance, the military, and medical care.
Suggestions as a starting point
OpenAI’s executives made the following three proposals on points to keep in mind during the development of AI technology so that humans and superintelligence can coexist successfully:
- Enhanced coordination among leading developers and companies
- Establishment of an international oversight body along the lines of the International Atomic Energy Agency (IAEA)
- Technical capability to make superintelligence safe
One possible way to coordinate superintelligence development would be for the world’s major governments to set up a joint project and fold existing development efforts into it. Another proposed approach is for developers at the frontier to agree to limit the rate of growth in AI capability.
The blog compares the use of superintelligence to that of nuclear power, which carries higher risks but also greater benefits. From that perspective, it proposes establishing an international body, modeled on the IAEA, to inspect systems, require audits, test compliance with safety standards, and impose restrictions on security levels.
The technical capability to make superintelligence safe remains an open research question, one that many researchers, including those at OpenAI, are actively working on.
About regulation
OpenAI advocates introducing regulations, such as licensing and audits of development, commensurate with the capabilities of an AI model. At the same time, the post stresses the importance of not subjecting companies and projects that develop models below a given capability threshold to burdensome regulation.
The systems Altman and his co-authors are concerned about will have power beyond any technology created so far; applying similar standards to technology far below that bar, they cautioned, would only stifle development.
Altman made similar remarks about capability-based government regulation of AI at a hearing before the U.S. Senate Judiciary Committee on May 16.
Related: OpenAI CEO Altman Calls for AI Regulations at Senate Hearings
Why OpenAI Continues Development
OpenAI is keenly aware that building a democratic decision-making mechanism for the governance and deployment of superintelligent systems is a very difficult task and a major risk.
Still, there are two fundamental reasons for continuing to build AI technology.
First, they write, “we believe it’s going to lead to a much better world than what we can imagine today,” with early examples already visible in education, creative work, and personal productivity.
Second, they point out that stopping the creation of superintelligence would be not only difficult but risky. The cost of developing superintelligence is falling, and the number of developers is rising rapidly; it sits on the inherent “technological path” of the age. Preventing its emergence would require something like a global surveillance regime, and even that is not guaranteed to work.
That’s why, they conclude, we have to get it right.
Article provided by: THE BLOCK