OpenAI launches new research team to control superintelligence in AI
The maker of the generative artificial intelligence (AI) platform ChatGPT has confirmed that it will create a new team to solve the challenge of controlling superintelligent AI.
OpenAI predicts that AI systems will achieve superintelligence before the end of the decade, which may pose significant risks to humanity. OpenAI hopes to make enough technological breakthroughs within four years to “steer and control AI systems much smarter than us.”
“But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” OpenAI said. “While superintelligence seems far off now, we believe it could arrive this decade.”
To undertake the daunting task, OpenAI announced that it is hiring machine learning experts to join its superintelligence alignment team. The team will be led by OpenAI co-founder Ilya Sutskever and Head of Alignment Jan Leike, supported by researchers from other OpenAI units.
OpenAI says it will earmark 20% of its resources for the new team and will leverage its previous studies to get a head start. The firm is adopting a three-pronged strategy to create a “human-level automated alignment researcher” to “iteratively align superintelligence.”
OpenAI stated that it would achieve this by developing a scalable training method, validating the resulting model, and using adversarial methods to stress test the alignment pipeline. Although the plan looks feasible on paper, OpenAI acknowledged that success is far from certain, but maintained that the effort is a risk worth taking.
“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” OpenAI remarked.
In June, OpenAI launched a $1 million grant program to support researchers building projects at the intersection of cybersecurity and AI. The fund will focus on “attack-minded” projects, with successful applicants receiving up to $10,000 in direct funding.
OpenAI faces increasing regulatory scrutiny
Following the launch of GPT-3 and its successor GPT-4, OpenAI faced scathing opposition from regulators in the EU, coming within a hair’s breadth of being banned in Italy. Consumer groups and critics pointed out the risks the generative AI platform poses to the finance, Web3, security, news, and education sectors.
In the U.S., the company is facing a class action lawsuit over the alleged illegal scraping of the personal data of millions of individuals used to train its AI models. The plaintiffs allege that OpenAI breached privacy and copyright laws by failing to seek the consent of those individuals.
To smooth strained relations with regulators, OpenAI CEO Sam Altman met with EU authorities in Brussels to speak on the downsides of overregulation. Altman has since toured over 16 cities across three continents as the firm navigates a minefield of regulatory uncertainty.