OpenAI Forms Team To Study 'Catastrophic' AI Risks, Including Nuclear Threats
Published on October 27, 2023 at 12:11AM
OpenAI today announced that it's created a new team to assess, evaluate and probe AI models to protect against what it describes as "catastrophic risks." From a report: The team, called Preparedness, will be led by Aleksander Madry, the director of MIT's Center for Deployable Machine Learning. (Madry joined OpenAI in May as "head of Preparedness," according to LinkedIn.) Preparedness' chief responsibilities will be tracking, forecasting and protecting against the dangers of future AI systems, ranging from their ability to persuade and fool humans (as in phishing attacks) to their malicious code-generating capabilities. Some of the risk categories Preparedness is charged with studying seem more... far-fetched than others. For example, in a blog post, OpenAI lists "chemical, biological, radiological and nuclear" threats as areas of top concern as they pertain to AI models. OpenAI CEO Sam Altman is a noted AI doomsayer, often airing fears -- whether for optics or out of personal conviction -- that AI "may lead to human extinction." But telegraphing that OpenAI might actually devote resources to studying scenarios straight out of sci-fi dystopian novels is a step further than this writer expected, frankly.
Read more of this story at Slashdot.