Google Adds Generative AI Threats To Its Bug Bounty Program
Published on October 27, 2023 at 01:30AM
Google has expanded its vulnerability rewards program (VRP) to include attack scenarios specific to generative AI. From a report: In an announcement shared with TechCrunch ahead of publication, Google said: "We believe expanding the VRP will incentivize research around AI safety and security and bring potential issues to light that will ultimately make AI safer for everyone." Google's vulnerability rewards program (or bug bounty) pays ethical hackers for finding and responsibly disclosing security flaws. Because generative AI surfaces new classes of security issues, such as the potential for unfair bias or model manipulation, Google said it sought to rethink how the bugs it receives should be categorized and reported. The tech giant says it's doing this by drawing on findings from its newly formed AI Red Team, a group of hackers that simulates a variety of adversaries, ranging from nation-states and government-backed groups to hacktivists and malicious insiders, to hunt down security weaknesses in technology. The team recently conducted an exercise to determine the biggest threats to the technology behind generative AI products like ChatGPT and Google Bard.
Read more of this story at Slashdot.