California Lawmakers Push for Watermarks on AI-Made Photo, Video
Published on January 27, 2024 at 02:30AM
California lawmakers are drawing up multiple plans to require watermarks on content created by AI to curb abuses of the emerging technology, which has affected sectors from political races to the stock market. From a report: At least five lawmakers have promised or are considering different proposals that would require AI companies to implement some type of verification that a video, photo, or written work was made by the technology. The activity comes as advanced AI has rapidly evolved to create realistic images and audio on an unprecedented level. Advocates worry the technology is ripe for abuse and could lead to a wider proliferation of deepfakes, in which a person's likeness is digitally manipulated, typically to misrepresent them -- and deepfakes have already been used in the presidential race. But such measures are likely to face scrutiny from the tech sector. Amid a pivotal election year and an online world full of disinformation, the ability to know what's real is crucial, said Drew Liebert, director of the California Initiative for Technology and Democracy. The harm from AI is already happening, Liebert said, noting the aftermath of an AI-generated photo that went viral in May of last year and falsely portrayed a terrorist attack in the US. "The famous photograph now that was put on the internet that alleged that the Pentagon was attacked, that actually caused momentarily a [$500 billion] dollar loss in the stock market," he said. The loss would not have been as severe, he said, "if people would have been able to instantly determine that it was not a real image at all."
Read more of this story at Slashdot.