G7 Nations Will Announce an 'AI Code of Conduct' for Companies Building AI
Published on October 30, 2023 at 04:36AM
The seven industrial nations known as the "G7" — America, Canada, Japan, Germany, France, Italy, and Britain — will agree on a code of conduct Monday for companies developing advanced AI systems, reports Reuters. The news comes "as governments seek to mitigate the risks and potential misuse of the technology," Reuters reports, citing a G7 document.

The 11-point code "aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems," the G7 document said. It "is meant to help seize the benefits and address the risks and challenges brought by these technologies."

The code urges companies to take appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle, and to tackle incidents and patterns of misuse after AI products have been placed on the market. Companies should publish public reports on the capabilities, limitations, and use and misuse of their AI systems, and should invest in robust security controls.
Read more of this story at Slashdot.