Addressing risks: leading AI companies join safety consortium


Commerce Secretary Gina Raimondo announced the U.S. AI Safety Institute Consortium (AISIC). Raimondo said in a statement to Reuters, “The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence.”

The consortium members

Reuters published the list of consortium members, which includes BP (BP.L), Cisco Systems (CSCO.O), IBM (IBM.N), Hewlett Packard Enterprise (HPE.N), Northrop Grumman (NOC.N), Mastercard (MA.N), Qualcomm (QCOM.O), Visa (V.N), and major academic institutions and government agencies. The consortium will be housed under the U.S. AI Safety Institute (USAISI).

This group prioritizes the actions and guidelines listed in President Biden’s executive order, “including developing guidelines for red-teaming” (i.e., identifying new risks), “capability evaluations, risk management, safety and security, and watermarking synthetic content.”

The executive order from U.S. President Joe Biden

With the Oct. 30, 2023 executive order, U.S. President Joe Biden said that he “is seeking to reduce the risks that AI poses to consumers, workers, minority groups, and national security.” Under the Defense Production Act, developers of AI systems that pose risks to the national security, economy, health, or safety of the United States must notify the U.S. government of the results of their safety tests before public release.

The order, which Biden signed at the White House, also instructs agencies to establish standards for such testing and to address related cybersecurity, radiological, chemical, and biological risks. “To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said. “In the wrong hands, AI can make it easier for hackers to exploit software vulnerabilities that make our society run.”

The Commerce Department said in December 2023 that it was already taking the first steps toward “writing the key standards and guidance for the safe deployment and testing of AI.” The consortium also represents the largest collection of test and evaluation teams, which can now lay the foundation for a “new measurement science in AI safety.”

Generative AI, which can produce text, images, and videos in response to open-ended prompts, has sparked both enthusiasm and fears that it could eventually replace human workers in some occupations, disrupt elections, and cause catastrophic harm.

The Biden administration is working to implement safeguards, but despite multiple high-level conferences, Congress has not passed laws addressing AI.

Featured Image Credit: Photo by Michelangelo Buonarroti; Pexels

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is an editor at ReadWrite. Previously, she worked as Editor in Chief for Startup Grind, Editor in Chief for Calendar, and editor at Entrepreneur Media, and she has more than 20 years of experience in content management and content development.



Image and article originally from readwrite.com.