Seven AI companies commit to safeguards at the White House’s request

Kris Holt

Microsoft, Google and OpenAI are among the leaders in the US artificial intelligence space that have committed to certain safeguards for their technology, following a push from the White House. The companies have voluntarily agreed to abide by a number of principles, though the agreement will expire when Congress passes legislation to regulate AI.

The Biden administration has placed a focus on making sure that AI companies develop the technology responsibly. Officials want to make sure tech firms can innovate in generative AI in a way that benefits society without negatively impacting the safety, rights and democratic values of the public.

In May, Vice President Kamala Harris met with the CEOs of OpenAI, Microsoft, Alphabet and Anthropic, and told them they had a responsibility to make sure their AI products are safe and secure. Last month, President Joe Biden met with leaders in the field to discuss AI issues.

The tech firms have agreed to eight suggested measures concerning safety, security and social responsibility. Those include:

  • Letting independent experts test models for bad behavior 

  • Investing in cybersecurity

  • Encouraging third parties to discover security vulnerabilities

  • Flagging societal risks including biases and inappropriate uses

  • Focusing on research into the societal risks of AI

  • Sharing trust and safety information with other companies and the government 

  • Watermarking audio and visual content to help make it clear that content is AI-generated

  • Using the state-of-the-art AI systems known as frontier models to tackle society’s greatest problems

“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI,” the White House statement read. “As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.”

The fact that this is a voluntary agreement underscores the difficulty that lawmakers have in keeping up with the pace of AI developments. Several bills have been introduced in Congress in the hope of regulating AI. One aims to prevent companies from using Section 230 protections to avoid liability for harmful AI-generated content, while another seeks to require political ads to include disclosures when generative AI is employed. Of note, administrators in the House of Representatives have reportedly placed limits on the use of generative AI in congressional offices.

Update 7/21 5:47AM: This article has been updated to include a statement from the White House.

[Image: President Joe Biden meets with AI experts and researchers at the Fairmont Hotel in San Francisco on June 20, 2023, flanked by Gov. Gavin Newsom and Arati Prabhakar, Director of the White House Office of Science and Technology Policy. (Jane Tyska/Digital First Media/East Bay Times via Getty Images)]
