Biden’s Executive Order Gives the US Its First AI Guardrails
The president rolls out "the strongest set of actions any government in the world has ever taken on AI..."
The Story: The United States government has finally issued guidelines for responsible AI use, and they come in the form of an executive order from the president.
Yesterday, President Biden signed an executive order on what the White House calls “Safe, Secure, and Trustworthy Artificial Intelligence.” The order aims to identify and limit national security risks in American-built large language models, and companies that build AI tools will now need to disclose safety testing information to the U.S. government.
“President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security and trust,” said Bruce Reed, the White House deputy chief of staff. Efforts to implement the sweeping order will include assistance from the Departments of Commerce, Energy, and Homeland Security.
Some believe this is the US playing catch-up with the EU, which has moved quickly on AI recently, drafting a groundbreaking law primed for approval by the end of this year.
President Biden believes an executive order is a step in the right direction toward eventual AI regulation. He has said that “we were going to need bipartisan legislation to do more in artificial intelligence.”
The Expert Take: Sarah Hinkfuss, partner at Bain Capital Ventures, explains the five things the executive order actually stipulates:
1. Sharing safety test results from big models that are critical to national security.
2. Developing standard tools and tests.
3. Calling for authentication of original content, such as a watermarking program.
4. Creating a cybersecurity program.
5. Emphasizing anti-discrimination in AI.
Hinkfuss thinks the order’s focus on “output not input” is a savvy decision by the White House that sets it apart from the EU’s proposed law: “The EU AI law asks companies to have transparency requirements in terms of what information is being used or included in the actual models themselves… and what technologists have argued is that that’s actually impossible. That cuts to the core of how these models actually work by being trained on the corpus of public information.”
She continues:
I think in really important ways the US has put itself into a position of encouraging and enabling innovation and competition, versus what the EU has been criticized for: actually dampening or stymieing all of that innovation in Europe.