Harris shares AI risk concerns with four top CEOs, including Satya Nadella and Sundar Pichai
US Vice President Kamala Harris told CEOs of leading tech companies including Microsoft’s Satya Nadella and Google’s Sundar Pichai that they have a “moral” responsibility to protect society from the potential dangers of artificial intelligence.
“As I shared today with CEOs of companies at the forefront of American AI innovation, the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products,” Harris stated after a meeting at the White House Thursday.
Read: Society needs to adapt for impact of AI: Sundar Pichai (April 18, 2023)
Besides Nadella and Pichai, Sam Altman, CEO of OpenAI, and Dario Amodei, CEO of Anthropic, also attended the meeting with Harris and senior Administration officials on Advancing Responsible Artificial Intelligence Innovation.
“Advances in technology have always presented opportunities and risks, and generative AI is no different,” she said. “AI is one of today’s most powerful technologies, with the potential to improve people’s lives and tackle some of society’s biggest challenges.
“At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy,” Harris said.
Harris said that she and President Joe Biden, who briefly dropped by the meeting, are “committed to doing our part – including by advancing potential new regulations and supporting new legislation.”
As a United States Senator and member of the Intelligence and Judiciary Committees, she said, “we investigated Russian interference in the 2016 election and produced empirical evidence that state actors will use technology to undermine democracy.”
“Through this work, it was evident that advances in technology, including the challenges posed by AI, are complex. Government, private companies, and others in society must tackle these challenges together.”
“President Biden and I are committed to doing our part – including by advancing potential new regulations and supporting new legislation – so that everyone can safely benefit from technological innovations,” Harris said.
“And every company must comply with existing laws to protect the American people. I look forward to the follow through and follow up in the weeks to come,” she said.
Read: 19-year-old launches artificial intelligence research organization (February 14, 2023)
According to a White House readout of the meeting, Harris and senior Administration officials met with the CEOs of four American companies at the forefront of AI innovation to share concerns about the risks associated with AI.
President Biden dropped by the meeting to underscore that companies have a fundamental responsibility to make sure their products are safe and secure before they are deployed or made public, it said.
Biden and Harris, it said, “were clear that in order to realize the benefits that might come from advances in AI, it is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security. These include risks to safety, security, human and civil rights, privacy, jobs, and democratic values.”
Given the role these CEOs and their companies play in America’s AI innovation ecosystem, Administration officials also emphasized the importance of their leadership, calling on them to model responsible behavior, take action to ensure responsible innovation and appropriate safeguards, and protect people’s rights and safety.
This includes taking action consistent with the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and the AI Risk Management Framework, the White House said.
The meeting also included “frank and constructive discussion on three key areas: the need for companies to be more transparent with policymakers, the public, and others about their AI systems; the importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems; and the need to ensure AI systems are secure from malicious actors and attacks.”
Administration officials and CEOs agreed that more work is needed to develop and ensure appropriate safeguards and protections, and CEOs committed to continue engaging with the Administration to ensure the American people are able to benefit from AI innovation, it said.
Thursday’s meeting was part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.
Read: AI companies have ‘moral’ responsibility to protect users: W.House (May 2, 2023)
This effort builds on the considerable steps the Administration has taken to date to promote responsible innovation and risk mitigation in AI, the White House said.
This includes additional actions announced Thursday, the Blueprint for an AI Bill of Rights and related executive actions, the AI Risk Management Framework, and a roadmap for standing up a National AI Research Resource.