Artificial Intelligence and Democratic Values: Next Steps for the United States
More than sixty-five years after a research group at Dartmouth College launched work on a new field called “Artificial Intelligence,” the United States still lacks a national strategy on artificial intelligence (AI) policy. The growing urgency of this endeavor is made clear by the rapid progress of both U.S. allies and adversaries.
Europe is moving forward with two initiatives of far-reaching consequence. The EU Artificial Intelligence Act will establish a comprehensive, risk-based approach to the regulation of AI when it is adopted in 2023. Many anticipate that the EU AI Act will extend the “Brussels Effect” across the AI sector, as the earlier European data privacy law, the General Data Protection Regulation (GDPR), did for much of the tech industry.
The Council of Europe is developing the first international AI convention, aiming to protect fundamental rights, democratic institutions, and the rule of law. Like the Council of Europe Convention on Cybercrime (the Budapest Convention) and the Privacy Convention, the AI Convention will be open for ratification by member and non-member states. The Budapest Convention remains influential, as Canada, Japan, the United States, and several South American countries have signed on to it.
China is also moving forward with an aggressive regulatory strategy to complement its goal of becoming the “world leader in AI by 2030.” China recently matched the GDPR with the Personal Information Protection Law and adopted a new regulation on recommendation algorithms with provisions similar to those in the EU’s Digital Services Act. The Chinese regulatory model will likely influence countries in Africa and Asia that are part of the Belt and Road Initiative, giving rise to a possible “Beijing Effect.”
The United States has done an admirable job maintaining a coherent policy in the Executive Branch across the Obama, Trump, and Biden administrations, highlighting key values and promoting an aggressive research agenda. In the 2019 Executive Order on Maintaining American Leadership in AI, the United States said it would “foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application.” A subsequent Executive Order, Promoting the Use of Trustworthy AI in the Federal Government, established principles for the development and use of AI that are “consistent with American values and are beneficial to the public.”
The United States also played a leading role at the Organization for Economic Cooperation and Development (OECD) with the development and adoption of the OECD AI Principles, the first global framework for AI policy. Those principles, which emphasize “human-centric and trustworthy” AI, were later adopted by the G-20 nations, and are now endorsed by more than 50 countries, including Russia and China.
But the United States was out of the loop when the UN Educational, Scientific, and Cultural Organization (UNESCO) adopted the Recommendation on the Ethics of AI, now the most comprehensive framework for global AI policy, which addresses emerging issues such as AI and climate change and gender equity.
“Democratic values” is a key theme as the United States seeks to draw a sharp distinction between the deployment of technologies that advance open, pluralist societies and those that centralize control and enable surveillance. As Secretary Blinken explained last year, “More than anything else, our task is to put forth and carry out a compelling vision for how to use technology in a way that serves our people, protects our interests and upholds our democratic values.” But absent a legislative agenda or a clear statement of principles, neither allies nor adversaries are clear about U.S. AI policy objectives.
The United States has run into similar problems with the Trade and Technology Council (TTC), an effort to align U.S. and EU tech policy around shared values. The inaugural Joint Statement, issued in the fall of 2021, laid a foundation for EU-U.S. cooperation on AI, but the war in Ukraine has upended transatlantic priorities, and it remains unclear whether the TTC will regain its focus on a common AI policy.
A similar challenge confronts EU and U.S. leaders on new rules for transatlantic data flows. After two earlier decisions of the Court of Justice of the European Union finding that the United States lacked adequate privacy protection for the transfer of personal data, lawmakers on both sides of the Atlantic worried that data flows could be suspended, as the Irish privacy commissioner has recently threatened. President Biden and President von der Leyen announced an agreement in principle in March, but several months later there is still no public text for review.
To restore leadership in the AI policy domain, the United States should move forward with the policy initiative launched last year by the Office of Science and Technology Policy (OSTP). The science office outlined many of the risks of AI, including embedded bias and widespread surveillance, and called for an AI Bill of Rights. OSTP said, “Our country should clarify the rights and freedoms we expect data-driven technologies to respect.” The White House supported the initiative and encouraged Americans to “Join the Effort to Create A Bill of Rights for an Automated Society.”
We strongly support this initiative. After an extensive review of the AI policies and practices in 50 countries, we identified the AI Bill of Rights as possibly the most significant AI policy initiative in the United States. But early progress has stalled. The delay has real consequences for Americans who are subject to automated decision-making in their everyday lives, with little transparency or accountability. Foreign governments are also looking for U.S. leadership in this rapidly evolving field. Progress on the AI Bill of Rights initiative will help build trust and restore U.S. leadership.
Last year, the Office of Science and Technology Policy stated clearly, "Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly.” That should be the cornerstone of a U.S. national AI policy, and that policy will advance international norms for the governance of AI.
Marc Rotenberg is President of the Center for AI and Digital Policy (CAIDP), author of the forthcoming Law of Artificial Intelligence (West Academic 2023), and a Life Member of CFR. Merve Hickok is the Research Director of CAIDP and founder of AIethicist.org.