Canada Has Denounced Clearview AI; It’s Time for the United States to Follow Suit
Eyako Heh is an intern in the Digital and Cyberspace Policy program at the Council on Foreign Relations.
Last July, the Privacy Commissioner of Canada, along with select provincial-level authorities, launched a joint investigation into the domestic facial recognition practices of Clearview AI, a New York-based biometrics company. Their seven-month inquiry culminated in a report detailing Clearview’s pervasive data collection processes and lack of disclosure protocols. Among other privacy violations, the report revealed that Clearview scraped images of individual faces from publicly accessible websites without express user consent and then crafted biometric markers for each image for use by police and private organizations. During a February 2021 news conference, Canada’s leading privacy officer, Daniel Therrien, likened Clearview’s actions to mass surveillance, accusing the company of forcing Canadians into a perpetual “police lineup.”
Although Therrien and his commission lack the legal authority to punish Clearview for its alleged violations, the commission has nonetheless requested that the company delete the biometric data of Canadians from its internal databases. Even without follow-up enforcement capabilities, the commission’s harsh condemnation of the company presents the first and clearest instance of a national government pushing back against the growing threat of data-informed mass surveillance.
Following the onset of the investigations into its data collection practices, Clearview abandoned its public-private partnerships with law enforcement agencies across Canada, including the Royal Canadian Mounted Police and the Toronto Police Service. Moreover, the United Kingdom and Australia have launched their own joint probe into Clearview’s data collection processes, paving the way for possible future denouncements and market pullouts. The United States, with its historical issues of police brutality, mass incarceration, and systemic racism, should take similar if not more drastic measures.
Social scientists, computer scientists, journalists, and other data practitioners have developed a large body of research detailing how big data is collected, analyzed, and used by the carceral state. This includes predictive policing tools that generate racialized feedback loops and social media monitoring that disproportionately targets ethnic and religious minorities. The dangers of facial recognition in particular are widespread. A 2016 report by the Center on Privacy & Technology at Georgetown Law found that half of all American adults (117 million people and counting) are in police facial recognition databases. Despite police claims that these systems are necessary for investigating violent crimes, they have been used overwhelmingly for non-violent infractions and misdemeanors.
In June 2020, Robert Williams, a Black man and Detroit native, became the first U.S. citizen known to be wrongfully arrested because of a facial recognition error. Although Williams’ arrest may appear to be an isolated incident, the ACLU notes that the number of people wrongfully criminalized by the technology is likely much higher. The National Institute of Standards and Technology supports this claim; a 2019 report [PDF] found that facial recognition systems misidentified minorities and women at higher rates than their white, male counterparts. Despite the apparent dangers of these surveillance instruments, over 2,400 police agencies in the United States use Clearview’s software. Given the risks of misuse, this is unacceptable.
Activists in the United States have fought back, and dozens of law enforcement agencies and legislative bodies have already taken action. In May 2019, San Francisco became the first American city to proactively ban facial recognition tools. Following this move, Oakland, Cambridge, and Boston implemented similar bans. In January 2020, California enacted a three-year moratorium on the use of facial recognition in police body cameras, making it the third state to do so. Last June, Senator Ed Markey (D-MA) and Representative Ayanna Pressley (D-MA) introduced legislation prohibiting the use of facial recognition tools and other biometric technologies by federal agencies. That same month, Microsoft, IBM, and Amazon announced moratoriums on selling facial recognition systems to law enforcement agencies. In February, the ACLU and forty other civil rights organizations drafted a letter urging the Biden administration to freeze all federal use of, and funding for, facial recognition tools.
Canada’s denouncement of Clearview AI is encouraging news. However, much more needs to be done. Every day, more people, many of them Black and Brown, fall victim to the digital surveillance arm of the carceral state. In both Canada and the United States, grassroots activism against police surveillance has gained traction, but it is ultimately up to our elected representatives to legislate change. Markey and Pressley’s proposed bill has unfortunately remained in congressional limbo, casting doubt on its prospects of becoming law. President Biden, who has spent the first several weeks of his presidency reversing problematic Trump-era policies via executive action, should now take charge. If the new administration is serious about its commitment to racial equity and justice, the use of facial recognition in law enforcement operations should fall next on its chopping block. The myriad problems presented by digital surveillance are clear; it’s time to do something about them.