Cyber Week in Review: May 10, 2024
from Net Politics and Digital and Cyberspace Policy Program

State Department releases digital diplomacy strategy; Microsoft bans police from using AI facial recognition; sixty-eight companies sign CISA pledge; researchers discover sperm whale phonetic alphabet; Microsoft introduces AI for intelligence agencies.
People walk past a poster simulating facial recognition software at the Security China 2018 exhibition on public safety and security in Beijing, China on October 24, 2018. Thomas Peter/Reuters

U.S. State Department releases new foreign affairs strategy for the digital age

On Monday, Secretary of State Antony Blinken unveiled the U.S. International Cyberspace and Digital Policy Strategy in a keynote address at the RSA Conference, a leading cybersecurity conference held in San Francisco. The strategy, the first U.S. global cyber and digital strategy in over a decade, establishes a new vision of "digital solidarity" and promotes "an affirmative vision for a world in which digital ecosystems are rights respecting and secure, and operate across an open and interoperable web." It outlines four action areas for achieving this vision of solidarity while countering authoritarian goals and influence in the digital age: working with allies and partners to promote a resilient digital ecosystem, aligning rights-respecting approaches to digital and data governance with international partners, building cyberspace coalitions, and strengthening international digital policy partnerships. The strategy also commits to the U.S. Cyberspace and Digital Connectivity Fund, a new $50 million appropriation to support these goals through mutual assistance to allies and partners; the State Department's Bureau of Cyberspace and Digital Policy has begun identifying priorities for the fund. In his keynote, Blinken said the United States hopes to assist partners, especially emerging economies, "in deploying safe, secure, resilient, and sustainable technologies to advance their development goals."

Microsoft bans U.S. police departments from using Azure OpenAI Service for facial recognition

Microsoft has updated the code of conduct for its Azure OpenAI Service to ban U.S. police departments from using the generative AI service for facial recognition. Microsoft also added a clause prohibiting "any real-time facial recognition… on mobile cameras used by any law enforcement globally." The policy change came shortly after Axon, a maker of weapons and technology for law enforcement, launched Draft One, a software program that uses OpenAI's GPT-4 to automate police reports by transcribing audio from Axon's body cameras. Microsoft is not the first company to restrict law enforcement use of facial recognition: in 2020, both IBM and Amazon stopped selling facial recognition technologies to police departments, and more than twenty jurisdictions across the United States have barred police from using the technology. Critics of facial recognition, especially when paired with generative AI, note that these surveillance technologies exhibit racial bias, disproportionately harming people with darker skin and increasing the risk of inaccurate reporting by police departments.

Over sixty tech companies pledge support for CISA’s Secure by Design

Sixty-eight companies signed onto the U.S. Cybersecurity and Infrastructure Security Agency's (CISA) Secure by Design pledge at the annual RSA cybersecurity conference in San Francisco this week. Companies that sign the pledge agree to increase their use of multi-factor authentication, reduce the use of default passwords in products, publish a vulnerability disclosure policy, and increase transparency around common vulnerabilities and exposures (CVEs), including making customers aware of them. Signatories will implement Secure by Design principles throughout a product's development lifecycle to reduce potential harm before the product reaches the public. The new signatories include Microsoft, Alphabet's Google, Amazon's AWS, IBM, Palo Alto Networks, and Cisco, among others. The pledge dovetails with the Biden administration's National Cybersecurity Strategy, released in March 2023, which calls for shifting responsibility for security away from end users and toward the companies and "stakeholders most capable of taking action to prevent bad outcomes."

Researchers discover sperm whale alphabet with machine learning

In a report published in Nature Communications, researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Project CETI (Cetacean Translation Initiative) used machine learning to decode a sperm whale "phonetic alphabet." The researchers studied a family of around two hundred whales in Dominica and collected roughly nine thousand codas, the basic units of sperm whale communication, and found that the clicks in the whales' repertoire combine in a combinatorial manner to form a structured system. David Gruber, founder and president of Project CETI, stated, "We're now starting to find the first building blocks of whale language." The researchers said they would likely need millions or even billions of codas to fully understand how sperm whales communicate, but that advances in machine learning and analysis could make it easier to decipher the whales' communications. Sperm whales, which have the largest brains in the animal kingdom, appear to use structured communication patterns that predate the development of human language. The research could also aid conservation efforts: sperm whales are classified as "vulnerable," and many believe that decoding their communication could further efforts to protect marine habitats.

Microsoft introduces GPT-4 generative AI model for U.S. intelligence agencies

Microsoft has deployed a GPT-4-based generative AI model for U.S. intelligence agencies that can be used in environments where national security secrets might otherwise be at risk. The deployment could allow roughly 10,000 members of the intelligence community to use the model without worrying about exposing potentially classified data. The model operates in a static state: it can read files, but it cannot learn from them or from the broader open internet. Critics note that the model could fabricate inaccurate summaries, misleading America's intelligence community and potentially causing high-risk mistakes. It is the first publicly known large language model to operate fully separated from internet-connected cloud services. Sheetal Patel, assistant director of the CIA for the Transnational and Technology Mission Center, stated, "There is a race to get generative AI onto intelligence data… and I want it to be [the United States]" that does it first. The model is still undergoing testing, and it is unclear when it will be fully rolled out across the intelligence community.

Cecilia Marrinan is the intern for the Digital and Cyberspace Policy Program.


This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.