New York Times sues Microsoft and OpenAI over AI copyright concerns
The New York Times filed a lawsuit last week in the Federal District Court of Manhattan alleging copyright infringement by Microsoft and OpenAI over the use of copyrighted articles in model training data and the reproduction of that content by ChatGPT. The newspaper said it had begun negotiating with OpenAI in April to license its stories for inclusion in ChatGPT and other OpenAI generative AI tools. OpenAI has reached similar licensing deals with other news outlets, including the Associated Press in June 2023 and Axel Springer, the parent company of Politico and Business Insider, in December 2023. The lawsuit levels three major allegations at OpenAI: that OpenAI violated copyright by scraping content from the New York Times; that ChatGPT users could circumvent the newspaper’s paywall by prompting ChatGPT to recite New York Times articles verbatim, although that error has since been corrected; and that ChatGPT and related products damage the reputation of the New York Times because they can generate misinformation and then attribute it to the newspaper. An OpenAI spokesperson said that the company was “surprised and disappointed” by the lawsuit, but remains hopeful that “we will find a mutually beneficial way to work together.”
Google will begin phasing out third-party cookies this week
ODNI releases report on 2022 election interference, as 2024 election cycle looms
The Office of the Director of National Intelligence declassified a report on attempted foreign interference in the 2022 US midterm elections on December 22, 2023. The report had several major conclusions: that China tacitly approved interference in a small number of races involving both Democrats and Republicans with positions hostile to China; that Iran and Russia aimed to stoke division and sow broad distrust in U.S. electoral processes; and that U.S. intelligence agencies had not identified any efforts to gain access to or tamper with voting infrastructure at the federal, state, or local levels. The report comes as threats loom large over the 2024 worldwide election cycle, the largest until 2048: large technology companies are distracted by declining profits and the U.S. election cycle, and powerful new generative AI tools could be put to use in misinformation campaigns. Governments, companies, and civil society organizations need to take steps to safeguard the 2024 election cycle by increasing investments to counter information operations; engaging with and mobilizing local and regional civil society leaders in the global majority; and ensuring voters become discerning consumers of information.
Dutch government partially revokes ASML’s license to export tools to China
Dutch semiconductor equipment manufacturer ASML said that the Dutch government had revoked its licenses to export two types of immersion deep ultraviolet lithography systems, the NXT:2050i and NXT:2100i, to customers in China. Lithography machines are integral to the production of advanced microchips, and ASML’s machines are the only ones capable of producing three- and five-nanometer chips, currently the most advanced chips in the world. The government’s action comes after it reached an agreement with the United States and Japan in January 2023 to limit exports of advanced lithography machines, and this case may be one of the first public enforcement actions under those rules by the Dutch government. Bloomberg reported that the Dutch government cancelled ASML’s licenses under pressure from U.S. National Security Adviser Jake Sullivan, who sought to prevent Chinese firms from stockpiling lithography machines before new restrictions came into effect on January 1. The Chinese government reacted angrily to the news, with Ministry of Foreign Affairs spokesperson Wang Wenbin saying “China has always opposed the United States’… use of various excuses to coerce other countries into imposing a technological blockade against China.”
UN High Level AI Advisory Body releases interim report
The UN High Level AI Advisory Body issued an interim report on governing AI internationally and ensuring AI is used to help achieve the UN Sustainable Development Goals (SDGs). According to the advisory group, the divide between the global majority and the well-resourced member states that have played a large role in the development of AI, including the United States, is stark; governance needs to address this divide and ensure that AI development and application take place across the world, not just in a few select countries. The report cites Gavi, the Vaccine Alliance, as a potential model for including the global majority in the development of AI, for example by creating a repository of AI models that could be applied to different contexts. The advisory body also outlines risks posed by artificial intelligence, including its use in autonomous weapons systems and the proliferation of biases, but argues that risks should be evaluated from the perspective of vulnerable communities and the global commons, rather than by categorizing individual uses of AI as risky. The group also found that while the UN is currently well placed to build global agreement on the need to uphold principles such as fairness, accountability, and transparency, the UN and other international bodies will likely struggle at this stage to outline a specific model of governance for the world.