Brazilian official uses ChatGPT to write local law which was later passed
This week, Ramiro Rosário, a councilman from Porto Alegre, Brazil, shared that an ordinance reviewed and passed by Porto Alegre’s city council in October was written entirely by OpenAI’s ChatGPT. The councilman asked the chatbot to create a proposal for preventing Porto Alegre from charging taxpayers if their water consumption meters are stolen. Rosário noted that he is only now revealing the origin of his proposal because if he “had revealed it before, the proposal certainly wouldn't even have been taken to a vote.” This is not the first time ChatGPT has been used to write legislation. In the United States, Massachusetts Democratic Senator Barry Finegold prompted the chatbot to aid him in writing a bill focused on regulating artificial intelligence. It is, however, the first known example of the tool successfully being used to draft a law that was then passed. The city’s council president, Hamilton Sossmeier, was initially displeased and told media he believed the use of ChatGPT in lawmaking was “dangerous,” but he later noted that “unfortunately or fortunately, this is going to be a trend.”
U.S. healthcare regulators will likely propose a labeling rule for use of AI
Federal regulators are considering a new labeling system for AI healthcare apps with the aim of making it easier for clinicians to understand the risks of certain tools, according to a report in the Wall Street Journal. The regulations are reportedly similar to a framework laid out by the Office of the National Coordinator for Health Information Technology in April, and would require companies producing AI tools to disclose to users a model’s intended uses, how it was trained and tested, and measures of its fairness and validity, although it is unclear how either fairness or validity would be quantified or disclosed. The disclosures would not be mandatory, but regulators hope that companies which provide more information on their models will gain a competitive advantage. The proposed regulations drew pushback, including from healthcare software company Epic Systems, which said in a comment to the Wall Street Journal that “Our risk-related information contains intellectual property that could be reverse-engineered and copied by others.”
IBM and Meta launch AI Alliance to support open AI innovation
On Monday, IBM and Meta launched the AI Alliance, bringing together U.S. and international members spanning industry, startups, academia, independent research organizations, and governments with the goal of working together in support of open AI innovation. The alliance includes groups like the École Polytechnique Fédérale de Lausanne, Oracle, and Stability AI. The alliance aims to work on several objectives, including AI benchmarking, advancing the development of open-source foundation models, creating accelerators for AI hardware, supporting the development of AI skills, developing educational content on AI, and supporting the open development of AI in safe and beneficial ways. Members plan to convene working groups on each of those objectives and will also create a governing board and a technical oversight committee to supervise the operation of the group. Meta has been a strong advocate of open-source AI development since its most advanced AI language model, LLaMA, was leaked on 4chan in March.
EU member-states attempt to reach compromise on AI Act
Representatives for the European Union have spent the last week negotiating to finalize the provisions of the EU AI Act. For months the negotiators have struggled to come to an agreement on several critical issues, including whether companies producing generative AI models should be allowed to self-regulate, prohibitions on the use of AI for applications such as facial recognition, and the definition of a high-risk AI system, with some proposals circulating that would exempt open-source models from the act. The current negotiations have reportedly stalled over attempts by France, Italy, and Germany to protect AI developers operating in their respective countries. France-based AI company Mistral has previously criticized the EU’s approach to regulating foundation models, arguing instead for rules governing the uses of AI rather than the creation of foundation models. Members of the European Parliament had been expected to host a press conference on the agreement on Thursday, but the briefing was postponed until further notice hours before it was supposed to begin.
UK government says Calisto threat actor is run by Russia’s FSB
The UK government formally attributed the Calisto threat actor to Center 18 of Russia’s Federal Security Service (FSB) as part of a statement in the House of Commons by Leo Docherty, an official from the UK Foreign Office. The UK National Cyber Security Centre (NCSC) released a follow-up report in coordination with agencies from the Five Eyes intelligence alliance, which outlined Calisto’s tactics, techniques, and procedures. The UK also named Ruslan Peretyatko and Andrew Korinets as members of Calisto and Center 18 and added them to its sanctions list. The UK government stated that Calisto was responsible for several hack-and-leak campaigns, in which it broke into politicians’ email accounts and released damaging material. In February, Scottish National Party Member of Parliament Stewart McDonald said that Calisto had broken into his email account and that the group was likely planning to release politically damaging information it may have gathered in the hack.
Eva Schwartz is the intern for the CFR Independent Task Force Program.