Young Professionals Briefing: Artificial Intelligence and the Economy—How Cities are Embracing New Technologies

Wednesday, March 20, 2024
Aly Song/Reuters

Jiahao Chen
Director of Artificial Intelligence and Machine Learning, New York City Office of Technology and Innovation

Joni Kletter
Senior Counsel, Kalmanson Cohen PLLC


Kat Duffy
Senior Fellow for Digital and Cyberspace Policy, Council on Foreign Relations; @rightsduff

Digital and Cyberspace Policy Program, Young Professionals Briefing Series, and Diamonstein-Spielvogel Project on the Future of Democracy

Panelists reflect on how the digital transformation is reshaping the economy and how cities like New York can embrace new technologies, like artificial intelligence, to foster economic growth.

Please note that speakers are appearing in their personal capacities.

LAFOLLETTE: Thank you, everyone. Good evening. Thank you all for joining us tonight for this session of the Young Professionals Briefing Series. We have guests joining here in person in New York as well as on Zoom, so welcome one and all. 

I am Stacey LaFollette. I’m managing director of the Meetings Program here at the Council on Foreign Relations. And as some of you know, because you’ve attended these sessions before, the Council works to offer you all one session a month at least, and ideally we alternate between New York and Washington, and we offer via Zoom. So we hope you’ve attended some of these and that you will continue to attend them. 

One new development is that we’ve expanded the invite list to include staff members of the Council who are young professionals—so we welcome staff of the Council who are here this evening—as well as—as well as CFR’s term members. So we welcome any term members who are here this evening. 

Now, if you want more information about the Young Professional Series and the Term Member Program, you can look at the roster that was handed out this evening, and we have a little description on the second page of what each of these series and programs are with a link. So we encourage you to go on and get more information about Young Professionals and the Term Member Program. 

A reminder that this meeting is on the record. And for those in New York, we encourage you to check your electronic devices—cellphones—and please silence them so we don’t have any interruptions to tonight’s discussion. If you want to ask questions and you’re in person, please raise your hand during the Q&A. If you’re called on by our presider, please stand, wait for one of these microphones, and then introduce yourself and ask your question. If you’re joining us via Zoom, click on the “raise hand” icon on the Zoom window at any time. During the Q&A, if you hear your name being called out, please click on the “unmute now” prompt, and then proceed with introducing yourself and asking your question. We encourage you to please think about your questions now and raise your hand or click on the “raise hand” icon, and we hope you have plenty of questions. And also, after the discussion, there’s a networking reception for those in person, so we hope you will all stay and mingle with young professionals, staff, and term members. 

Thank you. 

DUFFY: So we’re all facing a future with AI, but I cannot face a future without dogs, so. (Laughter.) 

Hi, everybody. Thanks for being here tonight. I really appreciate you. It’s so exciting to see you. I’m Kat Duffy. I’m the senior fellow here at the Council for digital and cyberspace policy. Or, as Kyle and I like to say, the techie-techs. And so I’m really delighted to welcome you all here today, to two friends and colleagues of mine, Joni Kletter and Jiahao Chen.  

And what we’re really going to talk about today is sort of the way that we’re thinking about digital transformation in this moment. So we’re going to be telling you a little bit of a story of some digital transformation that was taking place in New York over COVID, that Joni was leading, and now how we’re thinking about digital transformation in cities, especially as we think about AI and its role. And what it’s going to take to get there because, like, spoiler alert, it’s not just about the data models, right? There’s a lot to think about when you think about how you actually operationalize and realize a digital transformation, in a city and space as complex as New York in particular.  

And so with that, I’m going to start by asking both Joni and Jiahao to just introduce themselves, explain a little bit of their background, and maybe a little bit about why they’re excited to talk to, like, very fancy younger people today. So, Joni, why don’t I start with you? 

KLETTER: Hi. Yeah. Thank you so much for having me. This is really wonderful. Thank you, Kat. I’ve known Kat since college—we went to college together.  

DUFFY: We were your age together. And it was—thank God it was before social media. (Laughter.) 

KLETTER: Yes. Yes, oh my God. Thank God.  

So and then I went to law school and became a lawyer. I was a federal law clerk for two years in the Eastern District of New York, and was a labor and employment lawyer for about eight years. But I lived in Park Slope and became friendly with Bill de Blasio when he was running for public advocate. I really liked what he stood for and his policy positions. And I did a little bit of volunteering for him then, but I stayed in private practice. And then I volunteered for his mayoral campaign. And when he won, I joined the administration. I started off in the city legislative affairs office. So I was working very closely with the city council and the agencies negotiating city council legislation and the budget, and just kind of in the middle of everything at city hall. And then I became his director of appointments. And then I became the commissioner of the Office of Administrative Trials and Hearings. That was through 2022. And now I do some consulting and I’m back in private practice as a litigator. So thank you. 

DUFFY: Fantastic. Jiahao, over to you. 

CHEN: Great. Hi, everyone. My name is Jiahao. This is Tank, over here. I’m currently serving as the director of AI and Machine Learning at New York City’s Office of Technology and Innovation. However, I’m here in my personal capacity, and my views do not necessarily constitute the views of the city or the agency.  

So, having said that, I have been in this position for two months. Formerly, I worked at a startup, and before that worked in machine learning at two banks you may have heard of. So I worked at—I was one of the founding members of the AI Research Department at JPMorgan. Before that, I was a data scientist at Capital One. And before that, I was an academic at MIT. So my background is not in law, not in policy. So I’m not really sure why I’m here. (Laughter.) 

But yeah, so I graduated in 2009. Not a good time to look for an industry job. Stayed in academia. And then 2016, the deep learning revolution hit, and I was really struck by how there’s all this innovation going on in tech companies. And I was really interested to see how industry and broader society is, like, adopting all these things that used to be solely the domain of obscure academics, computer science departments. And so I left, ended up working in banking. Got really interested in this space of regulated model decision making. So banks, of course, they will be very heavily regulated. Any machine learning models that are being built have to be analyzed for bias, they have to show compliance with the antidiscrimination laws. Consumers have the right to explanation. So how do we engineer that into models? I was really interested in that.  

And of course, famous last words, right? And I was like, oh, how hard could this be? This is an interesting problem. And I’m still working on some variant of this problem. What brought me—the reason why I applied for this job was because as you know, like, AI has really exploded. I like to remind people that we are only sixteen months into the launch of ChatGPT, even though it feels for some people that it’s been around forever. And just, like, the speed at which that became just so ubiquitous and normalized for us, I think is butting up against, like, the traditional, like, more languid processes of government policymaking.  

And I was starting to feel concerned that, you know, we’re in this situation where policymakers feel like they need to do something, but they don’t know necessarily what it is, because not only is that domain expertise hard to get, but also just very, very quickly evolving. And so when this opportunity came up, I was, like, well, I’ve been working in this area of, like, AI risk management and ethics for a while. And this is an opportunity to put my money where my mouth is. And so that’s how I ended up here.  

DUFFY: That’s awesome. And I want to ask you all, how many of you would say right now that you work in something involving AI, like that your jobs are AI-ish? OK. How many of you are working on something that’s, like, hard-focused on AI? OK. And then how many of you are, like, AI is not part of my daily job? OK. This is a really good mix. How many—for those of you especially who sort of aren’t working in or with AI routinely, how many of you feel, like, good, interested, excited about, like, what it might offer? And how many of you are like, oh, shit? (Laughter.) Yep. OK. That’s a healthy mix. Thank you for your honesty. Yeah.  

So I think for me, one of the—you know, what you’re already hearing is, you’re going to hear right now about two very different types of expertise. And the way that they’re getting brought into a place as complex as New York City. And how those experts—like, how that expertise is playing in a really unique moment in time, as well, right? Like, with AI, I mean, this is—AI is not new. It’s been around a real long time. And this amount of focus and attention to it is, I think, astonishing for most people who’ve been working in it for a long time. It’s really like, whoa, like, what happened?  

And so, Joni, I want to start with you a little bit. Can you walk us through what life—what you were doing and what life was in, say, like, December-November of 2019, and how it was feeling in, like, April of 2020? (Laughter.) 

KLETTER: Which I think we all have our stories. 

DUFFY: We all have our stories. 

KLETTER: That we could tell about the two different worlds that we were living in. But yeah. Well, first, I just want to say that I’m so glad that you’re focused on New York City. And I assume most of you live in New York City, and you use New York City services, and you probably interact somewhat with New York City government.  

And I think it’s really great that we’re talking about a municipality, even though we’re at the Council on Foreign Relations, because New York City—it’s an international city. It’s also a municipality with nine million people, and the larger metropolitan area encompasses over twenty million people. And the work that’s happening in the city and the things that are being done have repercussions and an impact on cities and places around the world. I’m just thinking about, like, our IDNYC program, for example, where we offered IDs for immigrants who didn’t have passports or licenses, and the impact that had on the rest of the world. So the things that happen in New York City have major repercussions throughout the world.  

And there are—just to kind of touch on how massive the bureaucracy in New York City is—there are over 300,000 people that work for New York City. There’s a budget that’s over $100 billion. And there’s over one million schoolchildren who attend New York City public schools, which is, you know, larger than most cities in the country, right? And so it’s a massive, massive bureaucracy. There’s over fifty city agencies. There’s over 150,000 people that work, for example, at the Department of Education. There’s over 60,000 people that work at the NYPD. And so to get these agencies to innovate, and to work on kind of new ideas and try new things, can be very challenging.  

And if you think about it, right, like imagine a mayor who comes into city government, and they’re running on all of these policy ideas, things they want to do differently, changes they want to make. You know, universal—Mayor de Blasio ran on universal pre-K. Then at some point, he talked about closing Rikers Island. You know, these are major, obviously, programs that have to be implemented. And you need a lot of people to do it. And so then you come in. You win, and you come in, and you’re mayor, right? And you bring with you maybe 150 to 200 people, right? So you’re staffing city hall and you’re appointing commissioners to run these agencies. But you still have 300,000 people that are mostly public servants who have been in those roles for twenty to thirty years, maybe. A lot of them have been there at least fifteen years, most city employees. 

And you come to them as the commissioner of the agency. And you say, all right, you know, we’re ready. We’re going to do these different things. And they’re looking at you, like, what are you talking about? Like, I’ve been here for twenty years doing the exact same thing. Like and now you’re telling me I have to do something different? No, sorry. My—you know, my collective bargaining agreement doesn’t say that. I’m a union member. You can’t make me do anything, right? Go fly a kite. And it’s a real challenge. And I think for so many bureaucracies, you know, New York City is probably the best example, but you see this in bureaucracies in private companies, you know, other municipalities— 

DUFFY: Definitely the federal government as well. 

KLETTER: Yeah. Federal government, you know, most government agencies, but also very large—any large company has this challenge.  

And for me, when I became commissioner at the Office of Administrative Trials and Hearings, it was March 2020. It was literally the week before COVID hit. And fortunately, I had spent time at city hall working on legislation, working on policy, and working with the city council, so I had developed relationships with basically every agency in city government. I knew the commissioners. I knew a lot of the policy folks. And so when COVID hit, and everything shut down, my agency, which conducts 500 in-person hearings a day—and I should—I’ll explain a little bit about what we do at the Office of Administrative Trials and Hearings.  

We adjudicate the tickets and summonses that are issued by the city agencies. That’s one thing we do. We have a couple other things we do, but the main thing we do is to offer an independent hearing officer to adjudicate the tickets that are issued by the agencies. For example, you know, when the Department of Buildings issues, you know, an illegal conversion ticket, or sanitation issues a ticket—and there’s about 800,000 tickets issued a year by the various city agencies. Yeah, it’s a lot. Department of Health, Department of Consumer and Worker Protection.  

And so when the city shut down, you know, all of a sudden it was like, what are we going to do? There’s 500 in-person hearings that we do throughout our five boroughs in these offices. And I had just arrived. (Laughs.) And fortunately, I was able to immediately contract with a company called CourtCall. And fortunately, because the emergency procurement services allowed us to initiate these contracts without having to go through an RFP and a longer process, I immediately hired this company. Which allowed us to set up a web platform to hold telephonic hearings and to electronically, you know, upload documents and evidence for cases, and to hold over 500 hearings a day. Which really required just a lot of ingenuity in many ways.  

And I think, for me, coming in, having new leadership into an agency was important. And to bring that new energy. I also think having the impact of COVID, where they just—we didn’t have a choice. We had to do something different. And it was able to provide us with this platform that now we’ll never go back to the way it was, right? So now the convenience that doing a telephonic hearing offers to people is, like, A, they don’t have to spend their day going into Manhattan, waiting three hours in an office to have an hour-long hearing with a hearing officer, and literally give up an entire day to do that.  

So with our platform, we’ll actually call you when the hearing is ready. So you don’t have to even, like, stay on the phone on hold for an hour. You know, the hearing officer, or the platform, will reach out to you when the hearing officer is ready for your hearing. We send electronic text messages to remind people that they have a hearing that day. And we’ve just added so many conveniences to make it easier and safer. And, yeah, I mean, because, like I said, we issue tickets related to health and safety, there was no way for us to just say, OK, stop issuing tickets. You know, we can’t do hearings.  

And so I think COVID really forced us to make this change, but it’s been so beneficial to the city. And I think in many ways, these types of online platforms—obviously, Zoom—and, you know, the convenience of the technology that we have now to do these types of hearings. I mean, jury duty and other types of—I’m just thinking—you know, just government court administration that can be done— 

DUFFY: And how many did you—like, by the time that you were sort of heading—like, it’s an insane number, right? 

KLETTER: Yeah. So there were—yeah it is an insane number. Like, in the first year we did 100,000 remote hearings. And then when I left, it was, like, over 250,000 remote hearings were done.  

DUFFY: And before that— 

KLETTER: I think it’s the most—we’re, like, the busiest court in the country, yeah. 

DUFFY: And before that moment, was there any capacity to do remote hearings at all?  

KLETTER: Very few. Yeah. I think if someone had a disability— 

DUFFY: A disability or something? Like, it would be like an ADA-type thing.  

KLETTER: Yeah, exactly. But we didn’t have, yeah, the capability, essentially, to have it—because you have to have it recorded, you have to have the hearing officer in place. And yeah, no, and it’s training—and then training the hearing officer, putting the rules together to make sure—like I had to work closely with the law department to make sure we had the policies and procedures in place. The mayor had to issue an executive order that allowed us to do it. And so, yeah, there’s a lot of agencies and people involved in making it work. And like I said, fortunately, I had those relationships to build upon to kind of make it happen. But, yeah, there’s so many people involved with that work. And if you’re not all on the same page, it becomes that much more difficult. 

DUFFY: And what’s so interesting is that you have a very non-technological problem that you were then trying to use technology to solve. And, Jiahao, I want to go over to you now, because one of the things that strikes me a lot about our current AI discourse as well is that there’s a lot of, like—there’s a lot of, of throwing AI at problems that, like, don’t necessarily need to be solved either, right? Like, it’s—sometimes it’s just people want the whizbang-ery of the AI. And it’s like, I don’t know that this—I don’t know that you need—it feels a little bit to me like the way I felt about the blockchain. It’s, like, I feel like you just need a database, maybe SQL, and you’re going to be fine. Don’t really think you need blockchain.  

But anyway, can you take us through—like, what I’m hearing from Joni is there were—there were—it was a moment in which political will had shifted. It was a critical sort of space and moment in time in which more change was possible than previously. That you needed to have people on board who truly understood how to make the technology work to solve problems, and then you also needed to have a lot of people on board who truly understood how to make things work in the city, and in the city administration, and within the laws, and was sort of with the clients, right?  

And then finally, there was flexibility in things like hiring and procurement. You didn’t have to do everything, as you said, at that sort of languid pace, right? So I know that you can’t speak in your official capacity, but when you think about cities, like big cities like New York, right, and the complexity that New York offers, for you what are some of the most sort of exciting opportunities that are out there? How do you think about it? And then where do you feel like people are maybe, like, going a little too far afield in their thinking of what it could do or how it could transform things?  

CHEN: Yeah. Yeah. Well, so first of all, I feel like that discourse on AI as technosolutionism—like it’s both, like, new and old at the same time. But it’s old in a sense of, like, oh, the disruptive ideas of technology have been around literally since the Industrial Revolution. And— 

DUFFY: Tank really wagged his tail when you said disruptive. (Laughter.) 

CHEN: Oh, OK. Yeah. And so in that capacity, like, you know, like any transformative technology, people are worried about, like, what will happen to my job? Your job is safe, OK? (Laughter.) Yeah. Yeah, people will worry about what’s going to happen to my job, what’s going to happen to my workplace, or, like, you know, how will all the structures, the social fabric, like, change in response to that. And of course, like, I wasn’t at the city during COVID. I was still at JPMorgan. But I can tell you that when we had to switch over to all virtual, like, our old telephony system just crashed for two days. And there was an emergency procurement order to order a different platform. And then we just were like, this is so much better than the old thing that we had. So we all switched over. And so, like, you know, I remember that disruption.  

I think one of the—one of the things that COVID really highlighted was also, like, this tension between public responsibility and confidentiality. So especially when we think about contact tracing, for example, right, this is one of the big things where, like, oh, the old way of doing contact tracing was very manual. And you called people. And you were, like, oh, you were in contact with so-and-so in the hospital for X number of minutes. We need to trace you. And then when COVID blew up, it was just not practical to do this with purely human labor. And so then people were, like, oh, let’s do, like, apps and like digital tokens. And you can see that different cities, different countries had different responses to, like, how that was implemented.  

But that was really I think the first—for the first time. This is very different from, like, the previous SARS virus. So, like, you know, like, COVID is a descendant of SARS. It’s in the same, like, family. And, like, SARS was a big thing in East Asia. And so when I was in Singapore at that time, like, you know, SARS version one in 2003, like, that was very different. It was: quarantine, stay at home. Like, people will call you all the time and be like: Are you sure, like, you didn’t leave your home? Blah, blah, blah, right? And, of course, people tried that for the first—maybe the first month of COVID or two, before people were, like, everyone is falling sick. We don’t even have enough people to staff the call centers and the hotlines.  

And so that particular discussion, at least in Singapore, I can tell you that contact tracing data was used to investigate some crimes. And so this is a thing where it was, like, well, the government’s collecting this data. It is then being repurposed for something else. And of course, Singapore is a different country from the U.S. But I think you would imagine that, like, maybe people here would not be so thrilled about having contact tracing data being used for police activity or, like, law enforcement of various forms. And so that’s always a tension that I think you think about in terms of, like, you build a technical solution, and then now you collect the data. And then, oh, what else could I—like, in a corporate sense, like, what other value is in this data? How can I use it for something else?  

And then that’s where I think a lot of the risk comes up, like governance of use starts to creep in. Where you go, like, I built this for this purpose. Now I have the data. Now I have the system. Can I use the data for some other purpose? Can I use the system for some other purpose? And I think that’s where a lot of the more problematic issues come up, where you go, like, you know, is it appropriate to repurpose the data for this use case? Is it—like, if I built, like, a—like, a thing that maybe it’s like a recommendation system for, like, you know, recommending you like what—you know, let’s say it’s an off the shelf recommendation system. It’s just recommending, like, what music to play? Like, can I—is it appropriate for me to take this recommendation system and now plug it into, like, 3-1-1, and say, like, you know, the recommendation system will now tell you what benefits you should apply for, right?  

And so, like, there are all these kinds of transfer type problems that I think are one of the—they form a very specific category of risk. But I think it’s one that I think it’s very common. And it’s very—particularly when you think about the economic motivations, right? So why do we automate things? It’s because we want to do more with less. So now you have the thing. Now you’ve scaled the thing. And now I want to use it for more things. And so I think there’s a very natural evolution of, like, I was motivated by scarcity. And now I have the thing and it works. Now I want to use it everywhere. But then you have to think about whether or not there’s transfer issues in terms of the use case and data and, yeah. 

DUFFY: Well, and so this question—you know, I think one of the most interesting questions right now that we’re seeing with large language models is this question of is more actually better? And, Joni, to your point around just the size and diversity of New York City, right? If you think about the collective data of the twenty million people, essentially, in the metropolitan New York area—like the amount of that that would be available—what do you think about, like, the research that could be done, if a lot of that data was, you know, cleaned and, like, pulled together, and people could run analytics against it? Like the way that you would think about city services.  

But then you also have to think about, like, what are the civil rights of different types of datasets? And how do you control—like, if you put it all into one model, right, like, how are you going to think about provenance? So when you’re thinking about—like, when folks—not you maybe—but, like, in general where do you feel like governments, when they’re thinking about citizens’ data and data that has come from, like, public systems and taxpayer funded systems, where do you think there should be, like, ethical lines getting drawn in terms of how that data is used, how it could be combined? Should it live with the city? Should it—to Joni’s point, should it be—is it about having stronger regulations for contractors? Like, how do you—how do you think about some of those sort of backend elements of it right now? Again, not speaking officially for New York. But, like, just your take as you. 

CHEN: Yeah, so I’ll talk about bias testing. I think this is the use case I’ve thought about the most. So the fundamental paradox here is that we want our systems to be unbiased. And therefore, like, for things to be unbiased you may want to argue things like: my resume screener should never see race. I should scrub the names, because we know that this is a source of, like, unconscious or maybe not-so-unconscious bias by recruiters, hiring managers. Therefore we take this out of the equation. So there’s, like, fairness through blindness, is what we call it. 

But we know that that’s also not enough, because we still need to be able to assess: is there actually bias in my system? And so you need those labels in order to—you need that demographic information to be able to assess whether or not your system is biased, even if you didn’t want your original system to be biased. And so then this creates a very difficult data management problem, because imagine that I am a startup building, like, you know, the next LinkedIn, for example. And I want my thing to be, like, the most powerful, like, match—find your perfect candidate, like, type of solution. And then suddenly, like, you know, the EEOC comes knocking on your door and goes, like, well, have you been checking if your AI system that’s powering your shortlisting function for hiring managers—how do you know that it’s not excluding all the women in my pool, right?  

And so—and if your answer is just, like, well, I scrubbed names so I can’t see gender, like, is that believable? Like, you know, maybe you went to Smith College, or maybe you played on the women’s lacrosse team and that’s on your CV, and so there is still a marker that is, in principle, detectable. And this goes to, like, a very—again, one of these, like, sort of contradictory issues in AI. Where, like, the whole point of AI—like, the value proposition—is to discover patterns that are literally beyond human comprehension. Like, if a human could understand it, I wouldn’t need AI to do it. Right? Then this is like automation, like, just to make it run at scale.  
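[The audit Chen describes, checking whether a shortlisting system selects women at a disproportionately low rate, can be sketched in a few lines. This is a hypothetical illustration, not anything the city or the EEOC prescribes: the function names and audit data are invented, and the 0.8 threshold is the EEOC’s traditional “four-fifths” rule of thumb for flagging adverse impact.]

```python
# Hypothetical sketch of a disparate-impact audit: even if the screener
# never sees gender, an auditor who holds demographic labels can compare
# selection rates across groups.

def selection_rates(decisions):
    """decisions: list of (group, shortlisted) pairs -> rate per group."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Toy audit data: 10 candidates per group; the screener shortlists
# 3 of the women and 6 of the men.
audit = [("women", i < 3) for i in range(10)] + [("men", i < 6) for i in range(10)]
ratio = adverse_impact_ratio(audit, "women", "men")
print(ratio)           # 0.3 / 0.6 = 0.5
print(ratio >= 0.8)    # False: fails the four-fifths rule of thumb
```

[The sketch makes Chen’s paradox concrete: computing these rates requires exactly the demographic labels that “fairness through blindness” scrubs out of the system.]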

And so if you’re tasking an AI system to learn patterns that are literally beyond human comprehension, how are you then expected to understand if it’s being biased, or if it’s being transparent, or if it’s being explainable? So there’s a lot of talk about, oh, like, we remove this from the system, so it’s now, like, bias-free, it’s now explainable, we can do this and that. But fundamentally, are you solving the problem that this thing is, by construction, doing something very complicated and learning patterns? So in the resume screening example, the patterns that I just described are the ones I can comprehend, and I can explain to you in a finite amount of time.  

But you can imagine that AI could learn much, much more subtle things like, I don’t know, maybe it could be down to, like, your choice of font. And maybe, like, you know, it will learn things like, oh, minorities are, like, you know, 2 percent less likely to change the default font, and therefore, like, that’s an indicator. And that’s also, like, a predictive signal. Like, it could be something as obscure as that, that humans would never pick up on, but somehow to the AI system it’s, like, oh, no, correlation with the thing that I was told to optimize. And so therefore, that’s the answer. 
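[Chen’s proxy point, that scrubbing names does not remove gender if the CV still says “Smith College” or “women’s lacrosse team,” can be made concrete with a toy sketch. All of the data, feature names, and weights below are invented for illustration; no real screener works this simply.]

```python
# A minimal sketch of proxy leakage: the gender column is scrubbed before
# scoring, yet the score still tracks gender through correlated features.

candidates = [
    # (college, womens_team, gender) -- gender is withheld from the screener
    ("Smith College",  True,  "F"),
    ("Smith College",  False, "F"),
    ("State U",        True,  "F"),
    ("State U",        False, "M"),
    ("State U",        False, "M"),
    ("Tech Institute", False, "M"),
]

def screener_score(college, womens_team):
    # The screener never receives gender, but (having been fit to biased
    # history) it down-weights exactly the features that act as gender markers.
    score = 1.0
    if college == "Smith College":  # a women's college: near-perfect proxy
        score -= 0.5
    if womens_team:                 # "women's lacrosse team" on the CV
        score -= 0.3
    return score

scores = [(screener_score(c, w), g) for c, w, g in candidates]

def avg(g):
    vals = [s for s, gg in scores if gg == g]
    return sum(vals) / len(vals)

print(round(avg("F"), 2), round(avg("M"), 2))  # women average lower despite "blindness"
```

[Even though `screener_score` never sees the gender column, average scores differ sharply by gender, which is why “we scrubbed the names” is not, on its own, a believable answer to an auditor.]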

DUFFY: Mmm hmm. And there’s—I think there’s a pretty famous example of an AI system that was trying to—it was a facial recognition system where they were trying to sort of remove gender bias. But then they discovered that the AI system essentially learned, and it was for, like, black faces. And the AI system had essentially learned that if somebody was wearing makeup it was a woman. And if they weren’t wearing makeup, it was a man. Like, you know, it was just one of these—like, they tried to scrape and pull and scrub out as much as they possibly could to avoid bias, and then the system had just learned on its own, right, that— 

CHEN: Well, clearly it didn’t learn on drag shows. (Laughter.) 

DUFFY: Right, no, exactly. So, like, basically, it was identifying people with, like, bright blue eyeshadow as women, but, like, a neutral eyeshadow as, you know, not, right? So again, it had just sort of gone awry. 

KLETTER: Yeah, I was just going to go back to your question about the city and collecting information. You know, one issue that I believe is still out there is the gang database that the NYPD maintains. It’s not entirely clear how someone even gets into the database. And then they’re in a database that’s labeled a gang database. And it’s just not fully clear what they’re doing with that information and what could be done in the future, right? The unknowns. That’s part of the concern too: what could be done with the information that is being collected, and where are we going with it? I think it’s still an issue—if you Google it, you’ll find out more—but it was a problem for a very long time that I don’t think ever got addressed.

DUFFY: Well, and you think a lot about sort of liability, right, as well. Like, what is the responsibility to citizens? And then also, where are you open, essentially, to lawsuits from citizens for having violated their rights, right? And so how—like, when you think about the way that something like AI would roll out, or, like, massive sort of datasets of information in a city like New York, getting rolled in, what comes to mind for you in terms of like, whoa nelly? I’m from Kentucky. (Laughter.) 

KLETTER: Yeah. I mean, I think there are just so many different ways in which it could be used for targeting people. And I should have prepared more and really thought about that question, because it’s such an important question. And I think there are so many good things that are happening on the city level. And I know—and I’m not in the city anymore, so I can talk about it, because these are things that I think are relatively public—one is, like, data mapping, where there are online accounts for people who are receiving SNAP benefits. And your eligibility for SNAP benefits is being used to connect you to other social services that you might need, right? So these are all very positive benefits. At the same time, I could see, you know, targeting of people who may be receiving those types of benefits. It’s like, OK, those are maybe a poorer group of people, and they could potentially be targeted for nefarious things—you probably, you know, know better than I do how that targeting could be used.

CHEN: Yeah, so just that question of, like, what data is appropriate for a use case. Like, in banking and financial services this is something that comes up all the time. And, you know, when you say something like, well, if I know where you live, because your mortgage is with me and I work at Chase, and therefore I know exactly where you are, like, can I now use this to cross-sell you, like, a credit card? And so this immediately comes into the question of, like, repurposing of data.

So under current Dodd-Frank restrictions, like, there are very specific rules on how you can and cannot reuse this data for marketing and cross-selling. And in particular, like, when we think about geographical data—as we know, because of the very complex social and economic history of this country—you know, where you live is very strongly predictive of your cultural background, your ethnic background. And so think about it: fundamentally, the best marketing is to find exactly the right customer, which means that you want to be precise. But if you’re precise about locality, then are you being precise about race and ethnicity in a way that is not permissible?
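The locality-versus-ethnicity tension Chen describes can be made concrete with a quick proxy audit. This is a sketch on fabricated data (the zip codes, groups, and the 80 percent threshold are all invented for illustration): if knowing someone's zip code lets you predict their group with high confidence, then precision targeting on zip is de facto targeting on group.

```python
from collections import Counter

# Synthetic customers: in this fabricated data, neighborhoods are highly
# segregated, so zip code is a strong proxy for group membership.
customers = (
    [{"zip": "10001", "group": "X"}] * 90 + [{"zip": "10001", "group": "Y"}] * 10 +
    [{"zip": "10002", "group": "X"}] * 15 + [{"zip": "10002", "group": "Y"}] * 85
)

def majority_share(rows):
    # How well does knowing this subset predict group? (share of the majority)
    counts = Counter(r["group"] for r in rows)
    return counts.most_common(1)[0][1] / len(rows)

# Audit: flag any zip where group is predictable above an arbitrary threshold.
for zip_code in sorted({c["zip"] for c in customers}):
    rows = [c for c in customers if c["zip"] == zip_code]
    share = majority_share(rows)
    flag = "  <- proxy risk" if share > 0.8 else ""
    print(f"zip {zip_code}: majority group share {share:.0%}{flag}")
```

Real fair-lending compliance work uses far more careful statistical tests, but the underlying question is the same one this loop asks: does the "neutral" feature reconstruct the protected one?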

And so a lot of these questions, I think, come into, like, the territory of unintended consequences, right? And this is, in some sense, an old example, because risk scoring is, by modern standards, a very mature area of machine learning and usage. But think about generative AI. Like, if I ask it to make a picture of a beautiful woman—because, you know, apparently, when you are a sixteen-year-old boy, that’s what you ask Midjourney to create—what kind of, you know, ethnicity, what kind of art style comes out? You know, it’s very, very stereotypical.

And so if I’m now trying to think about, let’s say, I’m interested in Midjourney because maybe it can supplant my capacity to provide sign language, you know, and annotations of a recording. Or maybe I’m going to use it to generate some sort of graphic art to put on, like, a public sign. Like, suddenly, I’m in this territory of, well, that seems to work well for that use case—what could possibly go wrong if I’m now creating public-facing material? And really this applies to just about anybody, right? Like, it doesn’t even have to be government. Even if I’m a business, I worry about, like, is there a reputational risk if I’m using this thing and it creates something that people find offensive, and it blows up, and then suddenly you’re, like, you know, pilloried on social media, and it’s a PR crisis, right? So I think any large organization worries about this, right?

DUFFY: Or you’re Google, and you launch Gemini, and in the name of inclusion you’ve got racially diverse Nazis. (Laughter.) Like, it was not a great week for them. And—I don’t know if this still works, but you all might want to try it; it was working as of a couple of months ago, I haven’t tried it recently—if you go into GPT-4 and you use the prompt, “write this like a man,” or “write this like a woman,” you’ll get totally different results. And I write like a man, it turns out. (Laughter.)

KLETTER: Really accurate then. 

DUFFY: I was mostly, like—I write like an American, because it’s just, like, bullet points. So many bullet points. It’s, like, I just write like an American.

We have this amazing group. I could talk to you forever and I have, like, a thousand questions. But I want to turn it over to you all into our members, our guests online as well, to hear any questions from you. So yeah, go ahead. 

Q: Thank you both for making the time. Really enjoy this, being a former New York City employee in two agencies.  

I had two questions. One, what would you say is the most, let’s say, interesting use case or unit across the agencies in New York City government today that you can talk about? And, two, having worked in the agency, and also now in the private sector, I think one of the big challenges that agencies often face, or institutions often face, is whether to innovate in house or to bring on an external vendor to achieve some sort of objectives. How do you guys think about the decision on whether to pursue something in house or by bringing on a vendor? 

DUFFY: I’m so interested in how you think about this, being two months in at the city after being at, like, JPMorgan. (Laughs.)

CHEN: I will say that one of the things that has struck me about working in a government agency is that it’s not very common for governments to build technical tools in house. It’s much more common to procure. And if you look at the research on AI governance and AI ethics, it’s almost exclusively dedicated to the assumption that you are a tech company building something in house: how do I, like, audit the model for bias, or audit it for risk of this form? And the interesting thing here is that if you’re in a procurement situation, you’re buying a black box, right? It’s very unlikely that the vendor is going to let you poke inside, basically—with very few exceptions, right? Like, if you’re, like, a military buyer, you can almost certainly, like, you know, strongarm them into showing you what’s inside. But if you’re anyone else, they’d be, like, why would I show you my secret sauce? Like, this is literally, like, my valuable IP. This is my trade secret. Blah, blah, blah.

But on the user end, I now have to rely on the vendor’s assurances—whether it’s a sales pitch, whether it’s contractual liability—like, these kinds of mechanisms to make sure that if the vendor tells me the tool performs as intended, they’re actually making a statement that it will. And, you know, to the point where, like, if it goes wrong, like, you can sue me, you can find fault with me, right? So it now comes down to the very non-technical aspects of, like, what’s your legal protection? What does your contract look like? What are your procurement structures? So there’s a lot of that kind of discussion, which I’m learning about.

And you can imagine that New York City is a very decentralized place—this is the other thing I’m learning. Like, there are cities that are very centralized. There are local governments where there’s, like, a central tech office, and the central tech office does everything for that organization. And then there’s, you know, New York City, which is literally, what, 350 years old? And so it’s just, you know, this organic, like, milieu of, like, different tech agencies and chief information officers. And they all have their specific needs. So, like, the Department of Health probably has very different needs from, like, your former agency. And that maybe has very different needs from, like, the Department of Education, and so on.

And so, like, you know, the data-handling regulations in these different use cases are also quite different. And so there’s that challenge of, like, do we expect a one-size-fits-all solution, or do we have to customize it? But customization is antithetical to scale. And so this is, like, one of the tensions—to be very stereotypical, it’s stereotypical tech culture versus stereotypical, like, you know, legal and government kind of thinking. Like, that’s, I think, one of the big tension points.

DUFFY: Well, and that’s— 

KLETTER: Yeah, CourtCall is— 

DUFFY: This is a follow up for you, actually, on this as well.

KLETTER: Well, CourtCall was the vendor that we ended up using. And I think the way that I described municipal government really kind of highlights some of the challenges you have with doing these types of things in house, right? Because so many city employees are civil servants, right? They took a test to fill the role that they’re filling. And those are the specific job duties that they have. And it’s very hard to pivot, and to bring—and even just to bring in, like, new—you know, to say, oh, we’re going to create a new unit and hire all new people. It can be very challenging, just given the hiring restrictions, and the budgets, and everything else.  

So particularly during COVID, it really kind of gave the city an opportunity to quickly procure this new technology. I do think you brought up some of the tension and challenges that you have. Like, one, yeah, I mean, what are they doing with the information? Are they keeping it secure? And are we allowed to see their code? Two is the silos that so many agencies are in, because these are such massive agencies. So DOE is contracting with all these vendors, but most of the other agencies don’t really know what vendors are being used. So there’s a lot of duplication. And just given the size of the city, I don’t know how you solve that problem, but yeah.

CHEN: No comment. (Laughter.) 

DUFFY: No, but so I have a question on this then. So, like, think about having moved the hearings onto Zoom, for example, right? Now Zoom has, you know, AI operating in the back end that can automatically generate transcripts. Like, there’s a lot of cost savings for the city in having a really advanced, like, AI model backing Zoom, generating—

KLETTER: Well, you might not need a hearing officer. You could have AI do the hearing, potentially.

DUFFY: Right? But then also Zoom has all of the information, you know, that has been discussed during that hearing. And presumably there are significant sort of confidentiality concerns in that mix. And then for the generative models—and, Jiahao, tell me if I’m right or wrong on this—what’s really interesting to me about the generative models is that they’re also learning against things like your own prompts, right? And so it’s one thing if you have, like, the sort of initial data model. But then you’re investing all this time—it’s the expertise, and the creativity, and the questions of New York City citizens and employees and officials, right, who are working with that model—that’s going to make that model smarter and more responsive.

And who owns that? So how do you avoid something like vendor entrapment, right? Which is something we dealt with a lot in civic technology. Where does New York get to claim, essentially, the IP, so that it’s not trapped with one particular backend vendor, right, and can move over? So if you had been thinking in that moment about, like, Zoom as a vendor—like, talking to the lawyers—what are the questions, or what types of lawyers, would you have wanted to have in the room? Because there’s an element of this where we have to understand how it works to understand the legal issues.

CHEN: May I talk about Zoom, please? 

KLETTER: Yeah, yeah. Of course. 

DUFFY: Yeah. 

CHEN: So we can thank the Europeans for enabling this very obscure setting in Zoom that lets you choose which countries’ servers you allow your data to sit on. And so there’s a—there’s a specific thing. You can go in advanced settings to say, like, you can uncheck, like, I do not want to use a Zoom server located in—and there’s a list. And so there’s a list of countries in Europe, list of countries in Asia, and those kinds of things.  

DUFFY: I’m noting this for our meetings team. (Laughter.) 

CHEN: No, so data sovereignty is, like, one of the big aspects of GDPR. And I think the U.S. is not quite so—the U.S. is just, like, one monolith, right? Versus, like, in Europe, every state is different. Like, I’m speaking to a whole room of people who know this way better than I do. So the fact that, like, you know, there are, like, processing restrictions on where that data can sit—that’s actually something that starts to resemble, like, the issues around government cloud. And so, you know, there are now certain kinds of government use cases where, like, data cannot physically leave the U.S.

How do you guarantee that if you’re using Zoom? Well, OK, that’s why there’s now an option in Zoom to choose the server, or you literally buy, like, what is, like, gov cloud, right? So you buy the cloud solution that’s specifically approved for government clients, and then you sit there. And so, you know, every now and then people get confused, because, like, gov cloud Zoom is a different server from the default server. And people go, like, this looks really janky. It’s, like, called USGovCloud-dot—like, something random. And I’m, like, is this really Zoom? Actually, it is. But it’s because it’s on the gov cloud instance.

DUFFY: So when you’re thinking about, like, procurement, right, and what you’re looking for—like, the people who are experts in procurement, and what’s legal, and what’s not, and, like, where you could bend a rule, right, versus where you can’t—those aren’t generally going to be the same people who are going to be expert in, we have to think about the data security, we have to think about the servers, we have to think about cross-border transactions. So, Joni, when you hear that, like, what do you—

KLETTER: Yeah. Right, well, when I came—when I was at OATH it was all happening so fast that it’s, like—and these issues weren’t even really ripe yet, because I didn’t know exactly what was going to happen. Yeah, I was worried about just making sure, OK, these hearings are going to get online, and we’re going to do them, and make sure that everyone’s getting their hearing, and— 

DUFFY: Yeah, while you had two children distance-learning at home. 

KLETTER: Yeah, exactly. I had two kids. 

DUFFY: To be clear.  

KLETTER: And so—and also, there wasn’t an RFP process, which made it so much easier. Like, I hired CourtCall within the first week that I started at OATH. And it’s not as much of an issue at OATH, because the hearings are technically, like, public hearings. There are actually reporters who could listen in if they wanted to. You know, Eric Adams ended up having a hearing at OATH—or, one of the attorneys appeared for him. But, like, if you wanted to, you could listen in on the hearing. But it’s certainly—yeah, I mean, you would need lawyers. You need IP people. And there’s a whole, yeah, group that needs to analyze what’s happening with this data. But it wasn’t even considered at the time that I was—

DUFFY: Yeah. Others. Yes. 

Q: Hi everyone. Jordan Sandman here. I work with Co-Develop. We’re a funder of digital transformation efforts around the world, in a new field called digital public infrastructure. We fund both the technical systems and also civil society groups that are doing digital rights advocacy.

One of the questions we’re really interested in is how governments are listening in this era of digital transformation. And so I’m curious if you had any examples of understanding the most at risk groups as you did digital transformation in government, or if you’ve sort of engaged with civil society groups and had something that we’re calling, like, a holding environment, where you can have open discourse to understand where the risks are as you’re doing digital transformation. 

DUFFY: Right. You want to—do you want to take—I mean, like, I immediately thought about, like, Computational Propaganda Project, and, like, what they did in Taiwan. But what—how do you think about it?  

CHEN: Yeah. So I can comment on what is public. So there is an advisory network being set up through OTI that is trying to bring in these external voices. So that’s definitely something that’s on our radar, to make sure that we’re listening to people representing the sheer diversity of what happens in New York City. And maybe I’ll comment on one specific problem, where I’ve sort of gotten into a rabbit hole around Local Law 30 of 2017. So what is this law? This is the law that requires the city to provide services in languages other than English. And so there’s annual public reporting which, yes, if you need something to do before you go to bed, like, you can read the annual reporting for Local Law 30. But it really is interesting to see how this citywide mandate works—that we need to provide services in ten languages that are not English, that are, by some definition, widely spoken in the city. How do you determine what those languages are? And what are your obligations there?

And it’s really interesting to read about the various agencies that were, like, OK, well, that’s the legal mandate. But when we’re actually sending field agents—like, you know, at-home nurses, or, like, you know, people visiting, checking in on people—we’ll find, like, this vulnerable population doesn’t speak English. Like, you know, the first language is, like, you know, Somali, or something that’s, like, you know, not even on the top-ten list, let alone a well-resourced language. And now you bring that question of linguistic diversity to the question of, like, oh, imagine if I want to put AI behind, like, some city service.

Like, you know, what if AI was being used to, like, you know, do a quick screen and pre-approve you for benefits, for example? Maybe there’s a certain initiative to streamline you. But then the people applying for benefits are also the people most likely to require services in a language that’s not English. And by law, like, you’re supposed to provide these. And now where’s your so-called quality control, right? So, like, every now and then there are these, like, famous gaps—like, oh, this road sign in Welsh actually says, like, I am out of the office, because, like, the translation service just returned the voicemail auto-response, and then the people who ordered the translation thought that was the answer and just wrote it out on the road sign. (Laughter.)

So, like, how do you prevent those kinds of issues? Because they are—like, you could see this thing where, like, you’re trying to use the technology to solve a problem, which is improve accessibility. But you end up making things worse because it’s not—it’s just completely garbage as a result, and you don’t speak that language, so you can’t validate it. And so there are all these kinds of questions of, like, how do you validate this thing that needs to come into play, if you’re thinking of a technical solution around these kinds of things?  
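One lightweight guard against the Welsh road-sign failure mode Chen describes is a round-trip check: translate the output back into the source language and flag anything that has drifted too far for human review. The sketch below is illustrative only; the `translate` stub, the word-overlap metric, and the 0.5 threshold are all invented stand-ins for whatever real translation API and similarity measure a deployment would use.

```python
def translate(text: str, src: str, dst: str) -> str:
    # Stub standing in for a real translation service (hypothetical).
    # It simulates the Welsh road-sign failure: instead of a translation,
    # the "service" returns an out-of-office auto-reply.
    canned = {
        ("No entry for heavy goods vehicles", "en", "cy"):
            "Nid wyf yn y swyddfa ar hyn o bryd.",
        ("Nid wyf yn y swyddfa ar hyn o bryd.", "cy", "en"):
            "I am not in the office at the moment.",
    }
    return canned.get((text, src, dst), text)

def word_overlap(a: str, b: str) -> float:
    # Crude similarity: shared lowercase words over the larger word set.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa), len(wb), 1)

def round_trip_check(text: str, src: str, dst: str, threshold: float = 0.5):
    # Translate out, translate back, and compare with the original.
    forward = translate(text, src, dst)
    back = translate(forward, dst, src)
    score = word_overlap(text, back)
    return score >= threshold, forward, back, score

ok, forward, back, score = round_trip_check(
    "No entry for heavy goods vehicles", "en", "cy")
print(f"round-trip similarity {score:.2f}: "
      f"{'PASS' if ok else 'FLAG FOR HUMAN REVIEW'}")
```

A real pipeline would use a proper semantic-similarity score rather than word overlap, but even this crude check catches the case where the "translation" is an auto-reply with nothing in common with the source text.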

DUFFY: Well, and this goes, I think, Joni, to your point as well, around what extraordinary opportunity New York offers as New York. Because of this question of red teaming, for example, right—of auditing AI systems, and having different communities do it. The number of different communities, the number of different languages, the number of different sort of demographic groups that are represented in such a small geographic area—when you think of the supply of experts who can actually help run red teaming that is sort of community-centric, and who have the technical expertise for that, the idea that you can access that many different types of communities and that many different types of people relatively quickly is really unique. There are truly only a few cities in the world where you could do that. It’s incredible.

KLETTER: Right. I mean, you think about our city hospital system, which is eleven city hospitals that serve hundreds and hundreds of thousands of people every year. And there are, like, over a hundred languages spoken in our city. And people can come into a New York City hospital and seek care, and speak to someone who is assisting them in translating whatever language they speak. It’s pretty incredible. And I have to think New York City is one of the only places you can do that, especially given the number of different languages that are spoken and that we provide that type of care for. And that’s just one example, the city hospitals. But it really is incredible.

And, like—at OATH, for example, I didn’t say this, but we also have interpreters for anyone who needs them. And we’re constantly, you know, calling on the interpreters. I mean, every agency is required to, yeah, have translators and interpreters available. And so for every service there are translations. And every document that’s issued by the city, anytime something’s put up on a website, every single page has to have the translation in those—I think it’s eleven languages. Is it eleven or ten?

CHEN: It’s ten excluding English. 

KLETTER: Ten excluding, yeah. In those ten languages. So it’s just, yeah, millions of documents that have to get translated. 

DUFFY: Is there a question online? 

OPERATOR: We will take our next question from Owen Lee. 

DUFFY: Hey, Owen. 

OPERATOR: Owen, if you can speak into the microphone or unmute yourself. We seem to be having technical difficulties. So I’ll turn it back to you, Kat. 

DUFFY: OK. Owen, you can just email me, man. Any other questions in the audience? Yes. 

Q: My name is Rachael Schefrin (sp).  

I was one of the people who raised my hand when I said—when you asked who has absolutely no relation to AI in their job—(laughter)—so forgive me if this is an obvious question. It seems like from this discussion it’s really clear that the use of AI comes with a lot of sort of collateral risk, because AI doesn’t understand what collateral damage is in the way that people do, and therefore needs oversight in the same way that a human employee does. And that feels like it’s leading to a really fundamental shift in the way that, like, cities using AI staff things. And so I’m wondering, like, how you see—or if there is a plan in place to sort of make those shifts, because obviously AI reduces the need for certain humans in some places, but also demands, like, a lot of capacity building in other areas. 

DUFFY: All right, labor lawyer, flex it. (Laughs.) 

KLETTER: Yeah. I mean, it feels like there’s going to be a whole new civil service test—(laughs)—almost, for people who are going to be overseeing our AI overlords, right? I mean, there’s going to have to be. And it’s something that, I think, my generation has never thought about, didn’t have to think about, right? But for a younger generation, these are going to be key roles. I don’t know how many people we’re talking about who will fill those roles, you know, but it’s definitely going to be an important role, given the oversight that’s needed, as you said. And I think you made a great point about the risks and the unknowns that we just don’t have at our fingertips.

DUFFY: And, Jiahao, you know, one of the questions for me, especially when we think about, like, cities and governments, right, is always, where does it make the most sense to create sort of new mechanisms, or really focus on bringing in, like, scientific or technological expertise? Versus, how much more powerful would it be to take the people who really know the system, the government, like, the way that things work, and invest in skilling them up in, like, some of the fundamentals? For you, what are the sort of costs and benefits of those different approaches? 

CHEN: So part of the AI governance question, oftentimes, points to a more systemic issue: that you don’t have governance for this particular process. So, for example, you were just blindly trusting the output of your translators and not checking that it doesn’t just say, this is out of office—and then something bad happens, because you’ve wasted public resources to put up a road sign that is just nonsense. So imagine I’m now trying to use AI for translation services. Like, do we have a process in place for verifying the validity, the accuracy, the correctness, the acceptability of what is being produced? And just as we are using AI to try to automate and scale and do more with less, can we also do the same thing with governance, right?

So the need for governance and checking the answer grows with the number of answers that come out. And so if the output scales, then you need to have something that tries to keep up with that. And oftentimes, this is the actual problem where, like, maybe AI will just generate a lot more work, because now you need—maybe you’re automating away call center agents, but then now you need people to make sure that ChatGPT is not saying, like, wrong things about your products or your city services to the people calling.  
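Chen's point that checking must scale with output can be put in toy-model terms. Assuming (unrealistically) that each AI-generated answer is wrong independently with probability p, and that humans review a random fraction f of answers, the chance that at least one error slips through unreviewed grows quickly with volume n. All the numbers below are invented purely for illustration.

```python
def prob_missed_error(n: int, p: float, f: float) -> float:
    """Probability that at least one of n outputs is an unreviewed error,
    under a toy model: errors are independent with rate p, and a random
    fraction f of outputs gets human review."""
    per_output = p * (1 - f)          # wrong AND never looked at by a human
    return 1 - (1 - per_output) ** n

# A fixed 2% review rate is tolerable at small scale, hopeless at city scale.
for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9,}: P(missed error) = "
          f"{prob_missed_error(n, p=0.001, f=0.02):.4f}")
```

The independence assumption is generous to the AI (correlated failures are worse), yet the probability of a missed error still approaches certainty as volume grows, which is exactly the "governance must keep up with output" problem.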

And, you know, just to bring up something that happened two weeks ago—I don’t know how many of you saw this case in Canada about liability for Air Canada’s chatbot. Yeah, OK, some people are familiar with it. So Air Canada was found liable and had to honor the refund policy that its chatbot made up. (Laughter.) And Air Canada’s argument was that the chatbot was a separate legal entity. (Laughter.) I’m not a lawyer, but, like, this sounds wild. So, like, until the day we can figure out how to sue AI and, like, make AI pay tax, we’re babysitting this thing for a while. (Laughter.)

DUFFY: Where, and when, and how you sue over AI is a fascinating question, by the way, and a deeply American one.

We are—I know we’re basically at time, and I want to be respectful of the folks who are joining us online. I know we’re going to have a reception—can you guys hang out and have a little snack? And so I want to end—because this is the young policy professionals group—with a quick question. Three, four sentences, no more than that. What do you know now, based on the years that you’ve been working in your respective fields, that you wish you could go back and convince, say, your twenty-seven-year-old self is true? Like, is there a thing that you would go back and tell yourself at twenty-seven that you know now and wish you had known then?

CHEN: Does it have to be within the scope of this discussion? (Laughter.) 

DUFFY: Not at all. Not at all. What do you think? Jiahao, what about you? 

CHEN: Four things, OK. So, it’s OK to switch careers. I’ve now switched careers, like, three or four times, and I do work that did not exist when I was in grad school. And that’s fine. We should normalize that. That’s the first thing. Second thing is, sometimes opportunities just knock on your door and you don’t even recognize them as opportunities. So, like, when I left academia to go work at a bank, I was, like, why would I work at a bank? And then they were, like, oh, we’re going to do data science on call center data. And I’m, like, I have no idea why you’re even talking to me. Like, what could possibly be interesting about this?

And then I started doing the work and I was, like, wow, OK, so now we’re worrying about comparing the call center experience in English and Spanish, because Capital One offers services in those two languages. And the regulators are interested to know, like, if you are getting more complaints from Spanish-speaking customers than English-speaking customers, how do you know that? And do you know why they’re upset? Like, are they upset because the agent went off script, or was it something else? Like, just, new problems come your way. And sometimes you are the person to solve that problem because of the weird and wonderful life trajectory that brought you to that place. Sorry, I started rambling. So I’ll stop there.

KLETTER: No, I love that. 

DUFFY: I love that. 

KLETTER: That pretty much covered everything I have too. (Laughs.) Because my career has kind of gone in a strange trajectory. And I was fortunate to have, you know, Bill de Blasio come and ask me to join his administration, and it sort of just set me off on a different path. And I was doing things that I had no—like, I came to city government and got thrown into this world of city politics, with the city council, and the budget, and had no background in this. And one thing it’s important to know is that, like, you’re much more capable than you realize. You know, you’re college-educated. There are so many things you can do and that you’re capable of doing. And you kind of figure it out. Yeah. Ask questions and you’ll figure it out. So never be afraid to try new things.

DUFFY: And I feel like, for me—you know, I grew up, like, a white, cisgender, you know, middle- to upper-middle-class, like, Ivy League-educated woman coming from Kentucky. So, like, systems worked pretty well for me, right? Because we’ve dramatically failed at intersectional feminism, systems generally worked for me. And so if I could go back now, knowing what I know, I would tell twenties me to have a lot less trust and faith in systems and give individuals a lot more grace. Be far more skeptical of systems and be far more open to individuals and listening to them. And I think that’s in part because the only way that you break, or build, or improve a system that is not working for people is with people and through people.

And I think that in the discussions today—like, one of the reasons I was so excited for you all to meet Joni and Jiahao is that that’s exactly how they think about these questions of digital transformation as well. It’s not fundamentally about the technology; it’s about the people it’s trying to serve, and how you work with those people and bring them together. And all of you, as you move forward, will lean on each other to do exactly that as well. It’s why I’m so grateful for this room, why I hope to get to meet all of you at the reception, and why I’m so grateful to CFR for bringing us together. And so with that, please thank Joni, and Jiahao, and Tank. (Laughter, applause.)

