Artificial Reason
A conversation on AI, rationality, and violence with Kevin T. Baker, Sophia Goodfriend, and Benjamin Recht

On Tuesday, May 5, Boston Review convened a panel of three prominent writers—Kevin Baker, Sophia Goodfriend, and Ben Recht—to discuss how AI is changing the way individuals, institutions, and governments make decisions, and the consequences for politics, war, and beyond. The conversation was moderated by BR contributing editor Lily Hu.
The following transcript has not been fully edited; it may contain errors.
About the Panelists
Kevin T. Baker is a historian interested in the history of computers, simulation, and artificial intelligence in public life. His latest essay, “AI got the blame for the Iran school bombing. The truth is far more worrying,” appeared in The Guardian. He writes the Artificial Bureaucracy newsletter here on Substack.
Sophia Goodfriend, the Harry Frank Guggenheim Research Fellow at the University of Cambridge’s Pembroke College and a nonresident fellow at the Harvard Kennedy School’s Middle East Initiative, writes widely on automation and warfare. For Boston Review, she recently wrote “The New Old Warfare,” a review of Petra Molnar’s book The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.
Benjamin Recht is Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His latest book is The Irrational Decision: How We Gave Computers the Power to Choose for Us. He writes the arg min newsletter here on Substack.
Lily Hu: In the past few years, we’ve seen so many horrifying plot lines developing globally: Israel’s relentless and brutal war on Palestine; the AI rush and the associated rush to build data centers, ensure trade flows, and secure access to key minerals and materials; the ever greater, ever-present risks involved in global migration; and now, of course, the war in Iran. We can now begin to see very clearly how these plot lines are being woven together. New technologies, and perhaps even more the unprecedented levels of hype around them, are playing a transformative role in how war and surveillance are carried out, and in the public discourse about these activities: how systems of tracking and shooting and killing are being pitched and sold and justified to the public and to the military by companies like Palantir. We’re hearing so much about more precise, efficient, smart AI-assisted warfare.
I want to start with that new gloss that’s being given to the usual campaigns of war and military force. These newfangled technologies grab our attention, and they bill themselves as making war or border protection more precise, more efficient, and, it’s implied, less costly in terms of human lives. But of course, as so much of your work discloses, this is not so. In the end, it is still human choices that are making the decisions to surveil, target, kill, and maim. I want to start by asking everybody: How do we remain clear-eyed about what’s going on right now in this new AI-assisted realm of military technologies? What’s new and what’s old? AI is changing so much of how war is conducted, and so much about how social and political decisions are being made more broadly. How do we sort out the evergreen truths about these technologies that we must not lose sight of, while also seeing what’s really distinctive?
Sophia Goodfriend: Thanks, Lily, and thanks for having us. I’m excited for the conversation. You’re absolutely right. It’s important to be specific and clear-eyed about how precisely AI is changing warfare, armed conflict, and the various atrocities that you’ve named and that frame our conversation today. I know that the other speakers here who have written extensively and have training in technical backgrounds can speak more specifically to this. But I would like to start out by clarifying first and foremost that AI is not really a coherent technology. It’s really a diverse set of tools from machine learning algorithms to large language models that are capable of automating tasks that were once carried out by humans. So when we talk about AI, we should really be talking about automation. And when we talk about AI warfare, we should be talking about how militaries are automating warfare and soldiering.
We’re seeing that today most strikingly amidst the U.S.-Israeli bombardment of Iran, in the wake of two years of war in Gaza. Alongside all that’s happened in Ukraine as well, we’re really talking about how automation and artificial intelligence are transforming targeting on the battlefield. And that unfolds through a host of different technologies. It can be object recognition systems used to comb through reams of satellite imagery and inform anomaly detection systems that can alert a military’s targeting cell to when and where to strike. It can be facial recognition algorithms that help militaries compile lists of potential militants whom they determine to be valid military targets and decide to assassinate using drones or other kinds of guided munitions. It can be recommendation algorithms that speed up the pace at which intelligence analysts decide who or what constitutes a valid military target. That’s really the set of automated systems transforming much of warfare today, and I think it’s quite important to be specific about that. And when you’re specific about that, you can also be specific about the kinds of technical limitations these systems run up against, limitations that contradict the claims of both the private companies pushing these technologies and the militaries using them.
For example, the anomaly detection systems within something like the Maven Smart System, manufactured by Palantir, have a really high error rate when they are used over terrain that doesn’t match the terrain in the satellite images they were trained on. Or the automated translation tools the Israeli military relies on to automatically translate reams of telecommunication data taken from Palestinians living in the West Bank or Gaza are known to have very high error rates and to mistranslate words constantly, which can feed other kinds of errors and limitations throughout the larger kill chain. So again, when you’re specific about which algorithms, technologies, and systems are being used by militaries, then you can get into the weeds of how these systems work and how they don’t. I think that’s important, and I’m sure others can speak more to that as well. But I say that because when you roll out a host of systems that automate tasks once carried out by humans, what’s new is that they do speed up the pace, the tempo, and the scale with which militaries can act. We’re seeing that in Iran, and we’ve seen it as well in Gaza.
In Gaza, the Israeli military integrated a host of AI-assisted technologies into its kill chain, and that allowed it to strike at the height of its aerial bombardment of the Strip once every two minutes. Fast forward two years, and you see that Israel and the United States were striking once every seventy-seven seconds in Iran. Just to sit with the pace at which these technologies allow militaries to strike and kill on the battlefield is also quite important. So that’s something that’s new, to go back to your first question. As you said, and as I’d love to hear others talk about as well, these technologies really subtend old dreams of warfare, old dreams of domination, old dreams of power.
The first, I think most importantly, is the fallacy that technological supremacy will deliver military victory and lasting security on the battlefield. It’s an old dream that militaries, Western militaries in particular, have offered, that achieving total dominance over an enemy through air power alone will shore up military victory. We’ve seen that throughout the 20th century, throughout World War II and Vietnam, in places like Yugoslavia, and now Iran, with the kinds of strategic failures that have played out over the last few months. And we see through that history that air power alone could never allow a military to achieve its strategic aims in warfare. And that idea has endured and is so alluring to militaries because it also comes with the idea that you could attain victory without sacrificing your own soldiers in war. And you can wage war without having to mobilize both the popular support and the political will of your constituencies. I think that’s an important old and enduring fantasy, and we can talk more about how that’s played out over the last few months as well.
Another old theme that tangles with contemporary hype around AI and warfare is the dream that automation is a technology of control. I think that’s something we can see quite tangibly. Kevin, I think you cited David Noble in your piece in The Guardian about how automation is first and foremost a kind of managerial tool, a tool that shores up the power of CEOs and politicians by taking discretion and power away from workers and placing it in the hands of a smaller number of people. I think we can see that quite tangibly in how wars are waged and how militaries are embracing automation, particularly when it comes to targeting on the battlefield.
In Israel, the context in which I’ve done my research, in the 2010s, as intelligence units were integrating a host of AI systems into targeting to speed up their operations, you had intelligence heads saying that they dreamed that in a few years, 80 percent of the tasks once carried out by human analysts would be carried out by automated systems. That dream also chimed with quite specific aims, tangled with the dreams of right-wing politicians and the military commanders they worked with: annexation, displacement, population transfer. Again, you can see how AI as a tool of managerial control also chimes with the specific political aims of the militaries and governments deploying these systems; how these systems shore up authoritarian aims by, again, limiting the discretion, responsibility, and agency of the other people bound up in war machines, or in governments, or wherever you’re seeing these technologies deployed, and placing it in the hands of a smaller group of people overseeing a larger system. I think that’s a through line we can draw from earlier experiments in automation to how militaries are using these technologies today.
Hu: One of the things that, Sophia and Kevin, I think you both touched on in your recent writings is this emphasis on how these tools make claims to efficiency, precision, automation, and control. But they are actually extremely faulty. They’re not as smart as they’re sold to be, and these errors often lead to even more destruction, despite claims that the war is going to be less costly or more rational. So there’s, on the one hand, a set of critiques focused on ideologically debunking these benchmarks of precision and efficiency, showing that they really don’t hold: these systems are actually extremely inaccurate in practice; look at these actual cases of failure caused by a map not being updated, an address not being updated.
On the other hand, there’s this worry that a line of critique that focuses on accuracy, the extent to which they’re meeting these marks that have been set up by this master narrative of what justifies the deployment of these tools, plays right into the hand of this logic of technical rationality, where the primary questions that we’re asking about war are, how precise is your targeting? Or, how accurate is your facial recognition technology? Or, how many mistakes did your system make? I feel like one thing that Ben’s book was really good at bringing out to me was the extent to which the problem is not simply that the tools or the systems are failing on their own terms, that they’re not as accurate, or they’re not as efficient, but that the terms themselves are deeply problematic. The ubiquity of technical rationality itself is problematic, and the extent to which we continue to measure up humans or tools to those particular metrics is sort of a deeper problem than falling short of the standards.
I want to ask, how do we fight both fronts at the same time? How do we both beat back the ideological hype about precision and efficiency, emphasizing that we’ve been ushered into this new age of hyper-rational warfare, and at the same time, reject those very goalposts of efficiency, accuracy, and technical rationality? Ben, do you have a thought about that, since I know your book focuses so much on what are these metrics to begin with?
Benjamin Recht: They all come out of experiences during the Second World War and the desire to make things more administrative, or the view that some of the greatest successes of the Allied effort were in the deployment of smarter administration. I think after the war, there was this push to think about how you could do that even better. We had all these engineers and mathematicians conscripted to work in planning offices during World War II, and afterwards they came up with this push: let’s see how we could design that to be more streamlined. You would hear a lot of frustrations from these sorts of people that the decision-making of the war was too ad hoc, and that we would have been better off had we been able to mathematize our planning then. So much of the tooling behind everything we’re using today in computing was set up to build systems that automate decision-making, to make things less ad hoc, to decouple means from ends. Everything in optimization and game theory starts there.
It’s funny, because the logic doesn’t change. The technology changes, but the argument stays the same. Probably the most tragic and obvious failure was Vietnam, where they were incredibly technocratic. I mean, they didn’t have computers to the same extent, but the fun part is that you could do a lot of these computations with tables. You don’t necessarily need the actual machine there to calculate the logic of these spreadsheets. It’s interesting that the story doesn’t change, and that the kind of planning that went into a lot of that campaign in Vietnam doesn’t sound that different from a lot of the logic behind the two Iraq wars, and isn’t too different than . . . I mean, it’s no different than what we’re doing now, although the technology is getting more sophisticated and invisible.
Hu: Kevin, I want to turn it to you. How do you think we can fold in the critique of the new old warfare with the broader critique of the old old warfare, which is the very premise of war at all?
Kevin T. Baker: You mentioned earlier that technology criticism faces a kind of strategic dilemma: Do we play into this conversation about precision and accuracy, and all of the ways these firms sell this technology? I think that’s a dilemma for a certain kind of tech criticism, but I think it’s one we need to move beyond, because it locates all of the action within the technology itself and lets the technology guide the conversation. What’s important to me about technologies like the Maven Smart System is the context in which they’re embedded. The war in Iran and the war in Gaza are notable for, first and foremost, a complete indifference toward civilian casualties, a complete indifference toward what used to be called, euphemistically, ‘collateral damage.’ So that’s an important part of the calculus we need to take into consideration. Another is that these casualties are very straightforward consequences of waging war in a dense urban environment. They’re not something any kind of technology is going to be able to eradicate or displace.
The other thing is, these systems are often precise in hitting the target that is requested, but that doesn’t really get into the question of target selection to begin with. And all of these are essentially political questions and not technical questions. I think when we focus too tightly on the ways that technology is embedded within these politics, we lose track of what is still political, where we as people not in government or within these firms have an ability to change the course of history. I think a lot of this needs to take place not in the questions of the fundamentals of the technology, in the internals, but in much more straightforward political conversations about whether this war is legal, whether it’s moral, whether we should be doing it to begin with. This was one of my frustrations early in this iteration of the Iran war, where conversations about Claude, and Claude supposedly bombing a site, tended to displace questions about the war in broader terms.
Read the whole transcript on BR’s website.

