
Armed Conflict, Cybersecurity & Tech, Foreign Relations & International Law

Lawfare Daily: IHL and Private Tech in Conflict, with Jonathan Horowitz

Eugenia Lostri, Jonathan Horowitz, Jen Patja
Wednesday, November 13, 2024, 8:00 AM
How could international humanitarian law affect the private digital sector?

Published by The Lawfare Institute in Cooperation With Brookings

Eugenia Lostri, Senior Editor at Lawfare, sat down with Jonathan Horowitz, Deputy Head of the Legal Department to the ICRC’s Delegation for the United States and Canada, to discuss his recent article, “The Business of Battle: The Role of Private Tech in Conflict.” They talked about how international humanitarian law principles can affect the private digital sector, the risks that tech companies can face when they provide services to a party in an armed conflict, and what they should do to minimize those risks. 

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Jonathan Horowitz: You do have an obligation to take all feasible precautions, and one of those includes verifying your targets to know that you're meeting the principle of distinction, that what you are directing an attack against is a military objective.

Eugenia Lostri: It's the Lawfare Podcast. I'm Eugenia Lostri, Lawfare's senior editor, with Jonathan Horowitz, deputy head of the legal department to the ICRC's delegation for the United States and Canada.

Jonathan Horowitz: And you might start to see militaries rely on those goods and services which companies had not thought about being used in situations of armed conflict, but to really no fault of their own. An armed conflict has emerged and there they are providing goods and services to parties to an armed conflict in ways that, for example, might start to raise questions about whether or not their infrastructure could ever qualify as a military objective.

Eugenia Lostri: Today, we're talking about why tech companies involved in armed conflict need to engage in dialogue with governments to understand the risks of wartime support.

[Main Podcast]

So, Jonathan, before we go into the legal questions that arise from the role of private tech in conflict, I think it would be useful if you could start by describing what sort of relationships fall under today's conversation. Right, because we tend to think about maybe offensive operations, or we're talking about certain services being offered as the prompts for this type of analysis, but that's not really the full extent of it. So, if you could just maybe get us started with that overview.

Jonathan Horowitz: So, I think that's a great place to start because if we don't start there, there are a lot of assumptions that can get baked into the conversation that need to be unpacked.

So from the International Committee of the Red Cross's perspective, my organization, right? We're a humanitarian organization that works almost exclusively on issues of international humanitarian law and providing humanitarian assistance to victims of armed conflict and other violence. And what we've been seeing, which I think a lot of the audience has been seeing, is our society is increasingly being digitalized, which means we just become more reliant on digital infrastructure for everyday goods and services.

But in situations of armed conflict where we do so much of our work, that means that those everyday goods and services are also essential services, right? Access to medical care, access to other emergency services, to clean and potable water, to electricity, to transportation, to access to fuel and agricultural goods. The list goes on. It's almost infinite these days, and that's also very true for situations of armed conflict.

But the other sort of half of this equation is not only how societies are increasingly reliant on our digitalized environment, but that you do have the private sector, in many instances, the ones who design, engineer, manage, control, secure those digital services, that digital infrastructure. And at the same time, we're seeing new and emerging military capabilities where cyber tools, cyber capabilities, are able to access vulnerabilities in that digital infrastructure. And so that raises serious concerns when it comes to protections for civilians, civilian populations, civilian objects. And so you've got several relationships all getting tied up together here, right?

So you have the private sector, you have essential services and companies that rely on those tech companies to deliver their services. You have the people, civilians, who rely on them. You have militaries who might decide, and we can get into the legal ramifications of whether it's lawful or not, to target that digital infrastructure. But you also have militaries that are reliant on that digital infrastructure that technology companies are managing and designing and, in some cases, providing. So, not a super crisp or clear answer, but that probably reflects what at least the ICRC is seeing on the ground.

Eugenia Lostri: Wow. I thought we were going to be done with the podcast in five minutes. You had all the answers.

Jonathan Horowitz: I don't think so.

Eugenia Lostri: So, you know, anyone who follows these issues, and that's probably a lot of our audience, is probably as tired as I am of hearing that we all agree that international law applies in cyberspace, but we don't agree on how it applies in cyberspace. And one of the things that I appreciated about reading your writing about this is that there is no reinventing of the wheel here. This is, you know, let's look at IHL, let's look at how it applies.

So, what would be the default position about the role of tech companies during conflict?

Jonathan Horowitz: So, where you started the question is where ICRC starts, right? International humanitarian law regulates militaries and parties to armed conflict that use cyber tools and capabilities.

And when it comes to technology companies, I think there are a few different ways of looking at it, but one of them is when you have a technology company, you have its physical assets in the territory where conflict is raging. And in many cases, you may have its employees working and living in a territory where that conflict is raging. You really need to know, if you're a technology company, how international humanitarian law protects you, and depending on what type of activities you're involved in, if those protections for your assets and for your employees ever cease, right?

And so that's sort of the starting point for the ICRC on, as you mentioned, sort of, applying the rules of international humanitarian law to digital technology companies operating in situations of armed conflict without reinventing the wheel. Now, there are other issues about compliance and we should get into those but that's probably my first take at your question.

Eugenia Lostri: So what does it look like? You know, let's talk about the protections that are afforded to companies that are operating during conflict. What should they know?

Jonathan Horowitz: So there are a number of different rules in international humanitarian law, also known as the law of armed conflict or the law of war, that are particularly relevant, right?

And in the pieces that I've written, I really focused on what are called the rules relating to the conduct of hostilities. And even more specifically, I've looked at rules relating to attacks, right? What can be attacked? What may be attacked? What must never be attacked? Now, I do want to emphasize for the listeners that there are also different rules to consider, but my focus is on issues around conduct of hostilities and specifically around these questions of protections from attack or mitigating harms from attack.

So, to your question, you're really going to be thinking about, as I mentioned before, sort of, digital technology companies, their assets, their, sort of, in IHL terms, their objects, whether or not they qualify as civilian objects, or if under any circumstances, they may qualify as military objectives, and determining that is going to lead you down a path to assessing how the law protects them, or when that law may cease to protect them in exceptional circumstances.

So that's relating to, like, assets to objects, but you've also got this issue of employees, right? And so that raises the question of: are employees civilians, and protected as civilians, right? Because under international humanitarian law, civilians must never be the object of an attack.

And so there are tests that you have to do to determine whether or not company employees are civilians. In exceptional circumstances, maybe theoretically, they may fall into a category of members of state armed forces, which is a whole ‘nother discussion, but really you're looking at technology company employees as civilians and whether or not they would ever do what's called ‘directly participate in hostilities’.

Now, if that ever happens, and there's a test for that, of course, if that ever happens, then there are legal implications with regard to how IHL protects them, or not, for a temporary period of time when they're directly participating in hostilities.

Eugenia Lostri: So I really enjoyed reading one of your previous pieces where you go, you actually look into who would be a combatant and under what circumstances, and you go through all of the different stakeholders that are engaged during conflict.

So, applying the principle of distinction: what are the circumstances under which civilians working for a private company or as contractors could be seen as combatants? Walk us through some of these tests and how they could be applied.

Jonathan Horowitz: Right. So, one of the things to consider, and I think it is a little bit more than a technical legal distinction, but it, and it's really an important one, is that when a civilian directly participates in hostilities, they keep their civilian status. They don't become a combatant. But what they lose is their protected status as a civilian for such time as they directly participate in hostilities. So then the question is, how do you know if a civilian in this case, in this extraordinary case, right, if a tech company employee is directly participating in hostilities?

And there's a three-part test, which you can find, or the listeners can find in greater elaboration, in a guidance that the ICRC put out, that tries to unpack what it means to directly participate in hostilities and what this three-part test is comprised of, but ultimately it's the following.

So, first, the act in question, right, that the civilian is engaged in, has to reach a certain threshold of harm, right? And that threshold of harm has to be death, injury, or damage to civilians or civilian objects, or adversely affect a military operation or capacity. So that's your first criterion to take into account, the question of harm.

The second is whether or not the act in question, that the civilian is engaging in, has a direct causal relationship to that harm, right? And so there has to be that additional element to assess, to determine if a civilian is directly participating in hostilities. And then there's a third element, a third criterion that has to be assessed, which is whether or not the act is specifically designed to be to the benefit of one party to the armed conflict, and to be detrimental to the other party to the armed conflict.

If you want to go into greater detail I'd really commend everyone to look at the ICRC guidance. And we can unpack further what those different criteria are.

Eugenia Lostri: So you mentioned before the question of compliance, right, and how that can affect the protections. Talk us through that a little bit more: what are you concerned about, and how would you, you know, recommend that companies think about it?

Jonathan Horowitz: So, I think in terms of compliance, right, when I think of compliance in international humanitarian law, the responsibility primarily falls on the parties to the armed conflict, right? The belligerents, the warring parties.

And so, it is going to be up to them to ensure that any military operations, in this case, we're specifically talking about attacks, comply with the principle of distinction, right? That's the principle that you're prohibited from directing attacks against civilians and civilian objects, and you may direct attacks against military objectives including combatants.

And then you have the principle of precautions, which requires parties to armed conflict, when they engage in attacks, to take all feasible precautions to avoid or minimize civilian death, injury, and damage to civilian objects.

And then the third issue around compliance when it comes to international humanitarian law has to do with the principle of proportionality. And a lot of your listeners may be very familiar with the principle of proportionality, which ultimately says that an attack must not be expected to cause excessive civilian harm in relation to the anticipated military advantage, right?

So that's compliance when it comes to the parties to the armed conflict. There's also a lot of work and a lot of attention the ICRC has recently drawn to ensuring that civilians also comply with relevant rules of international humanitarian law. And I think two of my colleagues had the opportunity of appearing on the Lawfare Podcast several months ago, where they talked about a number of different rules that hackers need to follow, right? Because we know from the Nuremberg trials, for example, that civilians, if they engage in behavior that constitutes war crimes, they could be criminally liable.

And so that's also something that's really important, is not only for companies to know how they should be treated by the warring parties in terms of whether they are protected from being attacked, but also for the companies to know when, in those exceptional circumstances we talked about, they might be at risk of being attacked. But also you're going to want companies to ensure that their employees who are engaging in support operations for belligerents, that they also know the rules of international humanitarian law and they aren't running afoul of them.

Eugenia Lostri: You know, what happens when a party to a conflict, maybe has limited visibility into the integration of networks, right? Like, you started this conversation by talking about the ways in which everything has kind of become interconnected.

So if you cannot distinguish between, you know, the military use and the civilian use, right? Because it's not as, you know, like, it's hard. You can't just see it. And maybe you don't have the sophistication, maybe you don't have the skills to fully understand what you're seeing in the digital environment. What are some of the, you know, maybe particular precautions that can be taken to ensure that, you know, the cyber target, and I'm putting that in air quotes, is legitimate, or, you know, to prevent that type of mistake?

Jonathan Horowitz: Right. So, one of the first ones is that you can't deploy an indiscriminate weapon. And there's a specific definition for what constitutes an indiscriminate weapon under international humanitarian law, but that's sort of one of your starting points.

Another is that you do have an obligation to take all feasible precautions, and one of those includes verifying your targets to know that you're meeting the principle of distinction, that what you are directing an attack against is a military objective. Right, so, there's a lot of technical work that militaries, that cyber operators working for parties to an armed conflict, can engage in to do that.

Another thing that is often thought of is that you need to sort of map out the digital environment that you're on, or that you're wanting to disrupt and attack. And one of the things that's critically important about that is that it's often what gives operators a sense of what types of incidental effects their operation might have, right?

So understanding that if you go after a certain node, it may have an impact on another node, and civilians may rely on that node for whatever essential service it is that gets them, you know, from day to day in the hardships of living in armed conflict. So there's a lot of homework to do. There's a lot of visibility to try to achieve on the networks that militaries are operating on and trying to damage.

Eugenia Lostri: I'm gonna go back. There's a term that you've been using, attack, right? Whether it's an attack, whether they can be, you know, under attack. And, you know, as far as I understand it, there are very different interpretations of how we can define attack under IHL, and it's one of the areas where there might be some lively debate.

And as we continue to see different states put forth their interpretations of how international law and international humanitarian law apply in cyberspace, there are some interesting differences in how they see the term attack: what are, you know, the requirements, and what are the effects that we need to see? So, could you maybe explain a little bit what are some of these different interpretations and how they play a role in this discussion?

Jonathan Horowitz: So, the starting point for this, sort of, legal discussion, legal debate is that attack is a term of art under international humanitarian law and it attracts certain rules and principles of international humanitarian law, such as the principle of distinction that I mentioned, the principle of proportionality, the obligation to take all feasible precautions.

And so, a lot of what IHL affords to civilians and civilian objects turns on whether or not a cyber operation constitutes an attack. Now, if a cyber operation doesn't constitute an attack, there are really important rules that nonetheless apply in terms of either prohibiting it or restricting it in order to continue to protect civilians.

But the debate is essentially this: there seems to be common agreement among states that an attack under international humanitarian law is a military operation that may be expected to cause death, injury, or damage, right?

But when you start to unpack those different terms, the issue arises as to what type of damage, right? And so, in kinetic warfare that we're so unfortunately familiar with, the type of damage we are most familiar with is physical damage. In the cyber context, you can have a lot of disruption that occurs without physical damage, but you can still damage networks. You can still take down financial institutions, universities, and things of that sort, that civilian populations are highly reliant on and need in their day-to-day lives.

And so this debate exists. Sort of, is a cyber operation an attack only when it causes physical damage, or also when it causes non-kinetic damage, just for example, loss of functionality? And the ICRC takes the position that a cyber operation may constitute an attack when there's loss of functionality, and there are states that agree. There are a number of states that say no, it is only physical damage that's required for something to be an attack. And then there are a number of states that simply have remained silent on this issue.

Eugenia Lostri: So I think this ties pretty nicely with my next question, which is about how do we define data, right? Because data is one of the things that could be damaged. So this is also another lively debate about what protections IHL actually gives to data. So same as before, give us a sense of what is the debate here and why it's relevant to the discussion.

Jonathan Horowitz: So it's a debate that runs along similar fault lines, right? There are shades, but ultimately there are two different interpretations. So, the question of what is data, right? So you have social security data, you have financial data. You have other types of data that are used, for example, solely by the civilian population, and therefore a question arises: how does IHL protect civilian data?

And one answer would be to say, oh, civilian data is the same as or analogous to civilian objects. And therefore, the principle of distinction, which would mean you cannot attack civilian data, would be a rule of international humanitarian law. You would also have to take feasible precautions to minimize any harm against damage to civilian data, and the principle of proportionality would also apply.

But on the other side of the aisle in this debate are those that say, no, data is not an object. It is not something physical. It is not something tangible. And therefore, those rules that I just mentioned, of international humanitarian law, would not protect data in that way.

Now, there are a number of, sort of, exceptions to this. So, for example, international humanitarian law requires that parties to an armed conflict respect and protect the medical mission. And so you don't need to worry about whether data is an object or not. The fact that it's part of the medical mission means that it must be respected and protected.

But for the data that falls outside of those specific protections, it is a live issue and is an important one. And the example that the ICRC often gives is if you have a bunch of, you know, paper bank records that belong to civilians and civilian customers and have no military connection, for example. All those papers, all those physical records, would be civilian objects. So if you scan them, they become digitalized. And, you know, why would it be that international humanitarian law all of a sudden no longer provides them with the same protection?

Eugenia Lostri: So you've been painting quite a picture of the different tests, the different principles, norms, rules, debates that people should be paying attention to in order to better understand their role and what they could be facing if they find themselves providing services during conflict.

But I want to, and you've been giving a sense of it, but I want to make it pretty clear what the stakes are, right? Like, what can happen if you're not paying attention to these questions, right? What is, and not to be, you know, fear mongering or to say like, oh, this is the worst case, but just an actual case of, you know, this is what could happen. This is what you could be risking if you're not paying attention to it now.

Jonathan Horowitz: So I think there are two scenarios that are worth thinking about, which technology companies, if, like you said, they aren't paying attention to it, or if it's not on their radar, may inadvertently walk into.

One is, you know, many digital technology companies are operating in peacetime context, providing services to civilians, to governments, and wars may erupt. Armed conflicts may begin. And you might start to see militaries rely on those goods and services which companies had not thought about being used in situations of armed conflict, but to really no fault of their own, an armed conflict has emerged and there they are providing goods and services to parties to an armed conflict in ways that, for example, might start to raise questions about whether or not their infrastructure could ever qualify as a military objective.

So that's one scenario, and I guess by implication, we're talking about not only cyber operations being directed at technology companies for the support that they provide to belligerents, but also kinetic operations, kinetic attacks, physical violence that could be directed at the infrastructure that allows that company to provide services to militaries, for example, to carry out military operations. So that's one scenario.

The other scenario, and you end up at the same finishing point, is where a company proactively decides to offer its support to a belligerent, but the same end result is there, right? Does the company know what the risks and consequences are when it gets involved?

And for the ICRC, the additional consideration is not only the risk that befalls the company's assets, but also because everything is, in many cases, interconnected, you do have those essential services that also might be affected, which means large swaths of the civilian population living through armed conflict might also be affected.

Eugenia Lostri: So of course, we've been hinting at this, but the time to discuss these questions is hardly once you find yourself actively involved in conflict, right? And the intertwining of digital environments that, you know, can make a lot of sense during peacetime suddenly introduces all these concerns that companies might not have been thinking about. So what are your recommendations, you know, recommended measures that companies should be taking now, ahead of time?

Jonathan Horowitz: So there are a number of different recommendations the ICRC is starting to talk to different companies about, as we start to understand how these partnerships are unfolding, or how this interconnectedness could potentially impact or is impacting civilians.

So the first is the basics, right? It's for technology companies, in-house, to understand what international humanitarian law is and to understand what the risks are to their physical assets, their non-physical assets, and their employees. So once you figure out what some of the rules of international humanitarian law are, you're also going to want to figure out, for example, all the protections that a company receives, but also what types of behavior it's engaging in where those protections might be lost.

And I think here it's really important for companies to engage in dialogue with the governments who they're providing support to. Because it is not clear to me that companies necessarily have full transparency over whether their goods and services are being used for military support in situations of armed conflict or not, right? So that's a really important conversation for them to have.

Once they've done those assessments, then there's a really important question of how to mitigate any risks that they've identified. Those could be policy mitigation measures. Those could be technical measures. The ICRC has talked about possibilities around segmenting infrastructure, where feasible, for companies to think about that as a possibility.

And then I think, also, it behooves companies to ensure that they inform their employees, for example, of what types of risks they might be exposed to, if those risks can't be mitigated. And again, where feasible, being able to inform their consumers what types of risks might affect civilian populations when a company decides to provide support to a belligerent that's a party to an armed conflict.

Eugenia Lostri: Focusing a little bit more on this communication between tech companies and the governments, you know, and in your paper, you discuss what you call clear communication channels that allow parties to understand each other's actions, right? And I'm curious about how you would see this being implemented.

Is this, you know, a bilateral conversation between one company and, I don't know, someone in the government, or is this something more akin to the JCDC? You know, just wondering, because we've seen that some information-sharing schemes that happen during peacetime can face challenges, right, that hinder how effective they can actually be. So I'm trying to think, you know, if you have a sense of how we could prevent something similar happening or being replicated in an armed conflict setting.

Jonathan Horowitz: You know, I think it's really going to be very context-specific and I don't know if the ICRC is the right institution to sort of direct exactly what the venues are for having those types of conversations.

I would certainly assume that they would be bilateral, that they would be in a way reflective of whatever the contractual arrangements are between the companies and the governments. But in terms of the venue and the form and things like that, I think that's really up to the companies and the governments to decide on what's most effective, what's most efficient, what's most useful.

Eugenia Lostri: Regardless, I'm still gonna continue on this. No, but I'm curious, you know, how are you thinking about the line between, you know, these contractual agreements or public-private partnerships during conflict, and having a company operating under the direction and control of a government, right? If you're establishing these channels of communication where you're sharing all of this, you know, could there be legal implications to, you know, kind of, being in the middle of those two situations?

Jonathan Horowitz: So that's an extremely important question. And there are all sorts of legal frameworks and rules, at the domestic level and at the international level, that start to attract questions of attribution, of when the acts of third parties can trigger the responsibility of a state. And I don't want to avoid them because I think they're important ones.

And maybe that's the main point for me to say, is that if you're dealing with a situation where you have a belligerent having certain levels of direction and control over the activities of a third party, you certainly need to have legal discussions about whether or not those activities are attributable back to the state, and all sorts of other important questions that need to be asked.

Because those are going to attract both legal liability issues and potentially a whole host of foreign relations and policy issues when you're talking about, in particular, third parties operating over international borders into a territory where an armed conflict is taking place.

Eugenia Lostri: So, unfortunately, there are several ongoing armed conflicts happening right now, and I'm wondering if you're seeing these questions being addressed in practice, if there are interesting positions being developed that you had not considered before, you know, just as a matter of fact?

Jonathan Horowitz: What the ICRC has been observing is, I would say at first, let's say an uneven awareness of international humanitarian law, you know, going back, you know, two or three years ago. Just this not necessarily being on the radar of technology companies that were thinking about or are thinking about or did decide to provide support to, you know, parties to this conflict or that conflict.

And so the first hurdle was just making it known that there's a body of law that might influence their behavior and might affect how they operate. And so what I've been seeing and what my colleagues have been seeing is an increased awareness and willingness to engage on what the various risks are when a company decides to provide support to a party to an armed conflict, and how IHL engages with those particular risks, whether it has to do with their assets or whether it has to do with their employees.

In terms of, you know, policy shifts or mitigation measures, I think those are conversations that continue to develop, unfold, mature, and they're ones that the ICRC continues to be very eager to engage in. Both because we have a strong devotion to the application of international humanitarian law. But also because we have, I would say, you know, unique insights into how these risks can trickle down, if not, you know, immediately or directly, indirectly onto civilian populations who are relying on the digital infrastructure that's sort of been the topic of this conversation today.

Eugenia Lostri: Hoping not to be redundant, but you know, so what comes next, right? Like, what are the next steps in order to either raise awareness at the private sector level or at the government level, and then to drive the policy changes that you would need to ensure these protections?

Jonathan Horowitz: So I would in a way retreat to what the ICRC regularly does on these issues, right?

And so, you know, we'll have bilateral conversations with companies that we think are interested in these issues, with governments that we think are interested in these issues. What we have found is, of course, the digital technology, you know, private sector is not a monolith, and different industries rely on different technologies, which might result in very different legal questions that arise.

And so the importance of having scenario-based discussions whether it has to do with cloud computing services or whether it has to do with satellites and other space assets, all of these different things I think are up for grabs for analyzing, because all of those things civilians are highly dependent on. We're seeing militaries being highly dependent on them. We're seeing military tools and capabilities, weapons, being able to find vulnerabilities in them.

And so, there is this sort of, circular, sort of, iterative process of understanding these different systems, what their vulnerabilities are, and how international humanitarian law either prohibits or regulates attacks or other types of disruptions against them.

Eugenia Lostri: Now, Jonathan, before we wrap up, I want to make sure, you know, if there are any final thoughts that you want to leave us with, or if there's anything that we didn't get to talk about that you wish we could have talked about.

Jonathan Horowitz: I just thank you for the opportunity to have this conversation, to unpack some of these issues, and to give some time to the ICRC's perspectives on an issue that we don't see slowing down. As we become an increasingly digitalized society, as militaries continue to develop their capabilities, and as public-private partnerships seem to expand and deepen, I think it's an issue that will stay on our radar and hopefully will be on the radar of both digital technology companies and governments who find this relevant.

Eugenia Lostri: Thank you so much.

Jonathan Horowitz: Thank you.

Eugenia Lostri: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Jay Venables of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Eugenia Lostri is a Senior Editor at Lawfare. Prior to joining Lawfare, she was an Associate Fellow at the Center for Strategic and International Studies (CSIS). She also worked for the Argentinian Secretariat for Strategic Affairs, and the City of Buenos Aires’ Undersecretary for International and Institutional Relations. She holds a law degree from the Universidad Católica Argentina, and an LLM in International Law from The Fletcher School of Law and Diplomacy.
Jonathan Horowitz is Deputy Head of the Legal Department to the ICRC’s Delegation for the United States and Canada, based in Washington, DC. He focuses on legal issues relating to urban warfare, partnered military operations, and new and emerging technologies in armed conflict.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.