About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019) and AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021).

Friday, May 22, 2009

Killer robots in war: What do you think?

This thought experiment is not as far-fetched as it may seem at first glance. Many experts believe that we will be able, not too many decades down the track, to build a device with the capacities that I'll be describing. My Generation Y philosophy/international studies students may still be young enough to be involved in real-world decisions when this sort of technology is available. Even I may still be alive, to vote on it, if it's an election issue in 30 or 40 years' time. Though it may be at an early stage, the necessary research is going on, even now, in such places as the US military's Defense Advanced Research Projects Agency (DARPA).

Imagine that the T-1001 is a robotic fighting device with no need for a human being in the decision-making loop. Note that it does not possess any sentience, in the sense of consciousness or ability to suffer. It cannot reflect on or change the fundamental values that have been programmed into it. However, it is programmed to make independent decisions in the field; in that sense, it can operate autonomously, though it would not qualify as an entity with full moral autonomy in a sense that Kant, for example, might recognise. It has some limited ability to learn from experience and upgrade its programming.

The T-1001 is programmed to act within the traditional jus in bello rules of just war theory and/or similar rules existing in international law and military procedures manuals. Those rules include discriminating between combatants and non-combatants. I.e., civilians, and other non-combatants such as prisoners of war, have an immunity from attack; however, there is some allowance for "collateral damage", relying on modern versions of the (admittedly dubious) doctrine of double effect. The T-1001 is not equipped with weapons that are considered evil in themselves (because they are indiscriminate or cruel). Its programming requires it to avoid all harms that are disproportionate to the reasonably expected military gains.

To accomplish all this, the T-1001's designers have given it sophisticated pattern-recognition software and an expert program that makes decisions about whether or not to attack. It can distinguish effectively between combatants and non-combatants in an extensive range of seemingly ambiguous situations. It can weigh up military objectives against probable consequences, and is programmed to make decisions within the constraints of jus in bello (or similar requirements). As mentioned above, it does not use weapons that are evil in themselves, and does not attack non-combatants except strictly in accordance with an elaborate version of the doctrine of double effect that is meant to take account of concerns about collateral loss of life. It also uses a general rule of proportionality. Indeed, the T-1001's calculations, when it judges proportionality issues, consistently lead to more conservative decisions than those made by soldiers. I.e., its decisions are more conservative in the sense that it kills fewer civilians, causes less overall death and destruction, and produces less suffering than would be caused by human soldiers making the same sorts of decisions in comparable circumstances.
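
To make the structure of this a little more concrete, here is a toy sketch (in Python) of the kind of rule the scenario imagines. Every name, figure and threshold below is invented purely for illustration; nothing here is a claim about how such a system would actually be engineered.

    # Toy sketch only: all names, numbers and thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class TargetAssessment:
        classified_as_combatant: bool    # output of the (hypothetical) pattern recogniser
        expected_military_gain: float    # estimated value of attacking the target
        expected_collateral_harm: float  # estimated harm to non-combatants

    # Attack only if the foreseeable collateral harm is comfortably below the
    # expected military gain - a margin stricter than the proportionality
    # judgements we would expect from a human soldier.
    CONSERVATIVE_MARGIN = 0.5

    def may_engage(a: TargetAssessment) -> bool:
        if not a.classified_as_combatant:        # discrimination rule
            return False
        return a.expected_collateral_harm <= CONSERVATIVE_MARGIN * a.expected_military_gain

    print(may_engage(TargetAssessment(True, 10.0, 2.0)))   # True: clear combatant, modest risk
    print(may_engage(TargetAssessment(False, 10.0, 0.0)))  # False: non-combatants are never targeted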

At the same time, it is an extremely effective combatant - faster, more accurate, and more robust than any human being. With declining birthrates in Western countries, and a shortage of young enlistees, it is a very welcome addition to the military capacity of countries such as the US, UK, Australia, etc.

In short, the T-1001 is more effective than human soldiers when it comes to traditional combat responsibilities. It does more damage to legitimate military targets, but causes less innocent suffering/loss of life. Because of its superior pattern-recognition abilities, its immunity to psychological stress, and its perfect "understanding" of the terms of engagement required of it, the T-1001 is better than human in its conformity to the rules of war.

One day, however, despite all the precautions I've described, something goes wrong and a T-1001 massacres 100 innocent civilians in an isolated village within a Middle Eastern war zone. Who (or what) is responsible for the deaths? Do you need more information to decide?

Given the circumstances, was it morally acceptable to deploy the T-1001? Is it acceptable for organisations such as DARPA to develop such a device?

I discussed a version of this scenario with my students this week. It seemed that, with some misgivings, the majority favoured deploying the T-1001, but perhaps only if a human was in the decision-making loop at least for the purpose of shutting it down if, despite all, it started to make an obvious error that would be a war crime if done by a human soldier. Presumably this would mean an automatic shut-down if it lost contact with the human being in the loop, such as by destruction of its cameras or its signal to base.
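
To make that fail-safe concrete (again, only a toy sketch with invented names and numbers, not a proposal about real hardware), the shut-down could work like a dead-man switch: the T-1001 disables its weapons whenever the supervisor's link goes quiet.

    import time

    HEARTBEAT_TIMEOUT = 5.0  # seconds of silence from the human supervisor before standing down

    class SupervisedUnit:
        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.weapons_enabled = True

        def receive_heartbeat(self):
            # Called whenever a signal arrives from the human in the loop.
            self.last_heartbeat = time.monotonic()

        def check_link(self):
            # Fail safe: losing contact with the human in the loop disables engagement.
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
                self.weapons_enabled = False

    unit = SupervisedUnit()
    unit.last_heartbeat -= 10     # simulate ten seconds of silence (e.g. cameras destroyed)
    unit.check_link()
    print(unit.weapons_enabled)   # False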

Although the scenario postulates that the T-1001 is actually less likely to commit war crimes (or the equivalent) than a human soldier, we can't guarantee that it will never confront a situation that confuses it. It wouldn't be nice if something like this started to run amok. Still, soldiers or groups of soldiers can also go crazy; in fact they are more likely to. Remember My Lai. But does a moral problem arise over the fact that, unless there's somehow a human being in the loop, any unjustified civilian deaths that it causes are unlike other deaths in war? It seems hard to call them "accidental", but nor can they easily be sheeted home as any individual's responsibility. Is it implicit in our whole concept of jus in bello that that kind of situation must not be allowed to eventuate?

What would you do if offered the chance to deploy this military gadget on the battlefield? Assume that you are fighting a just war.

20 comments:

The Jules said...

I don't think authorities using such a robot and having this situation occur would be too nonplussed.

To them, it wouldn't be much different from accidental deaths due to friendly fire, or a missile going off target and hitting a wedding party.

RichardW said...

I don't see that the moral issue is essentially different from other military situations, with the T-1001 being another kind of weapon which could malfunction and cause unforeseen "collateral damage", not so different from a missile.

The moral responsibility lies with those who deploy the weapon. They need to justify the risk of collateral damage in proportion to the expected gains.

There is a problem, however, that as weapon systems become more complex, it becomes increasingly difficult to judge the risks. The T-1001 would be an extremely complex system, and I have my doubts about whether it could be made any more reliable than a human being.

RichardW said...

Incidentally, I feel the scenario as described takes a rather simplistic view of what would be involved in programming the T-1001. The description gives the impression that it's just a matter of supplying hard-coded criteria for decision-making, with only a "limited ability to learn from experience and upgrade its programming". But the sort of judgements the T-1001 would have to make are no less sophisticated than those that a human soldier would have to make. Such judgements would require a highly sophisticated AI, perhaps based on an artificial neural network, which would probably have to be taught and not just programmed. And once you give the T-1001 such a sophisticated AI, it's no longer clear that we should accept the claim that it possesses no sentience.

MJ said...

"Assume that you are fighting a just war. "

Ha. "A just war". Hahahahahaha. Sorry, I know that's not constructive, but good luck finding any agreement on that statement. As a thought experiment, why not? We already have planes that do most of what you describe, they just have more human interface.

Not to be too cynical, but the amount of collateral damage sanctioned by the powers that be here in the US far exceeds the death of 100 civilians in some little village in a war zone. White phosphorus in the city of Fallujah comes to mind...

Steve Zara said...

I would not use it, because...

"Note that it does not possess any sentience, in the sense of consciousness or ability to suffer. It cannot reflect on or change the fundamental values that have been programmed into it."

Jambe said...

I'd only use it if there were an override mechanism and a human constantly monitoring it (like that of aerial drones currently in service).

Also, I daresay more people would sign up for military service if said service entailed sitting in front of a monitor with some controls as opposed to in a foxhole with a rifle. Assuming the machine you're talking about requires a human in the loop at all times, I could see more people signing up for "military" service.

This is tangential: if the machine gained sentience it'd be intrinsically deserving of more consideration than a tank, tiller or tractor, right? It'd be akin to a horse or any other animal used in war. That'd introduce some interesting complexities to the mix!

mace said...

I'd deploy the T-1001 weapon; the real test would be the probability of it malfunctioning and committing a "war crime". If this probability were significantly less than for a human in the same circumstances, we would be a little closer to satisfying jus in bello criteria. Imagine a war of "our" T-1001 thingies fighting "their" T-1001s, without human combatants, or victims. Assuming the T-1001's operational parameters are as described, whether or not there's a human overseer "in the loop" is irrelevant ethically.

mace said...
This comment has been removed by the author.
Russell Blackford said...

Thanks for the comments, folks. I don't have a terribly strong opinion about this, but I'm inclined to deploy a version of the T-1001 with a human being in the loop (possibly monitoring more than one of the things) with a capacity to shut it down in an emergency. But I'm very open to argument, and I also take the point that it may turn out impossible to have a machine with such effective software without also having a machine that is actually "awake" in some sense. At the moment, I'm inclined to think that it is possible; that even machines capable of very advanced, superior-to-human pattern recognition, etc., are unlikely to be conscious; and that building a machine that's actually conscious would be very difficult and may defeat us indefinitely (I'm not saying it's impossible, though).

I'd welcome more thoughts in response to the scenario or the above observations.

Meanwhile, I'm intrigued by Steve's reasoning. I would have thought there were moral advantages to the fact that the T-1001 is not sentient, etc. When we put it at risk, we're not risking that it will suffer, or anything like that, so the moral considerations are mainly to do with the suffering and destruction that it inflicts, which we can try to control.

Its very limited capacities to upgrade its programming also have advantages. Since it can't change its underlying values but only refine its recognition of patterns, let's say, we're not going to get a bunch of them developing goals of their own and turning into Skynet on us.

Care to explain your reasoning further, Steve? As I say, I'm intrigued.

Steve Zara said...

We expect that the way we deal with each other is as humans, even during wartime.

Even when we have machine proxies that are supposed to protect lives, we involve human oversight. An example is flight. We don't have planes which operate only on autopilot, although the evidence is that this would be safer. We expect to have people responsible for what happens to us, even in peacetime.

War is a situation at the extremes of ethics.

There are enough ethical problems even now with those who press the buttons to launch "smart bombs" that so often seem not that smart.

I would say that the more autonomous the device you use in war, the more ethical problems you have. In a battlefield situation, who can judge what is a hideout for terrorists, or a shelter for those who are innocents? That is hard enough for people. At least people take responsibility for their decisions and mistakes.

War isn't a form of calculus. Even at its worst, it is an ongoing negotiation between people: as at some point, it has to end. In the aftermath, the less you have treated people as individuals in the conflict, the more guilt you have. How should we treat those who have coded "var p = people_dead()" into the software of a killing machine after a war in which millions have died?

If we ever send "AI" machines into the battlefield, I want them with a full range of emotions. I want them terrified, and able to feel the pain of those they attack.

Russell- I would be ethically happier if they were Skynet!

Russell Blackford said...

Thanks, Steve - I understand your view and have some sympathy for it. You've put the case against the T-1001 very nicely. I'll look forward to responses to it. We could get a good discussion going about this.

Anonymous said...

I really can't imagine these machines not being sentient, as you postulated, while still being able to make good decisions on the battlefield. (Even humans have trouble distinguishing between combatants and noncombatants.) Therefore, if the T-1001 were to commit war crimes, it should be judged and punished, e.g. by permanent shutdown.

The alternative would be to stack up the T-1001s against similar machines from the enemy side, position them in some deserted area, and let them fight it out. The humans could then come back later to clear out the scrapheap.

BTW, note that the restrictions on "fair combat" that are built into the T-1001 could also be its Achilles heel. The enemy could deploy similar machines, but with fewer scruples. It seems likely that the Ministry of Defence will also order a "dirty" version of the software, to be loaded into the T-1001 when deemed necessary.

RichardW said...

Very interesting discussion. But I think there's some confusion (at least on my part) about a major premise of Russell's scenario, and perhaps it would be helpful if he clarified.

For example, Steve wrote: "In a battlefield situation, who can judge what is a hideout for terrorists, or a shelter for those who are innocents? That is hard enough for people."

But, as I understood it, the premise is that the T-1001 can make such judgements better than a human. Now, I find that an implausible premise given the sort of time scale that Russell mentioned. But it's interesting to speculate on what the ethical situation would be if such an AI were possible some day.

On the other hand, Russell keeps referring to "pattern recognition", in a way that suggests a much simpler sort of decision-making capability, which could not make effective judgements of this sort. So maybe I misunderstood the premise.

Incidentally, Steve also wrote: "We don't have planes which operate only on autopilot, although the evidence is that this would be safer."

I don't believe that such a plane would be safer with today's auto-pilots or in the near future. If I was convinced it was safer, I would prefer to fly in such a plane. To be safer, an auto-pilot would need to be able to respond to all the complex and unforeseen events that a pilot can, and that would require a very sophisticated AI of a sort that is nowhere near development.

Thomas Hendrey said...

IF I am fighting a just war, and IF employing the machine was going to fulfil my military aims, and IF we can expect it to fulfil those aims with less 'collateral damage' than alternative methods (say soldiers), then I would deploy the T-1001 and sleep like a baby.

To be honest I found this subject a bit bamboozling and a bit frustrating. The way I see it, war is hell - it involves people being killed, maimed, raped, tortured, disabled, orphaned, and on and on. Fighting a just war means you can look forward to something much worse if you lose. When the stakes are this high, I am just astounded that people in good conscience can be concerned with the sort of subtleties with which this course is largely concerned.

If I can fulfil just war aims with as little of the death and destruction I referred to as possible, then I would do it. Compared to that, who cares who will be responsible for a malfunction? Who cares what intentions which people have and who deserves what? And who cares whether we violate the expectation that we treat each other as humans? Or the guilt we might feel for not treating people as individuals?

I'm sorry, but if any of the concerns in the last paragraph suggest acting in ways that would be expected to increase the death and destruction, or increase the likelihood of whatever terrible circumstance we are justly causing death and destruction to avoid, I don't see how we could possibly accept them. The stakes are simply too high for such things to matter. I mean, say we decide not to deploy the T-1001 because war needs to be an ongoing negotiation between people. Suppose further we deploy regular soldiers, and a couple of them go crazy and rape and kill a couple of women in a village somewhere. How can we justify our decision to deploy soldiers to the people of the village? Say they come to us and say that it would never have happened if we had employed the T-1001. Can we say "You are right that this probably wouldn't have happened (though we can never be sure) and that there would not likely have been any similar tragedy. But decision-making in war is not some sort of calculus, and the T-1001 would not be able to feel the experience of being in war, and so to use it would go against our view of war as an ongoing negotiation between people."? Can we ever justify a decision that leads to such catastrophe without pointing to some other catastrophe of equal or greater significance that was thereby avoided?

I am ranting and I don't like to do it. I apologise particularly to Steve, some of whose wording obviously inspired the way of putting the sort of position that is my target. I may well not have understood what you were saying, and my use of your words may have been unfair. I guess I'm just tired of people saying that those who advocate consequentialist-type prescriptions don't appreciate some deep aspects of their subject matter (not that I'm suggesting that this was being done here, except perhaps with 'war isn't a form of calculus'). I'm frustrated because this is used to justify the sort of methodology that seems to end up considering ethical issues in war in ways that seem to me simply not to appreciate that people are going to get killed, maimed, tortured, raped, etc. Given that I don't think there is much interesting about war that should be said about war from the sort of "armchair" position often occupied by philosophers (certainly I don't think there's very much interesting I can say on it, but I probably have less experience and empirical knowledge than most).

Basically this is a rant coming from emotions that were not caused by comments here. I sincerely hope that it does not harm the reflective attitude of your blog Russell.

Thomas Hendrey said...

"Given that I don't think there is much interesting about war that should be said about war from the sort of "armchair" position often occupied by philosophers (certainly I don't think there's very much interesting I can say on it, but I probably have less experience and empirical knowledge than most)."

Comma would have been useful after 'Given that' in this sentence for clarity.

Russell Blackford said...

RichardW, when I say pattern-recognition software I suppose I'm handwaving a bit, and trying to pretend I know what I'm talking about. :)

All I have in mind, I think, is that when we assess someone as being a combatant or not that assessment is theory-laden - we have a whole lot of factors that we plug in. We have to recognise a certain kind of object as a rifle, not a stick, factor in the apparent age of the person (which involves a whole lot of cues), demeanour (which involves more cues), and so on. But I don't see why, in principle, we couldn't make the theory explicit: reverse-how human beings make these judgments, complex as they'll be, and design software that will be as reliable or more so at recognising various categories of situations. And the AI running this software won't be distracted by such things as fear and anxiety, which arguably detract from human performance.
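
To make that a bit more concrete (and this is only a toy illustration, with invented cues, weights and threshold, not a claim about how the real software would work), I mean something like combining explicit cues into a single judgement:

    # Toy illustration: cues, weights and threshold are all made up.
    CUE_WEIGHTS = {
        "carrying_rifle_like_object": 0.6,
        "apparent_adult": 0.2,
        "hostile_demeanour": 0.3,
        "wearing_uniform": 0.4,
    }
    THRESHOLD = 0.8

    def combatant_score(cues: dict) -> float:
        return sum(w for name, w in CUE_WEIGHTS.items() if cues.get(name))

    def classify_as_combatant(cues: dict) -> bool:
        return combatant_score(cues) >= THRESHOLD

    # A rifle-like object plus a uniform clears the threshold; a lone adult
    # with nothing else suspicious does not.
    print(classify_as_combatant({"carrying_rifle_like_object": True, "wearing_uniform": True}))  # True
    print(classify_as_combatant({"apparent_adult": True}))                                       # False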

It needn't (I think) involve the artifact employing the software being conscious - Chinese Room type arguments to one side (I don't really buy such arguments), all this is impressive, but still the equivalent of only one small part of what human beings' brains do that makes us conscious.

Based on Moore's law, I don't see any hardware limitations on having a sufficiently sophisticated but non-conscious AI to control the T-1001. The interesting questions, I suppose, are software problems. I have no expertise in whether what I describe can be done, but I don't think we would have too much difficulty finding people who claim to be experts who think it could be.

So my question is: assuming that all this is right, what do we then do? I'll now point everyone to an article by my colleague, Rob Sparrow, who imagines a similar scenario and thinks we should not deploy such robots. He'd like Steve's answer.

If you have online access, and time, have a look at his article in Journal of Applied Philosophy 24 (2007), 1: 62-67, for a much more formal discussion, and a much more sophisticated understanding of the current state of military thinking about all this. Note, though, that I don't necessarily agree with Rob. I'm much more inclined to develop and deploy such devices, but perhaps with additional safeguards of some kind. I'm still thinking about it.

Russell Blackford said...

Aaarrghh, I should preview my comments (as I meant to do this time). Probably lots of glitches in the above, but one is "reverse-how". I meant to write "reverse-engineer how".

Richard Wein said...

Thanks for that, Russell. The question is whether your algorithmic approach could produce a robot that was as good as a human at making the relevant decisions. It certainly wouldn't be as good all the time. Unforeseen circumstances are bound to arise occasionally which the algorithms were not designed to handle. We're not just talking about distinguishing between civilians and combatants, though that could be difficult enough. There are many other decisions that a soldier needs to make. You might argue that unforeseen circumstances will occur rarely enough that the robot will still do better on average, despite making worse decisions on some occasions. But I remain very sceptical.

You wrote: "The interesting questions, I suppose, are software problems. I have no expertise in whether what I describe can be done, but I don't think we would have too much difficulty finding people who claim to be experts who think it could be." Yes, I'm sure we would have no difficulty finding such people, especially given the lucrative research grants available from the DOD! I have no expertise in the field either (except that I used to work in software development), but perhaps I'm more aware than you of the overoptimism for which the field of AI has been notorious.

I note your preference for having a human supervisor involved. I wonder whether that's because of some particular moral point (such as the suggestion I think someone made that there must always be a human on hand to blame if something goes wrong) or simply because you don't trust the robot to do the right thing. If the robot only performs more conservatively on average, it might make sense to employ a human supervisor to stop the robot if he disagrees with its decision. That would give you the best of both worlds, i.e. the more conservative of the two decisions. But would you refuse to employ the robot on a mission where human supervision was impossible, even though you know that the robot will on average be more conservative than a human soldier? Logic seems to suggest that, if the robot really performs as claimed, it shouldn't need a supervisor. I admit to feeling queasy about the idea of employing an unsupervised robot, but it seems irrational not to do so under these circumstances.

Perhaps a slippery slope argument is appropriate: once we start to employ robots unsupervised (or at all) their use is liable to spread to more problematic circumstances. Someone already made the good point that, if the conservative nature of their algorithms makes them less militarily effective than they could be, there may be pressure from the military to make them less conservative.

MosesZD said...

Just war? How is that a rational concept for which to excuse wanton bloodshed?

In my book there is rational self-defense from an aggressor. At least in the individual reference.

But states going to war? Even being on the "defensive side" of war hardly makes it "just."

anik said...

Well, we have human pilots in aircraft just because we don't trust automated systems not to fail when we most need them (i.e. at landing and takeoff). If we can't trust an autopilot, can we really trust a T-1001?

The point that I find intriguing here is the concept of this machine having sentience. Please, folks, it is just a machine. It may look as if it is responding in a sentient manner to its environment, but it is just a machine. Don't worry about its feelings; it doesn't have any. It doesn't have any pity either!! That is the reason I wouldn't use it - unless of course I really needed to! All of this supposes that the programming will work correctly, and I wouldn't expect that to be the case. What are the ethical questions around the deployment of known buggy T-1001s?