Moral questions – and answers – around helping Ukraine, nuclear deterrents and self-defence


Thursday 3rd March 2022, 12.30pm

Presenter Michael Buerk and panellists Melanie Phillips and Matthew Taylor asked the questions:

Q. What principle helps us to draw the line on what help we give the Ukrainians?

Professor Dill: Any assistance that we, or anyone else, renders the Ukrainians right now, or in the future, must meet a fairly straightforward moral test…does it assist the Ukrainians in pursuing their just cause of self-defence? And, in doing so, does it cause more harm than good?


Adding the facts to this moral judgment is extraordinarily difficult – and these facts will change as the situation evolves – and…[providing] assistance could cause more harm than good, and may well change as the facts on the ground evolve.

Q. Do you think that we should help them for their own sake, or our sake?…should uppermost in our minds be that we help on humanitarian grounds, on the grounds that we wish them to regain their freedom, because we don’t like people being invaded by tyrants who extinguish freedom? Or should we only help them if our own interests are threatened? In other words, if we conclude that the invasion has collapsed the global rules of order, which ultimately threatens all of us?

Professor Dill: We should help them on the grounds of an objective test of moral permissibility…We have duties of assistance towards each other, and when a collective agent is fighting a war of self-defence with a just cause and they are asking for this assistance, giving consent, we can assist them in a way that doesn’t cause more harm than good. It doesn’t produce unreasonable moral burdens on us…we have an objective moral duty to assist them.

Bringing nuclear war on us would be an unreasonable moral cost for us to bear. Redirecting some of our defence spending or increasing our foreign aid budget and carrying some costs in rising energy prices, these are not unreasonable costs, given how high the stakes are for the Ukrainians in defending themselves.


Q. Do you think assassination is ever justified to stop the killing of the innocent? For example, historically, would it have been right, in your view, for Hitler to have been assassinated at an early stage?

Professor Dill: People can forfeit their right to life, if they pose grave unjustified threats to others. So, it can be morally justified defensively to kill somebody who poses a grave, unjustified threat to the lives or rights to lives of others…So, Hitler, like Putin, they’re not innocent human beings.  If we can foresee the way in which they are going to present imminent threats to the lives of others, then killing them isn’t assassination of an innocent person. It is a defensive killing of someone who is liable to…moral harm.


Q. But, nevertheless, as I understand it, you’re talking about killing somebody if they pose a terrible threat to the lives of the innocent. But there are many people in the world…for example, President Assad of Syria has…not just posed a terrible threat but has actually caused the killing of hundreds of thousands of people. Should the West have killed him? In other words, should you kill a tyrant because of the benefit, the potential benefit to his victims, regardless…of whether you, as a country, stand to benefit or be harmed by him?

Professor Dill: I think what you’re asking is: how do we weigh two different moral considerations? …what is the liability of that individual we are contemplating [killing] and how much harm can we prevent by killing them, before they actually realise the unjustified threat that they are posing.

…someone who is liable to harm, like Putin or Assad, you’re not wrong…by harming them. That doesn’t mean that you cannot cause moral harms in the process… You have a very high moral obligation to calculate very carefully in advance whether it is worth the cost….

We have rules in place that protect the immunity of Heads of State and, by and large, we think that obeying these rules serves international stability. And it helps us not violate the rights of innocents everywhere else, because international instability usually has these knock-on effects.

Q. Do you think that’s a sound assumption, that deterrence will prevail?

Professor Dill: To some extent we’ve been lucky, not having to test that proposition and, I think, from a rational point of view, if you assume that agents on both sides maximise utilities along the lines of preserving the planet, people, rights to life, everything that we hold dear, then you should say, yes, deterrence must surely prevail.

There’s no such thing as a rational nuclear war. But that argument assumes that we know what kind of utility someone is maximising, what the relevant agents perceive as costs and benefits.

Q. But deterrence doesn’t work as a principle ethically, because it has this massive contradiction…if you were ever to actually exercise the ultimate response…you would be committing an atrocity. So, ethically, it doesn’t work, does it, when it feels, as it does at the moment, that one side may be more willing to behave in that way than the other side?


Professor Dill: …the question is a little bit ‘how have we gotten here?’, and how can it ever be morally permissible to threaten to do something that it would never actually be morally permissible to do? That is the paradox we have to grapple with.

At the moment, we are in a position where we cannot go back from facing this nuclear stand-off. So, now to hide behind the notion of thinking nuclear deterrence isn’t morally justified…would certainly cause moral catastrophe. So, right now I think we are justified in upholding nuclear deterrence. In a world without nuclear weapons, that moral calculation would be radically different.

Q. Don’t you think…that maybe part of what lies behind the complacency that led to this situation is that notion that, if things get really bad in the end, this deterrence idea, that worked through the cold war, will work again? But part of the reason people are so worried now is that… it relies upon the notion that we would do something that we would not do. And it also relies on the idea that both sides observe, in the end, that same kind of injunction.


Professor Dill: I’m not sure what got us here is complacency. That’s a factual question, not a moral question. Now that we are here, our chief moral concern must be the prevention of nuclear escalation and nuclear war. There is really no greater moral evil than that sort of escalation, and I think we should not be morally distracted by the question of how did we get here, who is to blame? We need to look forward and try, with everything possible, to prevent that greater moral evil.