Metaconsent

I was talking with some people at dinner a little while ago about the ethics of killing animals for food. It's a question I've always found fascinating because I think that at some point, our technological means will make it unnecessary. Shortly after that it will become morally questionable. And shortly after that, barbaric and embarrassing. It's right at the nexus of a lot of interesting moral questions that we don't have good answers to: What is the value of life? How do we allocate moral weight to beings? And, my favourite, what is life anyway?

I find the asymmetry between creating and taking life particularly interesting. Creating a life is not as good as taking a life is bad. So if you can either have no children at all, or have two children but kill one of them, the former is ethically fine and the latter is deeply wrong. However, the net result of the first decision is one fewer person than would have existed with the second.

If you could go back in time and convince someone not to have children, and those children cease to exist in the present, have you murdered them? What if you cut out the time travel and just convince someone to not have children today? If you could choose for the entire human race to just stop reproducing tomorrow, meaning the complete eradication of our species in the next hundred years, would that be ethically superior to, say, the one-off genocide of a billion people? But what is that if not bigger numbers plugged into the same moral equation?

The relevance to animals is, of course, that so many animals are alive only because we keep them for meat. Perhaps it is barbaric that we breed and slaughter them. But what about all those lives that would never have existed otherwise? Are they really worth nothing? If you could choose to live for twenty years or no years at all, what would you choose?

I'm struck by the cow that wants to be eaten from Hitchhiker's Guide. It feels so weird because it takes two important values – individual choice and protection from harm – and rams them into each other. What do you do with someone who wants to be harmed? Our current, fairly un-nuanced answer is that someone who wants to be harmed doesn't really want to be harmed and actually has a mental illness. But there's a large subculture of people who enjoy being recreationally harmed without needing any psychiatric treatment at all! To say nothing of extreme sports, daredevilry and other high-chance-of-maiming activities.

I think to address at least some of these questions it would be helpful to have a notion of metaconsent. That is, perhaps there is no ethical issue in eating an animal that wants to be eaten, but there is an issue in creating that animal. It can consent to being eaten, but it can't metaconsent to wanting to be eaten. The decision to have those values was forced on it. It is equivalent to someone hypnotising you into desperately wanting to eat tree bark. After the deed is done, that same person would be doing you a favour by feeding you all their extra tree bark. However, they still did you a grievous wrong by making you want it in the first place: they violated your metaconsent.

The thorny question is how you could possibly obtain metaconsent from a being that doesn't exist. Obviously, we have not metaconsented to the desires we have or the things we value; they just happened to us. The universe is not a moral being, however, and as we take the ability to create life from nature, we also take on the responsibility to do better. One option would be to try to imagine a discussion with a version of the being that has a neutral position on the subject. An animal that doesn't feel strongly about being eaten or not eaten would still prefer not to *want* to be eaten, because being eaten also interferes with most other goals.

However, for many things it would be more complex. A neutral position might not be meaningful (what's a neutral food preference?), or it might be tautological (would you rather want short or long socks, assuming you have no current feelings about them?). The latter case could be an indicator that metaconsent would be granted – I certainly wouldn't be dismayed to learn that I've been genetically predestined to prefer short socks – but there are some decisions where saying "see, no preference!" could be hiding a more serious problem. Maybe you could construct a hypothetical animal with no preferences of any kind. Why would it care, then, about whether it wants to be eaten or not?

I think a second option would be a metaconsensual form of the categorical imperative, or the veil of ignorance: if the position might also apply to you, would you metaconsent to it? It's pretty tough to justify creating an animal that wants to be eaten if you can't imagine ever metaconsenting to wanting to be eaten.

To bring it back around, then, we can use the same techniques to have a hypothetical discussion with the cows that we breed. Would they want to be brought into existence? By the first test, I think yes; if a cow takes a neutral position on being born and then eaten, then any other desires (enjoying grass and so on) would push it towards wanting to exist. The second criterion is a little trickier. You could imagine an equivalent situation where humans are subjugated by some kind of evil aliens who sometimes kill and eat us. Would we collectively rather have far fewer people in exchange for no more killing and eating? I think each individual person would rather exist, but maybe collectively we would agree that the improvement in dignity is worth the lost lives.

I don't know if it's very satisfying to finish with a maybe, but I think that this way of thinking at least provides an entry point to reasoning about the morality of bringing beings into existence or changing what they value. I'm certainly a lot less impressed with the animals-are-better-off-being-eaten argument, at least.