Monday 27 July 2015

In his paper "Free Will," what is Galen Strawson's view on free will and determinism, and how does it differ from the views of other philosophers?

Galen Strawson is a self-described "pessimist" on the matter of free will: He does not believe it exists or ever could exist in any possible universe.

His argument is basically as follows:
(1) For free will to exist, we would need to be ultimately responsible for our actions.
(2) For us to be ultimately responsible for our actions, we would need to be in control of all the relevant causal factors that determine our actions.
(3) We are not in control of all the relevant causal factors that determine our actions, and could not be in any possible universe.
(4) Therefore, free will does not exist, and cannot exist in any possible universe.
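
A minimal formal sketch of that structure, written in Lean with labels of my own choosing (not Strawson's): let FW mean "we have free will," UR mean "we are ultimately responsible for our actions," and C mean "we are in control of all the relevant causal factors." Premises (1) to (3) become hypotheses, and (4) follows by chaining the implications:

    -- Propositional skeleton of the argument above; the names are mine, for illustration only.
    theorem basic_argument (FW UR C : Prop)
        (h1 : FW → UR)   -- (1) free will requires ultimate responsibility
        (h2 : UR → C)    -- (2) ultimate responsibility requires control of all relevant causes
        (h3 : ¬C)        -- (3) we lack, and could not have, that control
        : ¬FW :=         -- (4) therefore, free will does not exist
      fun hfw => h3 (h2 (h1 hfw))

The sketch only shows that the argument is valid; everything turns on whether the premises are true.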

Strawson believes in determinism—he believes that all occurrences in the universe are uniquely determined by their causes—but unlike many philosophers, he does not believe that this is relevant for whether or not we have free will. He argues that indeterminism would only make premise (3) true in a different way, by making our actions uncaused and therefore still not under our control.

But in fact, it was never premise (3) that was up for dispute. Clearly there are many causal factors determining our actions that are outside our control—ranging from our parental upbringing to our genetics to our external environment. Even many things that seem like they are under our control may not be—when you reflect on how much control you really have over your own beliefs and attitudes, you realize that a lot of outside factors constrain what you could think and believe.

Premise (2) is basically just a definition of what Strawson means by "ultimately responsible," so it's not contentious either.

The core of the argument hinges upon whether premise (1) is true—and I speak for nearly the whole of the philosophical community when I say that I do not believe it is.

This kind of mysterious "ultimate" responsibility is not something that any entity could ever have in any possible universe—which Strawson himself clearly recognizes. So what's the point in talking about it, then? It's like complaining that we can't have free will because we can't make 2+2=5 or have square circles.

If your theory concludes that there is no relevant difference in terms of free will and moral responsibility between a world where we make our moral choices rationally based on what we desire and a world where we are mindless automatons incapable of thought, clearly there is something wrong with your theory—not with the world for failing to live up to it. The task of philosophers has always been to explain what makes us morally responsible, so that we can determine when it applies and when it does not and make moral judgments accordingly; if your conclusion is that moral responsibility does not exist, you haven't reached a counter-intuitive conclusion; you've given up on the entire point of the project. It would be as if we asked you to design a better car engine and you concluded that car engines do not exist because motion is impossible.

Indeed, the philosophical consensus (or near-consensus) on this issue is that we do have free will in the morally relevant sense, and it is compatible with determinism; in fact I am among those who go so far as to say that it requires determinism, or something close to determinism—you can have some stochastic processes, but they can't be so unpredictable that it becomes impossible to predict the consequences of any of our actions. Fortunately, this is clearly the sort of world we live in—quantum mechanics may be stochastic, but it can still make very precise predictions.

What then, in my view, gives us moral responsibility? In a word, incentives. It is our capacity to respond to incentives—to do things because we want to do them or because we will be rewarded for doing them, indeed to do things for reasons at all—that gives us free will. We are different from rocks and empty space because rocks and empty space do not have reasons for their actions; they are not seeking goals; they do not respond to incentives.

Of course, many other animals respond to incentives; do I believe they have free will? Yes I do. When you ask whether a dog dug a hole "on purpose," do you answer "dogs are mindless automatons"? No, you determine whether it was a rational action to seek a goal or somehow accidental or inevitable. We even assign moral responsibility to dogs—"bad dog." The rationality of dogs is more limited than that of humans (though ours is still limited), and as such their free will is commensurately limited; but this is not a bug but a feature, since it explains how free will can evolve, gradually improving as chemicals evolve into single-celled organisms and ultimately into complex intelligent life.
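
As a loose illustration of what "responding to incentives" could mean in the most stripped-down computational terms, here is a toy sketch in Python; it is my own example, with made-up payoffs, not anything Strawson or anyone else has proposed. The "agent" picks whichever available action best serves its goal, while the "rock" has no goals and nothing to choose among:

    # Toy illustration of incentive-responsiveness; not a model of minds or morality.
    def choose(actions, expected_payoff):
        """Pick the action with the highest expected payoff (the incentive)."""
        return max(actions, key=expected_payoff)

    # A dog deciding whether to dig; the payoffs are invented purely for illustration.
    dog_actions = ["dig for the bone", "lie in the sun", "bark at the fence"]
    dog_payoffs = {"dig for the bone": 10, "lie in the sun": 3, "bark at the fence": 1}
    print(choose(dog_actions, dog_payoffs.get))  # -> dig for the bone

    # A rock "does" whatever physics does to it; there are no goals and no options here,
    # so there is no incentive for it to respond to.
    rock_actions = []

Nothing about real dogs or real moral responsibility hangs on ten lines of code, of course; the point is only that acting for a reason, unlike merely being pushed around by causes, comes in degrees, which is why a dog can have a limited form of it and a human a richer one.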
