6/30/10

"Pescatarian"

Here's a column about vegetarianism from a writer after my own heart--except for the fact that she calls herself a "pescatarian" and considers it hypocritical to eat fish but not meat. Yes, I occasionally eat fish, but why should my philosophy of eating be named accordingly? Why should I label myself a fish eater (and not a broccoli eater or an egg eater)?  My thoughts about animals and food link me to the long history of vegetarianism (covered in enormous detail in this book), so that's the best name for my diet.

As for the hypocrisy: not so fast. When I first stopped eating meat 17 years ago, it was entirely because I didn't want to support factory farming. (I could have opted for a humane-only diet, but that seemed non-viable. If I maintained my appetite for meat, I wasn't going to be able to limit myself to humane meat.) Killing was not the issue. Fish, I figured, have normal lives, not the bizarrely limited lives of factory-farmed cows, pigs and chickens. So there is a moral difference between eating fish and eating meat.

Since I made the decision not to eat meat, my thinking has changed somewhat, and so has the world. My first and most confident reason for not eating meat is still my disapproval of factory farming. But now I'm also more worried about the killing itself. That means I have to be more concerned about eating fish. I've also become more concerned about the enormous problem of over-fishing and the way modern fishing methods damage the environment. Furthermore, since I stopped eating meat, more and more of the fish in the store is farmed fish, and those animals don't actually have normal lives at all.

Despite all of that, I still think eating fish can be morally better than eating land animals--especially if you take care about which fish you eat (wild, plentiful species like Alaskan salmon are OK; bluefin tuna is not OK). But it's not perfectly OK. So why do I continue?

Well, I don't exactly--I don't continue just as before. We've stopped buying fish for home consumption. I still eat fish when there's something to be gained beyond simple taste--when doing so makes it easier to eat out, eat with friends and family, sample a local cuisine while traveling, or enjoy "sea life" in places like Hawaii. Those are morally good goals, but so is protecting animal life.

So: it's a conflict. But I'm not going to call myself a "pescatarian" or plead guilty to simple hypocrisy. It's more complex than that.

6/29/10

"Surrender, Atheists"

Why?  Because they can't say why there's something rather than nothing.  That's the upshot of Ron Rosenbaum's "agnostic manifesto" over at Slate.  He writes--
In fact, I challenge any atheist, New or old, to send me their answer to the question: "Why is there something rather than nothing?" I can't wait for the evasions to pour forth. Or even the evidence that this question ever could be answered by science and logic.
Take another case where I don't have an explanation for some chain of events. I have no idea what caused the dinosaurs to go extinct, for example.  If I don't know, can I still believe confidently it wasn't X  or it wasn't Y?  Of course I can!  I can believe it wasn't, for example, human hunting (no humans back then) and it wasn't too much smoking (no cigarettes back then).

Likewise, I can fail to know what caused there to be something rather than nothing, and still believe it wasn't Donald Trump, wasn't mixing vinegar with baking soda, nay, wasn't some all-good immaterial mind with stupendous causal powers.  I might have very good reasons for all those denials--Not Trump because no people back then.  Not vinegar/baking soda because there wasn't any.  Not an all-good immaterial mind with stupendous causal powers...because the whole idea is incoherent.

I admit I don't know why there's something rather than nothing, but not knowing why the universe exists doesn't (in the least) make it irrational to be an atheist.  Rosenbaum taunts the new atheists--
... many of the New Atheists seemed to have stopped thinking since their early grade-school science-fair triumphs ...
but I think Rosenbaum stopped thinking after he started stewing about why there's something rather than nothing. His first hunch was that if you don't know, no conceivable answer can be excluded. Further reflection should show him this isn't true.

6/28/10

Population Puzzles

While I was away, Peter Singer had an interesting essay about the most basic of questions at The Stone.  Should we make new people?  The question has hard, harder, and hardest versions.

Hard.  Are there already too many people?  Have we already exceeded earth's "carrying capacity"?  What's going to happen as we get to be a population of 9 billion in 2050?  (I've been reading Matt Ridley's new book, The Rational Optimist, and he--surprisingly--says "nothing awful.")

Harder.  What costs and benefits to ourselves and others should we pay attention to when we decide whether or not to have children?  Take, for example, Japan's effort to encourage reproduction by paying people to have more kids.  Presumably, the reason behind the policy is that new people are needed to support an aging population. Would it be bad if the financial incentive entered into someone's decision to have a child? Should reproductive decisions be purer than that?

Hardest.  Do you (typically) "do good" by making a child--directly, with respect to the child?  Singer summarized David Benatar's anti-natalist reasoning but didn't embrace it.  (Benatar says you don't do good, and in fact you do bad.)

Here's a little thought experiment that's plaguing me.  Suppose there's a remote and bounteous island.  Just 100 people live there, though it has a carrying capacity of 1000.  They can (1) not reproduce, so die out, or (2) reproduce so as to replace themselves, or (3) reproduce so as to increase their population. 

When they ask the hard question, they see no obstacle to any of the options.  When they ask the harder question, they see no obstacles either.  Their desire for children has nothing to do with kids being good at climbing trees, and providing them with coconuts, as it might be.

But they ask the hardest question too.  They want to be sure that by making children they are doing good, not bad--directly, with respect to the child. A simple soul says--we 100 are thrilled with life, despite our occasional problems.  Yes, we occasionally step on a sea urchin or get a sunburn, but the good by far outweighs the bad.  Collectively our existence is A Good Thing to the tune of X units.  If we double our population, the new population's existence will be an even better thing--to the tune of 2X, to be exact.
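(For what it's worth, here is the simple soul's arithmetic as a tiny Python sketch. The per-life value of 1 unit is purely hypothetical--the post never assigns a number; the only point is that on a simple additive view, doubling the population doubles the total good.)

    # The simple soul's additive arithmetic, with a made-up value per life.
    VALUE_PER_LIFE = 1  # hypothetical net good of one happy island life, in "units"

    def total_good(population):
        # On the additive view, the good of a population is just the sum across lives.
        return population * VALUE_PER_LIFE

    print(total_good(100))  # option (2), replacement: X = 100 units
    print(total_good(200))  # option (3), doubling: 2X = 200 units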

Their reasoning seems impeccable.  It certainly seems to show that option (3) is permissible.  The trouble is that it seems to show more than that. It seems to show that (3) is obligatory (and (2) "next best"). There's the rub (and this rub is pivotal in Benatar's anti-natalist argument).  Is there any way to stop the slide from (3) being permissible to (3) being obligatory?  If not, is it so bad to think (3) is obligatory?

What is Philosophy?

Alexander George asks today at The Stone, and then gets some truly abysmal comments.  Toward the end of the essay he writes--

So--you're doing philosophy by asking "what is philosophy?"  But I would suggest that this is not the first philosophy you ought to "do."  If you haven't already done other philosophy, you're not in a position to fruitfully contemplate what philosophy is.  That, I hypothesize, is the problem in the comment section.  All signs are that readers are trying to characterize a discipline they've never really engaged in. 

Here's a sampling of the nonsense over there--


Maybe The Stone isn't really the best place for the nature of philosophy to be discussed?

6/26/10

Nooooo!

Just no.  (Stolen from Jerry Coyne, who says the Chinese are into tiger dogs.)

6/25/10

Templeton $$$

The Templeton Foundation is offering $$$ for essays on "big questions."  That's got to be music to a lot of ears, including mine, but Jerry Coyne finds it all nefarious.  Yes, yes, I see they have an agenda. Essentially, they want to preserve a voice for religion in the highest-level discussions of science, morality, etc., rather than watch religion get gradually shut out--as has happened in academia and public intellectual life over time.  They ensure that voice and give it more credibility by creating symphonies of opinion that are all-inclusive.  In fact, they seem to go out of their way to include secular heavyweights (see this forum on the nature of morality).  Even if what they're trying to achieve isn't something I'd personally strive for (preserving a voice for religion), Templeton forums have some very pleasing effects. They amplify secular voices way beyond their normal level.  They bring big philosophical questions into the public eye.

Full disclosure:  I have used one of their "big question" websites as a teaching tool (this one, on purpose), though not without first clarifying to students what I think the organization is up to.  So I'm already a consumer of Templeton goods. Should I also be a producer?  (Hmm.)

6/24/10

I Love the Way You Lie

One thing I love about being a parent is getting to sample all the stuff my kids read, listen to, watch, etc. My own little cultural bubble gets a little dull, what with my perpetual affection for Joni Mitchell, literary fiction, and all.

In the early years, you get to control this influx. We had the kids listening to Burl Ives and the Beatles, watching Little Bear videos, and we read them Roald Dahl...to mention a few of my favorites. Over time, the control decreases--the damned school decides what they should read, and they start absorbing other people's tastes. Then they discover the radio dial.

Thanks to my two 13-year-olds, I actually know my Black Eyed Peas and Lady Gaga, etc. etc. etc. The biggest shock: my favorite part of the influx is hip hop. Best of all: Eminem. My son got his new album Recovery on Monday, and he’s been compulsively replaying the best songs. Especially the #1 (at iTunes) song, "Love the Way You Lie." It’s a really beautiful duet that perfectly evokes dysfunctional passion, but, um, what about the message?

The guy raps about a messy relationship ... (lyrics here, listen here)

All I know is
I love you too much
To walk away though
Come inside
Pick up your bags off the sidewalk
Don't you hear sincerity
In my voice when I talk
Told you this is my fault
Look me in the eyeball
Next time I'm pissed
I'll aim my fist
At the dry wall
Next time
There will be no next time
I apologize

But then the attitude goes dark:

Even though I know it's lies
I'm tired of the games
I just want her back
I know I'm a liar
If she ever tries to fucking leave again
I'mma tie her to the bed
And set the house on fire

"Tie her to the bed and set the house on fire?" The girl (Rihanna) responds (with an achingly lovely voice and melody):

Just gonna stand there
And watch me burn
But that's alright
Because I like
The way it hurts
Just gonna stand there
And hear me cry
But that's alright
Because I love
The way you lie
I love the way you lie
I love the way you lie

Now, if I’m Eminem’s publicist, and I’m trying to explain, this is what I’m going to say: a song is a sort of play or fiction. There are characters. Rihanna’s character is indeed masochistic. But the song doesn’t say she should be that way, any more than Charlotte Brontë was saying we should keep madwomen in our attics when she wrote Jane Eyre.

OK...fine. So Eminem isn’t really telling women to enjoy being set on fire. But what about influence? Songs shape the way people experience things—you know it’s true! Some of the millions listening to this song right now are being abused by their boyfriends. Even if the song doesn't really say they should, won't it make some try to like the hurt and love the lies? And isn’t that effect intensified by the voice being Rihanna's—you know, she who famously got beaten up by her boyfriend a year or two ago?

My only hope is that the lyrics, and her singing them, are so appalling it’s almost funny. “Please set me on fire. I just love it, cuz I love you soooo much!”  Groan. Laugh.  Just don't take it seriously.

Having Reasons, Being Observant

As a fairly unobservant atheist (and fairly unobservant Jew), I have to respond to this post. It's one thing to have reasons to be an atheist (I do) and a Jew (I do), another thing altogether to adopt some level of "observance." You can have good reasons to be an atheist, and other good reasons not to be observant--i.e. not to focus on it, talk about it a lot, promote it.  Maybe you find it obvious and uninteresting that there's no deity (the attitude, I think, of many atheist philosophers).  Maybe you think there's no god, but you don't mind that others believe differently--since you think belief can have both good and bad effects.  Maybe you have other goals that would be thwarted, if you got on the "religion, baaaad" bandwagon.  For example, maybe what really matters to you is the environment, or poverty, or animals, and you think you can advance progress in those areas if you reach out non-divisively to both religious and non-religious people.

Finally, it's a very bad idea to use the term "anti-atheist" for unobservant atheists who criticize "the new atheists."  It echoes "anti-semite" and thus misleads badly.  There are people who really do despise atheists in the way that anti-semites despise Jews. Unfortunately, I come into contact with such people, and they upset me.  Critics of the new atheists (like Chris Mooney, like me once in a while) are nothing like them. The critics have reasoned complaints about a subset of atheists; they don't despise or fear or denigrate atheists just for being atheists.  They're not "anti-atheists." So much for that.

6/23/10

50 Greatest Ideas

The 50th issue of The Philosophers' Magazine is devoted to the 50 greatest ideas of the 21st century (so far!). The magazine's website is publishing the ideas one at a time (you'll have to wait a long time for mine--I'm #27). Today's great idea (#2 in the series) is "public reason"--see here.

Respecting Animals

Angus wrote an awfully nice post about my book while I was gone, and asked a good question:  in defending degrees of respect, with humans entitled to more, how much do I really diverge from Tom Regan's seminal book The Case for Animal Rights?  Though Regan thinks every animal (or rather, every "subject of a life") has the same inherent value, and so has the same prima facie right to be treated with respect, he allows that in certain special situations, we ought to give priority to (normal) humans over (normal) animals.  Given that concession, is there really any more than a verbal difference between my view and his?

In fact, I think there's much more than a verbal difference.  That becomes clear when you consider the nature of the special situations in which Regan thinks that individual differences (between subjects of a life, human or animal) could make a moral difference.  Angus calls them situations of "unavoidable choice," but that suggests you'd be in such a situation much more often than Regan believes. 

The override situations Regan discusses are exactly five. Each one is very carefully analyzed to make it clear why it's an override situation.  That's not to say he's committed to just 5 override situations and not 6 or 7 or 8, but we should go very slowly when we postulate others. When you realize you're in one of these situations, it can sometimes make good moral sense to take differences into account; otherwise, it doesn't.

Here are Regan's override situations--

(1) Self defense.
(2) Punishment of the guilty.
(3) Innocent shields.
(4) Innocent threats.
(5) Prevention cases.

Human rights are overridable in these situations just as much as animal rights, and Regan talks mostly about human rights to explain cases (1) - (4).  Thus, most of his discussion of how animal rights may be overridden focuses on prevention cases.  But it's particularly when you focus on the whole list of override cases that you appreciate that he's talking about strange, exceptional circumstances.

"Prevention"  is a technical term here. Not just any situation where a bad is to be prevented qualifies as a "prevention" case.  Here's Regan's famous example--four men and a dog are in a lifeboat.  There's too much weight and all will die if no one goes overboard.  It's key to this being a special case that all on board are at risk--nobody's a bystander. Furthermore, if nothing is done, all will die.  In that singular situation, Regan thinks we ought to pay attention to differences.  We ought to throw a normal dog overboard rather than a healthy 25-year old human because (he says) the dog has less to lose.

Differences could enter into the other situations as well.  Suppose we're in situation (3).  Bad guys are shooting at us. They've deliberately surrounded themselves with innocents. May we take them out, even at risk of killing innocents? If we may, the next question is when to do it.  Suppose (this is my example, not Regan's) they shoot from a nursery school in the morning and from a zoo in the afternoon.  Regan would presumably say we should fire back in the afternoon. The animals have less to lose.

So Regan does think human-animal differences require us to prioritize human lives in some cases, but the crucial thing to see is that these are very singular cases.  He does not think that human-animal differences make for differences in inherent value or the strength of human vs. animal rights.  He doesn't think that, across the board, humans have priority over animals.

Suppose we are facing a plague that could kill a million human beings.  It doesn't affect dogs at all. Scientists discover, however, that the liver of a dog contains a miraculous cure.  By killing one dog, we can save a million.  Can we do it?  Definitely not, on Regan's view. This is not a prevention case, because the dog is not at risk.  It's also not self-defense: the dog is not attacking us. It's not any of the override situations.  The dog's right to be treated with respect protects him from being used to solve a problem that has nothing to do with him.  You can even picture this scenario taking place on a sinking lifeboat, and Regan will still say the dog cannot be sacrificed.  (See The Case for Animal Rights, pg. 385)  Regan is an abolitionist about animal research: we should do none at all, no matter what.

Another situation that's "none of the above" is one in which we simply need to eat an animal to survive (the caveman example in my book).  That's just like the research case, and isn't one of the five override cases.  So Regan's theory does not permit him to think this is morally acceptable either.  (And what matters is the theory.  Oddly, and without enough explanation, he briefly defends eating animals for survival on pg. 351.)

My view, by contrast, is that human-animal differences do make across-the-board differences in how we may treat them.  They do create differences in inherent value between different species, and they do affect how much respect we owe to members of different species. It's not just in the very peculiar override cases Regan discusses that differences matter.

How do they matter? Which differences matter?  There's more on this in my book, but some readers will think: vague, not enough detail!  Maybe I should have quoted Aristotle--"Our discussion will be adequate if we make things perspicuous enough to accord with the subject matter."  I'm skeptical that any perfectly clear, detailed, and precise view about the grounds of respect would really be believable.

6/22/10

The Consolation of Religion

You go far away for a couple of weeks, and it kind of feels like big changes must have taken place in your absence.  Maybe the oil spill got cleaned up. Maybe the new atheists stopped being upset about "accommodationists."  Stuff like that.

Alas,  I see there's yet another round of railing against Chris Mooney at some of my favorite atheist websites (for example here, and that post includes more links).  What did he say this time?   For one, he related this story about a comment he made at an AAAS session on science and religion: 
At the close of the session, I rose and posed a question. One can never remember exact words, but in essence, it was this: “I’m glad you’re trying to foster dialogue between scientists and the religious community, and I’m sure you’ll succeed. But here is a harder question–how will you foster dialogue with the New Atheists?”
More interestingly, he told this story:
At the AAAS event, the pastor David Anderson told an unforgettable story underscoring this point–the story of a single mother who just lost her husband, and has two poorly behaved kids, disciplinary problems who keep getting in trouble at school. Does this woman care about the latest scientific discoveries about, say, asteroids? No, explained Anderson, “because an asteroid has just hit her family.”
Science, alone, is no consolation in this context. Religion gives this single mother something she can lean on. Religion, explained Anderson, provides one with inspiration, whereas science provides information (and science fiction provides entertainment).
It was naughty of Mooney to relate the pastor's tale because... because he shouldn't have bought into the pastor's notion that religion could offer this mother something science couldn't?

Why shouldn't he buy into that?  The pastor didn't say every single mother with problems must rely on religion. I should think he left it open that some would rely on sports...or on art...or psychotherapy... or whatever.

I can imagine the pastor saying different things about what religion provides. Must it be inspiration, rather than connection, community?  Presumably he wasn't ruling out a more complex story.

I suspect that for some atheists it's simply hard to accept that religion can provide all these wholesome and yummy things--much as for vegans (I have noticed) it's hard to accept that animal products have anything to offer.  In both cases, it seems there's a fear that once X is recognized as sufficient for some purpose, it's inevitable that X will be seen as necessary.

But that's just a logical confusion.  Being sufficient isn't being necessary.

Of course, once you've adopted some practice, because it's sufficient to fulfill some important need, you get used to it.  It's not easy to shift from one sufficient practice to another.  The mother who relies on her church for social support and comfort can't just cut the cord and immediately get the same needs met through art and psychotherapy (as it might be).  Similarly, it's not easy to shift from one diet to a very different one.

Given all of that--you might argue--it's not wise for science educators to stress the incompatibility of religion and science.  Though religion isn't really necessary to meet people's needs, it does do important things for people and so has a "felt necessity."  So the message of incompatibility will likely alienate many people from science.

I take it that's Mooney's basic position.  You could argue with it, but I find it completely unfathomable why he's viewed as such a bad, bad boy.

6/21/10

A Vegetarian in Paris

The French seem to eat meat, meat, and more meat.  Not only is there an absence of vegetarian options at most restaurants in Paris, but the meat options are on the barbarous side.  As in: every menu seems to include pâté de foie gras.

We did visit one vegetarian restaurant (Le Potager du Marais) while in Paris, but that was only a partial success.  Hurray for the fennel soup, but the seitan bourguignon was...well...seitan bourguignon.

Having a "different" diet is a pain while traveling.  You want to fully experience another place--that's the whole point of traveling.  But there are limits to flexibility, and they aren't even necessarily chosen.   When in China, I bet even the most avid carnivores have problems eating dog, even if they'd like to "do as the Romans do." Morality seeps down to the gut level, so that having a mouthful of dog is likely to be involuntarily unpleasant (for western dog lovers, whether they eat other meat or don't). 

I feel the same way about trying the beef bourguignon in France or the prosciutto in Italy.  The ethics of eating I normally abide by has worked its way into my taste buds.  So I discovered when I attempted to respond flexibly to accidentally ordering pasta with a "pomodoro ragout" in Italy.  It turns out a ragout contains ground beef.  Over the protestations of my kids, I decided I'd just eat it (it was expensive, and I didn't want to try to explain the problem to an Italian waiter).  I soon discovered this was pretty much like eating dog.  It was--ewww.

It's a pity that vegetarians can't enjoy all the local specialties (like Tuscan polenta with wild boar).  On the other hand, we can be especially appreciative too.  Italy and France have the most gorgeous fruit and vegetable stands!  For non-vegans, the gelato in Italy is nothing short of incredible.

6/20/10

Travel Miscellany

Back from two weeks in France and Italy, and not yet quite coherent, I offer some miscellaneous reflections--

 #1  Angus Taylor (Aeolus) wrote some great stuff here in my absence (thank you!).  No, I didn't change my mind about various and sundry--it's been him writing since June 4.  (It seems as though this created confusion for a few people--a little birdie tells me.)

Over a delicious pizza in Siena we pondered what he wrote about "unnatural" food.  I don't think it's a simple matter of "natural vs. artificial," but I do think it makes good aesthetic sense to prefer a fresh Tuscan tomato to a can of Campbell's tomato soup.  Super-processed food doesn't taste as good (to me, anyway), but sheer taste is just part of it.  The way we experience food also has to do with everything we believe about where it came from, how it was produced, etc.

When I'm less jet-lagged I'm going to respond to his post about my book.

The picture was taken at the fruit and vegetable market in Venice.
#2 Last semester when I taught environmental ethics there was much discussion of whether and how we can reduce our ecological footprints.  The French and Italians already do many of the things we're supposed to do.   They drive smaller cars, use much less air conditioning, live in smaller homes, eat less, use more public transportation, etc.  I don't think they necessarily mean to be more green, but rather--for cultural and historical reasons--they simply are more green. Why can't we be more European?
That's Roman laundry.

#3  What's the point of traveling?  I'd like to say it's for expanding our horizons, learning about other ways of life, visiting friends, etc. etc., but the truth is that it's all that, and also just for the fun of gawking ... at great art, architecture, landscapes, etc. Alain de Botton must say some perceptive stuff about these things in his book about travel.

I can't believe how beautiful Venice is. A little Disneylandish?  I don't care.
 
 
#4  Is there a lower moment in human history than the gladiator fights put on at the Roman Colosseum?  It's one thing to subject others to death and total misery, but to enjoy yourself completely while doing so is especially appalling. It turns out the Romans (50,000 at a time) not only feasted while watching men and animals kill each other, but they liked to be sprinkled with flower petals and sprayed with perfume. The women brought needlework with them and the men took a little time out for letter-writing.  What great engineers they were, but what miscreants.

Picture was taken you know where.
#5  I shouldn't have taken this picture, I know, but I was surprised to discover that there are culturally specific ways of begging.  Italian beggars create tableaux straight out of religious art.  The homeless in the US (we don't even use the word "beggar") would never try this.

A "beggar" in Venice (yes, we paid for the sin of photographing her by giving her money).
#6  Our kids were very keen to see "famous stuff."  The funny thing about famous stuff is that it's impossible to see it just as is, instead of seeing it as famous.  There's a thrill of recognition that's an inextricable part of the experience.

Plato and Aristotle detail from The School of Athens (at the Vatican).

6/19/10

Blowed Up Real Good (guest post)

Good news, fellow citizens of Earth! We now have irrefutable proof that the Galactic Guardians are benevolent. The great wisdom and compassion of these ancient, intelligent, and technologically advanced beings comports with the principle of the widening and deepening of morality among social species in the course of evolution, as Darwin surmised – and is contrary to the silly scenarios of evil aliens in so many science-fiction thrillers. The Special Session of the United Nations and Peoples’ Representatives is expected, in the next few days, to formally accept the Guardians’ generous offer to transport all human beings and all other sentient creatures from Earth to our future home planet, Eudaimonia, three thousand parsecs closer to the centre of our Milky Way Galaxy. There, on that abundant, green world uninhabited until now by sentient life, conditions are maximal for our flourishing as individuals and in harmony with other earthling creatures.

There has already been much discussion about an appropriate celebration. In particular, it has been suggested that, as we view Earth for the last time from the vantage of our departing Space Ark, we should treat ourselves to the ultimate fireworks display: a concatenated detonation of all the tens of thousands of nuclear weapons currently stockpiled on the planet. What a show! The end of history will truly be a political science beyond the dreams of Randy Newman!

Of course, there is a strong likelihood that all non-sentient life on the planet will be permanently extinguished in the nuclear firestorm. A handful of radical environmentalists and one or two eccentric philosophers have objected to this kind of celebration on the grounds that even non-sentient life has intrinsic value. For example, Paul W. Taylor argues that all living members of the planet’s natural ecosystems have equal inherent worth. Like Holmes Rolston III, he makes the point that every living organism has a telos; it is a unified system of goal-oriented activities, the aim of which is the preservation and well-being of the organism, whether or not the individual – plant or animal – is sentient. This means, according to Rolston, that the organism is an evaluative system, even if it is not yet a moral system. If we are to have an ethic that respects life, he says, we must take into account how our behaviour affects these amoral evaluative systems.

Does the idea of value existing without a conscious valuer make sense? The notion, it might be said, is simply incoherent; or, at best, the kind of intrinsic value exhibited by non-sentient organisms is different from that of sentient beings who consciously value (positively or negatively) their experiences. If so, it would seem that the bottom line is: If most of us want to blow up the planet, let’s do it!

The Guardians, with their unimaginably superior understanding of biology and ecology, have calculated that, without our little celebration, and given the current configuration of plant, microbial, and insect species on Earth, there is no more than a five per cent chance that sentient life will ever again evolve on this planet. The Guardians have also calculated that no sentient aliens from elsewhere in the universe are ever likely to arrive on Earth in the future, either by accident or design. Should the small chance of sentient life evolving once again on the planet give us pause? As the saying goes: Why should I care about posterity – what has posterity ever done for me?

6/17/10

Future (Im)perfect (guest post)

Is BP being made a scapegoat? The opprobrium resulting from the Gulf of Mexico oil spill obscures the fact that we are all addicted to oil – indeed, industrial civilization runs on the stuff – and not nearly enough is being done to confront our addiction. But more is required than just a shift to renewable energy sources. Industrialism itself – the organization of society for the maximization of production and consumption – increasingly appears to be unsustainable. Capitalism, the dominant and now globally triumphant form of industrialism, is an inherently expansive system that is threatening to eat the planet alive. There are those who argue that capitalism can be made green. That will be a key issue of this century: can future economic growth under capitalism come primarily via sustainable technologies as well as in non-material forms (e.g., services, computer software)? The answer is not clear.

What does seem clearer is that major changes are in store. Society a hundred years from now is likely to be profoundly different from today, either for better or for worse. At the end of the nineteenth century, William Morris envisioned the shape of post-industrial society in his utopian novel News from Nowhere. Morris’s faith in the goodness of human nature (there is no money and no politics in post-revolution Britain) and his deliberate rejection of advanced technology (why would anyone want to get somewhere quickly by train if they could get there leisurely by rowboat?) are bound to strike most of us as charmingly naive in some respects, even if his socialist biophilia has much to recommend it. At this point, then, let me put in a shameless plug for a non-fiction look at where we’re headed, written by my brother Graeme Taylor. Evolution’s Edge: The Coming Collapse and Transformation of Our World won the Independent Publisher Book Awards Gold Medal for 2009 in the category “Most Likely to Save the Planet”. As the book makes clear, industrial civilization must go not because it is morally bad (it is both good and bad in various ways) but because it is increasingly dysfunctional and unsustainable. And go it will. How it will go and what will replace it is the story of the decades to come.

6/15/10

Respect (guest post)

Jean Kazez’s Animalkind is a pleasure to read: it engages seriously with a range of animal-ethics issues, yet the writing is also engaging in the sense of making the reader comfortable and eager to hear out the author’s arguments, even – especially – when one may question them.

Kazez argues for animal liberation, but against the idea that humans and sentient non-humans have equal moral value. Animals count, and many of them count a lot, but humans count more. We ought to give animals due respect, and if we do, this will mean enormous changes in human behaviour – but, still, humans are due more respect than animals. How does Kazez’s position differ from that of Tom Regan, the most influential exponent of animal-rights philosophy?

Regan argues that all subjects-of-a-life (nearly all humans and many non-humans) have equal inherent value. This would seem to put him at odds with Kazez. Yet, perhaps surprisingly, he insists that, if it comes down to an unavoidable choice, a million dogs ought to be sacrificed rather than a single normal human. That is because, although all who have inherent value have it equally (all being equally entitled to be treated with respect), a dog loses less by death than a normal human does. That is, death forecloses fewer opportunities for satisfaction in the dog’s case than in the human’s case. To give the death of a dog the same moral weight as the death of a normal human would be to treat the human without due respect, because it would be to give the lesser loss suffered by the dog the same weight as the greater loss suffered by the human. And in rejecting utilitarianism, Regan also rejects the idea that we ought to aggregate the lesser losses suffered by numerous dogs and weigh them against the loss suffered by a single human.

Isn’t Regan being anthropocentric in his assessment of how much each kind of loss is to count? As Lori Gruen has asked, how do we know that a human thrown from Regan’s overloaded lifeboat would lose more by not getting the chance to write the play she’s been anticipating writing than the dog would lose by not getting to go for another run by the river? We might ask the same of John Stuart Mill: How does he know that it’s better to be a human dissatisfied than to be a pig satisfied? Mill says that in order to judge which of two pleasures is qualitatively superior, we must consult those who are competently acquainted with both. Now, you and I might prefer reading a philosophy blog or watching the World Cup to rolling around in the mud or sniffing out truffles, but are we competently acquainted with the joys of mud and truffle-sniffing?

Kazez recognizes this problem and takes a practical line:

We can study the lives of animals with an open mind. We can try to be less arrogant and proud. But then we must make judgments, because real-world choices depend on doing so. The judgment most of us arrive at is that there is something special about our capacities and thus about us. The worry about latent speciesism ought to make us more humble, more open to new evidence, more tentative. But we have to move on with our understanding of the way things are, imperfect though it may be.

Some will be uncomfortable with what Kazez calls her “sliding scale” of moral value, which places humans above members of other species. The sliding scale has also been championed by Mary Anne Warren, who argues that while it makes sense to ascribe moral rights to all sentient creatures, humans ought to be recognized as having stronger rights than animals, less easily overridden. Warren criticizes Regan’s notion of equal inherent value – and yet, as noted, Regan does not claim that everyone’s vital interests are equally protected from being overridden. He advocates equality in the matter of respect, but affirms that differences in capacities will, in some circumstances, properly give rise to differences in treatment. Kazez, by contrast, talks of different degrees of respect being merited on the basis of different capacities. This leads me to wonder how much of the disagreement here has to do with substance, and how much turns on different conceptions of “respect”.

The devil may be in the details, and constructing a coherent philosophical justification for one’s position may be tricky. Nevertheless, animal advocates generally agree that sentient non-humans deserve to be treated with respect, and that this at least means not inflicting unnecessary harm. The key difference between Kazez and Regan may be that she is more open to uncovering reasons why species membership can give extra weight – that is, weight in addition to that resulting from capacities of isolated individuals – to sometimes favouring humans over non-humans. (When she returns to blogging in a few days, perhaps she’ll say something about this, and correct any misrepresentations I may be guilty of.)

I like the fact that Kazez’s argument is sensitive to the changing historical contexts of human needs. I try to do something similar, and try to avoid getting bogged down in the question of the relative values of different kinds of lives, in “Animal Rights, Human Needs”. That essay, which attempts to articulate what I call a "vital-needs rights view", is far from perfect, and with regard to its idea of human and animal flourishing, I point readers to the somewhat related "capabilities approach" being developed by Martha Nussbaum. But before you tackle Nussbaum, Animalkind is a good place to go if you want to think about what it means to respect both humans and animals.

6/13/10

Unplugged

Just a reminder--  I will continue to be 99% unplugged for another week or so.

6/12/10

So Great an Absurdity (guest post)


Tis unconceivable that inanimate brute matter should (without the mediation of something else which is not material) operate upon and affect other matter without mutual contact; as it must if gravitation in the sense of Epicurus be essential and inherent in it. And this is one reason why I desired you would not ascribe innate gravity to me. That gravity should be innate inherent and essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of any thing else by and through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has in philosophical matters any competent faculty of thinking can ever fall into it.
Isaac Newton to Richard Bentley, February 25, 1693

Material bodies do not attract each other on their own. It is absurd to think that gravity is innate to matter, says Newton. No scientifically educated person could believe such a thing! And although Newton the scientist famously said that it was not his job to hypothesize about an ultimate cause, when he took off his scientist’s hat he let it be known that the only reasonable explanation for gravity was the active and continuous intervention of a divine intelligence. The inverse-square law is God’s law. (God the mathematician, the first-born child of Galileo and Descartes, and grandchild of the ancient Greeks.) "When I wrote my tract on our system," said Newton, "I had my eyes turned to principles that could act considering mankind’s belief in a Divinity, and nothing is more grateful for me than to see it useful for this goal."

Mary Midgley maintains that today’s anti-god warriors are doing much the same: using the prestige of science to defend a particular world-view. In both cases, she says, the scientific argument is weak, but its employment is not surprising.

Belief in God is not an isolated factual opinion, like belief in the Loch Ness monster – not, as Richard Dawkins suggests, just one more “scientific hypothesis like any other”. It is a world-view, an all-enclosing vision of the kind of world that we inhabit. We all have these visions. Though they are always loaded with lumber and often dangerous, we need them.

This is a case she makes at greater length in Science As Salvation: A Modern Myth and Its Meaning. According to Midgley, we humans by nature think in terms of purpose, direction, context, meaning. In recent decades scientists have vacillated between denying that science has anything to tell us about the meaning of life and claiming it can tell us a great deal about it – Richard Dawkins and Stephen Hawking being among the latter.

Teleology – reasoning from purpose – does not require belief in a deity; it does, however, imply that human beings are not the source of all value, but are parts of a larger, intelligible whole within which we situate and understand ourselves. By contrast, viewing the non-human universe as being without purpose is typically compensated for by "orgiastically" dramatizing the human mind and glorifying humans as the sole centre of value.

If our curiosity is in no way respectful – if we don’t see the objects we speculate about as joined with us and related to us, however distantly, within some vast enclosing common enterprise which gives them their independent importance – then (it appears) our curiosity, though it may remain intense, shrinks, corrupts and becomes just a form of predation. We then respond to these beings we enquire about with some more or less hostile, alienated attitude, something ranging between fear, aggression, callous contempt and violent suppression. We see them either as enemies to be conquered or as brute objects ranged over against us – as aliens, as monsters, as victims, as trivia or as meat to be eaten.

6/10/10

Unnatural Food (guest post)

Time magazine reports on a lab at the University of Missouri that has created a soy-and-wheat meat analogue with the texture of chicken:

First, you take a dry mixture of soy-protein powder and wheat flour, add water and dump it into an industrial extruder, which is essentially a gigantic food processor. (You have to climb a ladder to get to the hole at the top.) At first, the mixture looks like cake batter. But as it's run through the gears of the extruder and heated to precisely 346°F (175°C), the batter firms up and forms complex striations. It took Hsieh and Huff many years to get the temperature right, and it also took years to discover how to cool the soy cake very quickly, before it could melt. ... All this processing raises a question: Will vegans and other gastronomic purists buy a product that is vegetarian but highly processed?

Not only “gastronomic purists” but many others will worry that it’s highly processed, that “It’s not natural.” I find this objection odd. Is there anything we eat that is “natural”? Fruit, vegetables, meat, drink – it’s all the product of centuries or millennia of artificial processing, commonly involving genetic engineering in the form of selective breeding of plants and animals – on top of which today are the vast industrial processes of chemical, mineral, and fuel extraction, then transportation, ploughing, fertilizing, planting, irrigating, harvesting, housing, feeding, waste disposal, more transportation, then often cooking with gas or electricity. Does “natural” retain any discernible meaning in this context? In what sense is a meat analogue any less natural than real meat? I can’t see it. And what about vat meat – real meat grown in labs (and eventually, no doubt, in factories in industrial quantities)? Is that any less natural than real meat hacked from the bodies of artificially modified and industrially produced animals? Again, I can’t see it.

We are natural beings and yet, as Marx described in Capital, our “metabolism” with the external world involves a ceaseless process of dialectical modification.

Labour is, first of all, a process between man and nature, a process in which man, through his own actions, mediates, regulates and controls the metabolism between himself and nature. He sets in motion the natural forces which belong to his own body, his arms, legs, head and hands, in order to appropriate the materials of nature in a form adapted to his own needs. Through this movement he acts upon external nature and changes it, and in this way he simultaneously changes his own nature.

For human beings there is no tidy distinction between the natural and the artificial. This doesn’t mean that anything goes, that whatever we do is okay because, after all, it’s all natural (and simultaneously it’s all artificial). There are forms of dialectical exchange with non-human nature that are healthy and forms that are not – forms that promote the flourishing of ecosystems and individuals and forms that do not, forms that we can judge to be morally appropriate and forms that we can judge to be morally inappropriate. But whether getting B12 from a pill is “natural” – if that’s an issue at all, it’s small potatoes.

6/8/10

Religion and Animals (guest post)

The other day the chief minister of the state of Malacca in Malaysia, responding to protests against the building of an animal-research laboratory, said, "God created animals for the benefits of human beings. That's why he created rats and monkeys. ... This is the way it has to be. God created monkeys, and some have to be tested."

Peter Singer has now slammed the chief minister’s ignorance and deplored the influence of religion on animal issues and ethical issues in general:

The chief minister's comment is yet another illustration of the generally regressive influence that religion has on ethical issues – whether they are concerned with the status of women, with sexuality, with end-of-life decisions in medicine, with the environment, or with animals. Although religions do change, they change slowly, and tend to preserve attitudes that have become obsolete and often are positively harmful.

The assumption among animal advocates has typically been that the past – certainly in Europe and the West and at least until Darwin – was a kind of Dark Ages, in which animals were almost universally regarded as having no significant moral standing. Rod Preece displays a more nuanced understanding. In Brute Souls, Happy Beasts, and Evolution, he aims to show that, generally speaking, scholarship regarding the historical status of animals in Western civilization (that is, over the past 2,500 years or so) is seriously deficient. Specifically, the views he challenges are “(1) that the Christian doctrine, typically presented as an unchanging monolith, has denied immortality to animals, with the corresponding implication that they were thereby denied ethical consideration; (2) that there was a near universal belief animals were intended for human use, with the corresponding implication that they were not ends in themselves and thus not entitled to ethical consideration; (3) that Charles Darwin’s theory of evolution had a profoundly positive impact on the way in which nonhuman animals were regarded and treated; and (4) that the idea of the ‘happy beast’ was merely a trope to condemn humans for their hubris and was not at all a sincere attempt to raise the status of animals.” Preece believes that our ethical responsibilities to animals are ill served by this simplistic and misleading conception of the historical record.

Preece shows in detail that the reality is far more complex than normally appreciated. To take one example, his close reading of the Victorian debate over vivisection turns the standard notion of Darwinism’s benign influence on attitudes toward animals, if not on its head, at least on its side. Although Darwin wrote that the subject of vivisection made him “sick with horror”, he supported it in the interests of scientific progress. Indeed, those opposed to the practice, who included Queen Victoria, Lord Shaftesbury, and many other prominent Britons, were more likely to be motivated by their Christian beliefs than by a belief in evolution, Darwinian or otherwise. John Ruskin, who passionately opposed harmful experimentation on animals as being in defiance of “the great link which binds together the whole creation from its Maker to the lowest creatures”, resigned his professorship at Oxford in 1885 because the university Senate approved funds for a physiology laboratory that would perform vivisection.

6/7/10

The Funny Side of Suicide (guest post)

Jean Kazez has mentioned “suicide food” before, and the fascinating blog that is a repository for images of this bizarre, yet common, phenomenon. The “Bite the Burger” image you see here is a photo I took in Victoria, on the outside wall of an establishment called “The Cultured Cow”, which has since been replaced by a Starbucks. The photo has been published in Carol J. Adams’ The Pornography of Meat and in my own Animals and Ethics – unfortunately, in both cases, in black-and-white. (Black-and-white just doesn’t do justice to the MURDER comment.) So here’s the full In Living Color version.



What’s with these suicide-food images?

In Aboriginal cultures, hunted animals have commonly been understood to be voluntarily giving themselves to hunters who demonstrate proper respect, or to be engaging in a form of mutually beneficial voluntary exchange. (See "The Myth of Consent", the first chapter of Jean's book Animalkind.) Are suicide-food images a modern form of this ancient view of the relationship with animals used as food? The most obvious difference is that suicide food is presented as a kind of joke, a joke so common as probably to escape attention much of the time. Aboriginal peoples did not think of the voluntary-sacrifice account as a joke: it was understood as literally true.

I suggest that the joking aspect of suicide food reveals unease about what is being done to these creatures. It is an unease that is partly buried or unconscious, but nonetheless present and persistent. Aboriginal myths of voluntary sacrifice may have served to some extent to repress feelings of guilt, but given the necessity of using animals for food, clothing, and tool-making, these myths were a credible and integral element in a harmonious way of life. By contrast, in modern industrial society it is becoming increasingly difficult to sustain notions of happy animals willingly offering up their lives in exchange for a bucolic existence. The result is a significant tension between people’s desire to devour animals and their knowledge of how their degraded victims live and die. Joking is a mechanism that at once acknowledges and deflects the truth. We laugh because otherwise we’d have to cry. And if we cried, we might have to give up our meat.

Or perhaps a joke is just a joke?

6/5/10

Tea with Bertrand Russell (guest post)

Many or most atheists believe that atheism is rational, i.e., that the rational thing to do is not simply to refuse to believe in an intelligent Creator, but positively to believe that an intelligent Creator does not exist. Many atheists like to bolster their case by referring to what is called Russell’s Teapot, or, alternatively, the Celestial Teapot. In “Is There a God?”, written in 1952, Bertrand Russell said:

Many orthodox people speak as though it were the business of sceptics to disprove received dogmas rather than of dogmatists to prove them. This is, of course, a mistake. If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense. If, however, the existence of such a teapot were affirmed in ancient books, taught as the sacred truth every Sunday, and instilled into the minds of children at school, hesitation to believe in its existence would become a mark of eccentricity and entitle the doubter to the attentions of the psychiatrist in an enlightened age or of the Inquisitor in an earlier time.

Russell (writing in the pre-Sputnik era, before October 1957) was making the point that it was irrational to believe that there was a teapot orbiting the Sun somewhere between Earth and Mars. The fact that the existence of the teapot could not be disproved was no justification for believing in it. But more than that, I think Russell was pretty clearly indicating that the rational position was to disbelieve in the teapot (i.e., to believe positively that the teapot did not exist) – to be an “ateapotist”. And this, I think you will agree, seems sensible enough. I shall return to the teapot in a moment.

Suppose now that I’m in my office at the university, sometime after teaching a class, and I notice that my pen is missing. I think, “Perhaps I left it in the classroom.” So I head toward the classroom in the A wing of the Humanities building. Let’s say it’s now about 4 o’clock. As I start down the A-wing corridor, I see that the first door on the left is open, and that there’s a class in session. The door to the room on the right is also open, and there’s no one in there. This is not surprising; at this time in the afternoon, it seems normal that some, but not all, of the classrooms are in use. As I approach A309, however, I see that the door is closed; and what’s more, I can’t hear any sound coming from within.

What should I infer? That since I have no evidence that the room is occupied, it must be unoccupied? Should I throw the door open boldly and march into the room, perhaps whistling, or perhaps muttering something about my pen – confident in the expectation that I won’t be disturbing anyone? I don’t think so. There are several reasons why I might not hear any sounds. There may be a class in session and at this moment the instructor is writing something on the blackboard. Or perhaps there’s an exam in progress. Or perhaps there’s no class, but a small group of students have commandeered the room to study in. Or perhaps there’s just one person in there, asleep in a chair. It would be foolish of me to take the absence of evidence of occupation as evidence of absence. The rational position for me to take here is the agnostic one: I simply don’t know.

And yet Russell, as I read him, and pretty clearly Richard Dawkins and others, are not suggesting that we should be agnostic about the teapot’s existence. They are saying that absence of evidence for the teapot’s existence is good reason to believe in the absence of the teapot. The burden of proof is on anyone who claims there is a teapot in orbit around the sun. So what’s the difference between Russell’s teapot and the mystery of the closed classroom?

In 1952, pre-Sputnik, a belief in the orbiting teapot would have contradicted everything known about the laws of physics, the way teapots are formed, the capabilities of rockets, etc. In other words, there were good reasons, based on empirical evidence, for disbelieving in the teapot. Because of this, the burden of proof lay with anyone who claimed such an object existed. Similarly, we have reasons to believe that people cannot fly around on broomsticks; we have reasons to believe that it was Mummy who painted the Easter eggs and Daddy who hid them in the garden – not the Easter Bunny. The existence of invisible dragons in the garage or fairies at the bottom of the garden would violate the laws of nature as we understand them, and there are no natural phenomena whose existence requires dragons or fairies as an explanation. The burden of proof, then, rests with those who claim such things exist.

Even today, post-Sputnik, there are reasons to disbelieve in the orbiting teapot, though not as many as there used to be. But if we were to read on the TASS website that a Russian astronaut had taken the ashes of his beloved grandmother, sealed in her favourite teapot, into space and then, while on a spacewalk, launched the teapot into solar orbit, and if this story were corroborated by other normally reliable sources, the burden of proof would be dramatically shifted. It would now be irrational to disbelieve in the orbiting teapot.

However, the ultimate nature and origin of the universe is both a scientific and philosophical mystery. In the absence of some understanding of why a universe exists (i.e., the conditions that precede or are external to the Big Bang, space, time, matter, and the laws of nature), we have no grounds for saying that one sort of ultimate explanation is any more likely to be true than another. There is no way we can get outside the universe to examine the conditions of its existence. (Even scientific evidence that our universe is eternal would not rule out an Intelligent Designer, since a hypothesized Designer would have created all the parameters of our eternal universe, including time itself.) Dawkins and company assume that the hypothesis of a Designer is less probable than the hypothesis of no Designer – that is, they assume that we have some reason to believe that a Designer does not exist. But surely there are no grounds for this assumption – no grounds, that is, for treating atheism as the default rational position.

There may be evidence – e.g., the undeserved suffering of animals and small children – that the benevolent Christian god does not exist, but that's another matter. Indeed, I would say there’s good reason to be an atheist on that score. In other words, we have weighty evidence that conflicts with that hypothesis. But remember, we’re after bigger things here. When it comes to the question of whether there is some kind of Intelligence at the ultimate root of existence, I must conclude with T. H. Huxley – Darwin’s original bulldog – that agnosticism is the only rational position. Atheism in this matter, it seems, is just another leap of faith.

Let’s look specifically at Richard Dawkins’ position. Dawkins does not confine his atheism to the Abrahamic god of Jews, Christians, and Muslims. In The God Delusion he wants to demonstrate that any kind of deism or belief in an ultimate Creator is to be rationally rejected. The heart of Dawkins’ argument is not the so-called problem of evil, which lends weight to arguments against benevolent deities; Dawkins is out to exterminate not just benevolent Creators but malevolent and indifferent Creators as well.

Dawkins will not tolerate agnosticism in this matter; he argues that the rational person will believe in the non-existence of any ultimate Creator, or Intelligent Designer (whether good, bad, or ugly), on the grounds that the existence of such a being is extremely improbable. Note again that Dawkins is not simply claiming that we have no good reason to believe in a Creator; that’s the namby-pamby claim of an agnostic. Rather, he is claiming we have good reason to believe in the extreme improbability of there being an intelligence at the root of existence.

The ancient teleological argument for the existence of God, famously articulated at the beginning of the nineteenth century by William Paley, who likened God to a watchmaker, is an argument from improbability. It points out that the natural world exhibits many complex systems with specifiable functions or goals, and maintains that it is highly improbable that such systems could exist without having been intelligently designed. From this it draws the conclusion that the universe must have an Intelligent Designer. In a well-known modern form the argument is that at least some natural systems are “irreducibly complex” and so require an Intelligent Designer. It also appears as the fine-tuning argument, which points to the extreme improbability of various fundamental physical constants’ all having the correct values to enable the existence of life. (In other words, how credible is it that our universe has won the jackpot in a lottery with odds of one in a bizillion?)

Dawkins turns the teleological argument back against itself. He says that an intelligence capable of designing a universe would itself have to be supremely complex. Hence the existence of a Creator must on the face of things be highly improbable, meaning that such a Creator stands in need of further explanation. The alternatives are (1) another intelligent designer, in which case we are potentially into an infinite regress, or (2) a non-intelligent natural mechanism that builds increasingly complex systems from simpler ones. Darwinian natural selection is an example of such a non-intelligent mechanism at work in the evolution of organisms. That an analogous mechanism (what Dawkins labels a “crane”) might apply to the origin and development of entire universes is of course speculative.

If a complex physical system is created by an intelligence, why must that intelligence be equally complex? What does “equally complex” mean in this case? Couldn’t a Designer just create matter and the laws of nature and then stand back and watch things evolve? Perhaps the problem has to do with the Creator’s assumed omniscience: this god must know everything about the universe’s complexity. But does knowing everything about the complexity of the universe require being as complex as the universe? (Dawkins says “the biologist Julian Huxley, in 1912, defined complexity in terms of ‘heterogeneity of parts’, by which he meant a particular kind of functional indivisibility.”)

As for Occam's razor, the claim that positing a Designer unnecessarily complicates the explanation begs the question: the claim assumes that we know something about the probabilities of how and why universes exist. But we don't, and so we have no way of knowing whether positing an Intelligent Designer complicates or simplifies the explanation.

In any case – and this seems to me to be the nub of the problem – in applying his improbability argument to the question of a Creator’s existence, Dawkins seems to assume that the existence or non-existence of this Creator must be determined by the same laws and logic that obtain within our universe. But why should the laws and logic of this universe apply “outside” the universe, to a transcendent being who, ex hypothesi, created the universe, including its laws and logic? Positing a Creator is not akin to positing fairies at the bottom of the garden. In assuming that a Creator must be bound by the laws and logic of this universe, isn’t Dawkins begging the question? Isn’t he assuming the very thing that needs to be demonstrated: namely, that there cannot exist a being whose nature is not fully explicable in terms of the characteristics of the natural world?

Jean Kazez, the creative goddess of this blog, thinks that the idea of an Intelligent Designer is “utterly outlandish”. Perhaps it is, but if so, there ought to be a reason, and I can’t think of a reason that isn’t ultimately question-begging. So come on, atheists, help me out. What’s a good reason to believe in the non-existence of any Intelligent Designer?

By the way, I define myself as a radical, militant agnostic. A radical agnostic denies even the possibility of humans ever rationally knowing whether there is an intelligence at the root of existence. A militant agnostic says we should darn well do something about this state of affairs. In particular, we should relax about the whole deity issue, break open a cool beer, value our family, friends, and other living creatures, and enjoy this life as much as we can.

6/4/10

Of Hens' Teeth and Baseball Deities (guest post)

Perfect games in baseball are as rare as hens' teeth. In all the tens upon tens of thousands of games played in the major leagues during the modern era (beginning 1901), until the start of this season there had been only 16 perfect games. As Wikipedia points out, more people have orbited the Moon than have pitched a perfect game in the majors. Not only must the pitcher allow no hits (a no-hitter itself being a rare event), but he must exercise unusual control, allowing no walks and hitting no batter. And his team must play flawlessly behind him, committing no errors. So when unheralded Armando Galarraga stood on the mound with two out in the ninth inning on Wednesday in Detroit, having retired 26 Cleveland batters in a row, everyone knew that another bit of history was just the breadth of a hen's tooth away. And everyone knows now what happened next. Galarraga got the batter to ground out, the pitcher himself taking the throw at first base to seal the perfect game - except that the first-base umpire inexplicably called the runner safe. Video replays clearly showed that the runner was out.

After the game the umpire tearfully apologized to Galarraga, but the damage had been done. A player who is unlikely ever to make much of a mark on the game has been unjustly deprived of making the ultimate mark. An umpire who has made an honest mistake will be haunted publicly and privately by that mistake for the rest of his life. In retrospect, should the umpire, on consequentialist grounds, knowing that a perfect game was on the line and that the play at first was close, simply have given the pitcher's team the benefit of the doubt and called the runner out, regardless of what he thought his eyes told him? Bizarrely, major-league baseball does not allow the use of video replays during a game to settle disputed calls on such plays. The commissioner of baseball does have it within his power to overturn the fateful decision. With everyone now agreed that an egregious and hurtful mistake was made, should he annul the umpire's decision and restore the perfect game, or should he instead wipe his hands and say, "The rules are the rules; if I make an exception, I open the gates to chaos, with every disputed play past and future up for grabs"?

I said that until the start of this season there had been only 16 perfect games in well over a century - the most recent of those being in July 2009. But on May 9, 2010, Dallas Braden of Oakland threw a perfect game against Tampa Bay. And then, incredibly, just twenty days later, on May 29, Roy Halladay of Philadelphia threw a perfect game against Florida. And then, just four days after that, Armando Galarraga...

Whoa! What's going on here? says God. I run a tight ship. The rules are the rules. They're not called laws of nature for nothing. That includes statistical laws. People like regularity. They need regularity. It makes them feel comfortable. It's bad enough that I let the Halladay one slip by, but, okay, the guy's a star, I'm a generous deity, so I let him keep it. But this Galarraga's a nobody. Three perfect games in less than a month? Is he trying to make a fool of me? I couldn't let it happen - I made sure the first batter in the ninth hit the ball a mile. But then that outfielder ran it down and made an amazing over-the-shoulder catch. (Who does he think he is: Willie Mays?) So that was it for me. No more Mister Nice God. I decided to take a page from the Descartes playbook and do an Evil Genius number on the first-base ump. I made him clearly and distinctly perceive the runner to be safe at first, even though he wasn't. But, hey, what choice did I have? Even God has a duty to respect the law.
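
For anyone who wants a rough sense of just how improbable that cluster of perfection was, here is a back-of-the-envelope sketch. The only figure carried over from above is the count of 16 perfect games through 2009; the total number of modern-era games, the length of a "month" of play, and the Poisson model itself are loose assumptions of mine, not official baseball statistics.

```python
from math import exp, factorial

# All figures are loose, illustrative assumptions except the 16 perfect
# games cited in the post above.
total_games = 180_000            # rough guess at modern-era games through 2009
opportunities = 2 * total_games  # two starting pitchers per game
perfect_games = 16
p = perfect_games / opportunities  # chance a given start becomes a perfect game

# A "month" of play: roughly 15 games a day for about 27 days.
monthly_starts = 2 * 15 * 27
lam = p * monthly_starts  # expected perfect games per month under a Poisson model

def at_least(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

print(f"Expected perfect games per month: {lam:.3f}")
print(f"Chance of three or more in one month: {at_least(3, lam):.1e}")
```

On these crude numbers the expected count of perfect games in any given month is only a few hundredths, and the chance of three or more in a single month comes out at well under one in ten thousand – small enough, perhaps, to justify a little divine exasperation.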

6/3/10

Taking a Break

Angus Taylor is going to take over for me for the next couple of weeks, while I'm horrendously busy.  This will be a treat for me (I will be lurking!) and for you too. Angus teaches and writes about environmental ethics and animal rights in Victoria, British Columbia.  (What a cool place...)

6/2/10

Are Philosophers Experts?

Reflections on my recent adventure (see the comments) in the atheosphere--

Let's say I read some biologist affirming that there's such a thing as group selection--a tricky, subtle concept I don't understand really well. Group selection may not make much sense to me, but in Socratic fashion, I know what I don't know. I know there's a big debate in biology about group selection, with well-regarded people on both sides, but I don't know all the arguments and positions. I haven't read the literature on this. It's all rather technical. I might not even be able to digest it if I tried. So a hunch about it, OK--I might even have a tentative view. A definitive view, no. Dismiss other people's views out of hand? Certainly not. I definitely wouldn't dismiss the biologist as a clown.

Now let's transpose. Andrew Pessin started his recent HuffPo post by recognizing the paradox of the preface (more here). To some people, the paradox seems preposterous.  Some say with total confidence that there's no paradox at all. To argue for this, in effect they try to solve the paradox. But wait, there are dozens of solutions in the literature, and objections to extant solutions! Without reading them, they nevertheless confidently espouse their own solutions. They also confidently trounce other solutions. Why read the literature? they say. There's no paradox! Or: why should I listen to philosophers if I don't listen to theologians? (To be clear--this is a composite of things said by several people in that comment thread.)

What I have learned: philosophers don't have the stature of biologists (gulp!). The assumption is that you can weigh in on philosophical issues without first studying what philosophers have said about them, in a way you obviously can't weigh in on biological controversies without first studying what biologists have said about them.

That perception really surprises me.  I do regard philosophers (the good ones) as experts--not so much on "the truth" but on the contours of problems.  It's not immediately obvious there's any problem of free will (for example), but the fact that philosophers see it as a problem is evidence that there's a problem. If you don't get it, listen up!  Read more!  If you quickly think you've got the solution, the fact that philosophers debate this endlessly, and there are lots of competing views, is evidence that it's not as simple as you think.  Ditto: the paradox of the preface.

Why should you give the views of philosophers that sort of weight?  Good question.  I have an insider's reasons for doing so.  How does an outsider recognize which groups should be credited with expertise, and what they're experts about?  I won't have more than a few hunches until I've spent some time reading the literature on social epistemology.