Monday, May 21, 2007

Curses

Even though profanity is commonplace in the movies, I've never quite gotten the hang of hearing it in music. Though I rarely swear myself, I'm not intrinsically opposed to hearing others do it. After all, one of my favorite movies is Pulp Fiction, and one of my favorite comedians is Chris Rock. But whenever I hear it in songs, it almost invariably seems coarse to me. Why the double standard?

It may have something to do with my age and generation. Much of the music I heard growing up came through the radio and MTV, both of which censor offensive language. Movies, on the other hand, I was most likely to see in theaters, on video, and on premium cable stations, none of which are known to edit for content. In any case, before the 1990s swearing was not remotely as commonplace in popular music as it was in movies. When Ozzy Osbourne reinvented himself as a reality TV star in 2002, he quickly gained a reputation for having a foul mouth. Yet I cannot recall ever hearing profanity in any of his songs, from Black Sabbath to his present solo records. He came from a generation of musicians in which swearing was rare in any music that got wide radio play. That tendency continued well into the '80s, despite the prevalence of strong language even in family movies like Back to the Future.

Having never followed hip hop, I first noticed the change during the alternative rock boom of the early '90s. Pearl Jam used the f-word in two early hits, "Even Flow" and "Jeremy," though it was so mumbled it often got past the censors. In the mid-'90s, Alanis Morissette brought profanity into the mainstream with songs like "You Oughta Know" and "Hand in My Pocket." Bleeped out words became increasingly common on adult contemporary stations.

Although I am a fan of certain artists who use profanity in their music, I have rarely found that the practice adds anything of value to a song. In Johnny Cash's wonderful cover of Nine Inch Nails' "Hurt," for example, the line "I wear this crown of shit" is changed to "I wear this crown of thorns." Now doesn't the altered version sound so much nicer? Hey, I know it's a dark song, but that doesn't mean I want to be reminded of poop.

I realize that what I'm saying might just be a cultural prejudice. Swearing itself is a curious phenomenon, if you stop to think about it. There's nothing intrinsic to the meaning of swear words that makes people take offense at them. The way we designate them as out-of-bounds, while tolerating other words with the same meanings, is almost superstitious. Sociolinguists, in fact, liken both profanity and racial epithets to the magical words deemed unutterable in certain tribal societies.

There are times when it would almost be perverse not to swear. Even the normally wholesome Bill Cosby couldn't help indulging himself one time during his classic performance Bill Cosby Himself:
I said to a guy, "Tell me, what is it about cocaine that makes it so wonderful?" and he said, "Because it intensifies your personality." I said, "Yes, but what if you're an asshole?"
If you replaced the word "asshole" with a more polite alternative, the joke would simply not work. This suggests that swear words occasionally convey nuances that milder language cannot achieve. Most of the time, however, people resort to swearing as a way of avoiding more descriptive language. In that sense, the real problem with swear words is not so much that they're crude as that they're clichéd. When overused, they begin to take on the quality of the word "smurf" in those old Smurf cartoons, just all-purpose expressions that make the language less varied.

Since movies aim to capture the dialogue of real people, swearing has a well-established place in the movies, even though it can be overdone--and often is. I have more trouble justifying the practice in music, because song lyrics, much more than dialogue, thrive on indirectness. That's one of the reasons that "Blowin' in the Wind" is a far better antiwar song than, say, "Eve of Destruction." In music, it seems, the last thing you want to do is get to the point. Or it could be that I'm just getting old.

Tuesday, May 15, 2007

A paranormal romance

I recently discussed my love of the Richard Matheson novel What Dreams May Come. Now I'm going to talk about another Matheson novel, Somewhere in Time (originally titled Bid Time Return but later changed to fit the movie). I cannot exactly recommend this novel. In fact, I thought the 1980 film version improved on it considerably. Matheson, however, considers this book and What Dreams May Come to be his two greatest novels. He may even have conceived them as one book. The similarities between the two are striking. The protagonists in both books are blond 6'2" screenwriters who live in Hidden Hills, California and love classical music. They each have a brother named Robert who acquires the protagonist's memoirs but cannot bring himself to accept the otherworldly events described in them. Both books are paranormal love stories, but they emphasize different phenomena.

In Somewhere in Time, a 30-something man named Richard Collier has been diagnosed with an inoperable brain tumor and has decided, upon a coin flip, to spend his last days hanging around the Hotel del Coronado, a famous California hotel. There he grows obsessed with the photograph of a nineteenth-century stage actress, Elise McKenna, who once performed there. Through research, he learns that she never married, that she had an overprotective manager, and that she may have had a brief affair with a mysterious man while staying at the hotel. The more Richard learns, the more convinced he becomes that it is his destiny to travel back in time and become that mysterious man.

This is an interesting setup. But the execution is shaky. I could barely get through the first fifty pages, which consist of Richard's rambling journal entries as nothing happens. I realize that Matheson was attempting realism, but I don't consider that a good excuse. A novel must involve the reader or it isn't worth our time. The book's deepest flaw, however, is the love story itself. I simply did not like the character of Elise. The development of her relationship with Richard feels artificial and forced. It's definitely a case of the journey being more interesting than the destination.

The novel's most striking feature is its depiction of time travel. It's probably the only novel I've ever encountered that proposes a step-by-step method that does not require any futuristic accessories or special abilities. The method is presented in such detail that it almost tempts readers to try it themselves. I bet that some have, though I have my doubts that any have succeeded.

Richard bases his method on the theories of a real book, J.B. Priestley's Man and Time. The basic idea is that he uses self-hypnosis to convince his mind that he's in the past. He listens over and over to a tape recording of his own voice declaring that the year is 1896 and listing many details of how his surroundings would look in that year. After discovering that his voice is distracting him, he writes the hypnotic suggestions out on paper, over and over and over. The historical roots of the hotel help reinforce his purpose, as does an 1890s suit he buys for himself.

Of course, skeptical readers may suspect that the time-traveling experience occurs only in Richard's mind. Matheson leaves open that possibility. So if you think the love story is unconvincing and seems more like a lonely man's pathetic fantasy, that may be just what Matheson intended.

The movie does a better job of handling these themes. The plot is clearer and more focused. There's no mention of Richard having a brain tumor, and there seems to be external evidence that his journey really took place, such as an early scene in which an old woman approaches him in a crowd, hands him a pocket watch, and says, "Come back to me." He later takes the watch with him into the past, where he gives it to the woman when she's young. This generates one of the most famous time-travel paradoxes in the movies: the watch that seems to have no point of origin, eternally existing in an endless loop. The book contains other paradoxes of this kind, but presents them more subtly.

While the movie did poorly in theaters and received mostly negative reviews, it went on to become a cult classic, with an actual fan club. Being less than enamored of the love story, I was only able to admire the film's craft. It felt to me like an episode of The Twilight Zone, a show to which Matheson contributed heavily. But neither the book nor the film enthralled me the way the novel What Dreams May Come did, perhaps because this time I wasn't truly permitted entry into a new world.

Wednesday, May 09, 2007

The science of fruits, nuts, and flakes

Some years ago, not long after reading parts of Carl Sagan's The Demon-Haunted World, a book about the remarkable persistence of superstitions among educated people, I discovered that a highly intelligent woman I know believes in astrology. She has a science degree and has always seemed rationalist in her outlook. She described the traits of my sign, Aquarius, and they did seem to fit me fairly well. I don't remember her description now, so instead I'll post what Wikipedia says about the sign:
Individuals born under this sign are thought to have a modest, creative, challenging, inquisitive, entertaining, progressive, stimulating, nocturnal, and independent character, but one which is also prone to rebelliousness, coldness, erraticism, indecisiveness, and impracticality.
I have to admit that that does kind of sound like me. I like to think of myself as creative, challenging, and independent. I know I am impractical and indecisive, and unquestionably I am nocturnal. On the other hand, the article also claims that male Aquarians "are often said to tend to be effeminate in appearance." Hey, I may not be the most macho of sorts, but I sure as hell ain't effeminate looking.

But therein lies the problem. One of the most obvious features of pseudoscience is allowing subjectivity to influence the testing of predictions. Having people examine their own personalities is notoriously unreliable, since people tend not to have accurate perceptions of themselves. What exactly would constitute a "bad" astrological reading? If you didn't feel the description sounded like you? If your friends didn't think so? And how closely does the description have to match your personality in order to prove that its alleged accuracy isn't merely coincidental? (For example, I suspect that there are many creative and independent types not born under Aquarius.) Astrologers never set up any quantifiable boundaries by which their "predictions" can succeed or fail. It's all left up to the whim of the person reading the horoscope.

Still, I can imagine ways in which the claims of astrologers might be tested. Gather a group of people together, and give only some of them their true astrological readings. Give the others a deliberately false reading: the Gemini gets the Scorpio's reading, for example. None of the people know whether they have received a correct reading; perhaps none of them even know that incorrect readings exist. Even the person administering the test doesn't know which readings are correct and which aren't. The subjects are then asked to rate their readings on a scale of one to ten, according to how closely they feel the readings describe them.

Now comes the fun part: compare the reactions of the people who got false readings and the people who got true readings. If the claims of astrology are valid, then the people who got true readings should be substantially more likely to think the readings accurately describe them. If, on the other hand, there isn't much difference between the reactions of the two groups, then the claims of astrology would seem to be bogus.
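The blinded experiment described above can be sketched as a toy simulation. This is my own illustration, not anything from the post: the function name, the sample size, and the assumption that subjects' ratings are random (i.e., the null hypothesis that a reading carries no real information about the reader) are all mine.

```python
import random
import statistics

def run_reading_experiment(n_subjects=200, seed=42):
    """Simulate the double-blind astrology test sketched above.

    Assumption: we simulate the null hypothesis, so every subject's
    1-10 rating is drawn from the same distribution regardless of
    whether they received a true or a false reading.
    """
    rng = random.Random(seed)
    true_ratings, false_ratings = [], []
    for _ in range(n_subjects):
        got_true_reading = rng.random() < 0.5  # blind random assignment
        rating = rng.randint(1, 10)            # null: rating ignores the reading
        (true_ratings if got_true_reading else false_ratings).append(rating)
    return statistics.mean(true_ratings), statistics.mean(false_ratings)

mean_true, mean_false = run_reading_experiment()
# If astrology worked, mean_true should be substantially higher than
# mean_false; under the null they should come out roughly equal.
print(mean_true, mean_false)
```

Under these assumptions the two group means land close together, which is exactly the "not much difference" outcome the post describes; a real study would of course replace the random ratings with actual subjects' responses and apply a significance test to the gap.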

Has anyone ever tried such an experiment? Do believers in astrology even care? Hey, I know that even this test probably wouldn't pass muster with the scientific community. It still has the problem that people are examining their own personalities, rather than being objectively evaluated by a disinterested outsider.

Of course, many scientists will dismiss astrology out of hand simply because of its premise that celestial bodies many light years away can have a perceptible effect on human behavior. But I'm willing to entertain this premise for the sake of argument, because astrology in principle does make real predictions about observable facts. If those predictions were scientifically confirmed, we'd have to concede that there's something to the system, no matter how absurd its philosophical underpinnings may sound. Skeptics can take comfort from the fact that we're a long way from ever having to confront that possibility.

Thursday, May 03, 2007

Those other Jews

I will now quote an anecdote from Alan Dershowitz's wonderful book Chutzpah, because I think it provides considerable insight into the way a bigot's mind works:

"I related the story of a slightly eccentric Brahmin woman from Boston who came to see me about a legal problem. I concluded that her problem was not within the area of my expertise, so I recommended another lawyer. She asked whether he was Jewish, and I responded, 'What difference does that make?' She said that she didn't 'get along very well with Jews' and didn't know whether she could 'trust them.' I asked her why she had come to me, since I was obviously Jewish. I'll never forget her answer: 'The Jews I know are all fine. I have a Jewish doctor and a Jewish pharmacist whom I trust with my life. It's those other Jews--the money-grubbing ones, the dishonest ones--that I'm not comfortable with.'

"I pressed the Brahmin woman about whether she had actually ever encountered one of 'those' Jews, and she responded, 'Heavens, no. I would never allow myself to have any contact with such a person.' The lawyer I recommended happened to be Jewish, and the two of them got along famously." (p. 99)

Wednesday, May 02, 2007

The most overlooked novel

We generally expect books to be better than their film adaptations. The most extreme example from my experience may be What Dreams May Come. I strongly disliked the 1998 film, and I probably would have quickly forgotten about it, except that I noticed it was based on a 1978 novel by Richard Matheson, whose fiction I had enjoyed previously. So I picked up a copy from the library. It has since become one of my favorite books.

I am not alone in this reaction, but I am in a minority. Most people haven't read the book, and most people won't even try. There are just too many preconceptions standing in their way. And, unfortunately, the movie has helped further those preconceptions. People assume it isn't the type of book they'd like to read. They think it is too "New Agey" for their tastes, or too sentimental. On the first point, they are probably correct; on the second, they couldn't be more wrong.

Why do I like the book so much? Put simply, it is the most vivid, complex, and surprisingly convincing depiction of afterlife I have ever encountered in a work of fiction. Nothing else I have seen on the subject, in literature or in film, comes close--certainly not the movie version of the book. Before I read the novel, I had no idea that a story about Heaven and Hell could have such a profound effect on me. I have always believed in afterlife as a matter of faith, but I would never before have thought it could be convincingly described in human terms.

The story involves a middle-aged man named Chris who dies and goes to Heaven, and who ultimately descends into Hell to rescue his wife. It's basically a modern-day variation on The Divine Comedy, as a man gets a tour of the afterlife. In the metaphysics of the film and the book, dying involves shedding your physical body and entering a mental environment shaped by thoughts. Your fate in such an environment is largely self-imposed. If you're a decent, pleasant person, your afterlife experience will be pleasant; if you're someone with moral problems, you'll naturally have a more difficult experience. As the novel puts it, "People are not punished for their deeds but by them" (p. 265).

That much of the movie intrigued me, the first time I saw it. The problem was the schmaltz. I mean real schmaltz, piled on in large mounds, in place of strong narrative. The beginning gives us scene after scene of two lovers (Robin Williams and Annabella Sciorra) doing little more than stare at each other and giggle, with solemn music in the background. The film is so inept at fleshing out their lives that it shows their wedding, the tragic death of their children, and fifteen years passing before it reveals out of the blue that Chris is a doctor! The performances by Williams and Cuba Gooding, Jr. are surprisingly atrocious, maybe because they could scarcely believe their own lines.

For those who have only seen the movie, it's hard for me to convey just how very different the novel is. Of course there are major differences in the plot. One such difference is the ending. (Even Roger Ebert, who heaped high praise on the film, was disappointed by the ending.) Another is the beginning, where the film has Chris's children also die and go to Heaven. In doing this, the movie (1) makes the early scenes so depressing they become surreal, (2) needlessly clutters the story with extra characters, and (3) introduces a silly and confusing subplot about Chris's attempts to find his children, who are in disguise.

In the book, Chris's children are adults, not youngsters, and they're minor characters. The details of Chris's life on Earth differ so greatly between the book and the film that it's like reading about a completely different person. Even though I saw the movie first, the image of Robin Williams completely vanished from my mind as I read, because he was so unlike the character described in the book.

The entire feel of the book is different, telling a touching love story that uses real characterization, not cheap manipulation, to move the audience. And Matheson's vision of the afterlife truly comes alive on the page. The Hell scenes are actually terrifying, reminding us, as the movie does not, why Matheson is primarily famous as a horror writer.

I won't overlook the movie's gorgeous visual effects, which earned the film a well-deserved Academy Award. They just aren't put to good purpose. The movie's vision of the afterlife as like being inside giant paintings fails to evoke a sense of reality. The book, in contrast, bases its afterlife imagery (vividly brought to life by Matheson's skillful prose) much more on Earth-like scenery. This approach ironically leads to far more exotic ideas, such as architects who build things using their minds, and a library containing history books more objective than those on Earth. Matheson puts the reader right inside this setting, as the following passage illustrates:
I noticed, then, there were no shadows on the ground. I sat beneath a tree yet not in shade. I didn't understand that, and looked for the sun.

There wasn't any.... There was light without a sun. I looked around in confusion. As my eyes grew more accustomed to the light, I saw further into the countryside. I had never seen such scenery: a stunning vista of green-clad meadows, flowers, and trees. Ann would love this, I thought.

I remembered then. Ann was still alive. And I? I stood and pressed both palms against the solid tree trunk. Stamped on solid ground with my shoe. I was dead; there could be no question about it any longer. Yet here I was, possessed of a body that felt the same and looked the same, was even dressed the same. Standing on this very real ground in this most tangible of landscapes....

I turned my hands over and noticed that their skin and nails were pink. There was blood inside me. I had to shake myself to make certain I wasn't dreaming. I held my right hand over my nose and mouth and felt breath pulsing warmly from my lungs.... (pp. 55-6)
And here is a passage from one of the book's Hell scenes: "Now I saw that, interspersed throughout the area we crossed, were pools of dark and filthy-looking liquid; I hesitate to call it water. A loathsome stench beyond that which I had ever been exposed to rose from these pools. And I was horrified to see movement in them as though unfortunates had slipped beneath the surface and were unable to rise" (p. 183).

Matheson, as I mentioned, is a famous horror writer. One of his unique qualities is his almost scientific approach to the supernatural. As Roger Ebert explains in his review of the Matheson-penned Legend of Hell House:
Matheson labored for years in the elusive territory between straight science fiction and the supernatural horror genre, developing a kind of novel in which vampires, ghouls, and the occult are treated as if they came under ordinary scientific classifications.

There was, for example, the Matheson classic I Am Legend.... In that one, a single normal man held hundreds of vampires...at bay by figuring out the scientific reasons for old medieval antivampire measures like mirrors, crucifixes, and garlic. The Matheson novels of the 1950s and early 1960s anticipated pseudorealistic fantasy novels like Rosemary's Baby and The Exorcist.
In What Dreams May Come, Matheson makes Heaven and Hell seem like a scientific, natural process, and one of the joys of the book is discerning all the intricate "rules" of how everything works. (That's another area where the movie falls short.) What needs to be kept in mind, however, is that Matheson doesn't do this just for entertainment purposes. In the novel's introduction, he tells his readers that the characters are the only fictional component of the novel, and that almost everything else is based on research. The book even includes a lengthy bibliography. Thus, the afterlife that Matheson describes isn't some fantasy world he concocted from his own head, but something he believes to be an accurate description of reality.

Some people may wonder, at this point, about Matheson's religious background. He was raised a Christian Scientist, but gradually developed what he calls his own religion, taking elements from many sources. One of the book's main influences, I believe, is eighteenth-century Christian mystic Emanuel Swedenborg.

The book avoids seeming "religious" in any way other than its setting. There are no deities, no mention of Christ or anything specific to the theology of a particular religion. The characters occasionally talk about God, but it is left up to the reader to decide whether the Creator in charge of this system is at all like the Judeo-Christian God or like something more pantheistic. The book even implies that no single religion has the whole truth:
"For instance, you'll find, in the hereafter, the particular heaven of each theology."

"Which is right then?" I asked, completely baffled now.

"All of them," he said, "and none. Buddhist, Hindu, Moslem, Christian, Jew--each has an after-life experience which reflects his own beliefs. The Viking had his Valhalla, the American Indian his Happy Hunting Ground, the zealot his City of Gold. All are real. Each is a portion of the overall reality.

"You'll even find, here, those who claim that survival is nonsense," he said. "They bang their nonmaterial tables with their nonmaterial fists and sneer at any suggestion of a life beyond matter. It's the ultimate irony of delusion." (pp. 90-1)
From what I've seen, people react negatively to this book based on how far it departs from their personal beliefs. Christians complain about the absence of Jesus, while those who don't believe in any afterlife consider the story too nonsensical to accept. Most readers, it seems, are put off by the New Age terminology and concepts scattered throughout the book.

These reactions are puzzling, if you stop to think about it. Books about elves, fairies, dragons, and wizards remain popular even though nobody believes in any of those things. Why should people be bothered by a fiction book portraying a Heaven and Hell that conflicts with what they believe? The book is perfectly enjoyable whether or not you accept Matheson's metaphysics.

Of course, I personally do think Matheson provides insight into the subject--though I admit I'm a little wary of his acceptance of paranormal phenomena. But it amazes me how so many people refuse to even touch the book, thinking that any story with such a plot must automatically be hokey. In most cases, they'd be right. What Dreams May Come is a big exception. It suggests the endless possibilities in a subject that normally is dead weight for fiction. And it really makes you think.

Wednesday, April 25, 2007

Ultra-Beautiful

When I was a child, I knew of only two divisions in Judaism: frum and not frum. From a Yiddish word meaning "pious," this is the Orthodox Jewish way of designating observant Jews. The word "Orthodox" itself seemed fairly alien to me, used mostly as a formality. Later, I became aware of a distinct sub-group called "Modern Orthodox," which I first conceptualized as frum Jews who ignore strictures against mixed dancing. Later still, I began hearing the term "ultra-Orthodox," which my friends and family perceived as a vague slur applied by ignorant outsiders to any Orthodox Jews they considered too extreme. We were irritated, and a bit perplexed, by the media's increasing use of the term as though it were an objective, neutral description of a distinct group.

By now, the term "ultra-Orthodox" has become so standard in the media that people use it without blinking an eye. I'm torn on the subject, wondering if I should still fight the trend, or just give in. There are two primary issues here. The first is whether the term is inherently pejorative. The second is whether such a group as "ultra-Orthodox" really exists.

To answer the first question, we need only look at the history of the prefix ultra. According to the Online Etymology Dictionary, the term originally meant "beyond" (as in ultraviolet) but came to mean "extremist" when applied to political movements.1 The connotation is that such movements are "beyond the pale," which is obviously a value judgment applied by outsiders, not a term people would normally apply to themselves. Occasionally you will find people today who proudly identify as ultraconservative or ultraliberal, but there's no question that those terms were originally intended as insults. It's like when some blacks call themselves by the N-word.

One blogger told me that he doesn't mind being called ultra-Orthodox, just as he wouldn't mind being called ultra-beautiful or ultra-smart. That's an interesting argument, but I think it proves my very point: people rarely use phrases like "ultra-beautiful," because the prefix ultra is generally reserved for insults, which is almost certainly what was originally intended by the term "ultra-Orthodox."

As an experiment, I googled the phrase "ultra-Orthodox." Of the first ten hits, two are Wikipedia articles, two are allegedly neutral news articles, one site complains about the term, and the remaining five are sites bashing ultra-Orthodox Jews. You may consider this result too small a sample to draw a conclusion, but I invite anyone to try the experiment on a larger scale. You will likely find what many of us have sensed all along, which is that "ultra-Orthodox" is widely used as an insulting term, and almost never used in a complimentary sense.

Of course, it is possible to take a pejorative expression and wear it defiantly, as a badge of pride. But so-called ultra-Orthodox Jews have made no collective attempts to do so. Those rare few who self-identify by the term are, I suspect, surrendering themselves to a trend they feel powerless against, rather than eagerly embracing the term.

Because the secular press regularly treats the term as a neutral expression, and because the term simultaneously exists as an insult, the people who use the term insultingly have gained a significant rhetorical advantage. It has become one of those words like fundamentalist where you can pretend to be neutral when you're actually invoking a stereotype. I'm reminded of an article by Rabbi Shmuley Boteach in which he describes religious fundamentalism as immoral and destructive.2 He never bothers to provide a precise definition of "fundamentalism," and he seems unaware that the standard definition would apply to his own religious practice. He unblinkingly defines the term according to the popular negative stereotypes associated with it. Once you start incorporating stereotypes about a group into the very definition of that group, you've won the argument before you've even started. The term "ultra-Orthodox" has that same two-faced quality.

That brings me to the second question. Does "ultra-Orthodox," insulting or not, refer to an actual group? It is supposedly the English equivalent of Haredi. There are certainly many Jews who self-identify as Haredi. Though there are also people who bash Haredim, nobody considers the term in itself to be insulting. It is fairly neutral, neither positive nor negative. (Repeating the Google experiment with "Haredi," I found three web definitions, one actual Haredi site, one site bashing the non-mainstream Neturei Karta sect rather than Haredim in general, and five neutral articles. The articles, I should mention, are on the whole more respectful than the ones I found in the "ultra-Orthodox" search.) There probably ought to be a campaign to have the secular press adopt the term, but for now it is rather obscure, known only to Orthodox Jews and occasional outsiders.

The problem, which few people acknowledge, is that Haredi is vague and imprecise. It presupposes that Orthodoxy can be neatly divided into two groups, those who reject the outside world and those who embrace it. The former are Haredi or "ultra-Orthodox," the latter are Modern Orthodox. This classification has been widely reported in the media, but it would raise the eyebrows of most Jews in my native Baltimore. Baltimore's Orthodox community is very largely "yeshivish" or "black hat," two insider terms referring to non-Hasidic Jews who are stricter in their observance than Modern Orthodoxy. By the two-pole classification, that would constitute Haredi. But most Baltimore frummies do not fit the standard definition of Haredim. Most people here have a strong work ethic, for example, and there has been no community ban on using the Internet in one's home. Anti-secular attitudes exist here but do not generally prevail.

Modern Orthodox, for that matter, covers a wide range of attitudes and practices. Some people have attempted to recognize a third group, "centrist," represented most prominently by the Orthodox Union and Yeshiva University. This would cover Jews who are strict in their observance but who embrace the outside world. In fact, it is pretty common to hear Orthodox Jews using phrases like "left-of-center," "right-of-center," "far left," etc., as though Orthodoxy resembled the left-right political spectrum in the secular world. While still a simplification, this outlook is a vast improvement over those who conceptualize Orthodoxy as two distinct "camps."

Thus, it's important to understand that when Orthodox Jews use a term like "Haredi," they usually recognize how blurry the dividing line is. But what about people outside the Orthodox community, especially those with little knowledge of Orthodoxy? Those people are likely to be considerably less understanding--and they're also more likely to use a term like "ultra-Orthodox" instead of "Haredi." It's no wonder, therefore, that most people who use the term "ultra-Orthodox" use it thoughtlessly, without a clear picture of what they're referring to. For many people, it's just a code word for "Jewish religious nut." Hence, it's not uncommon to see the term applied ignorantly to Religious Zionists, even though that group is usually distinct from the Haredim, at least in Israel.

If we were to run a successful campaign and the media were to stop saying "ultra-Orthodox" and to start saying "Haredi" instead, it wouldn't solve everything. Outsiders would continue to oversimplify the dynamics of the Orthodox community. But it would be a start. A few people might think twice before applying such an exotic term with such a broad brush to people they don't know.


Friday, November 17, 2006

In defense of Orthodox liberalism

Cross-posted at DovBear's blog

R. Harry Maryles writes in this post, "It is a fact that the conservative principles are generally more in line with Orthodox Judaism than are liberal principles. Although that isn’t 100% the case, I think it is true most of the time."

I beg to differ. But I should note that if Harry had begun the sentence with "It is my opinion..." rather than "It is a fact..." I would not have objected. He is entitled to his views, but they are debatable. Still, I have heard similar sentiments from many other frum people, and it is a topic worth discussing.

A large part of what has inspired the rightward shift among frum voters in recent decades parallels the influences on evangelical Christians: the "traditional values" of which the Republican Party has appointed itself the sole bearer. While those values have nothing to do with the conservative philosophy of unfettered capitalism, Republican politicians created a marriage between these two meanings of conservatism. It is an unhappy marriage. Religious conservatives were duped by Reagan, and many of them have recently woken up to the fact that they've also been duped by Bush.

I've always been amazed at the mental acrobatics of those who argue that Judaism fits the philosophy behind economic conservatism. Their rationale depends partly on the standard but inaccurate translation of tzedakah as "charity." In modern American society, charity is simply a praiseworthy act. In ancient Israel, however, tzedakah was the law of the land. The conservative tenet that we must encourage volunteerism in place of government aid runs contrary to much traditional Jewish thought.

When I raised this point on Harry's blog, Bari noted differences between the ancient Jewish system and modern liberal programs. For example, in halacha a person gets to decide which poor people to give to. When I pointed out that one of the highest forms of tzedakah is giving to someone unknown, Bari replied, "And it's theft if you take it from me to give it to someone else who I don't know. When the govt. does it, maybe it's not theft, but it's not right Al Pi Din Torah."

Bari is walking on thin ice here. Either you think that it's okay to have the government enforce donations to the poor, or you don't. If you don't, but you make an exception for Judaism's specific mandates, and you declare anything else to be "theft" or something close to it, then you're not being philosophically consistent.

Having said that, I should point out that there is a good deal more to politics than philosophy. I don't fault any frum person for taking conservative positions on particular issues. There is room in Yiddishkeit for a variety of political perspectives, once we move past ideology and get into specifics. The problem is that many of us have a hard time stepping outside our own political perspectives and acknowledging that other viewpoints have legitimacy. When we feel strongly about an issue, it is easy to fall into the trap of ascribing simplistic motives to the other side and of not recognizing how complex the issue really is. I'm sure I have been guilty of this before, but I definitely see it in frum conservatives. It is implicit in Harry's statement that "conservative principles are generally more in line with Orthodox Judaism," which almost makes it sound like we can just do a head-count of political positions and declare this one as being more in line with Torah values, that one as being less, and so on.

So let me be clear: On almost any major issue in American politics today, a case could be made for both sides without sacrificing one's commitment to Torah principles. There are possible exceptions, like gay marriage or opposition to stem-cell research. But most issues fall into one of the following three categories:

1) Issues where the Torah's view is irrelevant. One example is gun control. Occasionally I have heard Orthodox rabbis on both sides of this debate attempt to "spin" their favored position as more Torah-based, but their arguments are unconvincing, for the disagreement (properly understood) does not stem from any fundamental difference of values and has no real bearing on halacha. So too with the vast majority of American political issues.

2) Issues where the Torah's view is relevant, but where there is still rabbinic support for both sides. An excellent example is the death penalty. Harry's mentor R. Ahron Soloveichik not only opposed the death penalty but believed that every Jew should oppose it as well.

3) Issues where Jewish law may seem more in line with one side, but where pragmatic considerations might tilt it the other way. This category includes many "social issues" that religious conservatives focus upon, such as abortion.

In sum, I welcome debate on the specifics of any issue. At the same time, I believe that there is much in common between traditional Judaism and many core liberal ideals. It's not absolute, but then neither is the pact that R. Lapin and co. have attempted to make with the Christian Right. And frankly I think the latter poses a greater danger to our freedom as Jews than the fuzzy liberal tolerance that so many frum people claim to despise. Christian conservatives may play nicey-nice to us, but in the long run they're being disingenuous, as becomes clear in the slip-ups by the less shrewd among them (e.g. Katherine Harris). You have to be extremely deluded to believe that the Christian Right views us as an equal partner. No doubt we should stand up for what we believe in, whether economic or social, but we must also be careful not to be so blinded by ideology that we enter into an unhealthy relationship.

Saturday, November 11, 2006

Too stupid for chess


It's one thing to know that someone is smarter than you; it's quite another to be reminded of that fact week after week after week.

From childhood on, I played chess regularly with a friend of mine. He beat me a good majority of the time, which made the game a tiring experience for me. I could have viewed my losses as a challenge, an incentive to work harder. But these were times when all I wanted to do was relax, and the mental effort needed to keep track of a chess game just didn't inspire me.

Occasionally, we played other games, where our skills were more even. We even invented a new game we called "losing chess." While we weren't the first to come up with either the idea or the name, our version was somewhat original. In the "standard" form of losing chess, the object is to force the opponent to capture all your pieces, and the king holds no special importance. But in our variant, the object was to force a checkmate on your own king--in other words, to expose your king to capture when the opponent is threatening no other piece. This game turned out to be rather interesting and unpredictable. I tended to win the game (or "lose," if you will) slightly more often than he did.

I still tried to improve my skills in regular chess. I read books about chess strategy. I downloaded a fairly decent chess program to examine the strategies of a computer player. That actually kept me busy for some time, but as with all other computer games played with myself, I grew bored of it. In any case, my friend continued to beat me.

Only in the last few years did I take a real interest in checkers. Windows XP comes with a game called Internet Checkers. The computer sets me up with another actual player. The only information I get about the other player is his language and skill level. Players get a choice of three skill levels: Beginner, Intermediate, and Expert. I am set up with a player of the same skill level as long as one is available. Since the program rarely takes much time in setting up a game, it seems that numerous people around the world are constantly using this software. The only other possibility is that I'm unknowingly being set up with a computer player at least part of the time, though the instructions give no indication that the program ever does such a thing, and I believe I can tell the difference between a computer player and a human.

I can send the other player a message from the following pre-set list: "Nice try," "Good job," "Good game," "Good luck," "It's your turn," "I'm thinking," "Play again?," "Yes," "No," "Hello," "Goodbye," "Thank you," "You're welcome," "It was luck," "Be right back," "Okay, I'm back," "Are you still there?," "Sorry, I have to go now," "I'm going to play at zone.com," ":-)," ":-(," "Uh-oh," "Oops!," "Ouch!," "Nice move," "Good jump," "Good double-jump," "King me!" I assume that the comments get translated into whatever language the other player speaks. While I'm usually set up with another English speaker, I have also frequently been set up with players who speak German, French, Turkish, Arabic, Hebrew, Thai, and many other languages. No wonder there seem to be players available at all times of the day, and the night as well.

When I first started playing, I had very little knowledge of the game. I had played checkers before, and I was familiar with the rules. But I knew no strategies or techniques, except for a belief that I should avoid moving pieces in the back row. The first strategy I devised was a simple copycat routine: as long as I was the player who moved second, I could simply imitate the other player, doing a mirror image of his every move. Of course, the game always reached a point where I could no longer do this. Sometimes the opponent's opening move made it impossible for me to follow the copycat routine. But this routine usually got me to a point where I could find an advantage, and I did often win the game when playing as a Beginner.

I began to learn some tricks. One rule that casual checkers players frequently ignore is that when you can capture, you must capture. I think people avoid this rule because they feel that it limits their choices. But the computer versions of checkers require you to play by this rule. I began to discover that this rule is what makes the game so fascinating and unpredictable. Because you can force your opponent to capture a piece, you can make him do things he didn't expect to be doing, and then gain a sudden advantage. The simplest form of this technique is when you force your opponent to capture one of your pieces, and you end up taking two in return. I discovered this technique on my own, in a situation that frequently occurs toward the beginning of the game, at the side of the board. I became quite adept at making my opponent fall for this trick. But the opponent must be gullible enough to put himself into this vulnerable position. Moreover, I had to learn to avoid putting myself into this position. Because this is one of the simplest tricks, even players with modest experience usually know better than to allow it to happen. But they remain prepared should the opponent make himself vulnerable to this move. It's one of the litmus tests early in the game that makes it easy to tell the experienced players from the novices.

After I discovered that I was beating Beginner players the majority of the time, I moved my skill level up to Intermediate, and soon after to Expert. Of course there was no guarantee that I was playing someone who actually was on that skill level; all it meant was that the player identified himself as such. But I did beat Expert players less often than Beginner players, and while the challenge intrigued me, I sometimes went back to the Beginner level for relaxation. By then I had given up the copycat routine and started to learn more sophisticated strategies.

Finally, I got a book out from the library on checkers techniques. Reading this book greatly refined my skills, teaching me techniques that I still use to this day. For one thing, I radically changed my opening strategy. I used to open the games most often by moving my side pieces first. Apparently, this is a common error that beginners make. They move the side pieces because the side pieces are less vulnerable. Unfortunately, this strategy is weak in the long run, because it doesn't help break through the opponent's defenses. It is best to start by moving the pieces in the center of the board, and to keep one's pieces close together. I also learned that moving the back row is not necessarily to be avoided. What I should avoid is moving the second and fourth pieces in the back row, but the first and third often can safely be moved early in the game. I also learned to keep my own double-corner well-protected, and to work on attacking the opponent's double-corner.

Getting used to these new techniques took some time. At first, I experienced some difficulties, and it appeared that I was getting worse, not better. But I soon realized that I was simply taking time to get accustomed to using the techniques properly. This new opening strategy made it easy for me to fall into a devastating trap, where the opponent would get a two-for-one and a king. But as I learned more caution, I began to see the great advantages of this strategy. I began to win games without using any tricks, simply by having a strong defense and by either making the opponent's back row vulnerable or putting him into a position where he couldn't move except by exposing his pieces to attack.

Of course, I also learned some more advanced tricks, not just from the book, but from a checkers program I downloaded where I got to examine a computer player's strategies. I learned complicated moves that involved making my opponent capture one piece, then another, and then finally launching a devastating attack that he had no idea was coming. To play checkers well, you have to be able to think four-dimensionally, to anticipate future moves by visualizing the board in other configurations.

The strategies for handling game endings are just as complex. For starters, if you have two pieces left and your opponent has only one, you can definitely beat him--but it requires some practice to learn how. If you have three pieces and your opponent has two, your best bet is to force him to take one of your pieces in exchange for one of his. It is possible for him to prevent you from doing this, using one or both of the double corners, and such a game will end in a draw even though you have more pieces.

I wanted a way to track my progress. Internet Checkers does not record wins or losses, so I created my own file to keep track of that information. Next to each skill level, I wrote how many games on that level I won, how many I lost, and how many ended in draws. Here are my current stats, which of course keep changing:

Expert: 2697/964/365
Intermediate: 422/141/38
Beginner: 830/101/278

According to this record, I win approximately two-thirds of the time at any skill level. But I suspect my Beginner and Intermediate record would be much higher if not for the fact that most of the games I played at those levels occurred long ago, before I improved my skills. I regularly play the Expert level now, only rarely venturing to lower levels.
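That two-thirds estimate is easy to check from the tallies above. Here is a minimal sketch in Python, just arithmetic on the numbers I recorded (nothing here reflects how Internet Checkers itself works):

```python
# Each entry is (wins, losses, draws), in the order I record them.
stats = {
    "Expert": (2697, 964, 365),
    "Intermediate": (422, 141, 38),
    "Beginner": (830, 101, 278),
}

for level, (wins, losses, draws) in stats.items():
    total = wins + losses + draws
    print(f"{level}: {wins / total:.0%} wins out of {total} games")
# → Expert: 67%, Intermediate: 70%, Beginner: 69%
```

So the win rate really does hover around two-thirds at every level, with Beginner and Intermediate only slightly higher than Expert.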

I have my own rules for determining whether I have won, because Internet Checkers has an annoying but understandable loophole: any player can abandon a game at any time. Thus, if a player is losing, he may simply quit without selecting the "resign" option. Sometimes this is not mere rudeness: computer and Internet glitches can cause a game to be ended prematurely. But I made a personal rule that if a player quits and I am clearly ahead, with more pieces, I record that in my personal file as a win. Similarly, if a computer glitch causes the game to be terminated and the opponent is ahead, I record it as a loss. If I'm at the end of the game and the opponent refuses to draw even though he clearly can establish no advantage, I quit the game and record it as a draw. I have recently adopted a 40-move rule used in official competitions, which says that if a player can gain no advantage in 40 moves, the game is automatically a draw.

These personal rules which I have concocted are relevant only to myself. The opponent doesn't know that I play by them, because I have no way of communicating them to him, given the limited pre-set list of comments we can pass between each other. I am somewhat amazed at the lame tricks that some of my opponents attempt to pull. For example, if they are losing, many of them will ask for a draw, hoping I will accidentally click "yes" when the pop-up window appears. On rare occasion I have even fallen for this trick, but so what? It makes no difference as to the truth of who won, and the truth is all that matters to me when I keep my personal record. I view these games as practice and recreation, and the recognition of having the computer say "You won!" means nothing to me.

Apparently, all this practice has paid off. After coaxing my friend to play checkers with me, I discovered that I was suddenly much better at the game than he is. He's still good--he has a strategic mind. But he's nowhere near my level, and he hasn't come close to reaching it. I've actually become a sort of tutor to him, showing him some of the techniques I've learned and giving him corny advice, like "The best offense is a good defense."

Why did I discover such skill at checkers, when I was always so hopelessly bad at chess? Part of the reason is that I stumbled upon the software that allowed me to compete with real players. This not only has kept me from growing bored of the game, but has been enormously good practice. Most of what I know now, I learned simply from the experience of playing, not from the strategy books. Perhaps if I were to start playing Internet versions of chess, my skill at that game would improve as well. But so far I lack the interest. Checkers just seems more suited to me than chess. It's a much simpler game, with far fewer rules. You have basically only two types of pieces, and you use only half of the board space that you use in chess. I'm the type of person who has trouble multitasking, processing many different things at once, and that may be the key to why I find checkers easier to deal with. I'm not going to admit that it's simply because I'm too stupid for chess.

Wednesday, October 25, 2006

Moral philosophy

I used to believe that atheists have no basis for accepting the idea of morality. I no longer believe this is true.

I have recently been discussing the issue of morality and religion on David Guttmann's blog. I admit that my views on this issue have significantly evolved in the last ten years. For my more recent views on the matter, see this post, which is based on an essay I wrote at the end of college. An essay I wrote at the beginning of college, however, presents quite a different perspective. I present an abridged version of the essay here:
Why must we act morally in the first place? An ethical system founded upon a religion that worships God is, in principle, more rational than an ethical system that denies this basis.

Personal feelings are too subjective to base an entire code upon them. The conclusion that "murder is immoral," for example, is not a direct implication of the premise that "murder does not feel good (to some people)." A person may be firmly ethical without following his or her emotions, while a person who follows his or her emotions may be morally lacking.

The argument that morality is a necessary part of society is also insufficient, because it is possible to behave contemptibly without harming society as a whole. While all cultures retain values at their foundation, they are free to adopt new tenets and discard old ones as time goes by. Morality, in contrast, is a permanent value system. It is not a product of Western society but rather an ideal toward which most societies strive. To say that murder is immoral is to imply that there is no possibility of it becoming moral anytime in the future, even if society would begin to approve of it. This is particularly evident in such topics as abortion or euthanasia, where people do not agree on the definition of an ethical concept. Their goal in debating the issue is to make society acquire a better understanding of morality. Since no society is infallible, no society provides us with the ultimate philosophical basis for ethics.

Another possibility is that disregarding proper ethical standards is self-destructive. It is certainly true that many people's moral actions are motivated by a desire for self-preservation. Obviously, most of us will follow the law of the land so as not to be punished. More to the point, all our actions have natural consequences, and for some moral actions, the consequences on the person who performs them are favorable. For example, the case for environmental protection is strengthened by the fact that problems in the environment can have hazardous effects on all life on earth, including ourselves. On the other hand, the principle does not accurately describe all situations. Throughout history, evil societies and people have thrived while innocent individuals suffered. How does the self-preservation theory account for cases in which criminals escape justice, even if such cases are uncommon? In any case, the suggestion that the goal of morality is self-preservation trivializes the concept, which has nearly always been understood to go beyond self-interest.

That last point exemplifies the problem with all these theories, which is their inconsistency with the morality understood to exist in our daily lives. If morality were merely a matter of personal taste or choice, as some philosophers have suggested, that would fail to explain most people's passionate hostility toward opposing moral philosophies. The passion implies that in most real-life situations, morality is assumed to occupy an objective reality of its own. In contrast, the previously mentioned theories invoke a conception of ethics that is more narrow and vague than the conception most people apply in practice.

Despite the general perception that morality is objectively true, people tend to relegate it to a separate realm of reality. For example, most people assume it to be an objective fact that advocating murder is "wrong," but would probably treat a comment that advocating murder is "inaccurate" with bewilderment. Moral judgments cannot be measured by accuracy, they would say. On the other hand, the same people would probably agree that the immorality in a doctrine of racial superiority is intrinsically related to the fact that it is false. The identification of racism as morally wrong in addition to being false is, like the first example, unprovable. Nevertheless, in the second example, we notice a logical connection between the realms of fact and value. It is inherently impossible to "prove" statements about how people should behave, yet such statements still constitute a kind of philosophical knowledge. This is clear because we so frequently use demonstrable facts to back up moral propositions.

If we assume that morality implies a system occupying a sphere of reality, the question presented at the beginning of this essay--Why must we act morally in the first place?--should be rephrased: How do we know that morality exists to begin with? That is, what knowledge confirms our general belief that all people must behave in certain ways and not others? From a rational standpoint, the assumption that morality exists is not self-evident, even though many people--including many religious people--treat it as such. It is rather a logical implication of the fact that it is God's will. Because God created and is in control of the world, He wants us to behave in specific ways, to satisfy the purpose of creation. This also provides the strongest rationale against destructiveness toward nature and society--because it is all part of God's creation.

Without God, there is no source by which to judge any statement on how we should behave as true. All we can say is that some people are motivated to behave that way, yet that does not tell us whether people should behave that way. Religion allows us to view moral propositions as truths rather than simply as preferences, instincts, or rules designed to maintain social order.

Accepting God's will as the ultimate basis for ethics does not preclude the previously mentioned motivating factors behind moral behavior; it simply gives them a unified point of reference. Compassion still plays an important role by enforcing moral rules deep within our psyche. Furthermore, even religiously based ethical systems use the standard of what is beneficial to society to decide on specific moral issues. Finally, religion embraces a version of the self-preservation argument, by its conviction that God punishes all evildoers. This belief, though, is based on faith rather than observation, whereas the secular version of the argument is an attempt to explain morality without reference to God.

I am not implying that those who do not recognize the ultimate basis for ethics are necessarily amoral. Divine authority is not the only cause of moral behavior, although it is the ultimate rational basis for ethics. The bottom line is that in the absence of religion, there are no ultimate grounds for condemning a person who chooses to behave immorally. Why is the evildoer's choice inferior to anyone else’s choice? None of the non-religious explanations for ethics answer that question; they merely clarify why some people prefer certain ways of behaving over others. Only when we recognize that the ultimate moral authority is God do we have a universal explanation for morality that applies equally to all people in all cultures, regardless of what the people or the cultures themselves may believe.
I actually still accept some of the ideas expressed in the above essay, even if I reject the general thesis. The philosophers who have attempted to root morality in social pragmatism or personal preference have not provided a convincing case for ethics. But I recognize now that basic morality is deducible without believing in God. I believe that the purpose of religion is to move us beyond this basic level in an attempt to perfect the world.

Friday, October 13, 2006

Teleportation and the Clone Test

According to recent news, physicists have succeeded in teleporting a combination of light and matter, transporting the information over a distance. The news reports have hyped this achievement as the next step in a progression that will end in "Beam me up, Scotty!" transportation of human beings, the kind where you get "zapped" and reappear someplace else.

It's an intriguing possibility, but one that has always disturbed me. Wouldn't you be a little scared to go into such a machine, even if you'd seen it run successfully on hundreds of previous subjects? I'm not talking about the possibility of a disastrous malfunction. I'm saying that the whole idea of teleportation presents some curious philosophical problems, even if the process itself is foolproof.

I wouldn't have so much of a problem if I were assured that the machine was merely moving all the particles of my body to a different location. But not all science fiction writers have conceived of teleportation in that way. For many of them, teleportation means actually destroying all the particles, all the atoms, all the cells and flesh and tissues in your body, and reconstructing it using different material in another location. For anyone who isn't disturbed by this idea, I propose a simple test: would you be willing to be killed, if you were assured that a clone with all your memories would be created in your place?

My intuitive repulsion at this idea stems from my belief that there's an essential "me" contained within my body, that can't be reduced to the sum of my body's material. I'm perfectly aware that, due to growth and regeneration of cells, I'm not actually composed of the same material as I was a decade ago. But I carry with me a sense of self from every moment to the next, no matter how much my body changes.

Strangely, many reductionist scientists think that this "me" is an illusion. In the words of the late Francis Crick, from his book The Astonishing Hypothesis, "You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules." My response to Crick, or to anyone else who holds such a belief, would be to subject him to the Clone Test.

Friday, September 22, 2006

Of Mountains and Planets

A couple of days ago I happened to watch the 1995 comedy The Englishman Who Went Up a Hill But Came Down a Mountain, a film that naturally brings to mind a recent controversy. I'd intended to see this film for quite some time. I've long been ambivalent about the actor Hugh Grant, having disliked many of his films, including the massively overrated Four Weddings and a Funeral. Only in recent years have I warmed up to his work, most notably with About a Boy and Love Actually. He has a particular presence that shines through even his lesser roles.

I got a little worried by the film's opening sequence depicting an old man telling a story to his grandson. Movies about adults telling stories to kids tend to have an artificial feel (though there are exceptions, like The Princess Bride). Fortunately, the film doesn't dwell on this contrived story device, and it quickly becomes an engaging British comedy that is hard to look at today without being reminded of the Pluto controversy.

The film takes place during World War I. Two English mapmakers (the younger one played by Grant) are sent to a Welsh village to survey a local mountain to see if it's really a mountain. Their readings show it to have only a 986-foot elevation, which means it has to be demoted to a hill, because it falls short of the 1,000-foot minimum needed to be considered a mountain. The villagers are tremendously upset by this revelation. As the grandfather narrator explains, "The Egyptians built pyramids, the Greeks built temples, but we did none of that, because we had mountains. Yes, the Welsh were created by mountains: where the mountain starts, there starts Wales. If this isn't a mountain...then [Grant's character] might just as well redraw the border and put us all in England, God forbid." (A great deal of the movie's humor comes from the cultural pride of the Welsh villagers and their antagonism toward these English outsiders.) Grant insists that he's only a scientist, out to discover the truth, and that the mountain, hill, or whatever is still a wonderful landmark regardless of its height. The villagers won't have any of it. They quickly craft a plan to fill in the missing 14 feet, while devising ways to keep the mapmakers from leaving town. The film manages to take this premise and stretch it to 90 minutes, with even a love story thrown in to boot.

For those who've been following recent news, does any of this sound awfully familiar? The film is from 1995. I've heard that the controversy over Pluto's status goes back to 1992, though I personally didn't hear about it until a few years ago. I doubt that the people who made this film had it in mind. But it's hard not to notice the parallels.

The controversy was provoked by the discovery that, well, Pluto is too small to be a planet. But what exactly defines a planet? Given that almost all our knowledge of planets comes from our own solar system, there's very limited information to work with. You can call Pluto a planet, if you like. The problem is that, to be consistent, you then have to include in your definition hundreds of other objects in the solar system that have not previously been considered planets. (Actually, they've long been classified as minor planets.) Not only are some of those objects larger than Pluto, but Pluto itself lacks many of the characteristics that the other "official" planets possess, such as a near-circular orbit close to the plane of the other planets.

But the demotion of Pluto was met with outrage by some, depressed resignation by others. Part of the problem is that it's the only "planet" discovered by an American. When you listen to people's reaction to the demotion, you hear echoes of an emotional plea. As one curator of the American Museum of Natural History put it, "We had enormous numbers of telephone calls and I would say things that verged on hate mail from second-graders--very angry children who said, 'What have you done? This is the cutest, most Disney-esque of the planets. How could you possibly demote it?'"

Of course, unlike in the film, there isn't going to be any campaign to add piles of dirt to Pluto so that it qualifies as a planet once again. With no way to reach Pluto, much less change its appearance, the best we can do is argue about definitions. Still, the clash of science with culture must be something of a universal theme. Someone now ought to write a sci-fi parody titled The Astronaut Who Landed on a Planet But Left an Asteroid.

Wednesday, September 13, 2006

Religious tabloids

(Cross-posted at DovBear's blog.)

I see that Krum has posted about Yated Ne'eman's biases. I don't know if I've ever heard a bigger understatement. Yated Ne'eman doesn't just slant. It lies.

This became abundantly clear to me a few years ago when the paper did an article on Rabbi Yechiel Eckstein. Rabbi Eckstein is controversial because of his attempts to build bridges between Orthodox Jews and evangelical Christians. The Atlanta Jewish Times did a good profile of him in this article.

One day, Yated attacked Rabbi Eckstein. That in itself did not surprise me. It was the manner of the attack that caught my attention. According to Yated, Rabbi Eckstein had recently converted to Messianic Judaism:
In ads and books, [Eckstein's organization] has made numerous alarming remarks over the years, including Eckstein's declaration in one of his books that he had become a Jew for J. Eckstein has denied that he is a Jew for J.
Now, that's a pretty serious charge. But what was particularly confusing is that the two sentences above border on contradicting each other. One sentence says that R. Eckstein announced in a book that he'd become a Jew for Jesus; the next claims that he has denied the charge. The juxtaposition of the two sentences makes the paragraph read rather like a Wikipedia article.

But this is one of the strange things I've noticed about Yated. It's not just that the paper lies. The paper lies, but unconvincingly. Even if I'd known nothing about R. Eckstein, I still would have been scratching my head after reading this article.

So, what is the truth of the matter? In 2001, R. Eckstein released a novel called The Journey Home. In that novel, a fictional version of R. Eckstein travels with a fictional version of a real-life Christian friend of his in the Holy Land. At one point, the rabbi says, "While I still don't believe in Jesus as the Christ as Jamie does, and view him instead as a Jew who brought salvation to the gentiles, in some respects, that is exactly what I have become--a Jew for Jesus."

Now, I can understand why some Orthodox Jews were alarmed by this statement. But that doesn't give anyone the right to lie about R. Eckstein. If Yated had clarified that this was a fictional story, and that even the fictional version of Eckstein was not embracing Messianic Judaism, the attack on Eckstein would have been more credible.

Of course, the article does quote someone defending R. Eckstein by pointing out that the claim against him was based on "words taken out of context from a story that was totally fictitious." But the article never explains what this remark means. It leaves readers with the impression that the rabbi really did become a Messianic Jew. Who cares if he claims that his comment was taken out of context? That's what they all say!

A couple of months ago, Rabbi Harry Maryles wrote on his blog about an article in Yated written by Dr. David Berger against Lubavitch. I objected to Harry's source, both because Dr. Berger is a known anti-Chabad zealot and because Yated is not a reliable source. Harry agreed with me, admitting that Yated was biased and even dishonest. But he insisted that they lie not overtly but "by omission." I remembered that Harry had on another occasion mentioned being friends with Rabbi Eckstein. Knowing this, I showed him the Yated article on Eckstein. This was Harry's response:
OK. I admit this stretches the outer reaches of truth, but although they are obviously wrong, I do not think they think they were deliberately lying. They were presenting the views of their misinformed Gedolim as fact. This is not the same as a deliberate lie.
I find the above statement disturbing to the max. So it's supposed to be better if Gedolim came up with the false information rather than the paper itself? And where did the Gedolim get the false information? At some point, somebody had to be lying--either that, or they were so reckless that they didn't care whether what they were writing was true. The article didn't just print a false rumor. It printed the rumor, but also printed the fact that R. Eckstein disputed the charges, and it only vaguely hinted at why the charges were disputed. But it still stated the false claim as fact.

Harry asked Dr. Berger, who is Modern Orthodox, why he had chosen to print his article in Yated. Dr. Berger contributed a lengthy explanation. He said that he was actually impressed by Yated's standards, because the editor censored a few sentences from his article. In Dr. Berger's words,
I argued that this additional information is critically important, but the editor felt that it was not important enough to overcome the larger editorial policy. I did not draw a line in the sand and allowed the deletion. While I think the editor's decision was mistaken, I admire the commitment to avoiding what he sees as unseemly content, a commitment that overrode any desire to add additional unfavorable information about Lubavitch. I ask myself if I can think of any other forum that would be so fastidious, and I come up empty.
Sarcastically, I replied, "Yeah, they think it's unseemly to mention Jesus by name, but they don't have a problem with falsely accusing someone of worshipping him."

Anyone familiar with Yated knows that distortions of this magnitude are hardly uncommon. The article on R. Eckstein appeared at least a year before the Slifkin controversy erupted, with all the lies and false rumors that went along with that account. Yated is essentially a mouthpiece for the forces responsible for the Slifkin fiasco.

Not too long ago, an article in the Baltimore Jewish News (the Orthodox spinoff of the Baltimore Jewish Times) talked about how Orthodox families in Baltimore handled exposure to secular media. A couple of the families interviewed were uncomfortable getting newspapers like The Baltimore Sun and The New York Times because of their perceived liberal and/or anti-Israel slant. One family preferred The Wall Street Journal, while another preferred, er, Yated Ne'eman.

There's nothing wrong with getting your news from the WSJ, because that publication, like The New York Times, is a legitimate newspaper, ideological slant or no. Sure, they may have occasional lapses from their fairly high standards, but at least they have standards. To prefer Yated, on the other hand, is laughable. Yated isn't a real newspaper; it's a frum tabloid rag. It's amazing to me that the same people who accuse others of being brainwashed are the most eager to brainwash themselves.

Bush is like Chauncey Gardiner

This is a post I wrote long ago, and recently posted to DovBear's blog.

Being There is one of the best films I have ever seen. It came out in 1979 but seems remarkably relevant today. I'm not the first person to have noticed similarities between President Bush and Chauncey Gardiner, but I did come to this conclusion independently.

Peter Sellers stars as a mentally retarded man, Chauncey, who, through a series of accidents, gets mistaken for a great thinker. His actual understanding of the world is so limited that he thinks a television set is people in a box. His only area of expertise is gardening. But nobody seems to notice this, and they interpret his literal statements about TV-watching and gardening (e.g., "As long as the roots are not severed, all is well, and all will be well, in the garden") as profound metaphors about the world. When he tells someone he can't read, the person thinks he means that he doesn't have the time in this busy world. (Sound familiar?) Gradually, he becomes famous, appearing on talk shows and meeting with public officials. All the while, nobody seems to notice that he doesn't have a clue what's going on! Everyone assumes he's this sophisticated, high-class thinker and misinterprets the simple, mundane things he says as brilliant kernels of wisdom. The film ends with the suggestion that the people around him might have him run for president.

None of this is meant to be taken literally, of course. The story is a satire designed to skewer the vapidity of television culture. I don't think the author of this tale, Jerzy Kosinski, ever foresaw that the situation he was describing would actually come true one day.

You might call me a Bush hater, but that would be a mistake. The Bush haters greatly overestimate Dubya's intelligence. Sure, they all say "Bush is a moron," reciting those words like a mantra, but they don't act like they really believe it. They give the man an awful lot of credit for the actions and policies of the Bush Administration, as though he's somehow in charge of everything rather than (as I see it) a puppet being controlled by others. When a reporter asked him for his opinion following the revelation of Deep Throat's identity, this was his response: "I don't have an opinion yet." Open-minded, huh? I'm sure that's how his admirers have spun it. I have a more sinister explanation: he hadn't yet discussed the matter with his advisors, who would tell him what his opinion should be.

He's like that a lot. You think he's the one who came up with those words about the sacrifice in Iraq being worth it? He never writes his speeches. I'm not saying there's anything wrong with that; most politicians have speechwriters. But the thing about Bush is that, in everything he does, he seems to rely heavily on the efforts of other people--Dick Cheney, Karl Rove ("Bush's Brain"), Condoleezza Rice. Remember Fahrenheit 9/11 and all the vacation time he takes? It often seems like Cheney's really the acting president, while Bush goes off to play golf, or jog, or relax somewhere. He has had this reputation ever since he was governor of Texas, a position, I should point out, that has so little power it's almost symbolic.

But what's truly amazing is how little the public notices this, even when they disagree with him. Take the aftermath of 9/11. He looks noble, delivers a nice speech he didn't write, and suddenly he's the most popular president ever. His handling of 9/11 was no more impressive than I would expect from any other president. He was just fortunate enough to be around when this great tragedy happened, and he has continually exploited its political value ever since. When he arrives with Bin Laden in chains, I'll give him credit. Until then, he should shut his trap.

I may be making the same mistake I mentioned before, of crediting Bush with the actions of others in the administration. It's often hard to tell who's really making the decisions, since most of Bush's public appearances are scripted. When he's forced to make off-the-cuff remarks every now and then, he ends up saying stupid things that reveal a startling lack of understanding. Rove is actually on record as having instructed Bush to make as few public appearances as possible during his 1998 gubernatorial run. They wanted to keep him out of the limelight as much as possible; otherwise, people were bound to notice that the emperor had no clothes.

What about his handling of debates? Wasn't there a consensus that he won all those debates against Ann Richards and Al Gore? What we have to realize is that ever since the first televised presidential debate in 1960, the press has had a strange tendency to judge a candidate's performance based on criteria that have absolutely nothing to do with the content of what the person is saying. In 2000, Al Gore was said to have "lost" the debates because of his body language and subtle behavior--he rolled his eyes a lot, stepped into Bush's space--making him come off, supposedly, as arrogant and rude. Then there was the matter of Gore's alleged "exaggerations," like saying he went to a Texas fire site with James Lee Witt, when in reality he went with another official, and traveled with Witt during another incident. The press jumped on Gore for this minor blunder, acting like he was a liar, all the while ignoring Bush's misstatements during the same debates (and there were several). The emphasis on all this trivial stuff ended up overshadowing the fact that the polls taken right after two of the debates showed that the initial consensus was that Gore had won. Only after the press started focusing on these irrelevant details did people change their minds.

If you actually listen to what Bush says during the debates, a different picture emerges. He frequently doesn't answer the questions given to him, sometimes completely changing the subject to talk about something else (i.e., something he's rehearsed). When he does come up with answers of his own, they are startlingly simplistic. A lot of what he does is just repeat key themes over and over, a technique that has proven effective. And his admirers mistake the simplicity for clarity.

As Roger Ebert wrote in his book The Great Movies II, Bush has never said anything Chauncey Gardiner couldn't have said. This is not to suggest that Bush is actually retarded--I'm not prepared to back that up, and it isn't true. He seems to have certain kinds of smarts. But he's intellectually vapid, and proud of it. (He actually boasted of having been a C student.) He's like Chauncey in the sense that he's a know-nothing who's being controlled by the people around him.

Thursday, August 17, 2006

The will to deny one's will

In Kurt Vonnegut's novel Slaughterhouse-Five, Billy Pilgrim is caught in a time warp, making him shift back and forth without warning to different points in his life. (Those who want to read my thoughts on the 1972 film adaptation can go here.) Vonnegut uses this plot device in two ways: to tell a semi-autobiographical account of his experience as a POW in World War II, and to express his mechanistic outlook on life.

This isn't the kind of time travel where you can change your own past. Billy occasionally tells people around him about future events he has seen, and he appears to take this information into account when making choices. But there's a sense that he never tries to change anything. Thus, he gets on a plane he knows will crash, without even making a fuss about it, because "he didn't want to make a fool of himself" (p. 133). What would have happened if he had avoided the flight? The book never answers that question, and there's an underlying implication that he has no ability to avoid what he knows will happen.

As in most time travel stories, the reader had best not scrutinize the logic of the situation. If it is possible to possess information about one's future, then there is a potential for the events to turn out differently. Books like this may try to ignore that fact, but it is inescapable.

Vonnegut wants to argue that everything that happens in life is inevitable, including the choices one makes, and thus that even someone who sees the future cannot control his own choices. But this notion overlooks the crucial consideration that part of decision-making involves the information that one possesses. How many decisions might you have made differently had you possessed more information about the outcome?

Later in the book, Billy finds himself inside a dome on Planet Tralfamadore, where he becomes an exhibit at some alien zoo. This situation is not meant to elicit horror. He has conversations with the alien scientists, and he isn't trapped, since he's still constantly moving back and forth to other points in his life. The aliens themselves are even more detached from the human time frame than Billy is, living as they do in the fourth dimension. Because they view life from outside of time itself, all events to them look like one big simultaneous blob. They "don't see humans as two-legged creatures.... They see them as great millipedes--'with babies' legs at one end and old people's legs at the other'" (p. 75). This situation naturally affects their entire outlook. As one Tralfamadorian explains to Billy, "Today we [have a peaceful planet]. On other days we have wars as horrible as any you've ever seen or read about. There isn't anything we can do about them, so we simply don't look at them. We ignore them. We spend eternity looking at pleasant moments" (p. 101). Vonnegut here seems to be hinting at the real message of his book: that our purpose in life should not be to control what happens (that being impossible), but to cherish the good moments.

Of course, this philosophy involves a denial of the concept of free will. Vonnegut has the alien insist, "I've visited thirty-one inhabited planets in the universe, and I have studied reports on one hundred more. Only on earth is there any talk of free will" (p. 74). But the assumption that free will is so unnatural an idea that an alien would be perplexed by it is a strange one, considering how vital the concept is in human society.

If you don't believe me, imagine the following scenario: a stranger begins hurling nasty insults at you. Naturally, you think, "What a jerk!" But how would your judgment of that person change if you knew he had Tourette's Syndrome and his insults were involuntary? In that case, you wouldn't fault him for his behavior. And why not? Because he couldn't help it. In other words, he didn't choose to do what he did. We measure human behavior by the choices people make, which is why we do not count behavior stemming from a brain disease. That's why our justice system has a concept called "not guilty by reason of insanity." If we exempt a crazy person from guilt simply because he doesn't know right from wrong and hence cannot choose between the two, we're implying that sane people do possess such an ability.

The concept of good and evil goes out the window if there's no free will. Sure, everyone agrees that some people are destructive, but as Hannibal Lecter reminds us in Thomas Harris's novel The Silence of the Lambs, a storm is also destructive, and we do not refer to a storm as evil. Our society makes a strong distinction between who a person is and what that person does. We detest any system that places strong limitations on a person based on birth; that's why we have such antipathy toward the Indian caste system and doctrines of racial superiority. We hold that anyone, regardless of background, has the right to be judged by behavior, and not to be judged disfavorably until behaving disfavorably. Among other things, this belief forms a large part of the basis for rejecting preventive detention, the practice of locking someone up in order to prevent a crime. Our society would begin to look draconian if we truly followed the principle that free will doesn't exist. It would hardly lead to the "let's just enjoy life" philosophy that Vonnegut seems to favor.

But people who deny the concept of free will never seem to think through the consequences of their belief. Like those who claim that morality is purely subjective, they make a philosophical claim but don't live life as though they really believe what they're saying. In speaking against injustice throughout the world, Bertrand Russell certainly didn't act like he truly considered the difference between his own positions and those of, say, Hitler a mere matter of personal taste. And Vonnegut's portrayal of the horrors of war doesn't seem to stem from the position that the Nazis couldn't help the way they were. Undermining a basic principle of society is easy enough in the abstract, but few such people truly live by their words.

Tuesday, August 01, 2006

A time to reflect

What's especially striking to me about the breaking news of Mel Gibson's recent anti-Semitic outburst while drunk is how rare this kind of situation is. Many celebrities and public figures have been accused of anti-Semitism, sometimes fairly, sometimes not, but in very few cases is there explicit proof. Even Vanessa Redgrave tried to make it sound like she opposed only Zionists, not Jews, in her notorious Oscar acceptance speech, and Marlon Brando framed his attack on Jews as a criticism of Hollywood's alleged insensitivity toward "other" groups. In the last few years people have argued passionately over whether Mel Gibson is an anti-Semite, closet or otherwise. Until now, the evidence was far more ambiguous than it was for Redgrave or Brando, and even some prominent Jews like Michael Medved rushed to his defense.

Now, what do his former defenders have to say? It looks like they really have egg on their faces. But the gist I've gotten from them is, okay, so it's true, Mel is an anti-Semite, but that doesn't mean we were wrong to defend him before. As Michael Medved argues in his blog, "Gibson's comments...remain particularly perplexing in the light of a previous record free of personal, anti-Semitic incident." I find this reaction naive. Gibson's former defenders should be considering that maybe they weren't quite as sensitive as they could have been to the presence of closet anti-Semitism in a man whose career could have been damaged by this information. Bigotry is a complex phenomenon, and it still amazes me how many people are blind to how subtle it can be.

Medved considers it perverse that the press focused on Hutton Gibson, Mel's father, who is an out-and-out Holocaust denier, instead of on Mel himself, who supposedly renounced the views of his father. But a closer look reveals that not to be the case. Mel was quite vague about the details regarding his own belief in the Holocaust ("chillingly ambiguous," as Charles Krauthammer put it). These are his words: "Atrocities happened. War is horrible. The Second World War killed tens of millions of people. Some of them were Jews in concentration camps. Many people lost their lives. In the Ukraine, several million starved to death between 1932 and 1933. During the last century, 20 million people died in the Soviet Union." Anyone who's familiar with the rhetoric of Holocaust deniers and Holocaust minimizers will recognize the similarities here. They all admit that some Jews were murdered. What they dispute is the numbers, and they deny there was any systematic attempt at genocide. Nothing that Mel said here contradicts that outlook.

I have not, by the way, seen The Passion. Still, even without getting into the debate over that particular film, there was plenty of evidence to support the claim that Gibson was an anti-Semite, long before this drunk-driving incident. I was once willing to give him the benefit of the doubt, on the grounds that maybe he was trying hard not to insult his father. That isn't an excuse, but it did leave open the possibility that he wasn't anti-Semitic in his heart. Now, we've gotten a rare glimpse into his heart, so maybe we need to look into our own.

Wednesday, June 28, 2006

The self-created monster

Although the great psychodrama The Silence of the Lambs has enjoyed tremendous popularity and acclaim, many viewers have overlooked its most provocative insight: Hannibal Lecter, though a fearsome killer, is not truly crazy. This is a radical interpretation, I admit. The conventional view is that he can't help being the way he is. As Roger Ebert writes, Hannibal "bears comparison...with such other movie monsters as Nosferatu, Frankenstein (especially in Bride of Frankenstein), King Kong and Norman Bates. They have two things in common: They behave according to their natures, and they are misunderstood. Nothing that these monsters do is 'evil' in any conventional moral sense, because they lack any moral sense. They are hard-wired to do what they do. They have no choice."

I believe that this interpretation is mistaken. But I admit that there is superficial evidence to support it. There is no doubt that all the characters in the movie, aside from Hannibal himself, consider Hannibal crazy. That's why he's in an institution for the criminally insane. That's why Anthony Hopkins, on the DVD, describes Lecter as a good man trapped in a madman's body. Who am I to disagree with the actor who brought the character to life?

But I have observed that people tend to apply the word "madman" indiscriminately to anyone whose actions fall outside the boundaries of civilized behavior. Only in that sense is Hannibal "mad"; by any other criteria, he exhibits none of the usual signs of madness. He is not delusional in the least, and he has full control over his behavior. Everything he does is a carefully considered choice, based on a personal value system that permits him to perform grisly acts when he believes the circumstances justify it.

Dr. Chilton describes Hannibal as "a monster, a pure psychopath," but Hannibal in many ways does not fit the traditional definition of a psychopath. According to the diagnostic manual DSM-IV-TR--which lists these criteria under antisocial personality disorder, its nearest equivalent to psychopathy--a person must exhibit three or more of the following behaviors to qualify:
(1) failure to conform to social norms with respect to lawful behaviors as indicated by repeatedly performing acts that are grounds for arrest
(2) deceitfulness, as indicated by repeated lying, use of aliases, or conning others for personal profit or pleasure
(3) impulsivity or failure to plan ahead
(4) irritability and aggressiveness, as indicated by repeated physical fights or assaults
(5) reckless disregard for safety of self or others
(6) consistent irresponsibility, as indicated by repeated failure to sustain consistent work behavior or honor financial obligations
(7) lack of remorse, as indicated by being indifferent to or rationalizing having hurt, mistreated, or stolen from another
Hannibal clearly is not reckless, irresponsible, or impulsive. His lack of impulsivity is notable, since the usual image of a psychopath is someone who lives in the present and doesn't think ahead. Hannibal seems to have everything intricately planned--including his escape, which he carries out while listening to classical music as if he had outlined the attack to the exact key.

One might assume that he's deceitful, but actually he lies only once in the entire movie, when he deliberately gives the FBI incorrect information about the name and whereabouts of the serial killer on the loose. Yet he does this in retaliation after he is lied to, hardly an indication of habitual lying. On the contrary, most of the time he uses his brutal honesty as a weapon, to wound others.

That leaves three categories that arguably apply to Hannibal: "failure to conform to social norms," "irritability and aggressiveness," and "lack of remorse." If those three traits truly describe Hannibal, then he may qualify as a psychopath. However, there is a good case for saying that he doesn't fit the second category. While he is certainly aggressive, I wouldn't describe him as irritable. His aggression is not haphazard but methodical. Whatever drives him, it isn't anger or rage. He is willing to hurt or kill those who stand in his way, but there is usually an element of moral judgment in his choice of victims. He tells Clarice that he has no intention of coming after her, because "the world is more interesting with you in it." He has firmly held beliefs about how people ought to behave, and they influence his decisions on how to act. For example, when he causes Miggs's death, Dr. Chilton claims that Hannibal did it "to amuse himself," but Hannibal has his own explanation: "Discourtesy is unspeakably ugly to me." That is an ethical belief he repeatedly follows throughout the film.

What about his cannibalism? Doesn't this greatly undermine my argument? How could any sane man eat people? But there's nothing compulsive about his behavior. He performs no elaborate rituals along the lines of any standard serial killer. His cannibalism seems to reflect, rather, his contempt for much of the human race. He doesn't value human life, but he is capable of being kind to those he feels have earned his respect, like Clarice.

Hannibal is neither a psychopath nor a madman. Then how, you might ask, can we explain his monstrous behavior? Here is a telling exchange from the novel:
"You can't reduce me to a set of influences. You've given up good and evil for behaviorism...nothing is ever anybody's fault. Look at me, Officer Starling. Can you stand to say I'm evil? Am I evil, Officer Starling?"

"I think you've been destructive. For me it's the same thing."

"Evil's just destructive? Then storms are evil, if it's that simple." (p. 19)
Hannibal here is criticizing both the psychiatric profession and society as a whole. There is a common temptation to explain all human behavior in terms of mental states. We seek to distance ourselves from our horror by labeling anyone who commits horrifying crimes as "sick," as though that person is somehow the product of forces beyond his control rather than someone who has made a conscious choice to be the way he is. Hannibal Lecter represents our worst nightmare, a living proof that brutality and rationality do not necessarily conflict.

The promise of a sound resolution

Most people accept the concept of objective truth. If someone says that ice cream is a health food, that person is simply wrong. But if someone says, "ice cream is delicious," that statement is neither true nor false; it is simply a matter of opinion. A lot of people today place morality in the latter category. I hear this all the time: "Morality is subjective," they say. As Bertrand Russell asserts, "in a question as to whether this or that is ultimately Good, there is no evidence either way; each disputant can only appeal to his own emotions, and employ such rhetorical devices as shall rouse similar emotions in others." I disagree. Although people's emotions do often influence their views on morality (or, for that matter, on any other subject), it is possible to objectively assess a moral view based on the quality of the reasoning used to support it and on the weight of the evidence.

All societies share certain core principles. Killing would be a crime even in Hitler's ideal society. What Hitler claimed was not that killing in general was acceptable, but that the only way to create an ideal society was by first destroying or enslaving certain races. That claim rested on demonstrably false assumptions about reality, such as his pseudoscientific notions about race and the mortal threat that Jews allegedly posed for the rest of mankind.

Morality and truth are more closely linked than subjectivists admit. According to the subjectivist, if one culture practices cannibalism, and a second culture considers cannibalism immoral, there's no objective way of determining which side is right. If we investigate how the cannibals justify their actions, however, we are likely to find that they hold mistaken beliefs. They may believe, for example, that eating human flesh gives a person great powers, or that the people they are eating are less than human, coming as they do from outside the tribe. To suggest that those beliefs are rooted in superstition and ignorance is hardly a matter of subjective opinion.

By identifying core beliefs that all societies accept, we can determine through reason which moral views come closer to meeting those core beliefs. We can also determine when moral laws have exceptions, such as killing in self-defense. Since the goal of the law against killing is to protect human life, occasionally we must violate this law to reach the same goal. The reasoning here is similar to why people undergo surgery: they allow their body to be damaged in the short run to improve their health in the long run.

Moral ambiguity arises from the conflict between short-term and long-term consequences. If the United States government had learned that hijacked planes were heading toward the World Trade Center, it might have chosen to shoot down the planes, killing all the passengers, because failing to do so would have led to even more deaths. As a rule, long-term consequences take priority over short-term consequences. The problem is that they are harder to determine. The Nazi worldview perhaps represented the extreme of reasoning on the basis of long-term consequences, in its suggestion that enormous destruction of human life was needed to create a peaceful world. The primary danger of utopian visions is that people who seek to transform society to such an extent may ignore the harm they are driven to inflict on society in its current state. Sound moral reasoning involves a balance between what one knows to be true in the present and what one can reasonably infer about the future.

In real life, of course, many of the situations we face are not as clear-cut as the previous examples. The fallacy that subjectivists commit is to think that lack of clarity automatically implies subjectivity. Objective reality is not always accessible to human knowledge. For example, nobody knows whether life exists on other planets, but either it does or it does not; the answer does not depend on what humans believe or know. Of course, we may disagree on how to define life. But everyone agrees that humans, lions, trees, and bacteria are alive. The concept does not collapse into subjectivity simply because people aren't sure how far to extend the definition. By that logic, all concepts would turn out to be subjective.

Similarly, the fact that two people in full knowledge of the facts reach opposite conclusions on a moral question does not imply that the issue is subjective. One person may err in his reasoning, or his views may rest on assumptions that are hard to prove. Uncertainty is not subjectivity. While a person's emotions may influence where he stands on the issue, a rational person recognizes that any attempt to resolve the issue is ultimately a search for truth, not an appeal to emotions. Just as unsolved mathematical problems do not shake people's faith that one plus one equals two, complex moral issues do not refute the existence of simple moral truths.

After all, anyone who enters a moral debate hopes that society will eventually resolve the issue when most people decide which side's arguments are the most cogent. If neither side is ultimately right or wrong, however, then the view that triumphs in the end will do so simply because its proponents have enough political power. All moral controversies are ultimately power struggles, if one follows moral subjectivism to its logical conclusions. Only moral objectivity can offer the promise of a sound resolution.