November 16, 2013

Oops

I just accidentally deleted a very civil comment I had overlooked. Damned touch screen device...

November 15, 2013

Irreconcilable differences

In the autumn of 2008 I lived in a suburb of Dayton, Ohio, and worked in a village of about 2000 people, forty miles away. To make an otherwise hypnotic commute between corn fields more interesting during that election season, I got into the habit of counting campaign signs along the way. At the beginning of my trip, driving through my deteriorating middle-class suburb, the signage was overwhelmingly pro-Obama – about 5-to-1 Obama to McCain if I recall correctly. In the middle portion of my route I drove through parts of two towns, each with a population in the 20,000 to 40,000 range. These were about evenly split between McCain’s midnight blue signs and Obama’s electric blue signs with their curious Grant Wood sunrise logo. After leaving the last full-fledged town, the McCain-Palin ticket reigned supreme. In the village where I worked (and still work) there was not a single Obama sign.

America is one country, but it contains two nations. Arguably there are more than two, but the primary division tends to push any lesser distinctions into one or the other of two camps – the predominantly urban liberals on the one hand, and the predominantly rural conservatives on the other. I don’t use the word “nations” lightly. We are not, at this point in our history, talking about a few minor policy disagreements, or about some superficial differences in habits or tastes. American liberals have more in common with their European counterparts than they do with most of the people who populate the vast interior of America. American conservatives, for their part, tend to regard places like New York City and Los Angeles as loci of crime and moral degeneracy – places entertaining to peer at on TV, but certainly not safe and wholesome places to take the kids. Each side stares at its image of the other with incredulous, smug contempt – only some of which, in either case, is actually deserved.

I have lived, worked, and traveled in both nations. It is plainly obvious to me that neither side is going away. Elections change policies but, short of actual genocide or similar draconian actions, they do not eradicate cultures. Consider the Roe v. Wade decision. Legal abortion has been settled law in America for forty years now, but people who oppose abortion still number in the tens of millions, and are still as active and determined as they ever were. Like it or not, both sides of this issue stand on deeply held principles – beliefs about which they are unlikely, at any point, just to shrug their shoulders and say “Ok… you win.”

The two opposing nations are at rough parity in numbers. If this were not so, neither Bush nor Obama could have won a second term. Despite this, each side likes to characterize the other as merely a radical fringe; most people, after all, are surrounded by other people like themselves. Conservatives like to say we are a center-right nation, and that liberals only make up about twenty percent of the total population. Again, unless you believe in election fraud on a truly massive scale, no right-leaning country would have elected our current president after witnessing his first term. It may be that only a minority of the public self-identifies as liberal or progressive, but there are obviously plenty of others who either want to ride that train, hate the opposition, or both. Likewise, the idea that the Tea Party represents only the lunatic fringe of a generally pliable Republican base cannot be true. Despite an ongoing war of suppression by the media, the Democrats, and the Republican establishment, the Tea Party remains an active force. Its sympathizers, if not its actual members, are as thick as corn in August. The supposedly more mainstream Republican establishment, on the other hand, does not appear to represent anyone other than itself – and, perhaps, a handful of very wealthy donors.

When separate incompatible nations vie for power within a single political state, the ultimate outcome is likely to be the oppression of the weaker by the stronger. The stronger side may see themselves as benefactors, “re-educating” their wayward or backward cousins, but this is really no more than a rationalization of the same old exercise of dominance. When the US Army finally defeated the last of the Native American tribes on the Great Plains, the government rounded them up and put them on reservations. It then took their children away to be re-educated in government schools. Government functionaries cut the children’s hair and forbade them to speak their native language. This is conquest. This is what the attempted eradication of a culture looks like. It is all done with the best of intentions, by people who are certain that they are doing the right thing. It seems harsh and shameful only in hindsight. In the moment it seems the best and kindest thing to do. Why would anyone not want to be like us?

If we were as enlightened as we like to think we are, we would leave Montana in the hands of Montanans, and New York City in the hands of its residents. We would not be so happy to see a level of cultural control inflicted on other Americans that we would find appalling to see inflicted on foreigners. We would not take for granted that our own ideology is so virtuous that its imposition on others by force or deceit is still a blessing.

It is probably wishful thinking to believe the two contending cultures of America will ever get along. It is probably wishful thinking, too, to imagine they can stand at arm’s length amicably. Contemporary liberalism is an internationalist movement. Not to keep Montana in line would embarrass American liberals in front of the French. Obviously, too, religious conservatives are unlikely to give up their effort to proselytize. Their unexaminable faith commands it. Thus, two enormous titans beat each other bloody in the dark. Both sides strengthen the growing police state, even though no ordinary person wants it. Year by year, it gets harder to say “I believe this” or “I believe that” without falling prey to the brutal orthodoxy of the question “are you one of us – or one of them?”

-----------------------------
Note: I wrote a much more detailed but rather rambling essay on the same topic under the title of The Great Divide. http://www.cadwaladr.blogspot.com/2010/12/great-divide.html

October 30, 2013

A simple argument against liberalism

This is the epiphany which transformed me from a leftist into a dedicated, if somewhat nuanced, conservative. It is really an argument not only against liberalism, but against leftism in most of its forms.


One has to understand that, for the liberal, the government has two chief functions. One is to constrain the greed of private enterprise, and the other is to steadily improve society. Phrased that simply, those sound like admirable things to do – and it isn’t difficult to understand why many people find liberalism appealing. Every liberal I have ever spoken with believes, whether they think about it or not, that the twin virtues a society should strive for are fairness and progress. If our leaders were angels, this might be an acceptable basis for government. We do not live in a world run by angels, however, but in a world run by less than perfect human beings.

To understand the heart of the liberal worldview, let’s imagine that every businessperson on Earth is ambitious to the exclusion of the slightest sense of compassion. I know this isn’t true – I personally know a CEO of a midsize corporation who spends a good portion of her time dispensing charity – but, for the sake of argument, let’s give liberals their cherished meme. Let’s imagine private industry is a den of heartless sociopaths. Isn’t it obvious, then, that it’s in everyone else’s interest if these evil people are regulated, punished, and suppressed?

No, it isn’t. The first thing one must recognize, when we are talking about the real world, is that the ruthlessly ambitious, who do exist, are not attracted to business per se. They are attracted to positions of power. Money is power, but so is political authority. Marginalize business with too many taxes and regulations, make business people society’s hated pariahs, exalt the status and authority of government, and you will only put the determined and ruthless few on a new career path. When government is small and business is booming – the ambitious gravitate toward business. When business is precarious and government is expanding – the ambitious gravitate toward government. History and common sense both bear this out.

At the height of General Motors’ power, its CEO didn’t have the authority to arrest me, imprison me, seize my property, fly a surveillance drone over my house, draft me into his company’s security department, take my money without selling me something, or force me to buy health insurance. While the power of a major corporation is considerable, its power over individual rights is limited. Government’s power over the individual is fundamentally unlimited. Governments enact the laws. They direct the people who have the guns. In those instances in which corporations have trampled the basic rights of ordinary people, they have almost always done so by manipulating governments. While such corruption is genuinely appalling, it must be pointed out that corporations are only buying the coercive power that governments possess by their very nature. When a corporation buys influence, it can only do so because influence is for sale.

When a person advocates for an ever larger government with ever fewer Constitutional limits on its authority, he or she implicitly imagines a government that will be run by angels – which is to say, by some breed of people who are above the petty and self-serving motives that the rest of us are prone to, and certainly a far better breed than the wicked, predatory monsters that run business. More, the liberal imagines that these angels are not merely selfless but nearly all-knowing – having the capacity to know more about what is good for us than we ever could ourselves. The liberal trades freedom in a competitive, difficult world for the illusion of an eternal happy childhood in the household of the governmental parent. Even if it worked out that way, eternal childhood is a sad condition for a grown-up human being to be reduced to. In reality, since our leaders are neither angels nor all-knowing, expanding government is unlikely to produce much happiness, and very likely to produce exactly what Friedrich Hayek said it would – serfdom, suffering, and passivity.

October 25, 2013

The negligent slaughter of the US economy

After World War II the United States was, without question, the dominant economic power on the globe. Our industrial base, already huge before the war, had survived the conflict intact while the rest of the industrialized world had been significantly battered. At the end of 1945, America was ready and able to be the manufacturing center of the world. While there were some rough times transitioning from a war economy to a peacetime economy, and a period during which Europe had to be propped up enough to at least be viable as a market, the American economy grew steadily through the 1950’s and 1960’s. We had plenty of smart people and plenty of cheap Texas oil. A generation grew up believing that a gradual improvement in the American standard of living was inevitable – a God-given birthright or a law of nature, depending on one’s religious views.

There was pollution, of course, in this otherwise rosy world. The air and water were less than ideal. An industrial economy – that is, one that makes goods instead of moving data around – is an inherently dirty business. Steel mills and chemical plants will never be things you want to live near. Inevitably then, the people who had already brought us the New Deal, with its central notion that if there is a problem anywhere of any kind it is the government’s responsibility to fix it, decided to fix the rivers and the sky.

Now, don’t misunderstand me here. There is no question that clean water and clean air are good, wholesome, and laudable things. No one wants dirty air and dirty water, regardless of what the current president may tell you. Even industrialists really don’t like pollution. But there’s a problem. You cannot have your cake and eat it too. If you want to have cheap goods made of plastic then somewhere there has to be a plastics plant full of all sorts of toxic and carcinogenic chemicals. If you want copper to make wiring for electronics and nice saucepans for your local snobby restaurant, you are going to have to put up with huge, unsightly holes in Arizona and gigantic stinking leach piles of copper ore. Most things cannot be made by good-hearted hippies using garden waste and love. There is a whole science of thermodynamics that explains exactly why manufacturing makes some part of the world a great deal grungier, but I think that if you’re smart enough to read this you have probably already grasped the point without the mathematical fuss.

So, here were all these do-gooders out punishing American industry for its sins. Those sins amounted to: making the air and water dirty (by making goods that everybody wanted) – and making lots of money doing it (and never mind that they provided the jobs that built the middle class). To be fair to the liberal planners, they did a lot of good initially. The Cuyahoga River that runs through Cleveland stopped its novel habit of catching fire. Smokestacks began making pretty white smoke instead of dingy black smoke. I am talking about smoke, not people – so spare me any racist accusations. There certainly is a level of cleanliness that leaves industry with a manageable burden, and everyone, for a while, is happier when that level is reached. Unfortunately, the problem with regulators is that having fixed a problem they will inevitably look for others to fix. If they can’t find any significant problems, they’ll magnify the importance of progressively smaller and smaller ones. They do not call themselves progressives by accident. To give one small example, there are now a number of regulations that deal with the danger posed by chemicals found in upholstery foams – not dangers to workers, but dangers to consumers. My memory may not be what it used to be, but I cannot recall a single instance in my lifetime of seeing a family expire in front of their TV, cut down by the lethal outgassings of their couch. My instinct is that if you lie on a couch long enough for the chemicals in the upholstery to kill you, obesity and heart disease will probably kill you first. It really will be too much Xbox and too many of Mayor Bloomberg’s famous 24oz. sodas, and not the polybrominated biphenyls which snuff out junior’s one and only non-virtual life.

The gradual expansion of environmental regulation has had several consequences, probably unintended. At some point, people who thought that manufacturing things could be a fun and lucrative way to make a living stopped thinking that. People are not naturally inclined to spend their lives in legal battles with Federal regulators, so the ambitious and creative ones ventured into finance, government, and the law – in other words, into occupations not involving smelly chemicals. Being in finance allows you to engage in business without getting your hands dirty in a literal sense. You can just make money buying and selling other people’s companies, mortgages, grandmothers, etc. Enron was never an energy company in any physical sense of the word – it was more-or-less an investment bank with a narrowly targeted set of assets. Going into government or the law, on the other hand, is following the adage – If you can’t beat ‘em, join ‘em! Lawyers, bureaucrats and politicians aren’t really in the business of making wealth – which is to say, real, tangible goods – any more than their banker counterparts are. All of these occupations are, for the most part, engaged in a zero-sum game of moving money around – taking from the haves and giving to the have-nots with one hand, and taking from the powerless to line the pockets of the powerful with the other. I sometimes think the processes of taxation, spending, and civil law might easily be replaced with a few traveling roulette wheels. Everyone puts their money in and everybody gets a chance. It would be only slightly less fair than the system we have now, but would be a good deal more honest and cheaper to operate. Like a state lottery, only bigger. Any of our recent presidents or presidential candidates, with the possible exceptions of the wooden Al Gore and the grumpy John McCain, could easily serve as charming game show hosts…

Sorry. I digress.

Anyway, when industry became a sort of hated, punishable offense the financial people figured out that the perfect thing to do was move production overseas. Americans still wanted the same cheap plastic stuff and it wasn’t too hard to find countries without environmental laws where it could all be made. The boom manufacturing markets have traveled like a stinky circus around the Far East for decades, finally settling, improbably, in China. The Chinese “Communists” belatedly concluded that authoritarian central government works pretty well – but central economic planning doesn’t. Here in the enlightened west, we are still trying desperately to accomplish both.

There’s another rub here of course – one which none of our best and brightest planners managed to see, or, if they did, they didn’t take seriously. As I said earlier, people who grow up getting richer every year tend to be outrageous optimists. They didn’t fully realize that stinking up the air and water isn’t just a way to make Winnebagos and Barbie dolls – it’s a basis for real international power. He (or she) who makes the goods holds power over those who want or need them. We imagined, somehow, that we could let the Chinese make the goods and pollute their countryside, but still walk off with not only a clean environmental conscience but a nice tidy profit. America, in other words, would stop making things and just own everything. China, with its strong central government, has clearly gotten tired of that game. It passed a law declaring that the controlling interest in all Chinese companies would henceforth be Chinese. American business may be infuriated by this – but what else could it expect?

The less material wealth America produces, the more its international economic standing becomes a conjuring trick. We have gone a long way on the momentum of our own legendary power, but the engine is smoking suspiciously and the fuel is running very low.

Speaking of which, the fuel is running low. All that cheap Texas oil, and most of the Alaskan oil, is gone. Most of the Mexican oil boom is gone. The North Sea oil is merely a trickle. Even Saudi Arabia has sucked out its spare capacity. The price of oil is permanently over $80 a barrel for a reason. That reason is supply and demand.

There is an oil boom in the United States and Canada – of a sort. It is not because there are new major discoveries. The much-talked-about Bakken shale in North Dakota was discovered in the early 1950’s. The Canadian tar sands were discovered by the Hudson’s Bay Company in 1719. Nor is the boom the product of any breakthrough in technology. Horizontal drilling is 1970’s technology. Commercial hydraulic fracturing started in 1949. What has actually created the boom is a price of oil reliably above $80 a barrel. When oil was $25 a barrel, recovering it by expensive hydraulic fracturing was a losing proposition. At $80 a barrel, it’s a bonanza. If one levied a tax of $50 or $100 on a $25 barrel of oil, anyone with a non-fantasy understanding of economics could see immediately what a brutal economic burden it would impose. Somehow, when the law of supply and demand itself imposes the same burden, many conservatives imagine it’s a sign of hope!

The environmentalists, of course, are living on a mental planet all their own. My favorite example of this is the part of the green lobby that opposes wind power. Wind turbines have many problems as a power source, but they don’t pollute (apart from the pollution created in manufacturing them) and they are at least in the same ballpark as coal-fired power plants in terms of cost. The ultra-greenies complain, however, that the whirling blades kill birds and make an eerie moaning sound – no doubt the howling of the morally outraged mother Earth. It takes the wind right out of me to think about it. I live near a liberal college town whose lawns and shop windows proudly sport signs proclaiming “No Fracking Way.” No hydraulic fracturing for natural gas, in other words. These people all heat with natural gas, of course. I have a feeling a couple of January days without gas in their furnaces might make them reconsider. “Frack it! I want heat!” On the other hand, changing your mind requires at least some rational understanding of the world – so I would probably be disappointed. It is really just as likely that these people would blame the gas company for engaging in a right-wing conspiracy – the gas company obviously being run by rich, bigoted homophobes.

I believe the economic future of the country would be better guided by conservatives than by liberals. While it is true that conservatives are prone to a certain kind of naïve optimism, they are generally practical people who will get up off the floor on their own when reality knocks them down. Liberals will cry that the government needs to do more – and they will beat up the only people whom a crisis has left standing.

October 16, 2013

Writing, tweeting, and barking

I used to pursue a public forum the old-fashioned way, writing books and laying them before the cold, businesslike eyes of editors. The writing part was always enjoyable, but the self-promotion exercise was boring in the extreme. I never found rejection slips humiliating – only tiresome. Little anticlimaxes at the end of modest expectations. Editors, for the most part, are looking for marketable material and have trained their noses to detect good sales prospects. They are not fine arbiters of artistic merit, much less the judges of the ultimate value of one’s soul. Marketability and self-worth really should be two different things. If you want to feel better about a publishing failure, go to a bookstore and have a look at all the worthless dreck that actually succeeds. It may be better to rule in hell than to serve in heaven – but it is also better to go unrecognized than to be a famous writer of dreck. Unless you are too shallow to know the difference – in which case you had better move on to another article because what follows will probably only insult you, if that’s possible.

We are all authors now. Anyone with fingers and an intact chunk of cerebral cortex can fling an electronic message-in-a-bottle into the abyss of cyberspace. One may blog and tweet and Facebook and Reddit and comment and venture into more venues than I can count. In one sense, it is nothing new – an electronic medium is just another medium, another kind of paper – but in another sense it is uncharted territory that humanity is stumbling enthusiastically into. Even grandma can have her Facebook page. In theory, Tamil villagers in Sri Lanka can be enthralled with gripping tales of the grandkids and their puppy, or whatever else the industrious grandma might choose to inflict upon a fascinated world. Everyone can strive to be famous, without the tedious, unsympathetic bulwark of the editor. Free at last! Free at last! Except, of course, that no one really cares about you now who didn’t care before. The Tamil villagers have no reason to care about Grandma, her progeny, or little Toto either. The only nearly sure-fire method of acquiring a willing audience is to make an utter fool of yourself in some novel and usually disgusting way. YouTube overfloweth with such fools. Video yourself jumping off a diving board into an empty swimming pool, or taking a dump in the aisle of your local grocery store, and you will get your 15 minutes of fame. A certain fraction of murderers and would-be murderers have always understood this possibility. John Hinckley not only got Jodie Foster’s attention by shooting Ronald Reagan – he got everybody else’s too. A sad, sick nobody became a sad, sick somebody. We all know Dzhokhar Tsarnaev’s name and what he looks like. He didn’t even need an agent.

People throw themselves at the internet in all sorts of ways and for all sorts of reasons. I want my ideas to be heard, and I thereby hope to alter the world, however slightly, into something a little more to my liking. I want to be immortal, if only in a footnote somewhere. It is a ridiculous quest, of course, but most of the people out there trying to save the world, or some portion of it, are not actually doing anything better than that. We want our lives to have some meaning, and, paradoxically, the more of us there are and the more noise we make the less meaning we all have. Meaning is a luxury we indulge in when we aren’t starving or freezing. The pursuit of it is a bit better than self-mutilation or shooting holes in politicians du jour, but it isn’t a special or sacred avocation. It is just a quirk of the brain – a sort of Nietzschean will to power with a sugary glaze of altruism to make it more palatable to modern sensibilities. I should like to expand the realm of “me” by making you think like “me.” Who cares? So what? But I have held onto you this far – and that’s something. At least we can be ridiculous together.

Social media is a different species of animal from what I consider “real” writing, but it serves some of the same functions nevertheless. If it’s not an outreach of ideas it’s at least an extension of small talk. It is the communication mode of choice for those who find security not in meaning, but in simple attachment. It is now possible to engage strangers in the same sort of mutually reassuring babble we used to share with family members, friends, and neighbors. It is also possible to reinvent yourself in the convenient absence of a commonly known personal history – or even a body. I am a bit too misanthropic to spend my life in one long session of social network maintenance, so I admit I don’t have much to say about this sort of activity. I’m a deaf music critic when it comes to Facebook. Twitter, however, I despise on principle. Any form of communication that constricts an interlocutor to 140 characters is a direct assault not only on linguistic subtlety, but on the complexity of any underlying thought. It is reminiscent of Orwell’s Newspeak from the novel 1984. The intention of Newspeak was to constrain language to such a limited set of words that seditious or otherwise dangerous ideas could no longer even be entertained. This is language as a sort of straitjacket for the mind. Twitter, not surprisingly, has become a primary means for politicians and other celebrities to commune, in a sort of grunting troglodytic way, with their fans. I know I’m not insulting anybody here, because if you’re a dedicated tweeter, I’ve long since exceeded your attention span. LOL! ROFLMAO!

Last, and least, among the varied breeds of cybernoisemakers is the habitual snarky commenter. To be fair, I have read a few very insightful comments, and more than a few amusing ones. These are worthy products of the mind, and not the idle barking of the sort of beast I’m talking about. The kind of loathsome creature I’m referring to is the one whose favorite word is “idiot” or “asshole” and who takes a clear delight in name-calling from the anonymous safety of his smart phone – or whatever connective device they have these days in hell. Decades ago, public bathroom stalls were lavishly decorated with random (and not-so-random) insults, crude sketches of genitalia, and the general outpourings of small and primitively antisocial minds. Public bathrooms are much cleaner places now – but I’m pretty sure I know where all their unpaid decorators have gone. “FUCK YOU IDIOT!!!” Hmm. The similarity is uncanny. Some people don’t have the courage to pitch themselves headfirst into empty swimming pools to get attention. Still, in the spirit of tolerance and generosity, I wish them all the best in finding that much courage.

October 14, 2013

A simple argument for conservatism

I have a number of liberal friends and acquaintances, and have been considering how to make the best and simplest argument for the conservative cause – which I take to be the cause of limited government. This is that argument.

If you are a US citizen, your Congressional Representative in the US House now represents more than half a million people. If you live in one of the least populous states, your Senators represent about the same number. In the most populous states, the Senators each represent tens of millions of people. The odds are very good that you have never seen your Congressional Representative or either of your Senators. The odds are even better that you have never spoken to them. If you write to any of these federal legislators, you will be virtually guaranteed a form letter rubber-stamped by (and probably also written by) some minor member of the legislator’s staff. If you write on a matter of general concern, you can expect that the rough content of your letter will be compiled with thousands of others into a statistic. If your concerns are unique, you can expect they will be more-or-less politely ignored.

When your Congressman or Congresswoman and your two Senators climb the stairs to do the people’s business, it is now a rare thing for them to write or haggle over the detailed content of new laws. Rather, they vote on bundles of paper they have neither written nor even read. Documents thousands of pages long, written by staffers, lobbyists and federal bureaucrats – documents your representatives could not possibly read regardless of their level of public-spiritedness and good intentions. They vote on their ideas about what the bundles might contain – typically along ideological or party lines.

Frequently, these legislators you have never seen or spoken to don’t even get to vote on the bundles of paper they have neither written nor read. Two people, the Speaker of the House and the Senate Majority Leader, set the agenda for Congress, deciding what is brought up for a vote and what gets dropped into the legislative abyss. When your legislators do get to help vote a bundle of paper into law, and the President decides neither to veto the bundle nor to direct the bureaucracy not to enforce it, dozens or hundreds of faceless people you will never have the privilege of voting for implement and flesh out the bundle of paper in ways that could profoundly change your life.

These are simply the facts. You may think that your party is good and the other party is evil, but in the best of circumstances the form of government you have is almost wholly untroubled by your wishes. You don’t have even weak representation – you have only a ghost of representation – a pathetic parody of the common will. You might, through some incredible act of blind optimism, believe that the people running things are the best, brightest, and most well-intentioned in society – but you cannot possibly believe they express your will. If you believe such a thing, it is only because you have chosen to align your will with theirs.

Send a letter to your state representative, and you have some chance of getting an actual response. Send a letter to your local city councilperson and you will very likely get one. The odds are better, too, that they will understand the issue you’re addressing. Putting more and more power into the hands of the central government is, straightforwardly, to surrender both freedom and representation. It cannot be otherwise. To advocate for such a course is, ultimately, to argue for your own political irrelevance.

October 6, 2013

wave with echo

wave with echo – image by e.m. cadwaladr on Flickr.

Protesting in the street

Last Friday I stood on a street corner with a small group of strangers, protesting the so-called "Affordable Care Act," a.k.a. Obamacare. Back in the 80's I protested nuclear arms and the U.S. intervention in Central America. Once I was an enthusiastic leftist; now I am a somewhat reluctant, rather nuanced conservative. Experientially, however, a protest is a protest. It is all the same. In my youth, blind, angry people called me a communist. Now blind, angry people call me a racist. To oppose the enthusiastically anti-communist Ronald Reagan labeled me a communist. To oppose the historically black Barack Obama labels me a racist. Logically speaking, this sort of reasoning is a fallacy known as affirming the consequent.1 If you protest on the street corner, you must abandon any notion that you will be perceived as an individual, expressing your views to other individuals. You will inherit every prejudice, good or bad, of every passerby. As you throw yourself into this most direct of political expressions, the polity itself will strip you of any personal identity. You will be merely the embodiment of everyone's narrow slogan and everyone's haphazardly scrawled sign.

There is, of course, no other way. One must stand upright, bravely, and take it as it comes. You must endure the hatred of those who hate you out of prejudice, and of those who hate you simply because you have disturbed their illusion of order and peace. If you cannot endure this, then quietly accept the world other people make for you – or find a wiser, better, more just species than humanity.


----------------------------------------
1 http://en.wikipedia.org/wiki/Affirming_the_consequent

October 4, 2013

Notes on Morality

Although many have come to consider the very concept of morality something of an antique superstition, I would like to wrest the idea from the ashes of postmodernist neglect to see if any part of it is still serviceable. I will do this from a largely philosophical perspective rather than a religious or strictly scientific one. For the sake of the uninitiated reader, I will begin by laying out a (painfully) brief summary of the better-known moral systems.


I. A moral review

The first moral system I will sketch is utilitarianism. This doctrine was explicitly formulated by Jeremy Bentham in the late 18th century and later elaborated by John Stuart Mill in the 19th, but utilitarian tendencies can certainly be found in earlier thinkers. Utilitarianism is popularly summarized as “the greatest good for the greatest number.” That which promotes happiness (or avoids suffering) is good, and a moral action is that action which promotes good for the greatest number of people. Because utilitarianism defines good as ultimately synonymous with some increase in pleasure, the system is fundamentally a hedonistic one.

Utilitarianism is very straightforward conceptually, but it has a number of uncomfortable implications. Because it demands the maximum collective benefit, it negates any notion of immutable individual rights. In principle, if someone is dying of kidney failure and would benefit from one of yours, utilitarianism would demand the confiscation of one of your kidneys as a net good. Likewise, utilitarianism does not oppose theft in principle, so long as the distribution of stolen goods produces a greater collective happiness than their retention by the original holder. The doctrine also suffers from the deep problem of comparing various kinds of moral outcomes. How does one weigh a mild increase in the comfort of many people against the deep suffering of a few? What is the value of a life in units of the general collective good?

An entirely different family of difficulties arises for utilitarianism because human beings are rarely wise enough to predict the long term outcomes of their decisions. To expand the earlier example, imagine your kidney is confiscated to save the life of a person who subsequently murders several other people. Was the action good because the initial result was good – or, even more tenuously, because the intention was good? Far from being an abstract philosopher’s objection, the predictive problem is a serious flaw. One need look no further than the government policies of any nation to find countless examples of unintended negative consequences.

Collective happiness isn’t the only basis for a moral system. Deontological systems – doctrines which either begin with or produce fixed, specific moral rules – are not built on the foundation of maximizing pleasure. Deontology, literally, means the study of that which is binding. It is morality as duty. I would like to divide deontological systems into two categories: those based on a religious canon of one sort or another, and those based on the autonomous individual’s moral evaluations. The first category has many and varied members. The second category is most prominently represented by the moral philosophy of Immanuel Kant.

Although religiously based moralities take different forms, they share some common characteristics. All religious moralities are defined externally to the moral agent, typically in a body of ancient scripture. Christianity, Judaism and Islam define moral action as that which pleases God. Hinduism, Buddhism and Taoism (if I may pretend for the moment to understand the Tao) define moral action as that which is in accordance with a universal law or dharma. Religion either tells us specifically what good or evil actions are, or at least tells us that actions have an absolute moral status, even if our knowledge of that status may be incomplete. Thus, it is not the role of the moral agent to maximize the collective good (as it is in utilitarianism) but rather to seek alignment with a universal law which is either supreme in itself or an expression of God’s will. Where utilitarianism defines the good ultimately as pleasure, religion defines the good ultimately as obedience. These are deeply incompatible values.

The briefest possible synopsis of Kant’s system can be found in his categorical imperative: “Act only according to that maxim by which you can at the same time will that it should become a universal law.” In other words, a moral action is one that you would consider acceptable for anyone to undertake – even if it is contrary to your interests. The Golden Rule (“One should treat others as one would like others to treat oneself”) is a corollary of this. Although the categorical imperative does not necessarily conflict with religious morality, it has a different basis. It makes the moral agent responsible for his or her own moral evaluations. It differs from utilitarianism in making morality an individual rather than a collective exercise, and in defining the good in yet another way. Where utilitarians define the good as the maximizing of collective pleasure or happiness, and the religious define the good as conformity to the supreme law, Kantians define the good as the maximization of a specific interpretation of equality. It is not necessarily an equality of means or capacity, but is rather an equality of moral agency. Equality before the law is a concept that accords with Kant, rather than the more material equality that utilitarianism implies.

All deontological moralities possess a common strength by their very nature. Compared with the never-ending evaluative problems of utilitarianism, both religious and Kantian moral systems offer the benefit of clarity and stability. A religious canon does not change rapidly according to circumstances. Likewise, Kantian universals, once established, are fixed principles for the particular moral agent who formulates them. To overthrow a moral evaluation as circumstances change would violate the categorical imperative in principle. By contrast, the utilitarian notion that morality is to be found in the ends rather than in the means is not only practically problematic but counter-intuitive. At a visceral level, it is incongruous to believe that no action is wrong in itself, but only in light of contextual factors.

The dark side of deontological systems lies, primarily, in the specific moral rules that compose them. It can be said, by way of an example, that Islamic suicide bombers are moral according to their particular interpretation of the Koran. They are supported by specific teachings of their scripture. They are, incidentally, good Kantians as well. They would be perfectly willing to live in a world in which their moral views were universal. They would not mind if everyone were prepared to persecute and destroy the world’s infidels. This universality of their beliefs, in fact, is precisely their goal. While the categorical imperative implies a certain equality of moral agents, it says nothing about the characteristics that would qualify one for membership in that group. Disqualifying other people as persons is the time-honored way of eliminating one’s own moral duties toward them, and deontological moralities have a sad history of doing precisely that.

Finally, we come to moral theories based on sympathy. A good example of this kind of thinking is to be found in the moral writings of the 18th century political economist, Adam Smith. Smith’s theory was that we only care about the joys or sufferings of others insofar as we can imagine ourselves in their circumstances. We can never experience another person’s feeling directly. Morality, then, is rooted in our capacity to empathize. This is a position with more substance than is immediately obvious.

Even without developing Smith’s formulation any further, we can see that his approach to morality does different work than the systems we have discussed thus far. The salient commonality between utilitarianism and deontological moralities is that they are normative. They are planned strategies for achieving particular social and/or personal ends. Smith’s approach, on the other hand, is essentially empirical. He does not ask what sort of rules we should have, but rather asks what constitutes a moral sentiment in the first place. Smith approached the subject of morality in more-or-less the same methodical way he approached economics. While such a perspective is less helpful in providing simple moral heuristics than the other systems we’ve considered, it is also less prone to their obvious weaknesses.

In addition to the characteristics of utilitarianism I have already mentioned, an empirical examination of it would uncover the following. The claim that happiness is the ultimate good is not derived from experience, but is an axiom of the system. Likewise, the idea that the interests of the group should trump the interests of the individual is also axiomatic. If we ask “why” the interests of the majority should predominate we are left with nothing more enlightening than some restatement of the utilitarian program itself. Considered in the same way, Kant’s system yields nothing better. The categorical imperative essentially states that it is immoral to do things we would not want everyone else to do, but the warrant for this claim can only be some version of the categorical imperative itself. The religious moral systems of the west are also based on the same kind of axiomatic assertions, but at least they don’t claim to be intellectually enlightening arguments, being doctrines that by their own admission are based on revealed knowledge whose only warrant is faith. The religious moral systems of the east are typically as self-referential as western philosophical doctrines, but do, in certain cases, invite the practitioner to reflect on personal experience for a kind of verification. Smith’s analysis is not vulnerable to any such self-referentiality. The proposition that we cannot experience another person’s feelings directly is not a postulate, but a brute fact. The proposition that we feel for others only through an act of imagining ourselves in their circumstances is, if not quite a brute fact, at least as clear a product of introspection as we can hope to attain.1

Having sketched out the gross outlines of moral theory, we find nothing resembling logical certainty about what actions should correctly be considered either moral or immoral. In utilitarianism, we have a scheme that arbitrarily favors the interests of the collective, and which leaves the merits of particular actions to the vagaries of circumstance. In the categorical imperative, we have a principle which is only specific about its own scope of application, and is ultimately no more informative about the morality of specific actions than utilitarianism is. Religions enumerate sins in considerable number and detail, leaving as moral those actions which are not proscribed as sins. Unfortunately, since such systems vary somewhat, not only from religion to religion but from sect to sect, one is faced with the problem not only of faith in an invisible deity but of faith in the infallibility of one’s particular church. Further, it is innately unsatisfying, at least for me, to believe that goodness is reducible to obedience. In such a universe, if the deity were inclined to switch the lists of what is good and what is evil, obedience would compel us to steal and kill, and to abhor compassion, charity and mercy.2 The idea of a universal or dharmic law as the basis for moral strictures is a little more satisfying in that it gives morality a stable and unique character, but, here too, we fail to find a satisfactory agreement about the nature of such a law. The dharma is vague, and the Tao is utterly incoherent.


II. A morality without free will

Being unsatisfied with anything but Smith and company’s limited empirical observations, I must fall back on my own resources. To begin, I will have to return to my conclusion about the illusory nature of free will, which I have covered in detail elsewhere.3 Traditional views of morality make free will a prerequisite to moral agency. Traditionalists inevitably ask: “If one is not responsible for one’s own actions, how can one be held accountable for them?” The question shows the extent to which moral choices tend to get bound up in an adjudicative perspective. The assumption is that we are to be judged, either by God or some deputation of our peers, and that an action one performs as the result of non-volitional causes does not constitute something worthy of judgment. What we do because we are forced to, either by others or by some peculiarity of our brains, is somehow not really “us”. On this view, a world bereft of free will, in which everything is the product of physical causes, is a world bereft of moral content. Morality, on this account, implies responsibility. But let’s consider Smith’s analysis again. If moral sentiment just is a sympathetic reaction to the suffering of another, whether or not we engage in that sentiment volitionally does not really matter. To have a moral sentiment is to experience a feeling of sympathy, and to act morally is to be motivated by sympathy. Thus, to be a moral agent one must possess sufficient awareness to imagine others as entities like oneself, but one need not possess the extraordinary first-cause power necessary for the full sense of free will. In other words, that sympathy contributes to our decisions does not prove we are free, but does prove that we are aware enough to be sympathetic – and that our sympathies are strong enough to be relevant (though still determinist) heuristics.

Let me illustrate the concept above with an example. Imagine you see a car about to run over a small child. You are standing nearby, and have the capacity to intervene by running into the roadway and pulling the child out of the way. Assuming you are a typical human being, I think the following factors will bear on your decision of what to do. Having been a child yourself, you will have a kind of reflexive empathy on seeing a child in danger. You will value the child’s continued existence in a way that you would probably not value the continued existence of a snake or an inanimate object in the road. You may also relate to the potential suffering of the child’s family, especially if you have children. You might even imagine the lifetime of guilt that the driver might suffer if the accident is allowed to occur. Against this, you would probably fear the potential injury or risk to your own life in the event that you could not get away quickly enough. Sadly, in our society, you might also be paralyzed by the fear of a potential lawsuit should something unexpected happen. In far less time than it has taken to describe it, you would either decide to act or not decide to act, and you would do so in accordance with the strength and immediacy of the impulses I have outlined. Either way, it would certainly be a decision with moral dimensions – whether to risk your life for the life of a child or to avoid risk by letting the accident occur. Nevertheless, moral though it might be, there is nothing in the decision that resembles free will in any pure, first-cause sense. You neither decided how much to value the child’s life nor how much to value your own. You came to the accident as a collection of predispositions that were rooted in your unique causal history, including all of your prior experiences and every rung on the ladder of your DNA. Morality is the evaluation of what the collection of resources you identify as yourself actually does in real circumstances. In general:

A moral action is the sacrifice, on the basis of sympathy, of one’s perceived self-interests for the perceived benefit of another.

The idea that morality resides in what one perceives – in intent and in consciously performed actions – is important here. The rescue of the child in the example above does not become an immoral act if the child turns out to be Hitler. Although physicalism requires that the mind is best understood as a property of the brain, it is nonsense to believe that a determinist set of brain processes can act on information it either does not or cannot take into account. Consider the case of the Green Revolution, the globalization of modern agricultural methods that was carried out from the 1940s through the 1970s. In general, the various programs of the Green Revolution were initiated by intelligent people with the laudable purpose of alleviating hunger in the less developed nations of the world. In the short term, the effort was fairly successful. In the long term, it was a key factor in nearly tripling the world’s population – from 2.5 billion in 1950 to over 7 billion now. Roughly speaking, every hungry human being in 1950 has now been replaced with three. It is difficult to call this progress. Nevertheless, although one might call the founders of the effort naïve, shortsighted, or simply under-educated (Malthus accurately outlined the mechanisms of population growth in 1798!) you could not say their decisions were immoral. They were quite moral, being based, at least to some degree, on sympathy. The negative consequences of the Green Revolution, however bad, were almost certainly unintended.

Returning from public policy to a more individual perspective, one might object that a moral definition based on individual evaluation would suffer from the same problem as Kant’s individually evaluated morality. In fact, there is a crucial difference between the two evaluations. Kant’s morality permits anything the moral agent would be prepared to accept as universal. My interpretation of morality, based on Smith’s, defines as moral those sympathetically motivated actions that the moral agent perceives as benefitting another. The suicide bomber passes Kant’s test, but does not pass the test I’ve proposed. More generally, Kant’s morality permits whatever degree of selfishness the moral agent is prepared to accept from everyone else, but by my definition morality is lost whenever the moral agent willfully causes more suffering by an act than he or she expects to alleviate by the same act.

In thinking of moral behavior as a manifestation of sympathy, one should be aware of the variability in individual human beings’ capacity for feeling sympathy. Some people are simply more compassionate than others. The totally unfeeling sociopath is, by my working definition, incapable of moral action. Though such a person’s condition may be wholly organic, his or her actions do not escape moral evaluation on those grounds. The only causal antecedent that is relevant to an action’s moral status is the intent, sympathetic or otherwise, of the moral agent. It is not morally exculpatory, on my elaboration of Smith’s account, to claim your actions are merely the product of an unbalanced brain. Snakes do not make the choice to be snakes, but they are snakes nonetheless. A helpful act carried out for non-sympathetic reasons is not moral, but a willfully hurtful act is immoral, regardless of its ultimate consequences.

Behavioral traits that are the product of experience cannot be exempted from moral evaluation either. A person who has been conditioned to behave immorally – which is to say, to knowingly bring about suffering without any countervailing benefit – is immoral nonetheless. Again, morality is a characteristic of the individual, expressed through patterns of sentiment and behavior. To deny that environment is exculpatory, however, does not imply that it’s irrelevant causally. Within the behavioral limitations that are the product of our genetic makeup we are changeable, even if, in an absolute sense, we are not free. We are moral or immoral – sympathetic or selfish – largely to the extent that those traits have been encouraged or discouraged in us by events in the world. Again, these are attempts to describe morality in a coherent and consistent way, not attempts to formulate idealized normative standards.

The average moral agent is neither a saint (who feels sympathy for everyone) nor a sociopath (who feels sympathy for no one). Rather, the ordinary person sorts the world into in-groups and out-groups and mediates sympathy accordingly. Perhaps this is not a philosophically impressive way to parcel out moral sentiments – it does not sound as lofty as the categorical imperative – but it is the way that most moral sentiments get parceled out in actual practice. It is a rare human being that doesn’t attach more sympathy to some groups of people than to others, usually preferring the familiar to the alien. There is a straightforward evolutionary reason for this: Any organism that is trusting of strangers by default makes itself defenseless against them.4 To have compassion for the injured rattlesnake does not give you the slightest immunity to its venom. A snake will not act morally toward you because you choose to act morally toward it. Thus, while empathy is obviously an evolutionary advantage in that it makes social cohesion possible, to include everyone and everything under the umbrella of sympathy is, in the real world, suicidal.


III. Some sociological outcomes of normative moral systems

Although you can never get a normative standard out of an empirical observation (in Hume’s formulation – you cannot get an “ought” from an “is”) you can at least make reasonable predictions about what sort of outcomes a particular consciously planned “moral” system (e.g. utilitarianism) would be most likely to produce. It is important to emphasize, however, that when we talk about what ought to be according to any consciously planned moral framework, we have ceased to discuss morality in the robust sense we can get at empirically. All consciously planned “moral” systems violate my definition of what morality is. Again – A moral action is the sacrifice, on the basis of sympathy, of one’s perceived self-interests for the perceived benefit of another. When “moral” actions are the product of normative rules, they are no longer necessarily the product of sympathy. While it is possible to have an artificial moral code so deeply ingrained in one’s makeup that one feels its strictures sympathetically, it is also possible to follow the rules of such a system out of unfeeling habit or even simple fear. The products of habit and fear are not, by my account, moral. What we are talking about instead of morality are different schemes for organizing societies. While these ideological constructs do have moral ramifications, adding their share of input to an individual’s decision criteria, all of them, even Kant’s, seek to mold the individual’s behavior to someone else’s ideal. Faced with the dark history of humanity’s utopian “moral” programs, let us do our best to retain our bearings as we forge ahead.

While different plans for organizing societies pursue different outcomes, the default desired outcome is always bare survival. Where does this leave sympathy? Obviously, it is not conducive to an individual’s prospects of survival to include everyone under the umbrella of compassion, nor to include no one at all. Neither extreme is conducive to the survival of a society as a whole. Rather, to optimize the survival advantages of social interaction, human beings must nurture relationships which are beneficial while avoiding those that are undertaken at a loss. Further, mutually beneficial relationships are to be preferred over predatory ones because they are inherently more stable. While it is possible to have a society that tolerates theft in certain contexts, any society that encourages theft as a universal principle will annihilate itself. The same can be said of a general acceptance of murder. It is not by accident that all traditional “moral” systems eschew or severely limit theft and murder, at least within the sphere of their adherents. Any society which failed to do so wouldn’t be a society in any intelligible sense.
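The stability claim can be given a crude formal illustration. What follows is a minimal sketch in Python – my own toy construction, with invented payoff numbers, not anything drawn from the literature – of repeated exchanges between a reciprocator, who cooperates until cheated, and a predator, who always exploits. It is the familiar iterated prisoner’s dilemma in miniature.

    # A toy model of repeated social exchange. Payoffs and strategies
    # are illustrative assumptions, not empirical values.

    PAYOFFS = {
        # (my_move, their_move) -> my payoff per encounter
        ("cooperate", "cooperate"): 3,  # mutual benefit
        ("cooperate", "defect"):    0,  # I am preyed upon
        ("defect",    "cooperate"): 5,  # I prey on the other
        ("defect",    "defect"):    1,  # mutual predation
    }

    def reciprocator(their_past_moves):
        """Cooperate first; thereafter mirror the partner's last move."""
        return their_past_moves[-1] if their_past_moves else "cooperate"

    def predator(their_past_moves):
        """Defect no matter what."""
        return "defect"

    def play(strategy_a, strategy_b, rounds=100):
        """Total payoffs for two strategies over repeated encounters."""
        seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    print(play(reciprocator, reciprocator))  # (300, 300): mutual benefit
    print(play(reciprocator, predator))      # (99, 104): predation pays once
    print(play(predator, predator))          # (100, 100): mutual subsistence

Over a hundred encounters, two reciprocators accumulate 300 points apiece, while the predator’s early windfall decays into mutual subsistence at about 100. On these toy assumptions, at least, the predatory relationship is not merely nastier; it is poorer, which is one formal reason mutually beneficial relationships are the more stable arrangement.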

Stability is by no means a sufficient condition for most definitions of societal success, but neither is it an accomplishment to be denigrated. All of the developed “moral” ideologies of which I am aware have stability as an implicit goal, even those systems that seem preoccupied with change. For example, I can think of no egalitarians who would be satisfied with achieving universal equality for a year or two, and then having society degenerate into some other state of social relations. Likewise, libertarians do not strive for a single generation of perfect freedom, after which the game is won and the social order becomes irrelevant. The religious may view society as having an apocalyptic endpoint, but only a handful want that end to come sooner rather than later. People can conceive of apocalypses both religious and scientific, but we are genetically predisposed to care about the fate of our children and grandchildren, and to want something better for them than a desperate life in a declining world. The desire for stability is innate.

Given that the attainment of stability is an implicit requirement for any non-absurd “moral” code, it is clear that the arbitrariness of a behavioral rule is of less concern than its caprice. An arbitrary but long-standing rule, like one prohibiting the sale of alcohol on Sundays, or even one prohibiting certain arguably harmless sexual practices, may induce a degree of suffering in some but strengthen the social cohesion of many. Stable rules, whatever they might be, give a culture a certain shape and character. On the other hand, a capricious new rule, no matter how immediately useful, leaves the rules that govern behavior suspect, formless and ephemeral. That which the dictator or the social engineer finds expedient today might as easily be ruled punishable tomorrow. While tradition may dull the imagination, caprice forces the citizen to deny his or her own memory – and become little more than a robot awaiting the next command. North Korea, for example, has neither a culture nor a moral system – it is simply a terrible living expression of the whims of the Kim family.

As I alluded to earlier, there is reason to believe that some degree of reciprocity is also a bare requirement for any consciously planned moral system. Both utilitarian and deontological systems entail some degree of reciprocity. By “reciprocity” I mean:

Those social relationships that are perceived as beneficial to all of the participating parties.

The concept of a perceived benefit is important here. Arguably, a shaman whose social function is to serve as a liaison with a non-existent spirit world contributes nothing real, but so long as he and his devotees believe the function is meaningful, it is nonetheless a trade item in the market of social exchange. Magicians who practice sleight of hand engage in reciprocal relationships with those who pay to watch their tricks. The pickpocket who quietly robs the crowd without their knowledge or consent does not.

Reciprocity does not imply equality or fairness. Imagine a boxing promoter who pays his fighters one percent of the receipts and pockets the rest for himself. This relationship is neither equal nor fair, but it is still reciprocal. A relationship is reciprocal so long as all the involved parties derive sufficient benefit from it to engage in it willingly. Working for the minimum wage is engaging in a reciprocal relationship, whereas working in a North Korean labor camp is not.

Reciprocity supports stability by helping to ensure that most people get enough out of the general social exchange to meet their own minimum needs. People who cannot get their minimum needs met in the social marketplace have no stake in society, and can hardly be blamed for anti-social behavior. Likewise, those who engage in wholly predatory relationships, at any socio-economic level, weaken the bonds that hold society together. While equality and fairness may be little more than idealistic abstractions, it is obvious that any functional set of “moral” standards must be able to maintain a critical mass of interpersonal cohesion sufficient to make the game at least tolerable to most of its participants.

In practice, utopian systems of social organization tend to fall into one or the other of two categories, depending on whether they emphasize equality or freedom, collectivism or individualism. In the end, both appear predestined to fall prey to quite non-utopian tyranny or its modern expression – totalitarianism.

Dogmatically egalitarian systems of social organization, that throw their arms around people without regard to their capacity or inclination to contribute useful effort, violate the requirement of reciprocity. Socialist systems of social organization begin in popular enthusiasm but have a predictable trajectory. Eventually, their outlay of benefits becomes unsustainable. They must meet the reciprocity standard or dissolve, and have no better recourse than to coerce work out of the unwilling by either force or the threat of force. At that point their egalitarian program becomes a joke. It is no use claiming that all are equal when all are subservient to the authority of the State, because the State itself is always run by planners who are not subject to their own rules. Decision makers are inherently not equal to non-decision-makers. Equality under socialism is never more than a linguistic trick. Socialism fails the survival test by ceasing to be distinguishable from other forms of authoritarianism. What begins as utilitarian justice ends in the despotic cynicism of the central committee.

Dogmatically free ideologies, such as libertarianism, objectivism and free market capitalism (at least in its present form), fail the stability test. While such systems do not preclude the possibility of human cooperation, they consider cooperation a subordinate virtue at best. Even a cursory look at the history of the modern period, dominated by the rise of capitalism, reveals a world whose salient feature is instability. My point here is not that freedom is bad, or even that a high degree of material inequality is bad, but that any system whose core principle is freedom is probably doomed to innovate its way out of existence. In an atmosphere of perfect freedom, the strong will rise to dominate the weak. In the particular case of free market capitalism, where once the capitalist elites were predominantly industrialists who made money by making and selling things to large numbers of people, they are now predominantly bankers who make money by moving the electronic tokens of ownership around. Love him or hate him, you must admit Steve Jobs directed Apple Computer to produce real, useful things – and thereby created real wealth. It is in no way obvious that an entity like Goldman Sachs creates wealth at all. A major bank redistributes and effectively creates money, but that is not the same as creating wealth. Money is a medium of exchange for the transfer of wealth, but is not wealth itself. The U.S. Bureau of Engraving and Printing has the capacity to make everyone on the planet a millionaire, but has no capacity to add so much as a bushel of millet to the world’s collective wealth. When the easiest path to personal power is to move paper around, the actual creation of tangible wealth declines. The present stage of capitalism also violates the principle of reciprocity, in that it concentrates power in the hands of people who contribute little, if anything, to the common weal. Like its socialist counterpart, capitalism lacks a mechanism to resist the relentless centralization of power.5 Thus, the endgame of capitalism appears to be graft, corruption, and the destruction of the very markets that made it flourish in the first place. A different shade of authoritarianism – but authoritarianism still.

In opposition to my argument that the world seems bent on the centralization of power by one path or another, it is certainly worth asking why the last half century or so has been characterized by the adoption of democracy by so many countries. A satisfactory explanation would require an essay in itself, but I will try to at least sketch an explanation here. In the main, democracy both follows and promotes a certain level of material achievement. Nations that become wealthy – that is, engage in a high degree of resource consumption – can afford to distribute some amount of wealth without impoverishing their elites. When a society gets rich, everyone can become at least a little richer than they were. This lowers the need for political repression to keep the population under control. The elites who control nations do not cease to exist, but they can tolerate more individual freedom and more public participation in policy making without risking upheaval. Unfortunately, this is not a moral achievement but simply a feature of economic growth. When growth declines, freedom and real democratic participation also decline. The force of wealth creation becomes insufficient to resist the natural centralization of political power, although the old democratic slogans and the ghosts of democratic institutions may remain in place. In aging democracies like those of Europe and the United States, people have become demonstrably less free, which is to say more and more at the mercy of intrusive bureaucratic authorities in which they have little or no voice.

This outlook is admittedly grim, but it is supported by history. While I can only speculate, it may be that the centralization of power is inevitable given certain conditions (i.e. huge populations and resource scarcity). If that is true, all the utopian forms of social organization one can devise are no more than palliatives to soothe the consciences of the leaders and brighten the outlook of the subject classes. To take the obvious case, the authority of Stalin was no less absolute or brutal than the authority of the Czar, although nominally Stalin ruled in the name of the people. On the other hand, if we consider late-stage capitalism, which purports to be the opposite of Stalinist communism, it is equally obvious that when politicians pursue money and money pursues politicians, a kind of aristocratic authoritarianism arises that is no more democratic than any other. I see no rational reason to be hopeful that the future will be kinder, fairer, or more pleasant than the past. We may not fully understand the underlying forces that shape our world, but we know from experience that no law of nature guarantees our success.

Morality, I believe, is only a useful concept at the rudimentary, individual level where Smith found it. We can see another person suffer and have sympathy for them in some direct, immediate sense. When we engage in making “moral” public policy decisions we show sympathy not toward any real persons, but toward our own ideas about them. The reformer Frances Perkins said of Robert Moses, the great urban planner of New York City, that he loved the public but hated people. It was an insight worthy of much wider application.


-----------------------------------------------------------

1 Strictly speaking, I can only introspect such issues for myself – but I can imagine you have the same capacity. While I admit that this also constitutes a kind of circularity, it is still a stronger position than inventing moral axioms out of whole cloth, no matter how eloquent or ingenious they might seem.

2 I’m not a big proponent of Kohlberg’s stages of moral development, but I am still suspicious of any system that stops its program at stage 1! http://en.wikipedia.org/wiki/Lawrence_Kohlberg's_stages_of_moral_development
     I am always struck by the story of Isaac and Abraham, in which God asks Abraham to make a blood offering of his son, and only relents when Abraham is about to show obedience. I believe that the great majority of Christians reading this story heave a sigh of relief when God relents and, after all, does the right thing by sparing Isaac. Yet if goodness is defined as that which pleases God, then God himself is not even subject to moral assessment. In other words, God does the right thing – by definition.

3 http://cadwaladr.blogspot.com/2010/03/case-against-existence-of-free-will.html  If I refer to this essay often, it is because I find it foundational to so many other conclusions.

4 The dodo leaps (or perhaps waddles) to mind. http://bagheera.com/inthewild/ext_dodobird.htm

5 This is the fatal flaw in F.A. Hayek’s argument for capitalism. While he acknowledged that markets required some external oversight to keep them functioning freely, he did not explain why a body of people with the authority to perform that function would be any less cynical than socialist planners. Twenty-three hundred years of intellectual effort haven’t managed to come up with anything much better than Plato’s guardians. The framers of the U.S. Constitution, of course, made a noble attempt.

September 3, 2013

Work and reciprocity

Several years ago, I did some minor volunteer work for a local sheltered workshop, a facility that gives the mentally or physically disabled the opportunity to do light assembly work for local businesses. The work typically consists of stuffing envelopes or constructing simple machines – whatever is within the capacity of this most marginal group of workers. Despite the necessary simplicity of the work involved, you shouldn’t imagine that employing these people is an act of charity. In fact, with the assistance of good organizers and a few physical aids to steady shaky hands, their output is good and their quality is often better than that of people with abler bodies and minds. They are happy to get a paycheck – to be adult, productive members of society. There is a lesson here for all of us.

As a culture, we don’t respect work much anymore. Many people, rich or poor, do their best to avoid it. The trick is to make money without working, and whether one does that by siphoning wealth out of other people’s misfortune through a hedge fund or by mooching off an undeserved public entitlement the motive is the same. Work is for suckers. The awkward people at the sheltered workshop, struggling to assemble mailers and put little plastic parts together, are the biggest suckers of all. Not only do they work more than they have to, they have the naïve belief that it gives their lives some measure of meaning and dignity. What a bunch of retards.

Like it or not, the very basis of society is reciprocity. In our personal relationships, most of us recognize that a person who takes from us without giving anything in return is not our friend. Unless you are possessed by pathological self-loathing, you’ll do your best to avoid people whose goal is simply to use you. You will also do your best to be at least decent to the people that matter to you. In a healthy society, this simple understanding – that we have responsibilities as well as needs – is widespread, shaping most of our relationships with others. Work, in almost every society on the planet up until now, has been the default form of reciprocity. To work is to make things or do things for others. To be paid for working is to earn the right to have things made and done for you. There have always been inequalities in this exchange, just as there have always been inequalities in personal relationships, but reciprocity has always bound us together nonetheless.

A society that encourages stealing, cheating, swindling and mooching will not be a society very long. To be fair, neither will one that cannot provide enough real jobs for most of its citizens. The official unemployment rate of around 8% is a widely acknowledged joke. As the Federal government sees it, if you’ve been out of work for a few months you’re not unemployed anymore. You’re “discouraged”. You don’t count in the statistics. If you are collecting disability because you have some back pain, you are not unemployed either – even though you could work at a desk, answer a phone, stuff envelopes, or do a million other things. All of the “disabled” I know are far more “abled” than any of the people I saw working in the sheltered workshop.

The real unemployment number is hard to estimate, but it is probably more than double that laughable 8%. If all of the real unemployed got up one morning and decided to look for jobs, it is obvious that most of them would not be able to find one. We live in a society in which working and acquiring a means of support are no longer reliably connected. We departed from that path a long time ago. Instead, we went down a path of increasing automation (euphemistically touted as “high worker productivity”), outsourcing, low-wage part-time service jobs, and more-or-less permanent positions on the dole. If you have no means of support you can get help from the government, which will take the money (despite what anybody tells you) from the pockets of the middle class. It is nice, of course, that we do not let people starve – but is it fair that fewer and fewer people put in all the productive effort?
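The arithmetic behind that estimate is simple enough to sketch. The figures below are round, hypothetical numbers of my own choosing – deliberately in the right neighborhood for 2013, but not official statistics – and they show how quickly the rate climbs once the uncounted are added back in.

    # Back-of-the-envelope unemployment arithmetic. All figures are
    # hypothetical round numbers, not official statistics.

    officially_unemployed = 12_000_000   # jobless and actively looking
    labor_force           = 155_000_000  # employed plus officially unemployed
    uncounted             = 14_000_000   # assumed: "discouraged" workers, the
                                         # marginally attached, and the
                                         # dubiously "disabled"

    official_rate = officially_unemployed / labor_force
    adjusted_rate = (officially_unemployed + uncounted) / (labor_force + uncounted)

    print(f"official: {official_rate:.1%}")   # about 7.7%
    print(f"adjusted: {adjusted_rate:.1%}")   # about 15.4%

On these assumptions the adjusted figure comes out at roughly double the official one; assume a somewhat larger uncounted pool, and “more than double” follows immediately.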

If there is a solution to the current structural problems of the economy, it will have to be more imaginative than either “just get off your ass and get a job” or “let’s raise taxes on the greedy rich”. It will have to include, among other things, both the idea that work is worth doing, and the idea that satisfied employees are worth having. It is going to require people who know how to think of themselves as citizens, rather than as elites or victims.

August 19, 2013

On yachts and human beings

I spent some time recently in the harbor town of Saugatuck, Michigan. The land is awash with upscale boutiques and restaurants, and the harbor is awash in cabin cruisers and yachts. I don’t spend much time among the rich, but when I do it tends to make me reflect. I neither love nor hate them. Taken as a group, there isn’t much collective virtue there to love, but neither is there much virtue in grumbling about those who, through hard work, driving ambition, or the blind luck of heredity, happen to be materially well off. It is usually better not to make sweeping judgments about other people, and it is almost never good to make self-serving ones. I stand on the shore; an edifice of fiberglass and dreams rumbles up the channel. That is all. The world is as it is.

A modern yacht, taken as an object, looks like nothing so much as an enormous wedding cake in a hurricane. It is a streamlined pile of white layers with a little man in a golf shirt standing on top. For all I know there may be women who own such objects too, but I have never seen a woman feel the need to steer one. The women are either inside or in the stern with a collection of underdressed teenagers. The teenagers are still caught up in the business of showing themselves off, rather than in displaying their limited possessions. A yacht is just a setting for this sort of human activity – a thing neither beautiful nor ugly in itself.

There is nothing very different from one yacht to the next apart from size. They are too big for their clean lines to convey any plausible sense of speed. If they are fast, they can only be so in open water – not lumbering and rumbling around in the shallows among other boats. They engage in a slow ritual, going up and down the channel. The smaller boat must make way for the larger. Thus, even in leisure there is competition. The man who proudly steers the 70-foot yacht must pull aside for the 80-foot yacht, and must feel the eyes of the shore-bound spectators shift from his boat to the even more impressive one. The captain of the 80-footer, I can only guess, lives with a quiet dread that something even bigger may come grunting into port tomorrow. This concern, I know, is not so hard a fate as struggling against an ordinary mortgage. Nor is an ordinary mortgage as bitter as the struggle for a meal that many people on the planet have to go through every day. Still, human beings have an unerring capacity to scale their emotions to their circumstances, and can be miserable amid plenty or content with only a little more than bare subsistence. To see yachts and cabin cruisers coming and going is to see only those things that people have acquired in the attempt to please themselves – they are not guarantors of anything, least of all happiness.

I watched, amused, in this intermission between my own concerns, which swell or ebb in accordance with forces I can only now and then control. I envy not the yachtsman, but the water on which he rides – which parts in his wake but always returns to its serenity. Which neither minds being the wave, nor attaches itself to its interlude of calmness. Which makes the yacht, the harbor, and the reflection of the seagull possible. Even to envy such a thing, of course, is to disturb the very peace one seeks. When I am in my right mind, I only watch, enjoy, and laugh.

July 8, 2013

The Prospect of Nuclear Terrorism

In order to justify various activities which have undermined some of the Constitutional protections Americans once enjoyed, certain elements of the political establishment (notably Dick Cheney) have raised the terrifying prospect of nuclear terrorism. Since no one in the media, left or right, has adequately addressed the likelihood of such an event, it seems well worth a look.

Let’s begin by allowing that there are terrorists who would certainly be willing to use a nuclear weapon if they had one. I see no reason to think, given the nature of Al Qaida’s various actions of a smaller scale, that the zealots of that movement would have any qualms about destroying a city in the name of their cause. Less fanatical sovereign governments have been willing to bomb and incinerate civilians by the tens of thousands, as the citizens of London, Hamburg and Tokyo discovered seventy years ago – thus, it would be deeply naïve to believe that Islamic extremists would show moral restraint in this regard. The restraint the nuclear powers have shown over the last seven decades has had more to do with self-preservation than morality. The doctrine of mutual assured destruction (MAD) has worked. The nuclear powers have understood that starting a nuclear war is likely to bring about their own demise. It is not clear that stateless terrorists, scattered across the globe and willing, in many cases, to die for their cause, are deterred by anything. So, we can dispense with the question of willingness and proceed to the question of capability.

A scenario often cited or implied is one in which a rogue nuclear state – Iran, or more imaginatively, North Korea – might supply a terrorist organization with a bomb. While this might make an interesting movie plot, there are reasons to believe it would not be likely. Consider, to begin with, that nuclear weapons are a hard-won technology that states develop at enormous expense. Not only is there a huge material and financial cost, but also the diplomatic isolation and economic hardship of possible sanctions. At the end of the long development process, middle-tier nations take years to produce even a small stockpile of bombs, perhaps sufficient to survive an enemy airstrike and thus provide a viable deterrent. Is it plausible that a nation, having suffered much to attain such weapons, would turn any number of them over to a vaguely sympathetic organization of fanatics over which that nation exercises little control? While it is at least conceivable that a council of radical mullahs in Iran or the “dear leader” of North Korea might decide to employ a bomb to please Allah or fulfill some twisted notion of personal destiny, it is very unlikely they would turn the matter over to some even more unstable middlemen. If a bomb explodes in Tel Aviv or New York, it isn’t likely the originator of the device will go long without suffering retaliation in kind. The origin of such devices can be deduced with considerable technical accuracy, and even if it couldn’t, there would be an enormous impulse on the part of the victim to lash out at the most likely suspects. Thus, while Iran might conceivably use a bomb, handing one over to Hamas would gain them nothing. Doing so would also entail a secondary risk. He who has the bomb has power, and there is always the risk that a nuclear-equipped terrorist might turn that power against his own benefactors. It is notable that, in the entire history of the atomic age, every nation has produced its own weapons – and even such close allies as the US and the UK have shared actual weapon technology only sparingly and reluctantly.

A more plausible scenario in which stateless terrorists acquire the bomb, or the fissionable uranium to make a bomb, is based on the collapse of an existing nuclear power. This happened when the Soviet Union collapsed, and could happen if Pakistan disintegrates in civil war. In the former case, while no whole weapons went missing (as far as open sources can tell us), a considerable amount of fissionable material was smuggled out, and only some of it has been retrieved. What will happen in Pakistan in the next few years is anybody’s guess. One can only hope they manage to hold their shaky state together somehow. Still, as real as such possible threats might be, it is far from obvious that they are substantially mitigated by dismantling the civil rights of Americans within the borders of the United States. If 100 kilos of marijuana can be smuggled into the country by sea, smuggling a bomb across the border should be only slightly more difficult. Once here, no further communication traffic of the kind the NSA routinely intercepts would be required. In other words, while the threat is plausible, the efficacy of the intelligence countermeasures deemed necessary for our safety is marginal. Surely, a focus on intelligence gathering closer to the sources of the fissionable material would do more good.

In the final analysis, it is true that an utterly closed police state would be all but immune to terrorism – nuclear or otherwise. The Soviet Union, throughout its illustrious history, had little to fear from even the most dedicated groups of foreign radicals. A xenophobic police state is, however, a very high price to pay for security.

July 1, 2013

In defense of culture

We live in an era in which immigration, urbanization, the lingering effects of the 1960’s counterculture, government policy, and various other factors are driving rapid changes in American society. These changes have not distributed themselves uniformly across the country, but vary by region, by the urban-rural divide, by race, ethnicity, and age. America is now, in effect, an assemblage of separate nations. I am skeptical that there ever was a truly unified American identity, but it is obvious that there isn’t one now. The old metaphor of the melting pot – in which, after a generation or two, immigrants assimilated themselves to the general culture – is certainly not currently applicable. Rather, in contemporary America, people fracture along a variety of lines. Even people of a common general background now find themselves in different and often antithetical cultures.

So – what is a culture, really?

A culture is a set of constraints imposed on the individual as a prerequisite for acceptance in a given group.

Some cultural constraints are the product of environmental conditions. Traditional Inuit culture, for example, is characterized by a high degree of patience and cheerfulness. If you live in close proximity to your family in an igloo for several months of the year, moodiness and negativity are likely to have disastrous consequences. Inuit must either be nice or suffer miserably with one another. The famous politeness and formality of the Japanese, however, have little to do with the mountains and climate of Japan. Rather, the politeness and formality of the Japanese are essentially arbitrary. Their culture carries these traits by the usual mechanisms of societal tradition and individual upbringing, but Japan could just as easily have been populated by a race of rude, obnoxious slobs. For the actual Japanese, politeness and formality are self-defining rather than physically necessary characteristics.

Very many cultural constraints, perhaps even most of them, are arbitrary. For example, apart from climatic considerations at higher latitudes, there is no physical reason that we couldn’t all be nudists. The fact that some hunter-gatherer groups do live without clothing proves conclusively that it is possible. Still, if you live in virtually any modern nation, east or west, you probably find the notion of universal nudism rather unpleasant. You really wouldn’t want to see your ugly neighbors mowing the lawn or standing in a checkout line in the nude, and, unless you are a deviant, you would not want to dispense with all of your own clothes either. You feel this way because you have certain arbitrary cultural standards. You have been taught to believe certain behaviors are acceptable while others aren’t – and you know that the great majority of others in your society share the same beliefs. Wearing clothing, often quite specific clothing, is a defining feature of your culture. While it is not a physical necessity, it is a social necessity.

The consequences of abrogating social constraints are many and varied, from mild rebuke to capital punishment, but one broad consequence is a constant. If you violate a cultural taboo, you weaken the bonds you have with other members of your group. A Japanese who decides to be a rude, obnoxious slob gives up a large share of his Japaneseness. To share a culture with another person is precisely to share certain fairly reliable expectations about one another’s possible behaviors. It may be quite possible for you to attend your aunt Margaret’s funeral in the nude, but everyone else in attendance is likely to become rather anxious about what you might do next. You will have branded yourself as someone who cannot be expected to behave within the usual social bounds. You will be treated as irresponsible, erratic, and mentally unstable.

Multiculturalism, as a culture in itself, makes hash of any coherent system of social limitations. Rational people can learn a certain degree of tolerance – which is to say, to accept a rather broad range of behaviors or at least to suppress one’s learned revulsion to certain things – but one cannot accept two or more incompatible cultural systems simultaneously. Contrary to what multiculturalists might hope, one cannot find homosexuality acceptable and also find Islam, which considers homosexuality a serious or even fatal transgression, acceptable. You can find one, or neither, acceptable – but not both. When the Saudi or Iranian government puts a human being to death for the crime of homosexuality you cannot consider it a tragic injustice and a reasonable expression of another culture’s beliefs at the same time. What multiculturalists usually do to defend their worldview is either to ignore those aspects of reality that make their position self-contradictory, or to gloss them over with a comforting narrative. I once heard of a feminist who argued that the burka – the bag-like garment under which many Muslim women are compelled to conceal themselves – should properly be viewed as a symbol of female empowerment. This would be like calling the shackles that bound a chain-gang together instruments for promoting Afro-American unity, or like calling Buchenwald a weight-loss spa for Jews.

Multiculturalism, as actually practiced in America, suffers the further problem of simple hypocrisy. To continue with the convenient example above, it is irrational to stretch one’s tolerance of one conservative religion, Islam, to absurd lengths – but to show so little tolerance for the much less militant followers of Catholicism.1 How can tolerance of some cultures and intolerance of others be multiculturalism in any robustly meaningful sense? If a person is intolerant of relatively benign cultures nearby, but tolerant of hostile cultures he or she rarely encounters, that isn’t much different from being intolerant in general. I can say that I am tolerant of the culture of New Guinea headhunters – but that’s not really much of a claim if I don’t live in New Guinea.

If we attempt to examine the phenomenon of culture as objectively as possible, it should be plain that any viable culture must have at least two traits. First, a culture must have a set of characteristics that define it. A culture has to be something other than a boundary drawn around some random group of human beings. It can be defined by subjectively good characteristics (such as patience) or subjectively nasty ones (such as racism) but it must have some consistent nature – something to bind its members in a sense of solidarity. Second, perhaps as a corollary to the first, a culture must have a means of recognizing what a non-member is – or, perhaps more to the point, of recognizing what an enemy is. Islam, it happens, does both of these things extremely well. Muslims know what being a Muslim means. They know what they should and should not do, what observances they need to make, and what they are prescribed to believe. Likewise, they know what is not Islam, what threatens Islam, and they are unapologetic in this knowledge. Western multiculturalists, on the other hand, have no idea who they are. By attempting to throw their arms around a large assortment of alien cultures, they leave themselves without any common characteristics. The multiculturalist cannot even claim tolerance as a dogma, for the reasons I’ve already outlined: some of the people they’ve included under their imaginary umbrella are decidedly intolerant, and most multiculturalists harbor hatreds of their own conservative brethren. Further, the multiculturalist, in a fantasy of universal inclusion, finds it impossible to fully reconcile himself (or herself) to the very concept of an out-group, let alone an enemy. The political and religious conservatives of their own ethnicity can be the target of enmity precisely because they share a common background with the multiculturalist. Conservatives are a sort of alter-ego, symbolic of what the multiculturalist strives to reject. True enemies, outsiders who despise the multicultural anti-culture for its flaccid tolerance and amorality, must be embraced, placated, or imagined not to exist.

A very recent example of an in-group / out-group disconnect is to be found in the case of Paula Deen, the once-popular cooking show host. Deen was discovered to have used the word “nigger” in a private conversation many years ago, and, as a consequence, was dropped by her network and is being actively harassed by MSNBC and others. I don’t doubt that many people find the “n-word” offensive, but why is it any less offensive when wielded casually by hip-hop performers in an entirely public context? Deen is vilified because, as a successful, Christian, heterosexual Caucasian, she is perceived as part of the evil conservative alter-ego that must be constrained. This, despite her actual political affiliations. Hip-hop performers, as symbolic victims of conservative oppression, are exempt from the standards of political correctness that apply to Deen.

To be clear, I am not saying that some degree of tolerance isn’t nice, or that traditional standards are necessarily laudable. What I am saying is that in a conflict between cultures, one that is coherent and cohesive has a substantial advantage over one that is inconsistent and heterogeneous. I’m not saying xenophobia is good – I’m saying xenophobia has often proven successful.

Another thing I am not doing is fully equating western liberalism with multiculturalism. The latter is a dominant meme of the former, but the two are not synonymous. In America, at least, the government has usually been able to recognize out-groups so long as they remained beyond the US border. There are interesting parallels between 19th century gunboat diplomacy and the policy of drone warfare. Ethical considerations aside, American political leadership is not completely unwilling, even now, to conduct international affairs with blunt instruments in the time-honored human way. What our leaders do not seem to understand anymore is what the in-group is – or, more romantically, what a citizen is.

One need look no further than the illegal immigration problem to grasp the growing irrelevance of American citizenship. Many Americans, some of them in positions of high authority, take it for granted that the circumstances and sufferings of illegal aliens are our responsibility. Being within the US border makes them automatically part of the in-group. Imagine you were to entertain this attitude on a personal basis. What if a squatter entered your house through an open window, sat down on your couch and asked, in an alien language, to be treated as one of the family? Wouldn’t you ask him to leave? Wouldn’t you call the police if he refused? Would you feel the slightest guilt to see him ejected? Almost all of us would be indignant at the effrontery. Could you imagine sneaking across the border of a foreign country, with no intention of learning that country’s language or adapting to that country’s culture – but simply showing up there with the expectation of carving out a niche?

Immigrants can approach their relocation to a new country in one of two ways. First, they can assimilate – adopting the language, customs and other cultural aspects of their new country. With some notable exceptions, this is what most 19th century immigrants to the United States eventually did, if not fully in the first generation then certainly in the second.2 Alternatively, immigrants can colonize – which is to say, they can settle in discrete and permanent enclaves, keeping their old culture and rejecting that of the natives. 19th century European colonists in Africa and Asia obviously did this, remaining British, French, Belgians, Germans and Italians in new exotic surroundings. Muslim immigrants in America and Europe have done largely the same, not merely rejecting but often despising the cultures of the countries they inhabit. Recent waves of Latinos, too, seem reluctant to assimilate – although it remains to be seen whether or not they will follow the path of the Irish in time. Language may be key in this process – both as an indicator of an immigrant’s intentions and as an actual cultural barrier. It is obvious that a person who moves to another country with no intention of learning the language has no intention of assimilating either, but intends to live in an enclave with fellow members of the same culture. When the dominant culture accommodates multilingualism it actually encourages a continued and entrenched sense of separateness, aiding and abetting a sort of neo-colonialism in reverse. Add to such excessive accommodation the nebulosity of the multiculturalist anti-culture itself, and assimilation becomes all the more unlikely. Which is easier for the immigrant: to learn a new language and adopt a new culture with a bewildering lack of defining characteristics, or to keep the old language and old customs in a new context? The multiculturalist may consider his or her society a cornucopia of attractive possibilities, but it can equally be perceived as a weak and decadent mess with something for everyone to find revolting.

One point I have already alluded to needs emphasis. Considering the remorseless Darwinian processes that actually shape history, it may not be the best educated or intellectually sophisticated culture that endures. A few years ago there was a minor stir over whether or not President Obama believed in American exceptionalism. When asked, he gave a somewhat nuanced answer that amounted to “no.” Broadly speaking, “no” is the “correct” answer. All nations have a certain uniqueness within the greater context of history – some are more powerful than others; some put more emphasis on particular rights; others are more elaborate in their artistic expression; etc. Most human beings love their own nation (if perhaps not their own government) more than any other. In this context, American exceptionalism seems little more than an expression of one particular bias. To understand that we are nothing intrinsically special is a rational achievement. It is also a serious impediment to cultural cohesion. People can identify with a high ideal, even a fictitious one, in a way that they cannot identify with a prosaic fact. Perhaps it is naïve to stand in reverence to Ronald Reagan’s “shining city on a hill” – but it is hard to imagine anyone standing in reverence to a historically accidental superpower whose behavior is sometimes good and sometimes bad. Even the narrow aspects of multiculturalism that are actually rational are culturally corrosive. Many people have died for Christianity, Islam, France, Japan, and even for Communism or Nazism – but no one ever risked life or limb for the greater glory of relativism. To the contrary – cynicism and intellectual sophistication may well go hand-in-hand. Much of the corruption we now see in government may simply be the product of a progressive erosion of coherent cultural standards and myth-infused ideals.

Anti-intellectual as the above argument may sound, it is by no means self-evidently false. No law of nature promises us a stable society at the end of our efforts to sort the world out objectively. It may well be that our pursuit of open inquiry will turn out disastrously in the end. I am not suggesting that we surrender ourselves to tradition and superstition, but I am pointing out that tradition and superstition do yield certain strengths which may turn out to be necessary to cultural survival. It is not by accident that they have survived.3

-----------------------------------------------------------
1 Not all cultures are religions, but all religions are cultures.

2 The Amish did not, of course. The Jews retain a considerable distinctiveness although they have adapted to a high degree. Blacks were handicapped in assimilating by the peculiarities of their history. The Irish required more time than most groups, but now retain only a nominal separate identity.

3 There is a further irony here. Progressive anti-theists, like Richard Dawkins and Daniel Dennett, make the implicit assumption that an empirical pursuit of truth will, in fact, continue to produce a better and more stable society. While one could produce a body of interesting if inconclusive evidence to support this, they tend to take the assertion for granted. The belief is so deeply embedded in their particular culture that it functions very much like an article of faith. Theirs is a culture that excludes religions and seeks to convert everyone to a certain epistemic schema – while rarely bothering to apply that schema to their own core principles. Thus, they have a dogma despite themselves. Unfortunately for them, it is not a very unifying dogma. The low number of people willing to self-identify as “brights” reveals this. (See: “Why I am not a Bright” [ http://cadwaladr.blogspot.com/2011/08/why-i-am-not-bright.html ] )