April 1, 2010

A Critique of "Social Structure"

The essay below was written by my father, Robert. He requested a response, and I thought the exchange might be of interest to others. I present his work in its entirety, and will then work through it in detail.

-emc


Social Structure

Many years ago, several thousand young men were under orders to commit multiple homicides. Then, one day it was deemed appropriate for two Generals to meet and talk it over. They were not classmates, but both had attended the same military academy. To the vast majority of literate people it would have been very, very inappropriate for them to use their revolvers on each other. When so ordered, enlisted soldiers must kill their counterparts on the other side, but only under very special conditions are commissioned officers expected to kill each other, even their counterparts on the other side. Occasionally, during a war, a high ranking commissioned officer is killed, but the event is usually considered an accident, having little or no connection to the war.

Today in Afghanistan and Iraq there are a few hundred thousand U.S. privates whose duty it is to kill natives of those two countries when so ordered. This and their own possible death are in the interest of making their own neighborhoods in the U.S. safer. In view of the fact that they are on duty "24 - 7", their salary is probably in terms of tens of dollars per hour. While this was going on, several (possibly a few hundred) radio and television talk show hosts, Rush Limbaugh, Keith Olbermann and others, were discussing events in the Middle East among other things. These gentlemen are almost never in harm's way, although they do sometimes launch some rather pointed remarks at each other. I suspect that these gentlemen's salaries are hundreds or even thousands of dollars per hour.

A few years ago a television personality (Martha Stewart) served a few weeks in prison (albeit one of the "nicer" prisons) for the crime of "insider trading." If I can depend on my memory at all, it seems to me that I recall hearing our then President admit to having done the same thing. I do not remember anyone seriously suggesting that the President spend a few weeks in prison.

I used to feel that situations, such as those mentioned above were "not in the best interests" of the greater number of people. I even had two or three rules in mind that, over time might
have changed the conclusions of these situations. However, even then I believed that most people would probably not "buy" the suggested rules. I also felt that war should be the last method to consider when trying to resolve international problems.

Then, I came across a book entitled "Sociology" circa 1971 by one David Popenoe. In said book I was reminded again of what my peers have been trying to tell me for years. "That's the way it IS!" According to the book, I have been not only negative and pessimistic, but also mistaken about virtually everything that has come to my attention!
The book also suggests that probably the most important requirement for the survival of any group is a stable social structure. Even the production and distribution of the necessities such as food, shelter and clothing are improved under a working social structure. An important part of the social structure is the establishment and maintenance of the social stratification. This determines the distribution of the desirables.

So it appears that the situations that I find difficulty in accepting are actually the desired results of maintaining a social structure. Considering my limited qualifications, this is really all I need to know.

War seems to be a good thing for at least two reasons:
1.) I seem to remember that during World War Two the prevailing disposition in the population was one of comradeship. People felt that, "We are all in this together."
2.) War seems to have the effect of reinforcing the social stratification. People in the middle and lower classes are told what they should do and when they should do it, thus relieving them of the responsibility of making decisions they are not qualified to make.

The current two wars do not seem to have the effect noted in (1) above. Perhaps they are simply not big enough.



Well, let us begin at the beginning…

Many years ago, several thousand young men were under orders to commit multiple homicides.
By definition, to kill a human being is, necessarily, to commit a homicide – but I am confident that that is not the point being made here. It is clear from the very outset that we will not be limiting ourselves to an objective discourse about facts, but will be subject to an attempt at emotional persuasion. This is an arena that I try, often not very successfully, to avoid. I have to grant that our emotions are important, and that our lives would be not only rather grey, but almost unimaginable without them. That being granted, a lifetime of observation has convinced me that very few problems, either personal or political, are ultimately solved by the application of invective or anger, no matter how righteous or justifiable it might be. Thus, despite the irony, I must at least attempt to approach any discussion of either war or inequality in a logical, methodical way. To discuss such matters in anger is not to understand them, but rather to be consumed by them.

Then, one day it was deemed appropriate for two Generals to meet and talk it over. They were not classmates, but both had attended the same military academy. To the vast majority of literate people it would have been very, very inappropriate for them to use their revolvers on each other.

This is a reference to Lee’s surrender to Grant at the close of the American Civil War.

When so ordered, enlisted soldiers must kill their counterparts on the other side, but only under very special conditions are commissioned officers expected to kill each other, even their counterparts on the other side. Occasionally, during a war, a high ranking commissioned officer is killed, but the event is usually considered an accident, having little or no connection to the war.
This is not an accurate portrayal of history. Officers are commonly killed and commonly called upon to kill. Many weapons are indiscriminate, killing or wounding anyone within a certain zone. While one might plead that these are special circumstances, most combat aircrews are composed chiefly of officers, and they both kill people in sizable numbers and subject themselves to serious risk of death. Many fighter pilots in both world wars no doubt killed far more enemy officers than enlisted men. Neither have officers been exempted from combat as part of ships’ crews. A simple review of a few weeks’ worth of casualties in Iraq and Afghanistan will reveal that in the current wars officers are killed in some approximate proportion to their number, even the occasional Major or Lt Colonel. Whether they are killed by enemy officers or partisan snipers seems rather immaterial. Either way, they are equally dead. While it is true that the highest ranks of commissioned officers are rarely killed in war, it is also true that cooks, quartermasters, and other sorts of “rear echelon” troops are rarely killed. All these groups avoid death for approximately the same reason: they are not of much military utility as direct combatants. In circumstances where they are of some actual use in harm’s way, high ranking officers will usually present themselves. Naval warfare is the most obvious example of this; it is a rare task force that is not commanded by an Admiral in situ.

If we may set aside our emotional reactions to war for a moment and look at it as a purposeful endeavor, it may become at least intelligible. I do not dispute that war is an inherently brutal, wasteful and tragic activity – I merely take the position that it is generally neither a mass exercise in cruelty for its own sake, nor a straightforward bloodletting of a nation’s lower classes. War is, as Clausewitz said, the “continuation of politics by other means.” As such, its motivations are essentially political ones. In general, wars are conducted either to expand or preserve the power of such bodies of persons as rule the countries involved. They may or may not be in the interests of the broad majority of the citizens of the countries involved, and whether they are or not is not always an easy matter to determine.

At an operational level, the goal of war is not an orgy of “multiple homicides,” but rather the collapse of the opposing state’s capacity to continue military operations. This goal being accomplished, the victorious state’s political ends may (at least in theory [1]) be carried out without resistance. From this perspective, it would not only have been “inappropriate” for Grant and Lee to attempt to gun one another down at the surrender table, but entirely senseless from a military point of view. It would not have changed the outcome of the war. Warfare inevitably involves a loss of life, and often a needless loss of life, but warfare is still, for the most part, a means to an end. It is only in a few truly exceptional conflicts that killing takes place as a deliberate policy of genocide, which is to say as an end in itself.

It has been apparent since the Gulf War of 1991 that contemporary American military doctrine centers on the destruction of the opposing state’s command and control systems, which is to say, their headquarters and communications facilities rather than the opposing soldiers on the front line. The American military exercises this doctrine not because the Joint Chiefs are better, more humane people than they were in World War Two, but simply because they have the technology to carry out such a doctrine. Destruction of an opposing military’s leaders is an efficient means of rendering its conventional forces impotent. While this does not make either war in general or American foreign policy in particular necessarily moral, it does contradict the implied hypothesis that the goal of war is necessarily to kill a maximum of underlings while leaving the upper classes of both sides intact.

Today in Afghanistan and Iraq there are a few hundred thousand U.S. privates whose duty it is to kill natives of those two countries when so ordered. This and their own possible death are in the interest of making their own neighborhoods in the U.S. safer.
Current U.S. troop deployment in Afghanistan and Iraq totals well fewer than 200,000. I have no idea what fraction of these are privates.

The U.S. is still a signatory to the Geneva Convention so, strictly speaking, whom American soldiers may kill and when they may do so is more-or-less narrowly defined by international law. That such restrictions are often breached during the actual conduct of a war is not a matter I would attempt or even want to dispute – this does not, however, mean that such restrictions are irrelevant. While not explicit, there is an implication in the essay that the soldiers of the United States would be bound, if ordered, to liquidate the populations of Afghanistan and Iraq in the manner that the SS liquidated the population of the Warsaw ghetto. There have certainly been occasional and predictable abuses of civilians among the soldiery, and even (I believe) actual war crimes originating in high government circles, but I don’t think the majority of American soldiers are either indoctrinated or inclined to gun down children in cold blood. Incidents occur, but they are incidents. Our soldiers are not an aggregate of saints, but neither are they an aggregate of butchering robots.

I sympathize with what I presume to be sarcasm, the notion that anything about the current conflicts ultimately makes us any safer. I would go further, in that I am appalled at the extent to which the slogan “support our troops” has come to be interpreted as “support our policy.” It is not a good use of the life of a soldier to shovel him or her into a grave without a clear purpose. If one believes that it is always wrong to question one’s government in time of war, then one must believe that the German people had no right to question their government after 1939. If one believes that rules that apply to Germans (or any other people) should not apply to us, then one has taken the first patriotic step toward a very deep abyss. Whether or not a government’s policies are either moral or successful are questions one should always be able to ask. Whether or not the current conflicts make us any safer is debatable, and there appear to be worthy arguments for either position.

In view of the fact that they are on duty "24 - 7", their salary is probably in terms of tens of dollars per hour. While this was going on, several, (possibly a few hundred) radio and television talk show hosts, Rush Limbaugh, Keith Olbermann and others were discussing events in the middle east among other things. These gentlemen are almost never in harm's way, although they do sometimes launch some rather pointed remarks at each other. I suspect that these gentlemen's salaries are hundreds or even thousands of dollars per hour.

A US Army PFC with a few years of service earns $1923 per month. If one works this out as an hourly rate on a 24-7 basis, the resultant figure is $2.63 per hour. Of course, soldiers do not have to pay for their own food, medical expenses, etc. – but let’s not muddy the water unnecessarily: celebrities don’t necessarily cover all of their expenses either. Olbermann’s annual salary is rumored at $4,000,000. Limbaugh’s salary in 2007 was $33,000,000. If one calculates Limbaugh’s hourly rate on a 24-7 basis (though it’s rather doubtful that he’s “on duty” all the time) one gets a figure of $3764.54 per hour. If you want to look at his pay in military terms, Limbaugh is paid as much as a US Army battalion, officers and all. Whatever one may feel about this, these are the facts.
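The arithmetic above is easy to reproduce as a quick sanity check. The salary figures are the ones given in the text (not independently verified), and the $3764.54 figure works out exactly only if one assumes a 365.25-day year, which the sketch below does.

```python
# Sanity check of the hourly-rate arithmetic above.
# Salary figures are taken from the text itself, not independently verified.

HOURS_PER_YEAR = 24 * 365.25  # "24-7" basis; 365.25 days/year matches the $3764.54 figure

def hourly_rate(annual_salary: float) -> float:
    """Convert an annual salary to a round-the-clock hourly rate."""
    return annual_salary / HOURS_PER_YEAR

pfc_annual = 1923 * 12        # PFC pay of $1,923 per month
limbaugh_annual = 33_000_000  # Limbaugh's reported 2007 salary

print(f"PFC:      ${hourly_rate(pfc_annual):,.2f}/hour")       # ≈ $2.63
print(f"Limbaugh: ${hourly_rate(limbaugh_annual):,.2f}/hour")  # ≈ $3,764.54
```

The ratio between the two rates comes out to roughly 1,400 to 1, which is the disparity the essay is gesturing at.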

The cliché that “money is power” is a very true one. Money is not only power in the sense of being able to buy political power, but is power in the more mundane sense of being able to buy goods and services. If one goes into a store and buys an object, one is exercising a form of power. Obviously, this power is not inherent in the money itself, but is present only by virtue of a common understanding among the people who circulate it. Power can be embodied in other forms. In medieval society, for example, money was important but a poor nobleman might still get the better of a rich merchant. Likewise, the clergy had rights and powers beyond the gold that they possessed (not that the material wealth of the church was in any way lacking).

Money is just a particular way of distributing certain social “rights,” but those “rights,” in one form or another, are an apparently inevitable feature of human social behavior. I cannot think of a single instance of a human society, of any size, that is not hierarchically structured. While there is a considerable variability from society to society in the depth of the disparity between their least and most powerful members, all societies distribute power unequally.

Getting back to the example offered, an Army private has the “right” to acquire a certain amount of petty property, to gamble money on poker, stocks or other entities, to claim benefits provided in the contract he or she signed with the government, etc. Mr. Limbaugh has the “right” to a far greater amount of petty property, may speculate with vastly greater sums if he so chooses, and is neither encumbered by the heavy duties nor provided with the modest benefits of a soldier. He may, significantly, cease working forever at more-or-less the time of his choosing without fear of wanting for housing, food, or any common run of luxuries in his lifetime – or in fact, in many lifetimes. The soldier usually knows where his next meal is coming from, but has little promise of security to the end of his days.

While a “rights” perspective is not as neatly quantifiable as a financial one, it offers the advantage of allowing broader comparisons. One can, for example, compare capitalist and non-capitalist societies this way. Under communist hierarchies, for example, the lower strata of society have as many options and as much personal security as their governments (the highest strata of their society) are inclined to (or can) provide. The upper strata get as much compensation as they feel they deserve. Looked at from this narrow perspective, capitalism and communism are not greatly different. Both allow notably more social mobility than the hereditary hierarchies that preceded them. Getting rich in America or becoming a governmental official in Cuba are both at least within the realm of possibility for individuals from the lower classes of those societies; becoming a nobleman was not even a possibility for a medieval peasant.

While the salaries of TV celebrities, bankers, football players, etc, are out of any proportion to their demonstrable value to society, the system we live under makes no real claim to reward people in accordance with their social worth. A little less economic disparity might be nice, but any scheme that eliminates economic tyranny by introducing political tyranny is not much of an advance. The reverse is not that laudable either. Human beings have yet to devise a system of social organization that works for everyone. I am justifiably skeptical about this happening, ever. Evolution is not a process that tends toward a complete and uniform state of happiness for all the members of a species.

A few years ago a television personality (Martha Stewart) served a few weeks in prison (albeit one of the "nicer" prisons) for the crime of "insider trading." If I can depend on my memory at all, it seems to me that I recall hearing our then President admit to having done the same thing. I do not remember anyone seriously suggesting that the President spend a few weeks in prison.
Here too, we have another example of the admittedly uneven distribution of power, not explicitly stated in law, but nonetheless quite apparent. People connected to the highest tier of government are only at the mercy of the law to the extent that some quorum of their peers chooses to enforce it. The impersonal, bureaucratic wheels of justice that constrain the rest of us have difficulty punishing the people who invest those wheels with authority. This was true of Bush’s insider trading, Cheney and others’ apparent complicity in war crimes, and Obama’s failure to publicly produce his birth certificate. [2] Martha Stewart, though a celebrity, had no extralegal powers. The best that she could do was hire expensive counsel. It wasn’t a charge like those against Michael Jackson, who had the option of just buying off the families of his accusers. Uniform justice is a pleasant ideal, perhaps even a useful one, but while one can be disgusted at its failure in application it takes an impressive naiveté to be shocked. Such events are as old as human society.

I used to feel that situations, such as those mentioned above were "not in the best interests" of the greater number of people. I even had two or three rules in mind that, over time might have changed the conclusions of these situations. However, even then I believed that most people would probably not "buy" the suggested rules. I also felt that war should be the last method to consider when trying to resolve international problems.
So spoke Don Quixote to the world. Well, perhaps an anarchist, pacifist Don Quixote – but Don Quixote nevertheless.

Personally, I agree that war should be the recourse of last resort. This also seems to be the general historical trend. As late as the 19th century, openly expansionist wars were the norm, especially wars conducted against technologically less advanced nations. There was relatively little moral backlash about the conquest of the British Empire, either within Britain or without. Such a series of imperialist wars would be utterly unthinkable now. One has only to consider the nearly universal backlash against Saddam Hussein’s invasion of Kuwait or the more recent American invasion of Iraq. Most contemporary wars are either revolutions or civil wars resulting from the creation of artificial, multinational states in the wake of colonialism. Iraq, for example, is one state but essentially three nations. While such conflicts can be bitter, long, and bloody, the unabashed expansionism of earlier times has greatly abated. While wars of conquest still occur, the belligerents are usually more cautious and inclined to take greater care about the defensibility of their pretexts.

It is indeed unfortunate that we have not arrived at utopia in the course of one man’s lifetime. It is untrue, however, that amid all the carnage of the 20th century there haven’t also been significant movements away from warfare, and, at least here and there, some little progress toward more humane and equitable societies.

Then, I came across a book entitled "Sociology" circa 1971 by one David Popenoe. In said book I was reminded again of what my peers have been trying to tell me for years. "That's the way it IS!" According to the book, I have been not only negative and pessimistic, but also mistaken about virtually everything that has come to my attention!

The book also suggests that probably the most important requirement for the survival of any group is a stable social structure. Even the production and distribution of the necessities such as food, shelter and clothing are improved under a working social structure. An important part of the social structure is the establishment and maintenance of the social stratification. This determines the distribution of the desirables.

So it appears that the situations that I find difficulty in accepting are actually the desired results of maintaining a social structure. Considering my limited qualifications, this is really all I need to know.

I am neither familiar with the book referenced here nor have I made a formal study of the field of sociology. However, accepting that Popenoe’s assertions are as you have summarized them, the point is so self-evident as to be hardly worth making. As I have already stated, I cannot think of a single instance of a human society that is not hierarchically structured. For the sake of argument though, let’s try to imagine one.

Such a society, to begin with, would look nothing like ours in a physical sense. Nothing could be built beyond what a handful of friends might manage to wrest from nature. There could be no buildings of any great size, no roads, and nothing as complex as an automobile, certainly. There would be no music other than what a person might make on simple instruments crafted with the simplest of tools. It takes an organized society to produce anyone as specialized as a violin maker. There could be no symphonic music in any case, because orchestras are hierarchies. Likewise, it is hard to imagine science advancing very far without some organization to support the scientist’s inquiries, provide instrumentation, and disseminate discoveries. In the hierarchy-free society Beethoven and Einstein would have to dig potatoes, clean skins, or hunt and fish with everyone else.

The problems get worse. What if, in this amorphous mass of absolute social equals, some little group decided to seize some measure of power by threat of force or cleverness? What mechanism inherent in the nature of the amorphous mass would prevent them from succeeding? It would have to be some trait inherent in individual human beings. Even such a constraining entity as a circle of tribal elders is a body with special authority, and therefore a hierarchy. One cannot look to laws or other sorts of rules, because these are, themselves, features of hierarchies requiring someone to enforce them. While most human beings do resist authoritarian constraint beyond a certain point, it is evident in every society on earth that real human beings endure or even welcome authority up to that point. Power abhors a vacuum, and states of anarchy never last long. The hierarchy-free society can be no more than an imaginary construct, since its existence would require an isolated population of flawlessly egalitarian human beings. To the best of my knowledge, no such population has ever existed. [3]
Now, none of this is to say that more and deeper hierarchy is always better, or that the concentration of power without limit is either good or inevitable. I would not assert this. I merely assert that an absolute rejection of hierarchy is tantamount to a rejection of human nature. Any scheme of social improvement that requires that human beings be something other than what evolutionary forces have made them is bound to be repressive, and a repressive scheme that, by its own principle, cannot be enforced can equally not succeed. It is also quite an odd conception on its face that people could be free of oppression if only they adhered to certain unshakable rules.

War seems to be a good thing for at least two reasons:
1.) I seem to remember that during World War Two the prevailing disposition in the population was one of comradeship. People felt that, "We are all in this together."
2.) War seems to have the effect of reinforcing the social stratification. People in the middle and lower classes are told what they should do and when they should do it, thus relieving them of the responsibility of making decisions they are not qualified to make.

The current two wars do not seem to have the effect noted in (1) above. Perhaps they are simply not big enough.


Again, these assertions seem rather hasty. Wars tend to increase a sense of national unity if there is a widespread perception that the enemy poses a serious threat to the nation and its institutions. This is what happened during the Second World War in Britain, the United States, the Soviet Union, and eventually Germany. The polities of these states were all justifiably concerned about the consequences of losing. When wars seem less justified, more people are inclined to dissent. The protests during the Vietnam War are a clear example of this. A general feeling of righting an injustice will also provide some measure of public acceptance for a war, even without an existential threat. The difference in public opinion between the Gulf War of 1991 and the present Iraq War points out this tendency. Though there was no great sense of national peril during the Gulf War, there was a general perception that Saddam Hussein had done something unacceptable by invading another country purely for the purpose of conquest. As I alluded to earlier, opposing this invasion met the modern criteria for a “just” war. On the other hand, when it became apparent that the justifications for America’s invasion of Iraq in 2003 were largely fabrications, a substantial fraction of America’s population began to oppose the war. While many people seem to have a penchant for blind obedience, it isn’t true that most of us do.

Clearly though, populations can be utterly divided by war – to the point of the very dissolution of the state. For example, the First World War destroyed the social order of Czarist Russia. It also created deep hostilities toward the sizable German ethnic group within the United States. Rule #1 is certainly in need of qualifications.

Ignoring the sheer darkness of its sarcasm, rule #2 seems to be essentially a corollary of rule #1, and suffers from the same limitations. It is true that during both world wars censorship and economic controls were rife. There was a general decline in civil liberties in those nations that had previously had them. Likewise, the Bush administration attempted to use the Iraq War as a pretext for a general centralization of power, and not without some success. Still, extensions of emergency government powers tend to breed resentment if they are not rescinded at the conclusion of the emergency. While the tendencies toward the centralization of power are strong, so are the countervailing forces of decentralization and individual freedom. If this were not so, revolutions would never occur and trade unions would not exist.

It is no easy matter to conclude a critique of an essay about such far reaching and open ended topics. Obviously, the work reflects a number of views which are substantially different from my own, though I do not reject it all. I’m not an avid proponent of either inequality or bloodshed.

The essay reminds me of nothing so much as the beliefs of the 19th century anarchists. If I may be forgiven a brutal oversimplification, the anarchists believed that hierarchy was the sole source of evil in human society. If one could simply take away the bosses, human beings would live harmoniously and cooperatively forever. The source of all suffering was the evil of a few, and thus it might be readily expunged. In its simplicity, this was undeniably a beautiful dream. A handful of anarchists threw bombs into crowds, or shot the odd president or aristocrat here or there, but most simply hoped, grumbled, and brooded until their movement withered away. No person with a heart can wholly despise a beautiful dream, but one can certainly grow weary of the pile of invective that it rests on.

Compared to the life of our species, the life of any individual is fleeting. We know a little about our species’ origins, but can only conjecture dimly about its future. We play small parts during our tenure here, the full repercussions of which we lack even the capacity to understand. I cannot say that human beings will never live in perfect equality and perfect peace. I can say however, with reasonable confidence, that they never have. Many facts about the world are not to my liking – yet they are facts nonetheless.

-----------------------------

[1] Guerrilla war is an exception to this. While really a worthy topic in itself, it will suffice to say that most governments that set out to defeat another nation’s regular army do so in the hope that the opposing populace will acquiesce quietly after that defeat. No government wishes to engage in a protracted war with armed civilians.

[2] Obama’s failure to publicly present his birth certificate may seem trivial compared to plausible allegations of war crimes, but it is a perfect illustration of my point. Few of us have the opportunity to commit war crimes, but all of us are required to present identification from time to time. If you or I were applying for a driver’s license but told the examiner “I can’t show you my birth certificate – I will only show it to the highest official of your bureau, and then only on the proviso that its contents be kept absolutely confidential” – we would be escorted politely out of the office. Not so a senator and the darling of his party.

[3] A failure to understand that hierarchies arise spontaneously was the undoing of 20th century communism. Neither Marx nor Lenin predicted the rise of Stalin, because they believed naively in a power that would emanate from the public as a whole in some completely unprecedented, inexplicable way. They made no provision for the possibility that anyone within their own organization might harbor any dictatorial ambitions. The result was little more than a medieval tyranny under a thick coat of red paint.

March 24, 2010

The Future of the Church

Across from my stepdaughter’s apartment there is an old brick and stained glass church that has been converted into an indoor rock-climbing wall. I’m certainly no defender of religion, but this still strikes me as more than a little crass. Apparently the proprietors did have some slight sense of decorum though. There used to be a larger-than-life statue of Christ two thirds of the way up their new rock wall. They took him down. I suppose if they had been crass without limit, they would have just worked him into the pattern of other obstacles and handholds.

March 5, 2010

A Case against the existence of Free Will

The term “free will” has essentially two meanings. The first is the state in which one’s decisions can be realized in physical actions. In this sense, if one is physically constrained by devices, disease or other externally induced circumstances one is, to the extent of the constraint, deprived of free will. The second is the state in which one’s decisions constitute a first cause. In other words, the possessor of free will does not make decisions because (or at least not wholly because) he or she is caught in the middle of some inexorable physical process; rather, the decision maker is the actual originator of physical processes. While I will deal briefly with the first definition in closing, my chief interest is in the second.

My argument depends on certain assumptions that, while admittedly arguable in themselves, are by no means weakly held positions.

My first assumption is that the order we observe in nature is not illusory, but rather refers, however imperfectly, to a body of stable relationships that exist in a fully ontological sense. If one assumes that we live in a chaotic universe in which the apparent laws of nature might suspend themselves at any time, then arguments of any sort are futile. Any understanding of states of affairs, no matter how tenuous, must admit at least some constraints to be coherent. When I use the term “physics” in my argument, I mean just this set of constraints and relationships, whether they are presently known to us or not.

My second assumption, closely related to the first, is that nature is fundamentally causal. This assumption alone does not preclude the existence of free will. Indeed, to assume that anything can be a first cause one must certainly admit to the existence of causality. By “causal” relationships, I mean to describe relationships in which objects and events are bound together by necessity and not merely by accidental, albeit stable, correlation.

These preliminaries being assumed, let’s dissect the concept of free will a little further using a thought experiment.



I.

Imagine a flying bird. I don’t want to engage in a muddy debate about whether nonhuman animals have free will or not, so let’s just accept that our thought experiment bird, for sake of argument, does have free will in the sense of having the power of first cause. We can describe innumerable trajectories our bird might take across the sky, including many that plainly violate the laws of physics. For example, the bird cannot fly a path that would require changes of direction too rapid for the aerodynamic forces it is able to exert with its feathers. Neither can it fly straight up for very long, for a variety of understandable physical reasons. Nevertheless, so long as our bird might fly along at least two alternative paths, however constrained by gravity and aerodynamics, we might still consider it free.

Compare the flight of our bird with that of a thrown ball. Notably, we can imagine exactly the same innumerable set of trajectories for the ball that we could for the bird. Unlike the bird, however, the ball clearly does not possess either free will, or any physical means of altering its own trajectory. Its path is, without question, wholly predetermined by physics. With our knowledge of physics we can predict its trajectory with impressive accuracy. Moreover, even if we knew nothing of the exact physical laws that govern its trajectory, merely watching the ball’s motion would give us an intuition that it is a “thing” and not a “being”. Apart from the relatively static properties of its mass and its shape, there is nothing about the ball which determines its trajectory. It does not “choose” to do anything. It is a neutral participant in an inevitable physical process.

The bird’s behavior differs from the ball’s in at least two ways, one from our point of view and one from its own. From our perspective, the bird is unpredictable; the ball is not. The bird appears to have free will, not because it is unconstrained by physics, but because we can imagine it taking any of a number of plausible paths. From the bird’s perspective (which we are privileged to know only because this is a thought experiment) it has the conscious perception of free will – which is to say, it is aware of having choices. The ball, of course, has neither plausible options nor any capacity for awareness.

Now consider an entity whose status is somewhere between a bird and a ball – a heat-seeking missile. For those who are unfamiliar with such things, a heat-seeking missile is essentially an autonomous robot whose function is to intercept and destroy aircraft. It detects infrared radiation (heat) with a special camera and adjusts its course toward the source of that radiation using a rocket and aerodynamic surfaces. Such missiles are “smart” enough to fairly reliably distinguish between aircraft and heat sources that are not aircraft.
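The guidance behavior just described can be caricatured in a few lines of Python. Every name, threshold, and number below is a hypothetical simplification for illustration, not a description of any actual weapon:

```python
# A toy caricature of a heat-seeking guidance loop.
# All thresholds and limits are hypothetical simplifications.

def looks_like_aircraft(source):
    """Crude filter: count only hot, fast-moving sources as aircraft."""
    return source["temperature"] > 500 and source["speed"] > 50

def guidance_step(heat_sources, current_heading):
    """Pick the strongest plausible target and steer toward it."""
    targets = [s for s in heat_sources if looks_like_aircraft(s)]
    if not targets:
        return current_heading  # nothing to home on; hold course
    # "Choose" the hottest candidate -- a choice in only the thinnest sense.
    target = max(targets, key=lambda s: s["temperature"])
    error = target["bearing"] - current_heading
    # Aerodynamics limits how sharply the missile can turn per step.
    max_turn = 5.0
    turn = max(-max_turn, min(max_turn, error))
    return current_heading + turn
```

Given identical inputs, the loop always returns the identical heading; its “choice” among heat sources is nothing more than the deterministic comparison inside the call to max().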

Like both the bird and the ball, the missile’s trajectory is limited to a considerable degree by external physics. It is subject to gravity, for example, and its ability to turn is limited by the aerodynamic forces it can exert with its control surfaces. While its trajectory is theoretically predictable (given one has knowledge of the characteristics of all the heat sources in the range of its camera) that trajectory would be much more difficult to either predict or describe than that of a merely ballistic object like a ball.

More significant than the missile’s brute obedience to physics is the fact that its behavior calls into question what it is that constitutes a “choice”. If there are at least two heat sources in front of it on which it might potentially home, then, in at least some sense, the missile’s behavior is the result of a “choice” between alternative imaginable paths. While no suitably educated observer would say the missile has “free will,” we nevertheless get into trouble when we attempt to explain the distinction between its actions and any real bird’s.

Assuming we have access to the knowledge of both the missile’s internal physics (its camera, actuators, rocket, software, etc.) and the environment in which it is operating, we can predict its behavior up to the level of precision of that knowledge. If it does something we did not predict we can infer that there is either something in the external environment we didn’t notice, some variance between the missile’s components and our assumptions about them, or, perhaps, some aspect of physics we simply don’t understand. We would not assume an inability to specifically account for the missile’s aberrant behavior constitutes an argument for it having “free will.” We assume, in short, that entities like balls and missiles behave in a way that at least would be entirely predictable if our knowledge of the relevant physics and states of affairs were sufficiently complete. We do not resort to endowing such entities with extraphysical sources of causation.

When we speak of other entities as having “free will,” whether they are birds or human beings, we are in effect denying that we might be simply facing problems of enormous physical complexity. In place of an unknown (and perhaps even unknowable) physical solution to the problem of behavioral unpredictability, we are postulating an explanation which is little better than magic. While we cannot logically disprove the existence of such extraphysical entities as “free will,” there is nothing necessary about them either. In fact, there are no bona fide examples of physical events that could not have their origins in some purely physical cause. There is much about the nervous systems of animals we do not understand, but there is nothing about those nervous systems that clearly renders them incapable of being the sole mediators between an organism’s environment and its intentional actions. Anyone who has ever been hungry or has consumed an appreciable quantity of alcohol can attest to the fact that material factors strongly influence choice, and it is at least plausible that all decisions are ultimately reducible to the net influence of various environmental and neurological factors.

We can’t say with logical certainty that “free will” does not exist, but since its existence is not necessary to explain behavior we can say that the assertion that it exists is a violation of the principle of parsimony. It is more parsimonious to look for explanations in some as-yet-unknown physical process, or even in some collection of physical processes so complex as to be unknowable, than it is to put forward the untestable assertion that there are extraphysical first causes.



II.

Up to this point, I do not believe I’ve covered any new ground. One can make damaging arguments against the existence of free will in entities external to oneself, but the strongest practical argument for free will is probably the introspective one -- the claim that “I know my own actions to be free.” This is a harder nut to crack, but I believe it is possible -- without resort to the peculiar quasi-dualism proposed by epiphenomenalists.

When one says “I have a choice” what does this actually mean? The proponent of the free will hypothesis takes such an assertion to mean that there are multiple possibilities for one to select from – and I am using the term “possibilities” in a strict sense: something that can be a state of affairs, not something which is imaginable but impossible. What I am proposing as a counter to this view is that, while the capacity to imagine multiple “possibilities” plays a role in decision making, any actual decision itself can only be understood as ultimately reducible to some physical process. In simple terms, we perceive ourselves as free because we can and do imagine alternative actions before making a decision, but we inevitably choose from among those apparent alternatives the one which suits our predispositions best.

If I may take the liberty of generalizing my internal experiences to those of other human beings, I must conclude that most of our actions are straightforwardly mechanical. When I type the letter “m” for mechanical, I do not think to myself “maybe I’ll press the ‘m’ key to produce the letter ‘m’ instead of trying to do it with the ‘q’ key.” In truth, there is nothing in the process of typing the letter “m” that has even the appearance of a choice. I just press the “m” key and an “m” appears. Likewise, when I eat my dinner I do not generally entertain the “possibility” of eating it off the floor – though I am at least arguably “free” to do this. Most of our actions are driven by the well established reflexes we have accumulated over a lifetime. Some are driven by genetics. The sort of weighty decisions ethicists like to talk about are comparatively rare events.

When we do act in a way that involves making recognizable choices, what is it we are doing that differs from our reflexive involvement in the causal world? When faced with what we actually perceive as a choice, we weigh the relative merits of each alternative. We attempt to predict the future. We imagine alternative futures for the purpose of selecting the most attractive one. We never fail to select the most attractive one, “attractiveness” being defined by our unique set of heuristics, our background, our genetics, our current state of knowledge and awareness, and any number of other factors that manifest themselves, ultimately, in the state of our neuroanatomy. There is nothing about this process that requires any special causal powers.

Consider the trivial choice of selecting a snack from a vending machine. We approach the vending machine not merely with a handful of change, but also with a huge collection of memories and other sorts of predispositions. Typically, we eliminate from consideration all of those items experience has shown us we don’t like. We gravitate toward those we know we like the taste of, perhaps toward those we think might have some nutritional value, and perhaps toward those which hold some pleasant associations unique to us. We might also have acquired a heuristic that inclines us to try new things, or a heuristic that inclines us to avoid them. We might have cravings related to our metabolic state.
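The scoring process described above can be sketched as a small Python program. The weights and categories are entirely hypothetical illustrations, not psychology; the point is only that selecting “the most attractive” option can be a perfectly ordinary computation:

```python
# A toy model of choosing a snack: score each option against the
# chooser's predispositions and take the maximum. All weights are
# hypothetical illustrations.

def attractiveness(snack, chooser):
    if snack["name"] in chooser["dislikes"]:
        return float("-inf")  # eliminated from consideration outright
    score = 0.0
    if snack["name"] in chooser["likes"]:
        score += 3.0  # remembered taste
    score += snack["nutrition"] * chooser["health_weight"]
    if snack["name"] not in chooser["tried"]:
        # inclination toward (or, if negative, away from) new things
        score += chooser["novelty_weight"]
    return score

def choose_snack(snacks, chooser):
    # We "never fail to select the most attractive one."
    return max(snacks, key=lambda s: attractiveness(s, chooser))
```

Run twice with the same chooser and the same machine, the program picks the same snack; the appearance of deliberation is just the enumeration of alternatives before the maximum is taken.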

When we choose one snack over another, we may or may not be able to explain our decision. One might say, “cheese crackers are my favorite snack, and that is why I picked them.” Such an answer erodes the case for free will on its face because the chooser is substantially aware of the dominant causal factor in the choice. The chooser is predicting that the cheese crackers will taste better than the other alternatives, and “tasting better” is almost certainly reducible to some biochemical process.

One might say, on the other hand, “I could not make up my mind so I chose the cheese crackers.” This answer admits to no known cause on the part of the chooser, and might be explained in either of two ways. The choice is either genuinely random, or it is the result of some process of which the chooser is simply unaware. If a choice is genuinely random in some quantum statistical sense, then it can hardly be considered an act of free will. On the other hand, if a decision is the result of subconscious motivations (or something of that sort) then it is still the product of an antecedent cause, so the decision cannot be a first cause in itself. There is no more reason to ascribe special causal powers to the subconscious than to the conscious, and even if there were, the possession of extraphysical subconscious powers is clearly not what we mean when we postulate free will.

Yet another (and quite atypical) response our decision maker might make would be: “I chose the cheese crackers because I am free to do so, even though I would have preferred the cherry pie.” Although superficially this sounds promising, on analysis it is either a lie or a self-deception. To make a choice for the sake of proving one’s freedom is just to evaluate the goal of the selection task differently. It is merely to find attempting to make a particular philosophical point more attractive than selecting an attractive food. It illustrates not freedom, but merely the working out of different causal sequences that happen to be granted precedence at the moment. The decision to put a belief before a physical pleasure is no proof of freedom either, as it may also be easily explained in terms of yet other antecedent conditions acting on the decision maker’s neuroanatomy.

Ultimately, it is simply unintelligible to talk about a decision as an isolated cause. In practice, we know decisions always exist within the unique context of the decision maker’s cognitive world. It is always legitimate to ask why a decision maker made a particular choice, which would not make any sense if decisions were uncaused, spontaneous events.

Relating the vending machine example above to our earlier missile example is revealing. Both the behavior of a heat-seeking missile and that of a hungry human being can be explained (at least in principle) by wholly deterministic physical processes. Both entities, too, behave in ways which vary according to the circumstances they detect in the external environment. They differ, most fundamentally, at the level of intentionality. Where the human being makes a series of predictions and inevitably seeks the most attractive, the missile is not “attracted” to any outcome in the same sense, nor is it capable of anything that answers to the term “prediction”. It does not pursue its target because it has some internal conscious purpose which it is deterministically compelled to carry out. The machine operates by a comparatively simple and wholly unconscious algorithm, more-or-less as I do when I press the “m” key to type the letter “m”. What sets human beings, and no doubt a good many other animals, apart from machines is not some special power to initiate causation, but rather the ability to attach action to meaning and meaning to action. A bird may fly a course wholly determined by physical causes, but one of those causes is its particular purpose at any given moment – a purpose that entities like balls and missiles do not possess. Current artificially “intelligent” machines may respond to external events as if they were predicting the future, but the only actual predictions that can be inferred from their behavior are the predictions of their designers.

To state my position another way, the distinction between a mechanical reaction and a decision is not the interposition of free will, but the interposition of an awareness of causality itself. To make a decision, in the sense that human beings make decisions, is to model at least two imaginary causal sequences in imaginary space-time, and to pursue the outcome of one or other of these sequences. The predicted outcome is the goal we pursue in making a decision. Further, to make reasonably reliable predictions one must have a substantial working knowledge of the physical world in which one is immersed. Without such knowledge, predictions would simply be wrong too often to be useful. While nothing about this process requires free will, it is no mere brute reaction either. Making decisions based on such understanding, no matter how deterministic the actual mechanism of the decision might be, is certainly no small achievement.
Ironically, it is this very ability to model the future in a variety of ways that creates the illusion that we have special causal powers. What we call freedom is nothing more or less than the general belief that each of our predictions really could represent some future state of affairs.1 It is a byproduct of the impressively complex, evolutionarily advantageous, but ultimately deterministic, way our nervous systems respond to a varied but not wholly unpredictable environment.

I think it is necessary, at this point, to clarify the distinction between the position I have outlined and the position of the epiphenomenalist. The epiphenomenalist would agree that our decisions are fundamentally deterministic, in other words, that the nervous system behaves like an elaborate machine and that the entity we identify as “the mind” cannot exercise the power of first cause. However, the epiphenomenalist goes further in taking the position that our cognitive activity is inconsequential, playing no role whatsoever in our actions. My position is that, since our cognitive processes are inseparable from the neurochemical processes that constitute them, it is wrong to assume that our cognitive activity is inconsequential. The epiphenomenalist position is analogous to saying that a regular lattice of silicon and oxygen atoms somehow only contingently appears to be a solid at our level of perception, and that it might just as easily be a liquid. This is contrary to experience. Without intentionality we would not, and could not, participate in the causal universe in the way we actually do. My assertion is that what we perceive as a decision and what we define (and, to the extent that our instruments are capable, detect) as a brain process are merely different expressions of the same state of affairs. “Thoughts” are no more capable of being first causes than the neurochemical processes which constitute them, but, as they are features of physical processes, the subjective experiences we know as “thoughts” are logically bound to causality. I think epiphenomenalists make their mistake because they believe that “thoughts” really are first causes (even if only for themselves) and must therefore be causally quarantined to protect the integrity of a material universe which they concede is deterministic.

I would like to make clear, too, that the elimination of a first cause interpretation of our capacity to make decisions does not require that the universe be wholly deterministic. If quantum theory is substantially correct, some states of affairs, at least at the level of subatomic particles, are statistical rather than deterministic. While it is at least imaginable that some unknown causes underlie this apparent randomness, even if the statistical nature of states of affairs at that level is a brute fact it does nothing for the “mind-as-first-cause” hypothesis. Genuinely non-deterministic processes would render the future less predictable, but would not require, or even imply, any special causal powers of the mind.2
Finally, I would like to make clear (if I have not done so already) that while my position implies that decisions are, at least in a broad sense, computations, I do not take the position that a capacity to perform computations is a sufficient condition for intentionality or consciousness. John Searle’s Chinese Room argument seems to cover this issue forcefully enough, and I will not re-cover the same ground.3 My purpose in this essay is to show that we have no reason to suppose we possess the power to truly originate causal sequences, not to show that all of our cognitive processes are reducible to computation. Indeed, I don’t believe either intentionality or consciousness is computational.



III.

Inevitably, any attack on the idea of free will must come to terms with our own sense of self-identity. If we are not free beings – what are we? Well, from a starkly materialist perspective, we are merely the loci of a particular kind of causal complexity. Being a sort of “thick spot” in the field of causation may not be a very satisfying self-definition for most people, but there doesn’t seem to be anything inaccurate about this view. A perhaps more satisfying but no less accurate self-assessment is that we are rare instances of consciousness in a universe that is largely unconscious. Consciousness, while a difficult entity to define, is at least ontologically less problematic than free will. While freedom isn’t necessary to explain our experiences, consciousness certainly is. None of us can honestly entertain the notion that he or she is absolutely unaware. To engage in any sort of reasoning at all requires an awareness of something. Taking the position that the elusive entity I identify as “me” is fundamentally a conscious entity is therefore a more secure stance than assuming what is fundamental to my nature is freedom. It is perfectly possible to act and make decisions without pretensions to magical causal powers, but many of our actions would be impossible (or at least wildly improbable) without a conscious capacity to make predictions.

The other sense of free will, that of being unimpeded in carrying out one’s intentions, is wholly independent of the first cause definition. Whatever the nature of our decisions might be, our ability to carry out those decisions is subject to circumstances that are external to the decision making process. To say that I am free in the sense that I can carry out my deterministically derived decisions without external impediments is not a meaningless claim. Here too, however, our perception of freedom pivots on our assessment of our own identity. The deterministic processes that constitute my decision making are still identifiable as part of “me,” just as the deterministic processes that make my muscles function are identifiable as part of “me,” but any deterministic processes in my environment that prevent me from carrying out my decisions are not a part of “me.” While in some doggedly holistic sense this distinction may be trivial, it is probably inevitable that we organize cognition from the perspective of our unique identities. Evolution has predisposed us to model the rest of the universe as somewhere we live, not something we are.

The sense in which we are free when we are not externally impeded can be readily explained in terms of the determinist perspective I’ve proposed. Remember, to make a decision is to pursue the most attractive option among some set of imagined options. To suffer a loss of free will in this more external sense, then, is just to have one’s options constrained to some subset of unattractive choices – or at least to be denied some plausible, more attractive choice. Unless one is utterly paralyzed, one always has plausible choices of action. Even if one is utterly paralyzed, one still has plausible choices in regard to one’s own thoughts. A slave does not feel enslaved because he or she lacks imagined possibilities, but only because even the most attractive of those imagined possibilities is unpleasant.4

1 This erroneous belief may even be necessary to the decision making process: if we were constantly aware that most of the causal sequences we imagine before making decisions simply could not occur, we might tend to engage in futile, and rather paradoxical, searches for the possible rather than the attractive.

2 I owe this observation to John Searle.

3 See: http://en.wikipedia.org/wiki/Chinese_room

4 In theory, one might be severely constrained by external circumstance and yet feel completely free. Consider the following thought experiment. Imagine a vending machine with the magical ability to predict our decisions perfectly. Before we push a button to make our selection, the vending machine disables all the other buttons. Thus, we are constrained not merely by our own internal deterministic processes, but by an external one as well. Provided the machine made perfect predictions, however, we would be oblivious to the constraint. While it is tempting to imagine that we are made less free as the number of options available to us is reduced, we are in fact only less free, in the sense of being materially constrained, when we perceive the actual constraint.

October 13, 2009

A Few Words on the Dangers of Language

We are all born empiricists. As infants, we begin to learn about the world through our senses. We watch, we listen, we feel, we taste. We learn to manipulate objects. We learn to crawl, and eventually to walk, by trial and error. From the beginning, we are endowed with both a curiosity about our surroundings and a capacity to experiment and observe. This is our first and purest way of knowing.

Later, when we acquire some facility for language, we learn a second way to know about the world. Typically, we start this new exploration by asking questions of our parents. “What is that?” “Will it bite?” Using language, we can ask questions about things that are beyond our present means of discovering firsthand. “What is the sun made of?” “How old do trees get?” We learn to incorporate our empirically-acquired knowledge and our linguistically-acquired knowledge into some tentative, incomplete, but more-or-less functional picture of reality. So armed, we venture forth into the world.

These two ways of learning, experience and language, are the only two ways of learning we will ever possess. It is worth examining how these two methods differ in the kind of knowledge1 they provide, and what the consequences are of ignoring this distinction.

An important, almost defining, characteristic of empirical knowledge is that its truth has nothing to do with how we might feel about it. On a clear summer day the sky is its own particular shade of blue -- whether you happen to like that shade of blue or not. While we can change some parts of the world in certain ways, the lesson we learn by observation is that most things have stable characteristics, or predictable transitions through a series of characteristics, that define them. To stay with childish discoveries for now, we discover that rocks are hard, heavy and relatively changeless on our human timescale. Apples, on the other hand, begin small and green, grow large and (usually) red, and (if uneaten) turn brown, soft, and inedible. While we are capable of imagining mushy rocks and indestructible green apples we will not find such things in nature. Illusions and other failings of our senses aside, our experience reliably shows us what is.

The knowledge we gain through language, on the other hand, is of a very different character. Even in the realm of what we might call material facts, experience and language spawn different kinds of ideas. To know by word of mouth that your grandmother is seventy-eight years old is not the same as being aware of her wrinkles, grey hair, and bent posture. In this case both the physical perceptions and the linguistic expressions do, in some sense, refer to the same underlying reality, but while wrinkles have a physical presence, “seventy-eight years old” is a conceptual entity – something we cannot physically point to. The ideas we express in words, while they may refer to the world2, are fundamentally constrained not by nature, but merely by the rules of language. One could as easily say that your grandmother is 678 years old. While probably not true, the proposition is just as expressible as one that is true. While the empirical realm corresponds to what is, the realm of language corresponds only to what is imaginable.
Every bit of knowledge we acquire through language alone is, in an important sense, imaginary. If I tell you, for example, that I once walked from Fort William to Glenelg in the Scottish highlands, unless you happened to witness my whole trek from end to end, you cannot know what I’m saying is true in anything like the same sense that you know what you ate for breakfast or where you spent yesterday evening. You imagine my journey, however abstractly, and you make a certain assessment about whether or not it actually occurred. If I tell you that I walked from Ascraeus Mons to the Fesenkov crater on the planet Mars you would also imagine my journey in precisely the same sense – even though you would probably make a different assessment regarding the truth of such a claim.

Knowledge we acquire through language is contingent in a way that empirical knowledge isn’t. When we are told something or read something, we automatically measure its plausibility against other ideas we have already accepted as true. In parallel to this, we also assess the credibility of the source. Were this credibility assessment always a measure of past reliability it would not be particularly problematic. (e.g., Jones has rarely been wrong about his forecasts of the weather, so if he says that it will rain today it probably will.) Unfortunately, credibility often rests on far less rational grounds. (e.g., Jones is my uncle, a model citizen, and a freemason in good standing -- so if he says that it will rain today it probably will.) Authority often has dimensions that have nothing to do with empirical reality.

Consider our earliest social relationships. It is normal for young children to accept the words of their parents as facts. In part, this may be based on empirical past performance: as long as parents are not grossly incompetent or deeply pathological they can be expected to answer most of their children’s questions about everyday matters accurately. It would be foolish, though, to imagine the trust that children have for their parents is altogether rational. In the first place, young children don’t have much existing knowledge to measure new knowledge against. They are innocent, ignorant or pathetically unskeptical – depending on one’s perspective. In the second place, parents have a privileged position as providers and protectors, and we are probably predisposed by millions of years of evolution to accept their authority, at least as children.

Parents, or at least some adult assuming some semblance of a parental role, serve as our first examples of credible authorities. It follows, then, that the parent-child relationship must be the model for all subsequent authority relationships, both with other persons and with non-personal entities such as gods and nations. Annoyingly Freudian as this may sound, I believe it is self-evident. If you have doubts about this claim, you need only ask yourself -- How could it be otherwise? What possible model for authority relationships could any individual experience earlier? Even if we assume the pattern for authority relationships is not learned but wholly ingrained in us genetically, we cannot escape the primacy of the parent-child relationship.3 Genetics, driven by natural selection, would tend to produce traits that are advantageous to our survival, and if our genes predispose us to trust anyone for the sake of our own survival it would certainly be our parents.

The connection between parental relationships and relationships to adult institutions permeates language. “Our father in heaven.” “The fatherland.” “Mother Russia.” “America’s founding fathers.” The word “patriot” derives, ultimately, from the Greek patēr, meaning father. Even phrases like “international brotherhood” imply a paternal relationship indirectly. In fact, one struggles to find a relationship with a national or religious institution expressed in any other terms. One may speak of “friendly nations,” but this “friendship” refers to a relationship between one nation and another, not an individual’s relationship to a nation.

Inevitably, we model new relationships on old ones. Having established a certain level of trust for an external authority from infancy, we are predisposed to look for truth in the linguistic constructions of others from then on. This is why the theist believes in scripture, why the patriot is stirred by patriotic speeches, and also, at least in part, the reason that the scientist trusts the contents of a professional journal. We cannot learn everything we want to know empirically, so we accept the assertions of those authorities with whom we feel connected. The professional group, the political party, the nation, the state, the church – all become surrogates for the family.

It is true that what human beings want to know is a different matter than who they are inclined to trust. Once the strictly necessary knowledge of how to cope with our everyday environment is learned, the quest for further material with which to occupy our minds can proceed in various directions. It varies according to one’s culture, one’s class, and no doubt plenty of highly individual factors. Typically, however, the quest for such non-essential knowledge is bound up with the quest for personal identity. Having worked out how to eat and not to be eaten, one can dabble in the luxury of experimenting with who one is. One can learn how to be a Christian, a communist, or a certified public accountant. Each of these identities has its own associated group of adherents and its own unique set of social rules. In terms of providing an identity they all meet the same need, even though they offer drastically different ways of looking at the world. Social identity is, as it were, familial identity writ large. It is an expression not of the need for a reliable source of knowledge, but of the need for a reliable source of personal context and security. To be either a Christian, a communist, or even (to an admittedly lesser extent) a certified public accountant is to project what is essentially a family identity onto a group of individuals far too large and diverse to be a family. It is to expect a certain level of protection from inclusion in this group, even if, in some cases, this protection only amounts to a vague sense of social legitimacy.

The idea that all human beings are irresistibly drawn to seek the truth is a naïve one. We may all be born empiricists, but, as I’ve already at least implied, the practical function of empiricism is survival. Having burned one’s hand on a hot stove, one does not generally seek the opinion of an authority, even a parent, to confirm that it would not be wise to repeat the process. Most decisions, however, are more tolerant of error. Questions like “What is the sun made of?” may derive naturally from the same innate curiosity that moved us to touch the stove, but whether the sun is a large gaseous ball of mostly hydrogen or a god with certain peculiar attributes matters little to one’s immediate survival. This is another sense in which our “knowledge” is of two kinds: those things we substantially need to know and those which we merely want to know. Whether or not false explanations of apparently non-threatening phenomena might be dangerous to the species is another question, and we shall touch on that later.

It is worth noting that almost everyone makes at least an unconscious distinction between beliefs passed to them by language and hard empirical truth. Even people who claim to believe fervently in the power of prayer, for example, rarely attempt to stop their cars with a prayer when a nice substantial brake pedal is available. Likewise, they would seldom pray for purely spiritual sustenance as a replacement for physically nutritious food. People pray for love, for cures from diseases, for emotional strength, etc. In short, they pray for things they do not know how to attain by any means known to them through actual experience. They do not, in general, pray for alterations of the physical world that their experience tells them do not occur. They do not ask God to do the laundry because they know intuitively that nothing will happen.

Religion, ideology, culture, and even science are cognitive luxuries – things we indulge in because language gives us the capacity to do so, and which then take on their own peculiar trajectories. They are the byproducts of abilities we’ve evolved for other purposes. We can use our legs to dance if we are so inclined, but natural selection did not produce our legs for dancing.

Making matters even worse for the cause of truth is the fact that erroneous beliefs and destructive ideologies can be personally advantageous under certain circumstances. For an average Russian in the 1940s, a worshipful attitude toward Stalin was a better survival strategy than one of outspoken dissent, no matter how much Stalin may have lied or how many millions may have died as the result of his decisions. Or, to offer a less brutal example, given that the actual goal of the average Christian is a sense of security within the family of the church, an open and rigorous skepticism is rather counterproductive, especially if one lives in a predominantly Christian community. The believer believes that God exists in more-or-less the same way that the avid football fan believes his last-place local team is still, somehow, the best. It is not a matter of what is; it is a matter of who one is – and who one’s friends are.

The esoteric nature of much of human knowledge, along with the natural tendency to put undue trust in the often arbitrary constructs that make up human institutions, conspires against our native empiricism. This is a truth which the most educated among us often find especially difficult to grasp. Watch any debate between one of the more articulate advocates of atheism and any devout believer and you will witness the same tragic pattern, almost without fail. The atheist begins with the assumption that God’s existence or non-existence is a question about nature -- something you can work out empirically, like the age of a tree or the distance from here to the moon. The believer begins with the assumption that God is the head of a grand spiritual family, a figure whose existence is as unquestionable as the existence of the believer himself or herself. The atheist lays out a concise, empirical argument. Unconvinced, the believer counters with the truth according to scripture. “Don’t insult my intelligence,” is more-or-less what the atheist is saying. “Don’t insult my family,” is more-or-less what the believer is saying. The debate ends in mutual perplexity and irritation, and no one’s mind is changed.

I believe (if you will pardon the irony with which I employ the term) that empiricism is generally a good thing. To unpack this slightly, understanding nature is a good thing and one achieves that understanding by observation. Being able to learn by observation was a useful capacity when we were infants, and it’s a useful capacity to us still. The real universe is a far richer and, if I may say so, a more interesting place than any of the illusory worlds human beings have managed to invent. Physical reality is not without its dangers and we have to face them, but we in no way lessen those dangers by inventing new ones of our own. For many, I realize, a universe denuded of gods of one sort or another would seem an unbearably lonely place. This is a sensibility I do not share and frankly cannot share. Ultimately, such imaginary parents always demand more of us than their dubious comfort is worth. You may love God, your country, or the ideology of V.I. Lenin – but there can be no meaningful sense in which any of these ghosts can ever love you. That you mean anything to your God, your country, or your ideology is the saddest of self-deceptions. It isn’t hard to see that the willingness of people to slaughter one another over religious or ideological differences is not a trait that benefits our species as a whole, even if the odd individual might profit from it here and there. To be in love with one’s illusions is a personal tragedy; to be willing to kill for them is a tragedy for us all.4

If scientific and philosophical advancement is to be a boon to humanity we must guard against a tendency to become smug about our grasp of the truth. Knowledge should not be reduced to a mere initiation criterion for yet another narrow social group. If you are a humanist, a skeptic, or anyone with a generally empirical frame of reference, it is virtually certain that your views are the product of a unique and fortunate personal history. Illusions abound in human society and there is plenty of opportunity to succumb to them. If you have avoided or overcome such illusions, it is only because your personal circumstances have endowed you with a certain critical mass of empirical knowledge that has allowed you to see another way. In other words, you have learned enough real, substantiated facts to have a general sense of how things actually work, and have had enough experience to learn what sort of explanations are likely to bear scrutiny. If you have reached that point, then blind belief is simply not an option for you. Adopting such an empirical perspective is not a grand, heroic choice, but merely the result of a certain series of events. Had your life been only slightly different you might well have arrived at very different conclusions. If we are truly proponents of reason, we can hardly be outraged that the often intellectually stultified lives of others may have led them to adopt greatly different worldviews. Though we may naturally find their illusions frustrating, we may no more fairly despise the unenlightened than we may fairly despise the disabled or the illiterate. To do so would be, in itself, unreasonable. In the struggle against ignorance, intolerance is counterproductive.

To be a proponent of reason in an often irrational society is to strike a very difficult balance. While intolerance is counterproductive, a blind, all-encompassing tolerance can be nearly suicidal. One cannot placate the fanatically religious or the fanatically ideological with patience and civility. One must be willing to resist anyone who actively denies empirically substantiated knowledge, who impedes the progress of knowledge on purely superstitious grounds, or who seeks to impose ideas on another by brute physical force. Anti-evolutionists, bomb-wielding fundamentalists, political extremists and Holocaust deniers must be opposed. On the other hand, we should never become so self-righteous in our dedication to the real that we feel it necessary to immolate Santa on a pile of Harry Potter novels. It must be admitted that some false beliefs can be genuinely harmless, and that often even deeply deluded people can be naturally humane enough to refrain from hostile or intolerant actions. The difficulty, of course, is that in any more-or-less functional democracy other people’s personal beliefs manifest themselves in public policy. The next-door neighbor who believes in a celestial father figure may not be a problem, but the millions of neighbors who elect a demagogue who undermines your civil liberties certainly are. Unable to trust in deities, we must put our faith in education. Ironically, the best tool we have to dispel humanity’s delusions may be the very language from which those delusions are ultimately made.

--------------------------
1 I am using the word “knowledge” in an everyday sense here, not in a narrow, epistemological one. I will use “knowledge” as a synonym for “belief” in certain cases. While the distinction between the two concepts is obviously important, I am striving for readability rather than philosophical rigor.

2 More rigorously, to “states of affairs”.

3 There are, perhaps, two distinct kinds of genetic predispositions to see the world in terms of authority, which we might call the competitive and the cooperative. Competitive hierarchies are the result of struggles for dominance between members of different species, or between members of the same species who are not closely related. Birds contesting for mates and territories would be an obvious example. While this sort of hierarchy may have nothing to do with parental relationships, it is not the kind of authority I am referring to here. We do not, after all, adopt the beliefs of our enemies because we fear them. Cooperative hierarchies, on the other hand, are the result of a struggle to advance the common interests of close relatives. Animals as diverse as ants and human beings engage in this sort of authoritarian organization, and it may be said that even the lowly worker ant’s activities are driven by a parent-child relationship – in this case one almost wholly dictated by its genes.

4 Of course, by my own earlier admission, my beliefs are probably no more than the reflection of my own self-identity. If I did not identify with certain views about what such a slippery term as “good” ought to mean, I would not indulge such a passion to proselytize.

September 25, 2009

A Pseudo Introduction

I am inclined to say very little about myself. There are two principal reasons for my reticence. First, I have come to many conclusions that will doubtless offend some people. While I personally can live with the consequences of this, I do not care to have my family become the subject of threats from any loose cannon who happens to disagree with me. The world is amply dangerous enough. Second, my intention is to write about ideas which will, I hope, stand or fall on their own merits. This is not Facebook; my identity is not the message.

My intended areas of exploration are at best philosophical and at worst political. Living in America, I am likely to address American politics occasionally, probably from a sociological or historical perspective. For the record, I neither love nor hate America, nor do I love or hate any other country. I am writing to you with the sincere hope that you are a rational being. I am pretty much indifferent to your nationality, ethnicity, gender or even species. If you can read and think for yourself, and if your paws have sufficient dexterity to operate your computer, I am honored to have you as my audience. On the whole, my philosophical interests are broader and more forward-looking than my political ones. I intend to add fairly long and well-developed articles on philosophical topics from time to time, and perhaps to rant about politics more frequently and with less rigor.

Time permitting, I am open to dialog. I will reply to criticism or anything else that seems to invite a reply. I appreciate comments, especially well considered ones. I will happily engage in debate so long as it appears that we are ultimately advancing toward some better understanding of the topic. I have no particular interest in mere intellectual jousting.

Thank you for visiting,

e.m. cadwaladr