July 24, 2012

[Photo: Distant Memory with Bricks]

Automation, Service Economy, and the Welfare State

In the middle decades of the 20th century, people were concerned about a question we have since entirely forgotten: How will our society cope with automation? The problem was that human beings living in Western industrialized societies lived, as most of them still do, by earning wages. In a market economy, the value of a person lies in his or her ability to perform useful work. Automation, at that time generally defined as the production of goods by machinery with a minimum of human effort, threatened to break the fundamental reciprocal relationship between capital and labor. If profits could be increased by the elimination of the workforce, what industrialist would not want to automate? But if all of industry became heavily automated, who would be left who could afford to buy the manufactured goods?

While the debate over this question has been forgotten, the issues it raised have not become irrelevant. Indeed, many of the problems we currently face are the direct result of our failure to resolve the social consequences of the hyperproductivity characteristic of our technologically driven, energy-intensive age.

It is not my intention to moralize here, or to either advocate or decry particular forms of social organization. My goal, to the best of my ability, is simply to understand and explain what has actually occurred. However, no coherent analysis of historical events can exclude some consideration of the forms of social organization that shape those events.1

The automation dilemma poses the greatest problem for the advocate of completely unfettered free markets. This is because the free market presumes a sort of game in which the goal of the players is to maximize their ownership of the available capital while de-emphasizing any notion of collective responsibility. Again, I am talking about the rules by which the game itself operates, not about the ethics of individual players. Free market capitalism is essentially Darwinian in nature: the competitive environment favors the individual who acts according to the principle of self-interest. Automation, by minimizing labor costs, maximizes profits for those who remain – the management and shareholders. That the progress of automation continually renders more and more members of the workforce superfluous is not a problem for the purely self-interested individualist. However, since the process can only serve to concentrate economic activity into fewer and fewer hands, it continuously shrinks the number of viable customers for products until the game itself is wrecked. In the classical liberal model, one becomes wealthy by providing goods for people who have at least some measure of wealth to exchange for those goods. Most people, in all societies, can only sustain themselves by selling their own individual effort or the immediate products of that effort. The game ends when that effort is not needed. One can rove the world for untapped markets, but as automation spreads, all markets must eventually become depleted of any customers worth having.

Planned economies are different, at least in theory. In principle, automation should only spell utopia for the socialist. Socialism is something of an anti-Darwinian game, in which the goal is not the advancement of the individual but of the homogenized collective. Since wealth is to be distributed more-or-less equally, the fruits of automation are more material comfort for everyone with a smaller investment in individual effort and time: a shorter work week, and more hours to pursue those finer points of human advancement that very few people ever actually pursue.

The statements above, of course, are idealized abstractions. Reality is more complex. Free market capitalist societies are rife with all sorts of non-self-interested motives and processes. Planned economies, on the other hand, are seldom very good at finding planners who are beyond the reach of petty self-interest.

What has actually occurred, at least in the United States and Western Europe, has been a kind of synthesis of market economies with socialistic government policies. As automation has displaced people from obviously meaningful and necessary work, programs of relief and assistance have provided support for them to varying degrees. Thus, we have created a hybrid system in which wealth is redistributed from hyperproductive free market enterprises to sustain a growing number of fundamentally non-productive persons.

By non-productive I do not mean merely the unemployed, but all of the people not involved in producing food or other relative essentials. In this extended sense, most of us are non-productive workers of one kind or another. No society can get along without food. No society of any great size or complexity can get along without material industry. However, the sudden disappearance of financial advisers, lawyers, musicians, web designers, insurance agents, or baristas would not bring any society to an immediate crisis. All of these occupations are the product of having excess wealth and labor to throw around. They are a feature of an economy which has reached a hyperproductive state in its material essentials. The greater a society’s productivity in producing necessities, the greater the employment of the populace in producing non-necessities.

While I don’t mean to moralize too much about non-productive effort, I do want to make clear its real character. One should understand that non-productive effort is essentially fungible. So long as a society is employing enough people to feed itself and maintain its essential infrastructure, it may employ the rest of the population in any activity whatsoever. It may employ them as palace guards, monks, or folk musicians. As long as the society considers what these people do important enough to finance with either private funds or public taxation, any activity, no matter how fundamentally unproductive, may become a form of employment. America, from at least the 1950s on, dedicated itself to putting most people in cars and suburban houses, not that any society on the planet had ever before found a need to commute thirty miles to work in two-ton steel boxes. We did so because we could – and because someone sold us on the idea. Fifty years earlier, when we were already a very prosperous society, no one would have imagined that automobile manufacturing would become the backbone of American industry.

When serious foreign competition and further automation eroded employment in the American auto industry sometime in the late 1970s, the “service economy” was born. In short, this was the shift from making lots of unnecessary stuff to providing lots of unnecessary services. It was touted as progress, the beginning of a new age, but in fact it was a further departure of the economy from its fundamentals. If we could buy our cars from Japan, then why not buy almost all of our other manufactured goods from China? Thus, we began to live under the illusion that we could have a healthy economy that produced almost nothing, in which foreigners would do the dirty, polluting work of making things and Americans would simply own the world and sell each other insurance, cable services, and expensive cups of coffee. The unexpected consequence of automation was to make agriculture and manufacturing, once the vital organs of the nation’s economic corpus, seem economically unattractive and unimportant. The new way to make money, in a world in which making things was discouraged by social stigma and regulation, was to find various clever ways of moving paper around. It was only a short step from there to the dreamland of electronic, interconnected, virtual reality we live in now. Who builds our computers and cell phones? Some unfortunate quasi-slaves in China. Who grows the food? Who cares!

The percentage of the total population employed in American agriculture in colonial times was about 90%. Through most of the 19th century, it was at least 70%. In 2011, according to the Bureau of Labor Statistics, this had fallen to an astonishing 0.115%! A scant 358,000 people out of a population of 312 million. To put this into perspective, there are nearly three times as many software developers as food-producers. There are almost twice as many lawyers.
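For anyone who wants to check the arithmetic, the ratio works out as follows, using only the two figures just quoted:

\[
\frac{358{,}000}{312{,}000{,}000} \approx 0.00115 \approx 0.115\%, \qquad \text{i.e., roughly one food-producer for every 870 people.}
\]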

The decline in manufacturing is also striking. In 1943, 12.1% of the U.S. population made things. By 2011, this fraction had fallen to 3.8%. There are now more people employed in sales than in manufacturing, and nearly twice as many people employed in what the BLS subsumes under the heading “Office and Administrative Support Occupations”.

Automation, whether it is applied to industry or agriculture, can also correctly be seen as a process of replacing human and animal muscle power with energy derived from fossil fuels. We have financial advisers, lawyers, musicians, web designers, insurance agents and baristas in abundance only because we need a mere one person in a thousand to grow food and one in twenty-six to make goods. It is not machines that make this possible, but the coal, oil and gas that drive them. Without energy there could be no automation at all. Without abundant and cheap energy, we will be forced to de-automate, at least to some degree, and the age of hyperproductivity will end.

Even ancient civilizations produced whole classes of people who did ultimately non-essential things. Arguably, a preoccupation with non-essentials is the defining characteristic of civilization. As long as there have been cities, there have been priests, sculptors, and professional musicians. The achievement of recent decades has not been one of creating more and more esoteric occupations, but of running out of occupations altogether – of being content to employ really large numbers of people in doing not merely trivial things, but in doing nothing at all.

The laziness (and criminality) of the poor is a commonplace on the right side of the political spectrum. It is a commonplace which, unfortunately, is not without some basis in fact. While a large fraction of poor people are neither lazy nor criminal, it must be admitted that a propensity for either sloth or criminality will tend to make a person settle near the bottom of the socio-economic heap. However, if we take an honest look at how the economy actually functions, we must also admit that there are simply more people than jobs to employ them. The means of individual survival and the need to personally grow food began to detach in ancient times. The hyperproductivity of the present era has detached the means of survival from any inherently productive effort whatsoever, even effort of questionable or fleeting social value. A ruthless application of market forces might have kept the population more-or-less equal to the available work. A ruthless application of socialist principles might have distributed the work more evenly at the expense of individual liberty. The peculiar synthesis of the two that we actually pursued simply heaps the support of more and more people on a continually eroding middle class. People who produce necessities rarely mind, or even notice, when other people are employed in senseless tasks – provided those people are working. But most members of the productive classes resent, with good reason, the idly dependent – whether those dependents are generational welfare recipients or hereditary royalty. No one likes to carry those who sneer at them for working. If energy depletion were not about to change things, social upheaval would eventually do so anyway.

----------------------------------------------
1 For the record, I believe that market economies, though not without their weaknesses and pitfalls, do a significantly better job of preserving individual freedom and collective material wealth than do centrally planned economies. Central planning does a slightly better job of preserving equality – even if only by making everyone equally poor. Neither system has a particularly good record of maintaining stability. I have my doubts that nations on the contemporary scale of hundreds of millions can be made stable under any social system.

July 6, 2012

The Efficacy of Torture

Not long ago I wrote a post arguing, among other things, against the use of torture as an instrument of public policy. I stand by this conviction for three reasons. First, though I recognize that torture will probably always occur here and there as an inevitable consequence of war, I believe that it is fundamentally an immoral form of behavior. For torture to be moral, one must give up the notion that people ought to have certain rights simply because they are people. Having given up such a core concept, the only realm in which one can discuss morality is within the confines of some particular group. Being moral only on an ad hoc basis (that is, only toward “good” people) leaves the whole concept rather superfluous. A promise not to beat one’s friends is not particularly constraining. My second objection to torture comes from a belief that the value of any information gained by exercising the practice as an instrument of public policy is probably far outweighed by the level of hatred it inspires in an enemy, aiding his recruiting efforts, hardening his resolve, and justifying his own resort to similar forms of atrocity. I know that such an assertion, like most others that involve the psychology of groups, is all but impossible to prove. That does not render it untrue, but I admit it weakens it as an argument. Third, I believe that a public policy which condones torture is fundamentally incompatible with the stability of a republic. If the government can exercise such a policy as a legal option, no citizen can long feel free to dissent. A nation that condones the torture of foreigners will, soon enough, employ torture against its own citizens. As an instrument of repression, torture has few peers.

These things said, there is one argument against torture that is not only ludicrous, but dangerous and tragic in its own right. That is the argument that torture simply doesn’t yield any useful information. You can read a synopsis of this position in the article at the link below:

http://harpers.org/archive/2009/09/hbc-90005768

In essence, the argument is that neurobiological research shows that the extreme stress of torture has a negative effect on memory, producing statements that are uncertain or susceptible to the interrogator’s suggestion. Further, while torture increases one’s willingness to talk, it does not guarantee that one will speak the truth. I don’t have any particular doubts about the science itself, but the conclusions drawn from it, at least by Harper’s, The Daily Beast, et al., are oversimplified and erroneous.

The presumption is that stress not only impairs memory, but impairs it to such a degree as to render it wholly dysfunctional. Further, the Harper’s article, at least, fails to make any distinction between different methods of questioning, corroboration between different subjects, corroboration with outside evidence, etc. The implicit scenario is one in which the subject of torture is simply abused until he discerns from sloppy interrogation technique more-or-less what it is that his torturers expect to hear, whereupon he delivers up that narrative and his torturers go away with the presumption that they have uncovered a fact. As much as we may hate the practice of torture, we are not justified in presuming that its practitioners are uniformly that credulous and amateurish. The CIA, for example, has been torturing people since its origins as the OSS in the Second World War. While they are far from infallible, there is no reason to believe that they are any less methodical about torture than they are about any other intelligence-gathering activity.

Deriving information from any sort of interrogation is a signal-to-noise problem – a matter of sorting what is true from the noise of impaired memory and deception on the part of the subject, and of poor questioning technique on the part of the interrogator. Methods and circumstances matter.

Imagine that six people, physically separated since capture, are each asked under torture to give up the name of their immediate superior. One successfully resists, three give unique names, and two give the same name. While not utterly conclusive, the result of getting the same name from two separated individuals certainly increases the probability that the two are telling the truth. If the neurobiological objection implied by Harper’s were correct, all six subjects would be so impaired that their answers would be essentially random; thus any consensus that occurred between them would also be either random or the product of leading questions and, therefore, irrelevant. This, frankly, is absurd. If our cognitive abilities failed utterly under severe stress, no one could be tortured effectively for the purpose of extracting information – but we probably could not have survived the rigors of our evolutionary history either.
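To put a rough number on that intuition, here is a minimal simulation sketch of the “random answers” hypothesis, written in Python. The pool of fifty plausible names is an assumption chosen purely for illustration, not a figure from any interrogation study:

    import random

    # Monte Carlo sketch of the "random answers" hypothesis. Suppose the
    # five prisoners who talked were so impaired that each simply named a
    # superior at random from a pool of plausible candidates. How often
    # would two of them coincide by pure chance? The pool size below is
    # an assumed, illustrative number, not data from any real study.

    POOL_SIZE = 50      # assumed number of plausible names available
    RESPONDENTS = 5     # six captured; one successfully resists
    TRIALS = 100_000

    def accidental_match() -> bool:
        answers = [random.randrange(POOL_SIZE) for _ in range(RESPONDENTS)]
        return len(set(answers)) < len(answers)  # any name repeated?

    matches = sum(accidental_match() for _ in range(TRIALS))
    print(f"Chance of a purely coincidental match: {matches / TRIALS:.1%}")

The analytic answer under these assumptions is 1 − (49·48·47·46)/50⁴, about 18.6%, and the simulation lands close to it. A chance match is possible, then, but unlikely, and it becomes rapidly less likely as the pool of plausible names grows. A match remains real evidence rather than noise, which is all the interrogator’s inference requires.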

The kind of scenario described above has not been unusual in wartime. During the Second World War, for example, the Soviets, the Germans, the Japanese, and others all conducted, with some regularity, battlefield interrogation under torture along these general lines. Military interrogations are an entirely different matter from, say, the interrogations of the Spanish Inquisition. The Inquisition’s goal was, in some sense, to get at the truth – but since whether one was a heretic or not was entirely dependent on the Inquisition’s verdict, torture served largely to make the words of the accused conform to the Inquisitor’s prior conclusions. Having been broken by torture, one confessed to one’s sins in open court. In a case like this, accurate memory of past events was not particularly important. Military interrogations, on the other hand, are wholly uninterested in the guilt, innocence, or immortal souls of their subjects. Their goal is simply to extract specific, factual information about the status of enemy forces. The military interrogator does not record his (or her) findings in a history book, but disseminates them to military commanders, who in turn use them as the basis for their battle plans. If the information deviates very greatly from actual facts, plans will fail and soldiers will die in greater numbers. As a rule, battlefield commanders tend to notice such things. If we accept the Harper’s headline that “Torture Doesn’t Work,” we have to conclude that thousands of military leaders just never caught on. Somehow, they never noticed that the reports of their intelligence officers, derived from tortured prisoners’ statements, were no more reliable than random guesses. Military leaders have often been dogmatic and thick-headed – but they are not completely blind. They do eventually discard methods that don’t produce results.

Again, I am not advocating torture. The mere fact that something may be effective does not make it justifiable. However, it must be admitted that the empirical record trumps neurobiological theory and educational credentials. The mere fact that an expert does a study, and a group of people append that study to their cultural narrative, does not put the matter beyond the realm of further inquiry and rational doubt. To believe that torture never yields useful information is to believe that those who employ it are either too stupid to notice or too sadistic to care. While I am no admirer of the KGB, the Gestapo, the Kempeitai, or the CIA, one must have a decidedly peculiar view of history to imagine that their depredations on their unfortunate victims never yielded at least enough correct information to justify the trouble of asking questions. If one’s goal is simply to produce terror or to vent sadistic impulses, asking questions serves no purpose.

That much of the American left believes the neurobiological argument without question should not be surprising. It fits their view that military and intelligence people are by nature simply stupid and sadistic. The neurobiological argument is dangerous, however, because it abandons the difficult but worthwhile moral and political arguments for a rhetorically powerful but empirically vapid claim. And even if that claim were true, it would still be a very dangerous basis for judgment. If efficacy were the real problem, would torture become acceptable if a drug could be administered which improved the victim’s memory? Is the problem with “enhanced interrogation” that it is an affront to basic decency – or merely that it isn’t enhanced scientifically by someone with the proper credentials?

It seems that plain empirical truth does not have many friends these days. Fortunately, it needs none.