For the last week I have been listening to establishment politicians, Republican and Democrat, attempting to justify the NSA’s PRISM program [ http://en.wikipedia.org/wiki/PRISM_(surveillance_program) ] on the grounds that it helps prevent acts of terrorism. No doubt it does. What we should be asking ourselves, however, is how much security we are gaining in exchange for the surrender of our immediate privacy, and possibly the eventual surrender of the rest of our freedom.
The total number of American civilians killed by Al Qaida-related terrorists thus far amounts to a little over 3,000 – almost all of them killed in the attacks of 9/11. The US population stands at over 300,000,000, so, in rough terms, your odds of having been an American civilian killed by Al Qaida terrorists stand at about 1-in-100,000. To put this into perspective, nearly 500,000 Americans have died in auto accidents since 2001 – about 1-in-600 of the population.
If we assume the elimination of PRISM and other forms of mass surveillance would increase our individual risk by a factor of ten – to 1-in-10,000 – I would still quite willingly accept that risk against the unknown but historically plausible risk of seeing our republic degenerate into a totalitarian police state. Against that latter risk, Edward Snowden has staked his life. Like Patrick Henry, he has thrown down the gauntlet to the rest of us – “Give me liberty, or give me death.”
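As a quick sanity check of the arithmetic, here is a minimal sketch using the same round numbers assumed above (illustrative only, not precise statistics):

```python
# Rough odds cited above, using the post's round numbers (not precise statistics).
US_POPULATION = 300_000_000
TERROR_DEATHS = 3_000        # approximate American civilian deaths from Al Qaida-related attacks
AUTO_DEATHS = 500_000        # approximate US traffic deaths since 2001

terror_odds = US_POPULATION / TERROR_DEATHS     # about 1 in 100,000
auto_odds = US_POPULATION / AUTO_DEATHS         # about 1 in 600

# Hypothetical tenfold increase in terrorism risk without mass surveillance
terror_odds_10x = terror_odds / 10              # about 1 in 10,000

print(f"Terrorism: about 1 in {terror_odds:,.0f}")
print(f"Auto accidents: about 1 in {auto_odds:,.0f}")
print(f"Terrorism at tenfold risk: about 1 in {terror_odds_10x:,.0f}")
```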
June 18, 2013
June 14, 2013
Science, Authority and Narrative (oh my!)
It is common for human beings to confuse narrative with fact. We are often attracted to explanations that actually have little going for them except some measure of coherence. To illustrate this for yourself, consider any of the major, popular religions that you don’t happen to believe in. In such a religion, you will find a deity (or deities) with an interesting but largely unverifiable history, possessing vast powers that can either be attributed to normal physical causes or, at least, to the powers of someone else’s deities. Religion is one kind of narrative – one attempt to explain the behavior of the world. Whatever you happen to believe, you have to admit that many people in the world have erroneous beliefs.1 Even atheists and theists can at least agree that they cannot both be right.
I used to think, rather naïvely, that science was a cure for the ailment of magical narrative thinking. It is not. To show this I will need to define “science” for the purposes of my argument – or perhaps to isolate the abstraction of science from the institutional practice of science. Much worthwhile effort has already been made in this area by Thomas Kuhn and others. I don’t intend to spend much time reiterating the work of examining the internal processes of science; rather, I am interested primarily in the way that science (and scientists) influence what non-scientists believe.
Let’s begin with the abstract notion of science. Dig into any text on the philosophy of science, and you’ll find out pretty quickly that “science” is surprisingly difficult to define. I’m going to dispense with most of the subtleties and propose a rough definition that works for my purposes:
Science is the formalized process of determining the truth of assertions solely by empirical means.
Let’s parse this.
It is important to note that science is a formalized process, to distinguish it from all sorts of common behavior. When one of my cats gets hungry and walks to the kitchen to examine his bowl for food, he is testing a mental proposition by empirical means – but I wouldn’t say he’s doing science. Science is formal in the sense that scientists are conscious not only of the subject of their inquiries, but also of a set of rules stipulating how those inquiries may be conducted. To do science one must have rules regarding what constitutes acceptable evidence, and consciously and consistently adhere to those rules.
Science is a pursuit of truth.2 To pursue truth scientifically is to presume that a context-independent form of truth exists. It is to assume there are states of affairs in nature which are not subject to our biases and desires, even if they are sometimes beyond the reach of our unaided senses.
Lastly, science in the abstract is a method of determining truth solely by empirical means – observation (whether direct or augmented with instruments), measurement, and experiment. Scientific truth is the antithesis of belief arrived at through persuasion, or belief based on acquiescence to accepted authority. To put this another way, scientific truth is that truth based on actual observation of events, rather than on traditional belief or other social mechanisms.3
Given this definition of what science is ideally, two problems are apparent with the proposition that science has supplanted narrative or magical beliefs in contemporary society. The first is that the systems by which scientists themselves actually operate tend to deviate from the ideal. The second is that the public’s faith in science is itself unscientific. I will address the second first.
It is safe to say that the average person who venerates science is not very knowledgeable about the accumulated body of scientific knowledge. Rather, the typical admirer of science knows a few interesting scientific facts and has a sort of vague notion of something popularly called “the scientific method.” What most people believe, really, is that a faith in science is justified because technology works – the two being bound together in the public imagination. In earlier times, when technology was not a transformative force in ordinary people’s lives, the general public had little awareness of science. They respect it now because it conjures iPads and other nifty shiny things into being from time to time. People would respect wizards or witchdoctors who had the same capability. Their faith in science as a whole stems from the great practical successes in physics and chemistry. That this faith has spilled over into fields like psychology and sociology has little to do with great successes in those fields, but depends on a generalization of scientists as a class. Once the witches and priests worked miracles; now the Ph.D.s do. Popular belief is not in a body of facts, but in a body of people who use special words, work in special places, and have special titles. While such belief is not altogether divorced from empirical justification, it rests on the same sort of authoritarian trappings any medieval peasant would have recognized.
The other problem with science as a working institution – the problem that the practice of science is corrupted by non-empirical factors – is best illustrated with a few examples.
Consider Climategate, a good summary of which appears at the following link: http://www.guardian.co.uk/environment/2010/jul/07/climate-emails-question-answer . An even shorter summary of Climategate is that an important body of climate scientists were engaged in restricting access to data that might throw doubt on the man-made global warming hypothesis, and at least discussed the possibility of smearing their critics in the press rather than attempting to refute them scientifically. While a significant portion of the general public believes that the Climategate emails prove that the man-made global warming hypothesis is false, the published evidence actually does not accomplish that. A scientific theory is not disproved by showing that some of its proponents were practicing bad science. A scientific theory can only be disproven in the same way that it can be proven – which is to say, with empirically derived facts. However, the affair, by showing that a body of scientists were willing to defend their theory in explicitly non-scientific ways, does throw reasonable doubt on the credibility of contemporary science as a public institution. This is really neither new nor surprising. If you believe that science is immune to political influence, Google the name Trofim Lysenko. In fact, scientists are human beings, subject to the influence of their particular cultures, the social acceptance of their peers, and the power of the educational, commercial, or political institutions that support them. When their inquiries have obvious social consequences, empirical evidence ceases to be the sole requisite of what they promote as scientific truth.
The picture in the social sciences is even worse. A recent and glaring example is that of Diederik Stapel: http://www.nytimes.com/2013/04/28/magazine/diederik-stapels-audacious-academic-fraud.html?pagewanted=all&_r=0 In brief, Stapel, a renowned Dutch social psychologist, was ultimately forced to admit that he invented the data he used in over 50 published studies. Again, this does not prove that all, or even most, social scientists are that unscrupulous. What it does prove is that the peer review process, whose purpose is to ensure that scientists are actually practicing science, failed to uncover Stapel’s outright fraud at least 55 times. The social sciences, already questionable as “sciences” due to their general lack of predictive success, are doubly burdened by such failures of their own procedural integrity. Sociology, psychology, and related disciplines all struggle with a common problem. A case study of an individual, or of a particular group, does not yield results that can be generalized to all people at all times. A study in the social sciences is, at best, a snapshot of a certain unique set of circumstances. Studies in sociology and psychology are not repeatable in the way experiments in chemistry are. This makes the social sciences all the more prone to the influences of culture, personal prestige, and politics. Stapel was believed because he was, himself, a recognized authority. He was, to some considerable extent, a recognized authority because he understood what his peers and his society wanted to believe.
In 2012, the summarized results of a series of studies from the University of California at Berkeley appeared in the popular press: http://www.theatlantic.com/health/archive/2012/03/are-rich-people-more-ethical/254689/ These studies purported to show a variety of ways in which the rich were less ethical than the rest of us – from a study of traffic behavior to a study of, believe it or not, literally stealing candy from children. These studies were presented as science and conducted, or at least directed, by academics of high standing. Even without examining the methodologies for systematic bias, it is immediately interesting that the studies were specifically targeted to compare the wealthy with an artificially homogenous class of everybody else. At least in the popular summaries, I have found no reference to any other social divisions. Considering the magnitude of the research, wouldn’t it have been more scientifically enlightening to rank all recognizable classes and groups – instead of just the rich vs. everybody else? But let’s be serious. How likely is it that researchers at UC Berkeley would ever release a set of findings showing that welfare recipients, blacks, or Latinos were less ethical on average than everybody else? I’m not asserting here that welfare recipients, blacks, or Latinos are innately less ethical, but simply that researchers in a liberal university know better than to ask the question. The studies that were actually conducted were tailored to produce findings that are consistent with the dominant ideology at UC Berkeley. In that context, the findings were entirely unsurprising. I am not suggesting, either, that the researchers would have falsified the data if they had gotten results that were surprising (i.e. contrary to their desired outcomes) but I am confident that in such a case the studies would not have been so widely published. Practically speaking, there are some assertions you are allowed to make and others you are not. In our culture, as in all cultures, certain inquiries are taboo.
An even more culture-bound political exercise masquerading as science is to be found in Right Wing Authoritarian (RWA) research. I wrote a critique of RWA in an earlier post, and append it here in the extended footnote below.4
Science in the sense that I outlined at the beginning – the formalized process of determining the truth of assertions solely by empirical means – is probably the most useful tool the human species has. By testing our beliefs about reality in a broadly scientific way, we can struggle forward with some confidence. With only faith, tradition, and consensus to guide us we hardly struggle forward at all. Science in the sense of the cultural institution embodied in a handful of professionals, however, has as much to do with authority as it does with truth. While it is likely that those with specialized educations will be especially fluent in their chosen disciplines, it does not follow that they will always be right, or that those with less formal training should surrender their skepticism, especially in the arena of the social sciences. Assertions are not true or false according to the status of their originators, but according to their correspondence with material reality.
It should always be remembered that the very core mechanisms of scientific inquiry, far from being the privileged sphere of the academic elites, are nearly universal. All animals with senses use them to uncover truth. If my cat is not exactly doing science when he examines his food bowl, he is at least doing things that science depends on. He makes a hypothesis, albeit an unstated one, that his bowl might contain food. He conducts a kind of experiment by walking to the bowl. He makes an observation. He adjusts his view of reality based on that observation. These rudiments of science are what sense organs and nervous systems evolved to do. It is the social, mystical, and authoritarian roads to belief which are peculiar to us. While most animals, including humans, suffer from certain instinctive reactions which drive us to occasionally behave against our better interests, only the most socially complex animals can summon up real delusions. The capacity to plan is the same capacity that allows us to imagine the world in ways that it is not. The capacity to communicate complex ideas is the same capacity that allows us to extend our imaginary world to others. Thus, the very mechanisms that make science possible put it continually at risk.
----------------------------------------------------------------
1 If there are any true relativists out there who believe that everybody’s truth is just as good as everybody else’s, they have to believe in my epistemic absolutism too, so they do not qualify as an exception.
2 Here I admittedly generalize right over the messy business of philosophy and all of its interesting nuances. Is a scientific theory true because it works – in other words, because it yields correct predictions? Are scientific theories expressions of reality, or merely symbolic analogies of what are ultimately imperceptible states of affairs? Interesting questions, but beyond my current scope.
3 I am tempted to say that tradition and authority based forms of “truth” are not truth at all, but this would be begging the question. It is sufficient for my purposes to roughly define what I mean by scientific truth, and leave other definitions of truth alone for now.
4 RWA research ( http://en.wikipedia.org/wiki/Right-wing_authoritarianism ) suffers from the same problem IQ tests do – the test itself becomes the definition of the property you are testing for. This is a hazard with almost all standardized assessments of this nature. You test against the biases of the people who compose the test. If the people who compose the test have an agenda you get a very bad test indeed. Consider what the article cites as the first item on the new RWA scale:
"Our country desperately needs a mighty leader who will do what has to be done to destroy the radical new ways and sinfulness that are ruining us."
The article explains: "People who strongly agree with this are showing a tendency toward authoritarian submission (Our country desperately needs a mighty leader), authoritarian aggression (who will do what has to be done to destroy), and conventionalism (the radical new ways and sinfulness that are ruining us)." Well, that sounds like very frightening stuff. Now, let’s alter the language only slightly, while trying to maintain the same essential content:
"Our country desperately needs a forceful leader who will do what has to be done to stamp out the new extremist policies and runaway corruption that are ruining us."
This still sounds like... authoritarian submission (Our country desperately needs a forceful leader), authoritarian aggression (who will do what has to be done to stamp out), and conventionalism (the new extremist policies and runaway corruption that are ruining us). Of course, this sentence would have dovetailed neatly into any Democratic candidate's nomination speech during the 2008 US election cycle. Well, amusing as it might be, we can't all be Right-wing authoritarians.
What the proponents of RWA have done is to assemble a compact set of stereotypically conservative traits that most liberals find especially abhorrent, then constructed a quite precise linguistic trap that would snare conservatives – and only conservatives – into identifying with that definition.
A common hallmark of good science (though I admit not one that occurs in absolutely all cases) is that it produces some surprising results. The RWA assessment appears to be so carefully crafted that the results are about as surprising as discovering that optometrists write more glasses prescriptions than other people.
The RWA article continues:
“In a study by Altemeyer, 68 authoritarians played a three hour simulation of the Earth's future entitled the Global change game ( http://en.wikipedia.org/wiki/Global_change_game ). Unlike a comparison game played by individuals with low RWA scores, which resulted in world peace and widespread international cooperation, the simulation by authoritarians became highly militarized and eventually entered the stage of nuclear war. By the end of the high RWA game, the entire population of the earth was declared dead.”
Again, if you look at the Global change game objectively you will have to admit the findings are rather problematic. As a socio-economic-military simulation of the world, the game is both crude and overly subjective. The game world is quantified along resource and population lines based on real numbers, but little if any attempt is made to model cultural or historic relationships between nations. Military and economic models are oversimplified for the sake of playability. Assessments of the effects of players’ decisions are often not handled algorithmically (by some neutral mathematical rule) but by the rulings of “facilitators” with their own personal biases. I have no doubt the game is an enjoyable exercise, but it proves little. A global simulation designed and refereed by conservative economists might be equally enjoyable, would probably yield very different results, and would be every bit as useless.
A classic study of authority like the Milgram experiment ( http://en.wikipedia.org/wiki/Milgram_experiment ) had real validity because it attempted to hide the game from the experimental subjects. They thought they were doing something real. The Global change game is, straightforwardly, a game – not reality. Further, while the Milgram experiment put people into an unusual situation, it was one that was at least plausible for them to be in. The tiny population of world leaders the Altemeyer game asks players to represent is, in the real world, not drawn from some sampling of people from a common culture, screened only according to how they performed on a psychologist’s test. On average, real leaders in the real world are a more cautious and deliberative breed. They have something real to lose. The “global change game” that actually played out over the forty-four years of the Cold War failed to produce a nuclear exchange, even though there were often authoritarians on both sides and always authoritarians on at least one side. Any candidate for a valid simulation of the future ought also to be a credible simulation of the past. While Altemeyer’s game is dramatic and interesting, I don’t see the Rand Corporation seizing on it anytime soon as a means of predicting the future behavior of actual nations.
Posted by E.M. Cadwaladr
June 10, 2013
What could the government possibly do with phone call metadata?
Here’s a quick scenario based on my own experience. I’ve attended my local Tea Party group’s meetings many times. I know how they are organized. They have a web site with a fair amount of contact information. The organizers are proud of what they do, and make no attempt to hide their identities. Meeting attendees often provide their phone numbers so they can stay informed about future meetings, upcoming speakers, and so forth. The organization’s volunteer secretary feeds these numbers into a robocaller for that purpose. The robocaller makes a series of calls in steady succession – a pattern no doubt easily identifiable by NSA software. An analyst can easily look up the originator of the calls, a Tea Party group secretary, and reasonably infer that she isn’t robocalling birthday greetings to her grandchildren. Thus, without a warrant, an analyst can compile a list of people connected with that Tea Party organization. When people have to worry that participating in a political organization that supports the US Constitution might result in harassment from the IRS or some other government entity, they may decide to stay at home and keep their mouths shut. They may take a step, in fear, away from freedom. That is why a little metadata matters.
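To make the inference concrete, here is a minimal, purely hypothetical sketch of the kind of batch-calling pattern an analyst might look for in call metadata. The record format, function name, and thresholds are my own inventions for illustration; nothing here reflects any actual NSA tooling.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical illustration only: the record format (caller, callee, start_time)
# and the thresholds below are invented for the example.

def find_robocall_batches(call_records, min_calls=20, max_gap=timedelta(seconds=90)):
    """Flag callers whose calls arrive in long, steady runs of near back-to-back
    calls (the signature of a robocaller working through a contact list) and
    return the callees reached in each flagged run."""
    by_caller = defaultdict(list)
    for caller, callee, start in call_records:
        by_caller[caller].append((start, callee))

    flagged = defaultdict(list)
    for caller, calls in by_caller.items():
        calls.sort()                      # order each caller's calls by start time
        batch = [calls[0]]
        for prev, cur in zip(calls, calls[1:]):
            if cur[0] - prev[0] <= max_gap:
                batch.append(cur)         # still part of the same steady run
            else:
                if len(batch) >= min_calls:
                    flagged[caller].extend(callee for _, callee in batch)
                batch = [cur]             # gap too long: start a new run
        if len(batch) >= min_calls:
            flagged[caller].extend(callee for _, callee in batch)

    # In the scenario above, the flagged list for the secretary's number would
    # amount to the Tea Party group's contact list.
    return dict(flagged)
```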
Posted by E.M. Cadwaladr