July 7, 2011

What We Can’t Know

Consider a snail. If you have ever watched a collection of snails in an aquarium over some reasonable period of time, you may have noticed that their behavioral repertoire is rather limited. Snails move about, eat, mate, deposit eggs, and recoil from attack or contact with certain substances. They have sufficient senses to allow them to locate food, and to determine whether an object they encounter is something edible, another snail, something attacking them, or something composed of an objectionable substance. The variability of behavior from snail to snail is not great, though some are more active than others. They do have some capacity to learn, but most snail behaviors appear to be intrinsic to their genetic makeup rather than the result of learning. New hatchlings and large adults behave identically in most respects. It is, at least at present, impossible to say whether or not the snail is “conscious” in something like the sense that human beings are. It is also impossible to say whether it perceives any experiential difference between what it “knows” genetically and what it “knows” as the result of learning.

With apologies to the biologists who study such animals, I believe the behavioral outline above offers a reasonable basis for making inferences about the cognitive world of the snail. It must surely be conceded that a snail’s understanding of the cosmos has limitations, even if the character of its understanding (what it is like to be a snail) remains alien to us. We can reasonably claim to know, for example, that no amount of patient training is ever going to allow a snail to read a sentence – even if we use Braille characters that the snail could, at least, perceive. A snail’s nervous system consists of a few connected ganglia, sufficient for its narrow repertoire of senses and behaviors, but clearly not much more. It can no more use its simple nervous system to read a sentence than you or I can fly by flapping our arms.

That a snail lacks the physical capacity to read is obvious, yet many human beings (including many very intelligent ones) believe the kilogram and a half of fatty tissue in their craniums gives them sufficient capacity to generate a fact from any state of affairs whatsoever. To put this more colloquially, we tend to believe that everything that can exist or occur is knowable to us. We don’t believe, of course, that any human being is omniscient -- able to attach facts to all states of affairs at once. We would also admit to certain spatial and temporal constraints. For example, we do not claim that we can know, at present, any minor details of the local conditions on any of the extrasolar planets astronomers have discovered. This is a spatial problem – we could know if we were there. Likewise, we don’t claim to know whether the statement “Socrates ate more than 100 kilograms of olives in his lifetime” is true or not. This is a temporal problem – we do not have access to unrecorded states of affairs from the 5th century BCE. What we do tend to believe is that any given state of affairs is understandable to at least one exceptionally intelligent member of our species, given that he or she is in the right place at the right time with the right set of observational instruments. It would not occur to most of us that, perhaps, we only glide across the glass of some aquarium, inside a larger universe that we not only do not know – but cannot know.

If a snail could somehow devote one of its ganglia to the knowledge that its capacity to understand the universe was limited, it would not be to its evolutionary advantage. Its nervous system is probably already taxed close to its limits by the meager cognitive tasks it has, and it is hard to see how being aware of its epistemic limitations would help it survive and propagate its genes. Most human philosophers have not been notably prolific breeders either, so it would be unfair to expect great biological success from a philosophically inclined snail. A more complex animal, like a cat, probably has plenty of neuronal capacity to spare on the knowledge of its own cognitive limitations, but lacking an abstract language in which to think it would have no way of formulating such knowledge. If we can imagine that there are states of affairs beyond our capacity to know, it is only because we can model such a situation symbolically. If your cognitive world consisted wholly of sensations and memory, you would have nothing to model the unknowable with. One cannot conceptualize the unknowable directly, but describing it by analogy is fairly easy -- as I have just done by comparing us with snails.

The great majority of human beings could probably understand the argument for cognitive limitations I have made above, even if the idea itself has never occurred to them. That it is not the common view, however, is probably natural. As a species, we have not had language (by which I mean language capable of expressing propositions) very long, and written language is an even more recent development. We don’t think it odd that the ancient Greeks did not discover DNA, even though there is every reason to believe their brains were just as capable as ours. One needs to discover other things first. The acquisition of knowledge is a cumulative endeavor.


Reduction

While what we usually think of as our “collective knowledge” expands from generation to generation (and seems to be expanding at an ever greater rate), there is no reason to believe this trend is without limit. Not only do our brains (or perhaps any brains) have physical limitations, but there may be a limit to the number of states of affairs that could, even for some hypothetical omniscient mind, correspond to uniquely meaningful facts. Although the number of states of affairs may, in some sense, be infinite, finite descriptions of at least some infinities are possible even for normal human beings. For example, one could imagine a ball in any one of an infinite number of positions along a 1-meter long track – arguably even this would represent an infinite number of states of affairs. We can, however, express this infinite number of states of affairs with a single, finite description:

A ball in any one of an infinite number of positions along a 1-meter long track.
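
The same reduction can be written in the notation of set theory (offered here purely as an illustration, not as part of the original example): the infinitely many possible positions of the ball admit a description of a handful of symbols,

{ x : 0 m ≤ x ≤ 1 m }

a finite string that stands for an infinite number of possible states of affairs.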

This sort of description is, in general, what the laws of science attempt to do. E=mc2 is just a simple way of expressing a relationship between an enormous number of actual concrete particles of matter and their equivalencies in energy. Science can be described, in fact, as an attempt to reduce the incomprehensibly large number of states of affairs to a manageable collection of symbols. Not that this reductionism is the unique province of science. Any generalization we make is an attempt to make our cognitive world more manageable. Consider:

Zeus is the source of all lightning.

While this may not be a very satisfying explanation to us, it is at least a causal reduction of the phenomenon of lightning. It gives the initially chaotic phenomenon of lightning a locus we can relate to. To the snail, lightning flashes can only be random occurrences, not interpretable in any broader context. To the cat, perhaps, lightning correlates with rain. A cat might derive such an association through experience, but without a propositional language the causal underpinnings of the phenomenon must remain forever inaccessible. Neither the snail nor the cat can have any awareness of states of affairs beyond its senses. Only a being with a language capable of manipulating abstractions can penetrate the limit of its own direct perception.

The difference between empirical and purely imaginary models of the world is worth noting. Let’s stay with our example of explaining the origin of lightning.

The empirically derived explanation of lightning is that it is an electrical current caused by an enormous difference in electrical potential between a storm cloud and the ground. This is a simplified explanation, but it should suffice for our purpose here. It is an empirical explanation because, with the proper instruments, we can detect this difference in electrical potential – or to put this another way, we can extend our senses with instruments to become aware of a previously unknown state of affairs. Using this explanation as a starting point, we might construct devices that produce lightning on a smaller scale. Or, we might produce devices that predict lightning before it occurs. Knowing that metal conducts electricity more readily than air, we might construct lightning rods to protect our buildings from potential fires. Empirical models let us uncover relationships in nature, and in so doing let us make predictions that are reliable enough to be useful.

Now, consider a purely imaginary explanation. What do we gain by attributing lightning to Zeus? As I’ve said, it does give the phenomenon a psychological underpinning (it gives us a feeling of understanding) – but it offers little else. We cannot devise a means to either detect or predict Zeus’s actions. Neither are we likely to build machines that emulate powers that we assume to be unique to Zeus. We merely trade the mystery of lightning for the mystery of Zeus’s power. Such supernatural explanations of phenomena effectively preclude further inquiry. We cannot study magical, invisible, capricious beings empirically, even to disprove their existence. All that we can do is add conjecture onto what is already a conjecture, speculating that we might appease Zeus with such and such a sacrifice, that Zeus was displeased with the farmer whose olive tree the lightning struck, etc. We can, in fact, create hypothetical causal chains with no empirical basis ad infinitum. You can posit a god to explain what has happened or what is happening, but not to reliably tell you what will happen – so the necessity of maintaining coherence requires that such explanations always be created post hoc. Language makes this possible, just as it makes science possible. The Zeus hypothesis does not let you predict that farmer X will probably be struck dead if he stands on top of a hill holding a pitchfork during a thunderstorm -- it only lets you conclude, in hindsight, that lightning must have struck farmer X because he sinned against the gods.


Technology

To return to my original line of thought, we might be tempted to think that science and technology will eventually give us the power to understand everything, or, more technically, to uncover facts to describe each and every state of affairs. After all, look at what the last few decades alone have brought: telescopes in orbit which plumb not only the depths of space but also of time, and particle accelerators that smash matter into ever finer and more elementary particles. Impressive though these things might be, the snail metaphor is still instructive.

Imagine you equipped a certain snail with a device that let it detect the sound of food pellets being dropped into its aquarium. Further assume that these pellets always fell slowly to the same spot on the aquarium’s floor. The snail might learn to associate the new stimulus with the future appearance of food in a particular place. It would be able to detect a state of affairs beyond the normal capacity of its senses and it would “know,” at least from a functional perspective, something that other snails didn’t. But would a snail so equipped have any greater capacity for knowledge? Obviously not. It would still be limited by having only a very rudimentary nervous system. While sense-augmenting devices let one detect new things, they do not make brains any bigger or any more capable. There is plenty of evidence that challenging mental activity increases one’s intelligence, but again, it is obvious that no amount of stimulation is going to make a snail literate.

One might argue that certain devices, like computers, do more than merely expand our senses. They perform calculations and other sorts of arguably “cognitive” tasks. Might intelligent machines expand our cognitive capacity to the point that all states of affairs will be knowable to us? While the expansion of knowledge that has been brought about through the use of computers is truly breathtaking, the answer to this question is probably also no.

Facts are cognitive entities – propositions which correspond to states of affairs. They are the exclusive province of beings. For something to be knowledge, it requires a knower. In the end, however a fact is discovered, whether by sight, smell or the study of the output of a computer program -- it must be knowable by a being to be a fact. If, hypothetically, a computer were to produce some article of data that no one was capable of understanding, such data could not constitute a fact. Artificial Intelligence advocates might claim that such a computer would itself possess knowledge. John Searle’s Chinese Room argument seems a pretty compelling refutation of this position, but whether a computer can have intentionality (can “know” in other words) or not is irrelevant. Even if our hypothetical computer were fully intentional, if it knew facts that we cannot know – they would not be facts to us. For us to ascribe such artificial knowledge to ourselves would be like saying that primeval lungfish discovered Relativity because Einstein ultimately evolved from such organisms. The number and nature of potential facts is limited by the number and nature of states of affairs, but the number and nature of actual facts is limited by the capacity and circumstances of individual knowers. To express this point another way, every individual’s total knowledge is limited to some subset of the universe of potential facts that it has both the capacity and the particular fortune to possess -- whether that individual is a snail, a cat, or a human being.

In this light, what exactly do computers do? Computers are tools, analogous in certain respects to written language, particularly the language of mathematics on which they depend. Computers process and sort information much more rapidly than we can, allowing us to assimilate that information in new ways. My point is not that such tools cannot put new facts within our grasp, but only that they cannot do so without limit. If an individual finds Einstein’s Theory of Relativity incomprehensible given any number of analogies and explanations, there is nothing a computer can do to overcome this limitation. Returning to our snail analogy, no device, no matter how clever, is going to let a snail comprehend a sentence in the way that you or I do.

It is of course imaginable that computers or some other technological device will extend our cognitive grasp just enough for each individual state of affairs to be knowable by some human being or other. Again though, there is no reason to assume this will happen. Extending a snail’s cognitive horizons with some range of sensors and calculating devices would not be sufficient to put all states of affairs within its grasp, certainly. There is little difference between the assumption that we can know everything using only our innate capacity and the assumption that we can know everything using our innate capacity augmented by a few very useful but problematic tools.


Collective Knowledge

We should be wary not only of the idea that we can share facts with hypothetical artificial intelligences, but also of the idea that we can share them with one another. The idea of collective knowledge is deeply deceptive, whether we are discussing the totality of facts possessed by humanity as a whole or merely some small set of facts known by two individuals. (For our purposes here, we will define knowledge as some collection of facts.) We are accustomed to thinking of any discoveries recorded by human beings as having been placed into some vast Jungian unconscious that we somehow share in just by being human. We realize, of course, that to understand the principles of Relativity we are probably either going to have to read Einstein’s work or have it explained to us by someone who already understands it, but there is still a sense in which we tend to feel such knowledge is the discovery and communal property of our species as a whole. I assert that such a notion of collective knowledge is meaningless. In truth, Relativity or any other collection of facts must be “discovered” by each individual who possesses it. Our individual discovery of Relativity differs from Einstein’s only in that we have Einstein’s written documents to point the way, whereas Einstein had only the less developed works of his predecessors. Einstein may have made the discovery, but to share his facts we need to understand them just as he did. What we refer to as collective knowledge isn’t knowledge at all, but merely the collection of expedient paths to facts that have been wrested from the material universe by others and recorded in symbolic form. Facts reside in individual beings -- not in species or in books.

Dispensing with the notion of collective knowledge has very serious ramifications for epistemology in general. Consider the apparent fact that Mt. Everest is the highest peak on Earth. How do we know this? We “know” this because, by a very lengthy and determined application of the laws of trigonometry, a group of 19th century surveyors calculated the mountain’s height. Subsequent groups of surveyors have confirmed the height of Mt. Everest (within some small margin of error). By various means, human beings have also surveyed the whole of the globe sufficiently to have noticed any other mountains which might have higher peaks, and it so happens that none of them do. The laws of trigonometry can be shown to be extremely reliable, as, no doubt, can the other methods both modern and 19th century surveyors have used. The problem is that, by our strict definition of the term fact, only someone who has surveyed Mt. Everest and all of the world’s other high peaks personally, and who has a thorough understanding of all the relevant trigonometric facts used in the survey, can possess the fact that Mt. Everest is the highest peak on Earth. We will set aside, for now, the argument that one’s own senses can’t be trusted. For argument’s sake, let us imagine both our surveyor’s senses and our own are absolutely reliable.

For those of us who are not world-traveling surveyors with an intimate understanding of trigonometry, the information that Mt. Everest is the highest peak on Earth must rest on a series of inductions. Maintaining our earlier use of terms, we induce that the expedient paths to facts that have been set forth by our surveyors would actually lead to those facts if we had the means and inclination to pursue them. In other words, we believe the people who originally made the assertion about Mt. Everest’s height did not merely express an idle opinion, but made careful and systematic measurements that could be repeated by anyone with the means and knowledge to carry them out. We treat the propositions of sources we trust as though they were facts. Indeed, we could not advance very far in our understanding of the world if we had to be rigorously skeptical of all material assertions made by others. To return to our Relativity example, Einstein accepted the results of other people’s experiments as both the material basis and confirmation of his theory. This kind of justification depends entirely on the idea of collective knowledge.

I do not wish to have my position confused with David Hume’s, although there are similarities. It was Hume’s position that inductions were untenable because the future need not, as a matter of necessity, resemble the past. He asserted that even those facts that we consider laws of nature are mere correlations based on repeated experience. With nothing substantial we can point to as a cause, inductions, no matter how reliable, prove nothing. To offer a common example, when we drop an object and it falls to earth, the explanation offered by classical physics is that it has done so because the unseen force of gravity has acted on its mass. In Hume’s explanation, on the other hand, the existence of gravity is not a fact. What we call “gravity” is merely a convenient description we use to generalize past events.

My critique of induction here is far more modest. My position is that all facts, if we are using the term with philosophical rigor, must be both empirical and personal. We know, and only know, that which we ourselves have experienced. We may induce, after a few observations, that objects released near the surface of the earth are reliably drawn to it. I concede to Hume that we cannot know this has always been so, will always be so, or that something similar would have to occur in all possible worlds. I will even concede that, due to the very incompleteness of our understanding of states of affairs and the fallibility of our sense organs and nervous systems, by calling something a “fact” we can only mean that it is probable within the sphere of our experience. Despite these sweeping concessions, it is still obvious that induction cannot simply be dismissed. In a practical sense, the observation of correlations between various phenomena is how all organisms capable of learning actually learn. Hume’s objection is interesting and may well be of philosophical use, but I have no intention of trusting in Hume’s skepticism by leaping from a rooftop in the hope that my notion of gravity is purely a habit of mind. It is worth noting that Hume, who lived to the moderate age of sixty-five, does not appear to have forsworn the mundane physical correlations of the world either.

My position is that experience suffices to assert that the linguistic entity “gravity” does describe a state of affairs, however imperfectly, but that a distinction must be drawn between facts derived from personal experience, and beliefs drawn from the communicated assertions of others. I can test the existence of gravity myself -- even without subjecting myself to the risk of fatality. On the other hand, if someone else asserts that Mt. Everest is the highest peak on earth, unless I want to make the extraordinary effort to test their assertion personally, I must make an induction about their credibility. This must necessarily produce a weaker induction than any I can make from experience. To start with, it suffers from the fallibility of the senses of the person asserting it, compounded by my own in receiving the assertion. Further, I cannot know without empirical verification whether or not the person making the assertion is simply lying. Since I lack direct experience of the cognitive processes of others, this is always a possibility. Finally, and perhaps most importantly, there is the problematic nature of language itself. Symbols are not states of affairs, but representations of states of affairs. Language is always subject to interpretation, which is to say its semantic content varies in accordance with each individual’s understanding of its symbols. Even in essays such as this one, where I am at great pains to define my terms, one must always proceed as though most terms maintain their semantic fidelity without this special effort. Language, too, is capable of describing things that are not states of affairs even without deliberate deception. The party making an assertion can be honest, coherent, and have the same semantic interpretation of terms as the listener – and still be describing something that is a wholly cognitive construct. It is not my intention to imply that information we receive through language alone is worthless. Much of it can and should be believed. Rather, I simply wish to make clear that such information is different in kind from facts derived from experience.

Senses are given a peculiar status in philosophy, probably because of the antiquity of philosophy’s origins. The Greeks and other ancient peoples had a variety of ideas about the physical locus of what they understood as “the mind.” Some thought it was in the heart. Few had much understanding of the function of the brain. Aristotle thought the function of the brain was to cool the blood. All ancient cultures, on the other hand, understood what eyes and ears did, even if they did not know precisely how. They knew the loci of all the physical senses. It is hardly surprising, then, that they thought of senses and minds as very different kinds of things. No one would think that a person without eyes could see, but before the mind was associated with the brain it was at least plausible to think of the mind as non-corporeal. This division of our mind from our senses has persisted, via Descartes and others, into modern times. It might be useful to reexamine this. We now have plenty of reasons to believe that the brain is as necessary to our cognitive existence as our eyes are to our vision. Our minds are features of our nervous systems just as our senses are features of our nervous systems. The cells that make up the critical parts of our sense organs are very similar to the cells that make up our brains. Perhaps it would serve us better to think of our senses and our cognitive capacities as one intimately interconnected whole. If we eliminate the sense-mind distinction and think in terms of whole nervous systems interacting with states of affairs, sense data become more respectable. While admittedly fallible, our direct experience is still the closest approach to knowing a state of affairs we can possibly make.

There are two cognitive realms which are often put forward as non-empirical sources of facts, usually subsumed under the rubric of a priori knowledge. These two candidates are mathematics and logic. We shall address each in turn.


Mathematics

To begin, let’s consider what mathematics is. Fundamentally, mathematics is a language. In application, it is a language that describes states of affairs in terms of either discrete entities or arbitrary divisions of measurable properties. A simple example of how the language of mathematics symbolizes discrete entities is the process of counting.

Imagine a figure consisting of 2 dots. By counting the dots, we are engaging in the form of linguistic simplification referred to earlier. The ability to describe an entity consisting of multiple similar components by naming a single component (“dot”) and assigning it a quantity (“two”) lets us describe an infinite number of entities with a finite number of symbols. Without this ability, a figure of 3 dots would require a completely unique identifier, not merely a concatenation of “three” and “dots.” To symbolize a collection of figures consisting of anywhere from one to one thousand dots would require the memorization of a thousand completely unique names. With the ability to count and the example of our 2-dot figure, the description “28,721 dots” is not only meaningful but precise. (For the sake of argument, we will ignore potential differences of pattern, size, color, etc.) We need only understand the symbol “dot,” the ten symbols used for decimal numerals, and the system by which decimal numeral symbols are conjoined.
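
To see how far this economy of symbols goes, here is a minimal sketch in Python (the function name describe is my own invention, offered only as an illustration): ten digit symbols plus the rule for conjoining them are enough to name any quantity of dots whatsoever, with no unique names to memorize.

DIGITS = "0123456789"  # the ten decimal numeral symbols

def describe(count, thing="dot"):
    """Build a finite description such as '28721 dots' from the ten digit symbols."""
    if count == 0:
        return "0 " + thing + "s"
    name = ""
    n = count
    while n > 0:
        name = DIGITS[n % 10] + name  # positional notation: rightmost digit first
        n //= 10
    suffix = "" if count == 1 else "s"
    return name + " " + thing + suffix

print(describe(2))      # 2 dots
print(describe(28721))  # 28721 dots -- no new symbol had to be invented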

Once we have learned to count, arithmetic relationships follow naturally with the addition (no pun intended) of a few more symbols. Consider the following construction:

1 + 1 = 2

At their simplest level, the cognitive operations of arithmetic do not even require symbolization. It seems likely that any sentient animal with an ounce or so of brain could understand the relationship between what we symbolize by “1” and “2” in a purely sensory way. Again, though, the symbols “+” and “=” are abstractions for relationships we can actually see in simple instances but cannot see in large or complex ones. 1 + 1 depicted as dots is easily grasped; 2,329,091 + 28,721 depicted as dots is not.

The problems that occur with arithmetic symbolizations are not ones of coherence (for the most part, mathematics is coherent) – but problems of correspondence. Consider this example:

(Imagine an illustration here: two small piles of salt combined into one larger pile.)

If our numeric symbols correspond only to discrete entities, then it is apparent from this illustration that 1 + 1 = 1. Adding two small piles of salt together, we get one larger pile. This is not a trivial matter. Clearly there is some relationship between the original 2 piles and the larger pile that would result from their combination. Just as clearly, 1 + 1 = 1 cannot be taken as a general law of nature. We might as easily have divided the salt from our original 2 piles into several even smaller piles, and concluded 2 = 3, or 2 = 4, or 2 = 5, etc. Even though all entities are countable, the numeric relationships between them don’t necessarily correspond to the language of arithmetic. A quantity “x” cannot equal both 1 and 2; that would violate the law of non-contradiction. (We shall address the factual status of logic later.) If the common arithmetic relationship 1 + 1 = 2 really does symbolize a truth, it is not a universal truth but one which requires additional language to specify its domain.

Well, what about the other possibility -- describing states of affairs in terms of arbitrary divisions of measurable properties? Let’s reconstruct our previous example as follows:

When we measure the mass of the salt instead (in grams in this case, say 1 gram per pile), we appear to have rescued our system of arithmetic as a language capable of reliably describing states of affairs: 1 gram of salt combined with 1 gram of salt yields 2 grams of salt. Again, happily, 1 + 1 = 2. However, by accepting that our system of arithmetic may work on measurable properties like mass but may fail to describe some meaningful relationships between entities, we have already dealt it a serious blow.
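
To make the contrast concrete, here is a minimal sketch in Python (the names Pile and combine are my own, purely illustrative): the same act of combining salt yields 1 + 1 = 1 if we count piles, but 1 + 1 = 2 if we sum grams. The arithmetic tracks the measured property, not the entity count, and nothing in the mathematics itself tells us which of the two we ought to be counting.

from dataclasses import dataclass

@dataclass
class Pile:
    grams: float

def combine(a, b):
    """Pouring one pile onto another yields a single larger pile."""
    return [Pile(a.grams + b.grams)]

piles = combine(Pile(1.0), Pile(1.0))
print(len(piles))                    # 1   -- counting entities: 1 + 1 = 1
print(sum(p.grams for p in piles))   # 2.0 -- summing mass:      1 + 1 = 2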

The problem is that the state of being an entity is not immutable. One might be tempted to think that the problem lies with using vague, amorphous entities like “piles”. After all, a pile is not an entity in any sense but proximity. Unfortunately, the problem goes deeper than the vagueness of the word “pile”.

If one were to dissolve our 2g salt in 100g of water, the result would be 102 grams of salt solution. 2 + 100 = 102. Again, arithmetic has faithfully described the combinations of arbitrary units of mass. Basic chemistry tells us, though, that salt (at least common table salt) is defined by its molecular nature, and that a salt molecule consists of a sodium atom bonded to an atom of chlorine. (For the sake of argument, we will treat these assertions about basic chemistry as facts.) By this definition, the entity we identified as salt ceases to exist when dissolved in the water. The salt solution contains water molecules, sodium ions and chloride ions – but does not, strictly speaking, contain any salt. The term “salt” is not a vague conceptual one like “pile,” but describes an actual material substance – a state of affairs. In other words, the “truth” of arithmetic relationships does not apply to chemical identities either, unless you specify its domain with some non-mathematical language. We might say that two chemical entities have become either one (a salt solution) or three (water molecules, sodium ions and chloride ions). Here too, the problem is not that mathematics isn’t applicable to chemistry, but simply that it isn’t applicable to all relationships between all types of chemical entities. At least with regard to identities, we always need terms that are not native to mathematics to specify the kinds of relationships that can be accurately described mathematically.
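
Schematically (taking the simplified chemistry above as given, and using standard chemical symbols only for illustration), the dissolution conserves the measured mass while changing what there is to count:

NaCl (salt, in water) → Na⁺ + Cl⁻
2 g salt + 100 g water = 102 g solution

The arithmetic of grams holds throughout; whether we are to count two entities, one, or three is something the equation itself cannot tell us.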

In our examples thus far, arithmetic relationships between arbitrary divisions of measurable properties (mass in grams in the cases shown) have consistently held true. Perhaps arithmetic does, at least, express universal truths for material relationships of this sort. Unfortunately, Einstein’s Special Theory of Relativity has shown that even mass relationships are not so simple. It turns out that the mass of an object increases with its speed – an occurrence that becomes significant as the object approaches the speed of light. If we could accelerate our 1g salt pile to various large fractions of the speed of light, it would attain masses of 2g, 3g, 4g, etc. Elaborate and expensive experiments have demonstrated that this actually occurs, so it can be said that 1g, 2g, 3g, 4g, etc. are all quantities that could correctly represent the mass of the same concrete entity under different circumstances. Further, using Einstein’s mass-energy equation (E = mc2) does not save us here for essentially the same reason I have stated above: while the relationship between mass and energy can be expressed mathematically, the entities involved in the relationships must be described in language that is external to mathematics. While this hardly invalidates either the concept of mass or the use of mathematics in symbolizing useful relationships in physics, it does show that we cannot put a naïve trust in even the most basic arithmetic assumptions.
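
As a worked instance (a sketch only, using the relativistic-mass convention the paragraph assumes, and taking the pile’s rest mass m₀ to be 1 g), the mass measured for a body moving at speed v is

m(v) = m₀ / √(1 − v²/c²)

which gives roughly 2 g at v ≈ 0.866c, 3 g at v ≈ 0.943c, and 4 g at v ≈ 0.968c. The formula itself is pure mathematics; deciding that m₀ names the rest mass of a particular pile of salt is not.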

There is an often-cited quotation from Galileo: “The book of nature is written in the language of mathematics … without which it is impossible to understand a single word; without which there is only a vain wandering through a dark labyrinth.” Even setting aside the rather broad metaphor implied by the phrase “the book of nature,” this is a very misleading assertion. As a language, all that mathematics is really capable of is describing quantitative relationships between symbols. It is a supplementary language which can only describe states of affairs by conjoining its symbols to the symbols of a natural language. Einstein’s famous equation E = mc2 only means something because its component parts, E, m, and c, are quantified symbolizations of energy, mass, and the speed of light. The formally equivalent statement X = yz2 does not, by itself, represent anything. It may be true or false, depending on the states of affairs X, y, and z happen to symbolize. It is not inherently true by virtue of its form. Moreover, there are relationships in nature that are not inherently quantitative, for which un-augmented natural language is a perfectly suitable language of description. The assertion “deer eat dandelions” symbolizes a state of affairs quite efficiently. Mathematically extended descriptions of the motions of a deer’s teeth and the biochemistry of dandelions could be used to represent the same state of affairs, but doing so would negate the very strength of mathematics – the reduction of complexity.


Logic

If mathematics cannot be freed from the hegemony of experience, then perhaps logic will fare better. Obviously, there can be no meaningful inference without logic. The idea of proving that logic is inherently untrustworthy is self-contradictory. Indeed, the significance of such an assertion being self-contradictory is, itself, dependent on logic! Unlike mathematics, the rules of logic can be applied to relationships between states of affairs without the need to specify their applicable domain. Where X = yz2 is only true for certain material substitutions of the variables, the same does not seem to apply to rules of inference. Consider, for example, the hypothetical syllogism (a conditional form of the classical dictum de omni):

x → y
y → z
______
∴ x → z

The very nature of logic is such that one may substitute any conditionals that follow the formal structure without fear that our logical language will conflict with the relationships between states of affairs. One may arrive at false conclusions, but false conclusions are not problems so long as they result from false premises. We could say:

(a) All cats are radishes.
(b) All radishes are vegetables.
(c) Therefore, all cats are vegetables.

The logic holds, because the argument as a whole can be expressed as a conditional of the form:

(a & b) → c

Since a is false, c does not follow by necessity. Logic may claim to express relationships that are applicable to all states of affairs, whereas mathematics does the very useful but more limited work of describing specific relationships between states of affairs as those relationships are uncovered.
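
One can even check the form mechanically. The short Python sketch below (illustrative only; the helper implies is my own) enumerates every assignment of truth values and confirms that the conditional chain formalized earlier is true in all of them – which is just what it means for the argument’s form to hold no matter whether its premises happen to be true:

from itertools import product

def implies(p, q):
    # Material conditional: p -> q is false only when p is true and q is false.
    return (not p) or q

# Check ((a -> b) and (b -> c)) -> (a -> c) under every assignment of truth values.
for a, b, c in product([True, False], repeat=3):
    premises = implies(a, b) and implies(b, c)
    conclusion = implies(a, c)
    assert implies(premises, conclusion)

print("The form holds in all 8 cases; only the premises can be at fault.")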

Because of their universal applicability we may in some sense call the rules of inference “facts,” but they are no less dependent on empirical verification than the assertion that Mt. Everest is the highest peak on Earth. As evidence for this, consider the following (rather drastic) thought experiment.

Imagine a sapient, conscious human being born without any senses whatsoever. This individual must lack not only the senses of sight, hearing, touch, taste, and smell, but also any ability to perceive gravity, temperature, motion, pain, or any other entity we would recognize as sensation. Imagine, further, that our unfortunate subject has an otherwise normal, fully-developed human brain. What, we must ask ourselves, would be the cognitive contents of such a brain?

Of course, we cannot know – but we can make educated speculations. There are at least a few mental states that seem to be intrinsic to our brains themselves. Virtually all human beings occasionally dream that they are falling. We can speculate that our insensate human might have such dreams. These dreams end for us when we are startled into consciousness and our physical senses overrule them, but without such senses our imaginary person might continue this odd, non-referential experience of falling indefinitely. On the other hand, it is possible that, lacking the experiences of gravity and motion to refer to, falling dreams might never occur. One can certainly have a latent capacity which circumstances deny expression. People with damaged optic nerves can have the rest of the physical apparatus necessary for sight fully intact. Another widespread (perhaps nearly universal) human trait is a fear of snakes. This does not seem to be entirely learned; humans tend to react with wariness even to the first snake they encounter. Is it possible such an instinctual fear might occasionally be triggered without experience? Might our subject have a spontaneous state of anxiety over a nameless imaginary entity, and would that entity be perceived as having a certain form, no matter how vague? Perhaps, but I doubt it.

While we cannot know what such a person would think or know, we are not wholly ignorant of the consequences of less severe forms of congenital sensory deprivation. Studies have shown, for example, that people born deaf and blind have great difficulty grasping the very idea of symbols. It is simply not plausible, therefore, that a totally insensate person would know, a priori, the axioms of arithmetic. In a world without perception, what would there be to count? My assertion is that logic is, in the very strictest sense, a posteriori as well. A prerequisite to understanding any rule of inference is an understanding of the concept that the universe can be divided into discrete entities. Without any empirical knowledge whatsoever it seems likely that the insensate human’s mind and universe would be one and the same.1 Even if, as I’ve conjectured, such a person could experience some manifestation of the universal human fear of snakes, it is questionable whether or not this fear would seem anything other than an unpleasant condition of the universe as a whole. In other words, the entire known universe consisting of one’s cognitive state, there would be no basis for the concept of any relationship between discrete entities.

While I assert that logic is a posteriori in an absolute sense, the capacity to use certain rules of inference is an intrinsic feature not only of humans, but of many (and perhaps in some sense even all) animals. While this capacity requires experience to manifest itself, the capacity itself is innate to our biological form. When I was taught the rudiments of symbolic logic it struck me immediately that all that I was learning was a language with which to express relationships that I already understood, but that no one had specifically made me aware of. We need to make a distinction between being aware of logical relationships as abstractions, and having the capacity to make certain logical inferences by the very nature of our nervous systems.

Consider the following thought experiment. An observer (in this case my cat Laszlo) is positioned to watch a ball roll across a floor. Laszlo does not know anything about formal logic. The ball rolls behind a chair, disappearing at position A and reappearing at position B. After a couple of trials, Laszlo notices the ball reappearing at position B, and thereafter will run to intercept the ball at position B as soon as he sees it disappear at position A. The most plausible explanation for this behavior is that Laszlo is, in effect, applying the rule of inference formally known as modus ponens:

a → b
a
______
∴ b

Indeed, any conditional behavior whatsoever implies an innate capacity to apply this rule. Any organism capable of learning to initiate a particular behavior in response to a particular sensory stimulus is capable of applying the rule of inference we symbolize as modus ponens.2 The capacity to apply the rule does not require an awareness of the rule itself, but both the capacity to apply the rule and the capacity to be aware of the rule require experience at least of some fundamental kind, the experience of entities and the relationships between them. Thus, experience is a prerequisite to all knowledge, and may even be a prerequisite to all cognitive activity of any kind.
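
A toy sketch in Python may make the functional point plainer (the class Conditioned and its methods are my own invention, not anything from the essay): an organism that learns a conditional association and then acts on a matching stimulus is, in effect, executing modus ponens, whether or not it could ever represent the rule to itself.

class Conditioned:
    """A learner that acquires conditionals of the form stimulus -> outcome."""

    def __init__(self):
        self.associations = {}  # learned conditionals: stimulus -> expected outcome

    def observe(self, stimulus, outcome):
        # Learning phase: record that this stimulus was followed by this outcome.
        self.associations[stimulus] = outcome

    def react(self, stimulus):
        # Performance phase: a -> b (learned), a (perceived), therefore b (anticipated).
        return self.associations.get(stimulus)

laszlo = Conditioned()
laszlo.observe("ball disappears at A", "ball reappears at B")
print(laszlo.react("ball disappears at A"))  # anticipates: ball reappears at B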


Conclusions

To put the concepts I have outlined above to some practical use, let us briefly consider the epistemological difficulties of two disciplines: economics and quantum physics.

Economic processes, considered in some raw, physical sense, must consist of the aggregate consequences of some large number of individual human behaviors, constrained, of course, by a large number of non-human physical factors. To follow the logic of reductionism, economics reduces to psychology (plus the non-human physical factors); psychology reduces to biology; biology to chemistry; chemistry to physics. While reductionism raises problems of its own, it is reasonable to assume that nothing in economics is fundamentally outside the physical realm. Looked at as physics problems, however, assertions about economic relationships are considerably more complex than our brains can hope to grasp. It is a problem of sheer scale. Even if we could grasp all of the subsidiary causal steps down to the last displacement of an electron, and even putting the problems of data collection and errors of precision at that scale aside, we would still lack the computational resources, even with modern computers, to solve such problems.

As a consequence of our own limitations, when we attempt to explain economic processes we inevitably derive our explanations from gross behavioral truisms, inadequately substantiated theory and statistics drawn from past events. Of these sources, statistics are probably the most reliable. Statistics are, in principle at least, grounded in states of affairs – in things we know to have actually occurred. However, using statistics to make economic predictions presents at least two major problems. First, the people who compile economic statistics (usually governments) generally have a stake in the figures and bias them accordingly. One need only make a cursory examination of the United States’ Consumer Price Index to see how serious this problem can become. Second, it is fundamentally erroneous to assume that a given set of comparable statistics will yield the same economic outcomes in two populations divided by culture, temperament and time. In other words, experiments in economics are accidental and by nature unrepeatable. Derivation of relationships that obtain in one society in one period of history may be illuminating, but certainly cannot serve as the basis for mathematically precise prediction.

Accurate or not, the mathematics involved in economic calculations can be quite elaborate. Again, mathematics is a language, and being the language of precise definition its use tends to imply a degree of understanding that is not necessarily warranted. It is perfectly possible to express erroneous, or even ludicrous, relationships mathematically. Still, it is entirely fair to ask the question – what fraction of the time do economists get their predictions right? Considered from this perspective, the present state of economic understanding is comparable to the state of medical knowledge in ancient times. If a person were sick or injured in Greece of the classical era, he or she would probably have been better off with a physician than without one; this is not to say, however, that such a physician’s ignorance or hubris could not easily result in fatal consequences.

It is not my purpose to discredit economics as a useful discipline, but rather to put it into some sort of intelligible perspective. One must not mistake mathematics and the aura of credentials for a robust, factual understanding of a subject. Nearly the full content of some disciplines, chemistry and geology for example, is within the capacity of the human nervous system to understand. In the case of disciplines like economics or psychology, however, the greater part of the subject’s domain is necessarily beyond our full grasp.3 We may make asymptotic improvements to our statistical and theoretical models, but we struggle in vain against the sheer buzzing swarm of minute but relevant variables.

Quantum physics presents a different problem altogether. It does not appear, at least superficially, that individual processes at that level of physical organization are incalculably complex, and neither does it seem, with deference to Heisenberg, that data collection itself presents a fatal limitation. Rather, quantum physics appears to tax our very ability to understand the behavior of unseen entities symbolically to, and probably beyond, its limit.4

I once heard Lawrence Krauss, a well-known theoretical physicist, express the opinion that no one has ever understood quantum physics. While this is merely anecdotal, it is worth considering. It is hard to imagine any other discipline, except perhaps the rather dubious study of divinity, in which a leading authority would not only make such a confession, but would make it without the slightest embarrassment. Einstein’s General Theory of Relativity is accepted to be more than a little difficult to grasp, but physicists don’t say that no one, including Einstein, has ever understood it. I believe what Krauss meant by his statement was that, while we can describe quantum relationships mathematically, they are so far outside our normal understanding of states of affairs that we cannot understand them in any other way.

To return to chemical relationships as a basis of comparison, we can imagine shared electrons circling around nuclei in a certain way, thereby binding atoms into molecules. We can have a certain picture in our heads of how such relationships work. The real states of affairs cannot be entirely like the cognitive models we use to grasp them, but the models provide a close enough analogy that we can accurately understand the relationships.

In quantum physics, the similarities between the analogies we can grasp and the underlying states of affairs that actually exist grow ever more tenuous and provisional. Our understanding is grounded neither by direct observation nor by broadly workable analogies, but instead must rest on our confidence in the syntax of mathematics itself. While the mathematics continues to serve up accurate predictions we have some justification in saying the physicists are learning about new states of affairs, but clearly we are probing the outer limits of our cognitive aquarium. The utter inaccessibility of such knowledge to all but a very few minds should alone provide good evidence that, in this direction, we may be nearing the end of our reach. Sooner or later, even the mathematical reductions of this realm seem likely to prove inadequate. One can only describe the inconceivable in terms of the conceivable. Nothing in our evolutionary legacy made it advantageous to be able to unravel the inner secrets of the strange quark.

Summing up, we can know only what we have the capacity to know, and we generally fall a good deal short of that limit. I do not agree with Wittgenstein’s view that we have no business saying anything about the unknowable, but I would allow that no epistemological work gets done by filling the dark void beyond our reach with either gods or fanciful mathematics. The likelihood that we are not the measure of all things is a worthwhile discovery in itself. The aquarium constrains -- but also defines.

___________________________________________________________

1 There is plenty of evidence from studies of early childhood development that even perfectly normal children are not born with the understanding that the concrete universe is something different from their minds.

2 It is important to note that inanimate objects, or even plants, involved in conditional relationships are not applying modus ponens. Such relationships are conditional, but not intentional. When a plant bends toward the light, it isn’t doing so in response to learning. It is aware neither of why it bends, nor that it bends. Its behavior, like that of a pebble dropped into a pond, is utterly non-cognitive.

3 Well, at least the interesting aspects of psychology are beyond our full grasp.

4 I am neither a mathematician nor a physicist. I freely admit that I am basing my assertions more on outward impressions of the discipline than on any exact personal knowledge.

1 comment:

  1. Scientific American just had an article on the physical limits of intelligence, wherein they asserted that various thermodynamic constraints restrict the absolute maximum level of intelligence. Basically, if you make neurons bigger so they have more connections, they are further apart so connections take longer to activate. So you're stuck in a certain range of nodes to network size. This supports your opening thesis.

    There is also Goedel's observation that no formal system can be complete. This implies we can't ever formally analyze our own consciousness to completion.

    However, you are quite right that abandoning collective knowledge would wreck epistemology. :D I think that might be a bit premature. For example, even what you call personal knowledge is really just shared knowledge for a collection of neurons. However, there is no reason to suspect this frees the sum total of knowledge from all thermodynamic constraints.
