Rationalism, Part 3

Disclaimer: I’m not an expert on philosophy. I’m just a person trying to figure things out for myself, and I speak for no one but myself.

This post is the third in a series about Lecture Seven, "Rationalism", in Leonard Peikoff’s book Understanding Objectivism. I’ll go through the chapter, summarizing and adding my own thoughts, comments, or questions.

So far, I have considered four "symptoms" of rationalism as described by Peikoff:

  1. Ideas above reality.
  2. Deduction as the basic method of knowledge.
  3. Starting points are purely conceptual self-evidencies.
  4. A desire for "certainty with omniscience."

More on Certainty with Omniscience

We left off last time discussing a tendency some people have towards compartmentalization of topics and the avoidance of integrating things into general principles. This was under the heading of "certainty with omniscience". Peikoff continues on that idea: he says that separation of topics is necessary because you can't take everything in at once. But a proper separation involves being able to connect the topic you're focusing on back to the rest of your knowledge. A compartmentalizer doesn't do that.

Peikoff gives a concrete example of a compartmentalized approach:

if you’re a newspaper columnist, you study strip mining laws, or you study strip mining laws in Tennessee, or you study strip mining laws in Tennessee in 1981, or you study foreign policy in the first half of the twentieth century, or you study the invasion of Grenada

This reminds me of how lots of academics have super specialized focuses or disciplines and don't even really try to integrate or connect their specialization to broader themes. There's nothing inherently wrong with having a bunch of detailed knowledge about some specific thing or other, but if you can't connect that detailed knowledge to broader themes or ideas, you won't be able to use it to come to useful and interesting conclusions that apply to other cases. Detailed knowledge of strip mining laws in Tennessee in 1981 might be useful as an example of all sorts of principles of, say, economics, and detailed knowledge of the invasion of Grenada might be useful as an illustration of foreign policy principles, but if you don't have the broader knowledge and concepts to put the details in perspective, you're just a kind of walking encyclopedia of specific facts without useful judgment or wisdom.

Peikoff says that the logic of the rationalist's flawed method leads them, contrary to their intent, into being "a concrete-bound, disintegrated gazer at this item and this item and this item." A rationalist disconnects concepts from reality, so they have to try to deal with reality without the use of concepts. And so, ironically, they wind up at the same place as an empiricist.

Peikoff says that rationalists are driven to mysticism because they can't get what they want from concepts or from the perceptual level, and they want to have a large-scale vision and understanding of reality somehow. So they turn to mysticism. He says Plato and Spinoza are two examples of this. I think there is a connection here with bad methods of learning and understanding topics. Somebody might have a hard time understanding some concepts in a field in an integrated, chewed, concrete way (their concepts are floating). So then they just try doing lots and lots of examples and hope it clicks (this is the turn to empiricism). And they might wait around in the vain hope that everything will just "click" somehow and they'll understand the topic through a revelation if they churn through enough examples (this is analogous to a turn to mysticism).

Peikoff gives the people he's lecturing to a sort of diagnostic test to help determine if they have some experience of the problem of "certainty with omniscience." I think Peikoff's idea here is important. The symptom is having constant swings of certainty on a given subject. Basically, the issue is swinging from "I know nothing" to "I know everything" about some topic. Peikoff indicates that this phenomenon can happen over an extended period of time. This contrasts with the proper attitude, which is "that you know what you know, even when you realize that there are things about that subject that you don’t know." He gives an example that I think is helpful. He says that if you understand the case for capitalism, it shouldn't bother you if someone brings up some example from the nineteenth century that supposedly shows capitalism is bad. Peikoff:

You do not have to be omniscient; you do not have to know the answer to every piece of concrete example or lunacy that someone could bring up, to know your principle or why it is correct. But if you feel, when you hear the principle, “Now I know everything,” and then, two weeks or two years later when this example comes up, “I don’t understand it at all, I’m completely thrown, I don’t even know if capitalism is right,” that is a rationalist idea—the goal is omniscience. So long as you feel that you have it, you’re certain, and as soon as you don’t have it, you’re wiped out.

I think there's a really important psychological truth here. I think there is a desire to be "done", to be stable, to be (at least mostly) "complete" in one's thinking about some topic, and to not feel like a new development or example or issue might cause one to have to reconsider one's premises, or at least think on one's feet about how those premises might apply to some new example. But one can't really be "done" thinking because it's a big world and there are various examples that people might bring up, issues that might come up, edge cases, etc. One might be done if one could somehow, impossibly, know every possible case or issue or counterexample, and see all the angles on an issue in advance, but then that's back to the desire for omniscience.

One thing I think is worth saying is that you should absolutely have mastery over the principles of your field. Like, if you are going to be an advocate for capitalism, you should know the abstract, general case for capitalism really well. But that doesn't require omniscience, because the principles are a delimited set of abstractions that can be applied to infinitely many particular cases.

The Concern with Order

Peikoff's fifth symptom of rationalism is a concern with order. He says that rationalists are big on logic and structure and order, and that's good as far as it goes. But one major problem is that the starting points or axioms of the rationalist are not rooted in or dictated by reality. They're just some stuff the rationalist made up. He gives an example of Kant, who had a neat and tidy and symmetrical system in which the mind had four sets of prior endowments, three in each set, with each set consisting of a positive, a negative, and a union of the two. So it's all neat and tidy but just some castles in the sky, basically.

Peikoff says that rationalists don't recognize that their orderings are arbitrary. They think they're following the one non-optional thought process or pattern. The idea that there are a bunch of different ways to write an article that would be valid and good is not something the rationalist would accept. Instead, they would want to find the objective way to do it that's dictated by reality.

Peikoff gives an interesting example on the point about the validity of different orderings. He wanted to talk about the culture of Weimar Germany in his book The Ominous Parallels, but he couldn't figure out a reason to start with art or literature or whatever. And Rand said “It doesn’t make any difference; it’s all just examples anyway. Choose whatever is most interesting to you and most eloquent in your opinion, or most dramatic.” Peikoff took that as a suggestion that he be a whim-worshipper and said he needed a reason.

Anti-Emotion

Peikoff's sixth symptom of rationalism is being anti-emotion. Rationalists think emotions are subjective, irrelevant crap. They take the same approach to interests/values/desires/concerns. They want to be blank mirrors of reality. Boredom, on this view, is irrelevant. But Peikoff disagrees and says:

In actual fact, if you are writing and are bored, that is an unfailing sign that you’re doing something wrong, that you’re trying to write something either that you don’t understand or that your subconscious is not convinced is important, and that will stop you more than any other thing. The only way to get out of that is to stop and to ask yourself what you actually want to say, what actually interests you.

Peikoff says that rationalists are inconsistent about emotion because on the one hand they are repressors, but on the other hand they pick their (divorced-from-reality) axioms based on what they like.

Polemics

Peikoff says that a rationalist feels a tremendous need to engage in polemics (arguing) with people who disagree with him. He sees disagreement as a threat or vulnerability because of his lack of confidence in his own ideas. Peikoff:

Because he doesn’t feel secure about his own ideas; he’s not really convinced, whatever his deductive structure, because in fact his ideas are floating; they are not tied to reality. Hence in his own mind, they seem shaky (which, in fact, they are). He can’t, therefore, be content simply to present his ideas to someone else and let them stand or fall with their honest judgment. If you are secure internally, with your own mind, you can tell your view to others and take the attitude, “If you see it, fine; if you don’t, that’s your problem.” But if you are insecure, if you feel shaky, and then they say, “I don’t see it,” that is like the knife in the heart. You feel, “I have to get to them. I have to force them to accept it, to agree. I have to, in effect, hit them over the head with a Q.E.D. I have to beat down their criticism and come up with the unanswerable argument.”

The approach Peikoff describes here is actually extremely second-handed and contrasts really strongly with Roark's attitude. There's a particular scene in The Fountainhead with the Dean of Stanton:

“You know,” he said, “you would sound much more convincing if you spoke as if you cared whether I agreed with you or not.”
“That’s true,” said Roark. “I don’t care whether you agree with me or not.” He said it so simply that it did not sound offensive, it sounded like the statement of a fact which he noticed, puzzled, for the first time.
“You don’t care what others think—which might be understandable. But you don’t care even to make them think as you do?”
“No.”
“But that’s ... that’s monstrous.”
“Is it? Probably. I couldn’t say.”

The rationalists Peikoff is describing care very much whether other people agree with them and want to make other people think as they do due to the rationalists' own insecurities.

Peikoff makes a point that I think is another good psychological insight. He says that part of the reason rationalists favor deductive arguments is that you can supposedly use those to "overpower the guy" and force him to concede the point. You can't do this with what Peikoff calls inductive arguments, because with those you're relying on the person looking at some information and making connections on their own. But with deductive arguments you can finish with "QED" and make them submit. Peikoff explicitly talks of using arguments as a "social weapon," which I thought was notable and interesting.