Objectivism Versus Rationalism and Empiricism, Part 2

Second post on the ninth chapter in Leonard Peikoff's book "Understanding Objectivism", "Objectivism Versus Rationalism and Empiricism".

Disclaimer: I’m not an expert on philosophy. I’m just a person trying to figure things out for myself, and speak for no one but myself.

This post is the second in a series about Lecture Nine, "Objectivism Versus Rationalism and Empiricism", in Leonard Peikoff’s book Understanding Objectivism. I’ll go through the chapter, summarizing and adding my own thoughts, comments, or questions.

Summary of Previous Post

Peikoff started off the chapter by comparing Objectivism to empiricism and rationalism on various points. On the relationship of ideas to reality, Peikoff says that rationalism puts ideas above reality, empiricism puts reality above ideas, and Objectivism says ideas are the means of knowing reality. I criticized a claim he made regarding whether concepts have content besides the concretes they integrate.

Peikoff's second point of comparison was induction versus deduction. He made some epistemology points I disagreed with. His comparison to the other approaches was that rationalists want to model human cognition on math, empiricists want to model it on a sprawling, unstructured science, and Objectivists don't want to model it on another science at all, but to address the issue in terms of principles. I'm less clear on, and less convinced by, this comparison than the first one.

Now we move onto new material:


Objectivism views axioms (specifically, existence, consciousness, and identity) as real, but as preconditions of knowledge. Rationalism views them as real, but as the starting points of a deductive argument by which you get to, say, mathematics. So Objectivism views axioms as the things you need to take for granted to even have knowledge, whereas rationalism views them as step one in a long argument.

I think that, on the one hand, if axioms really are the context in which all other knowledge exists, you could view them as steps 1-3, say, of some super long chain of argument that gets you to a specific conclusion about math or politics or whatever. I don't think that's the best way to think about it, though. That's sort of like viewing a computer program as a series of hundreds of individual steps rather than abstracting away some of the complexity at any point. So the Objectivist way of thinking about it seems better, since it does more grouping and categorization of things and says that these things are important.

It's like the hierarchy exercise – putting stuff in a hierarchy and grouping it lets you deal with manageable groups, and lets you think about what's better to put first. You can make connections between "lower level" points in metaphysics and "higher level" points in politics and epistemology, and that can be useful and enlightening to do if your goal is to clarify some specific connection or address some specific point, but just having a web of disconnected ideas that you draw connections between is not the best way to organize your knowledge overall. One thing Peikoff has talked about is how the truth is the whole set of true things (he quotes Hegel as saying "The truth is the whole"), but you have to learn the individual parts one by one. And implicit in learning the individual parts one by one is learning them in an organized way.
For any reasonably complex thing, you need to organize things and put them under groups or categories so that you can retain all the information. You can't just learn a morass of disconnected facts. Even things like mnemonics are a kind of grouping (grouping a set of concepts under a somewhat arbitrary but memorable word or phrase like "Please Excuse My Dear Aunt Sally" for the order of operations), though they are a more arbitrary and less logical grouping than something that actually follows from the nature/contents of some set of ideas.
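The computer-program analogy can be made concrete with a toy sketch (my own illustration, not from the book): the same computation written as a flat run of individual steps, and then with the steps grouped under named abstractions.

```python
# Flat version: every step spelled out individually, no grouping.
total = 0
total += 2
total += 3
doubled = total * 2
result = doubled + 1

# Grouped version: the same steps, abstracted under named units,
# so you can think in terms of "add the parts" and "finish",
# not a morass of individual operations.
def add_parts(parts):
    """Group the addition steps under one name."""
    return sum(parts)

def finish(subtotal):
    """Group the final transformation steps under one name."""
    return subtotal * 2 + 1

result_grouped = finish(add_parts([2, 3]))
assert result == result_grouped  # same outcome, more manageable structure
```

Both versions reach the same answer; the grouped one just organizes the steps into units you can hold in mind, which is the point of the analogy.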

Another distinction is that Objectivism thinks there are perceptual self-evidencies, whereas rationalism thinks there are conceptual self-evidencies.

Peikoff says distinguishing between axioms and non-axioms is important. He says definitions aren't axioms or starting points of knowledge, but conclusions. He gives an example of going awry with taking something as an axiom. The example is about the statement "controls breed controls". He says this is true, but he says that if you take it as an axiom you might think dictatorship is inevitable, since we already have a bunch of controls, and then you can't explain the American Revolution, where controls resulted in more freedom. "Controls breed controls" isn't an axiom but a principle that requires a whole preceding context. Another example of this is the libertarian treatment of the idea of non-initiation of force.

As a sub-point, Peikoff says that the Objectivist view regarding determinism is that the world is lawful (everything has a cause) but that human action is caused by the choices we make.


Peikoff says the Objectivist view on certainty is that you can be certain but not omniscient, or that you can have certainty in a context.

He gives an example. He says that with his book The Ominous Parallels, he took a big leap from a couple of examples (Nazi Germany and America) to a conclusion about the role of philosophy in shaping human history. He brought to his analysis certain philosophical ideas about the nature of man and reason that he already had, and integrated certain observations about America and Germany with that pre-existing context. Peikoff says to suppose that we discover a precondition to his generalization about human history like "philosophy is influential on human affairs only if men engage in sexual relations at least once a year". Peikoff claims it wouldn't invalidate his thesis. I disagree, since the thesis as stated didn't involve any such caveat, and since the precondition would require explanation which might itself affect the thesis in other ways. This is a problem with the method of positing an unexplained, out-of-the-blue precondition – it at least potentially understates the likely impact of the newly discovered precondition on the previous theory. You can't actually know what the impact of the new precondition is until you understand it fairly well and check the existing theory for all the points at which the newly discovered precondition might be relevant. Maybe you can salvage most of the theory with minor changes, maybe not. You can't actually know until you do the analytic work. Presuming that you can just add an asterisk to the existing theory without having done that analytic work reflects a bias towards the existing theory.


So the rationalist is wrong—you have to say that inductive knowledge can always be made more precise; you can always specify more fully what it depends on; it is not a dogma or a revelation. Does this mean then that we should always say, “How do you know? Maybe there’s going to be a new condition next week or next century. Is it possible?” Every time I utter a generalization do I have to say, “I’m not a dogmatist; therefore, I have to add conscientiously, ‘It’s possible that this is going to be overthrown or specified or whichever at a later time’”? No. Only where there’s a specific basis to say that it is possible.

I disagree with Peikoff, but I think he's trying to address a real problem, which is the problem of being paralyzed by vague doubts of specific theories due to a misunderstanding of fallibility. You don't have to put a disclaimer before every generalization. I do think, though, that you can take the possibility of a theory being overthrown as part of your background context of assumptions. That doesn't mean you have to worry about your theory being overthrown in a specific case absent a specific reason. Accepting fallibility in principle does not mean being too timid to believe in anything strongly. That's not the action requirement that fallibility imposes. Fallibility just requires that you be open to your idea being overthrown if someone (including yourself) comes up with a criticism of an existing idea that you can't adequately address.† If you don't have any such criticism, you don't need to worry about a specific theory being endangered at this time.

†(I think fully practicing fallibility requires an attitude of looking for criticisms and trying to avoid bias, but I'm talking about a more minimal standard here).

There's another problem here, which came up earlier: Peikoff seems to think that the most you can do, in terms of some valid "inductive" mental process, is specify more detail. He doesn't seem to take seriously that such a theory can actually be overthrown entirely. I think he may think that if you arrive at a conclusion that later gets refuted, that shows something was wrong with your process. Often, errors do indicate something wrong with someone's process, but I do think people with good processes can actually make honest errors, because of human fallibility.

More Peikoff:

What you should say is, “So-and-so is certain within the framework of all the knowledge already obtained.” You should not say, “It is impossible to discover anything new that’s relevant,” and you should not say, “It is possible to discover something new that’s relevant.” You do not have to say either, if you do not have any basis to say either.

"Impossible" and "possible" are not symmetrical statements! Saying it's impossible to discover anything new that's relevant is a (false) statement of infallibility or omniscience. Saying it's possible to discover something new that's relevant is a (true) statement of human fallibility. There is a basis for the latter statement. But, again, it doesn't mean that you have to go around vaguely doubting everything.

I think it's interesting because Peikoff has some good advice mixed in here about the right attitude (i.e. don't vaguely doubt everything on principle), and he's addressing a real problem that I myself have struggled with. But I think he's wrong on the epistemology, so his discussion of the issue is flawed.

To be continued.