Emotions & Moral Judgments, Part 9
Part 9 of a dialogue discussing the tenth chapter in Leonard Peikoff’s book "Understanding Objectivism".
Adam (A): Hi.
Bob (B): Hi. Let’s continue our discussion of Lecture Ten, “Emotions and Moral Judgment”, in Leonard Peikoff’s book Understanding Objectivism.
A: OK.
B: So in Part 8 we summed up the discussion so far and introduced a tree depicting a mistaken path to learning Objectivist philosophy involving repression. We talked about how it makes sense that the scope of morality is life if you treat morality as knowledge to help you live well. We talked about how the Objectivist theory of emotions doesn’t require that people have fully chewed all their ideas and about how it’s impossible to fully chew all your ideas anyways. We talked about how the inability to chew all your ideas isn’t a major problem, since you can gradually improve your capacity to fix ideas when a problem comes up and to check new ideas before accepting them.
A: Yep.
B: Okay. So Peikoff continues and says that being against emotions is like being against the body, like being a medieval ascetic who was making war against the body. It’s the same sort of mistake. He then lays out why we need emotions. Here is the beginning of his first point on that:
Number one—and these are in no particular order—emotions are essential to action, which gives them a pretty high status. They are the automatized form in which we experience our value judgments, in which our value judgments actually enter moment by moment into our daily lives. They are the “lightning integrators” of the situation (quoting Galt). In other words, they are essential to the crow need. You don’t have to figure out every value aspect of a situation intellectually. Your subconscious takes in the meaning of an event and sums it up for you in an instantaneous feeling that Galt called “a statement of profit or loss,” like a running lightning total, without which you would be helpless, because the complexity in actual life is simply too great; you would be helpless to know what to do.
A: So according to Peikoff, emotions serve an essential function in helping us deal with the world. The “crow need” is referring to the limited ability of people to hold things in their mind at once (whether percepts or concepts). You can’t make value judgments on an explicit, intellectual basis fast enough to keep up with real life, so you need emotions to help you figure stuff out.
B: Right. He actually gives an extended example:
Imagine, for instance, there was no such function as emotions, but somehow or other you could nevertheless have value judgments, and someone held a gun at you, and there was no fear, no automatic desire to escape, nothing like that. So you’d sit back and say to yourself, in effect, “He’s holding a gun; now, let me see—this gun, I guess, must be aimed at property. Oh, yes, property is definitely a value. Why was property a value? We have to have rights, now I remember, because justice is a very crucial principle, and therefore, if he takes my property that violates justice. And why was justice important? It had something to do with rationality.” By the time you figured out, and looked up in the various books all of the structure supporting value judgments, obviously the opportunity for action would be long gone, to say nothing of the fact that even if you finally did, by purely intellectual means, come to the conclusion, “This is an action I should oppose,” you wouldn’t care, because caring is also a feeling, an emotion.
A: Hmm, I don’t know if I find this example very persuasive. If someone were to hold a gun to me, even if I were operating without emotions, I don’t think I’d have to go through a whole elaborate intellectual analysis to think something like “This seems to be a threat to my life/person/property, and I value those things greatly”, and react accordingly. Like, I could have high-level non-emotional ideas about things like important values and threats, and act according to those. Peikoff’s analysis seems a bit intentionally overcomplicated to make the point. It’s possible that I’m wrong and that I’m smuggling emotions into my description of how I’d handle the situation without emotions, but I don’t know. Another thing with an example like this is that people are often overcome by paralyzing fear in such a situation, and it’d improve their ability to handle the situation effectively if they could be more detached and objective. I don’t think you need to be totally detached and emotionless, but being less emotionally intense can help.
B: I think that Peikoff brings up a tricky example with the gun stuff. It might be better to consider the case of deciding which of several potential activities to undertake. Like, you might potentially read, or watch TV, or go for a bike ride. And you could do a rational analysis of relevant things like your interest level in the different activities, your energy level, whether you’ve been neglecting one of the activities lately, etc., but it will often be better to just pick the thing you currently want to do. Rational analysis has a role in determining which activities to choose, but it’s more in the role of thinking about overall policies or uses of time or values of activities than in making every particular decision. If the thought of taking a bike ride makes you happier than the other potential activities you could undertake, then it can be reasonable to just go with that, unless you think of a criticism of that option.
A: Hmm.
B: And one thing to keep in mind is that in a day you will face many such decisions. Any one of them could be pulled aside and made the object of a careful, thought out analysis, but if you tried to do that with all of them, you’d spend all your time analyzing and none of your time doing.
A: That makes sense.
B: Regarding “I could have high level non-emotional ideas about things like important values and threats, and act according to those”, I agree, and that’s part of why I don’t much like the example Peikoff uses. I think when one is in an extreme situation, like where someone is putting a gun to you, the situation and stakes are pretty clear. Often in those types of situations, the intense emotional reactions that people (understandably) have can get in the way, and it’d be better if they were more detached and analytical. I think that emotions can have an important role in dealing with such situations, but it can be more in the role of analyzing how to deal with the situation intuitively (which people discuss as “going with your gut”) than in arriving at some basic judgment of the overall situation and the values involved. In other words, I don’t think emotions are particularly necessary for getting to “My life and values are under threat by this person with a gun”. But suppose, for example, you were kidnapped but got away, were worried about being recaptured, and were trying to find help. You might see someone and think “this person seems like the sort that might be willing to help me.” And there might be a whole complex analysis going on there that might take hours to break down rationally, but you don’t have hours, so you just go with your gut.
A: Right, okay. Now that you mention it, I guess there could be similar judgments involved in what sort of situation you’re in when you’re under threat. Like, you might be able to arrive, intellectually, at the judgment “This seems to be a threat to my life/person/property, and I value those things greatly”, but there might be a separate analysis that is complementary to that, but at more of an emotional level, of “This person seems like they just want my property, so if I comply they should go away” or “This person seems like they want to hurt somebody, so I need to look for an opportunity to disable them or get away somehow”, and this could be based on subtle readings of the thug’s demeanor and bearing and that sort of stuff.
B: Right. Even if you could arrive at an overall picture intellectually, the emotional “layer” of data might still provide life-or-death information relevant to action, and so shouldn’t be disregarded.
Peikoff continues:
Without emotions, you could not decide what to do in the face of this kind of complexity (and that isn’t very complex compared to some situations), nor would you have any motive to do anything; you would have no initiative, nothing would make a difference to you. You know that supposedly one of the major problems of people who are depressed is what is called “flat affect”—they just feel nothing—and they’re incapable of anything but the very most primitive action, because they just don’t care, nothing matters to them. There’s no emotional life, to say nothing of the fact that the reason you’re motivated is because you know you’re going to, or you hope, have an emotional reward in the long run; you’re going to enjoy the results, or at least not be miserable. And happiness, of course, is an emotion; the genus of “happiness” is “feeling.”
A: This makes some sense, though one thing I have noticed is that people who seem “emotionally driven” are often bad at achieving long run goals and often focus on the short run.
B: Someone can be “emotionally driven” in the sense of being the sort of subjectivist we discussed earlier on in this discussion, where they just try to follow their whims and treat them as an authority on what to do. That is different than having a life guided by long range plans and enjoying the emotional result of fulfilling those long range plans. This is one of those false dichotomy issues. The two options of the false dichotomy are, on the one hand, a rationalistic Data the Android who acts and thinks but doesn’t feel, and on the other hand, a whim-worshipping emotionalist who acts and feels but doesn’t think. And the right approach is to think, feel, and act, all together, and try to make the thinking, feeling, and acting as integrated and coherent as possible.
A: Hmm okay that makes sense. By the way, this is a bit of a tangent but I’m not sure how much we can tell from someone’s affect. Like someone could have a flat affect but be fine and have emotions and stuff.
B: I think that’s true, and Peikoff was specifically talking about people who are depressed having a flat affect, not necessarily saying that people who have a flat affect are depressed. Like, people who are depressed often have a flat affect because they’ve lost their motivation and happiness - that’s a fairly common thing. The affect thing isn’t an absolute indicator, but it’s a useful sign in cultural context.
A: OK. One other thing I wanted to talk about is “nor would you have any motive to do anything; you would have no initiative, nothing would make a difference to you.” What if you were primarily motivated to say figure out some puzzle by intellectual curiosity instead of wanting to be happy?
B: I think intellectual curiosity would count as an emotion. It’s a desire that’s reflecting some value judgments about what’s interesting and worth thinking about. It’s not an intellectual analysis/argument about something being worth investigating. Emotions can be more subtle and nuanced than the strong emotions (happiness, sadness, love, hate) that people typically focus on. That’s my take, though, I don’t know if Peikoff would agree with this.
A: OK.
B: Peikoff continues:
Insofar as you cut down your emotional life, you cut down your whole life. You cut down your motivation, you cut down your ability to function except in a dutiful, bored way, and that will have repercussions on every virtue: on your initiative, your industry, your enthusiasm, your productiveness, and so on. You will be more and more reduced to dragging yourself through life, and you can do that only up to a point. Free will is not omnipotent—at a certain point, if you just don’t care, nothing is going to get you functioning. And you have got to have an emotional life to care.
I think it’s worth pointing out that Rand’s characters were quite passionate. They were often passionate about things that most people would think it’s weird to be passionate about - buildings, trains, metals - and not passionate about things like social climbing or altruism, but they were still people of passion. They were not emotionless robots. Some people liked to attack them as being something like that for being different and not sharing certain moral premises that would allow others to take advantage of the heroes, but that was just a hateful attack, not an actual analysis. And Rand herself was pretty passionate.
A: Right.
B: Okay so that’s Peikoff’s point 1 in favor of emotions, that they’re essential “lightning integrators” necessary for action and motivation. So then we move onto point 2:
Two, emotions have a critical psycho-epistemological function. That’s the issue we’ve already discussed of concretizing our abstractions. Here again, in the process of actually thinking, using abstractions, emotions keep our values alive and real to us in situations where it’s impossible to make that a separate conscious assignment.
A: Before we go any further, are there specific situations that Peikoff has in mind as situations where it’s impossible to make a “separate conscious assignment” of values, or is this just reiterating the theme that the world is so complex that we need emotions in order to deal with it at all?
B: I’m honestly not 100% sure, but I lean toward the latter, especially since he’s framing it as “That’s the issue we’ve already discussed of concretizing our abstractions.” That tells me that this is a callback to a previously discussed theme.
Peikoff continues:
If your emotions are functioning properly, they automatically keep you attuned to concretes, and thereby keep your concepts concretized, as opposed to the floating abstractions of the rationalist. What wrecks the rationalist is that nothing in his automatic functions keeps him tied to reality, because the only thing that could keep him automatically tied would be his interests—that is, his likes, his dislikes, his inner emotional experience.
A: I have a connection to make to this passage related to learning. Lots of people try to learn things - maybe a foreign language, or programming, or whatever - and have a hard time staying motivated. Some of that might be an issue of trying to do stuff that is too hard for them in general, but I think maybe some of the issue is an emotional/motivational problem. Like, they think that they should learn X, should want to learn X, have various reasons/arguments for learning X, but they actually haven’t figured out how to have motivation to learn X at an emotional level. They aren’t curious in a deep way that would sustain many hours of doing the activity. They think they should learn it but don’t want to learn it, and so they’re trying to rationalistically impose an interest upon themselves, and it goes badly.
B: I think that’s a real thing, and I also can imagine this being another false dichotomy example. On the one hand, there are the rationalists you just described, and on the other hand, there are people who say “just do whatever’s fun in the moment.” Are those the only alternatives?
A: No I don’t think so. I think that you should not repress your existing preferences, but that you can also try new stuff out, and “challenge” your existing preferences in a non-repressive way. And you can also take a resilient attitude towards trying out new stuff, rather than giving up quickly if it’s not immediately super engaging and interesting. Like, you can give stuff a chance to become interesting and fun to you without repressing yourself.
I think we’ve talked about something like this before, but, say you want to casually play a video game but also maybe want to try blogging about philosophy more. You can try playing the video game, but then taking a break and blogging. Or you could try playing the video game but in a somewhat less casual and more serious way. You can also introspect as you’re doing these two activities, and think about what seems “fun” or interesting about each of them, what parts seem boring, what problems you run into, and so on.
Let’s end it here for today.
B: OK!