Writegenstein #2: Philosophy of Psychology 205 (Seeing-As)

How does one play the game: “It could also be this”?

[…] “I see (a) as (b)” might still mean very different things.

Here is a game played by children: they say of a chest, for example, that it is now a house; and thereupon it is interpreted as a house in every detail. A piece of fancy is woven around it.

— aphorism 205 of Ludwig Wittgenstein’s “Philosophy of Psychology” from Philosophical Investigations

“It could also be this” and “I see (a) as (b)” point to different ways in which one could interpret a material object. That object alone has limited value, if any at all. In a sense, the material aspects of the object are arbitrary compared to the conceptualization of the object as a whole. What is conceptualized of it, i.e. how it is understood, depends on its place in its environment – what use it is to its environment. When children are playing house, they are playing a game. They see a chest as something to use in a game which mimics the game they see their parents playing daily and of which they are a part. They do not see it as something with material, mechanical parts as the builder might see it (that is what it would mean simply to see, though the builder may see the bigger picture as well). They ask “What can we do with this?” and understand the chest to be a house, having already established, and taken for granted, the rules for what constitutes a house.

It does not end there. Playing the game of house is itself a very sophisticated perceptual process. Our ability to formulate and make use of abstraction is perhaps what separates human perception from the perception of other animals – not in terms of form, importantly, but in terms of degree. A cat, for example, will definitely see the chest as something other than a bundle of wood and nails assembled in a particular way. It will almost certainly see it as a scratching post or a place on which or in which to sit or sleep (depending on whether the chest is open or closed and on how tired the cat is), but the cat lacks the ability to conceptualize the chest as anything more than that which affords it these very basic “cativities”, if you will. The reason for this, from an evolutionary standpoint, is that these cativities are all the cat needs to achieve its potential. So, the cat’s abstraction is of the same sort but of a much lower degree than that of the child. The cat’s abstraction is more like an infant’s than a young child’s, for an infant, like the cat, only seeks in objects the fulfillment of very basic needs. The only difference between the cat and the infant is the potential for growth and development.

One still might ask, “What objective or quantifiable relation is there between a chest and a house?” One should see now, unless one is blinded by a materialist view of reality, that this question becomes arbitrary, because one cannot speak of perception in this example without qualifying the individual subjects’ understanding of it. Perception as we experience it does not seem to be a mere material process. One does not need to understand anything about brain matter to understand something. In fact, that understanding is itself the goal. One could say that in the cat’s mind there is very little understanding taking place at all, while in the child’s mind there is no limit, especially since the child’s capability for abstract thought will continue to develop. The child understands much more than the cat does. To understand an object, I should say, is to make an abstraction of it – an abstraction that has utility in the greater context of its environment – to allow one to be successful at a game. To see-as, then, is to understand, and vice versa.

The False Dilemma of the Nature vs. Nurture Debate

Before I begin, allow me to explain what I mean by false dilemma. A false dilemma is an error in reasoning whereby one falsely assumes that the truth of a matter is limited to one of two (or a select few) explanations. Take, for example, the American presidential election. For another example, have you ever been stumped by a question on a multiple-choice test because you saw more than one possible correct answer (or no correct answer at all)? Perhaps you got frustrated because you felt that the test was unfairly trying to trick you? Well, you were probably right. This may have been an instance of your ability to recognize the false dilemma fallacy. Sometimes there are indeed any number of correct answers given any number of circumstances. There is often simply not enough information provided in the question for one choice to clearly stand out as correct. This might lead you to question the test in a broader sense. What is the purpose of this (presidential election, or) test? What is it trying to measure or prove? Without getting into that answer in too much detail (as this is not a post about the philosophical state of academic testing), I can say that such tests are not so much concerned with truth and meaning as they are with the specific program they support. That program may or may not have the best interests of the people in mind, and it may or may not be directly governed by the amount of money it can produce in a relatively short period of time. Anyway, that’s another discussion.

In a previous post entitled The Slate, the Chalk, and the Eraser, I compared a child’s mind to a slate, and I argued that as long as we write on it with chalk by teaching him how to think (rather than with a permanent marker by teaching him what to think), then he will be able to erase those markings to make way for better and more situation-relevant ones in the future, once he develops the ability to make conscious judgments. This is an example that you may have heard before, and it can be useful, but by some interpretations, it may seem to rest on a false presupposition. Such an interpretation may raise the “nature-nurture” question that is so common in circles of science and philosophy. One might argue that if a child’s mind is truly analogous to a slate in the way I have put forth, then I should commit myself to the “nurture” side of that debate. That was not my intention. In fact, that debate, in its most common form, presents a false dilemma, so I can only commit to both or neither side, depending on what is meant by ‘nature’ and ‘nurture’. The conventional definitions of these terms are limited in that they create a spectrum on which to make truth-value judgments about objects, experiences, phenomena, etc. We commit to one end of the spectrum or the other, and we take that position as true and the other as illusory. This is similar to the subject-object distinction I described in an earlier post. Perhaps comically, even the most radical (and supposedly-yet-not-so-contrary) ends of scientific and religious belief systems sometimes agree on which side to commit to, albeit for different reasons. That particular conflict, however, is usually caused by a semantic problem. The terms ‘nature’ and ‘nurture’ obviously mean very different things to radical mechanistic scientists and evangelical Christians.

Please keep in mind throughout that I am not criticizing science or religion in general, so I am not out to offend anyone. I am merely criticizing radical misinterpretations of each. Consequently, if you’re an idiot, you will probably misinterpret this post and get offended by it as well.

Taking this description a step further, a false dilemma can be committed to any number of degrees. The degree to which it is committed is determined by at least two factors: the number of possible options one is considering and the level of complexity at which one is analyzing the problem. Any matter we might deal with can be organized conceptually into a pyramid hierarchy where the theoretical categorical ideal is at the top, and the further one goes down the pyramid, the more manageable but trivial the matters become. As a rule of thumb, the fewest options (one or two) and the lowest level of analysis (bottom of the pyramid) should give rise to the highest probability of a logical error, because the bottom level of analysis has the highest number of factors to consider, and those factors culminate up the pyramid toward the categorical ideal. Fortunately, committing an error at the lowest levels of analysis usually involves a harmless and easily correctable confusion of facts. Errors committed at higher levels of analysis are more ontological in nature (as the categorical ideals are per se) and can have catastrophic consequences. All sciences and religions structure their methods and beliefs into such pyramid hierarchies, as do we individually. They start with a categorical ideal as their assumption (e.g. materialism for some science; the existence of God for some religion), and they work down from there. However, neither religion nor science is meant to be a top-down process like philosophy (which is likely the only top-down discipline that exists). They are meant to be bottom-up processes. For science, everything starts with the data, and the more data that is compiled and organized, the more likely we are to be able to draw conclusions and make those conclusions useful (in order to help people, one would hope). For religion, everything starts with the individual. Live a moral and just life, act kindly toward others, and you will be rewarded through fulfillment (heaven for western religions, self-actualization for eastern religions). These can both be good things (and even reconcilable) if we go about them in the right way. What are the consequences, however, if we go about them radically (which is to say blindly)? In short, for radical belief in a self-righteous God, it is war, and therefore the loss of potentially millions of lives. In short, for radical materialism, it is corruption in politics, education, and the pharmaceutical industry, the elimination of health and economic equality, and the potential downfall of western civilization as we know it. That’s another discussion, though.

For the nature-nurture debate, the false dilemma is the consequence of (but is not limited to) confusion about what constitutes nature and nurture to begin with, and even most people who subscribe to the very same schools of thought have very different definitions of each. First, in the conventional form of this debate, what do people mean by ‘nature’? Biology, as far as I can tell, and nothing more. We each inherit an innate “code” of programmed genetic traits passed down from our parents, and they from theirs, and so on. This code determines our physiology and governs our behavior and interaction with the outside world. Our actions are reactive and governed by our brain-computer, and free will is consequently an illusion. What is meant by ‘nurture’, on the other hand? Our experienced environment, and nothing more. Regardless of our chemical makeup, how we are raised will determine our future. There is no variation in genetics that could make one person significantly different from another if both were raised in identical fashion by the same parents, in the same time and place. We have no control over the objective environment we experience, so free will still seems to be illusory.

These positions seem equally shortsighted, and therefore, this problem transcends semantics. Neither accounts for the gray in the matter — that reality, whatever that is, does not follow rules such as definitions and mathematical principles. These are conceptions of our own collectively-subjective realities which make it easier for us to explain phenomena which are otherwise unfathomable. On this note, we could potentially consider both nature and nurture phenomenal. That is an objective point on the matter. The first subjective problem is that both positions imply that we don’t have free will. Sure, there are unconscious habits of ancient origins that drive our conscious behavior (e.g. consumption, survival, and reproduction), but there are other, more complex structures that these positions don’t account for (e.g. hierarchical structures of dominance, beliefs, and abstract behavior such as artistic production), and those are infinitely variable from person to person and from group to group. This comes back to the point I just made about phenomenal reality and the conceptions we follow in order to explain it, as if they were somehow out there in an objective world that we are not part of.

Not to mention, we all take differently to the idea that free will might not exist. Religious people are often deeply offended by this idea, whereas many scientists (theoretical physicists in particular) claim to be humbled by it. Both reactions, I would argue, are disgustingly self-righteous and are the direct consequence, not of truly understanding the concept of free will per se, but of whether or not free will simply fits into his or her preconstructed hierarchical structure of beliefs. One should see clearly, on that note, why a materialist must reject free will on principle alone, and why a radical Christian must accept it on principle alone. Setting aside whether the religious person has a right to be offended in this case, and whether it is contradictory of the scientist to commit to a subjective ontological opinion when that very opinion does not permit one to have an opinion to begin with (nor can it be supported with any sufficient amount of “scientific” evidence whatsoever), the point here transcends the matter of free will itself: rejecting or accepting anything on principle alone is absurd. This calls into question matters of collective ideological influence. There is power in numbers, and that power is used for evil every bit as often as it is used for good. When individuals, however, break free from those ideologies, they realize how foolish it is to be sheep and to believe in anything to the extent that it harms anyone in any way (physiologically, financially, emotionally, etc.). The scary part about this is that literally any program might trap us in this way (ideologically) and blind us from the potentially-innate moral principles that underlie many of our actions. On that note, we are all collectively very much the same when we subscribe to a program, and we are all part of some program. We are individually very different, however, because we each have the potential to arrive at this realization through unique means. We each have a psychological structure that makes up our personality. It is undeniably innate to an extent, yet only partially biological. This reveals the immeasurable value in developing one’s intrapersonal intelligence through introspection and careful evaluation of one’s own thoughts, feelings, perceptions, and desires.

Furthermore, conventional nature-nurture positions are polarities on a spectrum that doesn’t really exist. If we had clearer definitions of each, perhaps the debate would not present a false dilemma. We should reconstruct those definitions to be inclusive of phenomena — think of these terms as categories for ranges of processes rather than as singular processes themselves. If we think of these terms as being on a spectrum, we are led to ask the impossible question of where the boundary is between them. If we think of them as categories, we are forced to embrace the reality that most, if not all, processes can fall into either category given a certain set of circumstances, and thus those categories become virtually indistinguishable. Take the case of inherited skills: practice makes perfect, yet natural talent seems so strongly to exist. If the truth-value-based spectrum between nature and nurture were a real thing, then neither position would be able to account for both nurtured ability and natural talent; it would simply be either/or. This is a consequence of the false dilemma. It leads us to believe that this gray matter is black and white. If one is decent at learning anything, he or she knows that there is only gray in everything.

But is there? I hope I have explained to some conceivable extent why scientific and metaphysical matters should not be structured into a polar truth-spectrum, and why any attempt to do so would likely present a false dilemma. However, it seems more reasonable to apply spectrum structures to value theory matters such as aesthetics, ethics, and even other personal motivators such as love. This, I will explain further in a later post.


On the Categorization of Terms

It seems that, since he characterizes language as a whole rather than dealing with the nature of individual words, later-Wittgenstein denies the existence of classes of objects, and thus our accuracy in creating language about them. For example, instead of recognizing the chair as a chair, we would simply recognize the chair as that chair. If his view is accurate, then I think categorization would be better suited for proper nouns than for objects such as a chair, because reference accuracy in these cases is naturally much clearer, i.e. we apply names to named individuals (e.g. Ludwig Josef Johann Wittgenstein). There are many different forms that something we call a chair can take. Of course, as Wittgenstein would agree, there is an endless realm of possible connotations of ‘chair’, but there are certainly objects that we could exclude from the class of ‘chair’, such as a baseball, so there are some current methods of usage by which we must abide when speaking of a chair. However, with the exception of those cases that we can very obviously include in or exclude from being connoted by ‘chair’, there are plenty of other cases (e.g. a “chair” nailed upside down to the ceiling of an art gallery) that are not so obvious, despite their form or function. At least with individual persons, we know exactly what one is referring to when he mentions ‘Ludwig Josef Johann Wittgenstein’, and we know that he is excluding everything that is not Ludwig Josef Johann Wittgenstein. The line is clearer with proper names. With everything else, not so much. Therefore, categories are irrelevant from a philosophical standpoint and need not exist at all. They only exist within a specific context.

However, more generally, if we applied the word ‘chair’ to a baseball, and if the majority of language-users, after using the term ‘chair’ to refer to a baseball by way of its constant usage in that context, eventually came to use ‘chair’ to connote a baseball (out of unconscious social habit, not conscious agreement), then this would have become an acceptable definition, or use, of ‘chair’. For now, this is not the case. If we used ‘chair’ to connote a baseball, we would not be adhering to the word’s current method of usage, and that usage would be rejected socially and definitionally, and thus in this philosophical light as well. Though, after much such usage, very gradually, and not at any one particular moment, ‘chair’ could certainly come to connote a baseball. It would, at that point, have become a collective social habit and therefore semantically correct.

Syntax = Semantics

Current Methods of Usage can be applied to both syntactical and semantic rules. In fact, it has the deeper implication that there is very little difference, if any at all, between the functions of syntax and semantics.

We traditionally think of syntax as comprising the grammatical rules of language: punctuation, spelling, sentence structure, etc. Such rules formalize language so that our expressions are precise and easily understood. Semantics, on the other hand, is supposed to deal with reference and connotation. The forms, but only the forms, of syntax and semantics are different. However, their functions (which is to say their purposes) point to the same thing: communication. One can arguably not exist without the other if effective communication is to occur. Syntax and semantics are dependent upon one another like two sides of the same coin. One side is not worth more than the other (though Tractatus would argue that syntax carries more weight and that semantics is simply incidental). They are both necessary for communication, and therefore equal in value, especially in spoken language. They are like categories, as described above, and therefore have no philosophical value per se.

Current Methods of Usage – Language as a Collective Social Skill

Language has developed as a collective social skill to the extent that society needs to use it to function. Different dialects develop in different regions out of the necessities to which those regions are subjected. Languages spoken by small bands in the rural Amazon are structurally simple compared to English, which is spoken in most of the developed world. Amazonian lifestyles are also structurally simpler in contrast to the complicated (but certainly no better) lifestyles of the developed west. This makes sense. Their language is suited to their lifestyle. Their lifestyle has one main focus: survival.

Suppose one were to raise himself in the wild, isolated from all other humans. He would not be able to create a complex private language because he would not need to. He may develop some way of communicating with the nature around him (e.g. mimicking bird calls to attract birds so he can catch them for food), but his language would be nothing like the one we understand. He would need no complex grammatical rules or extensive vocabulary to survive in the wild, because there is nothing in the wild that could reciprocate understanding of such a language.

Communication as we know it could never occur. It would not need to. However, the isolated Amazonian would be communicating with the birds, in a sense, if they respond in the way he hopes so that he can catch them to eat. (Whether or not this is considered language can be debated, but if the goal of language is to communicate, then language and communication should be equivalent.) He is using his bird call as a tool to attract a bird just as I am using English to convey an idea to you now. Both he and I can be successful or not in achieving our respective goals. Whether or not we are successful can be due to any number of circumstances. In fact, the Amazonian could very possibly communicate with the bird more effectively than I am now communicating with you. Therefore, he (and the bird) would be more proficient in his language than I am in mine. In fact, I would hope that to be the case so I can further support the claims of this essay!

To “Know” a Language is NOT to have “Knowledge”

We have taken for granted that language is knowledge when it should, in fact, be thought of as a skill. We cannot imagine a world in which we have no knowledge of language, but that is because we have developed the skill of using it so well. We are so good – too good – at using this skill. We can lie to and manipulate others to achieve our ends. In fact, this is a tactic in capitalistic business rhetoric. The main focus of such business is not productivity, conversation, or healthy relationships. The focus can be reduced to one entirely superficial entity: money. Everyone wants as much as they can get, so they employ tactics of rhetoric (i.e. linguistic manipulation) in order to achieve that goal. It is only the loudest and most cunning who succeed at this, not the smartest, most thoughtful, or most honest.

In the Amazon, on the other hand, the goal is survival. There is no place for wasting resources or time. Nor is there a place for the use of expressions of language which are irrelevant to the tasks at hand. The precise reason that there is so much excess language in English and other western languages is that our lifestyles are not as directly oriented toward primal survival. Our irrelevant distractions have given rise to irrelevant expressions of language.

Language, more broadly, is something that we take for granted. It is difficult, sometimes almost impossible, to communicate complex ideas without language, so we are misled to believe that such ideas cannot even exist without our mastery of a complex language. This is not the case. Our experiences of the world, the patterns we draw from those experiences, and our creative, subjective manipulation of those patterns are what formulate our ideas. We use language to simply (and sometimes not so simply) express our understanding. So, in this sense, expression in general, not our mechanical ability to produce words, is the real evolutionary phenomenon of humans. Every bit as impressive and complex as our ability to express ourselves using written or oral language are our abilities to express ourselves using musical instruments, paintbrushes, sports equipment, hammers and nails, and our bodily movements in dance. Language is a tool, and like any tool, we can misuse it by lying, manipulating, and mistreating others, or, more preferably, we can use it honestly.

Current Methods of Usage – The “Private Language” Question and a Modern Example

To see how the meanings of terms evolve, we can use the word ‘gay’ as an example. It was originally an adjective used to refer to one who is happy, joyful, carefree, and very open-minded. It has been by virtue of usage, not definition, over the last century, that it has come to mean ‘homosexual’. ‘Gay’ was occasionally, and then gradually more often, used to mean ‘homosexual’ until the new meaning became the formal definition. Even today, in slang contexts, ‘gay’ can be synonymous with a long list of words, depending on the context. This, as we know, has happened with many other words and phrases as well.

Of course, those other meanings for ‘gay’ are often slang and derogatory (e.g. in the conservative south, where homosexuality is not openly accepted). This is not a problem of language, but a problem of social human psychology. Perhaps I will further address this in a later post. For now, though, keep this (‘gay’) example in mind, for I will be returning to it soon.

The “Private Language” Question

Society would not be able to determine meaning, or even function, without shared customs, which Wittgenstein calls forms of life. There are countless forms of life which help shape the meaning of language. Remember, language is a social activity, a game, a tool, and a means by which we interact. It is not by any means a universal entity, because it cannot exist without the conformity of men. Therefore, later-Wittgenstein would claim, the creation of a private language is not possible.

Immediately, one might think otherwise. Is it not possible for an individual to create a private language that only he could understand? Perhaps with time it would actually be quite simple. One could easily create a private language using an interpretation of the modern Latin alphabet to form its words, as English does. In the same way that John Locke says we come to understand meaning (from within the head), we could formulate a language by first creating words from an alphabet, assigning definitions to them, and then structuring their usage by establishing syntactical rules. One might claim that even later-Wittgenstein should agree that this is possible, provided that these definitions and rules are subject to change at any moment, which would certainly be the case once the language was taught to a group of people and then put to use. This may seem convincing, but there is an enormous problem here.

To argue that a truly private language, in this sense, is possible is to argue something that cannot be proven. In fact, it is far more reasonable to bet in favor of the contrary. To even consider that a private language which resembles our own to any degree can be created is a naive over-simplification of language. We can only make this claim on the basis of what we already know about language: writing and recognizing symbols which represent sounds, which can be formed into words, to which we assign definitions. This is the method we have always used. It is a habit, and in some sense an ideology, that we take for granted.

As we humans have evolved, our language has evolved. We have obviously very extensively built off of caveman muttering to form the complex languages we have today. Ultimately, though, if recorded history allowed, even the most complex languages could be traced back to muttering. Indeed, each individual begins learning language as a muttering infant. More generally, this is how language began altogether.

Perhaps this “private language” question cannot be answered with absolute certainty, for you still may not be convinced, but one thing is certain: to claim, outright, that a private language can be created simply by developing an alphabet, formulating sounds and words, and assigning definitions to those words is extremely naive. We would be too closely relating our reality to the theoretical, and we would be admitting our ignorance of our own linguistic nature.

None of this means we should not speculate, of course. But keep in mind that, crucially, any attempt to speculate requires a conversation – a sharing of ideas. Participating in such a conversation would only make it clearer that language works in the way that I (and later-Wittgenstein) am trying to explain.

Current Methods of Usage

Suppose that, through any means whatsoever, a private language can be created. Even then, I can still accept Wittgenstein’s idea that, over time, the new language would certainly become fluid, and its rules and meanings would change with use; yet at any given moment, there are in fact present rules by which a language must be used if we are to communicate effectively. Indeed, this is how any language, private or not, works. These rules are what I call the current methods of usage. Going back to a previous example, the word ‘gay’ used to have a different meaning and usage than it does today, but one individual cannot spontaneously decide to begin to use a word in a manner that steers too far away from its current method of usage (i.e. how it must be used at the present moment in time for communication to occur between two or more persons).

Although usage, as later-Wittgenstein would say, caused the gradual shift in the meaning of the word ‘gay’, it would be improper, incorrect, and not socially acceptable now to use the word ‘gay’ according to its previous definition. Not because the dictionary disagrees (remember, definitions are not rules of meaning), but because such usage of the term would be misunderstood in virtually any social setting. Miscommunication would occur. The general current method of usage of ‘gay’ suggests that it currently means ‘homosexual’, and by using it to mean ‘happy, outgoing, and open-minded’, we are very arguably no longer using the word properly. We are not conforming to the rules of the established language game. Communication requires some level of mutual understanding. I expect that absolutely no one reading this will find this point arguable.

It should be noted that this is a very general example of the current method of usage of ‘gay’. There are also very specific, contextual cases where this concept comes into play. When I say that using ‘gay’ according to its former definition is currently improper, I am speaking in the concept’s more general terms. Most people, in most cases, equate ‘gay’ with ‘homosexual’.

Just as ‘gay’ is used and understood in slang as being synonymous with derogatory terms (unfortunately), it can also be used in contexts where it still means ‘joyful, carefree, and open-minded’. An example of this would be a small circle of elderly women, drinking tea on a Sunday afternoon, who describe one of their eighteen-year-old granddaughters as ‘gay’ because she recently got a tattoo. All of the elderly women understand the usage of ‘gay’ in this case. This would seem odd to the granddaughter if she were to walk into the room in the middle of their conversation, for she most likely understands ‘gay’ to mean ‘homosexual’ (because she is up-to-date with the general current method of usage of the term). However, the elderly women are not using ‘gay’ incorrectly because it conforms to their collective understanding that the term means ‘carefree and open-minded’. They are indeed conforming to a specific current method of usage – the method immediately relevant to the context of their conversation. They are playing the same language game. This works because the goal of language usage, communication, has been achieved.

Where Is Meaning?

Indeed, to deny that Wittgenstein’s later work improves on his early work is to commit two errors: 1) to overlook or submit to the intellectualist nature of Tractatus; 2) to fail to grasp the crucial insight that his later work provides. Tractatus claims that the better one masters the syntax of a language, the broader his experience and understanding of the world. This is a misled intellectualist view because it values the skill of applying language (as a priori) over and above all other skills and, more importantly, the matters themselves to which language is applied (i.e. any set of circumstances in the world that we attempt to describe). I have only seen shallow and insufficient evidence to support this view. After all, it is the things to which language is applied that matter, not the language itself.

Because there are no limits to how one can experience the world, we should never be misled into believing there are strict boundaries that limit our usage of a word. Our statements are an expression of our understanding. Our statements do not dictate understanding, as early-Wittgenstein thought. In fact, by this notion, we should even be allowed to take a word completely out of context; as long as we are able to communicate to at least one other person whatever idea is present to us by using that word, even if the word is definitionally unrelated, we would not be using it incorrectly. In fact, whether we realize it or not, we do this very often.

Whether true or untrue, contemporary schools of thought take for granted that meanings are not in the head. However, it seems clear that anyone’s interpretation of meaning is. It would seem that the most we can agree on is that communication occurs when two or more parties agree on meaning, but they could very well be using identical statements to assert two different things.

Perhaps “where is meaning?” is the wrong question to ask. There is nothing out there in the universe that we can observe in any fashion that dictates meaning. There are no dictionary definitions so precise that, from the definition alone, we are able to connote everything that is included in the word’s realm of possible references. If definitions were this way, i.e. if they served as rules of meaning, then such a dictionary would be so incredibly large that it could never be printed. Perhaps it would have to be stored online for anyone to access and edit at a moment’s notice, much like Wikipedia. But still, usage among speakers would be dictating the definitions, so what good would these rules be at all? Definitions would begin to overlap more and more until every word had so many connotations that it would be virtually indistinguishable from several other words. Is this not already the case?

Usage of phrases and words is in a constant state of flux. We collectively, and often unknowingly, adapt to these constant changes so there remains enough continuity for us to effectively communicate what we mean. Since this adaptation process is often subconscious, we need not think about it; we presume meaning by our usage, and we are almost always correct provided we, and those receiving our message, are fluent in that language.

If Tractatus were more accurate than P.I. in describing the fundamental nature of language, then to learn a language would require a lot of memorization, much like one “learns” a foreign language in a classroom. This may allow us to learn something about the concepts of a language, but it does not teach us to effectively use the language within societal contexts. Learning, in this case, would be much more difficult, and for many, impossible.

So, how do we actually learn language? We’ll have to go back to a time that we do not remember, so we must forget everything we now misunderstand about language. I’ll use the most parallel analogy I can think of:

When parents are teaching a child to walk, they do not simply explain to the child how to walk and expect him to be able to do it without practice. Obviously, the child is not yet proficient in grasping such a concept. Nor does a parent grab one of the child’s legs, put it in front of the other, then do the same with the other leg repeatedly, because the child has not yet developed the practical skill of walking, and one cannot learn such a skill in such a forced manner. The child needs a reason to walk, so the parents teach the child to walk by working toward a goal. One parent (let’s say the father) stands the child up, and the other parent (the mother) kneels down a few feet away, holding her hands out to the child. The father acts as the spotter, and the mother acts as the goal. The child sees his mother, desires to reach her, and he has to walk to get there in the same way that he learned to crawl (or at least his parents will condition him to believe this based on their training methods). The same is true of language. It is the tool we use to communicate because we need to communicate to get what we want or need. We start out, as babies learning language, by blurting out the word ‘bear’ and pointing to our teddy bear in order to obtain the goal: the teddy bear itself. The child says ‘bear’ to express the general idea “I want that teddy bear” or the command “give that teddy bear to me”. He is communicating with the parent in this sense. He is expressing a desire to achieve a goal. He is not merely making a statement (that would be impossible). Language is the road, not the end of the road. There is no language for language’s sake just as there is no walking for walking’s sake. Language is used for a purpose – a goal – in any given situation.

How each person achieves his goal varies greatly. Not all children walk the same. Some are bowlegged, some are pigeon-toed, some drag their feet and trip on their shoelaces, and some cannot walk at all, so they utilize other tools such as wheelchairs. But they each adapt to their handicaps to get what they need – to get from A to B. Likewise, not everyone speaks the same. Some slur their ‘r’s, some pronounce their ‘s’s with a ‘th’ sound, some use poor grammar, and some cannot speak at all, so they learn sign language. Regardless, each adapts to their handicaps and uses language for the same purpose – to communicate.

Language in general is meant to be used practically, not merely understood conceptually. Of course, there are logical concepts to understand which will help us be more precise, but the understanding of those concepts is something like our understanding of how to walk: put one foot in front of the other. As long as you practice walking, you will learn the concept of walking to some extent, but it is the act of walking that is fruitful for the individual. Likewise, one learns the concepts of language to the extent that one needs to, but only to that extent. This is why some children (and adults) in school grasp grammar well and others do not, though they are able to communicate orally to much the same effect in social and professional circles. Some are more conceptually-minded. These prefer to master grammar in order to be as precise as possible both in writing and in speech. They will also make better teachers, because they can adapt their language usage to a wide range of listeners. Others prefer to stick to practice and master other types of skills, and perhaps they will become better doers. Either way, practice comes prior to understanding in this case (but not necessarily in the case of everything).

And this is the point: it is only by looking to the world that one might be able to explain language. The world is untamed, and so is the way we understand it and attempt to describe it – i.e. so is language itself. We play language games to adapt the meanings of utterances to our world. Otherwise meaning would be of no use to us, and that is certainly not the case.

Logical Reductionism

One similarity between Wittgenstein’s two main works, Tractatus Logico-Philosophicus and Philosophical Investigations, is that, in both, he concerned himself with this very question: “How are we to say what we mean?” However, the reasons for this concern were different in each work, so the question itself changed over time (and this is an example of how meaning changes; the same sentence can mean two different things under different circumstances).

Tractatus took for granted two fundamental assumptions about language: that it has a quantifiable logical construction and that it is causally related to our perception of the world. The latter assumption seems undoubtedly true, but the former, not so much, even though the latter seems to be contingent on the former. He says in 1.1 of Tractatus, “The world is the totality of facts, not things,” and then in 4.001, “The totality of propositions is the language.” In other words, if Wittgenstein remains consistent, reality is composed of all states of affairs about which propositions can be made (in case this is not already clear from my previous description). Language is a puzzle that one must figure out if he is to communicate effectively. One may only think and speak according to those factual states of affairs in the world. That is to say, because language is something of a logical system, one may only think and speak logically. This brings me to the minor concept of this essay: logical reductionism.

“Logic: The art of thinking and reasoning in strict accordance with the limitations and incapacities of the human misunderstanding.” — Ambrose Bierce, The Devil’s Dictionary (1911)

Logical reductionism can be broadly defined as “rigid belief in an a priori system, even in contexts in which it is inapplicable”. The term is very broad, for it includes any case where a dogmatic ideology guides understanding without exception. Logical reductionism is, in many cases but very generally, similar to the single-cause fallacy, which is closely related to the false-dilemma, false-cause (correlation-causation), and black-and-white fallacies. The single-cause fallacy reasons: because y followed x, x alone must have caused y. For example, if a man who is known to have a heart condition dies in his sleep, his family members might conclude that the death was due to a heart attack. The pathologist may or may not be able to confirm this. Regardless, the family has come to an agreement on what the cause of death was, assuming that there was only one cause, when in fact there were probably multiple necessary contributing factors.
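To make the pattern explicit, here is a minimal propositional sketch of that inference (the notation is my own gloss on the example above, not anything from a formal source):

$$\big(y \text{ followed } x\big) \;\not\Rightarrow\; (x \rightarrow y)$$

$$\text{whereas the fuller picture may be: } (x \land c_1 \land c_2 \land \dots \land c_n) \rightarrow y$$

In the example, $x$ is the known heart condition and $c_1, \dots, c_n$ are the other jointly necessary contributing factors the family never considers; promoting $x$ to the sole cause is precisely the single-cause error.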

The main difference between the single-cause fallacy and logical reductionism is that the former deals with one’s lack of ability to use a specific type of reasoning, and the latter deals broadly with one’s rigid belief in a system. The latter, as I will explain, is much more problematic.

The type of reasoning that is hindered by the single-cause fallacy is not one that warrants an immediate judgment. Rather, it is a mode of perception that allows one to see multiple possibilities and make connections in an unfamiliar situation. Proponents of Jungian psychology call this mode of perception extroverted intuition (or Ne). The tendency to neglect this perception is called, in some circles of psychology, explanation freeze. Anyone can fall victim to explanation freeze (i.e. get fixated on a singular explanation of a problem), no matter their ability to use Ne, but Jung would suggest that only half of the human population possesses the natural ability to exercise Ne at all, and only a very small percentage can exercise it consciously and effectively. Everyone else is only able to use it to a very small degree or merely act as if they use it. Upon close observation of one’s social environment, this actually seems to be true. Based on the limited formal research that has been done on this by Julia Galef and other contributors at clearthinking.org, it also has great potential to be confirmed. However, I am not making a case for that at this time.

If the ability to use Ne is indeed innate, then one cannot merely have difficulty achieving that which he has no potential to achieve (i.e. overcoming the single-cause fallacy); it would be impossible. On the other hand, if the ability to use extroverted intuition is not innate, then everyone might have the potential to improve the skill. If this is the case, then everyone would have the potential to exercise Ne with practice. In fact, there are outlets online that can help with this: wi-phi; ClearThinking; YLFI. Either way, one should be able to hold the position that it is generally more difficult to overcome logical reductionism than the single-cause fallacy.

A further description of Ne: (People who have a strong tendency for extroverted intuition have been found to naturally exhibit brain activity that is similar to that of someone who is under the influence of a hallucinogen like psilocybin, ayahuasca, or LSD. Despite the public’s generally negative attitude toward the use of hallucinogens, they can have some very positive long-term effects. They can broaden one’s scope of the world, allow him to see multiple possibilities in any situation, make him realize the interconnection of humans and nature, etc. This is no delusion, but rather a unique type of clarity which can, albeit with more difficulty, be achieved without the use of such substances. I do not promote the use of hallucinogens, mainly because their effects can be achieved through other means (intensive meditation, introspection, etc.). I have only used this example to further explain what it is like to have Ne “brain wiring”. Take it however you prefer.)

Logical reductionism is broader than the single-cause fallacy, but as stated earlier, it is closely related to it. I used the example of the family of the man who died. It should now be clear why their assessment of the death is guided by poor reasoning: they are not medical professionals and do not realize the broad range of issues that normally contribute to an unexpected death. In such a moment of stress, their perception becomes narrow. More generally, they may not have the natural tendency to use Ne to a large extent to begin with. This is fine.

The pathologist, on the other hand, has no excuse (even if he possesses no more natural ability to use Ne than they do). If he outright agrees that a heart attack was the cause of the man’s death, he likely does so for one of two reasons: because he simply wants to satisfy the family so he no longer has to continue the conversation, or because he believes so dogmatically in the practices of pathology that he thinks they can provide all of the answers on a strictly biological basis. It is in the latter case that he is being ideological, and therefore committing logical reductionism, whether he is aware of it or not. Neither approach to the question is very professional in my view, especially the latter, because it is founded in ignorance. (This is a common problem in medical practice that I may address, after further research, at some other time.)

Logical reductionism is a widespread epidemic which epitomizes the naivety of human perception. There are no matters in the universe (medical, scientific, philosophical, religious, political, etc.) that can be absolutely confirmed or refuted by the application of an a priori system. To think otherwise is to commit the fallacy of logical reductionism. It is incredibly arrogant to claim that we humans have the potential to understand the nature of anything via systems that we have created for the sole purpose of making it easier for us to relate to those very states of affairs that we previously accepted as unfathomable. Language is not one of those systems. This is precisely what Tractatus gets wrong.

(In Philosophical Investigations, that assumption was done away with. Wittgenstein realized that language is not a puzzle; it is a tool. The challenge to say what we mean is not to figure out something fundamental about the language, but rather to work with others to communicate what we mean on a more holistic, interpersonal level. We use language; we need not deconstruct it.)

In reductive biology, researchers tend to look for specific genes to explain traits, birth defects, and mutations. Genes are thought to be the most elementary autonomous anatomical units. The line of reasoning is that by reducing the condition down to its fundamental parts (simples), we might gain a fundamental understanding of the whole being (composite). (This line of reasoning commits the fallacy of composition. Composition seeks to prove that the whole is merely the sum of all of its parts. Division seeks to prove the opposite.) The extent of their findings has been merely correlative. The only thing we absolutely know genes to do is guide the synthesis of proteins: DNA provides a home base for the storage and transmission of “genetic information”, RNA is required to carry that information out and put it to work, and the resulting proteins carry out the functions associated with traits and with maintaining the genetic stability of the organism. So, the gene’s role in developing traits is indirect and not very clear. It is something like: If A and X, then B; if B and Y, then C; if C and Z, then D. A (genes), therefore D (traits). It is becoming increasingly clear that reducing a composite to a simple does not help us to explain the broader functions of the composite, and vice versa.
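To make the shape of that inference explicit, here is the chain above written out propositionally (a sketch of my own, using the same letters as in the paragraph):

$$(A \land X) \rightarrow B, \qquad (B \land Y) \rightarrow C, \qquad (C \land Z) \rightarrow D$$

From these premises alone, $A \rightarrow D$ does not follow; it follows only if $X$, $Y$, and $Z$ are silently added as premises. The reductive conclusion “A (genes), therefore D (traits)” is thus valid only relative to suppressed cofactors, which is exactly the indirectness at issue.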

I’ll use a less complicated example from biology. Different types of cells carry out different and specific types of functions: red blood cells distribute oxygen throughout the body, white blood cells fight infection, nerve cells transmit sensory impulses to the brain, skin cells shed and regenerate to protect the inside of the body from the outside world, etc. But does the sum of all of these basic components equate to a human? The answer is ‘no’, because the range of functions that the organism can perform is much more extensive and diverse than the sum of the functions of all of its constituent parts. The human being (especially the brain) is so complex and mysterious because it cannot be quantified in this way. Any use of mathematics in biology is simply an estimation, and at best a guideline. To believe otherwise is to commit logical reductionism.

The same is the case with language. Wittgenstein states in Philosophical Investigations:

47. But what are the simple constituent parts of which reality is composed? – What are the simple constituent parts of a chair? – The pieces of wood from which it is assembled? Or the molecules, or the atoms? ‘Simple’ means: not composite. And here is the point: in what sense ‘composite’? It makes no sense at all to speak absolutely of the ‘simple parts of a chair’.

48. …We use the word ‘composite’ (and therefore the word ‘simple’) in an enormous number of different and differently related ways. To the philosophical question ‘Is the visual image of this (chair) composite, and what are its constituent parts?’ the correct answer is ‘That depends on what you understand to be composite.’ (And that, of course, is not an answer to, but a rejection of, the question.)

In this later work, Wittgenstein came to deny the existence of simples and composites in the way we describe reality. If we disagree with him, it is likely because of the logic-contingent construction through which we misunderstand language. Language no longer dictates one’s understanding of the world. Rather, the world controls the fluidity of language, because the world controls us, whether we are able to admit it or not. Any attempt on our part to control the world will have horrific ramifications (e.g. the effects of agriculture on climate change). We adapt our language to our world out of necessity.

To be continued…

Current Methods of Usage (Part 2) – The Two Theories

Tractatus Logico-Philosophicus described language as the picture through which we see the world. Reality is everything that is the case – the totality of describable facts and states of affairs. The limits of language, of which there are many, are the limits of one’s overall experience of the world. Seemingly abstract questions such as those of ethics and aesthetics are transcendental and thus not askable, because their foundations are not in accordance with the states of affairs in the world. Any question that can be asked (according to the current states of affairs in the world) can indeed be answered. We think in terms of logical propositions and express ourselves using those same propositions. This is a difficult process, for language has a unique logical construction, unlike mathematics or logic itself, whose propositions are, at best, tautological. Being concise is important. Thinking and speaking are both logical processes. One cannot think or mutter an illogical proposition, because such a proposition would not fit into the picture of the world, i.e. language, which at least limits our understanding of the world and at most limits the actual states of affairs that its propositions assert. The greater one’s proficiency in language, the greater one’s overall experience of the world. Language is everything.

That is as concisely as I believe I can put it. I sure hope that, by Wittgenstein’s measure, I am following the rules!

Philosophical Investigations begins with a quote from St. Augustine’s Confessions, which explains how language is first learned by learning the names of objects. You see your parents point to an object, say a word, and you learn to associate the word with the object. This initially seems to resemble Tractatus. For later-Wittgenstein, though, this is only the starting point. Names, and more generally propositions, no longer pose a problem. It is reasonable to accept that we learn to communicate by pointing to objects while saying a specific word. However, Wittgenstein claims that we cannot create a necessary, fundamental relationship between the name of an object and the object itself. Rather, language sets infinitely revisable guidelines for how we communicate, and it is the usage of words that gives them their meaning. For example, suppose a group of builders communicate using a four-word language containing the words ‘block’, ‘pillar’, ‘slab’, and ‘beam’ (Wittgenstein 19). When one builder says one of those words, or any combination of those words, to another, he is not merely naming the individual objects. There are certain implied statements based on the usage and context of the words. To say “block” usually implies “fetch me that block”. It could even imply something as extensive as “fetch that block, and then place it here in an upright position.” Any combination of those words can have any combination of implications, and they will be correct just as long as all parties involved in the communication of those words understand those implied statements. Meaning, in this case, deals much more with the overall implications than with the singular words. Meaning is not bound by the words themselves, but rather by how they are used. Words seem to have no boundaries at all because of the endless range of implied statements one can make by saying a single word. This is in part what Wittgenstein refers to as a language game. There is no particular set structure by which we must speak in order to communicate. We play these language games to communicate ideas. In many cases, we can only hope that one receives a message as we mean to send it. The world, not language, is everything. Mastering language will help one in many ways, yes, but one’s problems in the world are reducible more to his individual psychology than to language itself, which, as Tractatus claims, has some a priori (self-justifying) foundation.

Which of the two theories, as briefly described above, do you find best explains the nature of language? Though they seem to contradict each other, either one may seem feasible with some thought. At different points, I have been convinced of both for different reasons. However, my agreement with Tractatus was a bit like my agreement with my daily horoscope. It seemed to make sense only within the confines of a very specific way of thinking. It seemed that the assumptions outweighed the claims they were meant to support. Though Tractatus clearly provides insight, Philosophical Investigations now seems to better describe the ways in which language is actually used in the world. I hope that one will be convinced of this after reading further.

Current Methods of Usage (Part 1) – Introduction

At two different points in his life, Ludwig Wittgenstein held conflicting theories about the nature of language. These two philosophies arguably gave rise to two schools of thought, each with an extensive range of subfields, that are still prominent today: analytical and continental (this is why Wittgenstein is so widely considered the most influential philosopher of the twentieth century). We associate Wittgenstein’s early work, Tractatus Logico-Philosophicus, with the analytical school of thought. This work argued for a “picture theory” of language, which holds that language’s foundations lie in the logically constructed picture of the world that we attempt to describe; there is a necessary relationship between terms and the things in the world that they refer to. We associate Wittgenstein’s later work, Philosophical Investigations, with the continental school of thought. This work argued for a much more open-ended theory of language, which holds that meaning is not fixed; it fluctuates depending on its context. We play language games in order to communicate as precisely as we can within a given context. In either case, saying what we mean is a difficult task.

My purpose in this essay is to show that Wittgenstein’s two theories of language can, in some sense, coexist. If I am successful, one should be able to infer that the respective schools of thought that they gave rise to must coexist if we are to advance thought. Perhaps I will elaborate on the latter point at a later date, but for now, I will defend the former by devising two concepts. The first concept is called the logical-reductionism fallacy, which will expose the problems of applying strict a priori ideals to meaning, in this case, as applied to language. The second concept, which will be the focus of this essay, is called current methods of usage. It states that there is indeed a proper way to use language in a particular time and place. It cherry-picks things from Tractatus that we should keep in mind when using language while accepting that Wittgenstein’s later theory is superior in explaining the overall nature of language. So, I am not claiming that two seemingly contradictory theories can coexist in terms of fundamental truth, but rather that one is more true, and the other is practically valuable, so both are worth keeping in mind.

Though I will be trying to stay on this track, I will frequently deviate from the central argument to express my own ideas about the fundamental nature of language. Perhaps that will be the focus.

“Ideology and the Third Realm” – What is Philosophy?

In his book Varieties of Presence, Dr. Alva Noë discusses many important aspects of perception. He makes a convincing case that we achieve contact with the world through skill-based action. Our understanding of a new experience is a collective recurrence, both conscious and unconscious, of past experiences. It is a dense work that deserves the attention of other contemporaries who concern themselves with matters in cognitive science and philosophy of mind. Perhaps I will do a full review of this book at a later date, but for now I would like to focus on a matter addressed in the final chapter, entitled “Ideology and the Third Realm”, which marks an important departure from the philosophy of consciousness and neuroscience.

This chapter does something that every philosopher should do periodically: broadly revisit the fundamental importance of philosophy as it relates to the context of his work. I will be a bit more general than that, since I am not a “professional” philosopher. The role that philosophy plays in the world seems to be constantly changing. But is it? Perhaps it is only the popular understanding of what philosophy is that changes. I think that is, in part, the case, but it has more to do with the uses of philosophy. Some of those uses have remained constant since the beginning of recorded thought, while others change by the minute. For this reason, it is impossible to pin down. But one need not pin it down. Philosophy exists to be used, and it is a set of skills that will hopefully never become extinct. There is no dictionary definition that can sufficiently explain it, much less emphasize the field’s vital presence. I will give a general overview of the chapter but mainly share my thoughts about what philosophy is and why it is not only relevant, but necessary. Before I continue, I should define an important term which will be mentioned several times in this piece.

Q.E.D. (Latin: quod erat demonstrandum) – meaning “that which was to be demonstrated”

Many people, in and out of academia, naively think that philosophy deals with questions that warrant a Q.E.D. response. When you take a philosophy course, you often have to write at least one argumentative essay in which you choose a position of a philosopher whom you have read, attempt to prove him wrong, and then attempt to formulate a complete view of your own with supporting evidence. This way of “doing philosophy” is popular in undergraduate, base-level courses. It helps you to develop reasoning skills that can be applied anywhere. This is important, no doubt, but it is not where philosophy ends. Why? First, writing is not even necessary for “doing philosophy”. The only thing that is necessary, I would argue, is thinking. Thinking must be assisted by reasoning, but this is only the start.

This does not imply that we should identify the philosopher as one who locks himself up in his ivory tower and speculates about a deluded, idealized world. To philosophize well, one must also be able to communicate his ideas in some way, and that will involve language, whether spoken or written. This is one reason philosophy courses are difficult: one must already have a certain level of reading, writing, and speaking proficiency to succeed. The full title of the final chapter of Noë’s book is “Ideology and the Third Realm (Or, a Short Essay on Knowing How to Philosophize)”. Since language is such a crucial part of this issue, I will begin by taking a language-based example from that chapter:

‘The King’s carriage is drawn by four horses’ is a statement about what?

a) the carriage;  b) the horses;  c) the concept it asserts;  d) other

Immediately, one might think that the answer is ‘a) the carriage’. This seems completely logical, given how most of us understand language. ‘Carriage’ is the subject of the sentence, so any terms that follow should (theoretically) describe it. It is certainly not ‘b) the horses’ because that is the object receiving the action, nor can the answer be ‘c) the concept it asserts’ because nine out of ten people in the room don’t know what the hell that means. Right? Good. It’s settled.

Gottlob Frege had other ideas. He thought that a statement about numbers is a statement about a concept. When we attempt to answer the question about the subject matter of the “king’s carriage” statement, we are speaking in conceptual terms. We are not using the statement to assert anything. So, the answer must be ‘c’. He gives more reasons for this, of course, and he makes us realize that there is a sense in which we become confused about what we mean when we say ‘The king’s carriage is drawn by four horses’. However, despite the piercing quality of Frege’s argument, most of us remain unconvinced by his theory of language.

The problem with Frege’s claim, for most of us, seems to be that he had a preconception of the meaning of the statement ‘the king’s carriage is drawn by four horses’ before he was even asked the question. He had already established that any statement about a number, without exception, is a statement about a concept, so he was able to answer the question without thinking. The problem with our rejection of his claim is that we are doing exactly the same thing. We also answered without thinking. We held the preconception that every sentence is about its subject. This preconception is guided by the larger logical construction by which we understand language, and it is certainly no more correct than Frege’s view simply because nine out of ten people in the room agree that it is (that would be to commit the ad populum fallacy). We take our theory of language for granted, and perhaps Frege takes his for granted too. There seems to be no Q.E.D. conclusion here. What we are all doing, if we become inflexible, if we stick to our answer to the question without sufficient evidence to support it, is committing what I call the ideological fallacy.

However, subscribing to ideologies is not always a fallacious thing. It is only when the ideology is applied in a dogmatic way that it becomes wrong. When an evangelical Christian lives by Jesus’ principle, “love your enemies”, that can have very positive effects. It may minimize conflict in the person’s life. It may allow him to stand strong in the face of racial adversity. It may allow him to accept people more openly, and very often the favor will be returned. However, the favor is not always returned if the Christian is careless and thoughtless. Despite his belief that he loves his enemies, participating in radical evangelical activism would impose on others and create more conflict, leaving his conception of “love” open to question. It takes Christianity out of context and misapplies it to the world in a negatively ideological way. There is nothing about the beliefs in themselves that is illogical, destructive, or even wrong. How they are used will determine that.

I will use another example. Evolutionary biology can study preserved skeletons of million-year-old Homo erectus specimens and learn about how we sapiens, three evolutionary stages later, came to be. This could contribute to our understanding of how humans will continue to evolve (or devolve). However, evolutionary biology can only contribute a small piece to the puzzle of predicting the future of humankind. It needs influence from many other fields to even begin to solve any of its own problems. So, when Richard Dawkins uses the broad concept of evolution to attempt to disprove creationism in any one of its countless forms, he is taking his work out of context and applying it in a radical, dogmatic, negatively ideological way. There is nothing about evolutionary biology, as a field, that is wrong. It is a highly useful method of inquiry. But there is still plenty we do not know about how humans have evolved. We generally accept that they did on the minimal evidence that we have, just as the evangelical accepts his own conception of loving his enemies based solely on Jesus’ teachings. In this case, both parties look equally silly.

Of course, the example above presents two extreme cases. Although we answer the “king’s carriage” question one way and Frege answers it another, and we seem to have to agree to disagree, there is still a sense in which both sides think the issue is objective in nature and that it deserves further discussion. In order to have this discussion in a logical, respectful, open manner, we must become philosophers, and one may not need to go to school to achieve this. Alva Noë wonders how we might categorize our dealing with the “king’s carriage” question. It is not in the realm of the material (e.g. biology), nor is it in the realm of belief (e.g. religion). It seems to be within some third realm. Noë begins to explain with this quote:

The point is not that Frege or we are entitled to be indifferent to what people say or would say in answer to such a questionnaire. The point is that whatever people say could be at most the beginning of our conversation, not its end; it would be the opportunity for philosophy, not the determination of the solution of a philosophical problem. (Noë, 173)

“At most…”, Noë says: “(what other people say is) the beginning of our conversation… the opportunity for philosophy…” This is another reason philosophy is so difficult! At the very most, when our view stands in opposition to another, we may only have the opportunity to do philosophy. We rarely get there. When we do get there, two or more people are concerning themselves with the third realm of a problem. What is the third realm? It is the realm of possibilities with minimal influence from ideologies. It is abstractly objective yet, as I will explain later, not in the realm of matters Q.E.D.

Where is this third realm? Well, ‘where’ is the wrong question. Bertrand Russell once said of philosophy that it is “in the no-man’s land between science and religion” because it always seems to be under scrutiny from both sides. Perhaps, in some cases, this is correct. It can serve as a mediator between two extremes, but, on the surface, this only explains one of the unlimited applications of philosophy.

Upon first reading or hearing Russell’s quote, one might be inclined to place philosophy in between science and religion because it deals with reason over belief (like science) and thought without quantifiable experimentation (like religion). This would be a shallow interpretation that lacks crucial insight. Russell was perhaps a bit too concise for the average interpreter. He did not mean, as I understand him, that philosophy literally sits in the space between science and religion. His remark has deeper implications, which resonate with those of Noë (despite the fact that Russell was a logical positivist and Noë is a phenomenologist, so they would probably have a duel for other reasons). Explaining philosophy has nothing to do with where we should fit it in relation to other fields. It has to do with how we can apply its skills, and in that way it is most unique. Those skills are skills of thought. Developing those skills first requires one to look inward, rid himself of bias, and then turn outward to consider all possibilities. This is still only the beginning. Once we achieve this skill of thought, what do we do with it? We continue to practice and improve it. How? The answer is simple, but the application seems, in some cases, impossible. We communicate.

We share our ideas with others who have, to some degree, developed the skill of clear thinking. Of course, communication, whether written, oral, or otherwise, is a practical skill in itself, one that develops naturally, mostly prior to but also alongside the skill of thinking. We tend to adapt our ability to communicate only to the extent that our situations demand, and that can be limiting. When doing philosophy, anyone can participate, but only to the extent that he can think clearly. Philosophy tests those limits, which is why both science and religion so often scrutinize it. Though they deal with subject matter that seems contradictory, (mechanistic) science and religion do have one general thing in common: dogmatic ideology. Philosophy, on the other hand, is perhaps the only field that makes the elimination of dogmatism one of its primary goals.

Doing philosophy is not only about increasing the degree to which people can think, but about being open to different forms of thought as well. What is fortunate in this regard is that each person in the conversation, if one is to find himself in such a conversation, has probably achieved his skill of thought through different means. For example:

There may be one who developed his thinking through philosophy itself, who rigorously studied informal logic to learn how not to commit errors in reasoning. He may also be able to contribute the history of thought to the conversation and explain why certain schools of thought are obsolete in academic philosophy. There might also be a more scientifically-minded person who, in a graduate school lab, performed the same experiment under the same conditions hundreds of times but got variable results. He questioned why this was happening (if the laws of physics are supposed to be constant), so he turned his research to the inconsistencies and realized that uncertainty transcends mathematical equations. He is now able to think more broadly about his work. There might also be a Buddhist in the group who practices intensive meditation. He can turn off influence from his sensory world and walk on hot coals without getting burned, or submerge himself in freezing-cold water without developing hypothermia. He is able to clear his mind of all unnecessary matter. Each person achieves the same thing – to think clearly, skeptically, critically – through different means. They each learn from one another and gain a broad range of insights.

Also, and perhaps most importantly, each person in the conversation should be genuinely interested in learning new perspectives in order to improve his own point of view. There is a sense in which someone may have achieved access to the third realm of conversation to a lesser degree than the others, and at a deeper point in the discussion, he gets flustered and has to back out. This is perfectly fine as long as he does back out, at least until his temper cools (if he does not back out, he will disrupt the conversation). He has pushed his boundaries of clear thinking to a level that the others have not, and that can be a very constructive or destructive thing, depending on his mindset. But it is vital that all parties directly involved maintain their composure throughout the conversation. If there are any unsettled nerves, it is almost certain that at least one participant is not being genuine, but rather is too outwardly focused and is perhaps ultimately trying too hard to prove himself right or the others wrong. Although he might seem to contribute insight to the conversation, he will inevitably expose himself as operating from within an ideology, thereby rendering himself a nuisance. Philosophy is no activity for the pretentious or egocentric, contrary to popular belief. In fact, the absolute contrary is the case.

Do any philosophical questions warrant a Q.E.D. response? (Does philosophy ever prove anything?)

No. In case this is not already clear, there are, in a sense, no “philosophical questions”. There are only philosophical approaches to questions. Approaching the third realm of a problem requires one to be, as stated earlier, abstractly objective (or perhaps objectively abstract). There are limits to how objective one can be, no doubt, but the aim of advancing thought is to learn more and more about the world and how those in it think, so that we can improve on that, both individually and collectively. This approach exposes dogmatism and reveals the sheer greyness in any concrete matter. Need I give examples of when this might be useful? I challenge anyone to give an example of when it is not, and thereby present an opportunity for doing philosophy! This is why philosophy is so widely applicable.

To draw an analogy – toward the end of Noë’s final chapter, he mentions Immanuel Kant’s aesthetic view that the reality of one’s response to a work of art is based in feeling; it is not contingent on his ability to explain it. Similarly, Clive Bell described a “peculiar aesthetic emotion” that must (first) be present in something for it to be considered art. It is that feeling you get when you listen to a beautiful composition, watch a film that evokes tears, or look at Picasso’s Guernica after you have heard the gruesome story behind the painting. I had experienced this aesthetic emotion many times, but it was my former professor at the University of New Orleans, Rob Stufflebeam, who, whether he intended to or not, led me to realize that all of those experiences involved the exact same emotional response. Perhaps it is recognizable only to those who have experienced it; it is certainly something that need not, and often cannot, be explained.

Likewise, a philosophical approach to a problem is, at its very best, not an emotional experience as with art, but an all-encompassing intellectual experience. It is not a heated argument, nor is it even a controlled debate. It is a respectful, open-ended discussion about ideas between two or more people in an intimate setting. It raises the awareness of each involved to a broad level of skepticism that, perhaps very strangely, brings with it an aura of contentment. It is obviously not the same feeling one gets with the peculiar aesthetic emotion, but it is parallel in the sense that when you are part of it, you really know. That reality seems to transcend explanation.

Final Thoughts

Alva Noë has developed this idea about perception: “The world shows up for us, but not for free. We achieve access to it through skill-based action.” It is a combination of developing our conceptual and practical skills that allows us to understand the world and live in it. Achieving access to the third realm of a question, as I would consider it, is one of those countless skills. It comes more easily for some than for others. Just as one person might naturally have the ideal physiological makeup for learning how to swim (lean, broad shoulders, webbed feet, etc.), another person’s brain might seem to be better wired for clear thinking. Everyone, to some degree, with the proper amount of training, can swim. Likewise, everyone can, with practice, think clearly. The more one practices by looking inward, ridding himself of bias, and working up the courage to subject himself to critique, the more he can contribute to the conversation in his own unique way. How much one wants to participate is solely up to him, but not to participate at all is to miss out on a hugely important (and my personal favorite) part of the human experience.