The Slate, the Chalk, and the Eraser

Prerequisite reading: “WARNING: Your Kid is Smarter Than You!”

A mark of good critical thinking, let’s say, as it applies to science, is that it is always attempting to prove itself wrong. It challenges its most fundamental assumptions when unexpected results arise. We can do the same in our everyday lives when we make decisions and formulate our own views. We only truly challenge ourselves by trying to find flaws in our own reasoning rather than by trying to confirm our views. It is easy to confirm our beliefs.

Let’s take astrology as a personal-scientific example. Sparing you the details: based on what little research has been done to refute it, astrology is seen as invalid – and therefore a pseudoscience – by the standards of modern mechanistic science. However, that does not preclude one from believing in it, or from confirming it or any of its insights for oneself. Now, one is not thinking critically by simply believing that astrology is a pseudoscience (or that it is legitimate science). That would be to put too much trust in other people’s thinking. What reasons can you give to support your own belief, and what does it mean?

One can wake up every morning, read their daily horoscope, and upon very little reflection, come up with a reason or two for how that horoscope applies to his or her life. On one hand, those reasons might be good ones, founded on an abundance of personal experience. The horoscope’s insights might serve as something to keep in mind as one goes about his or her day, and that can be a very helpful thing. On the other hand, the reasons might be mere self-confirming opinions. They might be the result of the person’s ideological belief in astrology in general. That can be harmful if the person attempts to apply astrological insights to contexts to which they are inapplicable. This is an example of how the confirmation of a specific belief, not the belief in itself, can be good or bad, helpful or harmful, depending on how one thinks about it and the reasons he or she gives for it. The question of whether it is right or wrong, correct or incorrect, is neither important nor provable.

In order to formulate views that are not mere opinions, we must expose ourselves to views that oppose the ones we already hold dear to our hearts. This is difficult for adults. Most of us have been clinging to the same beliefs since we were children or young adults. This is where children have a huge advantage. They don’t yet have views of their own. The sky is the limit to how they can think and what they might believe. Their handicap, though, is that they do not control what they are exposed to. They cannot (or perhaps, should not) search the internet alone, drive themselves to the library, proficiently read, or precisely express themselves through writing or speech. They are clean slates, and that ignorance not only gives them huge potential, but it also leaves them extremely vulnerable.

The Analogy

You may have heard this analogy before, but I will attempt to add a bit of depth to it.

A child’s mind is a slate, as are those of adults (though, arguably, much less so). It is a surface on which we can take notes, write and solve equations, draw pictures, and even play games. We can create a private world with our imaginations. For all intents and purposes, there are no innate limits to how we can use our slates. Maximizing our potential, and that of children, is up to the tools we use.

First, we need something to write with, but we shouldn’t use just any writing tool. Chalk is meant to be used on slate because it is temporary. It can be erased and replaced. If one were to write on a slate with a sharpie marker, that would be permanent. One could not simply erase those markings to make room for others. A slate has a limited amount of space.

Though our minds may not have a limited amount of space in general (there is not sufficient evidence that they do), there is a limit to how much information we can express at any given moment. That, not our mind in general, is our slate – that plane of instant access. The writing tool is our voice – our tool of expression. If we write with a sharpie, it cannot be erased. We leave no room to change our minds in the face of better evidence to the contrary. If we write with chalk, we can just as clearly express our ideas, but we also leave our ideas open to be challenged, and if necessary, erased and changed. It is also easier, for in the process of formulating our ideas with chalk, we need not be so algorithmic. We can adjust our system accordingly as we learn and experience new things.

The smaller the writing on the slate is, the more one can fit, but the more difficult it is to read. Think of a philosopher who has a complexly structured system of views. One detail leads into the next, and they all add up to a bigger-picture philosophy. It might take one’s reading all of it to understand any of it. That can be difficult and time-consuming, and not everyone has the patience for it. The larger the words on the slate, however, the easier it is to read, but the less there will be, so it risks lacking depth. Think of a manager-type personality who is a stickler for rules. He is easy to understand because he is concise, but he may lack the ability to explain the rules. People are irritated by him when he repetitively makes commands and gives no reasons for them. Likewise, children are annoyed when their parents and teachers make commands with no reasons to support them, or at least, no good ones (e.g. “because I said so”).

So, the slate represents the plane of instant access and expression of information, and the writing tool, whether it be chalk or a sharpie, represents our voice – our tool for expressing information and ideas. What does the eraser represent? The eraser represents our willingness to eliminate an idea or bit of information. It represents our willingness to refute our own beliefs and move forward. It represents the ability to make more space on our slate for better, or at least more situation-relevant, information. It represents reason. If we write with chalk, the eraser – reason – holds the power. If we write with a sharpie, the eraser becomes useless.

The Analogy for Children

I explained in my last post, “WARNING: Your Kid is Smarter Than You!”, that it is important for parents and teachers to teach their kids how to think – not what to think – but I did not offer much advice on how to actually do that. I will not tell anyone, in detail, how to raise or educate their children. Each child has a different personality and needs to be catered to through different means. I will, however, offer a bit of general advice based on the analogy above.

The way to teach children how to think (after already having done it for yourself, of course, which is arguably much more difficult) is NOT to hand the kids sharpies, for they will never learn to use an eraser. Their statements and beliefs will be rigid and lack depth of understanding. Granted, this might make them a lot of money in the short term, but it will also significantly reduce their flexibility when they encounter real-life situations (outside of the institutions of school and work) that require them to think for themselves. This will inevitably limit their happiness from young adulthood onward.

Instead, simply hand them a piece of chalk. It is not even important to hand them an eraser, initially. Kids will figure out, after much trial and error, their own way to erase their slates. Eventually, they will find on their own that the eraser is a very efficient way to do so. Literally speaking, they will express themselves and reason through their problems until they find the most efficient methods – by thinking for themselves – but only as long as they have the right tool.

WARNING: Your Kid is Smarter Than You!

Everyone is born with some capacity for critical thinking, but most people lose the skill over time. Children, specifically those aged 3-5, happen to be the best at it. This can be proven by a single word: ‘why’.

When someone asks a ‘why’-question, they are asking a question of reason, which is to say they are thinking critically to some degree. Children do this much more openly than adults, which is why most adults think children are simply being pests when they do. That is incorrect. The root of their questioning is philosophical. Children challenge assumptions, premises, and claims more openly than anyone. They are learning as much as they can about the world, and they demand reason to back up that knowledge. They are not lazy in the way that they tend to develop beliefs. Unfortunately, most parents do not share such genuine, open curiosity, nor are they readily able to cater to it. This is most obvious in grandparents; as the saying goes, “you can’t teach an old dog new tricks”. Elderly people tend to be the most firmly set in their ways and resistant to new ideas. Who can blame them? Thinking is calorie-intensive. Quite frankly, old people just don’t have the energy for it. Parents and teachers, however, have an important job to do. They have no excuse.

Though a child’s tendency to ask these types of questions will persist for some time, whether he continues to do so will depend greatly on how open and able his parents and teachers are to dealing with it. In a perfect world, adults would take this as an opportunity to think critically about those questions themselves. Instead, they get frustrated or annoyed, make up a poor answer (e.g. “because I said so”), and send their kid straight to the TV or to bed – whatever it takes to keep them occupied and out from under their skin. This is an uninspired and dismissive approach to parenting. The child’s curiosity is repressed, and they gradually stop asking questions and start submitting more and more to an ideology. The more naive children give in more quickly to the rules set before them. Others might become rebellious. Those rule-followers are certainly no smarter than the rebels, despite what social convention will tell you. Either way, their guardians’ repression has a lasting, negative effect on how they think.

I would now like to disclose that I do not have any children of my own, and I do not plan to have children in the foreseeable future. On that basis, someone who is guilty of the above might already feel offended and accuse my opinion on the matter of lacking credibility. I would like to think that the contrary is true for two main reasons. First, I am a good planner. I am fully aware of the challenges of raising a child, and that is precisely why I am responsible enough to take the necessary precautions to prevent having one. Second, experience isn’t everything. I can observe the effects of bad parenting with a high level of objectivity because my thoughts about the matter are not distorted by the feelings caused by having a child of my own – feelings which unavoidably inhibit one’s ability to reason well.

Having said that, as you are a rational, autonomous agent, let me tell you a story.

I have a friend who has a four-year-old daughter. Immediately, there is a problem: he did not intend to have her. No, the fact that so many other people accidentally have children does not excuse him. That would be to commit the bandwagon fallacy. Nor does the fact that he is married and is financially able to support his daughter excuse him. In fact, he and his wife had planned on holding out for five to seven years after their marriage to have a child, as they were aware of their not being ready. Instead, they ended up pregnant within only one year of their marriage. Their daughter was not planned, and my friend was not ready for the challenge of raising her. This is obvious upon close observation.

What does it mean for one to “be ready” to raise a child? That seems like a personal, descriptive question to which everyone has their own unique answer. That is true in a sense, but there is also a very normative aspect to this question. What “readiness” should mean here is that one is willing to accept the intellectual challenge of teaching a little person how to think – not what to think. That involves not shrugging every time the child asks ‘why’ and, more crucially, asking ‘why’ for oneself. There is a modern saying that goes, “grade school teaches one what to think whereas college teaches one how to think”. My argument is that by the time someone reaches college age, they have already become a person to a degree, with their own thoughts, feelings, and system of beliefs. Therefore, it is almost certainly too late to teach them how to think. Small children ask the most critical questions. Parents should help them improve that ability at that point, before they have subscribed to an ideology that will most likely be founded on poor reasoning. The obstacle here is that the parents have previously adopted certain beliefs and have thereby surrendered their own ability to think well, let alone teach that ability to a child. Leading by example is vital, as kids learn by copying.

My friend is no exception. He holds some rather radical beliefs – mainly those of scientism and atheism, which normally go hand-in-hand. Therefore, he is not the type, no matter the subject, to be truly open to the question ‘why’. His beliefs dictate specific answers to those questions: namely, that all knowledge in the universe, including that of supernatural entities (such as God), has been or will be confirmed or falsified on the basis of physical, quantifiable matter.

The other day, my friend’s daughter was at preschool when some of her classmates were talking about a discussion they had had in Sunday School the weekend before. When she got home that afternoon, she began to ask her father questions about God. She wasn’t doing so in a way that presupposed God’s existence, nor was she making any such claims. She was simply asking out of genuine curiosity, as children do with everything. To this point in her life, she had never even heard of God because my friend, being a serious atheist, had kept all sources of religion out of her reach at home. So, as you might imagine, he was quite disturbed that she was asking these questions. He felt he had done all he could at home to keep religion out of her life, and now she was confronting him, backing him into a corner. His quick-fix decision was, first, to reject her questioning, and second, to become more militant in forcing scientism upon her. He went out and bought children’s books about Darwinian evolution to fill the gap left by the absence of religion (e.g. Bible story books). His hope was that she would believe in science (actually, scientism) instead of religion.

My friend, on an elusive yet vital note, is trapped in a very conflicted way of thinking. He wants his daughter to “think according to reason”, as he says, but he also wants her to believe in some very specific ideologies. The two, at least in principle, cannot coexist. As I have explained in earlier posts, reason and ideology are nearly polar opposite mindsets. If one is to reason well, he should find that no general ideology is worth submitting to. There are only specific, situational exceptions to that fact. For example, when one takes a math test, he tunes into the deductive, mathematical way of thinking. When he takes a history test, he tunes into the material he studied for that test. Each way of thinking is useful in its own contexts. If he tries to apply math to the history test, or vice versa, he will fail the test.

On a more obvious note, my friend’s attempt to relentlessly control what his daughter is exposed to is a hopeless endeavor. She is going to get out of the house and away from her parents, as she already has to a degree. She is going to experience the world. She is going to have conversations with people who hold views that conflict with her own. Most of all, she is going to be challenged. If she is taught what to think (whether evangelical Christianity, scientism, atheism, democratic or republican ideologies, etc.), she will be defenseless in such encounters. She will only be able to think and express herself according to those strict systems of thought, and that will be very limiting.

This approach to parenting, in some form or another, is widespread in the western world, and it is wrong. It is like trying to understand how the brain of a rat works by killing the rat, taking the brain out, and observing it in a non-working state, independent of the body. When one attempts to control every variable, such conditions fail to represent those in the real world, for the real world is that which contains all the variables uncontrolled! Anything learned via such a method cannot be meaningfully applied in the real world. In fact, such methods will produce no meaningful results whatsoever.

How these analogies and examples can help us improve things, I will soon explain. There are constructive methods and solutions. The details of those methods will be for the individual parents and teachers to determine. All I will do is offer insights. You know your children the best, so adapt the concepts in your own way toward the one common goal: development of flexible thinking and viewpoints. There is a route for everyone. It is up to you to carve it for your children and for yourself.

There is not one generalized system of government, education, and economy that will satisfy all individuals. The ways individuals see things can change instantaneously. Creating a better world starts with better-thinking individuals. We can only hope that future systems will adapt accordingly.

To be continued…


Reason – The Business of Philosophy

“To say that a stone falls to Earth because it is obeying a law makes it a man and even a citizen.”  -C. S. Lewis

People who believe in science as a worldview rather than as a method of inquiry – I call them scientismists – are fascinated by science because they cannot grasp it, just as people who are not magicians are fascinated by magic. What little understanding they do have of it, in principle, is superficial. The difference between people’s perception of science and their perception of magic is that magic can always be explained. Magic plays a trick on one’s perception. That is magic’s nature as well as its goal. Science, on the other hand, cannot always be figured out. There simply is not a scientific explanation for everything (or even for most things). Nor is it science’s goal to explain everything! Science is an incremental process of collecting empirical data, interpreting it, and attempting to manipulate aspects of the environment accordingly for (mostly) human benefit. It is experimental and observable. It is, as I will explain, inductive. Unfortunately, sometimes unknowingly, human subjectivity intervenes in at least one of these three steps, exposing science’s limits through our own. So, where does reason fit into this process?

What “Reason” is NOT

One problem with scientism is that it equates science and reason. This is incorrect. Although philosophers of science, most of whom are scientists themselves, have debated the definition of science since it was called ‘Natural Philosophy’, there is one thing that we do know about it and about the difference between it and reason. Science deals with questions of ‘how’. It describes the inner workings, the technicalities, of observable processes and states of affairs. Reason deals with questions of ‘why’. It explores lines of thinking – fundamental goals, purposes, and meanings – for those processes and states of affairs, as well as those for many other non-scientific processes and states of affairs. Having said that, reason is necessary for science, but it is immeasurably broader.

Science cannot alone answer why-questions. Claiming that it can is a mark of scientism. Why is that?

I will now give reasons for that by using an example from Dr. Wes Cecil’s 2014 lecture about scientism at Peninsula College:

Engineering, which is a type of science that has its foundations in calculus, can tell us how to build a bridge. Engineering can build the biggest, longest, strongest bridge one could possibly imagine. How will the bridge look? We marry science and art to make the bridge beautiful as well as functional. So, even at this first stage of building a bridge – design – science cannot stand independent from even art, which seems so much more abstract.

Furthermore, why do we need to build a bridge? This is a question of reason, not of science. The answer seems to be “to get to the other side of the river”. But what the engineer (who is also a businessman who wants to land the deal for this highly lucrative project) might neglect is that building a bridge is not the only way to get to the other side of the river. Perhaps a ferry would be an easier, more cost-effective option. The engineer can tell us how to build a ferry too, but making the decision between the bridge and the ferry, ultimately, is not the engineer’s business.

Even once the decision has been made to build the bridge, several more questions arise: Who will pay for the bridge? How will they pay for it? Where exactly will the bridge be? Who will be allowed to use it – motorized vehicles only, bikes, pedestrians? These are not scientific questions, nor are most questions in our everyday lives. They are economic, ethical, and political questions that, much like the scientific question of how to build the bridge, require some application of reason, but they cannot themselves be equated with reason. Reason is something as different as it is important to these goals, processes, and states of affairs.

What is Reason?

Reason is a skill and a tool. It is the byproduct of logic. Logic is the subfield of philosophy that deals with reasoning in its purest forms. So, if someone wants to believe that science and reason are the same thing, then they are clearly admitting that science is merely a byproduct of a subfield of philosophy. I am sure that most scientismists’ egos would not be willing to live with that. Although something like that claim may well be the case, it is not what I am attempting to prove here. Let’s focus on reasoning.

We say that an argument is valid when the truth of the claim follows from the truth of its evidence. There is a symbolic way to express this. For example:

If p, then q; p; Therefore q.

What we have here is not a statement but, rather, a statement form called Modus Ponens. It is a formula into which we can plug anything for the variables p and q, and whether or not the resulting statement is true, the argument will be valid according to the rules of logic. Try it for yourself! But remember, ‘validity’ and ‘truth’ are not the same thing.
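One way to see why the form is valid no matter what we plug in for p and q is to check every possible truth assignment. The sketch below does exactly that (the function names are mine, for illustration; they are not standard logical vocabulary):

```python
from itertools import product

def implies(p, q):
    # The material conditional: "if p, then q" is false only
    # when p is true and q is false.
    return (not p) or q

def modus_ponens_is_valid():
    # A statement form is valid when no truth assignment makes
    # all of its premises true while its conclusion is false.
    for p, q in product([True, False], repeat=2):
        if implies(p, q) and p and not q:
            return False  # found a counterexample
    return True

print(modus_ponens_is_valid())  # True: valid for every p and q
```

Note that the check says nothing about whether any particular p or q is actually true. It only confirms that the form preserves truth, which is precisely the distinction between validity and truth.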

The example above describes deductive reasoning; it is conceptual. Immanuel Kant called the knowledge we gain from this process a priori – knowledge which is self-justifiable. Mathematics is a classic example of deductive reasoning. It is a highly systematic construction that seems to work independently of our own experience of it, and one that we can also apply to processes like building a bridge.

There is another type of reasoning called inductive reasoning. It is the process of reasoning based on past events and on evidence collected from those events. The type of knowledge that one gains from inductive reasoning, according to Kant, is called a posteriori. This is knowledge that is justified by experience rather than by a conceptual system. For example: we reason that the sun will rise tomorrow because it has risen every day for all of recorded human history. We also have empirical evidence to explain how the sun rises. However, the prediction that the sun will rise tomorrow is only a prediction, not a certainty, despite all the evidence we have that it will rise. The prediction presupposes that not one of countless possible events (the sun burns out, an asteroid knocks Earth out of orbit, Earth stops rotating, etc.) will occur to prevent it from happening.
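The limit of induction can be made concrete with a toy sketch (the data and the inferred rule here are hypothetical, invented purely for illustration): a predictor generalizes a pattern from past observations, yet nothing in those observations guarantees the next case.

```python
# A toy illustration of induction's limit: a rule inferred from
# uniform past observations yields a prediction, not a certainty.
def inductive_guess(history):
    # Infer "the pattern continues" from the step between past values.
    step = history[1] - history[0]
    return history[-1] + step

observations = [1, 2, 3, 4, 5]        # hypothetical past data
print(inductive_guess(observations))  # 6 -- the expected next case

# The world is not obliged to follow the inferred rule; an
# unforeseen event can break the pattern at any point.
surprise = 100
print(inductive_guess(observations) == surprise)  # False
```

Contrast this with the modus ponens check earlier: a deductive form can be verified exhaustively over its finite truth assignments, while an inductive rule faces an open-ended future that no finite record can exhaust.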

Illusions of Scientism

The mistake that scientism makes is that it claims that the methods of science are deductive when they are actually inductive. Reductive science (that which seeks to explain larger phenomena by reducing matter down to smaller parts) most commonly makes this mistake. More often than not, those “smallest parts” are laws or theories defined by mathematical formulas. Scientismists believe that the deductions made by mathematical approaches to science produce philosophically true results. They do not. The results are simply valid because they work within a strict, self-justifiable framework – mathematics. But, how applicable are mathematics to the sciences, and how strong is this validity?

“The excellent beginning made by quantum mechanics with the hydrogen atom peters out slowly in the sands of approximation in as much as we move toward more complex situations… This decline in the efficiency of mathematical algorithms accelerates when we go into chemistry. The interactions between two molecules of any degree of complexity evades mathematical description… In biology, if we make exceptions of the theory of population and of formal genetics, the use of mathematics is confined to modelling a few local situations (transmission of nerve impulses, blood flow in the arteries, etc.) of slight theoretical interest and limited practical value… The relatively rapid degeneration in the possible uses of mathematics when one moves from physics to biology is certainly known among specialists, but there is a reluctance to reveal it to the public at large… The feeling of security given by the reductionist approach is in fact illusory.”

– René Thom, mathematician

Deductive reasoning and its systems, such as mathematics, are human constructs. However, how they came to be should be accurately described. They were not merely created, because that would imply that they came from nothing. Mathematics is very logical and can be applied in important ways. However, the fact that mathematics works in so many ways should not delude us into thinking that it was discovered either, for that would imply that there is some observable, fundamental, empirical truth to it. This is not the case either. Mathematics and the laws it describes are found nowhere in nature. There are no obvious examples of perfect circles or right angles anywhere in the universe. There are also no numbers. We can count objects, yes, but no two objects, from stars to particles of dust, are exactly the same. What does it mean when we say “here are two firs” when the trees, though of the same species, have so many obvious differences?

What a statement about a number asserts, according to Gottlob Frege, is a concept, because any application of it is deductive. So, I prefer to say of such systems that they were developed. They are constructed from logic for a purpose, but without that purpose – without an answer to the question ‘why do we use them?’ – they are nonexistent. Therefore, there is a strong sense in which the application of such systems is limited to our belief in them. Because we see them work in so many ways, it is difficult to not believe in them.

Physics attempts to act as the reason, the governing body of all science, but it cannot account for all of the uncertainty that scientific problems face. Its mathematical foundations are rigid, and so are the laws that they describe. However, occurrences in the universe are not rigid at all. They are random and unpredictable and constantly evolving. Therefore, such “laws” are only guidelines, albeit rather useful ones.

As Thom states, “the public at large” is unaware of the lack of practical applications of mathematics to science, and it is precisely that illusion of efficiency that scientism, which is comprised of both specialists and non-specialists, takes for granted. It is anthropocentric to believe that, because we understand mathematics, a system we developed, we can understand everything. Humans are not at the center of the universe. We’re merely an immeasurably small part of it.

The Solution

In the same way Rene Thom explains mathematical formulas do not directly translate to chemistry and biology, deductive reasoning, more generally, has very limited application in most aspects of our everyday lives. Kids in school ask, “I’ll never use algebra; why am I learning it?” It turns out, they are absolutely right. Learning math beyond basic addition, subtraction, multiplication, and division is a waste of time for most. What they should be learning instead are the basics of reasoning. Deduction only proves validity, not truth, and induction has even greater limits, as David Hume and many others have pointed out. People, especially young children, are truth-seekers by nature, which is to say they are little philosophers.

There is a solution: informal logic, the study of logical fallacies – the most basic errors in reasoning. Informal logic is widely accessible and universally applicable. If people are to reason well, informal logic is the most fundamental way to start, and start young we should. Children, in fact, have a natural tendency to do this extremely well.
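A classic example of the kind of error informal logic trains us to spot is affirming the consequent (“if p, then q; q; therefore p”), a form that looks superficially like modus ponens but is a fallacy. A quick truth-table search makes the difference concrete; this is only an illustrative sketch, and the helper names are my own:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def counterexamples(premises, conclusion):
    # Truth assignments where every premise holds but the conclusion fails.
    return [
        (p, q)
        for p, q in product([True, False], repeat=2)
        if all(f(p, q) for f in premises) and not conclusion(p, q)
    ]

# Affirming the consequent: "if p, then q; q; therefore p."
bad_form = counterexamples(
    premises=[implies, lambda p, q: q],
    conclusion=lambda p, q: p,
)
print(bad_form)  # [(False, True)]: q can hold while p does not
```

Modus ponens, run through the same check, yields no counterexamples at all, which is what makes the two forms so easy to confuse and so important to distinguish.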

To be continued…

Typology as a Step Toward Critical Thinking

One of the key aims of philosophy, for the individual, is to simply become more open-minded. It is to broaden one’s understanding of what is logical and illogical, rational and irrational, not merely to himself, but actually. This is extremely difficult, so most philosophy course syllabi will include a disclaimer such as this one:

WARNING!
Doing philosophy requires a willingness to think critically. Critical thinking does not consist in merely making claims. Rather, it requires offering reasons/evidence in support of your claims. It also requires your willingness to entertain criticism from others who do not share your assumptions. You will be required to do philosophy in this class. Doing philosophy can be hazardous to your cherished beliefs. Consequently, if you are unwilling to participate, to subject your views to critical analysis, to explore issues that cannot be resolved empirically, using computers, or watching Sci-Fi, then my course is not for you.
– Rob Stufflebeam (University of New Orleans)

Harsh? For many, it is. After extensive philosophical examination of our beliefs – via criticism from others or otherwise – we should find that they are founded on many assumptions. Of course, one cannot make any argument without some preexisting assumption(s). Perhaps the challenge, for some, lies in choosing which assumptions to submit to and which to debate. For the philosopher, though, the challenge is much broader and often more difficult. Philosophy isn’t about formulating beliefs from nothing; rather, it aims to develop beliefs which can be justified and maintained in a logically consistent way or, failing that, to eliminate belief altogether.

It may seem ironic that the aim of the philosopher is precisely not to have “a philosophy” in the conventional sense of the term. I would argue, though, that this is not a conscious aim of philosophy (perhaps that is the conscious aim of art). After all, good philosophers are not grumpy, old, bigoted skeptics in the way some may think. Rather, this unbelief is merely a byproduct of having explored a subject in a philosophical way, i.e. impartially. As I explained in a previous post, there are no “philosophical problems” per se; there are only philosophical approaches to a problem, and one can approach any problem philosophically.

What do philosophical approaches to problems do for us? The short answer is “lots of stuff”. Consider this example: suppose that a man named Scott stands at the foot of a deciduous forest in the winter, long after all of the trees’ leaves have fallen off. In front of the forest, in Scott’s plain view, are two large, lush, green coniferous firs. Scott’s wife, Cindy, asks him, “How many trees do you see?” He answers “two”, for the firs, so green and lush, are the clearest things in his immediate view that resemble what he conceives to be trees.

Cindy’s question initially seemed like a very straightforward, mathematical question. But Scott jumped to the conclusion that the firs, not trees in general, were the objects Cindy wanted him to count. Of course, there are many deciduous trees directly behind the two firs. He could very well have replied ‘a bunch; too many to count’ if he had simply looked past the eye-catching firs to the vast-yet-barren, leafless forest. As we know, the deciduous trees are every bit as alive as the firs; they are only dormant for the winter. Even if Cindy’s question had specified the firs as the trees for Scott to count, answering in a straightforward way might pose more questions, leading to a philosophical discussion about, say, mathematics (e.g. what is meant by “two firs” when the trees literally have so little in common?).

This is just a metaphorical example, but the point is this: an aim of approaching questions in a philosophical (and, similarly, an artistic) manner is to gain the ability to see past what is immediately present to us. After all, what is immediately present to us is often a set of dubious assumptions formulated by culture, nurture, institutions, etc.

One might immediately see why this type of “critical thinking” can not only be difficult but can also get us into trouble, and it often does. Not only are individuals’ beliefs founded on assumptions which are very often irrational, but the same is true of the belief systems of businesses, institutions, and personal relationships. People in these contexts can be very sensitive to criticism. “Power in numbers” is real and is very often harmful in a philosophical sense, for collective bodies are generally more easily influenced by foolish belief systems than individuals are (cult mentality). Those who break from the group and question things in a fundamental way are only thinking for themselves, yet they become outcasts, albeit curious and honest ones. Just as an individual should strive for harmony between his outer world and inner self, so should a group be resistant to any type of dogmatism. How do we achieve this?

There is no sure-fire solution, for if there were, it would follow that all people innately think the same way, and this is obviously not the case. In fact, thinking for yourself, which is to say, thinking differently from everyone else, is absolutely vital if you want to thrive in any regard. Philosophy and critical thinking in general can help if one is up for the challenge, but it is not advisable for just anyone to dive right into philosophical reading and discussion. (Philosophy is difficult, and few people have the natural tendency to think openly about sensitive subjects to the extent one must in order to succeed in philosophical discussion – see the WARNING above.) There are other ways.

Each person has a different mind which presents its own set of challenges – challenges for which solutions will be found only once the person comes to terms with him or herself. For an outwardly-focused extrovert, this generally means finding comfort in one’s own skin. For an inwardly-focused introvert, it means finding one’s place in the outer world. In reality, of course, it is much more complicated than that. This is one reason I think Jungian typology, personality psychology, and light aesthetics present, for the general population, more relatable ways to deal with questions that are normally the concern of ethics and moral philosophy. No one broad ethical theory will satisfy everyone, and I find it nearly impossible to adapt such a theory to a wide range of people in a conceptual sense, and even less so in a practical sense. Typology, by contrast, is an extremely effective method for understanding one’s self and others.

How can each individual maximize his or her ability to think, act, and thrive? First, we must acknowledge that every person has his or her own version of the “good life”; the goal is to figure out what that is and aspire to it by maximizing one’s cognitive potential. Ethics, then, does not seem to be of much use, at least initially. This sort of “self-actualization” can also be vital for maximizing one’s participation in philosophical discussion. Before subjecting oneself to harsh philosophical criticism, however, it is advisable to come to know oneself. Jungian typology is a great method for taking that first step, and then, perhaps, philosophy can pave the rest of the path.

To be continued…

Current Methods of Usage – The “Private Language” Question and a Modern Example

To see how the meanings of terms evolve, we can use the word ‘gay’ as an example. It was originally an adjective used to describe one who is happy, joyful, carefree, and very open-minded. It is by virtue of usage, not definition, that over the last century it has come to mean ‘homosexual.’ ‘Gay’ was used occasionally, then gradually more often, to mean ‘homosexual’ until the new meaning became the formal definition. Even today, in very informal, slang contexts, ‘gay’ can be synonymous with a long list of words, depending on the context. This, as we know, has happened with many other words and phrases as well.

Of course, those other meanings for ‘gay’ are often slang and derogatory (e.g. in the conservative south, where homosexuality is not openly accepted). This is not a problem of language, but a problem of social human psychology. Perhaps I will further address this in a later post. For now, though, keep this (‘gay’) example in mind, for I will be returning to it soon.

The “Private Language” Question

Society would not be able to determine meaning, or even function, without shared customs, which Wittgenstein calls forms of life. There are countless forms of life which help shape the meaning of language. Remember, language is a social activity, a game, a tool, and a means by which we interact. It is not by any means a universal entity, because it cannot exist without the conformity of men. Therefore, later-Wittgenstein would claim, the creation of a private language is not possible.

Immediately, one might think otherwise. Is it not possible for an individual to create a private language that only he could understand? Perhaps with time it would actually be quite simple. One could easily create a private language using an interpretation of the modern Latin alphabet to form its words, as English does. In the same way that John Locke says we come to understand meaning (from within the head), we could formulate a language by first creating words from an alphabet, then assigning definitions to them, and finally structuring their usage by establishing syntactical rules. One might claim that even later-Wittgenstein should agree that this is possible, provided that these definitions and rules are subject to change at any moment, which would certainly be the case once the language was taught to a group of people and put to use. This may seem convincing, but there is an enormous problem here.

To argue that a truly private language, in this sense, is possible is to argue something that cannot be proven. In fact, it is far more reasonable to bet on the contrary. To even consider that a private language which resembles our own to any degree can be created is a naive oversimplification of language. We can only make this claim on the basis of what we already know about language: writing and recognizing symbols which represent sounds, which can be formed into words, to which we assign definitions. This is the method we have always used. It is a habit, and in some sense an ideology, that we take for granted.

As we humans have evolved, our language has evolved with us. We have, quite extensively, built off of caveman muttering to form the complex languages we have today. Ultimately, if recorded history allowed, even the most complex languages could be traced back to such muttering. Indeed, each individual begins learning language as a muttering infant. More generally, this is how language began altogether.

Perhaps this “private language” question cannot be answered with absolute certainty, for you still may not be convinced, but one thing is certain: to claim outright that a private language can be created simply by developing an alphabet, formulating sounds and words, and assigning definitions to those words is extremely naive. We would be relating our reality too closely to the theoretical, and we would be admitting our ignorance of our own linguistic nature.

None of this means we should not speculate, of course. But keep in mind that, crucially, any attempt to speculate requires a conversation – a sharing of ideas. Participating in such a conversation would only make clearer that language works in the way that I (and later-Wittgenstein) have been trying to explain.

Current Methods of Usage

Suppose that, through any means whatsoever, a private language can be created. I can still accept Wittgenstein’s idea that, over time, the new language would become fluid, and the rules and meanings would change with it. At any given moment, however, there are in fact present rules by which language must be used if we are to communicate effectively. Indeed, this is how any language, private or not, works. These rules are what I call the current methods of usage. Going back to a previous example, the word ‘gay’ used to have a different meaning and usage than it does today, but one individual cannot spontaneously decide to begin using the word in a manner that strays too far from its current method of usage (i.e. how it must be used at the present moment for communication to occur between two or more people).

Although usage, as later-Wittgenstein would say, caused the gradual shift in the meaning of the word ‘gay’, it would now be improper, incorrect, and not socially acceptable to use the word according to its previous definition. Not because the dictionary disagrees (remember, definitions are not rules of meaning), but because such usage of the term would be misunderstood in virtually any social setting. Miscommunication would occur. The general current method of usage of ‘gay’ suggests that it currently means ‘homosexual’, and by using it to mean ‘happy, outgoing, and open-minded’, we are arguably no longer using the word properly. We are not conforming to the rules of the established language game. Communication requires some level of mutual understanding. I expect that no one reading this will find that arguable.

It should be noted that this is a very general account of the current method of usage of ‘gay’. There are also very specific, contextual cases where this concept comes into play. When I say that using ‘gay’ according to its former definition is currently improper, I am speaking in general terms. Most people, in most cases, equate ‘gay’ with ‘homosexual’.

Just as ‘gay’ is used and understood in slang as being synonymous with derogatory terms (unfortunately), it can also be used in contexts where it still means ‘joyful, carefree, and open-minded’. An example of this would be a small circle of elderly women, drinking tea on a Sunday afternoon, who describe one of their eighteen-year-old granddaughters as ‘gay’ because she recently got a tattoo. All of the elderly women understand the usage of ‘gay’ in this case. This would seem odd to the granddaughter if she were to walk into the room in the middle of their conversation, for she most likely understands ‘gay’ to mean ‘homosexual’ (because she is up-to-date with the general current method of usage of the term). However, the elderly women are not using ‘gay’ incorrectly because it conforms to their collective understanding that the term means ‘carefree and open-minded’. They are indeed conforming to a specific current method of usage – the method immediately relevant to the context of their conversation. They are playing the same language game. This works because the goal of language usage, communication, has been achieved.

Where Is Meaning?

Indeed, to deny that Wittgenstein’s later work improves on his early work is to commit two errors: 1) to overlook or submit to the intellectualist nature of Tractatus; 2) to fail to grasp the crucial insight that his later work provides. Tractatus claims that the better one masters the syntax of a language, the broader one’s experience and understanding of the world. This is a misguided intellectualist view because it values the skill of applying language (as a priori) over and above all other skills and, more importantly, over the matters themselves to which language is applied (i.e. any set of circumstances in the world that we attempt to describe). I have only seen shallow and insufficient evidence to support this view. After all, it is the things to which language is applied that matter, not the language itself.

Because there are no limits to how one can experience the world, we should never be misled into believing there are strict boundaries that limit our usage of a word. Our statements are an expression of our understanding; our statements do not dictate understanding, as early-Wittgenstein thought. By this notion, we should even be allowed to take a word completely out of context, and as long as we are able to communicate to at least one other person whatever idea is present to us by using that word, even if it is definitively unrelated, then we would not be using that word incorrectly. In fact, whether we realize it or not, we do this very often.

Rightly or wrongly, contemporary schools of thought take for granted that meanings are not in the head. However, it seems clear that anyone’s interpretation of meaning is. The most we can agree on, it would seem, is that communication occurs when two or more parties agree on meaning, though they could very well be using identical statements to assert two different things.

Perhaps “where is meaning?” is the wrong question to ask. There is nothing out there in the universe that we can observe in any fashion that dictates meaning. There are no dictionary definitions so precise that, from the definition alone, we are able to connote everything included in the word’s realm of possible references. If definitions were this way, i.e. if they served as rules of meaning, then such a dictionary would be so incredibly large that it could never be printed. Perhaps it would have to be stored online for anyone to access and edit at a moment’s notice, much like Wikipedia. But still, usage among speakers would be dictating the definitions, so what good would these rules be at all? Definitions would overlap more and more until every word had so many connotations that it would be virtually indistinguishable from several other words. Is this not already the case?

Usage of phrases and words is in a constant state of flux. We collectively, and often unknowingly, adapt to these constant changes so there remains enough continuity for us to effectively communicate what we mean. Since this adaptation process is often subconscious, we need not think about it; we presume meaning by our usage, and we are almost always correct provided we, and those receiving our message, are fluent in that language.

If Tractatus were more accurate than P.I. in describing the fundamental nature of language, then learning a language would require a great deal of memorization, much like one “learns” a foreign language in a classroom. This may teach us something about the concepts of a language, but it does not teach us to use the language effectively within social contexts. Learning, in this case, would be much more difficult, and for many, impossible.

So, how do we actually learn language? We’ll have to go back to a time that we do not remember, and we must forget everything we now misunderstand about language. I’ll use the most parallel analogy I can think of:

When parents are teaching a child to walk, they do not simply explain to the child how to walk and expect him to be able to do it without practice. Obviously, the child is not yet proficient in grasping such a concept. Nor does a parent grab one of the child’s legs, put it in front of the other, then do the same with the other leg repeatedly, because the child has not yet developed the practical skill of walking, and one cannot learn such a skill in such a forced manner. The child needs a reason to walk, so the parents teach the child to walk by working toward a goal. One parent (let’s say, the father) stands the child up, and the other parent (the mother) kneels down a few feet away, holding her hands out to the child. The father acts as the spotter, and the mother acts as the goal. The child sees his mother, desires to reach her, and he has to walk to get there in the same way that he learned to crawl (or at least his parents will condition him to believe this based on their training methods).

The same is true of language. It is the tool we use to communicate because we need to communicate to get what we want or need. We start out, as babies learning language, by blurting out the word ‘bear’ and pointing to our teddy bear in order to achieve the goal of the teddy bear itself. The child says ‘bear’ to express the general idea “I want that teddy bear” or the command “give that teddy bear to me”. He is communicating with the parent in this sense. He is expressing a desire to achieve a goal. He is not merely making a statement (that would be impossible). Language is the road, not the end of the road. There is no language for language’s sake just as there is no walking for walking’s sake. Language is used for a purpose – a goal – in any given situation.

How each person achieves his goal varies greatly. Not all children walk the same. Some are bowlegged, some are pigeon-toed, some drag their feet and trip on their shoelaces, and some cannot walk at all, so they utilize other tools such as wheelchairs. But they each adapt to their handicaps to get what they need – to get from A to B. Likewise, not everyone speaks the same. Some slur their ‘r’s, some pronounce their ‘s’s with a ‘th’ sound, some use poor grammar, and some cannot speak at all, so they learn sign language. Regardless, each adapts to their handicaps and uses language for the same purpose – to communicate.

Language in general is meant to be used practically, not merely understood conceptually. Of course, there are logical concepts to understand which will help us be more precise, but the understanding of those concepts is something like our understanding of how to walk: put one foot in front of the other. As long as you practice walking, you will learn the concept of walking to some extent, but it is the act of walking that is fruitful for the individual. Likewise, one learns the concepts of language to the extent that one needs to, but only to that extent. This is why some children (and adults) in school grasp grammar well and others do not, though they are able to communicate orally to much the same effect in social and professional circles. Some are more conceptually-minded; they prefer to master grammar in order to be as precise as possible both in writing and in speech, and they will also make better teachers because they can adapt their language usage to a wide range of listeners. Others prefer to stick to practice and master other types of skills, and perhaps they will become better doers. Either way, practice comes prior to understanding in this case (though not necessarily in every case).

And this is the point: it is only by looking to the world that one might be able to explain language. The world is untamed, and so is the way we understand it and attempt to describe it – i.e. so is language itself. We play language games to adapt the meanings of utterances to our world. Otherwise meaning would be of no use to us, and that is certainly not the case.

Logical Reductionism

One similarity between Wittgenstein’s two main works, Tractatus Logico-Philosophicus and Philosophical Investigations, is that, in both, he concerned himself with this very question: “How are we to say what we mean?” However, the reasons for this concern were different in each work, so the question itself changed over time (and this is an example of how meaning changes; the same sentence can mean two different things under different circumstances).

Tractatus took for granted two fundamental assumptions about language: that it has a quantifiable logical construction and that it is causally related to our perception of the world. The latter assumption seems undoubtedly true; the former, not so much, even though the latter seems to be contingent on the former. Wittgenstein says in 1.1 of Tractatus, “The world is the totality of facts, not things,” and then in 4.001, “The totality of propositions is the language.” In other words, if Wittgenstein remains consistent, reality is composed of all states of affairs about which propositions can be made (in case this is not already clear from my previous description). Language is a puzzle that one must figure out if he is to communicate effectively. One may only think and speak according to those factual states of affairs in the world. That is to say, because language is something of a logical system, one may only think and speak logically. This brings me to the minor concept of this essay: logical reductionism.

“Logic: The art of thinking and reasoning in strict accordance with the limitations and incapacities of the human misunderstanding.” – Ambrose Bierce, The Devil’s Dictionary (1911)

Logical reductionism can be broadly defined as “rigid belief in an a priori system, even in contexts in which it is inapplicable”. The term is very broad, for it includes any case where a dogmatic ideology guides understanding without exception. Logical reductionism is, in many cases and very generally, similar to the single-cause fallacy, which is closely related to the false-cause and correlation-causation fallacies. It states: because y follows x, x must have caused y. For example, if a man who is known to have a heart condition dies in his sleep, his family members might conclude that the death was due to a heart attack. The pathologist may or may not be able to confirm this. Regardless, the family has come to an agreement on what the cause of death was, assuming that there was only one cause, when in fact there were probably multiple necessary contributing factors.

The main difference between the single-cause fallacy and logical reductionism is that the former deals with one’s lack of ability to use a specific type of reasoning, while the latter deals broadly with one’s rigid belief in a system. The latter, as I will explain, is much more problematic.

The type of reasoning that is hindered by the single-cause fallacy is not one that warrants an immediate judgment. Rather, it is a mode of perception that allows one to see multiple possibilities and make connections in an unfamiliar situation. Proponents of Jungian psychology call this mode of perception extroverted intuition (or Ne). The tendency to neglect this perception is called, in some circles of psychology, explanation freeze. Anyone can fall victim to explanation freeze (i.e. get fixated on a singular explanation of a problem), no matter their ability to use Ne, but Jung would suggest that only half of the human population possesses the natural ability to exercise Ne at all, and that only a very small percentage can exercise it consciously and effectively. Everyone else is only able to use it to a very small degree or merely act as if they use it. Upon close observation of one’s social environment, this actually seems to be true. Based on the limited formal research that has been done on this by Julia Galef and other contributors at clearthinking.org, it also has great potential to be confirmed. However, I am not making a case for that at this time.

If the ability to use Ne is indeed innate, then one who lacks it cannot be faulted for failing to achieve what he has no potential to achieve (i.e. overcoming the single-cause fallacy). On the other hand, the ability to use extroverted intuition may not be innate, and everyone might have the potential to improve the skill. If this is the case, then everyone would have the potential to exercise Ne with practice. In fact, there are outlets online that can help with this: wi-phi; ClearThinking; YLFI. Either way, one should be able to hold the position that it is generally more difficult to overcome logical reductionism than the single-cause fallacy.

A further description of Ne: (People who have a strong tendency for extroverted intuition have been found to naturally exhibit brain activity similar to that of someone under the influence of a hallucinogen like psilocybin, ayahuasca, or LSD. Despite the public’s generally negative attitude toward the use of hallucinogens, they can have some very positive long-term effects. They can broaden one’s scope of the world, allow him to see multiple possibilities in any situation, make him realize the interconnection of humans and nature, etc. This is no delusion, but rather a unique type of clarity which can, albeit with more difficulty, be achieved without the use of such substances. I do not promote the use of hallucinogens, mainly because their effects can be achieved through other means (intensive meditation, introspection, etc.). I have only used this example to further explain what it is like to have Ne “brain wiring”. Take it however you prefer.)

Logical reductionism is broader than the single-cause fallacy, but as stated earlier, it is closely related to it. I used the example of the family of the man who died. It should now be clear why their assessment of the death is guided by poor reasoning: they are not medical professionals and do not realize the broad range of issues that normally contribute to an unexpected death. In such a moment of stress, their perception becomes narrow. More generally, they may not have a strong natural tendency to use Ne to begin with. This is fine.

The pathologist, on the other hand, has no excuse (even if he possesses no greater natural ability to use Ne). If he outright agrees that a heart attack was the cause of the man’s death, he likely does so for one of two reasons: because he simply wants to satisfy the family so he no longer has to continue the conversation, or because he believes so dogmatically in the practices of pathology that he thinks they can provide all of the answers on a strictly biological basis. It is in the latter case that he is being ideological, and therefore committing logical reductionism, whether he is aware of it or not. Neither approach to the question is very professional in my view, especially the latter, because it is founded in ignorance. (This is a common problem in medical practice that I may address at some other time, after further research.)

Logical reductionism is widespread, and it epitomizes the naivety of human perception. There are no matters in the universe (medical, scientific, philosophical, religious, political, etc.) that can be absolutely confirmed or refuted by the application of an a priori system. To think otherwise is to commit the fallacy of logical reductionism. It is incredibly arrogant to claim that we humans have the potential to understand the nature of anything via systems that we have created for the sole purpose of making it easier for us to relate to those very states of affairs that we previously accepted as unfathomable. Language is not one of those systems. This is precisely what Tractatus gets wrong.

(In Philosophical Investigations, that assumption was done away with. Wittgenstein realized that language is not a puzzle; it is a tool. The challenge to say what we mean is not to figure out something fundamental about the language, but rather to work with others to communicate what we mean on a more holistic, interpersonal level. We use language; we need not deconstruct it.)

In reductive biology, researchers tend to look for specific genes to explain traits, birth defects, and mutations. Genes are thought to be the most elementary autonomous anatomical units. The line of reasoning is that by reducing a condition down to its fundamental parts (simples), we might gain a fundamental understanding of the whole being (composite). (This line of reasoning commits the fallacy of composition, which assumes the whole is merely the sum of all of its parts; the fallacy of division assumes the reverse.) The extent of such findings has been merely correlative. The only thing we absolutely know genes to do is guide the synthesis of proteins. These proteins play only a partial role in the construction and maintenance of DNA and RNA. DNA provides a home base for the storage and transmission of “genetic information”, and RNA is then required to carry out the functions of that information (e.g. expressing traits and maintaining the genetic stability of the organism). So, the gene’s role in developing traits is indirect and not very clear. It is something like: if A and X, then B; if B and Y, then C; if C and Z, then D. A (genes), therefore D (traits). It is becoming increasingly clear that reducing a composite to a simple does not help us to explain the broader functions of the composite, and vice versa.
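The conditional chain above can be made concrete with a small sketch. The symbols A, X, Y, Z, B, C, and D are the hypothetical stand-ins from the text, not a real genetic model: the point is only that the reductive inference “A, therefore D” goes through when every intermediate cofactor happens to hold, and fails otherwise.

```python
# A toy propositional sketch of the gene-to-trait chain described above.
# A = gene present; X, Y, Z = intermediate conditions (protein synthesis,
# DNA storage, RNA function); B, C = intermediate outcomes; D = the trait.

def trait_emerges(A, X, Y, Z):
    """Each step fires only if the previous outcome AND its cofactor hold."""
    B = A and X  # if A and X, then B
    C = B and Y  # if B and Y, then C
    D = C and Z  # if C and Z, then D
    return D

# "A, therefore D" holds only when every cofactor is true:
assert trait_emerges(A=True, X=True, Y=True, Z=True) is True

# With any cofactor absent, the gene alone does not yield the trait,
# which is exactly what the single-cause inference overlooks:
assert trait_emerges(A=True, X=True, Y=False, Z=True) is False
```

The sketch is deliberately simplistic; its only job is to show why pointing to A as *the* cause of D oversimplifies a chain that depends on X, Y, and Z as well.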

I’ll use a less complicated example from biology. Different types of cells carry out different and specific functions: red blood cells distribute oxygen throughout the body, white blood cells fight infection, nerve cells transmit sensory impulses to the brain, skin cells shed and regenerate to protect the inside of the body from the outside world, etc. But does the sum of all of these basic components equate to a human? The answer is ‘no’, because the range of functions that the organism can perform is much more extensive and diverse than that of the sum of all of its constituent parts. The human being (especially the brain) is so complex and mysterious because it cannot be quantified in this way. Any use of mathematics in biology is an estimation, and at best, a guideline. To believe otherwise is to commit logical reductionism.

The same is the case with language. Wittgenstein states in Philosophical Investigations:

47. But what are the simple constituent parts of which reality is composed? – What are the simple constituent parts of a chair? – The pieces of wood from which it is assembled? Or the molecules, or the atoms? ‘Simple’ means: not composite. And here is the point: in what sense ‘composite’? It makes no sense at all to speak absolutely of the ‘simple parts of a chair’.

48. …We use the word ‘composite’ (and therefore the word ‘simple’) in an enormous number of different and differently related ways. To the philosophical question ‘Is the visual image of this (chair) composite, and what are its constituent parts?’ the correct answer is ‘That depends on what you understand to be composite.’ (And that, of course, is not an answer to, but a rejection of, the question.)

In this later work, Wittgenstein came to deny the existence of simples and composites in the way we describe reality. Only by misunderstanding language as having a logic-contingent construction might we disagree with him. Language no longer dictates one’s understanding of the world. Rather, the world controls the fluidity of language because the world controls us, whether we are able to admit it or not. Any attempt on our part to control the world will have horrific ramifications (e.g. the effects of agriculture on climate change). We adapt our language to our world out of necessity.

To be continued…

Current Methods of Usage (Part 2) – The Two Theories

Tractatus Logico-Philosophicus described language as the picture through which we see the world. Reality is everything that is the case – the totality of describable facts and states of affairs. The limits of language, of which there are many, are the limits of one’s overall experience of the world. Seemingly abstract questions, such as those of ethics and aesthetics, are transcendental and thus not askable, because their foundations are not in accordance with the states of affairs in the world. Any question that can be asked (according to the current states of affairs in the world) can indeed be answered. We think in terms of logical propositions and express ourselves using those same propositions. This is a difficult process, for language has a unique logical construction, unlike mathematics or logic itself, whose propositions are, at best, tautological. Being concise is important. Thinking and speaking are both logical processes. One cannot think or utter an illogical proposition, because such a proposition would not fit into the picture of the world, i.e. language, which at least limits our understanding of the world and at most limits the actual states of affairs that its propositions assert. The greater one’s proficiency in language, the greater one’s overall experience of the world. Language is everything.

That is as concisely as I believe I can put it. I sure hope that, by Wittgenstein’s measure, I am following the rules!

Philosophical Investigations begins with a quote from St. Augustine’s Confessions, which explains how language is first learned by learning the names of objects. You see your parents point to an object, say a word, and you learn to associate the word with the object. This initially seems to resemble Tractatus. For later-Wittgenstein, though, this is only the starting point. Names, and more generally, propositions, no longer pose a problem. It is reasonable to accept that we learn to communicate by pointing to objects while saying a specific word. However, Wittgenstein claims that we cannot establish a necessary, fundamental relationship between the name of an object and the object itself. Rather, language sets infinitely revisable guidelines for how we communicate, and it is the usage of words that gives them their meaning. For example, suppose a group of builders communicate using a four-word language containing the words ‘block’, ‘pillar’, ‘slab’, and ‘beam’ (Wittgenstein 19). When one builder says one of those words, or any combination of those words, to another, he is not merely naming the individual objects. There are certain implied statements based on the usage and context of the words. To say “block” usually implies “fetch me that block”. It could even imply something as extensive as “fetch that block, and then place it here in an upright position.” Any combination of those words can have any combination of implications, and they will be correct just as long as all parties involved in the communication of those words understand those implied statements. Meaning, in this case, deals much more with the overall implications than with the singular words. Meaning is not bound by the words themselves, but rather by how they are used. Words seem to have no boundaries at all because of the endless range of implied statements one can make by saying a single word. This is in part what Wittgenstein refers to as a language game.
There is no particular set structure by which we must speak in order to communicate. We play these language games to communicate ideas. In many cases, we can only hope that one receives a message as we mean to send it. The world, not language, is everything. Mastering language will help one in many ways, yes, but one’s problems in the world are reducible more to one’s individual psychology than to language itself, which, as Tractatus claims, has some a priori (self-justifiable) foundation.

Which of the two theories briefly described above do you find best explains the nature of language? Though they seem to contradict each other, either one may seem feasible with some thought. At different points, I have been convinced of both for different reasons. However, my agreement with Tractatus was a bit more like my agreement with my daily horoscope. It seemed to make sense only within the confines of a very specific way of thinking. Its assumptions seemed to outweigh the claims they supported. Though Tractatus clearly provides insight, Philosophical Investigations now seems to better describe the ways in which language is actually used in the world. I hope that one will be convinced of this after reading further.

Current Methods of Usage (Part 1) – Introduction

At two different points in his life, Ludwig Wittgenstein held conflicting theories about the nature of language. These two philosophies arguably gave rise to two schools of thought, each with an extensive range of subfields, that are still prominent today: the analytic and the continental (this is why Wittgenstein is so widely considered the most influential philosopher of the twentieth century). We associate Wittgenstein’s early work, Tractatus Logico-Philosophicus, with the analytic school of thought. This work argued for a “picture theory” of language, which states that language’s foundations lie in the logically constructed picture of the world that we attempt to describe; there is a necessary relationship between terms and the things in the world to which they refer. We associate Wittgenstein’s later work, Philosophical Investigations, with the continental school of thought. This work argued for a much more open-ended theory of language, which states that meaning is not fixed; it fluctuates depending on its context. We play language games in order to communicate as precisely as we can within a given context. In either case, saying what we mean is a difficult task.

My purpose in this essay is to show that Wittgenstein’s two theories of language can, in some sense, coexist. If I am successful, one should be able to infer that the respective schools of thought that they gave rise to must coexist if we are to advance thought. Perhaps I will elaborate on the latter point at a later date, but for now, I will defend the former by devising two concepts. The first, which I call the logical-reductionism fallacy, will expose the problems of applying strict a priori ideals to meaning, in this case as applied to language. The second, which will be the focus of this essay, I call current methods of usage. It states that there is indeed a proper way to use language in a particular time and place. It cherry-picks ideas from Tractatus that we should keep in mind when using language, while accepting that Wittgenstein’s later theory is superior in explaining the overall nature of language. So I am not claiming that two seemingly contradictory theories can coexist in terms of fundamental truth, but rather that one is more true while the other is practically valuable, so both are worth keeping in mind.

Though I will be trying to stay on this track, I will frequently deviate from the central argument to express my own ideas about the fundamental nature of language. Perhaps that will be the focus.

A Further Thought on “…the Third Realm”

Upon initially completing Varieties of Presence, I failed to notice a very sly tactic used by Noë: putting a chapter about how to philosophize at the very end of a book about matters that are primarily of interest to neuroscience, biology, and psychology. These, especially the former two, are fields that generally pay little attention to philosophy despite the fact that they are participating in philosophy whenever they explore ideas, publish work, or think about how to redirect their research when they are stumped. (I hope the reason I was initially blind to this is that I am already inclined to philosophize fairly well.) No matter the reader’s background, Noë seems to want to leave fresh in his or her mind that there is a third realm at the core of every problem, whether scientific, social, political, historical, or interpersonal. So, the opportunity to do philosophy is always there. It is up to those involved to take advantage of that opportunity.