The False-Dilemma of the Nature vs. Nurture Debate

Before I begin, allow me to explain what I mean by false dilemma. A false dilemma is an error in reasoning whereby one falsely assumes that the truth of a matter is limited to one of two (or a select few) explanations. Take, for example, the American presidential election. For another example, have you ever been stumped by a question on a multiple-choice test because you saw more than one possible correct answer (or no correct answers at all)? — perhaps you got frustrated because you felt that the test was unfairly trying to trick you? Well, you were probably right. This may have been an instance of your ability to recognize the false dilemma fallacy. Sometimes there are indeed any number of correct answers given any number of circumstances. There is often simply not enough information provided in the question for one choice to clearly stick out as correct. This might lead you to question the test in a broader sense. What is the purpose of this (presidential election, or) test? What is it trying to measure or prove? Without getting into that answer in too much detail (as this is not a post about the philosophical state of academic testing), I can say that such tests aren’t so much concerned with truth and meaning as they are with the specific program they support. That program may or may not have the best interests of the people in mind, and it may or may not be directly governed by the amount of money it can produce in a relatively short period of time. Anyway, that’s another discussion.

In a previous post entitled The Slate, the Chalk, and the Eraser, I compared a child’s mind to a slate, and I argued that as long as we write on it with chalk by teaching him how to think (rather than with a permanent marker by teaching him what to think), then he will be able to erase those markings to make way for better and more situation-relevant ones in the future, once he develops the ability to make conscious judgments. This is an example that you may have heard before, and it can be useful, but by some interpretations, it may seem to rest on a false presupposition. Such an interpretation may raise the “nature-nurture” question that is so common in circles of science and philosophy. One might argue that if a child’s mind is truly analogous to a slate in the way I have put forth, then I should commit myself to the “nurture” side of that debate. That was not my intention. In fact, that debate, in its most common form, presents a false dilemma, so I can only commit to both or neither side depending on what is meant by ‘nature’ and ‘nurture’. The conventional definitions of these terms are limited in that they create a spectrum on which to make truth-value judgments about objects, experiences, phenomena, etc. We commit to one end of the spectrum or the other, and we take that position as true and the other as illusory. This is similar to the subject-object distinction I described in an earlier post. Perhaps comically, even the most radical (and supposedly-yet-not-so-contrary) ends of scientific and religious belief systems sometimes agree on which side to commit to, albeit for different reasons. That particular conflict, however, is usually caused by a semantic problem. The terms ‘nature’ and ‘nurture’ obviously mean very different things for radical mechanistic scientists and evangelical Christians.

Please keep in mind throughout that I am not criticizing science or religion in general, so I am not out to offend anyone. I am merely criticizing radical misinterpretations of each. Consequently, if you’re an idiot, you will probably misinterpret and get offended by this post as well.

Taking this description a step further, the false dilemma can be committed to any number of degrees. The degree to which it is committed is determined by at least two factors: the number of possible options one is considering and the level of complexity at which one is analyzing the problem. Any matter we might deal with can be organized conceptually into a pyramid hierarchy where the theoretical categorical ideal is at the top; the further one goes down the pyramid, the more manageable but trivial the matters become. As a rule of thumb, the fewest options (one or two) and the lowest level of analysis (bottom of the pyramid) should give rise to the highest probability of a logical error because the bottom level of analysis has the highest number of factors to consider, and those factors culminate up the pyramid toward the categorical ideal. Fortunately, committing an error at the lowest levels of analysis usually involves a harmless and easily-correctable confusion of facts. Errors committed at higher levels of analysis are more ontological in nature (as the categorical ideals are per se) and can have catastrophic consequences. All sciences and religions structure their methods and beliefs into such pyramid hierarchies, as do we individually. They start with a categorical ideal as their assumption (e.g. materialism for some science; the existence of God for some religion), and they work down from there. However, neither religion nor science is meant to be a top-down process like philosophy (which is likely the only top-down discipline that exists). They’re meant to be bottom-up processes. For science, everything starts with the data, and the more data that is compiled and organized, the more likely we are to draw conclusions and make those conclusions useful (in order to help people, one would hope). For religion, everything starts with the individual.
Live a moral and just life, act kindly toward others, and you will be rewarded through fulfillment (heaven for western religions, self-actualization for eastern religions). These can both be good things (and even reconcilable) if we go about them in the right way. What are the consequences, however, if we go about them radically (which is to say blindly)? In short, for radical belief in a self-righteous God, it is war, and therefore the loss of potentially millions of lives. In short, for radical materialism, it is corruption in politics, education, and the pharmaceutical industry, the elimination of health and economic equality, and the potential downfall of western civilization as we know it. That’s another discussion, though.

For the nature-nurture debate, the false dilemma is the consequence of (but is not limited to) confusion about what constitutes nature and nurture to begin with, and even most people who subscribe to the very same schools of thought have very different definitions of each. First, in the conventional form of this debate, what do people mean by ‘nature’? Biology, as far as I can tell, and nothing more. We each inherit an innate “code” of programmed genetic traits passed down from our parents, and they from theirs, and so on. This code determines our physiology and governs our behavior and interaction with the outside world. Our actions are reactive and governed by our brain-computer, and free will is consequently an illusion. What is meant by ‘nurture’ on the other hand? Our experienced environment, and nothing more. Regardless of our chemical makeup, how we are raised will determine our future. There is no variation in genetics that could make one person significantly different from another if raised in identical fashion by the same parents, in the same time and place. We have no control over the objective environment we experience, so free will still seems to be illusory.

These positions seem equally shortsighted, and therefore, this problem transcends semantics. Neither accounts for the gray in the matter — that reality, whatever that is, does not follow rules such as definitions and mathematical principles. These are conceptions of our own collectively-subjective realities which make it easier for us to explain phenomena which are otherwise unfathomable. On this note, we could potentially consider both nature and nurture phenomenal. That is an objective point on the matter. The first subjective problem is that both positions imply that we don’t have free will. Sure, there are unconscious habits of ancient origins that drive our conscious behavior (e.g. consumption, survival, and reproduction), but there are other, more complex structures that these positions don’t account for (e.g. hierarchical structures of dominance, beliefs, and abstract behavior such as artistic production), and those are infinitely variable from person to person and from group to group. This comes back to the point I just made about phenomenal reality and the conceptions we follow in order to explain phenomena as if they are somehow out there in an objective world that we are not part of.

Not to mention, we all take differently to the idea that free will might not exist. Religious people are often deeply offended by this idea whereas many scientists (theoretical physicists in particular) claim to be humbled by it. Both reactions, I would argue, are disgustingly self-righteous and are the direct consequence, not of truly understanding the concept of free will per se, but of whether or not free will simply fits into one’s preconstructed hierarchical structure of beliefs. One should see clearly, on that note, why a materialist must reject free will on principle alone, and a radical Christian must accept it on principle alone. Regardless of the prospect that the religious person has a right to be offended in this case, and that it is contradictory of the scientist to commit to a subjective ontological opinion when that very opinion does not permit one to have an opinion to begin with (nor can it be supported with any sufficient amount of “scientific” evidence whatsoever), the point here transcends the matter of free will itself: rejecting or accepting anything on principle alone is absurd. This calls into question matters of collective ideological influence. There is power in numbers, and that power is used for evil every bit as often as it is used for good. When individuals, however, break free from those ideologies, they realize how foolish it is to be sheep and to believe in anything to the extent that it harms anyone in any way (physiologically, financially, emotionally, etc.). The scary part about this is that literally any program might trap us in this way (ideologically) and blind us to the potentially-innate moral principles that underlie many of our actions. On that note, we are all collectively very much the same when we subscribe to a program, and we are all part of some program. We are individually very different, however, because we each have the potential to arrive at this realization through unique means. 
We each have a psychological structure that makes up our personality. It is undeniably innate to an extent, yet only partially biological. This reveals the immeasurable value in developing one’s intrapersonal intelligence through introspection and careful evaluation of one’s own thoughts, feelings, perceptions, and desires.

Furthermore, conventional nature-nurture positions are polarities on a spectrum that doesn’t really exist. If we had clearer definitions of each, perhaps the debate would not present a false dilemma. We should reconstruct those definitions to be inclusive of phenomena — think of these terms as categories for ranges of processes rather than singular processes themselves. If we think of these terms as being on a spectrum, we are led to ask the impossible question of where the boundary is between them. If we think of them as categories, we are forced to embrace the reality that most, if not all, processes can fall into either category given a certain set of circumstances, and thus, those categories become virtually indistinguishable. Take, for example, the case of inherited skills: practice makes perfect, yet natural talent seems so strongly to exist. If the truth-value-based spectrum between nature and nurture were a real thing, then neither position would be able to account for both nurtured ability and natural talent; it would simply be either/or. This is a consequence of the false dilemma. It leads us to believe that this gray matter is black and white. If one is decent at learning anything, he or she knows that there is only gray in everything.

But is there? I hope I have explained to some conceivable extent why scientific and metaphysical matters should not be structured into a polar truth-spectrum, and why any attempt to do so would likely present a false dilemma. However, it seems more reasonable to apply spectrum structures to value theory matters such as aesthetics, ethics, and even other personal motivators such as love. This, I will explain further in a later post.

 

Collective Subjectivity = Reality :: The Utility of Phenomenological Thought

In my last post, I explained the differences between and the proper uses of the terms ‘subjective’ and ‘objective’. To recap, these terms do not describe the positions from which one perceives. Of course, everyone perceives subjectively, and objects don’t perceive at all. Therefore, the subject/object spectrum is not a spectrum on which one may judge a matter’s truth-value. The spectrum simply describes the nature of the matter at hand — subjective means “of a subject” and objective means “of an object”. Having said that, how can we define truth more broadly? What determines it?

I think that we can, in many conceivable instances, equate truth with reality. This is based on one of two popular definitions of reality. The first and more popular definition, the one I reject (and under which we cannot equate truth and reality), is that of objective, Newtonian-scientific reality. This holds that there are mathematical laws and principles out there in the universe, already discovered or waiting to be discovered, to which the forces of nature can be reduced. Proponents of this view hold “rationality”, in all of its vagueness, as the singular Platonic ideal which dictates what is true, real, and meaningful. It follows from this that mechanistic science holds the key to all knowledge. The problem here is that mechanistic science (not all science) is founded in the metaphysical belief in materialism. Materialism suggests that all reality is composed of quantifiable matter and energy. Humans, and all living things, are “lumbering robots”, as Richard Dawkins claims. Consciousness, ethics, morality, spirituality, and anything else without a known material basis is subjective in nature and thus superstitious, irrational, and not real. As I have already explained, this worldview rests on a straw-man distinction between what constitutes subjective and objective, for it assumes that this distinction creates a spectrum on which to judge a matter’s truth-value (the more objective, the more true).

Remaining consistent with how I have distinguished subjective and objective is the second, less popular, and in my view, much more useful way of defining truth and reality: what is real is what affords us action and drives us toward a goal. The definition is as simple as that, but its implications have a tremendous amount of depth rooted in the unknown. Instead of holding one Platonic ideal (like rationality) as the key to all truth, there are an infinite number of ideals that humans conceptualize, both individually and collectively, in order to achieve their ends. Therefore, this view affords relevance to a wide range of perspectives even if the nature of the objects being perceived is unknown. The rationalist view, by contrast, is limited to the assumption that the nature of everything has already been determined to fit into one of two metaphysical categories: objective reality or subjective delusion. (This Newtonian theory of reality I have just explained, by the way, is a long-winded way of defining ‘scientism’, a term I often use in my posts.)

Nature doesn’t obey laws; humans do, so we tend to compartmentalize everything else in that way because that makes it easier for us to explain what we want to know and explain away anything we don’t want to know. What we don’t want to know is what we are afraid of, and as it turns out, what we are afraid of is the unknown. So, when anomalies, whether personal or scientific, arise that don’t fit the already-established laws, a Newtonian thinker will categorize them as illusory in order to explain them away. This doesn’t work because even we humans have a propensity to break the laws that we create for ourselves, and this can be a very productive thing. The degree to which this is the case depends on our individual psychological makeups. People who are high in the Big-5 personality trait conscientiousness, for example, tend to obey rules because of their innate need for outward structure and order. Those who are low in that trait are more likely to break rules, especially if they are also low in agreeableness, which measures one’s tendency to compromise and achieve harmony in social situations. Openness, on the other hand, the trait correlated with intellect and creativity, allows one to see beyond the rules and break them for the right reasons — when they are holding one back from progress, for example. These are just three of five broad personality traits that have an abundance of scientific research to potentially confirm their realness and usefulness, even as a rationalist/Newtonian might perceive them. However, the tendency of someone to break rules as a result of their psychological makeup does not only apply to political laws. 
We also create collective social rules among groups of friends and unconscious conceptual rules for ourselves in order to more easily understand our environment. Those systems satisfy the same basic human needs and take the same hierarchical forms as political order does, and they serve purposes that differ only in how widespread they are.

Regardless of our individual psychologies, there are commonalities that all humans share in terms of which types of goals we have and which types of things drive us toward or away from action. Those things are, therefore, collectively subjective across humanity and are what I would like to propose as the most universally real and true things (insofar as anything can be universally real or true at all). This leads me to elaborate further on this goal-oriented view of reality.

Since I used Newton as a scientific lens through which to understand the rationalist theory of reality, I will do the same thing to explain the goal-based theory that I am proposing, but this time using Darwin. Philosophically speaking, Darwin did not commit himself to his theories in the same law-like sense that Newton did to his. In fact, many of Darwin’s ideas have recently been found to be rooted in psychology rather than in hard mechanistic biology. His main principle can be summed up with this: nature selects, and we make choices, based on what we judge to be most likely to allow us to survive and reproduce. That is all. Everything else is just detailed justification which may or may not be true or relevant. In fact, Darwin left open the possibility that the details of his evolutionary theory not only could be wrong, but that they probably were, and he was very serious about that. To take all of those details literally leads one into the same logical trap that the “skeptics/new atheists” fall into when they obsess over the details of the Bible — they oversimplify and misrepresent its meaning, and therefore overlook the broader, most important points that exist. These are straw-man arguments, and they demonstrate a persistent, juvenile lack and rejection of intellect.

The reason Darwin’s main evolutionary principle is psychological is that it is consistent with Carl Jung’s idea of the archetype. An archetype is any ancient, unconscious pattern of behavior common among groups or the entirety of the human population and their ancestors. The need for all living beings, not only humans, to survive and reproduce is undoubtedly real. It is something we understand very little about, yet it drives an inconceivably wide range of behaviors, most of which are taken for granted to the extent that they are unconscious (e.g. sex-drive is causally related to the desire to reproduce). It is not only in the natural world that humans have had to fight desperately for their lives against other species; even among ourselves in the civilized world there have been instances of radical attempts to wipe out masses of people because one group saw another group’s ideologies as threatening to their own survival and prosperity (e.g. both Hitler and Stalin led such endeavors in the 20th century).

Perhaps, instead, if we equate truth with this archetypal, goal-oriented conception of reality, then we can come to a reasonable conclusion about what constitutes truth: that which affords and drives us to action. That is to say that (capital-T) Truth, in the idealistic, rationalist sense, probably does not exist, and if it does, our five senses will never have the capacity to understand it. The best we can achieve and conceive is that which is true-enough. For what? For us to achieve our goals: survive, reproduce, and make ends meet, and if we are very sophisticated and open, to also introspect, to be honest with ourselves and others, and to live a moral and just life.

Subjectivity vs. Objectivity: Not a Distinction of Truth

I wonder which is worse: the fear of the unknown? Or knowing for sure that something terrible is true?

@pennyforyourbookthoughts

Or, if I might add, the negative, unforeseen consequences of that terrible thing being true?

The answer is: “fear of the unknown”, and it’s a little complicated.

Most things one might know “for sure” lie at either end of the subject/object spectrum. What is known on the subjective end of that spectrum is generally thought to deal with personal or value truths of an individual that are understood qualitatively by that individual. What is known on the objective end is generally thought to deal with fact and scientific truth that is understood quantitatively by a group. This is generally correct, but it is only the world of objects that convention accepts as ‘truth’, while the subjective is understood to not contain truth-value at all unless we are speaking about it in material (and thus, objective) terms. So, this spectrum actually seems to measure truth; the more objective a matter is, the more true it is. Here is an interesting misconception that leads me to attempt to make clear the proper uses of these terms.

What does it mean for something to be ‘subjective’ or ‘objective’? First, what they DO NOT describe are points from which one perceives. In other words, ‘subjective’ does not mean “opinion – from the point of view of a particular subject”, and ‘objective’ does not mean “rationally – from the point of view of an object or the world of objects” as, say, Richard Dawkins’ or Ayn Rand’s pseudo-philosophies suggest. They consider the vaguely defined term ‘rationality’ as the universal ideal — Dawkins through materialism and Rand through radical capitalism/individualism. This is shallow and wrong. The reasons for this should be clear. First, everyone perceives subjectively, from their own point of view, and objects don’t have the capacity to perceive to begin with — that is precisely what makes us subjects and things objects! No human perceives at the level of subatomic particles or, by the same token, God. Second, the differences between what constitutes ‘subjective’ and ‘objective’, for the sake of this conversation, depend on how ‘truth’ is defined more broadly. In fact, these terms have nothing to do with truth at all.

Rather, these terms describe the nature of a matter at hand. ‘Subjective’ simply means “dealing with matters of the subject or set of subjects”, and that can range from intrapersonal matters to interpersonal ones. ‘Objective’ means “dealing with matters of an object or set of objects”, and that can range from logical to quantitative to empirical. They DO NOT distinguish any degree of truth. Science, for example, is not objective because it is more true; it is objective simply because it deals with objects. Medicinal practice (which is not a science, by the way), on the other hand, is subjective in nature because it is interpersonal; it deals with human subjects on a case-by-case basis (many physicians do, however, treat their patients as objects, and they in turn view their practice as an objective matter).

This is not to say, however, that each subject perceives and makes judgments to the same degree of truth or accuracy. Each subject analyzes any given situation to the degree that is consistent with their unique set of intellectual capacities; those include intrapersonal, interpersonal, conceptual, spatial, experiential, etc. A good IQ assessment tends to measure a combination of all of those things, but most people are only strong in one or two of those areas. For example, one might have a high level of intrapersonal intelligence (they know themselves well and understand their own mental and emotional states) but lack the ability to impartially deal with other people or objective matters because of how strongly they are affected by the outside world. On the other hand, one might be high in logical or spatial intelligence but lack the ability to admit or even be aware of the emotional states or internal biases that govern the way they deal with personal matters (having one capacity does not necessarily imply deficiency in another, as people high in IQ might prove).

Given all of this personality variability among subjects, can an argument be made about the question stated above? Which is worse: fear of the unknown, knowing something terrible is true, or the negative consequences that accompany knowledge? I can only speak about this in a normative fashion. I also must presume that anything “good”, as it pertains to knowledge, should broaden one’s perception, and anything “bad” should narrow it. Knowing anything “for sure”, insofar as that is possible, should be a good thing in that it should teach us something meaningful, whether it is pleasant or not. The goodness of that knowledge, because it is sometimes unpleasant, is not contingent on the goodness of its specific consequences. Nietzsche was correct when he said that “people do not fear being deceived; they fear the negative consequences of being deceived”. The consequences, after all, are merely a result of cause and effect, and any cause can produce any number of variable effects depending on the set of circumstances under which it occurs. It is that potential for unforeseen chaos that people fear, at least on the surface. But, such matters are too variable and trivial to direct action in a meaningful way when certain higher-level truths (e.g. how should we think about x, why does x matter to us, etc.) have not been accounted for, so to simply fear consequences is shortsighted. To know something “terrible”, on the other hand, is usually just a case of knowing one side of a particular occurrence without knowing the reasons it happened or being familiar with any perspectives apart from the first one that is presented. In other words, it is knowledge without understanding.

It is the unknown that contains that crucial knowledge that will afford us understanding and drive us to action. That is where real truth comes from. We should be prepared to face the unknown at any time, for it is all around us, and the world so rarely unfolds as we expect it to. In fact, there is nothing that I can think of that any one person has complete control over. There are an infinite number of effects and consequences that our actions can and will cause, so perhaps having minimal expectations to begin with is the most healthy way to prepare for the future. Do not fear the unknown, for to fear the unknown is to fear truth. Facing the unknown will prevent one from accepting any knowledge as “terrible”, and it will in turn not only minimize negative consequences, but it will open many unforeseen, positive opportunities.

 

“Strange Tools: Art and Human Nature”

Some years ago, I was talking with an artist. He asked me about the science of visual perception. I explained that the vision scientists seek to understand how it is we see so much–the colorful and detailed world of objects spread out around us in space–when what we are given are tiny distorted upside-down images in the eyes. How do we see so much on the basis of so little?

I was startled by the artist’s reply. Nonsense! he scoffed. That’s not the question we should ask. The important question is this: Why are we so blind, why do we see so little, when there is so much around us to see?     –Alva Noë

The quote above is from the Preface of Alva Noë’s latest book Strange Tools: Art and Human Nature. Noë is a philosopher at UC-Berkeley who focuses his research on mind and cognition. I have been a fan of his work for the last year or so, so I was excited when he came out with this book, which deals with a subject that I concern myself with in my own work. His work initially caught my attention because he already does very well what I seek to do to some degree: blurring boundaries between disciplines and shattering harmful ideologies. After all, is this not necessary if we are to advance thought?

It turns out that there is a lot that Noë and I agree on concerning art, and there was even more for me to learn concerning the relationship between art and philosophy more generally. He argues that both art and philosophy are transformative in that they force us to look at the world in different ways. As he explains in Chapter 8, a good work of art carries the message “See me if you can!” One cannot understand it with one simple glance. It takes a gradual process of organizing and reorganizing our conception of a work of art to fully understand it, just as we must organize and reorganize many aspects of our lives. It is not until we understand the work to this degree that we are qualified to make a critical judgment about it. Art is a transformative tool, like philosophy or language, for shaping our understanding and expression of reality and of ourselves.

In one of his previous works entitled Out of our Heads, Noë makes a convincing and nearly irrefutable case that we are not merely our brains. There is more to consciousness than neural functioning inside the brain. When we confine the mind to the brain, we leave out a crucial part of our being. We are not mere “lumbering robots” as Richard Dawkins argues. We have rights, responsibilities, and the power to make conscious decisions. What exactly is beyond our brains, whatever its nature, is still up for debate. Regardless, this transformation is not something that happens in the art itself, nor does it happen in us, as in, in our brains, as neuroaesthetics would suggest. Rather, art, and also philosophy, happen to us. Yes, there will be correlative changes in brain functioning, but those are merely byproducts of our active engagement with the work.

Art is to the artist as philosophy is to the philosopher. It is the beginning of a conversation that can cause controversy or enlightenment. It might make us uncomfortable at first because it causes us to question our perception as philosophy forces us to question our beliefs. But, it is later humbling, rewarding, and intellectually engaging. It is a tool for thinking critically, and it is strange because it is difficult to understand. Art and philosophy are both Strange Tools.

Reason – The Business of Philosophy

“To say that a stone falls to Earth because it is obeying a law makes it a man and even a citizen.”  -C. S. Lewis

People who believe in science as a worldview rather than a method of inquiry – I call them scientismists – are fascinated by science because they cannot grasp it, just as all people who are not magicians are fascinated by magic. What little understanding they do have of it, in principle, is superficial. The difference between people’s perception of science and that of magic is that magic can always be explained. Magic plays a trick on one’s perception. That is magic’s nature as well as its goal. Science, on the other hand, cannot always be figured out. There simply is not a scientific explanation for everything (or even for most things). Nor is it science’s goal to explain everything! Science is an incremental process of collecting empirical data, interpreting it, and attempting to manipulate aspects of the environment accordingly for (mostly) human benefit. It is experimental and observable. It is, as I will explain, inductive. Unfortunately, sometimes unknowingly, human subjectivity intervenes in at least one of these three steps, exposing science’s limits through ours. So, where does reason fit into this process?

What “Reason” is NOT

One problem with scientism is that it equates science and reason. This is incorrect. Although philosophers of science, most of whom are scientists themselves, have debated the definition of science since it was called ‘Natural Philosophy’, there is one thing that we do know about it and about the difference between it and reason. Science deals with questions of ‘how’. It describes the inner workings, the technicalities, of observable processes and states of affairs. Reason deals with questions of ‘why’. It explores lines of thinking – fundamental goals, purposes, and meanings – for those processes and states of affairs, as well as those for many other non-scientific processes and states of affairs. Having said that, reason is necessary for science, but it is immeasurably broader.

Science cannot alone answer why-questions. Claiming that it can is a mark of scientism. Why is that?

Consider an example from Dr. Wes Cecil’s 2014 lecture on scientism at Peninsula College:

Engineering, which is a type of science that has its foundations in calculus, can tell us how to build a bridge. Engineering can build the biggest, longest, strongest bridge one could possibly imagine. How will the bridge look? We marry science and art to make the bridge beautiful as well as functional. So, even at this first stage of building a bridge – design – science cannot stand independent even of art, which seems so much more abstract.

Furthermore, why do we need to build a bridge? This is a question of reason, not of science. The answer seems to be “to get to the other side of the river”. But what the engineer (who is also a businessman who wants to land the deal for this highly lucrative project) might neglect is that building a bridge is not the only way to get to the other side of the river. Perhaps a ferry would be an easier, more cost-effective option. The engineer can tell us how to build a ferry too, but making the decision between the bridge and the ferry, ultimately, is not the engineer’s business.

Even once the decision has been made to build the bridge, several more questions arise: Who will pay for the bridge? How will they pay for it? Where exactly will the bridge be? Who will be allowed to use the bridge – motorized vehicles only, bikes, pedestrians? These are not scientific questions, nor are most questions in our everyday lives. They are economic, ethical, and political questions that, much like the scientific question of how to build the bridge, require some application of reason, but they cannot themselves be equated with reason. Reason is as distinct from these goals, processes, and states of affairs as it is important to them.

What is Reason?

Reason is a skill and a tool. It is the byproduct of logic. Logic is a subfield of philosophy that deals with reasoning in its purest forms. So, if someone wants to believe that science and reason are the same thing, then they are clearly admitting that science is merely a byproduct of a subfield of philosophy. I am sure that most scientismists’ egos would not be willing to live with that. Although some similar claim could still otherwise be the case, that is not what I am attempting to prove here. Let’s focus on reasoning.

We say that an argument is valid when the truth of the claim follows from the truth of its evidence. There is a symbolic way to express this. For example:

If p, then q; p; Therefore q.

What we have here is not a statement but a statement form called Modus Ponens. It is a formula into which we can plug anything for the variables p and q, and whatever we substitute, the argument will be valid according to the rules of logic, whether or not its statements are true. Try it for yourself! But remember, ‘validity’ and ‘truth’ are not the same thing.
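The distinction between validity and truth can be made concrete with a brute-force truth-table check. The sketch below (in Python; the helper names are my own, not drawn from any logic library) tests whether any assignment of truth values makes every premise true while the conclusion is false – which is exactly what it means for an argument form to be invalid:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p, then q" is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    """A form is valid if no assignment of truth values makes all
    premises true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample
    return True

# Modus Ponens: "if p, then q"; "p"; therefore "q".
modus_ponens = is_valid([lambda p, q: implies(p, q), lambda p, q: p],
                        lambda p, q: q)
print(modus_ponens)  # True: the form is valid

# Affirming the consequent: "if p, then q"; "q"; therefore "p".
affirming_consequent = is_valid([lambda p, q: implies(p, q), lambda p, q: q],
                                lambda p, q: p)
print(affirming_consequent)  # False: an invalid (fallacious) form
```

Notice that the check says nothing about whether p or q is actually true; validity is a property of the form alone, which is precisely the point above.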

The example above describes deductive reasoning; it is conceptual. Immanuel Kant called the knowledge we gain from this process a priori – knowledge justifiable independently of experience. Mathematics is a classic example of deductive reasoning. It is a highly systematic construction that seems to work independently of our own experience of it, and that we can also apply to processes like building a bridge.

There is another type of reasoning called inductive reasoning. It is the process of reasoning based on past events and evidence collected from those events. The type of knowledge one gains from inductive reasoning, according to Kant, is called a posteriori. This is knowledge that is justified by experience rather than by a conceptual system. For example: we reason that the sun will rise tomorrow because it has risen every day for all of recorded human history. We also have empirical evidence to explain how the sun rises. However, the prediction that the sun will rise tomorrow is only a prediction, not a certainty, despite all the evidence we have that it will rise. The prediction presupposes that not one of countless possible events (the Sun burns out, an asteroid knocks Earth out of orbit, Earth stops rotating, etc.) will occur to prevent it from happening.

Illusions of Scientism

The mistake scientism makes is to claim that the methods of science are deductive when they are actually inductive. Reductive science (that which seeks to explain larger phenomena by reducing matter down to smaller parts) most commonly makes this mistake. More often than not, those “smallest parts” are laws or theories defined by mathematical formulas. Scientismists believe that the deductions made by mathematical approaches to science produce philosophically true results. They do not. The results are simply valid because they work within a strict, self-justifiable framework – mathematics. But how applicable is mathematics to the sciences, and how strong is this validity?

“The excellent beginning made by quantum mechanics with the hydrogen atom peters out slowly in the sands of approximation in as much as we move toward more complex situations… This decline in the efficiency of mathematical algorithms accelerates when we go into chemistry. The interactions between two molecules of any degree of complexity evades mathematical description… In biology, if we make exceptions of the theory of population and of formal genetics, the use of mathematics is confined to modelling a few local situations (transmission of nerve impulses, blood flow in the arteries, etc.) of slight theoretical interest and limited practical value… The relatively rapid degeneration in the possible uses of mathematics when one moves from physics to biology is certainly known among specialists, but there is a reluctance to reveal it to the public at large… The feeling of security given by the reductionist approach is in fact illusory.”

-René Thom, Mathematician

Deductive reasoning and its systems, such as mathematics, are human constructs. However, how they came to be should be accurately described. They were not merely created, because that would imply that they came from nothing. Mathematics is very logical and can be applied in important ways. However, the fact that mathematics works in so many ways should not delude us into thinking it was discovered either, for that would imply that there is some observable, fundamental, empirical truth to it. This is not the case either. Mathematics and the laws it describes are found nowhere in nature. There are no obvious examples of perfect circles or right angles anywhere in the universe. There are also no numbers. We can count objects, yes, but no two objects, from stars to particles of dust, are exactly the same. What does it mean when we say “here are two firs” when the trees, though of the same species, have so many obvious differences?

A statement about a number, according to Gottlob Frege, asserts something about a concept, because any application of it is deductive. So, I prefer to say of such systems that they were developed. They are constructed from logic for a purpose, but without that purpose – without an answer to the question ‘why do we use them?’ – they are nonexistent. Therefore, there is a strong sense in which the application of such systems is limited to our belief in them. Because we see them work in so many ways, it is difficult not to believe in them.

Physics attempts to act as the reason, the governing body, of all science, but it cannot account for all of the uncertainty that scientific problems face. Its mathematical foundations are rigid, and so are the laws they describe. However, occurrences in the universe are not rigid at all. They are random, unpredictable, and constantly evolving. Therefore, such “laws” are only guidelines, albeit rather useful ones.

As Thom states, “the public at large” is unaware of the limited practical applicability of mathematics to science, and it is precisely that illusion of efficiency that scientism, which is composed of both specialists and non-specialists, takes for granted. It is anthropocentric to believe that, because we understand mathematics, a system we developed, we can understand everything. Humans are not at the center of the universe. We are merely an immeasurably small part of it.

The Solution

In the same way René Thom explains that mathematical formulas do not directly translate to chemistry and biology, deductive reasoning, more generally, has very limited application in most aspects of our everyday lives. Kids in school ask, “Why am I learning algebra? I’ll never use it.” It turns out, they are absolutely right. Learning math beyond basic addition, subtraction, multiplication, and division is a waste of time for most. What they should be learning instead are the basics of reasoning. Deduction only proves validity, not truth, and induction has even greater limits, as David Hume and many others have pointed out. People, especially young children, are truth-seekers by nature, which is to say they are little philosophers.

There is a solution: informal logic, the study of logical fallacies – the most basic errors in reasoning. Informal logic is widely accessible and universally applicable. If people are to reason well, informal logic is the most fundamental way to start, and start young we should. Children, in fact, have a natural tendency to do this extremely well.

To be continued…

Typology as a Step Toward Critical Thinking

One of the key aims of philosophy, for the individual, is simply to become more open-minded. It is to broaden one’s understanding of what is logical and illogical, rational and irrational, not merely in one’s own estimation, but in fact. This is extremely difficult, so most philosophy course syllabi will include a disclaimer such as this one:

WARNING!
Doing philosophy requires a willingness to think critically. Critical thinking does not consist in merely making claims. Rather, it requires offering reasons/evidence in support of your claims. It also requires your willingness to entertain criticism from others who do not share your assumptions. You will be required to do philosophy in this class. Doing philosophy can be hazardous to your cherished beliefs. Consequently, if you are unwilling to participate, to subject your views to critical analysis, to explore issues that cannot be resolved empirically, using computers, or watching Sci-Fi, then my course is not for you.
-Rob Stufflebeam (University of New Orleans)

Harsh? For many, it is. After extensive philosophical examination of our beliefs, via criticism from others or otherwise, we should find that they are founded on many assumptions. Of course, one cannot make any argument without some preexisting assumption(s). Perhaps the challenge, for some, lies in choosing which assumptions to submit to and which to debate. For the philosopher, though, the challenge is much broader and often more difficult. Philosophy isn’t about formulating beliefs from nothing, but rather, if not to develop beliefs which can be justified and maintained in a logically consistent way, to eliminate belief altogether.

It may seem ironic that the aim of the philosopher is precisely to not have “a philosophy” in the conventional sense of the term. I would argue, though, that this is not a conscious aim of philosophy (perhaps that is the conscious aim of art). After all, good philosophers are not grumpy, old, bigoted skeptics in the way some may think. Rather, this unbelief is merely a byproduct of having explored a subject in a philosophical way, i.e. impartially. As I explained in a previous post, there are no “philosophical problems” per se; there are only philosophical approaches to a problem, and one can approach any problem philosophically.

What do philosophical approaches to problems do for us? The short answer is “lots of stuff”. Let us consider an example: suppose that a man named Scott stands at the foot of a deciduous forest in the winter, long after all of the trees’ leaves have fallen off. In front of the forest, in Scott’s plain view, are two large, lush, green coniferous firs. Scott’s wife, Cindy, asks him, “How many trees do you see?” He answers “two”, for the firs, so green and lush, are the clearest things in his immediate view that resemble what he conceives to be trees.

Cindy’s question initially seemed like a very straightforward, mathematical question. But Scott jumped to the conclusion that firs, not trees in general, were the objects Cindy wanted him to count. Of course, there are many deciduous trees directly behind the two firs. He could very well have replied ‘a bunch; too many to count’ if he had simply looked past the eye-catching firs to the vast yet barren, leafless forest. As we know, the deciduous trees are every bit as alive as the firs; they are only dormant for the winter. Even if Cindy’s question specified the firs as the trees for Scott to count, answering in a straightforward way might pose more questions, leading to a philosophical discussion about, say, mathematics (e.g. what is meant by “two firs” when the trees literally have so little in common?).

This is just a metaphoric example, but the point is this: an aim of approaching questions in a philosophical (and similarly, an artistic) manner is to gain the ability to see past what is immediately present to us. After all, what is immediately present to us are often dubious assumptions formulated by culture, nurture, institutions, etc.

Immediately, one might see why this type of “critical thinking” can not only be difficult but can also get us into trouble, and it often does. Not only are individuals’ beliefs founded on assumptions which are very often irrational, but the same is the case for the belief systems of businesses, institutions, and personal relationships. People in these contexts can be very sensitive to criticism. “Power in numbers” exists and is very often harmful in a philosophical sense, for collective bodies are generally more easily influenced by foolish belief systems than individuals are (cult mentality). Those who break from the group and question things in a fundamental way are only thinking for themselves. They become outcasts, albeit curious and honest ones. Just as an individual should strive for harmony between his outer world and inner self, so should a group be resistant to any type of dogmatism. How do we achieve this?

There is no sure-fire solution, for if there were, it would follow that all people innately think the same way, and this is obviously not the case. In fact, thinking for yourself, which is to say, thinking differently from everyone else, is absolutely vital if you want to thrive in any regard. Philosophy and critical thinking in general can help if one is up for the challenge, but it is not advisable for just anyone to dive right into philosophical reading and discussion (Philosophy is difficult, and few people have the natural tendency to think openly about sensitive subjects to the extent that one must to be successful in philosophical discussion – see the WARNING above). There are other ways.

Each person has a different mind which presents a new set of challenges – challenges for which they will find solutions only if they come to terms with themselves first. For an outwardly-focused extrovert, this generally means finding comfort in one’s own skin. For an inwardly-focused introvert, it means finding one’s place in the outer world. However, it is much more complicated than that. This is one reason I think Jungian typology, personality psychology, and light aesthetics present, for the general population, more relatable ways to deal with questions that are normally of concern to ethics and moral philosophy. No one broad ethical theory will satisfy everyone, and I find it nearly impossible to adapt such a theory to a wide range of people in a conceptual sense, and even less so in a practical sense. Typology is an extremely effective method for understanding one’s self and others.

How can each individual maximize his or her ability to think, act, and thrive? First, we must acknowledge that every person has his or her own version of the “good life”; it is each person’s goal to figure out what that is and aspire to it by maximizing his or her cognitive potential. Ethics, then, does not, at least initially, seem to be of much use. This sort of “self-actualization” can also be vital for maximizing one’s participation in philosophical discussion. However, before subjecting oneself to harsh philosophical criticism, it is advisable to come to know oneself. Jungian typology is a great method for taking that first step, and then, perhaps, philosophy can pave the rest of the path.

To be continued…

“Ideology and the Third Realm” – What is Philosophy?

In his book Varieties of Presence, Dr. Alva Noë discusses many important aspects of perception. He makes a convincing case that we achieve contact with the world through skill-based action. Our understanding of a new experience is a collective recurrence, both conscious and unconscious, of past experiences. It is a dense work that deserves the attention of other contemporaries who concern themselves with matters in cognitive science and philosophy of mind. Perhaps I will do a full review of this book at a later date, but for now I would like to focus on a matter addressed in the final chapter, “Ideology and the Third Realm”, which marks an important departure from the philosophy of consciousness and neuroscience.

What this chapter does is something that every philosopher should do periodically: broadly revisit the fundamental importance of philosophy as it relates to the context of his work. I will be a bit more general than that, since I am not a “professional” philosopher. The role that philosophy plays in the world seems to be constantly changing. But is it? Perhaps it is only the popular understanding of what philosophy is that changes. I think that is, in part, the case, but it has more to do with the uses of philosophy. Some of those uses have remained constant since the beginning of recorded thought, while others change by the minute. For this reason, it is impossible to pin down. But one need not pin it down. Philosophy exists to be used, and it is a set of skills that will hopefully never become extinct. There is no dictionary definition that can sufficiently explain it, much less emphasize the field’s vital presence. I will give a general overview of the chapter but mainly share my thoughts about what philosophy is and why it is not only relevant, but necessary. Before I continue, I should define an important term which will be mentioned several times in this piece.

Q.E.D. (Latin) quod erat demonstrandum – meaning “which was to be demonstrated”

Many people, in and out of academia, naively think that philosophy deals with questions that warrant a Q.E.D. response. When you take a philosophy course, you often have to write at least one argumentative essay in which you choose a position of a philosopher whom you have read, attempt to prove him wrong, and then attempt to formulate a complete view of your own with supporting evidence. This way of “doing philosophy” is popular in undergraduate, base-level courses. It helps you to develop reasoning skills that can be applied anywhere. This is important, no doubt, but it is not where philosophy ends. Why? First, writing is not even necessary for “doing philosophy”. The only thing that is necessary, I would argue, is thinking. Thinking must be assisted by reasoning, but this is only the start.

This does not imply that we should identify the philosopher as one who locks himself up in his ivory tower and speculates about a deluded, idealized world. To philosophize well, one must also be able to communicate his ideas in some way, and that will involve language, whether spoken or written. This is one reason philosophy courses are difficult: one must already have a certain level of reading, writing, and speaking proficiency to succeed. The full title of the final chapter of Noë’s book is “Ideology and the Third Realm (Or, a Short Essay on Knowing How to Philosophize)”. Since language is such a crucial part of this issue, I will begin by taking a language-based example from that chapter:

‘The King’s carriage is drawn by four horses’ is a statement about what?

a) the carriage;  b) the horses;  c) the concept it asserts;  d) other

Immediately, one might think that the answer is ‘a) the carriage’. This seems completely logical, given how most of us understand language. ‘Carriage’ is the subject of the sentence, so any terms that follow should (theoretically) describe it. It is certainly not ‘b) the horses’ because that is the object receiving the action, and nor can the answer be ‘c) the concept it asserts’ because nine out of ten people in the room don’t know what the hell that means. Right? Good. It’s settled.

Gottlob Frege had other ideas. He thought that a statement about numbers is a statement about a concept. When we attempt to answer the question about the subject matter of the “king’s carriage” statement, we are speaking in conceptual terms. We are not using the statement to assert anything. So, the answer must be ‘c’. He gives more reasons for this, of course, and he makes us realize that there is a sense in which we become confused about what we mean when we say ‘The king’s carriage is drawn by four horses’. However, despite the piercing quality of Frege’s argument, we have a much stronger sense that we are unconvinced by his theory of language.

The problem with Frege’s claim, for most of us, seems to be that he had a preconception of the meaning of the statement ‘the king’s carriage is drawn by four horses’ before he was even asked the question. He had already established that any statement about a number, without exception, is a statement about a concept, so he was able to answer the question without thinking. The problem with our rejection of his claim is that we are doing exactly the same thing. We also answered without thinking. We held the preconception that every sentence is about its subject. This preconception is guided by the larger logical construction by which we understand language, and it is certainly no more correct than Frege’s view simply because nine out of ten people in the room agree that it is (that would be to commit ad populum). We take our theory of language for granted, and perhaps Frege takes his for granted too. There seems to be no Q.E.D. conclusion here. What we are all doing, if we become inflexible, if we stick to our answer to the question without sufficient evidence to support it, is committing what I call the ideological fallacy.

However, subscribing to ideologies is not always fallacious. It is only when an ideology is applied in a dogmatic way that it becomes wrong. When an evangelical Christian lives by Jesus’ principle, “love your enemies”, that can have very positive effects. It may minimize conflict in the person’s life. It may allow him to stand strong in the face of racial adversity. It may allow him to accept people more openly, and very often the favor will be returned. However, the favor is not always returned if the Christian is careless and thoughtless. Despite his belief that he loves his enemies, participating in radical evangelical activism would intrude on others and create more conflict, leaving his conception of “love” open to question. It takes Christianity out of context and misapplies it to the world in a negatively ideological way. There is nothing about the beliefs in themselves that is illogical, destructive, or even wrong. How they are used determines that.

I will use another example. Evolutionary biology can study preserved skeletons of million-year-old Homo erectus specimens and learn about how we sapiens, several evolutionary stages later, came to be. This could contribute to our understanding of how humans will continue to evolve (or devolve). However, evolutionary biology can only contribute a small piece to the puzzle of predicting the future of humankind. It needs influence from many other fields to even begin to solve any of its own problems. So, when Richard Dawkins uses the broad concept of evolution to attempt to disprove creationism in any one of its countless forms, he is taking his work out of context and applying it in a radical, dogmatic, negatively ideological way. There is nothing about evolutionary biology, as a field, that is wrong. It is a highly useful method of inquiry. But there is still plenty we do not know about how humans have evolved. We generally just accept that they did with the minimal evidence that we have, just as the evangelical accepts his own conception of loving his enemies based solely on Jesus’ teachings. In this case, both parties look equally silly.

Of course, the example above presents two extreme cases. Although we answer the “king’s carriage” question one way and Frege answers it in another, and we seem to have to agree to disagree, there is still a sense in which both sides think the issue is objective in nature and that it deserves further discussion. In order to have this discussion in a logical, respectful, open manner, we must become philosophers, and one need not go to school to achieve this. Alva Noë wonders how we might categorize our dealing with the “king’s carriage” question. It is not in the realm of the material (e.g. biology), nor is it in the realm of belief (e.g. religion). It seems to be within some third realm. Noë begins to explain with this quote:

The point is not that Frege or we are entitled to be indifferent to what people say or would say in answer to such a questionnaire. The point is that whatever people say could be at most the beginning of our conversation, not its end; it would be the opportunity for philosophy, not the determination of the solution of a philosophical problem. (Noë, 173)

“At most”, Noë says, “(what other people say is) the beginning of our conversation… the opportunity for philosophy…” This is another reason philosophy is so difficult! At the very most, when our view stands in opposition to another, we may only have the opportunity to do philosophy. We rarely get there. When we do get there, two or more people are concerning themselves with the third realm of a problem. What is the third realm? It is the realm of possibilities with minimal influence from ideologies. It is abstractly objective yet, as I will explain later, not in the realm of matters Q.E.D.

Where is this third realm? Well, ‘where’ is the wrong question. Bertrand Russell once said of philosophy that it is “in the no-man’s land between science and religion” because it always seems to be under scrutiny from both sides. Perhaps, in some cases, this is correct. It can serve as a mediator between two extremes, but, on the surface, this only explains one of the unlimited applications of philosophy.

Upon first reading or hearing Russell’s quote, one might be inclined to place philosophy in between science and religion because it deals with reason over belief (like science) and thought without quantifiable experimentation (like religion). This would be a shallow interpretation that lacks crucial insight. Russell was perhaps a bit too concise for the average interpreter. He did not mean, as I understand him, that philosophy is inside the space between science and religion. His remark has deeper implications which resonate with those of Noë (despite the fact that Russell was a logical atomist and Noë is a phenomenologist, so they would probably have a duel for other reasons). Explaining philosophy has nothing to do with where we should fit it in relation to other fields. It has to do with how we can apply its skills, and in that way it is unique. Those skills are skills of thought. Developing them first requires one to look inward, rid himself of bias, and then turn outward to consider all possibilities. This is still only the beginning. Once we achieve this skill of thought, what do we do with it? We continue to practice and improve it. How? The answer is simple, but the application seems, in some cases, impossible. We communicate.

We share our ideas with others who have, to some degree, developed the skill of clear thinking. Of course, communication, whether written, oral, or otherwise, is a practical skill in itself that develops naturally, mostly prior to but also simultaneously with the skill of thinking. We tend to adapt our ability to communicate only to the situational extent that we need it, and that can be limiting. When doing philosophy, anyone can participate, but only to the extent that they can think clearly. Philosophy tests those limits, which is why both science and religion so often scrutinize it. Though they deal with subject matter that seems contradictory, (mechanistic) science and religion do have one general thing in common: dogmatic ideology. Philosophy, on the other hand, is perhaps the only field that makes the elimination of dogmatism one of its primary goals.

Doing philosophy is not only about increasing the degree to which people can think, but about being open to different forms of thought as well. What is fortunate in this regard is that each person in the conversation, if one is to find himself in such a conversation, has probably achieved their skill of thought through different means. For example:

There may be one who developed his thinking through philosophy itself, who rigorously studied informal logic to learn how not to commit errors in reasoning. He may also be able to contribute the history of thought to the conversation and explain why certain schools of thought are obsolete in academic philosophy. There might also be a more scientifically-minded person who, in a graduate school lab, performed the same experiment under the same conditions hundreds of times but got variable results. He questioned why this was happening (if the laws of physics are supposed to be constant), so he turned his research to the inconsistencies and realized that uncertainty transcends mathematical equations. He is now able to think more broadly about his work. There might also be a Buddhist in the group who practices intensive meditation. He can turn off influence from his sensory world and walk on hot coals without getting burned, or he can submerge himself in freezing-cold water without developing hypothermia. He is able to clear his mind of all unnecessary matter. Each person achieves the same thing – to think clearly, skeptically, critically – through different means. They each learn from one another and gain a broad range of insights.

Also, and perhaps most importantly, each person in the conversation should be genuinely interested in learning new perspectives in order to improve their own points of view. There is a sense in which someone may have achieved access to the third realm of conversation to a lesser degree than the others, and at a deeper point in the discussion, he gets flustered and has to back out. This is perfectly fine as long as he does back out, at least until his temper cools (if he does not back out, he will disrupt the conversation). He has pushed his boundaries of clear thinking to a level that the others have not, and that can be a very constructive or destructive thing, depending on his mindset. But it is vital that all parties directly involved maintain self-preservation throughout the conversation. If there are any unsettled nerves, it is almost certain that at least one participant is not being genuine, but rather is too outwardly focused and is perhaps ultimately trying too hard to prove himself right or the others wrong. Although such a participant might seem to contribute insight to the conversation, he will inevitably expose himself as operating from within an ideology, thereby rendering himself a nuisance. Philosophy is no activity for the pretentious or egocentric, contrary to popular belief. In fact, the absolute contrary is the case.

Do any philosophical questions warrant a Q.E.D. response? (Does philosophy ever prove anything?)

No. In case this is not already clear, there are, in a sense, no “philosophical questions”. There are only philosophical approaches to questions. Approaching the third realm of a problem requires one to be, as stated earlier, abstractly objective (or perhaps objectively abstract). There are limits to how objective one can be, no doubt, but the aim of advancing thought is to learn more and more about the world and how those in it think, so that we can improve on that, both individually and collectively. It exposes dogmatism and reveals the sheer greyness in any concrete matter. Need I give examples of when this might be useful? I challenge anyone to give an example of when it is not, and thereby present an opportunity for doing philosophy! This is why philosophy is so widely applicable.

To draw an analogy – toward the end of Noë’s final chapter, he mentions Immanuel Kant’s aesthetic view that the reality of one’s response to a work of art is based in feeling – it is not contingent on one’s ability to explain it. Similarly, Clive Bell described a “peculiar aesthetic emotion” that must (first) be present in something for it to be considered art. It is that feeling you get when you listen to a beautiful composition, watch a film that evokes tears, or look at Picasso’s Guernica after you have heard the gruesome story behind the painting. I had experienced this aesthetic emotion many times, but it was my former professor at the University of New Orleans, Rob Stufflebeam, who, whether he intended to or not, led me to realize that all of those experiences involved the exact same emotional response. Perhaps it is recognizable only to those who have experienced it; in any case, it is something that need not, and often cannot, be explained.

Likewise, a philosophical approach to a problem is, at its very best, not an emotional experience as with art, but an all-encompassing intellectual experience. It is not a heated argument, nor is it even a controlled debate. It is a respectful, open-ended discussion about ideas between two or more people in an intimate setting. It raises the awareness of each person involved to a broad level of skepticism that, perhaps very strangely, brings with it an aura of contentment. It is obviously not the same feeling one gets with the peculiar aesthetic emotion, but it is parallel in the sense that when you are part of it, you really know. That reality seems to transcend explanation.

Final Thoughts

Alva Noë has developed this idea about perception: “The world shows up for us, but not for free. We achieve access to it through skill-based action.” It is a combination of developing our conceptual and practical skills that allows us to understand the world and live in it. Achieving access to the third realm of a question, as I would consider it, is one of those countless skills. It comes more easily to some than to others. Just as one person might naturally have the ideal physiological makeup for learning how to swim (lean, broad shoulders, webbed feet, etc.), another person’s brain might seem to be better wired for clear thinking. Everyone, to some degree, with the proper amount of training, can swim. Likewise, everyone can, with practice, think clearly. The more one practices by looking inward, ridding himself of bias, and working up the courage to subject himself to critique, the more he can contribute to the conversation in his own unique way. How much one wants to participate is solely up to him, but to not participate at all is to miss out on a hugely important (and my personal favorite) part of the human experience.