A Morning Meditation

“I really only love God as much as the person I love the least.”

-Dorothy Day

Your love for God, your understanding of the world, your connection to nature and the universe, your pursuit of Truth — however you conceive of the metaphysical source and governing body of all things — exists in all, and in it all are contained. Every part is necessary for the functioning and flourishing of everything else in it, and therefore, your willful coherence with all else is necessary for your functioning and flourishing as well.

Of course, people can act in a way that you will perceive to be out of line, but that should be just fine as far as you are concerned. This raises the distinction between acceptance and tolerance, for an individual possesses the capacity — which each also has a duty toward themselves and others to cultivate — to accept everything and tolerate nothing. That is, to intuit, without interference from emotion or sense, what is true and good for one and for all, and not to tolerate, to the extent that one can be locally effective, anything that is not.

This process is not about you. You may need to focus on yourself for some time — to build a boundary between yourself and others for some time — to get to know certain aspects of yourself to the point where intuitive reflection is at all possible. But, that is not the goal. You are not the goal. Total coherence is the goal, and you have a vital duty to play your role. The separation you create between yourself and others is merely so that you may discover what that role is. However, others will play a role in determining how it is that you will provide value to all (of which they are a part), so your engagement with them is ultimately necessary. It may not be what you expect or desire, but that is why one must tame the ego fire.

To cohere with all does not mean you must work with just anyone, but rather that you must accept people for where they are and for the role they are capable of serving, depending on where they stand on their path of spiritual development, whether that be to feed the poor and heal the sick or to delve into self-destructive hedonism before hitting rock bottom merely to realize they exist. Regardless, they are in need of your love, from near or far, and that implies full acceptance. Without that, there is incoherence, and thus, incompleteness. Your love for God, nature, truth, others, and yourself is only as good as your love for the one you love the least, for they too play a role that is necessary for you.

Though, again, that is not the goal…

Debunking the “Free Will Illusion”

The other day, I read this PsyBlog article that attempts to explain a psychological study which, according to the author, seems to imply that humans are mechanical robots merely controlled by neuronal impulses in our brains, and that free will is an illusory conception that humans have constructed to cope with death. There have been numerous studies, including the one described in that article, which show that neurons in the brain begin to fire before the person can report being conscious of their decision to pick up a pencil, or before they can predict exactly which one of five circles on a computer screen changes color, for example (the latter is the experiment referred to in the article). The article also mentions the term ‘unconscious’ several times, and its usage implies that ‘unconscious’ should be defined merely as ‘the mechanical workings of the brain’. My aims in this post are to explain why that is an oversimplified and unsophisticated definition of ‘unconscious’, and to suggest, partly on that basis, why these studies not only do not imply that free will is an illusion, but have virtually no bearing on what constitutes free will to begin with.

A Less Trivial Definition of ‘Unconscious’

There is one thing that the article (and anyone who would agree with it) gets right: we are not in total control of what we see, understand, and believe. However, this truth cannot be maintained at every degree of analysis imaginable (the highest degree arguably being the ontology of free will and morals). This raises a semantic problem: everyone has their own definition of what constitutes “unconscious” and even “free will”. The level of analysis that the article attempts to operate on is that of moral ontology, but it fails. Instead, it maintains the assumption that all that exists in us are mechanistic processes, and that those processes are “unconscious”. On this view, we are our brains, and our brains are computer processors that take in data and organize that data for output; when we are faced with stimuli relevant to our experiences, we merely react in accordance with our pre-organized data. Eh, well, partially correct! We are more nature than nurture, after all. But how does this imply that we don’t have free will? Let’s step back first. What can we infer from this article’s usage of ‘unconscious’?

“Neural activity is unconscious”, materialists will hold. Yes, we know that to the same extent that we know that digestion in the intestines is unconscious, and it need not be overstated. It is merely a biological process. However, biological processes tell us very little about our conscious world — the reality that we actually experience. Materialists presuppose that the origins of our behaviors and decisions are pre-programmed inside our brains, and that neuronal activity is the first step in activating those programs (which we call decisions). This is an assumption, albeit a rather interesting one. Those who believe that this process is the causal origin of our behavior commit the most basic fallacy in science: mistaking correlation for causation. Why assume that the brain is the beginning when the brain requires the world to gather information in the first place? And why assume that we are so disconnected from objective reality that we are separate from it, rather than intimately connected to it in a way that gives our actions ancient origins? What is left over when we commit to this materialist view of perception?

A lot, I would say. In fact, one can control some aspects of even these biological processes. If I am lactose-intolerant, I can consciously avoid dairy so that my digestion stays on a regular track. In the same way, I can somewhat control what my brain “processes”. If I am at a music festival, for example, and I have to decide whether to attend the concert of a band I have already seen or that of a new band I haven’t yet seen, my decision will affect what my brain processes. If I choose the familiar option, I will go into the show with certain expectations based on what I have already processed from previous shows of theirs. If I choose the unfamiliar band (which is statistically less likely), then I am choosing a new path. My experience will not be dictated by prior biases, and, in a way, the show will present a challenge — a challenge to what I already know and expect in music generally. So it is not only those biological processes that are necessarily unconscious; so are some of the decisions we make that come prior to those processes. We can, however, take control of those decisions if we think about learning and decision-making in the right way. So, let’s think about it like this: perhaps the origins of our behavior and decisions are in the world, but not in the minute-by-minute, stimuli-centric world that neuro-materialists would like to believe in. If it were that way, then we would not even be able to inquire about how our minds work, as we’re doing now (which requires temporarily stepping outside of them), much less overcome social pressure to leave our friend group at a music festival to see the band we want to see, alone.

What I am dancing around now is the more nuanced meaning of ‘unconscious’ that we find in fringe psychology and spiritual circles.

“To know oneself is to make the unconscious conscious.” — C.G. Jung

We can observe, in my field of birth chart astrology, that people live out their charts until they seek knowledge about them. The birth chart represents one’s innate set of perceptions and predispositions for responding to different aspects of reality. Someone is likely even living out their transits when they come to me for consultation — i.e. there is something external compelling them to learn about themselves at a particular time — but free will is clearly expressed in how they make use of the information I give them. The better people know themselves, the more opportunities they will have to express their free will. There is still no guarantee, however, that they will. As I always say, I don’t tell people what to do; I help them own what they choose to do.

There is a strong case that it is not when someone is acting from their proclivities, but only when someone acts against what is normal and comfortable for them, that they are expressing free will. This “opposition to the self” kind of behavior must be founded on moral principles, boundaries, or, at the very least, external rules. These represent three different degrees of self-governance and the spectrum of our human relationship to that concept, and only one of them fully shows that free will can be expressed in any case. In the next post, I will describe these three levels and show how free will connects to that one of them, perhaps revealing something about the origins of autonomous decision-making that eluded us at the beginning of this article.

Writegenstein #7: Disagreement as Misunderstanding

“611. Where two principles really do meet which cannot be reconciled with one another, then each man declares the other a fool and heretic.”

-Ludwig Wittgenstein (On Certainty)

Disagreements don’t exist — only misunderstandings do — if we take truth to exist. For a relativist, there is no difference between the two.

Furthermore, for the relativist, definitions don’t exist at all.

Nor does anything else exist for the relativist. To follow relativism through to its conclusion, one must be nihilistic and solipsistic, positions that are also unsustainable because it would follow that identity itself is impossible.

For one who accepts truth (in conscious thought, that is — for we all do in action), however, understanding is a prerequisite to opinion. An opinion is a sort of judgement. To understand is to have thought critically, and to think critically is to have observed impartially. Few who practice this method would consider their views to be mere opinions, worth just as much consideration as the views of one who has no conscious basis for his own.

Have well-reasoned perspectives, not opinions. Some “opinions” have a basis, and some do not, so we should not consider them to be of the same category. When an opinion does have a basis, it has one by coincidence. The value one intends, truth or otherwise, determines which is which.

An opinion is never put forth with the intention of being true. It is either a nonsensical impulse or an attempt to be right. It will be fought for by way of rhetoric rather than reasoning. Any tools of reasoning that it employs will be inverted. For example, one might commit the appeal-to-pity fallacy in order to win the argument, rather than avoid it so as not to be fallacious in one’s reasoning. All well-reasoned perspectives have, at the very least, the intention of truth. Otherwise, the end is chosen at random by man, all means are justified, and logic is inverted to serve that end, if it is used at all.

In summary: rhetoric is the art of debate — i.e. inverting logic to persuade someone to your side, for your ends.

Rhetoric, indeed, has a solipsistic aura to it. It is not motivated by what is good for one and for all, but rather by what is good for oneself alone. The gain may be one of finances, power, status, or the appearance of virtue, all of which are superficial and, in the end, not good for oneself either. It leaves one alone, imprisoned, on one’s own island.

To reason well, by contrast, one’s only concern should be that what is true is revealed. One must be indifferent to the specifics of the outcome. To be virtuous in one’s faith is to believe that what is true is also good — to not allow one’s own motivations to interfere with that inquiry. To be naive in one’s faith is to put one’s trust in the motivations of man.

One should not even trust one’s own motivations if one cannot first observe them.

To be made in the image and likeness of God means that we have all the power we need within us — to discern deception and to speak and act in favor of what is good for good’s sake. To have the power of God within us is to have the power of Truth within us.

God, goodness, and Truth are the same concepts, dressed differently.

Logic itself is not good or true. It is a tool that we have been given with which we can choose good or evil, true or false. The human will is the only entity that possesses the power of good and evil. We have the conscious ability to use our tools for either at any point. To lack this ability is to be imprisoned.

It should be regarded as good that logic allows us to follow a perspective or opinion through to its conclusion. In a disagreement between two people, no more than one participant is doing this. Disagreement happens when one person has a more tightly knitted sifter for information than the other, so he can see what is relevant and irrelevant more clearly, thus formulating a more solid basis for a perspective. The one who has not tightened the knitting of his sifter jumps to conclusions, perhaps not from sifting at all, but from constructing a viewpoint out of whatever data he happens to have. This is the fallacy of composition.

As we know, a philosophical argument has three parts: assumption, evidence, conclusion. For a reasonable discussion to occur, the participants must treat the assumption(s) as a foundation for the evidence, and so all participants must agree on that foundation. If this is not the case, then the discussion should be about that basis itself before moving forward with evidence. Otherwise, the participants will have different ideas about what constitutes evidence to begin with, and this will leave the discussion at a stalemate.

It is that which is not being questioned in the argument that should first be understood.

A spiritual being is a truth-centered being. To be spiritual is to value truth and goodness above all and to have intended them even if one falls short of them in action.

To mull over a disagreement is to expect that the other understands what you understand. This is a mistake.

Even if you have put forth your position in clear, logical terms, it may not be the case that your message has been received as you intended. Do not expect anyone to understand. Speak simply and authentically, as if to allow your message to flow through you.

If your message is true, then it is not yours to begin with. You are merely the vessel for truth, so take no offense to how it is received. Anticipate that it will be met with great resistance. Surprise about this will cause you much unnecessary anguish, as will anything that you seek to control but cannot.

Understanding human perception more broadly will afford you forgiveness in particular cases.

Understanding that what is true is good will afford you the willingness to investigate assumptions before the evidence.

It is not your job to convert someone’s assumptive basis, for that might entail a deeper spiritual journey that they are yet to embark upon. One can only pursue that journey from their own will. They may have to experience hell before they enter into that darkness.

To disagree with someone may spark a volatile response. They will, as Wittgenstein implies in the quotation above, commit argumentum ad hominem. This is proof enough that there is something deeper that they misunderstand. They are frustrated neither with you nor with your argument, but with themselves. If you engage them, you are showing the same incompetency in your own way, whether it concerns your assumptions, your evidence, or an inability to understand the reasons for their disagreement and to forgive them for it.

To show indifference toward what they say but concern for why they say it is to love them.

Similarly, to lack the ability to release yourself of their struggle reveals a struggle of your own that you must address. To simply present what you have concluded as true, and step away, is to love yourself.

The paths to both heaven and hell are dark. The latter is guided by one’s own senses, which is to say that one abandons oneself and others in an attempt to serve oneself. The former is guided by intuition — i.e. faith that the light at the end is already self-contained, is also something greater of which one is a part, and is therefore also best for all others involved.

Don’t Use Sarahah; Own Your Words!

The problem with the new anonymous messaging app Sarahah isn’t that it creates a platform for cyberbullying (just walk away from your computer screen, jackass); it’s that it is playing a role in the leftist movement against free speech by ridding people of the responsibility of owning their words.

I don’t need to have used the app to know this. It’s obvious. In this time when social media is allowing people to communicate less and less directly, making them more and more thin-skinned, careless with their speech, and, quite frankly, stupid, this app deals with the free speech problem by cleverly working around it. While most leftist social media platforms attempt to censor content or simply suspend accounts when people say things that don’t conform to their collective beliefs, Sarahah allows the content to flow freely because no one in particular can claim responsibility for it. It is an anonymous free speech safe space, if you will.

Of course, the app knows who said what, and it gives you the option to anonymously block users if you get an undesirable message, so content can still be managed in that way.

Fair enough.

If someone messages you through the app telling you point-blank “you’re a dumb fuck”, you might not want to hear from that person again, since they lack the tact and constructive criticism that the app would like of its users. The same would be the case in real life, you can be sure.

The point I’d like to make in this post is that the Sarahah concept can seem all well and good on its own, but when you put it into a real-world context, as with any new product, the users will determine its true identity. (This is through no clear fault of the creator; not every app developer knows enough about human nature to think through every scenario in which someone might use the app differently than he intended. This is why user feedback is so crucial.) This post is my prophecy about why Sarahah’s identity will turn out more bad than good, and why I would generally advise against using it.

Why Sarahah is Bad for Business

A good business provides a valuable service to the community. In order to ensure that the service continues to grow and improve, it is necessary that employees work in an environment conducive to the free exchange of ideas. That might make Sarahah seem like the perfect app, right? Actually, the contrary is true because of what the idea leaves out.

What is just as important as the idea itself is the employee’s taking credit for it. Sarahah doesn’t allow for this, neutralizing the dominance hierarchy within the company. The employer can reap the benefits of the idea, but he does not have to give credit where it is due. This is convenient for the individuals at the top, whose jobs won’t be threatened, and for the human resources department, because they will have fewer cases to deal with, but it could hurt the company in the long run as employees’ intellects are suppressed and promotions are given to the wrong people. This is bad news for female employees who, if they thought they were disadvantaged in the workplace before, will be even more so now, perhaps without their even realizing it. It is also bad for male employees, who will inevitably lack the motivation to give any criticism at all.

Here are the differences between how women and men will be affected by Sarahah in the workplace.

Sarahah sneakily caters to the female temperament.

From a personality perspective, women tend on average to be higher than men in the Big Five trait agreeableness. This means they are more compassionate, less assertive, tend to underestimate their abilities, and don’t as often take credit for their achievements. They are also higher in trait neuroticism, which is sensitivity to negative emotion. This makes Sarahah the perfect place for women to speak their minds. They don’t have to give criticism directly, and they don’t have to claim fault if that criticism hurts someone’s feelings.

This might sound appealing to women, but I see it as taking advantage of women’s common workplace weaknesses. Though (probably) not intended, the inevitable consequence will be that even fewer women stand out among their coworkers and are considered for promotions. They’ll be comforted, now more than ever, that simply sitting there and doing their jobs is enough, instead of taking the risks necessary to advance. (Of course, personality studies show that this is a good thing if they want to maximize their mate options, as women prefer mates who are at least as smart and successful as they are.) All of this is true for some men as well, but I suspect men in general will encounter a different set of problems.

Sarahah Suppresses the Male Intellect

Since men are more assertive and aggressive, they will still be more likely than women to give criticism face-to-face, and there’s bad news for men who do. If a company begins to rely on Sarahah as the primary means by which to take criticism, then direct dialogue between people will be constricted, not reinforced. Any man who does not use the app to speak his mind is taking a dangerous and unnecessary risk. He may get into trouble and risk losing his job if his speech violates company policy. He won’t be able to play the traditional, competitive, risk-reward game that is crucial to his potential to climb the company ladder.

Challenging the status quo is an important way in which men typically show their ability to think critically, articulate, and negotiate – skills that are necessary for managing a good business at all levels. Sarahah suppresses these skills. This will allow HR to keep the hiring process neutralized, so they do not have to promote people within the company based on merit, but rather by whichever absurd and counterproductive standards they choose (e.g. to meet notoriously anglophobic ethnic diversity quotas).

Why Sarahah is Bad for Personal Relations (to point out the obvious)

It might sound appealing to find out what your friends and acquaintances really think of you, but I suspect that the anxiety that results from not knowing exactly who said those things will far outweigh any positive effect the criticism may have on you. Imagine walking around at a party where all of your closest friends are present, knowing that half, maybe even all, of them have only been able to honestly open up to you anonymously.

A good friendship or relationship should be not only conducive to, but founded on, open, honest communication. I know it sounds cliché, but this cannot be overstated, given that Sarahah exists to deny it. In fact, we identify who our friends are based on how open our communication is with them, do we not?

Consider this: your primary or best friends are those few whom you can be absolutely open with. You know who they are. Your secondary friends encompass a wider circle. They are people you may call on regularly, but the subject matter of your communication with them is limited, whether to specific topics or to a level of depth in general. Your acquaintances are everyone else you know – people you could (and often should) do without.

Which friend group do you suspect is the most likely to send you overly-critical messages on Sarahah? Acquaintances? The people who know you the least?

Hmm, maybe not.

Acquaintances might be the most likely to send you the occasional “you’re a dumb fuck” sort of message. But, since they know you the least, they think of you the least. They care for you the least. They’re the least likely to try to help you. So, I’d guess not.

What about those best friends who use the app? They very well may use it to give you some much-needed advice, but who are they? Even if the advice is sound, are they really your friends if they can’t sit you down and talk to you?

You might be disappointed (or even relieved, if you’re a particularly strong person) to find out that some people you thought were your best friends are really secondary friends, or mere acquaintances, or just snakes and not your friends at all. In fact, any “best friend” who uses the app out of fear of being honest with you, no matter the content of their message, is doing you a huge disservice. They’re simply being cowards.

Conclusion: Don’t Be a Pussy

Don’t use Sarahah. Own your words. Be an open, honest, and responsible human, for your sake and the sake of your friends and coworkers. If your company tries to adopt Sarahah in order to take criticism, explain to them the problems that would cause for you and for them. If they insist, then give criticism directly anyway. Get into a fight with those dumb cunts in HR. Get fired. Chances are that it’s not your dream job anyway.

If your friends announce on social media that they just started a Sarahah account, they’re reaching out for help. Take them out for a drink and ask them what’s up. It may require a bit of persistence, but if they’re really your friends, then it will be worth it.

Despite the difficulties in the short-term, the long-term benefits of having straightforward, critical discussions with people will be worth it. You’ll show them that you are worth it, and they will reward you for it. But, of course, don’t do it for the reward; as with anything, do it simply because it’s right.

Tinder Fun With a Feminist

I’m Britton, as you should know, and below you’ll find the bio I wrote for my Tinder profile. If you don’t know what Tinder is, then get your head out of the sand, and read about it here.

[Screenshot: my Tinder bio]

I was in New Orleans the other day, getting my swipe on, and then I came across this fine, older lady.

[Screenshot: her Tinder profile]

The first things in her bio, ‘politically progressive’ and “the f-word”, I admit, probably should have raised red flags before even her shitty taste in music did. Those terms on their own hint at far-left political views, but the two of them together scream ‘SJW‘. However, she was hot, and that’s very rare among feminists, so I read into her words and saw deeper possibilities. I was hoping that maybe we could talk some philosophy, giving her the benefit of the doubt that her knowledge of that subject wasn’t confined to new-wave feminist crap. Hey, maybe she was even a feminist of the second-wave, non-radical kind, and ‘progressive’ just meant that she was kind of liberal and open to reasonable and necessary change. Maybe she’d even have a cat named Elvira. With this optimistic attitude, I swiped right and immediately tested her humor to see how “open” she really was.

[Screenshot: message exchange]

BOOM! No fun or games with this one. Did I “proudly proclaim” that I am politically incorrect? Reread my bio, and let me know. I think I’m just straightforward about what I want out of my Tinder experience. She could have easily swiped me left if my intentions didn’t line up with hers. Looking back, though, maybe I should have ended my first message with a winky face. 😉

[Screenshot: message exchange]

Do you value truth, Jessica? DO YOU? We’ll find out. Also, Jessica, I’ll be addressing you directly from here on. Wait, is it ok that I call you by your name, or would you prefer something else? I don’t want to be too incorrect and risk “invalidating your existence“.

[Screenshot: message exchange]

Yeah, let’s define a term together! That sounds like a fun philosophical exercise. Maybe you’ll even return the favor by asking me how I would define the term, and then we’ll find some common ground, bettering both of our conceptions of the world. Learning stuff is fun! You read philosophy, so you agree, right?

[Screenshot: message exchange]

Annnnnd there it is. You pretty much nailed it, Jessica. I’m guilty of whiteness, so there’s no need to ask me what I think ‘political correctness’ means. Your understanding of how language works, on the other hand, seems a bit strange, and the philosophy you read may be of questionable quality. My validity on that topic comes from my education in linguistics and philosophy of language. But, you’re attempting to “invalidate” me because I’m… white? Hmmm.

I don’t think that speech is an activity so consciously aimed toward respect, nor do I think it’s a good idea to blindly respect people at all. In fact, it’s dangerous. I’ll spare you the technical linguistic part of the argument because I’m starting to sense that you have a screw or two loose, but I still must address the respect-issue.

Also, how are you so sure that I’m not black or transgender? If you respected me, then you would have asked about my preferred identity because race and gender are determined whimsically and have no biological basis, correct? No, you should have simply requested a dick pic, Jessica. Truth requires evidence, and I have plenty of it.

[Screenshot: message exchange]

So, maybe there’s more to political correctness than your definition, Jessica, and maybe I know some stuff that you don’t. Maybe you’d be interested in hearing it. Maybe if you weren’t so keen on blindly respecting others, then you wouldn’t be so liable to get mugged and raped in a dark alley in New Orleans. Or, maybe you’d like that because you’d become a martyr for your ideology. At this point, you’re not giving me any reason at all to respect you, but I do fear for your safety. After all, you’re right that the world isn’t a very kind place.

[Screenshots: message exchange]

I figured I’d play the “patriarchy” card since you already accused me of being part of it by virtue of my straightness, whiteness, and maleness. What did you expect? Why did you swipe me right if you hate me by default, unless you wanted to hate-fuck me (shit, I may have missed my shot)? I mean, you’ve seen my pictures. Chances are that I’m not black under my clothes. In fact, I’m even WHITER there. Well, actually, there is a very small part of me that is kind of tan.

[Screenshots: message exchange]

*ignores grammatical errors and moves on*

I know I’m an asshole, Jessica. There is no need to repeat yourself. But, does being an asshole make me wrong? No, Jessica, you’re the meanie who committed ad hominem. I also didn’t appeal to emotion to argue my point. You just took it that way. Taking offense and giving it are NOT the same thing. That’s Philosophy 101.

But…do save me! Please save me from my problematic ways so I can be more compassionate like you and make the world a more progressive place! Or, do I need a degree in women’s studies to be infected with your profound wisdom? If it’s LSU that infected you, then you’re right that there is no hope for me because I dropped out of that poor excuse for a higher-education institution after just one semester of grad school.

On the other hand, I could help you by revealing your greatest contradiction, and maybe even give you one more chance to get laid by me, knowing well that so few men would have gotten even this far with you. I mean, this is Tinder. Why else would you be here? Yeah, that’s what I’ll do because I want some too. I’ve learned to accept that liking sex makes women delicate flowers and men oppressive misogynists. It’s cool, really, I don’t need to be reeducated. I’ll even let you play the role of misogynist, and I’ll be the victim, and you can oppress deez nuts all you want.

[Screenshot: message exchange]

That’s where it ended. So…

What the hell is going on here?

I don’t think that I need to go into detail about what is going on here. There are plenty of people who have done that very well already; Dr. Jordan B. Peterson, for example, in this brilliant snippet from the most popular podcast in the world. The general point I want to make is that we are in a strange place where people like Jessica are multiplying exponentially by the semester, thanks to politically correct ideology infecting universities, business administrations, legislatures, and now even Tinder (as if Tinder doesn’t already have enough spam)! This is the time for talented and capable people, mostly men, to stop ceding power to the people who live in those boxes; they’re wrong, and they’ve snuck their way into power without truly earning it. To stand up for truth is to stand up for yourself. However painful that may be now, it is absolutely necessary for the survival of our species. After all, if we were all angry, 35-year-old feminist virgins, of course humanity would end.

Since we aren’t all like Jessica, one day we will be without these people completely. Let’s give them what they want: spare their feelings, thus depriving them of the open, truth-seeking dialogue that would mold them into stronger moral beings and free them from the narrow and suffocating constraints of the feminist ideology. Since they aren’t open to that sort of thing, they will eventually self-extinguish under their childless philosophy and rot in the miserable hell that they’ve created for themselves.

The False-Dilemma of the Nature vs. Nurture Debate

Before I begin, allow me to explain what I mean by false dilemma. A false dilemma is an error in reasoning whereby one falsely assumes that the truth of a matter is limited to one of two (or a select few) explanations. Take the American presidential election, for example. For another example, have you ever been stumped by a question on a multiple-choice test because you saw more than one possible correct answer (or no correct answer at all)? Perhaps you got frustrated because you felt that the test was unfairly trying to trick you? Well, you were probably right. This may have been an instance of your ability to recognize the false dilemma fallacy. Sometimes there are indeed any number of correct answers given any number of circumstances. There is often simply not enough information provided in the question for one choice to clearly stick out as correct. This might lead you to question the test in a broader sense. What is the purpose of this (presidential election, or) test? What is it trying to measure or prove? Without getting into that answer in too much detail (as this is not a post about the philosophical state of academic testing), I can say that such tests aren’t so much concerned with truth and meaning as they are with the specific program they support. That program may or may not have the best interests of the people in mind, and it may or may not be directly governed by the amount of money it can produce in a relatively short period of time. Anyway, that’s another discussion.

In a previous post entitled The Slate, the Chalk, and the Eraser, I compared a child’s mind to a slate, and I argued that as long as we write on it with chalk by teaching him how to think (rather than with a permanent marker, teaching him what to think), then he will be able to erase those markings to make way for better and more situation-relevant ones in the future, once he develops the ability to make conscious judgments. This is an example that you may have heard before, and it can be useful, but by some interpretations, it may seem to rest on a false presupposition. Such an interpretation may raise the “nature-nurture” question that is so common in circles of science and philosophy. One might argue that if a child’s mind is truly analogous to a slate in the way I have put forth, then I should commit myself to the “nurture” side of that debate. That was not my intention. In fact, that debate, in its most common form, presents a false dilemma, so I can only commit to both or neither side, depending on what is meant by ‘nature’ and ‘nurture’. The conventional definitions of these terms are limited in that they create a spectrum on which to make truth-value judgments about objects, experiences, phenomena, etc. We commit to one end of the spectrum or the other, and we take that position as true and the other as illusory. This is similar to the subject-object distinction I described in an earlier post. Perhaps comically, even the most radical (and supposedly-yet-not-so-contrary) ends of scientific and religious belief systems sometimes agree on which side to commit to, albeit for different reasons. That particular conflict, however, is usually caused by a semantic problem. The terms ‘nature’ and ‘nurture’ obviously mean very different things to radical mechanistic scientists and evangelical Christians.

Please keep in mind throughout that I am not criticizing science or religion in general, so I am not out to offend anyone. I am merely criticizing radical misinterpretations of each. Consequently, if you’re an idiot, you will probably misinterpret and get offended by this post as well.

Taking this description a step further, the false dilemma can be committed to any number of degrees. The degree to which it is committed is determined by at least two factors: the number of possible options one is considering and the level of complexity at which one is analyzing the problem. Any matter we might deal with can be organized conceptually into a pyramid hierarchy where the theoretical categorical ideal is at the top, and the further one goes down the pyramid, the more manageable but trivial the matters become. As a rule of thumb, the fewest options (one or two) and the lowest level of analysis (bottom of the pyramid) should give rise to the highest probability of a logical error, because the bottom level of analysis has the highest number of factors to consider, and those factors culminate up the pyramid toward the categorical ideal. Fortunately, committing an error at the lowest levels of analysis usually involves a harmless and easily correctable confusion of facts. Errors committed at higher levels of analysis are more ontological in nature (as the categorical ideals are) and can have catastrophic consequences. All sciences and religions structure their methods and beliefs into such pyramid hierarchies, as do we individually. They start with a categorical ideal as their assumption (e.g. materialism for some science; the existence of God for some religion), and they work down from there. However, neither religion nor science is meant to be a top-down process like philosophy (which is likely the only top-down discipline that exists). They are meant to be bottom-up processes. For science, everything starts with the data, and the more data that is compiled and organized, the more likely we are to be able to draw conclusions and make those conclusions useful (in order to help people, one would hope). For religion, everything starts with the individual: live a moral and just life, act kindly toward others, and you will be rewarded through fulfillment (heaven for western religions, self-actualization for eastern religions). These can both be good things (and even reconcilable) if we go about them in the right way. What are the consequences, however, if we go about them radically (which is to say blindly)? In short, for radical belief in a self-righteous God, it is war, and therefore the loss of potentially millions of lives. In short, for radical materialism, it is corruption in politics, education, and the pharmaceutical industry, the elimination of health and economic equality, and the potential downfall of western civilization as we know it. That’s another discussion, though.

For the nature-nurture debate, the false dilemma is the consequence of (but is not limited to) confusion about what constitutes nature and nurture to begin with, and even most people who subscribe to the very same schools of thought have very different definitions of each. First, in the conventional form of this debate, what do people mean by ‘nature’? Biology, as far as I can tell, and nothing more. We each inherit an innate “code” of programmed genetic traits passed down from our parents, and they from theirs, and so on. This code determines our physiology and governs our behavior and interaction with the outside world. Our actions are reactive and governed by our brain-computer, and free will is consequently an illusion. What is meant by ‘nurture’, on the other hand? Our experienced environment, and nothing more. Regardless of our chemical makeup, how we are raised will determine our future. There is no variation in genetics that could make one person significantly different from another if both were raised in identical fashion by the same parents, in the same time and place. We have no control over the objective environment we experience, so free will still seems to be illusory.

These positions seem equally shortsighted, and therefore, this problem transcends semantics. Neither accounts for the gray in the matter — that reality, whatever it is, does not follow rules such as definitions and mathematical principles. These are conceptions of our own collectively-subjective realities which make it easier for us to explain phenomena that are otherwise unfathomable. On this note, we could potentially consider both nature and nurture phenomenal. That is an objective point on the matter. The first subjective problem is that both positions imply that we don’t have free will. Sure, there are unconscious habits of ancient origin that drive our conscious behavior (e.g. consumption, survival, and reproduction), but there are other, more complex structures that these positions don’t account for (e.g. hierarchical structures of dominance, beliefs, and abstract behavior such as artistic production), and those are infinitely variable from person to person and from group to group. This comes back to the point I just made about phenomenal reality and the conceptions we follow in order to explain it, as if those conceptions were somehow out there in an objective world that we are not part of.

Not to mention, we all take differently to the idea that free will might not exist. Religious people are often deeply offended by this idea, whereas many scientists (theoretical physicists in particular) claim to be humbled by it. Both reactions, I would argue, are disgustingly self-righteous and are the direct consequence, not of truly understanding the concept of free will, but of whether or not free will simply fits into one’s preconstructed hierarchical structure of beliefs. One should see clearly, on that note, why a materialist must reject free will on principle alone, and why a radical Christian must accept it on principle alone. Regardless of the prospect that the religious person has a right to be offended in this case, and that it is contradictory of the scientist to commit to a subjective ontological opinion when that very opinion does not permit one to have an opinion to begin with (nor can it be supported with any sufficient amount of “scientific” evidence whatsoever), the point here transcends the matter of free will itself: rejecting or accepting anything on principle alone is absurd. This calls into question matters of collective ideological influence. There is power in numbers, and that power is used for evil every bit as often as it is used for good. When individuals break free from those ideologies, however, they realize how foolish it is to be sheep and to believe in anything to the extent that it harms anyone in any way (physiologically, financially, emotionally, etc.). The scary part is that literally any program might trap us in this way (ideologically) and blind us from the potentially innate moral principles that underlie many of our actions. On that note, we are all collectively very much the same when we subscribe to a program, and we are all part of some program. We are individually very different, however, because we each have the potential to arrive at this realization through unique means. We each have a psychological structure that makes up our personality. It is undeniably innate to an extent, yet only partially biological. This reveals the immeasurable value in developing one’s intrapersonal intelligence through introspection and careful evaluation of one’s own thoughts, feelings, perceptions, and desires.

Furthermore, the conventional nature-nurture positions are polarities on a spectrum that doesn’t really exist. If we had clearer definitions of each, perhaps the debate would not present a false dilemma. We should reconstruct those definitions to be inclusive of phenomena — think of these terms as categories for ranges of processes rather than as singular processes themselves. If we think of these terms as being on a spectrum, we are led to ask the impossible question of where the boundary between them lies. If we think of them as categories, we are forced to embrace the reality that most, if not all, processes can fall into either category given a certain set of circumstances, and thus those categories become virtually indistinguishable. Take the case of inherited skills: practice makes perfect, yet natural talent seems so strongly to exist. If the truth-value-based spectrum between nature and nurture were a real thing, then neither position would be able to account for both nurtured ability and natural talent; it would simply be either/or. This is a consequence of the false dilemma. It leads us to believe that this gray matter is black and white. If one is decent at learning anything, one knows that there is only gray in everything.

But is there? I hope I have explained to some conceivable extent why scientific and metaphysical matters should not be structured into a polar truth-spectrum, and why any attempt to do so would likely present a false dilemma. However, it seems more reasonable to apply spectrum structures to value theory matters such as aesthetics, ethics, and even other personal motivators such as love. This, I will explain further in a later post.


Collective Subjectivity = Reality :: The Utility of Phenomenological Thought

In my last post, I explained the differences between and the proper uses of the terms ‘subjective’ and ‘objective’. To recap, these terms do not describe the positions from which one perceives. Of course, everyone perceives subjectively, and objects don’t perceive at all. Therefore, the subject/object spectrum is not a spectrum on which one may judge a matter’s truth-value. The spectrum simply describes the nature of the matter at hand — subjective means “of a subject” and objective means “of an object”. Having said that, how can we define truth more broadly? What determines it?

I think that we can, in many conceivable instances, equate truth with reality. This is based on one of two popular definitions of reality. The first, more popular definition, under which we cannot equate truth and reality, and the one I reject, is that of objective, Newtonian-scientific reality. This holds that there are mathematical laws and principles out there in the universe, already discovered or waiting to be discovered, to which the forces of nature can be reduced. Proponents of this view hold “rationality”, in all of its vagueness, as the singular Platonic ideal which dictates what is true, real, and meaningful. It follows from this that mechanistic science holds the key to all knowledge. The problem here is that mechanistic science (not all science) is founded in the metaphysical belief in materialism. Materialism suggests that all reality is comprised of quantifiable matter and energy. Humans, and all living things, are “lumbering robots”, as Richard Dawkins claims. Consciousness, ethics, morality, spirituality, and anything else without a known material basis is subjective in nature and thus superstitious, irrational, and not real. As I have already explained, this worldview rests on a straw-man distinction between what constitutes subjective and objective, for it assumes that this distinction creates a spectrum on which to judge a matter’s truth-value (the more objective, the more true).

Remaining consistent with how I have distinguished subjective and objective is the second, less popular, and in my view, much more useful way of defining truth and reality: what is real is what affords us action and drives us toward a goal. The definition is as simple as that, but its implications have a tremendous amount of depth rooted in the unknown. Instead of holding one Platonic ideal (like rationality) as the key to all truth, there are an infinite number of ideals that humans conceptualize, both individually and collectively, in order to achieve their ends. Therefore, this view affords relevance to a wide range of perspectives even if the nature of the objects being perceived is unknown. The rationalist view, by contrast, is limited to the assumption that the nature of everything has already been determined to fit into one of two metaphysical categories: objective reality or subjective delusion. (This Newtonian theory of reality I have just explained, by the way, is a long-winded way of defining ‘scientism’, a term I often use in my posts.)

Nature doesn’t obey laws; humans do, so we tend to compartmentalize everything else in that way because it makes it easier for us to explain what we want to know and to explain away anything we don’t want to know. What we don’t want to know is what we are afraid of, and as it turns out, what we are afraid of is the unknown. So, when anomalies, whether personal or scientific, arise that don’t fit the already-established laws, a Newtonian thinker will categorize them as illusory in order to explain them away. This doesn’t work, because even we humans have a propensity to break the laws that we create for ourselves, and this can be a very productive thing. The degree to which this is the case depends on our individual psychological makeups. People who are high in the Big Five personality trait conscientiousness, for example, tend to obey rules because of their innate need for outward structure and order. Those who are low in that trait are more likely to break rules, especially if they are also low in agreeableness, which measures one’s tendency to compromise and achieve harmony in social situations. Openness, on the other hand, the trait correlated with intellect and creativity, allows one to see beyond the rules and break them for the right reasons — when they are holding one back from progress, for example. These are just three of five broad personality traits that have an abundance of scientific research to potentially confirm their realness and usefulness, even as a rationalist/Newtonian might perceive them. However, the tendency of someone to break rules as a result of their psychological makeup does not apply only to political laws. We also create collective social rules among groups of friends and unconscious conceptual rules for ourselves in order to more easily understand our environment. Those systems satisfy the same basic human needs and take the same hierarchical forms as political order does, and they serve purposes that contrast only in how widespread they are.

Regardless of our individual psychologies, there are commonalities that all humans share in terms of which types of goals we have and which types of things drive us toward or away from action. Those things are, therefore, collectively subjective across humanity, and they are what I would like to propose as the most universally real and true things (insofar as anything can be universally real or true at all). This leads me to elaborate further on this goal-oriented view of reality.

Since I used Newton as a scientific lens through which to understand the rationalist theory of reality, I will do the same to explain the goal-based theory that I am proposing, this time using Darwin. Philosophically speaking, Darwin did not commit himself to his theories in the same law-sense that Newton did to his. In fact, many of Darwin’s ideas have recently been found to be rooted in psychology rather than in hard mechanistic biology. His main principle can be summed up with this: nature selects, and we make choices, based on what we judge to be most likely to allow us to survive and reproduce. That is all. Everything else is detailed justification which may or may not be true or relevant. In fact, Darwin left open the possibility that the details of his evolutionary theory not only could be wrong, but that they probably were, and he was very serious about that. To take all of those details literally leads one into the same logical trap that the “skeptics/new atheists” fall into when they obsess over the details of the Bible — they oversimplify and misrepresent its meaning, and therefore overlook the broader, most important points. These are straw-man arguments, and they demonstrate a persistent, juvenile lack and rejection of intellect.

The reason Darwin’s main evolutionary principle is psychological is that it is consistent with Carl Jung’s idea of the archetype. An archetype is any ancient, unconscious pattern of behavior common among groups or the entirety of the human population and their ancestors. The need for all living beings, not only humans, to survive and reproduce is undoubtedly real. It is something we understand very little about, yet it drives an inconceivably wide range of behaviors, most of which are taken for granted to the extent that they are unconscious (e.g. sex drive is causally related to the desire to reproduce). It is not only in the natural world that humans have had to fight desperately for their lives against other species; even among ourselves in the civilized world there have been radical attempts to wipe out masses of people because one group saw another group’s ideologies as threatening to its own survival and prosperity (e.g. both Hitler and Stalin led such endeavors in the 20th century).

Perhaps, instead, if we equate truth with this archetypal, goal-oriented conception of reality, then we can come to a reasonable conclusion about what constitutes truth: that which affords and drives us to action. That is to say that (capital-T) Truth, in the idealistic, rationalist sense, probably does not exist, and if it does, our five senses will never have the capacity to understand it. The best we can achieve and conceive is that which is true-enough. For what? For us to achieve our goals: survive, reproduce, and make ends meet, and if we are very sophisticated and open, to also introspect, to be honest with ourselves and others, and to live a moral and just life.

Subjectivity vs. Objectivity: Not a Distinction of Truth

I wonder which is worse: the fear of the unknown? Or knowing for sure that something terrible is true?

@pennyforyourbookthoughts

Or, if I might add, the negative, unforeseen consequences of that terrible thing being true?

The answer is: “fear of the unknown”, and it’s a little complicated.

Most things one might know “for sure” lie at either end of the subject/object spectrum. What is known on the subjective end of that spectrum is generally thought to deal with personal or value truths of an individual that are understood qualitatively by that individual. What is known on the objective end is generally thought to deal with fact and scientific truth that is understood quantitatively by a group. This is generally correct, but it is only the world of objects that convention accepts as ‘truth’, while the subjective is understood to contain no truth-value at all unless we are speaking about it in material (and thus objective) terms. So, this spectrum actually seems to measure truth: the more objective something is, the more true it is. This is an interesting misconception, and it leads me to attempt to make clear the proper uses of these terms.

What does it mean for something to be ‘subjective’ or ‘objective’? First, what they DO NOT describe are points from which one perceives. In other words, ‘subjective’ does not mean “opinion – from the point of view of a particular subject”, and ‘objective’ does not mean “rationally – from the point of view of an object or the world of objects” as, say, Richard Dawkins’ or Ayn Rand’s pseudo-philosophies suggest. They consider the vaguely defined term ‘rationality’ as the universal ideal — Dawkins through materialism and Rand through radical capitalism/individualism. This is shallow and wrong. The reasons for this should be clear. First, everyone perceives subjectively, from their own point of view, and objects don’t have the capacity to perceive to begin with — that is precisely what makes us subjects and things objects! No human perceives at the level of subatomic particles or, by the same token, God. Second, the differences between what constitutes ‘subjective’ and ‘objective’, for the sake of this conversation, depend on how ‘truth’ is defined more broadly. In fact, these terms have nothing to do with truth at all.

Rather, these terms describe the nature of a matter at hand. ‘Subjective’ simply means “dealing with matters of the subject or set of subjects”, and that can range from intrapersonal matters to interpersonal ones. ‘Objective’ means “dealing with matters of an object or set of objects”, and that can range from logical to quantitative to empirical. They DO NOT distinguish any degree of truth. Science, for example, is not objective because it is more true; it is objective simply because it deals with objects. Medicinal practice (which is not a science, by the way), on the other hand, is subjective in nature because it is interpersonal; it deals with human subjects on a case-by-case basis (many physicians do, however, treat their patients as objects, and they in turn view their practice as an objective matter).

This is not to say, however, that each subject perceives and makes judgments to the same degree of truth or accuracy. Each subject analyzes any given situation to the degree that is consistent with their unique set of intellectual capacities, including the intrapersonal, interpersonal, conceptual, spatial, experiential, etc. A good IQ assessment tends to measure a combination of all of those things, but most people are strong in only one or two of those areas. For example, one might have a high level of intrapersonal intelligence (they know themselves well and understand their own mental and emotional states) but lack the ability to deal impartially with other people or objective matters because of how strongly they are affected by the outside world. On the other hand, one might be high in logical or spatial intelligence but lack the ability to admit or even be aware of the emotional states or internal biases that govern the way they deal with personal matters (having one capacity does not necessarily imply deficiency in another, as people high in IQ might prove).

Given all of this personality variability among subjects, can an argument be made about the question stated above? Which is worse: fear of the unknown, knowing something terrible is true, or the negative consequences that accompany knowledge? I can only speak about this in a normative fashion. I also must presume that anything “good”, as it pertains to knowledge, should broaden one’s perception, and anything “bad” should narrow it. Knowing anything “for sure”, insofar as that is possible, should be a good thing in that it should teach us something meaningful, whether it is pleasant or not. The goodness of that knowledge, because it is sometimes unpleasant, is not contingent on the goodness of its specific consequences. Nietzsche was correct when he said that “people do not fear being deceived; they fear the negative consequences of being deceived”. The consequences, after all, are merely a result of cause and effect, and any cause can produce any number of variable effects depending on the set of circumstances under which it occurs. It is that potential for unforeseen chaos that people fear, at least on the surface. But, such matters are too variable and trivial to direct action in a meaningful way when certain higher-level truths (e.g. how should we think about x, why does x matter to us, etc.) have not been accounted for, so to simply fear consequences is shortsighted. To know something “terrible”, on the other hand, is usually just a case of knowing one side of a particular occurrence without knowing the reasons it happened or being familiar with any perspectives apart from the first one that is presented. In other words, it is knowledge without understanding.

It is the unknown that contains the crucial knowledge that will afford us understanding and drive us to action. That is where real truth comes from. We should be prepared to face the unknown at any time, for it is all around us, and the world so rarely unfolds as we expect it to. In fact, there is nothing that I can think of that any one person has complete control over. There are an infinite number of effects and consequences that our actions can and will cause, so perhaps having minimal expectations to begin with is the healthiest way to prepare for the future. Do not fear the unknown, for to fear the unknown is to fear truth. Facing the unknown will prevent one from accepting any knowledge as “terrible”, and it will in turn not only minimize negative consequences but also open up many unforeseen, positive opportunities.


The Slate, the Chalk, and the Eraser

Prerequisite reading: “WARNING: Your Kid is Smarter Than You!”

A mark of good critical thinking, as it applies to science, is that it is always attempting to prove itself wrong. It challenges its most fundamental assumptions when unexpected results arise. We can do this in our everyday lives when we make decisions and formulate our own views. We only truly challenge ourselves by trying to find flaws in our own reasoning rather than by trying to confirm our views. It is easy to confirm our beliefs.

Let’s take astrology as a personal-scientific example. Sparing you the details: based on what little research has been done to refute it, astrology is seen as invalid, and therefore a pseudoscience, by the standards of modern mechanistic science. However, that does not preclude one from believing in it – in confirming it or any of its insights to oneself. Now, one is not thinking critically by simply believing that astrology is a pseudoscience (or that it is legitimate science). That would be to put too much trust in other people’s thinking. What reasons can you give to support your own belief, and what does it mean?

One can wake up every morning, read their daily horoscope, and upon very little reflection, come up with a reason or two for how that horoscope applies to his or her life. On one hand, those reasons might be good ones, founded on an abundance of personal experience. The horoscope’s insights might serve as something to keep in mind as one goes about his or her day, and that can be a very helpful thing. On the other hand, however, the reasons might be mere self-confirming opinions. They might be the result of the person’s ideological belief in astrology in general. That can be harmful if the person attempts to apply astrological insights to contexts to which they are inapplicable. This is an example of how the confirmation of a specific belief, not the belief in itself, can be good or bad, helpful or harmful, depending on how one thinks about it and the reasons he or she gives for it. The question of whether it is right or wrong, correct or incorrect, is neither important nor provable.

In order to formulate views that are not mere opinions, we must expose ourselves to views that oppose the ones we already hold dear to our hearts. This is difficult for adults. Most of us have been clinging to the same beliefs since we were children or young adults. This is where children have a huge advantage. They don’t yet have views of their own. The sky is the limit to how they can think and what they might believe. Their handicap, though, is that they do not control what they are exposed to. They cannot (or perhaps, should not) search the internet alone, drive themselves to the library, proficiently read, or precisely express themselves through writing or speech. They are clean slates, and that ignorance not only gives them huge potential, but it also leaves them extremely vulnerable.

The Analogy

You may have heard this analogy before, but I will attempt to add a bit of depth to it.

A child’s mind is a slate, as are those of adults (though, arguably, much less so). It is a surface on which we can take notes, write and solve equations, draw pictures, and even play games. We can create a private world with our imaginations. For all intents and purposes, there are no innate limits to how we can use our slates. Maximizing our potential, and that of children, depends on the tools we use.

First, we need something to write with, but we shouldn’t use just any writing tool. Chalk is meant to be used on slate because it is temporary. It can be erased and replaced. If one were to write on a slate with a sharpie marker, that would be permanent. One could not simply erase those markings to make room for others. A slate has a limited amount of space.

Though our minds may not have a limited amount of space in general (there is not sufficient evidence that they do), there is a limit to how much information we can express at any given moment. That, not our mind in general, is our slate – that plane of instant access. The writing tool is our voice – our tool of expression. If we write with a sharpie, it cannot be erased. We leave no room to change our minds in the face of better evidence to the contrary. If we write with chalk, we can just as clearly express our ideas, but we also leave our ideas open to be challenged, and if necessary, erased and changed. It is also easier, for in the process of formulating our ideas with chalk, we need not be so algorithmic. We can adjust our system accordingly as we learn and experience new things.

The smaller the writing on the slate, the more one can fit, but the more difficult it is to read. Think of a philosopher who has a complexly structured system of views. One detail leads into the next, and they all add up to a bigger-picture philosophy. It might take reading all of it to understand any of it. That can be difficult and time-consuming, and not everyone has the patience for it. The larger the words on the slate, however, the easier they are to read, but the less will fit, so the writing risks lacking depth. Think of a manager-type personality who is a stickler for rules. He is easy to understand because he is concise, but he may lack the ability to explain the rules. People are irritated by him when he repetitively makes commands and gives no reasons for them. Likewise, children are annoyed when their parents and teachers make commands with no reasons to support them, or at least, no good ones (e.g. “because I said so”).

So, the slate represents the plane of instant access and expression of information, and the writing tool, whether it be chalk or a sharpie, represents our voice – our tool for expressing information and ideas. What does the eraser represent? The eraser represents our willingness to eliminate an idea or bit of information. It represents our willingness to refute our own beliefs and move forward. It represents the ability to make more space on our slate for better, or at least more situation-relevant, information. It represents reason. If one writes with chalk, the eraser – reason – holds the power. If one writes with a sharpie, the eraser becomes useless.

The Analogy for Children

I explained in my last post, “WARNING: Your Kid is Smarter Than You!”, that it is important for parents and teachers to teach their kids how to think – not what to think – but I did not offer much advice on how to actually do that. I will not tell anyone, in detail, how to raise or educate their children. Each child has a different personality and needs to be catered to through different means. I will, however, offer a bit of general advice based on the analogy above.

The way to teach children how to think (after already having done it for yourself, of course, which is arguably much more difficult) is NOT to hand the kids sharpies, for they will never learn to use an eraser. Their statements and beliefs will be rigid and lack depth of understanding. Granted, this might make them a lot of money in the short term, but it will also significantly reduce their flexibility when they encounter real-life situations (outside of the institutions of school and work) that require them to think for themselves. This will inevitably limit their happiness from young adulthood onward.

Instead, simply hand them a piece of chalk. It is not even important to hand them an eraser, initially. Kids will figure out, after much trial and error, their own way to erase their slates. Eventually, they will find on their own that the eraser is a very efficient tool for doing so. Literally speaking, they will express themselves and reason through their problems until they find the most efficient methods – by thinking for themselves, but only as long as they have the right tool.

Reason – The Business of Philosophy

“To say that a stone falls to Earth because it is obeying a law makes it a man and even a citizen.”  -C. S. Lewis

People who believe in science as a worldview rather than a method of inquiry – I call them scientismists – are fascinated by science because they cannot grasp it, just as people who are not magicians are fascinated by magic. What little understanding they do have of it, in principle, is superficial. The difference between people’s perception of science and their perception of magic is that magic can always be explained. Magic plays a trick on one’s perception. That is magic’s nature as well as its goal. Science, on the other hand, cannot always be figured out. There simply is not a scientific explanation for everything (or even for most things). Nor is it science’s goal to explain everything! Science is an incremental process of collecting empirical data, interpreting it, and attempting to manipulate aspects of the environment accordingly for (mostly) human benefit. It is experimental and observable. It is, as I will explain, inductive. Unfortunately, sometimes unknowingly, human subjectivity intervenes in at least one of those three steps, exposing science’s limits through ours. So, where does reason fit into this process?

What “Reason” is NOT

One problem with scientism is that it equates science and reason. This is incorrect. Although philosophers of science, most of whom are scientists themselves, have debated the definition of science since it was called ‘Natural Philosophy’, there is one thing we do know about it and about its difference from reason. Science deals with questions of ‘how’. It describes the inner workings, the technicalities, of observable processes and states of affairs. Reason deals with questions of ‘why’. It explores lines of thinking – fundamental goals, purposes, and meanings – for those processes and states of affairs, as well as for many non-scientific ones. Having said that, reason is necessary for science, but it is immeasurably broader.

Science alone cannot answer why-questions. Claiming that it can is a mark of scientism. Why is that?

I will now give reasons for that by using an example from Dr. Wes Cecil’s 2014 lecture about scientism at Peninsula College:

Engineering, which is a type of science with its foundations in calculus, can tell us how to build a bridge. Engineering can build the biggest, longest, strongest bridge one could possibly imagine. But how will the bridge look? We marry science and art to make the bridge beautiful as well as functional. So, even at this first stage of building a bridge – design – science cannot stand independent even of art, which seems so much more abstract.

Furthermore, why do we need to build a bridge? This is a question of reason, not of science. The answer seems to be “to get to the other side of the river”. But what the engineer (who is also a businessman who wants to land the deal for this highly lucrative project) might neglect is that building a bridge is not the only way to get to the other side of the river. Perhaps a ferry would be an easier, more cost-effective option. The engineer can tell us how to build a ferry too, but making the decision between the bridge and the ferry, ultimately, is not the engineer’s business.

Even once the decision has been made to build the bridge, several more questions arise: Who will pay for the bridge? How will they pay for it? Where exactly will it be? Who will be allowed to use it: motorized vehicles only, bikes, pedestrians? These are not scientific questions, nor are most questions in our everyday lives. They are economic, ethical, and political questions that, much like the scientific question of how to build the bridge, require some application of reason, but they cannot themselves be equated with reason. Reason is as distinct from these goals, processes, and states of affairs as it is important to them.

What is Reason?

Reason is a skill and a tool. It is the byproduct of logic. Logic is a subfield of philosophy that deals with reasoning in its purest forms. So, if someone wants to believe that science and reason are the same thing, then they are effectively admitting that science is merely a byproduct of a subfield of philosophy. I am sure that most scientismists’ egos would not be willing to live with that. Although some such claim could still otherwise be the case, it is not what I am attempting to prove here. Let’s focus on reasoning.

We say that an argument is valid when the truth of its conclusion follows from the truth of its premises. There is a symbolic way to express this. For example:

If p, then q.
p.
Therefore, q.

What we have here is not a statement but a statement form called Modus Ponens. It is a formula into which we can plug anything for the variables p and q, and whether or not the resulting statements are true, the argument will be valid according to the rules of logic. Try it for yourself! But remember, ‘validity’ and ‘truth’ are not the same thing.
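To make that distinction concrete, here is a minimal sketch in Python (my own illustration, not anything from the logic literature) that brute-forces every truth assignment of p and q and confirms that Modus Ponens never leads from true premises to a false conclusion, which is exactly what validity means:

# A minimal sketch: verify the validity of Modus Ponens by checking
# every possible truth assignment of the variables p and q.
from itertools import product

def implies(a, b):
    # Material implication: "if a, then b" is false only when a is true and b is false.
    return (not a) or b

form_is_valid = True
for p, q in product([True, False], repeat=2):
    premises_true = implies(p, q) and p   # "If p, then q" and "p"
    conclusion_true = q                   # "Therefore, q"
    if premises_true and not conclusion_true:
        form_is_valid = False             # a counterexample would break validity

print(form_is_valid)  # True: the form holds no matter what p and q stand for

Notice that the check says nothing about whether p or q is actually true of the world. Validity is a property of the form; truth is a property of whatever we plug into it.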

The example above describes deductive reasoning; it is conceptual. Immanuel Kant called the knowledge we gain from this process a priori – knowledge which is self-justifiable. Mathematics is a classic example of deductive reasoning. It is a highly systematic construction that seems to work independently of our own experience of it, and that we can also apply to processes like building a bridge.

There is another type of reasoning called inductive reasoning. It is the process of reasoning based on past events and evidence collected from those events. The type of knowledge that one gains from inductive reasoning, according to Kant, is called a posteriori. This is knowledge that is justified by experience rather than by a conceptual system. For example: we reason that the sun will rise tomorrow because it has risen every day for all of recorded human history. We also have empirical evidence to explain how the sun rises. However, the prediction that the sun will rise tomorrow is only a prediction, not a certainty, despite all the evidence we have that it will rise. The prediction presupposes that not one of countless possible events (the sun burns out, an asteroid knocks Earth out of orbit, Earth stops rotating, etc.) will occur to prevent it from happening.
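For contrast, here is an equally minimal sketch (again my own illustration, using the philosophers’ stock example of swans rather than the sun) of enumerative induction: a generalization justified only by past observations, which a single new observation can overturn.

# Enumerative induction: generalize from every case observed so far.
observed_swans = ["white"] * 500

# The inductive inference "all swans are white" is justified a posteriori,
# by experience, not by any conceptual system.
all_swans_are_white = all(color == "white" for color in observed_swans)
print(all_swans_are_white)  # True, so far

# One new observation is enough to falsify the generalization,
# no matter how many confirming cases came before it.
observed_swans.append("black")
all_swans_are_white = all(color == "white" for color in observed_swans)
print(all_swans_are_white)  # False: induction yields predictions, not certainties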

Illusions of Scientism

The mistake that scientism makes is that it claims that the methods of science are deductive when they are actually inductive. Reductive science (that which seeks to explain larger phenomena by reducing matter down to smaller parts) most commonly makes this mistake. More often than not, those “smallest parts” are laws or theories defined by mathematical formulas. Scientismists believe that the deductions made by mathematical approaches to science produce philosophically true results. They do not. The results are simply valid because they work within a strict, self-justifiable framework – mathematics. But how applicable is mathematics to the sciences, and how strong is this validity?

“The excellent beginning made by quantum mechanics with the hydrogen atom peters out slowly in the sands of approximation in as much as we move toward more complex situations… This decline in the efficiency of mathematical algorithms accelerates when we go into chemistry. The interactions between two molecules of any degree of complexity evades mathematical description… In biology, if we make exceptions of the theory of population and of formal genetics, the use of mathematics is confined to modelling a few local situations (transmission of nerve impulses, blood flow in the arteries, etc.) of slight theoretical interest and limited practical value… The relatively rapid degeneration in the possible uses of mathematics when one moves from physics to biology is certainly known among specialists, but there is a reluctance to reveal it to the public at large… The feeling of security given by the reductionist approach is in fact illusory.”

-Rene Thom, Mathematician

Deductive reasoning and its systems, such as mathematics, are human constructs. However, how they came to be should be described accurately. They were not merely created, because that would imply that they came from nothing. Mathematics is very logical and can be applied in important ways. However, the fact that mathematics works in so many ways should not delude us into thinking that it was discovered either, for that would imply that there is some observable, fundamental, empirical truth to it. This is not the case. Mathematics and the laws it describes are found nowhere in nature. There are no obvious examples of perfect circles or right angles anywhere in the universe. There are also no numbers. We can count objects, yes, but no two objects, from stars to particles of dust, are exactly the same. What does it mean when we say “here are two firs” when the trees, though of the same species, have so many obvious differences?

What a statement about a number asserts, according to Gottlob Frege, is something about a concept, not about objects in nature, and any application of it is deductive. So, I prefer to say of such systems that they were developed. They are constructed from logic for a purpose, but without that purpose – without an answer to the question ‘why do we use them?’ – they are nonexistent. Therefore, there is a strong sense in which the application of such systems is limited by the extent of our belief in them. Because we see them work in so many ways, it is difficult not to believe in them.

Physics attempts to act as the reason, the governing body of all science, but it cannot account for all of the uncertainty that scientific problems face. Its mathematical foundations are rigid, and so are the laws that they describe. However, occurrences in the universe are not rigid at all. They are random and unpredictable and constantly evolving. Therefore, such “laws” are only guidelines, albeit rather useful ones.

As Thom states, “the public at large” is unaware of how limited the practical applications of mathematics to science really are, and it is precisely that illusion of efficiency that scientism, which is composed of both specialists and non-specialists, takes for granted. It is anthropocentric to believe that, because we understand mathematics, a system we developed, we can understand everything. Humans are not at the center of the universe. We’re merely an immeasurably small part of it.

The Solution

In the same way Rene Thom explains that mathematical formulas do not directly translate to chemistry and biology, deductive reasoning, more generally, has very limited application in most aspects of our everyday lives. Kids in school ask, “I’ll never use algebra; why am I learning it?” It turns out they are absolutely right. Learning math beyond basic addition, subtraction, multiplication, and division is a waste of time for most. What they should be learning instead are the basics of reasoning. Deduction only proves validity, not truth, and induction has even greater limits, as David Hume and many others have pointed out. People, especially young children, are truth-seekers by nature, which is to say they are little philosophers.

There is a solution: informal logic, the study of logical fallacies – the most basic errors in reasoning. Informal logic is widely accessible and universally applicable. If people are to reason well, informal logic is the most fundamental way to start, and start young we should. Children, in fact, have a natural tendency to do this extremely well.
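As a concrete taste (my own example, not drawn from any particular textbook), compare the valid Modus Ponens form above with one of the most common fallacies, affirming the consequent: “If it rained, the street is wet; the street is wet; therefore it rained.” A sprinkler could have wet the street. The same brute-force check from earlier exposes the flaw:

# Affirming the consequent -- an invalid form, despite resembling Modus Ponens:
#   If p, then q; q; therefore p.
from itertools import product

def implies(a, b):
    # Material implication, as before.
    return (not a) or b

for p, q in product([True, False], repeat=2):
    premises_true = implies(p, q) and q   # "If p, then q" and "q"
    if premises_true and not p:           # true premises, false conclusion
        print("Counterexample: p =", p, "and q =", q)
# Prints one counterexample (p = False, q = True):
# the street is wet, but it did not rain.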

To be continued…