I listened to John McWhorter’s course The Story of Human Language. ‘Language’ primarily refers to spoken communication. Written language is considered a relatively new invention; how people write is quite distinct from how they actually speak, even within the same language. A person who speaks in paragraphs is very odd, suggests McWhorter.
The story of language change is all about how sounds evolve over time. Vowels shift, consonants merge. Sometimes sounds get ‘rebracketed’ — word boundaries of commonly used phrases change. Certain combinations of syllables are simply hard (like trying to pronounce ‘February’) and over generations, they morph into something simpler. Certain sounds are in constant danger of disappearing, like the ‘h’ at the start of a word. Word order occasionally flips from subject-object-verb to subject-verb-object, or the other way around. Grammar is, to a degree, optional — a lot of information is derived from context.
Many modern languages have so-called high and low varieties, the high variety being what is considered ‘proper’, and the low variety being what is normally used by everyone. This phenomenon is called diglossia. A newcomer learning a language may mistakenly learn the high form, use it in ordinary speech, and get laughed at. The high form is what’s taught in schools, the low form is what you pick up through everyday experience. Strangely, the low form is generally not considered fit to be formally taught.
Languages don’t really exist, it turns out. All you have is bundles of dialects, each one a little further removed from the next. The distinctions between languages are drawn by geopolitics; some pairs of languages are closer to each other than some pairs of dialects. What exists is a continuum of dialects evolving apart as they are separated by geography, until they start looking quite different from the original.
When people of different tongues are forced to interact with each other on a temporary basis, they may create an ultra-simplified language that enables a minimal degree of communication. Such languages, called pidgins, are not expressive enough to be considered full-fledged languages. But when the arrangement becomes more permanent over multiple generations, these stunted languages may then develop into new ones called creoles.
The Story of Human Language is an exciting and beautifully narrated tale; I highly recommend listening to it.
The illusion of explanatory depth is a cognitive bias that drives people to believe they understand things in far more depth than they actually do. These things may be familiar devices like toilets or refrigerators, or complex systems like government and healthcare. Rozenblit and Keil found that the “ratio of visible to hidden parts is the best predictor of overconfidence for an item”. What this means is that when a system looks simple on the outside, people tend to assume they understand it on the inside.
“Across three studies, we found that people have unjustified confidence in their understanding of policies. Attempting to generate a mechanistic explanation undermines this illusion of understanding and leads people to endorse more moderate positions.” [Fernbach, P. M., Rogers, T., Fox, C. R., & Sloman, S. A. (2013)] The title of this paper says it all: political extremism is supported by an illusion of understanding.
There is one interesting tidbit in the second paper. “Although these effects occurred when people were asked to generate a mechanistic explanation, they did not occur when people were instead asked to enumerate reasons for their policy preferences […]”. If you want to get people to take a more moderate position, don’t ask them why they support one policy over another. Instead, ask them to explain how things work. That makes them realize how little they actually know, which in turn leads them to positions more tolerant of others’ points of view.
Of late, I’ve been wondering about the nature of creativity. Is creativity something that is built into your genetic makeup? Is it something learned over time? Or is it, perhaps, acquired through the force of habit? Personally, I lean towards the last hypothesis: that by intentionally mimicking the superficial effects of creativity over long periods of time, one effectively becomes a creative person. Unfortunately, I have not conducted any experiment that might prove or disprove this hypothesis.
Dreams have long fascinated me for this very reason. You wake up in the morning and marvel at the absurdity of the sequences that you experienced during the night (assuming that you do recall some of it). In my personal experience, it’s not that logic has been suspended in dreams, or that paradoxes are taken for granted. Rather, the contradictions are obvious and even well-understood, but you have no choice but to continue forward, because there is a sense of reality to everything. Evolution has taught us that reality may be worked around or controlled to a degree, but must never be disbelieved, lest it get the better of your gene pool.
Our minds are apparently capable of creating these strange sequences that we wouldn’t normally be able to come up with in our waking hours. Last night, I dreamt that I was going to play tennis, something the real me hasn’t done in ages. I figured I would practice my serves (with a vague sense of awareness that the pandemic made it difficult to find actual opponents to play against). I had an hour to play: a dream storyline always has an element of a time crunch that manifests emphatically. Of course, there was simply no explanation for the odd shapes of some of the tennis balls, as if they were cut from a full loaf of bread. Would they bounce correctly? Was I missing something? Should I try these oddly shaped balls? I didn’t have too much time to ponder though, as I only had ten minutes left of the hour that I had started with.
I hypothesize that to be creative is to posit improbable associations. Start from the absurd idea that tennis balls must come from loaves of bread, and chip away at the absurdity over time, until all you’re left with is a Good Idea™️.
Apparently, individuals, at all ages, believe that they are unlikely to change much in the future (in terms of personal growth and maturity), even in the face of evidence that they’ve grown in the past. The psychologists who studied this effect gave it a cool name: the end-of-history illusion.
Personally, I am skeptical that this is a problem, even if the bias were shown to be real. There’s a danger to thinking too much about the future: we may forget to live in the now. We might miss the chance to let opportunity and luck take us to all kinds of interesting places.
I sense the light, although I cannot quite see it. I feel the warmth in my veins, a sense of richness filling my body that contrasts vividly against the coolness at my feet. The light beckons to me from one side, and I reach out longingly. I feel strength growing inside of me, bit by bit, everyday.
I stay very still; I know of no other way. As I am, I meditate upon the world – who am I and why do I exist? I do not know the answers, and I have no one else to ask, but I am in harmony with the world, and the world is in harmony with me; I know of no other way.
I dream sometimes. I don’t always know the difference between what is real and what is dreamt. It is difficult to judge reality harshly when you know so little of it.
I can’t quite see my beginning, and I certainly can’t predict my end, but I am not afraid, for I think I am loved.
Fundamental physics has become complicated over the past century, at least in public perception. With many flashy new ideas like ‘strings’ and ‘multiverses’ being proposed, it is difficult to separate what we think we actually know from hypotheses, or even conjectures.
Physics is the natural science that seeks to explain our objective reality using a small set of laws that govern it. There is much packed within this pithy definition that we can go over with a fine-toothed comb.
First and foremost, physics is a study of reality. Its goal is to observe reality very carefully and look for patterns that may be codified into fundamental laws. Its method is experiment — start with a hypothesis of what such a fundamental law might be, design an experiment whose anticipated outcome is best explained by the hypothesis, and then perform the experiment to test the prediction. Good experiments ensure that if they are clearly unsuccessful, the hypothesis may be safely discarded. Conversely, a successful experiment does not prove the hypothesis; it only increases the likelihood that the hypothesis is correct. The history of science is rife with examples of better explanations superseding good ones.
We seldom know if a particular law is ‘fundamental’. Many laws that we deemed fundamental turned out to be special cases of something even simpler. But the creed of physics is parsimony — to reduce reality into as few laws as possible that merit explanation. We can explain the movements of all the planets in the sky when we understand the laws of gravity. This ‘law of parsimony’ is colloquially known as Occam’s Razor.
Second, the reality studied by physics must objectively exist. Unless we agree that something is real, in the sense that it can be subject to experiment, it doesn’t have a place of study in physics. For instance, hallucinations generated by the human brain are not part of our objective reality, though one is free to study the behavior of the brain (this is neuroscience, not physics). The study of consciousness is not a part of physics, unless there is evidence that the behavior of higher organisms cannot be adequately explained by known physical laws (we have no such evidence).
Occasionally, a theory may predict the existence of hitherto unheard of phenomena as side-effects. If these phenomena are later discovered, they lend further credence to the theory. For instance, black holes were predicted by general relativity, and experimentally detected later on. But for a hypothesis to be scientifically useful, it must be falsifiable — it must offer an experiment whose outcome is capable of refuting the theory.
It’s worth noting at this point that there are several popular science conjectures that are decidedly unscientific. Any variant of the ‘multiverse’ conjecture — the idea that there are other so-called universes that no conceivable experiment can detect — fails the test of falsifiability, and has no place in science. The ‘mathematical universe hypothesis’ — the claim that the universe is mathematics — falls into the same category of pseudoscience, as it doesn’t explain reality as we know it. Yet another idea that is unscientific is the claim that the universe is a gigantic simulation — no experiment has ever been proposed that would distinguish a simulated universe from a real one. If you find people making tall claims that are unsupported by experimental evidence, you would be right to be skeptical.
Finally, a subtle yet important assumption in physics is that our reality is governed by laws that remain the same over time. The tenet here is that there is order underlying the chaos of reality. The idea that our reality is dictated by these unchanging laws is deeply incompatible with the idea of an omnipresent and omniscient deity making decisions on a whim. If there were experimental evidence of such a deity, the laws governing the deity would then merit further explanation. The laws of physics are all there is to reality, and the quest of physics is to discover them. It’s turtles all the way down.
The static nature of laws is a consequence of the law of parsimony. If a certain law changes over time or is different in different parts of the universe, there must be a higher law that explains the conditions under which it varies. If you keep discovering these higher laws, what you eventually end up with must be permanent and unchanging.
We hiked up to Heather Lake on Saturday morning, a 5-mile roundtrip with an elevation gain of 1,781 feet, according to my fitness tracker. Being one of the easier hikes at driving distance from the city, Heather Lake Trail generally sees a lot of foot traffic. That’s especially so with COVID-19 restrictions in place, as hiking is one of the few outdoor activities still possible when the weather allows it.
Our own plans tend to be created on a whim, of course. When you get up in the morning and discover that it’s not going to rain, you’re forced to think of something to do before the clouds move in again. If you dally, the weather gods may get angry and change their minds.
The trailhead is a little over an hour drive from Seattle. The last mile is on a pothole-ridden road that somebody should really consider fixing. The trail itself is fairly straightforward, and can be tackled at a brisk pace. The path is wooded with tall trees and a few small streams that need to be negotiated.
In a sense, a hike through the woods is a spiritual experience, though you may not always know it at the time. When you spend all of your energy sensing your environment, absorbing the sights and smells, and stepping carefully from one rock to the next, your mind has the opportunity to switch into low gear, hum silently and contemplate the meaning of life.
The meaning of life is not complicated. The mind is usually a jumble of distinct and sometimes conflicting motivations crisscrossing through the terrain. To understand the meaning of life is to organize these motivations into an aesthetic, if not simpler, hierarchy. Let the deeper motivations bubble up weightlessly so you can understand them better, and then let them sink back down to the bottom of your mind. Dust off ideas and observe them dispassionately without the constraint of time looming over you. Some ideas become clearer, others become superfluous. The process is automatic, like air rushing to fill a vacuum.
When we reached the lake, we discovered, to our surprise, that it was completely frozen. A few people were brave enough (or stupid enough) to stand on the ice in the middle of the lake and take pictures.
In a free society, tolerance means that individuals put up with ideas and actions that they don’t necessarily agree with. Evelyn Beatrice Hall famously expressed the Voltairean principle as “I disapprove of what you say, but I will defend to the death your right to say it.” Every free citizen is at liberty to act as they please, as long as they don’t infringe on the freedoms of others.
Tolerance does not imply freedom from consequences. Don’t do stupid things or say hurtful words and expect others to be okay with it. And if you intentionally make false statements that cause harm to others, tort laws may catch up with you.
The ‘paradox of tolerance’ refers to the idea that when a society is tolerant without limit, its ability to be tolerant is eventually destroyed by the intolerant. When ideas of intolerance are given free rein, these ideas can take root in a critical mass of people with the power to impose their own will on the populace. Moreover, it is only a matter of time before incentives align to make this happen.
According to Karl Popper, this paradox doesn’t imply that ideas of intolerance should not be tolerated at all. Society may continue to tolerate these ideas to the extent that they can be kept in check through rational argument and public opinion — in fact, it is preferable to do so unless force is necessary.
One way to rationalize this paradox is that if you’re a tolerant person in a tolerant society, you’re playing by a set of rules and expect everyone else to do the same. Someone espousing intolerance is effectively trying to change the rules of the game. If they convince enough people to join their cause, you’re at a disadvantage because trying to stop them is against your ethos. If things get out of hand (as they’re bound to eventually, if enough people believe they will benefit), your only option is to break your own rules and shut them down.
Sadly, once you say it’s okay to break your own rules, the intolerant in positions of power will eventually take advantage of this loophole to further their own agendas.
Numbers are a human invention. As intelligent and rational creatures, we have learned to operate effectively in a world of abstract thought, where the concept of numbers goes beyond what we can count. But as humans, we cannot discount the evolutionary basis for how our apparatus works.
It turns out that numbers that are really large or really small are well-nigh impossible for us to comprehend. Upon careful inspection, this is not surprising, as comprehension is simply a form of analogy-making. We understand things when we can compare and contrast them with other things we are already familiar with. When we are presented with numbers that are very different from what we encounter in everyday experience, we have no way to make sense of them. Here are some examples:
Despite the beautiful pictures of galaxies you might have seen, the scale of the Universe is so vast that a galaxy is mostly empty space. For instance, the average density of our Milky Way galaxy — together with all its brilliant stars and planets — is conservatively estimated to be of the order of 1 kilogram for every billion cubic kilometers. If you imagine a box that spans a kilometer on every side, magnify its volume a thousand million (1,000,000,000) times — leaving a box 1,000 kilometers on a side — and add a single bag of potatoes into this box, that’s how empty our galaxy is. The space between galaxies is far emptier.
We estimate that there are 2 trillion galaxies in the observable universe. A trillion is a million million. To put that in context, a trillion is a number so large that removing a few million doesn’t make much of a difference — you’re still left with about a trillion. Estimates of stars per galaxy range from about 100 million in small galaxies to hundreds of billions in giants like our own, which adds up to as many as 1,000,000,000,000,000,000,000,000 stars in the observable universe. That’s a one followed by 24 zeros.
Okay, now here’s a fun fact: there are more molecules of water in a couple of sips than the total number of stars in the observable universe. All it takes is about three hundredths of a liter of water — roughly 30 grams, which at about 18 grams per mole works out to just over a trillion trillion molecules.
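The arithmetic is easy to check in Python (taking the high-end estimate of 10²⁴ stars as given, and treating water’s density as 1 gram per milliliter):

```python
AVOGADRO = 6.022e23       # molecules per mole
MOLAR_MASS_WATER = 18.0   # grams per mole, approximately

def water_molecules(milliliters):
    # Water's density is ~1 g/mL, so milliliters ≈ grams.
    moles = milliliters / MOLAR_MASS_WATER
    return moles * AVOGADRO

STARS = 1e24  # high-end estimate of stars in the observable universe
print(water_molecules(30) > STARS)  # → True: ~30 mL is enough
```

A single hundredth of a liter (10 mL) falls a little short of 10²⁴, which is why it takes three.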
Molecules are pretty average-sized in the bigger scheme of things. The Planck length is the theoretical minimum distance you can meaningfully speak of: present-day theory assigns no physical meaning to anything smaller. The Planck length is so small that if you were to magnify a particle spanning the width of a human hair (~0.1 mm) to the size of the observable universe, the Planck length magnified by the same factor would be as big as the particle’s original size (~0.1 mm).
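This claim, too, can be verified with a few lines of arithmetic. The constants below are standard approximate values (Planck length ≈ 1.6 × 10⁻³⁵ m; observable universe ≈ 93 billion light years, or ~8.8 × 10²⁶ m, across):

```python
PLANCK_LENGTH = 1.616e-35    # meters
HAIR_WIDTH = 1e-4            # meters, ~0.1 mm
UNIVERSE_DIAMETER = 8.8e26   # meters, ~93 billion light years

# Blow the hair-width particle up to the size of the observable universe...
magnification = UNIVERSE_DIAMETER / HAIR_WIDTH

# ...and the Planck length, magnified by the same factor,
# lands right back near the original hair width.
print(PLANCK_LENGTH * magnification)  # → ~1.4e-4 meters, i.e. ~0.1 mm
```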
And finally, humans have evolved on our planet for millions of years, right? It turns out that if you overlay the Earth’s entire history until today onto a 24-hour clock (midnight to midnight), single-celled life forms appeared at around 4 am, multicellular organisms appeared at around 5:30 pm, and all of human history spans the last two minutes before midnight.
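The clock analogy is a simple proportion. A quick sketch in Python, using the standard 4.54-billion-year age of the Earth and rough, assumed dates for each milestone (~3.8 billion years ago for single-celled life, ~1.2 billion for multicellular life, ~6 million for the hominin lineage):

```python
EARTH_AGE = 4.54e9  # years, the standard estimate

def to_clock(years_ago):
    # Map "years before the present" onto a midnight-to-midnight day.
    fraction_elapsed = (EARTH_AGE - years_ago) / EARTH_AGE
    hours, minutes = divmod(round(fraction_elapsed * 24 * 60), 60)
    return f"{hours:02d}:{minutes:02d}"

print(to_clock(3.8e9))  # single-celled life → "03:55", about 4 am
print(to_clock(1.2e9))  # multicellular life → "17:39", about 5:40 pm
print(to_clock(6e6))    # hominin lineage → "23:58", two minutes to midnight
```

Recorded human history (~5,000 years) is even more sobering: it rounds to the last tenth of a second of the day.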
A false dichotomy is a logical fallacy. You may be presented with alternatives A and B, when there is at least one other alternative C that is also available. You may then be presented with arguments that force you to choose either A or B, and reach some conclusion as a consequence. The danger lurking here is that unless you actively seek out additional alternatives, you may not easily notice that you’re falling into a trap.
In classical logic, the principle of explosion says that anything may be proven from a contradiction. In Latin, this idea goes by the phrase ‘ex falso [sequitur] quodlibet’ — from falsehood, anything [follows]. If wishes were horses, beggars would ride… a (vacuously) true statement, since wishes are not horses. And so, when you start with a false dichotomy, the conclusion you might arrive at doesn’t carry the force of logic: it may be true or it may be false; the argument itself is null and void.
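The principle of explosion is compact enough to state as a one-line theorem. Here is a sketch in Lean 4:

```lean
-- Ex falso quodlibet: a contradiction (P and not-P) proves any proposition Q.
theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```

Note that Q is completely arbitrary — the contradiction alone does all the work, which is exactly why an argument built on one proves nothing.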
In my experience, all dichotomies are false in the real world. In other words, there are always more options; you only have to look a little deeper. Often, the options presented are the extremities of reasonable ideas that actually span a spectrum of possibilities.
Society is having a hard time fact-checking politicians these days, although the roots of the problem are quite ancient. There is a natural human tendency to assign categories to things. It is very easy and very human to categorize knowledge into things you believe are true versus things you believe are false. This true-or-false dichotomy, like all others in the real world, is false. To put it in more colloquial terms, there is always some truth to what you label as false (and vice versa).
Here’s the catch: once you label something in your mind, you are psychologically less inclined to alter the label, even in light of new facts. The easiest way for someone to convince you of a falsehood is to club it together with something you already believe to be true. When you know a man to be virtuous, you’re far more likely to attribute his sins to factors outside of his control. Over time, these falsehoods build upon each other to create a warped worldview.
“The first principle is that you must not fool yourself, and you are the easiest person to fool.”
—Richard Feynman
Every individual who wishes to avoid this sad fate should try to relentlessly pursue the truth, using the principle of falsifiability or refutability expounded by Karl Popper back in 1934. Falsifiability is the capacity for a statement or hypothesis to be contradicted by evidence. Here’s one way of applying this idea: you should believe something to be true only if you’ve made a good faith attempt to prove that it is false, and failed to do so despite your best efforts. This is exemplified in the ‘black swan’ argument: if someone told you that all swans are white, you should look for colored swans — not white ones — to validate the truth of the statement. Look for evidence that disconfirms your belief.
In everyday situations, you might be faced with ideas that conflict with each other, some that you are more inclined to believe than others. As a relentless pursuer of truth, your objective should be to identify the most believable ideas, and look for evidence that these ideas are wrong. By critiquing the ideas you wish to believe, you make them more robust, understand the nuances and half-truths, and end up with something that is far closer to the truth than where you started. The truth will not make you feel warm and fuzzy, but it will be grounded in reality.
“-Ism’s in my opinion are not good. A person should not believe in an -ism, he should believe in himself.”
—Ferris Bueller, in ‘Ferris Bueller’s Day Off’
There are many forces in the world that polarize opinion, creating more of the same false dichotomies that we want to avoid in social discourse. Social media platforms have made it easy to reach millions of people and convey information — and misinformation — in real time, reinforcing the beliefs of opposing camps.
For instance, we hear debates about capitalism versus socialism (sometimes incorrectly equated with communism). Both of these are unrealistic models, figments of our imagination. In practice, every sovereign nation has to adopt policies that may be inspired by either model, or maybe something else altogether.
The real debates usually need to be on the merits and demerits of specific policies. People with ulterior motives are inclined to assign labels to these policies and associate them with existing factions. For instance, in a debate on government-sponsored healthcare, one side may be painted as being ‘socialist’, even though the actual debate is far more nuanced. Interestingly, it is easy to recognize this kind of labeling when it works against us, but when it works in our favor, we might be quick to assume it’s because we’re right.
That is the most dangerous time of all — when we are starting to believe we are right, but haven’t yet looked for evidence countering our belief.