A rather well-known fact of fundamental physics is that when it comes to explaining what the stuff of our Universe is made of, there are a few different ‘right’ answers that cannot be reconciled with each other. Most non-physicists likely don’t know about these, and likely don’t care. But it really is a fascinating topic to mull over informally, coffee in hand.

Broadly speaking, we have three theoretical frameworks that describe our world very accurately — that is, better than any of their predecessors.

#### Special Relativity

First, we have **special relativity**, Einstein’s brainchild of 1905. What this theory postulates is that there is an ultimate ‘speed limit’ that everything in our Universe adheres to, that we’ll call c.

This runs counter to intuition, because we typically expect the speeds of objects along the same axis to be additive: if Tom and Jerry are traveling in opposite directions at 10 m/s, from the perspective of Tom, Jerry is traveling at 20 m/s away from him. It turns out this *can’t* be true as you approach the speed limit c. For instance, if Tom and Jerry are traveling in opposite directions at 90% of c, then common sense would say that Tom should observe Jerry flying away at 180% of c, but in this case, common sense would be wrong. What actually ends up happening is that Tom still sees Jerry traveling at a little over 99% of c, which is certainly faster than 90%, but still less than the speed limit.
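
The rule behind Tom and Jerry’s numbers is the relativistic velocity-addition formula, u′ = (u + v)/(1 + uv/c²). Here’s a quick sketch in Python (the function name is mine; speeds are written as fractions of c, so 0.9 means 90% of the speed of light):

```python
def combine_speeds(u, v):
    """Relative speed of two objects moving apart at u and v (fractions of c).

    Classically this would be u + v; relativity divides by (1 + u*v/c^2),
    which in units of c is simply (1 + u*v).
    """
    return (u + v) / (1 + u * v)

# Everyday speeds: the correction is negligible, so plain addition "works".
print(combine_speeds(0.00001, 0.00001))  # ~0.00002, as intuition expects

# Tom and Jerry at 90% of c each: not 1.8c, but just over 0.994c.
print(combine_speeds(0.9, 0.9))

# However close the inputs get to c, the result never reaches 1 (i.e. c).
print(combine_speeds(0.999, 0.999))
```

Notice that the denominator is what saves the speed limit: it only matters when u·v is an appreciable fraction of c².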

The consequences of having this speed limit are huge. We end up with so-called *relativistic effects* such as the shrinking of space and slowing down of time, as nature assiduously works to enforce the speed limit. These effects combine in just the right ways to make the math work, and they’ve been experimentally verified to a high degree of precision.

Incidentally, light happens to *always* travel at speed c in empty space. This speed is exactly equal to 299,792,458 m/s, which is pretty fast.

A footnote in the theory of special relativity is the famous equation:

E = mc^2

This equation basically says that the stuff we see around us can be converted into energy, and energy can be converted back into stuff.
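
To get a feel for the magnitudes involved, here is the arithmetic for a single gram of mass (the TNT comparison uses the standard convention of 4.184 × 10¹² J per kiloton):

```python
c = 299_792_458            # speed of light in m/s (exact by definition)
m = 0.001                  # one gram, expressed in kilograms

# E = m * c^2: even a tiny mass corresponds to an enormous energy.
energy_joules = m * c ** 2
print(f"{energy_joules:.3e} J")   # roughly 9e13 J

# For scale: about 21 kilotons of TNT equivalent.
print(energy_joules / 4.184e12)
```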

#### General Relativity

Despite the name, it’s best to think of **general relativity** as a completely different beast. Presented (again) by Einstein in 1915, this theory explains gravity. While Newton’s theory of gravity provided equations that worked quite well, Einstein’s theory went much further, by not only providing more accurate equations, but also *explaining* gravity as the consequence of the geometry of space-time.

Just like special relativity, there is an element of counterintuitiveness to this idea. For instance, we might normally think of objects interacting with each other in a vast, unbounded, otherwise empty space, an idea that is traditionally called *absolute space*. Not so, says Einstein — space and time emerge as a consequence of stuff in the Universe. This is pithily explained by the following idea: imagine that you somehow get rid of everything in the Universe — stars, planets, galaxies, dust. Conventional wisdom says you’d be left with vast, empty space, whereas according to Einstein, space and time would disappear with everything else.

This might take a while to sink in, and I find it helpful to visualize it as in the picture below. If our Universe had *nothing* but the Earth in it, imagine the Earth being surrounded by a ‘halo’ or invisible bubble of space-time. This isn’t part of something larger; this *is* the Universe. You could travel to anywhere within this bubble, but there are a finite number of places you could go to. As a result, if you picked a direction and kept going, you would eventually end up where you started.

What’s important to note here is that it is meaningless to consider what happens when you step ‘outside’ of the bubble, as the bubble *is* the entirety of space-time.

Now you can imagine what happens when there’s more stuff in the Universe. Let’s say you have both the Sun and the Earth. Each of these has its own bubble, but these bubbles are also *connected* to each other. Both the Earth and the Sun warp the shape of the bubble around each other, and this interaction leads to the dynamic motion of these objects. What we observe as a ‘force’ between the Earth and the Sun is really just objects traveling in straight lines through the curved geometry of the bubbles, like a strange pair of yo-yos bouncing around. Of course, there is much more to the geometry of space-time and our Universe may not be finite after all, but this is a good start.

General relativity plays by all the rules set by special relativity. In particular, there is a limit to how fast information can travel through space-time. If the Sun were to magically disappear tomorrow, general relativity says it would take us about 8 minutes to find out that something was wrong, the same time it takes for the Sun’s light to reach us every day.
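
The 8-minute figure is just the average Earth–Sun distance divided by c:

```python
AU = 149_597_870_700       # meters: the astronomical unit (IAU definition)
c = 299_792_458            # speed of light, m/s

# Light (or a change in gravity) covers one AU in about 499 seconds.
seconds = AU / c
print(f"{seconds:.0f} s, i.e. about {seconds / 60:.1f} minutes")
```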

General relativity has one striking limitation: nothing in its equations prevents stuff from getting squeezed into a tiny little point that keeps getting compressed *ad infinitum*. When you play this out, the equations of general relativity lead to absurd results. Such a tiny, infinitely dense point is called a *singularity*. Common wisdom is that we end up in this theoretical situation simply because general relativity is incomplete — it doesn’t try to describe what happens at the scale of the *very small*.

#### Quantum Physics

From general relativity, we now come to what (we believe) happens at the scale of the *very small*. Here, we’re talking not just about atoms and molecules that make up the stuff around us, but even smaller particles. For instance, most of the atom’s mass is concentrated in its nucleus, which is about a hundred thousand times smaller than the atom itself. That, according to one source, is comparable to a “pea in the middle of a racetrack”. The nucleus, in turn, is *enormous* compared to some other particles and distances that we end up worrying about in particle physics.
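
The “hundred thousand times” figure comes from comparing typical textbook scales (order-of-magnitude values, not exact measurements):

```python
atom_radius    = 1e-10   # meters: about one angstrom, a typical atomic size
nucleus_radius = 1e-15   # meters: about one femtometer, a typical nuclear size

# The ratio of the two scales: roughly 100,000.
print(atom_radius / nucleus_radius)
```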

Since the 1920s, a steady stream of theories, experiments and interpretations has emerged that I’ll collectively term “quantum theories”. Unsurprisingly, it was Einstein yet again who kicked off this revolution in 1905, when he published a paper that explained the *photoelectric* effect. The details of this effect are simple and fascinating, but we will jump right to the conclusion, which is this: light is made up of discrete *particles*, or small bundles of energy.

This, in itself, may not seem very impressive, but it was revolutionary when you consider that over the previous few decades, scientists had *finally* reached the conclusion that light behaves like a *wave* rather than a bunch of particles. A particle moves in straight lines from one point to another; but a wave dissipates through a medium and spreads out. Although Newton had originally propounded the *corpuscular* theory of light (i.e. light as particles), Thomas Young, in 1801, had experimentally shown that light demonstrated typical wave-like behaviors. With his *double-slit experiment*, Young showed that light ‘waves’ interfere with each other just like waves you might see in water. Einstein’s explanation, while perhaps obvious in retrospect, upended this established belief.

So is light a particle or a wave? Today, more than a hundred years after the early days of the quantum, *we have absolutely no idea*.

A central pillar of quantum physics is the so-called **standard model**, which is a model of particle physics that has been refined over the years. The standard model is just as much a theoretical failure as it is an experimental success. I like to think of it as a complex computer program. Given the right inputs, it spits out amazingly accurate answers, but it doesn’t deign to offer any explanations.

For instance, the standard model predicts a variety of particles with specific properties, and almost all of these have been experimentally detected. It tells us what kinds of particle interactions to expect with what likelihood, and again, experiments have verified these predictions.

On the other hand, consider this: we have a law that says that the *electron number* is conserved across all particle interactions. There are a couple of particles (one of them being the electron) that have a non-zero electron number, and when particles interact, the electron number before and after the interaction always remains the same, even if particles transform from one to another. Why? Nobody knows. Quantum physics is full of many such strange rules that are unsupported by common sense explanations.

Despite all this abstruseness, here’s what we know: most types of quantities in nature are *quantized*, which means they come in discrete increments just like light. At the same time, the way these quanta propagate and interact with each other resembles the behaviors of waves. Just as a quantum of light (aka, *photon*) behaves like a wave, so does an elementary particle like the electron. This idea has been enshrined in the notion of *wave-particle duality*, which asserts that all known matter exhibits both wave-like and particle-like behavior. Only at the *time* of measurement (there is much debate on what this actually means), and depending on the *kind* of measurement being attempted, does the wave-particle duality resolve to either wave-like or particle-like behavior.

It’s worth noting at this point that a lot hinges on the kind of measurement being performed. No one has *ever* observed an electron (say) ‘smeared’ out across multiple points (that’s not how it works), but the probability of discovering an electron at a particular position is determined by its corresponding wave representation. When you do finally find that electron, it shows up at one definite point, with a likelihood given by that wave. But if you try to pin down the *exact* position of that electron, you’ll find that it goes into a frenzied rage (metaphorically speaking), making it harder to perform other kinds of measurements on the same electron. Quantum physics forces you to prioritize the kinds of measurements you truly care about, and correspondingly makes other kinds of measurements harder. The net effect is that there’s always a certain minimum amount of uncertainty across specific pairs of related properties, such as position and momentum.
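
That trade-off is the Heisenberg uncertainty principle, Δx · Δp ≥ ħ/2. A quick sketch of what it implies for an electron confined to atomic scales (function and variable names are mine):

```python
hbar = 1.054_571_817e-34   # reduced Planck constant, in joule-seconds

def min_momentum_uncertainty(dx):
    """Smallest possible momentum spread (kg*m/s) for a position spread dx."""
    return hbar / (2 * dx)

# Pin an electron down to the size of an atom (~1e-10 m) and see how
# fuzzy its velocity becomes as a result.
dp = min_momentum_uncertainty(1e-10)
m_electron = 9.109e-31     # electron mass, kg

# Works out to a few hundred kilometers per second, ~0.2% of c.
print(f"minimum velocity spread: {dp / m_electron:.0f} m/s")
```

The same arithmetic explains why the effect is invisible for everyday objects: replace the electron’s mass with a kilogram and the velocity spread becomes absurdly tiny.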

From a practical standpoint, you can think of waves as a mathematical tool for figuring out the likelihood or probability of a specific outcome, like finding a particle in a particular place. These probabilities have turned out to be experimentally correct, but no one can explain why, or what it actually means.

#### Collision Course

We now come to the most interesting part of the story, which is the collision — not of particles — but of these theoretical frameworks.

General relativity, as you may recall, doesn’t have a notion of *quantized* space, that is, some limit of how tiny an increment of space can be. Many physicists believe that space must be quantized, but no one really knows how.

Quantum physics has been successfully refined to take special relativity into account. But no one has figured out how to take general relativity into account as well. One problem with quantum theories is that they are currently *background-dependent*. What this means is that the interaction of particles and forces is assumed to be occurring in the backdrop of absolute space. General relativity successfully did away with absolute space, as you may recall, but only in the context of gravity. One would expect a true description of reality to be self-sufficient in its explanatory power, just like general relativity.

Another unexplained phenomenon is that of *non-locality*. The gist of this idea is that when particles (or rather, their wave representations) interact, they can get ‘entangled’ with each other according to the rules of quantum physics. When this happens, the measurements performed on one particle influence the measurements performed on the other, *even when they are far apart*. This effect appears to be instantaneous, and we have no explanation for it. At first glance, one might posit that there is some information hidden away in the particles (or waves) themselves that is revealed only at the time of measurement. But this hypothesis of so-called *local hidden variables* has been ruled out: in 1964, John Stewart Bell showed, through an amazing feat of statistical ingenuity, that it leads to testable predictions, and subsequent experiments have disproven it. The best explanation we can come up with is that somehow, somewhere, the Universe keeps track of the relationships between these entangled particles, no matter where they are.
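
Bell’s insight is often stated via the CHSH form of his inequality: any local-hidden-variable theory caps a particular combination of correlations at 2, while quantum mechanics predicts values up to 2√2. A sketch of the quantum prediction for entangled spins (the correlation formula and angle choices are the standard textbook ones):

```python
import math

def correlation(a, b):
    """Quantum-mechanical correlation E(a, b) = -cos(a - b) for entangled
    spins measured along directions at angles a and b."""
    return -math.cos(a - b)

# The standard angle choices that maximize the violation.
a1, a2 = 0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# The CHSH combination: local hidden variables require S <= 2.
S = abs(correlation(a1, b1) - correlation(a1, b2)
        + correlation(a2, b1) + correlation(a2, b2))
print(S)   # 2*sqrt(2) ~ 2.828, exceeding the classical limit of 2
```

The gap between 2 and 2√2 is exactly what the experiments measured, coming down firmly on the quantum side.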

And finally, we have the problem of probability. Now, one might argue that there is nothing wrong with the Universe being a little random in its tastes, but the problem is even more subtle. If you recall the *double-slit experiment*, one way it might be conducted is to shine a light through two closely spaced slits and observe an interference pattern at the screen. This is easily explained by describing light as waves: wherever the waves combine with each other, you get a bright line, and wherever the waves cancel each other out, you get a dark stripe. But here’s the rub: if you perform the same experiment while carefully sending through a single quantum of light (aka, a photon) at a time, you still end up with an interference pattern.

Each photon, being a discrete particle, lands on exactly one spot on the screen. But the *probability* of the photon landing on the bright line is much higher than the probability of it landing on the dark stripe. So as you keep sending photons through, you can observe the usual interference pattern build up over time. For this to be possible, though, each photon *must* somehow ‘be aware’ of where the other photons are going to be found, or have been found. Again, it appears the Universe is keeping track of the overall distribution of these photons over time, but nobody can explain how or why this is the case.
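
This build-up of fringes from individual, random landing spots can be mimicked with a toy simulation: draw each photon’s position from the wave’s intensity and watch the pattern accumulate. (The cos² intensity profile here is an idealization; real fringe spacing depends on the slit separation and wavelength.)

```python
import math
import random

random.seed(0)

def photon_position():
    """One photon's landing spot on a screen spanning [0, 4), drawn from
    an interference intensity proportional to cos^2(pi * x)."""
    while True:
        x = 4 * random.random()
        # Rejection sampling: accept x in proportion to the wave intensity.
        if random.random() < math.cos(math.pi * x) ** 2:
            return x

# Send photons through "one at a time"; each lands at a single point.
hits = [photon_position() for _ in range(20_000)]

# Crude text histogram: bright fringes pile up near integer positions,
# dark gaps near half-integers.
bins = [0] * 40
for x in hits:
    bins[int(x * 10)] += 1
for i, n in enumerate(bins):
    print(f"{i / 10:.1f} {'#' * (n // 25)}")
```

No single photon ‘knows’ about the pattern, yet the statistics reproduce it, which is exactly the puzzle described above.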