
Determinists Are Playing Pretend

The irony of thought experiments that attempt to ground Determinist beliefs in physics is that they inadvertently stray into metaphysics to make their point. Consider the following classic by Laplace, known as Laplace's Demon:

Suppose there was a hypothetical intelligence (the "Demon") that knew the exact positions and velocities of all particles in the universe at a given moment, as well as the laws governing their interactions. Based on this information, it would be able to compute the past and future states of the universe with perfect accuracy, implying a fixed and predetermined future.

So what's wrong with this argument? You could take issue with the way it directly invokes an imaginary metaphysical entity, but insofar as the necessary knowledge exists in some form, so does the fixed future it implies... right? It's not unusual for a thought experiment to invoke the impossible to illustrate the inescapable, but the devil is in the details: if you invoke an impossible premise, you better make sure the cause of its impossibility isn't entangled with the very aspects of reality you're exploring -- a nontrivial problem when the inquiry concerns the fundamentals of reality. And so, while questioning the physical coherence of the Demon itself is moot, questioning the physical coherence of the Demon's knowledge seems on point.

It's obvious that perfect knowledge of the state of the universe is impossible in practice, but is it possible in principle? If we regard physics as an abstract, logical framework, does the concept of perfect knowledge arise naturally within this framework -- out of its own principles -- or is it an injection from outside, masquerading as part of physics by introducing itself in the framework's native language?

Well, if the framework is Classical Physics, then perfect knowledge fits right in, but for an entirely unhelpful reason: it arises as a corollary of perfect measurement. Classical Physics treats as a mere practical limitation not only the crudeness of the measuring instrument and its wielder, but also the need for the instrument to interact with the measured system at all. Measurement acts not as a physical process (idealized or otherwise), but as a pinhole view into a hidden realm of absolute truth: it's an expression of the ontological premise that the universe exists in a definite state, not just as an undifferentiated whole, but as a collection of interacting parts. Insofar as ontology falls under the umbrella of metaphysics, so does perfect measurement, and then so does perfect knowledge -- if the latter is derived from the former.

The problem can be demonstrated more pointedly using a thought experiment more grounded than one about a Demon who "just knows"; for example, the Billiard Table Analogy:

Imagine a billiard table where a cue ball is struck, setting other balls in motion. If you could measure the exact position, velocity, and forces acting on each ball with perfect precision, then, using the laws of physics, you could calculate their entire future trajectories with complete accuracy.

This thought experiment is intended to suggest that, in principle, the behavior of any system -- including the universe itself -- is fully determined by its prior state. If every particle's position and motion could be measured with sufficient precision at a given moment, then every future event would follow inevitably from these measured conditions, governed by physical laws.

The idealization of measurement seems harmless when applied to billiard balls, and the resulting conclusion seems independent from the scale of the system being analyzed. But unlike billiard balls, whose positions can be probed using elements from a vastly smaller scale (e.g. photons), particles, presumably being at the bottom of it all, can only be measured through interactions with elements on their own scale. A better analogy would be trying to determine the positions of the balls on the table by poking around with a cue stick in the dark and then attempting to pocket the balls. The universe has its own game, and when its rules are respected, victory becomes uncertain.

This is not merely a practical limitation, but a fundamental one stemming from the very nature of reality: knowledge implies measurement, measurement implies interaction, interaction implies disturbance and uncertainty. The exact state of the entire system is fundamentally unknowable from within itself, relegating any arguments based on perfect knowledge to the realm of metaphysics.

It's possible to imagine a Classical Physics framework that maintains ontological agnosticism, at least in regard to the knowability of a system's state, by incorporating uncertainty terms right into its formulations of basic physical laws: contemporary mathematics enables meaningful algebraic manipulation of such terms, propagating the modeled uncertainties from measurements, through basic formulas, to derived formulas and quantities (see Uncertainty Quantification, Stochastic Arithmetic etc.). Such a framework would be consistent with empirical observations and actual engineering practices, as well as varied ontological premises. However, this would rob Classical Physics of its mathematical elegance -- a consideration that no doubt played a part in shaping the subject into its current form.
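As a concrete sketch of what this looks like (the projectile-range formula and all the numbers are chosen purely for illustration), here is first-order propagation of measurement uncertainty through a classical prediction, rather than idealizing the measurement away:

```python
import math

def projectile_range(v, theta, g=9.81):
    """Classical, uncertainty-free prediction: horizontal range of a projectile."""
    return v**2 * math.sin(2 * theta) / g

def range_uncertainty(v, sigma_v, theta, sigma_theta, g=9.81):
    """First-order (Gaussian) propagation of measurement uncertainty through
    the same formula: sigma_R^2 = (dR/dv)^2 * sigma_v^2 + (dR/dtheta)^2 * sigma_theta^2."""
    dR_dv = 2 * v * math.sin(2 * theta) / g
    dR_dtheta = 2 * v**2 * math.cos(2 * theta) / g
    return math.sqrt((dR_dv * sigma_v) ** 2 + (dR_dtheta * sigma_theta) ** 2)

# Illustrative measurements, each carrying an explicit uncertainty term:
v, sigma_v = 20.0, 0.5            # launch speed [m/s], +/- instrument error
theta, sigma_theta = 0.60, 0.02   # launch angle [rad], +/- instrument error

R = projectile_range(v, theta)
sigma_R = range_uncertainty(v, sigma_v, theta, sigma_theta)
print(f"predicted range: {R:.2f} +/- {sigma_R:.2f} m")
```

Nothing here changes the underlying law; the prediction simply carries its own error bars instead of pretending they don't exist.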

Nevertheless, the idealized concept of measurement wasn't inspired solely by ontological a priori assumptions and aesthetics: predicting billiards may be a poor analogy for predicting the universe on the level of particles, but considered in its own right, it seems to demonstrate that determinism does arise on some level of abstraction, regardless of micro-level uncertainties; after all, skilled humans play billiards with great success, and machines may hone this craft to perfection. This is a very different kind of argument: instead of a logical demonstration from first principles, it relies on observations about emergent properties that are not at all apparent on the level of nano-scale reductionism.

Thus far we have merely inverted the situation: instead of questioning the incongruence between a deterministic theoretical framework and seemingly non-deterministic outcomes observed outside of the lab, we now have to question the incongruence between a non-deterministic theoretical framework and the seemingly deterministic outcomes observed inside the lab. But the Billiard Table analogy turns out to be particularly helpful here, for the way it resolves this tension: the predicted positions of the scattering balls may involve quantifiable uncertainties and smeared outcomes, but the game of pocketing the balls collapses them into discrete outcomes: you either pocket the ball or you don't; and given sufficiently large holes and sufficiently precise and conservative players, the outcome of each strike is as good as certain... that is, so long as the players don't start striking each other's elbows to spice things up.

Consider the case of a fair coin toss: the very shape of the coin ensures that uncertainties in the continuous variables describing its motion will collapse into a binary outcome -- either heads or tails -- but this plays the opposite role to the pockets in the billiard table: it makes the uncertainties directly observable and distinct. A scientist can try to refute this appearance of randomness by repeating the coin-tossing experiment under controlled conditions: he might construct a coin-flipping machine that performs the same flipping motion every time, in a closed laboratory free of external interference. Although he wouldn't be able to predict the spot where the coin comes to rest with perfect accuracy, he may adjust the granularity of the prediction until it matches a repeatable outcome: it will always land tails; or if not, it will always land within a certain radius; or at the very least, it will always land! And so one might say that even if determinism is not absolute, it is still effectively true if the predictions are made at the appropriate level of granularity.
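As a toy illustration (a caricature of a coin rather than a physical model of one), imagine the machine's spin rate and flight time as nominally fixed quantities carrying a small uncontrolled jitter; the binary outcome stays repeatable only while that jitter stays below the granularity of the outcome:

```python
import math
import random

def face_up(omega, t):
    """Deterministic toy model: which face shows depends only on how many
    half-turns the coin completes before it is caught."""
    half_turns = int(omega * t / math.pi)
    return "heads" if half_turns % 2 == 0 else "tails"

def machine_flip(jitter):
    """One flip of the machine: nominally identical spin rate and flight time,
    plus a small uncontrolled jitter in each."""
    omega = 43.0 + random.gauss(0.0, jitter)        # spin rate [rad/s]
    t = 0.55 + random.gauss(0.0, jitter / 100.0)    # flight time [s]
    return face_up(omega, t)

random.seed(0)
for jitter in (0.01, 2.0, 10.0):
    flips = [machine_flip(jitter) for _ in range(10_000)]
    print(f"jitter={jitter:>5}:  fraction heads = {flips.count('heads') / len(flips):.3f}")
```

With tight control the outcome is as good as certain; loosen the control and the same continuous uncertainties reappear as an apparently fair coin. The "refutation" of randomness lives entirely in the machine's ability to keep the jitter below the granularity of the prediction.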

Such an observation is compelling on one hand, absurd on the other: for all the palpable chaos happening on the surface of the coin, where countless micro-organisms are involved in a raging battle for survival, the outcome is still either heads or tails, and perhaps predictably so; but is the same not true of all our Earthly drama? Sooner or later, as cosmology predicts, the sun will swallow the Earth, and for a certain mindset, many of life's small uncertainties can be settled by this big apparent certainty. But this is not an Existentialist essay, so we can put this aside for now, while keeping in mind the relationship between apparent determinism and prediction granularity.

Going back to the Physicist's effort to control his experiment's conditions, consider some of the things that can go wrong, and the ways he can dismiss them: if the coin-flipping machine breaks down, it's an engineering problem. The assistant keeps placing the coin incorrectly? It's a problem of discipline, surely -- but not the discipline of Physics. Angry student protesters trash the lab? Sociology. A nuclear blast turns the entire city into dust? Politics. The Physicist considers such matters to be outside the scope of his field, leaving them to be studied by other fields that operate on higher levels of abstraction, but insofar as physical laws underlie them all, are they not expressions of the inherent uncertainty of physics, unexpectedly playing out in the course of an experiment intended to refute that very thing?

Indeed, such disruptions can be traced back to small but inherent uncertainties, amplified by large and complex systems -- especially chaotic ones. If even something as simple as a Double Pendulum setup has enough sensitivity to initial conditions to render its own future uncertain, what about an entire system of mutually-dependent, chaotic feedback loops: a body, a brain, or an entire society? Chaos provides a physical mechanism by which uncertain conditions on any level of granularity can ascend to higher levels, potentially indefinitely; it can take seemingly negligible unknowability, introduced down at the level where measurement breaks down, and promote it up to the level of world-altering consequences: a figurative flap of the butterfly's wings can not only cause a storm, but subsequently cancel a political rally, change the outcome of an election, trigger a global war. Inherent, fundamental unknowability plays out on every level: just as zooming in on the coin reveals chaos on one scale, so does zooming out from the lab reveal chaos on another scale.
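The double pendulum's equations of motion are tedious to write out, but the same sensitivity can be sketched with the Lorenz system, the toy model that gave the butterfly its wings. Here, two trajectories whose initial conditions differ by one part in a billion part ways completely within a short stretch of model time:

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """The Lorenz system: three coupled equations, famously chaotic."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, s, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = f(s)
    k2 = f(tuple(si + dt / 2 * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + dt / 2 * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # a "negligible" one-part-in-a-billion difference
dt = 0.01
for step in range(1, 3001):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
    if step % 500 == 0:
        gap = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
        print(f"t = {step * dt:4.0f}   separation = {gap:.2e}")
```

By the end of the run the two trajectories are as far apart as the attractor allows: the initial difference has been amplified by many orders of magnitude, which is exactly the mechanism by which negligible unknowability climbs the ladder of scales.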

Classical Physics omits fundamental uncertainties from its theory by postulating ideal measurement, but they never truly go away: instead, they graduate into discrepancies so large and distinct that, when they occur in the course of an experiment, they can be individually identified, and must be isolated or else postulated as irrelevant. In other words, determinism appears to hold only so long as you invalidate outcomes affected by unintended influences on the same level of granularity as the prediction. Thus the attempt to objectively establish an "emergent" determinism, by empirical means, falls flat: it exists only to the degree of the experimenter's ability to arrange it, and his willingness to ignore the inevitable failure of his arrangement.

So what to make of all this? Granted, this entire saga proves nothing to a Philosophical Determinist: he can still make the same ontological assertions that gave birth to Classical Physics, if he's honest enough to state them as such, and who can prove him wrong? Even when confronted with more direct evidence of non-determinism from Modern Physics, he can cite non-local hidden variable theories to dismiss the apparent randomness of quantum physics. However, he shouldn't wear the Empiricist Hat while doing so; nor should he pretend that his view stands above competing philosophies on account of "scientific" appeal: determinism is squarely in the realm of metaphysics, where it's at best on an equal standing with non-determinism, or at worst, in a weakened position, given its tensions with empirical observations (when those are taken at face value). As a corollary, those who prefer to maintain an empirical mindset when dealing with empirical questions should embrace unknowability as a fundamental feature of reality seen through the lens of science.

But what does "unknowability" imply? Is the future essentially random? Randomness itself is hard to define with both rigor and generality, but the core intuition seems to be that some situations have multiple possible outcomes, yet only one actual outcome, making randomness a deus ex machina that closes the gap between the two. The weakness of this explanation lies in the fact that it's difficult to define what it means for an outcome that gets ruled out in the future to have been "possible" in the first place. A simple resolution is to say that in this context, 'possible' is used in a specialized sense, to describe an outcome that can't be ruled out by the given knowledge. This frames randomness in epistemological terms, thereby connecting it to unknowability and justifying a carry-over of normal intuitions about randomness, along with the standard mathematical frameworks built around it, to the treatment of physical unknowability. On the other hand, that also robs it of its ontic power to bridge the aforementioned gap.

In fact, one of the stronger arguments for determinism is precisely that this gap is unbridgeable, but appears to be there solely due to epistemological limitations. This allows determinism to creep back in as a "common sense" (if metaphysical) resolution to a metaphysical doubt inherently aroused by this otherwise empirical perspective. But there is another way to smooth over, if not conclusively eliminate, the issue:

Consider the analysis of chaotic systems: let T0 be the time of initial condition measurement, and T1 be the time of a predicted future state; as the interval between T0 and T1 increases, so does the uncertainty of the prediction; but conversely, as it diminishes, so does uncertainty -- this is true whether you move T1 back towards T0, or T0 forward towards T1, i.e. continuously re-measure and refine your predictions as you approach T1. The uncertainty diminishes gradually along with the time interval.
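The same behavior can be sketched with the logistic map standing in for any chaotic system: an ensemble of predictions, seeded within the measurement error at T0, spreads out as the horizon T1 - T0 grows, and shrinks as the horizon shrinks (all parameters here are illustrative):

```python
import random

def tick(x, r=3.9):
    """One step of the logistic map, a standard stand-in for a chaotic system."""
    return r * x * (1.0 - x)

def prediction_spread(x_true, horizon, meas_error=1e-6, ensemble=500):
    """Spread of possible states at T0 + horizon, given that the measurement
    at T0 pins the true state down only to within +/- meas_error."""
    outcomes = []
    for _ in range(ensemble):
        x = x_true + random.uniform(-meas_error, meas_error)
        for _ in range(horizon):
            x = tick(x)
        outcomes.append(x)
    return max(outcomes) - min(outcomes)

random.seed(1)
x_true = 0.3141
for horizon in (5, 10, 20, 40):
    spread = prediction_spread(x_true, horizon)
    print(f"T1 - T0 = {horizon:3d} steps   spread of predictions = {spread:.2e}")
```

Re-measuring as T1 approaches is simply the act of shrinking the horizon, and with it the spread; there is no single step at which "many possibilities" become "one".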

In the same way, we observe "the gap" when considering the possible outcomes for an interval of time into the future; but insofar as the flow of time is continuous, there is no single moment where multiple possibilities suddenly collapse into one; instead, the degree of unknowability diminishes continuously, with one particular possibility gradually gaining favor over the rest, until they fade away and only it remains as a given present.

If the matter still feels unresolved, we must go back to a crucial statement made at the beginning:

"[In Classical Physycs] measurement acts not as a physical process, but as a pinhole view into a hidden realm of absolute truth: it's an expression of the ontological premise that the universe exists in a definite state, not just as an undifferentiated whole, but as a collection of interacting parts."

Perhaps the distinction between an undifferentiated reality and an atomized one makes more sense now: insofar as the unknowability being discussed arises from the nature of measurement -- in particular, its interactive aspect -- it's ultimately a product of breaking down reality into many parts that interact. If so, the source of unknowability truly is epistemic at the core: it may be that reality as a whole does exist in a definite state, but it's neither precisely expressible in reductionist terms, nor wholly graspable by the limited beings that arise within it.

In conclusion: it is what it is. The axiom "A=A" is taken to be the most fundamental truth; yet when we ask, philosophically or scientifically, "...but what is A, really?", we naturally, yet somewhat paradoxically, expect A to be something other than just itself. In practice, the question asks to express A in terms of B, C, D... in other words: it's about how A relates to other things. When we ask "what is the universe?" -- there being no other things beside it -- we are asking: "how does the whole relate to its parts?", or equivalently: "how do the parts relate to each other?". The Axiom of Identity, for all its profoundness, seems to function only to stop us short of an infinite regress, or short of the discovery that, in and of themselves, the myriad elements we invent are nothing beyond how they relate to each other. Thus, we trick ourselves with our own questions.

Reflections/Afterthoughts

A potential weakness of this essay lies in the fact that determinism, randomness, uncertainty and unknowability can all be interpreted as either ontological or epistemological. This is alluded to, but generally left to the reader to make the necessary distinctions based on context. This approach stems from the way ontology, epistemology and empiricism become entangled in the subject matter, even though they are logically distinct in principle. It's a reflection of Determinism as commonly seen "in the wild" (rather than in the Lecture Hall), and so reinforcing the distinctions seems artificial and unproductive. Likewise, Unknowability, despite being an epistemological conclusion, is ontologically suggestive of irreducibility (at least more directly than of non-determinism).

What does this all imply for a purely ontological determinism? That depends on how it came about.

It should be acknowledged that this stance usually arises out of intuitions developed through experience. Humanity tends to control its environment, compartmentalize the unpredictable, and become conceptually preoccupied with the predictable; perception has an innate bias for order and patterns. Hence the formation of determinist intuitions starts long before philosophical and scientific thought enters the picture. The mentioned tendencies are crystallized in Classical Physics and its experimental traditions, giving rise to thought experiments like Laplace's Demon, which result in the elevation of otherwise subjective and highly conditional intuitions into what seems like a scientifically credible universal law.

From this angle, philosophical determinist ontologies stand in roughly the same relationship to direct experience as the ontological assumptions of physics do to empirical evidence. Physics has an interesting relationship with ontological assumptions: theory uses them in a role akin to axioms, and those are not subject to direct falsification. However, such assumptions are inspired by empirical observations, so they can be discredited, or at least weakened, by a more adequate framing of those observations. It would be wise for a philosophical Determinist to take this (admittedly imperfect) analogy to heart, especially if he subscribes to a mechanistic and atomized view of reality inspired by science.

There's also a more direct attack, although one more limited in scope: In order to meaningfully assert that the future is uniquely determined by laws as such, one has to maintain that the laws are possible to conceptualize, at least in principle; since no laws can be conceptualized in terms of the universe as a whole, one is always stuck with reductionism, putting this stance at odds with the ontological irreducibility implication of Unknowability. From this weakened position, one can only retreat by dropping the requirement that there be any conceivable laws. This means embracing a holistic concept of a universe that organically evolves, and so Determinism loses its mechanistic flavor, profoundly changing the way it interacts with other philosophical questions.

To summarize, in a certain sense, trying to make the present determinate (what is A, really?) makes the future indeterminate, while trying to make the future determinate makes the present indeterminate (A is just A).

By this point, you may be wondering why Quantum Mechanics is barely mentioned. The essay makes all kinds of allusions to it, including stupid puns, but it's never explicitly utilized to prove a point. This is a deliberate choice. As noted previously, people like to compartmentalize, and separating the "quantum world" from the "classical world" by invoking quantum decoherence is as easy as it gets. Furthermore, the Measurement Problem is normally presented in a way that makes it seem like a peculiarity specific to Quantum Mechanics -- to be taken as some insight into "the quantum nature" of particles. I think the Measurement Problem is a specific instance of a more general principle: reductionist methods produce epistemological artifacts. The conclusion regarding nondeterminism being an epistemological artifact may very well apply to QM. That would technically be a kind of "non-local hidden variables theory", albeit one that comes about naturally through known unknowns, instead of postulating artificial unknown unknowns.