Preprint Article. This version is not peer-reviewed.

Entropy, Computability, and the Observer: From the Halting Problem to Quantum Erasure

Submitted: 21 April 2025. Posted: 22 April 2025.


Abstract
We propose an observer-based extension of computability theory, demonstrating that undecidability is not an absolute property but depends on an observer's access to the structural evolution of computation. Using a time-dependent refinement of Kolmogorov Complexity, \(K_t(P(x))\), we define an external observer O who tracks the compression behavior of a program over time, rather than simulating its internal logic. This enables a three-valued halting classification function \(H_O\) that outputs Yes, No, or Undecided based on the asymptotic behavior of \(K_t\). This work proposes a paradigm shift in the interpretation of undecidability, reframing the Halting Problem not as an absolute computational boundary, but as a dynamic, observer-relative classification problem grounded in structural convergence. This extension is logically consistent and fully compatible with classical results [4,6], while enabling a new class of entropy-aware, observer-based inferences. The framework aligns computability theory more closely with developments in statistical physics [8], quantum information [3], and complexity theory [7], where inference arises not from deductive closure, but from entropy flow and observer-accessible regularity. We conclude by outlining implications for the foundations of logic, artificial intelligence, and quantum measurement, and by establishing a conceptual bridge between logical undecidability and entropy geometry as formalized in the TEQ framework [11,13].

1. Introduction

The Halting Problem, first articulated by Alan Turing in 1936 [4], establishes that there exists no total algorithm \(H(P, x)\) that, for every Turing machine P and input x, decides whether \(P(x)\) halts. This undecidability result is derived via a diagonalization argument: assuming the existence of such an algorithm leads to contradiction when applied to a self-referential construction.
Formally, let us suppose \(H : \Sigma^* \times \Sigma^* \to \{0, 1\}\) is a total function such that \(H(P, x) = 1\) if \(P(x)\) halts, and \(H(P, x) = 0\) otherwise. Then define the function D by:
\[
D(P) = \begin{cases} \text{loop forever}, & \text{if } H(P, P) = 1,\\ \text{halt}, & \text{if } H(P, P) = 0. \end{cases}
\]
Applying H to D yields a contradiction: \(H(D, D) = 1\) implies \(D(D)\) does not halt, and vice versa. Hence, no such universal halting decider H can exist.
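The diagonal construction can also be phrased as a short program sketch. The decider H below is a hypothetical stub, since Turing's theorem says no such total function can exist; the code is ours and serves only to make the self-referential structure of the proof explicit.

```python
def H(P, x):
    """Hypothetical total halting decider: returns True iff P(x) halts.
    Turing's theorem rules out any such function; this stub only marks
    where it would sit in the construction."""
    raise NotImplementedError("no total halting decider exists")

def D(P):
    """Diagonal program: do the opposite of what H predicts for P on itself."""
    if H(P, P):          # H claims P(P) halts...
        while True:      # ...so D(P) loops forever,
            pass
    else:                # H claims P(P) loops...
        return           # ...so D(P) halts.

# Feeding D to itself yields the contradiction:
# H(D, D) == True  would imply D(D) loops forever, so H was wrong;
# H(D, D) == False would imply D(D) halts, so H was wrong again.
```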
While definitive, this proof implicitly assumes a closed, self-contained system in which computation occurs in isolation. The classical model excludes any external perspective capable of tracking how a program’s output unfolds over time. In contrast, many modern frameworks—from quantum physics to machine learning—emphasize the role of observation, environmental coupling, and evolving uncertainty in shaping what can be inferred [3,14].
This paper proposes an extension of computability theory by introducing an external observer O, who does not simulate or analyze \(P(x)\) internally, but instead monitors the evolution of a structural quantity: the time-dependent Kolmogorov Complexity \(K_t(P(x))\). This function captures the shortest program that can reproduce the observable behavior of \(P(x)\) up to time t. The complexity may increase, stabilize, or fluctuate, mirroring the program's structural dynamics.
By classifying the asymptotic behavior of \(K_t(P(x))\), the observer can infer whether the computation halts, continues indefinitely, or remains unresolved within the observation horizon. Crucially, this process avoids diagonalization and makes no assumption of total logical access to the program's internal code.
We show that this observer-based approach remains logically consistent with classical results, while offering a more empirically grounded and entropy-sensitive account of undecidability. In this framing, halting becomes not an absolute property of a program, but a relational property between the program and a finite observer, analogous to how measurement outcomes in quantum systems depend on observer-accessible entropy flow [5,12].
What follows is a formalization of this observer-relative approach. We begin by reviewing Kolmogorov Complexity and its tight connection to the Halting Problem [2,6]. We then show that any attempt to define a self-referential complexity function leads to contradiction, motivating the introduction of an external observer. This gives rise to a time-dependent formulation \(K_t(P(x))\), and to a logically consistent, three-valued halting classifier \(H_O\) that infers stabilization trends without logical paradox.
This framework reinterprets undecidability not as a hard limit to inference, but as an emergent property of bounded resolution geometry. When seen externally—via an observer tracking the evolution of structural complexity—halting becomes an entropy-mediated resolution problem, not a syntactic absolute. This shift preserves classical results while opening a path toward a more dynamic, inference-based theory of computability aligned with both empirical science and the Total Entropic Quantity (TEQ) framework [13].

2. Part I: Formal Limits, Structural Complexity, and the Observer

2.1. Kolmogorov Complexity and the Halting Problem

Kolmogorov Complexity provides a formal, machine-independent measure of the information content of a finite binary string. Let U be a fixed universal prefix-free Turing machine. The (prefix) Kolmogorov Complexity of a string \(x \in \{0,1\}^*\) is defined as:
\[
K_U(x) = \min \{\, |p| : U(p) = x \,\},
\]
where \(|p|\) is the length of the binary program p, and the minimum is taken over all programs that halt and output x on machine U [1,2]. In what follows, we drop the subscript and write \(K(x)\), with the understanding that complexity is defined relative to a fixed universal machine.

2.1.1. Basic Properties of Kolmogorov Complexity

We briefly recall several key properties relevant to this paper:
  • Invariance Theorem (Universality): For any two universal prefix-free machines U and \(U'\), there exists a constant c such that:
    \[ |K_U(x) - K_{U'}(x)| \le c \quad \text{for all } x. \]
    This allows us to treat \(K(x)\) as machine-independent up to an additive constant.
  • Incompressibility: A string x is algorithmically random if \(K(x) \ge |x| - c\) for some constant c. Such strings exhibit maximal structural unpredictability [6].
  • Uncomputability: The function \(K(x)\) is not computable. That is, no total recursive function \(f(x)\) exists such that \(f(x) = K(x)\) for all x. This follows from the fact that computing \(K(x)\) would solve the Halting Problem.
  • Upper Semi-Computability: While not computable, \(K(x)\) is upper semi-computable: there exists a total recursive function \(\phi(x, t)\) such that:
    \[ \phi(x, t+1) \le \phi(x, t), \quad \text{and} \quad \lim_{t \to \infty} \phi(x, t) = K(x). \]
    This reflects our ability to approximate \(K(x)\) from above by exhaustively simulating candidate programs and updating the shortest found so far; a minimal numerical sketch of this idea follows the list.
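The last property admits a crude but runnable illustration. Off-the-shelf compressors yield computable upper bounds on \(K(x)\) (up to an additive constant), and taking the minimum over increasing effort levels produces a non-increasing approximation sequence in the spirit of \(\phi(x, t)\). The sketch below rests on that analogy, with zlib compression levels standing in for search effort; it is not a computation of \(K\) itself, and the function name `phi` is ours.

```python
import zlib

def phi(x: bytes, t: int) -> int:
    """Computable upper bound on K(x), indexed by effort level t.

    zlib levels 1..min(t, 9) stand in for increasing search effort; taking
    the minimum over levels guarantees phi(x, t+1) <= phi(x, t). A true
    upper semi-computation would instead dovetail over all programs."""
    return min(len(zlib.compress(x, level)) for level in range(1, min(t, 9) + 1))

if __name__ == "__main__":
    x = b"ab" * 500                                        # highly regular string
    bounds = [phi(x, t) for t in range(1, 10)]
    assert all(a >= b for a, b in zip(bounds, bounds[1:]))  # non-increasing
    print(bounds)                                          # bounds shrink or plateau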

2.1.2. Connection to the Halting Problem

The uncomputability of Kolmogorov Complexity is tightly linked to the undecidability of the Halting Problem. Determining whether a program p is the shortest description of a string x requires verifying:
  • that p halts on U,
  • that \(U(p) = x\),
  • and that no shorter program halts and yields the same output.
The third step requires solving the Halting Problem for all programs of length less than \(|p|\). Thus, any function that computes \(K(x)\) exactly would yield a total halting decider, violating Turing's result [4].
In this sense, Kolmogorov Complexity embeds the Halting Problem. It reveals that information-theoretic and computational undecidability are not separate limits, but two manifestations of a deeper constraint: the impossibility of self-contained structural resolution.

2.1.3. Toward an Observer-Relative Reformulation

Despite its uncomputability, \(K(x)\) can often be approximated in practice via compression techniques or empirical bounds [7]. This motivates a shift in perspective: instead of attempting to resolve halting from within a system, we define an external observer O who monitors the evolution of descriptive complexity over time.
This leads to the concept of time-dependent Kolmogorov Complexity \(K_t(P(x))\), which captures the minimal description length required to reproduce the execution trace of a program up to time t. The observer does not simulate or analyze the source code of P, but tracks how the compressibility of its trace changes over time.
This defines an empirical, resolution-based theory of undecidability—one in which halting behavior is inferred from the stabilization or divergence of complexity trends, rather than deduced from internal logic. It aligns the theory of computation more closely with physical models in which inference is bounded by entropy, time, and access [8,13].
In this way, we move from a static view of undecidability to a dynamic, observer-relative theory of computation—one that respects classical limits but reframes what those limits mean when systems are embedded in time and entropy geometry.

3. Defining the Self-Referential Complexity Function

To clarify the foundational limits of internal computability, we now consider whether a function could determine or even reference its own Kolmogorov Complexity. We show that any such self-referential construction leads to contradiction, reinforcing the need for observer-external mechanisms.

3.1. Assumption: Existence of a Self-Referential Complexity Function

Assume, for contradiction, that there exists a total function \(K_S(x)\) such that for each input x, \(K_S(x)\) gives the length of the shortest program that produces x, where the program is permitted to reference the value of \(K_S(x)\) during execution. Formally:
\[
K_S(x) = \min \{\, |P| : U(P) = x \text{ and } P \text{ references } K_S(x) \,\}.
\]
This construction implies a circular dependency: the program P must contain or access the value of \(K_S(x)\), even though this value is defined in terms of the length of P itself.

3.2. Contradiction via Minimality Violation

Define a program \(P_S\) that attempts to compute and output the value \(K_S(x) + 1\). That is:
\[ U(P_S) = K_S(x) + 1. \]
Assume \(P_S\) is the shortest such program. Then, by the definition of Kolmogorov Complexity:
\[ K(U(P_S)) \le |P_S|. \]
But since \(U(P_S) = K_S(x) + 1\), and \(K_S(x)\) is defined as the minimal length of any program that outputs x, it follows that:
\[ K(U(P_S)) \ge K_S(x) + 1. \]
Combining both inequalities:
\[ K(U(P_S)) \le |P_S| \quad \text{and} \quad K(U(P_S)) \ge K_S(x) + 1, \]
so:
\[ |P_S| \ge K_S(x) + 1. \]
Yet by assumption, \(P_S\) is the shortest program that outputs \(K_S(x) + 1\), so:
\[ |P_S| = K(K_S(x) + 1). \]
This implies:
\[ K(K_S(x) + 1) \ge K_S(x) + 1. \]
However, it is a general property of Kolmogorov Complexity that for all natural numbers y, \(K(y) \le \log y + c\) for some constant c [2]. Thus:
\[ K(K_S(x) + 1) \le \log(K_S(x) + 1) + c, \]
which leads to:
\[ K_S(x) + 1 \le \log(K_S(x) + 1) + c, \]
a contradiction for sufficiently large values of \(K_S(x)\), since the logarithm grows far slower than the identity function (for instance, if \(K_S(x) + 1 = 10^6\), the right-hand side is roughly \(20 + c\) with base-2 logarithms). Hence, no such total self-referential complexity function can exist.

3.3. Interpretation

This contradiction is not just a quirk of coding—it reflects a deeper structural limit: a system cannot internally resolve the complexity of its own structural encoding without violating minimality. The logic here mirrors Turing’s diagonal argument and Gödel’s incompleteness: any sufficiently expressive system attempting to describe or decide its own descriptive depth falls into paradox.

3.4. Conclusion

We conclude that no total, self-referential complexity function \(K_S(x)\) can exist. Any such construction collapses under the weight of circular minimality constraints. Thus, a consistent framework for evaluating complexity must avoid internal self-reference. This motivates our central proposal: an external observer who tracks complexity trends empirically over time, an idea we develop in the next section using the time-dependent complexity function \(K_t(P(x))\).
This conclusion resonates with the broader principle in TEQ: that resolution is fundamentally relational—a configuration can only be resolved from an entropy-distinct vantage, not from within its own self-compression.

4. Time-Dependent Kolmogorov Complexity

Traditional Kolmogorov Complexity \(K(x)\) is a static, global property: it assigns to each string x the length of the shortest program that outputs it, without regard for the temporal unfolding of that output. It thus provides a measure of information content, but not of dynamical evolution.
However, in computational systems with evolving behavior, the question of whether a system halts or continues indefinitely is fundamentally a question about the progression of structure over time. To capture this dimension, we now introduce a dynamic extension: the time-dependent Kolmogorov Complexity.

4.1. Definition

Let \(P(x)\) be a program executed on a universal Turing machine U, and let \(\tau \in \mathbb{N}\) denote a discrete time bound on its execution. Define the execution trace up to time \(\tau\), denoted \(\mathrm{Trace}_\tau(P(x))\), as the complete observable output and state sequence generated by \(P(x)\) within \(\tau\) computation steps.
We then define the time-dependent Kolmogorov Complexity \(K_\tau(P(x))\) as:
\[
K_\tau(P(x)) = \min \{\, |p| : U(p) = \mathrm{Trace}_\tau(P(x)) \,\}.
\]
This is the length of the shortest program p that reproduces the externally observable behavior of \(P(x)\) up to time \(\tau\), not necessarily by simulating P directly. Thus, \(K_\tau\) encodes the apparent regularity in the output as seen by an observer tracking structure across time; a compression-based numerical sketch follows.
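The definition suggests a direct empirical proxy: compress ever-longer prefixes of the observed trace and record the compressed size. The sketch below does exactly that; the zlib output length is only an upper-bound stand-in for \(K_\tau\), and the two synthetic traces (one whose structure saturates, one that keeps producing novelty) are our own toy examples, not constructions from the paper.

```python
import os
import zlib

def k_tau_proxy(trace: bytes, tau: int) -> int:
    """Compression-based stand-in for K_tau(P(x)): the compressed size of
    the observable trace prefix up to step tau (an upper bound, not K)."""
    return len(zlib.compress(trace[:tau], 9))

# Two synthetic traces, one per asymptotic regime:
halting = os.urandom(200) + b"\x00" * 4000    # structure saturates at step 200
novel = os.urandom(4200)                      # fresh randomness at every step

for tau in (100, 500, 1000, 4000):
    print(tau, k_tau_proxy(halting, tau), k_tau_proxy(novel, tau))
# The "halting" column plateaus shortly after step 200, while the "novel"
# column keeps growing roughly linearly in tau.
```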

4.2. Formal Properties and Assumptions

  • \(K_\tau(P(x))\) is upper semi-computable in \(\tau\): for fixed \(P(x)\), we can enumerate candidate programs and incrementally search for one that reproduces \(\mathrm{Trace}_\tau(P(x))\).
  • The map \(\tau \mapsto K_\tau(P(x))\) is non-decreasing: once novel structure has appeared, it cannot be erased from the minimal description.
  • If \(P(x)\) halts at time T, then \(K_\tau(P(x)) = K_T(P(x))\) for all \(\tau \ge T\). That is, complexity saturates once the trace stops changing.

4.3. Interpretation

The function \(K_\tau(P(x))\) allows an external observer to infer structural dynamics over time:
  • Halting behavior: If \(P(x)\) halts at finite time T, then \(K_\tau(P(x))\) stabilizes for all \(\tau \ge T\). This convergence provides a detectable signature of halting.
  • Non-halting behavior: If \(P(x)\) continues producing new, non-repeating output indefinitely, then \(K_\tau(P(x)) \to \infty\). The minimal description length keeps increasing with time.
  • Ambiguous or metastable regimes: If \(P(x)\) generates complex but eventually repeating output, or has long periods of inactivity punctuated by bursts, then \(K_\tau\) may grow slowly, plateau, or oscillate. The convergence of structure may be undecidable in practice, yet still observable asymptotically.

4.4. Empirical Computability via Complexity Trends

This dynamic formulation transforms the halting problem into a matter of inference rather than deduction. The observer does not simulate the internal logic of P; instead, they track whether the emergent structural complexity settles or diverges.
In this way, undecidability is reframed not as an ontological barrier, but as a limitation on the observer’s resolution capacity over time. This is analogous to asymptotic non-measurability in dynamical systems, or delayed phase stabilization in statistical inference [8,14].

4.5. Examples and Analogies

This approach resonates with several lines of empirical and theoretical research:
  • In practical complexity analysis, compressed traces or behavioral signatures are used to evaluate the predictability and structure of evolving systems [7].
  • In algorithmic thermodynamics, entropy production is tracked as a proxy for irreversibility and convergence [8].
  • In machine learning and AI alignment, trend-based metrics are used to approximate convergence without requiring exhaustive simulation of all internal processes.

4.6. Outlook

The function \(K_\tau(P(x))\) lies at the empirical core of our observer-based framework. It provides the dynamic lens through which we classify halting behavior, not as a binary predicate, but as an inference process grounded in structural evolution. In the next section, we define an observer-relative halting classifier \(H_O\), which maps the asymptotic behavior of \(K_\tau\) to a three-valued logical outcome: Yes, No, or Undecided.
This opens the door to a computability theory that reflects how knowledge emerges not from total derivation, but from stabilization under entropy flow.

5. The Observer-Based Halting Function

Building on the time-dependent Kolmogorov Complexity \(K_\tau(P(x))\), we now define a decision mechanism for classifying halting behavior based on empirical structure rather than logical deduction. Unlike the classical Halting Problem, which seeks a total computable function to decide whether any program halts, our formulation introduces an external observer O who infers halting trends from the long-term evolution of compressibility.

5.1. Definition of the Observer

Let O be an observer with access to the sequence \(\{K_\tau(P(x))\}_{\tau \ge 0}\), where each \(K_\tau(P(x))\) is the minimal program length reproducing the trace of \(P(x)\) up to time \(\tau\). Crucially, the observer does not simulate the internal logic of P, nor examine its source code. Instead, O operates by evaluating the informational structure emerging from the program's behavior over time.

5.2. Observer-Based Halting Classification

We define the observer’s classification function as follows:
\[
H_O(P, x) = \begin{cases} \text{YES}, & \text{if } \lim_{\tau \to \infty} K_\tau(P(x)) = K_f < \infty,\\ \text{NO}, & \text{if } \lim_{\tau \to \infty} K_\tau(P(x)) = \infty,\\ \text{UNDECIDED}, & \text{otherwise.} \end{cases}
\]
Each output reflects a distinct asymptotic regime of \(K_\tau\):
  • Yes (halting): The complexity stabilizes at a finite value \(K_f\). This indicates that \(P(x)\) halts at some finite time T, and no further structure is generated.
  • No (non-halting): The complexity diverges, i.e., \(K_\tau(P(x)) \to \infty\). This corresponds to persistent structural novelty without convergence.
  • Undecided (ambiguous): The complexity is bounded but fluctuating, or slowly increasing, with no clear asymptotic signature. In this regime, the observer suspends judgment, allowing for epistemic incompleteness. A heuristic finite-horizon sketch of such a classifier follows this list.
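No finite observer can evaluate the limits in \(H_O\) directly, so any concrete implementation must be a finite-horizon heuristic. The sketch below is one such heuristic of our own devising, not a construction from the paper: it answers YES when the proxy sequence has been flat over a trailing window, NO when growth in that window is sustained, and UNDECIDED otherwise. The window size and slope threshold are free parameters.

```python
def classify(ks: list[int], window: int = 100, slope_eps: float = 0.05) -> str:
    """Finite-horizon heuristic for H_O over a complexity-proxy sequence ks.

    YES       -- the trailing window is exactly flat (structure stabilized)
    NO        -- the trailing window grows at >= slope_eps units per step
    UNDECIDED -- anything in between: slow growth, oscillation, plateaus
    """
    if len(ks) < 2 * window:
        return "UNDECIDED"            # not enough observation history yet
    tail = ks[-window:]
    if max(tail) == min(tail):
        return "YES"
    slope = (tail[-1] - tail[0]) / (window - 1)
    return "NO" if slope >= slope_eps else "UNDECIDED"
```

Applied to the compression proxies of Section 4, a saturating trace is classified YES once its plateau outlasts the window, the ever-novel trace is classified NO, and anything ambiguous defers; this deferral is exactly the partiality that keeps the classifier paradox-free.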

5.3. Remarks on Logical Consistency

This approach avoids the diagonalization-based contradiction central to Turing’s result in two key ways:
  • The observer O does not simulate \(P(x)\), nor invoke any universal halting decider. It merely observes empirical complexity trends.
  • The classification function \(H_O\) is explicitly partial: it includes a third outcome, Undecided, to account for observational uncertainty. This defers judgment in ambiguous or metastable regimes, preserving logical consistency.
Consequently, \(H_O\) is a well-defined, non-paradoxical structure grounded in observable information rather than formal deductive closure. It expands the classical notion of computability by introducing a dynamic, entropy-aware layer of inference.

5.4. Interpretation and Precedents

This observer-based classification mirrors scientific practice: conclusions about system behavior are often inferred from regularity, convergence, or persistent novelty in data, not from exhaustive deduction. In fields such as:
  • Empirical complexity analysis, compression-based metrics guide judgments about structure and randomness [7];
  • Machine learning, structural regularities drive inductive generalization without full interpretability;
  • Physics, irreversibility is inferred through entropy production rather than formal causality [8].
Here, we extend such inferential reasoning into the foundations of computability itself. The halting problem becomes a classification problem over time-indexed structural resolution rather than a logical absolute.

5.5. Outlook

In the next section, we establish the logical consistency of this construction in detail. We show that \(H_O\) remains paradox-free even when applied to programs that reference \(H_O\) itself or attempt self-evaluation. This completes the foundation for a computability theory that is not only consistent with classical limits but reflective of entropic and observational constraints, themes that continue in Part II of this paper and in the broader TEQ framework [13].

6. Proof of Observer-Based Computability

We now formalize the central claim: that the observer-based halting function \(H_O(P, x)\), defined in terms of time-dependent Kolmogorov Complexity \(K_\tau(P(x))\), provides a logically consistent classification of halting behavior. Specifically, we show that:
  • \(H_O\) does not require simulation or access to the internal logic of the program P.
  • \(H_O\) does not violate the undecidability of the classical Halting Problem.
  • \(H_O\) remains paradox-free, even when applied to programs that reference \(H_O\) or its outputs.

6.1. Theorem: Consistency of Observer-Based Halting Classification

Let \(P(x)\) be a Turing machine and \(K_\tau(P(x))\) the time-dependent Kolmogorov Complexity of its execution trace up to time \(\tau\). Then the function
\[
H_O(P, x) = \begin{cases} \text{YES}, & \text{if } \lim_{\tau \to \infty} K_\tau(P(x)) < \infty,\\ \text{NO}, & \text{if } \lim_{\tau \to \infty} K_\tau(P(x)) = \infty,\\ \text{UNDECIDED}, & \text{otherwise,} \end{cases}
\]
is well-defined, partial, and logically consistent under the following conditions:
  • The observer O does not execute or simulate \(P(x)\).
  • O observes only the externally accessible sequence \(\{K_\tau(P(x))\}_{\tau \ge 0}\).
  • O does not commit to halting judgments unless convergence is empirically verifiable.

6.2. Proof Sketch by Case Analysis

We analyze the three exhaustive categories of long-term behavior for the complexity sequence \(K_\tau(P(x))\):

Case 1: Convergence to a Finite Value (Yes)

Assume the program \(P(x)\) halts at some finite time \(T \in \mathbb{N}\) and produces no further output thereafter. Then:
\[ \mathrm{Trace}_\tau(P(x)) = \mathrm{Trace}_T(P(x)) \quad \text{for all } \tau \ge T, \]
implying that:
\[ K_\tau(P(x)) = K_T(P(x)) \quad \text{for all } \tau \ge T. \]
Thus:
\[ \lim_{\tau \to \infty} K_\tau(P(x)) = K_T(P(x)) < \infty, \]
and the observer assigns \(H_O(P, x) = \text{YES}\). This inference is observational and does not require internal code access.

Case 2: Unbounded Growth (No)

Assume \(P(x)\) continuously generates new, non-repeating output. Then for any \(B \in \mathbb{N}\), there exists a \(\tau\) such that:
\[ K_\tau(P(x)) > B, \]
and hence:
\[ \lim_{\tau \to \infty} K_\tau(P(x)) = \infty. \]
The observer accordingly assigns \(H_O(P, x) = \text{NO}\), again without contradiction or simulation.

Case 3: Ambiguous Growth (Undecided)

Suppose \(K_\tau(P(x))\) exhibits neither provable convergence nor divergence, e.g., it fluctuates, grows slowly, or enters long plateaus. Then the observer cannot confidently resolve halting status within finite time. In this case:
\[ H_O(P, x) = \text{UNDECIDED}. \]
This allows O to preserve consistency by deferring classification in epistemically ambiguous regimes.

6.3. Why No Paradox Arises

Unlike the classical diagonalization argument, which generates contradiction by demanding totality, this framework is immune to self-reference pitfalls for three key reasons:
  • The observer does not simulate or deduce halting; it infers from surface structure.
  • The classification is partial and explicitly permits uncertainty.
  • The complexity function \(K_\tau\) is upper semi-computable and monotonic, and is never used self-referentially by P or \(H_O\).
No circular logic arises, and no contradiction can be constructed—only observational boundaries determined by the resolution horizon of the observer.

6.4. Conclusion

The halting function \(H_O\), while not Turing-computable, is both well-defined and consistent. It operationalizes halting classification as a structural inference process, rather than a logical deduction. This framework avoids paradoxes by explicitly acknowledging the limits of resolution within time-bounded observation.
In this view, undecidability becomes not an immutable epistemic wall, but a consequence of structural indistinguishability under evolving entropy flow—an idea that aligns precisely with the principles underlying the TEQ framework [13].
Remark 1.
The observer-based halting function \(H_O(P, x)\) does not constitute a solution to the classical Halting Problem. Rather, it offers a partial, entropy-sensitive classification that preserves consistency by forgoing logical omniscience and embracing structural uncertainty. It is thus aligned with empirical reasoning, not formal deduction.

7. Implications

The observer-based framework we have developed reinterprets the Halting Problem through the lens of empirical complexity trends rather than logical deduction. This reframing has conceptual, formal, and practical implications across computation, physics, and information theory.

7.1. Computability as Observer-Relative

In classical computability theory, the Halting Problem is undecidable because no total function can determine, for every Turing machine and input, whether the computation halts. This undecidability is absolute within a closed formal system.
In the observer-based perspective, by contrast, decidability becomes a function of:
  • the observer’s access to time-indexed compression bounds;
  • the structural evolution of the system over time;
  • and the tolerance for uncertainty (e.g., the allowance of an Undecided state).
Decidability is thus reframed as a relational property between a dynamical system and an entropy-bounded observer. This perspective aligns with epistemic approaches in constructive mathematics and proof theory, where truth emerges not from total axiomatic closure but from structural accessibility over time.

7.2. Connection to Physical Measurement and Entropy

The analogy with quantum measurement is immediate and deep. In quantum mechanics, outcomes are not determined by internal properties alone, but by the informational configuration of the observer and system. Measurement collapses are not absolute events, but entropy redistributions driven by environmental coupling [3,5].
In our framework, halting judgments arise from similar dynamics: they reflect irreversible stabilization in the entropy landscape of complexity growth. In both domains:
  • The observer has no access to the internal “wavefunction” or program logic;
  • All inference is drawn from statistical or structural trends;
  • Uncertainty is not eliminated, but bounded by the entropy geometry of the system-observer relation.
This correspondence suggests that logical undecidability and quantum irreversibility are not disjoint anomalies, but emergent boundaries under a shared entropic principle.

7.3. Implications for Artificial Intelligence

In formal verification and automated reasoning, the classical Halting Problem imposes hard theoretical limits. Yet within the observer-relative framework, AI systems may still infer halting behavior by observing:
  • Growth trends in empirical compression metrics \(K_\tau\),
  • Statistical stabilization in execution traces,
  • Asymptotic plateaus that signal convergence under structural evolution.
Such techniques already appear in compression-based anomaly detection, software fuzzing, and non-symbolic reasoning systems. Incorporating partial, entropy-aware halting classifiers into AI architectures may improve robustness in long-horizon inference—by accepting that inference must be adaptive, not omniscient.
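As a toy illustration of how such a monitor might be wired into a long-horizon system, the loop below reuses the k_tau_proxy and classify sketches from Sections 4 and 5 (all names and parameters are ours, for illustration only): it watches a growing trace and emits a verdict only once one firms up.

```python
def monitor(trace_stream, stride: int = 50, window: int = 20):
    """Observe a trace as it grows; never inspect the producing program.

    trace_stream yields successive chunks (bytes) of the execution trace;
    every `stride` chunks we log a complexity proxy and re-classify."""
    trace, proxies, step = b"", [], 0
    for step, chunk in enumerate(trace_stream, start=1):
        trace += chunk
        if step % stride == 0:
            proxies.append(k_tau_proxy(trace, len(trace)))
            verdict = classify(proxies, window=window)
            if verdict != "UNDECIDED":
                return verdict, step      # early, revisable judgment
    return "UNDECIDED", step              # horizon exhausted, defer
```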

7.4. A Conceptual Reframing of Undecidability

In classical logic, undecidability is often framed as a negative result: a rigid barrier between knowable and unknowable. Here, we reframe it positively:
Undecidability is not an absolute epistemic limit, but a dynamically contingent expression of structural indistinguishability under bounded entropy flow.
This mirrors developments in quantum foundations, statistical mechanics, and complex systems: the boundary between latent and resolved structure is determined not by logic alone, but by entropy, measurement, and access.
The Undecided state is not a gap—it is a faithful expression of what inference means under constraint. It reinstates humility as a formal concept.

7.5. Bridge to Entropy-Based Physics

These insights motivate a unified perspective in which entropy and inference are two aspects of a single geometric substrate. In the second part of this work, we introduce the Total Entropic Quantity (TEQ) framework, which derives quantum structure, dynamics, and measurement from entropy-weighted path selection [13].
Within TEQ, entropy is not a measure of ignorance—it is a geometric principle governing which structures stabilize and become observable. The parallels with our observer-based framework are exact:
  • Entropy flow determines what is physically resolved;
  • Observer-relative decompositions shape interference and decoherence;
  • Stabilization corresponds to phase fixation under entropy curvature.
In both contexts, inference and measurement are not passive acts but entropic projections: shifts in what is stabilized under constrained resolution.

7.6. Bridging Observer-Based Computability and TEQ

The observer-relative reformulation of the Halting Problem finds a structural counterpart in the TEQ framework. There, quantum trajectories are selected not via postulated laws, but via entropy-weighted action principles. Observable amplitudes emerge from trajectories that are both dynamically coherent and entropically stable.
In this setting:
  • The external observer O, who infers halting from the behavior of \(K_t(P(x))\), plays a role isomorphic to the quantum observer, who filters outcomes through entropy redistribution.
  • Halting corresponds to stabilization of symbolic structure; measurement corresponds to stabilization of physical amplitude.
  • In both cases, what becomes decidable is not intrinsic—it is shaped by the entropy geometry of the observer-system interaction.
This leads to a unifying principle:
Both classical undecidability and quantum indeterminacy are expressions of entropy-constrained resolution.
TEQ thus provides the ontological bridge: a coherent entropy geometry in which computation, observation, and stabilization become phases of the same structural dynamics. Logical limits and physical laws are not separate—they are entangled by curvature.

8. Part II: Entropy, Observation, and Quantum Inference

Philosophical Reflection: From Formal Boundaries to Entropic Horizons

The Halting Problem, in its classical form, exemplifies the power of minimalist reasoning. Alan Turing’s original proof, framed in the stark logic of contradiction and self-reference, defines a logical surface—a boundary beyond which deductive certainty cannot pass. Like the event horizon of a black hole, this boundary is deceptively simple in expression, yet encodes a deep constraint on all internally self-contained computational systems.
But this surface is not the whole story. Just as modern physics reveals that space is curved and interactive, inference in natural systems unfolds not in a flat syntactic domain, but in a dynamically structured informational geometry. Systems are not closed—they are embedded in environments, open to entropy exchange, and permeable to observation. Measurement is not deduction, but a structural coupling between observer and system.
The observer-based extension of computability introduced in Part I lifts the Halting Problem into this broader context. It preserves the classical barrier—never asserting computability where Turing showed none can exist—but reframes the issue. Halting becomes not an intrinsic attribute of code, but a relational property that emerges through time, structure, and compressibility. It is no longer a binary decision made internally, but a convergence process inferred externally.
In this sense, the Halting Problem is not merely a boundary—it is a window into the topology of inference. It invites us to ask: What lies beyond formal syntax? What forms of decidability arise when the observer is included in the geometry of the system? And how does entropy—once a thermodynamic quantity—govern the very distinction between the knowable and the unresolvable?
These questions reverberate in the quantum domain. Quantum theory, in its conventional formulation, begins with axioms: Hilbert spaces, operators, probabilistic amplitudes, measurement postulates. These elements work—but they appear axiomatic and unmotivated, disconnected from any deeper structural necessity. They describe outcomes, but not why those outcomes are structured in that particular way.
The Total Entropic Quantity (TEQ) framework reverses this picture. It does not take quantization, discreteness, or the Born rule as inputs. Instead, it asks: what kind of variational and geometric principle would give rise to quantum structure as a consequence?
In TEQ, entropy is not a measure of ignorance or disorder—it is a dynamical selector. It defines a geometry of resolution in which physical structure arises from the interplay between coherence (action) and distinguishability (entropy). Quantum amplitudes become weighted sums over paths—not just dynamically consistent, but entropically admissible.
From this foundation, the canonical structures of quantum theory—wavefunction amplitudes, unitary evolution, discrete spectra, uncertainty—emerge not as postulates, but as entropic necessities. The observer is no longer a mysterious agent that causes collapse, but a local resolution frame through which entropy redistributes. Measurement is not discontinuity—it is curvature.
In this light, quantum mechanics is not a separate domain of physical law—it is a thermodynamically filtered subspace of path-based dynamics, emergent from stability under entropy flow.
What follows is a rederivation of quantum structure from the TEQ principle. The result is a theory in which the axioms of quantum mechanics are no longer axioms—they are consequences.

9. Entropy, Measurement, and the Quantum Eraser

In both computability and quantum mechanics, the observer defines the boundary between uncertainty and information. The Total Entropic Quantity (TEQ) framework interprets quantum measurement not as wavefunction collapse, but as an entropy-driven redistribution across subsystems. This observer-relative interpretation echoes Relational Quantum Mechanics [15], but TEQ derives it from entropy geometry: the state of a system is defined by its stability under entropy-weighted variation with respect to an observer’s resolution scale. This section applies the observer-relative perspective developed in Part I to the quantum eraser experiment, revealing a unified entropic structure beneath both computational and quantum inference.
Table 1. Derived Quantum Structures (TEQ ↔ QM).

Quantum Concept | Standard QM | TEQ Interpretation
Wavefunction | Complex-valued field encoding probabilities | Entropy-weighted amplitude from path selection
Born Rule | Postulated probability: \(P = |\psi|^2\) | Emerges from Gaussian entropy expansion over dominant paths
Schrödinger Equation | Postulated unitary time evolution | Arises from entropy-weighted action extremization
Commutation Relations | Axiomatic: \([x, p] = i\hbar\) | Emergent from entropy curvature in phase space
Quantization | Imposed via boundary conditions or operator algebra | Entropic filtering of stable, coherent phase modes
Hilbert Space | Abstract vector space of states | Emergent from interference of entropy-stable trajectories
Measurement | Collapse postulate or branching | Entropic redistribution: shift from latent to realized structure
Uncertainty | Heisenberg principle via operator algebra | Result of entropy curvature suppressing exact paths

9.1. The Quantum Eraser as Entropy Redistribution

The quantum eraser experiment demonstrates that interference patterns can be destroyed or recovered depending on whether which-path information is accessible—even after detection [9,10]. In standard interpretations, this raises concerns of retrocausality and nonlocal collapse.
TEQ reinterprets the phenomenon through entropy geometry:
  • Which-Path Information Increases Realized Entropy
    Recording which-path data transfers entropy from latent entanglement to realized degrees of freedom:
    \[ \frac{d\tilde S_{\mathrm{realized}}}{dt} = \alpha\, f(\Lambda)\,\bigl(1 - \tilde S_{\mathrm{realized}}\bigr), \]
    where \(\tilde S_{\mathrm{realized}}\) is the entropy accessible to the observer, \(\Lambda\) describes environmental coupling, and \(\alpha\) is a scale factor with units of inverse time. Decoherence increases as entropy becomes localized.
  • Erasure Reweights Quantum Path Probabilities
    When which-path information is erased, coherence is restored. In TEQ, this corresponds to a shift in entropy-weighted path amplitudes:
    \[ P_{\mathrm{eff},i} \propto |c_i|^2\, e^{-\beta \tilde S_{\mathrm{apparent},i}}, \]
    where \(\tilde S_{\mathrm{apparent},i}\) is the path entropy perceived by the observer, and \(\beta\) controls entropic sensitivity. Probability amplitudes reshape as the observer's entropy access changes.
  • No Retrocausality Required
    The apparent reversal of outcome arises not from time running backward, but from a reconfiguration of the entropy landscape. When which-path information is erased, the entropy geometry realigns, re-enabling previously suppressed interference. A numerical sketch of the two relations above follows this list.

9.2. Observer-Dependent Entropy Geometry

Different observers, distinguished by their informational coupling, access different entropy decompositions. In TEQ, the total entropy of a system is partitioned as:
\[ \tilde S_{\mathrm{total}} = \tilde S_{\mathrm{realized}} + \tilde S_{\mathrm{latent,\,entangled}} + \tilde S_{\mathrm{latent,\,classical}}. \]
An observer with access only to \(\tilde S_{\mathrm{realized}}\) sees decohered outcomes. One who retains or recovers latent entanglement can access interference. The geometry of entropy, not logic, determines what becomes physically resolved.

9.3. Analogy with Observer-Based Computability

The connection to Part I is both direct and structural:
  • In both contexts, the observer does not alter the system’s intrinsic evolution, but selects resolution from among the available structure.
  • Halting behavior in computation and measurement outcomes in quantum systems emerge from stabilization under entropy-constrained trajectories.
  • Information gain in the halting problem and coherence loss in quantum measurement are dual manifestations of a common entropic projection principle.
In both cases, the observer defines a slicing of structure from the space of possibilities—computational or quantum—into that which becomes decidable.

9.4. Conclusion: Measurement as Entropic Projection

Within TEQ, the quantum eraser ceases to be paradoxical. It becomes an expression of entropy flow: a shift in the resolution boundary induced by observation. The apparent reversal of outcome is not due to backward causation, but to a change in the observer’s entropic horizon.
The transition from uncertainty to realization is governed not by logic alone, but by the entropy geometry of the observer-system relation.
In the next sections, we outline experimental and theoretical implications of this principle—bridging computation, quantum inference, and the emergence of stable physical law through entropy-weighted resolution.

10. Conclusion

We have proposed a unifying perspective in which undecidability, measurement, and inference are not absolute properties of systems, but emergent features shaped by the entropic relation between system and observer. By introducing a time-dependent formulation of Kolmogorov Complexity, \(K_t(P(x))\), we constructed an observer-based halting classifier \(H_O\) that remains logically consistent with Turing's undecidability result while avoiding classical paradoxes. The key shift is from internal logical deduction to external structural inference.
This observer-centric framework extends seamlessly to quantum phenomena. Within the Total Entropic Quantity (TEQ) model, we interpreted quantum measurement and the quantum eraser experiment not as collapse or retrocausality, but as entropy redistributions governed by the observer's informational boundary. Just as a computation halts relative to the stabilization of \(K_t\), interference emerges or vanishes based on which entropic components are accessible. In both domains, realization is conditional, not absolute.

10.1. Experimental Predictions

The TEQ framework leads naturally to testable hypotheses that bridge entropy geometry and observable phenomena:
  • Entropy-Conditioned Interference: Visibility of interference patterns in quantum eraser experiments should vary not just with which-path access, but with the ambient entropy of the system's environment. Decoherence should scale with the entropy flux into \(\tilde S_{\mathrm{realized}}\).
  • Delayed Erasure Without Classical Storage: If coherence is a function of entropy redistribution, then interference should be restorable even when which-path data is never classically stored, provided the entropic geometry is rebalanced.
  • Complexity Trend Estimation in AI: Machine learning systems can leverage \(K_t\)-like metrics to approximate halting behavior in long-horizon programs, without full simulation. This could improve soft-decision architectures and advance automated theorem discovery.
These predictions can be explored using quantum optical setups, thermodynamic control experiments, and symbolic learning frameworks in AI.

10.2. Foundational Significance

The central philosophical insight of this work is that observation is not a passive extraction of truth, but a constraint-based resolution process. In both logic and physics, paradoxes arise when we assume systems can fully describe themselves. When instead we recognize the observer as an entropic boundary condition, those paradoxes dissolve.
This yields a reframing of core notions across domains:
  • Computation: Computability becomes a structural relation between evolving programs and complexity-aware observers.
  • Measurement: Measurement is not collapse, but entropic projection—a redistribution from latent to realized structure.
  • Causality: Apparent retrocausality may reflect changes in entropy geometry, not violations of physical law.
In this way, TEQ offers a single geometrical principle—entropy-weighted resolution—from which computability, inference, quantization, and measurement can all be derived.

10.3. Future Directions

Some results discussed here build directly on prior TEQ developments, notably:
  • The derivation of the entropy-weighted path integral from entropy geometry [12];
  • The emergence of the \(\beta\)-regulated probability law as a structural generalization of the Born rule [12].
Building on these, the following directions remain open for investigation:
  • Application of observer-based complexity tracking to Gödelian domains in arithmetic, soft halting classification in AI safety, and proof emergence in large language models.
  • Investigation of whether entropy redistribution in quantum erasure can be decoupled from explicit time-ordering, illuminating the role of entropic geometry in resolving temporal paradoxes.
We conclude that both logical undecidability and quantum indeterminacy are not barriers, but portals—openings through which we glimpse a deeper structure that governs both thought and matter. In this view:
Computation halts, coherence collapses, and causality unfolds—only as entropy allows it.

Acknowledgments

This research was undertaken informally and independently during an ongoing period of cognitive and physical rehabilitation following a brain hemorrhage. It should be understood as part of a personal recovery process, not a professional research output. In that context, ChatGPT was used for grammar refinement, structural clarity, and conceptual dialogue. All theoretical developments, derivations, and conclusions are solely the author’s. This work is offered in the hope that, whatever its origin, the structural clarity it seeks may be of value.

References

  1. Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information. Problems of Information Transmission, 1(1), 1–7.
  2. Li, M., & Vitányi, P. (2008). An Introduction to Kolmogorov Complexity and Its Applications (3rd ed.). Springer.
  3. Zurek, W. H. (2003). Decoherence, einselection, and the quantum origins of the classical. Reviews of Modern Physics, 75(3), 715.
  4. Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230–265.
  5. Schlosshauer, M. (2005). Decoherence, the measurement problem, and interpretations of quantum mechanics. Reviews of Modern Physics, 76(4), 1267.
  6. Chaitin, G. J. (1975). A theory of program size formally identical to information theory. Journal of the ACM, 22(3), 329–340.
  7. Vitányi, P. (2011). Minimum description length and statistical modeling. IEEE Transactions on Information Theory, 57(10), 6580–6592.
  8. Zurek, W. H. (1989). Algorithmic randomness and physical entropy. Physical Review A, 40(8), 4731.
  9. Walborn, S. P., Terra Cunha, M. O., Pádua, S., & Monken, C. H. (2002). Double-slit quantum eraser. Physical Review A, 65(3), 033818.
  10. Kim, Y.-H., Yu, R., Kulik, S. P., Shih, Y., & Scully, M. O. (2000). A delayed choice quantum eraser. Physical Review Letters, 84(1), 1.
  11. Sigtermans, D. (2025). The Total Entropic Quantity Framework: A Conceptual Foundation for Entropy, Time, and Physical Evolution. Preprints. [CrossRef]
  12. Sigtermans, D. (2025). Entropy as First Principle: Deriving Quantum and Gravitational Structure from Thermodynamic Geometry. Preprints. [CrossRef]
  13. Sigtermans, D. (2025). Eigenphysics: The Emergence of Quantization from Entropy Geometry. Preprints. [CrossRef]
  14. Wolfram, S. (2002). A New Kind of Science. Wolfram Media.
  15. Rovelli, C. (1996). Relational quantum mechanics. International Journal of Theoretical Physics, 35(8), 1637–1678.