Lorentz invariance

The theory of coded reality and the problem of Lorentz invariance

What is Lorentz invariance

Lorentz invariance is often presented simplistically as the statement that "the laws of physics are the same for all observers in uniform motion." This formulation is correct, but superficial. In fact, it concerns a much deeper, more restrictive condition, far beyond the simple symmetry of the equations.

Lorentz invariance means that there is no physically unique frame of reference. No state of uniform motion is more "true," more fundamental, or closer to the structure of the world than any other. There is no local experiment that can distinguish rest from uniform motion with respect to space itself.

In other words, spacetime is locally isotropic: no spatial or spatiotemporal direction is physically privileged. Any observable anisotropy — even a trace — would imply the existence of a hidden background, a distinctive structure, or an absolute reference. This is also true for time: the existence of global simultaneity or a shared "now" would already violate this principle.

The consequence of this fact is a radical shift in the concept of time and space. They are no longer separate, absolute entities, but quantities that blend with each other when the frame of reference changes. Different observers may disagree about the length, duration, or simultaneity of events, but they always agree on the causal structure: which events can interact and which are causally isolated. The lack of absolute simultaneity here is not an interpretation, but a structural feature of the theory.

The formal expression of this property is the invariance of the space-time interval described by the Minkowski metric:

ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2

or in covariant notation:

ds^2 = \eta_{\mu\nu}\, dx^\mu dx^\nu

where \eta_{\mu\nu} = \mathrm{diag}(-1, +1, +1, +1) is the constant metric of flat spacetime.

It is this invariance, and not any additional assumptions, that underlies the entire theory of special relativity and determines its mathematical structure. From it follow directly:

Lorentz transformations (for a boost along the x axis):

t' = \gamma \left( t - \frac{v x}{c^2} \right), \quad x' = \gamma (x - v t), \quad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
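As a sanity check, the boost above can be verified numerically: applying it to an event and recomputing the interval shows that ds^2 is unchanged. A minimal sketch (the function names are illustrative, not part of the theory):

```python
import math

def boost(t, x, v, c=1.0):
    """Lorentz boost along x with velocity v (c defaults to 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

def interval(t, x, y=0.0, z=0.0, c=1.0):
    """Minkowski interval ds^2 = -c^2 t^2 + x^2 + y^2 + z^2."""
    return -c**2 * t**2 + x**2 + y**2 + z**2

t, x = 2.0, 1.5
for v in (0.1, 0.5, 0.9):
    tp, xp = boost(t, x, v)
    # the interval is the same in every boosted frame
    assert math.isclose(interval(t, x), interval(tp, xp), rel_tol=1e-9)
```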

energy-momentum relationship:

E^2 = p^2 c^2 + m^2 c^4

the relativistic increase in effective mass (m_0 being the rest mass):

m = \frac{m_0}{\sqrt{1 - v^2/c^2}}

and the fact that the speed of light is the same structural limit for all observers:

v = c
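The frame-independence of the energy-momentum relation can be checked the same way: (E/c, p) transforms like (ct, x), so E^2 - p^2 c^2 yields the same invariant m^2 c^4 in every frame. A small sketch in units where c = 1 (function name is illustrative):

```python
import math

def boost_energy_momentum(E, p, v, c=1.0):
    """Boost (E, p_x) along x; (E/c, p) transforms like (ct, x)."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (E - v * p), gamma * (p - v * E / c**2)

m, p = 1.0, 3.0               # natural units, c = 1
E = math.sqrt(p**2 + m**2)    # E^2 = p^2 c^2 + m^2 c^4
for v in (0.2, 0.8):
    Ep, pp = boost_energy_momentum(E, p, v)
    # the invariant mass is reconstructed identically in every frame
    assert math.isclose(Ep**2 - pp**2, m**2, rel_tol=1e-9)
```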

Physically this means that:

– it is impossible to detect "motion relative to space itself" (there is no aether),
– there is no absolute clock or global grid of simultaneity,
– the speed of light is not a property of a specific object, but a boundary of the causal structure,
– the light cone remains invariant: what lies inside the causal cone remains inside for every observer.

It is this property that makes special relativity so ruthlessly restrictive. It does not tolerate even the slightest trace of effects that would betray the existence of a privileged background, a distinguished direction, a global time synchronization mechanism, or anisotropy of dynamics. Modern tests of Lorentz invariance violation (LIV) reach sensitivities of the order of 10^{-20} to 10^{-23}, which makes this symmetry one of the best confirmed properties of reality.

Therefore, any discrete fundamental model that aspires to describe the physical world must not only reproduce the formal equations of special relativity, but also explain why its own ontology and dynamics introduce neither detectable anisotropy nor absolute simultaneity.

This condition is not an aesthetic addition. It is a test of ontological coherence.

Why lattice theories have a problem with this

Here a classic tension arises, one that for decades was considered nearly insurmountable. Lattice theories — and more broadly, all approaches that assume a discrete substrate for reality — seemed, by definition, to be inconsistent with Lorentz invariance.

The reason seems obvious. A regular grid:

– has distinguished directions (axes, planes, diagonals), and is therefore anisotropic,
– has a distinguished scale (a minimum step, e.g. the Planck length),
– has a natural "rest state" with respect to which motion can be defined,
– and, even if only implicitly, a common update scheme that acts as a global clock.

This last point is crucial. Every regular step-by-step evolution, every global update of the mesh state, every definition of a configuration "at the moment t" introduces absolute simultaneity. And absolute simultaneity means a distinguished frame of reference.

If reality were to be composed of such a structure, intuition and calculation suggest that an observer moving relative to the grid should be able to detect it. Propagation speeds should depend on direction. Wave dispersion should reveal granularity. Clocks based on local processes should "feel" the global updating rhythm.

For precisely this reason, for decades, the discrete structure of spacetime and strict Lorentz invariance were thought to be mutually exclusive. At best, it was hoped that Lorentz symmetry would only emerge as an approximation—in the low-energy limit—with minor corrections at the Planck scale.

This position quickly proved problematic.

First, even extremely small violations of isotropy and Lorentz invariance lead to observable effects that we simply don't see. Second, "approximate" symmetry ceases to be symmetry in the fundamental sense — it becomes an artifact of a specific energy regime. Third, and finally, there was no mechanism explaining why this particular symmetry should reproduce itself with such absurd precision.

Formally, the problem is very clear.

In continuous, isotropic space-time the dispersion relation has the form:

E^2 = p^2 c^2 + m^2 c^4

In a regular mesh, rotational symmetry is broken and directionally dependent dispersion corrections almost inevitably appear, which are a direct trace of the substrate anisotropy:

E^2 = p^2 c^2 + m^2 c^4 + \xi_1 \frac{(p \cdot n)^4}{M_P^2} + \xi_2 \frac{(p^2)^2}{M_P^2} + \dots

where n is the unit vector of the distinguished lattice direction, and \xi_1, \xi_2 are coefficients of order unity, depending on the geometry of the lattice and the propagation rules.

The key is that these corrections grow as p^4. This means that their influence becomes dramatically visible precisely where tests are most sensitive today: in observations of gamma-ray bursts, cosmic rays with energies above 10^{19} eV, cosmic neutrinos, and the stability of pulsar signals. These experiments proved merciless for classical grid models.
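To see why such tests bite, it helps to put numbers in: for a cosmic ray near 10^{19} eV and \xi of order unity, the p^2/M_P^2 suppression is far larger than the quoted experimental sensitivities. A back-of-the-envelope estimate (natural units, c = 1; the \xi value is an assumption):

```python
# Order-of-magnitude estimate of the p^4 dispersion correction for an
# ultra-high-energy cosmic ray, assuming xi ~ 1 (energies in eV, c = 1).
M_P = 1.22e28   # Planck energy in eV
p   = 1e19      # cosmic-ray momentum, ~10^19 eV
xi  = 1.0       # assumed order-unity coefficient

# fractional shift of E^2 caused by the xi * p^4 / M_P^2 term
fractional_shift = xi * p**2 / M_P**2
print(f"relative correction to E^2: {fractional_shift:.1e}")
# → relative correction to E^2: 6.7e-19
```

A shift of ~10^{-18} sits well above the 10^{-20} to 10^{-23} sensitivity window quoted earlier, which is exactly why an anisotropic substrate cannot hide.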

In practice, this meant one thing: the mesh revealed its anisotropic nature.

Therefore, most approaches to date have not attempted to explain the origin of Lorentz invariance, but merely to imitate or mask it. Typical strategies have included:

– random distribution of points: statistical isotropy at the expense of local stability and particle dynamics,
– hiding anisotropy in entangled structures: residual directional effects remain at high energies,
– tuning of propagation rules or boundary conditions: technically effective but ontologically inelegant,
– very long graph evolutions: apparent emergence of symmetries, but a visible trace of the mesh at small scales.

None of these approaches solved the problem at its source. They merely shifted it. Either at the expense of locality, or of stable particles, or of minimalism, or of full agreement with observational data.

This wasn't a failure of the specific implementation. It was a failure of the assumption that Lorentz invariance and isotropy are properties that should be imposed on the mesh, rather than properties that should follow from the mechanism of its operation.

It is precisely at this level-of-description error that Māyā changes the rules of the game.

Isotropy as a property of the execution process, not of space

The most profound difference is that in Māyā isotropy is not a property of space but a property of the rendering process, which occurs in local time and not against a global temporal background.

What we observe as isotropic spacetime is not a direct reflection of the planxel structure, but rather the result of a rendering occurring on scales much smaller than any physical experiment. The anisotropy is not hidden or masked—it vanishes ontologically before a level of description at which it could be measured arises.

Each planxel has its own local computational clock t_p, whose cycle length depends on the local information density and synchronization cost. There is no common clock and no global unit of time. Time is the number of completed local execution cycles.

This means that the rendering of reality always occurs locally – each fragment of the world is generated at its own pace, not “at the same time” as the rest of the network.

At the same time, each act of rendering is performed on the same execution principle:

– information can be passed only to the immediate neighborhood,
– propagation takes place in the full 26-neighborhood,
– each synchronization step takes exactly one local cycle t_p.

As a result, the limiting speed is not defined relative to external time, but as the local ratio of the synchronization step to the local execution time:

c \equiv \frac{\ell_p}{t_p}

Since both \ell_p and t_p are defined by the same runtime act, their ratio is locally invariant, no matter how much the rendering rate differs between regions of the network.

Therefore, despite the lack of global time, all observers reconstruct the same terminal velocity. Not because time is absolute, but because every velocity measurement is local.
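This locality argument can be caricatured in a few lines: if each region's step length \ell_p is fixed by the same execution act that fixes its cycle t_p, the reconstructed limit c comes out identical everywhere, however unequal the rendering rates. A toy sketch, not the Māyā implementation (all values are invented):

```python
# Toy sketch, NOT the Maya implementation: regions render at very different
# rates (different local cycle lengths t_p), but the synchronization step
# ell_p is set by the same execution act, so ell_p / t_p is the same everywhere.
C = 1.0  # limiting speed in planxel units (an assumed normalization)

regions = [
    {"name": "baseline", "t_p": 1.0},
    {"name": "dense",    "t_p": 2.5},  # slower rendering
    {"name": "sparse",   "t_p": 0.3},  # faster rendering
]

for r in regions:
    r["ell_p"] = C * r["t_p"]  # step length scales with the local cycle
    # every local observer reconstructs the same limiting speed
    assert abs(r["ell_p"] / r["t_p"] - C) < 1e-12
```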

At this level, the anisotropy suppression mechanism begins to operate.

The full 26-neighborhood ensures that information propagation is nearly spherical in nature, even in a single local cycle. The correlation front does not develop along the grid axis, but spreads evenly in all available execution directions.

Additionally, each local cycle contains a phase rotation by the golden angle ≈ 137.036°, so the propagation sequences never close periodically with respect to any distinguished direction. The phase propagation directions are continuously decoupled.

As a result, after a very small number of local cycles:

– directional correlations undergo destructive interference,
– axial preferences disappear,
– the trace of the cubic lattice structure is no longer physically accessible.

This extinction occurs on scales far smaller than any experimental scale. By the time an object, wave, or particle that could be used as a probe is created, the anisotropy has already been ontologically destroyed by the rendering process itself.
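The decorrelating effect of the per-cycle phase rotation can be illustrated with a toy calculation: stepping a phase by the document's ≈ 137.036° per cycle spreads successive directions over the whole circle rather than locking onto a small set of lattice-aligned angles. A sketch (the sector count is an arbitrary choice):

```python
import math

# Toy illustration, not the Maya dynamics: a phase advanced by ~137.036 deg
# per local cycle never settles on a small repeating set of directions.
step = math.radians(137.036)
phases = [(n * step) % (2 * math.pi) for n in range(1000)]

# Bin the phases into 10-degree sectors and check that every sector is hit:
# no direction is systematically avoided, so no axial preference survives.
sectors = [0] * 36
for ph in phases:
    sectors[int(ph / (2 * math.pi) * 36)] += 1

print(min(sectors), max(sectors))  # every sector is populated
```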

Therefore, what we record is not an "averaged grid" but a continuous image generated by local acts of execution in which:

– there is no common moment,
– there is no global direction,
– there is no possibility of comparing propagation "at the same time".

Isotropy is not a symmetry of space. It is a stable point of execution dynamics in local time.

Why this really closes the Lorentz problem

Thanks to the combination of:

  • local time t_p,
  • the local definition of velocity c = \ell_p / t_p,
  • 26-neighborhood rendering,
  • golden-angle phase rotation,

not only do LIV corrections fail to appear, there is no ontological level at which they could appear.

Anisotropy disappears before observable dynamics arise. There is no global clock, so absolute rest does not exist. Lorentz transformations describe relationships between different rendering rhythms, not motion relative to a substrate.

This is not imitated isotropy. It is the absence of the very condition under which anisotropy could ever become established.
