- Preface
- The origins of the MAYA theory
- Problems of modern physics
- Why information?
- Planck units
- Planxel
- Physics implications of the planxel mechanism
- Reinterpretation of Formulas
- Time in the Māyā Model
- Space in the Māyā model
- Gravity
- Paradoxes of Physics
- ALPHA decoded
- Particles in MAYA
- Quantum mechanics
- Emergent Lorentz invariance
- On the emergence of mathematics
The Origin of the MĀYĀ Theory
How the Coded Reality model was created
The Māyā theory wasn't born in a laboratory, nor was it the product of a planned research project, a grant, or an academic program. Its origins are entirely different, more personal and unexpected: it emerged as a side effect of work on a novel exploring the idea of a simulated universe.
The starting point wasn't an attempt to create another "theory of everything," but a much simpler, even philosophical, question: If reality were truly a simulation, how would it work internally? What would its internal mechanics look like, regardless of how we describe it?
From metaphor to mechanism
Initially, these considerations were purely conceptual. Simulation was treated as a metaphor — a mental tool, not a literal description of reality. However, during the research for the book, it gradually became clear that modern physics had long employed concepts that surprisingly resembled the language of information systems.
Quantum states exist as distributions of information. The collapse of the wave function resembles an act of choice or decision. Time ceased to be absolute, and space — at the Planck level — lost its continuity. It became increasingly difficult to treat these facts as random analogies. Instead, I decided to shift my thinking: what if simulation was not merely a metaphor, but the simplest possible mechanism for implementing the known laws of physics?
The turning point – the solo journey that changed everything
The breakthrough came suddenly, unexpectedly, during a solo car journey. It wasn't the result of lengthy calculations or a planned thought experiment. Rather, it was a sudden shift in perspective—a moment that occurred on April 10, 2025, on the road between Warsaw and Gdańsk, on the S7 expressway, while conducting research for a book on the go, in a voice conversation with ChatGPT.
Instead of asking, "What is the universe made of?", another, much more fundamental question emerged: "How is the universe processed?" At this point, elements that had previously seemed like separate branches of physics—quantum mechanics, relativity, Planck's constants—began to form a single, coherent picture. As if different theories weren't describing distinct aspects of reality, but the same mechanism viewed from different levels of resolution.
We cannot solve problems by using the same thinking that created them.
Albert Einstein
At that moment, the words of one of the greatest physicists ceased to be an abstract aphorism. They became a very concrete experience.
Planck as a structure, not a boundary
The key insight was the realization that Planck units don't have to represent the "limits of our knowledge" or the point where physics breaks down. They can be something much simpler and more profound—parameters of the architecture of reality itself.
The Planck length began to emerge as the fundamental resolution of space. Planck time as the fundamental clock tick. Planck mass and energy as the natural limits of processing. From this premise emerged the concept of the planxel — a fundamental unit of local information processing from which time, space, mass, gravity, and our entire reality emerge, as if from the simple rules of a cellular automaton.
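The chapter does not specify the planxel's actual update rules; as a generic illustration of the cellular-automaton idea invoked above — simple local rules producing complex global dynamics — here is a minimal sketch in Python. The choice of Rule 110 and the grid size are illustrative assumptions, not part of the Māyā model.

```python
# Minimal elementary cellular automaton, as a generic illustration of
# "simple local rules, complex global dynamics". Rule 110 is chosen
# only because its complexity is well documented; it is NOT a claim
# about the planxel's actual update rule.

RULE = 110  # Wolfram rule number

def step(cells):
    """Advance the grid by one discrete 'tick' with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the 3-cell neighborhood (left, self, right) as a 3-bit index.
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # The rule number's bits are the lookup table for the new state.
        out.append((RULE >> pattern) & 1)
    return out

# A single active cell evolves into a growing, non-repeating pattern.
state = [0] * 15
state[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in state))
    state = step(state)
```

Each cell updates from purely local information, yet the global pattern is irreducibly complex — the structural analogy the planxel concept draws on.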
From idea to model
At this point, the Māyā theory ceased to be a philosophical narrative. Subsequent steps followed from the established mechanism with an almost mathematical necessity.
Gravity emerged not as a fundamental force, but as a gradient of slowing information processing. Mass turned out to be the result of local information overload. Time became a local update rate. And physical constants — from the speed of light to the fine-structure constant — began to appear as derivatives of the rhythm of this fundamental code, rather than arbitrary numbers embedded in equations.
The most surprising thing, however, wasn't that numbers began to "come out of the model." The breakthrough lay in something much deeper. The Māyā hypothesis didn't change a single valid formula, didn't challenge any established law of physics, and didn't introduce any new theoretical entities. It merely changed the ontological interpretation of what had long been written in the equations.
It was this shift in perspective that proved decisive. The same formalisms of quantum mechanics, general relativity, and Planck-scale physics began to coalesce into a coherent mechanism once they were no longer treated as descriptions of "things" but were instead read as traces of a computational process. Everything the Māyā theory proposed was already present in the existing formulas, fully consistent with all existing empirical knowledge, but had previously remained invisible because different questions were being asked.
So there was no moment of "fitting in." There was a moment of recognition. It was as if theories that had been coexisting side by side for decades suddenly revealed that they had always described the same mechanism—just from different levels of resolution.
The moment of revelation – the constant 1/137 as a rendering stabilization parameter
At some point, it became clear to me that the mechanism beginning to emerge from the Māyā model no longer resembled a simulation metaphor, but something much more concrete. It increasingly looked like an advanced cellular automaton — a network of local processors updating their state in discrete cycles, with simple local rules and complex global dynamics.
This association was not accidental. For years, there has been a movement in physics that has attempted to think about reality in precisely this way. In correspondence with one of the leading theoretical physicists working on deterministic information models, it was pointed out to me that the key challenge for such approaches is not the idea of local information processing itself, but explaining how the full symmetries known from particle physics and spacetime emerge from such a structure. This observation clearly points the way forward for further formalization of the Māyā model.
In other words, it's not enough to propose a network and a clock. We also need to explain why the world that emerges from it has exactly the symmetry structure we observe.
That answer wasn't a roadblock. It was a guidepost.
Because at this precise point on the path to understanding how all known symmetries of physics could emerge from local information processing rules, I encountered a number that was impossible to avoid. The fine-structure constant. The most mysterious of physical numbers. 1/137.
I knew it well. I knew it was dimensionless—that is, it didn't describe any specific physical quantity, but a relationship. I also knew the caution, almost reverence, with which physicists approached it. Richard Feynman called it one of the greatest mysteries of physics, the number that "God wrote on the wall of the universe" but left no key.
Something else had long been noticed: its value is surprisingly close to the golden angle. Many physicists and mathematicians have remarked on this similarity, and attempted connections, speculations, and approximations have appeared. But a starting point was always missing: an answer to the simplest question. What physical mechanism would force this choice? Why this particular number and not another?
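The numerical closeness mentioned above is easy to check. A quick sketch, using the CODATA recommended value of α as an external input (nothing here is derived; the comparison only illustrates the similarity, including the small residual difference the text later discusses):

```python
import math

# Golden angle: the circle split by the golden ratio, 360° * (2 - φ).
phi = (1 + math.sqrt(5)) / 2
golden_angle_deg = 360 * (2 - phi)     # ≈ 137.5078 degrees

# CODATA 2018 value of the fine-structure constant (external input,
# not a prediction of the model).
alpha = 7.2973525693e-3
inv_alpha = 1 / alpha                  # ≈ 137.0360

print(f"golden angle : {golden_angle_deg:.4f}")
print(f"1/alpha      : {inv_alpha:.4f}")
print(f"difference   : {golden_angle_deg - inv_alpha:.4f}")
```

The two numbers agree to within about half a degree; the text below interprets that residual as structural rather than accidental.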
And then something emerged that did not come from physics, but from a completely different field of my experience.
As a computer graphics designer, I'd worked with 3D rendering technologies for years. I knew the golden angle wasn't some esoteric artifact or aesthetic curiosity. It's an engineering tool, used wherever you need to disperse samples evenly, break mesh periodicity, avoid aliasing, and hide the fact that an image is based on a discrete, anisotropic structure.
The golden angle appears when we want something to appear continuous and isotropic, even though it is actually generated by a finite, granular architecture.
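A standard example of this engineering use is the Vogel ("sunflower") spiral: placing the i-th sample at an angle of i times the golden angle spreads points evenly over a disk and avoids the periodic alignments that any rational rotation step would produce. A minimal sketch (the sample count is arbitrary):

```python
import math

# Golden angle in radians: pi * (3 - sqrt(5)) ≈ 2.39996 rad ≈ 137.5078°.
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))

def vogel_spiral(n):
    """Place n samples in the unit disk using golden-angle increments.

    Rotating each successive sample by the golden angle never resonates
    with any periodic structure, so the points stay evenly spread for
    any n. This is the property renderers exploit to make output from a
    discrete sampling grid look continuous and isotropic.
    """
    points = []
    for i in range(n):
        r = math.sqrt((i + 0.5) / n)   # sqrt radius gives uniform area density
        theta = i * GOLDEN_ANGLE
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = vogel_spiral(256)
```

Replacing GOLDEN_ANGLE with any simple fraction of the circle (say, 90°) makes the samples collapse onto a few visible spokes, which is exactly the periodicity the golden angle is used to destroy.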
At this point, the pieces started to fit together.
If reality—as Māyā hypothesizes—is based on a discrete computational architecture, if time and space emerge from local update cycles, and the world is not "continuous" but rendered, then the problem facing the Universe is exactly the same problem facing every 3D graphics engine: how to generate a world from a grainy grid that is devoid of privileged directions, symmetric, and observationally stable.
In this context, the fine-structure constant stopped looking like an interaction-strength parameter. It started looking like a rendering stabilization parameter. A number that doesn't tell you how strong the electromagnetic interaction is, but how finely, and in what way, phase information must be scattered so that the world doesn't betray its discrete nature.
α does not describe matter. It describes the conditions under which matter can even appear as a stable, isotropic phenomenon.
In this sense, the fine-structure constant is not a physics correction or a mystery waiting to be "explained." It is the trace of an engineering solution, written deep into existing equations — from the very beginning, only becoming apparent when one's perspective changes. The difference between its value and the ideal golden angle is not accidental, but results from what might be called the geometric impedance of reality's architecture.
From this perspective, the fine-structure constant serves as a stability parameter: it defines the conditions under which stable information states can arise and persist in a discrete network. Such states — persistent, self-sustaining synchronization patterns — are soliton-like in nature: they retain their identity during propagation and do not decay despite local network fluctuations. It is these information solitons that we observe as elementary particles.
The constant α does not impose symmetry from the outside, but rather selects those symmetries that are compatible with the existence of stable solitons in the process of rendering reality. In this sense, it becomes the key to understanding why certain structures and symmetry representations are realized in nature and others are not.
This insight also proved to be a turning point for my brother, Daniel. The decisive factor was not the mere resemblance of the number α to the golden angle — such associations had appeared in physics before — but grasping its deeper mathematical meaning: not as a simple angle, but as a geometric parameter of a discrete computational network, acting as a condition for the stability and coherence of information patterns in the architecture of reality.
It was this recognition of the mathematical significance and necessity of such a mechanism that overcame his initial skepticism about the coded reality hypothesis. From that moment on, Māyā theory ceased to be merely an interesting metaphor for him and began to emerge as a coherent mechanism. As an analyst with a mathematical background in game theory and statistics, Daniel made a significant contribution to its development — supporting the organization of arguments, the verification of relationships, and the translation of intuitive ideas into a more rigorous language of logical structures and consequences. His contribution played a crucial role in further refining the theory's coherence and determining its future development path.
Moving from this phase to developing Māyā theory into a fully formalized physical theory — capable of unambiguously deriving symmetry structures, scales, constants, and testable consequences — requires a tremendous amount of work. This includes rigorous mathematical formalization, systematic refinement of the conceptual framework, analysis of consistency with existing results in physics, and open participation in the scientific community through publications, discussion, and critical review.
To facilitate this process and create realistic conditions for moving from a coherent ontological hypothesis to a complete theory, we established the Iskra Foundation. Its goal is to raise funds for further formalization of the model, support interdisciplinary research, and build a lasting space for dialogue with the broader scientific community.
Why this story matters
The story of the emergence of Māyā theory is not merely a personal anecdote. It is a testament to the fact that moving beyond an established paradigm does not require rejecting science, but rather a shift in starting point.
Working outside academic structures wasn't an act of defiance against science, but a space free from certain constraints. It allowed us to ask questions that rarely fit within formal frameworks: not "how to describe this?" but "how must it work to exist at all?" Not "what do we observe?" but "what mechanism generates these observations?"
Māyā theory is the result of this reversal of perspective. Instead of asking "what," it began to ask "how." Instead of searching for further building blocks of reality, it began to search for an algorithm.
And perhaps it is in this change of perspective that lies the key – not only to understanding how the Universe works, but also why there is something at all, and not nothing.
The history of science knows moments when a key shift in perspective occurred outside the mainstream before being recognized and leading to a paradigm shift.
Link to the publication of the full history of the origins of the Māyā theory: https://zenodo.org/records/15231099/files/Geneza_powstania_Maya.pdf
Seweryn Czarnocki
