One of my favorite arguments against the simulation hypothesis is the exponential resource problem.
To simulate a system with N states/particles with full fidelity, the simulator needs resources that scale with N (or worse, exponentially with N for quantum systems). This creates a hierarchy problem:
- Level 0 (base reality): has X computational resources
- Level 1 (first simulation): simulating something as rich as Level 0 would take X resources, but Level 1 exists within Level 0, so it can only access some fraction of X
- Level 2: would need even more resources than Level 1 has available
The logical trap is that each simulation layer must have fewer resources than the layer above it (since it is contained within it), but needs MORE resources to simulate something of that layer's fidelity. This is mathematically impossible for high-fidelity simulations.
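To make the quantum case concrete, here is a minimal back-of-the-envelope sketch (my own numbers, purely illustrative): a brute-force state-vector simulation of N qubits has to track 2^N complex amplitudes, so memory alone blows up exponentially.

```python
# Memory needed to store the full state vector of N qubits, assuming
# double-precision complex amplitudes (16 bytes each).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required for a full-fidelity state vector of n_qubits."""
    return BYTES_PER_AMPLITUDE * (2 ** n_qubits)

for n in (30, 50, 300):
    print(f"{n:>3} qubits -> {state_vector_bytes(n):.3e} bytes")

# ~30 qubits fit on a beefy workstation (~17 GB), ~50 qubits already need
# ~18 PB, and ~300 qubits exceed 10^80 bytes, more than the number of atoms
# in the observable universe.
```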
This means one of the following:
* we're in base reality - there's no way to create a full-fidelity simulation without having more computational power than the universe you're simulating contains
* simulations must be extremely "lossy" - using shortcuts, approximations, rendering only what's observed (like video games), etc. But then we face the question of why unobserved quantum experiments still produce consistent results. Why does the simulator render distant galaxies we'll never visit?
* the simulation uses physics we don't understand - perhaps the base reality operates on completely different principles that are vastly more computationally efficient. But that is an unfalsifiable speculation.
This is also sometimes called the "substrate problem"; you cannot create something more complex than yourself using only your own resources.
But couldn't lossy simulation still be the case? Once we look at a galaxy it becomes less lossy (though still very lossy until we can actually visit it); as in games, the renderer knows what is important to get accurate and what isn't.
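For what it's worth, this is exactly how procedural games keep unobserved regions both cheap and consistent: nothing is stored up front, but whatever you eventually look at is a deterministic function of a fixed seed. A toy sketch (all names mine, purely illustrative):

```python
import hashlib
from functools import lru_cache

WORLD_SEED = 42  # one fixed seed for the whole toy "universe"

@lru_cache(maxsize=None)  # computed lazily, only when first observed
def galaxy_at(x: int, y: int) -> int:
    """Deterministic 'contents' of the galaxy at grid cell (x, y).

    Nothing is rendered until someone looks; the value is derived from the
    seed and the coordinates on first observation, then cached.
    """
    key = f"{WORLD_SEED}:{x}:{y}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

# Two observers looking at the same distant galaxy always agree, even though
# it was never stored anywhere before the first observation.
assert galaxy_at(10_000, -3_500) == galaxy_at(10_000, -3_500)
print(hex(galaxy_at(10_000, -3_500)))
```

So "render only what's observed" doesn't have to mean "inconsistent when observed later"; consistency falls out of determinism, not of storing everything.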
We already create video games that both "operate on completely different principles" than our reality and are "lossy" approximations of it, so we already have concrete examples of (simple) simulations that take both approaches. You seem to reject the "higher reality working differently" premise because it's unfalsifiable, when we already actively do exactly that for the simulations we create. Our video games aren't based on quantum physics, but our reality is.
> But that is an unfalsifiable speculation.
Having the host universe be in any way similar to ours is just as much speculation. It's a weird belief that we are exceptional in some way, a bit like drawing gods that look like humans.
They made a simple categorical error in the paper.
Computable systems can have mathematically undecidable problems inside them.
The Game of Life is maybe the simplest example of a simulated universe that contains many undecidable problems.
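A minimal sketch of how cheap the rules are (the undecidability lives in long-run questions such as "does this pattern ever die out?", not in computing the next generation):

```python
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One Game of Life generation on an unbounded grid of live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and is already live.
    return {c for c, n in neighbour_counts.items() if n == 3 or (n == 2 and c in live)}

# A glider travels forever; whether an arbitrary starting pattern eventually
# dies out, stabilises, or grows without bound is undecidable in general.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider, shifted by (1, 1)
```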
They fall into the same categorical mistake as the Lucas–Penrose argument, and they even use that argument in the paper. There is a lot of hand-waving. By the way, just adding irreducible randomness to a computational system would make it trivially non-computable in the sense they use, but that by itself would not prevent developing an axiomatic Theory of Everything that explains everything we want to know. So far, nothing has demonstrated that the Universe must be non-computable.