Big Picture (Destiny)

by CruelLEGACEY @, Toronto, Wednesday, October 25, 2017, 00:31 (2378 days ago) @ Cody Miller

The difference is that “conceiving” of a super-intelligent AI like a Warmind is not the same as “being” one. The Vex don’t have to actually create or understand a Warmind, they just need to be aware of the concept. No human would be smart enough to look at a Vex simulation of a Warmind and know that it isn’t the real thing.


But within the simulation there presumably exist Warminds, incorrectly simulated or approximated. Otherwise, this discussion would not be happening.

If you are part of the simulation though, you have no way of knowing that the Warmind is functioning 'improperly'. Just like you would have no way of knowing whether 'real' tree leaves are green.

So there is zero evidence from which to say "Warminds cannot be simulated" since you are yourself part of the simulation.


The Vex could have noticed in reality that the Warminds were beyond their ability to simulate accurately. So in their simulation, they have simulated Warminds that, while being inferior to real Warminds, still defeat the simulated Vex. That’s the thing about a simulation... the Vex could even include false defeats just to sell the simulation.


That's possible, but the discussion is about whether a simulated being can have sufficient evidence to say 'A Warmind cannot be simulated'.

In order for a simulated being to conceive of something, it must store a representation of that thing in the simulated mind. This representation must necessarily reflect characteristics of the object: what it looks like, how it behaves, etc.

But if you have a way to describe how it behaves… you can simulate it, because you just did by making the symbolic representation. It doesn't matter that you don't simulate atoms… because you can 'cheat'. You use GR (general relativity) to describe the motions of galaxies even though QM (quantum mechanics) governs how all matter in the universe works. The simple equations of GR could place a simulated galaxy cluster in my telescope, even though tracking each atom with QM is impossible.
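To put that 'cheat' in concrete terms, here's a toy sketch (not from the discussion; Newtonian point-mass gravity stands in for the aggregate-equation idea, and the numbers are just standard constants): four numbers of state reproduce the macro-level output, an orbit, without touching a single atom.

```python
# A toy sketch of the macro-level "cheat": treat a planet (~10^50 atoms)
# as a single point mass and reproduce the observable output -- its
# orbit -- with simple aggregate equations. Illustrative only.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
SUN_MASS = 1.989e30  # kg

def orbit_step(pos, vel, dt):
    """Advance one body around the Sun using point-mass gravity."""
    x, y = pos
    r = math.hypot(x, y)
    ax = -G * SUN_MASS * x / r**3   # acceleration toward the Sun
    ay = -G * SUN_MASS * y / r**3
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# Earth's orbit: two numbers of position, two of velocity -- that's the
# whole simulated state. One step per day for a year.
pos, vel = (1.496e11, 0.0), (0.0, 29_780.0)
for _ in range(365):
    pos, vel = orbit_step(pos, vel, dt=86_400)
print(pos)  # ends up roughly back at the starting point
```

The outputs an observer can actually check (where the planet appears) come out right, even though nothing atom-level was ever computed.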

So if you can make a representation in a simulated brain, which is itself required for 'understanding', then you can by definition simulate the macro-level functions of a thing, which means you can simulate it. The simulation only cares that the outputs are correct, and your simulated being perceives things at the macro level anyway.
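The same point as a toy sketch (the class and queries are made up for illustration): a stand-in passes for the real thing as long as every probe an observer can actually make returns the expected output.

```python
# A toy sketch of "only the outputs matter": the symbolic representation
# (a query -> output table) is itself a macro-level simulation.
# The class and queries are hypothetical, for illustration only.

class SimulatedWarmind:
    """Stores only the observed behavior: query -> expected output."""
    def __init__(self, observed_behavior):
        self.table = observed_behavior

    def answer(self, query):
        # No internal cognition is modeled; the macro-level output is
        # looked up, which is all an in-simulation observer can test.
        return self.table.get(query, "QUERY NOT RECOGNIZED")

# Indistinguishable from the real thing for any probe the table covers.
sim = SimulatedWarmind({"status?": "All systems nominal."})
print(sim.answer("status?"))  # -> All systems nominal.
```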

I still think there is a fundamental gap in your argument.

Looking at Vex simulations from a human perspective:

We must first recognize that humans are not at the top of the intellectual chain. Both the Vex and the Warminds (among others, but let’s focus on those two for now) are complex beyond our ability to fully grasp their workings. That means we lack the ability to even define their boundaries.

So, in a theoretical Vex simulation, the Vex don’t need to perfectly replicate a Warmind in order to trick us. They just need to simulate our understanding of a Warmind. And since humans are well within the Vex’s ability to simulate, they can reproduce our understanding of a Warmind perfectly.

So yes, as you argue, the Vex cannot truly simulate a Warmind in a way that accurately predicts the behaviours of a real Warmind. But what if that isn’t their goal? What if the purpose, the focal point of their simulations, is us? Humanity? The Light? If that is the case, then they don’t need to accurately simulate a Warmind. They just need to simulate our human concept of a Warmind, and watch how we react within the simulation.
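In code terms, a rough sketch of that idea (every property below is invented for illustration, not from the lore): the fake only has to match the properties the observer can actually check, and our concept is the complete list of those properties.

```python
# Rough sketch: the simulation targets the observer's *concept* of a
# Warmind, not the real artifact. All properties are invented here.

human_concept_of_warmind = {
    "controls orbital defenses": True,
    "speaks tersely": True,
    "beats the Vex in wargames": True,  # we expect this, so the sim fakes it
}

def convinces(observer_concept, candidate):
    """The candidate passes as real, to this observer, iff it matches
    every property the observer is able to check."""
    return all(candidate.get(k) == v for k, v in observer_concept.items())

# Built from the concept alone -- no real Warmind required.
simulated_warmind = dict(human_concept_of_warmind)
print(convinces(human_concept_of_warmind, simulated_warmind))  # True
```

A real Warmind’s actual behaviour never enters the picture; the table of human expectations is the whole spec.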

It’s like that line in The Matrix: how do we know the machines got the taste of chicken right? Ultimately, it doesn’t matter. They don’t have to accurately recreate the taste of chicken. They just need to create the taste of *something*, and tell everyone “that is chicken”.

