
Big Picture (Destiny)

by RaichuKFM @, Northeastern Ohio, Tuesday, October 24, 2017, 23:04 (2378 days ago) @ Cody Miller

I should elaborate. On the face of it, your statement is correct. But people are asking about imagining something within the simulation that the simulation cannot simulate, which is different. You're giving an answer to a question that wasn't asked.


If something is in the simulation, it's tautological that it can be simulated…

Again, by imagining it, the representation of it in the simulated mind assigns it characteristics and behaviors, which must be references to parts of the simulation which can govern those things. Otherwise it's impossible to assign those characteristics, which means no representation which means no imagining it.

There's a parsing error here, I think. The imagining of the thing is within the simulation, but the thing itself still isn't.

It does not necessarily follow that if you can simulate a mind imagining something, you can simulate that something itself. I know you know this, but I want to state it explicitly.

It also does not necessarily follow that if you can simulate a mind imagining something, you can simulate a version of that thing that the mind will (mistakenly) recognize as the actual thing. It theoretically could, but not necessarily in practice. While it takes less processing power to simulate only the parts of a thing that the mind can detect, that still may require more power than the simulation has.

Now, in (almost?) all cases it can 'cheat'-simulate such a thing by faking the input to the mind. If you can't simulate a lightbulb, you could just simulate the eyes responding as if there had been a light source, and the mind can't tell the difference. If I understand you correctly, this is your point?
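To make the 'cheat' concrete, here's a toy sketch in Python. Everything here is invented for illustration (the function names, the signal format, the thresholds); it's a cartoon of the idea, not anything from the game:

```python
# Toy sketch of 'cheat'-simulation: instead of simulating the lightbulb
# itself, hand the simulated observer the sensory input a lightbulb
# would have caused. All names here are invented for illustration.

def simulate_lightbulb():
    # The "honest" path: simulate the actual object. We're supposing
    # the simulation doesn't have the processing power for this.
    raise MemoryError("not enough processing power to simulate this")

def fake_optic_input(brightness):
    # The cheat: skip the object entirely and fabricate the signal
    # that the object would have produced at the mind's inputs.
    return {"optic_nerve_signal": brightness}

def observer_perceives_light(sensory_input):
    # The simulated mind only ever sees its inputs, so it cannot
    # distinguish a faked signal from one caused by a real
    # (simulated) lightbulb.
    return sensory_input["optic_nerve_signal"] > 0.5

print(observer_perceives_light(fake_optic_input(brightness=0.9)))  # True
```

The point the sketch makes is that the mind's experience depends only on what crosses its input boundary, so faking the input is strictly cheaper than simulating the thing that would have generated it.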

(Note that, while this could be done in the simulation, it might not necessarily be a thing feasible for those running the simulation to make happen. It requires an ability to interject into the simulation a certain way, and the knowledge of how to do that. I think it's a safe assumption that the Vex can manage it, but it is not necessarily true in all theoretical cases of simulations.)

So it doesn't matter whether the Vex mind can actually simulate the Warmind or not (and, as a sidenote, I think it's an important consideration that the question isn't whether the Vex could simulate a Warmind in general, but whether they could simulate the particular Mind they were dealing with), because they can trick the simulated people into thinking there's a Warmind. Because it's based on their ideas of a Warmind, it won't contradict those impressions, and if done properly it would be impossible for them to realize it was not, in fact, a Warmind, even if it departed from reality.

However, there are some wrinkles. Assuming each mind in the simulation was experiencing a cohesive narrative with all the others in its instance (and IIRC they had to be, because the outside scientists were witnessing things unfold for their simulated selves, all the way down, and later all the copies met and hatched a plan to spy on the Vex, or some such, right?), it would become more and more complicated to plausibly meet all of their expectations of a Warmind, although probably still doable for the Vex. But it is a little less clear-cut because of that, especially given the unknown capabilities of a Warmind. (Those presumably include contacting things further outside the simulation, which might themselves be more and more complicated to simulate. Asher Mir and Ikora, in at least one variation of the Pyramidion strike, suggest that the Vex could already be simulating their correspondence to you when you've actually lost contact; but if the Warmind really were simulated, it would presumably be considerably more difficult for the Vex Mind to trick it that way than it would be to trick several humans, or Guardians.)

But more importantly, I think, nowhere (that I can recall) does it say that the humans realized a simulated Warmind was an impostor. So saying that couldn't have happened doesn't really mean anything. Putting aside the possibility that the entire series of Grimoire cards, progressively released over several expansions, was a Double Shyamalan, that the researchers didn't get out of the simulation, and that they just never got around to revealing the twist, we know what happened: the Warmind got all of the copies out of their simulations. It did not, as far as is mentioned, excavate copies of itself. So their plan, however unfounded, did work, and there is no concrete evidence the Warmind was simulated.

Now, I'm not saying this means you're wrong that they couldn't tell if the Warmind was simulated. In fact, even if you were, that wouldn't explain the resolution! The solution was not 'figuring out which ones are the copies', so this has all been a bunch of dithering about, not the actual crux of the scenario. Perhaps the idea was that, because the Vex Mind could not simulate the Warmind, the Warmind was able to leverage something against the Vex Mind, and know with (greater) certainty that it was really threatening the actual Vex Mind. And, judging by Rasputin's philosophy, it might not have made the moral call the scientists did, that allowing the copies to come to harm would be terrible, which would deny the Mind its main demonstrated leverage even if the Warmind was simulated. The other possibility is that, because the Warmind was probably a larger, more complicated AI than the Vex Mind, it could outsmart it, or do something like that. In all honesty, it feels like it was left unexplained because there wasn't a great idea for exactly what it did; but I think it's safe to say that the tangent about whether people could tell if the Warmind was simulated, while interesting, is not actually the maker or breaker of the logic of the scenario.

As a last sidenote, I would like to point out that you're wrong about the trees thing you mention elsewhere. If there were a simulation that cannot produce the wavelengths of light that humans perceive as green, there could still be humans in it who know trees are green, or remember green trees. If they looked at trees with black leaves, they would be able to tell the difference. You might ask how it's possible for them to imagine a color that can't exist in the simulation, or how that doesn't undermine what I've written above, but it's because the color itself is qualia, a subjective experience, and, if you'll indulge my materialism for a moment, a property arising from the mind's interpretation of its physical brain. It doesn't need there to actually be that specific wavelength of light; it needs a certain electrical impulse from the optic nerve, which can exist without that wavelength. (I'm not that well versed in neuroscience, so I don't know how the memory of qualia relates to direct experience of qualia; for instance, how imagining something green in your mind's eye relates to recognizing a green thing you see with your actual eyes. I'm getting kind of far afield at this point, though.)
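The wavelength-versus-impulse point can be sketched the same way. This is a materialist cartoon with entirely invented names (the signal labels, the wavelength cutoffs), not real neuroscience:

```python
# Toy model of the green-trees point: what the mind classifies as
# "green" depends on the signal from the optic nerve, not on any
# green-wavelength light actually existing in the simulation.
# All names and numbers here are invented for illustration.

GREEN_SIGNAL = "green-type impulse"  # stand-in for the neural pattern

def optic_nerve_response(wavelength_nm):
    # The normal route: roughly 495-570 nm light produces the
    # green-type impulse in this cartoon.
    if 495 <= wavelength_nm <= 570:
        return GREEN_SIGNAL
    return "other impulse"

def perceived_as_green(signal):
    # The mind classifies by signal alone; the wavelength never
    # enters this function at all.
    return signal == GREEN_SIGNAL

# A simulation that cannot produce green light can still inject
# the impulse directly, and the perception is the same:
print(perceived_as_green(GREEN_SIGNAL))               # True, no light needed
print(perceived_as_green(optic_nerve_response(640)))  # False: red light
```

The design point is that `perceived_as_green` never touches the wavelength, which is the separation the paragraph above is arguing for: remove green light from the world and the mind's capacity for the green experience is untouched.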

