Well... (Destiny)

by uberfoop @, Seattle-ish, Saturday, April 12, 2014, 10:25 @ ZackDark
edited by uberfoop, Saturday, April 12, 2014, 10:47

Well, I was referring exclusively to those cases, but rendering multiple frames for the same physics "time step" isn't necessarily a waste, depending on the rendering scheme.

You could, for instance, render only the player's cone of vision, so even if the physics itself doesn't update, the part of it you're seeing does change. Again, I do a lot of that in my internship as well, zooming around the results of simulations with Z-buffer-type rendering programs.
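
To make that concrete, here's a minimal sketch in C++ of that kind of decoupled loop (stepPhysics, updateCameraFromInput, and renderScene are hypothetical stubs, not any particular engine's API): physics advances on a fixed timestep, but the viewpoint and the rendered frame update every iteration, so frames rendered between physics steps still show something new.

    #include <chrono>
    #include <cstdio>

    // Hypothetical stand-ins for whatever the real simulator/renderer does.
    void stepPhysics(double dt)  { std::printf("physics step dt=%.4f\n", dt); }
    void updateCameraFromInput() { /* move the player's viewpoint */ }
    void renderScene()           { std::printf("render frame\n"); }

    int main() {
        using clock = std::chrono::steady_clock;
        const double physicsDt = 1.0 / 30.0;  // fixed 30 Hz simulation step
        double accumulator = 0.0;
        auto previous = clock::now();

        for (int frame = 0; frame < 600; ++frame) {  // bounded for the sketch
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;

            // Physics only steps once a full timestep has accumulated...
            while (accumulator >= physicsDt) {
                stepPhysics(physicsDt);
                accumulator -= physicsDt;
            }

            // ...but the viewpoint and frame update every loop, so extra
            // frames per physics step still reflect new camera motion.
            updateCameraFromInput();
            renderScene();
        }
    }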

When I said "unchanged scene data", I was imagining that taken to the extreme of a constant viewpoint*.

But yes, decoupling physics and user view response in that way could potentially make sense in some applications. Doing lots of motion extrapolation between "real" frames in VR, for instance, could solve the ghosting/blur issues that occur when people eye-track static objects while rotating their head (which evidently arise from the actual movement of the screen relative to the eyeball during the display of each frame).
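
Sketched in the same spirit (C++ again; the names, rates, and the simple Euler quaternion integration are illustrative assumptions, not any particular VR runtime's API): take the last tracked head orientation and angular velocity, predict the orientation a few milliseconds ahead, and generate intermediate frames from the prediction instead of waiting for the next real update.

    #include <cmath>
    #include <cstdio>

    struct Quat { double w, x, y, z; };

    // Integrate orientation q forward by dt seconds under body-frame angular
    // velocity (wx, wy, wz) in rad/s, using dq/dt = 0.5 * q * (0, w),
    // then renormalize.
    Quat extrapolate(Quat q, double wx, double wy, double wz, double dt) {
        Quat dq {
            0.5 * (-q.x * wx - q.y * wy - q.z * wz),
            0.5 * ( q.w * wx + q.y * wz - q.z * wy),
            0.5 * ( q.w * wy - q.x * wz + q.z * wx),
            0.5 * ( q.w * wz + q.x * wy - q.y * wx)
        };
        Quat out { q.w + dq.w * dt, q.x + dq.x * dt,
                   q.y + dq.y * dt, q.z + dq.z * dt };
        double n = std::sqrt(out.w * out.w + out.x * out.x +
                             out.y * out.y + out.z * out.z);
        out.w /= n; out.x /= n; out.y /= n; out.z /= n;
        return out;
    }

    int main() {
        Quat head {1, 0, 0, 0};  // last "real" tracked orientation
        double yawRate = 2.0;    // rad/s head rotation, a made-up figure

        // Render a few intermediate frames (240 Hz here, hypothetically)
        // from extrapolated orientations between tracking updates.
        for (int i = 1; i <= 4; ++i) {
            Quat p = extrapolate(head, 0.0, yawRate, 0.0, i * (1.0 / 240.0));
            std::printf("frame %d: w=%.4f y=%.4f\n", i, p.w, p.y);
        }
    }

In a real headset pipeline the prediction would more likely feed a reprojection of the last rendered image rather than a full re-render, but the extrapolation step is the same idea.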

*edit: But I suppose the post of Cody's that I replied to had specifically said "physics."

