It's complicated. (Gaming)

by uberfoop @, Seattle-ish, Monday, July 02, 2018, 23:29 @ Cody Miller

Games need to be rendered at whatever sample rates and output resolutions necessary to pull off the desired visuals. And this is not only subjective, but also sometimes counter-intuitive.

Take Ridge Racer 7. It embraced the whole "HD console" thing and went 1080p60 at PS3 launch. In order to hit that target, its graphical makeup is, texture quality and resolution aside, mostly not very interesting even by PS2-era standards. But artistically it was designed around that razor-sharp hyper-speed look... it looks great on my HD LCD, but despite the visual simplicity, it looks poor on my SD CRT; all of its luster is just gone at low res.

Take The Last of Us. Very visually complex, very richly detailed. But it's got a smudgy visual composition, and assets are generally authored in such a way that they still convey their intent and are readable when heavily filtered down. And consequently, when I played it in a ridiculous 16:9 360i window on my SD CRT, I was surprised to find a game that still looked excellent and remained very visually readable.

Also think about the fact that 2K (2048 x 1080) has long been, and still is, a cinema standard, where screens are 25 FEET or more. The CGI work in The Martian looked pretty damn realistic on the big screen, right? Guess what, it was 2K. It looked so good because they had the ability to spend an hour per frame to render graphical effects.

Movie comparisons tend to be misleading in the context of game rendering, for two main reasons.

First, depth of field tends to increase perceived clarity on unblurred objects. That's fine for movies, but with games, having ultra-clarity everywhere is often desirable in and of itself.

Second, we need to separate output reconstruction from sample rate.
Movies aren't rendered at 2K, they're mastered at 2K. When using a physical camera, a ton of photons are involved in determining the color of each pixel. When rendering CGI, a ton of samples are taken at every pixel.
It's easy to blow off this issue by saying "just use better antialiasing", but that's not really a complete solution. There are a lot of types of detail that contribute to a game scene, and within current rendering methodology, very few of them are actually being correctly filtered down from high sample rates. And that produces macro-scale inaccuracies that can't simply be swept under the rug.
Even if we're playing on a 1080p display from a considerable distance, there are a ton of respects in which games do look cleaner if you boost the rendering resolution beyond 1080p and do a good resample to produce the final output.
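To make the "render high, resample down" idea concrete, here's a minimal sketch in Python/NumPy of the simplest possible version: supersample by some factor per axis, then box-filter back down to the display resolution. The shapes, the 2x factor, and the downsample_box name are illustrative assumptions, not how any particular engine resolves its frames (real resolve filters are usually smarter than a plain box average).

```python
import numpy as np

def downsample_box(frame, factor):
    """Average factor x factor blocks of a supersampled frame (a box filter).
    `frame` is an (H, W, C) array whose H and W are multiples of `factor`."""
    h, w, c = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Stand-in for a frame rendered at 2x the sample rate per axis (3840x2160 samples)...
rendered = np.random.rand(2160, 3840, 3).astype(np.float32)

# ...resolved down to the 1080p image that actually goes to the display.
output = downsample_box(rendered, 2)
print(output.shape)  # (1080, 1920, 3)
```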

Oh, also:

Assuming he has 20/20 vision

There's a very high probability that he has better than 20/20 vision.

20/20 vision isn't good visual acuity; it's the boundary of what's considered healthy. That is, having acuity worse than 20/20 can be indicative of a problem.

Typical values for healthy young people are more in the ballpark of 20/15. (So a lot of people do even better than that.)
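As a rough back-of-envelope illustration of why that gap can matter (the 55-inch screen and 8-foot viewing distance below are hypothetical assumptions, not anything from this thread): 20/20 roughly corresponds to resolving detail about 1 arcminute across, and 20/15 to about 0.75 arcminutes, which you can compare against the angular size of a 1080p pixel.

```python
import math

# Hypothetical setup, purely for illustration.
DIAGONAL_IN = 55          # 55-inch 16:9 display
DISTANCE_IN = 8 * 12      # viewed from 8 feet
WIDTH_PX = 1920           # 1080p horizontal resolution

# Physical width of one pixel on that display.
width_in = DIAGONAL_IN * 16 / math.hypot(16, 9)
pixel_in = width_in / WIDTH_PX

# Angular size of one pixel, in arcminutes.
pixel_arcmin = 60 * math.degrees(math.atan2(pixel_in, DISTANCE_IN))

print(f"one pixel subtends ~{pixel_arcmin:.2f} arcmin")   # ~0.89 arcmin
print("20/20 resolves ~1.0 arcmin; 20/15 resolves ~0.75 arcmin")
```

Under those made-up numbers, a pixel sits right around the 20/20 limit but comfortably above the 20/15 limit, which is the sense in which better-than-20/20 vision plausibly matters here.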

