It's complicated. (Gaming)

by uberfoop @, Seattle-ish, Tuesday, July 03, 2018, 11:42 (2135 days ago) @ Cody Miller
edited by uberfoop, Tuesday, July 03, 2018, 12:29

You never see artifacts in a movie, even when they are shot in 2K. There is no supersampling going on.

Yes, there is. Huge numbers of photons strike each photosensor element and contribute to the final image. In graphics-rendering terms, that's basically supersampling.
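
To put that in rendering terms (a toy sketch, not anything camera-specific): each sensor element effectively averages an enormous number of point "samples" of the incoming light across its area, which is what brute-force supersampling does per pixel.

    import random

    def scene(x, y):
        # Stand-in for whatever a renderer would evaluate at a point; the
        # checker pattern here is deliberately finer than one pixel.
        return 1.0 if (int(x * 3) + int(y * 3)) % 2 == 0 else 0.0

    def pixel_value(px, py, samples_per_pixel=256):
        # Average many point samples over the pixel's footprint, much like a
        # photosensor element averaging the many photons that strike it.
        total = 0.0
        for _ in range(samples_per_pixel):
            total += scene(px + random.random(), py + random.random())
        return total / samples_per_pixel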

The photosensor grids used in digital cameras often do have elements that each sample only a very narrow region, which produces aliasing if the camera is used without a low-pass filter. Hence, low-pass filters are usually fitted whenever aliasing is a notable problem for the scenes and cameras involved.

Although some post-processing is always done on digital imagery, the low-pass filters in question are usually optical: they slightly blur the incoming light before it reaches the photosensor grid, so light that might otherwise have "missed" a photosensor strikes it instead.
This is the most correct way to antialias a signal: blurring before sampling prevents aliases from being introduced in the first place, whereas sampling and then blurring just blurs the aliases that result from the sampling process.
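
A one-dimensional sketch of that difference (illustrative numbers only): a detail oscillating at 0.9 cycles per sample spacing turns into a strong, bogus slow wave if you point-sample it, but mostly disappears if you average it over each sample's footprint first.

    import math

    FREQ = 0.9  # cycles per sample spacing: too fine for this sample grid

    def signal(x):
        return math.sin(2.0 * math.pi * FREQ * x)

    def point_samples(n):
        # Sample first: the detail shows up as a slow alias wave, and any
        # blurring applied to these values afterwards only smears that alias.
        return [signal(i) for i in range(n)]

    def prefiltered_samples(n, sub=200):
        # Blur first: average the signal over each sample's footprint (a crude
        # box low-pass) before sampling, so the unrepresentable detail is
        # largely gone before it gets a chance to alias.
        return [sum(signal(i + k / sub) for k in range(sub)) / sub
                for i in range(n)]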

Anyway, the cleanliness of digital film isn't just a matter of post-process AA.

The whole point behind anti-aliasing was that it was cheaper than supersampling. If it weren't a good 'bang for the buck' the algorithms wouldn't exist. In your example, the developers could have lowered the resolution, and increased the filtering to eliminate the flicker with the extra GPU cycles that freed up.

Yes, but as I stated earlier, there are numerous cases in which the only current way to eliminate the artifacts while producing accurate results is to supersample. Saying that you just need to "increase the filtering" ignores that rendering is far from an easy, solved problem.

So, take the example of specular shimmer: it's very easy to reduce specular shimmer inaccurately.
For example, Halo 1's normal maps undergo trilinear texture filtering, which prevents sharp changes in surface normal from flickering in and out of existence. However, it also visually flattens surfaces at a distance. That isn't a big issue for Halo 1, because the only materials in the game with sharp normal maps are smooth apart from some large cuts, so flattening the normals doesn't harm the perceived material types. But it can be a problem for things like micro-smooth surfaces whose complex macro-roughness is represented in the normal map: they go from being chunky up close to mirror-like at a distance. Consequently, games sometimes choose to filter the normal maps sharply, prioritizing material accuracy at the cost of lots of flicker.
Nowadays there are some techniques available to combat this, like Toksvig mapping, which essentially converts normal-map detail into material roughness as the normals flatten out at a distance (sketched below). But there are plenty of circumstances where such techniques still fall well short of a ground-truth render.
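
Roughly the idea behind Toksvig mapping, sketched with made-up inputs (the exact parameterization varies by engine): averaging unit normals for a mip level produces a shorter vector the more the normals disagree, and that shortening can be converted into a broader specular highlight instead of a falsely mirror-like one.

    import math

    def average_normal_length(normals):
        # Mipmapping a normal map averages unit normals; the average gets
        # shorter as the underlying normals spread apart, which is exactly the
        # information plain trilinear filtering throws away.
        ax = sum(n[0] for n in normals) / len(normals)
        ay = sum(n[1] for n in normals) / len(normals)
        az = sum(n[2] for n in normals) / len(normals)
        return math.sqrt(ax * ax + ay * ay + az * az)

    def toksvig_spec_power(avg_len, spec_power):
        # Toksvig's factor turns that shortening into a reduced specular
        # exponent, so a rough patch seen from a distance gets a wide, dim
        # highlight rather than flattening into a mirror.
        na = max(avg_len, 1e-4)
        ft = na / (na + spec_power * (1.0 - na))
        return ft * spec_power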

Your example of using more rays for the ray tracing… this is completely independent of the resolution and actually supports my claim that beyond a certain point other things matter much more than resolution.

In terms of combating artifacts caused by inadequate sampling, using more rays is essentially the same approach as rendering at a higher resolution: both increase the number of point samples taken to create the final image.
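
In budget terms they're the same knob (illustrative numbers): total point samples = width × height × rays per pixel, so 1920×1080 at 4 rays per pixel costs exactly as many samples as 3840×2160 at 1 ray per pixel.

    def total_samples(width, height, rays_per_pixel):
        # More rays per pixel and more pixels are the same budget: both just
        # raise the number of point samples taken of the scene.
        return width * height * rays_per_pixel

    # e.g. total_samples(1920, 1080, 4) == total_samples(3840, 2160, 1)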

