RC's uninformed opinions on VR (Gaming)

by RC ⌂, UK, Sunday, August 30, 2015, 23:10 (1808 days ago) @ Cody Miller
edited by RC, Sunday, August 30, 2015, 23:22

The kit required to make many excellent VR experiences will be too expensive and/or bulky for the vast majority of home users. Arcade-like installations will be able to achieve experiences that simply won't come to home users on a large scale, because the price, weight and space requirements would make them impractical.

Note: I have not yet used any VR device and this is all based on what I've read, as well as my own imagination.

Let's talk about FPS games first. Right now, and possibly forever, these are a dead end with VR. The big problem is control. I played a game called World War Toons, in which you aim by moving your head.

Yep, totally dumb idea. VR FPS games should (and I believe will) model themselves after a mech-style control scheme. More inertia and slower turning than current gamepad FPSes. One joystick for X/Y motion and turning. Second for fine-grain aim. Use your head for 'general direction of aim' and the joystick to finesse it. More intuitive that way, I think.
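A rough sketch of what I mean, in code. The function name and the 15-degree fine-aim range are my own invention, just to illustrate the idea of head-for-coarse, stick-for-fine aiming:

```python
import math

def aim_direction(head_yaw, head_pitch, stick_x, stick_y,
                  fine_range=math.radians(15)):
    """Coarse aim from the head, fine aim from the stick (hypothetical).

    head_yaw, head_pitch: radians, straight from head tracking.
    stick_x, stick_y: right-stick deflection in [-1, 1].
    fine_range: how far (radians) the stick can pull aim off-centre.
    """
    yaw = head_yaw + stick_x * fine_range
    pitch = head_pitch + stick_y * fine_range
    return yaw, pitch

# Looking dead ahead with the stick pushed fully right:
# aim lands 15 degrees to the right of the line of sight.
print(aim_direction(0.0, 0.0, 1.0, 0.0))
```

The stick never has to sweep the whole view, so it can be slow and precise without the turn itself making you turn your stomach.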

VR experiences may integrate more foot-pedals as well - which would be a natural bit of kit for car racing games, and could offer additional analogue input for other genres.

You can't just use a swivel chair, because the head tracking works very poorly if you are facing the other way.

The HTC Vive has cameras embedded in the device, so it should work equally well whichever direction you are facing. Or a set-up with cameras all around, as you mention later (though, as you said, not feasible in the short-to-medium term for home use).

Even if that somehow worked, you still can't spin, because the video cable going to your headset will wrap around you and get tangled.

There are such things as power and data connectors that work when they're spun in every direction. Ring rails? Not feasible for home, but wireless may be more of a thing with Gigabit-level WiFi (WiGig) and less data-heavy, low-latency video connections (next-gen, not currently existing).
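Back-of-the-envelope numbers on why ordinary WiFi doesn't cut it but a WiGig-class link might. This is my own arithmetic, assuming uncompressed 24-bit colour on the current crop of 1080p-per-eye headsets:

```python
# Raw video bandwidth for a current headset:
# two 1920x1080 eyes, 24 bits per pixel, 60 frames per second.
bits_per_second = 2 * 1920 * 1080 * 24 * 60
print(bits_per_second / 1e9)  # ~5.97 Gbit/s uncompressed - well beyond
                              # normal WiFi, but within WiGig's ~7 Gbit/s
```

Compression or a less data-heavy link format would pull that down further, at the cost of added latency, which is exactly the hard part.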

After playing that I thought the solution would have been to have the game control like Halo or Destiny, with the left thumbstick moving, the right aiming, and the head tracking letting you look around at stuff you aren't shooting. There were a few games like that, but these have an even more serious problem. Every time I used the right stick to turn, I immediately felt sick. The effect was instant. Basically any time my viewpoint was changing while my head was stationary, I felt nauseous.

Apparently people have varying susceptibility to 'simulator sickness' (a counterpart of car sickness: the eyes report motion the inner ear doesn't feel). Moving the person's actual body helps, and as another poster mentioned, very high refresh rates, where the image keeps pace with the expected motion blur/retinal smear, apparently help as well.

This fixed both the problems above, but it's still problematic because you can only walk about 10 feet, and nobody will have this setup in their home. So while the control was good, it sucks to be boxed into a game world of 100 square feet.

The whole game (which will be a more limited, focused and 'arcadey' experience) has to be designed with that limitation in mind. I've seen one demo where the players are on a hover-craft that flies through a desert canyon with various bad-guys popping up on their own craft. I think it'd turn out to be a really fun arcade experience (especially co-op) but, as you said, not something people would have in their own homes.

I also checked out 360 degree video, where you basically take a ball of GoPros and shoot video all around. You can then play it back, and use the VR headset to look around. For storytelling it's pointless in my opinion, since you can't move around and can only look from the camera's perspective. So you're essentially watching a movie with no edits, with important stuff you can miss because you aren't looking in the right direction.

How do they direct audience attention in plays/theatre? Lighting, motion, sound. How do they edit? Scene changes.

VR video (cinema?) doesn't have to be 360 Degrees of Non-Stop Action; it can be any size, any shape, any orientation of focus, at any time, at any point around the viewer. In front, behind, above, below. Squares, circles, ovals, fades. The creator can have multiple areas of focus from disparate places at the same time. It's not limited to a 16:9 rectangle covering 50 degrees directly in front of the audience - it's anything up to a full sphere of a scene. Some movie-makers will never figure this out, but others will, and they'll create some awesome stuff, IMO.

Computer-generated Machinima, or possibly light-field data may allow viewers to be more 'within' the scene and at least lean around a scene, or move in a limited fashion. More experiences that blend movie and game with varying levels of interactivity are coming, I think.

Editing techniques will differ, but will still exist. If you have a scene where two characters are talking, it'd be more natural to simply place them both within the field of view and let the audience look from one to the other rather than cutting. If the audience misses clues, that's a part of their unique experience and already happens in cinema anyway. Or they can be led to noticing things with lighting, motion, sound & colour cues.

There's no real benefit.

It's different. I don't think it'll ever be a replacement, however.

First, none of the units had a high enough resolution. The screen door effect is real, and it's a significant detraction from the experience. Each eye was 1080p (~2K), so realistically you're looking at 4K per eye to eliminate this. I don't know many systems that can render two 4K images at 60fps right now.
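For scale, here's the arithmetic (mine, taking '4K' to mean 3840x2160):

```python
pixels_1080p = 1920 * 1080              # ~2.1 MP per eye, what we have now
pixels_4k = 3840 * 2160                 # ~8.3 MP per eye, 4x the pixels
shaded_per_second = 2 * pixels_4k * 60  # both eyes at 60 fps
print(shaded_per_second)                # just under a billion pixels a second
```

And that's before supersampling, which VR lens distortion correction usually demands on top.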

I think VR is where raster rendering may meet its demise. However, alternatives are still a few years out, unfortunately. Look up 'frameless' and 'foveated' rendering. There is significant inter-ocular coherency (similarity between the images presented to each eye), spatial coherency (similarity between pixels close to each other), and temporal coherency (between frames). Ray-traced rendering could exploit these features, just as video compression already does, to reduce the rendering burden by orders of magnitude. But it needs graphics and display hardware support to get the full benefit. "Frames per second" will mean less than response time, and "resolution" will be more like a hardware maximum.

The first generation of games that use this will probably have a 'graphics' downgrade, but will be silky smooth and razor sharp. They'll probably have simpler art styles that focus more on shape and shadow than texture.
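A toy illustration of the foveated idea: spend full sample density where the eye is looking and let it fall off with eccentricity. The window size and falloff constant here are made up; a real implementation would tie them to measured visual acuity and the eye tracker's error:

```python
def sample_density(eccentricity_deg, foveal_radius=5.0, falloff=0.2):
    """Relative ray/sample density at a given angle from the gaze centre.

    Full density inside a small foveal window, then a hyperbolic
    falloff - loosely mimicking how visual acuity drops off-axis.
    All constants are illustrative, not from any real system.
    """
    if eccentricity_deg <= foveal_radius:
        return 1.0
    return 1.0 / (1.0 + falloff * (eccentricity_deg - foveal_radius))

# Centre of gaze: full density. 55 degrees out: under a tenth of the rays.
print(sample_density(0.0), sample_density(55.0))
```

Under a scheme like this, most of a '4K per eye' frame never needs to be shaded at full resolution at all.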

Second, and this is minor, but none of the units completely filled my field of vision. I always had the black from the edges in my periphery, so the field of view was too narrow, and in a way felt like tunnel vision.

Just down to the optics of a flat screen that close to your face. Curved screens could do better horizontally; vertically is much harder due to the geometry of faces.

The other thing that needs addressing is positional audio. No games had it. I would turn my head, and the audio still came from the same place in the headphones.

Audio is generally neglected :'( Which is a shame because binaural sound could add sooo much to VR games.
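The basics aren't even expensive to compute. As one sketch, Woodworth's classic approximation gives the time delay between the ears for a source at a given angle (the formula is real; the head radius is a textbook average):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air
HEAD_RADIUS = 0.0875    # m, a commonly used average

def itd_seconds(azimuth_deg):
    """Woodworth's approximation of interaural time difference.

    azimuth_deg: source angle from straight ahead (0 = dead centre,
    90 = directly to one side). Returns the arrival-time difference
    between the two ears, in seconds.
    """
    a = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (a + math.sin(a))

# A source directly to one side arrives ~0.66 ms earlier at the near ear.
print(itd_seconds(90.0))
```

Feed the headset's orientation into the azimuth each frame and sounds stay anchored in the world as you turn your head - which is exactly what those games weren't doing.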

Anyway, I really want to actually try one of these devices some time. Really just jelly. Mega jelly.
