GPS doesn't track the same way the Vive does. The Vive only needs to measure the difference in timing between a few sensors being hit by a very simple signal (laser on/off), while a GPS receiver has to receive, decode, and process data carried by radio signals. That's not very practical for millisecond tracking in a consumer device - if it's even possible at all.
Hmm. I figured that without some sort of time code, they’d only be able to get distance to the beacon, but I guess now I understand why they need to hit five sensors to get a full lock. Interesting, thanks. <3
So we could have a fast-paced FP game where locomotion is handled by the controller but all acceleration is modified for comfort: say by tunneling like "Eagle Flight," or massively sped up like Blink in "The Assembly"?
Yes, I'm talking about how to eliminate discomfort while maximizing the user's ability to travel freely through the VE, but no, I haven't really discussed those techniques yet. lol The blink system used by nDreams does leverage the same quirk of anatomy that I suggested for artificial acceleration though.
While verifying we were using the same definition of “blink,” I rediscovered a talk on locomotion given by nDreams CEO, Patrick O'Luanaigh, back in December. He describes teleportation at 18:45 and blink begins at 19:35, but for those who can’t watch, blink is basically a teleport, but instead of simply fading through black, you show a 100ms, first-person animation of the user zipping through the environment like The Flash to reach their destination. It sounds terrifying, but compressed time is actually a little less disruptive than missing time, and most crucially, it’s all over before the vestibular system has a chance to say, “WHAT’S EVEN HAPPENING??”
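To make that concrete, here's a minimal sketch of how a blink might work, assuming it's just a fixed-duration interpolation between the start and end points; the names and the 90Hz frame rate are my assumptions, not nDreams':

```python
BLINK_DURATION = 0.1  # 100ms, per the talk; over before the vestibular system objects

def lerp(a, b, t):
    """Linearly interpolate between points a and b at fraction t."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def blink_frames(start, destination, frame_dt=1 / 90):
    """Yield one camera position per frame for a 100ms dash.

    Unlike a fade-teleport, every intermediate frame is rendered,
    so the user sees compressed time instead of missing time.
    """
    steps = max(1, round(BLINK_DURATION / frame_dt))
    for i in range(1, steps + 1):
        yield lerp(start, destination, i / steps)

# At 90Hz, a 100ms blink is only ~9 rendered frames:
for pos in blink_frames((0.0, 0.0, 0.0), (0.0, 0.0, 4.0)):
    print(pos)
```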
Earlier in the talk — during the discussion regarding implementation of more traditional controls — he mentioned that users can similarly survive “short” bursts of acceleration. He wasn’t more specific than that, but I’ve heard elsewhere that “about a third of a second” is a safe target. I’m actually glad you brought The Assembly up, because he also touched on something sorta central to my theory on comfortable movement.
He talked at some length about the efforts they made to make traditional controls more comfortable. They cranked the rotation speed generated by the right analog stick *way* up, because it effectively transformed the process into a snap turn — over before the ears were roused. They also capped forward movement at 1.5m/s, which is a nice walking pace. CoD lets you run at around 7m/s, which is about as fast as most of us can sprint, but since we've basically *never* run that fast and the game is mostly just you walking around indoors, 1.5m/s was chosen as a nice, manageable top speed. The lower top end not only gave the user finer control of their actual speed, it also did an effective job of preventing them from moving “disturbingly fast” accidentally. Similarly, because sidestepping IRL is a very slow and clumsy process, strafing was capped at <1m/s for both comfort and control. I’m gonna say 0.75m/s, just because.
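Sketched as code, those caps might look something like this; the function name is mine, and the 0.75m/s strafe figure is just the guess above, not a confirmed nDreams number:

```python
MAX_FORWARD = 1.5  # m/s, the walking pace nDreams settled on
MAX_STRAFE = 0.75  # m/s, my guess; they only said <1m/s

def stick_to_velocity(stick_x, stick_y):
    """Map analog stick tilt (each axis -1..1) to a capped velocity."""
    return (stick_x * MAX_STRAFE,   # sideways (strafe)
            stick_y * MAX_FORWARD)  # forward/back

print(stick_to_velocity(0.0, 1.0))  # full tilt forward -> (0.0, 1.5)
print(stick_to_velocity(1.0, 0.0))  # full tilt right   -> (0.75, 0.0)
```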
Towards the end of the discussion of traditional controls he mentioned that combining multiple inputs simultaneously — turning while strafing while looking around — tended to make users quite ill, but there was no need to code any restraints to lock out one set of controls while another was in use. Once users found a combination of inputs that made them woozy, the results themselves provided all the discouragement needed, and they naturally stopped doing it of their own accord. Remember that bit.
However, despite their best efforts — and the fact that 77% of their participants preferred the traditional controls whether it got them sick or not — 40% of the participants *did* get sick while using traditional controls, which is sort of a lot. We’ve child-proofed every sharp corner we could, and we’ve seen that even if we do miss something, users will simply avoid doing it once they realize it hurts. So what went wrong??
I think it ultimately comes back to what I was saying about giving users effective means to communicate their intent. We know that users will naturally only perform movements they find comfortable, so it seems reasonable to assume the users who found the experience to be uncomfortable simply had more difficulty communicating their exact needs. The problem doesn’t come from the user’s inability to *recognize* comfortable movement, but rather from their inability to *achieve* it. If it were possible for them to do what they wanted, then they would do so, and never make themselves sick in the process. Well, never more than once or twice, I guess. Experiential learning, after all.
So what’s causing the problem? We’ve given the user the ability to set their own speed, and we’ve even gone to the trouble of locking away speeds which are *certain* to be uncomfortable. So if the user is supposed to “naturally” choose a comfortable travel speed for themselves, why aren’t they doing so?? Well, perhaps they simply *can’t*.
Let’s take a look at the controls nDreams provided to the user. We’ve got forward movement on the left analog stick, capped at a leisurely 1.5m/s. That’s straightforward enough; “full throttle” is a comfortable walking pace, and if I wanna move half that speed, I just give it a half-tilt instead. Okay, I can get the hang of this eventually. Yeah, this isn’t so bad; I’m getting pretty good at tilting the stick “directly” to the appropriate throttle setting, and I’m starting to not completely suck at holding it steady. Usually I just romp on the gas though, because it's not that fast, and it gives consistent results. Okay, now I need to sidestep to the switch, but it's really close, so I’m gonna go ahead and do that at half speed. Urf, that didn’t feel good at all.
Why not? Well, for one thing, the developer helpfully capped strafing at half the forward movement rate, so instead of my half-tilt resulting in 0.75m/s movement as I anticipated, it resulted in 0.375m/s movement instead. Obviously, such a low movement rate isn’t “too fast.” That isn’t the problem. The problem is the output was simply *wrong*.
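In code terms, the problem is just two axes quietly using different scales. With the hypothetical caps from the earlier sketch:

```python
MAX_FORWARD, MAX_STRAFE = 1.5, 0.75  # same hypothetical caps as before

def speed_for_tilt(tilt, axis_cap):
    return tilt * axis_cap

# The user learned "half tilt = 0.75m/s" on the forward axis...
print(speed_for_tilt(0.5, MAX_FORWARD))  # 0.75 -- matches expectation
# ...but the identical gesture on the strafe axis gives half that:
print(speed_for_tilt(0.5, MAX_STRAFE))   # 0.375 -- not what they asked for
```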
The user made their best effort to tell us exactly what they wanted, and we let them down. But the users adapt, right? So when you’re strafing, you need to strafe a little harder to get the desired result. No problem. … for most, but some users *will* have a hard time getting a handle on this additional abstraction we’ve introduced to them. Some never will. Collectively, those users who — at various times and in various ways — had trouble producing the exact results they wanted were the 40% who came away feeling a little queasy *despite* their best efforts. How do we “know” this? Because if they were *able* to stop making themselves sick, they woulda.
And nDreams’ weird-but-well-intentioned, lopsided analog stick wasn’t the real problem here. Sure, if both axes scaled to 1.5m/s, that might’ve made it a little easier for some users to get the hang of, but the real problem is that the analog stick itself is an abstraction. It’s effectively just a slider with a very short throw that you have to hold in place with a thumb that’s actually sorta twitchy. Then you need to use it to select a value from 0% to 100%, without initially having any idea what the scope of the slider even is! Does it top out at 1.5m/s like The Assembly, or does it top out at 7m/s like CoD? Or does it top out at a healthy 15m/s, but with a handy acceleration curve to allow finer control over low-speed movement?
That’s an *awful* lot of abstraction for Granny to get her mind around, and even those of us who are quite experienced with such abstraction are going to be making ourselves quite ill while we get the hang of how *this* game maps user velocity. We gained a lot of ground with minor adjustments like glossing over the acceleration phase, and it seems users can find the rest of the way on their own if we simply give them a better way to choose their travel speed than experimenting with a tiny, fiddly lever. Ideally, a method of describing speed that doesn’t change with every game you play, because it’s starting to sound like predictability may be our final hurdle here.
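To show how different those hidden mappings can get, here's the hypothetical 15m/s scheme from above, reading "acceleration curve" as a nonlinear response curve; the squared response and the numbers are assumptions for illustration:

```python
TOP_SPEED = 15.0  # m/s, the hypothetical "healthy" cap from above

def curved_response(tilt, exponent=2.0):
    """Nonlinear stick response: squaring the tilt keeps low speeds
    fine-grained while full tilt still reaches TOP_SPEED."""
    sign = 1.0 if tilt >= 0 else -1.0
    return sign * (abs(tilt) ** exponent) * TOP_SPEED

# The same half-tilt means 0.75m/s in The Assembly, ~3.5m/s in CoD,
# and 3.75m/s here -- the user can't know which without experimenting.
print(curved_response(0.5))  # 3.75
```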
So I think that’s where 6DOF tracking comes into play. I can’t think of a more natural way for a human to describe velocity than with a wave of their hand. You’re sitting in your spacious cubicle, in your swivel chair with god-level casters. When you push off from the desk, away from your keyboard to get to the printer, how fast are you going? “However fast I pushed myself, I guess… it was about this fast. *repeats shoving motion* How fast *is* that?”
It’s not something you think about *at all*. You just push however hard it takes to go the speed you want. If you’ve been eating your roids, you can probably push yourself pretty quick. I’ll bet you even push off with a graceful flick of your wrist, giving yourself *just* enough spin that by the time you cross the cubicle, you arrive facing the printer so you can catch yourself cleanly on the opposite desk, instead of clumsily crashing backwards into it. “But the acceleration!” It will actually be imparted by a form of ratcheting and should be over in a jiffy regardless, so it should be perfectly comfortable.
Remember that while the controller gives us 6DOF tracking of the hand, we needn’t map all six degrees into the game world in some way, and we needn’t use straight, 1:1 mapping. All we need glean is user *intent*, and we’ve now given them the means to use their whole hand to communicate with us. We simply pay attention to the readings relevant to the action being described by the user, and safely ignore the rest.
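For example, a swipe-to-glide reading might consume only the hand's positional velocity and discard everything else the controller reports; all of the names here are hypothetical:

```python
def glide_intent(prev_pos, curr_pos, dt):
    """Extract locomotion intent from two controller position samples.

    The controller reports full 6DOF, but for a glide we only care
    about positional velocity; orientation is safely ignored, and we
    drop the vertical component to keep the user on the ground plane.
    """
    vx = (curr_pos[0] - prev_pos[0]) / dt
    vz = (curr_pos[2] - prev_pos[2]) / dt
    return (vx, 0.0, vz)  # m/s in the ground plane

# A 2cm pull during one 11ms frame reads as ~1.8m/s of intent:
print(glide_intent((0.0, 1.2, 0.3), (0.0, 1.2, 0.28), 1 / 90))
```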
Let’s say you’re teaching Granny how to glide, and explain that she starts moving by simply reaching out and giving a little tug on the wand to pull herself forward. She promptly reaches forward and YANKS the wand up and out, like she’s trying to start a busted lawnmower. Ouch. That’ll be Granny’s last visit to VR, amirite?
No, because all we need to determine from her motion is her intent. For example, we can safely assume that rather than send herself careening towards the first circle of Hell, her goal was to move closer to the pretty flowers she’s looking at. Plus, we haven’t even modeled Hell, so we’re constraining her movement to the ground regardless of her actual intent.
Also, we’ve enabled Granny Mode, which means that rather than try to pull any directional information at all from the wand, we’re just gonna send her moving in whichever direction her head was pointed. That means all we need from the wand is the velocity data.
Problem is, she gave it a pretty good yank, and again, it probably wasn’t her intention to go rocketing towards the flowers at 10m/s, but that’s precisely what she asked for… but again, Granny Mode saves the day here. No matter how high she “tries” to set her velocity, we’re going to *cap* it at 5m/s, which is about the speed of a leisurely bike ride. That’s still probably a lot faster than she’s used to moving around indoors, but since it’s in the direction she’s looking it doesn’t really qualify as ludicrous speed, and it will be over in a moment as she bumps to an abrupt but comfortable stop against the table. “Whoa! *thud* Ha! Well, I guess I’m here.”
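Granny Mode, sketched as code: the 5m/s cap and the head-sets-direction rule are from above; the function and the vector handling are my assumptions.

```python
GRANNY_SPEED_CAP = 5.0  # m/s, roughly a leisurely bike ride

def granny_mode_velocity(wand_speed, head_gaze):
    """Ignore the wand's direction entirely: cap the tug speed and
    send Granny wherever her head is pointed, along the ground."""
    gx, gy, gz = head_gaze
    norm = (gx * gx + gz * gz) ** 0.5 or 1.0  # flatten gaze to the ground plane
    speed = min(abs(wand_speed), GRANNY_SPEED_CAP)
    return (speed * gx / norm, 0.0, speed * gz / norm)

# Her 10m/s lawnmower yank, while looking at the flowers along +z:
print(granny_mode_velocity(10.0, (0.0, -0.3, 1.0)))  # -> (0.0, 0.0, 5.0)
```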
Granny just learned all kinds of good stuff with a single wave of her wand! First, she learned the table is definitely solid, but it still doesn’t hurt even if you dash straight into it. That’s reassuring because even if she gets completely out of control — like she sorta did just then… — she can’t actually injure herself. That in turn encourages experimentation, because now she knows that no matter how bad she messes it up, all she needs to do is close her eyes and listen for the thud, and then it’ll be safe to look around and get her bearings again. Sure enough, she’s standing right in front of the table she was just speeding towards, just as she thought she’d be. Ahhhhh, nothing like expected results.
Speaking of experiential learning, let’s watch Granny attempt her *second* glide, towards the begonias she just spotted down the hall. Thinking back to her recent toboggan run towards the roses, this time she decides to plan a little better. “Okay, maybe if I give myself just a *little* pull this time… Wheee! *thud*” Before you know it, she’ll be spiking herself to a stop in the *middle* of the room. Go, Granny, go.
That's my guess anyway, but I have no lab, nor anyone to do my bidding.
Well, if serversurfer is right, we'd all just be playing EVE: Valkyrie anyway. RIP, roomscale!
If I'm right, and comfortable, abstract locomotion is a solvable problem, then we'll have lots of gameplay options available to us in addition to piloting, pacing, and skipping locomotion entirely. ;P (Yes, teleportation, I'm looking at you.)
Also, the human body can get used to stuff like this. I mean, we got used to moving in cars.
This too. I think once users get used to imparting motion with a wave of their hand, we can start removing training wheels like vector restrictions and speed limits. (We can still cap stuff for balancing purposes, obviously.)
It's possible future headsets might find a way to stimulate the vestibular system as well.
If we got really good at it, it'd finally be safe to include sustained acceleration in our simulations.
Basically, this tech is going to change A LOT over the next 10 years. I don't think VR will be mainstream for at least another 5 years, and it won't be the main method of gaming for 15-20 years.
Not just the tech, but how we use it. That's why I was surprised some had already given up on trying new uses to see what's good.
Yeah, there will always be some hybrid solution that is applied on a case by case basis. Eventually devs will converge on some standardized options, such as:
Roomscale + teleportation
Roomscale + analog stick movement (not turning) + some technique that alleviates motion sickness (like tunnel vision)
Roomscale + walkabout
Roomscale + jogging/walking in place
Roomscale + "ratcheting"
It's just another way to play games and can quite easily be a superset for all games, since it inherently supports standing and seated arrangements anyway. I never really understood the skepticism around it, which mostly seems to stem from "oh, I don't have space for it, so it will never catch on" knee-jerk reactions. The way I see it, roomscale is an all-encompassing tracking technology that EVERYONE should implement. Whether it's standing, seated, or moving is a genre- and game-specific implementation of roomscale. All other forms of tracking are destined for obsolescence.
I agree with this too; better tracking is better. Though I'd update your list of "standard" control schemes to say, "[your trackable volume] + …" instead.
It also introduces other challenges though. In particular, even if you solve the tracking issue perfectly in hardware, you'd need to render at 200 FPS to make it work, as far as I know. Probably almost every non-trivial VR game would be CPU-bound on many systems when trying that.
SMI demonstrated foveated rendering on a DK2 in January, actually.
http://www.roadtovr.com/hands-on-smi-proves-that-foveated-rendering-is-here-and-it-really-works/
One optimization we could make there is to skip rendering some frames when the eye isn't moving. If the pipeline for a frame-on-demand system was quick enough, maybe we could get away with still only displaying 90 FPS overall, with additional frames on demand for saccades. That would cut down on the power draw quite a bit.
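Something like this decision logic, maybe; the threshold is a made-up figure, and real saccade detection would be a lot hairier than one delta per sample:

```python
SACCADE_THRESHOLD = 2.0  # degrees of eye travel per sample; made-up figure

def needs_fresh_frame(eye_delta_deg, scene_changed):
    """Frame-on-demand: reuse the last frame while the eye and scene
    hold still; render immediately when a saccade (or motion) starts."""
    return scene_changed or eye_delta_deg > SACCADE_THRESHOLD

# Simulated samples of (eye movement, scene changed?):
for delta, changed in [(0.1, False), (0.2, False), (8.0, False), (0.1, True)]:
    print("render" if needs_fresh_frame(delta, changed) else "hold last frame")
```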
Could they render the background at a lower frame rate and just the fovea at full speed, or is your peripheral vision sensitive enough to notice the difference?
Are you never supposed to "look with your eyes"? Oftentimes only the middle of the screen was in focus or had good resolution. If I tried to look around with my eyes, shit got blurry towards the periphery. I assumed that was just how it worked, which would explain why foveated rendering is so sought after right now.
No, that's by design, believe it or not. The idea is you sacrifice resolution near the edges to gain it back at the center, where you spend "most" of your time looking. I'm with you though; it seems like a poor decision, since we're not owls. The subtle yet significant difference between simply looking at something and actually pointing your face directly at it was made painfully clear to me when I was told to "aim by looking" in Virtuality. Having the edges of my vision always be blurry sounds even more annoying and distracting, but I suppose it'll serve as a constant reminder to hold my fire while I wait for my face to catch up. #brightside
Pretty sure we have the scalloped lenses to thank for this effect. ;p
The resolution is pretty disappointing coming from traditional monitors and TVs.
It'd be worse without the scalloped lenses and blurry edges! lol That said, having tried only the fresnel-free Gear, I still think I'd rather have something uniform.
Oh, a lower FOV will give better apparent resolution with the same display too. I'd probably lean towards FOV myself though.
VR feels really natural, so when things don't work like you think they should it pulls you out of the experience. For example, when playing Budget Cuts I tried to stab a robot with the knife and it did nothing. Also, when I ran out of knives I tried throwing a dead robot's body, and it also did nothing. Furthermore, punching a robot does nothing. It's these "do nothings" that pull you out of it. The game is still fun within its own rules though.
This is why I've been saying predictable results are always best, and I think it's especially important with locomotion.
serversurfer's ratcheting: the game:
http://store.steampowered.com/app/462480/?snr=1_7_7_230_150_1
Interesting way to implement ratcheting without breaking immersion. I wonder if anyone else will get ratcheting going.
Looks like Grow Home set in VR Candyland. I thought Grow Home was pretty cool, and this would make hand placement a lot less fidgety. Should be even more intense with the transition to VR.
And Sharkey says: All of nature talks to me. If I could just figure out what it was trying to tell me. Listen! Trees are swinging in the breeze. They're talking to me. Insects are rubbing their legs together. They're all talking. They're talking to me. And short animals- They're bucking up on their hind legs. Talking. Talking to me. Hey! Look out! Bugs are crawling up my legs! You know? I'd rather see this on TV. Tones it down.
~Laurie Anderson
Though if swipe-to-glide or something similar catches on, I imagine this type of ratcheting will mostly be used for climbing and to say, "No, I want to stand *here*."