
The High-end VR Discussion Thread (HTC Vive, Oculus Rift, Playstation VR)

Onemic

Member
I disagree



As far as treadmills go, the omni is extremely cheap.

You have an omni, right? How does it feel to walk around? The review I read (I feel like it was posted here a couple of pages ago) said that in order for the treadmill to correctly track your feet you have to do a sort of sliding motion as you walk, making it neither intuitive nor natural.

When I say price I don't mean in relation to other treadmills, I mean in relation to VR. At the end of the day consumers won't see this as a cheap treadmill, because they're not looking for a treadmill. They'll see it as an extremely expensive VR peripheral that costs as much as or more than the VR setup they already own. This isn't factoring in the space requirements for the product either, which would be another knock against it being viable in the long term.
 

Croatoan

They/Them A-10 Warthog
Games made with room scale + teleportation in mind feel better than an omnidirectional treadmill imo

That is...odd coming from you.

I assume the problem with current treadmills like the omni is that they limit your movement. For example, you cannot crouch, jump, or even really bend much while harnessed into the omni. It also forces you into a default stance, right?

You can't tell me a future CV1-ready Infinadeck isn't going to blow other forms of locomotion out of the water.
https://youtu.be/pJS7LzJfQA0

Maybe something that uses rotating balls instead of a motor.

Kinda like a smaller version of the omnideck
https://youtu.be/wPffziOrE6Y

Current teleportation mechanics are garbage for dynamic combat. They force you to take your eyes off a target and look where you want to teleport. This makes games like Dark Souls, any FPS, Skyrim, Fallout, or anything where combat requires a lot of instant movement either clunky or impossible.

Teleportation works great for games that are designed around it, like Budget Cuts. It is never going to capture the masses though. They want instant, fully directional movement (what every traditional game has). Either VR figures out a way to make it work or it will never be anything more than a niche peripheral.

COD levels of speed are out of the question, but the ability to WASD is a must for capturing that audience, whether by moving on a treadmill or by using a joystick.

I'm just being realistic here. If VR wants to succeed it has to eventually offer traditional gamers a better way to play the same games (with a few changes to limit sickness). You are not going to get these people to put down their CODs for Budget Cuts (no matter how great that game is). That is the reality of the games industry.
 

fred

Member
Meh. Teleport mechanics and comfort modes are pants. Give me an analog stick and I'll be happy. I'm immune to simulation sickness thankfully lol
 
Meh. Teleport mechanics and comfort modes are pants. Give me an analog stick and I'll be happy. I'm immune to simulation sickness thankfully lol

Be careful what you ask for.

I doubt many (any) non-cockpit games will be designed to use an analog stick for movement. It just doesn't make financial sense to cut down your potential buying audience.
 

Croatoan

They/Them A-10 Warthog
Be careful what you ask for.

I doubt many (any) non-cockpit games will be designed to use an analog stick for movement. It just doesn't make financial sense to cut down your potential buying audience.
It takes very little dev time to give people the option to use WASD, or WASD teleportation.

I am no genius programmer and I can set up both, with an option to switch between them, in about 20 minutes.

Now, if your game is BASED around point-and-shoot teleportation you wouldn't want to do this.
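
Here's roughly what that option looks like, as a minimal sketch in Python with placeholder names (nothing engine-specific is implied; Player, aim, and the input values are all stand-ins):

    from dataclasses import dataclass
    from enum import Enum, auto

    class Mode(Enum):
        SMOOTH = auto()    # continuous WASD/stick movement
        TELEPORT = auto()  # point-and-shoot teleportation

    @dataclass
    class Player:
        x: float = 0.0
        z: float = 0.0

    class Locomotion:
        def __init__(self, speed: float = 2.0) -> None:
            self.mode = Mode.SMOOTH
            self.speed = speed                 # metres per second

        def toggle(self) -> None:              # the "option to switch"
            self.mode = Mode.TELEPORT if self.mode is Mode.SMOOTH else Mode.SMOOTH

        def update(self, p: Player, dx: float, dz: float, aim, dt: float) -> None:
            if self.mode is Mode.SMOOTH:
                p.x += dx * self.speed * dt    # move along the stick direction
                p.z += dz * self.speed * dt
            elif aim is not None:              # trigger pulled at a valid (x, z) spot
                p.x, p.z = aim                 # snap straight to the target

Bind toggle() to a settings menu entry and you've covered both camps.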
 

beef3483

Member
Does anybody know if it's possible to adjust to analog locomotion over time? Maybe people just need to play for a while and their body will adjust?
 

Croatoan

They/Them A-10 Warthog
Does anybody know if it's possible to adjust to analog locomotion over time? Maybe people just need to play for a while and their body will adjust?
You may never even get sick. From what I understand, simulation sickness is rare if you remove artificial turning from the equation.

Most people don't get sick from WASD directional movement. You just need to turn the natural way.
 
Something I haven't seen compared is the process of starting up the Vive and Rift before you put on a headset. With the Vive, you have to manually start up Steam and then SteamVR before putting on your headset. There's also plugging in the lighthouses if you've been turning them off after use, which I have, mainly because they seem to be interfering with other Bluetooth devices in my house. Occasionally, once they're turned on, there's a bit of back and forth switching channels to get them recognised.
It's much slicker with the Rift - there seems to be a little sensor in the unit and as soon as you put it on, it starts Oculus Home and it's ready to use. It's great, zero hassle. I could use it without even having to turn my monitor on, let alone touch a mouse or keyboard.
It seems like a small difference, but it does make the Rift feel like something I'll happily keep plugged in to quickly jump into, whereas with the Vive it's more of a conscious effort, and a week in, the novelty of most of what I've got is wearing off, making it harder to justify the hassle. If all we get for now is small, experimental experiences, then I'm going to need a constant stream of them to warrant keeping it.
 

jaypah

Member
Something I haven't seen compared is the process of starting up the Vive and Rift before you put on a headset. With the Vive, you have to manually start up Steam and then SteamVR before putting on your headset. There's also plugging in the lighthouses if you've been turning them off after use, which I have, mainly because they seem to be interfering with other Bluetooth devices in my house. Occasionally, once they're turned on, there's a bit of back and forth switching channels to get them recognised.
It's much slicker with the Rift - there seems to be a little sensor in the unit and as soon as you put it on, it starts Oculus Home and it's ready to use. It's great, zero hassle. I could use it without even having to turn my monitor on, let alone touch a mouse or keyboard.
It seems like a small difference, but it does make the Rift feel like something I'll happily keep plugged in to quickly jump into, whereas with the Vive it's more of a conscious effort, and a week in, the novelty of most of what I've got is wearing off, making it harder to justify the hassle. If all we get for now is small, experimental experiences, then I'm going to need a constant stream of them to warrant keeping it.

I guess wait a month and see what the game output is like. If it's not up to your liking then sell it; seems like just the Rift would be fine for you.
 

Zalusithix

Member
It's much slicker with the Rift - there seems to be a little sensor in the unit and as soon as you put it on, it starts Oculus Home and it's ready to use.

The Vive also has a proximity sensor within the headset to detect when it's being worn. How they're actually using it is a whole different matter.
 
There's an option in SteamVR to have the lighthouses turn on and off when you start and stop using VR.

Tried that. It's really flaky, wasn't waking them when I started VR.

The Vive also has a proximity sensor within the headset to detect when it's being worn. How they're actually using it is a whole different matter.

Yeah, I just spotted that. They really should use it for that, makes a big difference.
 

Ionic

Member
Tried that. It's really flaky, wasn't waking them when I started VR.

When I first got mine I was doing this and it took a while for them to wake up, so now I just leave them on. Supposedly, however, a recent update to SteamVR addressed this issue. Perhaps give it a shot again and report back.
 

KingSnake

The Birthday Skeleton
If all we get for now is small, experimental experiences, then I'm going to need a constant stream of them to warrant keeping it.

But there are games popping up almost every day on Steam. There are already quite a number of good games announced for later this year. Plus, some of the games, like SPT, are in early access and will get more content over time.
 

jmga

Member
Something I haven't seen compared is the process of starting up the Vive and Rift before you put on a headset. With the Vive, you have to manually start up Steam and then SteamVR before putting on your headset. There's also plugging in the lighthouses if you've been turning them off after use, which I have, mainly because they seem to be interfering with other Bluetooth devices in my house. Occasionally, once they're turned on, there's a bit of back and forth switching channels to get them recognised.
It's much slicker with the Rift - there seems to be a little sensor in the unit and as soon as you put it on, it starts Oculus Home and it's ready to use. It's great, zero hassle. I could use it without even having to turn my monitor on, let alone touch a mouse or keyboard.
It seems like a small difference, but it does make the Rift feel like something I'll happily keep plugged in to quickly jump into, whereas with the Vive it's more of a conscious effort, and a week in, the novelty of most of what I've got is wearing off, making it harder to justify the hassle. If all we get for now is small, experimental experiences, then I'm going to need a constant stream of them to warrant keeping it.
Can't you just launch SteamVR from the controllers like you do with Big Picture and the Steam Controller?
 
I still can't understand why the Steam controller doesn't work with SteamVR. I've seen some say it starts working when you launch a SC-configured game, but I can't get that to work either.
 
Custom abstractions (like walking by moving your hands or whatever) are great for games based around them. But if your game is based around a dude (or dudette) walking around an open world then you need to be...uh...walking.

Think of a VR Skyrim, for example. You need to be able to freely walk around a giant world in a way that is immersive (doesn't break immersion, isn't noticeable), immediate (happens instantly), dynamic (can move in any direction), smooth (no pauses, stutters, or breaks in movement), and additive (the upper body can do things (aim a gun, swing a sword, etc.) separately from the lower body's locomotion).

Using those five criteria there are only two locomotion methods available for this game type...

Absolute Best: Treadmill/redirected walking

Gets the job done: traditional joystick + room scale


Using the arm-pulling method doesn't work, as it breaks the additive criterion.

Using the rotating-world (ratchet?) method doesn't work because you have to pause the game to rotate the world with the player, which breaks both the immersive and smooth criteria.


Now if your game was based around being a wolf or a four-legged creature you could use the arm-pulling method.

Or if your game was based around being Spider-Man you could shoot webs and swing.

The only games I can see using the ratchet (rotating world) method are slow adventure games with no combat. And even then they will suffer from the NASCAR effect.

I guess what I am saying is that locomotion can be anything, but it HAS to make sense with your game. Shoehorning something in that isn't natural won't work.
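
For what it's worth, here's a rough sketch of what I mean by the additive criterion, in hypothetical Python (placeholder types, no real engine API): locomotion only ever moves the rig root, and the tracked head and hands ride on top of it.

    from dataclasses import dataclass

    @dataclass
    class Vec3:
        x: float = 0.0
        y: float = 0.0
        z: float = 0.0

        def __add__(self, o: "Vec3") -> "Vec3":
            return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)

        def scaled(self, s: float) -> "Vec3":
            return Vec3(self.x * s, self.y * s, self.z * s)

    @dataclass
    class Rig:
        root: Vec3        # driven by locomotion (treadmill, stick, ...)
        head_local: Vec3  # driven by HMD tracking, relative to the root
        hand_local: Vec3  # driven by controller tracking, relative to the root

    def locomote(rig: Rig, move_dir: Vec3, speed: float, dt: float) -> None:
        # Only the root moves; aiming a gun or swinging a sword
        # never has to fight with walking.
        rig.root = rig.root + move_dir.scaled(speed * dt)

    def world_head(rig: Rig) -> Vec3:
        return rig.root + rig.head_local

    def world_hand(rig: Rig) -> Vec3:
        return rig.root + rig.hand_local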

I am not a fan of the treadmill idea, actually. A few reasons stand out: first, the restriction of movement. Second, the size of this thing; where are you going to put it? Next, the adoption rate needed for games to actually require it. I can't see AAA games requiring something like this. Most people wouldn't bother.

I am not saying that it's not a neat idea, but I don't find it really practical.
 

wonderpug

Neo Member
Back in the 90s I remember moving around playing Dactyl Nightmare and not having even a hint of motion sickness. Granted, I only played for maybe 10 minutes.

Fuzzy memories, but didn't that rig have you lean in the direction you wanted to move and then press a button on the handheld thingie?
 
What's the issue? Oculus funded the studio to make them an exclusive title. Almost sounds like port whining/platform warrior bullshit to me. Can you elaborate?

Yeah, I think there is a big difference between a developer creating a game and then, at some point, a platform holder paying to restrict it to their platform, versus a platform holder bankrolling the project to begin with.

It's a shame that these practical issues get in the way of VR just being a platform by itself.
 
Now that I have that image in my head...
… of a lungfish with a little tear in his eye?

But I was referring to this:
"Imagine removing your head…"
Well, like I said, Marks was talking about it, but I can't find the talk, so you're welcome to dismiss it if you'd like. Or perhaps I just confused it with the ratcheting comments from the talk I linked. Regardless, I wasn't proposing it as a locomotion solution, though I do think it will bring ADS to a new level, assuming it's tolerable.

It sounds like you were when you said this:
"the impressive girth of (Touch's) tracking ring is a direct result of the comparatively poor visibility of IR markers."
Sorry, I was saying the large diameter of the ring was the compensation, not that no compensation was made.

Again though, you're speaking as if you know for a fact that Oculus haven't compensated for this by providing ample power to the IR diodes and/or a higher-sensitivity IR camera.
Actually, I've mentioned exactly that as one of the compensations which would be dictated by their decision to use low-visibility markers.

You do understand I'm not saying Rift is "broken" or whatever, right? =/

Multiple laser bars sweep across the room in different directions, and each sensor detects the laser pulses at slightly different times. I don't see any similarities to GPS.
They both use time differentials calculated from receiving signals generated by a beacon to determine position and heading. Obviously the implementation won't be identical since they use different mediums for signal propagation, but the principle is the same. The advantage of radio signaling is somewhat apparent, so I was wondering if anyone knew why lasers were chosen instead.

You're not a developer yourself though, so a lot of the "if this then that" stuff you pepper your posts with will continue to be disputed.
Sorry, but I guess I don't understand. How does your job affect your ability to draw conclusions? What if you get fired?

In any case, I came here looking to have my conclusions challenged. Based on all of the information I've collected, my conclusions seem reasonable enough, but I also realize the information I have collected to date is not comprehensive, so by extension, neither is my understanding of the subject. My goal was to discuss my conclusions to first help determine validity, and then discuss what additional conclusions could be drawn from that determination.

For example, it's been reported that steady, sustained motion is comfortable. This makes sense, because the vestibular system is inertial; if there are no changes in speed or direction, there is nothing for the vestibular system to detect, so there is no disconnect, hence no source of discomfort.

It seems reasonable enough to suggest that a technique which appears to offer comfortable traversal for an arbitrary duration with arbitrary speed and heading is at least worth some investigation as a basis for an abstracted locomotion solution. There are implementation details to work out, but thus far I haven't seen any reason to rule it out as having potential.

The most obvious roadblock would be the need to transition from stationary to moving and back again. That requires a change in velocity, and that's precisely what the vestibular system detects, so now we have a disconnect, which is typically a good source of discomfort.

However, we do have workarounds available to us. The most obvious solution is to skip the acceleration phase entirely. With no acceleration detected by the eyes, there's nothing for the ears to confirm or deny, so no opportunity for uncomfortable disconnect. While this causes the user no physical discomfort, the abrupt change in velocity can still be a bit jarring, because while we never suffered through conflicting sensory input — an excellent source of discomfort — years of experiential learning have taught us that something else probably should've happened in between then and now, and we somehow slept through the entire process.
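
To make that concrete, here's a toy sketch of "skip the acceleration phase" next to the conventional ramp it replaces (hypothetical Python; the Rig type and the numbers are made up):

    from dataclasses import dataclass

    @dataclass
    class Rig:
        pos: float = 0.0   # metres along the travel direction
        vel: float = 0.0   # metres per second

    def set_velocity_instant(rig: Rig, target: float) -> None:
        # 0 -> cruise in a single frame; the eyes never see an
        # acceleration for the inner ear to dispute.
        rig.vel = target

    def set_velocity_ramped(rig: Rig, target: float, accel: float, dt: float) -> None:
        # The conventional ramp: a visible acceleration the vestibular
        # system can't confirm -- the classic comfort disconnect.
        step = accel * dt
        if abs(target - rig.vel) <= step:
            rig.vel = target
        else:
            rig.vel += step if target > rig.vel else -step

    def tick(rig: Rig, dt: float) -> None:
        rig.pos += rig.vel * dt   # steady cruise itself is comfortable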

So we've delivered a bit of an immersion slap to the user, because despite avoiding the cardinal sin of pitting the senses against one another, we've also failed on our promise of delivering expected results, which is something else I yammer about a lot. A simulation is convincing and believable when it delivers the expected results, and we just gave them something different.

Well, the good news is that our firm belief that acceleration is a thing comes entirely from the experiential learning I mentioned. Thanks to the excellent teamwork of my eyes and ears over the years, I've become quite adept at recognizing and gauging acceleration when I experience it, and experience has additionally taught me that it's what happens whenever you change speed.

But here I am in a place where acceleration doesn't seem to be a fundamental aspect of changing velocity. We're definitely moving now despite the fact that nothing resembling acceleration has been detected by the eyes; a somewhat astonishing fact actually confirmed by the ears! Huh. Didn't see that coming, yet here we are, seemingly having skipped acceleration entirely. That's certainly not how it worked where I grew up, but whatever, I guess; apparently that's how it works here. No sense arguing with reality itself.

And this is exactly the point where experiential learning stops working against us and starts working in our favor instead, and a strong general sense of presence actually helps facilitate that transition. While acceleration was definitely a thing in the place we were before, it’s clearly not a thing here, where we are now, and this has been verified not just by two separate senses, but just as importantly, by repeatability. Repeatable results are fundamental to experiential learning, and experiential learning is what tells us what "normal" is, and in a sense, “reality” is nothing more than everything our experiential learning has taught us to expect as normal.

And that’s the key here. Because learning is an ongoing process, as long as we have a good idea of what to expect and when — repeatability — we can adjust our sense of normality quite quickly. Just as people become accustomed to working underwater and in zero gravity, people will become accustomed to the quirks of the new environments we’re introducing them to. Ultimately, it doesn’t matter if things are a little different here than they were there, because they’re predictable, so before long, we’ll all learn exactly what to expect.

Hey, there’s that expectations word again. Earlier we agreed that not meeting expectations is bad, but now we understand that expectations can be quite fluid, and as long as we have consistency we can start making accurate predictions, and in turn realign our expectations to fit this new reality we've found ourselves in. Reality is what it is, and whatever the rules, we will adapt, improvise, and overcome. We sorta can't help it; it's just how we're wired up. When you get the result you expected to get, that's a normal result by definition.

#dealwithit, amirite? Well, yes and no. Navigating an abstracted environment requires abstracted locomotion, so at the end of the day — assuming we aren't limiting ourselves to non-abstract environments — we are gonna need to deal with it, whether "it" turns out to be teleportation, gliding, a combination thereof, or shit nobody's even thought of yet.

But adaptability is our thing. We're just as comfortable with abstraction as we are anything else; as long as we're able to predict the results of our actions, then those results will indeed meet our expectations. As gamers, we should be quite used to "figuring out what the rules are here," because it's the first thing we do when we get a new game. Even if it's a franchise we're intimately familiar with, we need to immediately find the new features, so we can start using them to our advantage. We're not merely able to adapt, but eager to do so. The central conceit of all gaming is, "Ah, but what if this was reality? Then what would you do?" VR just makes it harder for players to remember this isn't reality.

Insomniac has a game coming for Touch where you chuck fireballs at each other by making a fist to form the fireball and then you hurl it at your opponent like a baseball. Obviously, that's not how you really throw a fireball at someone; it's a total abstraction. I mean, they completely ignore the incantation and who aims with their hands when you could be using them to transcribe scrolls instead?? So every time the player uses their primary ability, they're totally pulled out of the experience because this is just some lame abstraction that's nothing like the real thing, right?

No, because that's how it works here, where we are now. Rather than being a constant reminder of how fake this all is, it's instead an immensely satisfying experience because it's perfectly predictable; fireballs always appear right when you call them, and always go right where you throw them, just as they should. It's also satisfying because it provides the user with consistency. This is how all spells are cast in this world — more or less — and the player is never left wondering when they'll need to fall back to a completely separate solution due to circumstances they're often unable to control or even predict. If the user is forced to use the fallback system with any regularity, they will quickly adapt and begin using it as their primary solution, because it provides that oh so satisfying reliability and predictability. If the fallback system also proves to be an unsatisfactory conduit for their intent — too unreliable, cumbersome, ineffective, etc. — they will simply move on to experiences which do provide such a conduit.

So has nothing changed? Learn the rules and play the game? Well, that's really the essence of gaming itself. We dictate the obstacles to success, and give the players the tools required to overcome them. When the players understand what they can expect from those tools — however incredible those expectations may seem to those of us stuck here in Plain Old Reality — they will do "the normal thing," and start using whatever tools are available to them to help achieve their goals.

So this world doesn't have acceleration. Who gives a shit? Are you a Nuverse physicist or something? Obviously not, or you'd have no reason to expect acceleration to be a thing in the first place. Dragons also violate the laws of physics as we currently understand them, but I don't think that stops anyone here from wanting to pal around with one, especially after you hear he can teach you how to throw BALLS OF FUCKING FIRE at anyone who crosses you. Having a 20m dragon literally towering over you is going to be a pretty convincing argument that it's time to update your physics model. Dat experiential learning.

And that's why presence is such a powerful new tool in our arsenal. Presence is when the information being sent to your eyes and ears is so compelling that it completely overwhelms any sense of disbelief you may have. Sure, experiential learning ensures your first response will always be, "Whoa, this can't be happening," but that won't change the fact that it is happening, and you can be sure this experience will be logged right alongside the rest of them. It really doesn't make any difference what Professor Stabelboson said, because there is definitely a dragon looking you right in the eyes! Maybe the old man was full of shit or maybe something changed, but regardless, one thing you can be certain of is that it's time to learn some spells, because the dragon just said so.

And the greatest thing about experiential learning is that — assuming he always behaves in a way consistent with this world — every new encounter with the dragon just makes you more convinced that he's real. If you can happily accept all of the "clear" violations of physics which occur every time you take a ride on your new friend, do you really think you'll get hung up on something as mundane as a lack of acceleration? As long as we don't let its absence make you sick, your brain will happily accept such phenomena as being just as realistic as the dragons whose existence they allow. If it does make you sick, then your brain will find it to be equally realistic, and will be decidedly less happy about it.

So is there anything at all we can do to ease users' transitions into this new reality they've found themselves in? As a matter of fact, there is. I mentioned that the eyes can experience short bursts of acceleration without causing any discomfort. This helps to satisfy the part of your brain saying, "Hey, wasn't more supposed to happen there?" but the stimulation ends before the vestibular system has the chance to raise any alarms. Will it be precisely the same way we experienced acceleration back on Earth? No, but we'll get used to it here, just like we got used to it there, and the simulated acceleration will help to smooth and speed that adaptation for us. Remember too that a native Nuversian would find simulated acceleration even more disquieting than we find it comforting, because they've never experienced such a thing at all. Hopefully they're just as adaptable as we are, because now that we adaptable humans are fortunate enough to be able to travel freely from one reality to the next, we're starting to become perfectly comfortable with the fact that sometimes acceleration is a thing, and other times not so much.

So we've established how "believability" is in fact a side effect of presence, and how this helps people adapt to new rulesets even more quickly and easily than they otherwise would. But for a truly enjoyable experience, users also need a clear and effective way of communicating their intent, and this is where video gaming has fallen over for most people; the high level of abstraction we've suffered through in the past meant most users found the controls to be neither clear nor effective. Most people don't have any trouble grokking simple abstractions like "push button -> light goes on" but anything much more complex than that and they're likely to start waving their hands around in a reflexive effort to better communicate their meaning. 6DOF tracking finally gives computers the same ability to look at someone else's hands and say, "Oh, that's what you mean," which humans have enjoyed for so long.

That's some powerful juju. We know that steady-speed cruising doesn't trigger a disconnect, and we understand why that is the case, but what if you're still new to gliding and just find zipping around at those speeds to be unsettling, simply because you haven't had a chance to become acclimatized to this form of travel? Then don't push off so hard. There's no reason for us to set the user's velocity arbitrarily, nor to "empower" them with the job of estimating how many meters per second they'll be able to comfortably travel — taking into account both the content of the environment and the time elapsed since they fed their meatbag — and then calling up an interface which allows them to accurately communicate their current needs. I don't know shit so I can't help you guys with implementation here, but it seems entirely possible to use 6DOF tracking to allow users to set their own velocity in the clearest and most natural way possible. Well, how fast would you like to move? "This fast."
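
As a crude sketch of how "this fast" might be read straight off a tracked controller, assuming some hypothetical set-speed button and position sampling (none of this is a real VR API):

    import math

    def gesture_speed(samples: list, dt: float) -> float:
        """Average hand speed (m/s) over consecutive 6DOF position samples
        captured while a hypothetical set-speed button is held."""
        if len(samples) < 2:
            return 0.0
        total = 0.0
        for a, b in zip(samples, samples[1:]):
            total += math.dist(a, b)      # metres between consecutive samples
        return total / (dt * (len(samples) - 1))

    # Example: hand swept 0.9 m over 30 intervals sampled at 90 Hz -> 2.7 m/s.
    path = [(0.0, 1.2, 0.03 * i) for i in range(31)]
    print(round(gesture_speed(path, 1.0 / 90.0), 2))   # -> 2.7

The user sweeps a hand at whatever speed feels comfortable, and the rig cruises at that speed until told otherwise.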

Even small children instinctively understand precisely how fast "this fast" is, in the same way they understand how big "this big" is. They also understand intuitively when they need to grow their arms physically longer to properly convey how big "this big" really is, and the only thing stopping them from doing so without a second thought is the inability to actually do so. How logical does the ability to grow your arms need to be before you use it reflexively? Again, examine the logic of the child. When they realize their arms aren't long enough to properly describe just how big, they immediately stand on their tiptoes. What's the logic in that? Experiential learning has taught them that standing on your tiptoes does indeed extend your reach, clearly evidenced by the fact that it invariably helps them reach the cookies. Simple, effective, and predictable; the obvious go-to any time you need to grow your arms a bit. Show them a better way to grow their arms, and they will happily incorporate it into their own sense of reality. Show them it's simply not possible, and they'll manage to come to terms with that too.

Anyway, I hope this helps you guys understand that I've not just done my homework but also put a fair bit of thought into this stuff. I am fully aware of the fact that without ready access to a lab, there could be a lot of flaws in my reasoning. Indeed, it was this knowledge that motivated me to ask if anyone could identify any such flaws, so that I might update my model. While I do appreciate all attempts to assist in this endeavor, replies like, "Where do you even work?" and, "STFU and buy a Vive," do surprisingly little to help pinpoint these errors or explain what makes them so. <3
 

Cyriades

Member
Look at this!

Pico Neo

(images: Pico Neo headset and tracking station)


The hardware inside the controller includes the Snapdragon 820 we have already talked about, 4GB of RAM, 32GB of internal storage, WiFi, Bluetooth 4.1, and a microSD card slot for storage expansion supporting up to 128GB. The controller has physical buttons and a touchpad on the back along with an integrated motion sensor. The headset weighs 11 ounces and has a 3.8-inch AMOLED with 1200 x 1080 pixel resolution. Power is from a 5,000 mAh internal battery good for three hours of use.

The headset itself has a pair of 1,200 x 1,080 panels (one for each eye) and 90Hz screen refresh rate, which matches the resolution of the HTC Vive and Oculus Rift.

Inside the headset there's a 3.8 inch AMOLED display with a resolution of 1200 x 1080 pixels per eye and a 102 degree field of view. The pupil distance can be adjusted between 54 and 73 millimeters.

The Pico Neo headset can also double as a headset for a PC gamer. This is where another benefit of putting the hardware inside the controller comes in. If all you want to do is use the Pico Neo with your PC, you can buy just the headset and save yourself about half the cost of buying the headset with the controller.

The Pico Neo will land in late June for around $550 with the controller or $300 without.

http://www.slashgear.com/pico-neo-a...ks-snapdragon-820-in-the-controller-20437078/

http://www.engadget.com/2016/04/20/pico-neo-android-vr-headset/

http://liliputing.com/2016/04/pico-...tem-snapdragon-820-also-works-pc-headset.html
 

Zalusithix

Member
Tracked controllers, but an untracked headset by the looks of it. Honestly it looks like cheap trash IMO. I'd rather have the OSVR hacker kit myself if I wanted a more affordable headset.

Edit: Actually after looking at this image:
pico-neo-11-1200x675.png

It appears that there's another ping-pong ball that clips to the headset to track it if (and only if) you have the tracked controllers, which come with the camera(s?). Putting aside the whole driver/software/computer interface issue, I'm not even positive how the camera tracking is supposed to interface with the device in standalone mode. The controller has the logic, but has to connect to the headset by wire, so the cameras can't connect to it. On the other hand, it appears the tracked controllers come with a giant dock with the cameras, and that the normal controller can be stored in it. So I guess the headset and standard controller connect to that behemoth dock in standalone mode. Also not sure whether the dock represents one fixed binocular-vision camera, or is simply a base station that the cameras connect to, allowing the cameras to be positioned to reduce occlusion.

Bleh, enough analysis of this junk.
 
They both use time differentials calculated from receiving signals generated by a beacon to determine position and heading. Obviously the implementation won't be identical since they use different mediums for signal propagation, but the principle is the same. The advantage of radio signaling is somewhat apparent, so I was wondering if anyone knew why lasers were chosen instead.

GPS doesn't track the same way as the Vive. The Vive only needs to measure the difference in time between a few sensors being hit by a very simple signal (laser on/off), while a GPS receiver receives, decodes, and processes data transferred by radio signals. That is not very practical for millisecond tracking in a consumer device - if it is even possible to do at all.
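
To illustrate the "very simple signal" point, here's my rough understanding in sketch form: the base station flashes a sync pulse, then sweeps a laser line across the room at a known rotation rate, so the sync-to-hit delay at each photodiode directly encodes an angle (Python; the 60 Hz rotor and single sweep are simplifying assumptions):

    import math

    SWEEP_HZ = 60.0                       # assumed rotor speed
    PERIOD = 1.0 / SWEEP_HZ

    def hit_time_to_angle(t_sync: float, t_hit: float) -> float:
        """Convert a sync->laser-hit delay into a bearing angle (radians)."""
        dt = (t_hit - t_sync) % PERIOD    # time since this sweep began
        return 2.0 * math.pi * dt / PERIOD

    # A sensor hit 1/240 s after sync sits a quarter turn into the sweep:
    print(math.degrees(hit_time_to_angle(0.0, 1.0 / 240.0)))   # -> 90.0

A GPS receiver, by contrast, has to decode an entire navigation data stream before it can even begin solving for position.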
 

gmoran

Member
The most obvious roadblock would be the need to transition from stationary to moving and back again. That requires a change in velocity, and that's precisely what the vestibular system detects, so now we have a disconnect, which is typically a good source of discomfort.

However, we do have workarounds available to us. The most obvious solution is to skip the acceleration phase entirely. With no acceleration detected by the eyes, there's nothing for the ears to confirm or deny, so no opportunity for uncomfortable disconnect. While this causes the user no physical discomfort, the abrupt change in velocity can still be a bit jarring, because while we never suffered through conflicting sensory input — an excellent source of discomfort — years of experiential learning have taught us that something else probably should've happened in between then and now, and we somehow slept through the entire process.

So we could have a fast-paced FP game where locomotion is handled by the controller but all acceleration is modified for comfort: say, by tunneling like "Eagle Flight," or massively sped up like Blink in "The Assembly"?

I'm happy that we can either get good software solutions, particularly if the market for VR is reasonably successful, or hardware (the VRKat looks interesting; something like that which could be folded away would be ace—if it actually works as well as they say).
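
Something like the tunneling could presumably be driven straight off the artificial acceleration; a rough Python sketch with made-up vignette numbers:

    def vignette_radius(accel: float, max_accel: float,
                        full: float = 1.0, tunnel: float = 0.45) -> float:
        """Shrink the visible circle while the rig accelerates or turns;
        restore it during steady cruise. Parameters are illustrative only."""
        k = min(abs(accel) / max_accel, 1.0)   # 0 = cruising, 1 = hard accel
        return full - (full - tunnel) * k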
 
I've ordered what is hopefully a suitable wall mount for my Rift sensor and some USB and HDMI extension cables. If they work out well I'll share all the details. I've got to get that sensor up higher, and I've got to get it fixed to really maximize my tracking area. Plus right now it's sat on top of my Lego Ghostbusters Firehouse, and it kind of prevents me opening up the firehouse and showing it to people without moving the sensor and having to recalibrate.

My 'room scale' area barely hits 2m x 2m. It took me about ten tries to get Steam to agree on that space using my hydras. It works fine... but it's frustrating how little you can move before Chaperone kicks in when you're working in a space like that. Not worrying about (or actually) bumping into stuff is a bigger plus than the annoyance of the grid, however!

Not that you could go bigger than that with the wired hydras... but when you get your Vive or when Touch lands I hope you all have bigger areas than me. The Sisters demo requires 3 x 3 m and that isn't happening for me any time in the foreseeable future.

I suppose if you really whip your head around, it's possible. Not that it really matters, since the whole "single lighthouse on the desk" thing was more of a hypothetical thought exercise than any realistic solution to begin with. Besides, I'd like to see the logic boards and connectors moved to the back of the head in the next generation anyhow for weight distribution purposes, so a few extra sensors on that rigid section couldn't hurt anything.

Yeah, that's basically all I'm saying. Even if it only covers a few extreme scenarios (or helps with less than ideal sensor positions), it seems like a small but sensible improvement for the next hardware refresh. Not having it isn't a problem, but I'm very glad the Rift does, otherwise everything I'm doing with one sensor and the hydras wouldn't be feasible.
 

Onemic

Member
How legit is StarVR? Anyone know? 2560x1440 resolution per eye sounds absolutely insane... and like something no PC available today or even next year will be able to run reliably.
 
How legit is StarVR? Anyone know? 2560x1440 resolution per eye sounds absolutely insane... and like something no PC available today or even next year will be able to run reliably.

From the impressions I remember, it's mostly not so great compared to the other big 3, with the exception of the muuuch wider FOV. Weight was one of the bigger problems. I think I even read somewhere that you can actually see the gap between the screens in the center of your view, but I may be remembering that wrong.

EDIT: Most of the impressions were from E3 last year though, so it could be much improved since then.
 

Zalusithix

Member
How legit is StarVR? Anyone know? 2560x1440 resolution per eye sounds absolutely insane... and like something no PC available today or even next year will be able to run reliably.

Foveated rendering will allow it. Thing is, without eye tracking, that much of a FoV is kind of pointless. If you render the periphery at a much lower resolution, but have no eye tracking, then any time you actually look to the sides, it'll look horrible. If you don't render the periphery at a lower resolution, then you're stuck with rendering 2560x1440x2 at 90fps, which as you pointed out is... less than ideal.

Even with foveated rendering, there's the issue of weight on that thing. Larger screens, larger lenses, larger container to put everything in. It's a pretty safe bet to say that room scale experiences are off the table for that monster.
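
Just to put rough numbers on why foveated rendering matters at that resolution, a back-of-the-envelope Python sketch (the "sharp central half of each axis, quarter-density periphery" split is purely an assumption):

    # Per-frame shaded pixels, both eyes, at StarVR-like 2560x1440 per eye.
    W, H = 2560, 1440
    full = W * H                        # naive per-eye pixel count

    fovea = (W // 2) * (H // 2)         # assumed sharp central region
    periphery = (full - fovea) // 4     # periphery shaded at 1/4 density

    print(f"naive:    {2 * full:,} px")                   # 7,372,800
    print(f"foveated: {2 * (fovea + periphery):,} px")    # 3,225,600

Under those assumptions you'd shade less than half the pixels per frame, before even considering eye tracking.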
 

Onemic

Member
Foveated rendering will allow it. Thing is, without eye tracking, that much of a FoV is kind of pointless. If you render the periphery at a much lower resolution, but have no eye tracking, then any time you actually look to the sides, it'll look horrible. If you don't render the periphery at a lower resolution, then you're stuck with rendering 2560x1440x2 at 90fps, which as you pointed out is... less than ideal.

Even with foveated rendering, there's the issue of weight on that thing. Larger screens, larger lenses, larger container to put everything in. It's a pretty safe bet to say that room scale experiences are off the table for that monster.

Ah well. Hopefully the Rift and Vive CV2 up the resolution of each eye. I wonder how long it will take to release CV2s of these headsets. I'm thinking something like 3-4 years.
 
I'm only thinking it'll be longer simply because of the high cost of these headsets. Will people really be down with incremental upgrades to $800 tech every 1.5-2 years?

Some people will, absolutely... and so long as they don't make the previous headset redundant, a lot of people will be just fine without getting the latest generation headset every time, just as they are okay with the way mobile phones are released.
 
I'm only thinking it'll be longer simply because of the high cost of these headsets. Will people really be down with incremental upgrades to $800 tech every 1.5-2 years?

I kind of see it like video cards. Some people with the disposable income are going to upgrade this expensive PC component every couple of years (and maybe sell their previous one to help subsidize). Others are going to be fine skipping every other generation. Not a 1:1 comparison but it's the same audience we're talking about. As long as everything is backwards compatible, I don't think it will be a problem.
 

viveks86

Member
I'm only thinking it'll be longer simply because of the high cost of these headsets. Will people really be down with incremental upgrades to $800 tech every 1.5-2 years?

Sure. High-end PC gamers spend as much or more every 2 years anyway. Also bear in mind that a LOT of people are holding out on gen 1 because the software lineup and hardware need time to mature. So assuming VR gains traction, there will be quite a sizeable market waiting to jump in.
 

Onemic

Member
Some people will, absolutely... and so long as they don't make the previous headset redundant, a lot of people will be just fine without getting the latest generation headset every time, just as they are okay with the way mobile phones are released.

I kind of see it like video cards. Some people with the disposable income are going to upgrade this expensive PC component every couple of years (and maybe sell their previous one to help subsidize). Others are going to be fine skipping every other generation. Not a 1:1 comparison but it's the same audience we're talking about. As long as everything is backwards compatible, I don't think it will be a problem.

Sure. High-end PC gamers spend as much or more every 2 years anyway. Also bear in mind that a LOT of people are holding out on gen 1 because the software lineup and hardware need time to mature. So assuming VR gains traction, there will be quite a sizeable market waiting to jump in.

That's true. I'm also wondering what they could even change in that timeframe. Wireless seems like it's out of the question. Same with eye tracking. I guess just increased resolution per eye, wider FOV, potentially redesigned motion controllers, or ergonomic changes?
 

kinggroin

Banned
I wonder what kind of lifespan we can expect with gen 1 units? Not talking about time before a successor is out, but how long they'll receive support?

I'm hoping my Vive will be good for 5 years.
 

viveks86

Member
That's true. I'm also wondering what they could even change in that timeframe. Wireless seems like it's out of the question. Same with eye tracking. I guess just increased resolution per eye, wider FOV, potentially redesigned motion controllers, or ergonomic changes?

Quite a bit actually. All of the above, in fact. Pretty sure both Oculus and HTC have been working on gen 2 designs already. I believe Oculus even confirmed that recently. SMI is already talking to headset manufacturers about including foveated rendering in gen 2. Fove is releasing this year, so there's no way others will lag behind for more than 2 years.

The only thing I'm skeptical of seeing in gen 2 is wireless. All I've heard so far is some no-name company claiming they have working prototypes. Given that the resolution requirements are going to get bumped up again, I just can't see it happening anytime soon. Foveated rendering plus some kind of smart video compression built around it may help, but it all sounds like very early days.
 

Durante

Member
I wonder what kind of lifespan we can expect with gen 1 units? Not talking about time before a successor is out, but how long they'll receive support?

I'm hoping my Vive will be good for 5 years.
I think the Steam/OpenVR API baseline established with the Vive will be good for a very long time.

That's an advantage of shipping the first complete solution (including tracked VR controllers and large-scale tracking). Many developers will be testing/prototyping their full-on VR games with that. And of course, for "seated" games with traditional controls there is basically no reason to ever drop support for any HMD in the foreseeable future given that the API masks e.g. differences in resolution or FoV anyway.

Quite a bit actually. All of the above, in fact. Pretty sure both Oculus and HTC have been working on gen 2 designs already. I believe Oculus even confirmed that recently. SMI is already talking to headset manufacturers about including foveated rendering in gen 2. Fove is releasing this year, so there's no way others will lag behind for more than 2 years.

The only thing I'm skeptical of seeing in gen 2 is wireless. All I've heard so far is some no-name company claiming they have working prototypes. Given that the resolution requirements are going to get bumped up again, I just can't see it happening anytime soon. Foveated rendering plus some kind of smart video compression built around it may help, but it all sounds like very early days.
Yeah, I don't see wireless happening for high-end VR in gen 2. Although it would be fantastic, especially for room-scale.
 

SinSilla

Member
Tracked controllers, but an untracked headset by the looks of it. Honestly it looks like cheap trash IMO. I'd rather have the OSVR hacker kit myself if I wanted a more affordable headset.

Edit: Actually after looking at this image:
pico-neo-11-1200x675.png

It appears that there's another ping-pong ball that clips to the headset to track it if (and only if) you have the tracked controllers, which come with the camera(s?). Putting aside the whole driver/software/computer interface issue, I'm not even positive how the camera tracking is supposed to interface with the device in standalone mode. The controller has the logic, but has to connect to the headset by wire, so the cameras can't connect to it. On the other hand, it appears the tracked controllers come with a giant dock with the cameras, and that the normal controller can be stored in it. So I guess the headset and standard controller connect to that behemoth dock in standalone mode. Also not sure whether the dock represents one fixed binocular-vision camera, or is simply a base station that the cameras connect to, allowing the cameras to be positioned to reduce occlusion.

Bleh, enough analysis of this junk.

You're dismissing it too soon. There is some promise hidden underneath that orange coat. Probably. Hopefully. The hardware seems to be good (enough); it'll be all about the software. If this thing supports OSVR/OpenVR it could become a viable option (amongst other headsets that will undoubtedly see the light of day in the near future).

To address some of your questions: this thing has two modes of operation, untethered/mobile and tethered to a PC.

The first mode requires that you buy the package with the "smart" gamepad, which connects to the headset via USB. No wands or positional tracking in this mode. So let's quickly forget about that; there's Cardboard for mobile porn.

The second mode of operation is PC mode (you can buy this without the Snapdragon-loaded gamepad for $200 less!). You connect the HMD to your PC via a single USB cable (same port on the HMD as for the gamepad).

The tracking station, with its two non-detachable cameras, can be used wirelessly (and most probably wired too) and connects to your PC via 802.11ac dual-band WiFi. So now you've got the cameras and the HMD connected to your PC, where the sensor fusion happens. The wands connect via Bluetooth, either to your PC or probably to the tracking station.

That should be it.

Some other thoughts and info from the other thread.

Some other interesting tidbits:

SinSilla said:
  • IPD adjuster on top (up to 73mm)
  • Built-in microphone
  • Proximity sensor
  • Supports asynchronous time warp
  • "Tracking station" with two built-in cameras is actually wireless (I guess optionally)
  • 360 tracking seems to be out of the question with the current setup
  • They were demoing the Neo without positional tracking on PC, with a wired Xbox gamepad (not working yet?)
  • I have spotted at least 3 inputs on the headset itself (lower right side): one for the tracking beacon, one 3.5mm audio jack, and a third which awkwardly seems to be located between the face cushion and the shell of the headset. This one is most probably used either for the gamepad when used mobile or for the USB connection when tethered to a PC.

If what has been said about SteamVR/OSVR support is true, then this actually isn't looking too shabby!

SinSilla said:
The design of the tracking station is kinda strange as well. Having these two 120° FOV cameras so close to each other isn't exactly that beneficial, is it? For 180 tracking a single camera should have been enough (as proven by the Razer HDK and Oculus Rift). 360 tracking won't work at all with only a single tracking point at the very front of the headset. They don't look as if you could detach them from the station either, so hooking the cameras up higher for "deeper" tracking (or crouching/lying down) isn't convenient at all; this thing is intended to go on your desk or coffee table (it seems to be quite the brick). At least they can be tilted independently.
 