Sorry, gang, I've been working more hours.
Moreover, you can disable the magnetometer in the XMB, right after the calibration process. If it were so important, they wouldn't let you disable it, no?
Yes, I know all of that stuff. Some of it I actually picked up from you back in the day! lol <3
mrklaw was asking why they needed the camera/beacon to get an orientation fix, because he thought the IMU should be sufficient for that. Problem is, strictly using the IMU, you can only get a two-axis orientation fix. Let's say you've got the wand pointed directly at the ceiling, and we know this because gravity is pulling straight through the controller's ass. Now, which side of the wand is facing the camera?
We have no idea, because the system can only see the bulb, which looks the same in both pictures. We can assume it's the latter, but that's only an assumption. I was saying that the purpose of the magnetometer was to give the system a three-axis orientation fix relative to magnetic north, but as you know, it's prone to interference outside of the lab, so that didn't work out as well as they'd hoped.
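To illustrate the two-axis limit, here's a minimal sketch (the function name and axis conventions are mine, not anything from Sony's SDK) of recovering tilt from a gravity reading. Rotating the wand about the gravity vector leaves the reading unchanged, which is exactly the ambiguity described above:

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Recover pitch and roll from an accelerometer's gravity reading
    (body frame, in g's). Yaw is unobservable: any rotation about the
    gravity vector produces the exact same (ax, ay, az)."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Wand pointed straight at the ceiling: gravity runs down its long axis.
# Spin it to face the camera or away from it -- the reading never changes.
print(tilt_from_gravity(0.0, 0.0, 1.0))
```

That missing third axis is what the magnetometer (or the camera) has to supply.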
That said, I imagine that IK could help them out here. It's a pretty safe bet the Move buttons are located under the users' thumbs, and since we know where their head is and where their hands are in relation to that, I would think they could make a close-enough guess as to where the thumbs are. The only evidence I have of this, though, is that I haven't heard anyone talk about any kind of calibration or setup with the wands under PSVR.
/shrug
That video demonstrates quite well what I don't like about the Vive controller - it treats the end of the controller as the end of your hand, which feels a bit weird as the controller is quite long.
How odd. They should know precisely where the handle is relative to the marker, so it seems like the API should be providing true hand position, at least as an option.
"RGB" vs. "PenTile" is Sony/Samsung marketing trying to complicate things. Just means that each pixel in the PSVR has three subpixels (red/green/blue). You'd think that would be normal, but the OR is suggested to have a PenTile display, which means you get shared subpixels, so theoretically you get a lower effective resolution. But I think you'd be hard pushed to notice it.
Fixed, and it seems fairly noticeable.
RGB at 267 PPI on the left, and PenTile at 306 PPI on the right, so a fairly decent analogy for the panels we're discussing here. I'm sure some will disagree, but the numbers are what they are.
It may sound nitpicky, but Carmack is right, and Samsung's method of counting pixels is pretty shady. Pixel is short for picture element: the smallest chunk of the picture you can chip off without changing the nature of the chunk in question. But PenTile pixels aren't truly elemental, because each contains only two of the three primary colors. It's simply impossible for a PenTile pixel to reproduce the full range of colors, so it's disingenuous at best to refer to it as elemental in any way. To produce full-range color, any given PenTile pixel must team up with one of its neighbors, meaning it's really more of a slutty subpixel than a true pixel.
For example, let's imagine a black-and-white grid. A 1080p display should be able to display 1080 rows and 1920 columns simultaneously, giving us 2,073,600 elements in our picture, right? If you're talking about true RGB pixels, then yes, but because PenTile pixels don't actually qualify as pixels until they've buddied up, while they can display 1080 rows or 1920 columns, they can't do both at the same time. So the best chess board it can actually manage is 540 rows and 960 columns, yielding 518,400 picture elements. Hey, that's only 25% of what they claimed on the tin!! :(
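The arithmetic above can be written out as a quick sanity check (the checkerboard worst case is the one from this thread; the halving-per-axis step follows the pairing argument):

```python
def picture_elements(cols, rows, full_rgb=True):
    """Full-color picture elements in the worst-case checkerboard.
    A PenTile 'pixel' carries only two of the three primaries, so
    neighbors must pair up, halving the count on each axis here."""
    if full_rgb:
        return cols * rows
    return (cols // 2) * (rows // 2)

rgb = picture_elements(1920, 1080)                       # 2073600
pentile = picture_elements(1920, 1080, full_rgb=False)   # 518400
print(pentile / rgb)                                     # 0.25
```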
Some will try to dismiss this example as implausible, but it's merely meant to be illustrative. While it's highly unlikely we'll be displaying any microscopic chess boards for the user, it isn't hard to imagine that any random render is likely to contain a lot of areas where different colors butt up against one another, and PenTile displays will lose a lot of detail wherever that occurs, because they have far fewer picture elements to work with. It won't make any difference with large swaths of color, of course; it's only the fine detail which is lost.
It's been funny watching people trumpet RGB subpixel arrangements, because the move from RGB to PenTile from DK1 to DK2 was highly celebrated for many reasons, the most important being that RGB subpixel arrangements are subject to jailbarring. Unless Sony's headset randomizes the RGB arrangement per row, it will also have the same problem. The problem manifests when you show a solid screen of the same color - say red (255, 0, 0). Because in this instance both the green and blue subchannels are entirely off, it leaves a two-subpixel-wide gap between pixels, which looks like vertical jailbars on DK1, but would probably look like scanlines on PSVR because of the screen orientation.
Hmm. To my untrained eye, it simply appears that half of the pixels are missing in the PenTile on the right, leaving a far larger horizontal gap than the one you just complained about, plus a pixel-high gap above and below. That's actually considered preferable, is it? If so, can't they get the same effect on RGB by masking out every alternate pixel when displaying primary colors?
And I doubt the people testing these headsets are savvy enough to report on jailbarring.
So, having the extra pixels lit only sucks if you know it's supposed to? =/
It's also funny to watch people boast about "20% more subpixels in RGB" without accounting for wasted pixels along the bridge of the nose thanks to a single screen. Rift CV1 and Vive, by nature of their split screens, waste far fewer pixels.
I've seen this mentioned but I haven't been able to find any specifics. Just how much of a single display is being wasted, and why?
Valve's VR optimization presentation just now was good stuff.
Good stuff indeed. For me, the biggest surprise was the advice to drop resolution from 196% of screen native to as low as 42% of native (1404x780) in an effort to hit your frame deadlines. Sound advice, I'm sure, but I was just surprised that such a grainy render would be considered acceptable at all. (Not breaking presence, etc.)
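For reference, those percentages are per-axis scale factors squared. A quick sketch (assuming the Vive's 2160x1200 combined panel, which seems to match the 1404x780 figure):

```python
def render_target(native_w, native_h, scale):
    """Render target size for a per-axis scale factor; the pixel-count
    fraction goes with the square of the scale."""
    return int(native_w * scale), int(native_h * scale), scale * scale

w, h, frac = render_target(2160, 1200, 0.65)
print(w, h, round(frac * 100))   # 1404 780 42
print(round(1.4 * 1.4 * 100))    # 196 -- the 1.4x supersampled baseline
```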
Now the positives:
- Everything else
Sounds pretty nice overall. Thanks for sharing. <3
this is how teleporting works.
Why not just make the avatar's head match the body-relative position of the user's? If they're looking over their left shoulder, the avatar would be too, which should yield predictable results.
(Of course, the "correct" way to do it would be to simply pull the cable through and mount a new connector on the other side, but ain't nobody got time - or the tools - for that)
Have you ever actually crimped cable? It's stupid easy and immensely satisfying. Well, for Ethernet, at least, but I can't imagine the others would differ too wildly. I'd recommend you look into it, at least. I think you'll be quite pleased with the results.
The whole somewhat combative "People told us we shouldn't, so we did!" attitude sounds even less ideal lol. I mean, they can do whatever the hell they want, but that's not the greatest outlook to take.
#dealwithit
The creator being in the room and not even noticing the tracking camera isn't facing the right way certainly isn't a great sign.
What surprised me most was that the system itself wasn't saying, "Err, what headset?"
What does that mean?
ok, so shopping list of best things to take from this gen for gen 2:
Nice list. It's also worth noting that because the DS4 is tracked, it can give you a strong sense of hand presence even though you can't see your hands at all. Because you can look down and see the object you can already feel in your hands, that's enough to trick your brain into thinking this is really happening, and the fact that you can't actually see your hands isn't particularly relevant. I guess I'm invisible. /shrug Imagine looking down in-game and being able to see your very own HOTAS, mapped 1:1 in the game, displaying contextual, holographic tooltips.
And because it's a full 6DOF controller, you can even use that for mapping additional inputs like you guys were discussing previously. Twist yaw? Not a problem. You can have twist pitch and roll as well. Map throttle and lateral thrusters to translation too, if you'd like. You can have full control of your flight systems using strictly motion, leaving all of the buttons and sticks on the DS4 free for weapons and avionics. There was some sci-fi movie where ships were controlled by grabbing a little sphere that you pushed and twisted around. This'd be that, basically.
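A rough sketch of that mapping idea, purely hypothetical (the names, dead zone, and gains are all invented, not any real API): take the controller's pose delta from a neutral grip pose and shape it into six flight axes:

```python
def pose_to_flight_axes(dx, dy, dz, droll, dpitch, dyaw, gain=1.0, dead=0.02):
    """Map a 6DOF pose delta (offsets/rotations from a neutral grip pose)
    onto normalized flight-control axes. Dead zone and gain are made up."""
    def shape(v):
        if abs(v) < dead:          # ignore hand tremor near neutral
            return 0.0
        return max(-1.0, min(1.0, v * gain))
    return {
        "throttle": shape(dz),     # push the controller forward/back
        "lateral":  shape(dx),     # slide it left/right
        "vertical": shape(dy),     # lift it up/down
        "roll":     shape(droll),  # twist about each axis
        "pitch":    shape(dpitch),
        "yaw":      shape(dyaw),
    }

# Pushed forward half a unit with a slight twist:
print(pose_to_flight_axes(0.0, 0.0, 0.5, 0.2, 0.0, 0.0))
```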
Well, I thought all of that was fascinating.
Indeed. The beacon syncing stuff got me wondering:
Do you happen to know how they scale past two beacons or stitch multiple volumes together?
I came across an article a while back where I believe Oculus stated that the CPU load was somewhere in the vicinity of 1-2% of one core per additional camera. 1-2% of what CPU at what speed I don't know, nor do I know whether they were talking about physical cores or logical.
They're using the CPU for that? What about reprojection/time-warp? Tracking and reprojection combined take under 2 ms on the PS4's GPU, so why isn't PC doing that stuff on the GPU? Too much overhead shuffling the jobs back and forth? =/
Would you still work with metal sticking through your doughnut hole?
Lots of people and companies were bashing their heads against the VR spatial tracking issue, but Valve were simply the first to come up with a reliable solution which works in a wide variety of conditions, and is simple and cheap enough to actually be shippable.
Horse shit.
Don't worry, they don't. In fact, they very explicitly state that you should not do that in the same presentation where they introduced it as a fallback option. In all caps even.
This is inaccurate. While he does use the phrase "Reprojection should only be a last-ditch fallback" (no idea where you got the all-caps part), that's certainly not the only thing he says on the subject, and indeed, it seems to contradict his overall advice on the matter and his comments on its usefulness.
He also says this stuff:
"If you can drop your settings a level to maintain framerate in the worst case (goal #1) and you want to scale up if you have extra GPU cycles to spend (goal #2)"
"Example: Aperture Robot Repair VR demo now runs at target framerate on GTX 680."
"If you use this adaptive system, you can run on a lower GPU spec. Robot Repair now running on a 4 year old GPU without touching shaders."
"Here's options for maintaining framerate on a 680 in Aperture Robot Repair all the way up to 980Ti. You'll notice on the 680 it'll scale up to full resolution often actually"
"And if you drop in a high end card you're going to get something that looks way better. People who have seen this say they think we have higher resolution panels."
Sure sounds to me like he's talking about how to ensure your game runs smoothly on a lowly 680. No mention of reprojection, but let's read on...
"What about experiences where you have text in your environment? You need enough texel density to be able to read things. And if you're going to have a game where you're going to shoot something, you can't go down to 0.65x res."
"So my thinking is that if you target in 0.8x in each direction, there's enough texel density to be readable in VR"
Oh, look, he just recommended that you utilize their frame reuse tool (interleaving) to help maintain frame pacing in games(/situations?) where your user needs to be able to read or aim at stuff. What else does he have to say about frame reuse?
"Then there's async reprojection."
"GPU gets interrupted and if the currently rendered frame is not done, it can reproject the last rendered frames. This sounds like the silver bullet -- the ideal safety system. But it requires preemption granularity on the GPU that's good or better than current GPUs."
"There's no great solution yet until GPUs can be pre-empted better."
"So we have something called an Interleaved Reprojection Hint"
"This gives you about 18ms to render instead of 11ms. It's a great safety measure."
"It's a safety net that the runtime can tap into and force it on when needed. It's a good tradeoff I think, and I agree what Oculus' Michael Antonov said last year." (quote on slide in photo)
So it's not perfect, but it's good enough, and most importantly, it's far better than the alternative of simply letting frames drop unanswered. This latter point is noteworthy because while you rail continuously against the use of reprojection and frame reuse in general, I've yet to see you suggest a better solution. Moar flops, I suspect, but this conveniently ignores the fact that they were dropping frames even on a 980. Anyway, moving on...
So interleaving is a great safety measure that provides a good tradeoff whenever you can't maintain native refresh rate. So should you just spend all of your time running at 45 fps? Not really, unless you're on a 680 or something, in which case spending most of your time at 45 fps will often give you the best experience possible. When we're talking safety nets, async is better still, but it's hard to come by, and even if you have it, it's still actually better to drop to 45 fps than rely on ATW to take you from 89 to 90, because it's just a safety net, not a turbo. So here their advice is precisely the same as Oculus' and Sony's: yes, ATW has the knock-on effect of compensating for dropped frames, but don't drop frames; dial stuff back instead.
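Putting the advice together, the decision loop might look something like this sketch (thresholds and step sizes are invented; only the ~11.1 ms vsync budget and the 18 ms interleaved budget come from the talk):

```python
VSYNC_MS = 1000.0 / 90.0   # ~11.1 ms per frame at 90 Hz
INTERLEAVED_MS = 18.0      # budget quoted for the 45 fps reprojection fallback
MIN_SCALE, MAX_SCALE = 0.65, 1.4

def next_frame_plan(gpu_ms, scale):
    """Pick the next frame's render scale and mode from last frame's GPU time.
    Dial resolution back before missing vsync, fall to interleaved
    reprojection only as a safety net, and spend headroom on resolution."""
    if gpu_ms > INTERLEAVED_MS:
        return max(MIN_SCALE, scale - 0.1), "interleaved"
    if gpu_ms > VSYNC_MS * 0.9:
        return max(MIN_SCALE, scale - 0.1), "full-rate"
    if gpu_ms < VSYNC_MS * 0.7 and scale < MAX_SCALE:
        return min(MAX_SCALE, scale + 0.1), "full-rate"
    return scale, "full-rate"

print(next_frame_plan(10.5, 1.0))  # close to the 11.1 ms deadline: back off
```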
So you cherry-picked a single line from the talk and then distorted it to make it a little more emphatic, and completely ignored the rather lengthy discussion which surrounded it, and in doing so, you've completely misrepresented Valve's stance on the matter, just as you've previously done with Oculus and Sony.
Edit: as someone just pointed out to me, another relevant detail in this discussion is that it was in fact the Valve VR demo room which convinced Zuckerberg to buy Oculus.
It seems hard to imagine that room-scale specifically was what sold Zuckerberg, given his company's insistence that walking around with a headset on is far too dangerous to ever consider condoning. cutekidplop.gif It seems more likely that he was impressed by the performance of the headset itself WRT refresh, persistence, etc. since it was a fair bit ahead of Oculus at the time.
I prefer technologically superior hardware, and I prefer more open software.
I've never made a secret out of either of those stances, and I'll readily admit to both. In fact, I am proud of them!
The issue isn't so much your completely arbitrary pronouncements about which is best, but rather the misplaced sense of pride they engender within you, and the efforts you take to help nurture that false sense of superiority you've created for yourself.
I ignore that, because we now have a perfectly sound investigation which takes into account all factors (binocular FoV, eye relief distance, etc.), and its results are rather clear (110° x 113° for Vive and 94° x 93° for the Rift). This does induce a pixel density tradeoff, but the FoV question is independent of that and, quite frankly, resolved at this point. I don't see a point in relying on hearsay when we have data.
Yes, I said arbitrary, like here, where you claim the Vive to simply be technically superior thanks to its larger FOV when in fact this is just a tradeoff with apparent pixel density as you later admit. So what you choose to think of as an unquestionable advantage over all other comers is actually nothing more than your personal preference. Then you almost literally burst with "pride" regarding your uncanny ability to identify your own preferences.
Unfortunately, that's not good enough for you either. Since these are not just personal preferences but rather clear indicators of your own intelligence and competence, it follows that anyone who prefers something else is not merely less competent than yourself, but a drooling fanboy who should be roundly and relentlessly ridiculed for allowing logos and marketing to dictate their purchasing "decisions" rather than simply choosing products on pure merit, as truly enlightened folks such as yourself invariably do. Gimme a break.
Perhaps you're thinking it's unfair of me to give you such a thorough dressing-down here when you've already been forced to backpedal on this particular claim (and besides, Lighthouse), but again, your insistence that an expensive and complex system which is reliant on sensitive, spinning parts and provides almost zero additional utility to the vast majority of users not only represents the best tracking solution but also trumps considerations such as refresh rates, positional audio, and ergonomics (all of which provide considerable utility to virtually all users) is in fact nothing more than your personal opinion, not especially relevant to anyone but yourself, and perhaps of limited relevance to those whose goals are similar to yours. Yes, cost is a valid consideration as well. If a solution costs you ten times as much and offers a 1% performance improvement, most rational people would not agree it's unquestionably superior unless the primary goal of the system is either a) wasting as much money as possible, or b) saving human life, and I suspect you'd get a lot of pushback on the latter.
But even more troubling than your heavy reliance on ad hominem attacks and faulty metrics to establish your own sense of self-worth (or "pride", as you call it) is your willingness to distort the truth to help reinforce your claims, whether as part of your tireless and misguided crusade against frame reuse mentioned above, or your attempt to literally rewrite history and claim that Valve were the first to come up with a shippable tracking solution as part of your campaign to expunge any and all of Sony's contributions from the record. Yes, I realize you included a bunch of qualifiers in an attempt to gerrymander Move out of the running, but it actually meets all of the qualifications you set forth, so nyeah.
But rather than take this post as an indication that you need to be more careful with your gerrymandering in the future, can I suggest we instead attempt to set all of that nonsense aside, just talk tech, let everyone decide for themselves which solutions best suit their own needs, and then do our damnedest to not make them feel like a schmuck if their needs don't align perfectly with our own? <3
Actually, the fundamental discourse here seems to be about open (sorry, "less closed") versus closed APIs. That isn't going to go away unless Oculus decides to make it go away (e.g. by removing the offending parts of their EULA).
From what I can tell, OpenVR is controlled entirely by Valve, is closed source, and is dependent on the SteamVR runtime to function. How is OpenVR open in anything apart from name?
I'm getting a Vive, but I don't see any reason why Oculus would want to rely on OpenVR considering their direct competitor is in control of the whole thing.
If it were a neutral third party with nothing to do with the storefront/HMD business, or some kind of open consortium to develop a unified API (I believe it will happen at some point), then yeah, maybe.
This. Some folks are acting as though OpenVR is like VESA or something, but it seems to be just another proprietary technology.
RSP, I think Zalusithix' point is that you are not comparing like-for-like in terms of setup.
With Vive, you are setting up an entire room-scale VR playing field with tracked controllers. With CV1, you are setting up a HMD and a camera.
Obviously the latter is less work than the former. But if you wanted to set up an equivalent experience (in the future, when Touch is out with the second camera) it would also by necessity be more involved. Probably more complicated, in fact, than Vive setup, considering you'd need to run USB from the corners of your room to the PC.
This isn't a very good argument. It's akin to saying that nobody should argue that a nuclear reactor is a more complex device to set up than a fire pit because the former produces more heat. While the additional heat produced by the nuclear furnace is indisputable, it still represents a colossal waste of resources when our only goal was to warm up a hot dog.
Sony is pushing for a few more polished experiences when their device launches. I think that's why they have an October date instead of something sooner. With the impressions of the device feeling very good and the fact that all the other stuff required is already out (besides whatever the PS4K might be), I'm sure they could release it now if they really wanted to.
FWIW, Sony originally said they'd be launching first-half because they were waiting on software, as you surmise, but when they pushed the date back to October, they said it was to ramp up production to help meet projected at-launch demand.