CamHostage
Member
All the cameras, sensors, etc. are also present for BC to work.
Unfortunately, that is not necessarily the case. PS VR2 will not include light points on the headset or controllers, and it will not be tracked by a camera mounted outside the body. One is outside-in tracking, the other is inside-out tracking.
This is all that PSVR for PS4 knows:

PSVR for PS4 is a fairly basic "outside-in" implementation of tracking bodies in motion through visual information. It sees the light points on the headset and the two differently colored light bulbs in your hands, and that's it. From that visual data, it determines, via changes per frame, where you are, how fast you've moved, and in what direction in space. So: this light bulb was here 1/60th of a second ago; now it's shifted slightly, moved this many pixels over, and is this much bigger than it was before, so it must have moved this way and that way at this speed in between. This other light on the side used to be fully visible but is now partially blocked, so rotation must also be involved. Calibrate and calculate that across the nine lights on the headset and the two colored bulbs (PS Move also has the benefit of additional built-in motion sensors for fine-detail movement data), and the system can infer "motion" to orient what's displayed in the headset.
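To make that concrete, here's a toy sketch of the two inferences described above: distance from a known-size bulb's apparent size (a basic pinhole-camera model), and lateral speed from pixel displacement between frames. This is purely illustrative; the focal length is an assumption, not Sony's actual camera math.

```python
FOCAL_LENGTH_PX = 800.0   # assumed camera focal length in pixels (illustrative)
BULB_DIAMETER_M = 0.045   # PS Move sphere is roughly 44.5 mm across
FRAME_DT = 1.0 / 60.0     # 1/60th of a second per frame

def depth_from_size(pixel_diameter: float) -> float:
    """Pinhole model: a known-size sphere's apparent size gives its distance."""
    return FOCAL_LENGTH_PX * BULB_DIAMETER_M / pixel_diameter

def lateral_velocity(x0_px: float, x1_px: float, depth_m: float) -> float:
    """Pixel displacement between frames -> meters per second at that depth."""
    meters_per_pixel = depth_m / FOCAL_LENGTH_PX
    return (x1_px - x0_px) * meters_per_pixel / FRAME_DT

# The bulb shrank in view (moved away) and drifted right between two frames:
d0 = depth_from_size(40.0)              # 40 px wide -> 0.9 m away
d1 = depth_from_size(36.0)              # 36 px wide -> 1.0 m away
vx = lateral_velocity(320.0, 326.0, d1) # 6 px of drift -> 0.45 m/s sideways
```

The "this much bigger than before" cue is the depth change (d0 to d1), and the "moved this many pixels over" cue is the lateral velocity.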
PS VR2 for PS5, on the other hand, uses two different types of motion detection. One is more of what we think of as "motion sensors": the thing being moved senses the motion itself through a three-axis gyroscope and a three-axis accelerometer, which can tell roll, pitch, yaw, and acceleration/deceleration. The other is "inside-out" cameras (plus the IR proximity sensor) that look outward and work out the orientation of the body based on what they see in the room around you.
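The gyroscope/accelerometer pairing described above is typically fused, since gyro integration drifts over time while the accelerometer's gravity reading doesn't. A minimal complementary-filter sketch of that idea (an assumption about how such sensors are commonly combined, not PSVR2 firmware):

```python
import math

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def update_pitch(pitch: float, gyro_rate: float,
                 accel_y: float, accel_z: float, dt: float) -> float:
    """Blend integrated gyro rate with a gravity-based pitch reference."""
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular velocity
    accel_pitch = math.atan2(accel_y, accel_z)   # gravity vector -> absolute pitch
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# Headset held level (gravity straight down one axis), no rotation:
p = update_pitch(0.0, 0.0, accel_y=0.0, accel_z=9.81, dt=1 / 1000)
```

Run per sensor sample, the gyro term captures fast head motion while the accelerometer term slowly pulls the estimate back toward true level.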
Now, that's not to say Sony couldn't find a way to translate the PS5's various motion-sensor data into the PS4's understanding of motion (the PS5 would be generating far more motion data than the PS4 ever could), but they're fundamentally different. One is based on a stationary external camera placed several feet away, watching a 2D video feed and transcribing differences between frames into movement across multiple dimensions, so it can drive what the headset displays and how the game understands collision/action; the other is a head-mounted IR camera detecting the body's motion and distance changes between landmarks in the room to directly orient the view and controls with 6DoF. The PSVR sensors never "move"; the PS VR2 sensors are always moving.
Ultimately, the game is just looking for the data, and I believe the camera feed gets processed by the central system before the game ever touches it. If so, the game would get the position, speed, and pitch/yaw/roll numbers from the console, and Sony's task would be to create a translation system that maps PS5 VR2 motion data onto what PS4 VR games expect.
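In the simplest case, such a translation layer could re-express the new headset's pose relative to a calibrated origin, so a legacy game sees the same position and pitch/yaw/roll fields its old runtime produced. Every name below is hypothetical, just to illustrate the shape of the idea:

```python
from dataclasses import dataclass

@dataclass
class InsideOutPose:          # hypothetical: what a PSVR2-style tracker reports
    x: float; y: float; z: float
    pitch: float; yaw: float; roll: float

@dataclass
class LegacyHeadsetState:     # hypothetical: what a PS4-era VR game expects
    x: float; y: float; z: float
    pitch: float; yaw: float; roll: float

def translate(pose: InsideOutPose, origin: InsideOutPose) -> LegacyHeadsetState:
    """Re-express the live pose relative to a calibrated origin, as if a
    fixed external camera were still the frame of reference."""
    return LegacyHeadsetState(
        x=pose.x - origin.x, y=pose.y - origin.y, z=pose.z - origin.z,
        pitch=pose.pitch - origin.pitch,
        yaw=pose.yaw - origin.yaw,
        roll=pose.roll - origin.roll,
    )

origin = InsideOutPose(0.1, 1.6, 0.0, 0.0, 0.2, 0.0)   # set during calibration
now = InsideOutPose(0.3, 1.7, -0.1, 0.05, 0.4, 0.0)    # current headset pose
state = translate(now, origin)
```

The real problem would be harder than a subtraction, of course (coordinate conventions, latency, drift correction), but the console-side shim is the part that would let old games run unpatched.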
(Mccrocket notes that Rift overcame this hurdle previously, between the old CV1 and the more modern sensor style of the Rift S. I can't find information on how they did that, nor on what percentage of original Oculus games actually received a patch versus how many were "cheated" into compatibility through assumptions of congruent data, or how much difference there would be. The CV1, meanwhile, is still compatible with current Rift games, and the SDK is designed to account for both types of sensor input; however, that support is also tested and tweaked by developers on the various headsets and is part of the initial deployment of the software, which isn't the same as just "making compatibility" between PSVR and PSVR2. And although the translation mathematics are probably down to a fair science by now and shared on developer forums, it's still a big headache for conscientious game developers to test and configure their games for the various headsets on the market.)