I would assume that Move uses only gyro (IMU) data to determine its orientation and uses the camera tracking for position only. Lighthouse can also do that (for anything with a gyro, which the HMD and controllers have), but it's only used as a fallback.

That is interesting for two reasons to me. First, the guy who came up with the 'optimal placement of sensors for any given 3D model' is a genius. Second, they talk about needing 4 sensors to see a lighthouse, plus IMU data, to get full pose information (or 5 sensors alone). That seems surprisingly high. So how does Move get away with effectively only one point of reference (the glowing orb), which by Valve's comments should only provide position in space and not full pose information (and even that single point plus IMU shouldn't be enough)?
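To make the assumption in the first paragraph concrete, here's a minimal sketch of that split (not Sony's actual algorithm, just the idea): integrate the gyro for orientation, take position from camera tracking of the orb, and stack the two into a single 6-DOF pose. All function names and numbers below are made up for illustration.

```python
import numpy as np

def integrate_gyro(prev_R, gyro_rates, dt):
    """Update an orientation matrix from one gyro sample (rad/s, body frame).

    First-order (small-angle) integration: build the skew-symmetric matrix of
    the incremental rotation vector and compose it with the previous rotation.
    """
    wx, wy, wz = gyro_rates * dt
    omega = np.array([[0.0, -wz,  wy],
                      [ wz, 0.0, -wx],
                      [-wy,  wx, 0.0]])
    return prev_R @ (np.eye(3) + omega)

def fuse_pose(orb_position, R_from_imu):
    """Combine camera-tracked orb position with IMU-integrated orientation
    into a 4x4 pose matrix (controller frame -> camera frame)."""
    T = np.eye(4)
    T[:3, :3] = R_from_imu      # orientation: gyro integration only
    T[:3, 3] = orb_position     # position: from tracking the glowing orb
    return T

# One hypothetical update step
R = np.eye(3)                               # start at identity orientation
gyro = np.array([0.01, 0.0, 0.02])          # rad/s, made-up IMU sample
R = integrate_gyro(R, gyro, dt=1 / 120)     # integrate one 120 Hz sample
orb_xyz = np.array([0.1, -0.05, 0.8])       # metres, made-up camera measurement
print(fuse_pose(orb_xyz, R))
```

Note the catch this sketch makes visible: the single tracked point contributes nothing to orientation, so any gyro integration error accumulates uncorrected, which may be why Valve's sensor counts for a full optical pose are so much higher.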