Hi all,
I'm ready to reveal my paper on VR locomotion.
The system is called Controller Assisted On the Spot movement (CAOTS for short).
You can download it here.
https://mega.nz/#!08BBUDLA!OZtIYs45ww43Zg7j0L3lLMJVQup4QwUFNb18LwNhPLk
The paper includes a reasonably thorough primer on the state of VR locomotion, why it's important, and how this solution addresses many of the problems of VR locomotion within the practical constraints of current-day VR tech.
The basic synopsis of CAOTS is that the user holds a tracked motion controller to provide the direction and intent of movement, but not the velocity of the movement itself. The velocity is instead inferred from the bounce of the user's head-mounted display as they walk, jog, or run on the spot.
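To make the split between direction and velocity concrete, here is a minimal sketch of the idea. The function name, the bounce-amplitude heuristic, and the tuning constants (`gain`, `min_amplitude`) are my own assumptions for illustration, not the paper's actual algorithm:

```python
import math

def caots_velocity(head_heights, controller_yaw_rad, dt, trigger_held,
                   gain=2.0, min_amplitude=0.01):
    """Infer a horizontal velocity vector in the CAOTS style.

    head_heights: recent HMD height samples in meters, newest last.
    controller_yaw_rad: heading of the tracked controller, in radians.
    dt: sample interval in seconds.
    trigger_held: the user's explicit intent-to-move signal.
    gain, min_amplitude: hypothetical tuning constants.
    """
    if not trigger_held or len(head_heights) < 2:
        return (0.0, 0.0)
    # Vertical bounce amplitude over the window stands in for step vigor.
    amplitude = max(head_heights) - min(head_heights)
    if amplitude < min_amplitude:
        return (0.0, 0.0)  # standing still: ignore tracker noise
    # Crude speed proxy: more bounce per unit time -> faster movement.
    speed = gain * amplitude / (len(head_heights) * dt)
    # Direction comes from the controller, not the head, so the user
    # can look around freely without steering.
    return (speed * math.cos(controller_yaw_rad),
            speed * math.sin(controller_yaw_rad))
```

The key property is that looking around changes nothing here: only the controller yaw steers, and only head bounce (plus the intent signal) drives speed.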
VR locomotion is an issue that has the potential to affect the trajectory of VR adoption, and thus society as a whole. An intuitive and immersive locomotion method that works well within the necessary limitations of these systems will help expand the range of possible VR experiences. At this stage, the VR industry has not settled on such a solution, but a keen and considered implementation of CAOTS movement that pairs well with room-scale movement would go a long way toward establishing that locomotion standard for the majority of VR experiences. This in turn would let users switch between many VR experiences without having to guess at or relearn basic traversal and navigation concepts.
In summary, with CAOTS, we have access to a VR locomotion method that:
- Uses standard, existing room-scale VR hardware
- Allows unlimited traversal within the virtual environment (VE)
- Synergizes with room-scale movement
- Provides proprioceptive and vestibular cues that approximate actual movement, and is therefore more immersive
- Provides a natural sense of movement through VEs
- Allows more intuitive changes to the rate of movement
- Requires minimal training (a few minutes to get used to it)
- Enhances the sense of scale of VEs, by getting users to actually use their bodies to traverse VE space
- Provides a fitness- and health-positive traversal solution
- Helps mitigate and minimize motion sickness
- Decouples the direction of motion from the direction of the head, so users can easily look around the VE without disrupting the direction of travel
- Is compatible with a wide range of play space sizes
This solution does not give the user a perfect 1:1 visual-to-vestibular motion relationship. But it does let users engage room-scale movement to the extent that it is physically possible, while offering a compromise on VR locomotion that is likely as good as it gets. This is especially true given the necessary constraints of mainstream consumer VR: cost, complexity, extra peripherals/components/accessories, and the physical limitations of the play spaces most people will have available.
There's actually a demo of the concept that you can try, made by Ryan Sullivan/Deprecated Coder, called RIPMotion.
http://smirkingcat.software/ripmotion/
Worth noting that it's not a direct implementation of CAOTS as described in the paper, but the key principles are there: directionality indicated by controller heading, and intentionality signaled by the user. The controller can be held in hand (which I'd suggest is a better solution than clipping it to your belt for a number of reasons, among them that not everyone is wearing the appropriate attire), and velocity is measured from head bounce, not arm swing. I'd also suggest turning on beginner-to-advanced chaperone boundaries if you haven't already, because you will drift as you run in place, and having the bounds pop up as you drift makes it easy to recenter.
Also, the implementation is rough around the edges, especially in terms of the stride length. The ideal is for your body motion to feel directly responsible for the virtual motion; the paper proposes a detailed movement-scaling system that provides this feedback.
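One simple way to get that "my body is driving this" feeling is a per-user calibration pass. This is only a sketch of the general idea, not the paper's actual scaling system; the function name and signal names are my own assumptions:

```python
def calibrate_stride_gain(real_walk_speed, bounce_amplitude, window_s):
    """Hypothetical calibration step for a bounce-to-velocity mapping.

    Have the user walk a known real distance (so real_walk_speed is
    measurable), record their average head-bounce amplitude in meters
    over a sampling window of window_s seconds, then solve for the gain
    that makes the inferred virtual speed match the real speed.
    """
    # Inferred speed at gain = 1: amplitude per unit window time.
    inferred_unit_speed = bounce_amplitude / window_s
    return real_walk_speed / inferred_unit_speed
```

With a gain fitted this way, a user's normal walking bounce maps to roughly their normal walking speed, which should help close the gap between body motion and perceived stride length.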
Nonetheless, it's a great demo of the principles discussed in the paper, and you should give it a shot if you haven't already (and if you have, try it again with the minor modifications I suggest: holding the controller in hand, while taking note of things like scale, vection, the ability to look around, and moving through both room space and larger VE space).
I've talked with deprecatedcoder, and apparently he's made some modifications and will be releasing a new demo soon - looking forward to seeing that.