
Oculus Rift Kickstarter [Ended, $2.4 million funded]

1-D_FTW

Member
I think doing processing on the device makes no sense, especially in conjunction with a PC. Just focus on getting the data into the system as quickly as possible, and with as much temporal and spatial resolution as possible without compromising accuracy. USB3 would help, but probably a PCIe card with a proprietary connection would be best in terms of latency and throughput.

There's a lot to be said there. My memory is a little fuzzy, but I'm pretty sure Carmack was talking about the benefits of dual GPUs for this specific purpose. To have one of the GPUs handling that aspect. He made it sound like VR could absolutely devour GPU power if you gave it the chance, so I think the problems are probably bigger than we all assume.

EDIT: Although thinking back on it, it's possible he was simply referencing the type of resolutions that could be used in the future. That was a long keynote. Some of it's run together in my head.

Just fundamentally it's a bad input device. With a mouse/motion sensor/controller buttons you get an immediate value you can work on, but with Kinect you have to process that data to create the hand/body tracking information that the game can use. So there's an inevitable interim step that's difficult to remove.

Do wireless controllers add noticeable latency? Eg a 360/PS3 analog pad.

I'm a bit of a freak when it comes to latency, but I can say without question that I can notice the lag on a wireless 360 pad. It's why I just use a wired pad. It's not outrageous, but it's high enough that a human can detect it with the right games.
 

Durante

Member
There's a lot to be said there. My memory is a little fuzzy, but I'm pretty sure Carmack was talking about the benefits of dual GPU's for this specific purpose. To have one of the GPU's handling that aspect.
I could well imagine that. You need a lot of signal processing to make sense of (multiple) high-resolution (+depth) images, and GPUs are really good at that kind of processing. (I've previously done some work in realtime optical flow computation from high-res video on GPUs)
 
For someone who's just stumbled into this thread - does anyone know when they expect any version of this to hit the consumer market???
 

Zaptruder

Banned
Just fundamentally it's a bad input device. With a mouse/motion sensor/controller buttons you get an immediate value you can work on, but with Kinect you have to process that data to create the hand/body tracking information that the game can use. So there's an inevitable interim step that's difficult to remove.

Do wireless controllers add noticeable latency? Eg a 360/PS3 analog pad.

From what I understood, Carmack's real beef with the kinect's relatively massive latency was in the context of using it to track body position data to compensate for the lack of translational data from the accelerometers.

I don't know why they don't just stick a 3-axis accelerometer or two into the Rift - found one for about 20 bucks (a little chip with a board and soldering points).

Input wise - using it as an input device for gesturing and what not... I honestly don't think it'd be that bad. Maybe not so great for hand movement, but macro body movement like walking, jumping, crouching, I think the overall amount of time it takes to execute those movements far outweigh any additional latency of the kinect device.
 

1-D_FTW

Member
For someone who's just stumbled into this thread - does anyone know when they expect any version of this to hit the consumer market???

There isn't an ETA. They want to do it sooner rather than later, but it really depends how quickly they solve some of the issues/get developer support/get access to better screens/etc.

Honestly, after really listening to them speak the past week, I'm not really expecting a consumer version till the second half of 2013. That's just one person's opinion, but if I were betting, that's where I'd place my money.
 

mrklaw

MrArseFace
From what I understood, Carmack's real beef with the kinect's relatively massive latency was in the context of using it to track body position data to compensate for the lack of translational data from the accelerometers.

I don't know why they don't just stick a 3-axis accelerometer or two into the rift - found one for about 20 bucks (a little chip with board and soldering points).

Input wise - using it as an input device for gesturing and what not... I honestly don't think it'd be that bad. Maybe not so great for hand movement, but macro body movement like walking, jumping, crouching, I think the overall amount of time it takes to execute those movements far outweigh any additional latency of the kinect device.

Good point. Do you really need absolute translational positioning? Surely temporary is enough to register a lean forward/back/left/right

I wasn't specifically talking about kinect as a substitute for that though, more as an input device for interacting with the environment
 

TTP

Have a fun! Enjoy!
From what I understood, Carmack's real beef with the kinect's relatively massive latency was in the context of using it to track body position data to compensate for the lack of translational data from the accelerometers.

I don't know why they don't just stick a 3-axis accelerometer or two into the Rift - found one for about 20 bucks (a little chip with board and soldering points).

They have gyros and accelerometers already in the Rift, but those can't provide reliable positional data due to drift issues.

Besides Kinect, Carmack tried both the TrackIR and the Razer Hydra tech to get that data but the former is limited by line of sight while the latter is limited by lack of accuracy.

I don't know why Carmack hasn't tried the Move yet. Placing it so that its sphere sticks out from the top of the forehead could work very well for position tracking in most cases (within 180° at least). Heck, since we are talking research here and not cool looking product design, he could place it diagonally like a bowsprit to cover an even bigger area.

Alternatively, you could just use the TrackIR tech and simply place more trackers on the thing to cover everything. In all honesty though, the magnetic field solution of the Hydra would be more elegant. Perhaps current magnetic sensors are better than what's in the Hydra?
 

Zaptruder

Banned
They have gyros and accelerometers already in the Rift, but those can't provide reliable positional data due to drift issues.

Besides Kinect, Carmack tried both the TrackIR and the Razer Hydra tech to get that data but the former is limited by line of sight while the latter is limited by lack of accuracy.

I don't know why Carmack hasn't tried the Move yet. Placing it so that its sphere sticks out from the top of the forehead could work very well for position tracking in most cases (within 180° at least). Heck, since we are talking research here, he could place it diagonally like a bowsprit to cover an even bigger area.

The way he talked about it made it sound like the Rift lacked translational positional data (i.e. moving along the X/Y, Y/Z, X/Z planes), which he was using all those other devices to compensate for.

So even though you get some drift with accelerometers over time, it seems like they only do so after a couple seconds or more - which is more than enough time for even the Kinect to reinforce the absolute positioning data to stop the excessive drift.
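To make the drift problem concrete, here's a toy Python sketch (simulated sensor noise, not real Rift data): even with zero true motion, double-integrating noisy accelerometer readings produces a position estimate that wanders.

```python
import random

# Toy illustration: a stationary headset whose accelerometer reports
# small zero-mean noise. Integrating accel -> velocity -> position
# turns that noise into ever-growing positional drift.
random.seed(0)
dt = 0.001             # 1 kHz sample rate
velocity = 0.0
position = 0.0
for _ in range(5000):  # 5 seconds of samples
    accel = random.gauss(0.0, 0.05)  # m/s^2 of pure sensor noise
    velocity += accel * dt
    position += velocity * dt

print(abs(position) > 0)  # True: the estimate drifted though nothing moved
```

Which is why an absolute reference (Kinect, camera, whatever) only needs to be sampled every second or so to clamp the wander.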
 

TTP

Have a fun! Enjoy!
The way he talked about it made it sound like the Rift lacked translational positional data (i.e. moving along the X/Y, Y/Z, X/Z planes), which he was using all those other devices to compensate for.

So even though you get some drift with accelerometers over time, it seems like they only do so after a couple seconds or more - which is more than enough time for even the Kinect to reinforce the absolute positioning data to stop the excessive drift.

That's true, but I think the issue with Kinect with regard to position tracking is the low res of its 3D camera. I don't think it can accurately register translations at the millimeter level. And even if you can make it work by placing it closer to the head, there is still the latency issue.
 

mrklaw

MrArseFace
I think the Rift has translational sensors, but Carmack isn't supporting them in Doom 3.

Still not sure why you need absolute positioning for translation. Important for rotation obviously, but for translation, temporary movement should be enough for now.
 

TTP

Have a fun! Enjoy!
I think the Rift has translational sensors, but Carmack isn't supporting them in Doom 3.

Still not sure why you need absolute positioning for translation. Important for rotation obviously, but for translation, temporary movement should be enough for now.

As far as I understand, accelerometers do actually provide translational information (or rather, you can extrapolate it from the data), but that info gets more and more unreliable over time due to drift issues. I think that's why Carmack isn't doing position tracking with those.

If you have a Move it's very simple to see how drifting can fuck up the tracking when there is no absolute position tracker to compensate for that. Simply hide the sphere as you waggle around.
 

mrklaw

MrArseFace
As far as I understand, accelerometers do actually provide translational information (or rather, you can extrapolate it from the data), but that info gets more and more unreliable over time due to drift issues. I think that's why Carmack isn't doing position tracking with those.

If you have a Move it's very simple to see how drifting can fuck up the tracking when there is no absolute position tracker to compensate for that. Simply hide the sphere as you waggle around.

I understand. My point is - why do you need it to be stable? Do you really need to track constant movement, or only occasional shifts? Or do you mean that the drifting can cause phantom inputs where the game will think you are moving when you aren't?
 

Zaptruder

Banned
I understand. My point is - why do you need it to be stable? Do you really need to track constant movement, or only occasional shifts? Or do you mean that the drifting can cause phantom inputs where the game will think you are moving when you aren't?

Exactly this.

It wouldn't do to be standing still doing nothing while your view starts tilting and shifting around.
 
They have gyros and accelerometers already in the Rift, but those can't provide reliable positional data due to drift issues.

Besides Kinect, Carmack tried both the TrackIR and the Razer Hydra tech to get that data but the former is limited by line of sight while the latter is limited by lack of accuracy.

I don't know why Carmack hasn't tried the Move yet. Placing it so that its sphere sticks out from the top of the forehead could work very well for position tracking in most cases (within 180° at least). Heck, since we are talking research here and not cool looking product design, he could place it diagonally like a bowsprit to cover an even bigger area.

Alternatively, you could just use the TrackIR tech and simply place more trackers on the thing to cover everything. In all honesty though, the magnetic field solution of the Hydra would be more elegant. Perhaps current magnetic sensors are better than what's in the Hydra?

You don't even need Move hardware to handle position the way the Move does. You just need to put markers (a couple colored dots, maybe) on the front of the housing, and use a webcam.
 

DieH@rd

Banned
You don't even need Move hardware to handle position the way the Move does. You just need to put markers (a couple colored dots, maybe) on the front of the housing, and use a webcam.

Webcams have a small field of view. Carmack wants tech that will enable us to move a little, crouch, and look at the ground while we are kneeling.

He wants to give us something fundamentally cool. :)
 

TTP

Have a fun! Enjoy!
You don't even need Move hardware to handle position the way the Move does. You just need to put markers (a couple colored dots, maybe) on the front of the housing, and use a webcam.

Well, of course. But you'd need to program all the sphere/dot position/size/distance tracking by yourself. Not to mention merging that data with the inertial sensor data. I was suggesting using the Move because it comes with a ready-to-use dev kit.
 

mrklaw

MrArseFace
Webcams have a small field of view. Carmack wants tech that will enable us to move a little, crouch, and look at the ground while we are kneeling.

He wants to give us something fundamentally cool. :)

Aren't there basic solutions that could be used? Really simple things like a mercury switch that registers horizontal and uses that to calibrate the main sensors?

Or build a webcam into the HMD that uses very basic processing to detect translational movement.
 

DieH@rd

Banned
Aren't there basic solutions that could be used? Really simple things like a mercury switch that registers horizontal and uses that to calibrate the main sensors?

Or build a webcam into the HMD that uses very basic processing to detect translational movement.

All gyros become uncalibrated after a short time [and violent use], but accelerometers become unusable after only a few seconds of random shaking. They definitely need some external tracking device that can provide additional positional data.
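That inertial-plus-external-tracker correction is essentially a complementary filter: trust the fast (but drifting) inertial estimate moment to moment, and pull it toward a slower absolute measurement so error can't accumulate. A minimal sketch - the `fuse` helper, the alpha constant, and the numbers are all made up for illustration:

```python
def fuse(inertial_estimate, absolute_measurement, alpha=0.9):
    """Complementary filter: mostly trust the fast (but drifting)
    inertial estimate, blended with a slower absolute measurement
    (e.g. from an external camera) to cancel accumulated drift."""
    return alpha * inertial_estimate + (1.0 - alpha) * absolute_measurement

# Toy example: the inertial track has drifted to 1.10 m while the
# camera reports the true position as 1.00 m.
corrected = fuse(1.10, 1.00)
print(round(corrected, 2))  # 1.09: nudged back toward the camera reading
```

Run at every frame, the pull toward the absolute reading keeps the drift bounded while the inertial data still supplies the low-latency motion.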
 

aeolist

Banned
Just went out to Quakecon for a bit since I live nearby and tried out the Rift demo.

Keep in mind that I have really bad eyesight and had to keep my glasses on to get anything from it but I was pretty unimpressed. With the lenses further out from my eyes the parallax effect and field of view were both pretty paltry. Motion tracking was very smooth and intuitive though, and the image quality was pretty good.

I'd say if you need corrective lenses and either can't or won't wear contacts (like me) it's not worthwhile.
 
Just went out to Quakecon for a bit since I live nearby and tried out the Rift demo.

Keep in mind that I have really bad eyesight and had to keep my glasses on to get anything from it but I was pretty unimpressed. With the lenses further out from my eyes the parallax effect and field of view were both pretty paltry. Motion tracking was very smooth and intuitive though, and the image quality was pretty good.

I'd say if you need corrective lenses and either can't or won't wear contacts (like me) it's not worthwhile.


Sounds... not that good. I also wear glasses and won't wear contacts that much. Hope they are working on a solution for us ;)
 

aeolist

Banned
Sounds... not that good. I also wear glasses and won't wear contacts that much. Hope they are working on a solution for us ;)

You'd have to have a set of prescription lenses for the headset and slot them in when you wanted to use it. There's really no other way to do it with LCDs and optics like this.
 

TTP

Have a fun! Enjoy!
Just went out to Quakecon for a bit since I live nearby and tried out the Rift demo.

Keep in mind that I have really bad eyesight and had to keep my glasses on to get anything from it but I was pretty unimpressed. With the lenses further out from my eyes the parallax effect and field of view were both pretty paltry. Motion tracking was very smooth and intuitive though, and the image quality was pretty good.

I'd say if you need corrective lenses and either can't or won't wear contacts (like me) it's not worthwhile.

Are you longsighted or nearsighted?
 
You'd have to have a set of prescription lenses for the headset and slot them in when you wanted to use it. There's really no other way to do it with LCDs and optics like this.

Couldn't they just "adjust" some "glasses" (matched to your eyesight) right in front of the LCDs? I'm not sure how expensive they'd be, but they wouldn't need to be that big, I guess.
 

DieH@rd

Banned
Nearsighted with astigmatism. I tried it without my glasses but it was just a blur.

People who can focus their eyes at "infinity" can use the Rift without problems. Farsighted people are OK, slightly nearsighted people are also OK, but people with stronger nearsightedness are screwed.
 

aeolist

Banned
I do hope that there's wide support for the Rift so someone can hack head-tracking using cameras into games more easily and we can at least get parallax on normal monitors.
 
There's a lot to be said there. My memory is a little fuzzy, but I'm pretty sure Carmack was talking about the benefits of dual GPU's for this specific purpose. To have one of the GPU's handling that aspect. He made it sound like VR could absolutely devour GPU power if you gave it the chance, so I think the problems are probably bigger than we all assume.

This would actually be an awesome and valid use for multi-GPU setups. SLI and Crossfire are actually not that useful for most use cases, but dedicating a GPU for VR processing? I would be all over that.

You'd have to have a set of prescription lenses for the headset and slot them in when you wanted to use it. There's really no other way to do it with LCDs and optics like this.

For the early adopters, one solution would probably be to pull the lenses out of a spare pair of glasses and tape them to the Rift. That's probably what I'll end up having to do, as I am severely nearsighted (nearly -8 diopters in both eyes, plus astigmatism).
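For anyone wondering why strong myopia is such a problem here: the Rift's optics present the screen at (near) optical infinity, and an uncorrected nearsighted eye can only focus out to its far point, roughly 1/|prescription| metres. A quick back-of-envelope check (the helper name is just for illustration):

```python
def far_point_cm(diopters):
    """Farthest distance (in cm) an uncorrected nearsighted eye can
    focus, approximated as 100 / |prescription in diopters|."""
    return 100.0 / abs(diopters)

# A -8 D eye can only focus out to about 12.5 cm, so a display
# collimated to optical infinity is hopelessly blurred without
# corrective lenses.
print(far_point_cm(-8))  # 12.5
```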
 

efyu_lemonardo

May I have a cookie?
Interesting conversations in this thread.
I'm afraid I don't have time to watch Carmack's keynote, but some of the issues brought up here seem to be solvable if you are willing to accept some small compromises.

For example, it can't be too difficult to get hold of a medium to high resolution camera with a wide enough field of view to do 110 degree tracking - which is what you'd need to match up real world movement with the headset's field of view.
I know Nintendo specifically chose the camera in the Wiimote due to its high refresh rate and (relatively) wide field of view. Remember, we can always sacrifice bandwidth for accuracy and speed as in the case of the Wii's IR camera.

In addition, and hopefully somebody can correct me if I'm wrong, isn't it just as possible to use the same combination of optics and software correction to make for a wider field of view in the camera? Just like what is being done on the display, only you'd require a lens of opposite focal length, and these may be more expensive/difficult to come by with the required specs.

With this in mind, and doing only gross positional tracking of markers on the outside of the Rift to compensate for loss of accuracy in the accelerometer/gyro, I think you should be able to get sufficient data to allow for limited motion in place.
It may be wiser to have the opposite setup, with one or two wide FOV cameras on the outside of the Rift, and a simple sensor bar solution, like Nintendo.

Another thing to consider is if you are willing to sacrifice mobility a bit, then having a down-looking camera on the ceiling would surely provide more than enough data to track the top of the head and could then be used as a suitable reference for linear motion in the X-Y plane, as well as aid in identifying large body movements.

edit: I wanted to add one more point, related to response time. One of the lessons developers have learned this gen is that natural response to a user's motion has to be as close as possible to real time to feel truly authentic. This is something I believe we still can't do all that well with today's consumer level tech. That's why for the most part developers have relied on various hacks and cheats so far, where a certain motion (or rather, a "family" of similar, related motions) corresponds to a command that is executed only after the gesture is confirmed.
This kind of approach is less suitable for a general-purpose tracking algorithm than the more abstract one, but I'd imagine the upside is it's probably much easier to develop and test these kinds of algorithms.
 

mrklaw

MrArseFace
Why need external equipment at all? Why not just have a camera on the Rift pointing up, and use simple image processing to determine left/right/forward/back movements? (not much good for up/down though)
 
I wish someone brought up its compatibility with current 3d standards.

Like can software make the device just work with Killzone 3? Or with an HDMI signal from a 3d movie?

Why need external equipment at all? Why not just have a camera on the Rift pointing up, and use simple image processing to determine left/right/forward/back movements? (not much good for up/down though)

This wouldn't include depth. I think the solution would be sound waves working out where you are in the real world, because that would be super accurate and really fast.
 

efyu_lemonardo

May I have a cookie?
Why need external equipment at all? Why not just have a camera on the Rift pointing up, and use simple image processing to determine left/right/forward/back movements? (not much good for up/down though)

This was my initial thought as well, similarly to how a high resolution optical or laser mouse can provide very accurate positional information on varying surfaces, without the need for user calibration each time you switch surfaces.

The main difference is probably that this works so well mainly because of the very close proximity to the surface, and the very high degree of control over external noise (there are effectively no other sources of light underneath the mouse). I'd imagine once you eliminate these two advantages, the problem isn't as straightforward anymore. I'm not saying we couldn't solve it with off-the-shelf parts today, but further research is necessary to achieve something as reliable as a mouse.
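For the curious, the mouse-style approach boils down to estimating the translation between successive frames. Here's a self-contained sketch using phase correlation (NumPy required; purely illustrative - not how any shipping tracker necessarily does it):

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer (dy, dx) translation between two grayscale
    frames via phase correlation -- the same family of trick an optical
    mouse sensor performs in hardware."""
    f_a = np.fft.fft2(frame_a)
    f_b = np.fft.fft2(frame_b)
    cross_power = f_a * np.conj(f_b)
    cross_power /= np.abs(cross_power) + 1e-12  # keep only phase
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map wrapped indices back to signed shifts
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Toy check: shift a random texture by (3, 5) and recover the offset.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, 5), axis=(0, 1))
print(estimate_shift(shifted, frame))  # (3, 5)
```

Real-world frames add noise, rotation, and lighting changes, which is exactly where the mouse's controlled environment pays off.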


This wouldn't include depth. I think the solution would be sound waves processing where you are in the real world. Because that would be super accurate and really fast.

I don't see how using sound waves gives you any advantage in terms of signal to noise ratio or versatility of the signal in a user environment.
 
This was my initial thought as well, similarly to how a high resolution optical or laser mouse can provide very accurate positional information on varying surfaces, without the need for user calibration each time you switch surfaces.

The main difference is probably that this works so well mainly because of the very close proximity to the surface, and very high degree of control over external noise (there are effectively no other sources of light underneath the mouse). I'd imagine once you eliminate these two advantages, the problem isn't as straight forward anymore. I'm not saying we couldn't solve it with off the shelf parts today, but further research is necessary to achieve something as reliable as a mouse.




I don't see how using sound waves gives you any advantage in terms of signal to noise ratio or versatility of the signal in a user environment.
It could tell exactly where your head and neck are. Sending that information into VR would be highly useful. Even better would be if it could send your whole body language, bringing into VR the dream scenario Carmack wants of being able to duck, kneel, and reposition yourself.

Look at this device that uses sound waves to detect finger motion. http://www.technologyreview.com/view/428350/the-most-important-new-technology-since-the-smart/#ooid=VkYW04NTod5cDwz247vs1gGVcq_jw0X4

Kinect could work too. I hope the best for this company; they've got to find the right partner, and VR could easily come to be.
 

efyu_lemonardo

May I have a cookie?
It could tell exactly where your head and neck are. Sending that information into VR would be highly useful. Even better would be if it could send your whole body language, bringing into VR the dream scenario Carmack wants of being able to duck, kneel, and reposition yourself.

Look at this device that uses sound waves to detect finger motion. http://www.technologyreview.com/view/428350/the-most-important-new-technology-since-the-smart/#ooid=VkYW04NTod5cDwz247vs1gGVcq_jw0X4

Kinect could work too. I hope the best for this company; they've got to find the right partner, and VR could easily come to be.

That tech is certainly optical and not based on sound, at least not exclusively. You'd need insanely high frequencies to get wavelengths of sound down to the scales they are talking about. On the order of tens of megahertz!

There are no details on the company's website, but I'd imagine they are using some kind of infra-red solution at the base of their technology.
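A quick sanity check on the numbers, taking the speed of sound in air as roughly 343 m/s (wavelength = speed / frequency):

```python
def wavelength_mm(frequency_hz, speed_of_sound=343.0):
    """Acoustic wavelength in air, in millimetres."""
    return speed_of_sound / frequency_hz * 1000.0

# A typical ultrasonic transducer around 34 kHz has a ~1 cm
# wavelength -- far too coarse for finger-scale tracking.
print(wavelength_mm(34_300))     # 10.0
# Sub-millimetre scales push you into megahertz-range ultrasound:
print(wavelength_mm(3_430_000))  # 0.1
```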
 

mrklaw

MrArseFace
I don't see how using sound waves gives you any advantage in terms of signal to noise ratio or versatility of the signal in a user environment.

Using something like sonar to ping distances would allow you to know if someone is moving towards/away from obstacles (eg walls). You can get cheap sonar sensors in DIY stores to measure room sizes etc
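The ping idea is just a round-trip time calculation. A trivial sketch (the helper name is invented; ~343 m/s assumed for sound in air):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def sonar_distance_m(echo_time_s):
    """Distance to an obstacle from a sonar ping's round-trip echo
    time. The pulse travels out and back, hence the divide by two."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# A 12 ms round trip puts the wall roughly two metres away.
print(round(sonar_distance_m(0.012), 3))  # 2.058
```

Cheap and fast for "am I about to walk into something", though it only gives a range, not a position.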
 

1-D_FTW

Member
I wish someone brought up its compatibility with current 3d standards.

Like can software make the device just work with Killzone 3? Or with an HDMI signal from a 3d movie?



This wouldn't include depth. I think the solution would be sound waves processing where you are in the real world. Because that would be super accurate and really fast.

It's not compatible that way. The optics completely distort the view. You need software to invert the distortion so that it appears normal when viewing it.

Anything that isn't inverting that image with software isn't going to work with the display. Well, not technically true. It'll work, it'll just have major fisheye distortion.
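For the curious, that software warp is typically a radial polynomial. A hypothetical sketch - the k1/k2 coefficients here are invented, whereas real ones are tuned to the specific lens:

```python
def radial_warp(x, y, k1=0.22, k2=0.24):
    """Radially distort a point in normalized screen coordinates
    (origin at the lens centre). Rendering normally and then applying
    a warp like this in a post-process is how software can cancel the
    opposite distortion introduced by the headset's optics."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# The centre is untouched; points farther out are scaled more.
print(radial_warp(0.0, 0.0))                              # (0.0, 0.0)
print(tuple(round(v, 3) for v in radial_warp(0.5, 0.0)))  # (0.535, 0.0)
```

Feed the distorted view through the physical lens and, with the right coefficients, the two distortions cancel - which is why content that skips this step looks fisheyed.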
 


TTP

Have a fun! Enjoy!
Interesting conversations in this thread.
I'm afraid I don't have time to watch Carmack's keynote, but some of the issues brought up here seem to be solvable if you are willing to accept some small compromises.

For example, it can't be too difficult to get a hold of a medium to high resolution camera with a wide enough field of view to do 110 degree tracking - which is what you'd need to match up real world movement with headset's field of view.
I know Nintendo specifically chose the camera in the Wiimote due to its high refresh rate and (relatively) wide field of view. Remember, we can always sacrifice bandwidth for accuracy and speed as in the case of the Wii's IR camera.

In addition, and hopefully somebody can correct me if I'm wrong, isn't it just as possible to use the same combination of optics and software correction to make for a wider field of view in the camera? Just like what is being done on the display, only you'd require a lens of opposite focal length, and these may be more expensive/difficult to come by with the required specs.

With this in mind, and doing only gross positional tracking of markers on the outside of the Rift to compensate for loss of accuracy in the accelerometer/gyro, I think you should be able to get sufficient data to allow for limited motion in place.
It may be wiser to have the opposite setup, with one or two wide FOV cameras on the outside of the Rift, and a simple sensor bar solution, like Nintendo.

Another thing to consider is if you are willing to sacrifice mobility a bit, then having a down-looking camera on the ceiling would surely provide more than enough data to track the top of the head and could then be used as a suitable reference for linear motion in the X-Y plane, as well as aid in identifying large body movements.

edit: I wanted to add one more point, related to response time. One of the lessons developers have learned this gen is that natural response to a user's motion has to be as close as possible to real time to feel truly authentic. This is something I believe we still can't do all that well with today's consumer level tech. That's why for the most part developers have relied on various hacks and cheats so far, where a certain motion (or rather, a "family" of similar, related motions) corresponds to a command that is executed only after the gesture is confirmed.
This kind of approach is less suitable for a general-purpose tracking algorithm than the more abstract one, but I'd imagine the upside is it's probably much easier to develop and test these kinds of algorithms.

This is interesting.

One of the ideas brought up during the Rift panel was to have cameras pointing outwards in order to allow AR applications. Basically you would see the real world around you through the Rift, with superimposed computer graphics.

Now, since you have cameras looking around already for the purpose of AR, one could easily use something like the markerless Sony Smart AR tech to track a few reference points in the environment to be used as an anti-drift measure and to provide positional data.

Of course the camera would need to be able to cope with dark environments (night vision mode or something).

Even better would be something like Kinect attached to the HMD, scanning the environment around you in hi-def and at 120fps. If you can make it work fast and precisely enough, you wouldn't need inertial sensors at all.
 
Yes. This is the VR headset that Carmack was showing off at E3 and Quakecon 2012.

Ah, thanks for answering this.

So what's his relation to this project now? He is the inventor of the device right? I take it he has found this company to buy his concept/develop it with further input from himself?
 

DieH@rd

Banned
Ah, thanks for answering this.

So what's his relation to this project now? He is the inventor of the device right? I take it he has found this company to buy his concept/develop it with further input from himself?

No, he only worked on software solutions for reducing motion tracking latency [did a great job] and on adapting his game for this VR device. His name also opened a lot of doors for Palmer Luckey [creator of the Oculus Rift].
 

scitek

Member
It's not compatible that way. The optics completely distort the view. You need software to invert the distortion so that it appears normal when viewing it.

Anything that isn't inverting that image with software isn't going to work with the display. Well, not technically true. It'll work, it'll just have major fisheye distortion.

Is it among their goals to make it usable as an all-around HMD? i.e. coming up with a way to correctly view movies and other 3D content on it? If not, I don't see how it could have any future outside of the people interested in it for a very niche gaming experience.
 

1-D_FTW

Member
Is it among their goals to make it usable as an all-around HMD? i.e. coming up with a way to correctly view movies and other 3D content on it? If not, I don't see how it could have any future outside of the people interested in it for a very niche gaming experience.

I'm sure some developers will play around with it, but it's not really meant to be an HMD. If that's what you're looking for, the HMZ is probably your best bet for the foreseeable future. 110 degree FOV really doesn't work for movies. And things like the subpar resolution are overlooked because of how it's the best implementation of VR ever done (and brings hope to things getting much better, quickly). That's why all those people involved are excited. It's the VR aspect.
 