HL2 + Johnny Lee head tracking effect + faceAPI = OMG

Give us your opinion after 2-3 hours of straight play. In particular please note any instances of discomfort or the system doing something that you didn't particularly intend.
 
Zaptruder said:
Give us your opinion after 2-3 hours of straight play. In particular please note any instances of discomfort or the system doing something that you didn't particularly intend.
It's a good idea, but I must ask: If my opinion is no good after having used the program for about a half hour, just how valuable is yours after (assumedly) no time? If you've got a 'cam and a Source game, perhaps you should pick up the mouse also? ;)

I'll probably play some coast levels and Nova Prospekt tomorrow in any case.
 
Botolf said:
It's a good idea, but I must ask: If my opinion is no good after having used the program for about a half hour, just how valuable is yours after (assumedly) no time? If you've got a 'cam and a Source game, perhaps you should pick up the mouse also? ;)

I'll probably play some coast levels and Nova Prospekt tomorrow in any case.

I'm lazy. Far too lazy to redownload a Source game off Steam and play it for 2-3 hours straight to provide first-hand experience. That doesn't really invalidate what I'm saying about head-based gestures being gimmicky... at least in the sense that they won't be for everyone.

Unless whatever you're using can do the parallax perspective thing. Can it? I'd definitely give it a shot then.
 
GeneralIroh said:
Check out this awesome demonstration

Head tracking has been around for some time now. We've seen the EyeToy for the PlayStation 2, and soon Project Natal. What about that old web camera you have? It isn't about the camera itself, just the software that drives it, and even this can make Project Natal look dumb.


http://www.moddb.com/mods/city-17-episode-1/news/city-17-episode-1-3-day-update-head-tracking

http://www.youtube.com/watch?v=qWkpdtFZoBE&feature=player_embedded

It's all about the software and a standard web camera. Who needs overpriced gear to experience 4D?

http://www.seeingmachines.com/product/faceapi/

??

Natal does this. In fact, Natal actually tracks the entire body, not just the face.
 
I look forward to trying this. The devil is in the details, but if the implementation and feel are just right, it could make PC FPS games a lot more fun to play.
 
Davidion said:
I look forward to trying this. The devil is in the details, but if the implementation and feel are just right, it could make PC FPS games a lot more fun to play.

I agree. I cannot wait to see what developers come up with because I am sure some will be highly original.
 
The natural head movement and peering around corners are the two best functions, imo. It means I can keep playing a PC game as I normally do, but with some extra camera touches: no more static head camera, and it gives me a little looking room around the screen.

OMG, you know what would be awesome? If your character in TF2 had the same facial expression as the player :lol You'd really know when you are pissing off that sniper.
 
I'm a little tired of people saying "Fuck Natal my $20 webcam does headtracking!" when it's pretty obvious Natal is a console-specific solution with a lot of proprietary hardware and software included. For one thing, your webcam signal just sends a basic picture back, more likely than not. It doesn't include a specialized depth sensor, or a microphone array. And so no matter how they show it in the videos with that nice average lighting, basic webcam solutions still hit issues the moment the visual feed gets too dark, or too washed out, because they're only working from a series of still pictures with no true depth information relayed.
 
Lord Phol said:
Pretty cool stuff, and though I agree this is a nice concept for console shooters, I'm not seeing how Natal would improve it. You only need a normal web camera for this to work, right? So what extra would Natal bring to it that the 360 camera or PS3 Eye wouldn't be able to?

Natal is a lot more accurate at measuring depth.

This simply measures the distance between two points it can make out (like your eyes) and scales based on how far apart or close together they appear to the camera. It only knows where those points are.

Natal floods the area with infrared light and reads it back with an IR camera to build a depth map, so it knows where your entire body is in 3D space.
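For anyone wondering what the two-point approach amounts to in practice, here's a rough sketch of the pinhole-camera math. It's illustrative only, not faceAPI's actual code, and the camera numbers are assumptions:

```python
import math

# Assumed camera properties (illustrative, not any specific webcam)
IMAGE_WIDTH_PX = 640
HORIZONTAL_FOV_DEG = 56.0          # typical cheap webcam field of view
REAL_EYE_SPACING_CM = 6.3          # average adult interpupillary distance

# Focal length in pixels for a pinhole camera: f = (width / 2) / tan(fov / 2)
FOCAL_PX = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))

def head_distance_cm(eye_spacing_px: float) -> float:
    """Estimate distance to the head from how many pixels apart the eyes appear."""
    return REAL_EYE_SPACING_CM * FOCAL_PX / eye_spacing_px

def head_offset_cm(eye_midpoint_x_px: float, distance_cm: float) -> float:
    """Estimate sideways head offset from where the eye midpoint sits in the image."""
    return (eye_midpoint_x_px - IMAGE_WIDTH_PX / 2) * distance_cm / FOCAL_PX

# Example: eyes 80 px apart, midpoint 60 px right of centre
d = head_distance_cm(80.0)
print(round(d, 1), "cm away,", round(head_offset_cm(380.0, d), 1), "cm off-centre")
```

That pixel spacing is basically the whole signal a plain webcam gives you, which is why the depth-sensor argument keeps coming up.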
 
Zaptruder said:
I'm lazy. Far too lazy to redownload a Source game off Steam and play it for 2-3 hours straight to provide first-hand experience. That doesn't really invalidate what I'm saying about head-based gestures being gimmicky... at least in the sense that they won't be for everyone.

Unless whatever you're using can do the parallax perspective thing. Can it? I'd definitely give it a shot then.
You're gonna have to explain what you mean by the parallax perspective thing (I don't follow). There was something in the video that looked like it let you rotate and angle your in-game head independently of your gun.
 
I like that. If there's an HL2 mod coming up that focuses on actual fun, there's a bit of tweaking ahead before they get it right. It almost feels like I have to move my desktop and webcam to the living room. Combine the rest of the inputs into a Wiimote + Nunchuk and you've got yourselves a stew.
 
pswii60 said:
What happens when someone is sat next to you on the sofa while you are playing?
Maintain a safe distance, lest a lean gesture turn into a headbutt gesture.

No idea, maybe it picks the most centred, closest face?
 
pswii60 said:
What happens when someone is sat next to you on the sofa while you are playing?

If this uses just a standard webcam, I doubt it would work well in a long-range environment even for one person. (Edit: any info on this? I'm curious whether the environment has to be close or not; I'm just assuming right now.)

No idea for two.
 
Someone must make this compatible with Arma 2 and compare it to the newest TrackIR. I bet it is comparable when it comes to accuracy. Maybe some more processing power is needed for analysing the picture.
 
pswii60 said:
What happens when someone is sat next to you on the sofa while you are playing?

I would guess if the person was 'new' to the camera it would ignore them.

If you were both there from the beginning, I guess it would ask you to choose who's playing. Head tracking would obviously not be suitable for multiplayer.

I think, for a variety of reasons, this might be more suited to desktop play than sofa play... I think a lot of the gestures talked about probably make more sense with a display closer to your head.

Robustness at a distance might also be a problem for sofa play. Your data is going to get fuzzier with distance from the camera, whether you have a depth reading or not. I'd be interested to know if there are techniques that work robustly at reasonably significant distances.
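To put some rough numbers on the fuzziness, here's a quick back-of-the-envelope calculation using the same assumed 640-pixel, 56-degree webcam as in the sketch further up the thread:

```python
import math

IMAGE_WIDTH_PX = 640
HORIZONTAL_FOV_DEG = 56.0
REAL_EYE_SPACING_CM = 6.3

focal_px = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))

# How many pixels apart the eyes appear at desk vs. sofa distances
for distance_cm in (60, 120, 270):   # roughly 2 ft, 4 ft, 9 ft
    spacing_px = REAL_EYE_SPACING_CM * focal_px / distance_cm
    print(f"{distance_cm} cm: eyes span about {spacing_px:.0f} px")
```

At nine feet the eye spacing the tracker relies on is only around 14 pixels, so every pixel of jitter is a big chunk of the signal, depth reading or not.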
 
wayward archer said:
Natal is a lot more accurate at measuring depth.

This simply measures the distance between two points it can make out (like your eyes) and scales based on how far apart or close together they appear to the camera. It only knows where those points are.

Natal floods the area with infrared light and reads it back with an IR camera to build a depth map, so it knows where your entire body is in 3D space.
which is all pretty overkill for simple head tracking. he was asking what functionality would be added -- not what made natal different. having the camera know how far your nose sticks out won't do much for gameplay.
 
dfyb said:
which is all pretty overkill for simple head tracking. he was asking what functionality would be added -- not what made natal different. having the camera know how far your nose sticks out won't do much for gameplay.

In the video the guy does a move where he leans forward to zoom. Natal would allow a lot more precision with that move. Measuring depth better could also allow for just more flexible tracking all around, with a more precise range of motion. It has potential to add a lot of functionality beyond this basic stuff that just a webcam setup can't even dream of doing.
 
wayward archer said:
In the video the guy does a move where he leans forward to zoom. Natal would allow a lot more precision with that move. Measuring depth better could also allow for just more flexible tracking all around, with a more precise range of motion. It has potential to add a lot of functionality beyond this basic stuff that just a webcam setup can't even dream of doing.

I bet it's the way that they coded the engine and not the precision of the tech that resulted in those moves.
 
The control gestures are very cool but kind of gimmicky. My neck hurts already.

The best part for me would be the depth perception + natural movement. It really can make computer games more immersive, because your screen will behave like a window into another world instead of being a flat 2D plane. Bring it on.
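For reference, that "window into another world" effect is usually done with an off-axis (asymmetric) projection driven by the tracked head position, the same trick as Johnny Lee's Wiimote demo. Here's a minimal numpy sketch, assuming a roughly 52 cm wide screen and OpenGL conventions; it's illustrative, not necessarily what this mod does internally:

```python
import numpy as np

# Physical screen half-size in metres (assumed ~24" 16:10 monitor)
HALF_W, HALF_H = 0.26, 0.16

def off_axis_projection(head, near=0.05, far=100.0):
    """Build an asymmetric frustum so the screen acts like a window.

    head = (x, y, z) position of the viewer's eyes relative to the
    screen centre, in metres, with z = distance from the screen plane.
    """
    x, y, z = head
    # Frustum edges at the near plane, shifted opposite to the head offset
    left   = (-HALF_W - x) * near / z
    right  = ( HALF_W - x) * near / z
    bottom = (-HALF_H - y) * near / z
    top    = ( HALF_H - y) * near / z
    # Standard OpenGL-style frustum matrix
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Head 10 cm to the right of centre, 60 cm from the screen
print(off_axis_projection((0.10, 0.0, 0.60)))
```

The view is also translated by the negated head position each frame, so moving your head slides the frustum around and produces the parallax.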
 
I just tried this out and it works pretty much as perfectly as it seemed in that video. I had to use my PS Eye (and download other drivers for it) since I don't have a normal web cam, and it was only sitting about two or three feet away from my face, but it managed to do all the actions every time I tried. It only had issues when I leaned far out of the camera's view, sorta like how the Wii remote gets jumpy when you aim it outside its range.

One thing people should realize is that the HL2 mod in this thread doesn't let you do all of the commands at the same time. For example, you can't combine the head tilt for iron sights with leaning to peek around corners. The best combo was the full face tracking (basically the Johnny Lee one where it changes perspective) and leaning. It really lets you lean around corners carefully.

The best part for me would be the depth perception + natural movement. It really can make computer games more immersive, because your screen will behave like a window into another world instead of being a flat 2D plane. Bring it on.

That's pretty much what I came away with after trying this. The face tracking/perception, if augmented a bit, would be pretty awesome in games.

That's basically what the perception + leaning combo does in this program. Normal perception seems 1:1 with your face, but with lean on, it lets you make wider in-game movements with less head movement.
 
wayward archer said:
In the video the guy does a move where he leans forward to zoom. Natal would allow a lot more precision with that move. Measuring depth better could also allow for just more flexible tracking all around, with a more precise range of motion. It has potential to add a lot of functionality beyond this basic stuff that just a webcam setup can't even dream of doing.
it's not like the head is a very dynamic part of your body. i don't know if MS has released pics of the infrared view of NATAL, but the 3DV company MS worked with released pics showing how their Z-Cam works --

http://www.blogcdn.com/www.engadget.com/media/2008/01/zcam5.jpg


notice how his face is basically the same value uniformly? you aren't really getting much more data. the 3D camera works better for an entire body, but it doesn't add much to face tracking.
 
wayward archer said:
In the video the guy does a move where he leans forward to zoom. Natal would allow a lot more precision with that move. Measuring depth better could also allow for just more flexible tracking all around, with a more precise range of motion. It has potential to add a lot of functionality beyond this basic stuff that just a webcam setup can't even dream of doing.

For head tracking I'm not sure what more functionality you could want beyond translational and rotational tracking.

If visual-only solutions weren't robust enough to use, and the addition of a depth sensor could make them robust enough, then I can see an obvious advantage for Natal with this. But if it's robust enough as is, you may get diminishing returns from that extra data in this application.

I think the question of tracking at sofa distances is an interesting one, although I think it applies equally to any camera. I don't see how a depth reading would solve that problem if it's primarily linked to resolution (which I'm guessing it is). I'd be curious to see Sony's implementation in the PS3 SDK, since I presume they wouldn't release it if it wasn't usable at a reasonable distance.
 
Kulock said:
I'm a little tired of people saying "Fuck Natal my $20 webcam does headtracking!" when it's pretty obvious Natal is a console-specific solution with a lot of proprietary hardware and software included. For one thing, your webcam signal just sends a basic picture back, more likely than not. It doesn't include a specialized depth sensor, or a microphone array. And so no matter how they show it in the videos with that nice average lighting, basic webcam solutions still hit issues the moment the visual feed gets too dark, or too washed out, because they're only working from a series of still pictures with no true depth information relayed.

Considering Natal reportedly can't see black people as well, that would seem to be an issue for it too.
 
evilromero said:
Hmmm. Seems counter-intuitive to turn your head only to have to turn it back so you can see the screen.

Have a look at how it works in reality; they've figured it out. Just check a couple of videos on YouTube demonstrating TrackIR 5.

I miss the ability to turn the character's head instead of the body. Maybe it's a limitation of the engine?
 
When I try to load it, my webcam turns on for a second while it's loading and then shuts off... needless to say, it doesn't work in-game :/
 
Yea, not all the gestures in the HL2 mod play well together. Luckily, you can enable/disable individual ones.

Headlook - Would work better if it was more sensitive (more in-game movement). Doesn't work with iron-sights.

Ironsights - Works fairly well with most of the guns. Understandably doesn't do anything with grenades or the rocket launcher.

Lean - This works very well. Leaning right understandably conflicts with iron-sights. Aside from that, the only bug I found was that you can still fire your weapons while leaning (Not a real biggie).

Spin - This also works very well; it's distinct enough from the other gestures that it won't happen unintentionally often.

Long Zoom - This is the better of the zoom gestures, I think.



Some good combos:


-Ironsights + Spin + Long Zoom
-Lean + Spin + Long Zoom
-Lean + Headlook + Long Zoom

Still going to play through some levels with some of these gestures on later in the day. What could really make this package awesome would be a way to decouple the head and gun completely, so you'd use the mouse to aim the gun and waggle your head to manipulate the game camera. There could be an edge restraint, so you wouldn't be able to move your head around and shoot bullets out of your arse.
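On that decoupling idea, the edge restraint could just be a clamp on how far the head-driven camera offset is allowed to drift from the gun direction. A rough sketch of the logic; the function name, gain, and limits are invented for illustration, nothing from the actual mod:

```python
def apply_head_look(gun_yaw, gun_pitch, head_yaw_offset, head_pitch_offset,
                    max_offset_deg=30.0, gain=2.5):
    """Combine mouse aim with a clamped, head-driven camera offset.

    gun_yaw/gun_pitch come from the mouse; head_*_offset is the tracked
    head rotation in degrees. gain amplifies small head movements, and
    max_offset_deg is the 'edge restraint' keeping camera and gun together.
    """
    clamp = lambda v: max(-max_offset_deg, min(max_offset_deg, v))
    cam_yaw = gun_yaw + clamp(head_yaw_offset * gain)
    cam_pitch = gun_pitch + clamp(head_pitch_offset * gain)
    return cam_yaw, cam_pitch

# Mouse aiming at yaw 90, head turned 20 degrees right: the camera looks
# further right, but the gun (and your bullets) stay where the mouse points.
print(apply_head_look(90.0, 0.0, 20.0, 0.0))
```

Amplifying the head offset with a gain also means you never have to look away from the screen to glance around in-game.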
 
webrunner said:
When I try to load it, my webcam turns on for a second while it's loading and then shuts off... needless to say, it doesn't work in-game :/
Drivers installed? USB 2.0? Installed files in the right place? (Check this last one, the default directories might not install in the right place)
 
evilromero said:
Hmmm. Seems counter-intuitive to turn your head only to have to turn it back so you can see the screen.

Yes. Because your eyes cannot turn independently of the direction your head is facing.
 
Bah, I installed it and it loads up to the menu screen, but only Options and Quit work for me.
I select Load Test Map and nothing loads. I select Headtrack Options and nothing happens... :(
 
Templar Wizard said:
Bah, I installed it and it loads up to the menu screen, but only Options and Quit work for me.
I select Load Test Map and nothing loads. I select Headtrack Options and nothing happens... :(
Some stuff didn't install into the right place.

Reinstall, but change the install directories. For some reason, it installs into Program Files/Valve/Steam... , instead of Program Files/Steam...
 
dfyb said:
which is all pretty overkill for simple head tracking. he was asking what functionality would be added -- not what made natal different. having the camera know how far your nose sticks out won't do much for gameplay.

It's very simple.

3-D GUI. It's going to be a whole new Windows interface. The simple interaction is only fully realized when the camera can properly calculate our POV and the point of interaction (somewhere in mid-air) is properly detected. It's not going to be the boring crap they showed (wave left for left, wave right, etc.). Hopefully, we'll actually be clicking buttons and navigating in mid-air.
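One plausible way a camera could turn a mid-air point into a click target is to cast a ray from the tracked eye position through the tracked fingertip and intersect it with the screen plane. A toy sketch; the function name and all coordinates are made up for illustration:

```python
import numpy as np

def midair_cursor(eye, fingertip, screen_z=0.0):
    """Project an eye-through-fingertip ray onto the screen plane z = screen_z.

    eye and fingertip are 3D points in metres in camera/screen space, with
    the screen lying in the plane z = screen_z. Returns the (x, y) point on
    the screen the user is pointing at, or None if the ray never reaches it.
    """
    eye, fingertip = np.asarray(eye, float), np.asarray(fingertip, float)
    direction = fingertip - eye
    if abs(direction[2]) < 1e-9:
        return None
    t = (screen_z - eye[2]) / direction[2]
    if t <= 0:
        return None
    hit = eye + t * direction
    return hit[0], hit[1]

# Eye 60 cm from the screen, fingertip 35 cm out and slightly up-right
print(midair_cursor(eye=(0.0, 0.0, 0.60), fingertip=(0.05, 0.03, 0.35)))
```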


and I don't know why he didn't try to get it working on a game that better supports headtracking
 
Mindlog said:
It's very simple.

3-D GUI. It's going to be a whole new Windows interface. The simple interaction is only fully realized when the camera can properly calculate our POV and the point of interaction (somewhere in mid-air) is properly detected. It's not going to be the boring crap they showed (wave left for left, wave right, etc.). Hopefully, we'll actually be clicking buttons and navigating in mid-air.
with your head?


Mindlog said:
and I don't know why he didn't try to get it working on a game that better supports headtracking
probably has to do with the Source SDK being accessible and popular
 
Is there an appropriate cheap camera for me to do freetrack/faceapi/etc in a home theater environment? i.e. where I am like nine feet from the screen instead of two? Would this require a zoom camera?
 
Doesn't work here either :(

It loads the head tracking 'game', and I can select everything and get in-game, but it gives me a non-stop zoomed-in view, and when I move my head nothing happens. Should I select more things in the head tracking options screen than just zoom in?

When I do, the screen turns black and nothing happens anymore.
 
Karma Kramer said:
With Natal capabilities it will be pretty damn close... only thing is some sort of tactile feedback...

How is staring at a TV, even if the images float out a bit in 3D, 'pretty damn close' to VR?

I don't think you have the correct definition of virtual reality.
 
beermonkey@tehbias said:
Is there an appropriate cheap camera for me to do freetrack/faceapi/etc in a home theater environment? i.e. where I am like nine feet from the screen instead of two? Would this require a zoom camera?

I think a zoom camera would be helpful.

Also, be sure you have reasonable lighting conditions. If your theatre is dark this won't work (i.e. hopefully you're not using a front projector ;)).

By the way, does anyone know if there are drivers for the PS Eye under Windows? I'd like to try some things with it (it does, btw, have an optical zoom: a 75-degree or a 56-degree field of view, which might help with tracking at greater distances if there are issues with robustness beyond desktop proximity).
 
Botolf said:
Some stuff didn't install into the right place.

Reinstall, but change the install directories. For some reason, it installs into Program Files/Valve/Steam... , instead of Program Files/Steam...

Negative, I installed it where it should be. I think I may have a problem.
 
Botolf said:
Half-Life 2 maps - I dropped an HL2 bsp into the head tracking mod's map folder, and it loaded up fine. The gestures and tracking seem to work perfectly fine. Tested the zoom, the ironsights, the quickspin, and the cover lean here.

What would really complete this would be sensitivity settings, so gamers could tune this to their exact preferences. As it is, though, it's a fantastic piece of work, and I really wouldn't mind playing through some Source games with it on.

Where do I get a good test HL2 map? Does anyone know where I can get the map the guy in the video is playing?
 
acm2000 said:
All the things shown in that video in the OP are just as bad as adding waggle to Wii games for the sake of waggle.

Waggle is good when it's used well. The problem with the Wii is that it's very rarely used well, even in good games, and I'm not just talking about substituting waggle-shaking for button mashing.

Using gesture sensing for precise actions is a problem because gestures can vary wildly from player to player, and the motion data that the accelerometers and cameras deliver to the console is not always what the system is looking for. Casting spells in a Harry Potter game by drawing circles, for example, can be a confounding experience for the user (and the programmer), since circles don't always look like circles to the computer (or sometimes it sees circles the player didn't intend). The recognition technology has to improve along with the sensing hardware, and developers need to learn how best to use gestures in their games (and also when not to use them even when they can).

Developers also have to focus their game designs around gesture usage. Having gestures for attacks in a beat-em-up is usually pretty silly, since it's not 1-to-1 combat and waggling often does as much damage as precise actions, so developers have to make games where the gestures are reflected properly in the game.

One final failure of motion detection in general is the delay in sensing and computing the data. If the camera isn't delivering as many frames of data as it possibly can, and the computer is taking too long to identify motions, that ruins the gameplay, even in games that couldn't be done with a D-pad and buttons.

When would waggle and gesturing be used well? Well, I think we can see some good and bad cases in Torben Sko's video (linked above). The 180-spin, for example, would likely fail 3-5 times out of 10 and would not be a good function for intense shooting action, and the iron-sighting and zooming would probably get frustrating over prolonged play sessions. The windowing, however, would give a lot of depth to the scene and translate that first-person experience to the player more clearly (and might also make for less motion sickness?). Also, peeking is a fairly simple move that could be triggered in less intense action scenes (say it's disabled when you're running or firing, for example) and could cut down on the button overload that console FPSes sometimes burden players with.

You would not want motion detection and gesture-sensing to be the end-all, be-all of videogaming, but it sure can be a powerful tool in a developer's hands when done right.
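A lot of those misfires come down to triggering on a single noisy frame. The usual fix is to require the signal to stay past a threshold for several consecutive frames and to use separate on/off thresholds (hysteresis). A small sketch; the class name, thresholds, and frame counts are invented for the example:

```python
class GestureTrigger:
    """Debounced, hysteresis-based trigger for a 1D tracking signal (e.g. lean angle)."""

    def __init__(self, on_threshold=15.0, off_threshold=8.0, hold_frames=6):
        self.on_threshold = on_threshold    # degrees needed to start the gesture
        self.off_threshold = off_threshold  # must drop below this to end it
        self.hold_frames = hold_frames      # consecutive frames required to trigger
        self.active = False
        self._count = 0

    def update(self, value: float) -> bool:
        """Feed one frame of tracking data; returns True while the gesture is active."""
        if not self.active:
            self._count = self._count + 1 if value > self.on_threshold else 0
            if self._count >= self.hold_frames:
                self.active = True
        elif value < self.off_threshold:
            self.active = False
            self._count = 0
        return self.active

# Noisy frames: brief spikes are ignored, a sustained lean triggers the gesture
lean = GestureTrigger()
for angle in [3, 18, 4, 16, 17, 18, 19, 20, 21, 22, 5, 3]:
    print(angle, lean.update(angle))
```

It trades a few frames of latency for far fewer unintended activations, which is exactly the tension between responsiveness and false triggers.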
 
Natal's depth camera has a low resolution, 320 pixels wide I think. I think it would struggle to judge head angles from a distance.
In my uninformed opinion, you'd be better off using the video feed for accurate results, seeing as that's been proven possible multiple times.
 
ninjavanish said:
Where do I get a good test HL2 map? Does anyone know where I can get the map the guy in the video is playing?
The map he's playing is included and is accessible from the main menu, not much to see though.

If you get GCFscape, you can extract HL2 bsps from your own game files (they're inside package files called GCFs) and use them.
 
dfyb said:
it's not like the head is a very dynamic part of your body. i don't know if MS has released pics of the infrared view of NATAL, but the 3DV company MS worked with released pics showing how their Z-Cam works --

http://www.blogcdn.com/www.engadget.com/media/2008/01/zcam5.jpg

notice how his face is basically the same value uniformly? you aren't really getting much more data. the 3D camera works better for an entire body, but it doesn't add much to face tracking.
MS has said a few times now that they aren't using 3DV's tech.
 
do you guys really want to be flicking your head to the side and doing all that crap for a full game? i can see how it would be fun for a little while, but any extended play would get incredibly annoying. imo.
 
Botolf said:
The map he's playing is included and is accessible from the main menu, not much to see though.

If you get GCFscape, you can extract HL2 bsps from your own game files (they're inside package files called GCFs) and use them.


When I load it up and hit "load test map" nothing happens
 