Josh Strike, the dev in charge of designing the HUD, posted some more on the subject. Interesting stuff. Actually, no, it's fucking awesome.
Thanks for such a warm welcome! Can’t believe so many people stopped by to say hi while I was sleeping. I’m by no means able to answer all the questions, and I should be getting back to work, but I guess I can give you all another good teaser…
I’m seeing a lot of questions about 2D vs. 3D displays, and about 3D HMDs vs. cockpit-projected HUDs. In fact, Chris made the decision to use all three types in Star Citizen from the beginning.
There’s a fixed holographic projection within the cockpit, which overlays things that don’t move with your head, e.g. velocity, acceleration, attitude, altitude, heading and targeting data. This projection has depth for greater effect (along the lines of what you see in that Scaleform promotional video), and also uses some (non-essential) stereoscopic depth cues to assist the pilot in reading certain situations more quickly. You don’t need an Oculus to get all the same info, but I’m not saying the depth cues won’t give you a slight edge.
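For the curious, the stereoscopic cue he mentions comes down to classic screen parallax: an element drawn "behind" the projection plane is offset slightly differently for each eye. A minimal sketch of that idea, with all names and numbers my own assumptions (the post gives no implementation details):

```typescript
// Hypothetical sketch of a stereoscopic depth cue for a HUD glyph.
// An element "behind" the projection plane gets a small horizontal
// offset in each eye's image; the farther back, the larger the offset.

interface HudGlyph {
  x: number;      // horizontal position on the projection plane (meters)
  y: number;      // vertical position (meters)
  depth: number;  // distance of the glyph from the viewer (meters)
}

const IPD = 0.064;        // average interpupillary distance (meters)
const SCREEN_DIST = 0.6;  // viewer-to-projection-plane distance (meters)

// Standard positive-parallax formula: an object at distance `depth`
// (depth > SCREEN_DIST) separates by IPD * (depth - SCREEN_DIST) / depth
// on the screen plane; each eye's image gets half of that separation.
function eyeOffsets(glyph: HudGlyph): { left: number; right: number } {
  const parallax = IPD * (glyph.depth - SCREEN_DIST) / glyph.depth;
  return {
    left: glyph.x - parallax / 2,   // left eye's image lands further left
    right: glyph.x + parallax / 2,  // right eye's image lands further right
  };
}
```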
In addition to that, there’s a separate projection on the inside of the pilot’s helmet which can be loaded up with deeper data sets (e.g. ship status, weapons selection, power balancing, navigation maps, communications, etc.). This HMD projection stays in your field of vision when you turn your head.
Finally, elements from both of these projections can be shunted down to the flat LCD displays or brought back up to their respective projections, and they’ve been designed to shift shape, color and opacity, and/or break into separate elements, depending on whether they’re being displayed on a flat screen or holographically. So yeah, there was definitely some inspiration from Minority Report and Iron Man, but at the same time these elements are very much in the CR space sim style, and my primary goal is to remain true to his original vision.
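None of this code is public, but the three-surface scheme he describes maps naturally onto something like the sketch below. Every name here (DisplaySurface, HudElement, shuntTo, the style values) is invented for illustration, not Star Citizen's actual API:

```typescript
// Illustrative sketch only: the three display surfaces described above,
// and an element that restyles itself when shunted between them.

type DisplaySurface = "cockpitHolo" | "helmetHmd" | "flatLcd";

interface ElementStyle {
  color: string;
  opacity: number;
  holographicDepth: number; // meters behind the plane; 0 on a flat screen
  splitIntoPanels: boolean; // some elements break apart on flat screens
}

class HudElement {
  constructor(
    public readonly id: string,
    public surface: DisplaySurface,
  ) {}

  // Move the element to another surface and restyle it to match.
  shuntTo(target: DisplaySurface): void {
    this.surface = target;
    this.applyStyle(this.styleFor(target));
  }

  private styleFor(surface: DisplaySurface): ElementStyle {
    switch (surface) {
      case "cockpitHolo": // fixed projection: translucent, has real depth
        return { color: "#7fd4ff", opacity: 0.55, holographicDepth: 1.2, splitIntoPanels: false };
      case "helmetHmd":   // helmet layer: dimmer, always in view
        return { color: "#9fffb0", opacity: 0.4, holographicDepth: 0.8, splitIntoPanels: false };
      case "flatLcd":     // physical screen: opaque, flat, may split apart
        return { color: "#ffffff", opacity: 1.0, holographicDepth: 0, splitIntoPanels: true };
    }
  }

  private applyStyle(_style: ElementStyle): void {
    // Rendering details elided; a real implementation would tween
    // color/opacity and rebuild the element's geometry here.
  }
}

// e.g. push the nav map down to an LCD, then bring it back up:
const navMap = new HudElement("navMap", "helmetHmd");
navMap.shuntTo("flatLcd");
navMap.shuntTo("helmetHmd");
```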
I should stress that these projection layers exist, and about 80% of the gizmos are fully functional; in tests, they have been added to the projections and screens and run successfully with dummy data. The part that’s not yet complete is the full integration that sends active environmental data to the components. The HUD is a platform in itself with 62 custom classes and a 20-page API manual so far. There are hundreds of data points that need to be connected up to the ship’s systems, so we’re still in the process of getting the cockpit fully “wired”.
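The "wiring" step he describes, connecting hundreds of data points from ship systems to HUD components, sounds like a publish/subscribe binding: components are written against named data points, so they run identically on dummy data in tests and on live telemetry once connected. A speculative sketch under that assumption, with all names invented:

```typescript
// Speculative sketch of wiring ship-system data points to HUD gizmos.

type DataPointId = string; // e.g. "flight.velocity", "power.shieldDraw"
type Listener = (value: number) => void;

class DataBus {
  private listeners = new Map<DataPointId, Listener[]>();

  subscribe(id: DataPointId, fn: Listener): void {
    const fns = this.listeners.get(id) ?? [];
    fns.push(fn);
    this.listeners.set(id, fns);
  }

  publish(id: DataPointId, value: number): void {
    for (const fn of this.listeners.get(id) ?? []) fn(value);
  }
}

// A gizmo declares the data points it consumes...
function velocityTape(bus: DataBus, draw: (v: number) => void): void {
  bus.subscribe("flight.velocity", draw);
}

// ...so a test can drive it with dummy data,
const bus = new DataBus();
velocityTape(bus, v => console.log(`velocity tape shows ${v} m/s`));
bus.publish("flight.velocity", 212);

// ...and the live ship later publishes real values to the same points.
```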
And no, we’re not deviating from the classic polar-mapped radar screen! But we’ve added some nice touches like sector heat mapping, and ship-on-your-tail alerts.
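A polar-mapped radar just means contacts are plotted by bearing and range relative to your own ship rather than in world coordinates. A toy sketch of that mapping, plus a naive take on sector heat binning (all of this is my illustration, not their code):

```typescript
// Toy sketch: map a contact's world position to a polar radar screen,
// and accumulate per-sector "heat" from contact density.

interface Vec2 { x: number; y: number; } // top-down world coordinates

const RADAR_RADIUS_PX = 120;  // radius of the radar screen in pixels
const RADAR_RANGE_M = 8000;   // max detection range in meters
const SECTORS = 12;           // 30-degree heat-map wedges

// Convert a contact's position to a blip, relative to our ship's
// position and heading (heading in radians, 0 = north).
function toBlip(self: Vec2, heading: number, contact: Vec2): Vec2 | null {
  const dx = contact.x - self.x;
  const dy = contact.y - self.y;
  const range = Math.hypot(dx, dy);
  if (range > RADAR_RANGE_M) return null; // off the scope

  // Bearing relative to our nose, so the scope rotates with the ship.
  const bearing = Math.atan2(dx, dy) - heading;
  const r = (range / RADAR_RANGE_M) * RADAR_RADIUS_PX;
  return { x: r * Math.sin(bearing), y: -r * Math.cos(bearing) };
}

// Naive sector heat: count contacts per wedge. The wedge directly
// astern lighting up is your "ship on your tail" case.
function sectorHeat(self: Vec2, heading: number, contacts: Vec2[]): number[] {
  const heat = new Array(SECTORS).fill(0);
  for (const c of contacts) {
    const bearing = Math.atan2(c.x - self.x, c.y - self.y) - heading;
    const wrapped = ((bearing % (2 * Math.PI)) + 2 * Math.PI) % (2 * Math.PI);
    heat[Math.floor((wrapped / (2 * Math.PI)) * SECTORS)]++;
  }
  return heat;
}
```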
Additionally, someone asked me about incorrect and/or damaged displays. Every element in our HUD responds to damage. Just as Chris built the fly-by-wire system to procedurally handle an infinite range of ship states based on damage to various components, that philosophy was extended so that damage is also rendered procedurally in the HUD. As just one example, text in the HUD is not pre-rendered or even generated on the fly as a block; it’s printed procedurally to the displays one character at a time, with a greater likelihood of transcription error (or garbled transmission) depending on specific damage to your avionics or communications systems. Response to damage is a bedrock feature of every element in the HUD: damage will degrade boot times, data latency and accuracy. If that sounds like something you shouldn’t attempt in a Scaleform project, you’re right – they said it was crazy – but we’ve refactored, honed and optimized it to work.
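The character-by-character idea is easy to picture in code. A minimal sketch of one way it could work; the error model (a flat per-character corruption chance scaled by damage level) is my guess, since the post doesn't give details:

```typescript
// Minimal sketch: print HUD text one character at a time, with a
// per-character chance of a transcription error that grows with
// damage to the avionics/comms system (0 = pristine, 1 = destroyed).

const GARBLE = "#%&@?*!~^".split("");

function printToDisplay(
  text: string,
  damage: number,              // 0..1 damage level
  emit: (ch: string) => void,  // callback that draws one character
): void {
  // Assumed model: up to 30% corruption chance per character at full damage.
  const errorProb = 0.3 * Math.min(Math.max(damage, 0), 1);
  for (const ch of text) {
    if (Math.random() < errorProb) {
      emit(GARBLE[Math.floor(Math.random() * GARBLE.length)]);
    } else {
      emit(ch);
    }
  }
}

// e.g. a hit to the comms array garbles an incoming transmission:
let line = "";
printToDisplay("MISSILE LOCK DETECTED", 0.4, ch => (line += ch));
console.log(line); // something like "MI#SILE L%CK DETEC?ED"
```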
So, I’ll end it here before I give away all my tricks, but I’ll drop back in from time to time. Meanwhile, stay tuned for the next vid!