... You can also see sleeping beauty there. It's pretty hard to wake him up. He's not too happy about it if he
does wake up, though.
Very, very nice!
How many rooms are there? Are they all modeled? Interesting!
Correct me if I'm wrong as this is not my bag, but most of this can be done with shaders and pre and post process in almost any engine. ...
Since many shader languages are Turing complete, anything that can be
computed can be computed with them as well. You could even use BASIC. It's
not about the language; it's about control, adaptivity, and efficiency.
Depending on the render pipeline of the engine and flexibility of the SDK would ultimately impact use and adaptation of those techniques. ...
That's the point. Even though you can compute virtually anything with
shaders, some parts of the hardware cannot be controlled through them. For
example, you can't change the algorithm the hardware uses to rasterize lines
or polygons. You can shade and texture them, but you have no control over the
rasterization algorithm itself unless you write your own, for instance in a
shading language. Many cool effects become possible once you are in control
of the rasterizer.

Another thing is rendering order. For some very interesting video effects,
rendering in scanline order has advantages over the Z-buffer's random object
order. One can use a Z-buffer algorithm on a per-scanline basis for the depth
test as well, but the hardware doesn't let you control this. You can set the
hardware up to fake it, with each scanline being its own viewport, but that
makes the process very inefficient. It may not be a good choice anyway, since
the Z-buffer's weakness, overdraw, grows worse as the per-pixel rendering
equations become more complex. One can alleviate this problem to some degree
with better object culling techniques such as Hierarchical Occlusion Maps, or
with better image-based culling techniques like Z-hierarchies/pyramids. But
it gets nasty the further you go.

Anyhow. Say I want to change the rasterizer algorithmically on a per-scanline
basis, or per scanline and per depth of an object/pixel. I have some pretty
cool ideas for such things. As a simple example, in a scanline rasterizer I
might want to modify the stepping algorithm of the edge equation, turning
some edges into a B-spline whose control points vary with time.
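To illustrate what that kind of control might look like, here is a minimal, hypothetical C++ sketch (all names, the quadratic Bezier as the spline segment, and the sine wobble are my own assumptions, not a fixed design): a span edge that can be stepped either linearly, as usual, or along a curve whose middle control point varies with time.

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch: the left edge of a span is bent into a quadratic
// Bezier (a simple spline segment) whose middle control point wobbles with
// time. A hardware rasterizer offers no hook for this; in a software
// scanline rasterizer it is a small change to the edge-stepping code.
struct Edge {
    float x0, y0;   // top vertex
    float x1, y1;   // bottom vertex
};

// Classic linear edge: x is a linear function of the scanline y.
float edge_x_linear(const Edge& e, float y) {
    float t = (y - e.y0) / (e.y1 - e.y0);
    return e.x0 + t * (e.x1 - e.x0);
}

// Curved edge: evaluate a quadratic Bezier between the same two endpoints,
// with a control point whose offset varies with 'time' (animation parameter).
float edge_x_bezier(const Edge& e, float y, float time) {
    float t  = (y - e.y0) / (e.y1 - e.y0);
    float mx = 0.5f * (e.x0 + e.x1) + 8.0f * std::sin(time);  // wobbling control point
    float u  = 1.0f - t;
    return u * u * e.x0 + 2.0f * u * t * mx + t * t * e.x1;
}

// Walk the scanlines of an edge and record span starts, choosing the
// stepping algorithm per scanline -- exactly the kind of per-scanline
// control discussed above.
std::vector<float> left_span_starts(const Edge& e, float time, bool curved) {
    std::vector<float> xs;
    for (float y = e.y0; y < e.y1; y += 1.0f)
        xs.push_back(curved ? edge_x_bezier(e, y, time) : edge_x_linear(e, y));
    return xs;
}
```

At time = 0 the wobble vanishes and the curved edge degenerates into the straight one, which makes the effect easy to blend in and out.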
The language I choose? Well, it will be C/C++, with parts written in, for
example, a shading language to accelerate things on a video accelerator if
necessary. However, I'm not fixed on any video hardware, quite the opposite!
I want to take a step back, away from it. This may sound elusive and
contradictory given the current approach to video games, since those
accelerators are supposed to be the Holy Grail for video games. And indeed,
for many things they are. But I want to build something very specific and
very flexible, leaving a lot of room for experimentation. Software rendering
won't be as fast as fixed-function hardware, but it is a lot more flexible,
and retro rendering is a perfect match, so to speak. Retro rendering doesn't
need much of the costly smoothing machinery of current hardware; rest assured
I won't need anisotropic filtering, 16x FSAA, or anything similar. Likewise,
the goal is not to pump millions of polygons through the engine for just a
few models, giving the 3D frustum clipper some air. It should be a special
something, with no need to tie it to a specific acceleration
hardware/language.
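To make the image-based culling mentioned earlier (Z-hierarchies/pyramids) a bit more concrete, here is a minimal, hypothetical C++ sketch. The structure, names, and the square power-of-two buffer are my own simplifying assumptions, not any particular engine's design: each pyramid level stores the farthest depth of the 2x2 block below it, so a bounding rectangle whose nearest depth lies behind that value cannot be visible.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical Z-pyramid sketch (larger z = farther). Level 0 is the
// full-resolution z-buffer; each coarser level keeps the MAXIMUM (farthest)
// depth of the 2x2 block beneath it, so tests against it are conservative.
struct ZPyramid {
    std::vector<std::vector<float>> levels;  // levels[0] = full-res z-buffer

    ZPyramid(std::vector<float> zbuf, int size) {
        levels.push_back(std::move(zbuf));
        for (int s = size; s > 1; s /= 2) {
            const std::vector<float>& fine = levels.back();
            std::vector<float> coarse((s / 2) * (s / 2));
            for (int y = 0; y < s / 2; ++y)
                for (int x = 0; x < s / 2; ++x) {
                    float a = fine[(2 * y)     * s + 2 * x];
                    float b = fine[(2 * y)     * s + 2 * x + 1];
                    float c = fine[(2 * y + 1) * s + 2 * x];
                    float d = fine[(2 * y + 1) * s + 2 * x + 1];
                    coarse[y * (s / 2) + x] = std::max(std::max(a, b),
                                                       std::max(c, d));
                }
            levels.push_back(std::move(coarse));
        }
    }

    // Conservative whole-screen test against the 1x1 top level: an object
    // is certainly occluded if its nearest z is behind the farthest depth
    // already in the buffer. (A real query would descend to finer levels
    // over the object's screen rectangle to cull more aggressively.)
    bool occluded(float nearest_z) const {
        return nearest_z > levels.back()[0];
    }
};
```

The appeal for a software renderer is that the whole hierarchy lives in your own memory, so you can query it at any granularity, per object, per span, or per scanline, instead of being limited to whatever occlusion-query mechanism the hardware exposes.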