Any favorite learning resources, missile? Books, articles, videos, etc.?
Basically, many of the 3d programming books from before the era of the
(consumer) 3d graphics accelerator explosion (around 1995) are good for
learning software rendering. Many of the books and papers written before that
time explain the principles more clearly, whereas many of the books from 1995
until around, say, 2005 are more about using libraries like OpenGL/DirectX to
do graphics and won't necessarily explain the principles as such. There are
exceptions, of course. It also feels like many of the modern books want to
sell you something instead of sharing the details. It seems that once the
commercialization started with all the consumer 3d accelerators, a lot of the
accompanying 3d literature also transformed into fast commercial products
overnight (bad books).
Books you definitely should have on your table are:
Foley, van Dam: Computer Graphics: Principles and Practice in C (2nd Edition)
Rogers: Procedural Elements for Computer Graphics (2nd Edition)
The topics covered in these books are more than enough. Especially the first
one (2nd edition, not the 3rd) is pretty valuable, not only because of all the
details given in there, but also because of all the old references. This book
contains all the valuable references back to the pioneers of computer graphics
like Sutherland, Sproull, Newman, Warnock, Watkins, and many more. Their
papers and books you should study over time! For example, Sutherland co-wrote
a paper comparing ten different hidden-surface algorithms and characterized
the differences among them by how they sort their elements. Many such
algorithms differ mainly in how they sort while rastering.
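That sorting view is easy to make concrete. The painter's algorithm, for
instance, is visibility reduced to a single depth sort. Here is a minimal
sketch (the names and depth values are mine; real implementations sort actual
polygons and must also handle overlap cycles, which this skips):

```python
# Painter's algorithm in miniature: visibility as a sorting problem.
# Each "polygon" here is just (label, average_depth).

def painters_sort(polygons):
    """Sort polygons back-to-front so later draws overwrite earlier ones."""
    return sorted(polygons, key=lambda p: p[1], reverse=True)

scene = [("near_tri", 1.5), ("far_quad", 9.0), ("mid_tri", 4.2)]
draw_order = painters_sort(scene)
print([name for name, _ in draw_order])  # ['far_quad', 'mid_tri', 'near_tri']
```

Drawing in that order makes nearer polygons overwrite farther ones, which is
the whole trick; the algorithm's famous failure case (cyclically overlapping
polygons) is exactly where the more refined sorting strategies come in.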
However, there are also all these miscellaneous graphics books you can take
advantage of, like the Graphics Gems series and so on.
So if you wanna go deep, you'd better start with the hidden-line problem in
object space. There are basically two classic algorithms in this regard, by
Roberts and by Appel. Roberts' algorithm was the first ever for hidden-line
removal. I've read his original thesis, where he comes up with the algorithm
at the very end. In Rogers' book these algorithms are also included with
examples (Rogers always tries to give an example for each topic presented).
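Before either Roberts' or Appel's machinery runs, the usual first step in
object-space visibility is back-face elimination: a face of a closed solid can
only be visible when its outward normal points toward the eye. A tiny sketch
of just that test (plain tuples, all names are mine):

```python
# Back-face test: a face is invisible from the eye when its outward normal
# points away from the viewer, i.e. dot(normal, view_dir) <= 0.
# For a convex solid this alone solves visibility; general scenes then run
# the hidden-line machinery on the surviving faces and edges.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_front_facing(normal, face_point, eye):
    """The view vector goes from a point on the face toward the eye."""
    view = tuple(e - p for e, p in zip(eye, face_point))
    return dot(normal, view) > 0.0

eye = (0.0, 0.0, 5.0)
print(is_front_facing((0, 0, 1), (0, 0, 1), eye))    # True: faces the eye
print(is_front_facing((0, 0, -1), (0, 0, -1), eye))  # False: faces away
```

On a closed convex object this culls roughly half the faces before any edge
intersection work even starts, which is why every object-space pipeline does
it first.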
Having said that, I also want to give another perspective.
The above is considered old-skool these days. For example, no one is going to
implement a Cohen-Sutherland clipping algorithm any longer, nor wants to
understand how it works. xD All the (CS) guys 'n gals want to display cool
graphics. Well, there is a new trend in approaching computer graphics, the
new-skool as I like to say, which is way different from the old one. The
new-skool starts right at the top with the full rendering equation, expressed
in radiometric and/or photometric quantities. Given Monte Carlo (MC)
integration, you get something cool to see on the screen rather quickly
without needing any of the old-skool techniques (at first). However, the
new-skool requires you to understand more physics up-front, for example how
light really reflects from a surface (down to Maxwell's equations at best; at
least the Fresnel equations should be understood), energy conservation, etc.
So I see a lot of incoming CS students struggle here. You wanna do some cool
graphics and suddenly find yourself doing lots of physics and integration. xD
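To make the Monte Carlo part concrete: the simplest non-trivial exercise is
estimating the cosine term that sits inside the rendering equation, i.e. the
hemisphere integral of cos(theta), which is exactly pi. A small sketch (the
setup, sample count, and names are mine):

```python
import math, random

# Toy "new-skool" step: Monte Carlo estimate of the hemisphere integral of
# cos(theta), which equals pi. This is the geometry term of the rendering
# equation for constant incoming radiance on a white Lambertian surface.
# Uniform hemisphere sampling has pdf 1/(2*pi), so the estimator is
# mean(cos_theta) * 2*pi.

def estimate_cosine_integral(n_samples, rng):
    total = 0.0
    for _ in range(n_samples):
        # By Archimedes' hat-box theorem, z is uniform on [0, 1] for
        # directions uniform on the hemisphere, and z *is* cos(theta).
        cos_theta = rng.random()
        total += cos_theta
    return (total / n_samples) * 2.0 * math.pi

rng = random.Random(42)
print(estimate_cosine_integral(100_000, rng))  # close to math.pi
```

A path tracer is "just" this estimator applied recursively per pixel, with
the constant radiance replaced by whatever bounces in from the scene, which
is why you can get pictures on screen so quickly this way.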
This new-skool is perhaps best seen in the new 3rd edition of the book
"Computer Graphics: Principles and Practice", where the focus has shifted
towards physically based rendering. Makes sense: today's computers are fast
enough and the goal is physically correct rendering, so why deal with all
this old stuff?
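That said, the Cohen-Sutherland clipper I poked fun at above is small enough
to show in full. A minimal version against a fixed window (the window
constants and names are mine):

```python
# Minimal Cohen-Sutherland line clipping against the window
# [XMIN, XMAX] x [YMIN, YMAX]. Each endpoint gets a 4-bit outcode; code 0
# means inside, a shared set bit means trivially rejectable, otherwise the
# line is clipped at a crossed boundary and the loop repeats.

INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8
XMIN, XMAX, YMIN, YMAX = 0.0, 10.0, 0.0, 10.0

def outcode(x, y):
    code = INSIDE
    if x < XMIN: code |= LEFT
    elif x > XMAX: code |= RIGHT
    if y < YMIN: code |= BOTTOM
    elif y > YMAX: code |= TOP
    return code

def clip_line(x0, y0, x1, y1):
    """Return the clipped segment, or None if fully outside."""
    c0, c1 = outcode(x0, y0), outcode(x1, y1)
    while True:
        if not (c0 | c1):          # trivial accept: both inside
            return (x0, y0, x1, y1)
        if c0 & c1:                # trivial reject: share an outside zone
            return None
        c = c0 or c1               # pick an endpoint that is outside
        if c & TOP:
            x = x0 + (x1 - x0) * (YMAX - y0) / (y1 - y0); y = YMAX
        elif c & BOTTOM:
            x = x0 + (x1 - x0) * (YMIN - y0) / (y1 - y0); y = YMIN
        elif c & RIGHT:
            y = y0 + (y1 - y0) * (XMAX - x0) / (x1 - x0); x = XMAX
        else:                      # LEFT
            y = y0 + (y1 - y0) * (XMIN - x0) / (x1 - x0); x = XMIN
        if c == c0:
            x0, y0, c0 = x, y, outcode(x, y)
        else:
            x1, y1, c1 = x, y, outcode(x, y)

print(clip_line(-5.0, 5.0, 5.0, 5.0))     # (0.0, 5.0, 5.0, 5.0)
print(clip_line(-5.0, -5.0, -1.0, -1.0))  # None: entirely outside
```

The whole trick is in the outcodes: most lines are accepted or rejected
without computing a single intersection, which is why this 1960s-era idea is
still the textbook clipper.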
Well, after having implemented all this cool physical stuff, you will realize
that everything becomes dead slow. How do you make it fast? You will be
surprised to see that many of the fast GI approaches are based on rasterizing
again. For example, all these light probes, shadow maps, etc. are computed
via graphics hardware by rastering the scene multiple times. And the
old-skool is essentially about rastering. So you will end up with a hybrid
renderer like we see in today's engines. The old-skool is sort of the
accelerating backend for the new-skool, that is to say, object-space
rendering enhanced with screen-space methods.
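That backend role is easy to see in a toy shadow map: pass one rasterizes
depth from the light's point of view, pass two compares against it while
shading. A deliberately tiny 2-D version (the resolution, bias, and scene are
all mine):

```python
# Toy shadow map in 2D: a directional light shines straight down (-y).
# Pass 1 "rasterizes" the scene from the light into a 1-D depth buffer
# (highest y per x-cell). Pass 2 shades: a point is lit iff it is not
# farther from the light than the stored depth (plus a small bias).

RES, BIAS = 16, 1e-3

def build_shadow_map(occluders):
    """occluders: list of (x0, x1, y) horizontal slabs, x in [0, 1]."""
    depth = [float("-inf")] * RES          # store highest y per cell
    for x0, x1, y in occluders:
        for i in range(RES):
            x = (i + 0.5) / RES            # cell center
            if x0 <= x <= x1:
                depth[i] = max(depth[i], y)
    return depth

def is_lit(depth, x, y):
    i = min(int(x * RES), RES - 1)
    return y + BIAS >= depth[i]

shadow = build_shadow_map([(0.25, 0.5, 2.0)])  # one slab at height 2
print(is_lit(shadow, 0.3, 1.0))   # False: under the slab, so shadowed
print(is_lit(shadow, 0.8, 1.0))   # True: nothing above it
print(is_lit(shadow, 0.3, 3.0))   # True: above the occluder
```

The real thing is the same idea with a 2-D depth texture, a light-space
projection matrix, and the depth compare done per fragment on the GPU; the
bias term is the classic fix for self-shadowing ("shadow acne").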
So you could equally start with the new-skool if you feel comfortable with
the physics and such, and later do the rastering as needed. There are many
good books popping up everywhere about it. But it is my belief that by doing
so you won't be eager to do any rasterizing at all, because it's more
difficult to set up and also comes with a lot of restrictions you'd better
know in advance, or from experience.
The new-skool is cool, do it, but the old-skool will be your Swiss Army knife!