Star Fox programmer: PSP even more powerful than PS2

Fafalada said:
The technique from PSP demo is Precomputed Radiance Transfers - it's a relatively recent approach that simulates effects of global illumination by precomputing radiance transfers and encoding them using basis functions - most commonly, the basis functions for the task chosen are Spherical Harmonics, but they aren't the only ones you can use.
Would it be possible to give a brief explanation of the main differences between regular radiosity and PRT?
Panajev2001a said:
You have soft words for LOD calculation on the GS, but you are also a nice guy who probably does not get that hot-tempered too easily, so that explains it :).
What is it that's so bad about the GS MIP mapping? From what has been explained to me, it seems as though the "only" really poor thing about the implementation is that it doesn't perform the “fake anisotropic” filtering that other VPUs do (using higher mip levels when the polygon is viewed at a steep angle). Is this correct, and why is it such a big deal? It is after all possible to “manually” control MIP mapping in the most offending cases.
 
You mipmap to get rid of texture aliasing, so a satisfactory method should at least ensure that. The GS's algorithm apparently only accounts for a texel's distance from the camera, and not much for slope.
 
It is after all possible to “manually” control MIP mapping in the most offending cases.
To do full-blown mip mapping on PS2 you need to do some extra per-vertex calculations, IIRC, but not many games do that. J&D and Jak 2 are examples of games that do; they use trilinear filtering and mipmapping.
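To make the distance-vs-slope point concrete, here is a minimal sketch (toy Python with invented function names, not any console's actual LOD hardware) contrasting the standard derivative-based mip selection with a purely distance-based one:

```python
import math

def mip_level_slope_aware(du_dx, dv_dx, du_dy, dv_dy):
    """Standard LOD: log2 of the larger screen-space texel footprint,
    computed from texture-coordinate derivatives. This is what catches
    surfaces viewed at steep (grazing) angles."""
    rho = max(math.hypot(du_dx, dv_dx), math.hypot(du_dy, dv_dy))
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0

def mip_level_distance_only(z, k):
    """Distance-based approximation (roughly the kind of metric being
    criticized): level grows with log2 of camera depth and ignores
    surface slope. 'k' is a hypothetical per-texture bias."""
    return max(0.0, math.log2(z) + k)

# A wall at the same distance, seen head-on vs at a grazing angle:
head_on = mip_level_slope_aware(1.0, 0.0, 0.0, 1.0)  # ~1 texel/pixel -> level 0.0
grazing = mip_level_slope_aware(8.0, 0.0, 0.0, 1.0)  # 8 texels/pixel -> level 3.0
# The slope-aware formula picks a coarser mip for the grazing case;
# the distance-only metric would return the same level for both,
# leaving the grazing surface aliased.
```

The grazing case is exactly the "offending orientation" mentioned above: same camera distance, very different texel footprint.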
 
Marconelly said:
To do full-blown mip mapping on PS2 you need to do some extra per-vertex calculations, IIRC, but not many games do that. J&D and Jak 2 are examples of games that do; they use trilinear filtering and mipmapping.
Okay, but you don't have to perform the extra calculations on all the geometry in the scene, only where it matters (ground, walls etc.), right?
 
There are so many possibilities with moving objects and free-roaming cameras for textures to get into offending orientations that selective application of a fix like that would still miss some of it.
 
Fafalada said:
How anal do we want to be with DX7 classification though?
DX7 specifies EMBM - a blend mode that is a "dependent texture read with a matrix transform" (at least 2× dot product). Possible speed issues aside, that's all the math operations you need to do pixel shading - or at least, to implement the core material shader of something like Doom3.
And if you have lots of texture stages, and your dependent read is a single-cycle operation, you will get those shaders running comparably fast to DX8.1-class hw (GF3...), as evidenced by NGC :p

If you want to prove yes/no pixelshading you'll have to get better evidence than that. ;)
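For reference, the EMBM operation described above boils down to two dot products and a dependent lookup. A sketch of the idea (toy Python, not any real graphics API; the m00..m11 terms play the role of the bump-environment matrix):

```python
def embm_sample(env_map, bump_map, u, v, m00, m01, m10, m11):
    """EMBM-style dependent texture read: the bump map's (du, dv)
    is transformed by a 2x2 matrix -- two dot products -- and the
    result offsets the coordinates of a second texture fetch."""
    du, dv = bump_map(u, v)                  # first texture read
    off_u = m00 * du + m01 * dv              # dot product 1
    off_v = m10 * du + m11 * dv              # dot product 2
    return env_map(u + off_u, v + off_v)     # dependent second read

# Toy stand-ins for the two textures, just for illustration:
bump = lambda u, v: (0.1, -0.2)     # constant perturbation
env = lambda u, v: (u, v)           # echoes back the coords it was given
print(embm_sample(env, bump, 0.5, 0.5, 1, 0, 0, 1))  # (0.6, 0.3)
```

Because the second read's address depends on the first read's result, hardware with this stage plus enough texture stages can chain the operations a basic per-pixel material shader needs, which is the argument being made in the post.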

Grrrr.... still, your making the case for why it could still do Pixel Shading tells me that the PSP cannot do it.

No, you did not say that, but this is the feeling/emotion I read.
 
Grrrr.... still, your making the case for why it could still do Pixel Shading tells me that the PSP cannot do it.

No, you did not say that, but this is the feeling/emotion I read.
Gotta say I've got the same impression out of his post :(

I think the bumpmapping used in that one PSP demo was not DOT3, btw. From what I remember, someone asked the demo's programmer at E3, and the answer he got was that it's not DOT3, but he didn't say what it was.
 
TekunoRobby said:
It's come from that drek of a forum, ehhh.


Hey, do you even know who Dylan Cuthbert is? As the only foreign programmer to have worked inside both Nintendo Japan and Sony Japan for many years, he has more than a bit of credibility.

And explain what you mean by 'drek of a forum' - AFAIK that forum is well known for good industry discussion with no trolling.
 
kaching said:
Why would there be NDAs on those aspects of the hardware at this point, though?
The NDAs aren't 'conditional' you know - in most cases they are strict enough that even admitting to seeing a certain document is technically a violation. And they most certainly aren't about to expire before PSP is even released.
Now, as certain info is made available to public in one form or another it usually makes it safe to talk about since even if you do you are only repeating what you already saw out in the open...
But anything else is off limits.

Squeak said:
Would it be possible to give a brief explanation of the main differences between regular radiosity and PRT?
To put it simply - think of PRT as precomputing lighting from every possible angle, and then using a clever way to compress that into a much smaller dataset that can be handled in realtime.
In a nutshell, for the most common PRT SH implementation that means your realtime scene can be lit with a virtually unlimited number of lights moving around it, but nothing within that scene can move or deform.
Or, you could create separate PRT datasets for different objects within that scene (split into background and dynamic objects, for instance), but then you lose the radiosity-like interaction between those objects (you'd still get a pretty background though :p).

The precomputing step can also take hours on end, depending on the complexity of the scene and the desired lighting resolution.

The light frequency is another thing to mention - SHs are good at representing low-frequency lights cheaply (you need as few as 4 SH coefficients per vertex to get the look of that City Scene), which will give you soft, blurry shadows. But if you want clearly defined shapes, that requires exponentially more SH coefficients, which would make it both too slow for realtime rendering and take up far too much memory.

It's still an area of lots of research - for methods to allow movement & animation, a better range of light frequencies, etc. - but in the end it's basically about precomputing more data and finding ways to represent it compactly and quickly during rendering.
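To tie the above together, here is a minimal sketch of the runtime half of SH-based PRT (toy Python with invented data; the 4-coefficient figure is the one mentioned for the city scene, and none of the names refer to a real engine). The expensive part happens offline; at runtime, relighting each vertex is just one dot product, no matter how many lights were combined:

```python
# Offline (slow): per-vertex transfer vectors encode how each vertex
# responds to light arriving from every direction, projected onto the
# SH basis. Runtime (fast): project the current lights onto the same
# basis, then lighting per vertex = dot(transfer, light_coeffs).

def relight(transfer_vectors, light_coeffs):
    """Per-vertex radiance via one dot product per vertex; the cost is
    independent of how many lights were folded into light_coeffs."""
    return [sum(t * l for t, l in zip(tv, light_coeffs))
            for tv in transfer_vectors]

# Two vertices, 4 SH coefficients each (an order-2 SH expansion, 2^2 terms):
transfers = [[0.8, 0.1, 0.0, 0.05],   # vertex mostly open to the environment
             [0.2, 0.0, 0.1, 0.0]]    # vertex heavily occluded/shadowed
light = [1.0, 0.3, 0.0, 0.0]          # all moving lights merged into one vector
print(relight(transfers, light))      # open vertex comes out brighter
```

This also shows why nothing in the scene can move: the transfer vectors bake in the scene's own occlusion and interreflection, so deforming the geometry would invalidate them.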
 