State of the Industry: Graphics in 2005 and Beyond (Extremetech.com)

xexex

nothing really new, just an interesting article on the general state of graphics.


http://www.extremetech.com/article2/0,1558,1750720,00.asp

Graphics technology moves faster than probably any other area of PC technology, doubling performance every year or so and reinventing itself just as quickly. Where there were once dozens of players in the 3D graphics race, there are now only two major combatants: ATI and nVidia. Microsoft has a lot of influence over the future of graphics as well, as its DirectX API is the catalyst for change in PC graphics.
None of what we say in this forward-looking piece should be considered fact. These are educated guesses based on conversations we've had with those in the industry, but nothing is confirmed, and plans frequently change.

The next DirectX?
Last summer we wrote about the future of Windows graphics. Not many specifics are known about the Windows Graphics Foundation architecture coming with the Longhorn operating system, but we think we know one major thing about the future graphics API from Microsoft: what some are calling DirectX 10. It appears that it will unify the vertex and pixel shaders into a generic shader language, where graphics cards would perform certain sets of operations on certain types of data, but these wouldn't necessarily have to be defined as vertices or pixels. In fact, some work has already been done to use graphics cards as general processors to perform complex calculations like fluid dynamics. The next generation of DirectX, or Windows Graphics Foundation, or whatever it ends up being called, should make that kind of work more common. The bidirectional nature of PCI Express will make this practical, too.
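To make the "generic shader" idea concrete, here is a minimal sketch, written in plain Python rather than any real graphics API, of what it means for one pool of programmable units to run the same kind of program whether the incoming data represents vertices or pixels. Every name below is ours, invented purely for illustration.

# Hypothetical sketch, not any real API: the unified-shader idea is that one
# pool of programmable units runs the same instruction set over whatever
# float data it is handed, whether that data represents vertices or pixels.

def generic_shader(data, program):
    """Run the same shader program over any stream of 4-component float values."""
    return [program(element) for element in data]

# The same hardware could transform vertices...
vertices = [(0.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 1.0)]
transformed = generic_shader(vertices, lambda v: (v[0] * 2.0, v[1], v[2], v[3]))

# ...or shade pixels, because both are just float math on float data.
pixels = [(0.5, 0.5, 0.5, 1.0), (0.2, 0.4, 0.6, 1.0)]
shaded = generic_shader(pixels, lambda p: (p[0] * 0.8, p[1] * 0.8, p[2] * 0.8, p[3]))

print(transformed)
print(shaded)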


ATI's next generation

Graphics card companies play it close to the vest. Still, we've heard some rumblings about ATI's next-generation GPU architecture. We think the days of separate vertex and pixel pipelines are almost at an end. After all, when both vertex and pixel processing require 32-bit floating point precision and the same capabilities for shader length, looping and branching, and so on, it's perhaps time to stop separating them on the silicon.

ATI's next generation, perhaps going by the chip name R520, will likely have a set of general "pipelines" that can read 32-bit floating point data of any type, vertex or pixel, and perform enough different math operations on them to qualify as a fully functional vertex- or pixel-shader pipeline. The idea has a lot of merit. If you have 16 general pipelines, you can devote all of them to vertex processing, all to pixel processing, or any balance in between. We expect the chip to support shader model 3.0 and then some. (ATI will probably pull a marketing trick like putting a "+" sign in there, though there is no shader specification yet beyond shader model 3.0.)
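If that pans out, load balancing is the obvious payoff. The toy Python sketch below uses the hypothetical 16-pipeline figure from the paragraph above and a scheduling policy we invented purely for illustration (nothing here describes ATI's actual design); it shows how a single pool of general pipelines could be split differently for a geometry-heavy frame versus a fill-rate-heavy one.

# Illustrative sketch only: splitting a pool of 16 general pipelines between
# vertex and pixel work in proportion to the pending workload. The policy and
# numbers are invented; they do not describe any real GPU's scheduler.

def allocate_pipelines(total, vertex_work, pixel_work):
    """Split a pipeline pool in proportion to pending vertex vs. pixel work."""
    if vertex_work + pixel_work == 0:
        return 0, total
    vertex_pipes = round(total * vertex_work / (vertex_work + pixel_work))
    vertex_pipes = max(1, min(total - 1, vertex_pipes))  # keep both stages alive
    return vertex_pipes, total - vertex_pipes

# A geometry-heavy frame vs. a fill-rate-heavy frame
print(allocate_pipelines(16, vertex_work=900, pixel_work=100))  # (14, 2)
print(allocate_pipelines(16, vertex_work=50, pixel_work=950))   # (1, 15)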


When will it get here? We suspect a spring announcement, with availability in the late spring or early summer. Our first glimpse at ATI's next-gen graphics may actually come with the formal announcement of the next Xbox console, which should happen in the first quarter of 2005. The company has designed the graphics chip for Microsoft (and Nintendo's next console, for that matter, though the design teams are separate), and we hear it is based on the R500 core technology that will be the basis for their next-gen PC graphics, too. Many things are likely to be different between the two, such as the number of pipelines, the size of internal cache, and clock speed. Still, the general architectural principles will likely be very similar. You can bet that whatever nifty graphics demos they show us to demonstrate the Xbox 2's power could be run by ATI's next-generation PC chips.

Further down the pike, ATI could take advantage of their license with Intrinsity. Intrinsity's technology consists of tools that allow chip designers to run logic circuits far faster than they have in the past, without increasing power consumption. ATI could take one of two approaches: run cards with large numbers of hardware shaders very, very fast, or cut down the number of hardware shaders (and hence, chip real estate) and still get the same performance. We'll have to wait and see.

We had an exclusive interview with ATI's VP of Marketing, Rick Bergman, in which he answered questions posted by ExtremeTech readers. In this Q and A, Bergman talks about AGP availability of its latest GPUs, OpenGL performance, and whether the company has a competitive answer to nVidia's SLI technology.

nVidia's next generation

We've heard a lot less on the rumor mill about nVidia's next-gen chip. Either they're doing a much better job keeping things under wraps, or they're further away from delivering it. The codename should be NV50, whenever it hits us. A rumor has been floating around that NV50 was "cancelled," but the company's naming scheme is pretty clear. Even if the supposed "NV50" was cancelled or postponed (and no credible source has told us it was), we'll probably still see their next-generation chip under that same name.

Though we don't know any specifics, we have spoken with architects at nVidia in the past about the whole idea of "general shader units" that operate on data that can be vertices, pixels, or more. It was always met with a knowing smile and a remark like "well that would seem to be a logical place to go." For what it's worth, Microsoft reps do the same thing. They're not that great at being coy. This is clearly the way PC graphics are headed, and we suspect nVidia's next-gen PC chip, just like ATI's, will follow this general paradigm.


We asked nVidia's Dr. Kirk ten questions—all but one asked by ExtremeTech readers. Topics range from HDR to open-source drivers and everything in between, and this array of interesting questions drew equally interesting answers from our guest guru.


Finding the faster card

So whose chip will be faster? That's going to come down to a lot of factors. How many ALUs (Arithmetic Logic Units) are in each pipeline? How many vector and scalar operations can be performed at once? Can vertex/pixel data be fetched while the ALUs crunch numbers, or can each pipeline only do one or the other at a time? Clock speeds can never be ignored, of course.
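As a rough illustration of how those factors multiply together, here is some back-of-the-envelope arithmetic in Python. Every figure below is made up for the sake of the example; none of them describe a real ATI or nVidia part.

# Hypothetical numbers only, to show how pipeline count, ALUs per pipeline,
# operation width, and clock speed combine into a theoretical peak figure.

pipelines = 16          # hypothetical general pipelines
alus_per_pipeline = 2   # hypothetical ALUs per pipeline
ops_per_alu = 4         # e.g. one 4-wide vector operation per clock
clock_hz = 500e6        # hypothetical 500 MHz core clock

ops_per_second = pipelines * alus_per_pipeline * ops_per_alu * clock_hz
print(f"{ops_per_second / 1e9:.0f} GFLOPS theoretical peak")  # 64 GFLOPS theoretical peak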

Memory bandwidth will continue to be a huge issue. The art resolution in games continues to increase, and getting all that data into the GPU for processing and back out to a frame buffer will take more bandwidth than ever. Currently, graphics cards use GDDR3 memory, which is just double data rate memory optimized for the extremely high clock speeds demanded of graphics cards. It's likely the next wave of graphics cards will use this as well, or possibly a "GDDR4," once again tweaking the spec to allow ever-higher clock speeds. Graphics cards already utilize a 256-bit memory interface, and it's not really practical to double that to 512-bit. The chip pin count would be enormous, and it would be practically impossible to route all those trace wires on the board itself. Of course, that's what they said about 256-bit memory interfaces in the days when all the cards used 128-bit interfaces.


What is needed is a way to deliver more bits per clock. Rambus' XDR memory may be the front-runner there. It delivers an "octal data rate," four times as many bits per clock as DDR, DDR2, GDDR3, etc. If a graphics card with 500MHz GDDR3 memory on a 256-bit interface has 32 GB/sec of memory bandwidth, then the same card with XDR memory at the same speed would have a whopping 128 GB/sec of bandwidth. Of course, there's no guarantee that XDR will go that fast yet, and so far no graphics manufacturer seems quite ready to move in Rambus' direction. If graphics cards move to XDR RAM, it will probably not be until 2006.
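Those bandwidth figures check out if you multiply bus width by memory clock by data transfers per clock; here is the arithmetic as a small Python sketch (the function name is ours).

# bandwidth = bus width (bytes) x memory clock (Hz) x data transfers per clock

def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock):
    return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

# 500 MHz GDDR3 (double data rate) on a 256-bit bus
print(bandwidth_gb_s(256, 500, 2))  # 32.0 GB/sec

# The same clock and bus width with XDR's octal data rate
print(bandwidth_gb_s(256, 500, 8))  # 128.0 GB/sec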

It's all about the Software, again

ATI and nVidia will play a mean game of technical one-upmanship, dazzling us with boatloads of technical jargon and impressive statistics, but every year it boils down to the same thing: who is going to run the most games the fastest, with the best image quality?

From what we've seen, next year will continue the trend of increasing use of DirectX 9 shaders in games. But we haven't seen anything yet that leads us to believe the extremely long shaders in Shader Model 3.0 are going to be critical. It looks like Shader Model 2.0 can do everything 3.0 can do for the next year, perhaps with only a small speed penalty. It will be important to have Shader Model 3.0 support to future-proof your card against the games of 2006, and maybe to provide a little speed boost, but the tech isn't going to make things look significantly different.


Longhorn support will be key. Microsoft's next OS will use your 3D accelerator to draw the desktop GUI, so robust support for the new APIs in Longhorn will be critical to future graphics cards' success. But Longhorn won't be here next year, and even the beta version that is due likely won't have the full-featured Avalon desktop compositing engine or Aero interface yet.

There's still a lot of driver work to be done, though. Preparing for a rock-solid Longhorn launch will be a top priority for graphics companies, but 64-bit computing should be a major, and more immediate, concern. This year we'll see the launch of the 64-bit version of Windows XP. Those with Athlon 64 CPUs, or Intel's upcoming 64-bit Pentium 4 CPUs, will want to snap it up to take advantage of 64-bit applications. But the OS requires new drivers, and that means both ATI and nVidia have to make their 64-bit Windows drivers every bit as fast and full-featured as those for 32-bit versions of Windows XP. Users will expect more from the 64-bit version of Windows XP, and it will be totally unacceptable if they run games more slowly because their drivers aren't up to snuff.


Video and final thoughts

Video will play a key role next year. The battle over video features and quality is starting to heat up, and as HDTV prices fall and millions start to bring home new digital televisions next year, it's going to get hotter. Look for nVidia's next chip to have a newer, even better video processing unit. ATI didn't change their video processing from the R300 series to the current R400 series, but they'll make a big leap forward in 2005. We've heard they will leverage upcoming technology from their Xilleon line of consumer electronics chips. Xilleon chips power many of today's top HDTVs and set-top boxes, so having that technology (or a future revision of it) in your PC graphics card will truly deliver on the promise of a "CE-like experience" from your PC.

So there's a lot to look forward to in 2005. Broad industry support for Shader Model 3.0 will mean that game developers can start to take real advantage of it, but don't expect many big games to look any different under SM3.0 than they do under 2.0, at least not in 2005. What we'll see instead is much broader adoption of DirectX 9's advanced features. You'll see a lot more games use DX9 shaders more heavily, and a greater emphasis put on shader performance.


New GPU architectures from ATI and nVidia will probably get rid of separate pixel and vertex processors, making more efficient use of the GPU to render the scene optimally, while at the same time giving developers more flexibility. Still, it's the next graphics API from Microsoft that will bring out the power of this more general architecture. We may get a glimpse of the possibilities first if Microsoft unveils the next-generation Xbox before ATI or nVidia unveil their new graphics cards. Expect to see real-time graphics that look like pre-rendered cut scenes in today's games. Next December when we write a retrospective on the year 2005, there may be a few surprises, but we'll almost certainly still say "it's been an exciting year for PC graphics."
 
A good summary, overall, but I'm pretty sure that nVidia is not really that sympathetic to the concept of unified shaders (general pipelines, as the article puts it) and believes dedicated pixel/vertex shader units are more hassle-free and efficient overall than unified shaders. But since Microsoft wants unified shaders, they'll have to bend over and take it, eventually.
 
tahrikmili said:
A good summary, overall, but I'm pretty sure that nVidia is not really that sympathetic to the concept of unified shaders (general pipelines, as the article puts it) and believes dedicated pixel/vertex shader units are more hassle-free and efficient overall than unified shaders. But since Microsoft wants unified shaders, they'll have to bend over and take it, eventually.


It is a little more involved, really. Unlike DirectX, this time Microsoft and ATI are not only dictating the API, they are also dictating the hardware implementation. What NVIDIA wanted was to support the unified shader spec at the software level while, under the hood, the hardware had separate pixel and vertex shaders doing all the work, if I recall correctly. Hence, unified shaders became an issue.
 
ATI is talking about next-gen but they still can't get this-gen onto store shelves.

Hopefully MS $ will make next-gen ATI chips exist in some significant form.
 