
Nvidia GT300 + ATI Rx8xx info-rumor thread

Roquentin

Member
Death Dealer said:
Was DX10 ever relevant? Not IMO.
Both DX11 and Windows 7 should make DX10 relevant. A lot of people already have DX10 hardware, a lot of people already have Vista, and a lot of people want to move from XP to W7. DX11 doesn't bring a new API (like DX10 did), and some of its features will be compatible with DX10 hardware.
 

HiResDes

Member
So now the biggest question becomes: how will developers take advantage of the multitude of cores and pixel shaders?... If development costs are already high, is it even plausible to think that the influx of new and improved technology will necessarily translate to a comparable advancement in the graphics department?
 
Dani said:
I can't decipher what that means to me as a casual PC gamer who might splash cash on some new graphics hardware in the next 12 months or beyond.

Better eye candy, right?

:lol :lol :lol

I was also really lost.
 
I love how inexpensive graphics cards are getting.

I just scored a 260 GTX for about $150, so I probably won't have to upgrade for a while.
(I don't care about all the bells and whistles... medium settings and good FPS (40+) are what I care about.)
 

HiResDes

Member
fistfulofmetal said:
So should I wait for something like the 5870 or maybe just go with two 4870s?
Personally I'm waiting for the HIS versions of the HD4770, which will reduce heat output significantly.
 

DaFish

Member
I'm not big on crunching numbers... so someone please tell me

How much better is the GT300 than my 8800 Ultra?
 
camineet said:
Welcome.

And just to be fair regarding consoles, the X360's performance, CPU+GPU combined, is 0.355 TFLOPS
(CPU: 115 GFLOPS + GPU: 240 GFLOPS) which is more realistic compared to the 1 TFLOP marketing figure.

For those that are curious about their little Wii, it gets 0.015 TFLOPS

CPU+GPU combined: 15.75 GFLOPS
(Broadway CPU: 2.85 GFLOPS + Hollywood GPU: 12.9 GFLOPS)

which is exactly 50% more than GameCube's 10.5 GFLOPS
(Gekko CPU: 1.9 GFLOPS + Flipper GPU: 8.6 GFLOPS)


On another note, now imagine what Nintendo could do with a low-end to midrange GPU that is at least one GPU-generation beyond Rx8xx.

AMD/ATI is not only working on the R9xx generation but the R1000 as well.

The Flipper/Hollywood GPU architecture is now 10-year-old tech.

http://cube.ign.com/articles/099/099520p1.html



The Wii to Wii HD/Wii 2 could represent an 11-12 year leap in GPU technology O_O
so what you're saying is Wii isn't as powerful as 2 gamecubes taped together? :lol

Ps3: 46.8 gamecubes
360: 35.5 gamecubes
Wii: 1.5 gamecubes :lol
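
(For anyone who wants to sanity-check those combined figures, here's a quick throwaway C++ snippet. The per-part GFLOPS values are the ones quoted above, not official specs, so treat the output the same way.)

[code]
#include <cstdio>

// Quick sanity check of the combined-FLOPS figures quoted above.
// The per-part GFLOPS values are the ones from the post, not official specs.
int main() {
    const double x360_cpu = 115.0, x360_gpu = 240.0; // Xenon + Xenos
    const double wii_cpu  = 2.85,  wii_gpu  = 12.9;  // Broadway + Hollywood
    const double gc_cpu   = 1.9,   gc_gpu   = 8.6;   // Gekko + Flipper

    const double x360 = x360_cpu + x360_gpu; // 355 GFLOPS = 0.355 TFLOPS
    const double wii  = wii_cpu  + wii_gpu;  // 15.75 GFLOPS
    const double gc   = gc_cpu   + gc_gpu;   // 10.5 GFLOPS

    std::printf("X360: %.2f GFLOPS\n", x360);
    std::printf("Wii : %.2f GFLOPS, or %.2fx a GameCube\n", wii, wii / gc); // ~1.5x
    return 0;
}
[/code]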
 

Zyzyxxz

Member
DaFish said:
I'm not big on crunching numbers... so someone please tell me

How much better is the GT300 than my 8800 Ultra?

Judging from the estimates and prototype numbers, I speculate it's piles and piles better.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
fistfulofmetal said:
So should I wait for something like the 5870 or maybe just go with two 4870s?
I had a 4870x2 and it was not all that great tbh.
 
HiResDes said:
Personally I'm waiting for the HIS versions of the HD4770, which will reduce heat output significantly.

Have you read the reviews? The 4770 has one of the lowest heat outputs of any card out right now. I am personally waiting to see if they put out a 1GB version or I may just get a cheaper 4870 512 to hold me over till the DX11 cards.

How good are ATI at keeping their dates? If it's a definite June release I will just wait.
 

camineet

Banned
Now that ATI has established a 40nm manufacturing process with its Radeon HD 4770, it was only a matter of time before the rumour mill began to churn out details of other upcoming 40nm parts.

Today, the attention's turning to potential RV870-based parts. According to German site hardware-infos.com, ATI's RV770 successor will arrive as soon as July '09 in the form of the Radeon HD 5870 and the dual-GPU Radeon HD 5870 X2.

It is, of course, entirely speculation at this moment in time. However, should the report be believed, the products will line up with the following specifications:

they've got a nice chart with all the specs:

http://www.hexus.net/content/item.php?item=18240
 

mr stroke

Member
fistfulofmetal said:
So should I wait for something like the 5870 or maybe just go with two 4870s?

Wait, considering it's only 2-3 months away. Plus, if you decide to go with a 4870, they will be dirt cheap by then. I am hoping ATI sticks with the same price range next round.
5870-for-$300 please
 

sankao

Member
DaFish said:
I'm not big on crunching numbers... so someone please tell me

How much better is the GT300 than my 8800 Ultra?

8800 : 128 cores
GT300 : 512 cores

So, basically, 4 times. The GT300 also has more stuff related to memory access and thread scheduling, it seems, but video games are not too demanding in that regard as far as I know.

Put another way, whatever your 8800 Ultra can run at 720p 30fps, the GT300 can run at 1080p 60fps.
 

RavenFox

Banned
This is not the only change - cluster organization is no longer static. The Scratch Cache is much more granular and allows for larger interactivity between the cores inside the cluster. GPGPU e.g. GPU Computing applications should really benefit from this architectural choice. When it comes to gaming, the question is obviously - how good can GT300 be? Please do bear in mind that this 32-core cluster will be used in next-generation Playstation console, Tegra, Tesla, GeForce and Quadro cards.
Fixed with love:D

thuway said:
I also wonder how Cell 2 will compete with it :lol .
If the parts in the PS3 are working together in harmony, why would Cell have to compete? They will just integrate like the last part. If they do this, the performance benefits will be ridiculous. Let's hope the price doesn't get too high.
 

DopeyFish

Not bitter, just unsweetened
sankao said:
8800 : 128 cores
GT300 : 512 cores

So, basically, 4 times. The GT300 also has more stuff related to memory access and thread scheduling, it seems, but video games are not too demanding in that regard as far as I know.

Put another way, whatever your 8800 Ultra can run at 720p 30fps, the GT300 can run at 1080p 60fps.


It also depends on the frequency of the cores.

If they do hit 2GHz, a normal 8800 GT user would see an increase of ~6 times (112 cores in that variant @ 1.5GHz). It might be more or less, depending on how well they scale and how much less or more efficient the cores get with their added functions.

Sorry I couldn't get better info, but info is pretty limited on ALU frequencies :p
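
(If you want to see the napkin math behind those multipliers, it's just shader count times shader clock. The 512 cores and the 2GHz clock are rumored numbers, and real games won't scale anywhere near this linearly, so this is only a rough upper bound.)

[code]
#include <cstdio>

// Napkin math only: raw shader throughput ~ shader count x shader clock.
// GT300's 512 cores and ~2GHz clock are rumored, not confirmed, and game
// performance never scales linearly with this ratio.
int main() {
    const double ultra_cores = 128, ultra_clk = 1.512; // 8800 Ultra shader clock (GHz)
    const double gt_cores    = 112, gt_clk    = 1.5;   // 8800 GT
    const double gt300_cores = 512, gt300_clk = 2.0;   // rumored

    const double vs_ultra = (gt300_cores * gt300_clk) / (ultra_cores * ultra_clk);
    const double vs_gt    = (gt300_cores * gt300_clk) / (gt_cores * gt_clk);

    std::printf("GT300 vs 8800 Ultra: ~%.1fx raw shader throughput\n", vs_ultra); // ~5.3x
    std::printf("GT300 vs 8800 GT   : ~%.1fx raw shader throughput\n", vs_gt);    // ~6.1x
    return 0;
}
[/code]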
 

xabre

Banned
God's Beard said:
so what you're saying is Wii isn't as powerful as 2 gamecubes taped together? :lol

Ps3: 46.8 gamecubes
360: 35.5 gamecubes
Wii: 1.5 gamecubes :lol

I really don't give a fuck.

Why do we have to have this stupid junk in a thread about UPCOMING PC GRAPHICS CARDS???
 
camineet said:
they've got a nice chart with all the specs:

http://www.hexus.net/content/item.php?item=18240

Wow, if that's true it's extremely similar in design to the RV7xx despite being DX11 compliant. AMD must be glad they already implemented a tessellator; they already have a much smaller GPU compared to Nvidia and they don't have to add much extra for DX11 compliance, probably mostly just instruction set extensions to accommodate the new shader types. Does anyone know if there's a chance DX11 tessellation will work on RV7xx GPUs?
 
xabre said:
What's DX11 offering over DX10 anyway?

For games: lots of multithreading improvements, much better texture compression, and tessellation (which is what the hull and domain shaders are used for). They also improved HLSL to include high-level constructs like classes and interfaces, which will make coding and debugging much easier. I just read an article and it seems many of the improvements will be compatible with DX10 GPUs except for tessellation. On the GPGPU side they added the compute shader, which is MS's answer to CUDA and OpenCL. Adoption should be much quicker as it will be available for both Windows Vista and Windows 7, and it offers benefits for all DX10-and-up GPUs. From a developer's perspective there's really no point in using DX10 over DX11.
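
(To put that last point in concrete terms: the Direct3D 11 API can create a device against DX10-class hardware through feature levels, so you write to one API and the runtime picks the best level the card supports. Rough, untested sketch with error handling stripped; Windows-only, obviously.)

[code]
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

// Minimal sketch: ask for a D3D11 device and let the runtime fall back to
// DX10-class hardware via feature levels. Error handling omitted for brevity.
int main() {
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, // full DX11 hardware (tessellation, compute shader 5.0)
        D3D_FEATURE_LEVEL_10_1, // DX10.1 hardware
        D3D_FEATURE_LEVEL_10_0, // DX10 hardware: no tessellation, optional CS 4.x
    };

    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    got;

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        requested, (UINT)(sizeof(requested) / sizeof(requested[0])),
        D3D11_SDK_VERSION, &device, &got, &context);

    if (SUCCEEDED(hr)) {
        std::printf("Created device at feature level 0x%x\n", (unsigned)got);
        context->Release();
        device->Release();
    }
    return 0;
}
[/code]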
 

tokkun

Member
sankao said:
8800 : 128 cores
GT300 : 512 cores

So, basically, 4 times. The GT300 also has more stuff related to memory access and thread scheduling, it seems, but video games are not too demanding in that regard as far as I know.

Put another way, whatever your 8800 Ultra can run at 720p 30fps, the GT300 can run at 1080p 60fps.

You have to be careful about the GPU manufacturer's definition of cores.
If you look at ATI's stuff, they will claim 800 cores.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
tokkun said:
You have to be careful about the GPU manufacturer's definition of cores.
If you look at ATI's stuff, they will claim 800 cores.
ATI isn't lying, it is just that due to differences in the architecture, some cores are less powerful than others.
 
tokkun said:
You have to be careful about the GPU manufacturer's definition of cores.
If you look at ATI's stuff, they will claim 800 cores.
By any sane definition both really only have 10 cores but Nvidia does have more flexibility within each core.

EDIT: You could say Nvidia has 30 cores but it's debatable.
 

tokkun

Member
godhandiscen said:
ATI isn't lying, it is just that due to differences in the architecture, some cores are less powerful than others.

I feel like the use of the term 'core' is a lie when 'SIMD lane' is a more accurate description.
 

artist

Banned
tokkun said:
You have to be careful about the GPU manufacturer's definition of cores.
If you look at ATI's stuff, they will claim 800 cores.
In Nvidia speak (bullshit marketing crap): Nvidia's 128 machine guns vs ATI's 800 BB guns.
 
irfan said:
In Nvidia speak (bullshit marketing crap): Nvidia's 128 machine guns vs ATI's 800 BB guns.

Well, it's 240 with the GT200, and they are actually quite similar, each having a single-precision FP, Int and Mov/Cmp unit. Nvidia's run at a higher clock speed and the overall design is more efficient; for example, each of the SPs can work on a separate thread, while with AMD they are in groups of 5, which means 240 simultaneous threads on Nvidia vs 160 for AMD. Nvidia also has 60 extra SFUs (sin/cos etc.) which they don't count as SPs, while with AMD 1 out of every 5 SPs has this as an extra capability. It's really confusing.
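
(Tiny illustration of that counting, using only the numbers from this post. The real scheduling is messier than this, so take it as the arithmetic and nothing more.)

[code]
#include <cstdio>

// Just the counting from the post above: GT200 SPs are scalar, RV770 ALUs
// are grouped 5-wide (VLIW), so "800 cores" maps to 160 independent groups.
int main() {
    const int nvidia_sps = 240; // GT200 scalar stream processors
    const int amd_alus   = 800; // RV770 ALUs as marketed
    const int amd_group  = 5;   // ALUs per VLIW group

    std::printf("Nvidia independent streams: %d\n", nvidia_sps);           // 240
    std::printf("AMD independent groups    : %d\n", amd_alus / amd_group); // 160
    return 0;
}
[/code]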
 

M3d10n

Member
xabre said:
What's DX11 offering over DX10 anyway?
Only two really major features, IMO:

1) Truly multithreaded rendering: multiple threads can draw to the same direct3D device, allowing the rendering itself to be split into multiple threads (up to now applications were forced to concentrate the actual rendering phase in one thread). This is backwards compatible with DX10 and even DX9 hardware, so new games can take advantage of this without requiring DX11 GPUs.

2) The tessellation stuff. Many people think this is like ATI's old TruForm stuff, but it's not. Here's what can be done with it:

- Use nurbs/bezier/subdivision model assets (like those used in CGI) in real-time applications.
- Take normal maps to the next level by having them truly displace geometry, with results that are far better than parallax tricks (think ZBrush/MudBox quality).
- Different areas of the same model can be tessellated differently, based on camera distance or visible surface angle. Done right, this can eliminate harsh polygonal silhouettes and blocky/pointy fingers.
- Less VRAM spent on geometry, since nurbs/bezier/subdivision/displacement needs far fewer vertices/control points.
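
(For the curious, point 1 roughly looks like this in code: worker threads record D3D11 command lists on deferred contexts and the main thread replays them. It's only a sketch; the function names and worker count are made up for illustration, and device setup, the actual draw calls and synchronization details are hand-waved.)

[code]
#include <d3d11.h>
#include <thread>
#include <vector>

// Sketch of D3D11 multithreaded rendering: worker threads record commands on
// deferred contexts, the immediate context replays them in order. Device and
// swap-chain setup, real draw calls and error checks are omitted.
void RecordScenePart(ID3D11Device* device, ID3D11CommandList** outList) {
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... set state and issue draw calls for this chunk of the scene ...

    deferred->FinishCommandList(FALSE, outList); // bake into a command list
    deferred->Release();
}

void RenderFrame(ID3D11Device* device, ID3D11DeviceContext* immediate) {
    const int kWorkers = 4;
    std::vector<ID3D11CommandList*> lists(kWorkers, nullptr);
    std::vector<std::thread> workers;

    for (int i = 0; i < kWorkers; ++i)
        workers.emplace_back(RecordScenePart, device, &lists[i]);
    for (auto& t : workers)
        t.join();

    for (ID3D11CommandList* cl : lists) { // replay on the immediate context
        immediate->ExecuteCommandList(cl, FALSE);
        cl->Release();
    }
}
[/code]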
 

jett

D-Member
M3d10n said:
Only two really major features, IMO:

1) Truly multithreaded rendering: multiple threads can draw to the same direct3D device, allowing the rendering itself to be split into multiple threads (up to now applications were forced to concentrate the actual rendering phase in one thread). This is backwards compatible with DX10 and even DX9 hardware, so new games can take advantage of this without requiring DX11 GPUs.

2) The tessellation stuff. Many people think this is like ATI's old TruForm stuff, but it's not. Here's what can be done with it:

- Use nurbs/bezier/subdivision model assets (like those used in CGI) in real-time applications.
- Take normal maps to the next level by having them truly displace geometry, with results that are far better than parallax tricks (think ZBrush/MudBox quality).
- Different areas of the same model can be tessellated differently, based on camera distance or visible surface angle. Done right, this can eliminate harsh polygonal silhouettes and blocky/pointy fingers.
- Less VRAM spent on geometry, since nurbs/bezier/subdivision/displacement needs far fewer vertices/control points.

And my DX10 video card will be able to do all this shit? Nice.
 

dLMN8R

Member
xabre said:
What's DX11 offering over DX10 anyway?
DirectX 11 is to DirectX 10 what Windows 7 is to Windows Vista.

In Vista, practically every underlying system was rewritten from the ground up. The result was something that wasn't completely optimized, but that improved over time as Microsoft worked on it. In essence, it was a better, more robust, more secure operating system, but it was all internals, stuff that consumers didn't see an advantage to. It was an overhauled underlying platform that couldn't be fully taken advantage of right from the get-go. Just like DirectX 10.

Now Windows 7, much like DirectX 11, is the result of 2-3 years' work building on top of and improving those overhauled fundamentals. It's more efficient at doing the same things, with some new features built on top of the underlying architecture that wouldn't have been possible without the first step of Windows Vista and DirectX 10.
 

M3d10n

Member
The Doom 3 imp model. On the left it's just subdivided, on the right it's using a displacement map. The base model has roughly the same number of polygons as the original Doom 3 model.
[image: rs4dwn.jpg]
 

Nirolak

Mrgrgr
dazzgc said:
Crysis at a constant 60fps maxed out with these cards?
If they rewrote the engine to actually take advantage of DirectX 11, absolutely.

Otherwise, that's up in the air, but not really the fault of the card.

M3d10n said:
The Doom 3 imp model. On the left it's just subdivided, on the right it's using a displacement map. The base model has roughly the same number of polygons as the original Doom 3 model.
http://i43.tinypic.com/rs4dwn.jpg
Now that's impressive.
 

camineet

Banned
GT300 likely delayed until Q1 2010

http://www.theinquirer.net/inquirer/news/1052025/gt300-delayed-till-2010
http://www.electronista.com/articles/09/05/05/nvidia.gt300.delayed.more/

So it looks like GT300 and Larrabee will be launching around the same time.


Don't know what this means for AMD and Rx8xx. It could be that AMD will be the only graphics provider with a DX11 part this year, as if it matters, as if there's a ton of DX11 games needing a new card. There is the Windows 7 release, and it will have DirectX 11/Direct3D 11 built in, but to what extent it makes use of DX11 hardware I have no idea.


p.s. yeah that DOOM 3 model on the right with DM is very impressive.
 

1-D_FTW

Member
camineet said:
p.s. yeah that DOOM 3 model on the right with DM is very impressive.

Is it actually supposed to do these things by default (i.e. improve old game visuals with hardware tricks)? This technology confuses me. It sounds like it's a next-gen improvement on ATI's TruForm. But the thing about TruForm is I remember all the hype and articles leading up to its release. And I never, to my knowledge, had it used in a single game I owned. So I'm just wondering if this is even something to be excited about. Or just some hypothetical thing that'll never be used once the card is released.
 

camineet

Banned
Inq just posted an interesting article on GT300 architecture. If this article is to be believed, GT300 is completely fucked. But I assume Inq is just biased against Nvidia. Still, it's worth reading.

A look at the Nvidia GT300 architecture
Analysis Compromised by wrong vision
By Charlie Demerjian
Thursday, 14 May 2009, 00:55

THERE'S A LOT of fake news going around about the upcoming GPUish chip called the GT300. Let's clear some air on this Larrabee-lite architecture.

First of all, almost everything you have heard about the two upcoming DX11 architectures is wrong. There is a single source making up news, and second rate sites are parroting it left and right. The R870 news is laughably inaccurate, and the GT300 info is quite curious too. Either ATI figured out a way to break the laws of physics with memory speed and Nvidia managed to almost double its transistor density - do the math on purported numbers, they aren't even in the ballpark - or someone is blatantly making up numbers.

That said, let's get on with what we know, and delve into the architectures a bit. The GT300 is going to lose, badly, in the GPU game, and we will go over why and how.

First a little background science and math. There are three fabrication processes out there that ATI and Nvidia use, all from TSMC: 65nm, 55nm and 40nm. They are each a 'half step' from the next, and 65nm to 40nm is a full step. If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 * 65) ~= 0.72) saves you about 1/4 the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area. We will be using these later.

Second is the time it takes to do things. We will use the best case scenarios, with a hot lot from TSMC taking a mere six weeks, and the time from wafers in to boards out of an AIB being 12 weeks. Top it off with test and debug times of two weeks for first silicon and one week for each subsequent spin. To simplify rough calculations, all months will be assumed to have 4 weeks.

Okay, ATI stated that it will have DX11 GPUs on sale when Windows 7 launches, purportedly October 23, 2009. Since this was done in a financial conference call, SEC rules applying, you can be pretty sure ATI is serious about this. Nvidia on the other hand basically dodged the question, hard, in its conference call the other day.

At least you should know why Nvidia picked the farcical date of October 15 for its partners. Why farcical? Let's go over the numbers once again.

According to sources in Satan Clara, GT300 has not taped out yet, as of last week. It is still set for June, which means best case, June 1st. Add six weeks for first silicon, two more for initial debug, and you are at eight weeks, minimum. That means the go or no-go decision might be made as early as August 1st. If everything goes perfectly, and there is no second spin required, you would have to add 90 days to that, meaning November 1st, before you could see any boards.

So, if all the stars align, and everything goes perfectly, Nvidia could hit Q4 of 2009. But that won't happen.

Why not? There is a concept called risk when doing chips, and the GT300 is a high risk part. GT300 is the first chip of a new architecture, or so Nvidia claims. It is also going to be the first GDDR5 part, and moreover, it will be Nvidia's first 'big' chip on the 40nm process.

Nvidia chipmaking of late has been laughably bad. GT200 was slated for November of 2007 and came out in May or so in 2008, two quarters late. We are still waiting for the derivative parts. The shrink, GT206/GT200b is technically a no-brainer, but instead of arriving in August of 2008, it trickled out in January, 2009. The shrink of that to 40nm, the GT212/GT200c was flat out canceled, Nvidia couldn't do it.

The next largest 40nm part, the GT214 also failed, and it was redone as the GT215. The next smallest parts, the GT216 and GT218, very small chips, are hugely delayed, perhaps to finally show up in late June. Nvidia can't make a chip that is one-quarter of the purported size of the GT300 on the TSMC 40nm process. That is, make it at all, period - making it profitably is, well, a humorous concept for now.

GT300 is also the first DX11 part from the green team, and it didn't even have DX10.1 parts. Between the new process, larger size, bleeding-edge memory technology, dysfunctional design teams, new feature sets and fab partners trashed at every opportunity, you could hardly imagine ways to have more risk in a new chip design than Nvidia has with the GT300.

If everything goes perfectly and Nvidia puts out a GT300 with zero bugs, or easy fix minor bugs, then it could be out in November. Given that there is only one GPU that we have heard of that hit this milestone, a derivative part, not a new architecture, it is almost assuredly not going to happen. No OEM is going to bet their Windows 7 launch vehicles on Nvidia's track record. They remember the 9400, GT200, and well, everything else.

If there is only one respin, you are into 2010. If there is a second respin, then you might have a hard time hitting Q1 of 2010. Of late, we can't think of any Nvidia product that hasn't had at least two respins, be they simple optical shrinks or big chips.

Conversely, the ATI R870 is a low risk part. ATI has a functional 40nm part on the market with the RV740/HD4770, and has had GDDR5 on cards since last June. Heck, it basically developed GDDR5. The RV740 - again, a part already on the market - is rumored to be notably larger than either the GT216 or 218, and more or less the same size as the GT215 that Nvidia can't seem to make.

DX11 is a much funnier story. The DX10 feature list was quite long when it was first proposed. ATI dutifully worked with Microsoft to get it implemented, and did so with the HD2900. Nvidia stomped around like a petulant child and refused to support most of those features, and Microsoft stupidly capitulated and removed large tracts of DX10 functionality.

This had several effects, the most notable being that the now castrated DX10 was a pretty sad API, barely moving anything forward. It also meant that ATI spent a lot of silicon area implementing things that would never be used. DX10.1 put some of those back, but not the big ones.

DX11 is basically what DX10 was meant to be with a few minor additions. That means ATI has had a mostly DX11 compliant part since the HD2900. The R870/HD5870 effectively will be the fourth generation DX11 GPU from the red team. Remember the tessellator? Been there, done that since 80nm parts.

This is not to say that it will be easy for either side. TSMC has basically come out and said that its 40nm process is horrid, an assertion backed up by everyone that uses it. That said, both the GT300 and R870 are designed for the process, so they are stuck with it. If yields can't be made economically viable, you will be in a situation of older 55nm parts going head to head for all of 2010. Given Nvidia's total lack of cost competitiveness on that node, it would be more a question of them surviving the year.

That brings us to the main point, what is GT300? If you recall Jen-Hsun's mocking jabs about Laughabee, you might find it ironic that GT300 is basically a Larrabee clone. Sadly though, it doesn't have the process tech, software support, or architecture behind it to make it work, but then again, this isn't the first time that Nvidia's grand prognostications have landed on its head.

The basic structure of GT300 is the same as Larrabee. Nvidia is going to use general purpose 'shaders' to do compute tasks, and the things that any sane company would put into dedicated hardware are going to be done in software. Basically DX11 will be shader code on top of a generic CPU-like structure. Just like Larrabee, but from the look of it, Larrabee got the underlying hardware right.

Before you jump up and down, and before all the Nvidiots start drooling, this is a massive problem for Nvidia. The chip was conceived at a time when Nvidia thought GPU compute was actually going to bring it some money, and it was an exit strategy for the company when GPUs went away.

It didn't happen that way, partially because of buggy hardware, partially because of over-promising and under-delivering, and then came the deathblows from Larrabee and Fusion. Nvidia's grand ambitions were stuffed into the dirt, and rightly so.

Nvidia Investor Relations tells people that between five to ten per cent of the GT200 die area is dedicated to GPU compute tasks. The GT300 goes way farther here, but let's be charitable and call it 10 per cent. This puts Nvidia at a 10 per cent areal disadvantage to ATI on the DX11 front, and that is before you talk about anything else. Out of the gate in second place.

On 55nm, the ATI RV790 basically ties the GT200b in performance, but does it in about 60 per cent of the area, and that means less than 60 per cent of the cost. Please note, we are not taking board costs into account, and if you look at yield too, things get very ugly for Nvidia. Suffice it to say that architecturally, GT200 is a dog, a fat, bloated dog.

Rather than go lean and mean for GT300, possibly with a multi-die strategy like ATI, Nvidia is going for bigger and less areally efficient. They are giving up GPU performance to chase a market that doesn't exist, but was a nice fantasy three years ago. Also, remember that part about ATI's DX10 being the vast majority of the current DX11? ATI is not going to have to bloat its die size to get to DX11, but Nvidia will be forced to, one way or another. Step 1) Collect Underpants. Step 2) ??? Step 3) Profit!

On the shrink from 55nm to 40nm, you about double your transistor count, but due to current leakage, doing so will hit a power wall. Let's assume that both sides can double their transistor counts and stay within their power budgets though, that is the best case for Nvidia.

If AMD doubles its transistor count, it could almost double performance. If it does, Nvidia will have to as well. But, because Nvidia has to add in all the DX11 features, or additional shaders to essentially dedicate to them, its chips' areal efficiency will likely go down. Meanwhile, ATI has those features already in place, and it will shrink its chip sizes to a quarter of what they were in the 2900, or half of what they were in the R770.

Nvidia will gain some area back when it goes to GDDR5. Then the open question will be how wide the memory interface will have to be to support a hugely inefficient GPGPU strategy. That code has to be loaded, stored and flushed, taking bandwidth and memory.

In the end, what you will end up with is ATI that can double performance if it chooses to double shader count, while Nvidia can double shader count, but it will lose a lot of real world performance if it does.

In the R870, if you compare the time it takes to render 1 Million triangles from 250K using the tessellator, it will take a bit longer than running those same 1 Million triangles through without the tessellator. Tessellation takes no shader time, so other than latency and bandwidth, there is essentially zero cost. If ATI implemented things right, and remember, this is generation four of the technology, things should be almost transparent.

Contrast that with the GT300 approach. There is no dedicated tessellator, and if you use that DX11 feature, it will take large amounts of shader time, used inefficiently as is the case with general purpose hardware. You will then need the same shaders again to render the triangles. 250K to 1 Million triangles on the GT300 should be notably slower than straight 1 Million triangles.

The same should hold true for all DX11 features, ATI has dedicated hardware where applicable, Nvidia has general purpose shaders roped into doing things far less efficiently. When you turn on DX11 features, the GT300 will take a performance nosedive, the R870 won't.

Worse yet, when the derivatives come out, the proportion of shaders needed to run DX11 will go up for Nvidia, but the dedicated hardware won't change for ATI. It is currently selling parts on the low end of the market that have all the "almost DX11" features, and is doing so profitably. Nvidia will have a situation on its hands in the low end that will make the DX10 performance of the 8600 and 8400 class parts look like drag racers.

In the end, Nvidia architecturally did just about everything wrong with this part. It is chasing a market that doesn't exist, and skewing its parts away from their core purpose, graphics, to fulfill that pipe dream. Meanwhile, ATI will offer you an x86 hybrid Fusion part if that is what you want to do, and Intel will have Larrabee in the same time frame.

GT300 is basically Larrabee done wrong for the wrong reasons. Amusingly though, it misses both of the attempted targets. R870 should pummel it in DX10/DX11 performance, but if you buy a $400-600 GPU for ripping DVDs to your iPod, Nvidia has a card for you. Maybe. Yield problems notwithstanding.

GT300 will be quarters late, and without a miracle, miss back to school, the Windows 7 launch, and Christmas. It won't come close to R870 in graphics performance, and it will cost much more to make. This is not an architecture that will dig Nvidia out of its hole, but instead will dig it deeper. It made a Laughabee. µ

http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture
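
(Side note: the die-area arithmetic in the piece does check out, since area goes with the square of the linear feature size. Quick check, just reproducing the article's own numbers:)

[code]
#include <cstdio>

// The article's die-area scaling: area scales with the square of the linear
// feature size, so a 65nm -> 40nm shrink is (40/65)^2 of the original area.
int main() {
    auto ratio = [](double to, double from) { return (to * to) / (from * from); };

    std::printf("65nm -> 55nm: %.2f of the area\n", ratio(55, 65)); // ~0.72
    std::printf("55nm -> 40nm: %.2f of the area\n", ratio(40, 55)); // ~0.53
    std::printf("65nm -> 40nm: %.2f of the area\n", ratio(40, 65)); // ~0.38
    return 0;
}
[/code]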
 

xemumanic

Member
camineet said:
Inq just posted an interesting article on GT300 architecture. If this article is to be believed, GT300 is completely fucked. But I assume Inq is just biased against Nvidia. Still, it's worth reading.



http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture

Typical Charlie BS.

Additionally, there are so many rumors going around, both bad and good, for both ATI and Nvidia that it's best to just wait it out till we have more credible data.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
camineet said:
GT300 likely delayed until Q1 2010

http://www.theinquirer.net/inquirer/news/1052025/gt300-delayed-till-2010
http://www.electronista.com/articles/09/05/05/nvidia.gt300.delayed.more/

So it looks like GT300 and Larrabee will be launching around the same time.


Don't know what this means for AMD and Rx8xx. It could be that AMD will be the only graphics provider with a DX11 part this year, as if it matters, as if there's a ton of DX11 games needing a new card. There is the Windows 7 release, and it will have DirectX 11/Direct3D 11 built in, but to what extent it makes use of DX11 hardware I have no idea.


p.s. yeah that DOOM 3 model on the right with DM is very impressive.

WOW. Technically ATI could kill Nvidia if they don't have anything new during this xmas season. I will still wait for the GT300 and some real benches to be out before I make my new purchase. This means I might finally be able to use my upgrade money for something else.
camineet said:
Inq just posted an interesting article on GT300 architecture. If this article is to be believed, GT300 is completely fucked. But I assume Inq is just biased against Nvidia. Still, it's worth reading.

Charlie said:
The GT300 is going to lose, badly, in the GPU game, and we will go over why and how.

http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture
lol @ Charlie. The same fuckwad that championed the 3700 series when Nvidia made ATI lick its cock. I learned my lesson and will never listen to that retard. He also claimed the 4800 architecture to be vastly superior to the GTX200 one, and in the end they are neck and neck. Look, I am an ATI fanboy but that asshole is a disgrace.
 

artist

Banned
xemumanic said:
Typical Charlie BS.

Additionally, there are so many rumors going around, both bad and good, for both ATI and Nvidia that it's best to just wait it out till we have more credible data.
Disregard it as BS, but he was the whistleblower on a lot of Nvidia-related things... the bad-bump Nvidia GPUs, GT212 canned, GT200b delayed and a whole lot more.

Charlie has good sources, you just have to wade through the intentional drama in his articles.
 

Wallach

Member
M3d10n said:
The Doom 3 imp model. On the left it's just subdivided, on the right it's using a displacement map. The base model has roughly the same number of polygons as the original Doom 3 model.

[image: sheeeit2.jpg]
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
irfan said:
Disregard it as BS, but he was the whistleblower on a lot of Nvidia-related things... the bad-bump Nvidia GPUs, GT212 canned, GT200b delayed and a whole lot more.

Charlie has good sources, you just have to wade through the intentional drama in his articles.
Charlie is beyond optimistic with his ATI fanboyism. Those are a couple of rumors he got right in how long? Every day Charlie has new BS to bash Nvidia; of course something must be true every once in a while. If ATI wins, cool, I want ATI to win. I want to lick Nvidia fanboy tears because it will be fun for a night at the bar. However, in the long run I would miss Nvidia, the strong competitor that delivered excellent products during these last two years. If the GT300 is such a flop, I can see ATI just OC'ing its card for Q2 of 2010, which would suck.
 