Eh, I didn't suggest you take his word as the gospel of truth. Neither did I imply that Nvidia (GT300) should go kaput :/ godhandiscen said: Charlie is beyond optimistic with his ATI fanboyism. Those are a couple of rumors he got right in how long? Every day Charlie has new BS to bash Nvidia; of course something must be true every once in a while. If ATI wins, cool, I want ATI to win. I want to lick Nvidia fanboy tears because it will be fun for a night at the bar. However, in the long run I would miss Nvidia, the strong competitor that delivered excellent products over these last two years. If the GT300 is such a flop, I can see ATI just OC'ing its card for Q2 of 2010, which would suck.
Well, I am truly hoping the GT300 series gets delayed until Q1 of 2010, since I don't think I am growing attached to my GTX 295. irfan said: Eh, I didn't suggest you take his word as the gospel of truth. Neither did I imply that Nvidia (GT300) should go kaput :/
His sources claiming the GT300 is behind schedule could be accurate, given his past record. However, his claim that the RV870 will kick the GT300's ass should be taken with a grain of salt, because he knows squat about either chip's computational power to be making those statements. If he had that info, why wouldn't he post it?
So like I said, read through the article and you'll know what parts are his speculation and what came from his sources.
M3d10n said: The Doom 3 imp model. On the left it's just subdivided; on the right it's using a displacement map. The base model has roughly the same amount of polygons as the original Doom 3 model.
Nvidia's 512-core GT300 taped out at 40nm, already in A1 silicon
May. 18, 2009 (1:50 pm) By: Rick Hodgin
Nvidia's next-gen Tesla GPGPU engine, the 40nm GT300 GPU, has been confirmed to be in A1 silicon at Nvidia's labs, meaning it actually taped out sometime in January, February or March.
The first silicon produced would've been A0, meaning Nvidia is already one stepping into pre-production, which is not uncommon. In fact there may be a solid explanation for it, as it was previously rumored that both ATI and Nvidia are having trouble with TSMC's 40nm process technology, and that could be affecting yields. If true, then the re-spin (moving from one stepping to another) could have been done not for performance reasons exactly, but rather to address TSMC's 40nm issues.
The GT300 is the Tesla part. There are additional Gx300yy chips, such as the G300, which will be the GeForce desktop card, along with the G300GL, which will be a Quadro part. The specs include 512 cores, a 512-bit memory interface, and 256 GB/s to 280 GB/s of bandwidth depending on whether or not the part is overclocked, and the different parts will target different thermal, power and performance envelopes based on intended use and relative clocks.
Rick's Opinion
Nvidia's GT300 is also believed to be a cGPU, which is to say it shares traits with a CPU in addition to the traditional GPU engine. If true, the cGPU may begin to expose additional abilities which allow for more exciting gaming effects, more generic programming abilities (such as a different approach to PhysX integration), and many other compute possibilities, especially in Tesla or Quadro form when used in a supercomputer configuration.
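As a rough sanity check on the 256-280 GB/s figures quoted above: memory bandwidth is just bus width times effective data rate. Here's a minimal C sketch; the GDDR5 data rates are illustrative numbers picked to land in the rumoured range, not confirmed GT300 clocks.

#include <stdio.h>

/* Back-of-the-envelope GDDR5 bandwidth: bus width (bits) x effective
 * data rate (GT/s) / 8 bits per byte. The data rates below are assumed,
 * chosen only to show how a 512-bit bus reaches the 256-280 GB/s range. */
static double bandwidth_gbs(int bus_width_bits, double data_rate_gtps) {
    return bus_width_bits * data_rate_gtps / 8.0;
}

int main(void) {
    printf("512-bit @ 4.0 GT/s: %.1f GB/s\n", bandwidth_gbs(512, 4.0)); /* 256.0 */
    printf("512-bit @ 4.4 GT/s: %.1f GB/s\n", bandwidth_gbs(512, 4.4)); /* 281.6 */
    return 0;
}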
BTW, to avoid any confusion about the GT300 or GeForce GTX300 series, nVidia's GT300 chip has several codenames. The GT300 silicon is destined to become a Tesla part; G300 is the desktop GeForce card, while G300GL is an upcoming Quadro part. nVidia's old-timers still call the chip NV70, and if you roam the halls of Graphzilla's Building C in Santa Clara, you might find papers with NV70 all over them. nVidia's current parts, such as the GeForce GTX 285, are all based on NV65 chips.
We saw what the board looks like, and there are plenty of surprises coming for all the nay-sayers - expect the worldwide hardware media to go into a frenzied competition over who will score the first picture of a GT300 board. If not in the next couple of days, expect GT300 pictures to come online during Computex.
According to our sources, nVidia has no plans to show the GT300 to stockholders, analysts and the selected invited press [no, we're not in that club], but you can expect that Jen-Hsun and the rest of the exec gang will be bullish about their upcoming products.
It's because you don't have a DX11 card, my friend! I'm pretty sure future versions of programs like ZBrush and Mudbox will be able to use hardware displacement maps. Right now ZBrush actually generates millions and millions of polygons using the CPU. Dabookerman said: It will still require a bit of processing power to generate the displacement maps, and it certainly doesn't make modelling them easier ;p
Displacement maps always end up fucked for me when I generate them in zbrush and use them in Maya
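For anyone wondering what a displacement map actually does compared with plain subdivision (the imp comparison above), here's a minimal CPU-side sketch in C: after subdividing, each vertex is pushed along its normal by a height sampled from the map. Every struct and name here is made up for illustration, and the nearest-neighbour sampling is deliberately crude; with DX11 hardware this per-vertex work would run on the GPU instead.

#include <stddef.h>

/* Minimal illustration of displacement mapping: each (already subdivided)
 * vertex is moved along its normal by a height read from a greyscale map.
 * Assumes u,v are in [0,1]. All types/names are invented for this sketch;
 * a real renderer would do this per vertex on the GPU. */
typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 pos, normal; float u, v; } Vertex;

static float sample_height(const float *map, int w, int h, float u, float v) {
    int x = (int)(u * (float)(w - 1));   /* nearest-neighbour lookup */
    int y = (int)(v * (float)(h - 1));
    return map[y * w + x];               /* height in 0..1 */
}

void displace(Vertex *verts, size_t count, const float *height_map,
              int map_w, int map_h, float scale) {
    for (size_t i = 0; i < count; ++i) {
        float d = sample_height(height_map, map_w, map_h,
                                verts[i].u, verts[i].v) * scale;
        verts[i].pos.x += verts[i].normal.x * d;
        verts[i].pos.y += verts[i].normal.y * d;
        verts[i].pos.z += verts[i].normal.z * d;
    }
}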
TheExodu5 said: Can't wait. Holding out with my 8800GT until this series comes out... I figure it's worth waiting out the current gen of cards.
Well, yes and no. With this kind of hardware, we can finally start to expect a constant, V-synced 60fps with full anti-aliasing in most games. Many games still don't maintain 60fps on today's high-end cards at the highest quality settings, especially when AA comes into play. I look forward to not having to make any graphical compromises.
brain_stew said:280GB/s of memory bandwidth!?
Oh fap, fap, fap.
Kinda makes the ~20GB/s of RSX look a little pathetic; that's a 14x increase! Yet there are still some people who believe console technology is in the same realm as the high-end PC space. It's not; it's closer to the Wii than to hardware like this.
For the record, that's more bandwidth than the eDRAM in Xenos, which makes up something like a third of its die space. :lol
I'm actually scared to think how big the GT300 is going to end up; they must be pushing more than 2 billion transistors with that thing, surely? I think the rumours of them ditching a lot of fixed-function hardware have to be true, otherwise I don't see how they're going to be able to push 512 stream processors without making the thing the size of King Kong. GT200 was big enough as it is.
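Rough napkin math on why more than 2 billion transistors is plausible: GT200 was around 1.4 billion transistors on TSMC's 65nm process. Ideal density scales with the square of the linear shrink, so this little C calculation is only an optimistic upper bound; real designs scale far less cleanly.

#include <stdio.h>

int main(void) {
    /* GT200: roughly 1.4 billion transistors on TSMC 65nm.
     * Ideal density gain from a shrink goes with the square of the
     * linear scaling factor; real chips fall well short of this,
     * so treat the result as an upper bound, not a prediction. */
    double gt200_transistors = 1.4e9;
    double shrink = 65.0 / 40.0;                  /* 65nm -> 40nm */
    double ideal_density_gain = shrink * shrink;  /* ~2.64x */

    printf("Ideal density gain: %.2fx\n", ideal_density_gain);
    printf("Same-size die at 40nm (ideal): %.1f billion transistors\n",
           gt200_transistors * ideal_density_gain / 1e9);
    return 0;
}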
Zero Hero said: nVidia already has boards with its chipsets, so why can't they just make their own boards with their graphics chip on board? The heat sink and fan could cover both the CPU and GPU, like in the PS3.
thuway said:I also wonder how Cell 2 will compete with it :lol .
And in this exhibit, we witness a person who has no idea how either chip is structured. Truespeed said: And also that Cell copycat Larrabee :lol
That's like comparing a 360 with a GeForce 2 (five years difference) and proclaiming that "the ~7GB/s of the GeForce 2 look a little pathetic". Hell, the difference there is even bigger: 34x. brain_stew said:
It's really apples and oranges, because consoles are a closed platform designed for one thing that they can optimize the hell out of. A lot of power is going to waste in PC gaming. Aizu_Itsuko said: That's like comparing a 360 with a GeForce 2 (five years difference) and proclaiming that "the ~7GB/s of the GeForce 2 look a little pathetic". Hell, the difference there is even bigger: 34x.
SapientWolf said:It's really apples and oranges, because consoles are a closed platform designed for one thing that they can optimize the hell out of. A lot of power is going to waste in PC gaming.
brain_stew said:No amount of optimisation is going to make up for 14x the bandwidth and 10x the compute. :lol
They're absolutely comparable, since Sony used a cut-down, off-the-shelf Nvidia GPU. Optimisation is nice and all, but it can never compensate for new-generation silicon. Good luck getting the Wii to match a PS3, because that's essentially what you're proposing.
Zaptruder said: While it's great and all to hear about ever faster graphics tech... what exactly are they going to use it on?
I mean, resolution seems to be converging on 1080p as a standard, and frame rates are already consistently high...
camineet said:Exactly.
100% agreed. There is no arguing this. It's fact.
Console optimisation might allow consoles to compete with PCs that are, say, several times more powerful, but NOT 10x more powerful. That's an order of magnitude difference - an entire console generation. Upcoming high-end PC components are gonna be as much of a leap beyond 360/PS3 as 360/PS3 are beyond Wii.
Truespeed said:And also that Cell copycat Larrabee :lol
camineet said:framerate and resolution are only the tip of the graphical iceberg. What needs to be improved is the detail/complexity of each frame, with far better lighting, post-processing, effects, etc, beyond what DX9 and DX10 cards can do today.
Yes, 3D Vision can greatly benefit from an Nvidia GPU that's at least twice as powerful as the current strongest GPU, but that's only one thing. Instead of needing two cards or a dual-GPU card, one GPU will be able to handle it better than SLI.
brain_stew said:Say what now!?
Larrabee is a GPU
Cell is a CPU.
That's a pretty fucking fundamental difference right there. I'm not going to go into all the other fundamental differences; suffice it to say, they follow very different design philosophies and are meant for totally different functions.
Ugh, yeah, Sony invented the concept of a manycore processor design and all others are just derivative copies of it. Sure, whatever helps you sleep at night.
Yeah, but all that extra horsepower won't do any good unless devs take advantage of it, which might not happen if they develop games with the current generation of consoles in mind. brain_stew said: No amount of optimisation is going to make up for 14x the bandwidth and 10x the compute. :lol
They're absolutely comparable, since Sony used a cut down off the shelf Nvidia GPU. Optimisation is nice and all, but can never compete with new generation silicon. Good luck in getting the Wii to match a PS3, because that's essentially what you're proposing.
SapientWolf said:Yeah, but all that extra horse power won't do any good unless devs take advantage of it. Which might not happen if they develop games with the current generation of consoles in mind.
I'm sorry this is off-topic, but actually looking at the two chips, that's not a "fundamental difference"; it's playing with semantics. In fact, with the main architectural difference arguably being cache coherency (well, maybe second to heterogeneity), one could well argue that Cell is more GPU-like than LRB. brain_stew said: Say what now!?
Larrabee is a GPU
Cell is a CPU.
brain_stew said:No amount of optimisation is going to make up for 14x the bandwidth and 10x the compute. :lol
They're absolutely comparable, since Sony used a cut down off the shelf Nvidia GPU. Optimisation is nice and all, but can never compete with new generation silicon. Good luck in getting the Wii to match a PS3, because that's essentially what you're proposing.
I would think you're right. Even with the more powerful (in comparison to consoles) PCs we see today, I really haven't seen a boatload of games that look so much better than a console game that I would consider upgrading my GPU and playing on PC. gofreak said: There's absolutely no doubt about the relative gap in technology.
But the advantage consoles have is that there is a larger number of developers there willing to squeeze the machines for everything they've got, to use it as their baseline.
Of course the PC benefits from this via console ports, and with one of these GPUs you'll enjoy better resolution/texture-filtering etc.
But there are precious few PC devs who are willing to use the latest nVidia or AMD chip as their baseline, or to even optimise for those chips. Many if not most PC devs seem to target far more modest specifications, with more powerful chips 'only' providing higher-resolution/better-filtered versions of those games.
Of course there are developers willing to target the high end (e.g. Crytek), but they seem few and far between relative to the console space, where you often have whole stables of first-party devs (at least) willing to aim really high with their games. The number of high-end-looking games on PC still doesn't seem to match consoles; even if there's the occasional title that matches or exceeds what's available on consoles, the breadth of such titles doesn't seem to be the same.
Or am I wrong? I'll admit I haven't exhaustively surveyed what's coming up in the near future in terms of native PC games, so I could well be..
Anyway, this is kind of OT. As a tech whore, those reports about the GT300 are mouthwatering.
That's actually false - there are a lot of reasons why HOS haven't seen widespread use even though hardware has been capable of a useful implementation for over 10 years now - and one is that they don't really save memory (outside of contrived scenarios, like certain types of terrain). M3d10n said: - Less VRAM spent on geometry, since nurbs/bezier/subdivision/displacement needs far fewer vertices/control points.
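To put rough numbers on the claim being argued over, here's a toy C comparison (my own illustrative figures, not from either poster): one bicubic Bezier patch stores 16 control points, while tessellating that patch at 32x32 produces 1089 vertices. That's the storage saving the original bullet point is getting at; it deliberately ignores the displacement map itself and the runtime costs raised in the reply above.

#include <stdio.h>

int main(void) {
    /* Toy comparison of the "patches need far fewer control points" claim:
     * one bicubic Bezier patch vs. the same surface tessellated 32x32.
     * Figures are illustrative only and ignore the displacement map,
     * crack fixing, and everything else raised in the reply above. */
    const int control_points = 4 * 4;                       /* bicubic patch */
    const int tess = 32;                                    /* segments per side */
    const int tessellated_verts = (tess + 1) * (tess + 1);  /* 1089 */
    const int bytes_per_vert = 32;                          /* pos + normal + uv, rough */

    printf("Patch control points:  %d (~%d bytes)\n",
           control_points, control_points * bytes_per_vert);
    printf("Tessellated vertices:  %d (~%d bytes)\n",
           tessellated_verts, tessellated_verts * bytes_per_vert);
    return 0;
}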
This is an old comment, but I wouldn't really buy that argument. One just needs to look at the particular instructions each processor is designed for to see the big fundamental difference. Durante said: I'm sorry this is off-topic, but actually looking at the two chips, that's not a "fundamental difference"; it's playing with semantics. In fact, with the main architectural difference arguably being cache coherency (well, maybe second to heterogeneity), one could well argue that Cell is more GPU-like than LRB.
Oh god, I thought it was part of the card at first. Manager said: Awesome watermark...
Xdrive05 said:I'll try to wait for the mid-range version to come out to replace my reliable 8800gt superclocked. If I can hang in there that is.
Tom Penny said: I'm in the same boat and have the same card. I'm just not sure what the best possible bang for the buck is right now. They are really coming down in price.
Aren't they pretty much at the limit of the ATX spec there? I'm not even sure that would fit in my case. Manager said: Apparently it's around 28cm long...
:lol camineet said: http://www.geek.com/articles/games/...d-out-at-40nm-already-in-a1-silicon-20090518/
http://www.brightsideofnews.com/new...dy-taped-out2c-a1-silicon-in-santa-clara.aspx
GT300 is gonna be an absolute BEAST, but I'm pretty sure we won't see it until Q1 2010. Given that Larrabee won't be out until Q1 2010 also, that means ATI will be the only one with a DX11 GPU out in 2009.