Lol... no.
DisplayPort -> DVI Dual-Link -> HDMI
HDMI is the most limited of the 3 interfaces. If you are connecting to a monitor (especially a high refresh rate monitor) you should never use HDMI when DisplayPort is present.
I mean... unless you are really using it for audio :|
Latency is not the problem for GPU stream processing. Having better latency won't mean anything - at least not until GPU apps start taking advantage of it.
I would say that 4GB isn't enough for anything more than 1080p right now. There are a lot of games which hit the VRAM limit on a 4GB card with downsampling or MSAA in resolutions between 1080p and 4K. So having 4GB on a top end card right now is a big problem.
It'll do for a year until the 4xx series comes out - I assume they'd want to release the dual-GPU with 8GB ASAP though, because as you said it's not going to cut it for downsampling, and the enthusiasts who'll easily drop $3-4K+ on a build will almost certainly be downsampling or the like.
The RAM pool is split between two cards in a dual-GPU setup (framebuffer assets are mirrored across both pools). If you ran into VRAM issues with a single card, you'll run into the same issues with a dual-GPU setup.
We are assuming that. For all we know they could end up putting it all on the interposer. I wonder if that is possible.
It just strikes me as a bad choice - launching an $850 card for the 1080p market. But we'll see.
An interposer that spans TWO GPU chips? That would be insanely expensive, and it wouldn't get around the 4GB limit of HBM1 anyway. There's also the issue that both GPUs still have to have their own framebuffer, which nullifies any RAM configuration. SFR (Split Frame Rendering) will allow two GPUs to share the same framebuffer, but that requires DX12 or Vulkan, and the game has to be programmed specifically for that from the outset.
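The mirroring point can be put in a toy model. This is just my own sketch to illustrate the argument (the function name and numbers are made up): under AFR every GPU holds a full copy of the assets, so pools don't add up, while SFR-style explicit multi-adapter under DX12/Vulkan can, in the ideal case, treat the pools as one heap.

```python
# Toy model of effective VRAM in a multi-GPU setup (hypothetical helper;
# real driver behaviour is far more nuanced than this).

def effective_vram(pool_gb, n_gpus, mode):
    """AFR mirrors assets into every pool, so capacity doesn't add up.
    SFR-style explicit multi-adapter (DX12/Vulkan) can pool memory."""
    if mode == "AFR":
        return pool_gb            # each GPU holds a full copy
    elif mode == "SFR":
        return pool_gb * n_gpus   # one logical heap, in the ideal case
    raise ValueError(mode)

print(effective_vram(4, 2, "AFR"))  # dual 4GB card still behaves like 4GB
print(effective_vram(4, 2, "SFR"))  # only with explicit API support: 8GB
```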
What if they try to market this as their VR card? I think 4GB is enough for the resolutions of the Vive and Rift, and it should have enough power to produce locked FPS at reasonable detail levels.
I don't know about the Rift, but the Vive has two 1080p displays. This falls exactly in between 1080p and 4K, where 4GB is already not enough.
Edit: Rift seems to be even higher than that.
If the 380 is a re-badged 290X but slightly faster and more efficient, I may be interested in the 8GB GDDR5 version of that if both the 390 and 390X are limited to 4GB HBM.
4GB of GDDR5 isn't enough. HBM is a different beast that doesn't have to duplicate data across the chips to actually use its bandwidth. Have you tested the new cards? What I mean is: why are you speaking about this as if it's something you actually know anything about?
If they do end up being rebadged cards, does that mean there will be no competition when it comes to the 980? The 290/290X offers about the same performance as a 970 currently, so if reports are true and the new chip is priced at $849, it will compete with the Titan X and/or the upcoming 980 Ti.
Will there be a "gimped" version of the new chip to compete with the 980?
Shh... A little knowledge is a dangerous thing.
Everyone talking about duplicating data needs to look up memory interleaving - we've had it since the 80s at least.
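For anyone unsure what interleaving means here, a minimal sketch (a toy scheme of my own, not any specific memory controller): addresses are striped across channels so sequential accesses hit all channels in parallel. Capacity is the sum of the channels, and nothing is duplicated.

```python
# Toy address interleaving: stripe 256-byte lines across 4 channels.
# Capacity is the SUM of the channels; data is never duplicated.

LINE = 256       # interleave granularity in bytes (made-up value)
CHANNELS = 4

def channel_of(addr):
    """Which channel serves this byte address."""
    return (addr // LINE) % CHANNELS

def channel_offset(addr):
    """Offset within that channel's own storage."""
    return (addr // (LINE * CHANNELS)) * LINE + (addr % LINE)

# A sequential 1KB read touches every channel exactly once,
# which is where the bandwidth multiplication comes from.
print([channel_of(a) for a in range(0, 1024, 256)])  # [0, 1, 2, 3]
```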
If you actually look at frame buffers and how efficient they are and how efficient the drivers are at managing capacities across the resolutions, you'll find that there's a lot that can be done. We do not see 4GB as a limitation that would cause performance bottlenecks. We just need to do a better job managing the capacities. We were getting free capacity, because with [GDDR5] in order to get more bandwidth we needed to make the memory system wider, so the capacities were increasing. As engineers, we always focus on where the bottleneck is. If you're getting capacity, you don't put as much effort into better utilising that capacity. 4GB is more than sufficient. We've had to go do a little bit of investment in order to better utilise the frame buffer, but we're not really seeing a frame buffer capacity [problem]. You'll be blown away by how much [capacity] is wasted.
Ah ok, still more than 1080p though.

Every display in the Vive has a resolution of 1080x1200:
2160x1200 = 2592000 pixels
1920x1080 = 2073600 pixels
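The arithmetic above checks out; a quick script to verify the counts and put them in proportion against 1080p and 4K:

```python
# Quick check of the pixel counts quoted above.
vive    = 2160 * 1200   # two 1080x1200 panels side by side
full_hd = 1920 * 1080
uhd_4k  = 3840 * 2160

print(vive, full_hd)             # 2592000 2073600
print(vive / full_hd)            # 1.25 - a Vive pushes 25% more pixels than 1080p
print(round(uhd_4k / vive, 2))   # 3.2  - but 4K is still ~3.2x a Vive
```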
Most current games are working well with 3-3.5GB, so for that small increase, full 4GB should be sufficient, no? Also VR games are going to take place in rather linear and small environments. Most memory hogs are currently open world games.
I know that you can't fit 8GB of data into 4GB of memory. You don't?
We know that HBM doesn't have to fill the memory with garbage to use the full bandwidth, and we know that AMD hired engineers specifically to work on streamlining memory usage on the hardware/driver side. So until we see benches, we don't actually know anything. All we do know is that cards with GDDR5 need more than 4GB due to their VRAM bandwidth constraints, i.e. the caching and holding of data across multiple chips in order to have enough bandwidth to use the data when it's needed. Also, that's without discussing AMD's new texture compression technology, which means textures should use less memory now as well.
I'm not saying all of this will work flawlessly and be a paradigm shift. What I am saying is that there is no reason to state guesses as facts and treat a new technology as if it acts exactly like the old technology it's replacing. Wait for benches before you act as though you know how it performs.
I seem to have missed something - could you provide links to the new texture formats?
Same, thus I was quite intrigued.

The only thing I'm aware of is their colour compression, which is similar to what we saw on Maxwell to help increase "effective" memory bandwidth (by lowering the BW requirements for certain things somewhat).
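For anyone curious what that colour compression looks like in principle, here's a toy illustration of delta compression - my own sketch, not AMD's or Nvidia's actual (proprietary) scheme: store one base value per tile plus small deltas, and fall back to raw when the deltas don't fit. That fallback is also why it saves bandwidth but can never be counted as extra capacity.

```python
# Toy delta colour compression for an 8-pixel tile of 8-bit values.
# Illustrates the general idea only; real GPU schemes are proprietary.

def compress_tile(pixels, delta_bits=4):
    """Return ('delta', base, deltas) if every delta fits in a signed
    delta_bits field, else ('raw', pixels). Lossless either way."""
    base = pixels[0]
    deltas = [p - base for p in pixels]
    lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
    if all(lo <= d <= hi for d in deltas):
        return ("delta", base, deltas)   # 8 + 8*4 = 40 bits vs 64 raw
    return ("raw", pixels)               # no saving on noisy tiles

flat  = [100, 101, 99, 100, 102, 98, 100, 101]  # smooth gradient: compresses
noisy = [0, 255, 13, 200, 7, 90, 250, 1]        # high contrast: stored raw
print(compress_tile(flat)[0], compress_tile(noisy)[0])  # delta raw
```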
It's somewhat of a different scenario. In the first place, AMD isn't trying to sell it as anything other than a 4GB card, and the 4GB you get will perform as advertised.
The conversation around this issue should be interesting to watch. Much of what Macri said about poor use of the data in GPU memory echoes what Nvidia said in the wake of the revelations about the GeForce GTX 970's funky 3.5GB/0.5GB memory split.
Let me be as straight as I can here: 4GB of memory will never be able to contain 8GB of data. This isn't a "guess", that's a fact.
I never said this.
A "guess" is what you're describing by saying that HBM is somehow so different from GDDR5 that 4GB of it will be able to hold as much data as 8GB of GDDR5.
Again, I never made any such statement. You're saying this going off a vague statement from a representative of a company which would be directly interested in spreading such FUD if their top end card really has only 4GB of VRAM.
Caching in GDDR5 memory controllers never adds to the memory capacity. The "texture compression" technology is actually ROP colour compression, and it doesn't apply to data storage, only to bus data transfers. What I make of this quote is that AMD spent some time optimizing their drivers for Fiji memory management - not because GDDR5 was so damn inefficient, but because they understand that having only 4GB of HBM on Fiji is a problem for them. This wasn't a problem with Hawaii, because its 512-bit bus was "gifting" them 4 or 8 GB of VRAM at a time when such quantities were not needed, so their optimization effort there wasn't the same as with Fiji. But it is very unlikely that they've somehow wasted half the VRAM on Hawaii via their drivers. That would actually require an effort in un-optimizing.
Now you may say that you don't know anything, but don't say that "we" don't know anything, because that would be incorrect. A 4GB card is a 4GB card. Those 4GB may be fast as hell, but that won't magically turn them into 6GB, much less 8GB.

This still has nothing to do with anything I said. These "inefficiencies" of GDDR5 are a) quite a bit different between vendors and b) in the range of hundreds of MBs at best.
I was misremembering, I apologize.
I believe the Techreport article commented that the AMD response to the 4GB question sounded very similar to Nvidia's response to the 970 memory issue. They weren't saying exactly that they can fit more than 4GB, but they implied that some kind of software level improvement could result in lower memory usage, on account of inefficiencies in how they store (presumably unused) memory over time. But yes it strikes me as pre-emptive damage control as much as anything.
It makes sense that some of the stuff developers are putting into VRAM doesn't really need to be there and could afford to be in system RAM instead. Maybe we'll see something weird like higher RAM on recommended system requirements for people with AMD cards, heh.
Found this listed on Taobao through a thread on Chiphell this morning. Probably is shens anyway.
Anyone read Chinese? Google Translate only does so much.
So will HBM somehow improve the PCIe bandwidth as well?
That says 8 gigs of VRAM even though AMD confirmed 4?
Mos def a bullshit listing in regards to the VRAM amount and price. Would be nice as hell though if that was the case, but sadly it is not. :/

Yeah. Can just delete it. The suggested price is way low, while the Taobao price is around 1200 USD.
Nah, similar to that last 512MB for NVIDIA, the whole thing depends on there being some types of data which could tolerate being accessed more slowly without having a big impact on the overall frametime.
That data is present on all videocards. If you can move it to system RAM on Fiji you can do that on a 980Ti as well resulting in the same memory difference essentially.
Btw, those last 512MB on a 970 are used to hold the Windows desktop, for example. I doubt that moving them to system RAM is even possible, since these are essentially rendering buffers. The whole point of the 970's slow partition was that there is some stuff in VRAM which doesn't have to be on a fast-access part but at the same time can't be moved off VRAM entirely.
They said that if you build it with 4 stacks, you're limited to 4GB, however you could build it with more stacks.
The whole point of the slow partition on the 970 was to give users 3.5GB of fast memory on 56 ROPs instead of giving them 3GB of memory in total on 48 ROPs. Another option would be to keep all L2s and ROPs operational and sell the card for a higher price.

Now who is parroting company FUD? The whole point of the slow partition on the 970 was to be able to market 4GB cards regardless of the performance ramifications.
Personally, I think it's very possibly true that memory management on videocards right now is extremely inefficient because the capacities ballooned so far, so fast. It became easier for developers to simply recommend a 4+GB card for their top settings than to actually worry about managing the flow of data or optimize buffer formats. The performance benefits of HBM could make that worth the effort again.
What I don't understand is why 8GB is even needed for greater than 1080p. In practice on GDDR3/5 that seems to be the case, but what's the reason?
I mean, most of that memory is used for textures and geometry. Is it just the way games are designed that they start to fail above 1080p on 4GB? Or could there just be driver problems trying to optimize for the configuration of GDDR3/5, which perhaps HBM can resolve some of? Isn't the framebuffer only like several megs at 1080p?
All this seems fine and dandy for DX11 and earlier, but once we go low-level with memory management in DX12 and other APIs, wouldn't optimizing for HBM be an oddball (until it's standardized)?
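On the "isn't the framebuffer only several megs" question above - roughly yes. A back-of-envelope calculation (my assumptions: 32-bit colour, one 32-bit depth/stencil target, a triple-buffered swap chain; deferred renderers use many more targets than this) shows render targets are tiny next to multi-GB asset budgets:

```python
# Back-of-envelope framebuffer sizes (32bpp colour + 32-bit depth/stencil).
MiB = 1024 * 1024

def framebuffer_mib(w, h, buffers=3):
    colour = w * h * 4 * buffers   # triple-buffered swap chain
    depth  = w * h * 4             # one depth/stencil target
    return (colour + depth) / MiB

print(round(framebuffer_mib(1920, 1080), 1))  # 1080p: ~31.6 MiB
print(round(framebuffer_mib(3840, 2160), 1))  # 4K:   ~126.6 MiB
```

Even at 4K that's well under 5% of a 4GB card, which is why the capacity pressure comes from assets rather than the literal framebuffer.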
So that design is real? I thought it was a fake fan render or whatever
Thanks for the reply, that's a real-world example, which is good. If the majority of that is assets, then Ultra settings at 1080p should run into the same problems on 4GB, right? If not, then I'm still wondering why the need is so much higher for 4K - the assets are the same, so if the GPU can handle the workload, why is the memory requirement so much greater? As you say, the actual frame buffer hasn't been a problem for a while.

Because Ultra settings on GTA V push 6GB, and on a card with gobs and gobs of performance I don't want the framerate to start choking because video memory is thrashing like a bastard.
Literal (not "what we call VRAM") frame buffer sizes have long not been a problem. If your game has to use 6GB of assets to display a scene and you only have 4GB? Prepare for your framerate to tank, or lower the detail.