AMD Radeon Fury X Series | HBM, Small Form Factor And Water Cooling | June 16th

I figured this was already happening because, honestly, from a marketing standpoint, why give your product to someone you think is predisposed to just shitting on AMD in the first place?

Actually, reading KitGuru, you can't help but feel they fight for team green just a little too much. Most AMD card reviews end with finding some way of saying, "Yeah, but NVIDIA does this thing better."

Which is a little surprising, given that the EIC of KitGuru came from another site that was heavily AMD-focused and in fact hosted AMD events.

I personally won't comment, as a former mod of said site, but this is all a little bit sad really.
 
If true, lol, AMD is in the right then.

Also, I'm currently on a GTX 970 and still willing to jump to a GTX 980 Ti/Fury X, but yeah... I just want Pascal in the end.


In the video he says he hopes that it can compete with GTX 980 (Not 980Ti) and that he's certain that if it can, it will definitely have a much larger power draw... from what we know, is he right about the power draw part at least?
 
Read this, but not sure if true:




Also found this
https://www.youtube.com/watch?v=QFWgc8qjQwk

Now I see why AMD banned them LOL

It's all kicking off on KitGuru on that news article:

http://www.kitguru.net/site-news/an...-kitguru-fury-x-sample-over-negative-content/

Lot of people mention that video, which is pretty badly misinformed from someone who is meant to be in the know.

In the video he says he hopes that it can compete with GTX 980 (Not 980Ti) and that he's certain that if it can, it will definitely have a much larger power draw... from what we know, is he right about the power draw part at least?

Saying he thinks "there's no way" the Fury X would match the Titan X, despite the avalanche of recent leaks suggesting it would, and also saying that he hopes the Fiji cards "could" match the 980, is pretty spectacularly ignorant of what has been going on for the past 3-4 months.
 
Let's not confuse this with actually looking at issues with the cards objectively either.

TechReport and PCPer uncovered a number of issues with AMD cards, from frame pacing problems to outright fuckups at 4K where frames were essentially being dropped, inflating the reported "FPS" even though the actual frame times told a very different story. Just the switch from looking at performance at the frame level rather than the "frames per second" level was hugely damaging for AMD because it exposed a lot of issues.

AMD doesn't withhold anything from those two websites.

Also, this is your reminder that if you're an English speaker, you shouldn't bother with any website that isn't TechReport or PCPer.
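
To make that frame-level point concrete, here's a minimal sketch with made-up frame-time numbers (not measured data) showing how a healthy-looking average FPS can hide exactly the kind of hitching that per-frame analysis catches:

```python
# Made-up frame-time samples in milliseconds: mostly smooth ~16 ms frames
# with a handful of 60 ms hitches, the kind of stutter frame-time analysis
# is designed to expose.
frame_times_ms = [16.0] * 95 + [60.0] * 5

total_seconds = sum(frame_times_ms) / 1000.0
average_fps = len(frame_times_ms) / total_seconds

# 99th-percentile frame time: 99% of frames finish faster than this.
sorted_times = sorted(frame_times_ms)
p99_ms = sorted_times[int(len(sorted_times) * 0.99) - 1]

print(f"Average FPS: {average_fps:.1f}")               # ~55 FPS, looks fine
print(f"99th-percentile frame time: {p99_ms:.1f} ms")  # 60 ms, visible hitching
```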
 
Let's not confuse this with actually looking at issues with the cards objectively either.

TechReport and PCPer uncovered a number of issues with AMD cards, from frame pacing problems to outright fuckups at 4K where frames were essentially being dropped, inflating the reported "FPS" even though the actual frame times told a very different story. Just the switch from looking at performance at the frame level rather than the "frames per second" level was hugely damaging for AMD because it exposed a lot of issues.

AMD doesn't withhold anything from those two websites.

Also, this is your reminder that if you're an English speaker, you shouldn't bother with any website that isn't TechReport or PCPer.

Interesting.

To be clear, I'm not saying KitGuru are biased, but watching that video, I can't believe how far off their journalist was from what has actually transpired. I do read KitGuru news articles, as they update the site regularly. Personally I like Bit-Tech, but will prob bookmark those two as well.
 
What about Anandtech? :(
For actual PC Hardware reviews, they've been irrelevant for a long time. Anand leaving was the proverbial nail in the coffin for them. IMO, they're coming up on Tom's for being the IGN of Hardware. Still not that bad, though.
Interesting.

To be clear, I'm not saying KitGuru are biased, but watching that video, I can't believe how far off their journalist was from what has actually transpired. I do read KitGuru news articles, as they update the site regularly. Personally I like Bit-Tech, but will prob bookmark those two as well.
Bit-Tech is great for things that aren't CPUs and video cards.

Just look at the reviews for either of those on PCPer and TechReport. The way they test the cards is fundamentally more accurate and in-depth.
 
It's quite funny to see Nvidia people saying "4GB is not enough" (especially when they have a 970, 980, 970 SLI, or 980 SLI) and then saying "8GB is useless".
 
It's quite funny to see Nvidia people saying "4GB is not enough" (especially when they have a 970, 980, 970 SLI, or 980 SLI) and then saying "8GB is useless".

Because they're different cards for different performance tiers. The 970 with 4GB is fine for 1080p. The 980 with 4GB is borderline for 1440p. The R9 390X with 8GB is kind of irrelevant for 1080p, and the Fury X with 4GB for 1440p/4K is going to feel really constrained.
 
I may have just missed it, but have the released Fury X specs stated the minimum power supply requirement? I have a 700-watt PSU and want to know if that would work.
 
Because they're different cards for different performance tiers. The 970 with 4GB is fine for 1080p. The 980 with 4GB is borderline for 1440p. The R9 390X with 8GB is kind of irrelevant for 1080p, and the Fury X with 4GB for 1440p/4K is going to feel really constrained.


It actually doesn't matter that much for 4K at the moment, unless you use AA, which hurts performance too much anyway, but it will matter in the future. If you're buying to play at 4K without SLI/CrossFire, it's going to seem like a foolish decision when new games can barely hit 30fps regardless of VRAM.
 
Because they're different cards for different performance tiers. The 970 with 4GB is fine for 1080p. The 980 with 4GB is borderline for 1440p. The R9 390X with 8GB is kind of irrelevant for 1080p, and the Fury X with 4GB for 1440p/4K is going to feel really constrained.
[image: AMD-provided benchmark slide]


Doesn't look too constrained. Now, with traditional GDDR5, 4GB will not be sufficient for 4K in the future, but HBM works differently too; I think people will be surprised. This is being marketed as a 4K card.
 
It's quite funny to see Nvidia people saying "4GB is not enough" (especially when they have a 970, 980, 970 SLI, or 980 SLI) and then saying "8GB is useless".
I think it's different people saying these things.

Current benchmarks, at least the framerate ones, seem to suggest that VRAM isn't bottlenecking anything at the moment, even at higher resolutions. But we haven't seen any frame-time benchmarks to corroborate this, which is where this sort of thing would more likely rear its head.

So we don't really know yet. I'm still inclined to believe that as we move forward, the latest games with the highest texture settings and greatest draw distances and whatnot may well create bottlenecks with only 4GB, particularly at 2160p. We're not just jumping from 720p to 1080p here. 1080p to 2160p is absolutely *huge*. So I really can't blame anybody who has doubts about 4GB. Everything we know about increasing RAM requirements suggests it will become an issue, if not now, then at least very quickly.
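
For a sense of scale on that jump, here's a quick sketch of the raw pixel counts and the size of a single RGBA8 render target at each resolution; real VRAM use depends on the engine, so treat these as lower bounds rather than actual requirements:

```python
# Rough comparison of how much bigger 2160p is than 1080p: pixels per frame
# and the size of a single 32-bit (RGBA8) render target. Real VRAM use
# depends on textures, G-buffers, shadow maps, etc., so these are lower bounds.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "2160p": (3840, 2160),
}

bytes_per_pixel = 4  # RGBA8, one render target only

for name, (w, h) in resolutions.items():
    pixels = w * h
    target_mb = pixels * bytes_per_pixel / (1024 ** 2)
    print(f"{name}: {pixels / 1e6:.2f} Mpixels, ~{target_mb:.0f} MB per RGBA8 target")

# 2160p pushes exactly 4x the pixels of 1080p and 2.25x the pixels of 1440p.
```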

[image: AMD-provided benchmark slide]


Doesn't look too constrained. Now, with traditional GDDR5, 4GB will not be sufficient for 4K in the future, but HBM works differently too; I think people will be surprised. This is being marketed as a 4K card.
*AMD-provided benchmarks

The settings chosen for these games are strange to say the least, and they don't show frame times either.
 
The drop to 16nm, the greater number of shaders, and hopefully higher clock speeds are going to be a far greater performance boost. Memory bandwidth isn't really a problem on top-end cards; HBM is more about lowering power consumption (GDDR5 chips are real power hogs at 1750MHz), running cooler, and going down to smaller form factors.

And saving the die space dedicated to the memory controller.
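
For reference on the bandwidth side, here's a rough sketch of peak bandwidth from bus width and per-pin data rate, using the commonly cited nominal figures for these cards (not measurements); HBM hits its number by going very wide and clocking low, which is where the power savings come from:

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective per-pin data rate.
# The figures below are the commonly cited nominal numbers, not measurements.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

configs = {
    "GTX 980 (GDDR5, 256-bit @ 7 Gbps)": (256, 7.0),
    "R9 390X (GDDR5, 512-bit @ 6 Gbps)": (512, 6.0),
    "Fury X (HBM1, 4096-bit @ 1 Gbps)":  (4096, 1.0),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {bandwidth_gb_s(width, rate):.0f} GB/s")
# GTX 980: 224 GB/s, R9 390X: 384 GB/s, Fury X: 512 GB/s
```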
 
In the video he says he hopes that it can compete with GTX 980 (Not 980Ti) and that he's certain that if it can, it will definitely have a much larger power draw... from what we know, is he right about the power draw part at least?

Going on benchmarks and reviews, the 390X competes with the GTX 980 already.

I'd expect Fury X to go against the 980 Ti and Titan X.
 
Yeah, I think this is an awkward time to upgrade considering the upcoming node shrinks and the huge overall performance gains that come with them. Found a 270 for $90 on Reddit to CrossFire with; of course now I need a new PSU :( Still cheaper than a new GPU, and it's only a stopgap measure until next year. Can't wait for HBM2.
 
I prefer HardOCP. They test the cards by actually playing with them rather than running straight-up benchmarks. Very thorough; they share their experience playing the games they use with the cards. In addition, they include frame rate data from playing the level that's the most taxing for that game. Furthermore, a game may show an average playable frame rate while the experience is actually terrible, and they tell you this. That's something average, high, or low frame rates don't tell you.

The way they do game settings is by going for the highest settings they can get while keeping the game playable. Basically, it's how people actually set their settings to play games on that card.

I'm on a 780 and have too much of a backlog; so, lucky for me, I have no time for these new games that kill my card just because it's not a 900 series card.
 
I think it's different people saying these things.

Current benchmarks, at least the framerate ones, seem to suggest that VRAM isn't bottlenecking anything at the moment, even at higher resolutions. But we haven't seen any frame-time benchmarks to corroborate this, which is where this sort of thing would more likely rear its head.

So we don't really know yet. I'm still inclined to believe that as we move forward, the latest games with the highest texture settings and greatest draw distances and whatnot may well create bottlenecks with only 4GB, particularly at 2160p. We're not just jumping from 720p to 1080p here. 1080p to 2160p is absolutely *huge*. So I really can't blame anybody who has doubts about 4GB. Everything we know about increasing RAM requirements suggests it will become an issue, if not now, then at least very quickly.


*AMD-provided benchmarks

The settings chosen for these games are strange to say the least, and they don't show frame times either.

Those are fair points. DX12 is launching in a month and will change the way texture loading works (more texture streaming), so the need for massive frame buffers is going to drop. I am very excited for DX12 games; it will be a big boon for NVIDIA and AMD, but more so for AMD.
 
You'll have no problem at all.

Okay. The reason I asked is that I was researching whether I should just get a 290X this year and hold the "real" upgrade for HBM Gen2, but the specs on that listed a 750-watt requirement, and I thought, "Oh boy, if the Fury X requires that or more I might have to go 980 Ti by default," as the 980 Ti has a 600-watt requirement.
 
Those are fair points. DX12 is launching in a month and will change the way texture loading works (more texture streaming), so the need for massive frame buffers is going to drop. I am very excited for DX12 games; it will be a big boon for NVIDIA and AMD, but more so for AMD.

That's not how texture streaming works. Texture streaming means you ensure the texture is in VRAM before you begin rendering the scene. If you have 6GB of assets in a scene, you need a minimum of 360GB/sec to hit 60fps. The most you can stuff down a PCIe 3.0 x16 pipe in a frame (assuming a 60fps target) is only around a quarter of a gig.

Frame buffers will still need to be at least as large as the assets the developers require to make a scene.
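
As a sketch of the arithmetic behind that argument (assumed figures: 6GB of assets touched every frame, a 60fps target, and roughly 15.75GB/s of usable PCIe 3.0 x16 bandwidth):

```python
# Back-of-the-envelope check of the streaming argument above.
# Assumed figures: 6 GB of assets read every frame, a 60 fps target, and
# ~15.75 GB/s of usable PCIe 3.0 x16 bandwidth.
assets_gb = 6.0
target_fps = 60
pcie3_x16_gb_s = 15.75

# VRAM bandwidth needed if every byte of those assets is touched once per frame.
required_vram_bandwidth_gb_s = assets_gb * target_fps   # 360 GB/s

# The most you could stream over the bus within a single ~16.7 ms frame.
per_frame_over_pcie_gb = pcie3_x16_gb_s / target_fps    # ~0.26 GB

print(f"VRAM bandwidth needed: {required_vram_bandwidth_gb_s:.0f} GB/s")
print(f"PCIe 3.0 x16 budget per frame: {per_frame_over_pcie_gb:.2f} GB")
```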
 
If you go with two GPUs, I highly recommend going with NVIDIA. AMD's CrossFire support is pretty shit-tier compared to NVIDIA's, which is itself not great.

Wow, I thought they fixed the frame pacing issue. I used to have 2 7950s, and the stuttering was terrible.

Nvidia it is then.
 
Wow, I thought they fixed the frame pacing issue. I used to have 2 7950s, and the stuttering was terrible.

Nvidia it is then.

They have fixed the frame pacing, but AMD have been pretty lazy with the CrossFire application profiles. With The Witcher 3, I know some people were pretty salty about how low performance was until AMD got their act together.
 
Okay. The reason I asked was because I decided to research if I should just get a 290x this year and hold the "real" upgrade for HBM Gen2, but the specs on that listed 750 watts required and thought "Oh boy, if Fury X requires that or more I might have to go 980ti by default," as it has 600 watt requirement.

Yeah, just keep in mind those requirements are always super conservative. If you compare those numbers to actual power consumption in reviews you'll see what I mean.
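
For anyone doing the same napkin math on their own PSU, here's a rough sketch with assumed, typical figures (the ~275W number is the commonly reported Fury X board power; the CPU and the rest will vary by system), which is why a 700W unit leaves plenty of headroom:

```python
# Very rough system power estimate with assumed, typical figures -- actual
# draw varies by component and load, so check measured numbers in reviews.
components_watts = {
    "GPU (commonly reported Fury X board power)": 275,
    "CPU under gaming load (assumed)":            100,
    "Motherboard, RAM, drives, fans (assumed)":    75,
}

estimated_load = sum(components_watts.values())  # ~450 W total
psu_watts = 700

print(f"Estimated gaming load: ~{estimated_load} W")
print(f"Headroom on a {psu_watts} W PSU: ~{psu_watts - estimated_load} W")
```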
 
I love that we have plenty of different HW review sites to collate from. No single site gives you the whole picture, and you really need to see multiple reviews with different testing methods to get a complete idea of how a certain GPU performs overall. Sites like PCPer and TechReport are great at showing frame-time data, but they don't do all that many cards, games, or settings. TPU is great for an overall view with many games and cards, Anandtech for architectural stuff, and [H] is good for a more subjective take. Then there are many other sites for some extra validation. It's weird to me how some people just scream bias at any site for the slightest negativity or praise that goes against their own beliefs.
 
Wow, I thought they fixed the frame pacing issue. I used to have 2 7950s, and the stuttering was terrible.

Nvidia it is then.
They have fixed the pacing issue. They're just very slow with profiles compared to NVIDIA. The unofficial support for people submitting user-created profiles is a lot more active on the NVIDIA side as well, for obvious reasons (dat market share).

Games don't work right without profiles.
 
I prefer HardOCP. They test the cards by actually playing with them rather than running straight-up benchmarks. Very thorough; they share their experience playing the games they use with the cards. In addition, they include frame rate data from playing the level that's the most taxing for that game. Furthermore, a game may show an average playable frame rate while the experience is actually terrible, and they tell you this. That's something average, high, or low frame rates don't tell you.

The way they do game settings is by going for the highest settings they can get while keeping the game playable. Basically, it's how people actually set their settings to play games on that card.

I'm on a 780 and have too much of a backlog; so, lucky for me, I have no time for these new games that kill my card just because it's not a 900 series card.

I have such a hard time reading their reviews. You look at the top and you're like oh this card is faster than this one, but then you look at it a little longer and realize they tweak the settings to get the best FPS. Then at the bottom of the page they do an "apples to apples" comparison. It's so annoying.
 
Because they're different cards for different performance tiers. The 970 with 4GB is fine for 1080p. The 980 with 4GB is borderline for 1440p. The R9 390X with 8GB is kind of irrelevant for 1080p, and the Fury X with 4GB for 1440p/4K is going to feel really constrained.

what, really? i want to get a new gpu for 1440p gaming, so if i get a 970 (which i was going to) it wouldn't be sufficient? that's a bummer...
 
I wouldn't say they've COMPLETELY fixed the frame-time stuff. Look at the videos DF did for the review of the 980ti and you'll see the AMD cards tend to spike a lot harder when dropping frames compared to Nvidia. It's pretty noticeable in Witcher 3 for me.
 
I prefer HardOCP. They test the cards by actually playing with them rather than running straight-up benchmarks. Very thorough; they share their experience playing the games they use with the cards. In addition, they include frame rate data from playing the level that's the most taxing for that game. Furthermore, a game may show an average playable frame rate while the experience is actually terrible, and they tell you this. That's something average, high, or low frame rates don't tell you.

The way they do game settings is by going for the highest settings they can get while keeping the game playable. Basically, it's how people actually set their settings to play games on that card.

I'm on a 780 and have too much of a backlog; so, lucky for me, I have no time for these new games that kill my card just because it's not a 900 series card.
That is something that frame times do tell you, rather than "frame rate", which is an average in and of itself.

HardOCP's method leaves room for subjectivity, whereas the frame time data from TechReport and PCPer do not.

HardOCP avoids the better testing methodology because it's harder and takes a fuckton more time. He's said as much in the past.
 
If you go with two GPUs, I highly recommend going with NVIDIA. AMD's CrossFire support is pretty shit-tier compared to NVIDIA's, which is itself not great.
I thought they fixed all that.

I personally think multi-GPU setups are completely worthless unless you have single-digit frame times / triple-digit framerates, and a monitor to support that. Adaptive sync technology may make those setups even more viable, but I haven't tested that theory.
 
I wouldn't say they've COMPLETELY fixed the frame-time stuff. Look at the videos DF did for the review of the 980ti and you'll see the AMD cards tend to spike a lot harder when dropping frames compared to Nvidia. It's pretty noticeable in Witcher 3 for me.
It's more of a per-game issue than the completely widespread issue it was before, though. I mean, you'll see some games where SLI does the same sort of thing and the AMD dual-GPU solution gives much, much smoother frame delivery.
I thought they fixed all that.

I personally think multi-GPU setups are completely worthless unless you have single-digit frame times / triple-digit framerates, and a monitor to support that. Adaptive sync technology may make those setups even more viable, but I haven't tested that theory.
I agree on the dual card thing. It's really for the niche of the niche. Folks who want all of the eye candy plus super low frame times on a 1440p or higher monitor.

I'm not struggling to get 8.3ms-ish frame times on my RoG Swift with a single 780 Ti, but I also don't really give a shit about texture quality or post processing, to a point.
 
what really? i want toget a new gpu for 1440p gaming so if i get a 970 (which i was) it wouldnt be sufficient? thats a bummer...

Define sufficient.
A 970 won't max out The Witcher 3 (it runs a mix of medium and ultra for 1080p60), so at 1440p60 it will need slightly lower settings. If that's sufficient, then it's fine.
 
Define sufficient.
A 970 won't max out The Witcher 3 (it runs a mix of medium and ultra for 1080p60), so at 1440p60 it will need slightly lower settings. If that's sufficient, then it's fine.

This. I was using twin 2GB 770s and was limited to medium textures for 1440p60 in GTA V, for instance. It's kind of distracting seeing pixelation in clothes in close-ups.
 
Define sufficient.
A 970 won't max out The Witcher 3 (it runs a mix of medium and ultra for 1080p60), so at 1440p60 it will need slightly lower settings. If that's sufficient, then it's fine.

i want to run high (not ultra) settings at 1440p/60fps. would a 980 be sufficient then?
 
i want to run high (not ultra) settings at 1440p/60fps. would a 980 be sufficient then?

Without HairWorks, and with some overclocking, it should be fine for a locked 60fps.

Some settings, such as shadows, can be run at medium with very minor visual impact but a large fps gain.
 
What about the 1080p and 1440p performance of the Fury X vs the 980 Ti?

Why didn't AMD include them? Could it be that the Fury X only beats the 980 Ti at 4K?

Would that make sense?
 
What about the 1080p and 1440p performance of the Fury X vs the 980 Ti?

Why didn't AMD include them? Could it be that the Fury X only beats the 980 Ti at 4K?

Would that make sense?

Perhaps. AMD seems to scale better with high resolution and AA, and perform worse at lower resolutions. We'll just have to wait and see how this all turns out.
 
This card seems to be in a tough spot. The main draw would seem to be 4K gaming, but a single card isn't going to get you 60fps with maxed settings. So then you start looking at SLI/CrossFire, and if you are going to do that, you would probably be better off going with SLI, since Nvidia seems to be better at keeping their profiles up to date. On top of that, you get 6GB of VRAM with the 980 Ti.
 
Perhaps. AMD seems to scale better with high resolution and AA, and perform worse at lower resolutions. We'll just have to wait and see how this all turns out.

On the other hand, lower resolution requires less VRAM, so the 4GB of HBM should shine, unless the bottleneck is elsewhere.

Someone who knows more about GPU architecture needs to chime in and make bold predictions! :)
 
okay. so i'll get a gtx 980, and maybe sli in the future

You mean 980 Ti? If there's a scenario where you can afford the 980 but not the 980 Ti, I would get a 970 and use the extra cash for something else. It's 50% more money for 10-15% more performance in most cases by choosing a 980.
 
You mean 980 Ti? If there's a scenario where you can afford the 980 but not the 980 Ti, I would get a 970 and use the extra cash for something else. It's 50% more money for 10-15% more performance in most cases by choosing a 980.

Yes, better off buying a 970 and then selling it and buying a GTX 1070 rather than SLI.
 
Yeah, I think this is an awkward time to upgrade considering the upcoming node shrinks and the huge overall performance gains that come with them. Found a 270 for $90 on Reddit to CrossFire with; of course now I need a new PSU :( Still cheaper than a new GPU, and it's only a stopgap measure until next year. Can't wait for HBM2.

Agreed.

14nm / 16nm FinFET, HBM2, and most likely the first GPUs to be engineered after the Direct3D FL 12.1 spec was finalized, at least on the AMD side, although that part is somewhat uncertain and I could be wrong about it.

What I'm saying overall is that Arctic Islands / R400 series and Pascal next year will be the best time to upgrade for those who have Kepler and R2x0-based cards.
 