AMD Polaris architecture to succeed Graphics Core Next

No. It becomes less of a hack, but definitely not easier.

I'm not trying to insinuate that it's as easy as flipping a switch or anything, but I think it's reasonable to believe that all of these supposedly "horrendous" compatibility issues are reduced.
 
Both Fiji and Tonga have a 1/16 DP rate, which is just two times higher than what Maxwell and GP10x have. So in essence AMD has "ditched" its DP units as well in GCN3 cards. This, btw, is the most likely reason for the perf/watt improvements seen in GCN3 (that and the use of HBM on Fiji, obviously).
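To put rough numbers on that, here's a minimal sketch; the 8.6 TFLOPS figure is Fury X's approximate peak FP32 rate, used purely for illustration:

```cpp
#include <cstdio>

int main() {
    // Peak FP64 rate is simply peak FP32 divided by the DP ratio.
    const double fijiFp32Tflops = 8.6;  // Fury X peak FP32 (approx.)
    const double gcn3DpRatio    = 16.0; // Fiji/Tonga: 1/16 DP rate
    const double maxwellDpRatio = 32.0; // Maxwell/GP10x: 1/32 DP rate

    std::printf("Fiji FP64:            %.2f TFLOPS\n", fijiFp32Tflops / gcn3DpRatio);
    // At the same FP32 throughput, a 1/32-rate part would manage only half:
    std::printf("1/32-rate equivalent: %.2f TFLOPS\n", fijiFp32Tflops / maxwellDpRatio);
    return 0;
}
```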

That's true. Although GCN doesn't use dedicated DP ALUs like nVidia does, I assume cutting actual units saves more die space than just cutting the logic that beefs up the DP ratio. But I'm not all that familiar with how GCN handles DP at the transistor level, so I could be wrong.

My point merely was that it's not easy to say how efficient GCN is as a whole, considering how messy the line-up is with all the outliers and iterations AMD drip-fed over the past five years.
 
Am I missing something here? I just installed Crossfire the other day (2nd R9 390X) and have encountered zero issues playing the following games:

  • Doom
  • Far Cry 4
  • Divinity Original Sin
  • Crysis 3
  • Fallout 4

In fact, the only issue I did encounter was with Warhammer, and that was fixed the same day.

I swear, I always see people spreading stuff like this about Crossfire and SLI and my experience so far has been incredible. Granted, I'm still in the honeymoon period, but sheesh, it seems to be working pretty well for me right now.

Plus, with DX 12, the multi-GPU stuff becomes easier to implement.

you are so wrong. dx12 is the death blow for multi gpu.
 
There is a 390X BIOS we can flash onto the 290X. Doing this will gain you ~5% over stock.

http://www.overclock.net/t/1564219/modded-r9-390x-bios-for-r9-290-290x-updated-02-16-2016/0_50


290X memory timings (think CAS latency of normal RAM such as DDR3) are separated into "straps".

Each strap has its own set of memory timings.

The lower the strap's MHz range, the tighter the timings are.

We edit the BIOS by copying the timings from the 1126-1250MHz strap into all the straps above it, as sketched below.

Now when you overclock your RAM from 1250MHz (stock) to 1500MHz, your performance will be another 5-10% higher than a stock-BIOS 290X also running at 1500MHz.
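Conceptually, the edit is just "clone the 1126-1250MHz timing set into every higher strap". A minimal C++ sketch of the idea (the strap boundaries and the TimingSet blob are illustrative placeholders, not the actual Hawaii BIOS layout):

```cpp
#include <array>
#include <cstdint>
#include <map>

// Illustrative only: a real Hawaii BIOS stores each strap's timings as an
// opaque byte blob, not as named fields.
struct TimingSet {
    std::array<std::uint8_t, 48> raw{}; // one strap's timing blob
};

int main() {
    // Straps keyed by their upper frequency bound in MHz (illustrative values).
    std::map<std::uint32_t, TimingSet> straps{
        {1000, {}}, {1125, {}}, {1250, {}}, {1375, {}}, {1500, {}}, {1625, {}},
    };

    // The mod: copy the 1126-1250MHz strap's (tighter) timings into every
    // strap above it, so memory clocked past 1250MHz keeps the tight timings.
    const TimingSet donor = straps.at(1250);
    for (auto& [upperMHz, timings] : straps) {
        if (upperMHz > 1250) {
            timings = donor;
        }
    }
    return 0;
}
```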

http://www.overclock.net/t/1561372/hawaii-bios-editing-290-290x-295x2-390-390x/0_50

Here is my 290X @ 1150MHz / 1580MHz (conservative clocks):

http://www.3dmark.com/fs/7825736

The 290/290X/390/390X are beasts and the kings of performance per dollar.
 
I'm not trying to insinuate that it's as easy as flipping a switch or anything, but I think it's reasonable to believe that all of these supposedly "horrendous" compatibility issues are reduced.

There will also be games with zero compatibility issues simply because developers didn't bother implementing an mGPU solution at all. So basically the same situation we're in today (and that's probably the best case).
 
You do realize that under dx12, developers have to do all the work they already weren't doing under dx11, as well as all the driver-side work the IHVs have spent years doing.

No they don't.... But they can choose to do everything manually.
You can still run dx12 with drivers taking care of workload distribution. (This will ofc prevent you from mixing GPUs)
 
No they don't.... But they can choose to do everything manually.
You can still run dx12 with drivers taking care of workload distribution. (This will ofc prevent you from mixing GPUs)

Then you're doing what's already been available for years, and don't get the benefits that dx12 allows should a developer want to do the work.
 
As if dx12 were only about that... Here is an Nvidia slide on the modes available:

[Image: GTX-1080-REVIEWS-11.PNG, an Nvidia slide listing the DX12 multi-GPU modes]

Btw, the 2-GPU max is just a recommendation; the actual limit is 4.
 
ok, what benefit does dx12 offer over dx11 when using lda implicit?

Things like new effects, async compute and better thread distribution on cpu should all still work.

But ofc if a developer can master the dx12/vulkan multi adapter functions then that will probably be faster.
 
Am I missing something here? I just installed Crossfire the other day (2nd R9 390X) and have encountered zero issues playing the following games:

  • Doom
  • Far Cry 4
  • Divinity Original Sin
  • Crysis 3
  • Fallout 4

In fact, the only issue I did encounter was with Warhammer, and that was fixed the same day.

I swear, I always see people spreading stuff like this about Crossfire and SLI and my experience so far has been incredible. Granted, I'm still in the honeymoon period, but sheesh, it seems to be working pretty well for me right now.

Plus, with DX 12, the multi-GPU stuff becomes easier to implement.

Fallout 4 did not support Crossfire at launch and for a long time after (not sure if and when they actually added a profile in the drivers).
The Division did not work with Crossfire (even though it was touted to have support); it still experiences major flickering with Crossfire on, three months after the game's release.
VR does not support Crossfire or SLI at all; it's forcefully blocked.
With every new game released, it's a crapshoot whether it supports it or not.
And from a quick search, Doom does not currently support Crossfire. They added Doom support to the 16.5.2 driver, but this did not include a Crossfire profile.

DX12 puts all the effort of multi-GPU on the developers, and considering it is a pretty small market, I doubt many will focus on adding such a feature.

This is coming from a Crossfire user btw. I'm speaking from experience about the issues and lack of support.
 
Things like new effects, async compute and better thread distribution on cpu should all still work.

But ofc if a developer can master the dx12/vulkan multi adapter functions then that will probably be faster.

Those have nothing to do with multi-GPU. I'm asking in what ways multi-GPU can be done better in dx12 when using lda implicit.
 
Those have nothing to do with multi-GPU. I'm asking in what ways multi-GPU can be done better in dx12 when using lda implicit.

Seeing as no one else is answering:

The linked GPU pattern allows all the cards available in a system to be treated as a single GPU with multiple command processors per engine (3D/Compute/Copy) and multiple memory regions. It can utilize resources from one GPU in the other linked GPU's rendering pipeline, and command processors and memory regions are selected by a node mask on the API.

Which makes both cards' memory available, something that isn't possible under dx11.
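For the curious, here's roughly how that node mask surfaces in D3D12; a minimal sketch assuming a linked-node (LDA) adapter where each physical GPU is one node (error handling omitted; 'device' is a hypothetical already-created ID3D12Device):

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create one direct command queue per physical GPU on a linked adapter.
void CreatePerNodeQueues(ID3D12Device* device)
{
    const UINT nodeCount = device->GetNodeCount(); // physical GPUs in the link

    for (UINT node = 0; node < nodeCount; ++node) {
        // NodeMask is a bitmask with one bit per physical GPU.
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node; // this queue lives on this GPU

        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));

        // Heaps/resources use the same masks: CreationNodeMask picks the GPU
        // whose memory backs the resource, VisibleNodeMask picks which GPUs
        // may access it.
        D3D12_HEAP_PROPERTIES heapProps = {};
        heapProps.Type             = D3D12_HEAP_TYPE_DEFAULT;
        heapProps.CreationNodeMask = 1u << node;
        heapProps.VisibleNodeMask  = (1u << nodeCount) - 1; // visible to all GPUs
        (void)heapProps; // pass to CreateCommittedResource when creating resources
    }
}
```

That VisibleNodeMask is the interesting bit: it's what lets a resource living in one card's memory be accessed by the other, which is the part the post above says dx11 can't do.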
 
Apologies for the potentially ignorant question as I've not been keeping up to date.

Is there an ETA on when we can expect to hear more about what Polaris will be bringing to the table performance wise?
 
Apologies for the potentially ignorant question as I've not been keeping up to date.

Is there an ETA on when we can expect to hear more about what Polaris will be bringing to the table performance wise?

I think tomorrow (the 27th) is when AMD presents it to the press in Macau? May hear some tidbits then...
 
That is not applicable to LDA implicit mode. That's for the explicit mode, where the developer has to do all the app-side work + all the driver-side work.

Well, that's strange, as that text is lifted from Nvidia's 1080 white paper talking about multi-GPU, where they differentiate i-lda and e-lda by who has control and then talk about LDA as a single entity... could you be wrong, and you do get to use the extra grunt from the 2nd GPU under dx12 via the driver?
 
Well, that's strange, as that text is lifted from Nvidia's 1080 white paper talking about multi-GPU, where they differentiate i-lda and e-lda by who has control and then talk about LDA as a single entity... could you be wrong, and you do get to use the extra grunt from the 2nd GPU under dx12 via the driver?

Def possible I'm wrong, but I don't see how it's realistically feasible for the previously mentioned benefits to exist under driver-controlled mGPU.
 
Def possible I'm wrong, but I don't see how it's realistically feasible for the previously mentioned benefits to exist under driver-controlled mGPU.
Think I found your answer...

[Image: XFMJVZy.jpg, a slide on Implicit Multiadapter noting 'same experience and guidance as DX11']


You're right, but there are small benefits, as per the image above.
The text that accompanies the image is quoted below:

Implicit Multiadapter tells the graphics driver that you do not want to deal with load balancing. Like SLI and CrossFire, this means Alternate Frame Rendering (AFR). I also expect that Implicit Multiadapter would also mirror all memory between devices and graphics cards of different models will not qualify, but neither of these two points were mentioned in the keynote. Of course, Microsoft still recommends that developers collaborate with hardware vendors to create a profile, like SLI and CrossFire do today with various driver updates and the GeForce Experience application.
 
I'm not seeing any benefits there other than working in windowed applications? I honestly don't know if that's possible under dx11.

Reduction in cross-frame dependencies would reduce microstutter; I'd call that a win.

But i-lda is a dead tech walking, so we should drop it here.
 
Reduction in cross-frame dependencies would reduce microstutter; I'd call that a win.

But i-lda is a dead tech walking, so we should drop it here.

One of us is misunderstanding the slide. My understanding is it's listing things developers have to do under the current dx11 model to facilitate AFR scaling.
 
One of us is misunderstanding the slide. My understanding is it's listing things developers have to do under the current dx11 model to facilitate AFR scaling.

Could be. But I'm dropping it here; let's call it status quo for lda. As the tech is basically dead anyway, why mourn its passing?
 
Those have nothing to do with multi-GPU. I'm asking in what ways multi-GPU can be done better in dx12 when using lda implicit.

No, that was actually not what you were asking... and it can be done better by the driver if the developer doesn't have the time, doesn't care to spend the time implementing full DX12, or doesn't have the actual skill for it.
As I stated in my earlier answer: yes, the dx12/Vulkan way of doing multi-GPU directly against the hardware is a better way to do it... IF you can master it. That is also why developers are not forced to use it.
 
No, that was actually not what you were asking... and it can be done better by the driver if the developer doesn't have the time, doesn't care to spend the time implementing full DX12, or doesn't have the actual skill for it.
As I stated in my earlier answer: yes, the dx12/Vulkan way of doing multi-GPU directly against the hardware is a better way to do it... IF you can master it. That is also why developers are not forced to use it.

Of course it was. My entire post history in this particular discussion has been talking about mGPU in dx12 compared to dx11. Better CPU threading and async compute have absolutely nothing to do with it. The post I originally replied to expressed excitement because dx12 allows for better mGPU than dx11. To achieve that, explicit mode must be used, which means the dev has to do app + driver work. That is never going to be anything but the extremely rare exception. Using implicit lda, we have the exact same issues we have now. Implicit lda changes absolutely nothing wrt mGPU scaling and/or compatibility.

I mean, there's even a huge image a few posts above that says "same experience and guidance as dx11".
 
Of course it was. My entire post history in this particular discussion has been talking about mGPU in dx12 compared to dx11.

Wow, sorry for not reading your entire post history and then guessing that I should automatically associate some of those posts with the questions you asked.
Still, I think I answered your questions.
 
Wow, sorry for not reading your entire post history and then guessing that I should automatically associate some of those posts with the questions you asked.
Still, I think I answered your questions.

Then you're doing what's already been available for years, and don't get the benefits that dx12 allows should a developer want to do the work.

ok, what benefit does dx12 offer over dx11 when using lda implicit?

Those were my two original posts to you. I don't see how there's any confusion. You then answered with benefits that have nothing to do with mGPU.
 
They better tease more than that by tomorrow because a shit ton of 1080s will be sold soon.

Buyers of 1080s would never look at a Polaris because Polaris 10 isn't even likely to be able to match a 1070.

They'd better have some concrete performance numbers that are better than expected before June 10th, though, or else the 1070 will become the 970 of this generation. It's still amazing that Nvidia was able to sell so many $330 video cards that the 970 is currently the #1 GPU in the Steam Hardware Survey.
 
There isn't any overlap between the potential buyers of something like a GTX 1080 and whatever AMD might reveal in the next few days.

Depends though, if price/perf is good enough, I might actually skip the 1080, get this, and wait for Vega/1080ti. Just a stopgap upgrade from 7970.
 
Fallout 4 did not support Crossfire at launch and for a long time after (not sure if and when they actually added a profile in the drivers).
The Division did not work with Crossfire (even though it was touted to have support); it still experiences major flickering with Crossfire on, three months after the game's release.
And from a quick search, Doom does not currently support Crossfire. They added Doom support to the 16.5.2 driver, but this did not include a Crossfire profile.

DX12 puts all the effort of multi-GPU on the developers, and considering it is a pretty small market, I doubt many will focus on adding such a feature.

This is coming from a Crossfire user btw. I'm speaking from experience about the issues and lack of support.

Thanks for the informative post. I'm actually surprised by the lack of Crossfire support with Doom. I'm almost embarrassed to say it, but perhaps I've been placebo'd? That or maybe the support that AMD added for Doom improved my single card performance substantially?
 
Thanks for the informative post. I'm actually surprised by the lack of Crossfire support with Doom. I'm almost embarrassed to say it, but perhaps I've been placebo'd? That or maybe the support that AMD added for Doom improved my single card performance substantially?

Indeed, I think there was a pretty good increase in single-card performance with Doom on the latest drivers; I read reports of up to 30% in some configs.

So apparently the NDA for the Polaris event starting soon is the 29th of June. That'd be pretty bad, honestly. Some speculate, though, that the NDA is for Zen/Vega, and Polaris 10 will have info sooner.
 
If the NDA for Polaris 10 is 29 June, AMD are idiots. Sorry, but not saying anything for another month while Nvidia has just launched their new gen is idiotic.
 
Indeed, I think there was a pretty good increase in single-card performance with Doom on the latest drivers; I read reports of up to 30% in some configs.

So apparently the NDA for the Polaris event starting soon is the 29th of June. That'd be pretty bad, honestly. Some speculate, though, that the NDA is for Zen/Vega, and Polaris 10 will have info sooner.

[Image: img_57480549a953atepq6.jpg, showing the NDA date]


I guess we have a date for the NDA.

Edit: welp, guess I was late.

I really hope this is for Vega, because it's kind of ridiculous if it's for Polaris 10/11. Their mindshare potential is evaporating faster than water in California.
 
I thought that Vega was re-confirmed for 2017 during the last investor call and this editors' day?

Polaris was always targeted at mid-year, so a 29th of June launch makes sense.
 