AMD Polaris architecture to succeed Graphics Core Next

Are those Nvidia/AMD-specific extensions commonly used?
I mean, the base API must have accumulated a number of limitations over the years, so I would suspect some tech-pushing devs have been using them.
Intel's PixelSync was used in a couple of games.
Not sure what Nvidia and AMD have done.
 
Aren't all hardware-exclusive effects done via specific extensions that tie into DX11 (in DX11 titles, that is)? Puttovoi?

Maybe many GameWorks effects are enabled using CUDA or NVAPI; I don't know what AMD have on their side.

Intel's PixelSync was used in a couple of games.
Not sure what Nvidia and AMD have done.
Ah yes, I remember GRID 2, Total War Rome, GRID Autosport and some others using Intel's hardware extension for some neat OIT effects.

I also remember Johan Andersson from DICE wishing for Intel's PixelSync to be exposed in future APIs.

I guess that has been done.
 
Of course you're here, shitting on AMD.

After how horrible the last cards ended up being (power draw, garbage driver support), can you blame him? I've been rooting for ATI/AMD since their original cards, but what difference does that make when they constantly screw their customers?
 
The problem, as always, is that we have very little to base our speculation on; very few developers have bothered shedding light on the matter, and it is not quite clear to what extent DX12 and Vulkan will narrow the efficiency gap between gaming PCs and consoles.
In this regard, welcome to Beyond3D, and in particular Andrew Lauritzen from Intel and Sebbbi aka Sebastian Aaltonen from RedLynx, one of the creators of the presentation I linked earlier:
https://forum.beyond3d.com/threads/ps4-longevity.56816/page-7#post-1843234
https://forum.beyond3d.com/threads/direct3d-feature-levels-discussion.56575/page-12#post-1848919
https://forum.beyond3d.com/threads/directx-12-api-preview.55653/page-12#post-1855321
They both oftentimes shed some light on questions "we" have.

To quote some of it:
AMD supports feature level 12.0. This brings some highly important features such as typed UAV load and bindless resources.

I can't count how many times during the last month alone our graphics programmers have cursed the lack of typed UAV load in DX11 (on PC). Without typed UAV load, in-place modification of textures is impossible (unless the texture is using 32 bit per channel format = eats the BW alive, incurs 4x+ sampling cost multiplier). Sampling integer textures (with for example gather4) is not allowed, so bit packing is out of the question. Also bit packing + filtering do not work. Texture/buffer view rules do not allow aliasing 32 bit integer texture/buffer on top of 16 bit int/float or 8888 or 11f-11f-10f etc. So you often need to perform unnecessary extra copies and allocate temp storage to perform read+modify+write style operations. This costs performance and complicates the code quite a bit. Typed UAV load makes compute shaders much more useful.
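For what it's worth, this is something an engine can query at runtime on DX12. A minimal sketch of the caps check (plain D3D12, not code from any particular engine; the helper name is made up and error handling is omitted):

```cpp
// Sketch: does this D3D12 device support typed UAV loads, e.g. for in-place
// read-modify-write of an RGBA8 texture from a compute shader?
#include <d3d12.h>

bool SupportsTypedUavLoadRgba8(ID3D12Device* device)
{
    // Blanket cap bit: FL12.0-class hardware reports the additional
    // typed UAV load formats here.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    // Per-format check for the specific format we want to load through a UAV.
    D3D12_FEATURE_DATA_FORMAT_SUPPORT fmt = { DXGI_FORMAT_R8G8B8A8_UNORM };
    device->CheckFeatureSupport(D3D12_FEATURE_FORMAT_SUPPORT, &fmt, sizeof(fmt));

    return opts.TypedUAVLoadAdditionalFormats &&
           (fmt.Support2 & D3D12_FORMAT_SUPPORT2_UAV_TYPED_LOAD) != 0;
}
```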

I guess it was too much to ask for DX12 to cover all the bases.
Are IHV-specific DX12 extensions possible? Nvidia has CUDA, but it must be a hard sell considering it will only work on Nvidia.
Part 1:
It's a bit problematic, since not every chip supports all the same operations, but you could support the operations that every IHV handles nowadays.
Part 2:
Officially no, but for DX11 they existed.
From AMD, for example:
http://developer.amd.com/tools-and-sdks/graphics-development/amd-gpu-services-ags-library/

Intel did some to expose PixelSync for DX11, or Rasterizer Ordered Views in DX12 terms (the GRID 2 smoke stuff).


Are those Nvidia/AMD-specific extensions commonly used?
I haven't heard a lot about them, so I would guess no.
 
That confirms what I thought about DX11 being way behind the times.
DX12 might be faster from a runtime and production standpoint; that's the big thing I take away from it all as far as I'm concerned.
 
I apologize if I came across as abrasive, that really wasn't my intention.

Rise of the Tomb Raider (ex-Xbone exclusive) will be interesting to benchmark; it uses async compute on consoles, so I wonder what hardware on the PC side will achieve the same quality at the same framerate.
Thus far the R7 260 did the job, but obviously its 1040 MHz clock speed helped vs. the 853 MHz on the Bone.
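Rough numbers on that comparison, assuming both the R7 260 and the XB1 GPU are 768-shader (12 CU) GCN parts and taking the clocks above at face value:

```latex
% Theoretical FP32 throughput = shaders x 2 FLOP/clock x clock
\text{XB1:}\;    768 \times 2 \times 0.853\,\text{GHz} \approx 1.31\ \text{TFLOPS}
\text{R7 260:}\; 768 \times 2 \times 1.040\,\text{GHz} \approx 1.60\ \text{TFLOPS}
```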

No worries, glad we could talk things out instead of getting into warrior status.

Agreed, it's always fun (for me at least) seeing the PC/PS4/XB1 comparison threads. I like to see how each team handles, prioritizes, and deals with the hardware and limitations/strengths of each platform.


Thank you very much for the posts and info. Very useful.
 
To go back to Polaris again.
Since we have two games 'announced' with ROVs + Conservative Rasterization:
Just Cause 3 and F1 2015

I hope more than ever that AMD will support DX12 FL12.1 with Polaris.

Ugh boys, please don't screw up.
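For reference, this is roughly how an app can tell at runtime whether the GPU it's running on actually exposes FL12_1 and its headline features. A minimal D3D12 sketch (the function name is made up, error handling omitted):

```cpp
// Sketch: query the max feature level plus the two FL12_1 headline features
// (Rasterizer Ordered Views, conservative rasterization) on a D3D12 device.
#include <cstdio>
#include <d3d12.h>

void ReportDx121Caps(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("FL12_1: %d  ROVs: %d  Conservative raster tier: %d\n",
                levels.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_1,
                opts.ROVsSupported != FALSE,
                static_cast<int>(opts.ConservativeRasterizationTier));
}
```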
 

If Just Cause 3 or F1 2015 were updated (the games gaining DX12 support, not simply the engines), it could indeed put further pressure on AMD because it would be out in public, but as far as I understand it neither will receive a DX12 patch. It's just that their respective engines already have DX12_1 support.

F1 2016 might support DX12 along with Avalanche's next project.

It is no surprise that many engines available today have DX12 support; I suppose countless devs have been experimenting with it for quite some time. But unless we begin to see a massive wave of games supporting DX12_1, AMD can rest easy.
 

I bet they will not, and half the 400 series GPUs will be rebranded models.
 
I hope I'm not crazy for thinking it would be crazy if JC3 and F1 2015 didn't receive DX12 patches.

You wouldn't announce your session with this headline otherwise:
Using New DirectX Features in Practice: Just Cause 3 Case Study (presented by Intel)
http://schedule.gdconf.com/session/using-new-directx-features-in-practice-just-cause-3-case-study-presented-by-intel

If they simply wanted to talk about engine integration, the title would say something like "look at the new DX12 features in the Avalanche engine".

Both sessions are also presented by Intel, so I'm pretty sure it's the same deal as before with GRID 2 or Total War:
Intel marketing, Intel collaboration.
 

Maybe I'm a bit thick, but I don't see how that says JC3 and F1 2015 will receive official, public DX12 support. To me it sounds like they are just disclosing what they have been working on and how their games could benefit from it.

Of course I would absolutely love Just Cause 3 to support DX12, and not only behind the scenes. F1 being an annual IP, I don't think they will bother, but JC4 is not planned for 2016, is it?
 
JC4? Of course not.

And since every modern game comes with DLC, season passes and whatnot, long-term support also makes sense from a technical standpoint.
 

Good point; it's also smart to try to recoup your investment over a longer period than the traditional 3 months for typical AAA releases.
Publishers need to think long term.
 
That's taken ludicrously out of context. He's saying that GPU roadmaps usually target 20% more performance year over year, but that this time they "set a completely different goal". Here's what he actually said:



Here's the link to the original interview. (I've added quotation marks to VentureBeat's original formatting, as they should have been included by whoever transcribed the interview in the first place.)

Not that any of these PR statements mean anything anyway, but if you're going to try to base an argument on them, at least quote them properly.
Maybe you misunderstood. He is stating that for the GPU after this year's Polaris, say a Polaris 2 in 2017, they set a target of 20% improvement. Now it really depends on where Polaris lands this year, but if it beats Maxwell by 20% then you sure as hell know that next year's offerings will also be crap against a Pascal that is 40% better than Maxwell.
 
I understood it exactly the same way Thraktor did,
and he made his points very clear.

This is the quote:
Raja Koduri (AMD) said:
When we set to design this GPU, we set a completely different goal than for the usual way the PC road maps go. Those are driven by, the benchmark score this year is X. Next year we need to target 20 percent better at this cost and this power. We decided to do something exciting with this GPU. Let's spike it so we can accomplish something we hadn't accomplished before.
Raja roughly described how it went in the past and what they did now.
I'm sure the head of a division wouldn't casually reveal their performance plans for next year.
 
One thing that hasn't been mentioned here about the VentureBeat article is that Koduri believes they are ahead of Nvidia in the transition, especially in the mobile and mainstream markets.

Given we've heard that AMD is doing their lower-end chips on GlobalFoundries' 14 nm, and specifically we know of only two chips coming out, Koduri likely knows that 14 nm production is ready much sooner than TSMC's 16 nm production. If we assume that they're preparing their enthusiast-class GPU at TSMC, yet they only talk about these two mobile and mainstream chips, it would indicate that the chips made at TSMC are coming considerably later. Of course Nvidia could be faster at TSMC if they do smaller chips first.
 
Are those Nvidia/AMD-specific extensions commonly used?
I mean, the base API must have accumulated a number of limitations over the years, so I would suspect some tech-pushing devs have been using them.

On the Nvidia side I think usage is mostly limited to Nvidia's GameWorks suite. I don't think many devs bother using advanced Nvidia-only features of their own accord.
 
Do specific optimization techniques count? I mean, I can see that being the case considering the architecture is different from a PC, but I read his quote as much, much more than that. If the visual integrity can be maintained on PC even with a different optimization technique, then it's nowhere near as impressive.

I read his quote as "visual features" that could only work on consoles, that's my interpretation.

I don't think there are any "visual features" that can work only on consoles. The biggest advantage of the PS4 over a PC is its fast GDDR5 UMA architecture; the rest is smaller stuff like cross-lane operations, which are technically there on PC but aren't exposed in modern APIs.

HRAA as I understand it was dropped from PC because NV discontinued CSAA with Maxwell and HRAA is based on coverage reconstruction. It's rather bad in IQ anyway so not a big loss.

If there is nothing you can do on PS4 that you can't do on PC, then why would the head of AMD's GPU division (in an interview about improving their discrete GPU line) say that? I mean, I would understand it coming from a Sony/MS spokesperson, or something similar. But not from a guy trying to sell you your next PC GPU. Which is why I posted the quote.

But, as was mentioned, we don't really have any more information than that, so there isn't much to talk about at the moment, sadly.
He's just promoting their future products. And I would honestly rather hear him flat-out say that Polaris will support FL12_1 instead of going into completely pointless console territory.
 
Samsung Begins Mass Producing World’s Fastest DRAM – Based on Newest High Bandwidth Memory (HBM) Interface

From the article:
In addition, Samsung plans to produce an 8GB HBM2 DRAM package within this year. By specifying 8GB HBM2 DRAM in graphics cards, designers will be able to enjoy a space savings of more than 95 percent, compared to using GDDR5 DRAM, offering more optimal solutions for compact devices that require high-level graphics computing capabilities.

A single 8GB package. Crazy.
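For scale, assuming the standard HBM2 figures (a 1024-bit interface per stack, 2 Gb/s per pin, 8 Gb dies), the per-package numbers work out to:

```latex
\text{Bandwidth per stack} = \frac{1024\ \text{pins} \times 2\ \text{Gb/s}}{8\ \text{bit/byte}} = 256\ \text{GB/s}
\text{Capacity of an 8-high stack} = 8 \times 8\ \text{Gb} = 64\ \text{Gb} = 8\ \text{GB}
```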
 
HRAA as I understand it was dropped from PC because NV discontinued CSAA with Maxwell and HRAA is based on coverage reconstruction. It's rather bad in IQ anyway so not a big loss.
It also uses analytical AA (GBAA) which is quite slow without having access to interpolators in hardware and API. (Have to use geometry shaders.)
 
One thing that hasn't been mentioned here about the VentureBeat article is that Koduri believes they are ahead of Nvidia in the transition, especially in the mobile and mainstream markets.

Given we've heard that AMD is doing their lower-end chips on GlobalFoundries' 14 nm, and specifically we know of only two chips coming out, Koduri likely knows that 14 nm production is ready much sooner than TSMC's 16 nm production. If we assume that they're preparing their enthusiast-class GPU at TSMC, yet they only talk about these two mobile and mainstream chips, it would indicate that the chips made at TSMC are coming considerably later. Of course Nvidia could be faster at TSMC if they do smaller chips first.
AMD is definitely not ahead of Nvidia in mobile GPUs. One look at the gaming laptop market shows that AMD is nowhere to be seen; Nvidia dominates that landscape. Even if we were talking about mobile SoCs, Nvidia is still ahead of them there, considering Nvidia has the Tegra X1, which is actually in a tablet, while AMD's Mullins series is not in a single product so far.
 

Samsung is really rolling stuff out on the fab side. They upped the amount of RAM per chip for their LPDDR4, they announced last week that their second-generation 14nm FinFET process was ready for mass production, and now they announce HBM2.

Couple that with the idea that they are probably going to be the iPhone OLED supplier, and Samsung is doing well even if their own branded electronics aren't selling at the high clip they used to.
 
One of the things I want to see from Nvidia is the Tegra Pascal boards. I wonder if they'll pair them with a newer version of their Denver CPU.

If AMD plays their cards well they could very well grab some of the laptop market as well.
 
Yeah, but the original roadmap paired the Denver (2?) CPU cores with a Maxwell GPU. I guess now it will be Denver + Pascal.

Hm, I don't think they've explicitly specified which CPU cores they'd pair with Maxwell. Everyone just assumed that since they used Denver in one configuration of the K1 they'd use it in Erista and Parker as well. They decided not to use it in Erista/X1, and will use it again in Parker instead.
 
AMD is definitely not ahead of Nvidia in mobile GPUs. One look at the gaming laptop market shows that AMD is nowhere to be seen; Nvidia dominates that landscape. Even if we were talking about mobile SoCs, Nvidia is still ahead of them there, considering Nvidia has the Tegra X1, which is actually in a tablet, while AMD's Mullins series is not in a single product so far.

Of course they're not, and that's not what was said. They'll be first to 14 nm products, and they'll have them for the back-to-school period, which is super important. Nvidia will apparently only do a refresh with the 970MX and 980MX since Pascal won't be ready for mobile in time. This is a very good chance for AMD to take back market share in mobile, where they've been nearly non-existent.
https://translate.google.com/transl...oard.php?bo_table=news&wr_id=15839&edit-text=
 
Slightly OT, but I figured we were talking about AMD already so why not.

According to the AMD investors' report

Advanced Micro Devices Inc. said revenue fell 23% in the fourth quarter, amid lower sales of processors used in personal computers and a decline in game console royalties
http://www.wsj.com/articles/amd-fourth-quarter-revenue-tumbles-23-but-loss-narrows-as-costs-fall-1453242928

I wonder why there's a decline in the royalties for console APUs? Are the royalty structures front-loaded, or possibly related to # of consoles sold (since there were more sold this FY than last, right?)?

This also means that Sony/MS are making more per console sold now, which increases profit and also gives them another chance for a price drop while still being profitable.
 
This decline is likely seasonal: they provided the APUs for holiday sales during the previous quarter, and this quarter is usually rather slow, so the number of APUs ordered went down considerably.
 

Haha, thank you. I just realized I've been reading that wrong this whole time. For some reason I was thinking in my head that it was talking about the royalty fee per APU, not the overall royalties.

Silly me.

Thanks.
 
Both of you are correct. Royalties do drop over time as the price drops, and sales are seasonal. At least that's my understanding from AMD's transcripts (stock disclosures).

We should get a bump in sales for both the XB1 and PS4 when they start being used as DVRs, cable TV STBs and antenna TV ATSC 2.0 STBs supporting XTV, as well as VidiPath servers and clients. Then UHD IPTV and cord-cutter IPTV (PlayStation Vue) using HEVC, and likely 4K Blu-ray players with digital bridge.

The PS5 and XB2 will likely drop after 2018, so both generations will be shipping and making money for AMD.
 
Not related to Polaris, but Falcon Northwest showed off a new desktop with a Radeon R9 Fury X2 dual-GPU card running an HTC Vive demo at the Virtual Reality Los Angeles 2016 Winter Expo.



https://twitter.com/FalconNW?ref_src=twsrc^google|twcamp^serp|twgr^author

http://wccftech.com/amds-dual-fiji-based-radeon-r9-fury-x2-graphics-card-spotted-vrla/
http://www.fudzilla.com/news/graphics/39761-amd-shows-off-radeon-r9-fury-x2-dual-gpu-at-vrla-2016

At the Virtual Reality Los Angeles (VRLA) 2016 Winter Expo this weekend in Southern California, AMD showed off its upcoming Radeon R9 Fury X dual-GPU powered by two 28nm Fiji GPUs.

Antal Tungler, PR Manager for AMD, tweeted about the showcase on Saturday, instantly sparking the attention of graphics card aficionados inside and outside the event. The GPU was featured inside a new prototype system made by Falcon Northwest.
 
Rumor has it that Apple may contract AMD to design a semi-custom x86 SOC for its iMac products in 2017 and 2018. According to a report from Bitsandchips.it this deal would allow Apple to secure a high performance x86 SOC design at a significantly lower cost than competing Intel solutions.

Zen + Polaris + HBM (likely some other star-name GPU) as an APU. The XB1 can emulate an Xbox 360, and a next-generation APU with Zen could likely emulate the PS3 and any older-generation chipset designed for consumers.

The stock market is speculating on how long AMD can last. AMD was seriously impacted by the end of Moore's law (leakage increases with smaller nodes) and by the costs of implementing HBM. CPUs require a larger cache as they get faster, which Intel designed in, and that raised the cost of Intel CPUs. GPUs, as they get larger, are memory-starved; the answer for AMD was HBM, and then HSA with a new memory interconnect fabric. To use HBM efficiently, the caches and registers in CPUs and GPUs need a redesign, which is also necessary for new power-efficiency schemes. If they can do this and meet the price point the consumer is willing to pay, they will win back market share. I think that is their plan, and by 2018 their stock price should reflect how well they succeeded.
 
I've heard this many times with every new CPU/GPU architecture from AMD. And in every case it gets dwarfed by Intel/Nvidia.
 
Which is why the XB1 and PS4 are using AMD and not Intel/Nvidia?

The current iMac is using a small Intel CPU/APU and an AMD dGPU. Zen is competitive with Intel CPUs, and AMD GPUs are better than Intel's. For the first time an APU could replace the APU + dGPU combo the iMac is using, and it would be faster or at least cheaper than Intel, and adequate at worst. Then there is AMD using open standards, many of which still haven't been implemented because too few platforms support them.
 

Right, because OEM contracts are definitely all about performance and not about maximizing margins at the cost of the supplier.
 
I take it that was a sarcastic comment, but I don't see how it applies: "this deal would allow Apple to secure a high performance x86 SOC design at a significantly lower cost than competing Intel solutions." A next-generation APU/SoC should be cheaper than a CPU/APU + dGPU combination. AMD has recommended a small APU + dGPU until the next generation, which has to incorporate multiple power-saving schemes and HBM so that the memory can support all of them. The XB1 and PS4, as game consoles, get a break on power-efficiency requirements that still apply to computers.

Currently the US government cannot purchase a computer unless it complies with Energy Star/EU power requirements.

unapersson said:
Which is why we're all on Intel on our high end desktops now rather than X86-64 AMD.
Yes, CPU/APU + dGPU at the present time favors Intel, as it has the more efficient, more powerful CPU/APU. The same is now seen in the iMac. When we move to larger APUs with the newer open standards, this should change for PCs and the iMac. If most PCs come with an APU that is equal to an XB1 or PS4, and you can add a dGPU to it, I expect most PCs or iMacs will ship with this chipset. The new memory fabric confirms this: a 4 TF APU! I suspect in the short term the mainstream/cheapest APU will be at PS4-level performance.

 

A CPU + dGPU combo will always be faster than just a CPU with an iGPU; nothing will change with the new APUs from AMD. Intel's Skylake iGPU is more advanced than GCN, and Iris graphics performance is nothing to be ashamed of. Its use of EDRAM is more or less the same thing AMD plans to achieve with HBM, and it's been out for a couple of years already. There is nothing new in what AMD plans to offer; the only reason Apple may use AMD's chips is that they'll be cheap while providing an acceptable level of performance.
 
EDRAM is expensive and cannot be provided in the amounts that GPUs need. In the short term EDRAM can improve the efficiency of iGPUs that have to rely on DDR3 or DDR4 (XB1 and Intel). HBM changes everything, which is why Intel has plans to use HBM just as AMD does. Will AMD have a cheaper, larger APU than Intel? The rumor thinks so, and I think AMD has to in order to survive.
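Rough bandwidth numbers behind the "HBM changes everything" point, assuming dual-channel DDR4-2400 for a conventional desktop APU, the PS4's 256-bit GDDR5 at 5.5 Gb/s, and a single HBM2 stack:

```latex
\text{Dual-channel DDR4-2400:}\; 2 \times 64\,\text{bit} \times 2400\,\text{MT/s} / 8 \approx 38\ \text{GB/s}
\text{PS4 GDDR5:}\; 256\,\text{bit} \times 5.5\,\text{Gb/s} / 8 = 176\ \text{GB/s}
\text{One HBM2 stack:}\; 1024\,\text{bit} \times 2\,\text{Gb/s} / 8 = 256\ \text{GB/s}
```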

A CPU + dGPU, because of TDP limitations, can be larger and faster. I'm not saying that isn't true; what I am saying is that next-generation APUs, because of HBM, will be larger and will become the basic building block for a PC or iMac, not a small APU + dGPU. So the minimum in a PC changes to something close to a PS4-level APU (2 TF or smaller, with a larger, faster CPU), with a dGPU optional. Sockets will still be supported with DDR4 on the motherboard, so upper-end PCs can have a 4 TF APU and a larger dGPU.

"AMD Has Three Semi-Custom Designs In The Pipeline In Addition To The Two Current Console Designs" Of the three new semi-custom designs one is gaming, one is beyond gaming and one is ARM with the beyond gaming either X-86 or ARM. AMD just announced a ARM server design so a guess is Nintendo NX, Apple iMac and the ARM server.

Edit: In the short term we could see an XB1-sized or smaller APU with EDRAM as the PC or iMac APU, but a larger APU needs HBM and the register/memory-fabric changes.
 
Can we expect these to be on par with, if not outright better than, Pascal?

There's really no way to know outside of projected bandwidths etc. The high ends seem to be similar, but we have no idea in what order each company will release its low, mid and high end, or how they will compare.
 
Did you see the CPUs with Iris graphics? The EDRAM chip on them is implemented in the exact same fashion as the HBM stacks are on Fiji. The difference is in UMA/NUMA obviously, but for a typical PC usage scenario this difference is negligible, as you will never be able to omit NUMA support on the PC platform.

Last time I checked, Intel did not plan to use HBM, as they opt for HMC instead. In any case the memory type is not important; what you get with HBM is more bandwidth, and that is just extensive evolution which does not change anything.

Nothing changes. dGPUs will progress at the exact same rate as iGPUs. It doesn't matter how PC APUs compare to console parts; reaching PS4 GPU performance in a PC APU won't change anything on the PC market, as it's already way beyond what is possible on current-gen consoles.

Yeah, around June.

Back to school is what I've read last time.
 