That seems to be causing a few issues in Breath of the Wild, and I wonder if that's the reason they kept the 900p resolution instead of going with 1080p. Then again, MK8 has DoF in replay mode as well, and it's a 1080p game on Switch.
I wonder if they managed to mitigate the issue for the final version of the game, and whether, with more time in the oven (or if the game had been made with the Switch in mind from scratch), they would have found some workarounds and boosted it to 1080p.
I wouldn't necessarily say that DoF blurring on its own would prevent a game from hitting 1080p/60fps, but rather that on a tile-based renderer you would generally want to minimise the number of render passes which require a full-resolution buffer to be written out to main memory and then read back in again. A DoF pass would be one of these, but it's possible BotW has more of them. I've tried my best to maintain a media blackout on the game since its first announcement, though, so it's difficult for me to speculate on what they may be.
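To put a very rough number on why those passes are costly, here's a back-of-envelope sketch of the extra main-memory traffic a single full-resolution post-process pass like DoF could add. The buffer formats (RGBA8 colour, 32-bit depth) and the read/write pattern are assumptions for illustration, not measurements of BotW:

```python
# Rough cost of one full-resolution post-process pass (e.g. a DoF blur) that
# has to write its input out to main memory and read it back, assuming a
# 1920x1080 RGBA8 colour buffer plus a 32-bit depth buffer at 60 fps.
# All figures are illustrative, not measured.

width, height, fps = 1920, 1080, 60
bytes_per_pixel_colour = 4   # RGBA8
bytes_per_pixel_depth = 4    # 32-bit depth/stencil

colour = width * height * bytes_per_pixel_colour
depth = width * height * bytes_per_pixel_depth

# One pass = write the colour buffer out, then read colour (+ depth for the
# circle-of-confusion) back in for the blur.
traffic_per_frame = colour + (colour + depth)
traffic_per_second = traffic_per_frame * fps

print(f"per frame: {traffic_per_frame / 2**20:.1f} MiB")
print(f"per second: {traffic_per_second / 2**30:.2f} GiB/s")
```

That works out to only a few percent of a 25.6GB/s budget for one pass, but it comes on top of everything else the frame already needs, and each additional full-resolution pass stacks the same cost again.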
Alternatively, it may simply be a production decision by the BotW team. They may have preferred to add graphical effects, etc., in docked mode rather than push the resolution all the way up to 1080p.
Makes sense. I wonder what kind of bandwidth we can expect from the eventual SRAM.
A general rule for on-die SRAM is that it will give you "enough" bandwidth for whatever it's designed for. My guess is that if there's any extra memory on there, it's in the form of increased GPU L2, as that's a relatively simple change which would allow them to use larger tiles, or more render targets per tile, or perhaps just to free up more space for texture caching and so forth.
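As a rough illustration of why a bigger L2 translates into bigger tiles or more render targets per tile, here's a small sketch; the tile sizes and bytes-per-pixel figures are assumptions for illustration, not Maxwell's actual tile format:

```python
# Rough relationship between tile size and on-chip storage in a tile-based
# renderer: the tile has to hold every colour render target plus depth for
# its pixels. Purely illustrative numbers.

def tile_bytes(tile_w, tile_h, colour_targets=1, bytes_per_colour=4, bytes_per_depth=4):
    per_pixel = colour_targets * bytes_per_colour + bytes_per_depth
    return tile_w * tile_h * per_pixel

for tile in (16, 32, 64, 128):
    single = tile_bytes(tile, tile)                      # one RGBA8 target + depth
    gbuffer = tile_bytes(tile, tile, colour_targets=4)   # deferred-style G-buffer
    print(f"{tile:>3}x{tile:<3}  single RT: {single/1024:6.1f} KiB   4 RTs: {gbuffer/1024:6.1f} KiB")
```

Doubling the tile dimensions quadruples the on-chip storage needed, so even a modest cache increase buys either larger tiles or more attachments per tile.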
Didn't we see a post not long ago from a Ubi developer saying that 25.6GB/s would indeed be a huge bottleneck even accounting for TBR, specifically because of the kinds of post-processing effects like depth of field you mention? It could be that MK8 and RMX are able to run at 1080p/60fps because there is a large on-die cache or something.
Regarding the DoF issue in Zelda mentioned by Digital Foundry, I believe that was specific to the demo version and didn't even occur consistently. So it might be something we don't see in the retail version, which would indicate the effective bandwidth is higher than 25.6GB/s, right?
Do you have a link for the Ubisoft comment? It's not something I remember (although I may have simply missed it at the time).
Regarding the "large on-die cache", that's exactly how TBR already works (the big increase in L2 cache of Maxwell over Kepler was a clue for this, even before anyone figured out they moved to TBR). A larger cache would allow the GPU to use larger tiles, which should improve efficiency somewhat, but a step-change in performance would only occur once the cache is large enough to hold the entire framebuffer, which would likely take up almost all of the die area of the chip pictured.
There is still the difference in efficiency between the more modern Maxwell architecture and the older R7000 (I think) architecture in PS4/XB1, which should mean Maxwell flops perform better than R7000 flops. Maybe not by 40% or 30%, but there should be some gains purely from the more modern architecture.
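As a what-if on how much an architectural efficiency bonus would actually move the needle, here's a quick sketch; the core counts and clocks are the commonly reported figures (768MHz docked for Switch), and the efficiency multipliers are purely hypothetical knobs:

```python
# Paper-flops comparison with a hypothetical "architectural efficiency"
# multiplier applied to the newer GPU. Clock/core figures are the commonly
# reported ones, not confirmed specs.

def gflops(cores, clock_mhz):
    return cores * clock_mhz * 2 / 1000   # 2 FP32 ops per core per clock (FMA)

switch_docked = gflops(256, 768)    # reported Tegra X1 docked GPU clock
ps4 = gflops(1152, 800)
xb1 = gflops(768, 853)

for bonus in (1.0, 1.15, 1.3, 1.4):
    effective = switch_docked * bonus
    print(f"+{(bonus - 1) * 100:3.0f}% efficiency -> {effective:6.0f} GFLOPS "
          f"({effective / xb1:.0%} of XB1, {effective / ps4:.0%} of PS4)")
```

Even with a generous multiplier the raw gap stays large, so any real-world narrowing would come from a mix of architecture, API overhead and per-game scaling rather than the multiplier alone.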
The other thing to mention is, if the NVN API is as good as everyone is saying it is, couldn't that be a similar advantage to the one Nvidia hardware has in a PC environment? Again, maybe not to the same extent, but still potentially an effective flop advantage. I think a game like Snake Pass (which struggles to reach 60fps on PS4) running at 1080p/30fps locked on Switch shows that the Switch will surely be punching above its weight when it comes to pure raw numbers.
I would guess that Snake Pass has some graphical effects toned down or removed going from PS4 to Switch, and I also wouldn't read too much into pre-release frame rates. It'll be interesting to see a proper comparison of the two versions once the game releases, though; hopefully Digital Foundry will do an article on it.