(the following assumes that the reported numbers are correct regarding the PS5's I/O capabilities)
The PS5's I/O capabilities massively outperform other available solutions. But not because there is any secret sauce. It is because there is a dedicated hardware path from the SSD to the VRAM and GPU caches that can operate without CPU involvement. That is it.
I am certain similar solutions are being discussed in the PC space, but because of the number of companies involved - which also need to agree on standards - it will take time.
The example you bring up is about compression. That is only one piece of the I/O puzzle, and right now it is not even the key piece slowing the PC down - the key piece is that the CPU still controls the PCIe bus. Any bit that ends up in the hardware domain of your GPU needs to pass through your CPU under the umbrella of some sort of API - that adds latency and throughput limitations. The former is the larger problem of the two.
Software-based (CPU-driven) I/O will of course continue to evolve, but without a dedicated, CPU-independent hardware path there are clear limits to what can be achieved.
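To make the point concrete, here's a minimal sketch of that conventional, software-based loading path, assuming a zlib-compressed asset purely for illustration; upload_to_vram() is a hypothetical stand-in for whatever staging-buffer copy your graphics API actually performs, not a real call:

```python
import zlib


def upload_to_vram(data: bytes) -> None:
    """Hypothetical stand-in for the API-mediated PCIe copy into VRAM.
    A dedicated hardware path would instead let the SSD controller DMA
    straight into GPU-visible memory without the CPU touching the data."""
    pass


def load_texture_cpu_path(path: str) -> None:
    # 1. The kernel/CPU reads the compressed asset from the SSD into system RAM.
    with open(path, "rb") as f:
        compressed = f.read()

    # 2. The CPU spends cycles decompressing it; the data is still sitting in system RAM.
    texels = zlib.decompress(compressed)

    # 3. Only now does the data cross the PCIe bus into VRAM, and that transfer is
    #    still scheduled by the CPU through a graphics API, which is where the extra
    #    latency and the throughput ceiling come from.
    upload_to_vram(texels)
```

Every byte passes through CPU-controlled memory and an API call before it reaches the GPU, which is exactly the part a dedicated hardware path removes.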
Elog, we've already discussed this multiple times, and it's been made clear to you multiple times where you're wrong as well, so I'm honestly not interested in doing the same dance again. There's plenty of evidence that what you hold onto, putting one solution on a pedestal far above the others, is fundamentally false, but you keep telling yourself otherwise.
As well, the point of this thread was never about discussing I/O solutions; it was just something I sought to elucidate on from RaySoft's post. If you were not so determined (wrongly, imo) to value one particular solution head-and-shoulders above others in this department for dubious reasons, you could see that almost every concern you bring up regarding I/O bottlenecks already has solutions, in some form, in practice in other areas of the computing/data markets. These aren't amateur companies just because their name isn't Sony.
Agree. I'm actually very doubtful there will be one. Many assume it just because of last gen without looking at the reasons why. Before that, there was no mid-gen refresh.
Still personally think there'll be some kind of mid-gen refresh, but it won't be focused on raw power. Doesn't need to be IMHO.
Great Hair mentioned something about the 3090 seemingly struggling to do 4K60 with Ultra settings and RT on some titles. Dunno how true that is; I haven't watched any 3090 benchmark tests.
But even if that's the case, I don't think you need more raw power to do that stuff consistently. If raw power were all that mattered, we'd be using 100 TF Fermi cards today. Or, as another example, it's like a top-of-the-line computer from 2008 struggling to play a 4K60 YouTube video while some cheap modern tablet has no problem with it.
Performance gains going forward, IMHO, are going to come from a lot of things aside from raw power increases or node shrinks.
They've been doing this for years against high-end Nvidia hardware. They somehow think the "next" iteration is going to be that monster they've all been wishing for, instead of seeing realistically that 1) AMD is not at the forefront of tech like Nvidia, 2) the cost of owning a console is rising to the brink of unaffordability, 3) tech takes LOTS of time, and 4) consoles will never really "lead" in any advances in tech (i.e. it will most likely have already been iterated upon through some other means). The sooner we all come to grips with how this stuff pans out by taking the last two generations (PS4/PS5) into account and seeing the outcome there, the better off we'll be at having a reasonable conversation about future hardware. As it is now, it's not even worth entertaining the conversation, because the wishlist is way out in left field like most of the Speculation thread.
Yeah, when you look at the entirety of the gaming market in any given period, consoles were never at the front of the technological pack. If it wasn't the PC beating them to the punch, it was a microcomputer (the Amiga, for example). If it wasn't a microcomputer, it was any number of highly advanced arcade machines of the time.
Even when we look at stuff like the SSD I/O for next-gen, this has been done for a very long time in data center markets, through technological advancements like data processing units (DPUs), which in principle do a lot of the same things the SSD I/O in the next-gen consoles will be doing, but applied to data management over the network.
A PS5 Pro is already in the making as a 72CU chiplet design, probably on 5nm or 3nm. XSX? Doesn't seem clear to me, as it's not following RDNA2's roadmap, but anything can be fabricated/customized with the right amount of money.
I definitely think chiplets will be involved in a PS5 Pro, but I don't think it'll be 72 CUs.
5nm seems like a lock for any mid-gen refreshes.
Very good thread and enjoyable read. I agree with most of your assumptions, though my suspicion is that that whole conversation about “continuous boost” is smoke & mirrors, not altogether different from some of the claims made about the Cell back in 2005.
The reason why we got a mid-gen refresh last time had to do with the fact, I believe, that both the X360 and the PS3 stuck around for almost eight years (2005-2013), for reasons not worth getting into in this particular thread, so the following generation came out underpowered (1.3/1.8 TFLOPs) just as display technology began to shift. Not sure if we’ll see the same movement from the major players this time around.
Personally, I think there's at least some substance to the "continuous boost" claims, though I honestly wonder whether it would be worth keeping the clocks at peak or near-peak the vast majority of the time, knowing that games generally won't need that much except for occasional heavy bursts of calculation. It sounds like a waste of electricity during the stretches where max clocks aren't needed.
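For a rough sense of why that matters, here's a back-of-the-envelope sketch using the textbook dynamic-power relation P ≈ C·V²·f; the scale factors are made-up illustrative numbers, not PS5 measurements:

```python
def relative_dynamic_power(freq_scale: float, volt_scale: float) -> float:
    """Dynamic power relative to peak, assuming P ~ C * V^2 * f with C fixed."""
    return freq_scale * volt_scale ** 2


# Illustrative example: dropping to 90% clocks and letting voltage fall to ~95%
# (a rough DVFS-style assumption) already cuts dynamic power noticeably.
print(relative_dynamic_power(0.90, 0.95))  # ~0.81, i.e. about a 19% saving
```

So even a modest downclock during lighter scenes saves a disproportionate amount of power, which is why holding peak clocks when they aren't needed looks wasteful.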
I agree that the lack of any major shift in display tech going mainstream limits the appeal of mid-gen refreshes on that note. No one should be counting on 8K reaching mass-market adoption by 2023 or 2024. Maybe curved displays? Those are still niche even on mobile devices, and it'd be insurmountably harder for large televisions to adopt them; plus, I just wonder what that would even do to fuel mid-gen refreshes.
The mid-gen refreshes, IMO, will probably be more like the kind of production-cost-reduction, power-reduction revisions we saw in older gens (Genesis Model 2, PSOne, PS2 Slim, etc.), with some good performance gains but nothing massive. Design efficiency will go up in a good number of areas, and maybe MSRP reductions will be possible by that point.
So I'm really itching to make time for the next part 'cuz I've got some neat ideas for mid-gen refreshes (even if none of it ends up happening xD).
No spec yet, but here are two links with possible early info.
Found some as-yet-uncovered GDDR7 intellectual property previews / hints, so an exclusive for [H]. There's apparently a draft GDDR7 JEDEC specification somewhere! "GDDR7 Memory Model provides a smart way to verify the GDDR7 component of an SoC or an ASIC. The SmartDV's..."
hardforum.com
To add my own input: DDR3 hit consumers in '07, with GDDR5 hitting consumers in '08. DDR4 came in 2014, with GDDR6 following in 2018. With LPDDR5 already in phones and Intel targeting it for next year, 2023-2024 seems like a very possible target for GDDR7, but for that to hold true we would need to see an official specification soon.
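A quick sketch of that timeline reasoning (the years are the ones cited above; treating "next year" as roughly 2022 is my own assumption):

```python
# Gap between each DDR generation reaching consumers and its GDDR counterpart.
ddr_to_gddr = {
    "DDR3 -> GDDR5": (2007, 2008),
    "DDR4 -> GDDR6": (2014, 2018),
}
for pair, (ddr_year, gddr_year) in ddr_to_gddr.items():
    print(f"{pair}: {gddr_year - ddr_year} year gap")

# Assumption: LPDDR5/DDR5 platforms land around 2022 ("next year" in the post).
# A 1-4 year gap would then put GDDR7 somewhere in 2023-2026, so 2023-2024 is
# the optimistic end of that window and hinges on a spec appearing soon.
base_year = 2022
print(f"Projected GDDR7 window: {base_year + 1}-{base_year + 4}")
```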
Thanks dude, much appreciated! Also, that timeline for GDDR7 sounds pretty plausible; I actually had some specs for mid-gen refreshes written up but didn't consider GDDR7, so I might have to make some changes to account for that now xD