Google's Stadia has a GPU from AMD and a CPU from Intel because the persistent memory Google needs is exclusive to Intel CPUs. AMD has better CPUs now, so why did Google still go with Intel? Yep, that Optane.
The ReRAM that the Sony engineer presented, and the ReRAM that Sony has been touting, has always been an SSD variant. It's connected via NVMe 5.0, with bandwidth up to 25.6 GB/s and 1.4 microseconds of latency in the 8-chip configuration. With that speed, a DIMM variant is not necessary, especially if it would drive the cost up.
Let me remind you that the Intel Optane SSD is $1.2/GB, and that is with retail mark-up and Intel's profit. The Sony ReRAM that Amigo Tsushui said will be offered as a "second source" is also an SSD variant. That doesn't mean Sony will not have a DIMM version, but the ReRAM that Sony has presented, the one they say will release as an alternative to Intel Optane, is an SSD variant. Therefore it is insanity to expect Sony ReRAM to cost more than $1.2/GB; otherwise there's no point producing it, Optane will eat it for breakfast.
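Just to put that $1.2/GB in perspective, here's a quick back-of-the-envelope check (the 256 GB and 512 GB capacities are hypothetical examples I picked, not anything Sony or Intel has announced):

```python
# Quick cost check on the $1.2/GB Optane SSD figure quoted above.
# The capacities are hypothetical examples; the point is only that ReRAM
# would have to land below this price to be worth producing at all.
OPTANE_SSD_PRICE_PER_GB = 1.2  # USD, retail figure quoted above

for capacity_gb in (256, 512):
    print(f"{capacity_gb} GB at ${OPTANE_SSD_PRICE_PER_GB}/GB = "
          f"${capacity_gb * OPTANE_SSD_PRICE_PER_GB:.0f}")
# 256 GB -> $307, 512 GB -> $614
```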
Cerny also said SSD, not server storage-class memory. Sony ReRAM is an SSD version and the specs are already given: 25.6 GB/s and 1.4 microseconds latency.
Do you honestly think that Sony will design the PS5 around Intel Optane technology? Is there even any evidence of this? I'll tell you what though, there is evidence (not proof) that ReRAM will be cheaper than that $1.2/GB Intel Optane.
You can keep parroting that ReRAM will be expensive just because. Of course it's fair to say it will be expensive because it's a new technology. But if you're confronted with evidence (a professional analysis) showing you are wrong, then you cannot base your claim on assumption anymore. It's your turn to show evidence that your contention is correct. At least show something that's based on a professional analysis, because that's what I have been showing all this time. Your words do not weigh heavier than that chart. Hey, it's not made by a random internet poster.
It's fair to doubt whether this technology will be ready for the PS5. We only have Amigo Tsushui's word to hold on to that it will be available in 2020. Again, it's fair to doubt that, and the burden is on those who believe to prove that it will be ready in 2020. I don't have anything more to show as evidence. But it's not like the PS5 will release in the next two months or so. So while it's fair to doubt, it's not fair to dismiss it outright.
Sony can talk all they want, but until they start actually producing substantiated results, and soon, talk is all it's going to remain. Meanwhile we have 3D XPoint and Optane readily available, in real-world use cases, with real-world results, at mass scale. Me being a realist doesn't mean I'm trying to be a Debbie Downer; it just means I look at things realistically. Anyone who wants to use this to imply I'm a "fanboy" (not saying that has happened yet, but given the opening sentence I suspect some might) should calm down; even I was on Team ReRAM until I did some looking into where the tech currently is in terms of actual developments and results on the market. That's when I started looking at it with more reservation.
Many companies list targets and projections, but that doesn't mean much without results. Now, I'd be a fool to say Sony couldn't accomplish this with their ReRAM solution, but the clock is ticking insofar as getting it into the PS5 goes. If they miss that timing, that is a large advantage gone; even if they were to release optional drives shortly after, specifically designed for the PS5, it would not be a default devs could count on being there in every single system, meaning most would opt not to utilize it. Yes, they could have the OS essentially do all of that optimization for them if it's there, but then that means a larger footprint for the OS and more OS background tasks to run, meaning less processing left for game logic, etc. (I'm not implying it'd be a massive resource hog, mind, just that SOME resources would need to be used in such a case).
Again, I'm not dismissing it; I'm just looking at the realistic chances here. 3D XPoint is already here. It's already at scale. It already delivers results. And if MS by some chance is using it in their console (we really don't have much in the way of concrete specs even after that TGA reveal), then that scale of production increases by magnitudes. Intel may be the only ones ATM with products using it on the market, but they are not the only ones who manufacture it; Micron does as well. JUST in case ReRAM projections are missed for 2020, it is a secondary option Sony could go for, and it wouldn't require them to be locked to Intel.
Lastly, I gotta address the 25.6 GB/s speed you listed; they would not accomplish that over the current NVMe spec, and there is no NVMe 5.0. I think you meant to say PCIe 5.0, but that will not be featured in any next-gen console (or any consumer electronics, for that matter; 4.0 is only starting to see some increase in support even right now, and 3.0 is still the dominant spec on the market in both consumer devices and the data solutions that don't use other standards like SRIO).
In order to obtain 25.6 GB/s on even the highest NVMe spec (1.4a) over the fastest available PCIe connection (4.0), they would need 13 lanes dedicated to the ReRAM solution. That's not going to happen. At most they'll likely use 4x (possibly 5x or 6x) PCIe 4.0 lanes, giving them around 8 GB/s - 12 GB/s of bandwidth (I say "around" because it's not exactly 8 GB/s - 12 GB/s; encoding schemes take up some of that, though PCIe 3.0 and 4.0 use a 128b/130b encoding scheme, whereas PCIe 1.x and 2.x used 8b/10b encoding; for reference, SRIO uses 64b/67b encoding).
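For anyone who wants to check my math, here's the rough calculation (assuming 16 GT/s raw per PCIe 4.0 lane with 128b/130b encoding and ignoring protocol overhead, so these are best-case numbers):

```python
# Rough PCIe 4.0 math behind the lane counts above.
# Assumptions: 16 GT/s raw per lane (PCIe 4.0) and 128b/130b line encoding;
# link/protocol overhead on top of that is ignored.

PCIE4_RAW_GT_PER_LANE = 16.0          # GT/s per PCIe 4.0 lane
ENCODING_EFFICIENCY = 128.0 / 130.0   # 128b/130b line code

def pcie4_bandwidth_gb_s(lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a given PCIe 4.0 lane count."""
    return lanes * PCIE4_RAW_GT_PER_LANE * ENCODING_EFFICIENCY / 8.0

target = 25.6  # GB/s figure quoted for the 8-chip ReRAM configuration

# How many lanes the quoted figure would actually demand:
print(f"~{target / pcie4_bandwidth_gb_s(1):.1f} lanes for {target} GB/s")  # ~13.0 lanes

# What realistic lane counts would deliver instead:
for lanes in (4, 5, 6):
    print(f"x{lanes}: ~{pcie4_bandwidth_gb_s(lanes):.1f} GB/s")
# x4: ~7.9 GB/s, x5: ~9.8 GB/s, x6: ~11.8 GB/s
```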
Sony's only other option for 25.6 GB/s of bandwidth for ReRAM as an SSD solution would be SRIO 4.2, but even then they would need 8 (technically 9) lanes dedicated to the connection over NVMe 1.4. That would probably be pushing it.
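Rough sketch of the SRIO side too, assuming Gen4-class lanes at roughly 25 Gbaud with 64b/67b encoding (that per-lane rate is my assumption, not a published figure for any console design):

```python
# Same back-of-the-envelope estimate for a Serial RapidIO link.
# Assumption (mine): Gen4-class SRIO lanes at ~25 Gbaud with 64b/67b encoding.

SRIO_RAW_GBAUD_PER_LANE = 25.0
SRIO_ENCODING = 64.0 / 67.0

per_lane_gb_s = SRIO_RAW_GBAUD_PER_LANE * SRIO_ENCODING / 8.0  # ~3.0 GB/s per lane
print(f"~{25.6 / per_lane_gb_s:.1f} lanes for 25.6 GB/s")      # ~8.6, i.e. 9 lanes in practice
```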
Also, on the notion of a DIMM controller not being necessary simply due to the speed: this is not accurate. NVMe is designed to operate on flash and flash-like storage with the properties of flash, meaning it will only treat devices like NAND, including in terms of how data is read and written. The latter part is particularly important because NAND writes in pages and erases in blocks; it is not byte (or bit)-addressable for writes. It is also traditionally not designed with random access in mind, as random access on NAND is generally slower than sequential reads. NVMe, being a controller standard designed around NAND technology, would just treat the ReRAM like higher-speed NAND, and that loses the DRAM-like advantages that aren't speed-related.
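To make the block-vs-byte point concrete, here's a toy illustration (the 4 KB page size and the read-modify-write behavior are generic NAND-style assumptions, not anything about Sony's actual controller):

```python
# Toy illustration of why page/block-addressable writes (NAND-style) differ
# from byte-addressable writes (DRAM/persistent-memory style).
# The page size and behavior are generic assumptions for illustration only.

PAGE_SIZE = 4096  # bytes moved per NAND-style program operation (typical page size)

def nand_style_write(changed_bytes: int) -> int:
    """Bytes physically written when the device can only program whole pages:
    even a 1-byte change costs a read-modify-write of a full page."""
    pages_touched = -(-changed_bytes // PAGE_SIZE)  # ceiling division
    return max(pages_touched, 1) * PAGE_SIZE

def byte_addressable_write(changed_bytes: int) -> int:
    """Bytes physically written when the medium is byte-addressable."""
    return changed_bytes

for change in (1, 64, 4096):
    print(f"change {change:>4} B -> NAND-style {nand_style_write(change):>5} B, "
          f"byte-addressable {byte_addressable_write(change):>4} B")
# A 1-byte update still costs a full 4096-byte page behind a NAND-style
# controller; preserving byte-addressable writes is exactly the advantage
# a DIMM-style interface would keep.
```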
So a DIMM-style (DRAM-like) controller would still be preferable, even if enough lanes over PCIe with NVMe can net raw speeds in line with, say, DDR4 DRAM. That would also drive up the cost of the ReRAM, particularly the implementation of it, similar to how the persistent-memory (DIMM) variant of Optane costs more (roughly 2x more) than the SSD-oriented Optane solution.