Finem Orientis
Member
"Both first and third-party PlayStation 5 titles that offer new gaming experiences are in the works, and we'll be introducing a strong lineup soon." President and CEO Kenichiro Yoshida
Ohhhhhh, it is today... why did somebody write the 20th? Link to watch the Sony corporate strategy meeting is here: https://www.sony.net/SonyInfo/IR/library/presen/strategy/
It should've begun already but it hasn't
It happens tomorrow.
It's already over, they're doing Q&A right now.
Great post - and I absolutely made a boatload of assumptions, pure speculation on my part: half the fun, for me.
Anyway, design philosophies are certainly at play. For example, it makes sense that Microsoft, a fundamentally software-focused company, would put more emphasis on software to solve problems, because that's what they've built their empire on. Sony has been in the hardware game for eons, so it makes sense they'd also play to their strengths. That's one component of it, no question. Cerny is bringing a lot to the table at Sony as well, so we're seeing the continued value of his expertise. I've said it before: Cerny is respected for a reason, so I'm not downplaying what he and his team have done.
For me, engineering always comes down to this: you don't do something unless you need to. Everything is done to achieve a goal. If Sony offloaded their I/O onto bespoke hardware, it's because they needed to in order to achieve a goal. That reason could be anything: latency targets, read speeds, heat dissipation, component availability, or because their protocols were bottlenecking the I/O pipe. The short of it is that Sony needed to offload it. Microsoft, on the other hand, didn't do this - so there must also be a reason they didn't. Like Sony's reason, Microsoft's could be anything: perhaps they couldn't afford to, maybe they didn't have room, they couldn't accommodate the additional heat, or - potentially - they had no need to. It's the balancing act of hardware engineering.
My insight is this: if Sony's I/O didn't create a significant load on the CPU, they wouldn't have put it on its own hardware and Cerny wouldn't have taken the time to explain that doing this frees up the CPU. Likewise, if Microsoft's I/O didn't create a significant load on the CPU, Microsoft wouldn't have explained that their new protocols freed up the CPU, negating the need for extra CPU power to handle the I/O. Taking both companies at face value, Sony indirectly said they needed to free up the CPU, so they added extra hardware. Microsoft directly said they needed to free up the CPU, and so wrote new software. I feel like these explanations and assertions from both companies imply that they both ran into bottlenecks, and solved the issue in different ways - playing to their strengths.
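A quick way to feel why both companies bothered: software decompression at SSD speeds is genuinely expensive. Here's a single-core sketch using plain zlib (just an illustration - the consoles use Kraken and BCPACK/zlib hardware respectively, and real game data compresses differently than this synthetic buffer):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

int main(void) {
    const uLong RAW = 64UL * 1024 * 1024;      /* 64 MiB of sample data */
    unsigned char *src = malloc(RAW);
    for (uLong i = 0; i < RAW; i++)
        src[i] = (unsigned char)((i * 31) % 251);

    /* Deflate once so we have something to inflate repeatedly. */
    uLongf clen = compressBound(RAW);
    unsigned char *comp = malloc(clen);
    compress(comp, &clen, src, RAW);

    /* Time repeated single-threaded inflation, as a game would do
       if it decompressed streamed assets on one CPU core. */
    unsigned char *out = malloc(RAW);
    const int reps = 8;
    clock_t t0 = clock();
    for (int r = 0; r < reps; r++) {
        uLongf olen = RAW;
        uncompress(out, &olen, comp, clen);
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    double gbps = (double)RAW * reps / secs / 1e9;

    printf("single-core inflate: %.2f GB/s of raw output\n", gbps);
    printf("cores needed to sustain 4.8 GB/s: %.1f\n", 4.8 / gbps);
    free(src); free(comp); free(out);
    return 0;
}
```

Run that on a desktop CPU and the "cores needed" number lands well above one, which is exactly why dedicating silicon (Sony) or rewriting the I/O stack plus hardware decompression (Microsoft) both count as "freeing up the CPU".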
Microsoft is a company that develops software. I don't see Microsoft collaborating with AMD to develop hardware, much less the software for that new hardware, if they haven't even bothered to properly customize DX12U for the XSX and have opted to use the generic version designed for PC. That doesn't mean it's bad, but it certainly doesn't look good for a software company to give the impression that it hasn't made much effort to create decent tools that really measure up to the powerful machine they've built. That will make it hard to squeeze out all of its performance easily, especially at the beginning. That is the only weak point I see for Microsoft: power without sufficient short-term control.
Look for the message in which I said that the PS5 GPU was more powerful in theoretical numbers than the XSX. It is not me who stubbornly insists on comparing those numbers. I have always said that the PS5 is more efficient. I thought so in January, and at this point I still think it will remain so when the consoles are on the market.
"We're very happy to be partnering with Microsoft on the development of AI Solutions, their use of RDNA3 during development over our lowly pathetic RDNA2 really helped to make our AI sensing better."Well, there it is. The mysterious MS-Sony partnership finally revealed.
I'm not sure what I'm responding to, frankly. I mean, DX12U is a multi-platform API; if it were fully customised for Series X, then it wouldn't be multi-platform any more. But that's kind of the point. The name "Xbox" is a contraction of "DirectX Box", which was the informal name used to describe the console while it was in development. Xboxes are literally built to run DirectX. This isn't unique to this generation - the Xbox 360, for example, used DirectX 9c, though I can't recall which version the OG Xbox used. It's pretty common knowledge that the Xbox One has used different versions of DirectX over its life, with DX12 being the most recent. So... yeah?
Read This:
He then clarifies further in later posts.
Exactly - and if people ACTUALLY paid attention during the presentation: FFS, there's a goddamn co-processor purely dedicated to their new ID-based storage access API. I remember Moore's Law is Dead saying a while ago that MS was originally planning to go with a 1.2-1.8GB/s drive without a custom decompressor for the Series X UNTIL they found out what Sony was doing with their SSD tech. If he and his sources are anything to go by and that's true, just let that sink in for a moment before rambling on about how Microsoft's "SOFTWARE EXPERIENCE" is gonna help them close the gap with Sony's SSD tech.
Saying it's leagues ahead makes sense and lines up with other rumours and apparent leaks, like the one the other day with the 12 vs 10 boxes dev opinion and the "2.5-5x faster" claim. For me, it's got to be in I/O.
I clicked on that Aaron Greenberg and colteastwood Twitter link and was having a look through some comments - I know, why do that to myself, but it's amusing sometimes - and I came across this.
See the response...
It was from a random guy, but is it true? Is this just what that '100GB instantly' is? Is it just worded to sound like something amazing, when it's really just the space on the SSD they can draw from at that SSD's speed?
Basically just marketing speak, with a lot of people falling for it and using it to level the SSD arguments?
Yeah, they don't have a hidden superfast memory pool, it's just paging 100gb from the ssd at normal transfer speeds
Did you expect them to have 100GB of RAM or VRAM?
Yes, it is what it is.
They still have to go through their 2.4GB/s (4.8GB/s compressed) throughput.
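For rough scale (using only the publicly quoted figures, so treat these as ballpark numbers): at 2.4GB/s raw, filling the XSX's 13.5GB of game-addressable RAM takes about 13.5 / 2.4 ≈ 5.6 seconds, or roughly 2.8 seconds at the quoted 4.8GB/s effective compressed rate - and paging through the entire 100GB install would take on the order of 100 / 4.8 ≈ 21 seconds. "Instantly" is doing a lot of work in that marketing line.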
You know, that's why MS is so good at marketing. They blurt out some random stuff and most Xbox fans don't even understand it. They make up their own stories without any knowledge of how it really works.
It's a bit sad to watch.
The real question is why MS lets this happen.
Honestly, it's weird that even official PR engages with those fanboys and keeps this going with such shitty wording.
Why not?
Seems like most companies like to take advantage of gullible people.
I mean you can see it everywhere all day.
That's actually a real problem in modern society.
People don't even try to understand what a company tells them; they just go "yay, sounds good, take my money".
Then, of course, they start to fight for the product they paid for.
Been there done that in my youth.
Quite glad I'm not so gullible and invested anymore.
Being sceptical about stuff is a healthy attitude - as long as you don't overdo it like the "Flat Earther", "Corona hoax", or "deep state/QAnon" kinds of people.
What's this AI partnership between Sony and MS? Anything to do with gaming, or is this search/virtual assistant/chatbot shite?
I thought they had an agreement where Sony would use MS Azure servers as the backend for PSN? What's happening with that, and how would it affect PSNow/Gaikai/Remote Play?
I understand that, but this isn't how MS will gain customers, especially those who feel a stronger connection to Sony or Nintendo IPs, IMO.
Yeah, they don't have a hidden superfast memory pool, it's just paging 100GB from the SSD at normal transfer speeds
Mind you, it is fast. The XSX has a pretty good SSD.
Block compression - whether named S3TC (from Savage 3D technology originally) and debranded as generic DXTn, or carrying the newer DX10 BC/BCPACK moniker - is hardly the topic. The important point is still that it is a lossy compressor, used to offset a storage-bandwidth deficiency relative to lossless textures.
It seems you don't quite appreciate the differences between lossy formats - say, comparing BC to JPEG - if you feel my speculation that BCPACK textures could result in flashing pixels when sampling hits an error is unfounded.
Unlike JPEG, S3TC/DXT compression is better at retaining hard-edge detail, because you can guide the block compressor to prioritize what it should retain. The downside is that discontinuity errors are then more likely in what it was prepared to lose, because the compressor doesn't treat the image as a complete signal the way JPEG does. JPEG loses precise detail at the expense of retaining the overall shape of the signal as best it can through the compression ratios, which wouldn't cause harsh discontinuity errors.
Obviously lossy is, well, lossy, so you can't retain 100% of the detail. But I have news for you: games have been using BC for ages, and if you look at large BC7-compressed textures at ~50%-60%, they look indistinguishable from the original texture (except for alpha). So as much as you like to FUD the XSX, I doubt even DF analysis videos could tell the difference, especially after all the layers of post-processing, filtering and reconstruction happen. I mean, it's so ironic that you seem so glad that UE5 is sub-4K and reconstructing to 4K because native doesn't matter anymore (in other words, the image gets new pixels not present in the original data, creating artifacts much worse than BCn compression), and at the same time you're worrying that XSX textures won't look good because of lossy compression.
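For anyone wondering what "block compression" actually means mechanically, here's a minimal decoder for the oldest of these formats, BC1/DXT1 - a sketch for illustration only (BC7 and BCPACK are more elaborate, but the block-local structure is the same). Each 4x4 texel block compresses independently into two RGB565 endpoints plus 2-bit palette indices, which is why any quality loss shows up as local per-block discontinuities rather than JPEG-style global smoothing:

```c
#include <stdint.h>

/* Expand an RGB565 colour to 8-bit-per-channel RGB. */
static void rgb565_to_rgb888(uint16_t c, uint8_t out[3]) {
    out[0] = (uint8_t)(((c >> 11) & 0x1F) * 255 / 31);
    out[1] = (uint8_t)(((c >> 5)  & 0x3F) * 255 / 63);
    out[2] = (uint8_t)(( c        & 0x1F) * 255 / 31);
}

/* Decode one 8-byte BC1 block into a 4x4 tile of RGB texels. */
void bc1_decode_block(const uint8_t blk[8], uint8_t rgb[16][3]) {
    uint16_t c0 = (uint16_t)(blk[0] | (blk[1] << 8));
    uint16_t c1 = (uint16_t)(blk[2] | (blk[3] << 8));
    uint8_t pal[4][3];
    rgb565_to_rgb888(c0, pal[0]);
    rgb565_to_rgb888(c1, pal[1]);
    for (int ch = 0; ch < 3; ch++) {
        if (c0 > c1) {   /* 4-colour mode: two interpolated colours */
            pal[2][ch] = (uint8_t)((2 * pal[0][ch] + pal[1][ch]) / 3);
            pal[3][ch] = (uint8_t)((pal[0][ch] + 2 * pal[1][ch]) / 3);
        } else {         /* 3-colour mode: midpoint + black/transparent */
            pal[2][ch] = (uint8_t)((pal[0][ch] + pal[1][ch]) / 2);
            pal[3][ch] = 0;
        }
    }
    /* 32 bits of indices: 2 bits per texel, LSB-first, row by row. */
    uint32_t idx = (uint32_t)blk[4] | ((uint32_t)blk[5] << 8) |
                   ((uint32_t)blk[6] << 16) | ((uint32_t)blk[7] << 24);
    for (int i = 0; i < 16; i++) {
        int sel = (idx >> (2 * i)) & 3;
        rgb[i][0] = pal[sel][0];
        rgb[i][1] = pal[sel][1];
        rgb[i][2] = pal[sel][2];
    }
}
```

Note the fixed ratio: 16 texels collapse to 8 bytes no matter the content, and a bad endpoint pair can only ever corrupt its own 4x4 block - which is both why the compressor can be guided per block and why its errors show up as hard local discontinuities.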
As for the discussion about pop-in: you are going to have to work much harder to make that point.
Problem 1: Epic said the data for each frame gets crunched anew every frame - obviously you can't fit the billions of polygons and the 8K texture source data in memory, and only what gets rendered goes into the 16GB - so classic pop-in as you describe it doesn't exist in the scene, unless you are claiming Epic's engineers are wrong.
Problem 2: Pop-in is visualized by seeing higher-quality LoD/mipmap levels arrive over multiple frames. The video source is lossy-compressed through h.264/h.265, so there is no way of telling whether it was video encoding or engine errors, and the handful of native stills won't cover 3 consecutive frames at this point.
Problem 3: The scene is an ancient building in the desert, and IMHO that picture actually just looks correct. The parts of the building have been artistically aged differently to make it look organic - like the wind and sand have damaged each part differently - as it should be, rather than some perfect shiny render of the same geometry repeated without any flaws.
"Both first and third-party PlayStation 5 titles that offer new gaming experiences are in the works, and we'll be introducing a strong lineup SOON" President and CEO Kenichiro Yoshida
"We're very happy to be partnering with Microsoft on the development of AI Solutions, their use of RDNA3 during development over our lowly pathetic RDNA2 really helped to make our AI sensing better."
Timdog, probably.
"Both first and third-party PS5 titles that offer new gaming experiences are in the works, and we'll be introducing a strong lineup soon." President and CEO Kenichiro Yoshida
You are missing the point. These textures are being used by software renderers in UE5 - if the historical talks by Carmack/Sweeney about going beyond polygon geometry are to be believed, discontinuity errors on textures mapped onto micro-polygons could be a real problem. If you can't appreciate the point I've tried to make - one that others have made too - then I'm happy to leave you with your opinion unchanged.
It does, but it's not classic pop-in; it's more like tessellation pop-in. The model you see in frame exists on screen and in memory while it is rendered, but when you want to render the next frame you don't discard it and start from scratch - you just stream in the new data, the delta. If you miss your frame target, what you get is the same model with fewer triangles, just like you can see every time there is a camera cut in the demo.
It wasn't video encoding, come on. Look at the video - you can see the data streaming and filling in. It's not textures, it's polygons.
No, just no. One frame it's low-poly; a few frames later the polygons fill in. It's the same statue in both images, just a few frames apart.
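If it helps, here's the gist of that "stream the delta, fall back to what's resident" behaviour as a toy sketch - entirely hypothetical names and logic, nothing here is Nanite's or Epic's actual code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-mesh streaming state: the renderer always draws
   whatever level of detail is already resident, and upgrades it
   whenever the next delta arrives in time. */
typedef struct {
    int resident_lod;   /* detail level currently in memory (0 = coarsest) */
    int target_lod;     /* detail level wanted for the next frame */
} MeshStream;

/* Stand-in for the async I/O system: did the delta for the next LOD
   finish loading before the frame deadline? */
static bool delta_arrived(const MeshStream *m, double frame_budget_ms) {
    (void)m;
    return frame_budget_ms > 8.0;  /* placeholder heuristic for the sketch */
}

void stream_and_draw(MeshStream *m, double frame_budget_ms) {
    if (m->resident_lod < m->target_lod && delta_arrived(m, frame_budget_ms))
        m->resident_lod++;  /* apply the delta: add triangles, don't rebuild */
    /* Missed the deadline? No pop to a placeholder - we just draw the
       same model with fewer triangles for one more frame. */
    printf("drawing LOD %d (want %d)\n", m->resident_lod, m->target_lod);
}

int main(void) {
    MeshStream statue = { .resident_lod = 2, .target_lod = 5 };
    stream_and_draw(&statue, 16.6); /* delta arrives: LOD climbs to 3 */
    stream_and_draw(&statue, 4.0);  /* miss: stays at 3, no placeholder pop */
    return 0;
}
```

The worst case under this scheme is exactly what the screenshots show: the same statue, briefly at a lower triangle count, until the delta lands.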
MS offers 100GB of instant data - they just mean bypassing the file system and addressing SSD storage directly, like a slow portion of RAM.
Sony offers 825GB of instant data using MS's definition - at least according to what has been released so far.
The 100GB on XSX is a limitation compared to PS5. Still, 100GB is a fair chunk of memory, and MS have worked on things like compression, so I don't think it will actually impact anything.
EDIT - Also worth mentioning that both XSX and PS5 have a flash file system, so data can be looked up the same way one would with an HDD. The direct addressing schemes above are just faster in some scenarios.
The 100GB number is just referring to the game installation. There is no artificially imposed limit. The entire game's data is quickly accessible, without the seek times, by virtue of being installed on an SSD.
The 100GB on XSX is a limitation compared to PS5.
As I've written before, it's borderline lies coming out of MS on this one, and people with little technical knowledge fall for it.
They are actually trying to make people believe this technology is somehow new and revolutionary. It's not; mapping disk data into a virtual address space has been around for many years.
mmap - Wikipedia (en.wikipedia.org)
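Since mmap came up: this is all it takes to treat a file on disk as if it were (slow) RAM on any POSIX system - a minimal sketch, with the asset file obviously just a placeholder:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <asset-file>\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the whole file into our address space. Nothing is read yet;
       the kernel just sets up page-table entries. */
    unsigned char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching a byte page-faults, and the OS pulls that page in from
       disk on demand - "instant access", at the drive's speed. */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sum += base[i];

    printf("touched %lld bytes on demand, checksum %lu\n",
           (long long)st.st_size, sum);
    munmap(base, st.st_size);
    close(fd);
    return 0;
}
```

The console versions presumably add fixed-latency guarantees and hardware decompression on the fault path, but the addressing idea itself is decades old, which was the point.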
It's a new feature of DX12U and the XSX; they communicate about it.
Is it a problem when Sony talks about geometry shaders and RT when Turing has been doing them for more than a year?
If geometry shaders and realtime RT were 20-year-old technology re-implemented and given a fancy name, then yes, it would be disingenuous too. And I'm pretty sure it's not even new on Xbox; any OS with virtual memory supports this and has for many years. PS4 and PS5 certainly support it as well.
This looks real...
Depends what you mean by that, since there will be instances where developers throw parity aside and take advantage of each platform's strengths.
Or MS could pay third parties to litter games with raytraced mirrors. DF will have a field day comparing that scenario.
Dual sense in 2077, probably.
I just don't know if you're serious. But no, it's not. The creator of the video (as far as I remember) even wrote that it was his own concept of the DualSense.
To be honest: it looks cool, no question. The possibilities offered by the extended functions would also be exciting.
Sorry, it's the other way around: you are the one who needs to prove that high-quality lossy compression like BC7 will cause a real problem with micro-polygons. Considering BC is one of the most popular ways to compress textures in games, and one of the largest console makers decided to include a hardware unit dedicated to exactly that, I'm pretty sure the burden of proof is on you.
You're going to have to prove your pop-in isn't video encoding - which obviously you can't, because macro-blocking issues look like mipmap pop-in with a moving camera. Throw shade on the video if you like; it doesn't really matter, because the capabilities of UE5 look great on the PS5, and if you are splitting hairs about that video, then I think you might need to brace yourself for other trade-offs coming to your preferred XSX version, which I expect to have more noticeable compromises because of the weaker I/O system in the XSX.
OK? Gonna chime in on this. Even if this happened, you cannot conclude "the SSD can't keep up", because there are many other places we can all be sure are much more demanding and much faster, and yet they didn't have problems at all - like the fly-through at the end. That part is much more demanding, so inherently we should see pop-in and such "streaming issues" and "SSDs can't keep up"... but we didn't see any.
It's easy, there you go.
Before - when not enough polygons had streamed in: [screenshot]
After - when enough polygons had streamed in: [screenshot]
If you think that's streaming encoding, you don't know what streaming-encoding artifacts look like. Calling me out for "splitting hairs" on the PS5 is losing touch with the actual argument we are having. My point was never against the PS5; it was actually the other way around: this system streams in polygons and textures for the next frame, and if it fails, all you see is a lower-quality asset for a fraction of a second - so all of these problems will be worse on the XSX. But for some reason, you've decided to go with a strawman, that somehow I'm attacking the PS5 for having some form of extremely gentle pop-in during 0.01% of the demo. That is not the point; the point was to show you what happens when the SSD can't keep up.
There isn't a more demanding scene - this is after a camera cut. A 100% fresh frame is the hardest work the SSD can face in a system that streams for the next frame. It's easier to stream something flying at a million km/h than a camera cut in a next-frame streaming system.
So we can conclude that what you're saying is kinda wrong, and surely splitting hairs. You realize you are looking at a tech demo on an old PS5 dev kit, not the latest one, and even if that happened, there were way more stressful points in that gameplay demo that were much harder on the SSD, yet we didn't see ANY pop-in or anything like it.
That completely renders your point IRRELEVANT.
Revealing more geometry onto the screen while moving at very high speed is way more demanding than a cut to a new frame. There's no other way around this, my guy.
I see you're relying on feelings more than information to draw conclusions, so let's stop it here.
I rely on technical knowledge of software, of which I have been a developer for many years, to write about this. No, I have no particular feelings about this, except when borderline lies are used to convince less technical people, like yourself, about some technology.
Yup, let them dream.
Tim Sweeney Explains Exactly Why the PS5's SSD and I/O Architecture Is Way More Efficient Than PC's (wccftech.com)
It should be noted that Tim Sweeney has discussed the PlayStation 5 since that has been the company's primary focus, as Epic has been working for 'quite a long time' alongside Sony on that Unreal Engine 5 demo. However, the Xbox Series X also features its own Velocity Architecture which is designed to 'radically improve asset streaming' and to 'effectively multiply available memory' thanks to a custom NVMe SSD (rated at 2.4 GB/s with raw data), a dedicated hardware decompression block, the new Sampler Feedback Streaming technology and last but not least, the new DirectStorage API.
The latter component is particularly relevant because Microsoft already confirmed plans to bring it to Windows PC, too. According to Microsoft, DirectStorage can significantly reduce CPU overhead for I/O operations (such as those happening in the background to load the next parts of the world) from several cores to a small fraction of a single core. Needless to say, this could severely diminish the PC I/O issues mentioned above by Tim Sweeney.
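To make that "several cores down to a fraction of one" claim concrete: the win comes from batching many small asset reads into one submission instead of paying per-request syscall and completion overhead. DirectStorage itself isn't publicly available to poke at yet, but Linux's io_uring exposes the same batched-submission idea, so here's a hedged sketch of the pattern (file name, chunk size and batch size are arbitrary; build with -luring):

```c
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <unistd.h>

#define BATCH 64
#define CHUNK (64 * 1024)

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct io_uring ring;
    io_uring_queue_init(BATCH, &ring, 0);

    /* Queue 64 reads without issuing a single syscall yet. */
    static char bufs[BATCH][CHUNK];
    for (int i = 0; i < BATCH; i++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, bufs[i], CHUNK, (off_t)i * CHUNK);
    }
    io_uring_submit(&ring);   /* ONE syscall submits all 64 reads */

    /* Reap completions; the kernel did the I/O asynchronously. */
    for (int i = 0; i < BATCH; i++) {
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        io_uring_cqe_seen(&ring, cqe);
    }
    printf("read %d chunks with a single submission\n", BATCH);
    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}
```

Whether DirectStorage ends up looking anything like this on Windows is anyone's guess, but the principle - amortizing kernel-crossing cost across thousands of tiny streamed reads - is the part that frees the CPU cores.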
Let them Dream?