Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.
I tend to agree that they will be fundamentally similar. It's just curious to see the difference in processing power dedicated to that, though. XSX uses 1/10th of a core while PS5 has a dedicated block that's equivalent to 1-2 cores. I guess Sony is anticipating handling magnitudes more I/O ops in next-gen games.
I don't think Microsoft has been at all negligent. They probably just envisaged flash memory as being about load times and less pop-in. Just like everyone else did before Sweeney unzipped his trousers.
Sony and Epic Games have been cooking this new paradigm up for a while, and it's probably no coincidence it wasn't publicly revealed until this close to release.

Sony have potentially played a blinder by partnering with pretty much the premier third-party engine like this. I wonder if Microsoft was also approached to work on future rendering technologies, and whether they'd have been as interested in investing the same resources as Sony have done.

I really think this big difference in I/O comes down to how both are viewing next gen. Sony probably believes they need insane I/O to push game design, while Microsoft only sees it as a way to reduce load times, so the XSX's I/O is good enough for them.

It should be interesting to see who is right in the end, but currently it seems like Sony made the better decision on I/O, based on that UE5 demo.
 
He seems to have reservations about feasibility on PC.

Give AMD time, a lot of the console IO customizations are going to become standard on PC.


Looks like he is actually saying the opposite: you can bypass the kernel using unbuffered I/O, but you still have to deal with GPU overhead.
Again, a lot of this is what DirectStorage fixes.

Also, we don't know if GPUDirect Storage is coming to RTX.
 
Last edited:
Well, let's look at the actual text, shall we?

"Enter Xbox Velocity Architecture, which features tight integration between hardware and software and is a revolutionary new architecture optimized for streaming of in game assets. This will unlock new capabilities that have never been seen before in console development, allowing 100 GB of game assets to be instantly accessible by the developer. "

Now "100gb of game assets [...] instantly accessible" is clearly suggesting there is something about that 100gb capacity that is significant.


The wording of "instantly" accessible suggests to me that they are referring to a different access paradigm for this 100 GB than for the rest of the SSD. Otherwise why not say 1 TB of game assets is instantly accessible, the same as Sony has?


But let's say for a moment you are right and the 100 GB refers to a game install - then this would imply there is a 100 GB install limit for games. That seems... rather small.


Considering the only number MS mentioned in reference to their SSD storage was this 100 GB capacity (they didn't mention access speeds or other capabilities, for example), it's rather strange that they were unable to convey what this 100 GB number refers to.

I'm confident they wouldn't refer to a 100 GB limit on game installs in a marketing piece; in fact, I'm confident that no firmware architect would allow such a limitation.

There is no mechanism by which 100 GB of flash storage becomes "instantly" accessible over a PCIe 4.0 bus when their own quoted speed is 2.4 GB/s sequential read.

It's marketing speak; it means the SSD is fast, so it's good for streaming.

If Sony had put their SSD, flash controller, I/O complex and cache scrubbers under a marketable term like "Tachyon Architecture", we'd be hearing a lot less about "PS5's SSD" and a lot less about Velocity Architecture.

Look at the actual details, quoted specs and number of elements, not how aggressive and fast the marketing term sounds.
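For scale, here's the arithmetic on that quoted figure (a rough sketch; the 2.4 GB/s raw and 4.8 GB/s compressed rates are the publicly quoted Series X specs, everything else is just illustration):

```python
# Time to actually stream the full "instantly accessible" 100 GB pool
# at the quoted Series X read speeds - hardly "instant" in either case.
package_gb = 100.0          # the "100 GB of game assets" figure
raw_gb_s = 2.4              # quoted raw sequential read
compressed_gb_s = 4.8       # quoted effective rate with compression

print(f"{package_gb / raw_gb_s:.1f} s at raw speed")       # ~41.7 s
print(f"{package_gb / compressed_gb_s:.1f} s compressed")  # ~20.8 s
```

So even in the best case, touching all 100 GB takes tens of seconds; "instantly accessible" has to mean something other than bulk transfer speed.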
 
Well, let's look at the actual text, shall we?

"Enter Xbox Velocity Architecture, which features tight integration between hardware and software and is a revolutionary new architecture optimized for streaming of in game assets. This will unlock new capabilities that have never been seen before in console development, allowing 100 GB of game assets to be instantly accessible by the developer. "

Now "100gb of game assets [...] instantly accessible" is clearly suggesting there is something about that 100gb capacity that is significant.


The wording of "instantly" accessible suggests to me that they are referring to a different access paradigm for this 100 GB than for the rest of the SSD. Otherwise why not say 1 TB of game assets is instantly accessible, the same as Sony has?


But let's say for a moment you are right and the 100 GB refers to a game install - then this would imply there is a 100 GB install limit for games. That seems... rather small.


Considering the only number MS mentioned in reference to their SSD storage was this 100 GB capacity (they didn't mention access speeds or other capabilities, for example), it's rather strange that they were unable to convey what this 100 GB number refers to.

I'm confident they wouldn't refer to a 100 GB limit on game installs in a marketing piece; in fact, I'm confident that no firmware architect would allow such a limitation.
It's a large round number meant to refer to any game's data files, intended to sound technical and impress people. Marketing involves a lot of that.

The only way that this makes any sense to me is that they are referring to the seek time advantage of SSD compared to HDD.
 
I don't think Microsoft has been at all negligent. They probably just envisaged flash memory as being about load times and less pop-in. Just like everyone else did before Sweeney unzipped his trousers.
Sony and Epic Games have been cooking this new paradigm up for a while, and it's probably no coincidence it wasn't publicly revealed until this close to release.

Sony have potentially played a blinder by partnering with pretty much the premier third-party engine like this. I wonder if Microsoft was also approached to work on future rendering technologies, and whether they'd have been as interested in investing the same resources as Sony have done.

Using a Minecraft path tracing tech demo, running at 1080p and hovering at 40 fps, might also tell us where MS was placing its chips?
 
Instead of having to use more bandwidth to the SSD for assets, the Sampler Feedback technique intelligently allows the GPU to grab assets when needed instead, which will utilize less memory in the long run. This doesn't mean that the PS5 won't have the advantage as far as SSD streaming goes, but it allows the XSX to not be that far behind. I think the SSD on the PS5 was Sony's brute-force attempt at great-looking games, and the 12 TF with the Velocity Architecture was Microsoft's brute-force attempt at great-looking games. I'm starting to think the differences will be negligible and there won't be a gigantic difference in the way games look, but I feel the XSX will have a slight advantage when it comes to additional effects, physics, and resolution, though it won't make or break anything. Marketing and games are where things will really matter.

Xbox will not have any additional effects or physics. The only difference will be in resolution or stable fps for 60 fps games, like this gen. I think GTA5 was the only exception, where in some parts the Xbox version had less detailed environments.
 
This is maybe your 17th time posting these same tweets


Kudos my man. Kudos
Because people keep repeating;

might very well be something that is RDNA2 and not a custom block specific to XSX.
So yeah...

The 2x-3x bandwidth multiplier being touted doesn't relate to an advantage of just-in-time streaming versus keeping assets resident in RAM, but to fine-grained control over how much of a texture needs to be streamed into RAM in the first instance, and intelligently fetching more as required. This isn't new tech, but offloading part of that logic to a hardware accelerator would help with latency (required for just-in-time streaming) and, to a lesser extent, CPU utilisation.
This is true. This is exactly why they advertise it as a 2x-3x increase in their I/O and thus SSD performance. Because if you require only a third of the RAM you would otherwise require, that also means you only have to read a third from the SSD of what you would otherwise have to read.

SFS isn't helping Microsoft stream in textures at an effective 2x-3x 4.8GB/s, and it's not even like more typical Sampler Feedback is some fixed speed metric anyway. In a lot of cases it won't be applicable, in others it will be much more marginal etc.
I'm not sure about Xbox One, but PS4 hasn't ever needed to pull in an entire texture file if it only needed some tiles from it.
The video linked here showed Gears of War struggling with texture streaming, though?
I think we're confusing two things here... We have;

1) Using tiles/partial textures to load a smaller amount of texture data from SSD into RAM.
2) Increasing loading efficiency by only loading what is required from storage to RAM.

With number 1, even when you are using partial textures, you can still be loading textures that you don't use.
With number 2, you are avoiding the loading of (partial) textures that you will not use.

Based on the explanation by Eurogamer, I think MS is doing the latter, not the former. Reposting;

As textures have ballooned in size to match 4K displays, efficiency in memory utilisation has got progressively worse - something Microsoft was able to confirm by building in special monitoring hardware into Xbox One X's Scorpio Engine SoC. "From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen. "So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

They are talking about allocated pages. Allocated means present in RAM. There are really two possibilities here... Either

A) The allocated pages mentioned for the Xbox One X are already based on the use of partially resident textures, and thus the 2x-3x increase in efficiency they mention is on top of that.
or
B) The allocated pages mentioned for the Xbox One X are not based on the use of partially resident textures, and thus the benefit of PRT also applies.

In either case, I still don't see how the 2x-3x multiplier can be discarded. I can still be wrong, though. In any case, it's indeed quite clear that SFS will have variable performance, rather than the 'fixed' hardware setup of the PS5.
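To make the claim concrete, here's a toy model of Goossen's numbers (my own illustrative sketch; only the 1/2-1/3 fractions come from the Eurogamer quote, the rest is made up):

```python
# If a game only ever touches a fraction of the texture pages it loads,
# then loading only the touched pages multiplies effective RAM and I/O
# by the reciprocal of that fraction.
def effective_multiplier(fraction_touched: float) -> float:
    """Effective memory/I/O multiplier from skipping untouched pages."""
    return 1.0 / fraction_touched

# "at best only one-half to one-third of their allocated pages"
print(effective_multiplier(1 / 2))  # 2.0 -> the quoted "2x"
print(effective_multiplier(1 / 3))  # 3.0 -> the quoted "3x"
```

Which also shows why the multiplier is inherently variable: it depends entirely on how wasteful a given game's page usage would otherwise have been.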
 
Looks like he is actually saying the opposite that you can bypass kernel using unbuffered i/o but still have to deal with GPU overhead.
Again a-lot of this DirectStorage fixes.
He's explicitly referring to PC. The GPU driver overhead is not an issue on console.
 
I really think this big difference in I/O is because of how both are viewing next gen. Sony probably believes that they need an insane I/O to push game design while Microsoft only sees it as a way to reduce load times so the XSXs I/O is good enough for them.

It should be interesting to see who is right in the end but currently it seems like Sony made a better decision with the I/O based off that UE5 demo.

They'll both be good at different things, while being roughly the same at the same things, but Sony wouldn't have made the investment and sacrifice they did on the IO unless it was enabling something really fundamentally different and new. Cerny, Sweeney and Carmack aren't clowns.
 
Good to see that someone understands.
And I will emphasize that the XSX does the texture filtering (i.e. avoiding the loading of unnecessary textures) through hardware;



Agreed. It really isn't that complicated. If you want to load an 8K texture but only need to use some of it, you save transfer bandwidth by only moving the data you actually need. Same deal if they do have a studio upscaling textures in real time (as they've alluded to): if you can transfer a 2048 texture and upscale to an 8K result, you just saved most of the bandwidth hit, since the 2048 source is only 1/16th the pixels. Basic math, nothing more. No magic involved or required.

People keep looking at it as an approach that increases the IO capability of the XSX, which it does not. These are just things designed to reduce the need for IO bandwidth and get more done with what is there.
 
Velocity Architecture is the latest Groundhog Day discussion in this thread lol

I swear you guys have been going back and forth for weeks analyzing the one or 2 sentences we have on what it is.
 
Is that a ban bet?


Might be!
 
Instead of having to use more bandwidth to the SSD for assets, the Sampler Feedback technique intelligently allows the GPU to grab assets when needed instead, which will utilize less memory in the long run. This doesn't mean that the PS5 won't have the advantage as far as SSD streaming goes, but it allows the XSX to not be that far behind. I think the SSD on the PS5 was Sony's brute-force attempt at great-looking games, and the 12 TF with the Velocity Architecture was Microsoft's brute-force attempt at great-looking games. I'm starting to think the differences will be negligible and there won't be a gigantic difference in the way games look, but I feel the XSX will have a slight advantage when it comes to additional effects, physics, and resolution, though it won't make or break anything. Marketing and games are where things will really matter.
It means XSX will be further behind, because PS5 has the same solution under a different name, on top of the fast SSD.
 
In either case, I still don't see how the 2x-3x multiplier can be discarded. I can still be wrong, though. In any case, it's indeed quite clear that SFS will have variable performance, rather than the 'fixed' hardware setup of the PS5.


You keep comparing its performance to the PS5, but you're alone there.


They'll both be good at different things, while being roughly the same at the same things, but Sony wouldn't have made the investment and sacrifice they did on the IO unless it was enabling something really fundamentally different and new. Cerny, Sweeney and Carmack aren't clowns.


OR MAYBE Sony bought them! (even though MS is the richest, war chest, money money money)
 
He's explicitly referring to PC. The GPU driver overhead is not an issue on console.

On PC.

Consoles don't have such problems.

I'm aware. Looks like most didn't see my edit.
I was talking about PC. DirectStorage is an API for Windows that reduces CPU bottlenecks dramatically.
Then there's Nvidia's GPUDirect Storage (I/O software that bypasses the CPU and system RAM to fetch data straight from the NVMe into the VRAM). We don't know if Nvidia will release the I/O software to the RTX lineup. We know the software is currently supported in their data center lineup.

 
LOL what? Can you quote anyone saying that the XsX IO customization is just part of RDNA2?

I questioned whether the texture filtering blocks the guy was talking about could be part of RDNA2, because PRT was hardware-supported in GCN.

Of course the guy on Twitter goes "Velocity Hardware is custom!!!" Well, that's not what anybody was asking, but sure.
 
There is no mechanism by which 100 GB of flash storage becomes "instantly" accessible over a PCIe 4.0 bus when their own quoted speed is 2.4 GB/s sequential read.

It's marketing speak; it means the SSD is fast, so it's good for streaming.

If Sony had put their SSD, flash controller, I/O complex and cache scrubbers under a marketable term like "Tachyon Architecture", we'd be hearing a lot less about "PS5's SSD" and a lot less about Velocity Architecture.

Look at the actual details, quoted specs and number of elements, not how aggressive and fast the marketing term sounds.
100 GB being accessible does not mean bandwidth. My opinion is that it's more like virtual memory.
 
Velocity Architecture is the latest Groundhog Day discussion in this thread lol

I swear you guys have been going back and forth for weeks analyzing the one or 2 sentences we have on what it is.

When one doesn't look on par with the other, by a good margin, arguments must "close the gap".

Same in the other direction when it comes to flippity floppies. Though that is seemingly a much smaller gap, not a paradigm shift for gaming, and Sweeney/Sony opened the floodgates last week expressing that. Now Carmack has joined the conversation.

Can't let Brand A have any perceivable advantage over Brand B, even if how it "closes the gap" is a new way of doing things going forward. It's hard for people to wrap their heads around it when it's been years of traditional ways of doing things.

I am just glad the conversation is almost over about whether faster storage solutions can and will impact graphics through how fast you can feed the GPU.
 
When one doesn't look on par with the other, by a good margin, arguments must "close the gap".

Same in the other direction when it comes to flippity floppies. Though that is seemingly a much smaller gap, not a paradigm shift for gaming, and Sweeney/Sony opened the floodgates last week expressing that. Now Carmack has joined the conversation.

Can't let Brand A have any perceivable advantage over Brand B, even if how it "closes the gap" is a new way of doing things going forward. It's hard for people to wrap their heads around it when it's been years of traditional ways of doing things.

I am just glad the conversation is almost over about whether faster storage solutions can and will impact graphics through how fast you can feed the GPU.
The only thing that hurts people who identify with Xbox more than a PS5 almost on par with the XSX is actual innovation that gives it any kind of advantage. I sometimes think it's overstated too, just as much as the flops.
 
It's clear that Sony have gone the extra mile in their development of the PS5 in terms of their custom silicon and SSD/memory design.

Microsoft have gone for the more powerful machine; I hate to use the term "brute force", but that seems to be how the systems have been built, to me. I think some people have used the terms innovation vs evolution, which I think is a fair assessment.

Just out of curiosity: if a mid-range refresh was to go ahead in, say, 3 years' time, and both machines offered higher-performing models, namely a pro version of each machine, do you think Microsoft would try to take advantage of Sony's custom silicon designs and try to improve their system, or do you think they would just do more of the same, as in more teraflops, RAM, and a bigger SSD?

Surely when Microsoft sees those big AAA exclusive games from Sony taking advantage of the new SSD tech in terms of building new worlds/levels, and doing things people thought were not possible before said SSD, Microsoft would want to up their game and try to incorporate it into their upcoming system to compete.
 
It's clear that Sony have gone the extra mile in their development of the PS5 in terms of their custom silicon and SSD/memory design.

Microsoft have gone for the more powerful machine; I hate to use the term "brute force", but that seems to be how the systems have been built, to me. I think some people have used the terms innovation vs evolution, which I think is a fair assessment.

Just out of curiosity: if a mid-range refresh was to go ahead in, say, 3 years' time, and both machines offered higher-performing models, namely a pro version of each machine, do you think Microsoft would try to take advantage of Sony's custom silicon designs and try to improve their system, or do you think they would just do more of the same, as in more teraflops, RAM, and a bigger SSD?

Surely when Microsoft sees those big AAA exclusive games from Sony taking advantage of the new SSD tech in terms of building new worlds/levels, and doing things people thought were not possible before said SSD, Microsoft would want to up their game and try to incorporate it into their upcoming system to compete.

I doubt we see a mid-gen refresh in the traditional sense on MS's side. Probably just the next iteration of the hardware in 4 years or so, with all aspects being moved forward to whatever is achievable at their price points at that time.
 
I questioned whether the texture filtering blocks the guy was talking about could be part of RDNA2, because PRT was hardware-supported in GCN.

Of course the guy on Twitter goes "Velocity Hardware is custom!!!" Well, that's not what anybody was asking, but sure.

Like I said earlier, texture filter units are standard in GPUs, but Microsoft then customise (tweak) them to suit their needs, as per James Stanard's tweet. Similar to how Microsoft customised (tweaked) the L2 cache in Jaguar for the One X, among many other tweaks to the GPU/CPU, no doubt. All nice little wins that add up, but thinking this single thing is a huge game changer will, I fear, lead to disappointment.
 
Even Linus's fanbase is calling him out over the lack of informed talk about the PS5's SSD, rather than just "these 3K PC drives are faster on sequential read". But I mean, for all their production value, they make a lot of mistakes and lack research, and the WAN Show is fully improv, it seems.


Not sure if further comments from Tim were posted


Systems integration and whole-system performance. Bringing in data from high-bandwidth storage into video memory in its native format with hardware decompression is very efficient. The software and hardware stack go to great lengths to minimize latency and maximize the bandwidth that's actually accessible by games.

Those PC numbers are theoretical and are from drive into kernel memory. From there, it's a slow and circuitous journey through software decompression to GPU driver swizzling into video memory where you can eventually use it. The PS5 path for this is several times more efficient. And then there's latency.

On PC, there's a lot of layering and overhead. Then you have the issue of getting compressed textures into video memory requires reading into RAM, software decompressing, then calling into a GPU driver to transfer and swizzle them, with numerous kernel transitions throughout.

Intel's work on non-volatile NVDIMMs is very exciting and may get PC data transfer on a better track over the coming years.
 
There is no mechanism by which 100 GB of flash storage becomes "instantly" accessible over a PCIe 4.0 bus when their own quoted speed is 2.4 GB/s sequential read.

It's marketing speak; it means the SSD is fast, so it's good for streaming.

If Sony had put their SSD, flash controller, I/O complex and cache scrubbers under a marketable term like "Tachyon Architecture", we'd be hearing a lot less about "PS5's SSD" and a lot less about Velocity Architecture.

Look at the actual details, quoted specs and number of elements, not how aggressive and fast the marketing term sounds.

Maybe they mean accessible as in it can be addressed in under a millisecond, which is true; access is something like 0.3 ms on fast SSDs.

But you still have to transfer it from said SSD after you have located the data :messenger_beaming:
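Putting numbers on that split (a quick sketch using the ~0.3 ms access figure above and the quoted 2.4 GB/s raw read; the 64 MB asset size is made up for illustration):

```python
# Total read time = access latency + transfer time. For anything
# beyond tiny assets, transfer dominates - "located instantly"
# still isn't "transferred instantly".
def read_time_ms(size_mb: float, access_ms: float = 0.3,
                 bandwidth_gb_s: float = 2.4) -> float:
    transfer_ms = size_mb / (bandwidth_gb_s * 1000.0) * 1000.0
    return access_ms + transfer_ms

print(round(read_time_ms(64), 2))  # ~26.97 ms, almost all of it transfer
```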
 



Why no answer?

I don't know. But he's asking if DX12 will be able to reach the same 2x-3x as was advertised for the XSX. Basically, the tweet the question was asked under already kind of says what the answer would be. It would be the same factor of 2x-3x, at the cost of having pop-in. So you either load more into VRAM, or you make use of it and have pop-in, since there is no hardware to accelerate it.
 
Like I said earlier, texture filter units are standard in GPUs, but Microsoft then customise (tweak) them to suit their needs, as per James Stanard's tweet. Similar to how Microsoft customised (tweaked) the L2 cache in Jaguar for the One X, among many other tweaks to the GPU/CPU, no doubt. All nice little wins that add up, but thinking this single thing is a huge game changer will, I fear, lead to disappointment.
Time will tell. I guess the real question is what the 2x-3x applies to. In my opinion, people are confusing the use of partial textures with the efficient use of textures. But we will see.

As I said earlier, I can still be wrong.
 
Sorry if this was already posted in here, I may have missed it

Looks like Bradly Halestorm had more comments for GamingBolt; he made the "Tempest Audio is the most exciting new feature" comment a few days ago, I believe:


On Variable Frequency:



On MS Cross-Gen Support:
Yes, Variable Frequency, aka Sony's re-engineered SpeedStep, has a lot more to offer regarding performance improvements; what's been mentioned so far in this article and others is only the tip of the iceberg... methinks, anyway.

Coincidentally, I only just wrote about this earlier...
 
Because people keep repeating;


So yeah...

This is true. This is exactly why they advertise it as a 2x-3x increase in their I/O and thus SSD performance. Because if you require only a third of the RAM you would otherwise require, that also means you only have to read a third from the SSD of what you would otherwise have to read.


I think we're confusing two things here... We have;

1) Using tiles/partial textures to load a smaller amount of texture data from SSD into RAM.
2) Increasing loading efficiency by only loading what is required from storage to RAM.

With number 1, even when you are using partial textures, you can still be loading textures that you don't use.
With number 2, you are avoiding the loading of (partial) textures that you will not use.

Based on the explanation by Eurogamer, I think MS is doing the latter, not the former. Reposting;

As textures have ballooned in size to match 4K displays, efficiency in memory utilisation has got progressively worse - something Microsoft was able to confirm by building in special monitoring hardware into Xbox One X's Scorpio Engine SoC. "From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen. "So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

They are talking about allocated pages. Allocated means present in RAM. There are really two possibilities here... Either

A) The allocated pages mentioned for the Xbox One X are already based on the use of partially resident textures, and thus the 2x-3x increase in efficiency they mention is on top of that.
or
B) The allocated pages mentioned for the Xbox One X are not based on the use of partially resident textures, and thus the benefit of PRT also applies.

In either case, I still don't see how the 2x-3x multiplier can be discarded. I can still be wrong, though. In any case, it's indeed quite clear that SFS will have variable performance, rather than the 'fixed' hardware setup of the PS5.

Pages generally refer to a texture caching system. Partially resident textures were new as the PS4 and XO came out, and were not yet available in DirectX as of 2012, at least. I wonder if it made it into the hardware on the XO?
I'd like to know more.

Everything I've seen about what makes SFS custom to Microsoft so far comes down to latency and not stalling the pipeline with a cache miss by falling back to a lower MIP level.
More digging/pestering required maybe
 
Good to see that someone understands.
And I will emphasize that the XSX does the texture filtering (i.e. avoiding the loading of unnecessary textures) through hardware;


Texture Filtering was always in hardware, no?
Seems like they have custom logic for texture filtering.
 
A few thoughts on the design of the final PS5: given all the information available to date, I strongly believe that we will see a black-and-white PS5 with a V-shape that looks very similar to the devkit.

Reasons:
1. The color of the DualSense. In all previous PlayStation consoles, the console and controller had the same colors (except for special editions). A PS5 only in white or black would look strange with this DualSense.

2. The cooling solution. It was often said that Sony opted for an interesting and better cooling solution. Since the heat output of the devkit is said to have improved, I assume this was improved with the revisions of the devkits. I therefore consider it very likely that the final console will look similar in the end, to keep the shape optimized for cooling.

I think the design of the PS5 and what this cooling system is will be critical. If they got it wrong and the system is continuously throttling the clocks due to an unreasonably low power limit, that will hobble the PS5 for the whole generation.

In fact, I think Sony may have put their console design at the top of their priority list. I just don't believe Sony would accept an ugly design for a flagship piece of consumer electronics.

I'm certain they wouldn't accept a physical form factor like the Xsex.

So I'm guessing a key consideration of this system was form factor, which drives the cooling capability, which in turn drives the amount of power the PS5 can consume, which has led to the variable clock paradigm.

It's a tricky balancing act to get all those pieces in place successfully. All we heard from Cerny was that he thought people would be "quite happy" with the cooling solution.

Perhaps he was being mischievous and it will be groundbreaking, or perhaps he was just being realistic and thinks people will be happy it's not a super loud heat pump in the gaming room.

I'm very much looking forward to this final piece of the puzzle. It's not the sexiest part of the console, but cooling will make or break the PS5 this gen.
 
John Carmack is asked if he agrees with Tim Sweeney's glowing assessment of PS5's customizations
Seems very interesting.

These M.2 PCIe 4.0 SSDs' speeds are specified at a theoretical 4.5 GB/s... with unbuffered I/O you can get 4.3 GB/s, and with normal calls only 2.4 GB/s.
That makes me question whether devs ever used unbuffered I/O calls in games.

You still have the GPU driver overhead on PC compared with consoles.
 
these people are too smart for me to even understand wtf they are talking about lol
What I know is not much....

Buffered IO is how the OS normally handles IO calls... it has a memory space for a buffer and only does IO calls when that buffer is empty.
Unbuffered IO bypasses that buffer, making the OS do the IO call even if the buffer is not empty.

Seems like there are limitations on the use of unbuffered IO.

For example, if you access a file one time, unbuffered IO is faster, but if you keep reading the file after the first read, subsequent reads are way faster with buffered IO... think of the buffer as the file in memory: after the first read you don't need to go to disc again, you read from the buffer (RAM), which is way, way faster than disc.

Unbuffered IO
Access 1: OS call -> Disc -> OS (OS needs to go to disc to get the data)
Access 2: OS call -> Disc -> OS (every time)
Access 3: OS call -> Disc -> OS

Buffered IO
Access 1: OS call -> Memory -> Disc -> Memory -> OS (OS now has the data in memory)
Access 2: OS call -> Memory -> OS (OS just gets the data from memory)
Access 3: OS call -> Memory -> OS (while the buffer has the data, you don't use disc IO)

So it depends on what you are doing... I can see a texture being read once having a big advantage with unbuffered IO... but if a texture is being read several times, then it is better to use buffered IO.
You can say whether you want buffered/unbuffered IO when opening the file in the code.
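A minimal Python sketch of the distinction (note: `buffering=0` only bypasses the user-space buffer; bypassing the OS page cache as well needs `O_DIRECT` on Linux with aligned buffers):

```python
import os
import tempfile

# Write a dummy 1 MiB "asset", then read it back both ways.
path = os.path.join(tempfile.mkdtemp(), "asset.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))

# Unbuffered: every read() goes straight to the OS, no library buffer.
with open(path, "rb", buffering=0) as f:
    unbuffered = f.read()

# Buffered (the default): reads are served from an in-memory buffer
# once it's filled, like the "Access 2/3" cases in the diagram above.
with open(path, "rb") as f:
    buffered = f.read()

print(unbuffered == buffered)  # True - same bytes, different path
```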
 
100 GB being accessible does not mean bandwidth. My opinion is that it's more like virtual memory.

If they're using a 100 GB swap file, not only does that chop 100 GB off the SSD for game storage, it would also add wear to the drive, take time to copy assets into it, and be a worse solution overall than just giving the GPU/CPU DMA to the flash memory. I really don't think the XSX is doing that. I still think it was marketing speak: 100 GB was referring to a hypothetical game package, and "instantly available" was referring to the SSD's speed at getting game data into RAM over their 4.8 GB/s pipe.
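For what the "virtual memory" reading could look like in practice, here's a hedged sketch using `mmap`: map the package once, then touch arbitrary offsets and let the OS page data in on demand. The file name, offsets and "HERO_TEXTURE" blob are all made up for illustration.

```python
import mmap
import os
import tempfile

# Build a dummy "package" with a known blob at a known offset.
path = os.path.join(tempfile.mkdtemp(), "package.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096 + b"HERO_TEXTURE" + b"\x00" * 4096)

# Map it read-only: any byte is now addressable without an explicit
# read() call; the OS demand-pages it from storage when touched.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        asset = bytes(m[4096:4096 + 12])

print(asset)  # b'HERO_TEXTURE'
```

No 100 GB is ever copied anywhere up front; "accessible" here means addressable, not already transferred.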
 
I'm genuinely curious about when people think it's time to be banned kamikaze-style on sites (if that happens with this one).

Might find more info here, I think? I will need to bring myself to read through it at some point.

 
Then 'you guys' have to stop saying that SFS/XVA is simply part of RDNA2.
Right, SFS exists even today in the RTX 2000 series :messenger_beaming:

Also, it cannot be used for all texture calls, and it's not a new technology never implemented before; it has existed since the first Rage game XD
 
Just out of curiosity: if a mid-range refresh was to go ahead in, say, 3 years' time, and both machines offered higher-performing models, namely a pro version of each machine, do you think Microsoft would try to take advantage of Sony's custom silicon designs and try to improve their system, or do you think they would just do more of the same, as in more teraflops, RAM, and a bigger SSD?

I hope all this dick comparing causes Microsoft to make up big ground this generation, so that any potential mid-gen refreshes are powerhouse offerings sold at a loss.

If I was in charge of Xbox, I'd sack all the suits and pay Carmack whatever he wanted to come in and do what Cerny is doing for Sony. I'd love to see a Carmack-designed console.
 