Next-Gen PS5 & XSX |OT| Console tEch threaD

There's kinda no point even talking about RDNA 3 when we still don't have a full window into all of what RDNA 2 is. But both consoles almost certainly have features unique to their specific platform needs that likely won't be in RDNA 2, may not necessarily be in RDNA 3 either, and could maybe appear in something later. That doesn't necessarily make those potential features more advanced than features already announced; we just know that AMD and Nvidia are constantly iterating on their architectures and updating them as they find reasonable use cases.

Cerny already confirmed one potential feature: the PS5's GPU cache scrubbers. That was a Sony-suggested customization. He says if that appears in a future AMD GPU, and it might, it would mean they had a successful collaboration with AMD that produced features useful in the PC space. On the Xbox side, we seem to know about a few more. Their implementation of sampler feedback streaming is unique to Series X. Specifically, they have hardware texture filters built into the GPU (fully custom to Xbox Series X) that a Microsoft graphics R&D engineer has confirmed are not part of standard RDNA 2. Then there's the fact that we don't fully understand yet all of what DirectStorage is and what specific features are required for it to be applicable. Microsoft has said they will eventually bring it to PC. I bring this up because it's clearly in part responsible for how the GPU can talk to the SSD and use it as virtual RAM. So toss in DirectStorage and whatever hardware support is around it, such as the decompression unit that's also designed for Microsoft's BCPack, a custom compression system specifically for GPU textures that DirectStorage almost certainly helps to manage. That could be a potential beyond-RDNA 2 feature.

Meaning we can also toss in Sony's Kraken decompression unit as likely a potential feature enhancement that may not be a part of RDNA 2 either, or it could be. After all, it was AMD who unveiled Radeon SSG, where an SSD is literally bolted onto a GPU and usable as expandable memory, so I find it highly unlikely they didn't seek to further roll that work into their high-end consumer PC cards.

Keep in mind I am not saying that Microsoft's and Sony's SSD implementations are the same, I honestly don't think they are, only that there are obviously some similar concepts or thought processes at play, because we really don't have a full window into all that both sides are doing, just a high-level overview at best. Anyway, it's entirely possible that such changes on the PC side may require more time because these consoles had a far more immediate need for certain features than, say, the PC did, meaning all these considerations and hardware built in for the SSDs could actually be beyond-RDNA 2 features only because the consoles needed them now. So I say let's wait to find out. But there's no doubt that both Sony and Microsoft have all sorts of modifications to their GPUs, because their developers may have asked for things or made requests (almost certainly did), and they themselves may have felt there were improvements required in certain areas.

Take, for example, Horizon Zero Dawn, The Last of Us Part II, and Halo Infinite. All three of those developers may have quirks in their game engines that are specific to how their game is designed, and may ask Sony or Microsoft to increase capability in a specific regard because doing so will have a direct impact on their ability to do specific things in their games. Those things may not get the headlines like ray tracing or VRS or DLSS, but they may still be extremely important, be features that are actually beyond RDNA 2, and make it into later architectures if AMD learns that a technique is becoming more common in games. They may end up deciding, "okay, we need our GPU to handle these specific tasks better."

So THAT is what features beyond RDNA 2 may actually entail: specific optimizations for doing certain tasks better. But what I think most fans are doing is treating these secret RDNA 3 or RDNA 4 features as things that will blow features like mesh shaders, VRS, and ray tracing away, or at least be on par. It's possible, but highly unlikely, because it likely would have been discussed already; it would mean an innovation brought on by Sony or Microsoft, giving them far less reason not to talk about it. So far it's been Microsoft who has been more willing to talk about all the new things they did, but Cerny's presentation contained a lot of info also. People are treating it as if he barely said anything. He likely didn't say everything, but he likely got all the major headliners out that are meant to be game-changing GPU capability. The rest may be specific to how all these developers' existing or upcoming game engines are designed. I guarantee there are things specific to Xbox Series X that will make Halo Infinite's engine run better. I guarantee there are things in PlayStation 5 that are optimal and uniquely designed for the next God of War or the next Horizon Zero Dawn.

So, yes, there are many customizations and features in these consoles that are likely not in the RDNA 2 that's dropping this year and might arrive later. In fact, based on some of the requested developer modifications, they may invent new innovative capabilities from those, same as what happens each cycle. Remember Sony said they had specific work done to better handle checkerboard rendering on PS4 Pro? These are the kinds of things that I'm certain were done across both consoles and are likely not RDNA 2 specific. Why else does Cerny stress they didn't just take a PC part? Why else do Microsoft's Jason Ronald and another graphics engineer stress that they built a lot of custom hardware into the Series X? Because developers have needs and desires, and so do they for where they want their games to go.

Hopefully this better explains the whole RDNA 3 and 4 craziness. We don't know what will make it into the newer architectures. We don't even know what's in RDNA 2. But just know that Sony's first-party teams had requests, and Sony likely did their best to oblige in order to specifically enhance their first-party games or address specific weak points or bottlenecks for further gains. Same on the Microsoft side, and then we all know the important role that third parties play for both consoles. They had input also. DICE gave input, id Software made suggestions, Epic Games made suggestions for the existing Unreal Engine and for future versions. You KNOW they talked to Rockstar, you know they talked to Ubisoft Montreal and Bungie and CD Projekt Red. Basically, these consoles are likely loaded with all kinds of little tweaks and customizations, but just don't expect magic-bullet solutions to evaporate raw performance advantages or certain features. Just trust that both consoles will kick ass, especially for their first-party and many third-party titles, because they were designed to do just that.


Some very good points to ponder,
Thank you
 
RDNA 3 performance is based on the size of the die and the process node. RDNA 2 is on a 7 nanometer die. If there are some unknown RDNA 3 features, they are still restricted to the 7nm process. Personally I think it's bullshit, since RDNA 2 is a leap already.

5nm will have better performance per watt than 7nm.
The idea of having an RDNA 3 feature (or not) has many arguments against it, but you somehow figured out a way to choose one of the worst:
really, the nomenclature?
 
Honest question, do you sincerely think that "the 12TF" of the Xbox Series X is "going to be there for all games"?
Why wouldn't it be? The chip runs at a constant rate. If you are talking efficiency that is another story. Neither console will come anywhere near their theoretical maximum.
 
My CPU was upgraded. :messenger_tears_of_joy:

I bet it's some hardware that helps with the temporal injection some engines use, or whatever Sony deems the best way to scale to 120Hz for PSVR2 and what they are planning. Could also be something to do with foveated rendering, with their own form of VRS for the outer vision.

I thought the temporal injection worked well to my eyes; hell, most people could not tell Guerrilla Games were cheating in Killzone...

Or it could be nowt, let's wait and see.
 
This is from the DF article:
A new block known as the Geometry Engine offers developers unparalleled control over triangles and other primitives, and easy optimisation for geometry culling. Functionality extends to the creation of 'primitive shaders' which sounds very similar to the mesh shaders found in Nvidia Turing and upcoming RDNA 2 GPUs.

"A new block called the Geometry Engine" when they are specifically talking about the GPU new features, doesn't sound like something software or API related, but support baked on the hardware (like RT and VRS added support in hardware with RDNA2).

"Functionality extends to the creation of 'primitive shaders' which sounds very similar to the mesh shaders found in Nvidia Turing and upcoming RDNA 2 GPUs."

That quoted part is exactly what I've been saying. There is nothing special about the PS5's geometry engine. It is just what PlayStation is calling Nvidia's and DirectX 12 Ultimate's mesh shaders. The only difference is in the naming. All the actual work is done on the same RDNA 2 hardware on both consoles. AMD did not create two separate hardware designs to do the exact same thing.

Oh, and it is an API, because the hardware already has a name and it's called primitive shaders. And what does that quote say the geometry engine does? It says it creates primitive shaders. That sounds an awful lot like an API to use hardware functionality to me. It's doing exactly what the DirectX 12 Ultimate API is doing. If it looks like an API, works like an API, and is used like an API... it's an API.
 
Neither console will come anywhere near their theoretical maximum.

The key question about efficiency, then, is what and where are the theoretical and actual figures for all the elements of I/O in this gen's consoles, outside the known HDD speed of 50-100MB/s?

From what Microsoft and Sony have said, the 40x/100x I/O increases are achievable real-world figures, aren't they?

I assume the main goal of the above is to get far closer to getting the max out of the APU/RAM?
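
For what it's worth, the 40x/100x multipliers do roughly line up with the raw SSD figures both companies have quoted, if you take the old HDD at around 50-60MB/s as the baseline. A rough check (the HDD baseline here is my assumption):

Python:
# Rough sanity check of the quoted 40x/100x I/O multipliers against the raw SSD specs.
# The HDD baseline is an assumption (last-gen drives were in the 50-100 MB/s range mentioned above).
hdd_mb_s = 60          # assumed last-gen HDD throughput, MB/s

xsx_raw_gb_s = 2.4     # Series X raw SSD throughput (Microsoft's stated figure)
ps5_raw_gb_s = 5.5     # PS5 raw SSD throughput (Sony's stated figure)

print(round(xsx_raw_gb_s * 1000 / hdd_mb_s))   # -> 40, i.e. the ~40x claim
print(round(ps5_raw_gb_s * 1000 / hdd_mb_s))   # -> 92, ~100x once typical compression is factored in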
 
Their implementation of sampler feedback streaming is unique to Series X. Specifically, they have hardware texture filters built into the GPU (fully custom to Xbox Series X) that a Microsoft graphics R&D engineer has confirmed are not part of standard RDNA 2.
Sampler feedback streaming is another fancy name for API exposure of AMD's old partially resident textures.
 
This doesn't answer the question. The same was the case in 2013 and they were more vocal.

They were competing with Xbox at that time; now Xbox is competing with Amazon and Google. The PS4 is having solid sales lately, they're sold out everywhere around here, and people are getting rabid during the lockdown to have one, maybe even an extra one in the house. So why rush while you're enjoying total dominance?
 
what's the point of arguing with you if you won't believe the words of the lead architect of the PS5?
I know exactly what Cerny said. The problem is that you don't. Here is what you incorrectly claim Cerny said...
no GE it's hardware unit inside GPU

Now here is what Cerny actually said:
"The PlayStation 5 has a new unit called the geometry engine which brings handling of triangles and other primitives under full programmatic control."
Proof @28:59


Nowhere does Cerny say the geometry engine is a hardware unit. He only says it is part of the PlayStation 5 that lets the programmer have full control over primitives. Hmm...now what does a programmer use to manipulate graphic primitives in the GPU? Bueller.... Bueller.... The smart ones in the class will have said an API.
 
Nowhere does Cerny say the geometry engine is a hardware unit. He only says it is part of the PlayStation 5 that lets the programmer have full control over primitives. Hmm...now what does a programmer use to manipulate graphic primitives in the GPU? Bueller.... Bueller.... The smart ones in the class will have said an API.

I code for a living, and have never heard people refer to an API as a "unit".
 
Don't be telling others what they can't post. You're just like everyone else.

Sony confirmed their clocks are variable (fact 1)

Variable means not consistent, subject to change. The 10TF is not going to be there for all games, Sony even openly admits that's the case.
Since it is clear that you have very basic tech knowledge but still came here to post like a guru on the matter, let's see what a real dev has to say about the new consoles' theoretical "peak".

Ali Salehi, a rendering engineer at Crytek:
The PlayStation 5 has 36 CUs, and the Xbox Series X has 52 CUs available to the developer. What is the difference?

The main difference is that the working frequency of the PlayStation 5 is much higher, so its CUs work at a higher frequency. That's why, despite the difference in CU count, the two consoles' performance is almost the same. An interesting analogy from an IGN reporter was that the Xbox Series X GPU is like an 8-cylinder engine, and the PlayStation 5 is like a turbocharged 6-cylinder engine. Raising the clock speed on the PlayStation 5 seems to me to have a number of benefits for things such as memory management, rasterization, and other elements of the GPU whose performance is related to frequency rather than CU count. So in some scenarios the PlayStation 5's GPU works faster than the Series X's. That's what lets the console's GPU work closer to its announced peak of 10.28 teraflops more often. But the Series X, because the rest of its elements are slower, will probably not reach its 12 teraflops most of the time, and will only reach 12 teraflops in highly ideal conditions.

Enjoy 👇:messenger_tears_of_joy:
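
For anyone who wants to sanity-check the headline numbers themselves: the teraflop figures are just shader count × clock × 2 ops per cycle, which is why a narrower, faster GPU can land close to a wider, slower one. A quick sketch using the publicly stated specs (standard public figures, nothing from the interview itself):

Python:
# Theoretical FP32 throughput = CUs * 64 shaders per CU * 2 ops per clock * clock speed (GHz).
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(round(tflops(36, 2.23), 2))    # PS5:      10.28 TF at its 2.23 GHz peak clock
print(round(tflops(52, 1.825), 2))   # Series X: 12.15 TF at its fixed 1.825 GHz clock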
 
I code for a living, and have never heard people refer to an API as a "unit".
I code for a living too and hear units of code referred to all the time. Seriously, how could you not? Or how about unit testing? When coders are doing that, do you think they are testing hardware? No, they are testing functional units of code, which is what EVERYONE calls them.
 
Or because Microsoft knows they have this.


And as to the games being designed for console and ported to PC, won't they again be designed for console and ported to PC? I don't see a problem. I find it funny people are treating Series X as if it only has 10GB for games when it has 13.5GB. And 100% of the system RAM isn't used for graphics. There are a lot of other key parts of the game that need memory also and won't require the faster portion of memory.

And as a thought experiment, perhaps everybody with an 8GB RTX 2080 and RTX 2080 Super should just toss those GPUs out since they're apparently obsolete now based on not having 10GB of RAM, much less 13.5GB to match the next gen consoles? Even the RTX 2080 Ti only has 11GB. Xbox Series X will be totally fine in the RAM department and games are coming up fairly soon on the 7th that will prove this, keeping in mind these are just launch titles that probably aren't even using Sampler Feedback Streaming and a number of the newer software and hardware features of the system.

You would expect Microsoft to say what they did. They obviously want to sell this console. The top-tier graphics cards are expected to be twice as fast as these consoles. I am sure that had the consoles used more RAM next gen, the same would have applied to the PC.

I'd say they went this way because they needed the bandwidth to hit their render targets and didn't want to go 20GB because of the cost of GDDR6. They are assuming that no more than 10GB will be needed for Vram on the system, which is probably a safe bet considering the total available to devs is 13.5GB. The CPU and Audio will need some memory too.

They definitely should have had more RAM. Shadow Fall, a PS4 launch game nearly 7 years old, uses 4.5GB, of which 1.5GB was for the CPU. Next gen we are getting a big upgrade in CPU, and audio is also being given emphasis. I would expect the CPU alone to use a minimum of 3GB of RAM. RAM modules are supposed to work in parallel and are required to be the same speed for best performance; that is the point of going unified memory in the first place. So I am not sure how it will be affected if a game uses more than 10GB of RAM.
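
For reference, here is the Series X memory split being argued about, as a minimal sketch using only Microsoft's published figures:

Python:
# Xbox Series X memory layout, using Microsoft's published figures.
total_gb         = 16.0   # 16 GB GDDR6 total
fast_pool_gb     = 10.0   # runs at 560 GB/s, intended mainly for GPU-optimal data
standard_pool_gb = 6.0    # runs at 336 GB/s
os_reserved_gb   = 2.5    # reserved for the OS, taken from the standard pool

game_budget_gb       = total_gb - os_reserved_gb          # 13.5 GB available to developers
standard_for_game_gb = standard_pool_gb - os_reserved_gb  # 3.5 GB of the slower pool for games
print(game_budget_gb, fast_pool_gb, standard_for_game_gb) # 13.5 10.0 3.5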
 
I code for a living too and hear units of code referred to all the time. Seriously, how could you not? Or how about unit testing? When coders are doing that, do you think they are testing hardware? No, they are testing functional units of code, which is what EVERYONE calls them.
Translation units, code units, sure. But that's not the same as an API.
I would even avoid calling a library "unit", personally.
 
The funny thing about this whole war chest theory is that the PlayStation brand is literally what's making Sony profitable and relevant. Their insurance business is profitable, but PlayStation accounts for a massive portion of their revenue and profit, and yet they had to make some pretty price-conscious decisions when it came to the APU size and RAM bandwidth.

MS, just like Sony, is bound by the same budget limitations.

If Sony invests and screws up the Playstation side of the business somehow they have MUCH more to lose than Microsoft.

The Xbox division is profitable for Microsoft but it isn't as huge a piece of the pie as it is for Sony so they can probably afford to take more risks.
 
I code for a living, and have never heard people refer to an API as a "unit".

Is it safe to assume that what Sony means by Geometry Engine is the same definition that AMD gives it?

[Image: isscc2020-5700-design.png — AMD RX 5700 (Navi 10) design slide from ISSCC 2020]


It looks like it's something inside the GPU. And I went a bit further back and they talked about the Geometry Engine in the Polaris white paper as well.
 
Since it is clear that you have very basic tech knowledge but still came here to post like a guru on the matter, let's see what a real dev has to say about the new consoles' theoretical "peak".

Ali Salehi, a rendering engineer at Crytek:


Enjoy 👇:messenger_tears_of_joy:

Didn't you get the memo? He once had a PS4 game in his hands and made a tweet about it, so he is clearly a Sony fanboy /s
 
Actually it kinda does matter. This is nitpicking though: one decompression block's throughput _could_ be a bottleneck. We don't know, because we don't know exactly how much data the compression block can receive at once, how much it can decompress (only one compression method at a time?) and how much it can output at once (4.x GB/s overall, or for each compression method?).
Basically too many unknowns.
I agree, we have unknowns regarding the decompressors. Thing is, if the compression block is the limiting factor, why not use Mermaid? And if the bus is the limiting factor, why not use Leviathan? I'm assuming both strike a balance between the raw transfer speed and the decompression block, or else they would have used other forms of compression to make up for the slower part of the pipeline.

Well, as you can see above, you're just in time for Dr.Keo taking over the "BCPack closes the gap" narrative _ I forget which fellow gaffer was in charge before, but that's ok because it's basically the same working model: API solutions with fancy names, gaining performance on the fully hardware-supported SSD-to-I/O complex in the PS5...
and the same supplier: they're both selling the "2 decompression blocks on XSX" special sauce _ btw, I think this one is to blame on the DF XSX analysis; if I remember right, there was a part of the description poorly articulated that can indeed lead to the misunderstanding.
And then yesterday we had a dude claiming the PS5 is RDNA 2 alpha and the XSX is RDNA 2.3+, and some more shenanigans.

That's just the last 2 days and fresh memories... Oh! Thickgirls pulled out a rumor too about future Navi 23 cards (the smaller ones, fewer CUs) having some kind of hybrid RT solution, so not full RDNA 2... but just for the fun of speculation like usual, of course; he's very neutral and just likes to discuss all information, unlike fanboys.
WTF is that post? Have you actually read my post before posting this toxicity?

It is one hardware decompression block and I'm not nitpicking. I'm going by what Microsoft themselves said. If it was nitpicking, why aren't you also saying the PS5 has 2 blocks, since the PS5's hardware decompression block supports both zlib and Kraken?

The 22GB/s is not an edge case, it is the literal hardware specification. It does not require all the planets to align, nor does it require Cthulhu to grace us with his presence. It is a case where data compresses really well, and going by previous Sony research there are lots of places where data can compress really well: geometry and animation data can compress anywhere from 75% to 98%, and texture data compresses between 40% and 60%, and those figures were with zlib.

Kraken compresses better than zlib. And you're making stuff up now: Mark Cerny said the typical, not the highest, level of compression takes 5.5GB/s to 8GB/s - 9GB/s. That's a compression factor of 31% to 38%, and up to 75% if the data compresses really well at the highest level. And like I said above, Sony's own research shows that geometry and animation data compresses really well, between 75% and 98%.

We don't have a lot of data on Kraken because it is proprietary, but what we have shows that Kraken compresses better than zlib; where it really shines is decompression speed. Compression is trivial; where you want the speed is on decompression, because that determines how fast you can move data from storage to RAM, hence the ridiculously fast 22GB/s hardware decompression block.

They didn't add it because it was a cool bullet point, like you put it. It should be obvious at this point that Sony is going for speed: 3.5GHz CPU, 2.23GHz GPU, 5.5GB/s SSD, 22GB/s decompression block, and lots of ASICs that aid in the movement of data at high speed. They said their goal was to eliminate load times and remove developer barriers when it comes to moving data.
[Image: oodle_seven_ratio_chart.png — Oodle compression ratio chart]
I actually totally forgot that the PS5 has a zlib decompression block too; it's probably there for BC purposes.

And yes, 22GB/s is an edge case. Kraken has roughly a 15% better compression ratio than zlib, but obviously, just like with any compression, it depends on the case (both the data type and the specific data within that type). No one is denying that some files will be compressed by 10%, others by 35%, and others by 80%. But talking about 22GB/s is irrelevant because that's not the average case. 22GB/s isn't something developers will "take advantage of over time", just like when zlib hits over 75% compression (and it does, sometimes) it doesn't mean developers can somehow magically make all their data compress 4x using zlib. Compression doesn't work like that.

So yes, 22GB/s is a cool bullet point, and with other edge cases it probably can get higher than that in terms of compression (they might be limited by the decompression block at that point), but you keep using the 22GB/s figure in a misleading way. No, you will not get a single game this generation that has a transfer speed of 22GB/s. What you will have is a specific file transferring at 22GB/s for a fraction of a millisecond here and there.
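
To make the arithmetic in this exchange explicit: the effective transfer speed is just the raw 5.5GB/s multiplied by whatever compression ratio that particular data achieves, which is why the typical and peak figures differ so much. A rough sketch (the example ratios are only illustrative, not measured):

Python:
# Effective SSD throughput = raw throughput * compression ratio achieved on that particular data.
raw_gb_s = 5.5   # PS5 raw SSD bandwidth

def effective(compression_ratio):
    return raw_gb_s * compression_ratio

print(effective(1.5))    # 8.25 GB/s  -> roughly the "typical" 8-9 GB/s figure
print(effective(1.64))   # ~9.0 GB/s
print(effective(4.0))    # 22.0 GB/s  -> the decompression block's ceiling, only on very compressible data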
 
Yet MSFT's Visual Studio software is much worse than Sony's (ICE) for console development. Think about it.

"For the same reason I can't imagine Sony releasing a competitive OS today. Logically they are generalist examples. Each one is good at what she has experience with. Sony is used to making Hardware (with its achievements and failures). And Microsoft to make Software (with its achievements and failures)."

Achievements and failures...

As I said, they are generalist examples.

This topic could fill many lines, and the debates are still open, so I will try not to stray from the general scope.

Assuming that my comment was a generalization about the general history of each company, we could always go into detail about Microsoft's achievements and failures, and those of Sony, both in hardware and software. Just because everyone does everything doesn't mean they are experts at everything. And just because you're an expert at something doesn't mean that others can't do better. You have to differentiate between "where are you going?" and "where do you come from?"

The problem, many times, is that it is more comfortable to use what "everyone" uses. And if a "Chrome" does not appear, people will keep using "Internet Explorer". Even so, it took 8 years to reverse the usage percentages of each browser worldwide. Humans are creatures of habit and it is difficult to get us out of our comfort zone, but if you take the step and leave it you can achieve wonderful things. Another thing is when there is no alternative, so you're screwed (which happens a lot).

In any case it seems true that "nobody" loves Windows. But it is still the most used OS. So there are only two options. Either consumers are doing something wrong, or Microsoft is doing something right.

Today, for me, if I had to make a comparison or simile with other programs, based exclusively on development tools for consoles, I would say the following:

MS ► Max
Sony ► Houdini

Both do everything; each does some things better and other things worse, and what they cannot do you can perhaps program yourself. But one seems to be made from badly assembled "patches" (a non-literal way of speaking) from the past that carry defects that remain uncorrected, adding new tools while still leaving old ones with defects that sometimes conflict with the new (it is incredible that Max is still not able to have Boolean tools worthy of the name that work always and in all circumstances). The other is perhaps somewhat more complex at a general level, but it is much more robust, effective and reliable at a basic level, and if you are interested you have the option of going deeper without losing stability.

I hope my colleagues don't jump down my throat for giving this example. It is no more than that: an example based on my personal opinion.

Powered (again) by Google Translator.
 
A unit of code is not the same as an API.
Unit is just so generic in coding it could refer to anything; I agree that sentence sounds like he's referring to a piece of hardware though. And I've never really heard anyone randomly call code a "unit" either lol
 
Yea. The PS4 is being sold at $300 now, which is pure profit for Sony, compared to when the PS5 comes out and is sold at a loss. Sony wants to sell as many PS4s as they can on the back of these 2 AAA games. Then you will see PS5 marketing start heavily.

Three days ago I saw a new, freshly brought-in PS4 Pro bundled with HZD + Uncharted 4 (which are priced cheaply now) being sold for ~169 Omani Rials (~$440 USD)! It probably sold out within a day, as it's hard to get your hands on PS4s now; demand is pretty high and prices might even rise in other parts of the globe outside the US market (the cheapest offers).
 
Idk. It died in the middle of the lockdown. I don't even know where to start to fix it in a pandemic situation.

Wait...your PS4 had a YLOD 😱? I didn't know that was a thing, wow. Truly sorry to hear that.

Please don't downgrade PS5 anymore 😢

It doesn't matter. The scientific and technical facts stand on their own two feet. All in due time now.

Variable means not consistent, subject to change. The 10TF is not going to be there for all games, Sony even openly admits that's the case.

12TF will most likely not be there for all games either. It seems to me to be just the reality for both consoles.

Cerny already confirmed one potential feature: the PS5's GPU cache scrubbers. That was a Sony-suggested customization. He says if that appears in a future AMD GPU, and it might, it would mean they had a successful collaboration with AMD that produced features useful in the PC space.

Well the quote is:

If the ideas are sufficiently specific to what we're trying to accomplish like the GPU cache scrubbers I was talking about then they end up being just for us.

If you see a similar discrete GPU available as a PC card at roughly the same time as we release our console that means our collaboration with AMD succeeded.

Almost seems a bit different from what you're saying? Perhaps I'm misinterpreting one of them lol.

(source: https://playstationvr.hateblo.jp/entry/2020/03/30/181003)
 
I code for a living too and hear units of code referred to all the time. Seriously, how could you not? Or how about unit testing? When coders are doing that, do you think they are testing hardware? No, they are testing functional units of code, which is what EVERYONE calls them.
never heard such nonsense as "unit of code" :messenger_grinning_squinting: if you're a programmer, can you explain what this function does?

Python:
def function_x(a):
    if a:
        lp = [i for i in a[1:] if i < a[0]]
        mp = [i for i in a[1:] if i >= a[0]]
        return function_x(lp) + [a[0]] + function_x(mp)
    else:
        return a

and what methods does it use?
 
The head of Xbox has only in the last few years reported directly to Satya.

The Xbox division of today is a different beast to the launch version of Xbox One. Large companies make mistakes all the time. Large companies also change and shift their focus too, to turn things around.


We know better than to go off Microsoft PR and "revenue". I mean, all the old-school gamers have read articles like this before over the generations, and things turned out differently or turned out to be hot smoke, fiddling with numbers. And we all know profit and revenue are two different beasts.

I agree, though: companies change based off pressure, and Sony's dominance has made Microsoft more consumer-friendly, following steps and more focused. We shall see, but they definitely don't get the benefit of the doubt from me.
 
The way I see it is that MS may believe that getting a big AAA third-party game on Gamepass day 1 would yield the following benefits:
  • Influencing the decision making of potential console buyers this holiday season
  • Fortifying the value proposition of Gamepass (mindshare)
  • Increased uptake in GP subscriptions (many of whom may forget to cancel or will see the value it offers beyond this)
  • Possibly some residual money off the increased user base if the game sells worthwhile DLC
  • Effectively taking sales away from other platforms (which may impact their rivals' bottom line)
For a single title, I think you could probably make the business case internally to get the chequebook out to do it, at least for launch, but I'm not sure it would become an ongoing trend. One downside I can see is that the game would presumably be a cross-gen game; this would mean the cost would bloat, as the third party would (again presumably) have to make the game available on all hardware that Gamepass supports.

This convo has made me think, wasn't there a rumour MS have signed a deal with Sega to bring their games to Gamepass day 1?

What is the money making behind that anyway? This is so far what we get from direct comparisons, not including PSN or foggy Xbox Live (because you don't pay anything on PC for Xbox Live):

WCCFsuperdatasubscriptions.jpg


 
What is the money making behind that anyway? This is so far what we get from direct comparisons, not including PSN or foggy Xbox Live (because you don't pay anything on PC for Xbox Live):

WCCFsuperdatasubscriptions.jpg


That article is from November 2018 BTW... not entirely sure how relevant that is today... especially since it's an analyst's guess.
 
Translation units, code units, sure. But that's not the same as an API.
I would even avoid calling a library "unit", personally.
When talking to your customers you absolutely would call it a unit, module, plug-in...

People are reading way too much into what Cerny said. The "unit" is the new functionality the PS5 has that allows programmers to directly manipulate and control the geometry that was previously only handled by fixed-function GPU hardware. The unit is the API plus the hardware changes that allow developers to inject their own code into the pipeline.

I honestly can't believe this debate has been going on for as long as it has. My original point has already been conceded: that the PS5 and the XSX will have essentially the same handling of the geometry pipeline. The only reason people started debating the name was because they tried to use the fact that Cerny called it the geometry engine to mean it must be something new that the XSX didn't have. That's when I said that the thing that does the actual geometry processing is in base RDNA 2 and that the PS5 and XSX are both using the same basic architecture. The name geometry engine is just what Sony is calling it, while Microsoft calls it mesh shaders.

The original reason for this tiny point of discussion has long since been over since I hope nobody is still trying to say the geometry engine functionality is something unique to the PS5.

Ok, with that bit of history out of the way, I ask you and anyone who thinks I'm wrong about the API part to say which seems more plausible.

Scenario #1:
Sony looked at the planned functionality of RDNA 2 (they had full access because they helped design it) and said to AMD, "You know what? Even though those primitive shaders do exactly what we want, we want you to remove them and put in our geometry engine which does the exact same thing."

Scenario #2:
Sony looked at the primitive shaders in the RDNA 2 and liked what they saw. Maybe they added a tweak here and there to make it work better with the cache, super fast SSD, VR or whatever else. Then they exposed the new functionality in a new unit of their API and called the whole thing the geometry engine to put their brand on it.

Yeah... scenario #2 is the only one that makes any sense at all. Sony did not completely recreate primitive shaders just so they could call it the geometry engine. They just put their API in front of it and called the whole thing the geometry engine to give them an easier way to talk about it to the general public.
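
If it helps, here is the distinction I'm drawing as a deliberately simplified sketch. Every name below is hypothetical, nothing from Sony's actual SDK; it's only meant to show what "an API unit branded over shared hardware functionality" means in practice:

Python:
# Hypothetical illustration only; none of these class or method names come from Sony's actual tools.

class PrimitiveShaderHardware:
    """Stand-in for the primitive/mesh shading silicon the RDNA 2 consoles share."""
    def dispatch(self, shader_code, primitives):
        # The real work (culling, amplification, etc.) would happen here, in hardware.
        return f"ran {shader_code} over {len(primitives)} primitives"

class GeometryEngineAPI:
    """Stand-in for a platform-branded API layer sitting on top of that hardware."""
    def __init__(self, hw):
        self.hw = hw
    def create_primitive_shader(self, shader_code):
        # "Creates primitive shaders": wraps developer code so it runs on the hardware path.
        return lambda primitives: self.hw.dispatch(shader_code, primitives)

ge = GeometryEngineAPI(PrimitiveShaderHardware())
cull_pass = ge.create_primitive_shader("my_culling_shader")
print(cull_pass([1, 2, 3]))   # same silicon underneath, different branding on top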
 
How soon do people think we'll see a major 3rd party cross platform game come day 1 to GamePass? Maybe this Thursday is too soon but I think it's inevitable within 6-12 months of XSX launch.

We all know Microsoft has money to burn, and Phil directly reports to Satya (who himself seems to feel gaming is an important growth sector).

Valhalla or Cyberpunk possibly? The game will still be multiformat, but be day 1 on GamePass. Well it could be any AAA game really but I can't see this not happening sooner or later for a major AAA game.
Oh boy, this is the kind of mentality that day-and-date policies bring: Xbox users are getting spoiled by 'free' content and soon won't want to buy games. I'm witnessing this happen here in Brazil. Be advised, shareholders don't like to hear about this money-burning excuse.
 
What is the money making behind that anyway? This is so far what we get from direct comparisons, not including PSN or foggy Xbox Live (because you don't pay anything on PC for Xbox Live):

WCCFsuperdatasubscriptions.jpg


PS Now was at 1 million users in October 2019; Microsoft currently is at 10 million. Sure, there's 7 months between those numbers, but I would be surprised if PS Now were also at 10 million. On the 13th of May we'll have Sony's quarterly earnings, so we'll probably hear the right number then.
 
What is the money making behind that anyway? This is so far what we get from direct comparisons, not including PSN or foggy Xbox Live (because you don't pay anything on PC for Xbox Live):

WCCFsuperdatasubscriptions.jpg


Gonna reply a second time to point out how bullshit these numbers are.


PS Now surpassed 700k subscribers 6 months after SuperData claimed they were pulling in ~$140 million in quarterly revenue.

700k x $60 (3 months at full price, the most Sony would pull in over a quarter) is $42 million. And there is no way Sony is getting full price out of every PS Now sub.

They were probably pulling in more like $20 million a quarter when SuperData claimed they were pulling in $140 million...

Moral of the story: SuperData is incredibly unreliable, despite being owned by Nielsen, who are respected in the TV space.
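
Putting those same numbers in one place shows just how far off the estimate has to be (using only the figures quoted above):

Python:
# Sanity check of SuperData's claimed PS Now revenue against the subscriber count Sony announced.
claimed_quarterly_revenue = 140e6    # SuperData's estimate
full_price_per_quarter    = 60.0     # 3 months at full price, the absolute best case per sub

print(claimed_quarterly_revenue / full_price_per_quarter)   # ~2.33 million full-price subs required
print(700_000 * full_price_per_quarter)                     # $42 million ceiling for ~700k subs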
 
That's why I'm asking: do we have any direct comparisons? That's all I see so far. Microsoft is not transparent with its sales anyway.
No I don't think anyone really knows; Sony released sub numbers but nothing really indicating what the average person is paying.

MS hinted at subs, but they could literally be largely people paying $1 to upgrade their accounts for years.
 
How will the PS5 sustain the same ray tracing as the X with a lower CU count?

How will the PS5 keep a consistent 60fps at 4K with an inferior GPU? You think it will still stay around 10TF with ray tracing and a higher frame rate?

You ignore that it's a boosted clock, that it's variable, not a steady clock. The X's clock does not drop.
There's no way the PS5 can match the X's performance. You're going to see with ray tracing that the X's superior specs matter.

I'll let you live in your bubble that the PS5 will keep up.

You're going to see I was right when the consoles are out.

It seems like every day the Xbox cult higher-ups send 'warriors' to uphold the mighty Series X dominance.

Genuinely lost count of how many times I've read posts like this.
 




My take: effectively, if the 9th gen has features that the PC will only get with RDNA 3, they /do/ have RDNA 3 features ahead of time, regardless of the directionality of development. If my twin and I share features, whether you say they have my features or I have theirs, both are correct.
 