
(*) Ali Salehi, a rendering engineer at Crytek, contrasts the next-gen consoles in an interview (Update: tweets/article removed)

Hobbygaming

has been asked to post in 'Grounded' mode.
So we have a random developer who:

1. Doesn't know that the PS5 has hyperthreading
2. Claims the PS5 loads games at least 6 times faster than the XSX
3. Hasn't developed any games worthy of a mention
4. Has no access to the actual hardware

I rest my case.
😂 Hasn't developed any games worthy of a mention??
And how do you know he has no access to the hardware?? He starts by saying he can't talk about what he's working on, which implies that he is currently working on a next-generation project
 

Kumomeme

Member
Some people just can't understand shit. Like their IQ is below 0.

Xbox: a Ferrari with a top speed of 140 km/h and 0-100 km/h in 5 seconds.
PS5: a Maserati with a top speed of 120 km/h and 0-100 km/h in 3 seconds.

Games are like roads: straights, curves to the right and left, and a speed limit of 130 km/h. Yet you never really get a straight road for more than 6 seconds. Which car would reach its own limit almost always, and which car would be more useful?
I need a DBZ version of this
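For what it's worth, the analogy can be made concrete. A minimal sketch, using only the analogy's own numbers (which have nothing to do with real console benchmarks), of how far each car gets on a 6-second straight with a 130 km/h limit:

```python
# Illustrative sketch of the car analogy above. Both cars accelerate
# toward min(top speed, 130 km/h limit); we compare distance covered in
# a 6-second straight, since the road "never stays straight longer".

KMH = 1 / 3.6  # km/h -> m/s conversion factor

def distance_covered(top_speed_kmh, secs_to_100, horizon_s, limit_kmh=130):
    """Distance (m) covered from a standstill under constant acceleration."""
    accel = (100 * KMH) / secs_to_100          # m/s^2, from the 0-100 figure
    v_max = min(top_speed_kmh, limit_kmh) * KMH
    t_accel = min(v_max / accel, horizon_s)    # time spent accelerating
    d = 0.5 * accel * t_accel ** 2             # distance while accelerating
    d += v_max * (horizon_s - t_accel)         # distance while cruising
    return d

ferrari = distance_covered(140, 5, 6)    # "Xbox": higher top speed
maserati = distance_covered(120, 3, 6)   # "PS5": quicker acceleration

print(f"Ferrari:  {ferrari:.0f} m")   # 100 m
print(f"Maserati: {maserati:.0f} m")  # 140 m
```

On this toy model the quicker-accelerating car covers more ground before the next curve, which is the whole point of the analogy.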
 

squidilix

Member
flame-war.gif
 

Bojanglez

The Amiga Brotherhood
From reading this, it sounds like one of his biggest issues is that the XSX sits on top of the Windows software stack, which he implies is itself a limitation on getting the most out of the hardware.

This generation is going to be fascinating, seeing how each studio, even each engineer, goes about getting the most out of each platform. I'm sure both will be great, but I look forward to a time when the software can do the talking and not the engineers (as fascinating as this is). Although no doubt some clowns will trot out the "lazy developers" trope if their console of choice doesn't match up to the other.
 
Two things:

1. PlayStation 5 does have hyperthreading. Of course it does, since it's got a modern CPU.


2. "Consoles have always determined what the standard is"

Well, that sentence is simply

XOkiVsk.gif


Also, that recent Crytek hire seemed biased.
 
This interview is great because it has provided two pure-gold threads with some amazing moments. Not going to bother arguing, because there are obvious translation issues that lead to hilarious outcomes. I will wait for the first DF comparisons; at this point nothing more needs to be said.
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Actually, the XB1 had fewer CUs and higher clocks compared to the PS4. And we all know how that turned out, don't we? That's where all the concerns about the PS5's solution come from: we actually saw and experienced it already, and it was a bad experience.
Ah, you're right, it was 800 vs 853. I forgot that
 

Panajev2001a

GAF's Pleasant Genius
Two things:

1. PlayStation 5 does have hyperthreading. Of course it does, since it's got a modern CPU.


2. "Consoles have always determined what the standard is"

Well, that sentence is simply

XOkiVsk.gif

It is not. That is why you can take the PC ports of these games and crank the resolution and framerate up after a year or so of the console HW release (and this is happening sooner and sooner). It is also why, when new consoles launch, you see minimum PC specs rising after they start selling.

Even CDPR said they were targeting consoles, and that consoles were a key reason Witcher 3 was made with the scope and complexity it had.
 
Are less CUs better than more CUs?

Your question is an obvious trap.
But I'm going to answer it, and then I am going to compare your question to other stupid questions.

My answer:
It depends on the surrounding system that uses these CUs.
Let's say you have 10 CUs vs 5 CUs in different environments, used by different games/code:

10 CUs in a system that is bottlenecked, running unoptimized code written in haste or by an amateur using a slow but general API targeting several systems (thousands of different hardware configurations).
5 CUs in a system that is balanced, running optimized code carefully written using specific APIs for one particular system (1 hardware configuration).

I would tend to argue that both would perform similarly. Optimization could even surpass brute force.

Back to how you phrased your question: it's a common tactic used by people who try to disprove someone else's opinion by making it sound unreasonable.
It's the same kind of question asked by people who believe in a flat Earth, who believe chemtrails are real, by people who don't believe in human-accelerated climate change, and so on.
Some of those people do it to troll; others do it because they don't understand complex things and only believe in simple truths. Something like "all that matters is CU count", or TF for that matter.
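The argument above can be reduced to a toy model: effective throughput is peak capacity scaled by how well the system keeps its units fed. The numbers below are purely illustrative, not measurements of any real hardware:

```python
# Toy model of the point above: a wide GPU that is starved by
# bottlenecks can tie a narrower GPU that stays fully occupied.

def effective_throughput(compute_units, per_cu_rate, utilization):
    """Sustained work per cycle: peak capacity scaled by utilization."""
    return compute_units * per_cu_rate * utilization

# 10 CUs behind bottlenecks + a generic API: maybe half the units stay busy.
wide_but_starved = effective_throughput(10, 1.0, 0.5)

# 5 CUs in a balanced system with a bespoke API: near-full occupancy.
narrow_but_fed = effective_throughput(5, 1.0, 1.0)

print(wide_but_starved, narrow_but_fed)  # 5.0 5.0 -- they tie
```

The utilization figures are invented for the example; the point is only that the comparison depends on the whole system, not the CU count alone.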
 

geordiemp

Member
Doesn't matter; that doesn't stand in opposition to the result. How many people own an RTX 2080-series GPU of any kind? Barely over 2%, but that doesn't change their rendering and performance capability.

Nvidia has separate dedicated cores for RT, which is totally different, unless AMD has them now in RDNA2; but if AMD did, they surely would have bullet-pointed the fact at their presentation. You cannot use Nvidia as an example for AMD.

But yes, the XSX will be theoretically 15-20% better for RT if there are no other distractions / bottlenecks or other things.


Practically better? Nobody knows yet except devs working on games.
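For reference, the 15-20% figure lines up with the announced peak compute numbers. A quick check (paper math only, saying nothing about sustained RT throughput):

```python
# Where a "15-20%" figure plausibly comes from: the announced peak
# compute numbers (12.15 TF for XSX, 10.28 TF for PS5).

XSX_PEAK_TF = 12.15
PS5_PEAK_TF = 10.28

advantage_pct = (XSX_PEAK_TF / PS5_PEAK_TF - 1) * 100
print(f"{advantage_pct:.1f}%")  # ~18.2%
```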
 
He says the XSX uses simple DirectX like the PC and Xbox One... lol... idiot...

DirectX 12 Ultimate [XSX] was developed to give developers deeper levels of GPU programming, and many functions are absolutely new.

The funny thing:
all his answers have been readable word for word in this forum for weeks... absolutely nothing new... only "when... then..."
but no one has proved that these "assumptions" of his are actually true...

Is he really an engineer, or a bad oracle?
 

darkinstinct

...lacks reading comprehension.
OK, for argument's sake let's pretend RT cores are separate, independent blocks.
What makes you think clock frequency won't affect RT cores? We know clocks influence every part of the GPU pipeline, back end and front end. Why do you assume RT cores are exempt from this?

As Cerny's friend said: a rising tide lifts all boats.
The limiting factor of raytracing is memory bandwidth.
 

Panajev2001a

GAF's Pleasant Genius
Actually, the XB1 had fewer CUs and higher clocks compared to the PS4. And we all know how that turned out, don't we? That's where all the concerns about the PS5's solution come from: we actually saw and experienced it already, and it was a bad experience.

One could say that even after the several clockspeed increases, the relative difference was still high and the system had other, worse bottlenecks.

Partially, the system needed the higher clockspeed because it was losing efficiency to virtualisation (it is cheap, not free), and the clock likely contributed to increased compute efficiency together with the low-latency ESRAM; but PS4's GPU invested heavily in a more elaborate HW scheduler and async compute engines (many more ACEs with more HW queues each). PS4 also had the ROPs count advantage, IIRC.

So we have quite different architectures, different numbers of ROPs, different amounts of compute-job management resources, etc., and different bottlenecks. It was not proof for or against clockspeed vs additional resources, and the clockspeed advantage was smaller too.
 

TBiddy

Member
This doesn't refute what I said lol

Him saying he can't talk about his project, in the context of this interview, suggests he already has a devkit

Considering any developer with access to the hardware is most likely under a strict NDA (as verified by the DICE developer earlier), this developer either doesn't have access to the hardware (and thus, no NDA) or should be expecting a call from both MS and Sony very soon.

I think the former is the most likely case.
 

geordiemp

Member
This thread, man. We don't know if he has a dev kit; he was also clever with the NDA, as he stated he could not talk about his current project and was just commenting on publicly available knowledge. So we don't know for sure if he is right and has a PS5 dev kit, or is just speculating...

But anyway, continue GAF.

 

darkinstinct

...lacks reading comprehension.
Considering any developer with access to the hardware is most likely under a strict NDA (as verified by the DICE developer earlier), this developer either doesn't have access to the hardware (and thus, no NDA) or should be expecting a call from both MS and Sony very soon.

I think the former is the most likely case.
Correct. NDAs prevent you from even acknowledging that you are under NDA. He is a mobile developer who spreads BS he has heard on the internet. Just like that former Sony dev who said the PS5 was 12+ TF and would be shown in February, because he read it on forums. People who do know don't talk about it until they are instructed to do so.
 
It is not. That is why you can take the PC ports of these games and crank the resolution and the framerate up after a year or so of the console HW release (happening sooner and sooner). It is also why when new console launches you see minimum PC specs rising after they start selling.

Even CDPR said they were targeting consoles and that consoles were a key reason for Witcher 3 to be made with the scope and complexity it did.
It is bullshit. I bolded the word "always", which you somehow chose to overlook, since it has not always been this sorry-ass way that it is now.
 
ALL:

CYBERPUNK 2077

WILL
TELL
THE TRUTH

It's third-party AAAA.

This game needs a monster GPU and very good streaming [SSD].

We will all see which is better for third-party games.

I'm sure after release Sony fans will talk ONLY about one thing [it's not the graphics😉]:

"Look at the loading screen... can it be that it loaded 1 sec faster than on XSX?..."
 

Panajev2001a

GAF's Pleasant Genius
It is bullshit. I bolded the word "always", which you somehow chose to overlook, since it has not always been this sorry-ass way that it is now.

It has been this way since the Xbox 360 launch at least, and a lot of big, industry-defining titles were console-only before that (hello, GTA series ;)). Anyways, you say it "is" bullshit, but then admit it is that way now hehe.
 

Panajev2001a

GAF's Pleasant Genius
Considering any developer with access to the hardware is most likely under a strict NDA (as verified by the DICE developer earlier), this developer either doesn't have access to the hardware (and thus, no NDA) or should be expecting a call from both MS and Sony very soon.

I think the former is the most likely case.

He did not confirm being under NDA if you want to be accurate and anal ;).
 

geordiemp

Member
Correct. NDAs prevent you from even acknowledging that you are under NDA. He is a mobile developer and a Sony fanboy who spreads BS he has heard on the internet. Just like that former Sony dev who said the PS5 was 12+ TF and would be shown in February, because he read it on forums. People who do know don't talk about it until they are instructed to do so.

Tell you what, though: at least he is a current developer at a recognised company like Crytek, so his comments are more relevant than those of Timdog's mate, who was a dev 15 years ago, and other rubbish from Windows Central.

Also, the article does not state which console performs better; he is saying which one is easier to get more out of, in his opinion.

But continue.
 

SLB1904

Banned
ALL:

CYBERPUNK 2077

WILL
TELL
THE TRUTH

It's third-party AAAA.

This game needs a monster GPU and very good streaming [SSD].

We will all see which is better for third-party games.

I'm sure after release Sony fans will talk ONLY about one thing [it's not the graphics😉]:

"Look at the loading screen... can it be that it loaded 1 sec faster than on XSX?..."
Cyberpunk is AAA. Stop trying to make AAAA a thing.
What's next?
AAAAAAAAAA lol
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Considering any developer with access to the hardware is most likely under a strict NDA (as verified by the DICE developer earlier), this developer either doesn't have access to the hardware (and thus, no NDA) or should be expecting a call from both MS and Sony very soon.

I think the former is the most likely case.
We'll see ;)
 

nikolino840

Member
What I don't understand is why these guys can only say nice things about Sony by essentially trashing Microsoft (or vice-versa). It's like they have the emotional maturity of kids on a playground xD.

And again, it's an open secret that Crytek and MS are not on good terms, so this kind of interview is expected. Kind of like Jonathan Blow's thing on Twitch when people asked him about the next-gen consoles. They may be talented tech people and devs, but they definitely have their own angles, considering past relationships and the fact that neither of these systems has even launched; so why speak the way they do, except to try to set a narrative?



You don't need to be a rendering engineer at Crytek to come to those conclusions, they're common-sense tbh.



Then ask yourself why these developers are giving these kinds of interviews when neither system is out yet. What benefit does this offer potential buyers, other than influencing their own independent thinking and analysis of the systems? Especially when this particular interview comes from someone at a studio with a pretty obviously messy relationship with one of the platform holders.

I'm sorry you can't do any of your own critical thinking and let authority figures do all the thinking for you because they may have a fancy title or two, but not everyone is like that. I thought similarly when the ex-Sony dev came out talking in favor of the XSX over the PS5; again, just look at their history and ask why put that kind of narrative out there before these systems have even launched.

Too bad you fanboys cannot rationalize in these terms; it takes a truly neutral perspective to do so. Honest analysis and interpretation is a sign of clarity of mind; clinging to whatever confirms your preexisting biases for the console-war sporting event in your head seems like the ultimate act of foolishness and fear. Sleep on that for a bit.



Also, for as many people as keep bringing up the obvious "faster CUs > more CUs" talk, they never bring up the poor ratio of high frequencies to actual performance gain typified by RDNA1 GPUs on DUV, when these systems are using enhanced DUV rather than EUV.

The PS5's GPU frequency is clearly above the upper end of Navi's sweet spot, and there are honest questions about how much that has improved in RDNA2 on enhanced DUV. I mean, if questions about RDNA2's front-end improvements are up for speculation, this should be as well.
Maybe because PlayStation owns the bigger part of the market and they don't want to lose potential money
 

Three

Member
You're comparing GPU and CPU workloads, which are completely different. Rendering is inherently parallelisable, because it involves working on millions of pixels at a time. Just look at the PC space: the 5700 XT has very similar clock speeds to the 5500 XT but 66% more CUs, and the 5700 XT is more than 66% faster.
You're right that rendering is already highly parallel on the GPU.

As for the PC-space example: the clocks aren't similar, and more importantly the memory bandwidth is not similar; that's where the difference comes from. In the case of the 5700 XT, everything is better.


                       RX 5500 XT 8GB               RX 5700 XT
Boost Clock            1625 MHz                     1755 MHz
GPU Clock              1717 MHz                     1605 MHz
Memory Clock           1750 MHz (14000 MHz eff.)    1750 MHz (14000 MHz eff.)

Memory
Bandwidth              224.0 GB/s                   448.0 GB/s
Memory Bus             128 bit                      256 bit
Memory Size            8192 MB                      8192 MB
Memory Type            GDDR6                        GDDR6

Render Config
Compute Units          22                           40
ROPs                   32                           64
Shading Units          1408                         2560
TMUs                   88                           160

Theoretical Performance
FP16 (half)            10.39 TFLOPS (2:1)           17,970 GFLOPS (2:1)
FP32 (float)           5.196 TFLOPS                 8,986 GFLOPS
FP64 (double)          324.7 GFLOPS (1:16)          561.6 GFLOPS (1:16)
Pixel Rate             59.04 GPixel/s               112.3 GPixel/s
Texture Rate           162.4 GTexel/s               280.8 GTexel/s


The question is whether fewer CUs at a higher clock can provide comparable performance for most games at a given resolution. It certainly will not reach the maximum theoretical difference in performance implied by the CU count.
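A quick sanity check on the table's own figures shows why the comparison is never clocks vs CUs in isolation: the 5700 XT's FP32 lead roughly tracks its extra shading units, and its bandwidth doubles outright.

```python
# Ratios computed from the spec table above (TechPowerUp-style figures).
# The 5700 XT is better on every axis, so no single number explains it.

specs = {
    "RX 5500 XT": {"fp32_tflops": 5.196, "shaders": 1408, "bw_gbs": 224.0},
    "RX 5700 XT": {"fp32_tflops": 8.986, "shaders": 2560, "bw_gbs": 448.0},
}

small, big = specs["RX 5500 XT"], specs["RX 5700 XT"]
print(f"shaders:   +{big['shaders'] / small['shaders'] - 1:.0%}")        # +82%
print(f"FP32:      +{big['fp32_tflops'] / small['fp32_tflops'] - 1:.0%}")  # +73%
print(f"bandwidth: +{big['bw_gbs'] / small['bw_gbs'] - 1:.0%}")          # +100%
```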
 

Ashoca

Banned
ALL:

CYBERPUNK 2077

WILL
TELL
THE TRUTH

It's third-party AAAA.

This game needs a monster GPU and very good streaming [SSD].

We will all see which is better for third-party games.

I'm sure after release Sony fans will talk ONLY about one thing [it's not the graphics😉]:

"Look at the loading screen... can it be that it loaded 1 sec faster than on XSX?..."

What's great is that you can actually buy the Xbox One version and get the Xbox Series X version for free. Hopefully it's the same for PS5.
 

Virex

Banned
The original find belongs to @M-V2; I did the translation and some editing, and @CJY also made a translation post in the Next Gen Thread.

Edit 1: benjohn is a native Farsi speaker and has gone over my translation, proofreading and making the necessary changes, and I've replaced the whole thing with his version. This translation should now be more faithful to Ali Salehi's words. Thanks.


OK Here we go! It is a long one but full of info.

INTRO
The hardware specifications of the PlayStation 5 and Xbox Series X were officially announced a few weeks ago by Sony and Microsoft, and Digital Foundry had the opportunity to take a deep technical look at what to expect. Although there aren't many games for the consoles yet, and we don't know much about their overall performance and user experience, the two companies are constantly competing in technical and complex debates that few but engineers and programmers can understand; this time around, neither is shying away from providing the deepest technical information.

As we tracked the information, read the specifications, and searched for more details, it seemed best to talk with an engineer and programmer at Crytek, one of the world's most tech-savvy companies, with a powerful game engine. That's why I called Ali Salehi, a rendering engineer at Crytek, and asked him, as an expert, to answer our questions about the Xbox's teraflops advantage over the PS5 and the power of the consoles, and to comment on which one is more powerful. He gave convincing answers with simple, understandable explanations that ran contrary to expectations and to the numbers on paper.

In the following, you will read the conversation between Mohsen Vafnejad and Shayan Ziaei with Ali Salehi about the hardware specifications of the PlayStation 5 and Xbox Series X.

INTERVIEW
[Questions bolded, answers not]
Vijayato: In short, what is the job of a rendering engineer in a gaming company?

Ali Salehi: We handle the technical side of each game's visuals. That means supporting new consoles, optimizing and troubleshooting current algorithms, and implementing new technologies and features like ray tracing.

What is the significance of Teraflops, and does higher Teraflops mean a console is stronger?

Teraflops indicates what a processor could achieve in the best and most ideal conditions; the figure is theoretical. In practice, a graphics card or console is a complex entity that rarely reaches its full potential. Several elements must work together in harmony, each feeding its output to the next. If any of these elements fails to work properly, the efficiency of the others decreases. A good example of this is the PlayStation 3. Because of its SPUs, the PlayStation 3 had a lot more power on paper than the Xbox 360, but in practice, because of its complex architecture, memory bottlenecks, and other problems, it never reached peak efficiency.

There is an image here with the following caption:
[Woes of the PlayStation 3
The PlayStation 3 had a hard time running multi-platform games compared to the Xbox 360. Red Dead Redemption and GTA IV, for example, ran at 720p on the Microsoft console, but the PlayStation 3 produced poorer output and upscaled to reach 720p. But Sony's own studios were able to offer more detailed games such as The Last of Us and Uncharted 2 and 3, thanks to their greater familiarity with the console and the development of special software tools.]

That is why it is not a good idea to base our opinions only on numbers. If all the parts of the Xbox Series X work optimally and the GPU runs at its peak, which is not possible in practice, we can achieve 12 TFLOPS. On top of all this, there is also the software side. An example is the advent of Vulkan and DirectX 12: the hardware did not change, but because of the change in software architecture, the hardware could be put to better use.

The same can be said for consoles. Sony runs the PlayStation 5 on its own operating system, but Microsoft has put a customized version of Windows on the Xbox Series X. The two are very different. Because Sony has developed exclusive software for the PlayStation 5, it will definitely give developers many more capabilities than Microsoft, which uses almost the same DirectX for the PC and for its consoles.
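For context on where the headline figures come from: both numbers fall out of the standard peak-FLOPS formula using the publicly announced CU counts and clocks (52 CUs at 1.825 GHz for the XSX, 36 CUs at up to 2.23 GHz for the PS5), which is exactly the "ideal conditions" math the interviewee is cautioning about.

```python
# Peak FP32 throughput: CUs x 64 shader lanes per RDNA CU x 2 ops per
# lane per clock (a fused multiply-add counts as two operations).

LANES_PER_CU = 64
OPS_PER_CLOCK = 2

def peak_tflops(cus, clock_ghz):
    return cus * LANES_PER_CU * OPS_PER_CLOCK * clock_ghz / 1000

print(f"XSX: {peak_tflops(52, 1.825):.2f} TF")  # 12.15 TF
print(f"PS5: {peak_tflops(36, 2.23):.2f} TF")   # 10.28 TF
```

These are upper bounds by construction; nothing in the formula accounts for memory stalls, occupancy, or any of the bottlenecks discussed in the interview.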

How have you experienced working with both consoles and how do you evaluate them?

I can't say anything right now about my own work, but I'm quoting others who have made public statements. Developers say that the PlayStation 5 is the easiest console they've ever coded for, so they can reach the console's peak performance. In terms of software, coding on the PlayStation 5 is extremely simple and has many features, which leaves a lot of options for developers. All in all, the PlayStation 5 is a better console.

If I understood correctly, are teraflops the final defining factor of GPU power? What do these floating-point figures mean? How would you describe it for a user who doesn't understand all of this?

I think it was a bad PR move to put all this information out. This technical information does not matter to the average user and is not a final judgment of GPU power.

Graphics cards, for example, have 20 different sections, one of which is the compute units, which perform the processing. If the rest of the components are put to use in the best possible way, there are no other restrictions, no bottleneck in memory, and the processor always has the necessary information, 12 TFLOPS can be achieved. So in an ideal world where we remove all the limiting parameters it's possible, but in reality it is not. (He means we cannot remove all bottlenecks, so the 12 TFLOPS remains on paper.)

A good example of this is the Xbox Series X hardware. Microsoft has two separate pools of RAM, the same mistake they made with the Xbox One. One pool of RAM has high bandwidth and the other has lower bandwidth. As a result, coding for the console is sometimes problematic, because the total amount of data we have to put in the faster pool is so large that it becomes annoying again, and to add insult to injury, the 4K output needs even more bandwidth. So there will be factors that bottleneck the XSX's GPU.
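To illustrate the point about the split pools: the Series X exposes 10 GB at 560 GB/s and 6 GB at 336 GB/s (public figures). A simplified model, not a measurement, of how blended bandwidth falls as more traffic has to hit the slow pool:

```python
# Harmonic mix of the two pools: time per byte is averaged across them,
# so slow-pool traffic drags the blended figure below the headline number.

FAST_GBS, SLOW_GBS = 560.0, 336.0

def blended_bandwidth(frac_fast):
    """Effective GB/s when frac_fast of traffic goes to the fast pool."""
    frac_slow = 1.0 - frac_fast
    return 1.0 / (frac_fast / FAST_GBS + frac_slow / SLOW_GBS)

for f in (1.0, 0.9, 0.8):
    print(f"{f:.0%} fast-pool traffic -> {blended_bandwidth(f):.0f} GB/s")
```

The traffic fractions are hypothetical; real behavior depends on what the developer places in each pool, which is exactly the juggling act being described.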

You talked about the CUs. The PlayStation 5 has 36 CUs, and the Xbox Series X has 52 CUs available to the developer. What is the difference?

The main difference is that the PlayStation 5's CUs work at a much higher frequency. That's why, despite the difference in CU count, the two consoles' performance is almost the same. An interesting analogy from an IGN reporter was that the Xbox Series X GPU is like an 8-cylinder engine, and the PlayStation 5's is like a turbocharged 6-cylinder engine. Raising the clock speed on the PlayStation 5 seems to me to have a number of benefits for memory management, rasterization, and the other elements of the GPU whose performance is tied to frequency, not CU count. So in some scenarios the PlayStation 5's GPU works faster than the Series X's. That's what lets the console's GPU operate at its announced peak of 10.28 teraflops more often. But the Series X, because the rest of its elements are slower, will probably not reach its 12 teraflops most of the time, only in highly ideal conditions.

Doesn't this difference decline at the end of the generation, when developers become more familiar with the Series X hardware?

No, because the PlayStation API generally gives devs more freedom, and usually at the end of each generation Sony consoles produce more detailed games. For example, in the early seventh generation, even multi-platform games performed poorly on the PlayStation 3. But late in the generation, Uncharted 3 and The Last of Us came out on that console. I think the next generation will be the same. But generally speaking, the XSX should have less trouble pushing more pixels. (He emphasizes "only" pixels.)

Sony says the smaller the number of CUs, the more you can integrate the tasks. What does Sony's claim mean?

It costs resources to use all the CUs at the same time, because CUs need resources that are allocated by the GPU when they want to run code. If the GPU fails to distribute enough resources across all the CUs to execute a piece of code, it will be forced to drop a number of CUs from use; for example, instead of 52, use 20 of them, because the GPU doesn't have enough resources for all CUs at all times.

Aware of this, Sony has used a faster GPU instead of a larger GPU to reduce allocation costs. A more striking example of this is in CPUs. AMD has had high-core-count CPUs for a long time; Intel, on the other hand, has used fewer but faster cores, and Intel CPUs with fewer but faster cores perform better in gaming. Clearly, a 16- or 32-core CPU has a higher teraflops number, but a CPU with faster cores will definitely do a better job, because it's hard for game programmers to use all the cores all the time; they prefer fewer but faster cores.

Could the hyperthreading feature included in the Series X be Microsoft's winning ace at the end of the generation?

Technically, hyperthreading has been on desktop computers since the Pentium 4: each physical core appears to the system as two virtual cores, and in most cases it helps performance. The Series X lets the developer decide whether to use these virtual cores or turn them off in exchange for a higher CPU clock, which is exactly what you're describing. Making that decision well from the start is not trivial, so hyperthreading is likely to be used later in the generation, not at first.

Can you elaborate?

That is, the analysis requires very precise code profiling, so it's not something everyone knows how to do right now. There are much more important concerns in getting to know the console hardware, and developers are likely to work with fewer cores at a higher clock at the beginning of the next generation, and then move on to using SMT (hyperthreading).

There are 3328 shaders in the Xbox Series X's compute units. What is a shader, what does it do, and what does having 3328 shaders mean?

When developers want to execute code, they do so through units called wavefronts. Multiply the number of CUs by the wavefront width and you have the number of shaders. But it doesn't really matter, and everything I said about the CUs applies here. Again, there are limitations that make all of these shaders unusable at once, and having many of them isn't necessarily good.

There is another important issue to consider, as Mark Cerny put it: CUs and even teraflops are not necessarily the same between architectures. That is, teraflops cannot be compared between devices to decide which one is actually numerically superior. So you can't trust these numbers and call it a day.
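For the concrete arithmetic behind the question's 3328 figure: an RDNA compute unit exposes 64 FP32 shader ALUs ("lanes"), so the marketing shader count is simply CUs times 64.

```python
# Shader counts from the publicly stated CU counts of each console.

LANES_PER_CU = 64  # FP32 ALUs per RDNA compute unit

xsx_shaders = 52 * LANES_PER_CU  # Xbox Series X: 52 CUs
ps5_shaders = 36 * LANES_PER_CU  # PlayStation 5: 36 CUs

print(xsx_shaders, ps5_shaders)  # 3328 2304
```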

Comparisons between Android devices and Apple iPhones have also recently been drawn as analogies to the consoles, with internet discussions pointing out that Android devices often have more RAM but poorer performance than iPhones. Is that comparison applicable to the consoles?

The software stacks placed on top of the hardware determine everything, and they improve with performance updates over time. Sony has always had better software, because Microsoft has to use Windows. So yes, that comparison is right.

Microsoft has insisted that the Xbox Series X's frequency is constant under all circumstances, but Sony takes a different approach: it gives the console a fixed amount of power and lets the frequencies vary depending on the situation. What are the differences between the two, and which will be better for the developer?

What Sony has done is much more logical, because the console decides, depending on the processing load, whether the GPU or the CPU gets the higher frequency at any given time. For example, on a loading screen only the CPU is needed and the GPU is barely used, while in a close-up scene of a character's face the GPU is heavily involved and the CPU plays a very small role. On the other hand, it's good that the Series X has good cooling and guarantees a constant frequency with no throttling, but the practical freedom that Sony has given is really a big deal.

Doesn't this freedom of action make things harder for the developer?

Not really, because we're already doing that in the engine. For example, the dynamic resolution scaling technique used by some games already measures different elements, gauging how much pressure the GPU is under and how far the resolution should be lowered to hold the frame rate. So it's very easy to connect these together.
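A minimal sketch of the kind of controller he describes (parameter values are illustrative, not from any real engine): nudge the resolution scale toward whatever keeps GPU frame time inside the budget.

```python
# Simple proportional dynamic-resolution controller: over budget ->
# shrink the render scale; under budget -> grow it back, within limits.

FRAME_BUDGET_MS = 16.6   # ~60 fps target
GAIN = 0.05              # how aggressively to react (illustrative)
MIN_SCALE, MAX_SCALE = 0.5, 1.0

def update_scale(scale, gpu_frame_ms):
    """Return a new resolution scale after one measured frame."""
    error = (FRAME_BUDGET_MS - gpu_frame_ms) / FRAME_BUDGET_MS
    scale += GAIN * error  # negative error (over budget) shrinks the scale
    return max(MIN_SCALE, min(MAX_SCALE, scale))

scale = 1.0
for frame_ms in (20.0, 19.0, 17.5, 16.0):  # GPU under pressure, then recovering
    scale = update_scale(scale, frame_ms)
    print(f"{frame_ms} ms -> scale {scale:.3f}")
```

Real engines smooth the measurements and predict ahead, but the feedback-loop idea is the same.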

What is the use of the geometry engine or Geometry Engine that Sony is talking about?

I don't think it will be very useful in the first year or two. We'll probably see more of an impact in the second wave of games released on this console, but it doesn't have much use at the start.

The Series X chipset is 7 nm, and we know that the smaller the number, the better the chipset. Can you explore the nanometers and transistors for us?

Shrinking the process means fitting more transistors, and controlling their heat, in smaller spaces. A better production technology matters; the nanometer number itself is not very important. What matters is the number of transistors.

PlayStation 5 SSD speeds reach 8-9 GB/s at peak. Now that we've reached this speed, what else will change, apart from faster loading and more detail?

The first thing is removing loading screens from games. Microsoft also showed the ability to suspend and resume games, running multiple games simultaneously and moving between them in less than 5-6 seconds. On the PlayStation this time will be under a second. Another thing to expect is a change in game menus: when there is no loading, there is no waiting, and you no longer need to watch a video while the game loads in the background.

How will games on PC fare in the meantime? Because having an SSD is a choice for a PC user.

Consoles have always determined what the standard is. Game developers build games based on consoles, and if someone's PC doesn't have an SSD, they will have to deal with long loads or think about buying one.

As a programmer and developer, which do you consider the best console for working and coding? PlayStation 5 or Xbox X series?

Definitely PlayStation 5.

As a programmer, I would say that the PlayStation 5 is much better, and I don't think you could find a programmer who would choose the XSX over the PS5. For the Xbox, they have to put DirectX and Windows on the console, which are many years old; but with each new console Sony builds, it rebuilds the software and APIs however it wants. That is in their interest and in ours, because there is only one way to do everything, and it's the best way possible.

Edit 2: There is a Twitter translation thread


Thanks for the translation to English. Unfortunately I can't read Swedish
 

Chankoras

Member
Someone needs to inform nvidia. They don’t need to produce larger gpus, just up the clocks.
Same with poor old Microsoft, man, they have really fucked up. Less equals more apparently.
Last thing, I interpreted this as him saying that the ps5 was easier to work with, not the most powerful
Isn't a console more than just a GPU? This developer isn't saying the Xbox is bad or that it is weaker, just that in his opinion the PS5 seems easier to get the most out of, because of the sum of its parts.
 

Gudji

Member
Thanks for the translation.

I mean what he's saying isn't new, we've heard PS4 had great tools too and it seems it's improved with PS5. Not sure why so many are calling him a liar, whatever.

Both consoles will be great no doubt.
 
Yes, brains are imploding left and right in here. It's way too much for some people.

Too hard, man :)

Here is the hot topic in the interview for me. Basically this dev is saying that XSX is 12tf just on paper.

:messenger_fire:👇
So in some scenarios the PlayStation 5's GPU works faster than the Series X's. That's what lets the console's GPU run at its announced peak of 10.28 teraflops more often. But for the Series X, because the rest of the elements are slower, it probably will not reach its 12 teraflops most of the time, and will only reach 12 teraflops under highly ideal conditions.
 
Last edited:

Whitecrow

Banned
Your question is an obvious trap.
But I'm going to answer it, and then I'm going to compare your question to other stupid questions.

My Answer:
It depends on the surrounding system that uses those CUs.
Let's say you have 10 CUs vs 5 CUs in different environments, used by different games/code.

10 CUs used in a system that is bottlenecked and running unoptimized code, written in haste or by an amateur using a slow but general API that targets several systems (thousands of different hardware configurations)
5 CUs used in a system that is balanced and running optimized code, carefully written using APIs specific to one particular system (one hardware configuration)

I would tend to argue that the two could perform similarly. Optimization could even surpass brute force.

Back to how you phrased your question: it's a common tactic used by people who try to disprove someone else's opinion by making it sound unreasonable.
It's the same kind of question asked by people who believe in a flat Earth, who believe chemtrails are real, who don't believe in human-accelerated climate change, and so on.
Some of those people do it to troll; others do it because they don't understand complex things and only believe in simple truths, like "all that matters is CU count", or TF for that matter.
You could also say:

AMD with more cores had less performance than Intel with fewer cores.
That's why quantity is not the defining factor.

From what I understand, we can have much more detailed games on PS5 even if they run at a slightly lower resolution than on XSX, because every piece of silicon in the console contributes to that, not just a beefy GPU.

Too hard, man :)

Here is the hot topic in the interview for me. Basically this dev is saying that XSX is 12tf just on paper.

:messenger_fire:👇
They are called bottlenecks. The dev said it clearly: TFLOPS is a theoretical peak performance figure, i.e. what you get when all the work done by the GPU translates linearly into performance, and that's not what happens in practice.
PS5 does a better job there. 10 TFs less bottlenecked may be perfectly comparable to 12 TFs bottlenecked.

And yes, from now on I own the term "bottlenecked" in reference to performance lol.
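The peak-teraflops arithmetic behind those two figures is easy to check. Here's a minimal Python sketch using the publicly announced specs (36 CUs at up to 2.23 GHz for PS5, 52 CUs at a fixed 1.825 GHz for Series X); the utilisation percentages at the end are invented purely to illustrate the bottleneck argument, not measured numbers:

```python
# Peak FP32 throughput of an RDNA-style GPU:
#   TFLOPS = CUs * 64 shaders/CU * 2 FLOPs per shader per clock * clock (GHz) / 1000
def peak_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

ps5_peak = peak_tflops(36, 2.23)    # ~10.28 TF (variable clock, "up to")
xsx_peak = peak_tflops(52, 1.825)   # ~12.15 TF (fixed clock)

# The post's point: what a game actually gets is peak * average utilisation.
# These utilisation figures are made up just to show how the gap can close.
print(f"PS5 peak {ps5_peak:.2f} TF -> {ps5_peak * 0.90:.2f} TF at 90% utilisation")
print(f"XSX peak {xsx_peak:.2f} TF -> {xsx_peak * 0.75:.2f} TF at 75% utilisation")
```

The peak numbers match the marketing figures exactly because that is how the marketing figures are produced: multiply everything out and assume every shader does useful work every cycle.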
 
Last edited:

nikolino840

Member
That is what he is saying: the PS5's hardware and software make it easier to approach these highly ideal conditions than the Xbox's do.

That is called efficiency.

Compare AMD and Nvidia graphics cards, for example: Nvidia gets better efficiency out of its TFs with higher clocks and fewer processing units than AMD.
I think no one is saying the PS5 isn't an amazing console... but outside of Phil Spencer, can you find someone who says that the Series X, with its 12tf, can do amazing games? Or that the hardware is enough to bring games more amazing than this gen's?
 

Bankai

Member
Oef, this must be difficult to hear for all those "experts" here on NeoGAF :messenger_savoring:

I'm really baffled by this:
"But for the Series X, because the rest of the elements are slower, it will not probably reach its 12 Teraflops most of the time, and only reach 12 Teraflops in highly ideal conditions. "

Holy crap!

So in practice the PS5 is faster than the Xbox Series X!
 

FireFly

Member
You're right that it is highly parallel already for the GPU.

As for the example in the PC space: the clocks aren't similar and, more importantly, the memory bandwidth is not similar; that's where the difference comes from. In the case of the 5700 XT, everything is better.

The question is whether lower CUs at higher clock can provide comparable performance for most games at a given res. It certainly will not reach the max theoretical difference in performance because of the CU count.
It looks like the base and game clocks are the wrong way round in your table for the 5500 XT, as it should be 1717 MHz vs 1755 MHz for the game clocks, so the 5700 XT is 2.2% faster in this metric. Maybe the 5700 is a better point of comparison since it has 53% more compute power, thanks to the extra 63% more compute units at lower clock speeds.

DF have done some initial testing with 36 vs 40 CUs, but equal compute performance, and they find that so far performance scales better by adding more CUs than increasing the clock speeds:
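For reference, the same peak-FLOPS arithmetic applied to the two cards being compared, sketched in Python with AMD's advertised game clocks (1717 MHz for the 5500 XT, 1625 MHz for the 5700; with these clocks the gap comes out at roughly +55% compute from +64% CUs, close to the figures quoted above):

```python
def peak_tflops(cus, clock_mhz):
    # Peak FP32 = CUs * 64 shaders/CU * 2 FLOPs per shader per clock * clock rate
    return cus * 64 * 2 * clock_mhz / 1e6

rx5500xt = peak_tflops(22, 1717)      # ~4.84 TF
rx5700   = peak_tflops(36, 1625)      # ~7.49 TF

cu_gain      = 36 / 22 - 1            # ~64% more compute units
compute_gain = rx5700 / rx5500xt - 1  # ~55% more peak compute, despite lower clocks
print(f"CUs: +{cu_gain:.0%}, peak compute: +{compute_gain:.0%}")
```

So the 5700 trades clock speed for CU count while still coming out well ahead on paper, which is exactly why it's the interesting card for the "more CUs vs higher clocks" question.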

 

geordiemp

Member
Someone needs to inform nvidia. They don’t need to produce larger gpus, just up the clocks.
Same with poor old Microsoft, man, they have really fucked up. Less equals more apparently.
Last thing, I interpreted this as him saying that the ps5 was easier to work with, not the most powerful

I am sure Nvidia will optimise their next gen of GPUs using the latest TSMC improvements, the same way AMD has done with RDNA2.

Your argument only applies to the older process, before the 50% performance-per-watt improvement on TSMC's node that AMD has been shouting about. Nvidia will get the same benefits, is my read.

It looks like the base and game clocks are the wrong way round in your table for the 5500 XT, as it should be 1717 MHz vs 1755 MHz for the game clocks, so the 5700 XT is 2.2% faster in this metric. Maybe the 5700 is a better point of comparison since it has 53% more compute power, thanks to the extra 63% more compute units at lower clock speeds.

You cannot use clock/power/TF arguments from RDNA1 when AMD has made it clear to investors that RDNA2 is a 50% improvement in performance per watt (which means it has to be legally accurate).

Your arguments don't apply... at all.
 
Last edited:

FireFly

Member
Oef, this must be difficult to hear for all those "experts" here on NeoGAF :messenger_savoring:

I'm really baffled by this:
"But for the Series X, because the rest of the elements are slower, it will not probably reach its 12 Teraflops most of the time, and only reach 12 Teraflops in highly ideal conditions. "

Holy crap!

So in practise the PS5 is faster than the XBox Series X!
The same applies for the PS5, since the teraflop figure is a theoretical peak.
 

Fdkenzo

Member
It looks like the base and game clocks are the wrong way round in your table for the 5500 XT, as it should be 1717 MHz vs 1755 MHz for the game clocks, so the 5700 XT is 2.2% faster in this metric. Maybe the 5700 is a better point of comparison since it has 53% more compute power, thanks to the extra 63% more compute units at lower clock speeds.

DF have done some initial testing with 36 vs 40 CUs, but equal compute performance, and they find that so far performance scales better by adding more CUs than increasing the clock speeds:


That is RDNA1; PS5 and XSX are on RDNA2. Stop comparing apples with oranges. DF is .........
 