I told them before launch.
You’ve said plenty of things in the past.
With most of the insight being very accurate; it's just how the messaging was formatted that was probably the issue.
The real question is: "Are teraflops the best way to judge a console's performance?"
Is it the split memory that makes Xbox harder to optimize? I think we have heard that from devs before, haven't we?
Enough already: TFLOPS are a theoretical maximum for crunching floating-point numbers. It generally means that a GPU with more FLOPS will deliver more performance, but it is not a guarantee.
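To put a number on "theoretical maximum": the headline TFLOPS figures are just arithmetic on CU count, shader-lane width, and clock speed. A minimal back-of-the-envelope sketch, assuming the publicly quoted 52 CUs at 1.825 GHz for Series X and 36 CUs at up to 2.23 GHz for PS5:

```cpp
#include <cstdio>

// Peak FP32 throughput: CUs * 64 shader lanes per CU * 2 FLOPs per lane per
// clock (fused multiply-add) * clock in GHz -> GFLOPS; divide by 1000 for TFLOPS.
static double PeakTflops(int computeUnits, double clockGhz)
{
    return computeUnits * 64.0 * 2.0 * clockGhz / 1000.0;
}

int main()
{
    std::printf("Series X: %.2f TFLOPS\n", PeakTflops(52, 1.825)); // ~12.15
    std::printf("PS5:      %.2f TFLOPS\n", PeakTflops(36, 2.23));  // ~10.28
    return 0;
}
```

Nothing in that formula accounts for memory bandwidth, caches, or how well the API lets you feed those lanes, which is exactly why it is only a theoretical ceiling.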
Especially when we have this elephant in the room which is DirectX, and a situation where PC code compiles 100% (without using those Xbox-specific APIs); thus we have a recipe for disaster.
Devs still haven't been able to really grasp DirectX 12, so I don't expect them to fully utilize the given HW/API. On PlayStation, you simply have no other choice. It is more of a Windows vs Apple software mentality.
Second best after bits, I would say.
Split memory, lower pixel fillrate, and more OS overhead taxing the CPU (PS5 has dedicated hardware for offloading almost all I/O tasks from the CPU, and dedicated RAM as a cache buffer for the SSD; the Series systems have no dedicated RAM as a cache buffer and a weaker I/O subsystem, so the CPU has to do more of the heavy lifting) would be among some of the potential issues, if it's not simply isolated to GDK/DX12U quirks.
PS5's other advantages, like cache scrubbers and cache coherency engines, shouldn't be underestimated in terms of how much they help the system out compared to Series X. But again, that's assuming it's even anything at the hardware level; DX12U itself may not be curated the way it should be for Xbox's GDK.
Yeah, Alex on Digital Foundry was referring to the bolded in a prior podcast himself. On PlayStation, you have one API solution for each potential thing you would want to do. On Xbox, you have a sea of solutions, some of which may actually be detrimental, or even cause issues with a solution that seemed to work for something else you had just found the optimal approach for!
Xbox's GDK environment seems to have the problem of too many choices, due to utilizing DX12U, which by its nature has to offer a large range of solutions to account for various PC system configurations. I am surprised Microsoft have seemingly yet to make a curated development package of DX12U features, API calls etc. that are specific to optimal performance on Series X, and a similar one for Series S.
that game is completely broken.
that port was very clearly rushed out the door.
they even managed to wrongly align the light sources when RT shadows are enabled on Xbox...
they literally managed to not have VRR working in the 2 modes that are meant to be used with VRR displays...
like... that port is broken beyond belief
you can stand still and look straight up into the sky, and it will have drops below the SX's VRR window at 120hz, and half a second later it will be at 90fps, then back down to 20, back up to 70 etc.
Nobody cared about muh flops until last gen, where it provided, for whatever reason, a useful shorthand to discuss the differences in performance between XBO and PS4. Turns out the PS4 had something like 40% more flops and also about that much more performance in cross-platform games. But there were other differences too: the PS4 had much faster memory, and the XBO at launch had a garbage API and mandated Kinect usage that sapped away tons of resources. Also, the few times the XBO came out "ahead" were mostly in heavily CPU-dependent games, because the XBO had a slightly faster CPU (Unity was one, and there were maybe one or two more). When the One X and PS4 Pro came out, it was interesting because the gap in flops did not correspond to the gap in performance in the same way, but this mostly went unnoticed since I guess those were ultimately niche consoles.
The 4090 RTX Founders Edition is 83 TFLOPS according to Nvidia. Did that translate to 8x more framerate than XBS or PS5? 8x more resolution? Of course not. TFLOPS is one thing, but there are many other factors. You can't reduce console performance to just a TFLOP number.
And that piece of crap can't even run Jedi Survivor at a stable framerate. Horrible tech.
Oh, I can't wait for it to launch with the tech breakdowns. My bet is on "4K" with all the bells and whistles and not 4K.
I can almost hear it.
I'm pretty sure he meant Series S. Blame Microsoft for the cumbersome naming convention.
I know it's getting late in the generation, but really we haven't seen what the Series X can really do because of low sales & the software needing to be backed up by the Xbox One & PC user base.
Who is willing to put all their eggs in the Series X basket when it's 10 million owners at the most now & a lot of people will just play the game on Game Pass?
And the Xbox Series S....
WTF is this?
/s
The point I was making is that we haven't seen a dev focus on the specs of Series X; being that Series S has less memory, it would limit what can be attempted by devs.
Those calls are there, but it isn't separate; it is an addition to the DX SDK. It has additional tooling, but the problem is that it is not enforced. Even one thing would be enough, like the compiler saying "hey, the target hardware isn't compatible with these calls". Another elephant in the room is the DX11-to-DX12 wrapper; despite pundits wanting this info to disappear, it is a real thing.
In my opinion compute is for when you don't know what's going to be important in the coming generation but once you have a good idea of what's needed you should add fixed function units or programmable logic to the hardware for the next console.
PS5 has the advantage in fixed function units because of the higher clocks & devs are not wasting time trying to come up with ways to use compute when most things just work now.
PS4 had extra compute in comparison to its fixed function units, & you saw MM, Q-Games, Jonathan Blow & others pretty much waste a full generation trying to get the best out of compute, but no one is putting that type of time in this generation for a small reward.
Ah, okay. Yeah, 'enforcement' of the calls would be a better way to phrase it: a way programmers can be told that such-and-such a call is sub-optimal for the hardware target. Sometimes limiting options is actually a great thing.
I don't know a lot about the DX11 > DX12 wrapper. What's that about?
The wrapper is a translation layer: you can keep your DX11 calls, wrap them in this thing, and it will output unoptimized DX12 calls just so you can run them on DX12-specific hardware like Xbox (with added RT, for example). It is open source; you can find it here: microsoft/D3D11On12: The Direct3D11-On-12 mapping layer (github.com)
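For anyone wanting to see what that layer looks like in practice, here's a minimal sketch of how D3D11On12 gets wired up on PC. This is my own illustration rather than code from the repo, and it assumes a Windows/D3D12 environment with error handling mostly omitted: you create a real D3D12 device and command queue, then ask the mapping layer for a D3D11 device that translates the old calls onto them.

```cpp
#include <d3d11on12.h>
#include <d3d12.h>
#include <wrl/client.h>
// Link with d3d12.lib and d3d11.lib.

using Microsoft::WRL::ComPtr;

// Create a D3D11 device/context that is backed by a real D3D12 device and queue.
bool CreateD3D11On12(ComPtr<ID3D11Device>& device11,
                     ComPtr<ID3D11DeviceContext>& context11)
{
    // The underlying "native" D3D12 device (default adapter).
    ComPtr<ID3D12Device> device12;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device12))))
        return false;

    // The D3D12 queue that all translated D3D11 work will be submitted to.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    if (FAILED(device12->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue))))
        return false;

    // Ask the 11-on-12 mapping layer for a D3D11 device/context on top of it.
    IUnknown* queues[] = { queue.Get() };
    return SUCCEEDED(D3D11On12CreateDevice(
        device12.Get(),                   // underlying D3D12 device
        D3D11_CREATE_DEVICE_BGRA_SUPPORT, // ordinary D3D11 creation flags
        nullptr, 0,                       // default feature levels
        queues, 1,                        // D3D12 queue(s) to submit to
        0,                                // node mask (single GPU)
        &device11, &context11, nullptr));
}
```

The existing D3D11 rendering code doesn't have to change at all, which is precisely why shipping with it is so tempting.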
The whole, "it uses smaller sized textures and resolution than big brother, so it won't be an issue, just scale down" is being proven as a myth in the PC arena where even 1080p is struggling on DX12 with 8GB for current gen only engines/builds. Look at the Baldur's Gate 3's issues which revolves entirely around design (in its case, split-screen).The point I was making is that we haven't seen a dev focus on the specs of Series X , being that Series S has less memory it would limit what can be attempted by devs.
Witcher 3 is this, right?
In the end, Cerny and other developers who said the same thing were absolutely right, but back then it was only a reason for ridicule, and even outlets like DF helped with it (there were even doubts about hardware RT on PS5... see it now in Ghostwire: Tokyo, running better than on Xbox xDD). This was 2020 on the forums... now it's time to backpedal, now that the PS5 has silenced the doubters.
Ah, the great PlayStation FUD list. IIRC it was sircaw who gave us this classic.
Cannot verify, but more or less any "next-gen update" will be this on PC/Xbox if they didn't already have a DX12 engine, so it is shady as fuck if I am being honest.
Oh, that... sounds bad. Like, yes, it's an easy way to get DX11 code up and running on DX12-compliant devices. I also figure it's a good way to get XBO software programmed in DX11 to "just run" on Series X and S.
But it might also encourage some developers to continue using DX11 calls if they can just rely on the wrapper to port their calls to DX12 (even if unoptimized) and then maybe try optimizing the translated calls where it seems fit. There could also be many instances where DX12 (and Ultimate) has its own new calls that completely replace the old ones, and ways of handling certain functions with sequences of calls that differ significantly from DX11. In those cases the wrapper probably isn't cutting it, because those new calls and methodologies are the more optimized route, while the wrapper is only dumping the translated DX11 calls into unoptimized variants compliant with DX12 & DX12U.
So yeah, I can see how that creates some major problems. It can be both a blessing and a curse.
And that's really fucking bad, because new cards aren't really optimized for DX11 or OpenGL anymore. A lot of what has been added in the last 10 years or so has been underutilized by sticking with these old-ass APIs instead of rewriting the code. They are still doing Unreal 4 games because their middleware does not work on 5, and so on; it is a ton of things.
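To make the "unoptimized DX12 calls" point above a bit more concrete: in D3D11 the driver tracks resource state for you, while native D3D12 makes you spell out every transition. A translation layer can't see your whole frame, so it has to insert conservative barriers on your behalf, roughly like this (a hypothetical fragment of my own, not taken from the mapping layer):

```cpp
#include <d3d12.h>

// Explicitly transition a texture from render-target use to shader-resource use.
// In D3D11 the runtime/driver inferred this; in D3D12 it must be stated by the app.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}
```

A hand-written D3D12 renderer would batch and minimize these; a blind DX11-to-DX12 translation tends to emit far more of them than necessary, which is part of where the "unoptimized" overhead comes from.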
This could be a big oof for Series X in the future if it holds true. So you're of the mind that the X's compute advantage won't manifest into much after all? That's one of the things I thought would work out well enough for it down the line, but if only a small handful of games leveraged the PS4's compute advantage (some 1P, a couple 3P exclusives at most) in a targeted capacity, maybe that is a hint that leveraging compute for specific tasks has a high barrier with low payoff.
It's possible X's compute advantage could still manifest into something if/when mesh shading takes off... although that isn't quite the win Xbox fans may've wanted to believe, since both systems are capable of utilizing mesh shading; they just implement it differently within their graphics pipelines.
That's not what I'm saying at all & I actually expect some devs to take advantage of compute, but for the most part they will go with what just works.
My bad, then.
Eventually, yeah. Here are some historical data:
Eventually as UE5 replaces UE4 we should see more devs shift away from DX11 and zero in on DX12 and leverage what modern GPUs are able to actually do. But that could still lead to a few more years of growing pains for devs who primarily rely on the DirectX side of things and are behind in adapting their engines and middleware to DX12U.
Apparently that might be the majority of XGS and Zenimax teams.
Need to download more flops.
Redfall's on Game Pass, FYI.
Doesn't really matter... It's like saying a truck has 1000 HP, so why is it not faster than a Lotus Elise with 250 HP?
For the record, a truck with 1000 HP would be faster than a Lotus Elise.
And when you bring in large data sets that can't be split up into smaller pieces, the difference in memory would mean that it would have to fit into the limits of the Series S.
Over 4,000 HP, top speed around 40 mph.
Alex Battaglia joined the Xbox Era discord? Lol. I always thought he came across as a PC/Nvidia fanboy from his rants. Guess that kinda explains his disinterest in PS consoles.
With the amount of negative Xbox threads on here this week, I'm starting to wonder if Sony is paying people to start new negative threads at this point.
In reality, it performs right about where you expect it to given the specs. It's really the PS5 that reaches up, not the Series X that reaches down.