You will need Wi-Fi 6 for this. If XSX does not have it, oh man.....
Why would you need Wi-Fi 6 for this specific feature?
Don't waste your time. The games and tech will speak for themselves. People who think AI can't contribute anything to gaming should look at Flight Simulator. Almost everything in the game is designed by AI.
Yeah, "gamechanger" and "revolutionize" are not hyping at all. It's not like we haven't heard or read, the same buzzwords in almost every Microsoft article.Looks like it's just being reported, I don't see anyone calling this the holy grail.
He's replying to someone trying to shoot this down, which is just as bad as hyping it up. Worthy of being reported at least, is it not?
This isn't a game changer. It is a nice feature though.
Why the blatant overselling of this? Reeks of platform wars/fanboyism.
I thought DX12 was the game changer no one was talking about.
It's a big feature for a next gen console if supported.
Why would they need some secret sauce if they have 12TF?
Yeah, machine learning upscaling makes a lot of sense when you want to have raytracing. Otherwise you would have to choose one or the other: 4K gaming without raytracing, or 1080p with it. Which means devs would have to optimize two different versions of the game just for one console.
Raytracing. You can have a 15-20TF GPU that will do 4K at 120FPS with ease, but then you apply RT and the framerate gets slaughtered. So staying at 1080p would technically allow smooth RT performance, while on top of that leaving a lot of those 12TF for even greater detail, instead of using them just to push native 4K like the X1X does, while at the end of the day still having that super detailed, crisp picture quality.
Unfortunately it doesn't work like that.
As DLSS is showing over and over again, it does. Also, bear in mind the next-gen consoles will load a lot of the data straight from the SSDs, leaving much of the actual RAM for whatever is needed. And the RAM configuration and specs are yet to be revealed.
Having a single pipeline would allow less bandwidth than having individual bandwidth allocations like on PC. This is why consoles should go the route of having dedicated VRAM, RAM, and CPU bandwidth, kept separate. Devs could do so much more.
I'm not sure how DLSS is relevant here.
RT kills bandwidth, not amount of RAM.
You can't do that until an engine is only using raytracing and not the hybrid approach anymore. That's something that will eventually be here - in 20 years at best. Until then, it is what it is.
Separate those, and RT will have less of a performance hit.
And until then... Would a limited single pipeline be beneficial, vs other proven methods? Isn't there bound to be a bottleneck at some point?
DLSS is basically the same kind of AI-based image upscaling technique as DirectML. Secondly, the lower the resolution, the less bandwidth is required, as RT calculations scale with rays per pixel: in a typical scenario with 3 rays/pixel, the GPU's RT engine has to calculate about 6.2 million rays for each frame at 1080p, versus a massive 24.9 million rays for native 4K, so yeah, that's A LOT of data.
But now combine the two technologies (like NVIDIA already does): you lower the resolution to 1080p, which isn't nearly as bandwidth- and compute-hungry as 4K in terms of RT calculations, and then you upscale the final image with DLSS, DirectML, CBR, or whatever upscaling method you want.
As much as the very first incarnations of DLSS had very mixed results, to say the least, it's been getting better basically every month, to the point where today it's not just hard to distinguish from native 4K; in some cases it even gives better results than the native image. And that Forza example with DirectML only further proves this is a worthy path: it's basically a win-win scenario where we get much better framerates and better picture quality at the same time.
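To put rough numbers on the point above (simple arithmetic only; the 3 rays/pixel budget is just the illustrative figure used in this thread, real games vary per effect), a quick Python sketch:

```python
# Per-frame ray counts for the same rays-per-pixel budget at two resolutions.
def rays_per_frame(width, height, rays_per_pixel=3):
    return width * height * rays_per_pixel

for name, (w, h) in {"1080p": (1920, 1080), "native 4K": (3840, 2160)}.items():
    print(f"{name}: {rays_per_frame(w, h) / 1e6:.1f} million rays per frame")

# 1080p:     6.2 million rays per frame
# native 4K: 24.9 million rays per frame -> 4x the RT work for the same effects
```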
Please think about how you will get the pixels that are required for image reconstruction.
Why would I? I'm not qualified for that, and nobody is paying me to do it, while all the gaming companies out there have highly qualified engineers to figure it out, and, needless to say, they already did. It's been done over and over again, be it DLSS, CBR, Temporal Injection, Reconstruction, whatever; it's been done multiple times already, with better or worse results, and it's already been proven to work, so I don't know what else you want or need, or what you are trying to prove. Just type "DLSS" into YouTube and you'll have countless examples as evidence. It just works, and that's all I, the devs, and all other gamers care about. Maybe you are some turbo nerd who needs to see the specific lines of code or the mathematical equations instead of the actual real-time results to finally believe it, but if so, that's just you in your own world.
Again irrelevant.
I'm not arguing that DLSS doesn't work.
I'm telling you that RT random memory access exhausts bandwidth so fast that not much is left for DLSS. RT and DLSS will fight for resources and not help each other.
That's the only point.
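For anyone wondering why the random access hurts so much, here is a crude back-of-envelope sketch; the bytes-per-ray figure is purely an assumed placeholder (real numbers depend entirely on the BVH, the scene, and cache behaviour), so treat the output as an illustration of scale, not a measurement:

```python
# Rough illustration of why RT traversal eats bandwidth: each ray walks a BVH and
# touches node/triangle data in a largely incoherent pattern, so caches help far
# less than they do for ordinary raster workloads.

rays_per_frame = 1920 * 1080 * 3   # the 1080p, 3 rays/pixel example from above
fps = 60
bytes_per_ray = 1024               # ASSUMPTION, for illustration only: memory
                                   # actually touched per ray during traversal

traffic_gb_per_s = rays_per_frame * fps * bytes_per_ray / 1e9
print(f"~{traffic_gb_per_s:.0f} GB/s of traversal traffic")  # ~382 GB/s with these inputs
```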
DirectML seems to be Microsoft's answer to Nvidia's DLSS 2.0.
It will be interesting to see how Minecraft Ray Tracing for Xbox Series X compares to Minecraft RTX. Ultimately the high CU count was the right choice.
If both consoles have anything truly comparable to DLSS 2.0 in actual results (not just on paper), then we are in for a treat.
The same way the cloud was the next game changer, starting with Crackdown 3? MS has always been full of shit when talking about future tech.
Just like Cerny said the Pro could give 8 TF of power, and the SSD is the new cloud.
Can you explain what Cerny said about 8.4 TFLOPS and why it is incorrect? You will find out he was not incorrect and that his statement was made in context...
While Sony and Nintendo are marketing games, consoles and accessories, MS is marketing the graphical API update.
I love fanboys, I'll put you down on my calendar for an Xmas card.
Agreed. Give me Crackdown 3 at 6 teraflops over Spider-Man at 1.84 any day.
Checkerboarding used FP16.
Sony is the pioneer in these types of techniques, with the PS4 Pro using checkerboarding.
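For context, the basic idea behind checkerboard rendering is easy to sketch. The toy below is only the gist (real CBR on PS4 Pro works on 2x2 pixel quads and uses motion vectors and an ID buffer for reconstruction, none of which is modelled here); every name in it is made up for illustration:

```python
# Toy checkerboard rendering: shade only half of the pixels each frame in an
# alternating checker pattern, and fill the other half from the previous frame.

def shade(x, y, t):
    # Stand-in for the expensive per-pixel work (lighting, shading, etc.).
    return (x * 7 + y * 13 + t) % 256

W, H = 8, 4
previous = [[shade(x, y, 0) for x in range(W)] for y in range(H)]  # full first frame

def checkerboard_frame(t, previous):
    frame = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            if (x + y + t) % 2 == 0:         # shade half the pixels this frame
                frame[y][x] = shade(x, y, t)
            else:                            # reuse the previous frame's result
                frame[y][x] = previous[y][x]
    return frame

print(checkerboard_frame(1, previous)[0])
```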
All companies should already have similar techniques: NVIDIA, Sony, MS, AMD, Nintendo...
And obviously those who were first to use these advanced techniques haven't just stood by watching the progress of others.
I find this hype around MS software bizarre; DX12 only exists because AMD launched Mantle and forced MS to update DirectX.
Definitely incorrect. 8.4 teraflops never came, hence all the checkerboarding we got. If this spin came from MS it'd get torn down every second lol. I find your obsession with me quite amusing tho.
At what rate does the PS4 Pro process half floats? What would be the PS4 Pro's throughput if all operations were half floats? That is what Cerny said. Nobody, especially not Cerny, said the PS4 Pro would not need checkerboard rendering, especially in the same breath as building HW acceleration for it.
Again, can you explain what the spin is supposed to be, where the incorrect statement lies, and what the lie actually was?
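For reference, the 8.4 figure is just standard FLOPS arithmetic on the PS4 Pro's public GPU specs, with double-rate FP16 ("rapid packed math") doubling the FP32 number; a quick check in Python:

```python
# PS4 Pro GPU: 36 CUs x 64 ALUs at 911 MHz, 2 FLOPs per ALU per clock (FMA),
# and packed half-precision math runs at twice the FP32 rate.

cus, alus_per_cu, clock_ghz = 36, 64, 0.911
fp32_tflops = cus * alus_per_cu * 2 * clock_ghz / 1000
fp16_tflops = fp32_tflops * 2   # double-rate FP16

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~4.2
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~8.4
```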
Watch the Digital Foundry video with God of War on PS4 Pro.
Watched it. Now what?
Needed some time to run to Google lol. There's no need. Cerny also said all we need is 8 TF for 4K gaming. Our PS4 Pro "radically improves" performance to 8.4 TF, yet there's no true 4K gaming.
Ah, so you are now taking different unrelated statements and patching them together, context be darned, when the question was about explaining the reasoning you were apparently certain of?
You mean the "Power of SSD" or its predecessor "Power of Cell". You mean the PS2's "75 million polygons per second " lie. You mean that Killzone 2 demo that never looked like the actual game or Uncharted 4 and the promise of 60 fps(they even had trailer at 1080p 60 fps)The same way the cloud was the next game changer starting with crackdown 3? MS has always been full of shit when talking future tech.
You mean the "Power of SSD" or its predecessor "Power of Cell". You mean the PS2's "75 million polygons per second " lie. You mean that Killzone 2 demo that never looked like the actual game or Uncharted 4 and the promise of 60 fps(they even had trailer at 1080p 60 fps)
And who can forget the PS4 Pro reveal: Sony 'actively pushing' for 60FPS/1080p with PS4 - VideoGamer.com ("Fluid frame rate performance targeted by Sony", www.videogamer.com).
Nvidia will probably have 22 TF by the end of the year.
Yeah, in a Tesla datacentre compute card.
Agreed 100%. Microsoft delivered Crackdown 3 at 4K@60FPS; Sony has yet to do anything remotely comparable.