> Did you not notice that 50m is from December 20, 2023?
I said that I was thinking that it was the last official number.
> A 30 TF PS6 with FSR4+ would definitely be good enough for full path tracing. But without a Zen 6 CPU it may only do it at 30fps.
The 9070 XT is nearly 30 TFLOPS and it buckles in path tracing - and that's with current-gen fidelity settings. If they just want current-gen visual fidelity with path tracing turned on, then a next-gen architecture at 30 TFLOPS might suffice, but it will not be able to deliver any improvements in fidelity on top, such as geometry, draw distance, particles etc.
> How many RTX 5090s is that?
To answer that we gotta sum up what we know/what can be safely assumed:
1) holiday 2028 release
2) 3nm node
3) console form factor
4) power draw below 250W
5) cooperation with AMD on hardware/tech
With that, we can safely assume the PS6 will be weaker than a 5090 - maybe not by much, but it's physically impossible to fit a 5nm, 575W-power-draw GPU (OCed models actually draw up to 666W) into a console alongside a decent CPU (which will likely need those 20-30W too, even if it's heavily undervolted/underclocked for maximum efficiency), plus VRAM (maybe 24, maybe 32 GB, but one of those two configurations for sure) and a fast SSD (not much faster than the still extremely fast PS5 SSD, but likely a bit faster, i.e. it will draw more power too).
Of course it will still be a huge jump from the base PS5, especially in terms of RT and AI upscaling. Raw raster, though, will likely be at most 3x what the base PS5 has - i.e. 4090 territory - and RT performance maybe even 10-15x better. Add AI upscaling on top of that, so native 1080p upscaled to 4K will still look crisp and not blurry at all, close to native 4K (that's how the current DLSS4 transformer model already looks, btw).
What the new console won't be is cheap - I'm predicting likely $1k/€1k at least with a disc drive.
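As a quick sanity check of the arithmetic in that estimate - note the PS5's ~10.3 TF and the 5090's ~105 TF are commonly cited figures used here as assumptions, not official specs:

```python
# Rough sanity check of the claims above (inputs are assumed/commonly
# cited figures, not official specs).
ps5_tf = 10.3            # base PS5 FP32 TFLOPS (commonly cited)
ps6_raster_multiple = 3  # "at most 3x what the base PS5 has"
rtx5090_tf = 105.0       # commonly cited RTX 5090 FP32 TFLOPS

ps6_tf = ps5_tf * ps6_raster_multiple
print(f"Estimated PS6 raster: {ps6_tf:.1f} TF")               # ~30.9 TF
print(f"Fraction of an RTX 5090: {ps6_tf / rtx5090_tf:.2f}")  # ~0.29

# Upscaling: 1080p -> 4K means reconstructing 4x the pixels
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160
print(f"Upscale factor: {pixels_4k / pixels_1080p:.0f}x")     # 4x
```

So the "3x raster" guess lands at roughly 30 TF - a bit under a third of a 5090, which matches the "weaker, maybe not by much" framing above.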
> No way it'll be $1,000 for what you just typed. That should be illegal. At most it'll be $700.
Maybe it will "only" be $800/€800 for the digital edition and $200 more including the disc drive - I count that as $1k total too.
$200 for a disc drive would be nuts!
Half a year after the PS5 Pro launch, you would think the Blu-ray drive would be relatively cheap by now, and yet:
749 PLN, so 198 USD - "Napęd dyskowy do konsoli PS5 Digital Edition (Slim)" (disc drive for the PS5 Digital Edition (Slim) console), Amazon.pl
German Amazon sells it for €128, so 145 USD, but we are still 3.5 years away from holiday 2028 - prices will only go up.
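For reference, the exchange rates implied by those two price conversions (illustrative only - rates obviously fluctuate):

```python
# Implied exchange rates from the prices quoted above (illustrative only).
pln_price, pln_in_usd = 749, 198   # Polish Amazon listing
eur_price, eur_in_usd = 128, 145   # German Amazon listing

print(f"Implied PLN/USD: {pln_price / pln_in_usd:.2f}")  # ~3.78
print(f"Implied USD/EUR: {eur_in_usd / eur_price:.2f}")  # ~1.13
```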
> I hate how you're saying prices will go up; it's a crazy statement nowadays.
Just stating the facts - those companies are greedy af, all of them, so if they find a way, any way, to charge us more, they definitely will.
If the PS6 is releasing 2 years from now, it will have a GPU with roughly the power of an RTX 5080.
You guys think there is a chance for the PS6 CPU to be as powerful as a 9800X3D?
1st option - if Sony decides to go with the cheaper/older Zen 5 architecture (so the same as the 9800X3D), then the PS6 CPU is going to be around 30% slower, simply because of undervolting/underclocking so it doesn't eat 120W TDP like the desktop 9800X3D, but rather maybe 30-40W as part of an SoC in a console form factor.
> You guys think there is a chance for the PS6 CPU to be as powerful as a 9800X3D?
No way.
If they keep using the same ugly, cheap PS3 font in the PS6 I'm going to be SO angry…

Sure, proper BC will never happen, but -- given this is the latest PS6 thread, I thought I'd drop this little ad concept for the fellow dreamers:
[image: PS6 ad concept]
> First, what is the PS6's max wattage as a console?
Going by past PlayStations and the flex leads used: if it ships with a PC/kettle 3-pin lead like the OG PS3/PS4 Pro, then typically under 450W, probably 380W in the real world. But with new green directives and Sony being a green company, I doubt this is the option.
> A 30 TF PS6 with FSR4+ would definitely be good enough for full path tracing. But without a Zen 6 CPU it may only do it at 30fps.
Why CPU?
PT doesn't do anything with the CPU that regular RT doesn't - and RT in and of itself isn't CPU-limited on console.
> First, what is the PS6's max wattage as a console?
Nobody knows. The PS5/PS5 Pro draws about 200-230W depending on the game. I'm betting on it not being vastly higher than that. Maybe a slight increase, but I'm pretty sure it will still be under 300W.
> Nobody knows. The PS5/PS5 Pro draws about 200-230W depending on the game. I'm betting on it not being vastly higher than that.
To go higher they would need to change the cable and socket to those of a PC PSU. 250W is the limit on the normal cable, but you also need to leave a 10% margin, so it ends up 20-25W less in typical use.
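The margin arithmetic in that post, made explicit (using the poster's own 250W/10% figures):

```python
# Headroom arithmetic from the post above: a 250W-rated cable with a
# 10% safety margin leaves roughly 225W for typical sustained draw.
cable_limit_w = 250
margin = 0.10

usable_w = cable_limit_w * (1 - margin)
print(f"Typical usable draw: {usable_w:.0f}W")                # 225W
print(f"Headroom reserved: {cable_limit_w - usable_w:.0f}W")  # 25W
```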
> To go higher they would need to change the cable and socket to those of a PC PSU. 250W is the limit on the normal cable.
Don't figure-8 cables go up to 500+ watts? I know the kettle-type cables can take a couple of thousand watts.
> It can increase the amount of draw calls that the CPU has to process. Especially if the game is rendering reflections, which will require rendering more objects from outside the usual player viewport.
> I assumed the CPU on some level would affect a game if the devs wanted it to have PT.
> Don't figure-8 cables go up to 500+ watts? I know the kettle-type cables can take a couple of thousand watts.
Pretty sure that in the UK, electrical products using the figure-8 connector are limited to below 250W, likely because of the risk of them not being wired with an earth/ground pin - the cable is only live and neutral AFAIK. A similar-gauge cable - presumably with 3 wires - on a hair dryer can be as high as 1.8kW, so the cable being fixed and having a wired ground might be the difference, in the UK at least.
RT traversal populates the acceleration structure for this - you don't issue draw calls against objects that rays hit. And the contents of these bounding hierarchies would be the same as for regular RT, so PT doesn't change anything there.
Draw-call cost is also dramatically lower to begin with in a well-optimised console codebase - though granted, a lot of modern games aren't that tightly optimised anymore.
Not in ways that regular RT doesn't already do (see above).
Also, there's a misconception that RT is an inherently CPU-heavy process because of PC APIs; that doesn't really apply to consoles the same way.
> I'm not talking about BVH traversal.
Neither am I - the cost of RT falls mainly to two things:
1) BVH traversal + shading cost
2) BVH realtime updates
What you speak about (updating objects out of view) is part of BVH updates. But unlike frustum updates, the BVH can be nearly or entirely static frame-to-frame - so the equivalent overhead here is relatively smaller. The downside is that when the BVH does change (say, if large parts of a level move/change), the cost can be substantial - but that's a workload that will typically run on the GPU. Ultimately there should be very little for the CPU to do here on a console (or in some cases, nothing at all).
Also, 'out of view' is a choice too - plenty of RT implementations out there exclude parts of the scene from the RT hierarchy for performance reasons, so it's not a given that all objects which could potentially be hit by rays need to be part of the update. That's the ground-truth approach - but in realtime we almost never go that far.
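The "BVH can be nearly static frame-to-frame" point can be illustrated with a toy refit (1D bounds and a made-up structure purely for illustration; real engines run this over 3D AABBs, typically on the GPU):

```python
# Toy sketch: a static BVH costs ~nothing per frame - only when a leaf's
# bounds actually change does anything need refitting, and the refit
# walks just the affected path toward the root.
from dataclasses import dataclass, field

@dataclass
class Node:
    lo: float
    hi: float                            # 1D AABB for simplicity
    children: list = field(default_factory=list)
    parent: "Node" = None

def make_leaf(lo, hi, parent):
    n = Node(lo, hi, parent=parent)
    parent.children.append(n)
    return n

def refit_up(leaf):
    """Propagate a changed leaf's bounds toward the root; stop early
    when an ancestor's bounds are already correct."""
    node, touched = leaf.parent, 0
    while node is not None:
        lo = min(c.lo for c in node.children)
        hi = max(c.hi for c in node.children)
        if (lo, hi) == (node.lo, node.hi):
            break                        # nothing changed - early out
        node.lo, node.hi = lo, hi
        touched += 1
        node = node.parent
    return touched

root = Node(0.0, 0.0)
a = make_leaf(0.0, 1.0, root)
b = make_leaf(2.0, 3.0, root)
refit_up(a); refit_up(b)                 # initial build-up of root bounds

print(refit_up(a))   # 0: static frame - no BVH work at all
a.hi = 5.0           # object moved/grew this frame
print(refit_up(a))   # 1: only the affected ancestor(s) get refit
```

The early-out is the whole point: off-screen but static geometry sits in the hierarchy for free, which is why the per-frame CPU cost argument doesn't follow.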
> Like I said, using PT, the bottleneck will always be in the GPU.
> But because there is a need to render more objects that are outside the player view frustum, that means more draw calls that the CPU has to process per frame.
Putting to one side that on a console like the PlayStation 5 you have unified RAM - so a draw call is reduced to the GPU reading from an updated area of RAM where it has been reading continuously for rendering instructions - I'm pretty sure the limit on PT/RT on consoles is a number of intersections and casts per frame, per BVH region. So even if there were more work offscreen, the GPU would finish calculating when it passed the lower threshold for that region and would then use the fallback technique for whatever it didn't finish tracing.
I don't think PT/RT on console runs to completion when it would miss its frame render-time target.
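That "budget per region, then fall back" idea can be sketched in miniature (an entirely hypothetical scheme, not how any actual console title schedules rays):

```python
# Hypothetical per-region ray budget: trace until the region's budget is
# spent, then fall back to a cheaper technique for whatever remains
# (e.g. screen-space or probe-based lighting).
def shade_region(pixels_needing_rays, ray_budget):
    traced = pixels_needing_rays[:ray_budget]    # fits in this frame's budget
    fallback = pixels_needing_rays[ray_budget:]  # handled by the fallback path
    return len(traced), len(fallback)

regions = {"onscreen": 1200, "reflection_offscreen": 400}
budgets = {"onscreen": 1000, "reflection_offscreen": 150}

for name, count in regions.items():
    traced, fell_back = shade_region(list(range(count)), budgets[name])
    print(f"{name}: traced {traced}, fallback {fell_back}")
# onscreen: traced 1000, fallback 200
# reflection_offscreen: traced 150, fallback 250
```

The point being made above is that the frame never waits for tracing to "finish" - the budget caps GPU time, and whatever misses the budget takes the cheaper path.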
> Consoles don't have to deal with the cost of going through the PCIe bus to transfer data from the CPU to the GPU.
> But the CPU still has to process those draw calls.
It will depend on the implementation.
But something like Nanite does a lot of what it does within the GPU, using a large, complex shader call - the efficiency of giving the complex shader a more generalised request and leaving the details within the shader allows for more throughput. So I don't believe it would burden the CPU much more, because even if it has to stream pre-calculated BVH structures of a largely static representation of off-screen data only seen in a reflection, that work is offloaded to the I/O complex.
> That is true. But even DX11 had the ability to join a bunch of draw calls with the Driver Command List function. It's something that proper devs have been using for a while.
> Nanite can join that. But its big strength is that it's doing rasterization in compute, not being limited by the pixel-quad rasterizers that Nvidia and AMD use.
The real question is: where is the flow control for the bulk of a frame's render time? Is it on the CPU or the GPU?
I would argue that most rendering in AAA console games takes place on the GPU in compute shaders, and the CPU just manages things between successive frames and the simulations/AI within frame-time, but is effectively out of the conversation once a frame is being processed.
So like Fafalada, I don't see the CPU use between successive frames massively increasing for PT/RT.
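The GPU-driven model being argued for here - the CPU writes one indirect command into shared memory and the GPU expands it into per-object work - can be mimicked in miniature (purely illustrative, not a real graphics API):

```python
# Toy model of GPU-driven rendering: the CPU submits ONE indirect command
# pointing at a shared buffer; per-object culling and "draws" happen
# GPU-side, so CPU cost is independent of object count.
def cpu_frame(unified_memory, scene):
    unified_memory["objects"] = scene         # just update shared memory
    return [("execute_indirect", "objects")]  # a single command, not N draws

def gpu_execute(commands, unified_memory, camera_min, camera_max):
    drawn = 0
    for op, buf in commands:
        if op == "execute_indirect":
            for obj_pos in unified_memory[buf]:          # GPU-side loop
                if camera_min <= obj_pos <= camera_max:  # GPU-side culling
                    drawn += 1
    return drawn

mem = {}
scene = [0.5, 1.5, 7.0, 3.2, 9.9]    # object positions (1D for brevity)
cmds = cpu_frame(mem, scene)
print(len(cmds))                      # 1: CPU work doesn't scale with N
print(gpu_execute(cmds, mem, 0.0, 4.0))  # 3 objects survive culling
```

This is the "client/server paradigm doesn't apply" point in code form: adding more off-screen objects grows the GPU-side loop, not the number of commands the CPU issues.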
> Nobody knows. The PS5/PS5 Pro draws about 200-230W depending on the game. I'm betting on it not being vastly higher than that.
Considering the limitations of advancements in node shrinks and significant performance jumps between architectures, I think next gen they might increase the power-draw limits to get higher clocks. It's not looking feasible to fit larger dies into acceptable mm² targets, so they might go the route of keeping clocks closer to desktop GPUs.
> Notice my previous posts.
> I always said that the bottleneck will be on the GPU side while using PT.
> What I meant to say is that using PT, due to having to render objects outside the normal player frustum, there will be more draw calls for the CPU to process per frame.
> But of course, since the bottleneck is on the GPU side by a significant margin, although the CPU has more work to do, it will still have to wait on the GPU as it finishes calculating PT.
> On the other hand, we only have a handful of years of developing real-time hardware for RT/PT, while we have close to 25 years of shader development. Maybe a few years from now, GPUs will have RT units that are much more efficient than current ones, so GPUs won't struggle as much with these loads.
I don't think you are following my point. In the past the CPU issued draw calls at the kind of granularity you are describing, but on console, since the PS4 generation, draw calls have become flow-controlled within the compute shader - a shader that has full access to the unified memory holding all the resources it needs to derive what to render for that frame - and on the PS5 it can do the same for PT/RT. So the draw calls don't really exist in the client/server paradigm you are referencing. The GPU is working more like an SPU satellite processor, so the CPU has no more low-latency work to do for PT/RT, IMO.
> But because there is a need to render more objects that are outside the player view frustum, that means more draw calls that the CPU has to process per frame.
I'm not sure we're talking about the same thing here - but there are no added 'classical' draw calls for objects off screen. RT computes those bounces against whatever has been submitted into the acceleration structure.
> Considering the limitations of advancements in node shrinks and significant performance jumps between architectures, I think next gen they might increase the power-draw limits to get higher clocks.
The only way I see them moving above 235W is if they can do a revision within the first 18 months that comes back to 235W, and I just can't see that happening with a single monolithic die.
How this will fit into the green initiatives and policies is the question. I would certainly like them to push out ambitious/capable hardware, even if it pushes heat/power-draw trends for consoles.
> I have a feeling that the PS6 will use a console-like modified 3D cache APU
> Some features missing, and some modified ones for the PS6, like the current PS5 APU
One way I can see 3D cache working is if Sony did something similar to Intel's Arrow Lake / Core Ultra 9 285K.
But with the base tile housing the L3 cache on a cheaper 5nm node, while the CCD would be on 2nm and hold everything up to the L2 cache - similar to the image below, where the CCD is 3D-stacked on top of the base tile.
[image: CCD stacked on top of the base tile]
This should allow for larger amounts of L3 cache at a lower cost.
The only issue Sony would face is whether mass 3D stacking via micro-bump is viable.
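A back-of-envelope on why parking the L3 on an older node could be cheaper - all numbers below are made-up placeholders purely to show the shape of the trade-off, not real foundry pricing:

```python
# Assumed, illustrative cost-per-mm^2 figures - NOT real foundry prices.
cost_per_mm2 = {"2nm": 0.35, "5nm": 0.12}   # placeholder $/mm^2
l3_area_mm2 = 36                             # placeholder SRAM block area

all_2nm = l3_area_mm2 * cost_per_mm2["2nm"]
split = l3_area_mm2 * cost_per_mm2["5nm"]    # L3 moved to the base tile
print(f"L3 on 2nm: ${all_2nm:.2f}  L3 on 5nm base tile: ${split:.2f}")
print(f"Saving per chip: ${all_2nm - split:.2f}")  # before bonding costs
```

SRAM also barely shrinks on leading-edge nodes, which strengthens the case - but as the post notes, any saving has to beat the added cost and yield risk of the micro-bump stacking itself.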
> Sony won't use 3D, with the costs of making chips increasing non-stop.
> They just need the new Zen 6 architecture.
> Just take a 9600X and compare it to a 5800X3D - the newer architecture is better in most games.
I would question exactly what extra we need from a console CPU over the current Zen 2 mobile parts, and what it is worth taking die space from the GPU going forward - other than a higher base clock on the primary CPU core to alleviate single-core bottlenecks.