Wait and see :) Well, sooner or later raw speed will compensate for any kind of customization inside the PS5 SSD.
Edit: didn't see the black part. Well, if I remember correctly, it was stated in the conference.
Finally, everything clears up.
Hard case....
Please list all the contradictions point by point and we will find out who is right and who is wrong.
New David Cage game confirmed.
So the XSX operates at 60% efficiency when it's drinking urine.
Let's start with the fact that before this we did not discuss the issue of total bandwidth usage between the CPU and GPU...
Why? Everything you posted is flat-out wrong for an APU with shared bandwidth; it's not a GPU sharing bandwidth between pools, so where do you start?
The CPU needs to read memory to tell the GPU what to do, so there's constant contention all the time. A GPU sharing bandwidth between its own pools is totally different.
Let's start with the fact that before this we did not discuss the issue of total bandwidth usage between the CPU and GPU...
You only bring that up now, having kept this information in mind while I was talking about sharing bandwidth between the two pools.
For many, representations based on their faith seem preferable and believable.
I mean, it's not really hard to comprehend. Just imagine 10 or 20 GB on the same 320-bit bus. Would it somehow magically be faster or slower just because the capacity of the ICs is different?
No, of course not.
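To spell out why (a little sketch of my own, nothing official): peak bandwidth is just bus width times per-pin data rate, so capacity never appears in the formula.

```python
# Peak GDDR6 bandwidth depends on bus width and per-pin data rate only;
# the capacity of the chips hanging off the bus never enters the formula.
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin  # bits -> bytes, times data rate

# 10 GB or 20 GB on the same 320-bit bus at 14 Gbps: identical result.
print(peak_bandwidth_gbs(320, 14))  # 560.0 GB/s either way
```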
That graphic is so stupid
It's much bigger than expected. PS4 vs. Xbone sales in the US are something like 53:47, and this being a US poll, it's pretty surprising.
I do not need in-depth knowledge here and now. I do not need to defend a dissertation... In any case, I answered your complaints.
Go read the ResetEra thread on next gen; they're talking about relative L3 cache and CPU write-back and how it could also affect bandwidth, now that the bandwidth discussion has been put to bed. So the XSX might have a better design yet, so don't sweat it.
I can't be arsed; you have your agenda and just don't want to learn.
It should, yes. Fluidity is the baseline of interactivity, not visual fidelity at this point.
PS5 Cosmic Horror Quantum Error Targeting 4K, 60FPS with 'Beautiful' Raytracing (pushsquare.com)
Cosmic Horror FPS Quantum Error to Run at 4K@60FPS on PS5 With Full Ray Tracing (wccftech.com): "Quantum Error, one of the first independent games announced for PS5, has been confirmed to be targeting 4K@60FPS with 'full ray tracing'."
60fps should be the standard this upcoming console generation. If a developer wants to push eye-candy visuals at 30fps, at least give us the option to lower the resolution in order to achieve 60fps, aka a "performance mode".
Orphan Of The Machine gameplay video from Dynamic Voltage Games.
Coming to Xbox Series X at 4K 120fps.
Anyone think both these consoles are going to cost $600+?
99% no. Consoles are not PCs. Maybe the PS3 was pricey, but it was sold at a huge loss of 200 dollars instead of pumping the price even higher.
Damn, I thought that was an Ecco remake there.
Technically, I don't see any reason why every game can't have a 60fps mode come next gen. Just drop the resolution and/or use dynamic resolution scaling. The CPU and GPU have enough grunt to make this possible. Even if devs had to drop down to 1080p, I doubt people would mind, given the better frame rate. Not adding a 60fps mode is either straight-up laziness or limited development time to optimize for such a mode.
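For anyone wondering what dynamic resolution scaling actually does, here's a minimal sketch (my own toy Python, not any engine's real API) of the per-frame feedback loop, assuming the game is GPU-bound so frame cost scales roughly with pixel count:

```python
# Minimal dynamic-resolution controller: shrink the render resolution when a
# frame runs over budget, grow it back when there's headroom.
TARGET_MS = 1000 / 60            # 16.67 ms budget for 60 fps
MIN_SCALE, MAX_SCALE = 0.5, 1.0  # e.g. 1080p..2160p vertical resolution

def update_scale(scale: float, last_frame_ms: float) -> float:
    # How far over/under budget were we? Adjust pixel area proportionally.
    error = last_frame_ms / TARGET_MS
    new_scale = scale / error ** 0.5  # sqrt: the scale acts on both axes
    return max(MIN_SCALE, min(MAX_SCALE, new_scale))

scale = 1.0
for frame_ms in [22.0, 19.0, 16.0, 14.0]:  # hypothetical GPU frame timings
    scale = update_scale(scale, frame_ms)
    print(f"render at {int(2160 * scale)}p")
```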
Xbox wins. Sorry, Naughty Dog, you cannot do anything similar... hahaha, oh the irony.
Because bumping a game up to 60fps will make the textures worse, and every game designer at the studio will go haywire since gamers won't be able to notice the ingredients list on a rusty soup can on the ground.
Devs at least had a legit excuse for why they couldn't do this on the current-gen consoles: the weak-ass CPUs.
Never mind a solid 60 fps. Some games can't even hold 30.
Funny how most console games don't have performance/resolution options (some games give you 2-3 choices), yet on PC they have no problem finding the resources to make a million configs work, with 20-30 sliders to fiddle with so a gamer can tune performance.
A console game might have visual and audio settings, yet they don't do anything for performance unless there are dedicated perf/res modes.
I don't know about consoles never hitting that peak performance. My GPU runs at 1950 MHz at all times even though Nvidia says its boost clock is only 1750 MHz. My Pro runs so hot there is no way devs like ND and SSM are letting precious clock cycles go free.
I think next gen is going to be very interesting, because even in the worst-case scenario, where the audio chip and SSD don't help make up the resolution gap, shorter load times and better audio will give each and every PS5 port several distinct features and advantages.
This is the first time I'm hearing about the L2/L3 cache bandwidth advantage. Where was this mentioned? Also, is the faster rasterization going to make up a 17% gap in TFLOPS? The PS5 only has 22% higher clock speeds, while the XSX has 44% more CUs; I don't see how faster clocks on 16 fewer CUs will make a dent.
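For reference, the TFLOPS arithmetic checks out with the publicly stated specs (36 CUs at up to 2.23 GHz for PS5, 52 CUs at 1.825 GHz for XSX); a quick sketch:

```python
# FP32 TFLOPS for an RDNA-class GPU: CUs * 64 lanes * 2 ops (FMA) * clock (GHz).
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

ps5 = tflops(36, 2.23)    # ~10.28 TF (PS5's clock is "up to" 2.23 GHz)
xsx = tflops(52, 1.825)   # ~12.15 TF
print(f"PS5 {ps5:.2f} TF, XSX {xsx:.2f} TF, gap {xsx / ps5 - 1:.0%}")
# clocks: 2.23/1.825 ~ 22% higher; CUs: 52/36 ~ 44% more; net gap ~ 18%
```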
Orphan Of The Machine gameplay video from Dynamic Voltage Games.
Coming to Xbox Series X at 4K 120fps.
I know these guys are top-tier devs who think the PS5 is unbalanced, after all its SSD will be slowed down significantly by its CPU, and I know Orphan of the Machine is going to sweep all the best-graphics awards later this year, but I really wish they had spent as much time on sound as they did on graphics... Orphan of the Machine's sound effects come straight out of a '90s MS-DOS game.
Physics and 4K 60fps.
I told folks we would be getting lots of 4K 60fps games next gen... Expect just the same from SWWS.
Maybe it's just this game, but higher framerates seem to be a general target this gen. Fucking finally.
Finally, a game with physics...
They actually had to patch AC and remove some NPCs to help the PS4.
Someone has missed the sarcasm in the obviously false statement that the XBone was the superior console. That said, it did have a CPU advantage: they both had the same processor, but the Xbox ran at a higher clock.
Physics, AI, and sound should all be huge next gen... We should definitely be excited. Ohhh, textures and load times as well!
Crysis 1 was the last game to do this.
Orphan Of The Machine gameplay video from Dynamic Voltage Games.
Coming to Xbox Series X at 4K 120fps.
Both will work similarly; the question is whether the PS5 can use its SSD bandwidth and geometry engine to match the XSX.
Why does this developer keep getting so much attention?
I would be shocked if this didn't hit 4K/120fps on XB1X let alone XBSX.
Since everyone is posting 'real devs'...
Let me put this bluntly - the memory configuration on the Series X is sub-optimal.
I understand there are rumours that the SX had 24 GB or 20 GB at some point early in its design process, but the credible leaks have always pointed to 16 GB, which means that, if that was ever the case, it was very early on in the development of the console. So what are we (and developers) stuck with? 16 GB of GDDR6 at 14 Gbps connected to a 320-bit bus (that's 5 x 64-bit memory controllers).
Microsoft is touting the 10 GB @ 560 GB/s and 6 GB @ 336 GB/s asymmetric configuration as a bonus but it's sort-of not. We've had this specific situation at least once before in the form of the NVidia GTX 650 Ti and a similar situation in the form of the 660 Ti. Both of those cards suffered from an asymmetrical configuration, affecting memory once the "symmetrical" portion of the interface was "full".
Now, you may be asking what I mean by "full". Well, it comes down to two things. The first is that, contrary to what some commentators might believe, the maximum bandwidth of the interface is limited by the 320-bit memory controllers and the matching interface of the GDDR6 memory: 10 chips x 32 bits per chip x 14 Gbps per pin.
That means the maximum theoretical bandwidth is 560 GB/s, not 896 GB/s (560 + 336). Secondly, memory has to be interleaved in order to function on a given clock timing and improve the parallelism of the configuration. Interleaving is why you don't get a single 16 GB RAM chip; instead we get multiple 1 GB or 2 GB chips, because it's vastly more efficient. HBM is a different story, because the dies are stacked in parallel with multiple channels per die, and different frequencies can be run across each chip in a stack, unlike DDR/GDDR, where all chips have to run at the same frequency.
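The arithmetic in that paragraph, made concrete (a sketch using only the figures stated above):

```python
# Peak bandwidth = chips * data pins per chip * per-pin rate (Gbps) / 8, in GB/s.
def gddr6_peak_gbs(chips: int, pins: int = 32, gbps: float = 14.0) -> float:
    return chips * pins * gbps / 8

print(gddr6_peak_gbs(10))  # 560.0 GB/s -- all ten chips, the 320-bit ceiling
print(gddr6_peak_gbs(6))   # 336.0 GB/s -- only the six 2 GB chips (192-bit)
# 560 + 336 = 896 GB/s is not a real figure: the bus is still 320 bits wide.
```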
However, what this means is that you need address space symmetry in order to have interleaving of the RAM, i.e. you need all your chips presenting the same "capacity" of memory for it to work. Looking at the diagram in the original post, you can see the SX's configuration: the first 1 GB of each RAM chip is interleaved across the entire 320-bit memory interface, giving rise to 10 GB operating with a bandwidth of 560 GB/s. But what about the other 6 GB of RAM?
Those two banks of three chips either side of the processor house 2 GB per chip. How does that extra 1 GB get accessed? It can't be accessed at the same time as the first 1 GB, because the memory interface is saturated. What happens instead is that the memory controller must "switch" to the interleaved addressable space covered by those 6 x 1 GB portions. This means that, for the 6 GB of "slower" memory (in reality it's not slower, just less wide), the memory interface must issue the access on a separate clock cycle if that portion is to be accessed at the full width of its available bus.
The fallout of this can be quite complicated, depending on how Microsoft has worked out its memory bus architecture. It could be a complete "switch", whereby on one clock cycle the memory interface uses the interleaved 10 GB portion and on the following clock cycle it accesses the 6 GB portion. This implementation would have the effect of averaging the effective bandwidth across all the memory: averaged out, you get 392 GB/s for the 10 GB portion and 168 GB/s for the 6 GB portion over a given time frame, though individual cycles would still run at their full bandwidth.
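Worked through under one reading of that "switch" scenario (my own sketch; I'm assuming the four chips not involved in the upper 6 GB can keep serving the lower portion on the off cycle, which is where the 392/168 figures come from):

```python
# Alternate-cycle model: cycle A hits the 10 GB interleave on the full 320-bit
# bus; cycle B hits the upper 6 GB on 192 bits while the remaining 128 bits
# (four chips) keep serving the lower portion.
cycle_a = {"low": 560, "high": 0}    # GB/s reaching each portion on cycle A
cycle_b = {"low": 224, "high": 336}  # GB/s on cycle B (4 chips + 6 chips)

avg_low = (cycle_a["low"] + cycle_b["low"]) / 2     # 392.0 -> 10 GB portion
avg_high = (cycle_a["high"] + cycle_b["high"]) / 2  # 168.0 -> 6 GB portion
print(avg_low, avg_high, avg_low + avg_high)  # 392.0 168.0 560.0 (never > 560)
```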
However, there is another scenario with memory being assigned to each portion based on availability. In this configuration, the memory bandwidth (and access) is dependent on how much RAM is in use. Below 10 GB, the RAM will always operate at 560 GB/s. Above 10 GB utilisation, the memory interface must start switching or splitting the access to the memory portions. I don't know if it's technically possible to actually access two different interleaved portions of memory simultaneously by using the two 16-bit channels of the GDDR6 chip but if it were (and the standard appears to allow for it), you'd end up with the same memory bandwidths as the "averaged" scenario mentioned above.
If Microsoft were able to simultaneously access and decouple individual chips from the interleaved portions of memory through their memory controller then you could theoretically push the access to an asymmetric balance, being able to switch between a pure 560 GB/s for 10 GB RAM and a mixed 224 GB/s from 4 GB of that same portion and the full 336 GB/s of the 6 GB portion (also pictured above). This seems unlikely to my understanding of how things work and undesirable from a technical standpoint in terms of game memory access and also architecture design.
In comparison, the PS5 has a static 448 GB/s of bandwidth for the entire 16 GB of GDDR6 (also operating at 14 Gbps, across a 256-bit interface). Yes, the SX has 2.5 GB reserved for system functions, and we don't know how much the PS5 reserves for similar functionality, but it doesn't matter: the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation before it has to start "lowering" the effective bandwidth of the memory below that of the PS5, or the SX has an averaged mixed memory bandwidth that is always below that of the baseline PS5. The first option puts the SX at a disadvantage to the PS5 in more memory-intensive games, and the latter puts it at a disadvantage all of the time.
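Putting the post's own numbers side by side (a sketch; the 2.5 GB OS reservation and where it lives are the post's assumptions, not confirmed figures):

```python
# PS5: symmetric pool -- 8 chips x 32 pins x 14 Gbps on a 256-bit bus.
ps5_gbs = 8 * 32 * 14 / 8                 # 448.0 GB/s, flat, for all 16 GB

# XSX, per the scenarios described above:
xsx_fast = 560       # GB/s while game data fits in the interleaved 10 GB
xsx_avg_fast = 392   # GB/s for the 10 GB portion under permanent switching
xsx_avg_slow = 168   # GB/s for the 6 GB portion under permanent switching

print(xsx_fast > ps5_gbs)      # True: best case, the XSX is ahead
print(xsx_avg_fast < ps5_gbs)  # True: averaged case, the XSX trails the PS5
```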
Analyse This: The Next Gen Consoles (Part 9) [UPDATED] (hole-in-my-head.blogspot.com): "So, the Xbox Series X is mostly unveiled at this point, with only some questions regarding audio implementation and underlying graphics..."
I do not understand why some people still compare consoles with PC components. On paper you can compare, but not in reality! Don't be surprised if the PS5 and XSX get better performance in new next-gen games than the 2080 Ti, and I'm not exaggerating when I say that. Remember, consoles are totally different from a PC, and the specifications of the new consoles are extremely powerful!
Another problem: where's that concern about 4K 60fps?! All games will run smoothly at 4K 60fps (this will be the standard).
4K 60fps will not be a standard on consoles, for the simple reason that AAA games want to show more detail on screen; of course, some of them will have 60fps.
But once they start to use RT in many places, 4K 30fps will be more common.
No, the optimization is better on consoles, don't forget that.
It's not about the optimization; it's about the way developers, or even the business people, want to present something.
And even now there are new tools like DLSS that can increase the FPS; it can double the FPS with RT on. RT will evolve and will not be a performance killer like it was in the first year of its life.