So, good or bad news if true ?
Not surprised here at all since MS has said that they want to break the "generations" concept.
So, good or bad news if true ?
The only thing I would add is that the GPU is locked off from accessing the 6GB/336GB/s pool and can ONLY see the 10GB pool. It always has access to the lower 1GB address space of the 10 memory controllers regardless of what the CPU is doing. The CPU has access to all 16GB, both the upper and lower addresses, but devs are encouraged to use the upper 1GB addresses for CPU, audio and OS storage. The distribution of access and usage isn't uniform, but mostly your example is about right.
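For anyone trying to picture where the 560GB/s and 336GB/s figures come from, here's a rough back-of-the-envelope sketch (Python, purely illustrative). It assumes the publicly stated layout of ten 32-bit GDDR6 channels at 14Gbps; the variable names are mine, not anything from MS.

```python
# Back-of-the-envelope sketch of the Series X memory split.
# Assumes ten 32-bit GDDR6 channels at 14 Gbps, as publicly stated; names are illustrative only.

GBPS_PER_PIN = 14          # GDDR6 data rate per pin (Gbps)
PINS_PER_CHANNEL = 32      # one chip per 32-bit channel
TOTAL_CHANNELS = 10        # 320-bit bus in total

per_channel = GBPS_PER_PIN * PINS_PER_CHANNEL / 8   # 56 GB/s per chip/channel

fast_pool = 10 * per_channel   # all ten channels interleaved -> 560 GB/s ("GPU optimal" 10 GB)
slow_pool = 6 * per_channel    # only the six 2 GB chips hold the upper 6 GB -> 336 GB/s

print(fast_pool, slow_pool)    # 560.0 336.0
```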
Would you mind sharing your thoughts on this?
Doesn't stop the next gen from having new features like being able to suspend more than one game.
So, good or bad news if true ?
For me, it's the same with loading screens, teleporting and other screens; that's time I'd rather spend playing or doing other stuff. The PS5 SSD is a huge deal for me personally; for me it is more important than a few more pixels.
While everyone in here is talking about "wide and slow" and "narrow and fast", I am now looking at a paltry 1GB patch for The Division 2... being copied for the past 45 minutes on top of the original game. Now THIS is something that will be a thing of the past on the new consoles, and I am happy with just THAT.
That was all.
I will be extremely disappointed if the Xbox UI is not updated
Yup, SSDs will provide quality of life improvements for both devs and end users. UI stuff will be night and day compared to last gen.
While everyone in here is talking about "wide and slow" and "narrow and fast", I am now looking at a paltry 1GB patch for The Division 2... being copied for the past 45 minutes on top of the original game. Now THIS is something that will be a thing of the past on the new consoles, and I am happy with just THAT.
That was all.
and did you read the other source too? (the first spoiler)
The addressing model is usually just a stride over some fixed amounts of 64/128B lines.
So it doesn't really matter in my calculations.
I was emphasizing what's different in the current XBSX design from the "usual" ones that are symmetric.
All the strides/address switching problems exist in the symmetric ones too.
Lady Gaia's claims are pretty obvious too.
I just don't want to use the language of "reducing the 320-bit bus to 192-bit", but you can look at it that way.
When the "second" part of the "bigger" chips is accessed you cannot access the "small" chips at all.
That's implied in my calculations.
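To make the "stride over fixed lines" point concrete, here's a toy illustration of address interleaving (Python). The 256-byte granularity and the function are my own assumptions for the example, not the actual XSX mapping; the point is just that peak bandwidth scales with how many channels an address range is striped across.

```python
# Toy model of striding addresses across memory channels.
# The 256 B granularity is an assumption for illustration, not the real XSX scheme.

STRIDE = 256  # bytes handled by one channel before moving to the next

def channel_for(addr: int, num_channels: int) -> int:
    """Which channel services this address under simple round-robin interleaving."""
    return (addr // STRIDE) % num_channels

# A range striped across 10 channels can keep 10 chips busy at once (560 GB/s peak);
# a range striped across only 6 channels tops out at 6 chips (336 GB/s peak).
for addr in range(0, 8 * STRIDE, STRIDE):
    print(hex(addr), channel_for(addr, 10), channel_for(addr, 6))
```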
MS updates the UI throughout the life of its consoles anyway, so there's that for you.
I will be extremely disappointed if the Xbox UI is not updated
So, good or bad news if true ?
anyone who actually thinks higher clocks are in any meaningful way better than slower clocks with more CUs is just as delusional as the 13TF crowd was.
the Series X is more powerful in every way aside from the SSD speed, get over it.
higher clocks will not result in better graphics than the much wider GPU; we only have to look at PC hardware tests to see exactly this. wider cards perform noticeably better, and overclocking a narrower card gets you only so much performance gain.
and the fact that some even try to spin the dynamic clock rates as something positive is more than ridiculous.
the only reason the PS5 has dynamic clock rates is because the hardware is not able to run at full clocks and full load, how the fuck is that good?
changing clocks like that is only done to squeeze a bit more performance out of a system that needs to be as cheap as possible and to look better on a spec sheet.
the only good thing about them is that it gets more performance out of the chip they have, that's it, that's all that's positive about it. it's a necessary evil basically
realistically speaking, the PS5 will most likely never run at its full clocks on both ends, because if it could, these changing clocks wouldn't be needed. but they are needed. why? because the system can't reliably run at the highest clocks.
what this means is GPU intensive games will need to downclock the CPU in order to make sure the GPU is having no issues.
and CPU intensive games will need to downclock the GPU for the same reason.
this will most likely not be an issue with launch window titles since those will still be developed to run on Jaguar CPUs as well, but as open world games get more complex, and more and more advanced AI and physics get used, the CPU will be taxed more and more, meaning that the GPU will most likely be downclocked.
the Series X has both a higher clocked CPU and a more capable GPU and better RAM, meaning that when games come around that take full advantage of high end PC hardware and then get ported to console, they will run and look better on Series X; no clock speed or SSD speed advantage will change that.
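For reference, the raw compute numbers both sides keep throwing around fall straight out of CU count × clock; a quick sketch using the officially quoted figures (the 64 lanes × 2 ops per CU per clock is standard RDNA, everything else here is just arithmetic):

```python
# FP32 throughput = CUs * 64 shader lanes * 2 ops (FMA) per clock.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops(52, 1.825))  # Series X: ~12.15 TF at a fixed clock
print(tflops(36, 2.23))   # PS5: ~10.28 TF at its maximum boost clock
```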
Image broken, not showing
So, good or bad news if true ?
Well, Xbox gets regular UI updates, so the UI staying the same at launch time is not a major issue.
That is a "mistake" in my opinion. Even though I understand that they want all users to have the same experience (for brand consistency), a new, cool UI/Dashboard could have been another good incentive for current users to upgrade.
In any event, it looks quite clean.
I will be a bit disappointed if the PS5 does not have a totally new UI. Not because I do not like the current one, but because I am tired of using it. 7 years is enough
Image broken, not showing
Bad. For me at least.
So, good or bad news if true ?
The addressing model is usually just a stride over some fixed amounts of 64/128B lines.
So it doesn't really matter in my calculations.
I was emphasizing what's different in the current XBSX design from the "usual" ones that are symmetric.
All the strides/address switching problems exist in the symmetric ones too.
Lady Gaia's claims are pretty obvious too.
I just don't want to use the language of "reducing the 320-bit bus to 192-bit", but you can look at it that way.
When the "second" part of the "bigger" chips is accessed you cannot access the "small" chips at all.
That's implied in my calculations.
Do you really think the XSX can hold its CPU and GPU clocks under heavy CPU/GPU load at the same time? No, it won't.
Once the fan can't sustain the temps anymore, the heat would be too much and the box would shut down in the end. The point is that this will never happen in games, and the reason is that games will _never_ draw max CPU and GPU resources at the same time.
The big difference between your and Lady Gaia's calculations is the GB/s for everything that is not GPU bound, that's the CPU plus anything else.
She used 48 GB/s, you used much less. Is her estimate based on PC CPUs in general?
I must admit, I always seem to agree with her statements... and next gen games will do a lot more than Jaguar, that's for sure, and probably also at 60 FPS.
I do not believe in high CPU usage in gaming. CPU should be used only for "bad code". I.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, by the slightly snarky remarks on AVX by Cerny, he doesn't believe in CPU too much too.
thanks.
Some information from Playstation Mag UK :
- Godfall is a PS5 console exclusive, no PS4 or cross gen. Tailored to run on PS5.
- PS5 allows Godfall to feel and play like no other game thanks to PS5 CPU and GPU.
- Made by a team of 75 people.
- Monster Hunter World in terms of gameplay, with elements of Dark Souls in combat.
- Some ex-Destiny 2 team members are involved.
- The game rewards aggressive play. Skill based combat based on timing in order to hit max damage.
- Visual style and world building influenced by The Stormlight Archive, The First Law and Foundation series.
- Positives around animation.
- High fantasy setting divided into Earth, Air, Fire and Spirit Elements.
- You are one of the last remaining of the Knight's Order, tasked with stopping an apocalyptic event.
- At the start of the game you pick a class based on type of armour; 3 sets to pick from. A lot of customization to unlock as you progress.
- Bosses designed to repel multiple people at once, i.e. in co-op bosses can take both of you out.
- Game based around drop in and drop out gameplay like Destiny and Monster Hunter World with heavy Dark Souls Influence.
So you think total access for non-GPU assets will be under 30 GB/s?
Don't see how that has anything to do with AVX code?
A CPU is more like a workhorse. It's designed for broader appeal (general purpose) instead of specializing in single tasks. It can do ANYTHING you throw at it, even real-time raytracing, but since it's not specialized, it takes its time. What's absolutely crucial, though, is that it's there. NOTHING would work without it. It's pushing out jobs to other, more specialized hardware (GPU etc.) all the time. That way it's both the heart & brain of any system.
I do not believe in high CPU usage in gaming. CPU should be used only for "bad code". I.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, by the slightly snarky remarks on AVX by Cerny, he doesn't believe in CPU too much too.
You could be correct, but surely if that were the case they would have officially had DF clear up the confusion to control the message as a definite win, no?
My guess is that they messed up and pulled the trigger on 20GB unified at 560GB/s for both CPU and GPU access. Then they found signal integrity issues in late testing when pushing for 12TF, and were forced to choose between a slower GPU clock with 20GB, asymmetric RAM, or a noisy console, for heat reasons, to maintain integrity. The asymmetric RAM solution they've chosen, AFAIK, is that in any one data clock they can access the 6GB of memory at 336GB/s from the GPU or the CPU, but not both sharing a single data clock. And the CPU can access the 10GB at 336GB/s exclusively, or the GPU can access it at 560GB/s exclusively, in a single data clock.
Those two banks of three chips either side of the processor house 2GB per chip. How does that extra 1GB get accessed? It can't be accessed at the same time as the first 1GB because the memory interface is saturated. What happens instead is that the memory controller must "switch" to the interleaved addressable space covered by those 6x 1GB portions. This means that, for the 6GB of "slower" memory (in reality it's not slower, just less wide), the memory interface must address it on separate clock cycles if it is to be accessed at the full width of the available bus.
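If that "switch on a separate clock cycle" description is right, the effective bandwidth works out as a time-weighted average of the two modes. A hedged sketch of that model (the fraction of time spent in the narrow mode is the free variable, and whether the whole bus really switches is exactly what's being debated):

```python
# Time-slice model of the asymmetric bus, assuming the whole bus switches modes.
def effective_bandwidth(slow_fraction: float,
                        fast_gbps: float = 560.0,
                        slow_gbps: float = 336.0) -> float:
    """Average bandwidth if `slow_fraction` of cycles address the 6 GB range."""
    return slow_fraction * slow_gbps + (1 - slow_fraction) * fast_gbps

for f in (0.0, 0.1, 0.25, 0.5):
    print(f, effective_bandwidth(f))   # 560.0, 537.6, 504.0, 448.0
```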
The contention I mentioned has to do with access between pools: it can't be simultaneous. I'm rather curious how it all works and have seen a few comments on Era from technically inclined members, which is why I brought it up. Anyway, you can check the relevant bits above, which go into more detail about it. psorcerer's conclusion is similar to Lady Gaia's.
Yup... bandwidth reduces for both systems; the difference is that the PS5 will access its CPU, sound and other non-GPU data at the same 448GB/s, and hence it takes less time away from the GPU's needs.
Lady Gaia explains it nicely: she assumes a typical CPU bandwidth requirement of 48GB/s, in constant use in the slower-access RAM, as that's the way code runs; the CPU runs code, the GPU displays what it's told.
Funnily enough, taking the CPU access out, it leaves about 39GB/s of GPU bandwidth per TF for both... strange that... I am sure MS and Sony know what they are doing, and that is not a coincidence. But it could also be a leveller for both systems on big asset games...
Both are equally bandwidth limited and RAM limited IMO. Maybe one of them will splash out and upgrade as a last-minute move, Sony with 16Gbps chips or MS with more RAM, to feed the wider bus properly...
Both Sony and MS have chosen compromises based on RAM costs. Neither is ideal.
Hence why I think the RDNA2 silicon is more expensive than everyone thinks, as both have made big compromises on costs... and we are not seeing $399...
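The "~39 GB/s per TF for both" figure does check out under Lady Gaia's assumptions, i.e. 48 GB/s of non-GPU traffic and (the debated part) the whole Series X bus being tied up while the slow range is accessed. Quick sketch of the arithmetic:

```python
# Per-TF bandwidth comparison, assuming Lady Gaia's 48 GB/s of non-GPU traffic.
CPU_GBPS = 48.0

# PS5: one unified 448 GB/s pool, so the GPU keeps whatever the CPU doesn't use.
ps5_gpu = 448.0 - CPU_GBPS                 # 400 GB/s
print(ps5_gpu / 10.28)                     # ~38.9 GB/s per TF

# Series X: 48 GB/s served from the 336 GB/s range eats 48/336 of total bus time,
# assuming (debated!) that the full bus is blocked during those accesses.
xsx_gpu = (1 - CPU_GBPS / 336.0) * 560.0   # ~480 GB/s
print(xsx_gpu / 12.15)                     # ~39.5 GB/s per TF
```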
I would expect CPU usage in next gen games to be vastly different to current gen games. Once the baseline is a 16 thread CPU with solid IPC and clocks, developers will find ways to keep it busy: smarter AI, more complex physics simulations, better animation systems or even advanced audio effects. CPUs are not only used for "bad coding", they are used for branchy code where parallelisation is not effective. CPU code can be as optimized as GPU code; else there would not be any need for SIMD/AVX in the first place.
I do not believe in high CPU usage in gaming. CPU should be used only for "bad code". I.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, by the slightly snarky remarks on AVX by Cerny, he doesn't believe in CPU too much too.
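To illustrate the branchy vs. data-parallel distinction with a toy example (mine, nothing to do with either console's toolchain): the first function is the kind of dependent, branch-heavy logic that stays on the CPU; the second is uniform per-element math, which is what SIMD/AVX, or a GPU, chews through.

```python
import numpy as np

def branchy_ai_update(entities: list) -> None:
    """Branch-heavy, data-dependent logic: classic CPU territory, hard to vectorize."""
    for e in entities:
        if e["alerted"]:
            e["state"] = "chase" if e["distance"] < 10 else "search"
        elif e["distance"] < 5:
            e["alerted"] = True

def vectorized_physics_step(positions: np.ndarray, velocities: np.ndarray, dt: float) -> np.ndarray:
    """Uniform per-element math: the same operation on every element, SIMD-friendly."""
    return positions + velocities * dt
```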
I think they planned to be able to support mixed memory quantities from the beginning.
According to MS they started with 12 TF, and worked from there. Given early predictions from AMD and memory manufacturers on performance and memory clocks, that will have set an early start point for bus width. Then cost becomes a factor, as does reliability in data transfer physics and stuff.
The only statement that MS made about CPU / IO / audio was that it was at 336GB/s maximum wherever it was. They didn't say that accessing the 10GB of GPU-optimal RAM from these components would block all other access by the GPU. Even if CPU accesses to the 10GB did limit the channels being accessed to 3/5 of peak speed (you'd be much better off running at full speed and caching somewhere before the CPU), it would seem pretty extreme if all other channels not doing anything related to CPU/IO/audio were similarly limited or blocked entirely.
I can't see why the memory would be "interleaved" in the way you're describing.
Memory accesses take tens or, more often (on a GPU), hundreds of cycles. You put in a request and wait to get the result back. And there's a turnaround penalty for DRAM: it can only read or write at any one time, and you have to wait for all currently "in flight" reads/writes on a channel to complete before it can change from read to write or vice versa.
I think it might be possible to read from different sections of the address range simultaneously, if the memory controllers have been built with that in mind.
Note that I'm not saying you can read from both the "optimal" and "normal" ranges of a single 2GByte chip at the same time*, just that you can do so at the same speed, on a per chip / channel /sub channel (if the controller has them*) basis.
(*maybe a 64-bit channel is further subdivided into, say 2 32-bit sub channels like the X1X).
You can only access the "slower" 6GB across 3 channels. You can access the faster 10GB across 5. Obviously, any channel accessing the 6 GB can't also be accessing the 10 GB.
But what *I* think is that even if the three channels are accessing the slower 6 GB, that still leaves 2 channels connected only to the 4 x 1 GB memory chips that might be able to continue working. That is, *if* there are jobs they can make good on in the memory connected across those other 128 bits / 2 channels.
And remember, most cases won't have the CPU accessing all 3 channels across that 192-bit, three-channel range at once. And the rest of the system has to keep working. So I really don't think it's "all or nothing" between the two ranges. I think it all depends on which channels are needed for which accesses. I don't think there's a hard and fast split in the way most people are trying to describe.
It's simply not efficient in terms of power, cost, area, latency, throughput .... anything.
Lady Gaia might be right, but I've seen nothing that states that accessing the slow 6GB (over their 192-bit bus) disables the channels connecting to the remaining 4GB of memory, or that channels connected to the 2GB chips can't access fast or slow ranges independently, if access patterns permit it.
If hitting one memory channel to access 64Bytes of data for the CPU knocked out the other four channels for the duration of that access .... that would be frikkin' crazy!
I would expect access to the slower areas to cockblock more GPU access than you might like, but a complete shutdown of anything accessing the "optimal" memory is too much.
It's possible I guess but I really don't expect it. MS have been extensively profiling access patterns for years now. Would be interesting to hear Lady Gaia's thoughts on this, but I really don't like the look of ResetEra.
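To put rough numbers on that "it depends which channels are busy" argument: a sketch of my own model (not anything MS has described) where, during slow-range accesses, the two 64-bit channels wired only to 1 GB chips stay available to the GPU instead of the whole bus stalling.

```python
# "Not all or nothing" model: while the three 64-bit channels wired to the 2 GB chips
# serve the slow 6 GB range, the two channels wired only to 1 GB chips may keep
# serving the GPU. Purely a model for the argument above, not the real controller.
PER_CHANNEL = 112.0   # GB/s per 64-bit channel at 14 Gbps (5 channels -> 560 GB/s)

def gpu_bandwidth(slow_fraction: float, channels_free_during_slow: int) -> float:
    full = 5 * PER_CHANNEL                                  # all five channels feeding the GPU
    during_slow = channels_free_during_slow * PER_CHANNEL   # what the GPU keeps in "slow" cycles
    return (1 - slow_fraction) * full + slow_fraction * during_slow

print(gpu_bandwidth(1/7, 0))   # all-or-nothing model: 480 GB/s
print(gpu_bandwidth(1/7, 2))   # the two 1 GB-chip channels stay usable: 512 GB/s
```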
I agree with you on price though. If I had to bet, I'd say $499. These are both shaping up to be really fantastic machines, and both Sony and MS have really pushed to deliver on the idea of a next gen system that can grow into the next few years.
It's not an overclock, the GPU was designed to run at that frequency
Overclocking 20%
Preview or magazine already out?
Some information from Playstation Mag UK :
- Godfall is a PS5 console exclusive, no PS4 or cross gen. Tailored to run on PS5.
- PS5 allows Godfall to feel and play like no other game thanks to PS5 CPU and GPU.
- Made by a team of 75 people.
- Monster Hunter World in terms of gameplay, with elements of Dark Souls in combat.
- Some ex-Destiny 2 team members are involved.
- The game rewards aggressive play. Skill based combat based on timing in order to hit max damage.
- Visual style and world building influenced by The Stormlight Archive, The First Law and Foundation series.
- Positives around animation.
- High fantasy setting divided into Earth, Air, Fire and Spirit Elements.
- You are one of the last remaining of the Knight's Order, tasked with stopping an apocalyptic event.
- At the start of the game you pick a class based on type of armour; 3 sets to pick from. A lot of customization to unlock as you progress.
- Bosses designed to repel multiple people at once, i.e. in co-op bosses can take both of you out.
- Game based around drop in and drop out gameplay like Destiny and Monster Hunter World with heavy Dark Souls Influence.
Can you elaborate on that? You mean bandwidth-wise or core utilization?
I do not believe in high CPU usage in gaming. CPU should be used only for "bad code". I.e. code that doesn't need to be fast, just needs to run somehow.
AFAIK, by the slightly snarky remarks on AVX by Cerny, he doesn't believe in CPU too much too.
I didn't write it, just copied it to ask for input.
I can't see why the memory would be "interleaved" in the way you're describing.
New controller looks pretty sweet.
A hint at the color of the PS5 design?
lol I like it
might be awesome to use but it looks hideous, looks like the PS5 could be white
might be awesome to use but it looks hideous, looks like the PS5 could be white
New controller looks pretty sweet.
Can someone edit this to make it look all black ?
There's a button in the middle at the bottom to turn it off. You can see it in the larger images on the PS blog.
Better battery, no share button (create button instead, so probably same type of shit).
Not a fan of the built in mic though, better be a way to turn it off.
Can someone edit this to make it look all black ?
"DualSense marks a radical departure from our previous controller offerings and captures just how strongly we feel about making a generational leap with PS5. The new controller, along with the many innovative features in PS5, will be transformative for games – continuing our mission at PlayStation to push the boundaries of play, now and in the future. To the PlayStation community, I truly want to thank you for sharing this exciting journey with us as we head toward PS5's launch in Holiday 2020. We look forward to sharing more information about PS5, including the console design, in the coming months."
– Jim Ryan, President & CEO, Sony Interactive Entertainment