> Off topic question: how do you timestamp a YouTube video and then post it here? Thanks in advance.

On the YT video, click the share button under the video, then check the "start at [time]" box. It'll add "?t=x" to the end of the URL, where x is the number of seconds.
> Off topic question: how do you timestamp a YouTube video and then post it here? Thanks in advance.

Right-click inside the video screen and use the timestamped video link option.
> So is Sony for PS4 and MS for Xbox One also paying RAD for Kraken, since current-gen machines are also able to use Kraken?

There is a statement about Sony passing licenses along. I assume Sony needed to pay RAD Game Tools, since Kraken is proprietary.
> So is Sony for PS4 and MS for Xbox One also paying RAD for Kraken, since current-gen machines are also able to use Kraken?

Oh, so the PS4 and Xbox One also have a dedicated hardware Kraken decompression chip?
> So is Sony for PS4 and MS for Xbox One also paying RAD for Kraken, since current-gen machines are also able to use Kraken?

Again: if a game uses SOFTWARE Kraken support, the game developer is paying RAD Game Tools. That's how third-party tools work. It's simple; I don't get why you don't like this. And by the way, they totally are like Dolby: you have to pay for Atmos encoding and decoding, and the same goes for DTS, or to use BT, or pretty much everything, even WiFi.
It's the game dev's choice: if they use Kraken compression, then they must make sure that decompression is also taken care of. Game development middleware is not like other technologies such as Dolby Atmos and so forth.
> Right click inside the video screen, timestamped video link.

Yes, you can also right-click the video and "copy video URL at current time". That's new to me.
Let me ask you this: do you think Sony is paying RAD for the PS4, as the PS4 also seems to decompress Kraken data without a dedicated unit?
> Oh, so the PS4 and Xbox One also have a dedicated hardware Kraken decompression chip?

There is no decompression algorithm implemented in hardware. It is hardware which can decompress such data faster, like dedicated ray-tracing hardware: the ray-tracing algorithm is not in the hardware, but the game engine provides the ray-tracing algorithm and the hardware is made to handle that algorithm faster.
I am talking about the PS5 implementing the decompression algorithm in hardware, as should have been obvious.
> Do you think MS is paying for their proprietary compression?

Don't bring fanboyism here. It's a question I wasn't even asking you in the first place.
Both consoles are using others' technology... FFS...
Move along, nothing to see here.
Some of the fanboy arguments in here take some beating...
Ah, but the "console A maybe buys it a dollar cheaper than console B" concern... YAWN!
> There is no decompression algorithm implemented in hardware. It is hardware which can decompress such data faster, like dedicated ray-tracing hardware: the ray-tracing algorithm is not in the hardware, but the game engine provides it and the hardware is made to handle it faster.

That's not how HW works. To create a piece of hardware that can decompress in hardware, you need to know how the algorithm works, so you pretty much need access to the source code. You can't create something that accelerates decompression based on nothing. Really, this is not how those things work.
> That's not how HW works. To create a piece of hardware that can decompress in hardware, you need to know how the algorithm works, so you pretty much need access to the source code. You can't create something that accelerates decompression based on nothing.

"Dedicated to it" doesn't mean you have to literally implement that algorithm in it. There are specific types of logic it needs to handle in order to accomplish the task.
> "Dedicated to it" doesn't mean you have to literally implement that algorithm in it. There are specific types of logic it needs to handle in order to accomplish the task.

It totally means that. Dedicated HW support means you have to know the ins and outs to make it work; otherwise it's general SW support, pretty much like now: you can decompress using the APU, an x86 general-usage chip that can adapt to everything you throw at it.
> This is XSX's CG render die shot and PCB. This is not the real-world PCB and die shot.

If you watch the DF video, you see the final motherboard multiple times, and the RAM setup.
> "Dedicated to it" doesn't mean you have to literally implement that algorithm in it. There are specific types of logic it needs to handle in order to accomplish the task.

So many posts, but I still have no idea where you stand, or what your perspective or point of view is. Such a waste of time.
> Don't bring fanboyism here. It's a question I wasn't even asking you in the first place.

By the way, which side did you think I am from, Xbox fanboy or PlayStation fanboy?
> "Dedicated to it" doesn't mean you have to literally implement that algorithm in it. There are specific types of logic it needs to handle in order to accomplish the task.

Sony created a custom decompressor for a proprietary format. They have to pay the owners of the format or come to some kind of deal, but this means it becomes free for developers to use, since the PS5 SDK will have to ship with the libraries required to use it, it being part of the hardware. This is a cost Sony has deemed important to take on, as developers are starting to use the format.
> If you watch the DF video, you see the final motherboard multiple times, and the RAM setup.

They are still blurry at 1080p.
That's really nothing more than wishful thinking. The two devices are designed around the same basic technologies, so advantages in power will be there for all to see. This is the equivalent of saying "the PS5's SSD is technically superior in every way, but many factors are involved in making a specific device 'better'."
You can trust the specs far more than any spin or commentary that contradicts them.
Are those still a thing on SSDs? There's no more seek time like on HDDs.
I'm still mightily confused as to how GCN at 13.8 TF can be worse than RDNA1 at 9.75 TF.
To me, it's like someone telling me that a ton of bricks is heavier than a ton of feathers.
Is a teraflop really such a bad measurement? After all, FLOPS is the measure of floating-point operations per second, and a teraflop is a trillion floating-point operations per second.
Therefore, if one piece of hardware outputs 13.8 TF while the other outputs 9.75 TF, it should follow that the one with the higher number handles more of these calculations per second and is therefore the better device.
What else is going on to affect the performance?
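For reference, those headline TF numbers are just shader count × two FMA ops per clock × clock speed. A minimal sketch, using the commonly reported XSX and PS5 shader counts and clocks purely for illustration:

```python
def tflops(shader_cores: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Theoretical peak FP32 TFLOPS: cores x FMA (2 ops/clock) x clock (GHz)."""
    return shader_cores * ops_per_clock * clock_ghz / 1000.0

# Commonly reported figures, illustrative only:
xsx = tflops(3328, 1.825)  # 52 CUs x 64 shaders at 1.825 GHz
ps5 = tflops(2304, 2.23)   # 36 CUs x 64 shaders at 2.23 GHz
print(f"XSX: {xsx:.2f} TF, PS5: {ps5:.2f} TF")
```

The formula only counts peak operations issued; it says nothing about how many of them do useful work per frame, which is exactly why a GCN teraflop and an RDNA teraflop are not interchangeable.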
Another quote, because I just stumbled upon a YouTube video explaining this:
Also, for those here discussing the decompression engines/chips in the next gen: would it be possible for PC mainboard manufacturers to create their own decompression engines for the PC segment?
I mean, this could actually be a selling point if you target gamers as an audience: "This mainboard has a decompression engine for xyz formats, boosting your gaming performance!"
This is XSX's CG render die shot and PCB. This is not the real-world PCB and die shot.

My guess is eight normal 32-bit physical PHY GDDR6 controllers (like NAVI 10), with two extra 32-bit PHY GDDR6 controllers above the two CCX CPU modules. I placed the 2GB chips closer to the CCX CPU modules, i.e. locality design rules?

XSX's CG render die shot reminds me of X1X's die shot layout.

NAVI 10 PCB example (image)

Not official.
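For what it's worth, the published bandwidth figures fall straight out of bus-width arithmetic. A sketch, assuming 14 Gbps GDDR6 pins and the widely reported 320-bit bus, with a 192-bit-wide slice serving the slower 6 GB region (both figures are assumptions taken from public spec discussion, not official documentation):

```python
def gddr6_bandwidth_gbs(bus_bits: int, pin_gbps: float = 14.0) -> float:
    """Peak bandwidth in GB/s: bus width in bits x per-pin rate (Gbps) / 8."""
    return bus_bits * pin_gbps / 8.0

print(gddr6_bandwidth_gbs(320))  # 10 x 32-bit channels -> 560.0 GB/s
print(gddr6_bandwidth_gbs(192))  # 6 x 32-bit channels  -> 336.0 GB/s
```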
> Such a waste of time.

I agree.
> So many posts, but I still have no idea where you stand, or what your perspective or point of view is.

I believe in the vision that Sony has shown with the PS5: in particular the ultra-fast boot time, no loading screens (not merely reduced loading times), richer, more immersive audio, and the DualSense controller, while also providing a generational leap in graphical fidelity.
Not enough to identify the GDDR6 model number for each chip.
Here's why I brought up the contention earlier: this is a shot of the XSX PCB from DF's teardown video on March 16th. Now, you could be right and this is just a prototype board, but I think most assume this to be the final PCB.

Also, when you go to the XSX website page and scroll down a bit, you see a much prettier version of the other shot, but essentially the same PCB, and the chip layout for the GDDR6 is exactly the same.

Now, I'd like for the mix of chips to be accessible simultaneously; that'd be great! Whether or not that's something MS has done with the XSX is up for debate. However, it most likely wouldn't be with the chip setup you had in your graphs, since we now have two official PCB shots with pretty much final configs that have the GDDR6 arranged differently from the graphs.
How tf are you gonna do a concept vid and not even show off your design?
> I agree. I believe in the vision that Sony has shown with the PS5: in particular the ultra-fast boot time, no loading screens (not merely reduced loading times), richer, more immersive audio, and the DualSense controller, while also providing a generational leap in graphical fidelity.

I believe in it, too. I just wish we got more glimpses into this future.
Cool concept.
Edit: erased response because the debate is pointless.
> I believe in it, too. I just wish we got more glimpses into this future.

My biggest complaint was also the loading times! I am so happy that it is one of the key points of focus for the design of the PS5. And I can't wait to just see the games on PS5.
It's funny: since 2013 I've been complaining about current-generation load times every week, and now they give me exactly what I wanted. They even go above and beyond.
Cannot wait to play whenever I want to play, no waiting, no loading, no boredom.
Hopefully we can get TLOU2 and GOT out of the way as soon as possible (by enjoying the hell out of those). I believe once that is done Sony will start talking as this generation will be finally over (with a bang).
Right now I feel like a starved fucking dog.
It may sound cynical, but I love Corona, just rotting at home coping hard on my Pro, playing Nioh 2 and FF7. But man if I had PS5 right now it would be so much better.
There is one worry I do have, the internal restructuring at SIE does not sound good. Layden gone, Hirai gone, Yoshida downgraded to indie stuff.
No need to worry about PS5 tech though, it sounds insane.
> more rich immersive audio and the DualSense controller
I think it's clear the I/O and SSD setup in the PS5 is manifestly superior.

When you have near just-in-time access to any data on the 825GB SSD, you don't need a 100GB partition of virtual memory.

This combination of hardware and software to enable near-instant access to data is what Microsoft termed the "Velocity Architecture"; Sony just doesn't give it a fancy name, and the PS5 is faster at it based on the specs of the storage and I/O.
> Do you think that developers of multiplatform games will build their games around this? That they will take the time to build their games around 3D audio and use all those special features of the DualSense controller? I doubt it, since Xbox supports Dolby Atmos but there are only a few Dolby Atmos games, and on PS4 not many games supported the touchpad, nor on Xbox the impulse triggers. Thing is, game development is as expensive as ever; I'm not sure taking time to develop features that only a single console can take advantage of is worth it to developers, but we will see. Hopefully more devs use it, but I really doubt it, since in the past devs did not use those features.

The speaker on the DS4 isn't widely used either, but in GoW (2018) the speaker plays a little sound along with the vibration every time we recall the Leviathan axe. It is so satisfying; it cannot be overstated.
Even with 100GB sequestered, which would be maybe a whole game in vmem, or maybe two, why would you need to stream at all? The vmem solution is still faster than streaming.

With 100GB sequestered for vmem, the SSD still has 900GB of room (not really, but play along) available for all other purposes.

So you get 560GB/s of RAM bandwidth, 100GB of vmem, and still 4.8-6GB/s of streaming. What's there not to like here? The logo on the outside of the box?
> Here's why I brought up the contention earlier: this is a shot of the XSX PCB from DF's teardown video on March 16th. [...] Now, I'd like for the mix of chips to be accessible simultaneously; that'd be great! Whether or not that's something MS has done with the XSX is up for debate.

On the simultaneous accessibility, Lady Gaia said it's unlikely, as it would need many more traces to the RAM chips and memory controller. See below.

In later posts she gave her credentials; she knows this stuff.
Xbox Series X price spotted in a Polish supermarket: approx. £482 = $600.

"Xbox Series X – we have price confirmation. A lot or a little?" ("The coronavirus is paralyzing the economy and robbing gamers of their most important event, E3, but it probably won't take away their bigg…" – translated from Polish; translate.google.com)
Xbox Series X 'Texture Compression BCPack' Reportedly Better Than The PS5's Kraken – "Microsoft revealed the complete specs for Xbox Series X earlier this month. Today, we might have something really interesting: Xbox Series X BCPack." (www.thegamepost.com)
What is that based on? We don't actually have all the information on the SSD and I/O for either system, so it seems a bit premature to make that claim.
If you're just going by paper specs, keep in mind a lot of people have also said to rule out claiming one system as superior to the other just because its on-paper GPU specs (particularly TF) are better.
Just a bit on what we don't know regarding the SSDs for each system:
-Random access latency on first block
-Random access latency (general)
-NAND type (we are assuming QLC but they could be using TLC, or some mix of SLC NAND as a very small cluster)
-Bandwidth (only sequential read speeds have been given, and speed is not the same as bandwidth)
-Page size
-Block size
-Full performance of compression/decompression tools (as a general rule: both systems support zlib; BCPack is superior for texture compression/decompression, while Kraken is better suited to general data. BCPack's top-end compression figure is lower than Kraken's, but Kraken's top-end figure mainly applies to data that can actually compress that well without integrity loss.)
And for I/O:
-Bus contention (Tempest Engine already specified it can use up to 20 GB/s bandwidth; no evidence they are using an Onion bus for it)
-USB hub speeds and latency figures
-Ethernet and Wifi/Bluetooth figures (Bluetooth less so because it's not very power-hungry thankfully).
-Data caching for SSD (to speed up read and write operations)
-SSD bandwidth contention (more directed at the PS5, keeping Cerny's power-limit talk in mind. I'm curious whether the maximum data rate of the SSD will have an impact on the PS5's variable frequency, since the SSD uses a good amount of power itself and could factor into the potential 2% frequency drop Cerny mentioned in the presentation.)
Basically, like with quite a few other things with these systems, it's better to wait and see, or at least get a lot more specific data before drawing absolute conclusions.
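One reason raw sequential read speed alone is misleading: the decompression hardware multiplies it by whatever ratio the data actually achieves. A quick sketch, where the ratios are the vendors' typical-case claims (roughly 2:1 for BCPack-compressed data on XSX, roughly 1.64:1 for Kraken on PS5) used purely for illustration, not guarantees:

```python
def effective_gbs(raw_gbs: float, compression_ratio: float) -> float:
    """Effective delivered bandwidth = raw SSD read speed x compression ratio."""
    return raw_gbs * compression_ratio

# Vendor typical-case claims, illustrative only:
print(effective_gbs(2.4, 2.0))   # XSX: ~4.8 GB/s effective
print(effective_gbs(5.5, 1.64))  # PS5: ~9.0 GB/s effective
```

Real ratios vary per asset type; data that doesn't compress well sees only the raw figure, which is why absolute conclusions from one headline number are premature.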
> Even with 100GB sequestered, which would be maybe a whole game in vmem, or maybe two, why would you need to stream at all? The vmem solution is still faster than streaming. With 100GB sequestered for vmem, the SSD still has 900GB of room (not really, but play along) available for all other purposes. So you get 560GB/s of RAM bandwidth, 100GB of vmem, and still 4.8-6GB/s of streaming. What's there not to like here? The logo on the outside of the box?

You haven't a clue what you're talking about. That 100GB of virtual RAM still has a bandwidth of (drum roll) 2.4GB/s. You've only partitioned some of your main storage so the operating system can move data that is not being actively used to the SSD, while data actively being used remains in RAM. When the GPU needs the data, it has to be moved from the "virtual RAM" over a 2.4GB/s link. You're under the impression that the virtual RAM has a bandwidth of 100GB/s?
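The gap being pointed at here is easy to put numbers on. A rough sketch, assuming the published 560 GB/s GDDR6 figure and the 2.4 GB/s raw SSD figure, with a hypothetical 10 GB working set (ideal transfer, no overhead):

```python
def transfer_time_s(size_gb: float, bandwidth_gbs: float) -> float:
    """Seconds to move size_gb of data at bandwidth_gbs (ideal, no overhead)."""
    return size_gb / bandwidth_gbs

working_set_gb = 10.0  # hypothetical working set, illustrative only
print(transfer_time_s(working_set_gb, 560.0))  # from GDDR6: ~0.018 s
print(transfer_time_s(working_set_gb, 2.4))    # from the SSD partition: ~4.2 s
```

Whatever you call the partition, data living there is still served at SSD speed until it has been copied into RAM.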
Why would you want an optical audio port when higher-quality audio can be sent via HDMI? Source
> AMD GPUs use the "combined scatter" and "combined gather" methods.

This doesn't do what you think it does. It doesn't affect or change how interleaved memory works; it's mainly CPU- and GPGPU-oriented, it also under-utilizes bandwidth, and again it has no impact on interleaved memory. Nothing changes.
> XSX's SSD area has two hardware decompressors which are

It has already been pointed out to you that this is not the case: there is only one decompression unit, which handles both compression algorithms.
> Sequential or random?

Good question that neither Sony nor MS touched. I'm guessing streaming data can be prearranged sequentially, and the customizations minimize the impact of non-sequential reads.
> My simplified XSX work-in-progress interleaved memory model for the logical single 320-bit channel model. For 2GB GDDR6 chips, I factored in the dual 16-bit channels and AMD GPUs' "combined scatter" memory access patterns. I'm following this example. There are other details to work out; without documentation, I don't know XSX's customizations on the "logical view" to "physical view" resolve map.

You can split each chip using its 16-bit address for simultaneous access, but you are still working with immutable physical limitations; you are effectively halving the memory bandwidth for each respective pool.
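For intuition on what interleaving over ten 32-bit channels means, here is a toy striping model. It is entirely an illustration: the stripe size is assumed, it ignores the dual 16-bit sub-channels, and it is not XSX's actual address mapping:

```python
STRIPE_BYTES = 256   # assumed stripe granularity, purely illustrative
NUM_CHANNELS = 10    # 320-bit bus = 10 x 32-bit GDDR6 channels

def channel_for(addr: int) -> int:
    """Map a physical address to a memory channel by round-robin striping."""
    return (addr // STRIPE_BYTES) % NUM_CHANNELS

# Consecutive stripes land on consecutive channels, so a large
# sequential read keeps all ten channels busy at once:
print([channel_for(i * STRIPE_BYTES) for i in range(12)])
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1]
```

In this toy model, if only six of the ten chips back a given address range (the 6 GB region), striping spans fewer channels, which is exactly why that pool sees lower bandwidth.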
Because headsets?
> Because headsets?

Actually, TOSLINK has a data rate of 125 Mbit/s, so really low; that's the reason it doesn't support modern audio codecs and got replaced by HDMI long ago. TOSLINK is a dead technology we didn't want to abandon, and the fault lies with console makers for never offering standard solutions for headsets.
You can just as well use a very long 3.5mm AUX cable to connect to your TV to share what HDMI is offering, and it's not guaranteed. New USB 3.1 Gen 2 can provide 10Gbps, though I'm not sure if that's relevant for audio. USB 2.0 was 0.48Gbps, USB 3.0 was 5Gbps. Upcoming USB 4.0 is 40Gbps, which matches Apple's Thunderbolt.

I'm not sure of the speed of optical audio, but it's nowhere near 10Gbps; TOSLINK tops out around 125 Mbit/s.
> Actually, TOSLINK has a data rate of 125 Mbit/s, so really low; that's the reason it doesn't support modern audio codecs and got replaced by HDMI long ago.

Meh, it can do 2-channel 24-bit 96kHz; I can't hear more, I only have two ears.
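The "two ears" math does check out against that ceiling: uncompressed PCM bitrate is just channels × bit depth × sample rate. A quick sketch (the 7.1 example is hypothetical, just to show why multichannel needs lossy codecs over optical):

```python
def pcm_bitrate_mbps(channels: int, bit_depth: int, sample_rate_hz: int) -> float:
    """Uncompressed PCM bitrate in Mbit/s: channels x bits x samples/s."""
    return channels * bit_depth * sample_rate_hz / 1_000_000

stereo = pcm_bitrate_mbps(2, 24, 96_000)      # ~4.6 Mbit/s
eight_ch = pcm_bitrate_mbps(8, 24, 48_000)    # hypothetical 7.1 PCM: ~9.2 Mbit/s
print(stereo, eight_ch)
```

Note that S/PDIF framing caps what TOSLINK actually carries well below the raw optical rate, which is why multichannel over optical is sent as compressed Dolby Digital or DTS rather than raw PCM.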